Programming Robots

July 11, 2017 | Author: Deep Chaudhari | Category: Eye, Computer Vision, Image Resolution, Charge Coupled Device, Visual Perception
PROGRAMMING ROBOTS
Useful robot algorithms in both pseudocode and source code. Because programming is a huge subject with countless books and tutorials already written, I plan to cover only what is important to programming robots and not mentioned in the common literature.

ATMEGA BOOTLOADER TUTORIAL

Before starting this tutorial, please read my bootloading tutorial (coming soon!) and my UART tutorial! You must also already have a UART connection set up on your robot for this to work: $50 Robot UART tutorial.

A bootloader is software that can replace your hardware programmer. Instead of hooking up a programmer, you can program over a serial connection. You will need a programmer once to upload the bootloader itself, but after that you won't need it again, except perhaps for programming fuses or lockbits.

Now that you understand what a bootloader is and its benefits, I will demonstrate how to install one onto your $50 Robot, or any other robot with an ATmega microcontroller. We will be using a bootloader with an auto-baud feature, where your microcontroller attempts to reconfigure its internal baud settings to match the incoming connection. This does not mean your other hardware will auto-configure, soooo . . . Important: make sure that all of your external hardware is set to the exact same baud rate or this will not work!

Just for reference, the bootloader I selected is the open source fast tiny & mega UART bootloader. This bootloader is a bit buggy, comes with zero documentation, and has few comments in the source code . . . but it's the best I've found that can handle a wide range of ATmega types. I've made some small config changes to it for use on the $50 Robot, and those files will be downloadable later in this tutorial.

Configure BAUD (if you haven't already done so)
Click Start->Settings->Control Panel->System. A new window will come up called 'System Properties'. Open the Hardware tab and click Device Manager. You should see this:

Go to Ports, select the one you are using, and right-click it. Select Properties. A new window should come up; select the Port Settings tab:

Now configure the settings as you like, as described in the UART tutorial.

Upload Bootloader
You have two options here. You can use one of my precompiled bootloaders:

ATmega8 Bootloader hex file
ATmega168 Bootloader hex file
Axon USB Bootloader hex file beta
v2.1 Bootloader files

and upload it using AVR Studio. Or you can custom modify the bootloader for your specific setup/ATmega. In the following steps I'll explain both. Update: new bootloader software is available.

Programming your own Bootloader

Note: If you do not plan to modify the bootloader code, you may skip this step. Open up AVR Studio, and in the Project Wizard start a new project called bootloader:

Make sure you select 'Atmel AVR Assembler', since we will be programming in Assembly. Don't worry, it's mostly done for you already. You do not need to know how to program in Assembly to use a bootloader, but the particular bootloader we are using is written in that language, and so we must compile it as such. Click Finish, and the new project should load up.

Install Files
Now, download this zip file and unzip it into your bootloader directory:

Bootloader Source Files (v1.9)
Bootloader Source Files (v2.1)

Note: this tutorial teaches only v1.9, but v2.1 is better.

Now you must also put your own robot program .hex into the bootloader folder. For example, suppose you just modified and compiled your own custom photovore code. Take that compiled .hex and place it into your bootloader folder. Don't forget to do this every time you change your custom code!

Optional: Compile Code
Note: If you do not plan to modify the bootloader code, you may skip this step.

Look for a file that matches the microcontroller you are using. For example, if you are using the ATmega168, look for the file M168.asm. Open that file up, and copy/paste the contents into your bootloader.asm that is already open in AVR Studio. Now, looking at the datasheet of your microcontroller (pin-out section), verify that the Tx and Rx pins are correct in bootloader.asm. This is an important step, and in rare cases can break something if you skip it!!! Make any changes as needed. For example, this is what it should look like for both the ATmega8 and ATmega168:

.equ STX_PORT = PORTD
.equ STX_DDR  = DDRD
.equ STX      = PD1

.equ SRX_PIN  = PIND
.equ SRX_PORT = PORTD
.equ SRX      = PD0

Now compile it by pressing build:

Upload Code to ATmega
Now that you have your new custom bootloader .hex file, you simply need to upload it to your microcontroller. Use your hardware programmer like you always have:

And finally, you need to program a fuse to tell it to use the bootloader. IMPORTANT: If you change the wrong fuse you can possibly destroy your ATmega! Don't change any other fuses unless you know what you are doing! You want to set BOOTRST to 0 by checking this box, and then pushing Program:

Your Bootloader is Uploaded and Ready! Now disconnect your programmer cable. You won't be needing that again! You will need to power cycle your microcontroller (turn it off then on again) after uploading your bootloader for the settings to take effect.

Upload YOUR Program Through UART
Update 2010: A GUI version of the bootloader can be found in the Axon II setup tutorial.
Now open up a command prompt by going to Start->Run . . .

and typing in 'cmd' and pushing ok:

A new command prompt should open up. Using the command 'cd', go into the directory of your bootloader files. See the below image for an example. With your robot turned off and UART ready to go, type in this command:

fboot17.exe -b38400 -c1 -pfile.hex -vfile.hex

38400 is your desired baud (9600, 38400, 115200, etc.)
c1 is your com port (c1, c2, c3, etc.)
'file' is the name of the program you want uploaded. The filename MUST be 8 characters or less or it will not work (a bug in the software), and the file must be located in the same folder as fboot.exe. For example, if photovore.hex was your file, do this:

-pphotovore.hex -vphotovore.hex

(yes, you need to give the filename twice, with -p the first time and -v the second time)

Press enter, and you will see a / symbol spinning. Turn on your robot, and it should now upload. This is what you should see upon a successful bootload:

For some unexplained reason, I occasionally get an error that says:

Bootloader VFFFFFFFF.FF
Error, wrong device informations

If you get this error, just repeat this step and it should work. Note: after typing a command into the command prompt once, you do not need to type it again. Just push the up arrow key to cycle through previously typed commands.

The Bootloader Didn't Work?!
What if you did the tutorial but it's still not working/connecting? Chances are you missed a step. Go back to the beginning and make sure you did everything correctly. Try power cycling your microcontroller. Make sure the hardware programmer is unplugged. Make sure baud is configured properly on ALL of your hardware and ALL of your involved software. Make sure no other program is trying to use the same com port at the same time, such as AVR Studio, HyperTerminal, etc.

Some mistakes will cause your command prompt window to freeze up. Just open up a new window and try again. Some users have noticed that too many unused global variables in your source code will cause problems. See this forum post for more info. And a side note . . . this bootloader can only connect on com ports 1 to 4. The developer of the bootloader for some odd reason saw nothing wrong with this decision . . . If you need a different port, go to the com port settings and change the port you are using. Also note that some of your UART hardware might not be fast enough, as the software doesn't wait for hardware to keep up. The Easy Radio module will not work, for example. A direct serial/USB connection will work without a problem.

PROGRAMMING - COMPUTER VISION TUTORIAL

Introduction to Computer Vision
Computer vision is an immense subject, more than any single tutorial can cover. In the following tutorials I will cover the basics of computer vision in four parts, each focused on need-to-know practical knowledge.

Part 1: Vision in Biology
Part 1 will talk about vision in biology, such as the human eye, vision in insects, etc. By understanding how biology processes visual images, you may then be able to apply what you learned towards your own creations. This will help you turn the 'magic' into an understanding of how vision really works.

Part 2: Computer Image Processing
Part 2 will go into computer image processing. I will talk about how a camera captures an image, how it is stored in a computer, and how you can do basic alterations of an image. Basic machine vision tricks such as heuristics, thresholding, and greyscaling will be covered.

Part 3: Computer Vision Algorithms
Part 3 covers the typical computer vision algorithms, where I talk about how to do some higher level processing of what your robot sees. Edge detection, blob counting, middle mass, image correlation, facial recognition, and stereo vision will be covered.

Part 4: Computer Vision Algorithms for Motion
Part 4 covers computer vision algorithms for motion. Motion detection, tracking, optical flow, background subtraction, and feature tracking will be explained.

There is also a problem set to test you on what you have learned in this computer vision tutorial series.

PROGRAMMING - COMPUTER VISION TUTORIAL
Part 1: Vision in Biology

Vision in Biology
So why vision in biology? What does biology have to do with robots? Well, biomimetics is the study of biology to aid in the design of new technology - such as robots. The purpose of this tutorial is so that you can understand how biology approaches the vision problem. As we progress through parts 2, 3, and 4, you will start to draw parallels between how a robot can see the world and how you and I see the world. I will assume you have a basic understanding of biology, so I will try to build upon what you already know with a bottom->up approach, and hopefully not bore you with what you already know.

The Eye
The eye is stage one of the human vision system. Here is a diagram of the human eye:

Light first passes through the iris. The iris adjusts for the amount of light entering the eye - an auto-brightness adjuster. No matter how much light the eye receives, the iris tries to adjust so the eye always gathers a set amount. Note that if the light is still too bright, you will feel naturally compelled to cover your eyes with your hands. Light then passes to the lens, which is stretched and compressed by muscles to focus the image. This is similar to auto-focus on a digital camera. Notice how the lens inverts the image upside-down? Two eyes together create stereo vision, as they do not look along parallel straight lines. For example, look at your finger, then place your finger on your nose - see how you automatically become cross-eyed? The angle of your eyes to each other generates ranging information which is then sent to your brain. Note: this is not the only method the eyes use to generate range data.

Cones and Rods
The light then comes into contact with special neurons in the eye (cones for color and rods for brightness) that convert light energy to chemical energy. This process is complicated, but the end result is neurons that fire in special patterns that are sent to the brain by way of the optic nerve. Cones and rods are the biological versions of pixels. But unlike in a camera, where each pixel is equal, this is not true for the human eye.

What the above chart shows is the number of rods and cones in the eye versus location in the eye. At the very center of the eye (fovea = 0) you will notice a huge number of cones, and zero rods. Further out from the center the number of cones sharply decreases, with a gradual increase in rods. What does this mean? It means only the center of your eye is capable of processing color - away from the center, significantly more of the information going to your brain comes from rods! Note the section labeled optic disk. This is where the optic nerve attaches to your eye, leaving no space left for light receptors. It is also called your blind spot.

Compound Eyes
Compound eyes work in the same way the human eye above works. But instead of rods and cones being the pixels, each individual facet of the compound eye acts as a pixel. Contrary to popular folklore, the insect doesn't actually see hundreds of images. Instead it sees hundreds of pixels, combined.

A robot example of a compound eye would be taking a hundred photoresistors and combining them into a matrix to form a single greyscale image.

What advantage does a compound eye have over a human eye? If you poke out one of a human's two eyes, his ability to see (total pixels gathered) drops to 50%. If you poke out part of an insect eye, it will still have 99% of its visual capability. It can also simply regrow an eye.

Optic Nerve 'Image Processing'
Most people don't realize how jumbled the information from the human eye really is. The image is inverted by the lens, rods and cones are not equally distributed, and neither eye sees exactly the same image! This is where the optic nerve comes into play. By reorganizing neurons physically, it can reassemble the image into something more useful.

Notice how the criss-crossing reorganizes the information from the eyes - what is seen on the left is processed in the right brain, and what is seen on the right is processed in the left brain. The problem of two eyes seeing two different images is partially solved. Also interesting to note, there are significantly fewer neurons in the optic nerve than there are cones and rods in the eye. Theory goes that there is summing and averaging of 'pixels' that are in close proximity in the eye. What happens after this is still unknown to science, but significant progress has been made.

Brain Processing
This is where your brain 'magically' assembles the image into something comprehensible. Although the details are fuzzy, it has been determined that different parts of your brain process different parts of the image. One part may process color, another part detect motion, yet another determine shape. This should give you clues on how to program such a system, in that everything can be treated as separate subsystems/algorithms.

And yet more Brain Processing . . .
All of the basic visual information is gathered, and then processed again at yet a higher level. This is where the brain asks, 'what is it that I really see?' Again, science has not entirely solved this problem (yet), but we have really good theories on what probably happens. Supposedly the brain keeps a large database of reference information - such as what a mac-n-cheese dinner looks like. The brain 'observes' something, then goes through the reference library to make conclusions on what is observed.

How could this happen? Well, the brain knows the color should be orange, it knows it should have a shiny texture, and that the shape should be tube-like. Somehow the brain makes this connection, and tells you 'this is mac-n-cheese, yo.' Your other senses work in a similar manner. More specifically, the theory is about pattern recognition . . . it's sort of like me showing you an ink blot, then asking you 'what do you see?' Your brain will try and figure it out, despite the fact it doesn't actually represent anything. It's a subconscious effort.

Your brain also uses its understanding of the physical world (how things connect together in 3D space) to understand what it sees. Don't believe me? Then tell me how many legs this elephant has.

I highly recommend doing a google search on optical illusions. These occur when the image processing rules of the brain 'break,' and they are often used by scientists to figure out how we understand what we see.

Stereo Image Processing
What has baffled scientists for the longest time, and has only recently been solved (in my opinion), is what allows us to see a 2D image and yet picture it in 3D. Look at a painting of a scene, and you can immediately make a fairly accurate measurement of the distance to every object in the picture. Scientists at CMU have recently solved how a computer can accomplish this. Basically the computer keeps a huge index of about a thousand or so images, each with range data assigned (trained) to it. Then by probability analysis, it can make connections with future images that need to be processed. Here are examples of figuring out 3D from 2D. ALL lines that are parallel in 3D converge in 2D. This is a picture of a train track. Notice how the parallel lines converge to a single point? This is a method the brain uses to guesstimate range data.

The brain uses the relation of objects located on the 2D ground to determine 3D scenes. Here is a picture of a forest. By looking at where the trees are located on the ground, you can quickly figure out how far away the trees are located from each other. What tree is closest to the photographer? Why? How do you program that as an algorithm?

If I removed the ground reference, what then would you rely on to figure out how far each tree is from each other? The next method would probably be size comparisons. You would assume trees that are located closer would appear larger.

But this wouldn't work if you had a giant tree far away and a tiny tree close up - as both would appear the same size! So the brain has yet more methods, such as comparisons of details (size of leaves, for example), shading and shadows, etc. The below image is just a circle, but it appears as a sphere because of shading. An algorithm that can process shading can convert 2D images to 3D.

Now that you understand the basics of biological vision processing in our Computer Vision Tutorial Series, you may continue on to Part 2: Computer Image Processing.

PROGRAMMING - COMPUTER VISION TUTORIAL
Part 2: Computer Image Processing

Pixels and Resolution
2D Matrices
Decreasing Resolution
Thresholding and Heuristics
Image Color Inversion
Image Brightness / Darkness
Addendum (1D -> 4D)

Computer Image Processing
In part 2 of the Computer Vision Tutorial Series we will talk about how images are stored in a computer, as well as basic image manipulation algorithms. Mona Lisa (original image above) will be our guiding example throughout this tutorial.

Image Collection
The very first step is to capture an image. A camera captures data as a stream of information, reading from a single light receptor at a time and storing each complete 'scan' as one single file. Different cameras can work differently, so check the manual on how yours sends out image data. There are two main types of cameras: CCD and CMOS.

A CCD transports the charge across the chip and reads it at one corner of the array. An analog-to-digital converter (ADC) then turns each pixel's value into a digital value by measuring the amount of charge at each photosite and converting that measurement to binary form. CMOS devices use several transistors at each pixel to amplify and move the charge using more traditional wires. The CMOS signal is digital, so it needs no ADC. CCD sensors create high-quality, low-noise images. CMOS sensors are generally more susceptible to noise. Because each pixel on a CMOS sensor has several transistors located next to it, the light sensitivity of a CMOS chip is lower. Many of the photons hit the transistors instead of the photodiode. CMOS sensors traditionally consume little power. CCDs, on the other hand, use a process that consumes lots of power. CCDs consume as much as 100 times more power than an equivalent CMOS sensor. CCD sensors have been mass produced for a longer period of time, so they are more mature. They tend to have higher quality pixels, and more of them. Below is how colored pixels are arranged on a CCD chip:

When storing or processing an image, make sure the image is uncompressed - meaning don't use JPGs, which are lossy. BMPs are typically uncompressed, while GIFs and PNGs use lossless compression. If you decide to transmit an image as compressed data (for faster transmission speed), you will have to uncompress the image before processing. This matters for how the file is understood . . .

Pixels and Resolution
In every image you have pixels. These are the tiny little dots of color you see on your screen, and the smallest possible unit of any image. When an image is stored, the image file contains information on every single pixel in that image: its color and its location. Images also have a set number of pixels per size of the image, known as resolution. You might see terms such as dpi (dots per inch), meaning the number of pixels you will see along one inch of the image. A higher resolution means there are more pixels in a set area, resulting in a higher quality image. The disadvantage of higher resolution is that it requires more processing power to analyze the image. When programming computer vision into a robot, use low resolution.

The Matrix (the math kind)
Images are stored in 2D matrices, which represent the locations of all pixels. All images have an X component and a Y component. At each point, a color value is stored. If the image is black and white (binary), either a 1 or a 0 is stored at each location. If the image is greyscale, each location stores a value from a range. If it is a color image (RGB), each location stores a set of values. Obviously, the less color involved, the faster the image can be processed. For many applications, binary images can achieve most of what you want. Here is a matrix example of a binary image of a triangle:

0 0 0 1 0

0 0 1 1 0

0 1 0 1 0

1 0 0 1 0

0 1 0 1 0

0 0 1 1 0

0 0 0 1 0

It has a resolution of 7 x 5, with a single bit stored in each location. Memory required is therefore 7 x 5 x 1 = 35 bits. Here is a matrix example of a greyscale (8 bit) image of a triangle: 0 0 55 255 55 0

0 55 255 255 55 0

55 255 55 255 55 0

255 55 55 255 55 0

55 255 55 255 55 0

0 55 255 255 55 0

0 0 55 255 55 0

It has a resolution of 7 x 6, with 8 bits stored in each location. Memory required is therefore 7 x 6 x 8 = 336 bits. As you can see, increasing resolution and information per pixel can significantly slow down your image processing speed. After converting color data to generate greyscale, Mona Lisa looks like this:
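The colour-to-greyscale step mentioned above can be sketched in C. The luminosity weights below are an assumption - the tutorial does not say how its conversion was done, and a plain average of the three channels also works:

```c
/* Convert one RGB pixel to an 8-bit grey value.  The 0.299/0.587/0.114
   weights are the common luminosity weights (an assumption here);
   the +0.5 rounds to the nearest integer before truncation. */
unsigned char rgb_to_grey(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned char)(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
}
/* e.g. rgb_to_grey(255, 255, 255) == 255, rgb_to_grey(0, 0, 0) == 0 */
```

To greyscale a whole image, you would simply run this over every pixel of the RGB matrix.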

Decreasing Resolution
The very first operation I will show you is how to decrease the resolution of an image. The basic concept of decreasing resolution is that you selectively delete data from the image. There are several ways to do this. The first method is to simply delete 1 pixel out of every group of pixels in both the X and Y directions of the matrix. For example, using our greyscale image of a triangle above, and deleting one out of every two pixels in the X direction, we get:

0 0 55 255 55 0

55 255 55 255 55 0

55 255 55 255 55 0

0 0 55 255 55 0

and continuing with the Y direction: 0 55 55

55 55 55

55 55 55

0 55 55

resulting in a 4 x 3 matrix, for a memory usage of 4 x 3 x 8 = 96 bits.

Another way of decreasing resolution is to choose a pixel, average the values of all surrounding pixels, store that value in the chosen pixel location, then delete all the surrounding pixels. For example:

13 112 112 13
145 166 166 145
103 103 103 103

Using the latter method for resolution reduction, this is what Mona Lisa would look like (below). You can see how pixels are averaged along the edges of her hair.
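The averaging approach can be sketched in C as below. The 4 x 4 image size is just an illustrative assumption; here each 2x2 block of pixels is averaged down to a single pixel, halving the resolution in both directions:

```c
#define W 4   /* input width  (assumed for illustration) */
#define H 4   /* input height (assumed for illustration) */

/* Halve the resolution of a greyscale image by replacing each 2x2
   block of pixels with their average value. */
void downsample(unsigned char in[H][W], unsigned char out[H / 2][W / 2])
{
    for (int y = 0; y < H / 2; y++)
        for (int x = 0; x < W / 2; x++)
            out[y][x] = (in[2 * y][2 * x]     + in[2 * y][2 * x + 1] +
                         in[2 * y + 1][2 * x] + in[2 * y + 1][2 * x + 1]) / 4;
}
```

For an averaged block of {10, 20, 30, 40} the output pixel would be 25.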

Thresholding and Heuristics
While the above method reduces image file size by resolution reduction, thresholding reduces file size by reducing the color data in each pixel. To do this, you first need to analyze your image using a method called heuristics. Heuristics is when you statistically look at an image as a whole, such as determining the overall brightness of an image, or counting the total number of pixels that contain a certain color. For an example, here is my sample greyscale pixel histogram of Mona Lisa, and sample histogram generation code. An example image heuristic plotting pixel count (Y-axis) versus pixel color intensity (0 to 255, X-axis):
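The tutorial's own sample histogram code is not reproduced here, so this is a generic sketch of the heuristic: count how many pixels fall into each of the 256 grey levels.

```c
/* Build a 256-bin histogram of an 8-bit greyscale image.
   image   - the pixels, flattened into one array
   npixels - total number of pixels
   counts  - output: counts[v] = how many pixels have value v */
void histogram(const unsigned char *image, int npixels, int counts[256])
{
    for (int i = 0; i < 256; i++)
        counts[i] = 0;
    for (int i = 0; i < npixels; i++)
        counts[image[i]]++;   /* pixel value indexes its own bin */
}
```

Plotting `counts` against 0..255 gives exactly the kind of chart described above.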

Often heuristics is used to improve image contrast. The image is analyzed, and then bright pixels are made brighter and dark pixels are made darker. I'm not going to go into contrast details here as it's a little complicated, but this is what an improved-contrast Mona Lisa would look like (before and after):

In this particular thresholding example, we will convert all colors to binary. How do you decide which pixel becomes a 1 and which a 0? The first thing you do is determine a threshold - all pixel values above the threshold become a 1, and all below become a 0. Your threshold can be chosen arbitrarily, or it can be based on your heuristic analysis. For example, converting our greyscale triangle to binary, using 40 as our threshold, we get:

0 0 1 1 1 0

0 1 1 1 1 0

1 1 1 1 1 0

1 1 1 1 1 0

1 1 1 1 1 0

0 1 1 1 1 0

0 0 1 1 1 0

If the threshold was 100, we would get this better image: 0 0 0 1 0 0

0 0 1 1 0 0

0 1 0 1 0 0

1 0 0 1 0 0

0 1 0 1 0 0

0 0 1 1 0 0

0 0 0 1 0 0

As you can see, setting a good threshold is very important. In the first example, you cannot see the triangle, yet in the second you can. Poor thresholds result in poor images. In the following example, I used heuristics to determine the average pixel value (add all pixels together, and then divide by the total number of pixels in the image). I then set this average as the threshold. Setting this threshold for Mona Lisa, we get this binary image:

Note that if the threshold was 1, the entire image would be black. If the threshold was 255, the entire image would be white. Thresholding really excels when the background colors are very different from the target colors, as this automatically removes the distracting background from your image. If your target is the color red, and there is little to no red in the background, your robot can easily locate any object that is red by simply thresholding the red value of the image.
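The average-value thresholding described above might look like this in C (a sketch, not the tutorial's exact code):

```c
/* Threshold a greyscale image to binary, using the average pixel
   value as the threshold (the heuristic described in the text).
   Returns the threshold that was used. */
int average_threshold(const unsigned char *in, unsigned char *out, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += in[i];                    /* add all pixels together */
    int threshold = (int)(sum / n);      /* divide by the pixel count */

    for (int i = 0; i < n; i++)
        out[i] = (in[i] > threshold) ? 1 : 0;
    return threshold;
}
```

For the pixel values {0, 55, 255, 255} the average is 141, so the output becomes {0, 0, 1, 1}.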

Image Color Inversion
Color image inversion is a simple equation that inverts the colors of the image. I haven't found any use for this on a robot, but it does make a good example . . . The greyscale equation is simply:

new_pixel_value = 255 - pixel_value

The greyscale triangle then becomes:

255 255 200 0 200 255

255 200 0 0 200 255

200 0 200 0 200 255

0 200 200 0 200 255

200 0 200 0 200 255

255 200 0 0 200 255

255 255 200 0 200 255

An RGB inversion of Mona Lisa becomes:
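The inversion equation is a one-line loop in C; a minimal sketch for a greyscale image:

```c
/* Invert an 8-bit greyscale image in place: new = 255 - old. */
void invert(unsigned char *image, int n)
{
    for (int i = 0; i < n; i++)
        image[i] = 255 - image[i];
}
```

Running this over the greyscale triangle turns every 0 into 255, every 55 into 200, and every 255 into 0, exactly as in the matrix above.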

Brightness (and Darkness)
Increasing brightness is another simple algorithm. All you do is add (or subtract) some arbitrary value to each pixel:

new_pixel_value = pixel_value + 10

You must also make sure that no pixel goes above the maximum value. With 8-bit greyscale, no value can exceed 255. A simple check can be added like this:

if (pixel_value + 10 > 255)
    { new_pixel_value = 255; }
else
    { new_pixel_value = pixel_value + 10; }

And for our lovely and now radiant Mona Lisa:

The problem with increasing brightness too much is that it will result in whiteout. For example, if your arbitrarily added value was 255, every pixel would be white. It also does not improve a robot's ability to understand an image, so you probably will not find a use for this algorithm directly.

Addendum: 1D, 2D, 3D, 4D
A 1D image can be obtained from a 1-pixel sensor, such as a photoresistor. As mentioned in part 1 of this vision tutorial, if you put several photoresistors together, you can generate an image matrix. You can also generate a 2D image matrix by scanning a 1-pixel sensor, such as with a scanning Sharp IR. If you use a ranging sensor, you can easily store 3D info in a much more easily processed 2D matrix. 4D images include time data. They are actually stored as a set of 2D matrix images, with each pixel containing range data, and a new 2D matrix being stored after every X seconds of time passing. This makes processing simple, as you can just analyze each 2D matrix separately, and then compare images to process change over time. This is just like film of a movie, which is actually just a set of 2D images changing so fast it appears to be moving. This is also quite similar to how a human processes temporal information, as we see about 25 images per second - each processed individually. Actually, biologically, it's a bit more complicated than this. Feel free to read an email I received from Mr. Bill concerning biological fps. But for all intents and purposes, 25fps is an appropriate benchmark.

Now that you understand the basics of computer image processing in our Computer Vision Tutorial Series, you may continue on to Part 3: Computer Vision Algorithms (coming soon!).

PROGRAMMING - COMPUTER VISION TUTORIAL
Part 3: Computer Vision Algorithms

Edge Detection
Shape Detection
Middle Mass and Blobs
Pixel Classification

Image Correlation
Facial Recognition
Stereo Vision

Now that you have learned about biological vision and computer image processing, we continue on to the basic algorithms of computer vision.

Computer Vision vs Machine Vision
Computer vision and machine vision differ in how images are created and processed. Computer vision is done with everyday real-world video and photography. Machine vision is done in oversimplified situations so as to significantly increase reliability while decreasing the cost of equipment and the complexity of algorithms. As such, machine vision is used for robots in factories, while computer vision is more appropriate for robots that operate in human environments. Machine vision is more rudimentary yet more practical, while computer vision relates to AI. There is a lesson in this . . .

Edge Detection
Edge detection is a technique to locate the edges of objects in a scene. This can be useful for locating the horizon, the corner of an object, white line following, or for determining the shape of an object. The algorithm is quite simple:

sort through the image matrix pixel by pixel
for each pixel, analyze each of the 8 pixels surrounding it
record the value of the darkest pixel, and the lightest pixel
if (lightest_pixel_value - darkest_pixel_value) > threshold
    then rewrite that pixel as 1
    else rewrite that pixel as 0

What the algorithm does is detect sudden changes in color or lighting, representing the edge of an object. Check out the edges on Mona Lisa:

A challenge you may have is choosing a good threshold. The left image has a threshold that's too low, and the right image has a threshold that's too high. You will need to run an image heuristics program for it to work properly.

You can also do other neat tricks with images, such as thresholding only a particular color like red.
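The edge detection steps above can be sketched in C. The 5 x 5 image size and the border handling (border pixels are simply left 0) are my own simplifying assumptions:

```c
#define W 5   /* image width  (assumed for illustration) */
#define H 5   /* image height (assumed for illustration) */

/* Mark a pixel 1 when the spread between the lightest and darkest of
   its 8 neighbours exceeds a threshold, as described in the text. */
void edges(unsigned char in[H][W], unsigned char out[H][W], int threshold)
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            out[y][x] = 0;                 /* borders stay 0 */

    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            unsigned char lo = 255, hi = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    if (dy == 0 && dx == 0)
                        continue;          /* skip the centre pixel */
                    unsigned char v = in[y + dy][x + dx];
                    if (v < lo) lo = v;
                    if (v > hi) hi = v;
                }
            }
            out[y][x] = (hi - lo > threshold) ? 1 : 0;
        }
    }
}
```

On an image that is black on the left and white on the right, only the pixels touching the black/white boundary get marked 1.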

Shape Detection and Pattern Recognition
Shape detection requires preprogramming in a mathematical representation database of the shapes you wish to detect. For example, suppose you are writing a program that can distinguish between a triangle, a square, and a circle. This is how you would do it:

run edge detection to find the border line of each shape
count the number of continuous edges
    a sharp change in line direction signifies a different line
    do this by determining the average vector between adjacent pixels

if three lines are detected, then it's a triangle
if four lines, then a square
if one line, then it's a circle
by measuring the angles between lines you can determine more info (rhomboid, equilateral triangle, etc.)

The basic shapes are very easy, but as you get into more complex shapes (pattern recognition) you will have to use probability analysis. For example, suppose your algorithm needed to distinguish between 10 different fruits (only by shape) such as an apple, an orange, a pear, a cherry, etc. How would you do it? Well, all are circular, but none perfectly circular. And not all apples look the same, either. By using probability, you can run an analysis that says 'oh, this fruit fits 90% of the characteristics of an apple, but only 60% of the characteristics of an orange, so it's more likely an apple.' It's the computational version of an 'educated guess.' You could also say 'if this particular feature is present, then it has a 20% higher probability of being an apple.' The feature could be a stem such as on an apple, fuzziness like on a coconut, or spikes like on a pineapple, etc. This method is known as feature detection.
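Once each candidate class has a match score, the 'educated guess' reduces to picking the highest one. The fruit scores below are invented for illustration; real systems would compute them from measured features:

```c
/* match[i] = fraction of class i's characteristics the observed
   shape satisfies (e.g. 0.9 = fits 90% of an apple's features).
   Returns the index of the most probable class. */
int classify(const double match[], int nclasses)
{
    int best = 0;
    for (int i = 1; i < nclasses; i++)
        if (match[i] > match[best])
            best = i;
    return best;
}
```

With scores {0.6, 0.9, 0.3} for orange, apple, and pear, the classifier picks index 1 - the apple, just as in the 90%-vs-60% example above.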

Middle Mass and Blob Detection

Blob detection is an algorithm used to determine if a group of connected pixels are related to each other. This is useful for identifying separate objects in a scene, or counting the number of objects in a scene. Blob detection would be useful for counting people in an airport lobby, or fish passing by a camera. Middle mass would be useful for a baseball-catching robot, or a line following robot.

To find a blob, you threshold the image by a specific color as shown below. The blue dot represents the middle mass, or the average location of all pixels of the selected color.

If there is only one blob in a scene, the middle mass is always located in the center of an object. But what if there were two or more blobs? This is where it fails, as the middle mass is no longer located on any object:

To solve this problem, your algorithm needs to label each blob as a separate entity. To do this, run this algorithm:

    go through each pixel in the array:
        if the pixel is a blob color, label it '1'
        otherwise label it '0'
    go to the next pixel
        if it is also a blob color
            and if it is adjacent to blob 1, label it '1'
            else label it '2' (or more)

    repeat until all pixels are done

What the algorithm does is label each blob with a number, counting up for every new blob it encounters. Then to find the middle mass, you just find it for each individual blob. In the below video, I ran a few algorithms in tandem. First, I removed all non-red objects. Next, I blurred the video a bit to make blobs more connected. Then, using blob detection, I kept only the blob that had the most pixels (the largest red object). This removed background objects such as the fire extinguisher. Lastly, I did center of mass to track the actual location of the object. I also ran a population threshold algorithm that made the object edges really sharp. It doesn't improve the algorithm in this case, but it does make it look nicer as a video. Feel free to download my custom blob detection RoboRealm file that I used. In this video, I programmed my ERP to do nothing but middle mass tracking:
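The labeling algorithm above is essentially connected-component labeling. A minimal Python sketch, using a flood fill instead of the two-pass scan described (the result is the same: each blob gets its own number, and the middle mass is computed per blob):

```python
def label_blobs(mask):
    """Assign a distinct integer label to each group of 4-connected
    foreground pixels (mask value 1); background stays 0.
    Returns the label grid and the middle mass of each blob."""
    height, width = len(mask), len(mask[0])
    labels = [[0] * width for _ in range(height)]
    blobs = {}          # label -> list of (x, y) pixels
    next_label = 1
    for y in range(height):
        for x in range(width):
            if mask[y][x] == 1 and labels[y][x] == 0:
                # flood-fill this new blob with the next label
                stack = [(x, y)]
                labels[y][x] = next_label
                pixels = []
                while stack:
                    px, py = stack.pop()
                    pixels.append((px, py))
                    for nx, ny in ((px + 1, py), (px - 1, py),
                                   (px, py + 1), (px, py - 1)):
                        if (0 <= nx < width and 0 <= ny < height
                                and mask[ny][nx] == 1 and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            stack.append((nx, ny))
                blobs[next_label] = pixels
                next_label += 1
    # middle mass = average pixel location per blob
    middles = {lab: (sum(p[0] for p in pix) / len(pix),
                     sum(p[1] for p in pix) / len(pix))
               for lab, pix in blobs.items()}
    return labels, middles
```

With separate labels, "keep only the largest blob" becomes a one-line max over the pixel lists.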

Pixel Classification

Pixel classification is when you assign each pixel in an image to an object class. For example, all greenish pixels would be grass, all blueish pixels would be sky or water, all greyish pixels would be road, and all yellow pixels would be a road lane divider. There are other ways to classify each pixel, but color is typically the easiest. This method is clearly useful for picking out the road for road following and obstacles for obstacle avoidance. It's also used in satellite image processing, such as this image of a city (yellow/red for buildings), forest (green), and river (blue):

If Greenpeace wanted to know how much forest has been cut down, a simple pixel density count can be done. To do this, simply count and compare the forest pixels from before and after the logging. A major benefit of this bottom-up method of image processing is its immunity to heavy image noise - blobs do not need to be identified first. By finding the middle mass of these pixels, the center location of each object can be found. Need an algorithm to identify roads for your driving robot? The below video (taken from my house's front door) is an example of me simply maximizing RGB (red, green, blue) colors. Pixels that are more blue than any other color become all blue, pixels more green than any other color become all green, and the same for red. What you get is the road being all blue, the grass being all green, and the houses being red. It's not perfect, yet it still works amazingly well for such a simple pixel classification algorithm. This algorithm would complement other algorithms well. Feel free to download my custom pixel classification RoboRealm file that I used.
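The RGB-maximizing trick described above can be sketched like this (the class names are placeholders from the road/grass/houses example; real imagery would need smarter rules):

```python
def maximize_rgb(pixel):
    """Push a pixel to pure red, green, or blue according to its
    dominant channel; ties go to the first winning channel."""
    r, g, b = pixel
    m = max(r, g, b)
    if r == m:
        return (255, 0, 0)
    if g == m:
        return (0, 255, 0)
    return (0, 0, 255)

def classify_pixels(image, class_names=("house", "grass", "road")):
    """Label every pixel by its dominant color channel. The mapping
    (red=house, green=grass, blue=road) matches the video example."""
    channel_class = {(255, 0, 0): class_names[0],
                     (0, 255, 0): class_names[1],
                     (0, 0, 255): class_names[2]}
    return [[channel_class[maximize_rgb(p)] for p in row] for row in image]
```

A pixel density count (the Greenpeace example) is then just counting how many pixels carry a given class label.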

Image Correlation (Template Matching)

Image correlation is one of the many forms of template matching for simple object recognition. This method works by keeping a large database of various image features, and computing the 'intensity similarity' of an entire image or window with another. In this example, various features of an adorably cute squirrel (it's the species name) are obtained for comparison with other objects.
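One hedged sketch of intensity correlation: slide a template over a grayscale image and score each window with normalized cross-correlation. The function names are mine, and production code would use an optimized library rather than this brute-force scan:

```python
import math

def correlation(window, template):
    """Normalized cross-correlation between two equal-size grayscale
    patches: +1 means an identical intensity pattern, 0 means unrelated."""
    a = [v for row in window for v in row]
    b = [v for row in template for v in row]
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def find_template(image, template):
    """Slide the template over the image and return the top-left corner
    of the best-matching window plus its score."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            window = [row[x:x + tw] for row in image[y:y + th]]
            score = correlation(window, template)
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos, best
```

The normalization by mean and variance is what makes the score tolerant of overall brightness changes.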

This method is also used for feature detection (mentioned earlier) and facial recognition . . .

Facial Recognition

Facial recognition is a more advanced type of pattern recognition. With shape recognition you only need a small database of mathematical representations of shapes. But while basic shapes like a triangle can be easily described, how do you mathematically represent a face?

Here is an exercise for you. Suppose you have a friend coming to your family's house and she/he wants to recognize every face by name before arriving. If you could only give a written list of facial features of each family member, what would you say about each face? You might describe hair color, length, or style. Maybe your sister has a beard. One person might have a more rounded face, while another person might have a very thin face. For a family of 4 people this exercise is really easy. But what if you had to do it for everyone in your class? You might also analyze skin tone, eye color, wrinkles, mouth size . . . the list goes on. As the number of people to be analyzed grows, so does the number of required descriptions for each face. One popular way of digitizing faces is to measure the distance between the eyes, the size of the head, the distance between the eyes and the mouth, and the length of the mouth. By keeping a database of these values, you can surprisingly accurately identify thousands of different faces. Hint: notice how the features on Mona Lisa's face above are much easier to identify and locate after edge detection.

Unfortunately for law enforcement, this method does not work outside of the lab. This is because it requires facial images that are really close and clear for the measurements to be done accurately. It is also difficult to control which way a person is looking. For example, can you make out the facial measurements of the man in this security cam image?

Have a look at this below image. Despite these pictures also being tiny and blurry, you can somehow recognize many of them! The human brain obviously has other yet undiscovered methods of facial recognition . . .

Stereo Vision

Stereo vision is a method of determining the 3D location of objects in a scene by comparing the images of two separate cameras. Now suppose you have some robot on Mars and he sees an alien (at point P(X,Y)) with two video cameras. Where does the robot need to drive to run over this alien (for 20 kill points)?

First let's analyze the robot camera itself. Although a simplification that results in minor error, the pinhole camera model will be used in the following examples:

The image plane is where the photo-receptors are located in the camera, and the lens is the lens of the camera. The focal distance is the distance between the lens and the photo-receptors (it can be found in the camera datasheet). Point P is the location of the alien, and point p is where the alien appears on the photo-receptors. The optical axis is the direction the camera is pointing. Redrawing the diagram to make it mathematically simpler to understand, we get this new diagram

with the following equations for a single camera:

    x_camL = focal_length * X_actual / Z_actual
    y_camL = focal_length * Y_actual / Z_actual

CASE 1: Parallel Cameras

Now moving on to two parallel facing cameras (L for the left camera and R for the right camera), we have this diagram:

The Z-axis is the optical axis (the direction the cameras are pointing). b is the distance between the cameras, while f is still the focal length. The equations of stereo triangulation (because it looks like a triangle) are:

    Z_actual = (b * focal_length) / (x_camL - x_camR)
    X_actual = x_camL * Z_actual / focal_length
    Y_actual = y_camL * Z_actual / focal_length
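The parallel-camera triangulation equations above translate directly to code. A minimal sketch (units must be consistent - image coordinates, baseline, and focal length all expressed in the same units):

```python
def stereo_triangulate(x_cam_l, y_cam_l, x_cam_r, baseline, focal_length):
    """Recover the 3D point (X, Y, Z) from matched image coordinates
    of two parallel cameras, using the triangulation equations above."""
    disparity = x_cam_l - x_cam_r
    if disparity == 0:
        raise ValueError("zero disparity: point is at infinity")
    z = baseline * focal_length / disparity
    x = x_cam_l * z / focal_length
    y = y_cam_l * z / focal_length
    return x, y, z
```

Note the hard part in practice is not this math - it is finding which pixel in the right image matches a given pixel in the left image.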

CASE 2a: Non-Parallel Cameras, Rotation About the Y-axis

And lastly, what if the cameras are pointing in different, non-parallel directions? In the below diagram, the Z-axis is the optical axis of the left camera, while the Zo-axis is the optical axis of the right camera. Both cameras lie on the XZ plane, but the right camera is rotated by some angle phi. The point (0,0,Zo) where both optical axes (the plural of axis) intersect is called the fixation point. Note that the fixation point could also be behind the cameras when Zo < 0.

Calculating for the alien location . . .

    Zo = b / tan(phi)
    Z_actual = (b * focal_length) / (x_camL - x_camR + focal_length * b / Zo)
    X_actual = x_camL * Z_actual / focal_length
    Y_actual = y_camL * Z_actual / focal_length

CASE 2b: Non-Parallel Cameras, Rotation About the X-axis

Calculating for the alien location . . .

    Z_actual = (b * focal_length) / (x_camL - x_camR)
    X_actual = x_camL * Z_actual / focal_length
    Y_actual = y_camL * Z_actual / focal_length + tan(phi) * Z_actual

CASE 2c: Non-Parallel Cameras, Rotation About the Z-axis

For simplicity, rotation around the optical axis is usually dealt with by rotating the image before applying matching and triangulation. Given the translation vector T and rotation matrix R describing the transformation from left camera to right camera coordinates, the equation to solve for stereo triangulation is:

    p' = R^T (p - T)

where p and p' are the coordinates of P in the left and right camera coordinates respectively, and R^T is the transpose (and, since R is a rotation, the inverse) of R. Please continue on in the Computer Vision Tutorial Series for Part 4: Computer Vision Algorithms for Motion.

PROGRAMMING - COMPUTER VISION TUTORIAL

Part 4: Computer Vision Algorithms for Motion
Motion Detection
Tracking
Optical Flow
Background Subtraction
Feature Tracking
Practice Problems
Download Software

In part 4 of the Computer Vision Tutorial Series we will continue with computer vision algorithms for motion.

Motion Detection (Bulk Motion)

Motion detection works on the basis of frame differencing - meaning comparing how pixels (usually blobs) change location after each frame. There are two ways you can do motion detection. The first method just looks for a bulk change in the image:

    calculate the average of a selected color in frame 1
    wait X seconds
    calculate the average of a selected color in frame 2
    if (abs(avg_frame_1 - avg_frame_2) > threshold)
        then motion detected

The other method looks at the motion of the middle mass:

    calculate the middle mass in frame 1
    wait X seconds
    calculate the middle mass in frame 2
    if (abs(mm_frame_1 - mm_frame_2) > threshold)
        then motion detected

The problem with these motion detection methods is that neither detects very slow moving objects, as determined by the sensitivity of the threshold. But if the threshold is too sensitive, it will detect things like shadows and changes in sunlight! The algorithm also can't handle a rotating object - an object that moves, but whose middle mass does not change location.
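The bulk-change method above, sketched for grayscale frames stored as nested lists (the threshold value is application-specific):

```python
def average_intensity(frame):
    """Mean pixel value of a grayscale frame."""
    return sum(sum(row) for row in frame) / (len(frame) * len(frame[0]))

def bulk_motion_detected(frame1, frame2, threshold):
    """Bulk-change motion detection: compare the average intensity of
    two frames captured X seconds apart."""
    return abs(average_intensity(frame1) - average_intensity(frame2)) > threshold
```

The middle-mass variant is the same comparison applied to the blob's average pixel location instead of the average intensity.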

Tracking

By doing motion detection via the motion of the middle mass, you can run more advanced algorithms such as tracking. By doing vector math, and knowing the pixel-to-distance ratio, one may calculate the displacement, velocity, and acceleration of a moving blob.

Here is an example of how to calculate the speed of a car:

    calculate the middle mass in frame 1
    wait X seconds
    calculate the middle mass in frame 2
    speed = (mm_frame_2 - mm_frame_1) * distance_per_pixel / X

Problems with tracking:

The major issue with this algorithm is determining the distance-to-pixel ratio. If your camera is at an angle to the horizon (not looking overhead and pointing straight down), or your camera experiences the lens effect (all cameras do, to some extent), then you need to write a separate algorithm that maps this ratio for a given pixel located at an X and Y position. The below image is an exaggerated lens effect, with pixels further down the trail equaling a greater distance than the pixels closer to the camera.

This Mars Rover camera image is a good example of the lens effect:

Lens radial distortion can be modelled by the following equations:

    x_actual = xd * (1 + distortion_constant * (xd^2 + yd^2))
    y_actual = yd * (1 + distortion_constant * (xd^2 + yd^2))

The variables xd and yd are the image coordinates of the distorted image. The distortion_constant is a constant depending on the distortion of the lens. This constant can either be determined experimentally, or from data sheets of the lens or camera.

Crossover is the other major problem. This is when multiple objects cross over each other (i.e. one blob passes behind another blob) and the algorithm gets confused about which blob is which. For an example, here is a video showing the problem. Notice how the algorithm gets confused as the man goes behind the tree, or crosses over another tracked object? The algorithm must remember a decent number of features of each tracked object for crossovers to work. (video was taken from here)
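The radial distortion model given earlier can be sketched as a tiny helper, with k standing in for the distortion_constant (per the equations, it maps distorted image coordinates to corrected ones):

```python
def undistort(xd, yd, k):
    """Map distorted image coordinates (xd, yd) to corrected ones with
    the one-parameter radial model: scale by 1 + k * r^2."""
    r2 = xd * xd + yd * yd
    return xd * (1 + k * r2), yd * (1 + k * r2)
```

With k determined experimentally, this correction would be applied to every pixel coordinate before computing the distance-to-pixel ratio.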

Optical Flow

This computer vision method completely ignores, and has zero interest in, identifying observed objects. Instead it works by analyzing the bulk or individual motion of pixels. It is useful for tracking, 3D analysis, altitude measurement, and velocity measurement. This method has the advantage that it can work with low resolution cameras, and the simpler versions require minimal processing power. Optical flow is a vector field that shows the direction and magnitude of these intensity changes from one image to the other, as shown here:

Applications for Optical Flow

Altitude Measurement (for constant speed)
Ever notice when traveling by plane, the higher you are the slower the ground below you seems to move? For aerial robots that have a known constant speed, the altitude can be calculated by analyzing pixel velocity from a downward facing camera. The slower the pixels travel, the higher the robot. A potential problem is when your robot rotates in the air, but this can be accounted for by adding additional sensors such as gyros and accelerometers.

Velocity Measurement (for constant altitude)
For a robot traveling at some known altitude, the robot velocity can be calculated by analyzing pixel velocity. This is the converse of the altitude measurement method. It is impossible to gather both altitude and velocity data simultaneously using only optical flow, so a second sensor (such as GPS or an altimeter) needs to be used. If however your robot is an RC car, the altitude is already known (probably an inch above the ground). Velocity can then be calculated using optical flow with no other sensors. Optical flow can also be used to directly compute time to impact for missiles, and it is a technique often used by insects to gauge flight speed and direction.

Tracking
Please see tracking above, and background subtraction below. The optical flow method of tracking combines both of those methods. By removing the background, all that needs to be done is analyze the motion of the moving pixels.

3D Scene Analysis
By analyzing the motion of all pixels, it is possible to generate rough 3D measurements of the observed scene. For example, in the below image of the subway train, the pixels on the far left are moving fast, and they are both converging and slowing down towards the center of the image. With this information, 3D information about the train can be calculated (including the velocity of the train, and the angle of the track).

Problems with optical flow . . . Generally, optical flow corresponds to the motion field, but not always. For example, the motion field and optical flow of a rotating barber's pole are different:

Although it is only rotating about the z-axis, optical flow will say the red bars are moving upwards. Obviously, assumptions need to be made about the expected observed objects for this to work properly. Accounting for multiple objects gets really complicated . . . especially if they cross each other . . . And lastly, the equations get yet more complicated when you track not just linear motion of pixels, but rotational motion as well. With optical flow, how do you tell if the center point of this Ferris wheel is connected to the outer half?

Background Subtraction

Background subtraction is the method of removing pixels that do not move, focusing only on objects that do. The method works like this:

    capture two frames
    compare the pixel colors in each frame
    if the colors are the same, replace with the color white
    else, keep the new pixel

Here is an example of a guy moving with a static background. Some pixels did not appear to change when he moved, resulting in error:
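A sketch of the frame-comparison loop above for RGB frames, with a small tolerance added so sensor noise doesn't count as change (the tolerance value is my own assumption):

```python
WHITE = (255, 255, 255)

def subtract_background(frame1, frame2, tolerance=10):
    """Replace pixels that did not change between the two frames with
    white, keeping only the pixels that moved."""
    result = []
    for row1, row2 in zip(frame1, frame2):
        out_row = []
        for p1, p2 in zip(row1, row2):
            changed = any(abs(a - b) > tolerance for a, b in zip(p1, p2))
            out_row.append(p2 if changed else WHITE)
        result.append(out_row)
    return result
```

The uniform-color failure discussed next shows up here directly: a pixel inside a solid-colored moving object compares equal between frames and gets wrongly erased.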

The problem with this method, as above, is that if the object stops moving, it becomes invisible. If my hand moves, but my body doesn't, all you see is a moving hand. There is also the chance that although something is moving, not all the individual pixels change color because the object is of a uniform color. To correct for this, this algorithm must be combined with other algorithms such as edge detection and blob finding, to make sure all pixels within a moving boundary aren't discarded. There is one other form of background subtraction called blue-screening (or green-screening, or chroma-key). What you do is physically replace the background with a solid color - a big green curtain (called a chroma-key) typically works best. Then the computer replaces all pixels of that color with pixels from another scene. This technique is commonly used for weather anchors, and is why they never wear green ties =P

This blue-screening method is more a machine vision technique, as it will not work in everyday situations - only in studios with expert lighting.

Here is a video of my ERP that I made using chroma key. If you look carefully, you'll see various chroma key artifacts, as I didn't put much effort into getting it perfect. I used Sony Vegas Movie Studio to make the video.

Feature Tracking

A feature is a specific identified point in the image that a tracking algorithm can lock onto and follow through multiple frames. Often features are selected because they are bright/dark spots, edges, or corners - depending on the particular tracking algorithm. Template matching is also quite common. What is important is that each feature represents a specific point on the surface of a real object. As a feature is tracked it becomes a series of two-dimensional coordinates that represent the position of the feature across a series of frames. This series is referred to as a track. Once tracks have been created they can be used immediately for 2D motion tracking, or then be used to calculate 3D information.

(for a realplayer streaming video example of feature tracking, click the image)

Visual Servoing

Visual servoing is a method of using video data to determine position data for your robot. For example, your robot sees a door and wants to go through it. Visual servoing will allow the front of your robot to align itself with the door and pass through. If your robot wanted to pick something up, it can use visual servoing to move the arm to that location. To drive down a road, visual servoing would track the road with respect to the robot's heading.

To do visual servoing, first you need to use the vision processing methods listed in this tutorial to locate the object. Then your robot needs to decide how to orient itself to reach that location using some type of PID loop - the error being the distance between where the robot wants to be, and where it sees it is. If you would like to learn more about robot arms for use in visual servoing, see my robot arms tutorial.

ROBOT ARM TUTORIAL

Degrees of Freedom
Robot Workspace
Mobile Manipulators
Force Calculations
Forward Kinematics
Inverse Kinematics
Motion Planning
Velocity
Sensing
End Effector Design

About this Robot Arm Tutorial

The robot arm is probably the most mathematically complex robot you could ever build. As such, this tutorial can't tell you everything you need to know. Instead, I will cut to the chase and talk about the bare minimum you need to know to build an effective robot arm. Enjoy! To get you started, here is a video of a robot arm assignment I had when I took Robotic Manipulation back in college. My group programmed it to type the current time into the keyboard . . . (lesson learned: don't crash robot arms into your keyboard at full speed while testing in front of your professor) You might also be interested in a robot arm I built that can shuffle, cut, and deal playing cards.

Degrees of Freedom (DOF)

The degrees of freedom, or DOF, is a very important term to understand. Each degree of freedom is a joint on the arm - a place where it can bend, rotate, or translate. You can typically identify the number of degrees of freedom by the number of actuators on the robot arm. Now this is very important - when building a robot arm you want as few degrees of freedom as your application allows!!! Why? Because each degree requires a motor, often an encoder, and exponentially complicated algorithms and cost.

Denavit-Hartenberg (DH) Convention

The Robot Arm Free Body Diagram (FBD)
The Denavit-Hartenberg (DH) Convention is the accepted method of drawing robot arms in FBDs. There are only two motions a joint could make: translate and rotate. There are only three axes this could happen on: x, y, and z (out of plane). Below I will show a few robot arms, and then draw an FBD next to each, to demonstrate the DOF relationships and symbols. Note that I did not count the DOF on the gripper (otherwise known as the end effector). The gripper is often complex with multiple DOF, so for simplicity it is treated as separate in basic robot arm design.

4 DOF Robot Arm, three are out of plane:

3 DOF Robot Arm, with a translation joint:

5 DOF Robot Arm:

Notice between each DOF there is a linkage of some particular length. Sometimes a joint can have multiple DOF in the same location. An example would be the human shoulder. The shoulder actually has three coincident DOF. If you were to mathematically represent this, you would just say link length = 0.

Also note that a DOF has its limitations, known as the configuration space. Not all joints can swivel 360 degrees! A joint has some max angle restriction. For example, no human joint can rotate more than about 200 degrees. Limitations could be from wire wrapping, actuator capabilities, servo max angle, etc. It is a good idea to label each link length and joint max angle on the FBD.

(image credit: Roble.info)

Your robot arm can also be on a mobile base, adding additional DOF. If the wheeled robot can rotate, that is a rotation joint; if it can move forward, that is a translational joint. This mobile manipulator robot is an example of a 1 DOF arm on a 2 DOF robot (3 DOF total).

Robot Workspace

The robot workspace (sometimes known as reachable space) is all the places that the end effector (gripper) can reach. The workspace is dependent on the DOF angle/translation limitations, the arm link lengths, the angle at which something must be picked up, etc. The workspace is highly dependent on the robot configuration. Since there are many possible configurations for your robot arm, from now on we will only talk about the one shown below. I chose this 3 DOF configuration because it is simple, yet isn't limited in ability.

Now let's assume that all joints rotate a maximum of 180 degrees, because most servo motors cannot exceed that amount. To determine the workspace, trace all locations that the end effector can reach, as in the image below.

Now rotating that by the base joint another 180 degrees to get 3D, we have this workspace image. Remember that because it uses servos, all joints are limited to a max of 180 degrees. This creates a workspace of a shelled semi-sphere (it's a shape because I said so).

If you change the link lengths you can get very different sizes of workspaces, but this would be the general shape. Any location outside of this space is a location the arm can't reach. If there are objects in the way of the arm, the workspace can get even more complicated. Here are a few more robot workspace examples:

Cartesian Gantry Robot Arm

Cylindrical Robot Arm

Spherical Robot Arm

Scara Robot Arm

Articulated Robot Arm

Mobile Manipulators

A moving robot with a robot arm is a sub-class of robotic arms. They work just like other robotic arms, but the DOF of the vehicle is added to the DOF of the arm. If, say, you have a differential drive robot (2 DOF) with a robot arm (5 DOF) attached (see the yellow robot below), that would give the robot arm a total sum of 7 DOF. What do you think the workspace of this type of robot would be?

Force Calculations of Joints

This is where this tutorial starts getting heavy with math. Before even continuing, I strongly recommend you read the mechanical engineering tutorials for statics and dynamics. This will give you a fundamental understanding of moment arm calculations. The point of doing force calculations is for motor selection. You must make sure that the motor you choose can not only support the weight of the robot arm, but also what the robot arm will carry (the blue ball in the image below). The first step is to label your FBD, with the robot arm stretched out to its maximum length.

Choose these parameters:

    o weight of each linkage
    o weight of each joint
    o weight of object to lift
    o length of each linkage

Next you do a moment arm calculation, multiplying downward force times the linkage lengths. This calculation must be done for each lifting actuator. This particular design has just two DOF that require lifting, and the center of mass of each linkage is assumed to be at Length/2.

    Torque About Joint 1:
    M1 = L1/2 * W1 + L1 * W4 + (L1 + L2/2) * W2 + (L1 + L3) * W3

    Torque About Joint 2:
    M2 = L2/2 * W2 + L3 * W3

As you can see, for each DOF you add, the math gets more complicated and the joint weights get heavier. You will also see that shorter arm lengths allow for smaller torque requirements.
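The two moment equations above, sketched as a function. The variable roles are my reading of the FBD: W1/W2 are link weights, W3 the lifted object, W4 the weight of joint 2, and L3 the distance from joint 2 to the object:

```python
def joint_torques(L1, L2, L3, W1, W2, W3, W4):
    """Moment-arm torques about the two lifting joints, per the
    equations above. Link centers of mass are assumed at Length/2."""
    m1 = (L1 / 2 * W1            # link 1 weight at its midpoint
          + L1 * W4              # joint 2 weight at the end of link 1
          + (L1 + L2 / 2) * W2   # link 2 weight at its midpoint
          + (L1 + L3) * W3)      # lifted object at full reach
    m2 = L2 / 2 * W2 + L3 * W3
    return m1, m2
```

Motors for joints 1 and 2 would then be selected with torque ratings above m1 and m2 (plus a safety margin).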

Too lazy to calculate forces and torques yourself? Try my robot arm calculator to do the math for you.

Forward Kinematics

Forward kinematics is the method for determining the orientation and position of the end effector, given the joint angles and link lengths of the robot arm. To calculate forward kinematics, all you need is high school trig and algebra.

For our robot arm example, here we calculate end effector location with given joint angles and link lengths. To make visualization easier for you, I drew blue triangles and labeled the angles.

Assume that the base is located at x=0 and y=0. The first step is to locate x and y of each joint.

    Joint 0 (with x and y at base equaling 0):
    x0 = 0
    y0 = L0

    Joint 1 (with x and y at J1 equaling 0):
    cos(psi) = x1/L1  =>  x1 = L1*cos(psi)
    sin(psi) = y1/L1  =>  y1 = L1*sin(psi)

    Joint 2 (with x and y at J2 equaling 0):
    sin(theta) = x2/L2  =>  x2 = L2*sin(theta)
    cos(theta) = y2/L2  =>  y2 = L2*cos(theta)

    End Effector Location (make sure your signs are correct):
    x = x0 + x1 + x2 = 0 + L1*cos(psi) + L2*sin(theta)
    y = y0 + y1 + y2 = L0 + L1*sin(psi) + L2*cos(theta)
    z = alpha, in cylindrical coordinates

The angle of the end effector, in this example, is equal to theta + psi. Too lazy to calculate forward kinematics yourself? Check out my Robot Arm Designer v1 in Excel.
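The forward-kinematics equations above in code form (angles in radians; the angle conventions follow the triangles drawn in the figure, with psi at joint 1 and theta at joint 2):

```python
import math

def forward_kinematics(L0, L1, L2, psi, theta):
    """End effector (x, y) for the 3-DOF arm above, in the vertical
    plane of the arm; the base rotation would sweep this plane in 3D."""
    x = L1 * math.cos(psi) + L2 * math.sin(theta)
    y = L0 + L1 * math.sin(psi) + L2 * math.cos(theta)
    return x, y
```

With psi = 0 and theta = 0, link 1 points straight out and link 2 straight up, so the end effector sits at (L1, L0 + L2), which is a quick sanity check on the signs.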

Inverse Kinematics

Inverse kinematics is the opposite of forward kinematics. This is when you have a desired end effector position, but need to know the joint angles required to achieve it. The robot sees a kitten and wants to grab it - what angles should each joint go to? Although way more useful than forward kinematics, this calculation is much more complicated too. As such, I will not show you how to derive the equation based on your robot arm configuration. Instead, I will just give you the equations for our specific robot design:

    psi = arccos((x^2 + y^2 - L1^2 - L2^2) / (2 * L1 * L2))
    theta = arcsin((y * (L1 + L2 * c2) - x * L2 * s2) / (x^2 + y^2))

    where c2 = (x^2 + y^2 - L1^2 - L2^2) / (2 * L1 * L2)
    and   s2 = sqrt(1 - c2^2)

So what makes inverse kinematics so hard? Well, other than the fact that it involves nonlinear simultaneous equations, there are other reasons too. First, there is the very likely possibility of multiple, sometimes infinite, solutions (as shown below). How would your arm choose which is optimal, based on torques, previous arm position, gripping angle, etc.?
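The closed-form equations above, sketched with a workspace check added (my addition). Note this is only a sketch: it returns just one of the possibly many solutions, which is exactly the multiple-solutions problem just mentioned:

```python
import math

def inverse_kinematics(x, y, L1, L2):
    """Joint angles (psi, theta) placing the end effector at (x, y),
    per the closed-form equations above. Raises ValueError when the
    target is outside the workspace."""
    c2 = (x ** 2 + y ** 2 - L1 ** 2 - L2 ** 2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target outside workspace")
    s2 = math.sqrt(1 - c2 ** 2)
    psi = math.acos(c2)
    theta = math.asin((y * (L1 + L2 * c2) - x * L2 * s2) / (x ** 2 + y ** 2))
    return psi, theta
```

Taking the negative root for s2 would give the mirrored "elbow-down" solution instead.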

There is also the possibility of zero solutions. Maybe the location is outside the workspace, or maybe the point within the workspace must be gripped at an impossible angle. Singularities, places of infinite acceleration, can blow up equations and/or leave motors lagging behind (motors can't achieve infinite acceleration). And lastly, exponential equations take forever to calculate on a microcontroller. No point in having advanced equations on a processor that can't keep up. Too lazy to calculate inverse kinematics yourself? Check out my Robot Arm Designer v1 in Excel.

Motion Planning

Motion planning on a robot arm is fairly complex, so I will just give you the basics.

Suppose your robot arm has objects within its workspace - how does the arm move through the workspace to reach a certain point? To do this, assume your robot arm is just a simple mobile robot navigating in 3D space. The end effector will traverse the space just like a mobile robot, except now it must also make sure the other joints and links do not collide with anything. This is extremely difficult to do . . . What if you want your robot end effector to draw straight lines with a pencil? Getting it to go from point A to point B in a straight line is relatively simple to solve. What your robot should do, using inverse kinematics, is go to many points between point A and point B. The final motion will come out as a smooth straight line. You can do this not only with straight lines, but curved ones too. On expensive professional robotic arms all you need to do is program two points, and tell the robot how to go between the two points (straight line, fast as possible, etc.). For further reading, you could use the wavefront algorithm to plan this two-point trajectory.
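The go-to-many-points idea for drawing a straight line can be sketched as a simple waypoint generator (each returned point would then be fed to inverse kinematics to get joint angles):

```python
def straight_line_waypoints(start, end, steps):
    """Evenly spaced intermediate points the end effector should pass
    through so its net motion approximates a straight line."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
            for i in range(steps + 1)]
```

More steps gives a smoother line at the cost of more inverse-kinematics solves per motion.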

Velocity (and more Motion Planning)

Calculating end effector velocity is mathematically complex, so I will go over only the basics. The simplest way is to assume your robot arm (held straight out) is a rotating wheel whose radius is the arm length. If the joint rotates at Y rpm, then the velocity is:

    velocity of end effector on straight arm = 2 * pi * radius * rpm

However, the end effector does not just rotate about the base; it can go in many directions. The end effector can follow a straight line, or a curve, etc. With robot arms, the quickest way between two points is often not a straight line. If two joints have two different motors, or carry different loads, then the max velocity can vary between them. When you tell the end effector to go from one point to the next, you have two decisions: have it follow a straight line between both points, or tell all the joints to go as fast as possible - leaving the end effector to possibly swing wildly between those points. In the image below the end effector of the robot arm is moving from the blue point to the red point. In the top example, the end effector travels a straight line. This is the only possible motion this arm can perform to travel a straight line. In the bottom example, the arm is told to get to the red point as fast as possible. Given many different trajectories, the arm chooses the one that allows the joints to rotate the fastest.

Which method is better? There are many deciding factors. Usually you want straight lines when the object the arm moves is really heavy, as it requires a momentum change for movement (momentum = mass * velocity). But for maximum speed (perhaps the arm isn't carrying anything, or just light objects) you would want maximum joint speeds. Now suppose you want your robot arm to operate at a certain rotational velocity - how much torque would a joint need? First, let's go back to our FBD:

Now let's suppose you want joint J0 to rotate 180 degrees in under 2 seconds. What torque does the J0 motor need? Well, J0 is not affected by gravity, so all we need to consider is momentum and inertia. Putting this in equation form we get:

torque = moment_of_inertia * angular_acceleration

Breaking that equation into sub-components we get:

torque = (mass * distance^2) * (change_in_angular_velocity / change_in_time)

where

change_in_angular_velocity = angular_velocity1 - angular_velocity0
angular_velocity = change_in_angle / change_in_time

Now, assuming the arm starts at rest (angular_velocity0 = 0), we get:

torque = (mass * distance^2) * (angular_velocity / change_in_time)

Here distance is defined as the distance from the rotation axis to the center of mass:

center of mass of the arm: distance = 1/2 * arm_length (use the arm mass)
center of mass of the held object: distance = arm_length (use the object mass)

So calculate the torque for the arm and then again for the object, then add the two torques together for the total:

torque(of_object) + torque(of_arm) = torque(for_motor)

And of course, if J0 were additionally affected by gravity, add the torque required to lift the arm to the torque required to reach the velocity you need. To avoid doing this by hand, just use the robot arm calculator. But it gets harder . . . the above equation is for rotational motion, not for straight-line motions. Look up something called a Jacobian if you enjoy mathematical pain =P

Another Video! In order to better understand robot arm dynamics, we had a robot arm bowling competition using the same DENSO 6DOF robot arms as in the clocks video. Each team programs an arm to do two tasks:

o Try to place all three of its pegs in the opponents' goal
o Block opponent pegs from going in your own goal

Enjoy! (notice the different arm trajectories) Arm Sagging Arm sagging is a common affliction of badly designed robot arms. It happens when an arm is too long and heavy, bending when stretched outward. When designing your arm, make sure it is reinforced and lightweight. Do a finite element analysis to determine bending deflection/stress, as I did on my ERP robot:

Keep the heaviest components, such as motors, as close to the robot arm base as possible. It might be a good idea for the middle arm joint to be chain/belt driven by a motor located at the base (to keep the heavy motor on the base and off the arm). The sagging problem is even worse when the arm wobbles between stop-start motions. To solve this, implement a PID controller so as to slow the arm down before it makes a full stop.

Sensing

Most robot arms only have internal sensors, such as encoders. But for good reasons you may want to add additional sensors, such as video, touch, haptics, etc. A robot arm without video sensing is like an artist painting with his eyes closed. Using basic visual feedback algorithms, a robot arm could go from point to point on its own without a list of preprogrammed positions. Give the arm a red ball and it could actually reach for it (visual tracking and servoing). If the arm can locate a position in the X-Y space of an image, it can direct the end effector to go to that same X-Y location (by using inverse kinematics). If you are interested in learning more about the vision aspect of visual servoing, please read the Computer Vision Tutorials for more information.

Haptic sensing is a little different in that there is a human in the loop. The human controls the robot arm movements remotely. This could be done by wearing a special glove, or by operating a miniature model with position sensors. Robotic arms for amputees perform a form of haptic sensing. Also of note, some robot arms have feedback sensors (such as touch) whose readings get directed back to the human (vibrating the glove, locking model joints, etc.).

Tactile sensing (sensing by touch) usually involves force feedback sensors and current sensors. These sensors detect collisions by detecting unexpected force/current spikes, meaning a collision has occurred. A robot end effector can detect a successful grasp, and avoid grasping too tightly or too loosely, just by measuring force. Another method is to use current limiters - sudden large current draws generally mean a collision/contact has occurred. An arm could also adjust end effector velocity by knowing whether it is carrying a heavy object or a light object - perhaps even identify the object by its weight.

Try this. Close your eyes, and put both of your hands in your lap. Now, keeping your eyes closed, move your hand slowly to reach for your computer mouse. Do it!!!! You will see why soon . . . What will happen is that your hand will partially miss, but at least one of your fingers will touch the mouse. After that finger touches, your hand will suddenly re-adjust its position because it now knows exactly where the mouse is. This is the benefit of tactile sensing - no precision encoders required for perfect contact!

End Effector Design In the future I will write a separate tutorial on how to design robot grippers, as it will require many more pages of material. In the meantime, you might be interested in reading the tutorial on calculating friction and force for robot end effectors. I also went into some detail describing my robot arm card dealing gripper. Anyway, I hope you have enjoyed this robot arm tutorial!

Practice What You Learned The three images below are made from sonar capable of generating a 2D mapped field of an underwater scene with fish (for fisheries counting). Since the data is stored in a similar way to data from a camera, vision algorithms can be applied.

(scene 1, scene 2, and scene 3) So here is your challenge: What two different algorithms can achieve the change from scene 1 to scene 2 (hint: scene 2 only shows moving fish)? Name the algorithm that can achieve the change from scene 2 to scene 3 (hint: the color is made binary). What algorithm allows finding the location of the fish in the scene? If in scene two we were to identify the types of fish, what three different algorithms might work?

answers are at the bottom of this page

Downloadable Software (not affiliated with SoR) For those interested in vision code for the hacking, here is a great source for computer vision source code. To quickly get started with computer vision processing, try RoboRealm. Its simple GUI interface allows you to do histograms, edge detection, filtering, blob detection, matching, feature tracking, thresholding, transforms and morphs, coloring, and a few others. answers: background subtraction and optical flow; blob detection; middle mass; image correlation, shape detection and pattern recognition, and facial recognition techniques

PROGRAMMING - DATA LOGGING TUTORIAL

Data Logging Data logging is the method of recording sensor measurements over a period of time. Typically in robotics you will not need a datalogger. But there are times when you may need to analyze a complex situation, process large amounts of data, diagnose an error, or perhaps need an automated way to run an experiment. For example, you can use a data logger to measure force and torque sensors, perform current or power use measurements, or just record data for future analysis.

ROBOT FORCE AND TORQUE SENSORS

(images: left 6 detect force, right 5 detect torque)

Theory
Capacity
Strain Gauge
Wheatstone Bridge
Costs
Damage
Installation
Cables

Force Sensors (Force Transducers) There are many types of force sensors, usually referred to as torque cells (to measure torque) and load cells (to measure force). From this point on I will refer to them as 'force transducers.' Force transducers are devices useful in directly measuring torques and forces within your mechanical system. In order to get the most benefit from a force transducer, you must have a basic understanding of the technology, construction, and operation of this unique device.

Digital Load Cell Cutaway

Theory of Measuring Forces There are many reasons why you would need to directly measure forces for your robot. Parameter optimization, force quantization, and weight measurement are a few. You may want to put force transducers on your bipedal robot to know how much weight is on each leg at any point in time. You may want to put force transducers in your robot grippers to control gripper friction - so as to not crush or drop anything picked up. Or you could use one so that your robot knows it has reached its maximum carrying weight (or even to determine how much weight it is carrying). First, I will talk about how a force transducer converts a force into a measurable electrical signal.

Strain Gauge The strain gauge is a tiny flat coil of conductive wire (ultra-thin heat-treated metallic foil chemically bonded to a thin dielectric layer blah blah blah) that changes its resistance when you bend it. The idea is to place the strain gauge on a beam (with a special adhesive), bend the beam, then measure the change in resistance to determine the strain. Note that strain is directly related to the force applied to bend the beam. Unfortunately strain gauges are somewhat expensive at about $10-20 each, usually coming in packs of 5-10 (so it's more like $50-$100). If you are willing to experiment, and your forces are small, you can also use conductive foam as a strain gauge: compressing the foam lowers its electrical resistance. If you want more details, see this strain gauge tutorial. (please note that compression and tension are mislabeled, and should be swapped in the below animation - sorry!)

Wheatstone Bridge The typical strain gauge has a VERY LOW change in resistance when bent. So to measure this change, several tricks are applied. There is a ton of theory on this, so I won't go into how it works, but basically a neat circuit invented in the 1800's can be used to easily amplify this difference. These circuits are built into all load and torque sensors, so you do not need to be concerned with how they work, just how to use them. The strain gauges inside the force transducer, usually a multiple of four, are connected into a Wheatstone bridge configuration in order to convert the very small change in resistance into a usable electrical signal. Passive components such as resistors and temperature-dependent wires are used to compensate and calibrate the bridge output signal. Anyway, most force transducers have four wires coming out of them, so all you need to do is attach them as prescribed here:

Note that the wire colors are usually red, black, green, and white, and that some manufacturers for some lame reason use the red and black wires for signal and not for power. You will probably need to further amplify the signal by a factor of another few thousand, but that can easily be done with a voltage difference amplifier. Your output will give you a negative voltage for one direction of force, and a positive voltage for the opposite direction. If you are measuring voltage with an oscilloscope or multimeter, this is easy to read. But a microcontroller cannot accept a negative voltage input; it can only read 0V to 5V. As a solution, use a 2.5V voltage regulator for the ground of your force transducer, and a 7.5V ~ 8V voltage regulator for power to your force transducer. This effectively shifts the output so that 2.5V is the neutral (zero force) reading at your microcontroller, keeping your range between 0 and 5V. To keep your sensor within that range, experiment with your amplifier gain.

Costs Unfortunately force transducers are on the expensive side. Expect to spend between a few hundred and a few thousand dollars each. There are many different types of sensors, of different dimensions and capacities and qualities, from a large variety of companies. Know that some companies hire actual engineers for tech support, some don't. Actual conversation I once had: "I have some technical questions, are you an engineer?" "Ummm, I don't have a degree in engineering, if that is what you mean. But I think I can help you." Surprisingly, some companies do not actually include a spec sheet (Certificate of Calibration) with their sensor, so you have no idea what the voltage-torque curve is! Insist on getting one, or expect to spend hours testing and graphing when you get your sensor.

Don't make your choice of sensor based solely on price - total cost of ownership matters too. Maintenance costs, recalibration time, possibility of failure, etc. should all be factors. As a side comment, there are ways to make your own force transducers if you're on a tight budget, but that is outside the scope of this tutorial. So if you think buying a force transducer is for you, continue reading.

Capacity Selection Force overload is the primary reason for transducer failure, even though the process of selecting the right force capacity looks easy and straightforward. There are several terms you must understand to properly select for load capacity: The measuring range is the range of values of mass for which the result of measurement is not affected by outer limit error. The safe load/torque limit is the maximum load/torque that can be applied without producing a permanent shift in the performance characteristics beyond those specified. The safe side load is the maximum load that can act 90 degrees to the axis along which the transducer is designed to be loaded without producing a permanent shift in the performance beyond that specified. A force transducer will perform within specifications until the safe load/torque limit or safe side load limit is passed. Beyond this point, even for a very short period of time, the transducer will be permanently damaged.

Capacity Selection, Derating Unfortunately you cannot just rate your transducer by static forces alone. There are many additional issues you must be concerned about:

o Shock loading (sudden short-term forces)
o Dynamic influences (momentum)
o Off-center distribution of force
o The possibility of an overload weight/torque
o Strain gauge fatigue (constant use and wear)
o Cable entry fatigue (the output wire bending a lot)

If there is a possibility that any of these may occur, you must then derate your force sensor (use a higher capacity). For example, if you expect a high fatigue rate, you should multiply your required capacity by two. Make sure you understand what you are measuring so that you do not waste money on a soon-to-be-broken force transducer. Over time, you may also want to recalibrate your sensor occasionally in case of long-term fatigue damage.

Damage Because force transducers are expensive, preventing them from being damaged should be a high priority. There are many ways to damage a transducer. Shock, overloading, lightning strikes or heavy surges in current, chemical or moisture ingress, mishandling (dropping, pulling on cable, etc.), vibration, seismic events, or internal component malfunctioning to name a few. If your sensor becomes damaged, don't just re-calibrate it. Mechanical failure may have catastrophic effects and you will no longer have a reliable sensor.

Lightning "Investigations indicate that a lightning strike within a 900ft radius of the geometrical centre of the site will definitely have a detrimental effect on the weighbridge." In most cases, the actual damage is a direct result of a potential difference (1000+ volts) between the sensor circuit and the sensor housing. If lightning strikes commonly happen in your area, make all grounds on your circuit common so the voltages float together - and use surge protectors! And of course, no electric welding should be done near your sensor (hey, it has happened).

Moisture Obviously, water and electronics do not mix. Force transducers are always sealed to keep out the elements; however, moisture/condensation damage occurs from slow seeping over a long period of time. The damage can be multiplied when acids or alkalis are present. The most likely entry area for moisture is at the cable entry point, so it is important to protect this area more than any other. Manufacturers employ many techniques to seal it off, but there are additional techniques you the user can also employ. Know that temperature changes can often cause a pumping action to occur, pushing moisture down the inside of the cable. Entry can also be via a leaking junction box or through a damaged part of the cable. This can take some time to reach critical areas, but once there it will become sealed in place and do critical damage.

Corrosion The effects of corrosion on your force transducer will be the result of both the manufacturing quality and the environment in which the sensor is used. Make sure you understand how likely your chosen transducer is to corrode over time. Consider the metal type of the outer casing, the surface finish, the weld areas, the thickness/quality of moisture seals, and the cable material (PVC, PUR, or teflon). Also understand the environment - salt water, for example, has different corrosion effects depending on the local circumstances. Stainless steel in stagnant salt water is subject to crevice corrosion (a regular wash-down is necessary to avoid degradation). Don't assume stainless steel means "no corrosion, no problem and no maintenance". In certain applications, painted or plated load cells may offer better long-term protection. An alternative is wrap-around protective covers. These can provide good environmental protection, but can be self-destructive if corrosive material is trapped inside the cover. Sealing compounds and rubbers used on some transducers can deteriorate when exposed to chemicals or direct sunlight. Because they embrittle rubber, chlorine-based compounds are a particular problem. Always keep your sensor maintained and clean to avoid corrosion.

Installation There are several considerations that are often forgotten during the mounting of force transducers. For example, it is a common misconception that a force transducer can be treated as a solid piece of metal on which other parts can be mounted. The performance of a force transducer depends primarily on its ability to deflect repeatably when load/torque is applied or removed. Make sure all supports are designed to avoid lateral forces, bending moments, torsion moments, off-center loading, and vibration. These effects not only compromise the performance of your force transducer, but can also lead to permanent damage. Also, consider self-aligning mounts.

The S-Beam Load Cell Changes Shape Under Load

Force Transducer Cables Special attention should be paid to preventing the transducer cable from being damaged during and after installation. Never carry transducers by their cables, and provide drip loops to prevent water from running directly into the cable entry. Don't forget to provide adequate protection for the cable, near the sensor if possible. Load cells are always produced with a four- or six-wire cable. A four-wire cable is calibrated and temperature compensated with a certain length of cable. The performance of the load cell, in terms of temperature stability, will be compromised if the cable is cut; never cut a four-wire load cell cable! Six-wire cables can be cut, but all wires must be cut evenly to avoid any differences.

Extras What I have talked about is actually a very watered-down tutorial for force transducers. If you would like to learn more, read the advanced load cell tutorial. I didn't write it, so good luck! You may also be interested in the data logging tutorial so that you can log your force/torque sensor data effectively.

SENSORS - CURRENT SENSOR

Current Sensing Current sensing is as it says - sensing the amount of current in use by a particular circuit or device. If you want to know the amount of power being used by any robot component, current sensing is the way to go. Applications Current sensing is not a typical application in robotics. Most robots would never need a current sensing ability. Current sensing is a way for a robot to measure its internal state, and is rarely required to explore the outside world. It is useful for a robot builder to better understand the power use of the various components within a robot. Sensing can be done for DC motors, circuits, or servos to measure actuator power requirements. It can be done for things like microcontrollers to measure power performance in different situations. It can be useful for things like robot battery monitors. And lastly, robot hand grasp detection and collision detection: for example, if the current use suddenly increases, that means a physical object is causing resistance. Methods There are several methods to sense current, each having its own advantages and disadvantages. The easiest method is using a typical benchtop DC power supply.

This device is somewhat expensive, costing in the hundreds of dollars, but they are very common and you can easily find one available in any typical university lab. These devices are a must for any electrical engineer or robot builder. Operation of this device should be straightforward. Apply a voltage to your component, and it will quickly give a readout of the current you are drawing. Although this takes seconds and little effort to do, there are a few disadvantages to this method. The first disadvantage is that it is not highly accurate. Usually they can only measure in increments rounded off to the nearest 10mA. This is fine for high-powered applications where an extra 5mA does not matter, but for low current draw devices this can be an issue. The next disadvantage is timing. A benchtop power supply only takes current measurements at set intervals - usually 3 times a second. If your device draws a steady current over time this is not a problem. But if, for example, your device ramps from 0 to 3 amps five times a second, the current reading you get will not be accurate. The last major disadvantage is that there is no data logging ability - therefore you cannot analyze any complex current draw data on a computer. The second method is using a digital multimeter.

The digital multimeter is another commonly available device capable of analyzing many different characteristics of your circuit - voltage, current, capacitance, resistance, temperature, frequency, etc. If you do not already have one, you definitely need one to make a robot. It would be like cooking without heat if you didn't have one . . . For cost, they range in price from around $10 to about $100. The price depends on features and accuracy. To measure current, all you do is connect your two leads in series with one of your power source wires. But again, there are disadvantages to this method. Like the benchtop power supply, digital multimeters suffer timing issues. However, accuracy is usually an extra one or two decimal places better - good enough for most applications. As for data logging, several available multimeters actually have computer link-up cables so that you may record current data to process later. The last method is using a chip called a Current Sense IC.

This ~$5 chip, using a really tiny resistor and a built-in high gain amplifier, outputs a voltage in proportion to the current passing through it. Put the chip in series with what you want to measure, and connect the output to a data logging device such as a microcontroller. The microcontroller can print out data to HyperTerminal on your computer, and from there you can transfer it to any data analyzing program you wish (like Excel).

This particular schematic below (click for the full expanded circuit) can measure the current use of a servo. But it can easily measure current from any other device with no modification - and even multiple items simultaneously too! The capacitor is optional; it acts as a voltage buffer, ensuring maximum continuous current.

Parts of a Data Logger Typically, data loggers are very simple devices that contain just three basic parts: 1) A sensor (or sensors) measures an event. Anything can be measured. Humidity, temperature, light intensity, voltages, pressure, fault occurrences, etc.

2) A microcontroller then stores this information, usually placing a time-stamp next to each data set. For example, if you have a data logger that measures the temperature in your room over a period of a day, it will record both the temperature and the time that temperature was detected. The information stored on the microcontroller will be sent to a PC using a UART for user analysis. 3) And lastly the data logger will have some sort of power supply, typically a battery that will last at least the duration of the measurements. HyperTerminal HyperTerminal is a program that comes with Windows and works great for data logging. Back in the day it was used for things such as text-based webpage browsing and/or uploading pirated files to your friends over the then super fast 56k modems.

I will now show you how to set up HyperTerminal so that you may record data outputted by a microcontroller through RS232 serial. First, you need your microcontroller to handle serial data communications. If it has a serial or USB cable attached to it already, then you are set to continue. Next, you need a program that reads and prints out sensor data to serial, as in this example: printf("%u, %u, %lu\r\n", analog(PIN_A0), analog(PIN_A1), get_timer1()); If you are interested, feel free to read more with the printf() function tutorial and the microcontroller UART tutorial.

PROGRAMMING - PRINTF()

Printing Out Data Printing out data from a microcontroller is extremely useful in robotics. You can use it for debugging your program, for use as a data logger, or to have your robot simply communicate with someone or something else. This short tutorial will give you sample code, and explain what everything means. printf() The most convenient way of writing to a computer through serial, or to an LCD, is to use the formatted print utility printf(). If you have programmed in C++ before, this is the equivalent of cout. Unlike most C functions, this function takes a variable number of parameters. The first two parameters are always the output channel and the formatting string; these may then be followed by variables or values of any type. The % character is used within the string to indicate that a variable value is to be formatted and output. Variables That Can Follow The % Character When you output a variable, you must also define what variable type is being output. This is very important: for example, a variable printed out as a signed long int will often not print out the same as the same variable printed out as an unsigned int.

c - Character
u - Unsigned int (decimal)
x - Unsigned int (hex - lower case)
X - Unsigned int (hex - upper case)
d - Signed int (decimal)
lu - Unsigned long int (decimal)
lx - Unsigned long int (hex - lower case)
LX - Unsigned long int (hex - upper case)
ld - Signed long int (decimal)
e - Float (scientific)
f - Float (decimal)

End Of Line Control Characters Sometimes you would like to control the spacing and positioning of the text that is printed out. To do this, you would add one of the escape codes below. I recommend just putting \n\r at the end of all printf() commands.

\n - go to new line
\r - carriage return
\b - backspace
\' - single quote
\" - double quote
\\ - backslash
\t - horizontal tab
\v - vertical tab

Examples of printf():

printf("hello, world!");
printf("squirrels are cute\n\rpikachu is cuter\n\r");
printf("the answer is %u", answer);
printf("the answer is %u and %f", answer, (float)answer);
printf("3 + 4 = %u", (3+4));
printf("%f, %u\n\r", (10.0/3), answer);
printf("the sensor value is %lu", analog(1));

Two fixes worth noting over the usual beginner versions: the cast is written (float)answer (C syntax, not C++'s float(answer)), and the division is written 10.0/3 so it happens in floating point - passing the integer result of 10/3 to %f is undefined behavior.

MICROCONTROLLER UART TUTORIAL

RS232
EIA232F
TTL and USB
Adaptor Examples
Tx and Rx
Baud Rate, Misc
Asynchronous Tx
Loop-Back Test
$50 Robot UART

What is the UART? The UART, or Universal Asynchronous Receiver/Transmitter, is a feature of your microcontroller useful for communicating serial data (text, numbers, etc.) to your PC. The device changes incoming parallel information (within the microcontroller/PC) to serial data which can be sent on a communication line. Adding UART functionality is extremely useful for robotics. With the UART, you can add an LCD, bootloading, bluetooth wireless, make a datalogger, debug code, test sensors, and much more! Understanding the UART can be complicated, so I filtered out the useless information and present to you only the useful, need-to-know details in an easy to understand way . . . The first half of this tutorial will explain what the UART is, while the second half will give you instructions on how to add UART functionality to your $50 robot. What is RS232, EIA-232, TTL, serial, and USB? These are the different standards/protocols used for transmitting data. They are incompatible with each other, but if you understand what each one is, you can easily convert between them to get what you need for your robot.

RS232 RS232 is the old standard and is starting to become obsolete. Few if any laptops even have RS232 ports (serial ports) today, with USB becoming the new universal standard for attaching hardware. But since the world has not yet fully swapped over, you may encounter a need to understand this standard. Back in the day, circuits were noisy, lacking filters and robust algorithms, etc. Wiring was also poor, meaning signals became weaker as wiring became longer (this relates to the resistance of the wire). So to compensate for the signal loss, they used very high voltages. Since a serial signal is basically a square wave, where the pulse widths relate to the bit data transmitted, RS232 was standardized as +/-12V. To get both +12V and -12V, the most common method is to use the MAX232 IC (or the ICL232 or ST232 - different ICs that all do the same thing), accompanied by a few capacitors and a DB9 connector. But personally, I feel wiring these up is just a pain . . . here is a schematic if you want to do it yourself (instead of a kit):

EIA232F Today signal transmission systems are much more robust, meaning a +/-12V signal is unnecessary. The EIA232F standard (introduced in 1997) is basically the same as the RS232 standard, but now it can accept a much more reasonable 0V to 5V signal. Almost all current computers (after 2002) utilize a serial port based on this EIA-232 standard. This is great, because now you no longer need the annoying MAX232 circuit! Instead what you can use is something called an RS232 shifter - a circuit that takes signals from the computer/microcontroller (TTL) and correctly inverts and amplifies the serial signals to the EIA232F standard. If you'd like to learn more about these standards, check out this RS232 and EIA232 tutorial (external site). The cheapest RS232 shifter I've found is the $7 RS232 Shifter Board Kit from SparkFun. They have schematics of their board posted if you'd rather make your own. This is the RS232 shifter kit in the baggy it came in . . .

And this is the assembled image. Notice that I added some useful wire connectors that did not come with the kit so that I may easily connect it to the headers on my microcontroller board. Also notice how two wires are connected to power/ground, and the other two are for Tx and Rx (I'll explain this later in the tutorial).

TTL and USB The UART takes bytes of data and transmits the individual bits in a sequential fashion. At the destination, a second UART re-assembles the bits into complete bytes.

You really do not need to understand what TTL is, other than that TTL is the signal transmitted and received by your microcontroller UART. This TTL signal is different from what your PC serial/USB port understands, so you would need to convert the signal. You also do not really need to understand USB, other than that it's fast becoming the only method to communicate with your PC using external hardware. To use USB with your robot, you will need an adaptor that converts to USB. You can easily find converters for under $20, or you can make your own using either the FT232RL or CP2102 ICs.

Signal Adaptor Examples
Without going into the details, and without you needing to understand them, all you really need to do is just buy an adaptor. For example:

TTL -> TTL to RS232 adaptor -> PC
TTL -> TTL to EIA-232 adaptor -> PC
TTL -> TTL to EIA-232 adaptor -> EIA-232 to USB adaptor -> PC
TTL -> TTL to USB adaptor -> PC
TTL -> TTL to wireless adaptor -> wireless to USB adaptor -> PC

If you want bluetooth wireless, get a TTL to bluetooth adaptor; if you want ethernet, get a TTL to ethernet adaptor, etc. There are many combinations; just choose one based on what adaptors/requirements you have. For example, if your laptop only has USB, buy a TTL to USB adaptor as shown with my SparkFun Breakout Board for CP2103 USB:

There are other cheaper ones you can buy today; you just need to look around. On the left of the image below is my $15 USB to RS232 adaptor, and the right cable is my RS232 extension cable for those robots that like to run around:

Below is my USB to wireless adaptor that I made in 2007 (although now companies sell them wired up for you). It converts a USB type signal to a TTL type signal, and then my Easy Radio wireless transmitter converts it again to a signal easily transmitted by air to my robot:

And a close-up of the outputs. I soldered on a male header row and connected the ground, Tx, and Rx to my wireless transmitter. I will talk about Tx and Rx soon:

Even my bluetooth transceiver has the same Tx/Rx/Power/Ground wiring:

If you have a CMUcam or GPS, again, the same connections. Other Terminology . . .

Tx and Rx
As you probably guessed, Tx represents transmit and Rx represents receive. The transmit pin always transmits data, and the receive pin always receives it. Sounds easy, but it can be a bit confusing . . . For example, suppose you have a GPS device that transmits a TTL signal and you want to connect this GPS to your microcontroller UART. This is how you would do it:

Notice how Tx is connected to Rx, and Rx is connected to Tx. If you connect Tx to Tx, stuff will fry and kittens will cry. If you are the type of person to accidentally plug in your wiring backwards, you may want to add a resistor of say ~2kohm coming out of your UART to each pin. This way if you connect Tx to Tx accidentally, the resistor will absorb all the bad ju-ju (current that will otherwise fry your UART).

Tx pin -> connector wire -> resistor -> Rx pin

And remember to make your ground connection common!

Baud Rate
Baud is a measurement of transmission speed in asynchronous communication. The computer, any adaptors, and the UART must all agree on a single speed of information in bits per second. For example, your robot would pass sensor data to your laptop at 38400 bits per second, and your laptop would listen for this stream of 1s and 0s expecting a new bit every 1/38400 s = 26us (0.000026 seconds). As long as the robot outputs bits at the predetermined speed, your laptop can understand it. Remember to always configure all your devices to the same baud rate for communication to work!

Data bits, Parity, Stop Bits, Flow Control
The short answer: don't worry about it. These are basically variations of the signal, each with long explanations of why you would/wouldn't use them. Stick with the defaults, and make sure you follow the suggested settings of your adaptor. Usually you will use 8 data bits, no parity, 1 stop bit, and no flow control - but not always. Note that if you are using a PIC microcontroller you would have to declare these settings in your code (google for sample code, etc.). I will talk a little more about this in coming sections, but mostly just don't worry about it.

Bit Banging
What if by rare chance your microcontroller does not have a UART (check the datasheet), or you need a second UART but your microcontroller only has one? There is still another method, called bit banging. To sum it up, you send your signal directly to a digital input/output port and manually toggle the port to create the TTL signal. This method is fairly slow and painful, but it works . . .

Asynchronous Serial Transmission
As you should already know, baud rate defines bits sent per second. But baud only has meaning if the two communicating devices have a synchronized clock. For example, what if your microcontroller crystal has a slight deviation, so that it thinks 1 second is actually 1.1 seconds long? This would break your baud rate! One solution would be to have both devices share the same clock source, but that just adds extra wires . . . All of this is handled automatically by the UART, but if you would like to understand more, continue reading . . .

Asynchronous transmission allows data to be transmitted without the sender having to send a clock signal to the receiver. Instead, the sender and receiver must agree on timing parameters in advance and special bits are added to each word which are used to synchronize the sending and receiving units. When a word is given to the UART for Asynchronous transmissions, a bit called the "Start Bit" is added to the beginning of each word that is to be transmitted. The Start Bit is used to alert the receiver that a word of data is about to be sent, and to force the clock in the receiver into synchronization with the clock in the transmitter. These two clocks must be accurate enough to not have the frequency drift by more than 10% during the transmission of the remaining bits in the word. (This requirement was set in the days of mechanical teleprinters and is easily met by modern electronic equipment.)

When data is being transmitted, the sender does not know when the receiver has 'looked' at the value of the bit - the sender only knows when the clock says to begin transmitting the next bit of the word. When the entire data word has been sent, the transmitter may add a Parity Bit that the transmitter generates. The Parity Bit may be used by the receiver to perform simple error checking. Then at least one Stop Bit is sent by the transmitter. When the receiver has received all of the bits in the data word, it may check for the Parity Bits (both sender and receiver must agree on whether a Parity Bit is to be used), and then the receiver looks for a Stop Bit. If the Stop Bit does not appear when it is supposed to, the UART considers the entire word to be garbled and will report a Framing Error to the host processor when the data word is read. The usual cause of a Framing Error is that the sender and receiver clocks were not running at the same speed, or that the signal was interrupted.

Regardless of whether the data was received correctly or not, the UART automatically discards the Start, Parity and Stop bits. If the sender and receiver are configured identically, these bits are not passed to the host. If another word is ready for transmission, the Start Bit for the new word can be sent as soon as the Stop Bit for the previous word has been sent. In short, asynchronous data is 'self synchronizing'.

The Loop-Back Test
The loop-back test is a simple way to verify that your UART is working, as well as to locate the failure point of your UART communication setup. For example, suppose you are transmitting a signal from your microcontroller UART through a TTL to USB converter to your laptop and it isn't working. All it takes is one failure point for the entire system to not work, but how do you find it? The trick is to connect the Rx to the Tx, hence the loop-back test. For example, to verify that the UART is outputting correctly:

- connect the Rx and Tx of the UART together
- printf the letter 'A'
- have an if statement turn on a LED if 'A' is received

If it still doesn't work, you know that your code was the failure point (if not more than one failure point). Then do this again on the PC side using HyperTerminal, directly connecting Tx and Rx of your USB port. And then yet again using the TTL to USB adaptor. You get the idea . . . I'm willing to bet that if you have a problem getting it to work, it is because your baud rates aren't the same/synchronized. You may also find it useful to connect your Tx line to an oscilloscope to verify your transmitting frequency:

Top waveform: UART transmitted 0x0F
Bottom waveform: UART received 0x0F

Adding UART Functions to AVR and your $50 Robot
To add UART functionality to your $50 Robot (or any AVR based microcontroller) you need to make a few minor modifications to your code and add a small amount of extra hardware.

Full and Half Duplex
Full Duplex is defined by the ability of a UART to simultaneously send and receive data. Half Duplex is when a device must pause either transmitting or receiving to perform the other. A Half Duplex UART cannot send and receive data simultaneously. While most microcontroller UARTs are Full Duplex, most wireless transceivers are Half Duplex. This is because it is difficult to send two different signals at the same time on the same frequency without data collision. If your robot is wirelessly transmitting data, it will not be able to receive commands during that transmission, assuming it is using a Half Duplex transceiver. Please check out the step-by-step instructions on how to add UART functionality to your $50 robot >>>.

The time stamp isn't always necessary, and you can always add or remove ADC (analog to digital) inputs. Note that the get_timer1() command must be called right before, during, or directly after the sensor readings - or the time recorded will be meaningless.

Also, use commas to separate output data values as already shown. I will explain the importance of this later.

Steps to Logging With HyperTerminal
Now open up HyperTerminal: Start => Programs => Accessories => Communications => HyperTerminal. You should see this window:

Type in a desired name and push OK. The icon doesn't matter.

Now select a COM port, while ignoring the other options. Push OK.

In the COM properties, select the options you want/need depending on the microcontroller and serial communications setup you have. Push OK. Chances are if you just change the Bits per second to 115200 and leave the other options as default it should work fine. To make sure, check your C code header for a line that looks something like this:

#use rs232(baud=115200, parity=N, bits=8, xmit=PIN_C6, rcv=PIN_C7)

Now in the menu, select Transfer => Capture Text..., create a text file, and select it in the Capture Text window. Click Start. Now connect your microcontroller to your computer (by serial/usb) and then turn the microcontroller on. Next you want to tell HyperTerminal to Call. Select the image of the phone:

Finally, tell your microcontroller to start logging data, and you will see the data appear on screen. Even if you do not plan to save your data, this can be a great feedback tool when testing your robot.

When logging is completed, click the disconnect button:

Then select Transfer => Capture Text => Stop You should now have a .txt file saved with all your data. But you're not done yet!

Rename the file to a .csv, or Comma Separated Value (CSV), file format. This allows you to open the file in Excel with each value separated into columns and rows, making data processing much easier. Now you may interpret the sensor data any way you like.

ROBOT SENSOR INTERPRETATION

Robot Sensor Interpretation
Most roboticists understand fairly well how sensors work. They understand that most sensors give continuous readings over a particular range. Most usually understand the physics behind them as well, such as the speed of sound for sonar or sun interference for IR. Yet most do not understand how to interpret sensor data into a mathematical form readable by computers. Roboticists often just write case-based situations for their sensors, such as 'IF low reading, DO turn right' and 'IF high reading, DO turn left.' That is perfectly ok to use . . . unless you want fine angle control. The other problem with case-based programming is that if your sensor reading bounces between two cases, your robot will spaz out (oscillate). Most amazingly, fine angle control is actually almost just as simple. There are only 3 steps you need to follow:

- Gather Sensor Data (data logging)
- Graph Sensor Data
- Generate Line Equation

The first step is incredibly simple, just somewhat time consuming. Graphing takes just minutes. And generating the line equation is usually just a few clicks of your mouse.

Gather Sensor Data
This is fairly straightforward. Do something with your sensor, and record its output using Excel. If you have a range sensor (such as sonar or Sharp IR), record the distance of the object in front of it and the range data output. If you have a photoresistor, record the amount of light (probably arbitrarily . . . # of candles maybe?) and the sensor data from it. If you have a force sensor, apply weight to it, record the weight, and yes, the data. This is very simple and probably brain-deadeningly easy, but there are a few things you will have to watch out for.

First is non-continuity. Some sensors (such as sonar and Sharp IR) do not work properly at very close range. Stupid physics, I know.

The next is non-linearity. For example, your sensor readings may be 10, 20, and 30, but the distances might be 12cm, 50cm, and 1000cm. You will have to watch for these curves. Usually, however, they occur only near the minimum and maximum values a sensor can read.

Then there is sensor noise. Five readings in the same exact situation could give you five near yet different values. Verify the amount of sensor noise you have, as some sensors can have it fairly bad. The way to get rid of noise is to take a bunch of readings, then keep only the average. Make sure you test for noise in the actual environment your robot will be in. Obvious, but some desktop robot builders forget.

The last issue is the number of data points to record. For perfectly linear sensors you only need to record the two extremes and draw a line between them. However, since this is almost never the case, you should record more. Record more points the more non-linear your sensor is. If your sensor is non-linear only in certain cases, record extra points just in those cases of concern. Obviously, the more points you record, the more accurate your sensor representation can be. However, do you really need 10,000 points for a photoresistor? It's a balance.

Graph Sensor Data
Ok, now that you have all your data recorded in two columns in Excel, you need to graph it. But this is simple.
1) First scroll with your mouse and highlight the cells with data in the first column.
2) Then hold Ctrl and scroll the cells in the other column of data. You should now have two columns separately highlighted.
3) Next click the graph button in the top menu.
4) A window should open up. Select XY (Scatter). Then in Chart sub-type select the middle left option. It's the one with curvy lines with dots in them. Click next.
5) If you want to compare multiple robot sensors, use the Series tab. Otherwise just use the Data Range tab. Make sure 'Series in: Columns' is selected. Click next.
6) Pick the options you want, label your chart, etc. Click next and finish. A chart should now appear.
7) Still confused? Download my excel sensor graph examples.

Here are some possible graphs you may see with your sensors:

The above graph is of a linear sensor. There is a straight line, so a simple 10th-grade y = x*c + d equation can predict distance given a data value.

The above graph is non-continuous and non-linear. You will see crazy stuff happen at the beginning and end of the graph. This is because the sensor cannot read at very close or very far distances. But it is simpler than it looks. Crop off the crazy stuff, and you will get a very simple non-linear x = y^2 line. You basically need to make sure that your sensors do not encounter those situations, as a program would not be able to distinguish them from a normal situation.

Although the above graph looks simple, it can be a little tricky. You actually have two different lines with two different equations. The first half is an x = y^2 equation and the second half is a linear equation. You must write a case-based program to determine which equation to use for interpreting data. Or, if you do not care too much about accuracy, you can approximate both cases as a single linear equation.

Generate Line Equation
After determining what kind of graph you have, all you need to do is use the Excel trendline feature. Basically this will convert any line into a simple equation for you.
1) If there are no non-continuities (kinks in the graph), right click the line in the graph, and click 'Add Trendline...'. If you do have a non-continuity, separate the non-continuous lines and make two graphs. That way each can be interpreted individually. If you do not care about error, or the error will be small, one graph is fine.
2) Now select the Trend/Regression type. Just remember, although more complex equations can reduce error, they increase computation time. And microcontrollers usually can only handle linear and exponential equations. Click OK and see how well the lines fit.
3) Now click the new trendline and click 'Format Trendline.' A new window should appear.
4) Go to the Options tab and check the box that says 'Display equation on chart.' Click OK.
5) There you have it: an equation that you can use on your robot! Given any x data value, your equation will pump out the exact distance or light amount or force or whatever.

Load Cell Linearity Graph Example
This is a graph and equation I generated using a Load Cell (determines force). I had to put the sensor in a voltage amplifier to get a good measurable voltage.

Additional Info On Data Logging
There are many ways to log data, depending on the situation. There is event-based data logging, meaning that it only records data when a specific single-instant event occurs. This event could be a significant change in sensor output or the passing of a user-defined threshold. The advantage of this method is that it significantly reduces the data that needs to be stored and analyzed. The other method is selective logging, which means logging will occur over just a set period of time (usually a short one). If for example you want to analyze an event, your data logger would start logging at the beginning of the event and stop at the end. The advantage of this method is that you can get high resolution data without wasting memory.

Can I Buy a Data Logger for My PC?
Of course. They are called DAQ, or Data Acquisition, devices, and have a lot of neat built-in software and hardware to make things easier for you. But they can get costly, ranging in the $100's.

PROGRAMMING - DIFFERENTIAL DRIVE

What is a Differential Drive Robot?
Differential drive is a method of controlling a robot with only two motorized wheels. What makes this algorithm important for a robot builder is that it is also the simplest control method for a robot. The term 'differential' means that robot turning speed is determined by the speed difference between the two wheels, one on either side of your robot. For example: keep the left wheel still and rotate the right wheel forward, and the robot will turn left. If you are clever with it, or use PID control, you can get interesting curved paths just by varying the speeds of both wheels over time. Don't want to turn? As long as both wheels go at the same speed, the robot does not turn - it only goes forward or in reverse.

PROGRAMMING - PID CONTROL

PID Control
A proportional integral derivative controller (PID controller) is a common method of controlling robots. PID theory will help you design a better control equation for your robot. Shown here is the basic closed-loop (a complete cycle) control diagram:

The point of a control system is to get your robot actuators (or anything, really) to do what you want without . . . ummmm . . . going out of control. The sensor (usually an encoder on the actuator) will determine what is changing, the program you write defines what the final result should be, and the actuator actually makes the change. Another sensor could sense the environment, giving the robot a higher-level sense of where to go.

Terminology
To get you started, here are a few terms you will need to know:

error - The error is the amount by which your device isn't doing something right. For example, if your robot is going 3mph but you want it to go 2mph, the error is 3mph - 2mph = 1mph. Or suppose your robot is located at x=5 but you want it at x=7; then the error is 2. A control system cannot do anything if there is no error - think about it: if your robot is doing what you want, it wouldn't need control!

proportional (P) - The proportional term is typically the error. This is usually the distance you want the robot to travel, or perhaps a temperature you want something to be at. The robot is at position A but wants to be at B, so the P term is B - A.

derivative (D) - The derivative term is the change in error made over a set time period (t). For example, if the error was C before, is D now, and t time has passed, then the derivative term is (D-C)/t. Use the timer on your microcontroller to determine the time passed (see timer tutorial).

PROGRAMMING - TIMERS

Timers for Microcontrollers
The timer function is one of the basic features of a microcontroller. Although some compilers provide simple macros that implement delay routines, understanding the timer functionality is necessary in order to determine elapsed time and to maximize use of the timer. This example will be done using the PIC16F877 microcontroller in C. To introduce delays in an application, the CCS C macros delay_ms() and delay_us() can be used. These macros provide the ability to block the MCU until the specified delay has elapsed. But what if you instead want to determine elapsed time, for say a PID controller or a data logger? For tasks that require the ability to measure time, it is possible to write code that uses the microcontroller timers.

The Timer
Different microcontrollers have different numbers and types of timers (Timer0, Timer1, Timer2, watchdog timer, etc.). Check the data sheet of the microcontroller you are using for specific details. These timers are essentially counters that increment based on the clock cycle and the timer prescaler. An application can monitor these counters to determine how much time has elapsed.

On the PIC16F877, Timer0 and Timer2 are 8-bit counters whereas Timer1 is a 16-bit counter. Individual timer counters can be set to an arbitrary value using the CCS C macros set_timer0, set_timer1, or set_timer2. When a counter reaches its limit (255 for 8-bit and 65535 for 16-bit counters), it overflows and wraps around to 0. Interrupts can be generated when wrap-around occurs, allowing you to count these resets or initiate a timed event. Timer1 is normally used for PWM or capture and compare functions. Each timer can be configured with a different source (internal or external) and a prescaler. The prescaler determines the timer granularity (resolution). A timer with a prescaler of 1 increments its counter every 4 clock cycles - 1,000,000 times a second if using a 4 MHz clock. A timer with a prescaler of 8 increments its counter every 32 clock cycles. It is recommended to use the highest prescaler your application can tolerate.

Calculating Time Passed
The equation to determine the time passed after counting the number of ticks is:

delay (in ms) = (# ticks) * 4 * prescaler * 1000 / (clock frequency)

For example, assume that Timer1 is set up with a prescaler of 8 on an MCU clocked at 20 MHz, and that a total of 6250 ticks were counted. Then:

delay (in ms) = (# ticks) * 4 * 8 * 1000 / (20000000)

delay (in ms) = (6250) / 625 = 10 ms

Code in C
First you must initialize the timer:

long delay;

setup_timer_0(T0_INTERNAL | T0_DIV_BY_8); //Set Timer0 prescaler to 8

Now put this code in your main loop:

set_timer0(0); //reset timer to zero where needed

printf("I eat bugs for breakfast."); //do something that takes time

//calculate elapsed time in ms, use it for something like PID
delay = get_timer0() / 625;

//or print out data and put a time stamp on it for data logging
printf("%u, %u, %lu\r\n", analog(PIN_A0), analog(PIN_A1), get_timer0());

Note that it is very important that you do not call the get_timer0() command until exactly when it is needed. In the above example I call the timer in my printf() statement - exactly when I need it.

Timer Overflow
You should also be careful that the timer never overflows in your loop, or the elapsed time will be wrong. If you expect it to overflow, you can use a timer overflow interrupt that counts the number of overflows - each overflow being a known amount of time depending on your prescaler. In CCS C, interrupt service routines are functions that are preceded with #int_xxx. For instance, a Timer1 interrupt service routine would be declared as follows:

#int_timer1

//timer1 has overflowed
void timer1_interrupt()
{
//do something quickly here
//maybe count the interrupt
//or perform some task
//good practice: don't stay in the interrupt too long
}

To enable interrupts, the global interrupt bit must be set and then the specific interrupt bits must be set. For instance, to enable Timer0 interrupts, one would program the following lines right after the timer is initialized:

enable_interrupts(GLOBAL);
enable_interrupts(INT_TIMER0);

If you want to stop the application from processing interrupts, you can disable them using the disable_interrupts(INT_TIMER0) CCS C macro. You can either disable a specific interrupt or all interrupts using the GLOBAL define.

Timer Delay
Here is another code sample that shows how to create a delay of 50 ms before resuming execution (an alternative to delay_ms). Note that 50 ms at these settings works out to 50 * 20000000 / (4 * 8 * 1000) = 31250 ticks, which is too large for an 8-bit counter, so 16-bit Timer1 is used:

setup_timer_1(T1_INTERNAL | T1_DIV_BY_8); //Set Timer1 prescaler to 8

set_timer1(0); //reset timer
while (get_timer1() < 31250); // wait for 50ms

integral (I) - The integral term is the accumulated error made over a set period of time (t). If your robot is continually off by a certain amount on average, the I term will catch it. Let's say the error was A for a time interval t1, then B for t2, then C for t3. The integral term is then A*t1 + B*t2 + C*t3 (each error multiplied by the time interval over which it occurred).

tweak constant (gain) - Each term (P, I, D) will need to be tweaked in your code. There are many things about a robot that are very difficult to model mathematically (ground friction, motor inductance, center of mass, the duct tape holding your robot together, etc.). So oftentimes it is better to just build the robot, implement a control equation, then tweak the equation until it works properly. A tweak constant is just a guessed number that you multiply each term by. For example, Kd is the derivative constant. Ideally you want each tweak constant high enough that your settling time is minimal but low enough that there is no overshoot. The full control equation is:

output = P*Kp + I*Ki + D*Kd

What you see in this image is typically what will happen with your PID robot. It will start with some error, and the actuator output will change until the error goes away (near the final value). The time it takes for this to happen is called the settling time. Shorter settling times are almost always better. Oftentimes you might not design the system properly and the system will change so fast that it overshoots (bad!), causing some oscillation until the system settles. And there will usually be some error band. The error band is dependent on how fine a control your design is capable of - you will have to program your robot to ignore error within the error band or it will probably oscillate. There will always be an error band, no matter how advanced the system. For example, to ignore an acceptable error band:

if (error > error_band || error < -error_band)
    //run the PID equation, otherwise do nothing

$50 ROBOT UART TUTORIAL

Adding UART Functions to AVR and your $50 Robot
To add UART functionality to your $50 Robot (or any AVR based microcontroller) you need to make a few minor modifications to your code and add a small amount of extra hardware. Now, of course, I could just give you the code to use right away and skip this tutorial, or I can explain how and why these changes are made so you can 'catch your own fish' without me giving it to you in the future . . .

Now about the speed increase . . . We will be using the maximum frequency that your microcontroller can handle without adding an external crystal. How do you know what that frequency is? From the datasheet of your ATmega8/ATmega168, in the 'System Clock and Clock Options -> Calibrated Internal RC Oscillator' section, we find: "By default, the Internal RC Oscillator provides an approximate 8.0 MHz clock." Since we do not have an external crystal, we will configure the entire system (all individual components and code) to 8MHz. If you want a different frequency, I will also show you how to change it to your frequency of choice.

Open up your makefile, and add in rprintf.c and uart.c if it isn't already there:

# List C source files here. (C dependencies are automatically generated.)
SRC = $(TARGET).c a2d.c buffer.c rprintf.c uart.c

These are AVRlib files needed to do the hard UART and printf commands for you. If you are using the $50 Robot source code, then you already have AVRlib installed and ready to use (so don't worry about it). Otherwise, read the instructions on installing AVRlib.

Also, look for this line towards the top:

F_CPU = 3686400

and replace it with your desired frequency. In this example we will use:

F_CPU = 8000000

Open up SoR_Utils.h and add in two AVRlib files, uart.h and rprintf.h:

//AVRlib includes
#include "global.h"   // global settings
#include "buffer.h"   // buffer function library
#include "uart.h"     // uart function library
#include "rprintf.h"  // printf library
//#include "timerx8.h" // timer library (timing, PWM, etc)
#include "a2d.h"      // A/D converter function library

Now your compiler knows to use these AVRlib files. If you aren't using SoR_Utils.h, just add these lines at the top of your code where you declare includes. I recommend not using the timer library because its default interrupt settings will cause your servos and UART to spaz out . . .

Open up global.h and set the CPU to 8MHz:

#define F_CPU 8000000 // 8MHz processor

Now power up your microcontroller and connect it to your AVR programmer as if you were about to program it:

Now BE VERY CAREFUL IN THIS STEP. If you set the wrong fuse, there is a possibility you could permanently disable your microcontroller from being further programmed. BAD JU-JU!!! Click the Fuses tab (see below image), and uncheck 'Divide clock by 8'. Having this setting checked makes your clock 8 times slower. A slower clock makes your microcontroller more power efficient, and unless your algorithm requires a lot of math, a fast processor isn't needed. But in this case we want a fast UART speed, so we need the faster clock. Now check 'Int. RC Osc. 8MHz'. By default this should already be checked, but I'm noting it just in case. If you were using a crystal or a different frequency, just scroll down in the Fuses tab for other options, as shown here:

Then push Program and you should get a message that looks something like this: Entering programming mode.. OK! Writing fuses .. 0xF9, 0xDF, 0xE2 .. OK!

Reading fuses .. 0xF9, 0xDF, 0xE2 .. OK! Fuse bits verification.. OK Leaving programming mode.. OK!

Please note that the $50 Robot was designed for the lower clock speed. What this means is that all your functions that involve time will now run 8 times faster. In terms of processing this is great! But all your delay and servo commands must be multiplied by 8 for them to work again.

For example,

delay_cycles(500);
servo_left(45);

must now be

delay_cycles(4000); //500*8
servo_left(360); //45*8

Or if you are really, really lazy and don't care about timing error, go into SoR_Utils.h and change this:

void delay_cycles(unsigned long int cycles)
{
while(cycles > 0)
cycles--;
}

to this: void delay_cycles(unsigned long int cycles) { cycles=cycles*8;//makes delays take 8x longer while(cycles > 0) cycles--; }

Now we need to select a baud rate, meaning the speed at which data can be transferred. Typically you'd want to have a baud of around 115.2k. This is a very common speed, and is very fast for what most people need.

But can your microcontroller handle this speed? To find out, check the datasheet. For my ATmega168, I went to the 'USART0 -> Examples of Baud Rate Setting' section and found a chart that looks something like this:

I immediately found the column marked 8.0000 MHz (the internal clock of your microcontroller), which I circled in green for you. Then I went to the row marked 115.2k, marked in blue. This means your UART can do this baud rate, but notice that it says the error is 8.5%. That means there is a good chance of syncing problems with your microcontroller. The error arises because Fosc usually isn't an exact multiple of a standard UART frequency, so dividing Fosc by an integer won't land exactly on a standard baud rate. Rather than bothering with this possible problem, I decided to go down a few rates to 57.6k (circled in red). 3.5% could still be a bit high, so if you have problems with it, go down again to 38.4k with a .2% error (mostly negligible). So what error rate is considered optimal or best? It depends entirely on your hardware, so I don't have an answer for you. If you want to learn more, feel free to read about how asynchronous serial transmission is 'self synchronizing'.

The other option is to set the U2X register bit to 1 (default is 0). This doubles the effective UART speed, which can sometimes get you closer to standard baud rates - at 115.2k the error drops to only -3.5%. If you take a look at the formulas for calculating baud rate from the value of the UBRR (baud rate) register, the error rates should make sense:

U2X = 0: baud rate = Fosc / 16 / (UBRR + 1)
U2X = 1: baud rate = Fosc / 8 / (UBRR + 1)

Be aware that even if your microcontroller UART can operate at your desired baud, the adaptors you use might not. Check the datasheets before deciding on a baud rate!!! It turns out my RS232 Shifter is rated for 38400 in the datasheet, so that's the baud I ended up using for this tutorial.

Now after deciding on baud, in the file SoR_Utils.h (or in your main if you want) add the following code to your initialization (using your chosen baud rate):

uartInit();                // initialize UART
uartSetBaudRate(38400);    // set UART baud rate
rprintfInit(uartSendByte); // initialize rprintf system

Relax, the hard part is done!

Now we need to add a line in our main() code that uses the UART. Add this somewhere in your main() code:

//read analog to display a sensor value
rangefinder = a2dConvert8bit(5);

//output message to serial (use HyperTerminal)
rprintf("Hello, World! My Analog: %d\r\n", rangefinder);

You don't actually need to output a sensor value, but I figured I'd show you how now so you can try it out on your robot sensors.

The last programming step.

yaaaaayyyy! =) Save and then compile your code like you normally would:

Software is done!

Now for the hardware.

I will assume you already read about adaptors for the UART and know that you have just four wires to connect to your robot:

5V
Ground
Tx
Rx

Now plug the power connections (regulated 5V and ground) into an unused header on your circuit board just like you would a sensor (the sensors use regulated 5V). Then plug the Tx of your adaptor into the Rx header of your microcontroller, and then the Rx of your adaptor into the Tx header of your microcontroller. Don't know which pin is which? Check the datasheet for the pinout and look for RXD and TXD. Or look at the $50 Robot schematic.

It just so happens that you probably have servos on pins 2 and 3 - oh no! Don't worry, just move your servos onto different pins, such as 4 (PD2), 5 (PD3), and/or 6 (PD4). To move the servos to a different port in code, find this in SoR_Utils.h:

void servo_left(signed long int speed)
	{
	PORT_ON(PORTD, 0);
	delay_cycles(speed);
	PORT_OFF(PORTD, 0); //keep off
	}

and change '0' to the new port pin. If you wanted pin PD2, then use '2':

PORT_ON(PORTD, 2);
delay_cycles(speed);
PORT_OFF(PORTD, 2);

Don't forget to make that change for your other servos, too. (and of course, save and compile again)

This is what my hardware plugged in looks like (click to enlarge):

Notice how I labeled my wires and pins so I didn't get them confused (with the result of maybe frying something). I used wire connectors to connect it all. At the top right is my RS232 Shifter and at the bottom right is my RS232 to USB adaptor.

Let's do a quick test. Chances are your adaptor will have a transmit/receive LED, meaning the LED turns on when you are transmitting or receiving.

Turn on your robot and run it. Does the LED turn on? If not, you might be doing something wrong . . . My adaptor has two LEDs, one for each line, and so my Rx LED flashes when the robot transmits.

Now you need to set up your computer to receive the serial data and to verify that it's working.

If you are using an adaptor, make sure it is also configured for the baud rate you plan to use. Typically the instructions for it should tell you how, but not always. So in this step I'll show you one method to do this. Click: Start->Settings->Control Panel->System A new window will come up called 'System Properties'. Open the Hardware tab and click device manager. You should see this:

Go to Ports, select the one you are using, and right click it. Select Properties. A new window should come up, and select the Port Settings tab:

Now configure the settings as you like. I'd recommend using the settings I did, but with your desired baud rate.

This is the last step!

To view the output data, use the HyperTerminal tutorial to set it up for your desired baud rate and com port. Make sure you close out the AVR programming window so that you can free up the com port for HyperTerminal! Two programs cannot use the same com port at the same time (a common mistake I always make). Now if you did everything right, you should start seeing text show up in HyperTerminal:

If it isn't working, consider doing a loop-back test for your UART debugging needs. You're finished! Good job!

PROGRAMMING - VARIABLES

C Variables

Controlling the variables in your program is very important. Unlike when programming computers (such as in C++ or Java), where you can call floats and long ints left and right, doing so on a microcontroller would cause serious problems. With microcontrollers you always need to be careful about limited memory, limited processing speeds, overflows, signs, and rounding.

C Variable Reference Chart

definition           bits     number span allowed
short int            1-bit    0, 1 (False, True)
char                 8-bit    a-z, A-Z, 0-9
int                  8-bit    0 .. 255
unsigned int         8-bit    0 .. 255
signed int           8-bit    -128 .. 127
long int             16-bit   0 .. 65535
unsigned long int    16-bit   0 .. 65535
signed long int      16-bit   -32768 .. 32767
float                32-bit   1.2 x 10^(-38) .. 3.4 x 10^(38)

Limited Memory

Obviously the little microcontroller that's the size of a quarter on your robot isn't going to have the practically infinite memory of your PC. Although most microcontrollers today can be programmed without too much worry about memory limits, there are specific instances where it matters. If your robot does mapping, for example, efficient use of memory is important. Always use the variable type that requires the least amount of memory yet still stores the information you need. For example, if a variable is only expected to store a number from 100 to 200, why use a long int when just an int would work? Also, the fewer bits that need to be processed, the faster the processing can occur.

Limited Processing Speeds

Your microcontroller is not a 2.5 GHz processor. Don't treat it like one. Chances are it's a 4 MHz to 20 MHz processor. This means that if you write a mathematically complex equation, your microcontroller could take whole seconds to process it. By that time your robot might have collided into a cute squirrel without even knowing!!! With robots you generally want to process your sensor data about 3 to 8 times per second, depending on the speed and environment of your robot. This means you should avoid all use of 16-bit and 32-bit variables at all costs. You should also avoid all use of exponents and trigonometry - both because they are software implemented and require heavy processing. What if your robot requires a complex equation and there is no way around it? Take shortcuts. Use lookup tables for often-made calculations, such as trigonometry. To avoid floats, instead of 13/1.8 use 130/18, multiplying both numbers by 10 before dividing. Or round off your decimal places - speed is almost always more important than accuracy with robots. Be very careful with the order of operations in your equation, as certain orders retain higher accuracy than others.
Don't even think about derivatives or integrals.

Overflows

An overflow is when the value of a variable exceeds its allowed number span. For example, an int on a microcontroller cannot exceed 255. If it does, it will loop back around:

unsigned int variable = 255;
variable = variable+1; //variable will now equal 0, not 256!!!

To avoid this overflow, you would have to change your variable type to something else, such as a long int. You might also be interested in reading about timers, as accounting for timer overflows is often important when using them.

Signs

Remember that signed variables can be either negative or positive, but unsigned variables can only be positive. In reality you do not always need a negative number. A positive number can often suffice because you can always arbitrarily define the semantics of a variable. For example, numbers between 0 and 128 can represent negatives, and numbers between 129 and 255 can represent positives. But there will often be times when you would prefer to use a negative number for intuitive reasons. For example, when I program a robot, I use negative numbers to represent a motor going in reverse, and positive for a motor going forward. The main reason I would avoid negative numbers is simply that a signed int overflows at 127 (or -128) while an unsigned int overflows at 255 (or 0).

Extras

For further reading on programming variables for robots, have a look at the fuzzy logic tutorial.

Examples of Variables in C Code

Defining variables:

#define ANGLE_MAX 255 //global constants must be defined first in C
int answer;
signed long int answer2;
int variable = 3;
signed long int constant = -538;

Variable math examples (assume answer is reset after each example):

answer = variable * 2;        //answer = 6
answer = variable / 2;        //answer = 1 (because of rounding down)
answer = variable + constant; //answer = 233 (because of overflows and signs)
answer2 = (signed long int)variable + constant; //answer2 = -535
answer = variable - 4;        //answer = 255 (because of overflow)
answer = (variable + 1.2)/3;  //answer = 1 (because of rounding)
answer = variable/3 + 1.2/3;  //answer = 1 (because of rounding and order of operations)
answer = answer + variable;   //answer = RANDOM GARBAGE (because answer was never initialized)

WAVEFRONT ALGORITHM

Robot Mapping and Navigation

The theory behind robot maze navigation is immense - so much that it would take several books just to cover the basics! So to keep it simple, this tutorial will teach you one of the most basic but still powerful methods of intelligent robot navigation. For reasons I will explain later, this navigation method is called the wavefront algorithm. There are four main steps to running this algorithm.

Step 1: Create a Discretized Map

Create an X-Y grid matrix to mark empty space, robot/goal locations, and obstacles. For example, this is a pic of my kitchen. Normally there isn't a cereal box on the floor like that, so I put it there as an example of an obstacle:

Using data from the robot sensor scan, I then lay a basic grid over it:

This is what it looks like with all the clutter removed. I then declare the borders (red) impassable, as well as enclose any areas with objects as also impassable (also blocked off by red). Objects, whether big or small, will be treated as the entire size of one grid unit. You may either hardcode the borders and objects into your code, or your robot can add the objects and borders as it detects them with a sensor. What you get is an X-Y grid matrix, with R representing where the robot is located:

But of course this is not what it really looks like in robot memory. Instead, it looks much more like this matrix below. All I did was flatten out the map, and stored it as a matrix in my code. Use 0 to represent impassable and 1 to represent the robot (marked as R on the image).

Note: In my source code I used the below values. My examples here are just simplifications so that you can more easily understand the wavefront algorithm.

// WaveFront Variables
int nothing = 0;
int wall    = 255;
int goal    = 1;
int robot   = 254;

An example of a map matrix in C looks something like this:

//X is horizontal, Y is vertical
int map[6][6]=
	{{0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0},
	 {0,0,0,0,0,0}};

Step 2: Add in Goal and Robot Locations

Next your robot must choose its goal location, G (usually preprogrammed for whatever reason). The goal could be your refrigerator, your room, etc. To simplify things, although not optimal, we are assuming this robot can only rotate 90 degrees. In my source code I call this function:

new_state = propagate_wavefront(robot_x, robot_y, goal_x, goal_y);

robot_x and robot_y mark the robot's coordinates, and goal_x and goal_y is of course the goal location.

Step 3: Fill in Wavefront

This is where it gets a bit hard, so bear with me. In a nutshell the algorithm checks node by node, starting at the top left, what each node borders. Ignore walls, look at the nodes around your target node, then count up. For example, if a bordering node has the number 5, and it's the lowest bordering node, make the target node a 6. Keep scanning the matrix until the robot node borders a number. After this pseudocode I'll show you graphic examples.

Pseudocode:

check node A at [0][0]
now look north, south, east, and west of this node (boundary nodes)
if (boundary node is a wall)
    ignore this node, go to next node B
else if (boundary node is robot location && has a number in it)
    path found!
    find the boundary node with the smallest number
    return that direction to robot controller
    robot moves to that new node
else if (boundary node has a goal)
    mark node A with the number 3
else if (boundary node is marked with a number)
    find the boundary node with the smallest number
    mark node A with (smallest number + 1)
if (no path found)
    go to next node B at [0][1] (sort through entire matrix in order)
if (no path still found after full scan)
    go to node A at [0][0] (start over, but do not clear map)
    (sort through entire matrix in order)
    repeat until path found

if (no path still found && matrix is full)
    this means there is no solution
    clear entire matrix of obstacles and start over
    (this accounts for moving objects - adaptivity!)

Here is a graphic example. The goal and robot locations are already marked on the map. Going through the matrix one node at a time, I've already scanned through the first 2 columns (X). On column 3, I scanned about halfway down until I reached the 5th node. Checking its bordering nodes, I find it is next to the goal, so I mark this node with a 3 as shown.

Continuing on the 3rd column, I keep going down node by node. I check each node's bordering nodes and add +1 to the target node. As you can see, the rest of the column gets filled in. Notice the 'wave' action yet? This is why it's called a wavefront. It has also been called the brushfire algorithm, because it spreads like a brushfire . . .

Now go to the 4th column and start checking each node. When you get to the 4th row, your target node borders the goal. Mark it with a 3. Then keep scanning down. Ignore the goal, and ignore walls. On the 9th row, you will notice the target node borders the number 7 on the left. It's the lowest value bordering node, so 7 + 1 = 8. Mark this target node as 8.

Then going to the 10th row you notice the target node is the robot location. If the robot location borders a filled in number (in this case, the number 8) then the algorithm is finished. A full path has been found!

Step 4: Direct Robot to Count Down

Now that a solution exists, tell your robot to drive to the square with the current number minus one. In this case, the current number was 9, so the robot must drive to square 8. There are multiple squares labeled 8, so the robot can go to either one. In this case, the 8 square on the left is more optimal because it results in fewer rotations of the robot. But for simplicity, it really doesn't matter. Then have your robot go to box 7, then box 6, then 5, and so on. Your robot will drive straight to the goal as so.

Adaptive Mapping

For adaptive mapping, your robot does not always know where all obstacles are located. In this case, it may find a 'solution' that doesn't actually work. Perhaps it didn't see all the obstacles, or perhaps something in the environment moved. So what you do is:

1) have your robot scan after each move it makes
2) update the map with new or removed obstacles
3) re-run the wavefront algorithm
4) react to the new updated solution

If no solution is found at all, delete obstacles from your map until a solution is found. In my source code, the robot deletes all obstacles when no solution is found - not always desirable, but it works.

Results

To test my algorithm, I put it on my modded iRobot Create platform. It uses a scanning Sharp IR as its only sensor.

MODDING THE iROBOT CREATE

The iRobot Create The iRobot Create is a commercial robot hacked up from a previous robot vacuum cleaner they produced. They have been trying to encourage the hobbyist and educational community to start developing these things and through one of their schemes I landed a free Create to toy with. A video of the robot running around my house using the default programming: My end goal of this project was to implement real-time SLAM (simultaneous localization and mapping) onto the robot. But I made this plan before I was aware of the capabilities (i.e. limitations) of the iRobot. For a start, it only uses the ATmega168 microcontroller. This is incredibly slow with huge memory limitations!

Instead I decided to implement real-time adaptive mapping, and just have it update the map with new scans. It's not matching to an old map as in SLAM, but it still updates to remove the accumulated navigation errors.

The Create Command Structure

To communicate with the Create, you must send serial commands to its magical green box circuit board thingy inside the Create. Upon breaking open this made-in-China box, I still couldn't make out most of the electronics . . .

So to send these serial commands, we must program the Command Module. This green box thing has an ATmega168 microcontroller inside, a common and easy-to-use microcontroller. This is the same microcontroller I'm using on the $50 Robot, so all the source code is cross-platform.

Now to command the Create to do stuff, all you do is occasionally send commands to it from your ATmega168. You can also ask it for sensor data using the delayAndUpdateSensors(update_delay); command in my source code.

The Create Encoders

The Create does have high-resolution encoders, but I'm not sure what the resolution is because it's not in any of the manuals.

Yet despite the high-resolution encoders, they are still inaccurate. I'm not sure if it's dust or what, but the counts were constantly skipping. I wouldn't rely on the encoders at all. The sample software that comes with the iRobot Create does not effectively use them. It only does a point-and-shoot kind of algorithm for angle rotations, and a 'good enough' algorithm for distance measurement. The source code doesn't even use the 156 (wait for distance) or 157 (wait for angle) commands! Of course, for their uses, the source didn't need these commands. But I need accurate encoder measurements for encoder-based navigation, so I had to write up my own stuff . . . First I tried implementing the 156 and 157 commands, but for some reason they occasionally didn't work. Strange things happened, even resulting in program crashes. And even when they did work, the encoder measurements were still error-ing (it's a word because I made it up). The best I could do is with my own written methods. Use these functions for somewhat accurate Create movement:

//rotate clockwise (CW) and counter-clockwise (CCW)
rotate_CCW(180,250); //(angle, rotation_speed)
rotate_CW(90,150);   //(angle, rotation_speed)

//move straight (use negative velocity to go in reverse)
straight(cell_size,100); //(distance, velocity)

stop(); //don't make me explain this function!!!

Although it doesn't correct for overshoot or undershoot, it at least keeps track of the error to account for it in the next motion. This still results in error, but not as much as before.

The Stampy Edge Detection Algorithm

To start off my robot's adventures, I needed to implement a basic, highly reactive algorithm to test out my proto-code and sensor setup. I decided to implement my Stampy edge detection algorithm. I originally developed this algorithm for my Stampy sumo robot so that it could quickly locate the enemy robot. But I also found many other neat tricks the algorithm can do, such as with my outdoor line following robot that uses a single photoresistor! The concept is simple. A scanning Sharp IR rangefinder does only two things:

If no object is seen, the scanner turns right.
If an object is seen, the scanner turns left.

As shown, the scanner goes left if it sees a googly-eyed robot. If it doesn't detect it, the scanner turns right until it does. As a result, the scanner converges on the left edge of the googly-eyed robot:

Now the robot always keeps track of the angle of the scanner. So if the scanner is pointing left, the robot turns left. If the scanner is pointing right, the robot turns right. And of course, if the scanner is pointing straight ahead, the robot just drives straight ahead. For more detailed info, visit my write-up on my Stampy sumo robot. Building the Scanner The first step to the hardware is to make a mount for the scanner. Below are the parts you need, followed by a video showing how everything is assembled.

Wiring up the Hardware Now I needed to get some wiring to attach the servo and Sharp IR to the Create robot serial port. Going through my box of scrap wire, I found this.

To make it, I just took some serial cable and put some headers on it with heatshrink. You can use whatever you want.

Then I plugged it into the center serial port as so:

How did I know which pins of the serial port to use? Well, I looked up the pin-out in the manual:

To distribute wiring (with a power bus), you need four pins: power and ground, an analog pin for the sensor, and a digital output pin for the servo. What does that mean? Connect all the grounds (black) to each other, and connect all the power lines (red) to each other. Each signal line (yellow) gets its own serial cable wire. To make a power bus, I got a piece of breadboard and male headers as such:

Then I used a Dremel to cut off a small piece of it, and soldered on the headers with the proper power distributing wiring:

Then I plugged everything into the power bus as so. Refer to the pin-out in the previous step to see where everything plugs in.

The last step is to attach the servo. You need the Sharp IR sensor centrally located, and the only place I could find available was that large empty space in the center (it was a no-brainer). I didn't want to drill holes or make a special mount (too much unnecessary effort), so I decided to use extra-strength double-sided sticky tape (see below image). My only concern about this tape was that I may have difficulty removing the servo in the future . . . (it's not a ghetto mount, this stuff really holds).

To attach the servo on, I cut a piece off and stuck it to the bottom of my servo:

And then I stuck the other side of the tape onto the robot as so:

Programming the iRobot Create with Mod

There are many ways to program your Create. iRobot recommends using WinAVR (22.8mb), as do I. Install that program. But I prefer to program using the IDE called AVR Studio, version 4.13, build 528 (73.8mb) - an optional install. I won't go into detail on programming the Create because the manual tells you how. But if you are still curious how I did it, here is my write-up on how to program an AVR. Unfortunately I was never able to get AVR Studio to communicate with my Create . . . so instead I used AVRDUDE (which comes with WinAVR). To do this, I just opened up a command window and did stuff like this (click to enlarge):

avrdude -p atmega168 -P com9 -c stk500 -U flash:w:iRobot.hex

Again, it's all in the Create manuals. To help you get started, here is my source code: iRobot Create Sharp IR Scanner source code (August 20th, 2007). After uploading the program, just turn on your robot, push the black button, and off to attacking cute kittens it goes! Enjoy the video: Yes, it is programmed to chase a can of beer . . .

Robot Navigation

The next step to programming more intelligence into your robot is for it to have memory, and then to make plans with that memory. In this case, it will store maps of places the robot can 'see'. Then by using these maps, the robot can navigate around obstacles intelligently to reach goals. The robot can also update these maps so that it accounts for people moving around in the area. The method I used for this is the wavefront algorithm, using a Sharp IR scanner as my sensor. The scanner does a high-resolution scan so that it can find even the smallest of objects. If anything is detected in a block, even something as thin as a chair leg (the enemy of robots), it will consider the entire block impassable. Why would I do this? Because the memory on a microcontroller is limited and cannot store massive amounts of data. In fact, it was incapable of storing maps greater than 20x20!!! Plus, the increase in robot movement efficiency does not compare to the much larger increase in computational inefficiency.

A quick example of what the map would look like:

I decided that the 'world' should be discretized into square units exactly the same size as the robot. This way each movement the robot takes will be one robot unit length. Why did I do this? Computational simplicity. Higher map resolution requires more processing power. But for a home scenario high accuracy isn't required, so no need to get complicated. As you can see, each terrain situation requires a different optimal discretization . . . Remember to check out my wavefront algorithm tutorial if you want to learn more.

Results

Enjoy! Notice that it's an unedited video. And what you have really been waiting for, the WaveFront Source Code: iRobot Create WaveFront Source Code (September 9th 2007). Also, videos showing what's inside a Roomba vacuum, just to see what you can hack out of it.

Enjoy! Notice that it's an unedited video: Yes, I do realize I have a lot of cereal boxes . . . I actually have more . . . I like cereal =)

Recursive WaveFront

There is another way to do the wavefront algorithm, using recursive functions. I've been told this method is inefficient, especially on very large maps. It doesn't matter much for robots anyway, because a small microcontroller doesn't have the stack space for deep recursion. This is an animation of the recursive wavefront process:

I won't go into detail on this, but it's obviously a 'wavefront'!

Wavefront Simulation

It can be quite time consuming to test out robot navigation algorithms on the actual robot. It takes forever to tweak the program, compile, upload to the robot, set up the robot, turn it on, watch it run, then figure out why it failed . . . the list goes on. Instead, it is much easier to do this with simulation. You write the program, compile, then run it locally. You get an instant output of results to view. The disadvantage to simulation is that it's really hard to simulate the environment or get the robot physics perfect, but for most applications simulation is best for working out all the big bugs in the algorithm. This is a simulation I did showing a robot doing a wavefront, moving to the next location, then doing another wavefront update. For a robot (R) moving through terrain with moving objects (W), the robot must recalculate the wavefront after each move towards the goal (G). I didn't implement the adaptive mapping in simulation, just the wavefront and robot movement.

If you want to see the entire simulation, check out the simulation results.txt file. Below is a cleaned-up excerpt of the trace (W = wall, R = robot, G = goal):

Starting Wavefront

Old Map:
R W 0 0 0 W
0 W 0 0 0 0
0 W 0 0 W 0
0 W 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

Adding Goal:
R W G 0 0 W
0 W 0 0 0 0
0 W 0 0 W 0
0 W 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

Sweep #: 1
R W G 2 3 W
0 W 2 3 4 5
0 W 3 4 W 6
0 W 4 5 6 7
0 0 5 6 7 8
0 0 6 7 8 9

(sweeps 2 through 5 work the numbers back up the left-hand column, around the wall)

Finished Wavefront:
R  W G 2 3 W
10 W 2 3 4 5
9  W 3 4 W 6
8  W 4 5 6 7
7  6 5 6 7 8
8  7 6 7 8 9

The robot now borders a numbered cell, so it steps to its only numbered neighbor, the 10. The map is then unpropagated back to just walls and empty space, the goal is re-added, and the whole process repeats from the new location:

Unpropagation Complete:
0 W G 0 0 W
R W 0 0 0 0
0 W 0 0 W 0
0 W 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

Each cycle needs one fewer sweep as the robot counts down (10, 9, 8, . . .) along the left column and around the bottom of the wall. Several moves later the robot has rounded the wall and is counting up column 3 toward the goal:

Finished Wavefront:
0 W G 2 3 W
0 W 2 3 4 5
0 W 3 4 W 6
0 W R 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0

From here the robot steps through the 3 and 2 cells until it borders G, and the run is finished. (See simulation results.txt for the full, unabridged trace.)

0 0 0 0

W W 0 0

Old 0 W 0 W 0 W 0 W 0 0 0 0

R 0 0 0

0 0 0 0

W 0 0 0

0 0 0 0

Map: G 2 3 2 3 4 R 0 W 0 0 0 0 0 0 0 0 0

W 5 0 0 0 0

Unpropagation Complete: 0 W G 0 0 W 0 W R 0 0 0 0 W 0 0 W 0 0 W 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Adding Goal: 0 W G 0 0 W 0 W R 0 0 0 0 W 0 0 W 0 0 W 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Finished Wavefront: 0 W G 2 3 W 0 W R 0 0 0 0 W 0 0 W 0 0 W 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Press any key to continue . . .

You can download a copy of my wavefront simulation software and source. I compiled the software using Bloodshed Dev-C++, but any other C compiler should work as well. You can also find wavefront code in BASIC and wavefront in Python posted on the forum.
