Autonomous robots act on their own: humans program them to respond to outside stimuli. The very simple bump-and-go robot is a good illustration of how this works.
This sort of robot has a bumper sensor to detect obstacles. When you turn the robot on, it zips along in a straight line. When it hits an obstacle, the impact triggers its bumper sensor, and the robot's programming tells it to back up, turn to the right and move forward again. In this way, the robot changes direction any time it encounters an obstacle.
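The bump-and-go behavior described above can be sketched as a tiny decision function. This is a minimal illustration, not any particular robot's firmware; the action names are hypothetical.

```python
def bump_and_go_step(bumper_pressed):
    """Decide one control-loop tick of a bump-and-go robot.

    bumper_pressed: True if the bumper sensor was just triggered.
    Returns the list of actions to perform, in order.
    """
    if bumper_pressed:
        # Impact detected: back up, turn right, then resume driving.
        return ["reverse", "turn_right", "forward"]
    # No obstacle: keep zipping along in a straight line.
    return ["forward"]
```

Run in a loop, this single rule is enough to make the robot wander a room, changing direction at every collision.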
Some autonomous robots can only work in a familiar, constrained environment. Lawn-mowing robots, for example, depend on buried border markers to define the limits of their yard. An office-cleaning robot might need a map of the building in order to maneuver from point to point. Amazon's warehouse robots use colored magnetic tape on the warehouse floor to help them navigate. Among other jobs, the online retailer uses bots to deliver items to humans, keeping its employees focused on packaging orders rather than searching warehouse shelves.
Mobile robots often use infrared or ultrasound sensors to detect obstacles. These sensors work the same way as animal echolocation: The robot sends out a sound signal or a beam of infrared light and detects the signal's reflection. The robot calculates the distance to an obstacle from how long the signal takes to bounce back. More sophisticated robots may be equipped with Light Detection and Ranging (lidar) equipment, which uses light rather than sound to help the robot determine its position within its environment.
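The distance math behind echolocation is simple: the signal travels out to the obstacle and back, so the one-way distance is the signal's speed times half the round-trip time. A short sketch, assuming sound in room-temperature air:

```python
SPEED_OF_SOUND = 343.0  # meters per second, in air at about 20 degrees C

def echo_distance(round_trip_seconds, signal_speed=SPEED_OF_SOUND):
    """Distance to an obstacle from a signal's round-trip time.

    The signal covers the gap twice (out and back), so halve the
    total path length.
    """
    return signal_speed * round_trip_seconds / 2.0
```

An echo that returns after 10 milliseconds, for example, puts the obstacle about 1.7 meters away.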
Even off-the-shelf robotic vacuums use several methods to find their way around your living room. In addition to bump sensors, they have cliff sensors (is it about to fall?), wall sensors (what's ahead of it?) and optical encoders (how far has it gone?). Building a map of the room from these sensors while simultaneously tracking the robot's own position within it is known as simultaneous localization and mapping (SLAM).
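The optical encoder mentioned above answers "how far has it gone?" by counting fractions of a wheel turn. A minimal sketch of that conversion, with made-up wheel parameters for illustration:

```python
import math

def encoder_distance(ticks, ticks_per_revolution, wheel_diameter_m):
    """Distance traveled, from wheel-encoder tick counts.

    Each full revolution of the wheel moves the robot one wheel
    circumference (pi times the diameter) along the floor.
    """
    revolutions = ticks / ticks_per_revolution
    return revolutions * math.pi * wheel_diameter_m
```

Dead reckoning like this drifts over time, which is why SLAM fuses it with the other sensors instead of trusting the encoders alone.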
Some robots use stereo vision to see the world around them. Two cameras give these robots depth perception, and image-recognition software gives them the ability to locate and classify various objects. Robots might also use microphones and smell sensors to analyze the world around them. Boston Dynamics' Spot dog-like robot is equipped with a 360-degree panoramic camera, but the company also offers pan-tilt-zoom and infrared radiometric cameras. These cameras enabled the U.S. Marines to test the robot's ability to look around corners to find enemies before venturing out into the open.
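The depth perception that two cameras provide comes from disparity: a nearby object shifts more between the left and right images than a distant one. For a calibrated, rectified stereo pair, depth is focal length times camera separation divided by that shift. A sketch of the standard formula (the example numbers are illustrative, not from any real camera):

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Depth of a point seen by a rectified stereo camera pair.

    focal_length_px: camera focal length, in pixels.
    baseline_m: distance between the two camera centers, in meters.
    disparity_px: horizontal shift of the point between the images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px
```

Notice that depth falls off with disparity, so small, distant shifts are the hardest to measure accurately.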
More advanced robots analyze and adapt to unfamiliar environments, even to areas with rough terrain. These robots may associate certain terrain patterns with certain actions. A rover robot, for example, might construct a map of the land in front of it based on its visual sensors. If the map shows very bumpy terrain, the robot knows to travel another way. NASA's Perseverance rover navigates Mars this way.
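One simple way a rover could judge "too bumpy" from its terrain map is to look at the height change between neighboring samples along a candidate path. This is a toy sketch of the idea, with a hypothetical bump threshold, not NASA's actual algorithm:

```python
def terrain_is_passable(height_samples_m, max_bump_m=0.05):
    """Judge a strip of terrain from a list of height samples.

    If any step between neighboring samples exceeds the threshold,
    the strip is considered too bumpy and the rover should go
    another way.
    """
    steps = [abs(b - a) for a, b in zip(height_samples_m, height_samples_m[1:])]
    return max(steps, default=0.0) <= max_bump_m
```

A real rover would weigh slope, rock size and wheel clearance too, but the pattern is the same: map first, then pick the path whose terrain score is acceptable.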
Follower robots learn from watching us. Autonomous farming robot manufacturer Burro uses a combination of cameras and GPS to get around, but the robot's artificial intelligence system learns its job by following humans around. Piaggio Fast Forward's Gita robots follow their human leaders while carrying their stuff. The device can even tail you while you're on your bike. It has a maximum speed of 35 miles per hour (56 kilometers per hour).