In robotics, vision is a captured image that the robot interprets against programmed templates. In manufacturing, where robotic arms build cars or inspect the microscopic connections on semiconductor chips, the environment is controlled: the lighting is always the same, the angle is always the same, and there is a limited set of things to look at and understand. In the real, unstructured world, however, the number of things to look at and understand grows enormously.
A humanoid robot that must navigate homes, buildings or the outdoors while performing jobs has to make sense of the many objects it "sees," coping with shadows, odd angles and movement. For example, to walk into an unknown area on its own, a robot would have to detect and recognize objects in real time, extracting features such as color, shape and edges and comparing them to a database of objects and environments it already knows. There can be thousands of objects in the robot's "memory."
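The recognize-by-comparison idea above can be sketched in a few lines: extract simple features from a detected object and find the closest match in a database of known objects. This is a minimal illustration, not ASIMO's actual representation; the feature names (hue, aspect ratio, edge count) and the toy database are assumptions made for the example.

```python
import math

# Toy database: each known object is a feature vector of
# (average hue, aspect ratio, edge count) -- invented values.
KNOWN_OBJECTS = {
    "door":   (30.0, 2.5, 4),
    "chair":  (15.0, 1.2, 12),
    "person": (25.0, 3.0, 20),
}

def feature_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(features, database, threshold=10.0):
    """Return the name of the closest known object, or None if nothing
    in the database is within the similarity threshold."""
    best_name, best_dist = None, float("inf")
    for name, known in database.items():
        d = feature_distance(features, known)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None
```

A real system would use far richer features and thousands of entries, but the core loop is the same: measure, compare, take the nearest match above a confidence cutoff.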
ASIMO's vision system consists of two basic video cameras for eyes, located in its head. ASIMO uses stereoscopic vision and a proprietary vision algorithm that lets it see, recognize, and avoid running into objects even if their orientation and lighting are not the same as those in its memory database. These cameras can detect multiple objects, determine distance, perceive motion, recognize programmed faces and even interpret hand motions. For example, when you hold your hand up to ASIMO in a "stop" position, ASIMO stops. The facial recognition feature allows ASIMO to greet "familiar" people.
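How two cameras yield distance can be shown with the standard pinhole stereo relation: a point appears at slightly different horizontal positions in the two images, and depth is focal length times camera baseline divided by that disparity. This is the textbook model, not Honda's proprietary algorithm; the numbers in the example are illustrative.

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Estimate distance to a point seen by two cameras using the
    pinhole stereo relation: depth = focal_length * baseline / disparity.
    A larger disparity means the point is closer."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero means the point is at infinity)")
    return focal_length_px * baseline_m / disparity_px

# Example: 700 px focal length, cameras 0.14 m apart, 49 px disparity
# -> the point is 2.0 m away.
distance_m = stereo_depth(700.0, 0.14, 49.0)
```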
ASIMO can recognize objects in motion by interpreting the images captured by the cameras in its head. It can assess a moving object’s distance and direction, which allows ASIMO to follow a person, stop its own progress to allow a moving object to cross its path, or greet you as you approach.
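Assessing a moving object's distance and direction comes down to comparing its position across consecutive frames. A minimal sketch, assuming the vision system already provides the object's position in robot-centric coordinates:

```python
import math

def track_motion(prev_pos, curr_pos, dt):
    """Given an object's position (x, y in meters) in two consecutive
    frames and the time between them, return its speed in m/s and its
    heading in degrees (0 = straight ahead along x)."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    speed = math.hypot(dx, dy) / dt
    heading_deg = math.degrees(math.atan2(dy, dx))
    return speed, heading_deg

# An object that moved half a meter right and half a meter forward in
# half a second is moving at about 1.41 m/s on a 45-degree heading.
speed, heading = track_motion((0.0, 0.0), (0.5, 0.5), 0.5)
```

From these two numbers the robot can decide whether to follow the object, pause to let it cross, or turn to greet it.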
The cameras also relay what ASIMO sees to ASIMO's controller. That way, if you're controlling ASIMO from a PC, you can see what ASIMO sees.
In addition to the cameras in its head, ASIMO has several sensors that help it maneuver through environments and interact with objects and people. Floor surface sensors allow ASIMO to detect objects and changes in the floor. Ultrasonic sensors help orient ASIMO by detecting surrounding objects. The sensors help ASIMO resolve discrepancies between the internal map of the area preprogrammed in its memory and the actual environment.
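Resolving a discrepancy between a preprogrammed map and what the sensors actually detect can be sketched as a simple merge: sensor readings override stale map data. The grid-cell representation here is an assumption for illustration, not Honda's internal format.

```python
def discrepancies(known_map, sensed_obstacles):
    """Cells where the sensors detect an obstacle but the
    preprogrammed map says the space is clear."""
    return [c for c in sensed_obstacles if not known_map.get(c, False)]

def update_map(known_map, sensed_obstacles):
    """Merge live detections into the internal map, trusting the
    sensors over the preprogrammed data."""
    updated = dict(known_map)
    for cell in sensed_obstacles:
        updated[cell] = True  # mark as blocked
    return updated

# Preprogrammed map says both cells are clear; ultrasonic sensing
# finds an obstacle at (0, 1), so the robot updates its map.
floor_map = {(0, 0): False, (0, 1): False}
mismatched = discrepancies(floor_map, [(0, 1)])
floor_map = update_map(floor_map, [(0, 1)])
```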
ASIMO even has a sense of touch, in a way. The force sensors in ASIMO's wrists allow ASIMO to judge how much force to use when picking up a tray, handing you a file or shaking your hand. ASIMO can integrate information gathered by its cameras and force sensors to move in sync with a person while holding hands. When pushing a cart, ASIMO's force sensors help the robot adjust the amount of force it applies (for example, pushing harder if the sensors detect an incline).
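The cart-pushing adjustment described above is, in essence, a feedback loop: if the wrist sensors report more resistance than expected, push harder, up to a safe limit. A minimal proportional sketch, with invented force values and gain:

```python
def cart_push_force(base_force_n, measured_resistance_n, gain=1.0, max_force_n=50.0):
    """Adjust pushing force from wrist-sensor feedback: if the cart
    resists more than the baseline (e.g. on an incline), increase the
    applied force proportionally, capped at a safe maximum."""
    extra = max(0.0, measured_resistance_n - base_force_n)
    return min(base_force_n + gain * extra, max_force_n)

# Flat floor: resistance matches the 10 N baseline, so push with 10 N.
# On an incline the cart resists with 18 N, so push with 18 N instead.
flat = cart_push_force(10.0, 10.0)
incline = cart_push_force(10.0, 18.0)
```

A real controller would also damp sudden changes and account for the cart's mass, but the sensor-in, force-out structure is the same.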
Another way ASIMO can sense the environment is through the use of IC Communication cards. IC cards use infrared signals to receive and transmit information. If you hold an IC card with your information encoded on it, ASIMO can detect your presence even if you aren’t within the line of sight of its cameras. These cards enhance ASIMO’s ability to interact with others. For example, if you were to visit Honda’s office and receive an IC card as a visitor pass, ASIMO could greet you and direct you to the right room after electronically reading the information encoded on your card.
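The visitor-pass scenario reduces to a lookup: read an identifier from the card, find the matching record, and compose a greeting with directions. The card ID format and visitor registry below are invented for illustration; the actual IC card payload is not documented here.

```python
# Hypothetical visitor registry keyed by card ID.
VISITOR_REGISTRY = {
    "V-1042": {"name": "A. Visitor", "destination": "Conference Room B"},
}

def greet_from_card(card_id, registry):
    """Look up a detected IC card and compose a greeting with
    directions; unknown cards get a generic greeting."""
    visitor = registry.get(card_id)
    if visitor is None:
        return "Hello! Please check in at the front desk."
    return f"Hello, {visitor['name']}! Please follow me to {visitor['destination']}."
```

Because the card is read over infrared, this lookup works even when the visitor is outside the cameras' line of sight.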