Optimizing Machine Vision with Stereo Cameras and Time of Flight Sensors
The physical world is three-dimensional, and Artificial Intelligence (AI) relies on the input of vision systems to make decisions about it. Stereo vision camera systems use two or more lenses and perceive depth by analyzing the disparity between the images each lens captures. Time-of-Flight (ToF) sensors measure the time it takes emitted light to travel to an object and back in order to calculate distance. This blog post discusses the use cases for both technologies and the practicality of combining them in the same robot or UAV.
Stereo Camera Systems
Stereo vision cameras are commonly used in robotic systems and autonomous vehicles to build a 3D model of the world for the robot’s AI, employing triangulation to recover depth from 2D images much as the human visual system does. However, standalone stereo vision camera systems have a few drawbacks:
- The robot is limited to the camera selection and packaging of the stereo camera system. By using the Myriad-X (link page) to manage stereo vision, it is possible to deploy the cameras that best fit the application, packaging, and environmental requirements.
- Stereo cameras can be fooled by lighting conditions and can mistakenly identify a shadow as an object. In addition, they do not resolve depth variations within an object very well, which is why it is often worth pairing them with ToF sensors.
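The triangulation mentioned above reduces to a simple relationship: depth equals focal length times stereo baseline divided by pixel disparity. A minimal sketch in Python (the focal length, baseline, and disparity values below are illustrative assumptions, not parameters of any particular camera):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate depth (in metres) from stereo disparity via triangulation.

    focal_px     -- camera focal length expressed in pixels
    baseline_m   -- distance between the two camera centres in metres
    disparity_px -- horizontal pixel shift of a feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature shifted 40 px between the images of a 700 px focal-length
# stereo pair with a 10 cm baseline lies roughly 1.75 m away.
print(depth_from_disparity(700, 0.10, 40))
```

Note the inverse relationship: disparity shrinks as depth grows, which is why stereo depth resolution degrades with distance and a short-baseline pair struggles with far objects.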
Time-of-Flight Sensors
Conventional imaging systems are great for capturing basic information about an object or a scene but don’t always give an accurate impression of distance. Human perception is such that we can often discern the depth of field in a still image and understand, sometimes only vaguely, how far the camera was positioned from the subject. Stereo cameras are designed to detect depth and can be very effective; however, they can be spoofed by lighting conditions, shadows, or occlusions.
Autonomous systems and robots often improve accuracy and overcome the limitations of stereo cameras by using a principle known as Time-of-Flight (ToF) to calculate the distance between a sensing element and the subject or subjects of interest.
The Time-of-Flight principle involves measuring the round-trip time of a light signal, such as a laser pulse, emitted from a ToF device. In continuous-wave implementations, the phase shift between the illumination and its reflection is measured and translated into distance. Time-of-Flight sensors are ideal for real-time depth-mapping applications because they deliver high-frame-rate readings across a wide field of view, which is why they so often come up in discussions of 3D machine vision.
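The phase-to-distance conversion can be sketched in a few lines. The 20 MHz modulation frequency below is an illustrative assumption, not a property of any specific sensor; the factor of 2 in the formulas accounts for the light's round trip:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance implied by the phase shift of a continuous-wave ToF signal.

    d = c * phase / (4 * pi * f_mod) -- the extra factor of 2 in the
    denominator accounts for light travelling to the target and back.
    """
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz: float) -> float:
    """Maximum unambiguous distance: the phase wraps past a full 2*pi."""
    return C / (2 * mod_freq_hz)

# At 20 MHz modulation, a pi/2 phase shift corresponds to about 1.87 m,
# and measured distances wrap around roughly every 7.5 m.
print(tof_distance(math.pi / 2, 20e6))
print(ambiguity_range(20e6))
```

The ambiguity range illustrates the usual trade-off: a higher modulation frequency gives finer depth resolution but a shorter unambiguous range, which is why some CW ToF systems mix multiple modulation frequencies.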
Time-of-Flight sensors, such as the Broadcom AFBR-S50MV68I, can supply the system with adjunct depth information that a stereo camera system may not capture accurately. For example, a camera system viewing a highly reflective surface may report an object that is in fact only a reflection. This is a real-world problem that all camera systems face, and it can be alleviated by adding ToF sensors. A robotic arm application might place stereo cameras in the ‘head’ or ‘shoulder’ and use ToF sensors for the finer movements of the hand.
High-quality ToF sensors offer a high measurement frame rate (up to 3 kHz) and a high level of immunity to ambient light and other optical signals. It is important to note that the goal of a ToF sensor is not to reconstruct the surrounding world or its objects but to provide a robust indication of distance to the surroundings. The sensor can detect obstacles or surface discontinuities, such as stairs, with a high degree of accuracy. Tauro Technologies has integrated the Intel Myriad-X VPU (link) to combine stereo vision systems and ToF sensors at the edge and get the best of both worlds.
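One simple way a fused system can use ToF data is as a sanity check on stereo output, rejecting stereo matches (such as reflections or shadows) that disagree with a direct distance reading. The sketch below is a hypothetical illustration of that idea, not Tauro Technologies' actual pipeline; the 15 % tolerance is an assumed tuning parameter:

```python
def validate_stereo_depth(stereo_depth_m: float, tof_depth_m: float,
                          tolerance: float = 0.15) -> bool:
    """Accept a stereo depth estimate only if it agrees with an
    overlapping ToF reading to within a relative tolerance.

    A large disagreement often indicates a stereo artefact, such as a
    reflection or a shadow matched as a real object.
    """
    if tof_depth_m <= 0:
        return False  # no valid ToF return to compare against
    return abs(stereo_depth_m - tof_depth_m) / tof_depth_m <= tolerance

# A 2.0 m stereo estimate agrees with a 2.1 m ToF reading (within 15 %),
# while a 1.0 m estimate against a 2.0 m reading is flagged as suspect.
print(validate_stereo_depth(2.0, 2.1))   # True
print(validate_stereo_depth(1.0, 2.0))   # False
```

In practice such a check would run per region of the depth map wherever the ToF field of view overlaps the stereo image, rather than on single scalar readings.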
Which Applications Could Benefit?
The range of applications that can take advantage of a combination of machine vision, AI, and depth sensing is immense. Robots can map their surroundings and plan tasks, such as figuring out how best to avoid people. Other uses include pick-and-place, combined assembly and inspection, and moving shelves of items from one location to another. There are many ways to implement these complementary technologies, and trends evolve quickly in machine vision.
The team at Tauro Technologies has built integrated hardware platforms that combine the power of the Intel Movidius Myriad X VPU and its neural engine with the enhanced depth sensing of ToF sensors and customized stereo vision cameras, and is ready to speak with you about your applications. Get in touch for more information.