MIT Researchers Develop New Collision Avoidance System to Make Autonomous Systems Safer

The adoption of autonomous cars and systems has risen dramatically over the past few years. Each individual car or system will no doubt be tested thoroughly (and approved) for safety, but there are still improvements that can be made across the board.

One such area that MIT researchers have been working hard to improve is the ability of autonomous systems to detect moving objects. They have done this by developing a system that can tell whether a moving object is hidden around a corner.

The new system works by sensing tiny changes to the shadows on the ground and could one day be used by autonomous cars to avoid a potential collision with whatever may be lurking around the corner. The system could also be adopted by robots that navigate hospital hallways, helping them avoid hitting people while delivering medicines or supplies.


In a recent paper, the researchers explained how experiments involving an autonomous car and an autonomous wheelchair were both completed successfully. In fact, when it came to sensing and stopping for oncoming vehicles, the newly developed system beat LiDAR by a fraction of a second!

So, all in all, it seems like a fantastic system that helps vehicles and robots stay safer by giving them an early warning that something is approaching around the corner. This allows the vehicle or machine to slow down, readjust its positioning, and proceed in a way that avoids a collision.

Using computer-vision techniques, the system, aptly named “ShadowCam,” detects any changes to the shadows on the ground. By measuring the changes in light intensity over time, the system is able to determine if something is getting closer or moving farther away. This information is computed and then classified. If the computer detects an encroaching object, it reacts accordingly.
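
To make the idea concrete, here is a minimal sketch (not the authors' code) of classifying motion from changes in light intensity over time in a ground patch. The frame data, region of interest, and threshold values are all illustrative assumptions.

```python
import numpy as np

def classify_shadow_motion(frames, roi, threshold=1.5):
    """frames: list of grayscale images (H x W uint8 arrays).
    roi: (row_start, row_end, col_start, col_end) patch of ground near the corner.
    Returns 'dynamic' if intensity in the patch changes faster than the threshold,
    otherwise 'static'."""
    r0, r1, c0, c1 = roi
    # Mean brightness of the ground patch in each frame.
    means = np.array([f[r0:r1, c0:c1].mean() for f in frames], dtype=np.float64)
    # Frame-to-frame change in intensity; a growing shadow darkens the patch.
    deltas = np.diff(means)
    if np.abs(deltas).mean() > threshold:
        # Sign of the trend hints whether the shadow (and object) is approaching.
        return "dynamic (approaching)" if deltas.mean() < 0 else "dynamic (receding)"
    return "static"

# Example: a sequence of synthetic frames in which the patch darkens over time.
frames = [np.full((120, 160), 200 - 5 * t, dtype=np.uint8) for t in range(6)]
print(classify_shadow_motion(frames, roi=(60, 120, 40, 120)))
```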

To adapt the system for use in autonomous vehicles, the researchers combined image registration with a new kind of visual odometry technique. Image registration, a technique often used in computer vision, overlays multiple images to reveal any variations among them. Visual odometry, a technique used by the Mars rovers, estimates the motion of a camera in real time.
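
As a hedged illustration of the image-registration step (this is not the paper's pipeline), the OpenCV sketch below aligns one frame to another using feature matching and a homography, then overlays them so that remaining differences, such as a creeping shadow, stand out. The file names are placeholders.

```python
import cv2
import numpy as np

def register_and_overlay(ref, moving):
    """Align 'moving' to 'ref' using ORB features and a homography, then
    return an absolute-difference image that highlights what changed."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(moving, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    aligned = cv2.warpPerspective(moving, H, (ref.shape[1], ref.shape[0]))
    # After registration, remaining differences are real scene changes,
    # not apparent motion caused by the camera moving.
    return cv2.absdiff(ref, aligned)

ref = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)     # placeholder file
moving = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
diff = register_and_overlay(ref, moving)
cv2.imwrite("shadow_diff.png", diff)
```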


One specific technique employed by the researchers is called “Direct Sparse Odometry” (DSO for short). DSO plots feature points in a 3D point cloud of the environment. A computer-vision pipeline is then used to select only the areas of interest, such as the ground near a corner. As images of the selected area are taken, the DSO method sets to work overlaying them from the robot’s viewpoint.
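
DSO itself is a full visual-odometry system, so the sketch below only illustrates the downstream step: assuming registered frames are already available (for example, from the registration sketch above), crop the same corner region from each and stack the crops into a sequence ready for classification. The region coordinates and frame sizes are made up.

```python
import numpy as np

def stack_roi(registered_frames, roi):
    """Crop the same ground patch near the corner from every registered frame
    and stack the crops into a (T, H, W) array ready for classification."""
    r0, r1, c0, c1 = roi
    return np.stack([f[r0:r1, c0:c1] for f in registered_frames], axis=0)

# Example with synthetic frames: the stacked patch can then be fed to the
# intensity-change classifier sketched earlier.
frames = [np.random.randint(0, 255, (120, 160), dtype=np.uint8) for _ in range(8)]
patch_sequence = stack_roi(frames, roi=(60, 120, 100, 160))
print(patch_sequence.shape)  # (8, 60, 60)
```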

The way an object is classified as a dynamic, moving one is through signal amplification. Any pixels that may contain shadows are boosted in color. This, in turn, improves the signal-to-noise ratio, making weak signals from any shadow changes stand out even more. Once this signal reaches a certain threshold, ShadowCam automatically classifies the image as dynamic. Depending on how strong that signal is, the robot is instructed to slow down or stop.
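
Below is a minimal sketch of this amplify-then-threshold idea, assuming 8-bit pixel values and made-up gain and threshold numbers: weak per-pixel shadow changes are boosted, summarized into a single signal value, and that strength is mapped to a driving command.

```python
import numpy as np

def shadow_signal(prev_patch, curr_patch, gain=4.0):
    """Amplify per-pixel intensity changes in the ground patch and return a
    single signal value summarizing how much the shadow moved."""
    diff = np.abs(curr_patch.astype(np.float32) - prev_patch.astype(np.float32))
    amplified = np.clip(diff * gain, 0, 255)   # boost weak shadow changes
    return amplified.mean()

def decide(signal, dynamic_threshold=8.0, stop_threshold=25.0):
    """Classify the scene and pick a command based on signal strength."""
    if signal < dynamic_threshold:
        return "static: continue"
    return "dynamic: STOP" if signal > stop_threshold else "dynamic: slow down"

prev_patch = np.full((60, 60), 180, dtype=np.uint8)
curr_patch = np.full((60, 60), 174, dtype=np.uint8)   # patch darkened slightly
signal = shadow_signal(prev_patch, curr_patch)
print(signal, decide(signal))   # 24.0 -> "dynamic: slow down"
```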

Next on the horizon for the researchers is to develop the system further so that it works effectively both indoors and outdoors. They will also look at ways of speeding up the system’s shadow detection.