Tesla asserts that artificial intelligence, the heart of its self-driving technology, can collect all the data it needs from cameras alone. CEO Elon Musk has argued that cameras and radar sometimes provide conflicting data. “When radar and vision disagree, which one do you believe?” he tweeted. “Vision has much more precision, so better to double down on vision than do sensor fusion.”
But many automakers believe that a suite of sensors — cameras, radar and lidar (which Tesla has never used) — is essential for safe autonomous driving, because each sensor complements the others, contributing its strengths and compensating for their weaknesses.
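The idea of sensors compensating for one another's weaknesses can be illustrated with inverse-variance weighting, a textbook fusion rule in which the less noisy sensor gets more weight. This is a minimal sketch of the general principle, not any automaker's actual algorithm; all numbers are illustrative assumptions.

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> tuple[float, float]:
    """Fuse two independent estimates of the same quantity by weighting
    each inversely to its variance (its noise). Returns the fused estimate
    and its variance, which is always lower than either input's."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Hypothetical example: a camera estimates an obstacle at 20 m (noisy, variance 4),
# radar estimates 22 m (precise, variance 1). The fused estimate leans toward radar.
distance, uncertainty = fuse(20.0, 4.0, 22.0, 1.0)
print(distance, uncertainty)  # 21.6 0.8
```

Note that the fused variance (0.8) is lower than either sensor's alone — the statistical payoff that fusion advocates point to.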
Since human drivers rely primarily on vision, cameras have intuitive appeal. Tesla’s approach uses single, or monocular, cameras to analyze each scene. But to a single camera pointed in one direction, the world looks flat. The system must infer depth from cues in the scene, such as the known size of a vehicle or a person. Without true depth perception, a 2D camera cannot, for example, distinguish a live scene from a lifelike picture on a billboard. Another failure mode for cameras is poor visibility, such as bad weather or unlit roads at night – conditions that do not degrade radar performance.
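Inferring depth from a known object size follows from the standard pinhole camera model: an object's apparent height in pixels shrinks in proportion to its distance. A minimal sketch of that geometry, with all values chosen as illustrative assumptions rather than taken from Tesla's actual pipeline:

```python
def depth_from_size(real_height_m: float, pixel_height: float, focal_length_px: float) -> float:
    """Estimate distance to an object of known real-world height using the
    pinhole model: pixel_height / focal_length = real_height / depth,
    hence depth = focal_length * real_height / pixel_height."""
    return focal_length_px * real_height_m / pixel_height

# Hypothetical example: a car about 1.5 m tall appears 100 px tall in an
# image from a camera with a focal length of 1000 px.
print(depth_from_size(1.5, 100.0, 1000.0))  # 15.0 (meters)
```

The weakness is visible in the formula itself: if the assumed real-world size is wrong — a toy car, or a picture of a car — the depth estimate is wrong in proportion.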
Radar offers the significant advantage of direct, accurate 3D depth measurement. It can perceive objects at a distance and measure the physical distance between the car and objects on the road. Radar works in any lighting or weather conditions, including darkness, fog and snow. However, radar cannot identify objects in detail, so it must be used in conjunction with higher-resolution sensors. Lidar uses lasers to produce more accurate 3D images, but at shorter range, lower resolution than cameras and much higher cost. Furthermore, lidar relies on scanning the scene with pulses of light, which takes time. Because of lidar’s inherently low resolution, many scans must be completed to fully assess the scene, which means lidar can miss important objects that pose a risk to the vehicle.
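Radar's direct depth measurement comes from timing reflected radio waves: range is the round-trip time multiplied by the speed of light, halved. A minimal sketch of that time-of-flight relationship (illustrative only; real automotive radars typically use frequency-modulated signals rather than timing a single pulse):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def radar_range(round_trip_s: float) -> float:
    """Distance to a target from the round-trip time of a radio pulse.
    The signal travels out and back, so the one-way distance is half."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A 1-microsecond round trip corresponds to a target roughly 150 m away.
print(round(radar_range(1e-6), 1))  # 149.9 (meters)
```

Because the measurement is a physical time delay rather than an inference from appearance, it is unaffected by darkness, fog or snow — the complementary strength the article describes.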
Another limitation of the Tesla approach is the computational load its neural networks place on the vehicle’s onboard processors. The AI brain of an autonomous vehicle is a complex computing system that integrates sensors, computation, communication, storage, energy management and full-stack software.
Self-driving vehicles process vast amounts of time-sensitive, safety-critical data, such as lane markings, traffic flow, stop signs and traffic lights. These systems pack enough computing power to qualify as “data centers on wheels.”
But even this kind of power has its limits. The computing power of a Tesla pales in comparison with that of the human brain. Because of these limits, Tesla has to prioritize which features get processing power, a constraint that affects how the car performs. Many believe Tesla would do well to reassess its priorities, including the need to solve known safety issues, before devoting its AI “mind” to innovations like the assertive driving mode. Removing radar and prioritizing new features like assertive mode are examples of trade-offs Tesla is making that can cost human lives.