As we’ve written a million times before, self-driving cars use some combination of cameras, radar, and lidar to “see” their surroundings. But merely detecting objects doesn’t, by itself, help the car navigate streets or steer around people and obstacles; the car has to know what those objects are, how they behave, and what they might do next. This is why telling a cyclist apart from a small child is so important: the two behave very differently, and the car needs to account for that.
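To make that concrete, here’s a minimal Python sketch of why the class label matters downstream. The labels, speed limits, and the 1.5x padding factor are illustrative assumptions, not any automaker’s actual planning logic; real systems learn far richer behavior models.

```python
# Hypothetical class-conditioned motion assumptions: the planner treats a
# detected object differently depending on what kind of thing it is.
from dataclasses import dataclass

@dataclass
class MotionModel:
    max_speed_mps: float  # fastest plausible speed for this class (assumed)
    erratic: bool         # can it change direction suddenly?

# Assumed class-to-behavior table for illustration only.
BEHAVIOR = {
    "cyclist": MotionModel(max_speed_mps=12.0, erratic=False),
    "child":   MotionModel(max_speed_mps=3.0,  erratic=True),
}

def reachable_radius(label: str, horizon_s: float) -> float:
    """Worst-case distance the object could cover within the time horizon."""
    model = BEHAVIOR[label]
    radius = model.max_speed_mps * horizon_s
    # Pad erratic movers, since they may dart in any direction without warning.
    return radius * 1.5 if model.erratic else radius

if __name__ == "__main__":
    for label in BEHAVIOR:
        print(label, round(reachable_radius(label, horizon_s=2.0), 1), "m")
```

A cyclist covers more ground, but a child gets a larger safety margin relative to its speed, which is exactly the kind of distinction a car can’t make if it only sees an unlabeled blob.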
Typically, cars detect road signs by looking for their distinctive shape or color with a camera. But rain, darkness, or tree cover can obscure those cues and make it hard for a computer to identify a sign with confidence. And if a sign isn’t easily identifiable, the car may stop erroneously in response to other things it sees, such as a line painted across the road or an arrow.
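Here’s a minimal sketch of that classic color-plus-shape approach using OpenCV. The HSV thresholds, the minimum area, and the `stop_sign.jpg` path are illustrative assumptions rather than production values; rain or shadow shifts exactly these color thresholds, which is why the pipeline is brittle.

```python
# Candidate stop-sign detection: threshold on red, then check that the
# resulting blob simplifies to roughly eight sides (an octagon).
import cv2

def find_stop_sign_candidates(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in HSV, so combine two masks.
    lower = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for contour in contours:
        if cv2.contourArea(contour) < 500:  # ignore small specks of red
            continue
        # An octagonal outline survives polygon simplification with ~8 vertices.
        epsilon = 0.02 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        if 7 <= len(approx) <= 9:
            candidates.append(cv2.boundingRect(contour))
    return candidates

if __name__ == "__main__":
    image = cv2.imread("stop_sign.jpg")  # assumed test image
    if image is not None:
        print(find_stop_sign_candidates(image))
```

Note how little it takes to fool this: a faded sign fails the color mask, and a red octagonal billboard passes both checks.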
Industry leaders like Waymo (which already offers a self-driving car to consumers) and most others mount LiDAR sensors on the roofs of their vehicles to build a 3D picture of what they’re observing. Tesla, however, has avoided those sensors, arguing they’re bulky and detract from the aesthetics of its vehicles. Instead, the company relies on cameras, and it just released a video showing what its system looks for when Autopilot is engaged. The video is pretty amazing and offers a glimpse of what the second-generation hardware package, known as Autopilot 2.0, sees.
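For a sense of how LiDAR builds that 3D picture: each laser pulse comes back as a range plus the beam’s azimuth and elevation angles, and converting those spherical readings to Cartesian coordinates yields a point cloud. The sketch below shows that conversion; the sample returns are made up for illustration.

```python
# Convert spherical LiDAR returns (range, azimuth, elevation) into
# Cartesian (x, y, z) points, the raw material of a 3D point cloud.
import math

def lidar_return_to_point(range_m, azimuth_deg, elevation_deg):
    """Convert one spherical LiDAR return to a Cartesian point in meters."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

if __name__ == "__main__":
    # Hypothetical returns: (range in meters, azimuth deg, elevation deg).
    returns = [(15.2, 0.0, -2.0), (15.3, 0.5, -2.0), (7.8, 30.0, 1.0)]
    cloud = [lidar_return_to_point(*r) for r in returns]
    for point in cloud:
        print(tuple(round(c, 2) for c in point))
```

A spinning LiDAR unit produces hundreds of thousands of such points per second, which is what gives those roof sensors their detailed 3D view, and what Tesla’s camera-only approach has to infer instead.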