@JRB, I've never considered these as software "failures" but rather as design and systems engineering failures. While Tesla has done a lot of impressive design, they have specifically eschewed lidar, which would have eliminated many of the issues associated with pure image processing, namely, "Is that a solid object that I'm about to hit?" and "Are those vaguely identifiable objects on a collision course with me?" This is very basic navigation and collision avoidance, yet we have seen at least three system designs where these decisions aren't even remotely embedded in the decision trees.
In the Florida collision with the semi trailer, as with the Uber collision with a pedestrian, everything is an instantaneous decision and the system makes no use of history, i.e., "I saw something at time1, and then something else at time2, but it had moved." A human would think, "I can't figure out what the hell it is, but it's moving into my path," while both Autopilot and Uber's system say, "I see thing1," "I see thing2," "Oh, crap, thing3 is in front of my path." I've seen this sort of blind allegiance to the image processing, without any attempt to connect the dots in time, even at my job, and it was a failure to do systems engineering and to address what the image processing is ultimately for: finding objects is just the start of the process, not the end of it. In the Florida Autopilot case, lidar would have shown that the white side of the semi trailer was not sky, and therefore that a collision was imminent, maybe.
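To make the "connect the dots in time" point concrete, here's a minimal sketch of the difference between per-frame classification and tracking. This is my own illustration, not anyone's production code; every name, coordinate convention, and threshold is invented. The idea is simply to associate each detection with the nearest prior track, estimate its velocity, and flag anything whose extrapolated path crosses ours, even if the classifier can't decide what it is.

```python
# Minimal sketch of tracking detections across frames instead of treating
# each frame in isolation. All names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Track:
    x: float                # lateral offset from our path, m
    y: float                # distance ahead, m
    vx: float = 0.0         # lateral velocity, m/s
    vy: float = 0.0         # longitudinal velocity, m/s (negative = closing)
    label: str = "unknown"  # classifier output; may flip frame to frame

def update_tracks(tracks, detections, dt, gate=3.0):
    """Associate each detection (x, y, label) with the nearest existing
    track; if one is close enough, update its position and velocity,
    otherwise start a new track."""
    for x, y, label in detections:
        nearest = min(tracks, default=None,
                      key=lambda t: (t.x - x)**2 + (t.y - y)**2)
        if nearest and (nearest.x - x)**2 + (nearest.y - y)**2 < gate**2:
            nearest.vx = (x - nearest.x) / dt
            nearest.vy = (y - nearest.y) / dt
            nearest.x, nearest.y, nearest.label = x, y, label
        else:
            tracks.append(Track(x, y, label=label))
    return tracks

def collision_imminent(track, half_width=1.5, horizon=3.0, step=0.5):
    """Flag any track whose extrapolated path enters our lane within the
    time horizon, deliberately ignoring the (possibly flip-flopping)
    label: 'what it is' matters less than 'where it's going'."""
    t = step
    while t <= horizon:
        x = track.x + track.vx * t
        y = track.y + track.vy * t
        if abs(x) < half_width and 0.0 < y < 5.0:
            return True
        t += step
    return False
```

The point is that collision_imminent never consults the label: thing1, thing2, and thing3 seen along a consistent trajectory become one moving object, which is exactly the inference the per-frame systems never made.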
In a case in San Jose, Autopilot followed the lane using only one set of lane markers, because the second set was unrecognizable, and so it followed that "lane" straight into a gore point. That would have been marginally OK had the collision-prevention canisters been properly maintained, but there had been a prior accident and the canisters were empty, allowing the car to collide with the metal barrier. So, Tesla decided not to augment the lane following with map information, nor with any information about the other cars, either of which would have indicated that the road was curving away from where Autopilot was steering. Again, lidar would have told Autopilot a collision was imminent, maybe.
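The cross-check I'm describing doesn't have to be exotic. Here's a hypothetical sketch (the function name, tolerance, and voting scheme are all my inventions) of weighing the camera's lane heading against the map geometry and the tracked traffic, and refusing to commit when they disagree:

```python
import math

def lane_heading_plausible(camera_heading, map_heading, traffic_headings,
                           tol=math.radians(5.0)):
    """Vote the vision-based lane heading against two independent sources:
    the map's road geometry and the mean heading of tracked traffic.
    One degraded lane marker shouldn't be allowed to outvote both."""
    votes = 0
    if abs(camera_heading - map_heading) < tol:
        votes += 1
    if traffic_headings:
        mean_traffic = sum(traffic_headings) / len(traffic_headings)
        if abs(camera_heading - mean_traffic) < tol:
            votes += 1
    # No agreement from either the map or the traffic means the lane model
    # is suspect; the right response is to degrade gracefully (slow down,
    # alert the driver), not to steer confidently into a gore point.
    return votes > 0
```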
I added "maybe" because it's not clear that Autopilot is designed to consider multiple sources of information at all. Ironically, "sensor fusion" is much older than Tesla; the basic idea is what humans do when they integrate the information streams from their eyes and ears into a coherent story. Autopilot doesn't appear to be designed that way, because even without lidar it has other information that it ignores in order to concentrate solely on lane following. That makes it pretty stupid, and not worthy of consideration, at this moment in history.
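For what sensor fusion means at its simplest, here's a sketch using textbook inverse-variance weighting (a standard technique, not Tesla's method; the scenario numbers are made up) for combining a camera range estimate with a lidar return on the same object. The fused estimate trusts whichever sensor is currently more certain, and a wild disagreement between them is itself a warning sign:

```python
def fuse_range(cam_range, cam_var, lidar_range, lidar_var):
    """Classic inverse-variance (minimum-variance) fusion of two
    independent range estimates of the same object."""
    w_cam = 1.0 / cam_var
    w_lidar = 1.0 / lidar_var
    fused = (w_cam * cam_range + w_lidar * lidar_range) / (w_cam + w_lidar)
    fused_var = 1.0 / (w_cam + w_lidar)
    return fused, fused_var

# Florida-style scenario (hypothetical numbers): the camera reads the white
# trailer as distant sky (huge range, huge uncertainty); lidar returns a
# hard target at 40 m with tight uncertainty.
fused, var = fuse_range(cam_range=500.0, cam_var=1e6,
                        lidar_range=40.0, lidar_var=0.25)
# fused comes out at ~40 m: the confident lidar return dominates the vague
# camera guess, and the huge disagreement between the two sensors should
# itself trigger a cross-check rather than be ignored.
```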
TTFN (ta ta for now)
I can do absolutely anything. I'm an expert!