Having worked in the software industry for some 36+ years, I find it interesting to hear people talk about software errors. Someone will say something like "the software failed", or "it didn't work", or some such thing.
Well, guess what: by definition, in virtually every case, the software ran without any errors. Now don't get me wrong, there are a lot of problems caused by software not running as expected, or not running as it should have, but in nearly 100% of those cases it's NOT the software that failed but rather the programming/design that was at fault.
Software always does what software is told to do. So unless there is some sort of hardware problem, the result will be exactly what the software was programmed to do. If you're a software guy, it's always a 'hardware problem' ;-)
Now getting back to the issue of this thread, if the Tesla auto-navigation system fails, it's probably NOT a software failure, as in 'it didn't do what it was programmed to do'. It's more likely that it did EXACTLY what it was programmed to do. It's just that whatever the problem was, either 1) it was something that was not properly anticipated (a programming/design error), or 2) it was something the system was not able to detect (a hardware error, although it could also be that the software was not designed to detect it, which goes back to issue 1).
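Just to make that distinction concrete, here's a purely hypothetical little sketch (my own made-up names, numbers, and logic, not anything resembling actual production code) of the kind of 'anticipation' error I'm talking about:

def should_brake(obstacle_distance_m, obstacle_speed_mps):
    # Design decision: only react to obstacles that are moving, on the
    # (flawed) assumption that stationary returns are roadside clutter.
    return obstacle_speed_mps > 0.5 and obstacle_distance_m < 30.0

# A stopped vehicle dead ahead: the function returns False, which is
# exactly what it was programmed to do. The software didn't 'fail';
# the design simply never anticipated this case.
print(should_brake(obstacle_distance_m=20.0, obstacle_speed_mps=0.0))  # False

That little function executes flawlessly every single time; it's the assumption baked into it that lets the car down.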
Now I'll be the first to admit that, since the advent of so-called AI-based systems, this idea that the software will ALWAYS execute exactly as programmed starts to get a bit gray because, by definition, with true AI systems the execution can't always be predicted 100%, or at least one can't predict with 100% certainty how a true AI system will 'see' an unexpected or unforeseen situation and how it will 'choose' to react to it. This is why, contrary to a lot of marketing hype, there are very few true AI systems out there: first, because it's hard as hell to program, and second, because turning a true AI system on is very worrisome for people who design products, particularly where liability, either human or financial, is at high risk.

Of course, the catch is that without the hope of what AI will bring to solving the problem, the problem may not have any practical solution to start with. This is why there's so much talk about AI systems 'learning' as they go: since it's impossible to anticipate everything, the idea is that we need systems which can adapt to what's happening around them. Now, I'm not sure how much of that sort of thing is playing a role in today's auto-navigation systems, but people are certainly assuming that in order to build a system that will truly approach 100% reliability in the future, we will need a fair amount of that hoped-for AI 'magic'.
John R. Baker, P.E. (ret)
EX-'Product Evangelist'
Irvine, CA
The secret of life is not finding someone to live with
It's finding someone you can't live without