The autonomous vehicle (AV) scene has been, well… blah, lately. Why? Because much of what can be done to advance it is hitting the technology wall. We are stuck between levels three and four. We have maxed out level three technology (except for incremental upgrades), and the leap from level three to level four is much wider than the leaps between previous levels.
There are several reasons we cannot go sans-drivers yet. But that is not what I want to talk about in this column. What I do want to discuss is how we are moving to level four.
Some will argue that the technology exists to reach level four. I would argue that we have had some success in some of the segments that make level four possible. However, some critical components are still missing or too rudimentary to make full level four a reality – primarily today's artificial intelligence.
The current focus is on sensors. They are where most of the money is at present. Cameras, microphones, shock and vibration sensors, environmental sensors (temperature, moisture, air), and air-interface (RF) devices have all reached highly functional levels. So I would be willing to say that this part of the goal has reached level four. What has not is real-time preemption (comprehension, if you will), and that is the tipping point.
There are plenty of examples of driverless vehicle successes in some segments. However, they are all in some sort of controlled environment. That shows such vehicles are capable of operating without human intervention. However, controlled applications do not require situation comprehension or intuition. Nor do they require a fat database of possible scenarios, or AI, really. So, none of this is sufficiently sophisticated to be unleashed in uncontrolled, real-life environments.
There is, however, quite a bit of quiet activity going on in this space. Most of it has to do with tweaking existing technologies and adding components toward the goal of level four, as well as figuring out ways to make vehicles recognize a situation and react based upon the highest probability of being correct.
That may sound simple, but it is not – especially the real-time situation and reaction relationship. The reason is simple enough, though – the multiplicity and complexity of possible scenarios – i.e. to be able to predict the future. Thus, we are stalled here.
Of course, no one, not even AI, can predict the future. However, the use case for AI in autonomous vehicles requires predicting the most probable outcome in any given scenario with a high success rate. That requires some awareness of future outcomes. So far, AI, for all its capabilities, cannot do that well.
The solution is, of course, AI combined with complex algorithms. And, speaking of complex algorithms, it just so happens that the TÜV (Technischer Überwachungsverein, the Technical Inspection Association of Germany and Austria – the vehicular inspection and product certification organization) is working on exactly that.
One of the issues I have had with full level four and five autonomous vehicles is that it will take forever to amass enough scenarios to make the vehicle sufficiently intelligent to achieve five-nines (99.999 percent), or better, accuracy.
Even with AI, ML, MI, fuzzy logic, deep learning, and everything else, current AI-empowered vehicles cannot manage all of the scenarios. Yet for the autonomous vehicle to thrive, it must be able to predict with (ideally) perfect accuracy. There is also the argument that even mediocre AI-controlled AVs are as capable as, if not more so than, many drivers.
That may or may not be true but let us play with some numbers. Consider this: there are, today, roughly 1.42 billion cars in operation worldwide, including 1.06 billion passenger cars and 363 million commercial vehicles. Depending upon where you live, the number varies but the global average is 18 car-related deaths per 100,000 people – all with differing accident conditions.
Doing a bit of math, that puts the global death count around 260,000, give or take, for the 1.42 billion cars. Percentage-wise, that is roughly 0.018 percent. And that is for fatal incidents alone. Add non-fatal incidents, and those that go unreported, and the true number is likely two or three times higher. Thus, for AVs to break even, that is the incident rate they need to beat.
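A quick sanity check of those figures, sketched in Python (the inputs are the article's rough estimates, not authoritative statistics):

```python
# Back-of-the-envelope check of the article's numbers (illustrative only).
cars = 1_420_000_000        # vehicles in operation worldwide (rough estimate)
deaths_per_100k = 18        # global average car-related deaths per 100,000

# Applying the per-100,000 rate to the vehicle fleet, as the article does:
fatal_incidents = cars * deaths_per_100k / 100_000
rate_percent = fatal_incidents / cars * 100

print(f"{fatal_incidents:,.0f} fatal incidents")  # 255,600 fatal incidents
print(f"{rate_percent:.3f}% of vehicles")         # 0.018% of vehicles
```

Note that the per-100,000 rate is defined per person, not per vehicle, so applying it to the fleet is itself an approximation.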
Now, this is back-of-the-envelope math, and it assumes every vehicle is autonomous. But my point is that for AI in AVs to scale, it will have to be very intelligent.
However, even with the most basic understanding of connected and autonomous vehicles, it just makes sense that simply operating according to known scenarios, obstacles, and potential causes of accidents will not work. Vehicles must be able, as humans are, to react to uncommon scenarios and to predict unknown ones.
Common knowledge is that AI and its cohorts are the only way this can be accomplished (short of using the Vulcan mind-meld on the vehicle computers).
At the base level, the answer to this, and to just about every other computer-managed device, is the algorithm.
It just so happens that the TÜV has taken on the task of pondering the worst thing that could happen, at any and every given moment, and figuring out how to get out of it without endangering or obstructing traffic.
They have developed a new self-driving car algorithm, dubbed the Continuous Learning Machine, an AI tool that automatically labels and mines training data so that connected autonomous vehicles can react to unpredicted events – a bicycle swerving onto the road amidst traffic, or a child running into the street. In essence, it is all about predicting doom – and there is a lot of it. However, the solution is not to enumerate every scenario that can occur; as I mentioned earlier, that is a rather daunting challenge. A better solution is to use AI to create patterns.
The fundamentals of this are not particularly complicated. What is complicated is having the AI create patterns – pattern recognition – by learning from large quantities of data, then using that learning to “guess” what is most likely to occur. The result is far less static data and the faster computational capability needed to approximate real time.
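One way to picture such a mine-label-retrain loop is the sketch below. This is a simplified illustration of the general idea, not TÜV's actual implementation; the confidence scores, threshold, and `model_confidence` stand-in are all hypothetical:

```python
def model_confidence(scenario):
    # Stand-in for a real perception model's confidence score (hypothetical).
    return scenario.get("confidence", 1.0)

def continuous_learning_step(incoming_scenarios, training_set, threshold=0.6):
    """Mine rare or novel events: scenarios the model is unsure about are
    auto-labeled and added to the training set; routine ones are discarded."""
    mined = [s for s in incoming_scenarios if model_confidence(s) < threshold]
    for s in mined:
        s["label"] = "rare_event"   # automatic labeling step
    training_set.extend(mined)
    return mined

training = []
stream = [
    {"id": 1, "confidence": 0.95},  # routine traffic: ignored
    {"id": 2, "confidence": 0.30},  # bicycle swerving into the lane: mined
    {"id": 3, "confidence": 0.10},  # child running into the street: mined
]
mined = continuous_learning_step(stream, training)
print(len(training))  # 2 rare events queued for retraining
```

The design point is that only the uncommon cases are kept, which is what keeps the data volume from growing without bound.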
Just as no human is born with the knowledge to drive a car safely, neither is a computer. And, thus far, computers are less effective than humans simply because a virtually unlimited sink of data must be stored to consider every possible scenario, whereas humans can deduce from far less data, all things being equal.
So, it is about the ability to use rational thinking – to deduce something from a collection of experiences. Something only the human mind is capable of. Sure, we can approximate that with huge volumes of data, and complicated neural networks, and the like. But the problem is still that a huge amount of data is needed for computers to even come close to deductive reasoning.
The traditional approach is to drive and drive and drive. While that is possible, the number of miles and time it takes to build even a reasonable database is prohibitive. A better approach is to collect data from multiple, in this case, millions at least, sources.
Neither of these approaches is practical at the moment, unfortunately. Even doing this virtually has its challenges, because a nearly unlimited range of scenarios would have to be programmed.
This is why the TÜV is going in this direction. There is a need for AI algorithms – one of which has been developed by Germany’s Technical University of Munich (TUM) – to be flexible. The purpose of such algorithms is to constantly predict the worst possible situation. However, that is extremely computationally intensive (quantum computing, anyone?).
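To make "constantly predict the worst possible situation" concrete, here is a deliberately minimal one-dimensional sketch of the idea: assume the vehicle ahead brakes as hard as physics allows, and verify the ego vehicle can still stop in the remaining gap. The braking rates and reaction time are illustrative assumptions, and this is far simpler than the actual TUM work:

```python
def stopping_distance(speed_mps, decel_mps2, reaction_s=0.0):
    """Distance to come to rest from speed under constant braking,
    after an optional reaction delay at constant speed."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

def worst_case_safe(gap_m, ego_speed, lead_speed,
                    ego_brake=6.0, lead_brake=9.0, reaction_s=0.3):
    """Worst-case check: the lead vehicle brakes at its physical maximum
    (lead_brake) while the ego vehicle reacts after reaction_s and brakes
    at ego_brake. Compares final rest positions (a simplification)."""
    ego_stop = stopping_distance(ego_speed, ego_brake, reaction_s)
    lead_stop = stopping_distance(lead_speed, lead_brake)
    return gap_m + lead_stop - ego_stop > 0

# Following at 20 m/s (72 km/h):
print(worst_case_safe(40, ego_speed=20, lead_speed=20))  # True: 40 m gap holds
print(worst_case_safe(10, ego_speed=20, lead_speed=20))  # False: 10 m is too close
```

The computational burden the article mentions comes from doing this kind of check not for one lead vehicle in one dimension, but for every surrounding road user's full set of reachable motions, continuously.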
The trick is to constantly improve the algorithm by feeding it enough data on uncommon events, along with actions to execute if such events occur. This allows the algorithm to improve both its accuracy and the number of cases it can predict. It is an excellent use case for big data.
As I mentioned, it is relatively straightforward to develop a vehicle that can operate in a known environment. But that will not work in edge cases. Thus, level four and five autonomous driving in real-life environments is still a long way off.
Finally, there are non-technical issues yet to even be realized. Legal issues, ownership issues, responsible party issues, liability, and more. These issues are constantly being debated and will likely be so for some time to come before solutions are reached.
Yes, we are still a long way off from level four (high automation) and level five (full autonomy) AVs.