The recent fatal accident involving a Tesla has caused the autonomous vehicle (AV) segment to expand its awareness of exactly how all of this is going to come together and to widen the scope of the challenges it must deal with. One challenge in particular has not had a lot of visibility: ensuring that every bit and byte, wireless and otherwise, is fully and completely secured.
Because autonomous vehicles will make extensive use of wireless platforms (not just the upcoming 5G, and one day 6G, but other cellular platforms, with 4G around for some time to come), they present a relatively large threat surface. Their wireless interconnect subsystems also include commercial technologies such as Wi-Fi and Bluetooth, as well as proprietary platforms such as Dedicated Short-Range Communications (DSRC). Additionally, as the segment matures, other wireless technologies may evolve that expand the surface even further.
However, unlike some targets, compromising a vehicle can have much more dire consequences than an attack on, say, Target, or Amazon, or other companies that do not deal with life safety.
This same urgency holds true for other segments such as medical devices and nuclear power plants (and it also extends to other pilotless vehicles, drones for example). So there are definitely some segments that need to secure their interconnect infrastructure at a higher level than others. Plus, as wide as AVs’ reach into data I/O is, and given the variety of channels they use, they present a plethora of opportunities for attackers.
This is not news, or new, to the AV industry. However, as with most other platforms, wireless and otherwise (IP, for example), stern warnings have been issued by a number of overwatch organizations that the AV segment is not placing as high a priority on security as it should (read: security must be the overarching directive as the industry evolves). Something as common as a malware attack on the control systems of a self-driving car, for example, would have disastrous outcomes.
Of course, there is security work going on among some of the players. And there are others dissecting that work, particularly on the security front.
One is Macquarie University, whose Department of Computing has identified certain key vulnerabilities in self-driving vehicles that show just how exposed to potential sabotage their control systems can be (are you listening, people in the AV space?).
They focused on some vulnerabilities around safety. And, as it turns out, there is a variety of attacks that can be perpetrated on self-driving vehicles with the potential to be disastrous from a safety perspective. And they are no more difficult than attacks on any other target.
One of the weak spots is a critical part of the computer vision systems used by AVs to recognize and classify images: the “convolutional neural network” (CNN).
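For readers unfamiliar with the term, the core operation of a CNN is a small filter slid across the image, producing strong responses where the filter’s pattern appears. A minimal sketch in plain Python (the filter here is a hand-written vertical-edge detector, not a learned one, and nothing about it comes from an actual AV stack):

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[r + i][c + j] * kernel[i][j]
                for i in range(kh) for j in range(kw))
            for c in range(out_w)
        ]
        for r in range(out_h)
    ]

# A 4x4 "image" whose left half is dark and right half bright,
# like the painted edge of a lane marking.
image = [[0.0, 0.0, 1.0, 1.0]] * 4

# A hand-written vertical-edge filter; a real CNN learns thousands of these.
kernel = [[-1.0, 1.0],
          [-1.0, 1.0]]

# ReLU activation keeps only positive responses.
response = [[max(v, 0.0) for v in row] for row in conv2d(image, kernel)]
print(response[0])  # strongest response where dark meets bright: [0.0, 2.0, 0.0]
```

A production network stacks many such layers and learns the filter values from data, which is exactly what makes its decisions vulnerable to carefully crafted inputs.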
The list of who’s who involved in this is impressive. The work is a collaboration with computer scientists at Harvard, UCLA, and the University of Sydney, published by the International Conference on Pervasive Computing and Communications. It details the five major security threats to AVs that rely on the logic developed for CNNs.
Cameras and LiDAR are number one. They are the “eyes” of the self-driving vehicle, collecting information about the driving scene and environment and sending it to the onboard CNN computer. Based on that data, the computer makes decisions such as speed adjustments, steering corrections, braking, etc.
However, these are just that: cameras and light-pulsed radar. They have the same vulnerability that any such technology has in any similar application: precision. Tricking them is easy precisely because they are so precise that false data fed to them (false images, signs, light variations, etc.) can be too subtle for the human eye to see.
The example used by the scientists was provided by Tencent Keen Security Lab, which set up a falsified-image attack on the Tesla Autopilot system, causing the Tesla to turn on its rain wipers when there was no rain. This is a rather benign example, but the fact that it succeeded is what this is all about.
What worries experts the most is the susceptibility of AVs to what is called “adversarial machine learning.” This is data fed to AV computer systems via a variety of possible access routes (over-the-air (OTA) updates, for example, can be used to inject malware into an AV’s driving system when the vehicle connects to the internet to upgrade software and firmware). That causes the algorithms to make just about any error the malware is designed to induce. Depending upon the error and the attack, the results could be fatal (stuck accelerator, steering, or brake attacks).
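The “imperceptible perturbation” idea behind adversarial attacks can be caricatured with a toy linear classifier. Everything below is synthetic (the weights, the image, the threshold); it only illustrates why a tiny per-pixel nudge, aimed along the model’s weights, can flip a confident decision:

```python
import random

# Toy linear "classifier" standing in for a CNN's final decision layer:
# a positive score means "stop sign." Weights and image are synthetic;
# nothing here comes from a real AV system.
random.seed(42)
D = 10_000                                             # flattened "pixels"
w = [random.choice((-1.0, 1.0)) for _ in range(D)]     # toy learned weights
x = [random.uniform(0.4, 0.6) for _ in range(D)]       # benign input image

bias = 5.0 - sum(wi * xi for wi, xi in zip(w, x))      # clean score is +5

def score(img):
    return sum(wi * pi for wi, pi in zip(w, img)) + bias

# FGSM-style step: nudge every pixel by 1% against the classifier.
eps = 0.01
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for wi, xi in zip(w, x)]

# Each pixel moved by only 0.01 on a 0..1 scale (invisible to a human),
# yet the score collapses from +5 to about 5 - eps*D = -95.
print(score(x), score(x_adv))
```

The point is that the damage scales with the number of pixels while the visible change per pixel stays fixed, which is why high-resolution vision systems are so exposed.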
There is also the possibility of tampering with the “black box” (the same as those used in aircraft) that AVs carry to do exactly what their aerial counterparts do: record actions. These are still in the early stages of development but will be a requirement on all level 3 and above AVs. They are yet to be fully secured and are not tamper-proof (sounds like a great mystery-movie storyline to me).
Another possibility is interference. We all know that radio signals can be jammed. It is entirely possible to do that with an AV’s wireless systems (stopping it on a railroad track to assassinate a government official, for example; the movie Eraser, anyone?).
These are just a smattering of the possibilities. In reality, we do not know just how wide the threat surface of an AV is going to be once we finally reach advanced level four and final level five. And this discussion is focused on only a couple of components, so the actual challenges are much more complex and diversified across a much wider swath of technology.
For the good news: those working on AVs realize the gravity of any breach in security (let us hope that is the case, anyway). And there is awareness. The question is, will those seizing the opportunity to capitalize on this ecosystem take security seriously? Or will there need to be a disaster first, before everybody comes to the table?
I am optimistic. But it is a young industry and, as with 5G, there are a lot of upfront costs that are borne by the supply side. Cheaping out on 5G (security and otherwise) is one thing, though at the extreme it, too, can cause life-safety issues. With AVs, security becomes far more critical. If the industry does not get it together before much more development is done, it is not a matter of if, but when, people are going to die.
Ernest Worthman is an executive editor with AGL Media Group.
The autonomous vehicle (AV) scene has been, well… blah, lately. Why? Because much of what can be done to advance it is hitting the technology wall. We are stuck between levels three and four. We have maxed out level-three technology (except for incremental upgrades). However, the leap from level three to level four is much wider than between the previous levels.
There are several reasons we cannot go sans-drivers yet. But that is not what I want to talk about in this column. What I do want to discuss is how we are moving to level four.
Some will argue that the technology for level four already exists. I would argue that we have had some success in some of the segments that make level four possible. However, some critical components are still missing or too rudimentary to make full level four a reality, primarily today’s artificial intelligence.
The current focus is on sensors. They are where most of the money is at present. Cameras, microphones, shock and vibration, environmental (temperature, moisture, air), and air-interface (RF) sensors have all been elevated to highly functional levels. So I would be willing to say that this part of the goal has reached level four. But what has not is real-time preemption (comprehension, if you will), and that is the tipping point.
There are plenty of examples of driverless vehicle successes in some segments. However, they are all in some sort of controlled environment. That shows such vehicles are capable of operating without human intervention. However, controlled applications do not require situation comprehension or intuition. Nor do they require a fat database of possible scenarios, or AI, really. So, none of this is sufficiently sophisticated to be unleashed in uncontrolled, real-life environments.
There is, however, quite a bit of quiet activity going on in this space. Most of it has to do with tweaking existing technologies and adding components toward the goal of level four, as well as figuring out ways to make vehicles recognize a situation and react based upon the highest probability of being correct.
That may sound simple, but it is not – especially the real-time situation and reaction relationship. The reason is simple enough, though – the multiplicity and complexity of possible scenarios – i.e. to be able to predict the future. Thus, we are stalled here.
Of course, no one, not even AI, can predict the future. However, the use case for AI in autonomous vehicles requires it to predict the most probable outcome in any given scenario with a high success rate. That requires some awareness of likely future outcomes. So far, AI, for all its capabilities, cannot do that well.
The solution is, of course, AI combined with complex algorithms. And, speaking of complex algorithms, it just so happens that the TÜV (this is the Technischer Überwachungsverein or Technical Inspection Association of Germany and Austria – the vehicular inspection and product certification organization) is working on exactly that.
One of the issues I have had with full level four and five autonomous vehicles is that it will take forever to amass enough scenarios to make the vehicle sufficiently intelligent to achieve five-nines, or better, accuracy.
Even with AI, ML, MI, fuzzy logic, deep learning, and everything else, current AI-empowered vehicles cannot manage all of the scenarios. Yet for the autonomous vehicle to thrive, it must be able to predict with (ideally) perfect accuracy. There is also the argument that even mediocre AI-controlled AVs are as capable, if not more so, than many drivers.
That may or may not be true, but let us play with some numbers. Consider this: there are, today, roughly 1.42 billion cars in operation worldwide, including 1.06 billion passenger cars and 363 million commercial vehicles. Depending upon where you live, the number varies, but the global average is 18 car-related deaths per 100,000 people, all with differing accident conditions.
Doing a bit of math, and applying that rate to the 1.42 billion cars, puts the global death count at around 256,000, give or take. Percentage-wise, that is roughly 0.018 percent. And that is for fatal incidents only. Add non-fatal and unreported incidents, and the number is likely two or three times higher. Thus, for AVs to break even, that is the number they would need to beat.
Now, this is absolute math, and it assumes every vehicle is autonomous. But my point is that, at scale, the AI in AVs will have to be very intelligent.
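The arithmetic above is easy to check. As in the text, the 18-per-100,000 rate is applied to the vehicle count rather than to population ("absolute math"):

```python
# Checking the column's back-of-the-envelope numbers. As in the text,
# the 18-per-100,000 rate is applied to the vehicle count rather than
# to population -- "absolute math."
vehicles = 1_420_000_000
deaths_per_100k = 18

annual_deaths = vehicles * deaths_per_100k / 100_000
rate_percent = annual_deaths / vehicles * 100

print(f"{annual_deaths:,.0f} deaths per year")    # 255,600 deaths per year
print(f"{rate_percent:.3f} percent of vehicles")  # 0.018 percent of vehicles
```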
However, even with the most basic understanding of connected and autonomous vehicles, it just makes sense that simply operating according to known scenarios, obstacles, and potential causes of accidents will not work. They must be able, as humans are, to react to uncommon scenarios and to predict unknown ones.
Common knowledge is that AI and its cohorts are the only way this can be accomplished (short of using the Vulcan mind-meld on the vehicle computers).
At the base level, the answer to this, as to just about every other computer-managed device, is the algorithm.
It just so happens that the TÜV has taken on pondering the worst thing that could happen, at any and every given moment, and figuring out how to get out of it without endangering or obstructing traffic.
They have developed a new self-driving car algorithm, dubbed the Continuous Learning Machine AI tool, that automatically labels and mines training data to enable connected autonomous vehicles to react to unpredicted events, such as bicycles swerving onto the road amidst traffic or kids running into the street. In essence, it is all about predicting doom, and there is a lot of it. However, the solution is not to create as many scenarios as can occur. As I mentioned earlier, that is a rather daunting challenge. A better solution is to use AI to recognize patterns.
The fundamentals of this are not particularly complicated. What is complicated is having the AI create patterns (pattern recognition), learning from large quantities of data and then using that learning to “guess” what is most likely to occur. The result is much less static data and faster computation that approximates real time.
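The pattern-matching idea can be shown in miniature with a nearest-neighbour lookup: react to a new situation the way the system reacted in the most similar past case. The feature choices, numbers, and reaction labels below are all hypothetical, not from any production system:

```python
import math

# Hypothetical scenario features: (closing speed m/s, lateral offset m,
# object size m), with the safe reaction recorded for each past case.
past_cases = [
    ((0.0, 0.5, 0.3), "slow_down"),    # slow object drifting toward the lane
    ((10.0, 3.0, 1.8), "hold_lane"),   # fast vehicle in the adjacent lane
    ((2.0, 0.1, 0.4), "brake_hard"),   # small object directly ahead
]

def predict(scenario):
    """Nearest-neighbour lookup: react as in the most similar past case."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(past_cases, key=lambda case: dist(case[0], scenario))[1]

print(predict((1.5, 0.2, 0.5)))  # closest to the "small object ahead" case
```

Real systems generalize with learned models rather than raw lookup, but the principle is the same: the vehicle does not need to have seen this exact scenario, only enough similar ones.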
Just as no human is born with the knowledge to drive a car safely, neither is a computer. And, thus far, computers are less effective than humans simply because there is a virtually unlimited sink of data that must be stored to consider every possible scenario, whereas humans can deduce from far less data, all things being equal.
So, it is about the ability to use rational thinking: to deduce something from a collection of experiences, something so far only the human mind is capable of. Sure, we can approximate that with huge volumes of data, complicated neural networks, and the like. But the problem is still that a huge amount of data is needed for computers to even come close to deductive reasoning.
The traditional approach is to drive and drive and drive. While that is possible, the number of miles and the amount of time it takes to build even a reasonable database are prohibitive. A better approach is to collect data from multiple sources; in this case, millions at least.
Neither of these approaches is practical at the moment, unfortunately. Even doing this virtually has its challenges, because one would have to program a nearly unlimited range of scenarios.
This is why the TÜV is going in this direction. AI algorithms need to be flexible, and one such algorithm has been developed by Germany’s Technical University of Munich (TUM). The purpose of such algorithms is to constantly predict the worst possible situation. However, that is extremely computationally intensive (quantum computing, anyone?).
The trick is to constantly improve the algorithm by giving it enough data on uncommon events, along with actions to execute if such events occur. This would allow these algorithms to improve by increasing their accuracy and the number of cases they can predict. This is an excellent use case for big data.
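The "always have a safe fall-back maneuver" idea behind such worst-case algorithms can be caricatured in a few lines. All the numbers here are illustrative assumptions (deceleration, reaction delay, speeds), not anything from the TÜV or TUM work:

```python
def fail_safe_exists(speed_mps, gap_m, max_decel=8.0, reaction_s=0.2):
    """Worst-case check: can the vehicle stop within the current gap,
    assuming the hazard ahead behaves as badly as physics allows?
    Deceleration and reaction-delay figures are illustrative only."""
    # distance rolled during the reaction delay, plus braking distance v^2/(2a)
    stopping_m = speed_mps * reaction_s + speed_mps ** 2 / (2 * max_decel)
    return stopping_m <= gap_m

# ~50 km/h (13.9 m/s) with 20 m of clear road: a full stop fits, keep going.
print(fail_safe_exists(13.9, 20.0))  # True
# Same speed, but a pedestrian could step out 10 m ahead: no safe plan
# exists, so a cautious planner must slow down *now*.
print(fail_safe_exists(13.9, 10.0))  # False
```

The real computational burden comes from running checks like this over every nearby object, every plausible behavior of each object, and every candidate trajectory, many times per second.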
As I mentioned, it is relatively straightforward to develop a vehicle that can operate in a known environment. But that will not work in edge cases. Thus, levels four and five autonomous driving in real-life environments are still a long way off.
Finally, there are non-technical issues yet to even be fully realized: legal issues, ownership issues, responsible-party issues, liability, and more. These issues are constantly being debated and will likely be so for some time to come before solutions are reached.
Yes, we are still a long way off from levels four and five AVs.
Level 4 requires high automation.
Level 5 requires full autonomy.
I had heard about this incident in Las Vegas a few weeks ago, in which an autonomous vehicle ran over a robot, and I was planning a serious missive to discuss some of its ramifications with respect to the autonomous vehicle space. But first, I need to get the LOL out of the way. You have to admit, it is funny.
What this does is bring out one of the issues that exist in the self-driving space. The details are not all that important. But, briefly, the car was a Tesla; the robot was one of those host models being developed to act as service units in places such as museums, hotels, banks, and shopping and business centers. It is the next generation of a robot that can maneuver around obstacles and move its head and arms (Danger, Will Robinson!). It also has a display to interact with people and give them information.
The accident details were, simply, a robot gone rogue. One of several, it somehow lost its bearings and headed for the street, where the Tesla, which was in self-driving mode, mowed it down. Here is what is funny: the police were called. Seriously?
Shades of Westworld and Futureworld movies. Of course, the robot (affectionately called Promobot) will be given a post mortem to see why it went rogue.
Now – the real-world implications. Unless you live under a rock, you are aware that this is not the first mishap involving self-driving vehicles. While this one may have a bit of comic relief, the others were very serious. One happened last year, when an autonomous Uber vehicle killed a pedestrian. In another incident, in 2018, a Tesla vehicle was involved in a fatal accident while the Autopilot system was engaged. As well, there have been other incidents prior to those.
One of the arguments is that there are bound to be accidents involving autonomous vehicles. Why? Because, first of all, there are just too many circumstances that cannot be preemptively foreseen. The same can be said for human drivers. However, with humans there is the element of intuition (the non-scientific term), which enables cognitive reactions to ever-so-slight deviations from the norm. Such capabilities will not exist in an autonomous vehicle, at least not for the foreseeable future.
We can come close, with tons of pre-programmed scenarios, but will that be good enough? Perhaps, when quantum computing does a Vulcan mind meld with AI, and big data algorithms are refined, the space will narrow. But for now, the reality is that there are just too many variables to be handled by current autonomous vehicle technology.
However, there are arguments that an autonomous vehicle ecosystem will be much safer than the present driver-controlled one. Amen to that, but it will not occur until we reach the tipping point where both autonomous and driver-controlled vehicles are operating under a controlled environment. As long as human judgement and free-will driving are involved, errors will continue to occur at about the same rate they do presently. Autonomous vehicles will remove the judgement errors but will introduce other errors (although these should be significantly fewer among autonomous vehicles).
The interesting thing here is that the Tesla hit the robot just as it would have hit a pedestrian under the same conditions. Non-human devices cannot be expected to differentiate on an emotional scale. Certain parameters can be programmed into the mechanics to hedge the bet, such as heat sensors to give the device more data (unless you live in Alaska or some other frozen land where everything is cold), or facial-recognition algorithms (if the data is coming from the front of the human). But none of this is foolproof either.
One can also go in the opposite direction and simply stop the autonomous vehicle if there is any uncertainty in the scenario. But then it will get rear-ended by a driven vehicle because the driver happens to be texting. The industry does not have that figured out quite yet.
What all this brings up is that we are a long way away from anything other than driver assist, no matter how advanced it gets. This will be the scenario for years to come. The nice thing is that driver assist will become much more intelligent and offer more options. But letting the vehicle drive itself is not one of them in the near future.
Whether it is a robot or a human that gets nailed by an autonomous vehicle, the end result is the same in the absolute sense: it was an incident involving a driverless vehicle. That means we have quite a way to go before we have level 5 autonomy.
My position is that we will not have a fully autonomous vehicle infrastructure until everything and everyone can be precisely identified, and communication is two-way. That is years out.
RIP Little Promobot!
Komatsu America, a heavy equipment manufacturer, has qualified to operate an autonomous haulage system (AHS) using private LTE mobile broadband technology, a first for the mining industry.
Komatsu’s FrontRunner AHS allows unmanned operation of ultra-class mining trucks, which are designed to improve mine-site safety, reduce costs, and increase productivity.
The company completed a year-long qualification program on Nokia’s Future X infrastructure. The industry is moving away from less predictable wireless technologies such as Wi-Fi and toward private LTE networks, which improve security, capacity, and overall performance within a multi-application environment, according to a Komatsu official.
In November of last year, Nokia unveiled “Future X for industries,” a strategy and architecture to increase productivity across industrial sectors. The strategy, which will span both advanced LTE and 5G, will exploit multiple technologies, including the industrial internet of things (IIoT), distributed (edge) cloud, augmented intelligence, and augmented and virtual reality.
Kathrin Buvac, president of Nokia Enterprise, said, “Private LTE is a key element in the Nokia Bell Labs Future X architecture to help industries such as mining create an intelligent, dynamic, high-performance network that increases the safety, productivity and efficiency of their business.”
Testing autonomous vehicles can be tricky. When its autonomous vehicle struck and killed a woman in March, Uber suspended testing in Tempe, Arizona, as well as in Pittsburgh, San Francisco and Toronto. The government in South Korea may have found a way around that problem.
Also in March, presumably after the accident, the government brought together 188 companies, including Hyundai, Samsung and SK Telecom, to study autonomous vehicle development. What resulted was a much safer way to test these cars: building an unpopulated city.
Work on K-City was recently completed for testing autonomous vehicles using 5G networks, Yonhap News Agency recently reported. The mock urban area, located southwest of Seoul, spans 223 square miles at a cost of $11 million.
K-City has five testing environments — highway, downtown road, suburban street, parking lot and community facilities — for autonomous vehicles, according to Yonhap.
In particular, the 5G networks will allow companies, universities and research institutes to test a variety of connected car services in those different environments, the ministry said.
Samsung Electronics and the Korea Transportation Safety Authority (KOTSA) plan to build 4G LTE, 5G and V2X networks to support the testing area, according to TAAS Magazine.
Ten testing sites for automated vehicle technologies were selected in the United States early in 2017 from more than 60 applicants, according to Forbes. However, in October of this year, the Trump Administration dropped the existing federally-recognized “automated vehicle proving grounds” as it prepared a new autonomous vehicle testing initiative.