Like 5G, O-RAN is being positioned as the pièce de résistance platform for the next generation of RAN hardware. And there certainly seems to be a raft of organizations hopping on the O-RAN bandwagon, especially of late.
Brushing aside the hype, O-RAN is an attractive solution to the daunting challenges faced by 5G and wireless networks in general. However, my favorite adage, all that glitters is not gold, has some applicability to the state of O-RAN.
Before we drill down on this, just for fun, let us do a quick review of O-RAN. The particular definition varies slightly between organizations such as the O-RAN Alliance and the major players involved in it (such as Nokia). However, no matter how you slice it, O-RAN is all about disaggregation of the RAN and open architecture. In doing so, it offers an environment in which any piece of O-RAN-compliant hardware will work with any other piece of O-RAN-compliant hardware – that is the bottom line.
One would think that would get everyone on board. However, there is much more to this than the technology. The situation is not unique to O-RAN. Similar situations exist with open AI platforms, various compute platforms, and dozens of others. Even elements that have achieved accepted open status, Unix for example, are not necessarily the de facto go-to for everything and everyone. However, with 5G it would definitely be smart for everyone to be on a common hardware platform, open or otherwise. And, in a perfect world, open hardware would be the best solution.
There are a lot of places where open standards exist and work well. Car parts are an excellent example. A tire of a certain size, regardless of who makes it, will fit any vehicle that can run that size. The same is true of other parts: alternators, belts, batteries, and so on. If a battery is specified as a certain group, no matter who makes it, it will fit any vehicle that uses that group.
The same should be true for O-RAN. With O-RAN the focus is on its three main building blocks – the radio unit (RU), the distributed unit (DU), and the centralized unit (CU). The idea is that any manufacturer’s O-RAN RU will work with another manufacturer’s DU or CU in any system that complies with the O-RAN standard. There are other layers, of course, but these are the critical ones.
Open hardware systems bring to the table more competition, better designs, lowered CAPEX, and user benefits such as adding features, increasing deployment flexibility, capacity scaling, and upgrading components. New services, such as AI layers and virtualization, are also easier to integrate.
However, proponents of the other camp, proprietary hardware, take some pretty strong positions and present similarly strong arguments. Proprietary hardware has better profitability because there is less competition. There is also the question of reliability across multiple vendors.
Proprietary hardware locks in a particular vendor – job security, if you will. Deals are made, hardware is locked in, and the marriage between the vendor and the user is inked. It provides a known supply chain, service sector, and chain of responsibility. It is a mature model, very well understood, and has been accepted since the beginning of wireless time.
Proprietary hardware is always more expensive than open hardware. Tertiary elements such as service, parts, warranties, and the like are locked in as well. Simply put, because proprietary hardware is more profitable than open hardware, vendors find the proprietary model much more desirable.
With open hardware, much of this becomes a free-for-all. For example, who will service the equipment? If a third party is involved in service and something goes wrong, there is often a lot of finger-pointing. And who do you hold responsible when something goes wrong? The vendor? The servicer? Or do you simply self-service and fight with the vendor over whether the fault was their equipment or some other cause?
What about longevity? Say a particular vendor goes out of business and its hardware is no longer available. The user is stuck with replacing that particular hardware with someone else’s proprietary hardware. That can be a nightmare. And, quite honestly, vendors like lock-in. Users, not so much.
There are also the issues of stability, capacity, and scalability. These have challenged all open platforms since the idea evolved. Some, such as computer software and hardware, have conquered these, but it takes time, and the more complex the platform (such as O-RAN and open AI), the longer it takes to achieve stability. And even mature open platforms continue to have occasional (some, regular) hiccups.
Next, there are peripheral components. AI is set to play a huge role in wireless. It is critical that AI understand the complexities that accompany 5G – ultra-reliable low-latency communications (URLLC), dynamic spectrum sharing (DSS), massive MIMO, dynamic, intelligent, software-driven spectrum allocation, virtualization, software-defined networking (SDN), and many of the other new characteristics of 5G.
The use of AI in the RAN presents the same challenges as in other domains. AI has natural biases due to the way algorithms function. By nature, it has errors and dependencies. This is more visible in platforms such as facial recognition, but it also exists in stock analysis, hiring, and other domains. There is no reason to assume it will not have similar issues in the RAN.
And, of course, let us not forget security. The more open the interface, the more difficult security becomes and the larger the threat surface. Add to that the eventual massive deployment surface of this platform and its tangential vectors (the Internet of Anything/Everything (IoX), for example), and it certainly becomes critical.
Also, in the 5G space, strict latency requirements, for example, are added to the queue. Those impose similarly strict encryption requirements (a very interesting discussion, but too lengthy for this column). And, with the tens and, eventually, hundreds of billions of devices expected on networks as 5G matures, keeping rogue devices and bad actors at bay will be challenging.
So, what else is holding up the rapid deployment of O-RAN, which will be essential for 5G? The technical issues remain quite challenging, regardless of how rah-rah some of its proponents are. Some even claim it will never happen. But there are other issues as well.
But, all of that aside, just getting all the players on board will be akin to herding cats. There are a lot of different players with a variety of angles, and getting everyone to agree is difficult. This is a cooperative environment. For manufacturers, open interfaces require best practices among ALL manufacturers to ensure the integrity of the link – the play-nice-everybody scenario. And frankly, this is a big change to a very well entrenched and mature industry – resistance to change will be hard to overcome.
Beyond that, there is also the issue of retrofit. That is not a problem for the greenfield segment of 5G. However, for existing equipment it is a significant challenge.
A recent report from the Dell’Oro Group predicted that O-RAN will not account for more than 10 percent of the overall market by 2025. ABI Research does not expect the CAPEX of O-RAN hardware to surpass traditional RAN until close to the end of this decade.
In the end, and down the road, O-RAN will most likely get these issues ironed out and, unless an unknown platform suddenly emerges, become the standard hardware platform for 5G. The really tricky part is for 5G developers to start buying into O-RAN, or at least prepping for it, rather than continuing to add proprietary hardware just to get 5G out.
So, while the noise around O-RAN makes it seem like the answer to all of our deployment problems, that really is not the case. It has a bit of a haul in front of it. But it is fun to follow the various threads.
Ernest Worthman is an executive editor with AGL Media Group, a senior member of IEEE and an adjunct professor at the CSU Walter Scott Jr. College of Engineering.
The autonomous vehicle (AV) scene has been, well… blah, lately. Why? Because much of what can be done to advance it is hitting the technology wall. We are stuck between levels three and four. We have maxed out level three technology (except for incremental upgrades). However, the leap from level three to level four is much wider than the previous leaps.
There are several reasons we cannot go sans-drivers yet. But that is not what I want to talk about in this column. What I do want to discuss is how we are moving to level four.
Some will argue that the technology exists to have level four. I would argue that we have some success in some of the segments that make level four possible. However, there are some critical components still missing or too rudimentary to make full level four a reality – primarily today’s artificial intelligence.
The current focus is on sensors, which are where most of the money is at present. Cameras, microphones, shock and vibration, environmental (temperature, moisture, air), and air interface (RF) sensors have all been elevated to highly functional levels. So, I would be willing to say that this part of the goal has reached level four. But what has not is real-time preemption (comprehension, if you will), and that is the tipping point.
There are plenty of examples of driverless vehicle successes in some segments. However, they are all in some sort of controlled environment. That shows such vehicles are capable of operating without human intervention. However, controlled applications do not require situation comprehension or intuition. Nor do they require a fat database of possible scenarios, or AI, really. So, none of this is sufficiently sophisticated to be unleashed in uncontrolled, real-life environments.
There is, however, quite a bit of quiet activity going on in this space. Most of it has to do with tweaking existing technologies and adding components toward the goal of level four, as well as figuring out ways to make vehicles recognize a situation and react based upon the highest probability of being correct.
That may sound simple, but it is not – especially the real-time situation-and-reaction relationship. The reason is simple enough, though: the multiplicity and complexity of possible scenarios – i.e., the need to be able to predict the future. Thus, we are stalled here.
Of course, no one, not even AI, can predict the future. However, the use case for AI in autonomous vehicles requires predicting the most probable outcome in any given scenario with a high success rate. That requires some awareness of future outcomes. So far, AI, for all its capabilities, cannot do that well.
The solution is, of course, AI combined with complex algorithms. And, speaking of complex algorithms, it just so happens that the TÜV (this is the Technischer Überwachungsverein or Technical Inspection Association of Germany and Austria – the vehicular inspection and product certification organization) is working on exactly that.
One of the issues I have had with full level four and five autonomous vehicles is that it will take forever to amass enough scenarios to make the vehicle sufficiently intelligent to achieve five-nines, or better, accuracy.
Even with AI, ML, machine intelligence (MI), fuzzy logic, deep learning, and everything else, current AI-empowered vehicles cannot manage all of the scenarios. Yet for the autonomous vehicle to thrive, it must be able to predict with (ideally) perfect accuracy. There is also the argument that even mediocre AI-controlled AVs are as capable as, if not more capable than, many drivers.
That may or may not be true but let us play with some numbers. Consider this: there are, today, roughly 1.42 billion cars in operation worldwide, including 1.06 billion passenger cars and 363 million commercial vehicles. Depending upon where you live, the number varies but the global average is 18 car-related deaths per 100,000 people – all with differing accident conditions.
Doing a bit of math, that puts the global death toll around 260,000, give or take, for the 1.42 billion cars. Percentage-wise, that is roughly 0.018 percent. And that is for fatal incidents alone. Add non-fatal incidents, and those not reported, and the total is likely two or three times higher. Thus, for AVs to break even, that is the number they would have to beat.
Now, this is back-of-envelope math, and it assumes every vehicle is autonomous. But my point is that scaling AI in AVs will have to be very intelligent.
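The rough arithmetic above is easy to sanity-check in a few lines (the figures are the approximate global estimates quoted above, not precise statistics):

```python
# Rough global figures quoted above (estimates, not precise statistics).
cars_in_operation = 1.42e9   # total vehicles worldwide
deaths_per_100k = 18         # global average car-related deaths per 100,000 people

# Applying the per-100,000 rate across the vehicle fleet, as the column does:
annual_deaths = cars_in_operation * deaths_per_100k / 100_000
print(f"Estimated annual deaths: {annual_deaths:,.0f}")   # about 255,600

# Expressed as a fraction of the fleet:
fatal_rate_pct = annual_deaths / cars_in_operation * 100
print(f"Fatal-incident rate: {fatal_rate_pct:.3f}%")      # about 0.018%
```

That reproduces the roughly 260,000 figure, and, as a share of the 1.42 billion cars, it works out to about 0.018 percent.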
However, even with the most basic understanding of connected and autonomous vehicles, it just makes sense that simply operating according to known scenarios, obstacles, and potential causes of accidents will not work. They must be able, as humans are, to react to uncommon scenarios and to predict unknown ones.
Common knowledge is that AI and its cohorts are the only way this can be accomplished (short of using the Vulcan mind-meld on the vehicle computers).
At the base level, the answer to this, and to just about every other computer managed device, is the algorithm.
It just so happens that the TÜV has taken on pondering the worst thing that could happen, at any and every given moment, and figuring out how to get out of it without endangering or obstructing traffic.
They have developed a new self-driving car algorithm, dubbed the Continuous Learning Machine AI tool, that automatically labels and mines training data to enable connected autonomous vehicles to react to unpredicted events, such as bicycles swerving onto the road amid traffic or kids running into the street. In essence, it is all about predicting doom – and there is a lot of it. However, the solution is not to create as many scenarios as can occur. As I mentioned earlier, that is a rather daunting challenge. A better solution is to use AI to create patterns.
The fundamentals of this are not particularly complicated. What is complicated is having the AI create patterns – pattern recognition – by learning from large quantities of data, then using that learning to “guess” what is most likely to occur. The result is much less static data and computation fast enough to approximate real time.
Just as no human is born with the knowledge to drive a car safely, neither is a computer. And, thus far, computers are less effective than humans simply because a virtually unlimited sink of data must be stored to consider every possible scenario, whereas humans can deduce from far less data, all things being equal.
So, it is about the ability to use rational thinking – to deduce something from a collection of experiences. Something only the human mind is capable of. Sure, we can approximate that with huge volumes of data, and complicated neural networks, and the like. But the problem is still that a huge amount of data is needed for computers to even come close to deductive reasoning.
The traditional approach is to drive and drive and drive. While that is possible, the number of miles and time it takes to build even a reasonable database is prohibitive. A better approach is to collect data from multiple, in this case, millions at least, sources.
Neither of these approaches is practical at the moment, unfortunately. Even doing this virtually has its challenges, because one would have to program a nearly unlimited range of scenarios.
This is why the TÜV is going in this direction. There is a need for AI algorithms, one of which has been developed by Germany’s Technical University of Munich (TUM), to be flexible. The purpose of such algorithms is to constantly predict the worst possible situation. However, that is extremely computationally intensive (quantum computing, anyone?).
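As a toy illustration of what “constantly predict the worst possible situation” means computationally (this is my own one-dimensional sketch, not the TUM algorithm), assume every other road user makes the most hostile move physics allows over a short horizon, then check whether the ego vehicle can still stop clear:

```python
def worst_case_advance(v, a_max, t):
    """Furthest a road user could travel toward the ego vehicle in time t."""
    return v * t + 0.5 * a_max * t ** 2

def stopping_distance(v_ego, a_brake):
    """Distance the ego vehicle needs to brake to a full stop (a_brake > 0)."""
    return v_ego ** 2 / (2 * a_brake)

def is_safe(gap, v_ego, v_other, a_other_max, a_brake, horizon=3.0, margin=2.0):
    """Safe only if, even when the other road user closes the gap as fast as
    physics allows over the horizon, the ego vehicle can still stop short."""
    worst_gap = gap - worst_case_advance(v_other, a_other_max, horizon)
    return stopping_distance(v_ego, a_brake) + margin < worst_gap

# Pedestrian 60 m ahead, standing still but able to accelerate at 2 m/s^2,
# ego vehicle at 15 m/s with 6 m/s^2 of braking available:
print(is_safe(gap=60.0, v_ego=15.0, v_other=0.0, a_other_max=2.0, a_brake=6.0))
```

Even this one-dimensional version hints at the computational load the column mentions: a real planner must run such worst-case checks continuously, for every road user, over many candidate trajectories.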
The trick is to constantly improve the algorithm by giving it enough data on uncommon events and given actions to execute if such events occur. This would allow these algorithms to improve by increasing accuracy and the number of cases it can predict. This is an excellent use case for big data.
As I mentioned, it is relatively straightforward to develop a vehicle that can operate in a known environment. But that will not work in edge cases. Thus, levels four and five of autonomous driving in real-life environments are still a long way off.
Finally, there are non-technical issues yet to even be fully realized: legal issues, ownership issues, responsible-party issues, liability, and more. These issues are constantly being debated and will likely be debated for some time before solutions are reached.
Yes, we are still a long way off from levels four and five AVs. As a reminder, level 4 requires high automation; level 5 requires full autonomy.
Artificial Intelligence, one of the hottest leading-edge technologies, can teach a camera to spot a cheetah, help a doctor make a diagnosis or allow a car to be driven autonomously. One new platform can now make companies with field service operations, such as telecom services, become more efficient, according to David Simmons, director of innovation and technology for telecommunications at Black & Veatch.
“We are using intelligent automation to be able to learn as we are performing scopes of work for our clients, whether it involves self-performing or subcontracting that work out,” Simmons told AGL eDigest. “Learning how to best connect that scope of work with the best resource depends on automatically accessing a number of factors, such as location, performance, skills and safety.”
The Intelligent Service Automation and Control (ISAC) platform provided by Zinier takes those overall variables into account in real time as work orders pass through it. This aligns the resources with the right work at the right time. By deepening real-time visibility into the field, ISAC anticipates service disruptions through AI-driven recommendations, allowing improved operational efficiencies by automating manual front-office, back-office and field-office tasks.
“AI analyzes the data as a human would, but without the emotion or biases of a human,” Simmons said. “We look at it as an opportunity for our subcontract partners to get consistent work with near-real-time payment, because we close out our work orders so effectively and efficiently. We want to leverage the technology to be their preferred partner. We want to make it easy for the subcontractors to work with us.”
For example, if a crew deployed to a site finds it is missing a part, it can report that back in real time to the Zinier platform, which automatically checks inventory. The component is either dispatched to the site, or the crew is diverted to work at a nearby site while the part is back-ordered.
“The whole idea is to keep the subcontractor out from behind the wheel of the truck and working at the site,” Simmons said. “That’s what we all want. We need to be efficient, so the crews are not sitting around waiting. They could spend a week working on the two sites, instead of waiting an extended period for the part at the first site and not getting paid promptly for either.”
The services firm is able to keep historic diagnostic data for all telco equipment, ensuring the appropriately skilled technician shows up for each maintenance job. The ISAC platform performs predictive analytics to send technicians to perform maintenance before problems occur.
“We want to make sure the crews have all the components they need to successfully perform their jobs, including the engineering artifacts (drawings, structural analysis), and that they have all the permits in place, the right materials, as well as the necessary documentation to validate the work performance. It should all be available in one spot,” Simmons said. “Then you throw in location-based services to be able to evaluate their proximity to the location of the work that we have scheduled at their disposal.”
Figuring out what site is most optimal for the crew to go next relies on a set of data elements used within the platform.
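As a rough sketch of how such a dispatch decision might weigh those data elements (the field names, weights, and scoring here are hypothetical illustrations, not Zinier’s actual model), picking the next site reduces to scoring every candidate work order against the crew’s attributes:

```python
def score_site(crew, site, w_distance=-1.0, w_skills=5.0, w_parts=3.0):
    """Score a candidate site for a crew; higher is better.
    Fields and weights are illustrative assumptions."""
    skills_ok = crew["skills"] >= site["required_skills"]  # crew covers all skills
    parts_ok = crew["parts"] >= site["required_parts"]     # crew carries all parts
    return (w_distance * site["distance_km"]
            + w_skills * skills_ok
            + w_parts * parts_ok)

def next_site(crew, sites):
    """Pick the highest-scoring candidate site for the crew."""
    return max(sites, key=lambda s: score_site(crew, s))

crew = {"skills": {"fiber_splicing"}, "parts": {"connector_kit"}}
sites = [
    {"id": "A", "distance_km": 4, "required_skills": {"fiber_splicing"},
     "required_parts": {"connector_kit"}},
    {"id": "B", "distance_km": 2, "required_skills": {"rf_sweep"},
     "required_parts": set()},
]
print(next_site(crew, sites)["id"])  # nearer site B lacks the skill match, so A wins
```

A real platform would fold in many more factors (safety record, performance history, permit status), but the shape of the decision is the same: score every candidate in real time and send the crew where the score is highest.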
“This is going to prevent folks from having to search for the information, calling back and forth, to do their job,” Simmons said. “Information on how to get the work done is at everyone’s fingertips.”
AI: the Right Tech, the Right Time
With carriers pressured to deploy higher data speeds over faster, cheaper networks, it is the telecom services companies’ jobs to facilitate the transition to next-generation 5G wireless communications technology.
“We feel like our partners in the field [tower/fiber crews] are at such a disadvantage compared to the people in the office,” Simmons said. “We have to close that gap. They are the critical linchpin into 5G and the next generation of telecom. In that context, there is so much opportunity from a work perspective that we could do 50 percent more work, which means we could become more efficient with the current workforce and hire additional workers.”
AI is necessary for building out small cells, where the profit margin per site is slim. In the near term, the industry is no longer installing tens of thousands of sites annually at a macro level, but instead is looking at installing hundreds of thousands of sites from a small cell perspective.
“The paradigm in which the work is done for small cells has to change,” Simmons said. “Technology has to be at the forefront of that change. We can’t do that efficiently and effectively if it doesn’t scale.”
Black & Veatch launched its first round of deployments using the AI tool last month with a team that is performing fiber splicing. In 2020, the firm intends to partner with its subcontractors in macrocells, small cells and fiber to optimize collaboration on the Zinier platform.
“We have to make the transformation,” Simmons said. “With our partnership with Zinier and with the technology, we are confident we can make a significant, positive improvement throughout the supply chain.”
We are all aware that AI has been pervasively deployed in the current generation of assistive technology from Amazon, Google, and others. Until now, these devices have been relatively low-tech and simple (including their lack of security).
However, that is about to change. In anticipation of the upcoming holiday season, the major players, Amazon, Facebook, and Google are all upping the game. One might say that AI 2.0 is about to be released.
These next-generation devices go from listen-and-reply to becoming smart display devices, adding video.
Amazon unveiled the Echo Show, and Google is releasing the Home Hub, Pixel 3, Pixel Stand, and Pixel Slate. Facebook rolled out its Portal and Portal+ devices for Facebook Messenger video chat and Alexa, with tablet-sized, rotating screens. Portal is also connected to Newsy.
The Google Home Hub is connected to a number of apps that help with everything from cooking to smart home management to ride sharing. It, too, comes with a smart screen.
Amazon’s Echo Show offers new video visuals and the ability to serve as a hands-free video calling center. It can also integrate with smart homes.
However, what all of these devices still have in common are security issues. Adjacent to all of these evolutionary devices is the specter of compromise. Recall that Facebook recently exposed 50 million accounts, with 30 million of them having data stolen. In a similar scenario, Google+ was pulled one day before its debut because a security hole was discovered in the software.
Do not think Amazon escapes security scrutiny. The Echo has been criticized for some time now for the way it captures data and uses it for any number of purposes. And, tangentially, one of Amazon’s more underhanded actions was the recent discovery that an algorithm in its hiring and recruitment processes had, for years, penalized applications with “women” in them. Not a security issue, but certainly an unconscionable course.
However, back to privacy issues. While awareness of this is growing, it is not as significant as it should be. A recent PricewaterhouseCoopers survey noted that only 10 percent of non-users avoid smart speakers due to privacy concerns. In other words, 90 percent of non-users either have no clue about the potential security issues or do not care. That is a disturbing metric. Supporting that, adoption of such assistants has grown steadily, and analysts do not see that abating.
These device manufacturers, as well as the app developers linked to them, do not seem to show much of a penchant for upping security or protecting private data. Most of what they do is damage control. All Facebook did was limit the initial use cases for Portal, keeping out much of its knowledge of one’s social life. That is why Portal did not debut with facial recognition software, as had initially been expected.
The big challenge for these segments is trust. I will grant that it is difficult for them to be all that they can be while maintaining security and privacy. Security is the easier of the two. Privacy is more challenging because the users want private and personal data to be available to varying degrees, depending upon personal preferences. In addition, the majority of users cannot be expected to understand how to manage their privacy until it becomes a function that they can understand in very simple terms.
This is a complex wheelhouse that requires a great deal of understanding, by both the user and the provider, regardless of whether it is an app or a device. Add to that the impending Internet of Everything/Everyone (IoX) and it gets even murkier.
In the end, part of it will fall on the user, part on the provider. In any event, personal and private data needs to be, fundamentally, protected and unavailable unless the user, specifically, allows access to it. Storing it anywhere but with the user is not cool. That is the pivotal issue that the vendors need to focus on.
Artificial intelligence platforms, applications, programs, tools, functions, systems, whatever one wants to call them, have been the buzzword of technology for some time now. In fact, AI is considered one of the great enablers for the upcoming 5G ecosystem.
However, knowing what I know about this technology, I have taken a rather conservative position, in my writings, on just how much faith has been put in AI to solve the world’s problems.
Google, Microsoft, Amazon, and others have put AI into our everyday lives with AI-enabled devices such as Siri and Echo, but peeling back the layers, such implementations are still on the basic scale, even though their creators would have you believe they are the future. Not quite true, but it follows much of the AI hype we have been hearing for the last couple of years.
In that vein, I recently received a report from an organization called Riot Research. They claim the AI bubble is about to burst and bring forth a new era of AI development and implementation. Digging down a bit, I found no shortage of opinions supporting this.
One of their points is that the AI hype has generated unrealistic expectations of what AI is, and will be, capable of – at least for the next five years. Interesting point. Let us look at some of the data that supports the Riot observation.
First of all, it is true that AI cannot happen without deep learning and neural networks; the integration is often referred to as machine intelligence. (See my recent PowerPoint presentation on these technologies.) This, again, plays to the 5G ecosystem, where much of the intelligence will be distributed and a high level of intelligence will be required at places like the edge. To be effective (given the overwhelming integration of platforms, technologies, applications, and the like), AI will have to become self-aware to some degree. So far, that is not the case, for either 5G or other platforms.
A classic case of that is the reference to AI (deep learning) recognizing an object. An oft-used example is of a cat. A few years ago, AI (in this case a neural network) was able to recognize the face of a cat from video streams. That was heralded as a breakthrough. But, cutting to the chase, AI may be able to identify a cat, as a cat, from its database or learning algorithms. However, it is still quite incapable of knowing whether the cat is real, or just a picture (because it has no awareness of what a cat is), without assistance (human, or other) – back to the real issue, self-awareness. The concepts are solid but the technology lags.
Now, before I get a flood of responses saying we have self-aware systems, I want to clarify something. I am not talking about the kind of self-awareness that one can find in a thermostat that regulates the very temperature that it measures. While that is technically correct, it is not the point of the self-awareness I am discussing. I am talking about real self-awareness – ultimately, the concept that machines realize that the human race is a threat and would not hesitate to eliminate all forms of life on the planet in order to protect its autonomy (as is depicted in so many sci-fi scenarios).
Of course, that scenario is far out (if ever possible) on the radar screen but it defines the ultimate in machine intelligence. However, initial implementations of this path are visible and will be the core of the AI of tomorrow. How the rest turns out is anybody’s guess.
Do not get me wrong. We have a really good start on AI and its capabilities. However, its current capabilities have been oversold, and this has led to the current bubble. Yes, there is quite a bit of low-hanging fruit available, which is what has VCs and other investors throwing money at these platforms. Nevertheless, eventually we are going to have to separate the wheat from the chaff, and that will be the reality check coming down the line.
Executive Editor/Applied Wireless Technology
His 20-plus years of editorial experience includes being the Editorial Director of Wireless Design and Development and Fiber Optic Technology, the Editor of RF Design, the Technical Editor of Communications Magazine, Cellular Business, and Global Communications, and a Contributing Technical Editor to Mobile Radio Technology, Satellite Communications, and computer-related periodicals such as Windows NT. His technical writing practice client list includes RF Industries, GLOBALFOUNDRIES, Agilent Technologies, Advanced Linear Devices, Ceitec SA, Lucent Technologies, Qwest, the City and County of Denver, Sandia National Labs, Goldman Sachs, and others. Before becoming exclusive to publishing, he was a computer consultant and regularly taught courses and seminars in applications software, hardware technology, operating systems, and electronics. His credentials include a BS in Electronic Engineering Technology and an A.A.S. in Electronic Digital Technology. He has held a Colorado Post-Secondary/Adult teaching credential, was a member of IBM’s Software Developers Assistance Program and Independent Vendor League, and was a Microsoft Solutions Provider Partner. He is a senior/life member of the IEEE, the Press Liaison for the IEEE Vehicular Technology Society, and a member of the IEEE Communications Society, the IEEE MTT Society, the IEEE Vehicular Technology Society, and the IEEE 5G Community. He was also a first-class FCC technician in the early days of radio. Ernest Worthman may be contacted at: [email protected], or [email protected]