Main trends in Edge AI to watch in 2022

3 February 2022, by David Purón

Edge AI, or AI on the Edge, has emerged as a game-changing technology for the industrial world. Industries with highly distributed critical assets will be the main beneficiaries of taking advanced computing to the Edge.

Since it was established as a field of study at universities in 1956, Artificial Intelligence has gone through periods of optimism and pessimism in equal measure. There is no doubt that today we are witnessing one of great optimism.

Data Scientist is the third most sought-after job worldwide (in fact, in our recent study on the State of Edge Computing in Spain, Data Scientist was the most in-demand profile amongst Spanish companies), in a market experiencing exponential growth that is expected to reach 190 billion dollars by 2025.

Such is the prominence of AI in the market that it no longer makes sense to speak of it as a single technology, but rather as many branches that serve different uses and different industries.

This is perfectly illustrated by Gartner's Hype Cycle of 2021.

Among the trends identified as the most mature and closest to production are the ones we recognise from our daily routine: the natural language processing we use when we talk with increasingly human-like chatbots, the computer vision that makes it possible to automate real-time video processing, and the semantic search that leads to better search results.

At the other extreme are more futuristic technologies that will not reach maturity for at least 10 years. Some interesting examples are AI TRiSM (Trust, Risk and Security Management) technologies, which make it possible to govern AI models and make them more resilient to security and privacy attacks, and Transformers, which make it possible to adapt AI models to their context and will have a great impact on applications such as translators, automatic document creation and the analysis of biological sequences.

Between the two extremes are other enabling technologies that will take 2 to 5 years to reach market maturity, which we can call the "Near Future of AI". Among these are human-centred AI, generative AI, the orchestration and automation of AI and, leading all the others on the maturity curve, AI on the Edge, also known as "Edge AI". In 2021, Edge AI stood out as the technology set to mature soonest.

Edge AI and the distributed intelligence revolution in the industrial world

Edge AI or AI on the Edge can be summed up as the ability to execute artificial intelligence algorithms on devices (IoT devices, edge devices) that are very close to the source of the data.
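
To make this idea concrete, here is a minimal sketch, assuming a hypothetical quantised TensorFlow Lite model file (anomaly_detector.tflite) and a hypothetical read_sensor() helper, of how an edge device could load a model locally and make a decision right next to the data source, without any round trip to the cloud:

```python
# Minimal sketch of "AI on the Edge": inference runs next to the data source.
# Assumptions (hypothetical): a quantised TFLite model file and a local sensor reader.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight runtime for edge devices

def read_sensor() -> np.ndarray:
    # Placeholder for a real driver (Modbus, OPC UA, GPIO...); returns one feature vector.
    # The (1, 8) float32 shape is an assumption made for this example.
    return np.random.rand(1, 8).astype(np.float32)

interpreter = Interpreter(model_path="anomaly_detector.tflite")
interpreter.allocate_tensors()
input_idx = interpreter.get_input_details()[0]["index"]
output_idx = interpreter.get_output_details()[0]["index"]

reading = read_sensor()
interpreter.set_tensor(input_idx, reading)
interpreter.invoke()                            # the decision is made on the device itself
score = float(interpreter.get_tensor(output_idx)[0])
print("anomaly score:", score)                  # only the result, not the raw data, leaves the node
```

Only the inference result needs to travel over the network; the raw readings can stay on the device.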

This technology is growing supported, above all, by a daunting statistic: more than 60% of industrial organisations do not have a Cloud infrastructure in place that helps them innovate efficiently.

So, if we take a magnifying glass to Edge AI projects, what are the most disruptive trends that we will witness in 2022 and 2023?

Below is a summary of our top 5:

1. Critical industries will be major drivers: from SCADA to Edge AI

At Barbara IoT we are finding repeated patterns in the industries that are at the forefront of Edge AI. All of them manage many critical, distributed assets. In other words, they are industries that face great challenges in terms of technological fragmentation, scalability and cybersecurity, challenges that can be minimised by executing AI algorithms at the edge. We forecast that these industries, which are already very familiar with concepts like IoT and Edge Computing, will develop very ambitious and transformative use cases.

The SCADA systems that have been in use since the 1980s serve similar purposes in terms of data capture and processing. However, they need to be complemented by more modern technologies to respond to increasingly demanding requirements for interoperability, openness and security, and this is where Edge AI can help, multiplying the value of these systems.

2. Thin Edge will complement Thick Edge

There are different interpretations of what the "edge" means when we refer to Edge AI. Traditionally, the edge has been identified with the network operator infrastructure closest to the user. For example, in 5G networks operators are rolling out a multitude of "Multi-access Edge Computing" nodes used to process data close to where it is generated. These nodes are installed on servers very similar to those found in a data centre designed to host cloud services, and they have both the potential and the resources to process complex AI algorithms. This is what some analysts call the "Thick" Edge.

However, Edge nodes of another type have recently begun to appear: nodes directly connected to sensors and switches which, installed on low-power devices such as gateways or concentrators, run simpler AI algorithms with shorter, near real-time response times. This new type of Edge, called the "Thin" Edge, will make it possible to rapidly and flexibly tackle larger-scale projects that involve remote locations or requirements for high security and data isolation.

3. Edge Mesh as the new paradigm to enable distributed Artificial Intelligence

Edge AI is traditionally based on decision models trained with large volumes of data. The model, consisting of a series of mathematical formulas, is installed on the Edge nodes. From there, each node is able to make its own decisions depending on the data it receives and the model installed on it.

The new paradigm, known as Edge Mesh, makes it possible for a node's decision to be conditioned by other nodes, as if they formed a lattice network. A good example for understanding the power of this new architecture is a smart traffic system.

An Edge node can decide the timing of a traffic light using AI algorithms that take into account the number of cars and pedestrians detected by its sensors. However, this decision could be usefully complemented by the decisions being made by other nodes in nearby streets.

The aim of Edge Mesh is to distribute intelligence amongst various nodes in order to offer better performance, response times and fault tolerance than with more traditional architectures.
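
As a rough illustration (our own, with hypothetical names such as NeighbourDecision and mesh_green_time), the following sketch shows how a traffic-light node could blend its locally computed green-phase duration with the decisions already taken by nearby intersections:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NeighbourDecision:
    node_id: str
    green_seconds: float   # timing decided by a nearby intersection
    distance_m: float      # how far away that intersection is

def mesh_green_time(local_green_seconds: float,
                    neighbours: List[NeighbourDecision],
                    neighbour_weight: float = 0.3) -> float:
    """Blend the local decision with neighbour decisions, weighting closer nodes more."""
    if not neighbours:
        return local_green_seconds
    # Inverse-distance weighting of the neighbours' decisions (an illustrative choice).
    weights = [1.0 / max(n.distance_m, 1.0) for n in neighbours]
    neighbour_avg = sum(w * n.green_seconds for w, n in zip(weights, neighbours)) / sum(weights)
    return (1 - neighbour_weight) * local_green_seconds + neighbour_weight * neighbour_avg

# Example: the local model suggests 30 s of green, but congested nearby streets push it up.
neighbours = [NeighbourDecision("node-12", 45.0, 200.0), NeighbourDecision("node-17", 40.0, 350.0)]
print(mesh_green_time(30.0, neighbours))  # a longer green phase than the purely local decision
```

The weighting scheme here is purely illustrative; the point is that each node still decides locally, but its decision is conditioned by the rest of the mesh.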

4. Lifecycle management using MLOps is increasingly important

As industry moves towards rolling out Edge AI with more distributed nodes and more complex training algorithms, the ability to maintain the lifecycle of these trained models, and the devices that execute them, will be key to the future of this technology.

In this sense, the projects and companies that apply the DevOps philosophy to the development, roll-out and maintenance of AI algorithms will have a clear advantage.

This way of working is called MLOps, a combination of Machine Learning and DevOps. 

But what exactly is it? Basically, it aims to reduce the development, testing and deployment times of AI on the Edge models through the continuous integration of the teams and environments involved in development, testing and operations.
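
The sketch below is a deliberately simplified, hypothetical illustration of that idea: each pipeline run trains and evaluates a model, and only a version that passes a quality gate is packaged and pushed to the edge fleet. The function names and the quality threshold are assumptions made for the example, not a reference to any specific MLOps product.

```python
# Minimal MLOps-style sketch: train, evaluate, and only promote models that pass a quality gate.
import hashlib
import json
import time

QUALITY_GATE = 0.90  # assumed minimum accuracy before an edge roll-out

def train_model(training_data: list) -> dict:
    # Placeholder for real training; returns a serialisable "model" artefact.
    return {"weights": [0.1, 0.2, 0.3], "trained_at": time.time()}

def evaluate(model: dict, test_data: list) -> float:
    # Placeholder for a real evaluation step; returns an accuracy-style score.
    return 0.93

def package(model: dict) -> dict:
    # Version the artefact by its content hash so every roll-out is traceable.
    blob = json.dumps(model, sort_keys=True).encode()
    return {"artefact": blob, "version": hashlib.sha256(blob).hexdigest()[:12]}

def deploy_to_edge(package_: dict, nodes: list) -> None:
    # In a real pipeline this would push the artefact to a fleet-management service.
    for node in nodes:
        print(f"deploying model {package_['version']} to {node}")

model = train_model(training_data=[])
score = evaluate(model, test_data=[])
if score >= QUALITY_GATE:
    deploy_to_edge(package(model), nodes=["gateway-01", "gateway-02"])
else:
    print("model rejected: quality gate not met")  # nothing reaches the devices
```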

5. Edge AI enables Sovereign Data Exchange

There is no doubt that data sharing will be paramount for improving processes in industry sectors with many stakeholders within the value chain.

Let's look at the electricity grid model of the near future, the Smart Grid. To receive or offer a better service, it is essential for suppliers to be able to analyse and process information from a number of stakeholders such as prosumers, operators, distributors, aggregators, etc. Without transparent, agile data exchange, it will be impossible to reach the grid optimisation required by 2050.

Edge AI makes decentralised data processing possible, which will help overcome some of the obstacles the industry currently faces, such as data security, privacy and sovereignty.
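
As a simple illustration of that principle (our own sketch, with hypothetical names such as local_smart_meter_readings), the code below processes raw smart-meter readings on the edge node and shares only an aggregated, non-sensitive summary with other stakeholders:

```python
# Sketch of sovereign data exchange: raw readings never leave the edge node,
# only an aggregated summary is shared with other parties in the value chain.
import json
import statistics

def local_smart_meter_readings() -> list:
    # Placeholder for data captured on site (e.g. 15-minute consumption values in kWh).
    return [0.42, 0.51, 0.48, 0.95, 0.40]

def build_shareable_summary(readings: list) -> str:
    summary = {
        "samples": len(readings),
        "mean_kwh": round(statistics.mean(readings), 3),
        "peak_kwh": max(readings),
    }
    return json.dumps(summary)  # this, not the raw series, is what gets exchanged

readings = local_smart_meter_readings()   # raw data stays on the device
print(build_shareable_summary(readings))  # aggregated insight shared with operators/aggregators
```

The raw time series stays under the control of its owner; what circulates across the value chain is only the derived insight.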

If you are looking to develop an IoT project, contact us.