AI at the Edge: cornerstone of the new industrial revolution

Advances in Artificial Intelligence continue to act as a driver of the industry's digital transition. With the power of AI and analytics pushing into IT operations, it is only a matter of time before artificial intelligence is integrated throughout all industrial processes.

Technology

A new generation of processing units


Over the past few years, growing computational power and the exponential advance of hardware technologies have combined to create a new demand in the technology sector: Artificial Intelligence on Edge devices.

With the advent of the IoT and continuous data generation, machine learning models benefit from being able to train on millions of new data points and thus constantly update their weights. But to do so, they need to repeat this training task many, many times.

CPUs are limited in how much of this work they can handle, but the picture is changing thanks to GPUs, which are designed to process large blocks of data in parallel.

Also influential has been the emergence over the last decade of a different type of chip designed specifically for AI tasks, such as neural processing units. These accelerators are now found in many computers and devices, which means those devices can begin to train and run inference on machine learning models locally.

All of this leads to a clear conclusion: micro data centres will become increasingly important, thanks to their ability to offer low latency, deploy AI models and process large volumes of data without transporting it to the cloud.

AI on your devices

Today, the cloud model is the most common: data is sent to the cloud for computation and, through an API, a Machine Learning model is trained and served.

Using this model, the device simply transmits the data over a network to the service and lets the service do the calculations, and then the results are sent back to the user. Cloud computing has traditionally done most of the heavy lifting for advanced models.

With AI at the Edge, by contrast, there is no need to send the data over the network for another machine to do the processing. The data can remain where it is generated, and the device itself performs the calculations.

This opens up a new paradigm: the IoT makes manual data entry largely obsolete, leaving artificial intelligence (AI) to take care of decision making autonomously, which is a huge step forward and a substantial cost saving.
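As a simple illustration of the difference, the sketch below contrasts the two approaches: in the cloud model the device posts raw data to a remote API, while in the Edge model the same prediction runs locally on the device. The endpoint URL, model file and feature names are hypothetical, shown only to make the contrast concrete.

```python
# Minimal sketch: cloud inference vs. on-device (Edge) inference.
# The endpoint, model path and feature names are illustrative assumptions.
import json
import urllib.request

import joblib  # assumes a trained scikit-learn model saved to disk


def predict_in_cloud(sample: dict) -> dict:
    """Cloud model: ship the raw data to a remote API and wait for the answer."""
    request = urllib.request.Request(
        "https://example-ml-service.com/v1/predict",  # hypothetical endpoint
        data=json.dumps(sample).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


def predict_on_edge(sample: dict, model_path: str = "edge_model.joblib") -> float:
    """Edge model: the data never leaves the device; inference runs locally."""
    model = joblib.load(model_path)
    features = [[sample["temperature"], sample["vibration"]]]
    return float(model.predict(features)[0])
```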

Benefits of AI at the Edge

Removing the cloud part of the service and moving it to Edge devices brings three main benefits:

1) Cost savings
2) Reduced latency and network autonomy
3) Increased privacy and better security

[Figure: Fog Computing example]

Cost savings:

AI on the Edge alleviates workloads on cloud computers. Networks are not overloaded and CPU, GPU and memory usage decreases significantly as their workloads are distributed across devices on the Edge.

Intelligence at the Edge brings data (pre)processing and decision making closer to the data source, reducing communication delays. This (pre)processing allows data to be aggregated and condensed before being sent to central IoT cloud services or stored.

Eliminating these overloads significantly improves the speed of decision making, which translates into significant cost savings in critical processes such as industrial ones.

Devices and sensors often produce more data than it is economical to transmit to the cloud. AI at the Edge solves this problem by applying analytical algorithms that process incoming sensor data and send only decisions or alarms instead of raw data.
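A minimal sketch of this idea is shown below: a window of raw sensor readings is aggregated on the device and only an alarm message is published when a threshold is exceeded. The sensor, threshold and publish() callback are assumptions for illustration, not part of any specific product.

```python
# Sketch of "send decisions, not raw data": readings are aggregated locally and
# only alarms cross the network. Threshold and publish() are assumptions.
import math
from statistics import mean

WINDOW = 100           # raw samples aggregated locally before any decision
ALARM_THRESHOLD = 7.5  # assumed engineering limit, e.g. vibration in mm/s RMS


def process_window(samples: list[float], publish) -> None:
    """Condense a window of raw readings and transmit only the verdict."""
    rms = math.sqrt(mean(s * s for s in samples))
    if rms > ALARM_THRESHOLD:
        # One small alarm message replaces a hundred raw readings.
        publish({"event": "vibration_alarm", "rms": round(rms, 2)})


# Example: 100 raw samples in, at most one tiny message out.
process_window([8.1] * WINDOW, publish=print)
```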

At Barbara we have designed and developed our technology to run on the Thin Edge (the part of the Edge closest to data capture), capable of processing data at high frequency (>1 kHz) and acting with ultra-low latencies (<1 ms), all with the scalability to manage thousands of nodes remotely and thus minimise field visits.

Latency and network autonomy:

When the cloud performs all the computation for a service, its central locations can become overloaded: networks carry heavy traffic to move data from the source to the machines that perform the task, and become busy again when the results are sent back to the user. Edge devices eliminate these back-and-forth transfers.

AI at the Edge also makes it possible to balance the load across users, applications or the network in response to changes in the central infrastructure, to adapt to temporary failures or maintenance procedures, and to make decisions based on alarms or on pre-processed information exchanged between Edge devices.

For example, when devices and sensors are located in places with intermittent connectivity, they need local data processing and decision making to continue functioning. Edge computing can provide data buffering as well as predictive rules or algorithms to enable autonomous operation.
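One possible shape of such buffering is sketched below: readings are queued on the node while the uplink is down and flushed once it returns, so local decision making can continue throughout the outage. The is_online() and send() callbacks stand in for whatever transport a real deployment would use.

```python
# Sketch of local buffering for intermittent connectivity. The transport
# callbacks (is_online, send) are placeholders, not a specific product API.
from collections import deque


class EdgeBuffer:
    def __init__(self, max_samples: int = 10_000):
        # Bounded queue: the oldest data is dropped first if an outage lasts too long.
        self._queue = deque(maxlen=max_samples)

    def record(self, reading: dict, is_online, send) -> None:
        """Keep every reading locally; forward buffered data only when possible."""
        self._queue.append(reading)
        if is_online():
            while self._queue:
                send(self._queue.popleft())
```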

Improved privacy:

Data is most vulnerable to theft when it travels. Even if it is not stolen, third parties can learn that something was transmitted between two parties, and even what kind of information is being sent over the network.

Deploying Machine Learning on location means that data, and the predictions made on that data, run a lower risk of being exposed in transit. At the Edge your data is not compromised and your relationship with the AI service provider remains protected.

With AI on the Edge, we have more control over who knows what about us, preventing third parties from accessing confidential or sensitive information.

Barbara has been conceived with security by design and integrates protection mechanisms in accordance with the recommendations of leading industry bodies and standards, such as IEC-62443, reducing the risks associated with cyber-attacks against your equipment, your intellectual property and the integrity of your data.

The new model of distributed Artificial Intelligence vs. the Cloud

The main drawbacks of the Edge compared to the Cloud are: lower computing power and heterogeneity of devices and technologies. Some detractors point out that although Edge computing is good, it still lacks the computing power available in a cloud system.

It is true that the Edge today cannot match the computing power of the cloud, so the cloud will continue to be responsible for creating and serving the most computationally intensive models, while lighter models are delegated to the Edge, which also handles smaller transfer learning tasks in a distributed manner. Nevertheless, Edge hardware gains computing power every day and will be able to handle increasingly complex applications.
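A hedged sketch of that split might look like the following: a heavy model is trained where the full history lives (the cloud), and a much smaller surrogate is fitted to its predictions so it can be shipped to and run on a constrained Edge device. The scikit-learn model choices and file name are illustrative assumptions, not a prescription.

```python
# Illustrative cloud/Edge split: heavy training in the cloud, a lightweight
# surrogate distilled for the Edge. Model choices here are assumptions.
import joblib
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor


def train_in_cloud(X_history, y_history):
    """Computationally intensive training over the complete data history."""
    return RandomForestRegressor(n_estimators=500).fit(X_history, y_history)


def distil_for_edge(heavy_model, X_sample, path: str = "edge_model.joblib"):
    """Fit a small surrogate that mimics the heavy model, then export it."""
    light = DecisionTreeRegressor(max_depth=6)
    light.fit(X_sample, heavy_model.predict(X_sample))
    joblib.dump(light, path)  # artefact later deployed to the Edge nodes
    return light
```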

The IoT has revolutionised the Edge computing model by introducing new usage scenarios with the following conditions in common:

  • Real-time: Industries where millisecond decision making is required.
  • Connectivity: Today's mobile networks are often patchy and cannot always guarantee connection to the cloud. Some services need to be always connected.
  • Data volume: The amount of data generated by sensors can be enormous, which could clog wide-area communication channels.
  • Context: a business context that follows the trend of decentralisation, allowing IoT data to be interpreted for decision-making.

The disruption of the cloud model does not mean the disappearance of the cloud, but rather its extension to the periphery. The cloud will continue to exist. In fact, certain functions are better performed in the cloud, such as the training of predictive algorithms, as usually only the cloud has all the necessary history.

AI at the Edge is thus a new model of fully distributed computing, supporting a wide range of communications and interactions. This enables such powerful functionalities as:

  • Autonomous and local decision making based on incoming IoT data and cached enterprise information.
  • Peer-to-peer networks: devices that communicate with each other about an object within their range.
  • Distributed queries across data that is stored on devices, in the cloud and anywhere.
  • Distributed data management, e.g. data ageing: what data to store, where and for how long (see the sketch after this list).
  • Self-learning algorithms that learn and run on the Edge, or in the cloud.
  • Isolation, with devices that are switched off for long periods of time, operating with minimal power consumption to maximise their lifespan.
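As a concrete reading of the data-ageing idea mentioned above, the sketch below applies a simple retention rule on the node: fresh readings are kept at full resolution, older ones are down-sampled, and anything beyond the retention window is summarised before being discarded locally. The windows and action names are assumptions for illustration only.

```python
# Hypothetical data-ageing rule: what to keep on the node, and for how long.
# The retention windows and action names are illustrative assumptions.
from datetime import datetime, timedelta

KEEP_RAW_FOR = timedelta(days=7)           # full-resolution data stays on the node
KEEP_DOWNSAMPLED_FOR = timedelta(days=90)  # after that, only a reduced series


def ageing_action(timestamp: datetime, now: datetime) -> str:
    """Decide what to do with a stored reading of a given age."""
    age = now - timestamp
    if age <= KEEP_RAW_FOR:
        return "keep_raw_on_node"
    if age <= KEEP_DOWNSAMPLED_FOR:
        return "downsample_and_keep"
    return "summarise_then_delete"  # only an aggregate survives locally
```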

With Barbara's distributed computing model, it is possible to go beyond data analytics and not only connect industrial assets, but also coordinate them to analyse situations and make decisions in real time.

Recommended reading: Distributed Computing as a Catalyst for Change in the Industrial Sector

A final challenge: the diversity of machinery and manufacturers

Reliance on Edge devices means a greater heterogeneity of technologies and manufacturers, which often makes the devices harder to integrate and connect with one another, and failures are more common.

To solve this problem, the Barbara platform has an extensive connector library that allows Edge Nodes to connect to any sensor, actuator, PLC or industrial equipment for the exchange of information and commands, as well as to IT platforms for hybrid cloud-edge use cases.

Barbara also has an Edge orchestrator that allows applications to be deployed, managed and configured centrally on the nodes with a single click. Our system allows applications from different authors to run in parallel, operating independently or coordinated to work synchronously. The deployment spaces are multi-language and multi-framework, and are isolated to protect the intellectual property of the application author.

The implementation of AI at the Edge revolutionises the industry as we know it, creating value and new opportunities for stakeholders.
If you are interested in this article, contact us to learn more about Barbara IoT Technology and request a personalised demo.