Intelligence moves to the Edge with IoT and Edge Computing. Edge Computing has emerged as one of the hottest technologies: it proposes that data does not need to be centralised entirely; instead, part of it can be processed on distributed computers, called Edge Nodes, located in the same place where the data is generated.
The world’s first IoT device was born in the early 1980s, when students at Carnegie Mellon University in Pennsylvania found a way for a Coca-Cola vending machine to communicate its stock through the campus computer network, thus avoiding unnecessary trips if the machine was running out of stock.
Today, more than half of the electronic devices manufactured in the world are IoT devices, i.e. they are capable of communicating data over computer networks, and this number will keep growing rapidly. According to Statista, the total installed base of Internet of Things connected devices worldwide is projected to reach 30.9 billion units by 2025.
It is fair to say that virtually any data needed for a company to strengthen its decision making process, or optimise its operational processes, is available on its computer networks at one level or another.
In 2011, the industry coined the term «data-lake» to describe company databases that centralise data from a wide range of connected devices without very rigid structures, so that they can easily evolve towards any use of the data.
In keeping with this analogy, some analysts have transformed the term into «data-tsunami», referring to the inability of many companies to take advantage of these huge volumes of data. The most important battle today is not how much data you can collect, but how to acquire and process it optimally in order to extract the most value from it in an efficient manner.
To navigate this «data-tsunami» coming from thousands of IoT devices, Edge Computing has emerged as a solution: data does not have to be centralised in its entirety; instead, part of it can be processed on distributed computers, called Edge Nodes, in the same place where the data is generated. Only the result or aggregation of such computation is then centralised, thus avoiding overloading the infrastructure, eliminating unnecessary latencies, and mitigating the security and data sovereignty risks that matter so much to businesses and citizens today.
Imagine, for example, an energy distribution company that wants to balance its production, in near real time, depending on the production and consumption levels of its users. The infrastructure needed to communicate, centralise and store all this data from thousands of sensors is so complex that the return on investment may not be viable. With Edge Computing, however, each transformation centre can analyse the information in real time and communicate to the centralised infrastructure only the relevant deviations that will have a significant impact on the network.
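The pattern behind this example can be sketched in a few lines: an edge node reduces a window of raw sensor readings to a single aggregate and forwards a message upstream only when that aggregate deviates meaningfully from the expected baseline. This is a minimal illustration, not a real implementation; the function names, baseline and threshold values are all assumptions.

```python
from statistics import mean

# Illustrative edge-node sketch: aggregate local readings, report only
# significant deviations. Baseline and threshold are hypothetical values.
BASELINE_KW = 100.0      # expected load at this transformation centre
THRESHOLD_PCT = 10.0     # deviations above this percentage get reported


def summarise_window(readings_kw):
    """Reduce a window of raw sensor readings to one aggregate value."""
    return mean(readings_kw)


def deviation_report(readings_kw):
    """Return a small report dict if the window deviates, else None."""
    avg = summarise_window(readings_kw)
    deviation_pct = abs(avg - BASELINE_KW) / BASELINE_KW * 100
    if deviation_pct >= THRESHOLD_PCT:
        return {"avg_kw": avg, "deviation_pct": deviation_pct}
    return None  # nothing worth sending to the central platform


# A stable window generates no traffic; a spike generates one small message.
print(deviation_report([99.0, 101.0, 100.5]))   # → None
print(deviation_report([130.0, 128.0, 132.0]))  # → a compact report dict
```

The central infrastructure thus receives one small message per anomaly instead of a continuous stream of raw readings, which is precisely how the edge keeps bandwidth, storage and latency costs down.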
Edge computing is witnessing significant interest with new use cases, especially after the introduction of 5G. The 2021 State of the Edge report by the Linux Foundation predicts that the global market capitalization of edge computing infrastructure will be worth more than $800 billion by 2028.
In recent years, large corporations have done a great deal of work defining and explaining what edge computing is and its different use cases, resulting in a number of definitions and classifications. All of them group the different types of edge depending on the location where the data processing is carried out.
When data processing is conducted at the point closest to the network core and furthest from the devices, we speak of «Fog-Computing» (a term coined by Cisco) or «Thick-Edge». This occurs at distances of 100 m to 40 km from the devices and is carried out by very powerful edge nodes, in some cases even embedded in the network core equipment itself. This is the case, for example, with some 5G communications towers, which can perform data storage and processing, avoiding unnecessary latency when the communicating devices are on the same network.
If data processing is performed on network equipment or data aggregators located in the local network itself, it is known as the «Far-Edge» or «Thin-Edge». The physical distances in these cases range from 1 m to 100 m, and the processing is characterised by medium-power Edge Nodes (around 1 GHz CPUs with no more than 8 GB of RAM), which in many cases act as data concentrators, IoT gateways, or even intelligent industrial automation equipment.
Finally, when the processing is embedded in the IoT equipment itself, we talk about the so-called «Micro-Edge», which in many cases offers only limited functionality, as the devices themselves usually have very little computing capacity in order to avoid increasing their price or battery consumption.
There is no doubt that IoT at the Edge is one of the new enablers that will accelerate the digital transformation of enterprises. However, its deployment is not without challenges that any enterprise must consider in its design and implementation phase. The most important challenges we have identified are:
Across all sectors, industrial companies are undergoing digital transformation processes. Being able to connect devices, as well as collect and exploit their data, is becoming crucial to remaining competitive. The industries where IoT Edge can have the most impact are those that work with a high volume of connected devices. The impact is exponentially greater where these devices are spread across distributed geographies and generate data at high frequencies.
In this sense, these sectors are clearly positioned:
If you were interested in this article and want to know more about the possibilities of IoT Edge Technology, please contact us!