In recent years, the Internet of Things (IoT) – devices and sensors that collect and receive data – has proliferated into new and wide-ranging applications such as autonomous vehicles, video surveillance, logistics, agriculture, consumer electronics, augmented and virtual reality, industrial automation, and battlefield technology – the list goes on.
According to the Alliance for Internet of Things Innovation, by 2021 – the so-called “age of the IoT” – there will be 48 billion devices connected to the internet (AIOTI.EU). This expansion has shifted the processing requirements of connected systems.
Cloud computing – mega data centers, few and far between, which enable businesses to process and store their information and applications remotely – falls short of providing an adequate solution, especially where the data is mission critical or requires near-zero latency.
For these applications, “Edge Computing” provides the answer by adding computational capabilities at the periphery (edges) of the network, in closer proximity to the device being served or as part of it. There is no longer a need to wait for the “smarts” to be generated hundreds or thousands of miles away. Latency at the edge is drastically reduced.
As IoT becomes prevalent, Edge Computing is on the rise, displacing the old “cloud” paradigm.
In the first stage, increased computational capabilities are moving out of the data centers and into mid-layer servers or aggregation gateways at the periphery of the network, a process referred to as “Fog Computing”.
In parallel, where the application allows, processing chips are also being integrated into the sensors themselves, which is referred to as full “Edge Computing”. Both phenomena are commonly grouped under the umbrella term Edge Computing.