
From edge to fog to cloud

In the last few months, edge and fog computing in the Industrial IoT have become a hype topic. But what is the truth behind the hype? And do edge and fog computing really solve the major challenges of the future?

According to a study by IDC, the worldwide volume of data will grow to 163 zettabytes by 2025. That is roughly the storage capacity of 40 trillion DVDs which, stacked on top of each other, would reach from the earth to the moon more than 100 times. The main driver is the rapidly increasing number of networked devices. On the one hand, these are consumer devices such as fitness trackers, smartwatches and household appliances. On the other hand, the digital transformation is bringing large-scale networking to almost every sector of the economy. In the business world, this is referred to as the Industrial IoT (IIoT).

In addition to the sheer growth in data volume, the criticality of the data is increasing significantly. Whereas in the past it was predominantly business data, in the coming years it will increasingly be data whose misuse can be life-threatening. Connected, autonomous cars and communicating medical devices make this obvious. Less obvious is the fact that much of this data will have to be processed in real time.

This is where edge computing comes into play. Edge computing optimizes cloud computing systems by moving control of computing applications, data and services from a central node to the other logical extreme, the "edge" of the Internet. Edge computing is effectively the inverse of the cloud: instead of centralizing everything, it pursues maximum decentralization.

Fog computing, by contrast, is a complement to edge computing. As an intermediate layer between the edge and the cloud, fog computing allows more complex calculations that would normally take place in the cloud to be performed as close as possible to the edge. On the fog layer, different devices on the local network can share their computing capacity horizontally. Edge and fog computing are thus complementary concepts to the cloud: they perform different tasks in interaction and essentially only make sense in combination. In the future, IoT architectures will therefore consist of three layers: the cloud layer, the fog layer and, of course, the edge layer. Which task is executed on which layer is primarily a question of the computing power available at the respective layer. Modern edge devices are becoming more and more powerful and can perform increasingly complex tasks. Tasks that exceed the computing power of the edge hardware are executed on the fog layer, but are thus still processed locally. As the computing power of edge devices increases, tasks that were previously implemented in the fog move to the edge.
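The placement rule described above can be sketched as a simple decision: route each task to the lowest layer whose computing power covers it. The capacity and cost numbers below are purely illustrative, not real benchmarks.

```python
# Hypothetical sketch: routing a task to the lowest layer that can handle it.
# The capacity values are arbitrary units chosen for illustration only.

LAYERS = [
    ("edge", 10),       # limited compute on the device itself
    ("fog", 100),       # shared compute on the local network
    ("cloud", 10_000),  # effectively unlimited central compute
]

def place_task(cost: int) -> str:
    """Return the lowest layer whose capacity covers the task's cost."""
    for name, capacity in LAYERS:
        if cost <= capacity:
            return name
    return "cloud"  # fall back to the central layer for anything larger

print(place_task(5))     # a light task stays on the edge
print(place_task(50))    # heavier work moves to the fog
print(place_task(5000))  # only the cloud can handle this
```

As edge hardware grows more capable, only the capacity numbers change; tasks then naturally "migrate" downward, which is exactly the wave-like movement described in the next paragraph's sense of layers taking over each other's work.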

As the fog layer also becomes more powerful, tasks will likewise move from the cloud to the fog. It is therefore foreseeable that tasks will migrate in several waves from one layer to the one below. However, some functions will remain in the cloud and never run on the fog or edge layer. This applies to all types of reporting and analytics that operate on centrally aggregated data.

The move to edge computing is a necessary evolution driven primarily by global data growth, but the changing criticality of data also plays a crucial role. Edge computing offers far more possibilities than just relieving bandwidth problems: critical data can now be processed directly on site. With regard to data protection requirements, this is an advantage that should not be underestimated. It provides the certainty that data is no longer stored uncontrolled on servers in third countries. Only anonymized data is transferred to the cloud for processing and evaluation; the personal reference remains on the edge, so to speak.

This assumes, however, that the security of the data is guaranteed by the edge device. That is much harder than in the cloud, where data is usually well protected in secure data centers, at least from physical access. With storage and processing on the edge, this is no longer the case: decentralization means devices operate in physically insecure and often uncontrolled environments. Protection against physical manipulation is therefore an absolute necessity if data is to be processed and stored on the edge. With the division into cloud, fog and edge layers, new threats to the data arise on all levels and must be adequately countered. This starts with the physical protection of the data. The operating system must also be protected against manipulation, which can be achieved by measures such as secure boot and encrypted root file systems.
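The idea of keeping the personal reference on the edge can be sketched as a small preprocessing step before upload. The field names here are hypothetical, and strictly speaking a salted hash is pseudonymization rather than full anonymization; it only illustrates the principle that the identifying value never leaves the device.

```python
import hashlib

# Hypothetical sketch: stripping the personal reference from a sensor reading
# on the edge before forwarding it to the cloud. Field names are illustrative.

def strip_personal_reference(reading: dict, salt: bytes) -> dict:
    """Replace the identifying field with a salted one-way pseudonym."""
    pseudonym = hashlib.sha256(salt + reading["device_owner"].encode()).hexdigest()
    cloud_record = {k: v for k, v in reading.items() if k != "device_owner"}
    # The pseudonym is stable per owner but cannot be reversed without the
    # salt, which stays on the edge device.
    cloud_record["pseudonym"] = pseudonym
    return cloud_record

reading = {"device_owner": "alice@example.com", "temperature": 21.5}
record = strip_personal_reference(reading, salt=b"edge-local-secret")
assert "device_owner" not in record
```

The cloud can still correlate readings from the same owner via the pseudonym, but the mapping back to a person exists only on the edge, which is the point the paragraph above makes.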
As far as the fog layer is concerned, the first requirement is secure mutual authentication of the devices. If a device makes its computing power available to another device, it must be certain that the requesting device is actually authorized to use it.

But how do you secure edge and fog computing environments in a meaningful way? Encryption is the answer. Because of the possibility of physical access, however, the keys should be protected by hardware rather than stored in software. Secure elements that allow applications running on the edge to store and use key material in protected hardware make sense here. Every connection established with the system should use two-sided, certificate-based authentication. This prevents unauthorized systems from connecting to the device. The result is a trusted ecosystem in which only devices and applications that trust each other can communicate. This trust is managed via a public key infrastructure (PKI). Security is not a one-time purchase: it must be checked and renewed regularly. For both the operating system and the security certificates, regular updates are a mandatory prerequisite for the long-term secure operation of the environment.
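Two-sided, certificate-based authentication of this kind is commonly realized as mutual TLS. A minimal sketch using Python's standard ssl module might look as follows; all file paths are placeholders, and in a hardware-backed deployment the private key would live in a secure element rather than in a key file on disk.

```python
import ssl

# Hypothetical sketch of two-sided (mutual) certificate-based authentication
# with Python's standard ssl module. Paths are placeholders for illustration.

def make_client_context(cafile=None, certfile=None, keyfile=None) -> ssl.SSLContext:
    """Context for a device connecting out: it verifies the peer against the
    ecosystem's CA and presents its own device certificate."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=cafile)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)  # our side of the mutual handshake
    return ctx

def make_server_context(cafile=None, certfile=None, keyfile=None) -> ssl.SSLContext:
    """Context for a device accepting connections: unlike a plain web server,
    it also demands a valid certificate from every client."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH, cafile=cafile)
    if certfile:
        ctx.load_cert_chain(certfile, keyfile)
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject peers without a trusted certificate
    return ctx
```

Whether any peer can actually complete the handshake then depends entirely on the PKI: only certificates issued by the ecosystem's own CA are accepted, which is what turns a set of devices into the trusted ecosystem described above.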

Summary

Edge computing is not a trend but a necessity resulting from the growth and changing criticality of data. Only by moving the processing of data closer to the point where it is generated can future bandwidth and latency requirements be met. Almost incidentally, this solves some security issues, as the data is not moved away from its point of origin and processed in third-party data centers. At the same time, however, new security threats must be countered, and not only those resulting from the elimination of the perimeter. It is not enough to ensure the physical protection of the distributed systems; a multilayer security architecture is required that extends from the hardware to the cloud.

Blog post by Christian Schmitz.

