Introducing Edge Computing
Conceptually, edge computing is concerned with when it’s best to migrate computational functionality toward the source of data and when it’s best to move the data itself. This abstract concept of function versus data migration drives not only the fundamental motivations of edge computing but also the broader field of distributed systems. The act of distributing processes makes even the simplest tasks more complicated.
What’s this “edge” you speak of?
As any programmer is painfully aware, developing and debugging multi-threaded applications is more challenging than working with their single-threaded counterparts. The network engineer can attest to the complexity of a multi-homed autonomous router compared with a home access point. Likewise, the systems professional might give you a laugh when asked to explain the differences between an application running on a laptop and one running on a geographically distributed cluster.
If distributing things is this hard, then why do we do it? In most cases, computational and infrastructure distribution is driven by necessity. It’s no more practical to run Google.com (the entire search engine) from my laptop than it is to require a vehicular collision system to transfer data across a country to make critical sub-millisecond decisions. Although you might agree with the previous arguments, you should ask yourself what edge computing has to do with you. Perhaps at this moment you’re holding a smartphone (are dumb ones still sold?) — a device that, at this point, is a phone in name only. If on June 28, 2007 — the day before the iPhone was released — you were told that your phone would replace many functions of your computer, TV, and much more, you might not have believed it. Likewise, few of us who experienced the early days of the Internet predicted the pervasive global impact, for better or worse, it has had on our lives. What events occurred between the first message being sent over ARPANET in 1969 and my thermostat consorting with my AI assistant as to when I might get home and what temperature it should be when I get there?
Ok, stay with us here as we journey down the “rabbit hole.” Those of you who’ve been around communications know the difference between circuit- and packet-switched networks. For the unfamiliar, a circuit-switched network is what might come to mind when you see an old movie: when someone picks up a phone, they are connected to a switchboard operator, who manually plugs a cable into a socket, physically connecting two telephones. You might be surprised to learn that although advancements have been made, public telephone and 2/3G cellular networks still make use of circuit switching. Circuit switching is considered inefficient because it dedicates end-to-end resources to communications, regardless of link utilization.
In contrast, packet switching breaks data into units, which we call packets, allowing independent communications over shared links. If you have broadband Internet or a 4G or newer phone, you are making use of a packet-switched network. Why is this important? Consider all the network devices in your home, office, or even on your person. If these devices made use of circuit-switching technology, you’d have the equivalent of a physical phone line per device. Can you imagine ordering a new phone line installed for your Internet-connected toaster and a separate line for your TV? Those involved in the early days of the Internet didn’t conceive of connecting your home lighting to a remote cloud service: the cloud didn’t exist, and there was no necessity for the average company, much less individual, to have access to generalized packet-switched networks allowing access to and from many devices globally. What new computational paradigm allows for, and perhaps drives, a new age of technology?
You could argue that the current practice of individual devices communicating directly with remote cloud services is as silly as installing a phone line for your toaster. If devices are to generate data that is acted upon by more than one system — perhaps AI in a cooperative manner — then not only must data democratization occur, but new computational paradigms must be used to support it. The leap to make here is that just as packet-switching technology fundamentally changed the approach to connectivity by allowing communications decisions to be made closer to devices, such as your TV and toaster sharing your connection to the Internet, edge computing can have the same impact at the computational level.
Imagine the constraints of transmitting data across the country from a device that needs to communicate with a device across the room, which in turn might need to communicate with a service on the other coast for decision-making. Now, consider the benefits of data processing taking place on your phone, in your house, in your neighborhood, in your city, or in any other remotely connected location, based on the global optimization of infrastructure, data, and application needs.
What makes edge computing different?
To understand the utility, and some might argue necessity, of edge computing, you must take a step back and think about the fundamental characteristics of various types of networks. In this context, we’re not necessarily referring to communications networks but to the more fundamental theory behind networks of all types. The figure below illustrates three types of networks and was presented by Paul Baran in his paper “On Distributed Communication Networks” nearly sixty years ago. Although more modern conceptions of distributed systems communications differ from this early work, we find it useful for explaining where edge computing fits in the distributed space.
A system is defined as a set of connected components that form a complex whole. Systems are described as centralized if individual components are directly controlled by a central component. An example of a centralized system is a personal computer. Whereas a computer comprises many functional components, the overall system is useless without a central processing unit (CPU). Likewise, a mobile phone interacting with a remote web site is also operating in a centralized paradigm. Examples of centralized systems exist outside of technology, such as a medical practice with a single physician. Similar to a computer without a CPU, a medical practice without a licensed physician can’t function.
Individual services provided by systems can be described in terms of workloads. Workloads consume resources to perform specific tasks. For example, calculating the first N Fibonacci numbers is a computer workload. In the medical example, performing a physical examination is a medical workload. The study of workload arrival, servicing, and departure is referred to as queueing theory. In theory, the duplication of centralized systems doubles the potential aggregate output of that system.
A system is said to be distributed if individual components can process workload tasks and coordinate with each other to provide a common service. An example of a distributed system is a cluster of computers that can calculate the first N Fibonacci numbers in parallel, or, in the healthcare example, a medical practice with multiple practicing partners. In both examples, workloads are processed by distributed components, but workload scheduling (workload arrival, sequencing, and resource assignment) is controlled centrally.
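To make the idea of workloads and central scheduling concrete, here’s a minimal sketch in Python. The “cluster” is simulated with a thread pool on a single machine, and the scheduler is simply the pool’s central work queue; the function and workload values are our own illustration, not part of any particular framework:

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """One workload: iteratively compute the nth Fibonacci number."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# A central scheduler hands independent workloads to distributed workers.
# Here the "cluster" is simulated with a thread pool on one machine.
workloads = [10, 20, 30]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(fib, workloads))

print(results)  # [55, 6765, 832040]
```

The workers process tasks in parallel, but the pool’s queue, like the practice’s front desk, remains a single central point of scheduling.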
As distributed systems with central scheduling grow in both size and service complexity, scheduling functions must also be distributed. The process of dispersing the functional control of a central authority is defined as decentralization. Through decentralization, control functions are divided into interdependent layers. The arrangement of distributed control layers and workload-servicing components is defined as a hierarchy. Each level of the hierarchy both provides control to and accepts control from other levels while maintaining a degree of service-level autonomy.
A university is an example of a decentralized (hierarchical) system. High-level administrators and advisory boards evaluate metrics and implement university-wide changes by communicating with colleges, colleges implement college-level changes and communicate with departments, and departments implement departmental changes by communicating with faculty and staff.
For example, suppose a university (high-level administration) wanted to increase the number of engineering graduates. A change in configuration (strategic direction, increased resource allocations, and so on) is communicated to the college of engineering. The college determines that, based on metrics reported from all departments under its hierarchy, the computer science department can benefit most from a configuration change. Once the change is communicated to the department, resources can be committed, and metrics related to the configuration change can be reported back through the system. In the following subsections we discuss various aspects of edge computing, including its relation to existing computational models and motivations.
Client-server and cloud models
When you talk about centralized computing, one of the most common examples is the “client-server model” like the one in Figure 3. This typically means a system in which remote clients contact one or more central servers that provide resources in response to single or ongoing requests. What you have is a central system (computer, server, or group of servers) in control of storing, processing, and distributing some set of resources or data to clients who request it. If you’ve ever used an email client or a web browser, this should be familiar. Although you may compose your email or read a web page on your device, the server handles the background routing and formatting of your emails or the rendering of the web content, usually coupled with resources pulled from a database. The figure below illustrates data from many potentially distributed devices being aggregated by an internet of things (IoT) gateway for remote cloud processing by a single VM.
In these types of systems, most of the work is done by the central server, and the clients only interact as needed. As we said, there’s a world of workflows for which this is a perfectly suitable approach. As you’ll see, however, as the size of the data increases or the number of clients or the distance between them grows, limitations start to determine how useful or even practical this approach can be. In our example, the IoT gateway and VM function as centralized choke points and single points of failure. If the IoT gateway or VM fails, capacity is exceeded, or communication is disrupted, the entire system fails.
Let’s assume that you replace your single VM with a globally distributed cloud service, or perhaps base your entire back-end system on a multi-region auto-scaling Kubernetes deployment. Although you might have confidence in your backend system, the important part, namely data acquisition and response, is still at risk. The Internet is inherently a decentralized system operating across autonomous nodes, which route data based on changes in link topologies, utilization capacities, and traffic classifications. No one, not even Amazon or Google, can deterministically predict the end-to-end path a packet will take from an end-user device across the public Internet to arrive at some distant cloud service. Likewise, no end-to-end offering exists (this is not a comment on net neutrality) to guarantee quality of service for network communications. If one can neither predict, avoid, nor guarantee the quality of service of communications between end devices and the public cloud across the Internet, then it’s a game of chance. This game of chance is played potentially hundreds of times for every interaction a device might have with a remote service. It’s a testament to our network communication forebears that making exclusive use of highly centralized clouds for nearly all system operations works as often as it does. This, as you’ll learn, isn’t a tenable future strategy, which is where edge computing comes in.
How can edge computing help?
What if emergency medicine functioned like devices interacting with the public cloud? Suppose that, by consolidating local clinics and regional hospitals, free national healthcare was provided at a dozen state-of-the-art medical facilities across the US. Now suppose you’re in a traffic accident and in need of emergency medical treatment. Your medical card has the address of the closest facility, but you must find your way to this potentially remote and distant facility without further assistance.
Perhaps you can drive yourself, take an Uber, or book a flight. This may seem unreasonable, but how could the remote centralized medical facility possibly help you get there? They don’t know the extent of your condition until you get there; they don’t know the local landscape, your fear of flying, or to never take Champions Avenue before or after the big game. Medicine doesn’t work this way in the real world. You rely on local emergency medical dispatchers and responders for the following:
- Determining what resources are needed (monitoring)
- Notifying local emergency medical services to provide urgent local treatment (alerting and escalation)
- Assessing patient condition and vitals (data enrichment) and relaying information to the health care provider (data aggregation)
- Engaging the appropriate emergency physicians to treat life-threatening illnesses (complex event processing)
- Referring patients to the appropriate specialist
Emergency medicine has evolved into a complex decentralized hierarchical ensemble of players with well-defined roles, protocols, and operational boundaries to ensure that patients receive the correct treatment in a timely manner by delegating resources — such as time, effort, and expertise — to where they are most efficiently used. This delegation of resources by imposing a decentralized hierarchical organization makes sense in the world of computing as well.
Edge computing aims to decentralize operational functions, including but not limited to computation and communications, allowing the overall system to continue to function, and if needed, dynamically scale, to tolerate both failures and capacity fluctuations. Figure 4 illustrates a refactoring of the previously described client-server-cloud model to be deployed using the edge computing paradigm, pushing computation and related decision-making toward sources of data generation.
As shown in the figure, the public cloud hasn’t been removed from the system, but you’re no longer dependent on the centralized functions provided by the IoT gateway or a remote and potentially distant cloud. Borrowing from our emergency medical example, our edge devices function as local dispatchers and EMS. Once data is processed locally and, if needed, prepared for transport, the edge-computing system determines the path of transport, which could be to an intermediate resource for additional processing, such as a regional hospital. Depending on the capabilities of the local dispatchers and EMS (application), enriched data might be further transported to a high-level trauma center (public cloud).
Having seen an example of how edge computing can improve operations in an emergency medical operation over a more traditional client-server approach, we’ll next introduce some of the fundamental components in edge computing that allow for this decentralized approach.
The components of edge computing
Now that you’re familiar with the general background and motivations of edge computing, we’ll briefly describe what we consider the foundational components of edge computing. At a fundamental level, edge computing involves multiple layers of abstraction that organize tasks and resources to allow for heterogeneous, distributed tasks to be concurrently processed on heterogeneous, distributed hardware in a managed fashion. In Figure 5 you can see a breakdown of the layers of abstraction. Because edge computing is a new field, these may vary from implementation to implementation as features are added and removed. Starting from the bottom layer, which provides an abstraction that organizes the underlying infrastructure and data the edge-computing framework operates on, each higher layer adds new functionality and further abstraction to the layer below it. Those of you with a background in networking will recognize the similarity of the edge computing component hierarchy to the OSI model. We’ll next briefly describe the layers that compose an edge-computing system.
Distributed data and infrastructure
At the bottom of the pile is the edge-computing framework code that runs on your embedded device, computer, or server. In this layer, devices and data sources are organized into networks and hierarchies that structure how they can and will be used by the layers above them. This abstraction can, for instance, reorganize devices into contained groups by device class, department of use, or ownership, without needing to relocate the physical device, allowing segmentation to take place at the network level. Typically, this layer runs directly on the host machine and facilitates all operations required by the framework itself. From start-up to building secure data connections, this layer contains the backbone of operations: when one of the layers above requires some action, such as launching an application or building a new communications channel, this layer handles the bulk of the work.
Data transport and integration service bus
The data transport and integration service bus, or data layer for short, handles direct communication paths between hosts in your distributed infrastructure. These paths serve two purposes: to carry framework control messages, as in the integration service bus, or to pass data messages, as in the data transport. These channels can be either fixed, long-running messaging paths or ad-hoc connections utilizing other external means of communication. This layer also handles routing to the desired end host as well as any return messages, such as in a remote procedure call (RPC).
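To make the routing-and-reply idea concrete, here’s a toy in-process service bus in Python. The class and method names are our own invention for illustration, not the API of any real edge framework:

```python
class ServiceBus:
    """Toy integration service bus: routes a message to a registered host
    and relays the reply, mimicking a remote procedure call (RPC)."""

    def __init__(self):
        self.handlers = {}  # host name -> handler function

    def register(self, host, handler):
        self.handlers[host] = handler

    def call(self, host, message):
        # Route the message to the desired end host and return its reply.
        if host not in self.handlers:
            raise KeyError(f"no route to host {host!r}")
        return self.handlers[host](message)

bus = ServiceBus()
# A host registers a handler; in a real framework this would be a
# long-running messaging path rather than an in-process function.
bus.register("sensor-a", lambda msg: {"status": "ok", "echo": msg})

reply = bus.call("sensor-a", {"cmd": "ping"})
print(reply)  # {'status': 'ok', 'echo': {'cmd': 'ping'}}
```

A real data layer would carry these messages over the network, but the pattern is the same: the caller addresses a host, and the bus handles routing and the return path.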
Semantic data layer
Although the data layer is mainly focused on where the data is going and how it needs to get there, the semantic data layer is instead concerned with what kind of data is being transmitted. Because this is a distributed infrastructure, with functions that may be spun up and down dynamically, using fixed data paths is impractical. To remedy this, edge computing frameworks typically look at data according to its semantics. For example, rather than saying, “I need a TCP connection to camera A with IP 126.96.36.199 onto which it will put its video feed”, the framework may instead say, “I need video data tagged with camera A as a source.” This subtle shift allows functions to request a type of data, possibly with a specific tag corresponding to an identifier, without tracking down the exact location of the bits.
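The shift from address-based to semantics-based requests can be sketched as a small tag-matching broker; again, all names here are hypothetical illustrations rather than a real framework’s API:

```python
class SemanticBroker:
    """Toy semantic data layer: producers publish data under tags,
    and consumers request by tag rather than by network address."""

    def __init__(self):
        self.streams = []  # list of (tag set, fetch function)

    def publish(self, tags, fetch):
        self.streams.append((set(tags), fetch))

    def request(self, wanted):
        # Return data from every stream whose tags cover the request.
        wanted = set(wanted)
        return [fetch() for tags, fetch in self.streams if wanted <= tags]

broker = SemanticBroker()
# The producer registers by what it provides, not where it lives.
broker.publish({"video", "camera-a"}, lambda: "frame-001")

# The consumer asks for "video tagged camera-a" -- no IP address needed.
frames = broker.request({"video", "camera-a"})
print(frames)  # ['frame-001']
```

If camera A moves to a different host, only its registration changes; every consumer’s request stays exactly the same.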
Application layer
The application layer is the most visible to the user of an edge computing framework. In this layer, one or more functions, or small pieces of computation to be performed, are distributed over the framework, either statically, automatically, or dynamically (if efficient resource scheduling is provided). These functions perform predefined tasks, usually on incoming data, and, if necessary, provide the results of their computation to other functions. End users can either use existing functions compatible with their chosen edge computing framework or write their own as the software allows.
The quote “Dans les champs de l’observation le hasard ne favorise que les esprits préparés — Where observation is concerned, chance favors only the prepared mind” is attributed to Louis Pasteur. The purpose of this book is to prepare you to identify the challenges that edge computing intends to address across the continuum of computational layers. In the next section we describe a few examples that make use of edge computing.
Example uses of edge computing
Edge computing can be difficult. Distributing computation, storage, and other parts of a computer system introduces a host of challenges which must be overcome. Hardware fails, people press buttons and pull on cables, disasters happen, and parts of the system could fail at any time. Because edge computing requires extra effort, it doesn’t make sense to use it when it isn’t useful unless you’re into that sort of thing (no judgement here). But how do you know when it’s most useful?
In short, you only need it when you need it. That situation often looks like a problem you can almost solve. The Three Laws of Edge Computing can help us determine when edge computing techniques might be appropriate. Wherever existing techniques run afoul of physics, economics, or the long, wrinkly arm of the Law, edge computing should be considered. A simpler rule of thumb is to consider using edge computing when you need to handle lots of streaming data or have latency constraints. Like other rules of thumb, these aren’t written in stone. Some of the leading cloud computing providers already have commercial edge computing platforms, and as more people enter the field, they’re bound to start “abstracting away” the complexity of edge computing in order to put these powerful tools in the hands of more people. In the meantime, I know of a great book on edge computing if you’re interested!
We’ll apply these rules of thumb to a few examples that represent typical uses of edge computing: gunshot detection in a smarter city, managing patient alerts in a hospital, and a hobbyist fitness app. As you walk through these examples, you’ll see ways the existing techniques fall short.
Gunshot detection in a smarter city
Smarter cities are called “smarter” because they use various sensors to monitor different kinds of activity within the city. These sensors are connected to computer networks that process the data and, in some cases, take some kind of action. That could mean turning a switch on or off, contacting the police, changing traffic signals, or many other things.
For this example, consider a fictional smarter city with a large population. The city government worked with a local university to develop a gunshot detection system that uses a network of over nine thousand microphones strategically placed throughout the city to detect gunshots. This system uses machine learning to determine when a gunshot occurs and approximately where it came from. The models require a stream of uncompressed digital audio from as many microphones as possible, with fewer microphones causing reduced performance. When a gunshot is detected, the police are notified and provided with a video feed from traffic cameras near the putative shot location. The system also stores a copy of the data from all of the microphones near the shot for a ten-minute window before and after a positive detection. This system must process large amounts of streaming data and respond quickly, which smells like edge computing (but nothing like teen spirit).
The machine learning models in our example don’t require much computing power — the computationally hard work was done during training. They require a large amount of data to make accurate determinations, and this data must be handled in near real-time in order to be useful. This sounds like a job for edge computing! Rather than ship all our audio data to a datacenter for processing, which might be impossible in some cases due to limited infrastructure, you can feed it to the machine learning models on low-powered computers near where it’s generated. Those computers only have to send something indicating a gunshot detection and the location, generating minuscule network traffic compared to the raw microphone feeds. They can also communicate with other networked devices close by more quickly than they could with a distant server, allowing them to respond more quickly. Without edge computing, you’d run smack into the laws of physics and be forced to accept whatever latency and bandwidth are available at any given moment.
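A back-of-the-envelope calculation shows the scale of the savings. The sample rate, bit depth, and message size below are illustrative assumptions, not figures from the example system:

```python
# Assumed figures for illustration only.
SAMPLE_RATE = 44_100      # samples per second, CD-quality audio
BYTES_PER_SAMPLE = 2      # 16-bit uncompressed audio
MICROPHONES = 9_000

# Shipping raw audio to a datacenter: bytes per second for the network.
raw_bps = SAMPLE_RATE * BYTES_PER_SAMPLE * MICROPHONES

# Edge approach: each detection is a small message (say 256 bytes),
# and detections are rare -- assume at most one event per second citywide.
event_bps = 256

print(f"raw audio: {raw_bps / 1e6:.0f} MB/s")
print(f"events:    {event_bps} B/s")
print(f"reduction: ~{raw_bps // event_bps:,}x")
```

Even with generous assumptions for the event messages, processing at the edge cuts the steady-state traffic by roughly six orders of magnitude.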
Edge computing also helps make the system more economical to build and operate. If you need more bandwidth than you have, there are two options: live with less or pay for more bandwidth. For something at the scale of a large city, one doesn’t call the internet company and ask for an upgrade. If the infrastructure isn’t there, it has to be built, and there are costs associated with maintenance of additional hardware. You might be thinking, “Well, I’ll go completely wireless! How about that, Jack?” Jack says, “Wireless networks always have wires somewhere, and power ain’t free.” City governments aren’t known for being awash with cash, and you can’t always assume that it’s possible to buy or build enough infrastructure. Edge computing can help by reducing the need for new infrastructure and making better use of what’s there, saving us from the Law of Economics.
Privacy and security issues are particularly salient in our society at the moment. With numerous high-profile data breaches and scandal after predictable scandal from Silicon Valley, people are rightfully cautious about what happens to their personal data. The idea of a giant network of microphones that are always on and connected to a data center somewhere is more than a little unnerving. The city government’s indiscriminate recording of the citizenry may not go over well, and there will be legal and political ramifications. Once again, edge computing can help. Pushing the processing to those small computers at the edge of the network reduces how far the sensitive data must travel and limits opportunities for interception or interference. Also, you can avoid recording all of the audio by storing only the last thirty seconds received and overwriting old data with the new in a structure called a ring buffer. When a gunshot is detected, one could dump the contents of the buffer and begin recording on all of the microphones that “heard” the shot, only saving the data that is likely to be relevant. Take that, Law of the Land!
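Here’s a minimal sketch of the ring-buffer idea, assuming audio arrives in fixed one-second chunks; Python’s `collections.deque` with a `maxlen` gives the overwrite-the-oldest behavior for free:

```python
from collections import deque

CHUNK_SECONDS = 1
BUFFER_SECONDS = 30

class AudioRingBuffer:
    """Keep only the most recent BUFFER_SECONDS of audio; older chunks
    are overwritten automatically, so nothing is recorded long-term."""

    def __init__(self):
        self.chunks = deque(maxlen=BUFFER_SECONDS // CHUNK_SECONDS)

    def append(self, chunk):
        self.chunks.append(chunk)  # silently drops the oldest when full

    def dump(self):
        # On a gunshot detection, save the buffered window as evidence.
        return list(self.chunks)

buf = AudioRingBuffer()
for second in range(100):          # simulate 100 seconds of audio
    buf.append(f"chunk-{second}")

saved = buf.dump()
print(len(saved), saved[0], saved[-1])  # 30 chunk-70 chunk-99
```

No matter how long the microphone runs, only the last thirty seconds ever exist in memory until a detection triggers a dump.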
Managing patient alerts in a hospital
Another low-hanging edge computing fruit can be found in healthcare. Modern medical practice generates massive amounts of data in various forms. Perhaps the best-known example of this is diagnostic imaging: the various flavors of CT, MRI, PET, and x-ray, the pumpkin spice latte of medical imaging. Other forms of digitized medical data include the various sensors and machines used to monitor patient conditions, whole-slide images (WSI), health records, patient charts, physicians’ notes, pathology reports, and lab results. A particularly important concern is the need to maintain patient privacy and control of data, often codified in laws like the Health Insurance Portability and Accountability Act, better known as HIPAA. Once again, you have lots of data (much of it streaming), privacy, ethical, and legal concerns, and the need for rapid responses. Hmm…what kind of tools are great for that? It’s something like, “periphery computing” or “margin computing” or, uh … edge computing! Yeah, that one! The healthcare industry is fertile ground for the growth of edge computing because it helps to address privacy concerns, which are often cited as one of the main barriers to innovation and exchange of information in the healthcare industry.
As an example, imagine a fictitious patient monitoring system that integrates with various hospital information systems to provide a picture of a patient’s health status with links to the patient’s records. When the system detects a condition that requires attention, it pages the appropriate personnel to respond immediately. The hospital is part of a network of hospitals which have agreed to use the same system to facilitate the secure and seamless transfer of patient records as needed by different healthcare providers from different institutions. Physicians often list excessive paperwork and frustration with electronic health records (EHR) as among the things that most negatively impact their practice, and this feature is an important part of the system.
Designing such a system presents numerous difficulties. Standard formats for EHR are well-suited for sharing between institutions, but securely transferring only what’s needed when it’s needed presents a tougher challenge because all of the participating hospitals must coordinate in near real-time. Edge computing provides tools to help achieve that coordination without extra investment in hardware or infrastructure. As with the smarter city example, this system uses various machine learning models to detect health problems. In this case, there is a twist: the models are personalized, and they’re at least partly trained using the patient’s own data. This allows for better, more personal care, and the models can be easily and securely shared with other institutions. Rather than send all of the patient’s data, only the details of the machine learning model are sent. Recovering raw patient data by reverse engineering the models is far harder than stealing the records outright, and the most a would-be hacker gets is a model tailored to Jane Q. Patient rather than all of Jane’s health records. Huzzah!
Such a system is expensive or downright impossible to build without edge computing. The political and legal factors at play only serve to increase the difficulty. To make the application we’ve described work, you have to move large amounts of data as quickly as possible to a data center that continuously trains the models and runs them against the input data. Processing the huge amounts of data required in the short time permissible takes tons of hardware and infrastructure if you take a traditional client-server approach: hardware for obvious reasons and infrastructure to get the data and responses to their destinations quickly enough. As before, you can overcome these difficulties by moving processing closer to the point of data generation, greatly reducing the amount of data that needs to be sent through the interior portions of the network. Some of the monitoring equipment may be able to run the models generated for the patient to detect negative conditions, and the more expensive training part could be done on a workstation or small server located near the patient’s room instead of in a central hospital data center. If distributed storage is also employed, the only thing you need to centralize is the “command and control” of the system — including the alerts — and an index of where each patient’s records reside. Each hospital could then share their index with the others and provide a secure interface that returns an encrypted copy of a patient’s records when queried. Existing techniques for achieving consensus in distributed systems could be employed to ensure that all institutions that need a copy of a patient’s record have it and that it’s kept up to date. Such an interface also allows for authorized entities to revoke the right to store a patient’s records, triggering an automatic deletion of the encryption key needed to decrypt the data and rendering it unreadable, even if backups are taken. 
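The key-deletion trick at the end of that design, sometimes called crypto-shredding, can be sketched as follows. For illustration we use a toy XOR keystream built from SHA-256; a real system would use a vetted cipher such as AES-GCM rather than anything hand-rolled:

```python
import hashlib
import secrets

def keystream_xor(key, data):
    """Toy stream cipher for illustration only (a real system would use
    a vetted cipher such as AES-GCM): XOR data with a SHA-256 keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Each record gets its own key; the shared index stores only ciphertext.
key = secrets.token_bytes(32)
record = b"Jane Q. Patient, blood pressure 120/80"
ciphertext = keystream_xor(key, record)

# The record round-trips for as long as the key exists.
assert keystream_xor(key, ciphertext) == record

# "Crypto-shredding": deleting the key renders every copy of the
# ciphertext -- including backups -- unreadable, without touching it.
key = None
```

The point is that revocation only requires destroying one small key, not chasing down every replica and backup of the data itself.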
The keys need to be kept secure, you need a way to make sure that the key itself can’t be backed up, and there are a bunch of other things you haven’t even thought of yet but know to be out there, where the wild things are. The implementation details of the security system for our hypothetical app aren’t needed to illustrate that edge computing could help overcome numerous security challenges, so we ask you to hold your questions and suspend your disbelief a little in the interest of brevity. The important ideas are that less data transmitted means less to be intercepted, and edge techniques can be used with relatively inexpensive hardware to perform encryption/decryption at the point of generation/use.
Hobbyist app: Personal fitness coach
At this point, you might be thinking that edge computing is only useful or applicable for big institutions doing big things, but it can be useful for smaller projects. For this example, suppose you need a personal fitness coaching app that uses heart rate, GPS, barometric pressure, and accelerometer data to deliver real-time coaching cues to runners and cyclists. Free web-based services with advanced analytics and tracking exist, but they require surrendering one’s data to be analyzed. They also have the drawback of requiring an Internet connection to function, which means they can’t be used in remote areas where connectivity is limited or unavailable. A programmer with an interest in endurance sports or an athlete with an interest in programming could employ edge computing techniques to create an app that doesn’t require sending off their personal data yet allows for cloud storage when it’s available.
At the heart of this app is the analysis of historical data and the use of real-time data to gauge how hard an athlete is exerting themselves and how hard they should be exerting themselves according to the goals they have defined. Cyclists already have this in the form of power meters and cycle computers that integrate power, cadence, speed, heart rate, and GPS data and can include the ability to store user-generated workouts. Power meters for runners exist but haven’t yet seen widespread adoption among the running public, so runners must rely on other methods like pace and heart rate. The proposed app differs because it not only “knows” your fitness level and can compare your current performance to a predefined standard, but it can guess what you’re capable of on any particular day by using…*drumroll* machine learning models. No surprise there, right? It’s worth noting that edge computing doesn’t have to involve any machine learning stuff. We keep using it because it’s useful for examples and it’s currently a hot topic, but there’s way more to edge computing than machine learning applications. Anyway, back to our app. The machine learning models are trained iteratively, with each new run or ride used to improve them. The models are run against the data being recorded during a run or ride to predict how hard the athlete is working. Because GPS and barometric data are available as well, the software could also dynamically adjust workout intensity based on upcoming terrain to keep the difficulty of the workout at or below desired levels, which is useful for exploring new areas.
Perhaps now you’re thinking, “cool story bro, what’s this got to do with edge computing?” Although it may be possible to run an app like the one we’ve described in the cloud, doing so means you can’t use it where there’s no Internet connection available. You could run it entirely on a mobile phone or similar small portable computer, but you’d run out of storage space fairly quickly. What if you could use both? Like the lime and the coconut, you put the computation near the data source and drink ’em both up — edge computing cocktail! It can make your head spin, but it goes down smoother than you think. You already have a small computer on hand in the form of a mobile phone or cycle computer, and you can run the most recent version of the machine learning model against the input data on those devices, which also record the data. After a workout, those devices can upload the data to a home or cloud-based server, where it’s used to update the models and can be archived for further analysis. As long as a model has been downloaded and the various sensors are operational, the system works without an Internet connection.
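The record-locally, sync-when-connected loop described above can be sketched as follows. Everything here is illustrative: the trained model is reduced to a simple weighted blend of normalized heart-rate and pace signals, and the `upload` callback stands in for whatever home or cloud server receives the data and retrains the models.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeCoach:
    """On-device loop: score live sensor samples with the latest
    downloaded model, buffer workout data locally, and sync only
    when a connection is available. Names are illustrative."""

    # Hypothetical model weights for normalized heart rate and pace.
    model_coeffs: tuple = (0.6, 0.4)
    pending: list = field(default_factory=list)

    def effort(self, heart_rate_frac: float, pace_frac: float) -> float:
        # Stand-in for the trained model: a weighted blend of inputs,
        # each expressed as a fraction of the athlete's known maximum.
        w_hr, w_pace = self.model_coeffs
        return w_hr * heart_rate_frac + w_pace * pace_frac

    def record(self, sample: dict) -> None:
        # Stored on-device; no network needed during the workout.
        self.pending.append(sample)

    def sync(self, connected: bool, upload) -> int:
        # After a workout, push buffered data to the home/cloud server
        # for model retraining and archival; keep it if we're offline.
        if not connected:
            return 0
        sent = len(self.pending)
        for sample in self.pending:
            upload(sample)
        self.pending.clear()
        return sent
```

The coaching cue (the `effort` score) never depends on connectivity; only the model-improvement step does, and it happens opportunistically after the fact.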
The edge is inevitable
You might be wondering if future technology advancements will mitigate the need for such a computational paradigm. As you’re no doubt aware, companies have been solving problems in distributed computing for as long as we’ve been networking computers. Over the years these computers have made huge gains in memory, processing power, and networking speed in an attempt to keep up with the challenges general computing seeks to solve. As the cost of these advances has come down, the number of devices in the wild has skyrocketed, enough that we’ve had to update the dictionary to include a term for it — internet of things or IoT — which is loosely defined as the system of interconnected computers, devices, and machines communicating — usually in an autonomous, human-less fashion — heterogeneous streams of data. This increasing pervasiveness of IoT devices and infrastructure will only continue to compound issues related to data ETL — or extraction, transformation, and loading. In the following section we discuss the so-called “Three Laws of IoT,” which describe the ongoing need for edge services that make use of edge computing.
The three laws of IoT
More connected devices are generating more data than ever before, data that is increasing in value with each passing year. This data growth, and even the nature of the data itself, has natural consequences that must be accounted for when designing distributed systems. These consequences can be summarized in the so-called three laws of IoT: the Laws of Physics, the Laws of Economics, and the Laws of the Land.
Laws of Physics
Data transfer is governed by the speed of light, and the overall speed of computational inference (decision making) is governed, in part, by the speed at which data can be transmitted from the source of generation to the point of computation. For example, a multi-node collision avoidance system can’t transmit data across the US and still be effective. Moving computational resources and functions closer to sources of data generation, as practiced in edge computing, addresses these physics constraints.
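A back-of-the-envelope calculation shows why. Assuming a signal speed in fiber of roughly 200,000 km/s (about two-thirds the speed of light in a vacuum) and an idealized 4,000 km coast-to-coast path with zero switching or queuing delay:

```python
# Latency floor imposed by physics alone: no routers, no processing,
# just propagation time through fiber, out and back.
C_FIBER_KM_S = 200_000  # signal speed in fiber, ~2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over fiber."""
    return 2 * distance_km / C_FIBER_KM_S * 1000

cross_country = round_trip_ms(4000)  # idealized US coast-to-coast haul
nearby_edge = round_trip_ms(1)       # edge node a kilometer away
```

Even with no processing at all, the cross-country round trip costs about 40 ms, while a node a kilometer away costs about 0.01 ms. A sub-millisecond decision budget simply cannot be met from across the country.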
Laws of Economics
Data transfer and storage cost money, and although all data might need to be processed, not all data needs to be transferred or stored. Battery-powered sensors and satellite communications used in high-risk monitoring systems are extreme cases of economic cost versus risk in distributed computing, but many other examples exist. Moving initial filtering and complex event processing closer to sources of data generation, as edge computing does, provides control over which data should be propagated, which resilience policies must be satisfied, and which data can be safely ignored, addressing many economic challenges.
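As a sketch of the kind of initial filtering that can run at the edge, the function below forwards only threshold-crossing events rather than every raw sample. The limit and hysteresis band are made-up values you would tune for a real sensor.

```python
def edge_filter(samples, limit=75.0, band=2.0):
    """Forward only the samples that matter: threshold crossings,
    with a small hysteresis band so we don't chatter around the
    limit. Thresholds are illustrative, not from a real sensor."""
    forwarded = []
    alarmed = False
    for timestamp, value in samples:
        if not alarmed and value >= limit:
            alarmed = True
            forwarded.append((timestamp, value))   # event: limit crossed
        elif alarmed and value <= limit - band:
            alarmed = False
            forwarded.append((timestamp, value))   # event: back to normal
    return forwarded
```

For a slowly varying signal, this can cut transmissions from thousands of raw samples to a handful of events, which matters when every byte crosses a satellite link or drains a battery.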
Laws of the Land
In many cases, such as medical, defense, and other areas, data can’t be legally transmitted in the form in which it was generated. Compensating controls related to privacy, preservation, de-identification, and other functions might be required before data can be acquired from a distributed system. Likewise, for regulatory purposes, such as in laboratory testing, data streams might require enrichment to indicate the provenance of data relating to a specific instrument. Edge computing allows for complete application- and data-layer control of information at sources of data generation, satisfying laws-of-the-land constraints.
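A minimal sketch of what such compensating controls might look like at the edge: direct identifiers are replaced with salted one-way tokens before the record ever leaves the site, and a provenance field naming the originating instrument is added. All field names here are hypothetical.

```python
import hashlib

# Illustrative set of fields treated as direct identifiers.
PII_FIELDS = {"name", "ssn", "address"}

def prepare_for_transmit(reading: dict, instrument_id: str, salt: bytes) -> dict:
    """De-identify a record at the point of generation and enrich it
    with provenance before transmission. Field names are hypothetical."""
    out = {}
    for key, value in reading.items():
        if key in PII_FIELDS:
            # Replace direct identifiers with a salted one-way token so
            # records from the same patient can still be linked without
            # exposing who the patient is.
            out[key] = hashlib.sha256(salt + str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    # Regulatory enrichment: record which instrument produced the data.
    out["provenance"] = {"instrument": instrument_id}
    return out
```

Because this runs at the source, raw identifiers never cross the network at all, rather than being scrubbed after the fact in a central data center.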
You can argue that, as a consequence of the described laws, edge computing will continue to exist regardless of any future technical advancements. In fact, an argument can be made that edge computing techniques are required to address not only the challenges of large numbers of distributed devices and their resulting volumes of data, but also the management of the resulting information.
That’s all for this article. If you want to learn more about the book, you can check it out on our browser-based liveBook reader here.
This article was originally published here: https://freecontent.manning.com/introducing-edge-computing/