
What is Edge Computing?

Linsey Knerl | Reading time: 7 minutes
Edge computing is a technology in which devices remote from the main network measure and process data through a local service. It reduces the amount of data that must be sent to a central data center by doing more of the computing on the remote devices at the "edge" of the network.
Only the most important data gets sent back to the data centers. Businesses look to edge computing to reduce latency and improve responsiveness at each device location.
Edge computing began in the late 1990s with content delivery networks (CDNs) that served multimedia content closer to users. Over time, more types of data and additional applications were folded into the technology. Today, sensors, small computers, and smart devices all process and generate data near the user or the conditions being monitored.

How edge computing works

Edge computing may seem complicated, but it uses a rather simple workflow:
  1. A device in a remote location, such as a smart speaker, collects and measures data.
  2. This data is processed locally, either by the speaker or on a local server. Asking the speaker to play songs from your desktop computer would be a local request.
  3. Only data that can't or shouldn't be processed locally gets sent to the central data center. An example would be a request to your smart speaker to add a pack of pens to your next Amazon shopping order.
  4. The device and local network continue to determine what data stays local and what gets sent to the central network for processing or storage on a system-wide level.
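The four-step workflow above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: names like `route_reading`, `processed_locally`, and `sent_to_cloud` are illustrative stand-ins, not a real device API.

```python
# Hypothetical sketch of the edge workflow above; all names are illustrative.

processed_locally = []   # stands in for the device or local server's store
sent_to_cloud = []       # stands in for the central data center

def route_reading(reading: dict) -> str:
    """Keep routine data local; forward only what needs central handling."""
    if reading.get("needs_central"):      # e.g. adding an item to a shopping order
        sent_to_cloud.append(reading)     # step 3: only this data leaves the edge
        return "cloud"
    processed_locally.append(reading)     # step 2: handled on the local network
    return "local"

# Step 1: a device collects readings; step 4: routing repeats for each one.
for r in [{"cmd": "play_song"}, {"cmd": "add_to_order", "needs_central": True}]:
    route_reading(r)
```

The key design point is that the routing decision itself happens at the edge, so the local request never generates network traffic to the data center.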
Edge computing is different from traditional computing in a few ways, including:
Where data is stored: While traditional computing relies on the cloud or large data centers to keep and process data, edge computing offloads some of the storage to the local network or the end user's device.
Insights: Edge computing processes some data locally, resulting in lightning-fast analytics. This same real-time response isn’t possible with traditional computing models that must wait for data to go to data centers and then return.
Network loads: With more devices processing data through edge computing, network load is distributed more efficiently. Traditional systems, on the other hand, put the burden largely on a central server.

Key drivers of edge computing

What exactly has caused the need for edge computing? One large factor is the growth of Internet of Things (IoT) devices. With more speakers, security cameras, and even refrigerators collecting data, there was a need for this data to be processed quickly and closer to the source.
Artificial intelligence (AI) is another driver. The most useful AI-powered devices rely on sensors and 24/7 data collection to spot trends, and all of that data would be difficult to transmit and store offsite. With edge computing, these devices can work continuously, analyze data in real time, and use techniques like machine learning to respond and even adapt.
Cost is another consideration. It’s expensive to store large volumes of data at a central location, and the network congestion between devices and data centers can be costly. By offloading more computing tasks to devices, congestion and expense often decrease.

Applications and use cases

Edge computing used to be reserved for more cutting-edge technology sectors, like healthcare or information technology. Today, we see it used almost everywhere, even in consumer applications.
Here are a few examples:
  • A chain of restaurants uses the same point-of-sale (POS) systems to process payments and push real-time inventory updates directly from each POS terminal.
  • Autonomous (self-driving) cars use the technology to stop for pedestrians or assess weather conditions to adjust driving speeds.
  • Farmers use irrigation systems with water sensors to map out moisture needs and dispense only the water needed for that field.
  • Most consumer wearables, such as fitness trackers, store and analyze data locally on the device in real time, transmitting data to the cloud service at regular, delayed intervals.
  • Security cameras, such as those used in doorbells, continually record and monitor data but only send recordings to the cloud when movement is detected.
The list of actual use cases goes on and on. As more devices depend on real-time data analysis, we will see the list of applications grow.
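The doorbell-camera example above can be sketched as a toy program: frames are analyzed on the device, and only frames showing motion are queued for the cloud. The `detect_motion` function here is a simplified pixel-difference check for illustration, not any vendor's actual algorithm.

```python
# Hedged sketch of motion-triggered upload; detect_motion is a toy check,
# not a real camera API.

def detect_motion(prev_frame, frame, threshold=10):
    """Flag motion when enough pixels change between consecutive frames."""
    changed = sum(1 for a, b in zip(prev_frame, frame) if a != b)
    return changed > threshold

uploads = []
# Simulated 64-pixel frames; the third frame introduces movement.
frames = [[0] * 64, [0] * 64, [1] * 32 + [0] * 32]
for prev, cur in zip(frames, frames[1:]):
    if detect_motion(prev, cur):   # edge-side analysis runs on the camera
        uploads.append(cur)        # only motion clips are sent to the cloud
```

In this run, only one of the two frame transitions shows motion, so only one frame is uploaded; the steady footage never leaves the device.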

Advantages of edge computing

As evidenced by the examples, any scenario that requires a quick response can be enhanced by edge computing. Here are some benefits:

Enhanced productivity for teams

Shorter response times and lower latency mean less waiting for data to return from the central data center. Those saved seconds add up over time and let employees work more quickly.

Improved safety

In environments where malfunctioning technology could cause bodily harm, fast response times can save lives. In factories, for example, sensors monitoring line equipment can detect hazardous conditions and predict a failure in time to prevent it from happening in the first place.

More connectivity in the least connected places

Not everyone has high-speed internet, or even internet access at all. For organizations with remote teams in rural areas or places with poor infrastructure, edge devices let work continue locally, connecting to the central data center only when needed.

Better security

Some of the best security systems rely on real-time data and AI analytics, whether to protect high-priced jewelry or to detect an active-shooter threat. These monitoring devices must stay responsive and can't depend on network transmission or remote updates. Edge computing can also be more secure for the data it stores: because the data travels less, it has fewer opportunities to be compromised.

Data ownership

Finally, who owns the data you collect? If you store it on third-party servers or storage services, it may not be entirely yours. To comply with growing data-regulation requirements, some companies opt for edge computing systems, which keep data local and free of third-party storage or usage agreements.

Challenges and considerations

For all the great things edge computing offers, it's still a relatively new technology, and its distributed nature requires additional safeguards and infrastructure planning.
Other challenges include scaling quickly to new locations and deciding what to do with all the data that’s collected locally. At some point, this data needs to be stored and backed up, potentially creating the need for another data center.
Finally, while owning the data on each device may seem more secure, it also shifts the responsibility for keeping it secure onto you. Just as cloud storage and computing companies must protect data from cybercriminals and data corruption, so must you when managing local data stores. As technology advances and cyberattacks grow more sophisticated, you'll need increasingly advanced measures to secure your devices and data.

The future of edge computing

Edge computing is already leading many computing trends, but the best may be yet to come. Some of the advancements we could see in 2024 and beyond include:

5G and even 6G capabilities

While we are just beginning to see how edge computing helps to manage the vast amounts of data that 5G enables, 6G is right around the corner. With higher frequencies and more capacity than 5G, we will see devices evolve to capture and process this data. The technology is still in the idea stage (and isn't expected to appear until 2030 or later), but when it comes, we can expect its effect on edge computing to be significant.

More spending

An IDC report predicts $250.6 billion in global spending on edge technology in 2024, a five-year compound annual growth rate (CAGR) of 12.5%. This will likely mean more vendors for edge computing support services, devices, and security solutions to keep all the data and infrastructure safe. In short, the industry should see comfortable expansion and attract new tech professionals.

New threats

As with any new tech leap, a fresh crop of criminals will look to take advantage, too. Hackers are drawn to edge deployments because they can sit far from the main data centers, and therefore from the experts best versed in keeping them safe. Every device is an entry point into a larger data ecosystem and must be updated and maintained so it doesn't become a Trojan horse. Add in everything between the device and the data center, plus any connected cloud services, and the number of checkpoints to monitor and secure can be overwhelming.

Summary

Edge computing may be a new concept to many, but that isn't likely to last. With its growing influence on even the everyday tools we use at home and at work, it's only a matter of time before it's as widely discussed as ChatGPT or voice assistants.
And while the possibilities with this tech are truly unmatched, there are some considerations to make before diving in. With each new capability comes the call to protect it. Remote data hubs need attention and care to work well within the larger infrastructure without becoming a security risk. Reaching out to an expert is the best way to assess if it’s the right step for you.

About the Author

Linsey Knerl is a contributing writer for HP Tech Takes.

Disclosure: Our site may get a share of revenue from the sale of the products featured on this page.