If you’re at all involved in tech, chances are you’ve heard about parallel computing. You probably know it’s got something to do with more than one computer or processor working on the same problem at the same time.
But what exactly is parallel computing? Do coders, data scientists, and even business people need to understand it? If so, what are the key points?
The good news is, you're almost certainly using parallel computers every day. That said, it's important for tech types - and soon the rest of us - to know the ins and outs of parallel computing. The key fact? As the Internet of Things (IoT) takes hold, billions of devices will need this core computing strategy to keep from drowning in a rising sea of data.
What is parallel computing?
Parallel computing uses multiple computer cores to attack several operations at once. Unlike serial computing, parallel architecture can break down a job into its component parts and multi-task them. Parallel computer systems are well suited to modeling and simulating real-world phenomena.
With old-school serial computing, a processor takes steps one at a time, like walking down a road. That's an inefficient system compared to doing things in parallel. By contrast, parallel processing is like cloning yourself several times, then having all of you walk side by side, covering many steps of the road at once.
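To make that picture concrete, here's a minimal sketch in Python (the task and the four-way split are our own illustration, not a benchmark). It breaks one big sum into chunks and hands each chunk to a separate worker process, so four "clones" cover the road at once:

```python
# Toy comparison: summing 10 million squares serially vs. in parallel.
from multiprocessing import Pool

def sum_of_squares(bounds):
    """Sum n*n over one slice of the road."""
    start, end = bounds
    return sum(n * n for n in range(start, end))

if __name__ == "__main__":
    N = 10_000_000

    # Serial: one walker covers the whole road alone.
    serial_total = sum_of_squares((0, N))

    # Parallel: four clones each cover a quarter of the road at once.
    chunks = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
    with Pool(processes=4) as pool:
        parallel_total = sum(pool.map(sum_of_squares, chunks))

    assert serial_total == parallel_total  # same answer, less wall-clock time
```

On a 4-core machine, the parallel version finishes in roughly a quarter of the time, minus a little overhead for starting the workers.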
Why is parallel computing important?
Without parallel computing, performing digital tasks would be tedious, to say the least. If your iPhone or HP Spectre x360 laptop could only do one operation at a time, every task would take much longer. To understand the speed (or lack thereof) of serial computing, think back to the smartphones of 2010. The iPhone 4 and Motorola Droid used single-core, serial processors. Opening an email on your phone could take 30 seconds or more - a lifetime, compared to now. And if there was an attachment? Forget it!
The first multi-core processors for Android and iPhone appeared in 2011 [1]. IBM had released the first multi-core processor for computers ten years earlier, in 2001 [2]. But wait - if we've had parallel computers for decades, why the sudden chatter about them?
Introduction to parallel computing
The exponential growth of processing and network speeds means that parallel architecture isn’t just a good idea; it’s necessary. Big data and the IoT will soon force us to crunch trillions of data points at once.
Dual-core, quad-core, 8-core, and even 56-core chips are all examples of parallel computing [3]. So, while parallel computers aren't new, here's the rub: new technologies are cranking out ever-faster networks, and computer performance has grown 250,000 times in 20 years [4].
For instance, in just the healthcare sector, AI tools will soon be rifling through the heart rates of a hundred million patients, looking for the telltale signs of A-fib or V-tach and saving lives. They won't be able to do that work if they have to plod along, performing one operation at a time.
Benefits of parallel computing
The chief advantage of parallel computing is efficiency: computers can execute code faster, which saves time and money by sorting through "big data" quicker than ever. Parallel programming can also solve more complex problems by bringing more resources to bear. That helps with applications ranging from improving solar power to changing how the financial industry works.
1. Parallel computing models the real world
The world around us isn’t serial. Things don’t happen one at a time, waiting for one event to finish before the next one starts. To crunch numbers on data points in weather, traffic, finance, industry, agriculture, oceans, ice caps, and healthcare, we need parallel computers.
2. Saves time
Serial computing forces fast processors to do things inefficiently. It’s like using a Ferrari to drive 20 oranges from Maine to Boston, one at a time. No matter how fast that car can travel, it’s inefficient compared to grouping the deliveries into one trip.
3. Saves money
By saving time, parallel computing makes things cheaper. The more efficient use of resources may seem negligible on a small scale. But when we scale up a system to billions of operations - bank software, for example - we see massive cost savings.
4. Solves more complex or larger problems
Computing is maturing. With AI and big data, a single web app may process millions of transactions every second. Plus, “grand challenges” like securing cyberspace or making solar energy affordable will require petaFLOPS of computing resources [5]. We’ll get there faster with parallel computing.
5. Leverages remote resources
Human beings create 2.5 quintillion bytes of information per day [6]. That's a 25 followed by 17 zeros. We can't possibly crunch those numbers alone. Or can we? With parallel processing, multiple computers with several cores each can sift through many times more real-time data than serial computers working on their own.
Examples of parallel computing
You may be using a parallel computer to read this article, but here's the thing: parallel computers have been around since the early 1960s. They're as small as the inexpensive Raspberry Pi and as robust as Summit, the world's most powerful supercomputer. See a few examples below of how parallel processing drives our world.
1. Smartphones
The iPhone 5 has a dual-core processor. The iPhone 11 has 6 cores. The Samsung Galaxy Note 10 has 8 cores. These phones are all examples of parallel computing.
2. Laptops and desktops
The Intel® processors that power most modern computers are examples of parallel computing. The Intel Core™ i5 and Core i7 chips in the HP Spectre Folio and HP EliteBook x360 each have 4 processing cores. The HP Z8 - the world's most powerful workstation - packs in 56 cores of computing power, letting it edit 8K video in real time or run complex 3D simulations.
3. ILLIAC IV
This was the first “massively” parallel computer, built largely at the University of Illinois. The machine was developed in the 1960s with help from NASA and the U.S. Air Force. It had 64 processing elements capable of handling 131,072 bits at a time [7].
4. NASA’s space shuttle computer system
The Space Shuttle program used 5 IBM AP-101 computers in parallel [8]. They controlled the shuttle's avionics, processing large amounts of fast-paced, real-time data. The machines could perform 480,000 instructions per second. The same system was also used in F-15 fighter jets and the B-1 bomber [9].
5. American Summit supercomputer
The most powerful supercomputer on Earth is Summit. The machine was built for the U.S. Department of Energy at its Oak Ridge National Laboratory. It's a 200-petaFLOPS machine, meaning it can process 200 quadrillion operations per second. If every human on Earth did one calculation per second, they'd need 10 months to do what Summit does in a single second [10].
The machine weighs 340 tons and is cooled by 4,000 gallons of water per minute. Scientists are using it to understand genomics, earthquakes, weather, and physics, and to craft new materials to make our lives easier.
6. SETI
Does life exist on other planets? So far, the best way to find out is to listen for radio signals from other worlds. The Search for Extraterrestrial Intelligence (SETI) monitors millions of frequencies all day and night. To ease the workload, SETI uses parallel computing through the Berkeley Open Infrastructure for Network Computing (BOINC) [11].
Millions of people donate unused computer time to process all those signals. Want to help? You can gift your computer downtime to SETI or other BOINC projects like tracking asteroids or ending AIDS [12].
7. Bitcoin
Bitcoin is a blockchain technology that uses multiple computers to validate transactions. You'll likely use blockchain to do almost anything money-related in the coming years. Blockchain and Bitcoin don't work without parallel computing: in a serial computing world, the "chain" part of blockchain would evaporate.
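To see why, consider the "mining" side of Bitcoin: a brute-force hash search that splits naturally across many machines and cores. The sketch below is a toy proof-of-work search, not real Bitcoin code - the block data and difficulty are made up for illustration:

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash starts with "0000".
import hashlib
from multiprocessing import Pool

BLOCK = b"example block data"   # made-up stand-in for real block contents
DIFFICULTY = "0000"             # real Bitcoin requires far more leading zeros

def search(nonce_range):
    """Scan one slice of the nonce space; return the first winner, if any."""
    for nonce in range(*nonce_range):
        digest = hashlib.sha256(BLOCK + str(nonce).encode()).hexdigest()
        if digest.startswith(DIFFICULTY):
            return nonce, digest
    return None

if __name__ == "__main__":
    # Four workers each take a different slice of nonces at the same time.
    slices = [(i * 250_000, (i + 1) * 250_000) for i in range(4)]
    with Pool(processes=4) as pool:
        hits = [r for r in pool.map(search, slices) if r]
    print(hits[0] if hits else "no nonce found in this range")
```

Real miners run this kind of search across warehouses full of specialized chips, but the principle - divide the search space and hunt in parallel - is the same.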
8. The Internet of Things (IoT)
With 20 billion devices and more than 50 billion sensors, the floodgates are open on our daily data flow. From soil sensors to smart cars, drones, and pressure sensors, traditional computing can’t keep pace with the avalanche of real-time telemetry data from the IoT.
9. Multithreading
While multithreading has been around since the 1950s, the first multithreaded processor didn't hit consumer desktops until 2002 [13]. Multithreading is a software technique that lets a single program run several sequences of instructions (threads) at once, and it works best on parallel hardware.
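Here's a minimal multithreading sketch using Python's standard library; the one-second sleep is a hypothetical stand-in for slow I/O work like a network request. (One caveat we're adding: in CPython, threads shine on I/O-bound work, because the interpreter's global lock keeps pure-Python CPU work from spreading across cores.)

```python
# Four simulated one-second downloads, run on four threads at once.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(task_id):
    time.sleep(1)  # pretend this is a slow network request
    return f"task {task_id} done"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(4)))
print(results)
print(f"4 one-second tasks finished in {time.perf_counter() - start:.1f}s")
```

Run serially, the four tasks would take about 4 seconds; threaded, they finish in roughly 1.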
10. Python
Python's multiprocessing module simplifies parallel programming in the language. It uses "subprocesses" in place of threads. The difference? Threads share memory, while subprocesses use different memory "heaps." The upshot is a faster, fuller use of the machine's parallel hardware [14].
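Here's a small demonstration of that "separate heaps" point (the counter variable is our own toy example): each worker process gets its own copy of the global, so nothing the workers do ever changes the parent's value.

```python
# Each worker process mutates ITS OWN copy of COUNTER, not the parent's.
from multiprocessing import Pool

COUNTER = 0

def bump(_):
    global COUNTER
    COUNTER += 1      # changes the copy inside this worker process only
    return COUNTER

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(bump, range(4)))  # each worker reports its own tally
    print(COUNTER)  # still 0 in the parent: the heaps were never shared
```

To share results between processes you pass messages instead - for example, the return values that Pool.map collects, or an explicit multiprocessing.Queue.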
11. Parallel computing in R
The programming language R was developed as a serial coding language for statistical and graphical computing. Traditionally, R worked serially no matter how many cores your processor had. The parallel package, released in 2011, lets R programmers write parallel code and make efficient use of multiple cores [15].
12. Parallel Computing Toolbox
The Parallel Computing Toolbox from MathWorks lets programmers make the most of multi-core machines. This MATLAB toolbox lets users handle big data tasks too large for a single processor to grapple with [16].
Parallel vs distributed computing
How does parallel computing work? It either uses one machine with multiple processors or many machines cooperating in a network. There are 3 distinct memory architectures, listed below; a short sketch of the first two follows the list.
- Shared memory parallel computers use multiple processors to access the same memory resources. Examples of shared memory parallel architecture are modern laptops, desktops, and smartphones.
- Distributed memory parallel computers use multiple processors, each with their own memory, connected over a network. Examples of distributed systems include cloud computing, distributed rendering of computer graphics, and shared resource systems like SETI [17].
- Hybrid memory parallel systems combine shared-memory parallel computers and distributed memory networks. Most “distributed memory” networks are actually hybrids. You may have thousands of desktops and laptops with multi-core processors all connected in a network and working on a massive problem.
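Here's the promised sketch of the first two models, with toy workloads of our own invention. The threads all write to one shared list, while the processes can't see each other's data and must pass results over a queue (a stand-in for a network link):

```python
# Shared memory vs. message passing, side by side.
import threading
from multiprocessing import Process, Queue

# Shared memory: every thread appends to the SAME list.
shared_results = []
lock = threading.Lock()

def shared_worker(n):
    with lock:                      # coordinate access to shared data
        shared_results.append(n * n)

# "Distributed" memory: workers can't see each other's variables,
# so they send results back over a queue instead.
def distributed_worker(n, queue):
    queue.put(n * n)

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared memory:", sorted(shared_results))

    q = Queue()
    procs = [Process(target=distributed_worker, args=(i, q)) for i in range(4)]
    for p in procs:
        p.start()
    answers = [q.get() for _ in range(4)]
    for p in procs:
        p.join()
    print("message passing:", sorted(answers))
```

In a true distributed system the queue would be a network connection between machines, but the trade-off is the same: shared memory is convenient but needs careful coordination, while message passing scales across machines.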
What’s next for parallel computing?
As amazing as it is, parallel computing may be reaching the end of what it can do with traditional processors. In the next decade, quantum computers could vastly enhance parallel computations. How do we know? Google recently announced - unofficially - that it reached "quantum supremacy." If true, it has built a machine that can do in 4 minutes what the most powerful supercomputer on Earth would take 10,000 years to accomplish [18].
With quantum computing, parallel processing takes a huge leap forward. Think of it this way: serial computing does one thing at a time. An 8-core parallel computer can do 8 things at once. A 300-qubit quantum computer could do more operations at once than the number of atoms in our universe [19].
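That last claim is easy to sanity-check with a little arithmetic (the 10^80 figure for atoms in the observable universe is a common ballpark estimate, not from this article's sources):

```python
# A 300-qubit register has 2**300 basis states; compare that with the
# rough 10**80 estimate for atoms in the observable universe.
states = 2 ** 300
atoms = 10 ** 80
print(f"2^300 is about 10^{len(str(states)) - 1}")  # roughly 10^90
print(states > atoms)                               # True
```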
Parallel computing is all around us
At its simplest, parallel computing is part of the multi-core processors in our phones and laptops that make them run efficiently. At its most complex, it's the staggering 200,000+ cores in the Summit supercomputer that are helping us unlock problems in genetics, cancer, and the environment, and even model how a supernova works. It's the idea that a computer can break down a problem into parts and work on them at the same time. As the data in our world grows, parallel computing will keep pace to help us make sense of it.
About the Author
Tom Gerencer is a contributing writer for HP® Tech Takes. Tom is an ASJA journalist, career expert at Zety.com, and a regular contributor to Boys' Life and Scouting magazines. His work is featured in The Boston Globe, Costco Connection, FastCompany, and many more.