UMD Team Led by Vishkin Unveils Improved Parallel Programming Language
A team of University of Maryland researchers recently unveiled a new parallel programming language designed to increase speed and efficiency in multi-core processors, considered the backbone of most computing devices on the market today.
Led by Uzi Vishkin, a professor of electrical and computer engineering with an appointment in the University of Maryland Institute for Advanced Computer Studies, the UMD team developed Immediate Concurrent Execution, or ICE, which builds upon previous research establishing the utility of the theory of parallel algorithms developed in the 1980s and early 1990s.
ICE provides performance comparable to other highly optimized programs used in parallel computing, while requiring much less effort from the programmer and fewer lines of code, both important factors in productivity and ease of use for those tasked with writing code for multi-core processors.
The researchers published their work in the September 2017 issue of the journal IEEE Transactions on Parallel and Distributed Systems.
“Easy PRAM-Based High Performance Programming with ICE” was authored by Vishkin, Rajeev Barua, a professor of electrical and computer engineering with an appointment in the Institute for Systems Research, and Fady Ghanim, a doctoral student in electrical and computer engineering.
The paper explains that since 2005, all mainstream commercial computers—including most smartphones—have been using multi-core processors that rely on parallel computing, a paradigm that allows a computer to perform many different tasks simultaneously.
Programming of multi-core processor machines to operate in parallel is done through partitioning the task at hand to mostly self-controlling subtasks, called threads, whose operation the programmer is expected to orchestrate. This is often a difficult and costly undertaking, the researchers say.
To meet this challenge, Vishkin introduced in the late 1990s the concept of Explicit Multi-Threading, or XMT, a computer system that builds on Parallel Random-Access Machines, known as PRAM, considered the foremost parallel algorithmic theory at the time.
The UMD-designed XMT system makes programming simpler for software developers because it allows much shorter threads, greatly reducing the variance among subtasks even though the hardware “under the hood” is multi-threaded.
The paper goes on to explain that performance programming of standard computers relies on C language, and XMT performance programming is done using an extension of C called XMTC.
While driven by the lock-step, synchronous theory of parallel algorithms, XMT multithreaded programming still shared some of the challenges of orchestrating subtasks. The new ICE lock-step parallel programming language enables a completely different programming paradigm: tightly synchronous, threading-free programming for multi-threaded execution.
This novel arrangement using the ICE system provides performance comparable to highly optimized XMTC programs, while requiring much less effort from the programmer and fewer lines of code.
In fact, the textbook description of parallel algorithms is typically all that is needed for producing an ICE program, Vishkin says.
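The contrast between the two styles can be pictured with a toy task: incrementing every element of an array. The keywords below are illustrative approximations of the XMTC and ICE constructs described in the literature, not verbatim syntax from either language.

```
/* XMTC-style: the programmer explicitly launches a range of threads;
   "$" conventionally denotes the current thread's id. */
spawn(0, n - 1) {
    A[$] = A[$] + 1;
}

/* ICE-style: a lock-step "pardo" reads like the textbook PRAM
   algorithm; no threads are visible to the programmer. */
pardo (i = 0; i < n; i++) {
    A[i] = A[i] + 1;
}
```

The ICE version is essentially the serial loop with a parallel keyword, which is why, as Vishkin notes, a textbook PRAM description is typically all that is needed.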
“Regimenting divergent subtasks required by multi-threaded programming often amounts to herding cats,” he explains. “In contrast, regimenting ICE is no more than a variant of serial programming—the programming framework that brought about the prevalence of computing technology in today’s economy and science.”
The relevance of the recent work on ICE is potentially far-reaching, Vishkin says.
“During the 50-plus years when serial computing dominated, general-purpose processors were the quintessential form of computing,” he says. “But the transition to parallel computing in 2005 brought about a divergent reality in which computing was mostly delegated to accelerators like GPUs and other technologies.”
Now, Vishkin concludes, new research on ICE and similar programming concepts developed at UMD suggests that general-purpose processors can once again serve as a backbone platform for computing, as opposed to the current demand for GPUs and other accelerators.