Chips may be inherently vulnerable to Spectre and Meltdown attacks
Most malware exploits coding errors and poor design. But Google security researchers say a fundamental flaw in the nature of computing could make some threats impossible to defeat.
Malicious software represents an ongoing threat to modern life, attacking everything from databases and cameras to e-commerce, power stations, and hospitals. In its more insidious forms, malware can steal sensitive information without anyone knowing a leak has taken place.
The fight against these attacks rests on an important assumption: that suitably powerful and well-designed software can guarantee the security of any information. Indeed, vast cybersecurity businesses are based on this idea.
But today, Ross McIlroy and colleagues at Google say this assumption is dangerously wrong. Their work focuses on a new generation of malicious attacks that have forced them to reconsider the nature of cybersecurity and how it works.
The new attacks, known as Spectre and Meltdown, have been studied since early 2018. But their broader significance is only now becoming clear.
Google’s shocking discovery is that they exploit a foundational flaw in the way information processors work. And because of this, security experts may never be able to protect these devices—even in principle.
The Google team say the threat affects all chipmakers, including Intel, ARM, AMD, MIPS, IBM, and Oracle. “This class of flaws are deeper and more widely distributed than perhaps any security flaw in history, affecting billions of CPUs in production across all device classes,” say McIlroy and co.
In the past, malware has tended to exploit poorly designed code and the errors it contains. These errors provide malicious actors with ways to disrupt calculations or access confidential information. So an important approach is to fix these errors with software patches before they can be exploited.
But when the flaw is in the foundations of computer design, software patches offer meager protection. The challenge is that the very nature of computation allows information to leak via mechanisms called side channels.
One example of a side channel is the blinking lights on a modem, router or even a PC. Various security researchers have pointed out that the flashing is correlated with data transfer and that a malicious actor can simply watch the flashes to eavesdrop. Indeed, security researchers have demonstrated similar attacks with a bewildering array of side channels, including energy consumption, microphones, and high-resolution cameras.
The new threat is more insidious because it exists at the interface between hardware and software, known as the machine architecture. At this level, a processor treats all programming languages in the same way. It executes commands one after the other without regard for which program requested them.
Computer scientists have always assumed that these commands can be separated in a way that guarantees confidentiality. The thinking is that some suitably advanced software ought to be able to marshal the commands in a way that keeps them separated.
But the Google team’s key result is to show that this assumption is wrong. A processor cannot tell the difference between a good command and a malicious one—even in principle. So if a command tells it to send information to an area of memory that can be easily accessed later, the machine obeys.
It’s easy to imagine that this can be prevented with software that separates good commands from bad ones. But the Google team show that this just adds another layer of complexity to the challenge, along with a new set of potential side channels.
To show the ubiquity of the threat, the Google team constructed a “universal read gadget.” This is the ultimate eavesdropper—a routine that can read all addressable memory in a processor, unknown to the user.
It is by no means a perfect piece of software: it operates probabilistically and so can fail. But when it does succeed, there is no way to prevent it from working.
Variant 4 of the universal read gadget is particularly worrying. McIlroy and co say they were unable to find an effective way to combat it or reduce its threat. “We do not believe that variant 4 can be effectively mitigated in software,” they say.
The team’s attempts to combat these attacks had a significant impact on computing performance. For example, one form of mitigation for the first variant of the universal read gadget led to a 2.8x slowdown, as measured by a JavaScript benchmark suite called Octane.
During the last year, Intel has redesigned its chips in an attempt to mitigate the most serious threats from Spectre and Meltdown attacks. But this has reportedly come at the cost of a performance drop of up to 14%. And the modifications are unlikely to be fail-safe.
One reason for Google’s concern is the threat to e-commerce. It’s not hard to imagine an attack that reveals the cryptographic keys used to secure transactions, thereby allowing large-scale theft.
But the threat goes much deeper. Many of the problems arise from the complex architecture of these devices, which is built on intellectual property that manufacturers guard closely.
This complexity is itself part of the problem. The designs are based on abstract models that have become more complex as manufacturers have pursued the goal of faster computation. McIlroy and co show that these abstract models always have side channels that exist outside the model. “We have discovered that untrusted code can construct a universal read gadget to read all memory in the same address space through side-channels,” they say. “This puts arbitrary in-memory data at risk, even data ‘at rest’ that is not currently involved in computations and was previously considered safe from side-channel attacks.”
There is a little good news, however. So far there are no attacks known to exploit Spectre or Meltdown in the wild. For the moment, the threat is confined to the labs of cybersecurity researchers like McIlroy and his colleagues.
But that provides little comfort to chip makers and security experts. It is not hard to imagine that malicious actors—including state-sponsored teams—might be developing ways to exploit this vulnerability. This is a problem, as McIlroy and co say, that “seems destined to haunt us for a long time.”