In a first, researchers extract secret key used to encrypt Intel CPU code

Researchers have extracted the secret key that encrypts updates to an assortment of Intel CPUs, a feat that could have wide-ranging consequences for the way the chips are used and, possibly, the way they’re secured.

The key makes it possible to decrypt the microcode updates Intel provides to fix security vulnerabilities and other types of bugs. Having a decrypted copy of an update may allow hackers to reverse engineer it and learn precisely how to exploit the hole it’s patching. The key may also allow parties other than Intel—say a malicious hacker or a hobbyist—to update chips with their own microcode, although that customized version wouldn’t survive a reboot.

“At the moment, it is quite difficult to assess the security impact,” independent researcher Maxim Goryachy said in a direct message. “But in any case, this is the first time in the history of Intel processors when you can execute your microcode inside and analyze the updates.” Goryachy and two other researchers—Dmitry Sklyarov and Mark Ermolov, both with security firm Positive Technologies—worked jointly on the project.

The key can be extracted for any chip—be it a Celeron, Pentium, or Atom—that’s based on Intel’s Goldmont architecture.

Tumbling down the rabbit hole

The genesis for the discovery came three years ago, when Goryachy and Ermolov found a critical vulnerability, indexed as INTEL-SA-00086, that allowed them to execute code of their choice inside the independent core of chips that include a subsystem known as the Intel Management Engine. Intel fixed the bug and released a patch, but because chips can always be rolled back to an earlier firmware version and then exploited, there’s no way to effectively eliminate the vulnerability.

The Chip Red Pill logo. (Credit: Sklyarov et al.)

Five months ago, the trio was able to use the vulnerability to access “Red Unlock,” a service mode (see page 6 here) embedded into Intel chips. Company engineers use this mode to debug microcode before chips are publicly released. In a nod to The Matrix movie, the researchers named their tool for accessing this previously undocumented debugger Chip Red Pill, because it lets them experience a chip’s inner workings that are usually off-limits. The technique works over a USB cable or a special Intel adapter that pipes data to a vulnerable CPU.

Accessing a Goldmont-based CPU in Red Unlock mode allowed the researchers to extract a special ROM area known as the MSROM, short for microcode sequencer ROM. From there, they embarked on the painstaking process of reverse engineering the microcode. Months of analysis revealed the update process and the RC4 key it uses. The analysis, however, didn’t reveal the signing key Intel uses to cryptographically prove the authenticity of an update.
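
That RC4 key matters because RC4 is a symmetric stream cipher: whoever holds the key can regenerate the exact keystream the CPU uses internally and read an update’s plaintext. Below is a minimal sketch of textbook RC4 in Python; the key and data are placeholders, not values the researchers have published, and the real update format is more involved than a raw XOR over the blob.

    def rc4_keystream(key: bytes):
        # Key-scheduling algorithm (KSA): permute S based on the key
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA): yield keystream bytes indefinitely
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            yield S[(S[i] + S[j]) % 256]

    def rc4_apply(key: bytes, data: bytes) -> bytes:
        # Encryption and decryption are the same operation: XOR with the keystream
        return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

    # Round-trip demo with placeholder values
    demo_key = b"placeholder-key"              # hypothetical; the extracted key is not reproduced here
    plaintext = b"example microcode patch body"
    ciphertext = rc4_apply(demo_key, plaintext)
    assert rc4_apply(demo_key, ciphertext) == plaintext

Because the cipher is symmetric, recovering the key is enough to decrypt updates for analysis; it says nothing, by itself, about producing an update the chip would accept as genuine.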

In a statement, Intel officials wrote:

The issue described does not represent security exposure to customers, and we do not rely on obfuscation of information behind red unlock as a security measure. In addition to the INTEL-SA-00086 mitigation, OEMs following Intel’s manufacturing guidance have mitigated the OEM specific unlock capabilities required for this research.

The private key used to authenticate microcode does not reside in the silicon, and an attacker cannot load an unauthenticated patch on a remote system.

Impossible until now

What this means is that attackers can’t use Chip Red Pill and the decryption key it exposes to remotely hack vulnerable CPUs, at least not without chaining it to other vulnerabilities that are currently unknown. Similarly, attackers can’t use these techniques to infect the supply chain of Goldmont-based devices. But the technique does open possibilities for hackers who have physical access to a computer running one of these CPUs.
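
To make that distinction concrete, here is a hedged sketch of the kind of signature gate that separates decrypting an update from loading a forged one. It is a generic RSA example using Python’s third-party cryptography package, not Intel’s actual verification code, key size, or patch format; the point is only that a modified patch fails the check without the vendor’s private signing key, no matter what decryption keys an attacker holds.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def accept_update(body: bytes, signature: bytes, vendor_public_key) -> bool:
        # The loader only cares whether the signature verifies against the
        # vendor's public key; knowing how to decrypt the body never enters into it.
        try:
            vendor_public_key.verify(signature, body, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # Throwaway demo key pair -- in reality the private half never leaves the vendor
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    tampered_patch = b"decrypted, modified microcode body"
    forged_signature = b"\x00" * 256   # an attacker without the private key can only guess
    print(accept_update(tampered_patch, forged_signature, public_key))   # False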

“There’s a common misconception that modern CPUs are mostly fixed in place from the factory, and occasionally they will get narrowly scoped microcode updates for especially egregious bugs,” Kenn White, product security principal at MongoDB, told me. “But to the extent that’s true (and it largely isn’t), there are very few practical limits on what an engineer could do with the keys to the kingdom for that silicon.”

One possibility is that hobbyists could use it to root their CPU, much the way people have jailbroken or rooted iPhones and Android devices or hacked Sony’s PlayStation 3 console.

In theory, it might also be possible to use Chip Red Pill in an evil maid attack, in which someone with fleeting access to a device hacks it. But in either of these cases, the hack would be tethered, meaning it would last only as long as the device was turned on. Once restarted, the chip would return to its normal state. In some cases, the ability to execute arbitrary microcode inside the CPU may also be useful for attacks on cryptography keys, such as those used in trusted platform modules.

“For now, there’s only one but very important consequence: independent analysis of a microcode patch that was impossible until now,” Positive Technologies researcher Mark Ermolov said. “Now, researchers can see how Intel fixes one or another bug/vulnerability. And this is great. The encryption of microcode patches is a kind of security through obscurity.”

