What’s next for processor security?

(For further reading on security topics, I also suggest the Sigarch blog, which has run an excellent series of posts on a range of topics: https://www.sigarch.org/tag/security/.)

Processor security concerns have made the past several years a very exciting time to be a computer architect.  Traditional computer architecture is about optimizing the common case.  Processor security studies what can happen (good or bad) when you induce the worst case.  It is clear that when it comes to the worst case, we as computer architects have a lot more to learn about how software interacts with microarchitecture, and how microarchitecture interacts with circuits.

This blog post will discuss two major open challenges in processor security—one related to attacks and one related to defenses—that I believe computer architects are in the best position to solve.

Anticipating the Next “Spectre”

Announced in January 2018, Spectre and Meltdown have since become household names.  These attacks combine a well-studied phenomenon—that a program’s hardware resource usage depends on program data—with processor mis-speculation to disclose, in the worst case, all of program memory, kernel memory, etc.  This is obviously a security disaster, and it triggered a massive response across industry and academia, with mitigations spanning the computing stack.
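To make the mechanism concrete, below is a minimal sketch in C of the well-known Spectre variant 1 (“bounds check bypass”) pattern; the array names and sizes are illustrative.  A mis-predicted branch lets the first load read out of bounds, and a second, data-dependent load leaves a secret-dependent footprint in the cache.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1_size = 16;
uint8_t array1[16];        /* attacker-reachable array */
uint8_t array2[256 * 512]; /* probe array: 512-byte stride per possible byte value */

void victim_function(size_t x) {
    if (x < array1_size) {          /* branch the CPU may mis-speculate past */
        uint8_t secret = array1[x]; /* out-of-bounds read under mis-speculation */
        /* Data-dependent access: which line of array2 gets cached encodes `secret`. */
        volatile uint8_t tmp = array2[secret * 512];
        (void)tmp;
    }
}

int main(void) {
    victim_function(0); /* architecturally in-bounds; the danger is the speculative path */
    return 0;
}
```

An attacker trains the branch predictor, invokes the victim with an out-of-bounds x, and then times accesses to array2 to recover which cache line was filled—hence which secret byte was read.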

These attacks have led many security and computer architecture researchers to describe the threat landscape with the figure below (left).  The idea is that there are so-called “traditional” attacks (pre-Spectre/Meltdown), such as cache-based side channels leaking cryptographic keys, and then there are Spectre and Meltdown.  Importantly, all three attack classes result from software<->hardware interactions (in this case, how hardware resource usage reflects a program’s behavior, as mentioned above).

Through my own work in this area, I have come to see the landscape somewhat differently, as shown on the right side of the figure.  At a high level, we still have traditional attacks, Spectre and Meltdown.  (Although I draw the circles differently: since the security property needed to block Meltdown is relatively well understood, I draw its circle smaller; since in many cases Spectre discloses data that would have been disclosed by a traditional attack anyway, I draw it with more overlap with traditional attacks.)  But, most importantly, there is a new category that is intentionally drawn much larger than any of the other circles.  This is reserved for the next debilitating processor security vulnerability, which need not have anything to do with speculative execution.

In short, speculative execution is just one of many tricks we play to get performance out of a multi-billion-transistor chip.  It stands to reason that other characteristics of software, microarchitecture and circuits—and their interplay—will lead to similarly disastrous security situations.  For example, we now know of at least two other hardware mechanisms that, when utilized in a worst-case fashion, can leak all of a program’s memory (DRAM cells and compressed caches).  And leaking program memory is just one example security violation.  Recent work on misusing processor dynamic voltage and frequency scaling, or DVFS (CLKSCREW and Plundervolt), illustrates how subtle interactions between microarchitecture, layout and circuits can be manipulated to write to a victim’s memory.

We need to be feverishly on the lookout for these next big vulnerabilities before they appear in deployed hardware.  I have seen sentiment in our community that coming up with attacks should be the security community’s job, not our job.  As a counterargument, let me quote a colleague from the security community who finds processor vulnerabilities for a living: “Right now we are engaged in pure science, akin to dissecting a frog.  But, in this case, this frog actually is the product of intelligent design.”  That is, the approach in the security community is by definition reactive and slow to develop: attacks are only taken seriously once they have been found in real products, and, due to a lack of computer architecture expertise, take longer to find than they ideally should.  Computer architects can clear both of these hurdles, and I call on our community, today, to ramp up this effort and enable safer processors tomorrow.

Designing Formally Verifiable Processors

The second major open challenge is to design methodologies for, and implementations of, formally verifiable and formally verified processors.  To start, hardware security today is dominated by point solutions: for example, a hardware mechanism that blocks one specific variant of one specific class of attack.  These often do not enforce precise security properties, compose with other point solutions, or lend themselves to automated verification.  Further, as their scope is narrow, such defenses can usually be evaded by clever attackers who change their strategy to take the defense into account.  By contrast, we should be building systems that simultaneously tolerate broader classes of attackers (and whatever evasion strategy those attackers come up with), permit important performance optimizations, and enable machine-checked verification.

To get there, I argue the main paradigm shift needed is to start developing verifiable hardware.  Said another way, even with powerful formal verification tools, it is difficult to design hardware that is even amenable to formal verification.  Consider this anecdote related to speculative execution attacks like Spectre: I was talking to a colleague in industry who designs hardware page walkers.  The colleague remarked that their hardware block—which takes as input virtual page numbers corresponding to TLB misses and outputs cache addresses to traverse the page tables—isn’t even aware of whether an incoming TLB miss is speculative or not.  That is, how is this hardware block supposed to uphold a security invariant related to speculative execution when its interface lacks key information related to the invariant?
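As a concrete illustration of the anecdote, here is a minimal sketch in C of such an interface; the types and names are hypothetical, not any vendor’s actual design.  The point is simply that nothing at the interface tells the walker whether the miss it is servicing is speculative, so the walker cannot participate in enforcing the invariant.

```c
#include <stdint.h>

/* Hypothetical page-walker request: only the virtual page number of the
   TLB miss crosses the interface.  Nothing indicates whether the access
   that missed is speculative. */
typedef struct {
    uint64_t vpn; /* virtual page number that missed in the TLB */
} walk_request_t;

/* The walker traverses the page tables, issuing memory accesses (and thus
   changing cache state) for every request -- including mis-speculated ones. */
uint64_t walk(walk_request_t req) {
    /* Stub standing in for the multi-level page-table traversal. */
    return req.vpn; /* placeholder for the translated physical page number */
}

int main(void) {
    walk_request_t req = { .vpn = 0x42 };
    return walk(req) == 0x42 ? 0 : 1;
}
```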

Designing formally verifiable hardware requires co-design among security properties, abstractions/interfaces, implementations and the capabilities of formal verification tools.  Security properties (such as safety properties or hyperproperties) define requirements on confidentiality, integrity and availability.  One example is that processor timing should be independent of sensitive data (also see Mark Hill’s related post on Architecture 2.0).  Abstractions/interfaces (such as instruction set architectures or the input/output ports for individual hardware blocks) implement security properties for individual components, or provide a way to compose parts of a design so that the overall system can enforce the security property.  For example, the page walker interface could include the status (speculative vs. non-speculative) of each TLB miss, and refuse to service a miss until it is non-speculative.  Hardware implementations are, of course, “implementations” of abstractions/interfaces that uphold the stated functionality and security requirements.
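Continuing the hypothetical page-walker sketch from above, the revised interface could look like the following.  Gating on the speculative bit is just one possible policy (it trades performance for simplicity), but it shows how exposing the right information at the interface makes the invariant enforceable, and checkable block by block.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical revised request: each TLB miss is now tagged with whether
   the access that caused it is still speculative. */
typedef struct {
    uint64_t vpn;         /* virtual page number that missed in the TLB */
    bool     speculative; /* true until the core resolves the access */
} walk_request_t;

typedef enum { WALK_DONE, WALK_DEFERRED } walk_status_t;

static uint64_t do_page_table_walk(uint64_t vpn) {
    /* Stub standing in for the multi-level page-table traversal. */
    return vpn; /* placeholder translation */
}

/* Refuse to touch the memory hierarchy on behalf of speculative misses;
   the core retries once the access is non-speculative, or squashes it. */
walk_status_t walk(const walk_request_t *req, uint64_t *ppn_out) {
    if (req->speculative)
        return WALK_DEFERRED;
    *ppn_out = do_page_table_walk(req->vpn);
    return WALK_DONE;
}

int main(void) {
    uint64_t ppn;
    walk_request_t spec = { .vpn = 0x42, .speculative = true  };
    walk_request_t safe = { .vpn = 0x42, .speculative = false };
    walk_status_t a = walk(&spec, &ppn); /* WALK_DEFERRED: no memory traffic */
    walk_status_t b = walk(&safe, &ppn); /* WALK_DONE: walk proceeds */
    return (a == WALK_DEFERRED && b == WALK_DONE) ? 0 : 1;
}
```

Real designs might instead allow the speculative walk but undo or hide its side effects; the essential change is that the security-relevant state now crosses the interface at all.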

Finally, all of the above must be done in the context of what is practical for formal verification tools, and we must actually commit to performing that verification.  There is no way around this step.  Quoting another industry colleague: “the largest RISC-V core is smaller than our smallest FUB (Functional Unit Block).”  That may sound like an exaggeration, but it makes clear that commercial processors are far too complex for manual verification, especially when that verification involves checking abstract requirements like security properties.  Committing to formal verification also likely has ripple effects felt throughout the design process.  For example, it may be the case that tools can only scale to verify individual hardware blocks, which clearly influences hardware implementation and interface design.

To summarize, putting the above pieces together is clearly a moonshot problem, but one that will significantly advance our knowledge of how to build secure hardware systems.  All of the above processes must work in concert.  That is, a buggy implementation undermines its interface and can be caught by automated tools, but the automated tools need the interface and property design to perform the verification.  It is also clear that this is in our — Computer Architecture’s — wheelhouse to solve.  Using Mark Hill’s terminology, Architecture 1.0 is deeply rooted in understanding interface vs. implementation.  Designing secure systems rests on much the same principles; we just extend our definition of what it means to be correct.
