Current Trusted Execution Environment Landscape

Running software on someone else's server raises a problem: you cannot be sure that your data and code have not been observed, or worse, tampered with. Trust is your only guarantee.


But there is hope, in the form of Trusted Execution Environments (TEEs) and the TEE runtimes built on top of them, which can be leveraged to minimize the trust you need to run confidently on someone else's hardware.

This article is a TEE primer: it explains how TEEs work, what their limitations are, and how projects such as Enarx aim to address those limitations.

Problems Trusted Execution Environments Solve

A basic reality of running software is that any lower layer of the computing stack on the same machine can control and inspect the running software. This applies to the operating system, the virtual machine manager (VMM, or hypervisor), the container management stack (if any), and any other middleware.

Thus, anyone with root access to a computer, whether legitimate or not, can see, modify, terminate, and otherwise tamper with any code and data running on it.

For anyone running programs on someone else's machine, this means there are essentially no security or privacy guarantees.

In cloud environments, where control of thousands of physical machines hosting many more virtual machines is entrusted to the service provider, some organizations see this lack of basic security and privacy guarantees as a problem.


A Trusted Execution Environment (TEE) is a response to the need to maintain data confidentiality and integrity "in use", that is, at runtime (program execution), regardless of who may own or have access to the machine running the software.

What do TEEs bring that we couldn't do before?

Trusted Execution Environments (TEEs) are a fairly new technical approach to solving some of these problems.

They allow you to run applications in a set of memory pages that are encrypted by the host CPU so that even the owner of the host system cannot peek into or modify the running processes in the TEE instance.

All TEEs provide confidentiality guarantees for the code and data running within them, meaning that running workloads cannot be seen from outside the TEE.


TEEs also provide memory integrity protection ([4], [5]), which prevents data loaded into the TEE from being modified from the outside (we will return to this issue below).

But TEEs are not a cure-all. The lower layers of the stack still control scheduling and TEE startup, and can block syscalls, so a malicious host can still deny service to the workload.

There are also various attacks (including but not limited to replay, TOCTOU, and Foreshadow) that have been successfully mounted against previous or current TEE implementations ([3], [7]).

  • replay:https://en.wikipedia.org/wiki/Replay_attack
  • TOCTOU:https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use
  • Foreshadow:https://foreshadowattack.eu/

However, TEEs provide a new capability: user-space applications can run in a way that is not visible to the operating system, the VMM, or middleware. They have the potential to enable security and privacy features for sensitive workloads in environments, such as the cloud, where those features were previously unavailable.

As is always the case in security, no technology solves the problem once and for all; TEEs simply keep raising the bar for attackers and make the platform more secure.

How TEEs differ from TPMs and HSMs

Other classes of hardware for specialized cryptographic purposes already exist, notably Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs). However, the purpose of a TEE is quite different from these other classes of cryptographic hardware.

A TPM is a chip designed to provide a "hardware root of trust" by holding secrets (keys) in such a way that physically opening the chip, or removing it from the motherboard it is soldered to, in order to get at those secrets is both difficult and immediately evident.

TPMs are not designed to provide general computing capacity. They do offer some basic (read: "slow") computation: they can generate random keys, encrypt small amounts of data with a secret they hold, and measure components of a system, maintaining a log of these measurements in Platform Configuration Registers (PCRs).

You can implement many of a TPM's functions in a TEE, but it does not make sense to create a "full" TPM implementation inside a TEE: one of the key use cases of a TPM is measuring the boot sequence via its PCRs, which happens before a general processing environment such as a TEE is even available.
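To make the PCR mechanism concrete, the following is a minimal sketch (plain Rust, not an interface to any real TPM stack; the sha2 crate is assumed as a dependency) of the "extend" operation a TPM performs when it records a measurement: the new PCR value is the hash of the old value concatenated with the digest of the measured component, so each register accumulates an order-dependent log of everything measured.

```rust
// Minimal sketch of a TPM-style PCR "extend" operation (illustration only,
// not a real TPM interface). Assumes the `sha2` crate is available.
use sha2::{Digest, Sha256};

/// PCR_new = SHA-256(PCR_old || SHA-256(component))
fn pcr_extend(pcr: &[u8; 32], component: &[u8]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(pcr);                        // previous register value
    hasher.update(Sha256::digest(component));  // digest of the measured component
    hasher.finalize().into()
}

fn main() {
    // PCRs start at all zeros; each boot component is folded in, in order.
    let mut pcr = [0u8; 32];
    for component in [&b"firmware"[..], &b"bootloader"[..], &b"kernel"[..]] {
        pcr = pcr_extend(&pcr, component);
    }
    println!("final PCR value: {}", hex(&pcr));
}

// Tiny hex helper to avoid an extra dependency.
fn hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}
```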

Unlike TPMs, TEEs do not provide a good physical basis for trust. The capabilities of a TPM are also carefully scoped to meet the requirements of the TCG (Trusted Computing Group, the standards body for TPMs), and are more restrictive than those of a TEE.

A hardware security module (HSM), on the other hand, is an external physical device dedicated to cryptographic operations: it typically receives plaintext, encrypts it with a key it holds, and returns the ciphertext, so that the operating system never handles the encryption keys.

Like TPMs, they are designed to resist, detect, and/or make evident physical tampering, which makes them useful tools for keeping secrets in a safe place. They usually provide a higher level of protection than a TEE, but they are separate modules from the main CPU and motherboard, accessed via the PCI bus, the network, or similar.

All TEE instances, and some HSMs (depending on the model), can be used as general-purpose processing units, or can be programmed for specific purposes (e.g. as a PKCS#11 module). HSMs are expensive (typically thousands of dollars), whereas TEEs are part of normally priced chips. The work of programming an HSM for a specific task (beyond its modular use) is often very difficult and requires specialist skill.

To sum up, these three classes of hardware can be characterized as follows:

  • TEEs provide a general processing environment. They are built into the chipset.
  • TPMs provide a physical root of trust, measurement of other components and of the boot sequence, and limited processing capability. They are inexpensive chips built into many computers.
  • HSMs provide a secure environment in which to store secrets and process data, and can offer a general processing environment. They are expensive external devices that usually require specialist skill to use properly.
Device | Processing capabilities | Complexity | Cost
TEE    | general                 | high       | none (built-in)
TPM    | limited                 | average    | very low
HSM    | general                 | very high  | high

Finally, we should mention earlier approaches to TEEs that do not quite fit our definition.

For example, recent iPhones have a "Secure Enclave," an entirely separate CPU that runs alongside the main CPU, while Android phones using ARM chips include a system called TrustZone.

A TEE must provide a trusted environment into which software can be loaded from the normal operating system, whereas these earlier models rely on a second operating environment running in parallel with the normal one.

This approach provides some of the functionality we want from a TEE, but also introduces some issues and limitations, such as limiting the ability of ordinary users to run software from userland in a trusted environment.

(These designs essentially time-share the CPU between the TEE and the Rich Execution Environment (REE).)

Different Types of TEE

While there is some consensus on their goals, there are multiple approaches to TEE architecture and implementation.

Different approaches, but no standard.

As mentioned earlier, a TEE provides confidentiality for userspace software by encrypting a range of memory with a key (or keys) held in hardware and unavailable to the operating system or any other software, even software running at the highest privilege level.

Beyond that, however, there is currently no industry consensus on the safest or most efficient way to create a TEE, and various hardware manufacturers have created fundamentally different implementations.

What all of these implementations share is a reliance on the CPU to create and enforce access to the TEE, and the ability for the end user to specify which processes should run in encrypted memory regions.

From here, the industry is currently divided into two different TEE models: the process-based model (such as Intel's SGX (9)) and the VM-based model (such as AMD's SEV (10)). It is worth noting that a CPU has to be specifically designed, with accompanying firmware, to support a TEE; most CPUs in 2019 do not support a TEE of any kind.
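As a practical aside, support for these extensions can be probed from software. The sketch below uses the x86-64 CPUID intrinsics in ordinary Rust; the bit positions follow Intel's and AMD's documentation, and a set bit only means the silicon has the feature, not that firmware or the operating system has enabled it.

```rust
// Minimal sketch: probing CPUID for TEE-related CPU extensions on x86-64.
// A set bit only means the silicon has the feature; BIOS/firmware and the
// kernel must still enable and expose it.
#[cfg(target_arch = "x86_64")]
fn main() {
    use std::arch::x86_64::{__cpuid, __cpuid_count};

    // Intel SGX: CPUID.(EAX=07H, ECX=0):EBX bit 2
    let sgx = unsafe { __cpuid_count(0x7, 0) }.ebx & (1 << 2) != 0;

    // AMD SEV: CPUID Fn8000_001F, EAX bit 1 (bit 0 is SME).
    // Check the highest extended leaf first so we don't read a bogus leaf.
    let max_ext = unsafe { __cpuid(0x8000_0000) }.eax;
    let sev = max_ext >= 0x8000_001F
        && unsafe { __cpuid(0x8000_001F) }.eax & (1 << 1) != 0;

    println!("SGX supported by CPU: {sgx}");
    println!("SEV supported by CPU: {sev}");
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {
    println!("CPUID probing in this sketch only applies to x86-64.");
}
```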

Process-based TEE

In the process-based TEE model, an application that needs to run securely is divided into two parts: a trusted component (assumed to be secure) and an untrusted component (assumed to be insecure).

  • The trusted component resides in encrypted memory and handles the confidential computation;
  • The untrusted component interfaces with the operating system and propagates I/O between encrypted memory and the rest of the system.

Data can only enter and leave this encrypted region through predefined channels, and the size and type of the data passed through them are strictly checked. Ideally, all data entering or leaving the encrypted region is also encrypted in transit, and is only decrypted once it reaches the TEE, at which point it is visible only to the software running inside the TEE.
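The following is a rough, purely conceptual sketch of that programming model. It is plain Rust, not the Intel SGX SDK, and provides no hardware protection whatsoever; the names and the size limit are hypothetical. It only illustrates the shape of the split: the untrusted side talks to the rest of the system and may hand data to the trusted side only through one predefined, size-checked entry point.

```rust
// Conceptual sketch of the process-based split (illustration only: plain Rust,
// no hardware protection, not the Intel SGX SDK). Names are hypothetical.

/// Size limit enforced on the only predefined channel into the trusted part.
const MAX_INPUT: usize = 4096;

mod trusted {
    /// Runs inside the (hypothetical) encrypted memory region.
    pub fn confidential_sum(input: &[u8]) -> u64 {
        input.iter().map(|&b| b as u64).sum()
    }
}

mod untrusted {
    use super::{trusted, MAX_INPUT};

    /// Untrusted side: talks to the OS, does I/O, and forwards requests
    /// through the single size-checked entry point ("ecall"-like).
    pub fn ecall_sum(input: &[u8]) -> Result<u64, &'static str> {
        if input.len() > MAX_INPUT {
            return Err("input rejected: exceeds the channel's size limit");
        }
        Ok(trusted::confidential_sum(input))
    }
}

fn main() {
    let data = vec![1u8; 16];
    match untrusted::ecall_sum(&data) {
        Ok(sum) => println!("trusted component returned {sum}"),
        Err(e) => eprintln!("{e}"),
    }
}
```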

Advantages of this model include a smaller Trusted Computing Base (TCB) compared to VM-based models, since only the CPU and process-specific components are trusted (1). Smaller TCBs generally mean less room for error because fewer components are involved in trusted work.

This also allows all input to and output from the TEE to be monitored, arguably improving security. Additionally, current implementations such as Intel's SGX provide memory integrity protection.

An oft-cited shortcoming of this model is its lack of bi-directional isolation: while the TEE's process enjoys hardware protection from other processes and from the lower layers of the stack, the converse does not hold. There is no hardware protection preventing software inside the TEE from accessing or interfering with other processes or the operating system, which are protected only by standard access controls.

This one-sided protection raises serious concerns about TEEs being misused to house malware: because of these hardware protections, the operating system would have a harder time rooting out malware running inside a TEE.

Another major disadvantage is the need to develop applications specifically for this type of TEE, for example against Intel's SGX SDK, which divides the program into trusted and untrusted components.

While there are years of academic research and practical experience in using VM boundaries for process isolation, this is not the case for process-based models. There is some debate as to whether this is an advantage or a disadvantage, as disrupting traditional hierarchical trust models and imposing new security boundaries creates uncertainty.

Current implementations of process-based approaches include Intel's SGX (Software Guard Extensions). Another process-based TEE known today is OpenPOWER's Sanctum, which has not entered the market at the time of writing.

VM-based TEE

In this model, memory is encrypted along traditional VM boundaries running on top of the VMM.

While traditional VMs (as well as containers) provide some degree of isolation, the virtual machines in this TEE model are protected by hardware-based encryption keys that prevent interference by a malicious VMM (2).

Current implementations, such as AMD's SEV, provide individual ephemeral encryption keys to each virtual machine, thus also protecting virtual machines from each other.

A significant advantage of this model is that it provides bi-directional isolation between the VM and the system, so there is less concern about this type of TEE containing malware capable of interfering with the rest of the system.

AMD's implementation of this model also places no requirements on software development, meaning developers do not need to write code against a specific API to run it in this type of TEE.

However, this latter advantage is tempered by the fact that the VMM running the software must be written against a custom API (8).

Disadvantages of this model include a relatively large TCB, which includes the operating system running inside the VM (1) and therefore, in theory, a larger attack surface. Current implementations, such as AMD's SEV, also allow the VMM to control data input to the trusted VM (3), meaning the host can still potentially alter workloads that were assumed to be safe. Finally, the model requires a kernel and hardware emulation inside the virtual machine, making it relatively heavyweight, especially for microservices.
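Whatever the trade-offs, the first practical question is whether a given host can run this model at all. On a Linux/KVM host, a quick (and admittedly kernel-version-dependent) check is whether the kvm_amd module reports SEV as enabled and whether the /dev/sev firmware device exists; the sketch below assumes those paths.

```rust
// Sketch: checking whether a Linux/KVM host appears ready to launch SEV guests.
// Paths are kernel-version dependent; treat this as illustrative, not exhaustive.
use std::fs;
use std::path::Path;

fn main() {
    // "1" or "Y" here means the kvm_amd module was loaded with SEV enabled.
    let param = fs::read_to_string("/sys/module/kvm_amd/parameters/sev")
        .map(|s| s.trim().to_owned())
        .unwrap_or_else(|_| "unavailable".into());

    // /dev/sev is the interface to the AMD secure processor's SEV firmware.
    let has_sev_dev = Path::new("/dev/sev").exists();

    println!("kvm_amd sev parameter: {param}");
    println!("/dev/sev present:      {has_sev_dev}");
}
```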

AMD's SEV is the most fully developed implementation of this model, although there are others, such as Intel's MKTME (Multi-Key Total Memory Encryption, 12). A third implementation is IBM's Protected Execution Facility or "PEF", which will be open source (6).

Other platforms

There has been some discussion of TEEs on other hardware platforms, including for example the MIPS architecture.

Current approaches are highly dependent on specific technologies.

As we have seen, there are two broad models of TEEs. But beyond that, how do you actually get your code to run in one?

The situation here is by no means simple.

Writing TEE applications

Given the current lack of standardization of TEEs, two different implementations of TEEs may not necessarily provide the same security or performance results.

To make matters worse, applications that need to run in a TEE (or the custom VMMs that run them) must currently be developed specifically for each of these hardware technologies.

This is inconvenient for development, can lead to a lack of compatibility between software versions (those that can take advantage of a TEE versus those that cannot), and makes it difficult to move software between different TEE implementations.

For example, developing an application for Intel SGX requires defining all input and output channels of the TEE, as well as trusted and untrusted components.

However, these definitions are meaningless for versions of the application running on CPUs without TEE capability, so TEE-compatible and non-TEE-compatible versions of the software need to be maintained separately. More recently, efforts have been made to reduce this friction for developers who want to write code for some TEE implementations, most notably the Open Enclave project.

The effort required by developers to write applications for currently available TEE technologies will likely be repeated again to take advantage of future TEEs that may offer better security or performance benefits.

Attestation

A key aspect of deploying software to a TEE is the "trusted" part: making sure that you are actually deploying to a genuine trusted execution environment, and not to something merely masquerading as one. Essentially, a TEE needs to prove that it is genuine before it can be trusted: this process is called attestation.

Only a real TEE running on a real TEE-capable CPU can create a valid attestation, and ideally this should be easy to check on the verifier's side. In the cloud computing example, the verifier is an individual or organization that wants to use a cloud environment to run confidential workloads on machines it does not own.
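To give a feel for the verifier's side, here is a deliberately simplified sketch. A real attestation flow must also validate a signature over the report that chains back to the hardware vendor's root key, and usually includes a nonce to prevent replay; both are omitted here, and all names are hypothetical. What remains is the core comparison: the measurement reported by the TEE against the measurement the verifier expects for the workload it asked to be loaded.

```rust
// Highly simplified sketch of the verifier's side of attestation.
// A real flow must also verify that the report is signed by a key that chains
// back to the hardware vendor's root certificate, and bind a fresh nonce;
// those steps are deliberately omitted. All names are hypothetical.
// Assumes the `sha2` crate is available.
use sha2::{Digest, Sha256};

/// What the (hypothetical) TEE reports back after launch.
struct AttestationReport {
    measurement: [u8; 32], // hash of the code/data loaded into the TEE
    // ... in reality: platform info, a nonce, and a signature over all of it
}

/// The verifier recomputes the expected measurement from the workload it
/// intended to deploy and compares it with what the TEE reported.
fn verify(report: &AttestationReport, expected_workload: &[u8]) -> bool {
    let expected: [u8; 32] = Sha256::digest(expected_workload).into();
    report.measurement == expected
}

fn main() {
    let workload = b"my confidential application image";
    let report = AttestationReport {
        measurement: Sha256::digest(workload).into(),
    };
    println!("measurement matches: {}", verify(&report, workload));
}
```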

While attestation is critical to using any security feature of a TEE, there is currently no standard for attestation, and the burden of creating and implementing attestation methods falls on those who develop and deploy applications. This makes it rather difficult to use TEEs in practice and prevents their widespread adoption.

Both TEE models currently rely on a certificate chain from the manufacturer to attest that the CPU is genuine, and then report measurements of the TEE after launch (allowing its contents to be verified). However, they differ in the kinds and number of keys that must be validated by the certificate chain, and in the sequence of operations in the attestation process.

The lack of standardization in both the development APIs and the attestation process means that once code is written for one platform-specific TEE implementation, the developers and users of that software are locked in to that platform.

Rewriting the software or the custom VMM that runs it, or recreating the attestation verification process for a different platform with a different TEE implementation, requires a significant investment of time. This also negatively impacts users of cloud platforms, as well as cloud service providers (CSPs) themselves, since users cannot easily take advantage of new TEEs offered by a CSP when their software is tied to a particular physical implementation.

Conclusion

People are increasingly aware of the importance of encrypting data at rest (with full disk encryption) or in transit (TLS and HTTPS), but we have only recently developed the technical capability to encrypt data at runtime.

Trusted Execution Environments are an exciting advancement in confidentiality. The ability to encrypt data at runtime provides developers and users of software with previously unavailable security and privacy features.

While this is an exciting time for security, there are currently some huge gaps in the standardization of this new technology.

https://next.redhat.com/2019/12/02/current-trusted-execution-environment-landscape/
