Trust No One, Run Everywhere - Introducing Enarx

When you run a workload as a VM, container, or in a serverless environment, that workload is vulnerable to interference from any person or piece of software with hypervisor, root, or kernel access.

Enarx is a new open source project designed to simplify the process of deploying workloads to various Trusted Execution Environments (TEEs), in the public cloud, on-premises or elsewhere, and keep your application workloads as secure as possible.

When you run workloads in the cloud, there is no technical barrier that prevents the cloud provider or its employees from seeing your workload, seeing the data, or even changing the running process.

That's because when you run workloads as VMs, containers, or serverless, the way they're implemented means that a human or software entity with sufficient access can interfere with any process running on that machine.

There are other issues to consider. Not only do you need to trust the public cloud provider and its employees, but you also need to trust all software running on their systems, because if it is malicious or compromised, then other people or entities could view or interfere with your workload.

You have to trust the OS, the firmware, the hypervisor, the application stack, third-party libraries, middleware, and drivers: not one of them, on any of these systems, can be malicious or compromised.

Of course, public cloud providers generally have little incentive to interfere with your workloads -- except at the request of law enforcement -- and they have many incentives to run them properly.

They work hard to make the systems on which your workloads run as secure as possible. But their focus is not on protecting your workload from them; it is on protecting their own servers from your workload, and on protecting your workload from other tenants' workloads.

That's not to say they don't take steps to protect you, but given most existing threats, it's understandable that protecting you from malicious workloads and tenants is a top priority.

The decision businesses and organizations need to make is whether they are willing to take on the risk of running sensitive workloads -- customer details, financial information, legal proceedings, payroll data, medical records, firewall settings, authentication databases, or countless others -- in the public cloud.

What are the options? If you don't want to trust a cloud service provider, it might seem safer to run your workloads on your own premises, but even that is far from perfect.

If you run servers on your own premises, there are still people running them -- operators, system administrators, database specialists -- and many of them have access to the same data and processes.

How happy are you that a junior sysadmin can see -- and potentially change -- the CEO's salary and compensation package, or leak the details of an acquisition?

Trust

So what exactly is the problem? A classic way of thinking about a computer system is as a set of layers, called a stack. A stack consists of various hardware, firmware, and software layers, which are almost always provided by different entities.

The diagram below shows an example stack for a cloud virtualization architecture, where applications (your workloads) run on top of various layers. In this example, the colors represent the different providers of each layer -- five different entities here, and the number could easily be higher.

In order for you (the workload owner) to ensure that your application is as secure as possible, you need to be confident that no layer is malicious or compromised, and even if that were true, you would still need to trust the cloud service provider that actually operates the host system.

[Figure: example stack for a cloud virtualization architecture]
The next diagram shows a more modern example, where workloads ("applications") are running in containers.

Clearly, there are now not only more layers to protect -- or, if you're a malicious actor, to exploit -- but also more providers: seven in this case.

Of course, there are many techniques for addressing security at the different layers, not least sourcing those layers from trusted and reputable organizations, but ultimately there is still a large attack surface that malicious actors can exploit.

[Figure: example stack for a container-based architecture]

TEEs to the Rescue?

Trusted Execution Environments (TEEs) are a fairly new technological approach to solving some of these problems. They allow you to run applications within a set of memory pages that are encrypted by the host CPU in such a way that even the owner of the host system cannot view or modify the processes running inside the TEE instance.

Most notable are AMD's Secure Encrypted Virtualization (SEV) and Intel's Software Guard Extensions (SGX), although other silicon vendors are also discussing alternatives for their own architectures. AMD and Intel have taken very different approaches, a point we'll come back to below.

As mentioned above, the use of TEEs allows you to design your application in such a way that the host cannot see the TEE instances running your workload. This is an improvement on the current state and, if implemented properly, essentially eliminates the need to trust cloud service providers or even internal staff, but there are still some issues.

Before we address those issues, however, there's a worrying phrase in the previous paragraph: "if implemented properly." The problem is that setting up and running a workload in a TEE is far from an easy proposition.

Imagine if someone could fake a TEE and make you believe that your sensitive workload was running in a TEE when it was actually executing, without any TEE protection, on a standard -- or even compromised -- host system.

Arguably, you're in a worse position than you were before TEEs, because at least back then you knew to be careful with sensitive workloads: now you have a false sense of security and have been tricked into exposing applications that need protection.

Of course, the silicon vendors have noticed the problem of spoofed TEE instances, so they provide mechanisms by which the CPU (which sets up the TEE instance) can prove to the party wishing to run the workload that the instance is genuine, with cryptographic backing: this is called attestation.

Performing a full attestation and checking its correctness is not a simple process, and designing it into an application can be complex, but let's assume you've got this far and are ready to move on.
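To make that concrete, here is a minimal, self-contained sketch in Rust (the language the Enarx project itself is written in) of the decision an attestation check ultimately has to make. The types, field names, and hard-coded values below are purely hypothetical illustrations, not the Enarx API or any vendor's real report format:

```rust
// Hypothetical sketch: what "checking an attestation" boils down to.
// Real reports are hardware-signed structures from AMD SEV or Intel SGX.

/// A simplified "attestation report" produced by the CPU.
struct AttestationReport {
    /// Measurement (hash) of the code and data loaded into the TEE instance.
    measurement: [u8; 4],
    /// Whether the report's signature chains back to the silicon vendor's root key.
    vendor_signature_valid: bool,
}

/// The measurement we expect if the TEE really contains our workload (made-up value).
const EXPECTED_MEASUREMENT: [u8; 4] = [0xde, 0xad, 0xbe, 0xef];

/// Decide whether to trust the TEE instance: the signature must chain to the
/// silicon vendor, and the measurement must match what we intended to deploy.
fn verify_attestation(report: &AttestationReport) -> Result<(), &'static str> {
    if !report.vendor_signature_valid {
        return Err("report is not signed by the silicon vendor's key");
    }
    if report.measurement != EXPECTED_MEASUREMENT {
        return Err("measurement mismatch: this is not the workload we loaded");
    }
    Ok(())
}

fn main() {
    // In a real deployment the report would come from the remote host;
    // here we fabricate one just to show the control flow.
    let report = AttestationReport {
        measurement: EXPECTED_MEASUREMENT,
        vendor_signature_valid: true,
    };

    match verify_attestation(&report) {
        Ok(()) => println!("TEE instance attested: safe to send the workload"),
        Err(reason) => println!("refusing to deploy: {reason}"),
    }
}
```

The point is simply that both checks -- who signed the report, and what was actually measured -- must pass before any sensitive code or data is sent to the host.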

Given a properly attested TEE, we can now be cryptographically confident that we're running our application inside a genuine TEE instance, but that doesn't solve all of our problems.

At least two issues remain to be resolved. First, although we have now isolated the application from some layers in the stack, there are still some layers that we depend on and therefore trust.

Exactly which ones depends on the TEE type and the deployment model, and right now, all but the simplest applications have dependencies of some kind. For example, if you're using a VM-based TEE instance, you typically end up running an entire operating system inside the VM as a dependency: again, a kernel, userspace, and so on -- that's a lot of trust to manage!

Enarx is a project that aims to solve the problems above: it uses TEE instances, but in a way that lets you reduce the number of trust relationships you have to manage while maintaining a high level of security and ease of use.

https://github.com/enarx

Enarx: Simplifying Trust Relationships

Enarx is a framework for running applications in TEE instances -- which we call "Keeps" in the project -- without the need to implement attestation yourself, trust a large number of dependencies, or rewrite your applications. It's designed to work across silicon architectures transparently to the user (you), so that your applications can run on AMD silicon as easily as they run on Intel silicon, without recompiling your code. As other TEE types become available, we plan to support them as well.
Given that this is a Red Hat project, it is and will continue to be open source software. Given that it is a security-related project, we aim to keep it as small and easy to audit as possible.
Key components of Enarx include:
the attestation components;
the Enarx API and core;
the Enarx runtime environment;
the management components.

We'll examine these in detail in a more technical article, but let's look at what Enarx is trying to achieve. If we consider one of the stacks shown above, the goal of Enarx is to remove the need to trust any layer above the CPU/firmware (which is provided by the silicon vendor, such as Intel or AMD). This means that, at execution time, the only layer below the application that still needs to be trusted is the middleware layer (see the diagram below). To be clear, TEEs, like any other security capability, are not guaranteed to be perfect; what they let you do is reduce your attack surface and the number of layers you need to trust.

Enarx also has a component at the middleware layer -- the Enarx runtime environment -- which we plan to keep as small as possible so it can be easily audited. It's open source, which means anyone can take a closer look and decide whether to trust it. Our goal is to work with the open source community and encourage audits, giving those who cannot perform the analysis themselves a high level of confidence in the Enarx code.

[Figure: the trust relationships that remain when running a workload with Enarx]

The other components allow the application to be attested, packaged, and loaded in a manner transparent to the user. First, you ask the Enarx component to check whether the host you plan to deploy to is launching a genuine TEE instance. Once the TEE is attested and verified, the management component encrypts the relevant portion of the application, along with any required data, and sends it to the host for execution in the Enarx Keep.
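Conceptually, the ordering of those steps looks something like the sketch below, again in Rust. Every name, type, and the toy single-byte "encryption" here is a hypothetical placeholder standing in for real Enarx components and real cryptography; the sketch only illustrates the sequence: attest, then encrypt, then send.

```rust
// Hypothetical deployment flow: never send the workload before attestation succeeds.

/// Result of asking the host to prove it is launching a genuine TEE instance.
struct HostAttestation {
    genuine_tee: bool,
    /// Key material derived during attestation, used so that only the
    /// attested Keep can decrypt the workload in transit.
    session_key: u8,
}

/// Step 1: check the host. Here we return a canned answer; in reality this
/// involves verifying a hardware-signed attestation report.
fn attest_host(_host: &str) -> HostAttestation {
    HostAttestation { genuine_tee: true, session_key: 0x5a }
}

/// Step 2: encrypt the workload for the Keep. A single-byte XOR is a
/// placeholder for real authenticated encryption.
fn encrypt_for_keep(workload: &[u8], key: u8) -> Vec<u8> {
    workload.iter().map(|b| b ^ key).collect()
}

/// Step 3: send the encrypted workload to the host. Here we just report
/// what would be sent.
fn send_to_host(host: &str, ciphertext: &[u8]) {
    println!("sending {} encrypted bytes to {host}", ciphertext.len());
}

fn deploy(host: &str, workload: &[u8]) -> Result<(), &'static str> {
    let attestation = attest_host(host);
    if !attestation.genuine_tee {
        // Never hand the workload to a host we could not attest.
        return Err("host did not prove it is running a genuine TEE");
    }
    let ciphertext = encrypt_for_keep(workload, attestation.session_key);
    send_to_host(host, &ciphertext);
    Ok(())
}

fn main() {
    let workload = b"sensitive application code and data";
    match deploy("tee-host.example.com", workload) {
        Ok(()) => println!("workload deployed into the Keep"),
        Err(e) => println!("deployment aborted: {e}"),
    }
}
```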

Our vision for Enarx extends beyond on-premises and public cloud deployments to include any TEE-enabled system. We want to enable telecom edge use cases, mobile use cases, IoT use cases, and more. It's early days, but if you're interested, we urge you to visit the project website to learn more and, hopefully, contribute.

Enarx is designed to simplify the process of deploying workloads to a variety of different TEEs in the cloud, on your premises or elsewhere, and give you confidence that your application workloads are as secure as possible. We will announce more details as the project develops.

https://next.redhat.com/2019/08/16/trust-no-one-run-everywhere-introducing-enarx/
