Understand microservices, containers, and Kubernetes, and how they relate, in 10 minutes

What are microservices?

What are microservices? Should you be using microservices? What do microservices have to do with containers and Kubernetes? If these things keep popping up in your day-to-day life and you need a 10-minute overview, then this blog post is for you.

Fundamentally, a microservice is just a computer program that runs on a server or virtual computing instance and responds to network requests.

How does this differ from a typical Rails/Django/Node.js application? It's fundamentally no different. In fact, you may find that you already have a dozen or so microservices deployed in your organization. There isn't any new magical technology that qualifies your application as a microservice. A microservice is not defined by how it is built, but by how it fits into the larger system or solution.

So how do you make a service a microservice? In general, microservices are narrower in scope and focus on doing smaller tasks well. Let's explore further by looking at an example.


Microservice Example: Amazon Product Listing

Let's examine the system that produces a typical product page on Amazon. It contains several pieces of information, possibly retrieved from different databases:

  • Product description, including price, title, photo, etc.

  • Recommended items, which are similar books that other people have purchased.

  • Sponsored items associated with this product.

  • Information about the author of this book.

  • Customer reviews.

  • Your own history of viewing other products in the Amazon store.

If you had to write the code for this page quickly, the easy way would be something like this:

When a user's request comes in from the browser, it is served by the web application (a Linux or Windows process). The piece of application code that gets invoked is typically called a request handler. The logic inside the handler makes multiple calls to the databases in sequence to get the information needed to render the page, stitches it together, and renders the web page back to the user. Very simple, right? In fact, many Ruby on Rails books have tutorials and examples just like this. So, you might ask, why complicate things?
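
For illustration, here is a minimal sketch of such a handler, written in Python with Flask. The data-access helpers are stubbed placeholders standing in for real database queries; they are not part of any actual store's code:

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# Stubbed data-access helpers; in a real monolith each would query its own database.
def get_product(pid):         return {"title": "Some Book", "price": 9.99}
def get_recommendations(pid): return ["Another Book", "A Third Book"]
def get_reviews(pid):         return [{"stars": 5, "text": "Great!"}]

@app.route("/product/<pid>")
def product_page(pid):
    # One handler fetches everything, sequentially, inside one process.
    product = get_product(pid)
    recommendations = get_recommendations(pid)
    reviews = get_reviews(pid)
    # Stitch the pieces together and render a single page back to the user.
    return render_template_string(
        "<h1>{{ p.title }} - ${{ p.price }}</h1>"
        "<p>Recommended: {{ recs|join(', ') }}</p>"
        "<p>{{ reviews|length }} customer reviews</p>",
        p=product, recs=recommendations, reviews=reviews)
```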

Imagine what happens as the application grows and more engineers get involved. The recommendation engine in the example above is maintained by a small team of programmers and data scientists. There are dozens of different teams responsible for rendering certain components of this page. Each of these teams typically wants the freedom to:

  • Change their database schema.

  • Release their code to production quickly and frequently.

  • Use their programming language of choice, or tools such as their preferred data stores.

  • Make their own trade-offs between computing resources and developer productivity.

  • Maintain and monitor their functionality in their preferred way.

As you can imagine, getting all of these teams to agree on everything just to release a new version of the web store application becomes more and more difficult over time.

The solution is to split components into smaller, independent services (aka microservices).

The web application itself becomes smaller and less cumbersome. It is now basically a proxy that breaks each incoming page request into several specialized requests and forwards them to the corresponding microservices, which are now their own processes running elsewhere. This "application microservice" is essentially an aggregator of the data returned by the specialized services. You could even get rid of it entirely and offload that work to the user's device, running this code as a single-page JavaScript application in the browser.
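
A hedged sketch of such an aggregator, again in Python with Flask plus the requests library; the service hostnames and URL paths are hypothetical placeholders for whatever your internal services expose:

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical internal endpoints; each microservice runs as its own process elsewhere.
PRODUCT_SVC = "http://product-service.internal"
RECS_SVC    = "http://recommendation-service.internal"
REVIEWS_SVC = "http://review-service.internal"

@app.route("/product/<pid>")
def product_page(pid):
    # The "application microservice" is just an aggregator: it fans the page
    # request out to specialized services and merges the results.
    product = requests.get(f"{PRODUCT_SVC}/products/{pid}", timeout=2).json()
    recs    = requests.get(f"{RECS_SVC}/recommendations/{pid}", timeout=2).json()
    reviews = requests.get(f"{REVIEWS_SVC}/reviews/{pid}", timeout=2).json()
    return jsonify({"product": product, "recommendations": recs, "reviews": reviews})
```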

Other microservices are now separated, and each development team developing a microservice can:

  • Deploy their services as they please without interfering with other teams.

  • Scale their services as they see fit. For example, using an AWS instance type of their choice, or perhaps running on dedicated hardware.

  • Have their own monitoring, backup and disaster recovery specific to their services.

What is a container?

Technically, a container is just a process spawned from an executable, running on a Linux machine, with some limitations, such as:

  • A container is not allowed to "see" all of the filesystem; it can only access specified parts of it.

  • A container is not allowed to use all CPU or RAM.

  • Containers are limited in how they can use the network.

Operating systems have always imposed some restrictions on processes; for example, every Linux process runs with the privileges of a particular system user. Containerization technology simply introduces more kinds of restrictions and makes them more flexible.

Basically, any Linux executable can be restricted, i.e. "containerized".

Most of the time, when people say "containers," they don't just mean Linux processes, but the way executables are packaged and stored.

A tool like Docker allows developers to take their executable and its dependencies, along with any other files they want, and package them all into a single file. The technique is not much different from archive formats such as tarballs. Docker also lets you include additional instructions and configuration for running the packaged executable. These files are properly called "container images", but they too are often referred to simply as containers.

But for simplicity, remember:

  • A container is a Linux process running with restrictions

  • A container image is a package containing an executable along with its dependencies and configuration

Container images are self-contained: they will run on any Linux machine, which is why containerization makes it easy to copy (deploy) code from a developer's machine to any environment.
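
As an illustration, here is a small sketch using the Docker SDK for Python (an assumption on my part: it requires the docker package and a local Docker daemon, neither of which is mentioned in this post) that runs an image as a restricted process with explicit memory, CPU, and network limits:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Run a containerized process with explicit restrictions:
# it sees only its own image's filesystem and is capped on memory, CPU, and network.
output = client.containers.run(
    image="python:3.12-slim",    # the container image (executable + dependencies)
    command=["python", "-c", "print('hello from a restricted process')"],
    mem_limit="256m",            # cannot use more than 256 MB of RAM
    nano_cpus=500_000_000,       # roughly half a CPU core
    network_disabled=True,       # no network access at all
)
print(output.decode())
```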

What is the difference between microservices and containers?

We just learned that containers are just a way to package, deploy and run Linux programs/processes. You can have a huge monolithic application as a container, or you can have a bunch of microservices that don't use containers at all.

Containers are a useful technology for allocating and sharing computing resources; that is what DevOps people get excited about. Microservices are a software design architecture; that is what developers get excited about.

They are related, but neither requires the other. You can deploy a monolithic application as a container, or you can run a multitude of microservices without any containers at all.

When to use Microservices?

The idea behind microservices is not new. For decades, software architects have struggled with decomposing monolithic applications into reusable components.

Benefits of Microservices

The benefits of microservices are many, including:

  • Easier automated testing;

  • Faster and more flexible deployments;

  • More flexible scaling up and down.

Another benefit of adopting microservices is the ability to choose the best tool for the job. Some parts of the application can benefit from the speed of C++, while other parts can benefit from the productivity gains of higher-level languages such as Python or JavaScript.

Disadvantages of Microservices

Disadvantages of microservices include:

  • Requires more careful planning;

  • Higher research and development investment;

  • The temptation to overdesign.

If the application and the development team are small enough and the workload is not demanding, there is usually no need to devote extra engineering resources to microservices to solve problems you do not yet have. However, if you are starting to see that microservices would do more good than harm, here are some specific design considerations:

  • Separate computing and storage. These resources have very different scaling costs and characteristics as your CPU power and storage needs grow. Not having to rely on local storage from the start will allow you to accommodate future workloads with relative ease. This applies both to simple forms of storage such as file systems, and to more complex solutions such as databases.

  • Asynchronous processing. The traditional approach of incrementally building an application by adding more and more subroutines or objects that call each other stops working as the workload grows and the application has to scale across multiple machines or even data centers. Such applications need to be rebuilt around an event-driven model: components emit events instead of calling functions and waiting for results synchronously.

  • Embrace the message bus. This is a direct consequence of adopting an asynchronous processing model. As your monolithic application is decomposed into event handlers and event emitters, a robust, performant, and flexible message bus becomes necessary. There are several options, and the choice depends on the size and complexity of the application. For a simple use case, something like Redis will do the trick (see the sketch after this list). If you need your application to be truly cloud-native and scale up and down on its own, you may need to process events from multiple event sources: from streaming pipelines like Kafka to infrastructure and even monitoring events.

  • API versioning. Since your microservices will use each other's APIs to communicate over the bus, an architecture designed for backward compatibility is critical. Developers should not have to ask every other team to upgrade their code just because they deployed the latest version of their microservice. Designing for backward compatibility becomes part of the overall approach, and each team has to strike a reasonable compromise between supporting old APIs forever and maintaining a high development velocity. This also means that API design becomes an important skill; frequent breaking API changes are one of the reasons teams fail to develop complex microservices effectively.

  • Rethink your security. Many developers don't realize this, but moving to microservices creates an opportunity for a better security model. Since each microservice is a dedicated process, it's best to only allow it access to the resources it needs. This way, a vulnerability in just one microservice does not expose the rest of the system to attackers. This is in contrast to large monoliths, which tend to run with higher privileges (a superset of what everyone needs) and can lead to more breaches.
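
To make the asynchronous-processing and message-bus points concrete, here is a minimal sketch using Redis pub/sub via the redis-py client. The channel name, event schema, and version field are illustrative assumptions, not a prescribed format:

```python
import json
import redis  # assumes the redis-py client and a Redis instance on localhost

bus = redis.Redis(host="localhost", port=6379)

# Emitter side: instead of calling the review service and waiting for a result,
# publish an event to the bus and move on.
def emit_review_submitted(product_id, stars, text):
    event = {
        "type": "review.submitted",
        "version": 1,          # explicit version so consumers can stay backward compatible
        "product_id": product_id,
        "stars": stars,
        "text": text,
    }
    bus.publish("reviews", json.dumps(event))

# Handler side (would run in a separate microservice/process): consume events asynchronously.
def run_review_consumer():
    pubsub = bus.pubsub()
    pubsub.subscribe("reviews")
    for message in pubsub.listen():
        if message["type"] != "message":
            continue
        event = json.loads(message["data"])
        if event.get("type") == "review.submitted" and event.get("version") == 1:
            print(f"recomputing rating for product {event['product_id']}")
```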

What does Kubernetes have to do with microservices?

Kubernetes is too complex to describe in detail here, but it's worth an overview because it's mentioned by many people in conversations about microservices.

Strictly speaking, the main benefit of Kubernetes (aka K8s) is increased infrastructure utilization by efficiently sharing computing resources across multiple processes. Kubernetes is a master at dynamically allocating computing resources to meet demand. This allows organizations to avoid paying for computing resources they don't use. However, some side benefits of K8s make the transition to microservices easier.

When you decompose a monolithic application into separate, loosely coupled microservices, your teams gain more autonomy and freedom. However, microservices must still work closely together when interacting with the infrastructure on which they must operate.

You must address the following issues:

  • Predict how much computing resources each service will need;

  • How these requirements change under load;

  • How to partition the infrastructure and divide it among microservices;

  • How to enforce resource constraints.

Kubernetes solves these problems very elegantly and provides a general framework to describe, inspect, and reason about the sharing and utilization of infrastructure resources. That's why adopting Kubernetes as part of a microservices re-architecture is a good idea.
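
For example, here is a sketch using the official Kubernetes Python client that declares resource requests and limits for a hypothetical microservice; the service name, image, replica count, and resource figures are made up for illustration:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at an existing cluster

# Declare how much CPU/memory one microservice needs; Kubernetes uses these numbers
# to schedule the pod onto a node and to enforce the limits at runtime.
container = client.V1Container(
    name="recommendation-service",                 # hypothetical microservice
    image="registry.example.com/recs:1.4.2",       # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # guaranteed share
        limits={"cpu": "500m", "memory": "512Mi"},    # hard cap
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="recommendation-service"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "recs"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "recs"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```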

However, Kubernetes is a complex technology to learn, and it is even harder to manage. If you can, take advantage of a managed Kubernetes service from your cloud provider. That is not always an option, however, for companies that need to run their own Kubernetes clusters across multiple cloud providers and enterprise data centers.

Conclusion

In conclusion:

  • Containers are just Linux processes with restrictions applied, such as how much CPU or memory a process is allowed to use. Tools like Docker allow developers to package their executables along with dependencies and additional configuration. These packages are called container images, and are often, confusingly, also called containers.

  • Microservices are not new. It is an old software design pattern that has grown in popularity with the growing scale of Internet companies. Microservices don't have to be containerized; likewise, a containerized application is not necessarily a microservice.

  • Small projects should not shy away from a monolithic design. It offers greater productivity for smaller teams.

  • Kubernetes is an excellent platform for complex applications composed of multiple microservices.

  • Kubernetes is also a complex system with a steep learning curve, and it can be expensive to manage.
