Adopting a Cloud-Native Architecture: Architecture Evolution and Maturity

Key Takeaways

  • Hasty adoption of microservices can introduce architectural stability gaps and anti-patterns.
  • Understanding the caveats and pitfalls of historical paradigm shifts should allow us to learn from previous mistakes and enable our organizations to thrive in the latest wave of technology.
  • It is important to understand the pros and cons of different architectural styles, such as monolithic applications, microservices, and serverless functions.
  • Architectural evolution repeats a cycle: the initial stage of not yet knowing best practices in a new paradigm accelerates technical debt; as the industry develops new models to address the gaps, teams adopt the new standards and patterns.
  • Think of architectural patterns as strategies that facilitate rapid technological development while protecting business applications from volatility.

Technology trends such as microservices, cloud computing, and containerization have grown so rapidly in recent years that most of these technologies are now part of the daily responsibilities of top IT engineers, architects, and leaders.

We live in a cloud-enabled world. However, being cloud-enabled does not mean being cloud-native. In fact, adopting the cloud without cloud-native practices is not only possible but dangerous.


Before we examine these trends and discuss what architectural and organizational changes companies should implement to make the most of a cloud-enabled world, it's important to understand where we've been, where we are now, and where we're going.

Understanding the caveats and pitfalls of historic paradigm shifts should allow us to learn from previous mistakes and enable our organizations to thrive on this latest wave of technology.

Anti-Patterns

As we briefly describe this evolution, we'll explore the concept of anti-patterns: common responses to recurring problems that are often ineffective and potentially counterproductive.

This series of articles will describe these anti-patterns.

Architecture Evolution

Over the past 50 years or so, software architecture and application hosting models have undergone a major shift from mainframes to microservices and serverless.

Figure 1 shows this evolution of architectural models and the paradigms they drive.

Figure 1: Architecture evolution from mainframe to cloud and microservices

Centralized

Back in the '70s and '80s, mainframes were the dominant model of computing. They were based on centralized data storage and computation, with basic terminal clients used for data entry and display.

The first mainframe computers used punched cards, and most computation happened in batch processing. There was no online processing, and latency was effectively 100% because no real-time processing existed.

The mainframe paradigm did evolve with the introduction of online processing and user-interface terminals. However, the overarching model of a large central processing unit contained within the four walls of a single organization still took a "one-size-fits-all" approach and could only partially provide the functionality required by most business applications.

Centralized -> Decentralized

A client/server architecture puts most of the logic on the server side and some processing on the client side. Client/server was the first attempt in distributed computing to replace the mainframe as the primary hosting model for business applications.

In the early years of this architecture, the development community was still writing client/server software using the same procedural, single-tier principles as in mainframe development, which led to anti-patterns like spaghetti code and the blob. This organic growth of software also produced other anti-patterns, such as the big ball of mud. The industry had to figure out how to keep teams from following these bad practices, so it had to study what it takes to write sane client/server code.

This research effort produced catalogs of anti-patterns as well as best-practice design and coding patterns. It introduced a major improvement, object-oriented programming (OOP) with inheritance, polymorphism, and encapsulation; paradigms for dealing with decentralized data (as opposed to a mainframe's single version of the truth); and guidelines to help the industry cope with the new challenges.

The client/server model is based on a three-tier architecture consisting of presentation (UI), business logic, and data tiers. But most applications were written using a two-tier model, with a thick client encapsulating all presentation, business, and data-access logic and accessing the database directly. Although the industry had begun to discuss the need to separate presentation from business and data access, the practice didn't really become critical until the advent of internet-based applications.

In general, this model improved on the limitations of the mainframe, but the industry quickly ran into its own limitations, such as having to install the client application on every user's computer and the inability to scale at the fine-grained level of individual business functions.

Decentralized -> Connected/Shared (www)

In the mid-90s, the internet revolution took place and a whole new paradigm followed. The web browser became the client software, while web and application servers hosted all the processing and logic. The World Wide Web (www) paradigm facilitated a true three-tier architecture, where presentation (UI) code is hosted on a web server, business logic (API) is hosted on an application server, and data is stored on a database server.

The development community began to migrate from fat (desktop) clients to thin (web) clients, driven mainly by concepts such as service-oriented architecture (SOA), which reinforced the need for a three-tier architecture, and enabled by improvements in client-side technology and the rapid evolution of web browsers. This move sped up time to market and eliminated the need to install client software. But developers were still creating tightly coupled designs, leading to tangled code and other anti-patterns.

In response, the industry proposed evolved three-tier architectures and practices such as domain-driven design (DDD), enterprise integration patterns (EIP), SOA, and loose-coupling techniques.

Virtual Machine Hosting -> Cloud Hosting

The first decade of the 21st century saw a major shift in application hosting when hosting became available as a service in the form of cloud computing. Application use cases requiring distributed computing, networking, storage, and more became much easier to provision with cloud hosting at a reasonable cost compared to traditional infrastructure. Additionally, consumers could take advantage of the elasticity of resources to scale up and down based on demand, paying only for the storage and computing resources they actually used.

The elasticity capabilities introduced by IaaS and PaaS allow a single instance of a service to scale as needed, eliminating the duplication of instances for scalability. However, these capabilities cannot compensate for instances duplicated for other purposes, such as running multiple versions, or as a byproduct of a monolithic deployment.
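The scale-on-demand behavior described above can be sketched as a simple policy function. This is an illustrative assumption, not any cloud provider's real autoscaling API; the function name, the requests-per-instance capacity model, and the bounds are all hypothetical.

```python
import math

# Hypothetical sketch of elasticity: derive an instance count from current
# demand, bounded by a minimum and maximum, so a service scales up under
# load and back down when demand drops. All names and the capacity model
# are illustrative, not a real provider API.
def desired_replicas(current_rps, rps_per_instance, min_replicas=1, max_replicas=10):
    """Return how many instances are needed for the current request rate."""
    needed = math.ceil(current_rps / rps_per_instance)
    return max(min_replicas, min(max_replicas, needed))
```

At 450 requests per second with instances rated for 100 each, this yields 5 instances; when demand falls to zero, it drops back to the minimum, and the consumer pays only for what runs.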

The appeal of cloud-based hosting is that development and operations teams no longer have to worry about server infrastructure. It offers three different hosting options:

  • Infrastructure as a Service (IaaS): Developers choose a server specification to host their applications, while the cloud provides the hardware, operating system, and networking. This is the most flexible of the three styles, but it does place some burden on the development team, which has to specify the servers.
  • Platform as a Service (PaaS): Developers only need to worry about their applications and configuration. The cloud provider is responsible for all server infrastructure, networking, and monitoring tasks.
  • Software as a Service (SaaS): The cloud provider delivers the actual application hosted in the cloud, so the client organization can use the application as a whole without taking responsibility for the application code. This option provides out-of-the-box software services, but it is inflexible if the customer needs custom business functionality beyond what the provider offers.

PaaS emerges as the best choice among cloud options because it allows developers to host their own custom business applications without having to worry about configuring or maintaining the underlying infrastructure.

While cloud hosting encourages modular application design and deployment, many organizations found it tempting to lift and shift legacy applications that were never designed to work in an elastic, distributed architecture directly into the cloud, resulting in the modern anti-pattern known as "monolith hell".

To address these challenges, the industry has come up with new architectural patterns such as microservices and 12-factor applications.

Moving to the cloud also presented the industry with the challenge of managing application dependencies on third-party libraries and technologies. Developers began to struggle with too many options and insufficient criteria for choosing third-party tools, and we started to see cases of dependency hell.

Dependency hell can happen at different levels:

  • Libraries: Poorly managed dependencies on libraries (JARs in the Java world, DLLs in the .NET world) can cause problems. For example, a typical Spring Boot application packages more than 140 library JAR files. Make sure unnecessary libraries are not packaged into your application.
  • Classes: Be clear about all object dependencies inside the application. For example, a controller class depends on a business service class, which in turn depends on a repository class. Spend time reviewing dependencies during code reviews and make sure there are no incorrect dependencies.
  • Services: If you are using microservices in your system, verify that there are no direct dependencies between different services.
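The class-level rule above can be made concrete as a small layering check that a team might run during code review or as a build-time gate. This is a minimal sketch; the layer names and the allowed edges are assumptions for illustration.

```python
# Allowed dependency edges: controller -> service -> repository.
# Anything else (e.g., repository -> controller) is an incorrect dependency.
ALLOWED_EDGES = {
    "controller": {"service"},
    "service": {"repository"},
    "repository": set(),
}

def find_violations(dependencies):
    """Given {layer: set of layers it depends on}, return the illegal edges."""
    violations = []
    for source, targets in dependencies.items():
        for target in targets:
            if target not in ALLOWED_EDGES.get(source, set()):
                violations.append((source, target))
    return sorted(violations)
```

A clean layering yields no violations, while a repository reaching back into a controller is flagged immediately, before it hardens into dependency hell.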

Library-based dependency hell is a packaging challenge; the other two are design challenges. A future article in this series will examine these dependency-hell scenarios in more detail and provide design patterns that avoid their unintended consequences and prevent technology sprawl.

Microservices: Fine-Grained Reusability

Software design practices like DDD and EIP have been around since about 2003, and some teams then developed applications as modular services, but traditional infrastructure, such as heavyweight J2EE application servers for Java applications and IIS for .NET applications, didn't lend itself to modular deployment.

With the advent of cloud hosting, especially PaaS offerings like Heroku and Cloud Foundry, the developer community has everything it needs to deploy and scale business applications truly modularly. This sparked the evolution of microservices. Microservices offer the possibility of fine-grained, reusable functional and non-functional services.

Microservices became more popular in 2013-2014. They are powerful and allow smaller teams to have full-cycle development of specific business and technical capabilities. Developers can deploy or upgrade code at any time without adversely affecting other parts of the system (client applications or other services). These services can also be scaled up or down on an individual service level based on demand.

Client applications that need to use specific business functions call the appropriate microservices without requiring developers to write the solution from scratch or package the solution as a library in the application. The microservices approach encourages contract-driven development between service providers and service consumers. This speeds up overall development time and reduces dependencies between teams. In other words, microservices make teams more loosely coupled and accelerate solution development, which is critical for organizations, especially business startups.
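Contract-driven development can be sketched as follows: provider and consumer share a declared contract, and the consumer verifies a response against it rather than coupling to undocumented payload details. The field names and the dict-based contract format here are assumptions for illustration; real teams typically use schema tooling such as OpenAPI or consumer-driven contract tests.

```python
# A shared, agreed-upon contract for a customer payload (illustrative fields).
CUSTOMER_CONTRACT = {"id": int, "name": str, "email": str}

def satisfies_contract(payload, contract):
    """True if the payload carries every contracted field with the agreed type."""
    return all(
        field in payload and isinstance(payload[field], expected_type)
        for field, expected_type in contract.items()
    )
```

Because both teams develop against the contract rather than each other's code, the provider can change internals freely as long as the contract still holds, which is what keeps the teams loosely coupled.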

Microservices also help establish clear boundaries between business processes and domains (eg, customers vs. orders vs. inventory). They can be developed independently within a vertical modularity called "Bounded Context" in the organization.

This evolution also accelerated complementary good practices like DevOps and brought agility and faster time to market at the organizational level. Each development team owns one or more microservices within its domain and is responsible for the entire process of designing, coding, and deploying to production, as well as post-release support and maintenance.

However, similar to the previous architectural models, the microservices approach has encountered its own problems.

Legacy applications that were not designed as microservices from the ground up began to be retrofitted in an attempt to force them into a microservices architecture, leading to the anti-pattern known as monolith hell. Other efforts artificially decomposed a monolithic application into multiple microservices, even though the resulting services were not functionally isolated and still depended heavily on the other microservices carved out of the same monolith. This anti-pattern is called the microlith.

It's worth noting that monoliths and microservices are two different patterns, and the latter doesn't always replace the former. If we're not careful, we could end up creating tightly coupled, hybrid microservices. The right choice depends on the business and scalability requirements of the application functionality.

Another undesirable side effect of the microservices explosion is the so-called "Death Star" anti-pattern. The proliferation of microservices without a governance model for service interaction and service-to-service security (authentication and authorization) often leads to a situation where any service can call any other service at will. It also becomes a challenge to monitor which services are used by which client applications without proper coordination of those service calls.
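A first step out of the "Death Star" is an explicit, deny-by-default interaction policy. The sketch below is illustrative only (the service names and the policy shape are hypothetical); in practice, this is the kind of rule a service-mesh authorization policy enforces.

```python
# Deny-by-default governance: a service-to-service call is permitted only if
# the policy explicitly lists the caller -> callee edge. Service names are
# hypothetical.
INTERACTION_POLICY = {
    "web-ui": {"orders", "customers"},
    "orders": {"inventory", "customers"},
    "inventory": set(),
    "customers": set(),
}

def call_allowed(caller, callee, policy=INTERACTION_POLICY):
    """Return True only for edges the governance policy names."""
    return callee in policy.get(caller, set())
```

With every permitted edge written down, the call graph stays auditable, and an unexpected service-to-service path is a policy change to review rather than a surprise in production.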

Figure 2 shows how organizations such as Netflix and Twitter ran into this nightmare scenario and had to develop new models to cope with the "Death Star" problem.


Figure 2: Death Star architecture due to explosion of microservices without governance

While the example depicted in Figure 2 may seem like an extreme case that happens only to the giants, don't underestimate the exponential destructive power of cloud anti-patterns. The industry has to learn to operate tools far more powerful than anything it has handled before. "With great power comes great responsibility," said Franklin D. Roosevelt.

Emerging architectural patterns such as service meshes, sidecars, service orchestration, and containers can be effective defense mechanisms against misbehavior in the cloud world.

Organizations should understand these patterns and drive adoption as quickly as possible.

A Quick Look at Key Cloud-First Design Patterns

Service Mesh

With the emergence of cloud platforms, and especially of container-orchestration technologies such as Kubernetes, the service mesh has attracted more and more attention. A service mesh is the bridge between application services that adds capabilities such as traffic control, service discovery, load balancing, resilience, observability, security, and more. It lets applications offload these capabilities from application-level libraries and allows developers to focus on business logic.
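To illustrate what gets offloaded, here is one resilience feature (retry with exponential backoff) written as application code; a service-mesh sidecar applies an equivalent policy transparently, so the application doesn't carry helpers like this for every outbound call. This is a hedged sketch, not any mesh's actual API.

```python
import time

def call_with_retries(call, attempts=3, base_delay=0.01):
    """Invoke `call`; on failure, retry with exponential backoff.

    Without a mesh, every service embeds a helper like this; with one,
    the sidecar proxy enforces the same retry policy outside the app.
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            time.sleep(base_delay * (2 ** attempt))
```

The same argument applies to timeouts, circuit breaking, and mutual TLS: the mesh centralizes the policy instead of duplicating it across every codebase.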

Some service mesh technologies, such as Istio, also support chaos injection so that developers can test the resilience and robustness of their application and its potentially dozens of interdependent microservices.

Service meshes fit well on top of Platform-as-a-Service (PaaS) and Container-as-a-Service (CaaS) and enhance the cloud adoption experience with the common platform services described above.

Future articles will delve into service mesh-based architectures and discuss specific use cases and comparisons of solutions with and without service meshes.

Serverless Architecture

Another trend that has received much attention in recent years is serverless architecture, also known as serverless computing. Serverless goes one step further than the PaaS model because it completely abstracts the server infrastructure away from the application developer.

In serverless, we write business services as functions and deploy those functions onto cloud infrastructure. Some examples of serverless technologies include AWS Lambda, Spring Cloud Function, Google Cloud Functions, and Microsoft Azure Functions.
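As a sketch, a serverless function is a single-purpose handler deployed on its own; the event/context signature below follows the style popularized by AWS Lambda, but the event fields and the notification logic are assumptions for illustration, not a real platform contract.

```python
def notify_customer(event, context=None):
    """Single-purpose, fine-grained function: format one customer notification.

    The platform, not the developer, provisions and scales the hosting.
    The event fields used here are hypothetical.
    """
    customer = event.get("customer", "unknown")
    message = event.get("message", "")
    return {"status": "sent", "body": f"To {customer}: {message}"}
```

The function carries no server, framework, or routing code of its own, which is exactly why only fine-grained capabilities like notification or authentication fit this model well.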

The serverless model sits between PaaS and SaaS within the scope of cloud hosting, as shown in the diagram below.


Figure 3: Cloud computing, containers, service meshes and serverless

As with the monoliths-versus-microservices discussion, not all solutions should be implemented as functions. Also, we shouldn't replace all microservices with serverless functions, just as we shouldn't replace or decompose all monolithic applications into microservices. Only fine-grained business and technical functions, such as user authentication or customer notification, should be designed as serverless functions.

Based on our application functional and non-functional requirements, such as performance and scalability and transaction boundaries, we should choose the appropriate monolithic, microservices, or serverless model for each specific use case. Often, we may need to use all three of these patterns in a solution architecture.

If not designed properly, serverless solutions can end up as "nanoliths", where each function is tightly coupled to other functions or microservices and cannot operate independently.

Container Technologies

Complementary technologies such as containers emerged around the same time as microservices to help deploy services and applications in a microservices environment, providing true isolation of business services and scalability of individual services. Container technologies such as Docker, containerd, rkt, and Kubernetes complement microservices development well. Today, we can't mention one without the other, whether microservices or containers.

Monoliths vs Microservices vs Serverless

As mentioned earlier, it's important to understand the pros and cons of the three architectural styles: monolithic applications, microservices, and serverless functions. A written case study on monoliths vs. microservices details one decision to avoid microservices.

Table 1 highlights the high-level differences between these three options.

Table 1: Service Architecture Models and When to Use or Avoid Them

Stability Gaps
It is important to pay close attention to anti-patterns that may appear in our software architecture and code over time. Anti-patterns not only cause technical debt but, more importantly, can drive subject-matter experts out of an organization. An organization may find itself left with only the people who don't care about the architecture deviations or anti-patterns.

After the brief history above, let's focus on stability gaps and anti-patterns that can arise as part of a hasty adoption of microservices.

Specific factors such as the team structure in the organization, the business domain, and the skill set in the team determine which applications should be implemented as microservices and which should be implemented as monolithic solutions. But we can look at some general considerations for choosing to design a solution as a microservice.

Eric Evans' book Domain-Driven Design (DDD) changed the way we develop software. Eric advocates looking at business requirements from a domain perspective rather than a technology-based perspective.

Microservices can be considered a derivation of the book's Aggregate pattern. But many software development teams took the microservices design philosophy to the extreme, trying to convert all of their existing applications into microservices. This led to anti-patterns like monolith hell and microliths.

Here are some anti-patterns that architecture and development teams need to be aware of:

  • Monolith hell
  • Microliths
  • Jenga tower
  • Logo slide (also known as Frankenstein)
  • Square wheel
  • Death Star

Evolving architectural patterns

To address the stability gaps and anti-patterns found in the different application hosting models, the industry has come up with evolving architectural patterns and best practices to close these gaps.

The following table summarizes these architectural models, stability gaps, and patterns.


Table 2: Application hosting models, anti-patterns, and patterns

Figure 4 shows all of these architectural models, stability gaps in the form of anti-patterns, and evolving design patterns and best practices.


Figure 4: Architecture evolution and application hosting model

What History Tells Us

Figure 5 lists the steps in the evolution of the architecture, including the initial stages of not knowing best practices in the new paradigm, which accelerates technical debt. As the industry develops new design patterns to address stability gaps, teams adopt new standards and patterns in their architectures.

Figure 5: Architecture models and the adoption of new patterns

Business and Technology
IT leaders must protect their investments from rapid and ever-growing technological change while delivering a stable set of business applications on a continually evolving and optimized technology foundation. IT executives around the world face this issue with increasing frequency.

They, and we, should embrace the evolution of technology, but not at the cost of constant instability in the applications that support the business.
A disciplined system architecture should make this possible. Think of the patterns discussed in this series as strategies that support rapid technological development while protecting business applications from volatility. Let's explore how to do this in the next article.

Conclusion
From mainframes to the recent cloud-native architectures, a variety of hosting models have influenced how we develop, deploy, and maintain business applications. Every time the industry discovered a new hosting model, teams were challenged to capture its full benefits. This led to unintended consequences such as architecture deviations and anti-patterns, which resulted in significant technical debt. Over time, new design patterns evolved to address the stability gaps introduced by each new hosting model.

Technical debt management plays a vital role in the health of the overall system and of the team. IT leaders who don't deal with technical debt in a timely manner invite damage to both the software and the organization. Technical debt feeds on itself, creating more debt while institutionalizing bad practices and driving out top talent.

When these signs appear, stop immediately and evaluate. Then take firm action.

Make sure you empower your team to address all forms of technical debt.

Subsequent articles in this series will examine the common service platform my organization developed during the adoption of the microservices architecture. We'll also discuss how the company leverages different cloud-native architectural components such as containers, PaaS, and service meshes.

The next article will dive into anti-patterns teams should be aware of and cloud-native design patterns they should adopt in their architecture. We'll discuss the specifics of adopting an enterprise cloud-native service mesh strategy that will help with many of these capabilities. Finally, we'll share some recommendations for architecture and organization.

