[Cloud Native] Cloud Native Architecture

Background

Today, nearly every IT resource or product is offered as a service, and with the wave of cloud computing the concept of "cloud native" came into being. Cloud native is very popular, but the term is used loosely and often inconsistently. Meanwhile, cloud-native software development has become a key requirement for every business, regardless of its size and nature. Before jumping on the cloud computing bandwagon, it is important to understand what a cloud-native architecture is and how to design the right architecture for cloud-native application requirements.

A cloud-native architecture is an innovative approach to software development designed to take full advantage of the cloud computing model. It enables organizations to build applications as loosely coupled services using a microservices architecture and run them on a dynamically orchestrated platform. Therefore, applications built on cloud-native application architectures are reliable, provide scale and performance, and reduce time to market.

Some people say that cloud native simply means microservices, but that is wrong. Cloud native and microservices are two different dimensions. Cloud native focuses on the operating environment of applications: a cloud environment based on Kubernetes (k8s) and containers. The Cloud Native Computing Foundation (CNCF) is committed to creating a set of tools that help applications be developed, tested, run, and deployed in cloud environments.

Microservices, by contrast, describe the software architecture of an application, which can be either monolithic or microservice-based. Microservices are built on distributed computing. An application can be cloud-native without using a microservice architecture, though the benefits are smaller; if it is monolithic, cloud native offers few advantages at all. Conversely, a microservice application is not necessarily cloud-native. Although they are two different things, cloud native and microservices are a natural match and complement each other, and many cloud-native tools are designed specifically for microservice architectures.

A more appropriate way to put it is that the trend in modern applications is "microservices + cloud native", because the major features of cloud native are containerized packaging and management, service orchestration, microservice architecture, continuous delivery, and DevOps.

In today's fast-changing world, a cloud-native architecture is no longer optional but required. Change is the only constant in the cloud, which means your software development environment should be flexible enough to adapt quickly to new technologies and approaches without disrupting business operations. A cloud-native architecture provides the right environment for building applications with the right tools, technologies, and processes. The key to taking full advantage of the cloud is designing the right cloud architecture for your software development needs: implement the right automation in the right areas, take full advantage of managed services, incorporate DevOps best practices, and apply the best cloud-native application architectural patterns.

1. Cloud Native

The fading days of the traditional software development model
Traditional software development environments rely on the so-called "waterfall" model built around a monolithic architecture, where software is developed sequentially:

  1. Designers prepare product designs and related documents.
  2. Developers write the code and send it to the testing department.
  3. Testing teams run different types of tests to identify bugs and measure the application's performance.
  4. When errors are found, the code is sent back to the developers.
  5. After the code successfully passes all tests, it is deployed to a staging environment and then to the production environment.

(Figure: breaking the waterfall process into small steps)

If you need to update the code or add or remove a feature, you have to go through the whole process all over again. When multiple teams work on the same project, coordinating their code changes is a big challenge, and the model also limits them to a single programming language. Furthermore, deploying a large software project requires building a huge infrastructure and an extensive functional testing mechanism. The whole process is inefficient and time-consuming.

The microservices architecture was introduced to address most of these challenges. It is a service-oriented architecture in which applications are built as loosely coupled, independent services that communicate with each other through APIs. This enables developers to work on different services independently and to use different languages. With a central repository acting as a version control system, organizations can work on different parts of the code simultaneously and update specific features without disrupting the software or causing downtime. Additionally, with automation in place, businesses can easily and frequently make high-impact changes with little effort.

2. Introduction to Cloud Native

Cloud-native applications build on the microservices architecture, taking advantage of the highly scalable, flexible, and distributed nature of the cloud to produce customer-centric software products in a continuous-delivery environment. The distinguishing feature of cloud-native architecture is that it lets you abstract every layer of your infrastructure (database, network, servers, operating system, security, and so on), automate and manage each layer independently through scripts, and launch the required infrastructure immediately from code. As a result, developers can focus on adding functionality to the software and orchestrating the infrastructure, without worrying about the platform, operating system, or runtime environment.


3. Three cornerstones of technology

3.1. Infrastructure as code

Infrastructure as Code (IaC) means storing the commands that create infrastructure (including servers and the network environment) in a source code repository, just like application code, with version control. The process of creating infrastructure then becomes a software deployment process. Its greatest benefit is repeatability. Previously, you would manually type commands to create a running environment, and when something went wrong you patched it in place; if the entire environment ever needed to be rebuilt, it was hard to guarantee it matched the original. With infrastructure as code, that worry disappears.
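The idea can be sketched in a few lines. This is a minimal, hypothetical illustration (plain dictionaries stand in for real servers; `ENVIRONMENT_SPEC` and `provision` are not a real provider SDK): the environment is described as data, checked into version control, and "provisioning" is just running the same code again.

```python
# The environment is described as versioned data, not as manual commands.
ENVIRONMENT_SPEC = {
    "servers": [
        {"name": "web-1", "size": "small"},
        {"name": "web-2", "size": "small"},
    ],
    "network": {"cidr": "10.0.0.0/24"},
}

def provision(spec):
    """Create the environment from the spec. Because the spec is the single
    source of truth, re-running this always yields an identical environment."""
    created = {"servers": [], "network": dict(spec["network"])}
    for server in spec["servers"]:
        created["servers"].append({**server, "status": "running"})
    return created

env_a = provision(ENVIRONMENT_SPEC)
env_b = provision(ENVIRONMENT_SPEC)  # rebuilt later, e.g. after a failure
assert env_a == env_b                # repeatability: the rebuild matches
```

Real IaC tools such as Terraform or CloudFormation apply the same principle against actual cloud APIs.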

3.2. Immutable infrastructure

Immutable Infrastructure is an upgraded version of infrastructure as code. With IaC you can build an identical server and other required equipment at any time just by running the software, with applications pre-installed, and creation takes only seconds. When a server has a problem, there is no need to spend time finding the cause and fixing it: simply destroy the server and create a new one. The infrastructure thus becomes immutable: there are only create and delete operations, never modifications. This completely changes the way operations are done.
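The "destroy and recreate" workflow described above can be sketched as follows; this is a hedged toy model (dictionaries stand in for servers, and `create_server`/`replace_if_unhealthy` are illustrative names, not a real API):

```python
import itertools

_ids = itertools.count(1)  # each server instance gets a fresh identity

def create_server(image):
    """Servers are only ever created from a known image, never patched."""
    return {"id": next(_ids), "image": image, "healthy": True}

def replace_if_unhealthy(server, image):
    """Instead of debugging a broken server, discard it and build a new one
    from the same image. The server itself is never modified in place."""
    if server["healthy"]:
        return server
    return create_server(image)

s1 = create_server("app:v1")
s1["healthy"] = False                  # the server misbehaves
s2 = replace_if_unhealthy(s1, "app:v1")
assert s2["id"] != s1["id"]            # a new instance, not a repaired one
assert s2["healthy"]
```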

3.3. Declarative API

A declarative API is a further upgrade of infrastructure as code. At first, when software was used to define infrastructure, it used a procedural description: a series of commands was run to create the operating environment. It was later found that a better way is to describe the desired final state of the operating environment and let the system decide how to create it. For example, the description becomes "a cluster with three Nginx instances" instead of running the Nginx creation command three times to form a cluster. The advantage is that when the operating environment does not match the description, the system can detect the difference and repair it automatically, giving the system automatic fault tolerance.
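The nginx example above can be sketched as a reconciliation loop. This is a simplified illustration of the idea, not a real Kubernetes API: you declare the desired end state, and the system computes and applies the difference.

```python
desired = {"nginx": 3}  # declarative: "a cluster with three nginx replicas"

def reconcile(desired, current):
    """Compare desired state with actual state and repair the difference."""
    actions = []
    for name, want in desired.items():
        have = current.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
        current[name] = want           # apply the fix
    return actions

cluster = {"nginx": 1}                 # two replicas are missing (e.g. crashed)
actions = reconcile(desired, cluster)
assert actions == [("create", "nginx", 2)]
assert cluster == {"nginx": 3}         # self-healed back to the declared state
```

Running the loop repeatedly is what gives the system its automatic fault tolerance: any drift from the declaration is detected and repaired.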

4. Advantages of Cloud Native

4.1. Accelerate the software development cycle

Cloud-native applications complement a DevOps-based continuous-delivery environment and embed automation throughout the product lifecycle, bringing both speed and quality. Cross-functional teams made up of members from design, development, testing, operations, and business collaborate seamlessly through the SDLC. With an automated CI/CD pipeline on the development side and IaC-based infrastructure on the operations side working together, the overall process is better controlled, making the entire system fast, efficient, and far less error-prone, while the environment stays transparent. All of these elements significantly speed up the software development lifecycle.

Software Development Life Cycle (SDLC) refers to the stages involved in the development of a software product. A typical SDLC includes seven distinct phases.

  1. Requirements Gathering/Planning Phase: Gather information about current problems, business needs, customer requests, etc.
  2. Analysis phase: definition of prototype system requirements, market research of existing prototypes, analysis of customer needs for proposed prototypes, etc.
  3. Design phase: prepare product design, software requirements specification documents, coding guidelines, technology stacks, frameworks, etc.
  4. Development Phase: Writing code to build the product according to specification and guidance documents
  5. Testing Phase: Test the code for bugs and evaluate the quality against the SRS document.
  6. Deployment Phase: Infrastructure Configuration, Software Deployment to Production
  7. Operations and maintenance phase: Product maintenance, handling customer issues, monitoring performance against metrics, etc.

4.2. Faster time to market

Speed and quality of service are two important requirements in today's fast-moving IT world. A cloud-native application architecture enhanced by DevOps practices makes it easy to build and automate continuous-delivery pipelines that deliver better software faster. IaC tools make it possible to provision infrastructure automatically on demand and to scale it up or down anytime, anywhere. By simplifying IT management and giving greater control over the entire product lifecycle, the SDLC is significantly accelerated, enabling organizations to get to market faster. DevOps takes a customer-centric approach in which teams are responsible for the entire product lifecycle, so updates and subsequent releases also become faster and better. Shorter development times, and the avoidance of overproduction, over-engineering, and technical debt, also reduce overall development costs; likewise, increased productivity increases revenue.

4.3. High Availability and Elasticity

Modern IT systems do not tolerate downtime; a product that goes down frequently is a big problem. By combining a cloud-native architecture with microservices and Kubernetes, you can build resilient, fault-tolerant, self-healing systems. During an outage the application remains available, because the failed component can simply be isolated and another instance started automatically in its place. The result is higher availability, a better customer experience, and more uptime.

4.4. Lower cost

Cloud-native application architectures come with a pay-per-use model, meaning organizations pay only for the resources they use while benefiting greatly from economies of scale. As capital expenditure becomes operational expenditure, businesses can redirect their initial investment toward development resources. On the OpEx side, cloud-native environments leverage containerization technology managed by the open-source Kubernetes software, and there are many other cloud-native tools for managing the system efficiently. With serverless architectures, standardized infrastructure, and open-source tools, operating costs also drop, reducing total cost of ownership (TCO).

4.5. Turning applications into APIs

Cloud-native environments can connect massive amounts of enterprise data to front-end applications through API-based integrations. Since every IT resource is in the cloud and exposed through APIs, applications effectively become APIs. This provides an engaging customer experience and allows legacy infrastructure to be reused, extending it into the web and mobile era of cloud-native applications.

5. Detailed explanation of the characteristics of cloud native architecture model

5.1. Pay as you go

In a cloud architecture, resources are centrally hosted and delivered over the Internet on a pay-per-use (pay-as-you-go) model, and clients pay based on resource usage. Resources can be scaled when needed, keeping resource costs optimized, and various service options are available at different price points. For example, serverless architectures provision resources only while code is executing, which means you pay only for what your application actually uses.
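The serverless billing model above comes down to simple arithmetic: compute time multiplied by allocated memory, plus a per-invocation charge. The rates below are made-up placeholders for illustration, not any provider's real pricing.

```python
# Hypothetical rates, chosen only to make the arithmetic concrete.
PRICE_PER_GB_SECOND = 0.0000166667   # compute: memory (GB) x duration (s)
PRICE_PER_REQUEST = 0.0000002        # flat charge per invocation

def monthly_cost(invocations, avg_ms, memory_gb):
    """Pay-per-use: cost is proportional to actual execution, nothing idle."""
    gb_seconds = invocations * (avg_ms / 1000.0) * memory_gb
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# 1M invocations, 200 ms each, 0.5 GB: you pay only for actual execution time.
cost = monthly_cost(1_000_000, 200, 0.5)
assert round(cost, 2) == 1.87
```

If the function is never called, the bill is zero, which is the defining property of the model.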

5.2. Self-service infrastructure

Infrastructure as a Service (IaaS) is a key attribute of cloud-native application architectures. Whether you deploy applications in elastic, virtual, or shared environments, they automatically realign to the underlying infrastructure, scaling up and down with changing workloads. This means you do not have to request and wait for approval to obtain servers, load balancers, or central management systems in order to create, test, or deploy IT resources, which simplifies IT management while reducing wait times.

5.3. Distributed architecture

A distributed architecture is another key component of cloud-native architecture. It allows software to be installed and managed across the infrastructure: a network of independent components, installed in different locations, that share information to achieve a common goal. Distributed systems let organizations scale resources massively while giving the end user the impression of working on a single machine. Resources such as data, software, and hardware are shared, and a single function can run on multiple machines simultaneously. These systems are fault-tolerant, transparent, and highly scalable.

While earlier systems used client-server architectures, modern distributed systems use multi-tier, three-tier, or peer-to-peer network architectures, offering nearly unlimited horizontal scalability, fault tolerance, and low latency. On the downside, they require intelligent monitoring, data integration, and data synchronization, and avoiding network and communication failures is a challenge. The cloud provider takes responsibility for governance, security, engineering, evolution, and lifecycle control, so you do not need to worry about updates, patches, and compatibility issues in cloud-native applications.

5.4. Managed services

Cloud architecture lets you take full advantage of managed cloud services to run your cloud infrastructure efficiently, from migration and provisioning to management and maintenance, while optimizing both time and cost. Since each service has its own independent lifecycle, it is easy to manage it as an agile DevOps process, and multiple CI/CD pipelines can run concurrently or be managed independently.

For example, AWS Fargate is a serverless compute engine that lets you build applications without managing servers, on a pay-per-use model; AWS Lambda serves a similar purpose. Amazon RDS lets you build, scale, and manage relational databases in the cloud, and Amazon Cognito helps securely manage user authentication, authorization, and user management across cloud applications. With tools like these, it is easy to set up and manage a cloud development environment with minimal cost and effort.

5.5. Automatic scaling

Autoscaling is a powerful feature of cloud-native architectures that automatically adjusts resources to keep applications running at optimal levels. Its benefit is that each scalable layer can be abstracted and specific resources scaled independently. There are two ways to scale: vertical scaling increases the capacity of individual machines to handle growing traffic, while horizontal scaling adds more machines. Vertical scaling is limited by machine capacity; horizontal scaling offers effectively unlimited resources.
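A horizontal autoscaling rule of the kind described above can be sketched as a simple decision function. The thresholds and bounds here are hypothetical, for illustration only; real autoscalers (such as the Kubernetes Horizontal Pod Autoscaler or AWS Auto Scaling) use configurable policies of the same shape.

```python
def scale_decision(current_instances, avg_cpu, min_i=1, max_i=10):
    """Return the new instance count for the observed CPU utilisation.
    Scale out when load is high, scale in when low, within fixed bounds."""
    if avg_cpu > 0.75 and current_instances < max_i:
        return current_instances + 1   # scale out: add a machine
    if avg_cpu < 0.25 and current_instances > min_i:
        return current_instances - 1   # scale in: remove a machine
    return current_instances           # within the comfort zone: no change

assert scale_decision(2, 0.90) == 3    # traffic spike: add a machine
assert scale_decision(3, 0.10) == 2    # idle: remove a machine
assert scale_decision(1, 0.10) == 1    # never drop below the minimum
```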

For example, AWS provides horizontal autoscaling out of the box. Whether it's an Elastic Compute Cloud (EC2) instance, a DynamoDB index, an Elastic Container Service (ECS) container, or an Aurora cluster, Amazon monitors and adjusts resources according to the scaling policies defined for each application. You can prioritize cost optimization, high availability, or a balance of both. The autoscaling feature itself is free, but you are charged for the resources that are scaled out.

5.6. Automatic recovery

In a world where products must be available 24/7, it is important to have a disaster recovery plan covering all services, data resources, and infrastructure in order to ensure high availability. Cloud architecture allows resilience to be built into applications from the start: applications can be designed to self-heal, and data, source code repositories, and resources can be restored almost instantly.

For example, IaC tools such as Terraform or CloudFormation can automatically re-provision the underlying infrastructure if the system goes down, automating every phase of the disaster-recovery workflow, from provisioning EC2 instances and VPCs to applying management and security policies. They also let you instantly roll back infrastructure changes or recreate instances when needed. Likewise, changes made through CI/CD pipelines can be rolled back using a CI automation server such as Jenkins or GitLab. Disaster recovery thus becomes fast and cost-effective.

5.7. Automation and Infrastructure as Code (IaC)

Organizations achieve speed and agility in their business processes by running containers on a microservices architecture supported by modern system design. To extend this capability to production environments, enterprises now implement infrastructure as code (IaC). By applying software-engineering practices, they automate resource provisioning and manage infrastructure through configuration files. Deployments can be automated, tested, and versioned to keep the infrastructure in its desired state; when resource allocation needs to change, you simply edit the configuration file and the change is applied to the infrastructure automatically. IaC also enables disposable systems, where production environments can be created, managed, and destroyed on the fly, with every task automated.

Cloud design lends itself very well to automation: you can automate infrastructure management with Terraform or CloudFormation, CI/CD pipelines with Jenkins or GitLab, and resource scaling with AWS's built-in autoscaling. A cloud-native architecture also lets you build cloud-agnostic applications that can be deployed to any cloud provider's platform. Terraform is a powerful tool for creating templates in the HashiCorp Configuration Language (HCL) that automatically provision applications on popular cloud platforms such as AWS, Azure, and GCP. CloudFormation is AWS's own offering for automating the provisioning of resources for workloads running on AWS services; if you already use various AWS services, it makes automating your infrastructure easy.

5.8. Immutable infrastructure

Immutable infrastructure, or immutable code deployment, is the concept of deploying servers in such a way that they can never be edited or changed. If a change is required, the server is destroyed and a new server instance is built in its place from a common image repository. No deployment depends on a previous one, so there is no configuration drift, and because each deployment is timestamped and versioned, you can roll back to an earlier version when needed.

Immutable infrastructure enables administrators to replace problematic servers easily without disrupting applications. It also makes deployments predictable, easy, and consistent across all environments, simplifies testing, and makes automatic scaling straightforward. Overall, it improves the reliability, consistency, and efficiency of the deployment environment. Docker, Kubernetes, Terraform, and Spinnaker are popular tools that facilitate immutable infrastructure, and following the principles of the 12-factor methodology also helps keep infrastructure immutable.
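The timestamped, versioned, append-only deployment history described above can be sketched like this (a toy data model with illustrative names, not a real deployment tool); note that a rollback is itself a new release, never an edit of an old one:

```python
import time

class Deployments:
    def __init__(self):
        self.history = []              # every release is kept, never edited

    def deploy(self, image):
        release = {"version": len(self.history) + 1,
                   "image": image,
                   "ts": time.time()}  # each deployment is timestamped
        self.history.append(release)
        return release

    def rollback(self):
        """Roll back by redeploying the previous image as a NEW release."""
        previous = self.history[-2]
        return self.deploy(previous["image"])

d = Deployments()
d.deploy("app:v1")
d.deploy("app:v2")                     # v2 turns out to be broken
r = d.rollback()
assert r["image"] == "app:v1" and r["version"] == 3
assert len(d.history) == 3             # append-only: nothing was mutated
```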

5.9. The 12-factor methodology

To enable seamless collaboration between developers working on the same application, and to manage an application's organic growth over time while minimizing software erosion, developers at Heroku came up with the 12-factor methodology, which helps organizations build and deploy applications in a cloud-native application architecture. Its key points: an application should use a single codebase for all deployments and should declare and isolate all of its dependencies; configuration should be separated from application code; processes should be stateless so they can be run, scaled, and terminated individually; automated CI/CD pipelines should manage the build, release, and run stages separately; and processes should be disposable so each resource can be started, stopped, and scaled independently. The 12-factor methodology is a natural fit for cloud architecture.
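Two of the factors named above, config in the environment and stateless processes, can be sketched in a few lines. This is an illustrative example (the variable names and defaults are assumptions, not part of the 12-factor text):

```python
import os

def load_config(env=os.environ):
    """Factor III: config lives in the environment, not in the codebase,
    so the same code runs unchanged in dev, staging, and production."""
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(env.get("PORT", "8000")),
    }

def handle_request(request, config):
    """Factor VI: the process is stateless. Everything needed arrives with
    the request or the config, so any copy of the process can serve any
    request, and processes can be started, stopped, and scaled freely."""
    return {"status": 200, "backend": config["database_url"], "echo": request}

cfg = load_config({"DATABASE_URL": "postgres://prod", "PORT": "80"})
assert cfg == {"database_url": "postgres://prod", "port": 80}
assert handle_request("ping", cfg)["backend"] == "postgres://prod"
```

Because no state is held between requests, such processes scale horizontally without coordination, which is exactly what cloud orchestrators like Kubernetes assume.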


Origin blog.csdn.net/u011397981/article/details/131294850