An Explanation of Deliverable Software Deployment in DevOps

Dan Zentgraf is a Domain Architect at Ascendant Technology. His mission is to help customers adopt DevOps and agile practices. A consultant and product manager, he has 12 years of experience working with business and engineering executives in the software field.

This article describes the transition from traditional software development to emerging DevOps practices, the challenges that transition poses, and the changes to be aware of along the way. It covers the definition of what gets deployed and the organizational and cultural changes required to take advantage of DevOps.

The general principle of agile development is that teams should consistently deliver software at a sustainable pace. At the same time, virtualization and cloud computing have made many new operational tools and infrastructure options available. Yet even though many development teams have adopted agile methods, the software they produce often fails to reach users in an acceptable state, because delivery is bogged down by bugs, error-prone manual tasks, or delays in operations and maintenance.

At the same time, an increasingly competitive market is putting enormous pressure on business executives to get new software and features into the hands of customers in shorter cycles. Together, these three factors, agile development, new operations technology, and market demand, are forcing a rethink of how software is delivered.

What these three trends have in common is an increased rate of change. It is no longer acceptable to put features that users need on hold for months before delivering them. Software is only valuable when it is used, and realizing that value quickly is especially important in today's market. This shift calls into question the assumptions under which teams develop software, and the result of that questioning is DevOps.

To understand software delivery in general, and software deployment with DevOps in particular, it is important to understand two things:

First, it is necessary to have a clear and correct understanding of DevOps and why various stakeholders are interested in it;

Second, when we try to apply DevOps, it's important to understand how assumptions about software deployment change.

When these are understood, it becomes possible to understand the framework that supports deployable software in DevOps.

Understanding DevOps and Stakeholders

The term DevOps is a combination of the words development and IT operations. It generally refers to a way of unifying the work of the disciplines involved in a software system so that changes are delivered to that system at the speed the business requires. It is often combined with agile development, which makes small, frequent changes, with the aim of focusing on high-value work, minimizing the risk of defects that comes with large changes, and reducing the risk that the software has drifted from business requirements by the time a long project is complete. From an operations perspective, however, change is a cause of instability. Instability inevitably leads to downtime, the most serious negative event in operations, so operations teams often resist change.

DevOps represents a significant change because, in the vast majority of organizations, development, operations, and the business operate under conflicting incentives, which leads to dysfunctional behavior. Organizations reward development teams for delivering new features and pursuing change, but penalize operations teams for downtime. As a result, development teams press operations for more frequent deployments, while operations teams demand more structure and precision in what is delivered to them. These contradictions are ingrained in many organizations and are exacerbated by differences in the maturity of the respective teams. DevOps seeks to remove these barriers and create a platform on which these competing teams can work together toward the same goal. To achieve this, it is important to understand each team's mindset.

Research and development

R&D is generally considered to be closer to market demand. The Agile Manifesto has been around for over a decade, and in a way it grew out of the early days of extreme programming, pair programming, and related practices. To be fair, the software part of the puzzle is seen as the low-hanging fruit: it is easy to change and is, in theory, independent of the infrastructure and platform it runs on. Infrastructure, by contrast, is traditionally treated as an expensive capital expenditure that is difficult to change and has a long amortization cycle. Unfortunately, complex software requires a lot of infrastructure and requires that the infrastructure evolve in parallel with the software itself. This connection is why early DevOps was sometimes referred to as "agile operations".

Whatever people call it, insisting on separating software from infrastructure is not sustainable if technology is to keep pace with market demand. This forces R&D teams to accept the reality of maintaining complex infrastructure in production.

Operations

Fortunately, new technologies have come to the forefront of the operations field, helping operations keep pace with market demand. The main disruptive technology is the widespread availability of virtualization on cheap commodity hardware. This has given rise to new approaches to systems management and, of course, to cloud computing. Virtualization gained popularity quickly because it allows organizations to consolidate underutilized computing resources and realize immediate value. The consulting firm Gartner estimates that 50% of applications now run in virtual environments.

However, simple consolidation recovers only a limited amount of value in reduced cost. Virtualization also makes infrastructure malleable, allowing operations teams to change infrastructure at previously impossible speeds without compromising stability. This use of virtualization often appears in projects related to cloud computing. These emerging technologies give operations teams a way to be as agile and responsive to the market as development teams.

Business

Business executives have learned that understanding and leveraging technology to achieve their goals is more critical than ever. According to IBM's 2012 CEO Study, based on interviews with 1,700 CEOs in 64 countries, technology was ranked the number one external factor affecting their organizations. Technology ranked only sixth out of nine factors in 2004 and has risen steadily since then.

Business executives have therefore come to view the ability to respond to customer needs as a competitive advantage, and by extension to see poor technical execution as an inherent threat to the business. That threat may not become a reality overnight, but it is never far off. The same study also discussed how much emphasis respondents placed on understanding their customers' needs and on meeting those needs with shorter lead times. The result is business pressure that collides with process reality, that is, with the way the technical disciplines of development and operations have worked in the past, and that collision has stimulated discussion about how to do things better.

DevOps and Deployment

High frequency

One of the most obvious changes underpinning these goals is the assumption that software becomes more valuable when new features reach the hands of users. This assumption means that the sooner a new feature is released, the sooner its value is realized. Yet the vast majority of organizations have historically been oriented toward delivering very large batches of software over long periods of time.

This short-cycle, high-frequency assumption is at the root of the shift in how software delivery is defined. DevOps abandons the old model of pausing development a few times a year for a large integration and assembly effort. Instead, the delivery system should run continuously and be good at delivering a steady stream of small increments. The key word is replace: traditional batch-oriented delivery systems are usually too unwieldy to run at the speed DevOps requires. Attempts to "do it the old way, just faster" usually fail, because the assumptions behind the old way were never meant to support high-frequency activity. That is neither good nor bad; it is simply an engineering problem to be solved.

Because this delivery process becomes an inherent part of extracting value from the software investment, it is natural to treat the process as an extension of the overall system; it becomes a basic factor in evaluating the return on that investment. This in turn creates the assumption that the efficiency and effectiveness of the delivery process have direct, measurable business value, which means the cost of maintaining delivery capability can no longer be ignored.

The whole-system view

Another characteristic of DevOps is that it looks at the complete system rather than just the code changes that implement software functionality. DevOps takes seriously the fact that application code relies on infrastructure such as servers, networks, and databases to deliver its value. The DevOps deployment approach therefore treats changes to all system components equally and tracks them in the same way. Some infrastructure changes, such as a planned network switch upgrade or a storage addition, can be seen as enhancements (new capabilities in the system), even if they are less visible than new application features. Likewise, a web server or SAN firmware patch can be treated as a bug fix. However a team chooses to categorize things, the key is that every part of the system is handled with the same rigor, so that the whole system remains stable.

In a DevOps environment, applying this whole-system perspective at a high level yields four core areas in which changes are delivered:

Application

The application is the custom code that delivers the system's features. It is highly visible and drives the core business process. That visibility earns it a great deal of attention, but the attention rarely extends proactively to the environment that supports it.

Operating system services

Operating system services cover, for every machine (virtual or physical), the operating system, services, libraries, and middleware that enable the application to run. Keeping this configuration coordinated across all environments, from test to production, is a key driver of application stability and quality.

Network services

Network services encompass all of the network devices and configuration surrounding the application. These configurations obviously affect operations and availability, but they can also affect application behavior and architecture. For example, the application and the load balancer must agree on how sessions are handled, among other things.

Database

The database holds the key data used and produced by the application. Throughout the life of the application, the database schema and the application must remain perfectly synchronized.

Features of DevOps Deployment Systems

DevOps deployment requires a very disciplined approach to delivering changes into the software systems it supports. The more exceptions, variations, or special cases a delivery system must withstand, the more expensive it becomes to build and, especially, to maintain. Controlling the complexity of the delivery process comes down to understanding and controlling what will be delivered and how it will be delivered. This requires the team to agree on a set of defined packages that make up each unit of delivery, and on a system for deploying those packages consistently into the desired environments.

Defined packages

An effective delivery system requires that the items being delivered meet certain criteria. The real-world shipping container is a good example. A standard container can be handled by standard equipment as it moves through an extremely complex logistics network spanning many modes of transport, such as trains, cargo ships, and trucks. With the help of cranes and tracking systems, anything can be shipped anywhere, as long as it fits in a container. Standardizing on the container was a revolution in the history of world freight and trade.

The good news is that we do not have to develop an international standard to deliver changes into a software system. Many teams already have the beginnings of one as an extension of existing activities: through continuous integration or similar initiatives, they produce software builds at a steady pace. Those builds have, or should have, a unique identity that lets the people who use them, such as testers, know what behavior and features to expect.

This approach gives teams a baseline for understanding groups of changes in the other three component areas as well. If each of the four component areas has a standard way of identifying its changes, it becomes relatively straightforward to track a combination of the four with a unique identifier. A combination of the application's build number, the server configuration version, the database schema version, and so on can then be recorded and deployed to any test or production environment.
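To make that idea concrete, here is a minimal sketch, not from the original article, of how such a combined identifier might be recorded. The field values (build-1421 and so on) and the hashing scheme are purely illustrative assumptions.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class DeployablePackage:
    """One unit of delivery: a recorded combination of the four change areas."""
    app_build: str               # e.g. CI build number of the application
    os_config_version: str       # version of the OS/middleware configuration
    network_config_version: str  # version of the network configuration
    db_schema_version: str       # version of the database schema

    def identifier(self) -> str:
        """Derive a single stable identifier for this combination."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

# The same package can be recorded once and deployed to any environment.
package = DeployablePackage(
    app_build="build-1421",
    os_config_version="os-cfg-3.2",
    network_config_version="net-cfg-1.7",
    db_schema_version="schema-v48",
)
print(package.identifier())  # unique ID used to track the deployment
```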

Delivery system

Unlike shipping containers, a system for deployable packages is not tied to hydraulics and diesel fuel. Rather, it is a set of tools and procedures that let teams manipulate their development, test, and production environments in the same way and deploy a package to any of them at any time. These tools and procedures cover a range of capabilities in the environment. The capabilities draw on different areas of expertise and are used in different ways depending on which of the four component areas is involved. Given this complexity, it helps to have a framework for visualizing the capabilities and separating them into categories based on what they do.

For DevOps, this capability taxonomy consists of six categories, each with many subcategories. The framework does not exist for classification's sake, nor does it imply that every implementation must have every capability. It does, however, provide a useful tool for understanding gaps and setting priorities when building a DevOps delivery system.

The following definitions summarize the six categories:

Change management - the activities undertaken to ensure that changes made to the system are properly recorded;

Orchestration - synchronizing and coordinating all activities in a distributed system;

Deployment - the activities involved in managing the lifecycle of the various modules running on the infrastructure;

Monitoring - providing the indicators needed to keep the environment healthy and giving stakeholders feedback on how the system is operating;

System registry - the central archive of shared infrastructure information required for the overall system to operate;

Provisioning - ensuring that the infrastructure environment provides enough of the correct components for the system to operate.

Taxonomy application

The value of the taxonomy is that it provides a way to understand the environment, identify overall goals, and evaluate candidate solutions. It lets a team quickly form a structured view of what it is aiming for and of how well existing tools perform. It can also be used to evaluate how a team or solution evolves over time. Depending on the situation, the structure also makes the evaluation more detailed where it needs to be.
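As a small illustration of using the taxonomy to find gaps, the sketch below (with hypothetical tool names, not anything named in the article) maps a toolchain onto the six categories and reports which categories are not covered.

```python
from enum import Enum, auto

class Capability(Enum):
    """The six capability categories of the DevOps delivery taxonomy."""
    CHANGE_MANAGEMENT = auto()
    ORCHESTRATION = auto()
    DEPLOYMENT = auto()
    MONITORING = auto()
    SYSTEM_REGISTRY = auto()
    PROVISIONING = auto()

# Hypothetical toolchain assessment: which categories does each tool cover?
toolchain = {
    "ci-server":     {Capability.CHANGE_MANAGEMENT, Capability.ORCHESTRATION},
    "deploy-engine": {Capability.DEPLOYMENT, Capability.ORCHESTRATION},
    "metrics-stack": {Capability.MONITORING},
    "cmdb":          {Capability.SYSTEM_REGISTRY},
}

covered = set().union(*toolchain.values())
gaps = set(Capability) - covered
print("Uncovered categories:", [c.name for c in gaps])  # e.g. PROVISIONING
```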

For example, in 2012 IBM released a beta of a continuous delivery framework called SmartCloud Continuous Delivery. The purpose of the framework is to provide delivery tools and integrations that help organizations apply DevOps delivery methods.

Applying just the top level of the taxonomy, we can see how the offering maps across the six categories:

First, the SmartCloud Continuous Delivery beta leverages IBM Rational Team Concert software and Rational® Asset Manager software to provide change management, orchestration, and deployment capabilities.

Second, integrations with IBM's cloud infrastructure tools (such as IBM Workload Deployer, IBM® SmartCloud™ Provisioning, and IBM® PureSystems) provide system registry and provisioning capabilities.

Third, reporting and feedback facilities provide monitoring capabilities.

Finally, integrations with tools such as Rational Automation Framework and Rational Build Forge add deployment and orchestration capabilities.

Even a quick pass through the taxonomy shows that SmartCloud Continuous Delivery touches all six categories, though to different depths. Such differences in depth are exactly what you would expect as tool providers emphasize different approaches, integrate different products or portfolios, and optimize for different customer requirements. A structured approach to evaluating solutions is how teams make sure they get what they actually need.

System operation

Whatever structure is used to understand it, a DevOps delivery system is the central agent for all changes to an application system. That centralization applies to every environment in which the application runs, including production and pre-production environments, and it ensures that the application always runs in a known state on an environment with a known configuration. Getting there takes more than a clear understanding of the principles and the framework; it takes ongoing application of, and adherence to, those principles, and there are several factors teams must respect to be effective.

Environment management

A software system always runs in multiple environments: a production environment and, of course, a number of QA and test environments. The QA environments are where teams verify that a given change will have the desired effect. They are often treated as second-class compared to production, but the truth is that bad information coming out of a test environment can cause a production outage. That dependency means teams must take these environments seriously; successful DevOps delivery depends on it.

The first step in taking these environments seriously is to ensure that each of them is truly representative. Confirming that a given QA configuration is a good proxy for production is an engineering effort, but once that baseline is established, maintaining it is far simpler.

The second step is to ensure that every change is promoted through the QA environments before making its way into production. Because the application is treated as a whole system, configuration changes to any aspect of the production environment that have not passed through QA are simply not allowed. This requires a change of perspective: no one gets to assume their change is trivial or somehow different from everyone else's. Beyond the obvious benefits of lower risk and higher reliability, this approach reduces the cost of maintaining and synchronizing environments. Environments always run in a known state, and there is less duplicated effort in managing configuration across them.
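A minimal sketch of that promotion rule follows, with hypothetical environment names and package IDs: a package may only reach production if the very same package has already been verified in every earlier stage.

```python
# Minimal promotion-gate sketch (hypothetical environment names and records).
PROMOTION_PATH = ["dev", "qa", "staging", "production"]

# Record of which package IDs have been deployed and verified per environment.
verified = {env: set() for env in PROMOTION_PATH}

def promote(package_id: str, target_env: str) -> None:
    """Deploy a package to target_env only if it passed all earlier stages."""
    idx = PROMOTION_PATH.index(target_env)
    for earlier in PROMOTION_PATH[:idx]:
        if package_id not in verified[earlier]:
            raise RuntimeError(
                f"{package_id} has not been verified in '{earlier}'; "
                f"refusing to deploy to '{target_env}'."
            )
    # ...the actual deployment tooling would run here...
    verified[target_env].add(package_id)
    print(f"Deployed {package_id} to {target_env}")

promote("pkg-1421", "dev")
promote("pkg-1421", "qa")
promote("pkg-1421", "staging")
promote("pkg-1421", "production")   # allowed: verified in all earlier stages
# promote("pkg-9999", "production") # would raise: never went through QA
```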

Exception handling

Even with a disciplined approach to environment management, emergencies can arise. Exceptions in the environment should be very rare, and a genuine one is usually driven by external factors; a vendor bug or an emergency platform security patch is a good example. Exceptions should not be caused by internal factors.

When an exception does occur, it should be treated as a significant event, corrected promptly, and reflected upon. Correction must focus on resynchronizing the environments: for example, if a change was applied to an environment outside the standard delivery system, that change must be added back into the standard delivery system. However the change was applied, there must also be a post-mortem process to explain why the exception occurred and, more importantly, how to reduce the chance of it happening again.
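As a small illustration of resynchronizing after an out-of-band change, the hypothetical sketch below compares what the delivery system has recorded for an environment with what is actually observed there, and flags any drift that must be folded back into the standard process. All names and versions are made up for the example.

```python
# Hypothetical drift check between recorded and observed environment state.
recorded_state = {          # what the delivery system believes is deployed
    "app_build": "build-1421",
    "os_config_version": "os-cfg-3.2",
    "db_schema_version": "schema-v48",
}

observed_state = {          # what an inventory scan of the environment reports
    "app_build": "build-1421",
    "os_config_version": "os-cfg-3.3",   # emergency vendor patch applied out of band
    "db_schema_version": "schema-v48",
}

drift = {
    key: (recorded_state.get(key), observed_state.get(key))
    for key in recorded_state.keys() | observed_state.keys()
    if recorded_state.get(key) != observed_state.get(key)
}

for key, (recorded, observed) in drift.items():
    # Post-mortem input: record the out-of-band change in the delivery system.
    print(f"Drift in {key}: recorded {recorded}, observed {observed}")
```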

Measurement and continuous improvement

Delivering software changes with a DevOps approach provides ample opportunity to measure and improve the process. In addition to the consistency that a standard system brings, the high frequency of DevOps delivery provides many more data points. Cycle time, the interval between packages, and process failures are common metrics. These are not the typical operational metrics such as availability and uptime; rather, they complement them. Other things worth tracking are measures of process effectiveness and efficiency: the effort required to maintain the application system, the time spent maintaining the DevOps infrastructure, and the wait time to deliver to a given environment are all good metrics.
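Here is a minimal sketch, using made-up deployment records, of how a team might compute a few of these delivery metrics from the data the delivery system already collects.

```python
from datetime import datetime

# Made-up deployment records: (package, requested, deployed, succeeded)
deployments = [
    ("pkg-1419", datetime(2013, 3, 1, 9, 0), datetime(2013, 3, 1, 11, 0), True),
    ("pkg-1420", datetime(2013, 3, 4, 9, 0), datetime(2013, 3, 4, 16, 0), False),
    ("pkg-1421", datetime(2013, 3, 6, 9, 0), datetime(2013, 3, 6, 10, 30), True),
]

# Wait time to deliver to an environment (request -> deployment), in hours.
wait_hours = [(done - req).total_seconds() / 3600 for _, req, done, _ in deployments]
avg_wait = sum(wait_hours) / len(wait_hours)

# Interval between packages, in days.
times = sorted(done for _, _, done, _ in deployments)
intervals = [(b - a).days for a, b in zip(times, times[1:])]

# Process failure rate.
failure_rate = sum(1 for *_, ok in deployments if not ok) / len(deployments)

print(f"Average wait: {avg_wait:.1f} h, intervals: {intervals} days, "
      f"failure rate: {failure_rate:.0%}")
```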

A DevOps delivery approach also makes it easier to build strong instrumentation into the application itself. These metrics are application-specific, but they reveal performance, user-experience feedback, and more. They can then be used to proactively compare operating environments, identify bottlenecks in the application system, and manage capacity requirements.

Loose coupling

The tools in a DevOps delivery system must be loosely coupled to the environments into which they deploy. This means a team must be able to replace a tool without major disruption to the whole system. Architecturally there are several ways to achieve this; favoring tools that expose web service APIs as a fundamental integration principle is a popular one. Whatever the technical approach, the team must recognize that changing a tool always has some impact on its overall ability to deliver changes, so when replacing a tool, the impact must be carefully analyzed and weighed against the benefits of the replacement.
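A minimal sketch of one way to keep that coupling loose follows, with hypothetical names and URLs: the delivery system depends only on a small interface, and a vendor tool integrated through its web service API is just one adapter behind it.

```python
from typing import Protocol

class DeploymentTool(Protocol):
    """Any tool the delivery system can drive; swapping tools means swapping
    implementations of this interface, not rewriting the delivery system."""
    def deploy(self, package_id: str, environment: str) -> bool: ...

class HttpApiTool:
    """Hypothetical adapter for a tool integrated through its web service API."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def deploy(self, package_id: str, environment: str) -> bool:
        # A real adapter would call the tool's API here; this sketch only
        # shows the shape of the interaction.
        print(f"POST {self.base_url}/deployments "
              f"package={package_id} env={environment}")
        return True

def release(tool: DeploymentTool, package_id: str, environment: str) -> None:
    """The delivery system depends only on the interface, not on a vendor tool."""
    if tool.deploy(package_id, environment):
        print(f"{package_id} released to {environment}")

release(HttpApiTool("https://deploy.example.internal"), "pkg-1421", "qa")
```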

Summary

DevOps advocates a fundamental change in software delivery, and its benefit is higher-quality software delivered more quickly and more frequently. Taking a structured view of the application system as a unified whole helps development and operations teams coordinate and cooperate. That coordination can in turn be supported by a systematic approach to delivering, and continuously improving, the capabilities used to deliver the whole system. DevOps supports agile development characterized by steady, incremental improvement over many cycles, and it gives teams the structure and insight they need to consistently deliver valuable software when users demand it. That is the essence of deployable software in DevOps.

 
