How to establish layered security across the entire DevOps pipeline

By dividing our DevSecOps approach into layers, we can strike the right balance between the need for strong security and the need to move fast and deploy frequently.


The DevOps movement has changed the way we integrate and release software. It has moved us from slow release cycles (sometimes annual) to daily, and sometimes even hourly, releases. We can write code and see our changes in production almost immediately. DevOps is a great first step toward breaking down barriers and responding quickly to market changes and customer needs, but there is still one important barrier to break and one important group to bring in: security operations (SecOps).

To include this vital group in the continuous integration and continuous deployment (CI/CD) of changes to production, we redefine DevOps as DevSecOps. The challenges here are the same ones we faced when integrating development and operations: developers want to move quickly and make frequent changes, while operations wants stability and infrequent change. Security teams tend to side with stability and infrequent change, because any change may mean repeating security testing and recertifying the environment.

As we move at the speed of DevOps, how can we expect these teams to redo that work daily or weekly?

Security layering

Before delving into this question, we should talk about a key security practice: layered security, also known as defense in depth. Layered security is the practice of applying multiple security controls, with each layer overlapping the ones before and after it, creating a web of controls that work together to protect technical systems.

In a layered security approach, companies use controls such as access controls, firewalls at the WAN gateway, and encryption of data at rest to mitigate risks to their technical systems. The list of controls goes on, but the point is that no single control can adequately protect a technical system. The same approach applies to performing security analysis on our applications.

Ask your company's application security team which scanning tools they use to ensure the applications they write are secure. Chances are they won't answer with a single tool, because no one tool can do it all. Instead, they will likely give you a list of tools, or types of tools, that they use or expect development teams to use.

This brings us back to the earlier question: how can we maintain a continuous deployment cycle while running all of these scans and tools? It is a difficult task, because some of these scans and tools can take hours, days, or even longer to run.

Inline scanning

Although some security tools and scanners do take a long time to run, there are faster tools that can be used early in the development life cycle to form our first layer of DevSecOps. This is the idea behind shifting left: moving processes from the end (or right side) of the development life cycle toward the beginning, that is, to the left.

The first layer should include tools and scanners that take only seconds (or minutes) to run. Common examples are code linting, unit tests, static code analyzers such as SonarQube, third-party dependency vulnerability checks such as OWASP Dependency-Check, and a subset of integration tests.

You may ask: "How do linting code and running unit tests fit into DevSecOps?" Bugs in software can provide exactly the opening an adversary is looking for. For example, OWASP listed injection as the number one vulnerability in its last two major web application security reports (2013 and 2017). Unit tests and static code analysis can help catch some of our mistakes and may prevent security holes from landing in the code.
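
As a hypothetical illustration of the kind of bug this layer can catch, the sketch below contrasts a SQL query built by string concatenation, which most static analyzers flag as an injection risk, with a parameterized query. The table and function names are invented for the example and are not from the article.

    import sqlite3

    def find_user_unsafe(conn, username):
        # Builds SQL by string concatenation: static analyzers typically
        # flag this pattern as an injection risk.
        query = "SELECT id, name FROM users WHERE name = '%s'" % username
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Parameterized query: the driver handles the value safely.
        query = "SELECT id, name FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()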

Since these tools and scans take very little time, it is best to push them as far left in the development life cycle as possible. When a developer pushes code to our Git repository and opens a pull request, these tools and scanners run, and the code must pass before it can be merged. Besides keeping our main branch buildable, having these tools early in the development life cycle gives developers early and frequent feedback. A minimal sketch of such a pre-merge check follows.
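
The sketch below is one hypothetical way a CI job could run this first layer on every pull request. The specific commands (flake8, pytest, and the OWASP Dependency-Check CLI) are stand-ins for whatever linters, unit tests, and dependency scanners a team actually uses.

    import subprocess
    import sys

    # Hypothetical first-layer checks run on every pull request.
    # Each entry is a fast tool: linting, unit tests, dependency scanning.
    CHECKS = [
        ["flake8", "src/"],                       # code linting
        ["pytest", "--quiet", "tests/unit"],      # unit tests
        ["dependency-check", "--scan", ".",       # OWASP Dependency-Check CLI
         "--failOnCVSS", "7"],
    ]

    def run_checks():
        for cmd in CHECKS:
            print("running:", " ".join(cmd))
            result = subprocess.run(cmd)
            if result.returncode != 0:
                # Fail fast so the merge is blocked and the developer
                # gets feedback while the change is still fresh.
                sys.exit(result.returncode)

    if __name__ == "__main__":
        run_checks()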

Scanning before deployment

The second layer of DevSecOps consists of tools that run inline with our deployment pipeline and can take several minutes or even an hour to complete. It may include deeper third-party vulnerability scans, Docker image scanning, and malware scanning.

One key to this layer is that the scanners and tools run after the build artifacts are generated but before those artifacts are stored anywhere, such as Artifactory or Amazon Elastic Container Registry. Even more important, any failure in this layer should immediately stop the current deployment and feed back to the development team.

Another key is parallelization, in this layer and in all later layers. Developers want their changes deployed as quickly as possible, and running several scans in sequence (each of which can take up to an hour) slows the deployment cycle unnecessarily. By running these tools in parallel, the delay added to the deployment is only as long as the longest-running scan.
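
A rough sketch of the parallelization point, assuming each scanner can be invoked as a command-line tool; the command names below are placeholders, not tools named in the article. The wall-clock cost of the layer then approaches the slowest scan rather than the sum of all of them.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Placeholder commands for second-layer scans run against the build artifact.
    SCANS = {
        "dependency-scan": ["run-dependency-scan", "build/artifact.jar"],
        "image-scan":      ["run-image-scan", "myapp:candidate"],
        "malware-scan":    ["run-malware-scan", "build/"],
    }

    def run_scan(name, cmd):
        return name, subprocess.run(cmd).returncode

    def run_all():
        # Run every scan at the same time; the layer takes roughly as long
        # as its slowest scan instead of the sum of all scans.
        with ThreadPoolExecutor(max_workers=len(SCANS)) as pool:
            results = list(pool.map(lambda item: run_scan(*item), SCANS.items()))
        failed = [name for name, code in results if code != 0]
        if failed:
            # Any failure stops the deployment before the artifact is
            # pushed to Artifactory or a container registry.
            raise SystemExit("scans failed: " + ", ".join(failed))

    if __name__ == "__main__":
        run_all()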

Scanning after deployment

The next layer of DevSecOps covers tools and scanners that run after the code has been deployed to a pre-production environment. These may include performance and integration tests and application scanners such as OWASP ZAP. We should aim for this layer to run quickly, ideally in an hour or less, to give developers fast feedback and limit the impact on the CD process.

To make sure we do not mistakenly push vulnerable code to production, this layer should run as part of the CD pipeline, with the intent of removing the artifacts and rolling back the environment whenever any scanner finds a vulnerability or otherwise fails.
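
A hedged sketch of how this layer might be wired into a CD pipeline, assuming a staging URL, a ZAP baseline scan run via its Docker image, and hypothetical helper scripts for removing the artifact and rolling back. None of these names come from the article; adapt them to however your scanners and pipeline are actually run.

    import subprocess

    STAGING_URL = "https://staging.example.com"   # hypothetical pre-production URL

    def run_post_deploy_scans():
        # OWASP ZAP baseline scan via its Docker image (one common invocation;
        # adjust to however ZAP is run in your environment).
        zap = subprocess.run([
            "docker", "run", "--rm", "-t", "owasp/zap2docker-stable",
            "zap-baseline.py", "-t", STAGING_URL,
        ])
        return zap.returncode == 0

    def main():
        if not run_post_deploy_scans():
            # On any finding or failure, pull the artifact and roll back
            # so vulnerable code never moves on toward production.
            subprocess.run(["./remove-artifact.sh"])   # hypothetical helper
            subprocess.run(["./rollback-staging.sh"])  # hypothetical helper
            raise SystemExit("post-deployment scans failed; environment rolled back")

    if __name__ == "__main__":
        main()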

Depending on industry, security, and regulatory requirements, we may be able to deploy to production automatically once this layer completes successfully. By that point, there should be enough automated scanning and testing in the pipeline to give reasonable assurance that the application is secure and robust.

Continuous scanning

Most of the scanners and tools discussed so far are embedded in the CI/CD pipeline. Our goal is to balance their impact on the pipeline's timeline against the reasonable assurance they provide about the application's security.

The last layer of DevSecOps is continuous scanning, or continuous security (CS). Just as continuous integration, testing, and deployment are synonymous with DevOps, continuous security is synonymous with, and a cornerstone of, DevSecOps. This layer includes tools such as Nessus, Qualys, IBM AppScan, and other infrastructure, application, and network scanning tools.

CS is not embedded in the CI/CD pipeline; it runs asynchronously and provides continuous feedback to developers. How developers receive and respond to that feedback needs to be discussed and agreed upon: will they address findings as soon as they arrive, or resolve them within a longer, agreed turnaround time?

How these tools and scanners are triggered, and how often they run, is another aspect of CS that stakeholders should agree on. Tools that expose an API can be kicked off from the CI/CD pipeline once a deployment completes. Others may run on demand or on a set cadence. However it is done, the important thing is that these tools and scanners do not run just once, or only once or twice a year; they should run as frequently as makes sense for the application.
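
As one possible shape for the API-triggered case, the sketch below shows a pipeline step that kicks off an asynchronous scan through a scanner's HTTP API after a deployment. The endpoint, token, and payload are placeholders, since Nessus, Qualys, and similar products each have their own APIs.

    import os
    import requests

    SCANNER_API = "https://scanner.example.com/api/scans"   # placeholder endpoint
    API_TOKEN = os.environ["SCANNER_API_TOKEN"]              # placeholder credential

    def trigger_continuous_scan(target):
        # Fire-and-forget: the scan runs asynchronously and its findings
        # flow back to the team outside the CI/CD pipeline.
        response = requests.post(
            SCANNER_API,
            headers={"Authorization": "Bearer " + API_TOKEN},
            json={"target": target, "profile": "full-infrastructure"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        trigger_continuous_scan("https://app.example.com")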

Conclusion

Just as we cannot secure a technical system with only one or two tools or security principles, we cannot rely on only one or two types of tools or scanners to secure our applications. A layered approach, applying different tools and scanners at different stages, gives us reasonable assurance about the security of our applications and the infrastructure they run on.
