How to overcome the challenges of microservices testing and maximize the benefits

Microservices have been an industry trend for years, yet many organizations have failed to benefit from the approach and have struggled with failed releases. These failures often come down to the difficulty of testing the interfaces between services to achieve the expected quality, security, and performance.

Ultimately, these APIs were not tested robustly enough. The silver lining is that the testing concepts and solutions are the same for legacy SOA testing and microservices testing. If you can solve API testing problems, you can solve them for your microservices as well.


Microservices bring new challenges

Today, it is commonplace to have hundreds or even thousands of microservices combined to define modern architectures.

Highly distributed enterprise systems have large and complex deployments, and this deployment complexity is often cited as a reason not to adopt microservices. In reality, you are not removing complexity from the overall architecture; you are decomposing it into a more complex deployment environment.

To the dismay of the uninitiated, the complexity does not really disappear. It evolves into a new kind of complexity.

While microservices hold the promise of improving parallel development efficiency, they also bring a new set of challenges. Here are some examples.

  • More interactions need to be tested at the API layer, whereas previously API testing was limited to externally exposed endpoints.
  • Parallel development barriers limit the time-to-market benefits of breaking up a monolith. Dependencies on other services and complex deployment environments reduce real-world parallelism.
  • Testing methods that worked well for monolithic applications, such as end-to-end UI testing, must now move to the API layer.
  • With distributed data and computing, there are more potential points of failure, making troubleshooting and root cause analysis difficult and complex.
3 Key Steps to Microservice Testing

What are the three key steps for microservice testing? Briefly described, these steps are:

  1. Record
  2. Monitor
  3. Control

Recording, monitoring, and control help you implement a testing approach that efficiently discovers which tests to create, while enabling you to automate component testing when you need to isolate downstream APIs.

The technology that enables this is service virtualization. Service virtualization brings all three of these concepts to life through one essential feature: the message broker.

By using a message broker in your deployment environment, you can monitor and log the flow of messages between APIs and control where messages are sent.

How does a message broker work?

It is designed to listen to a given endpoint or queue and then forward the messages it receives to the target endpoint or queue. Basically, it's the middleman for the API. You actively choose to inject one between integrated systems. Once it's in place, you can start taking advantage of it.
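To make the middleman role concrete, here is a minimal in-process sketch of a recording broker in Python. It is an illustrative assumption, not any particular product's API: the broker sits between a client and a target handler, forwards every message, and records the request/response pairs it sees. The `inventory_service` handler and its message shape are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class RecordingBroker:
    """A toy message broker: forwards each message to the real target
    and records the request/response pair for later replay."""
    target: Callable[[dict], dict]           # the downstream service handler
    log: list = field(default_factory=list)  # recorded request/response pairs

    def send(self, message: dict) -> dict:
        response = self.target(message)      # forward to the target endpoint
        self.log.append({"request": message, "response": response})
        return response

# Hypothetical downstream service: acknowledges an order id.
def inventory_service(msg: dict) -> dict:
    return {"status": "ok", "order_id": msg["order_id"]}

broker = RecordingBroker(target=inventory_service)
reply = broker.send({"order_id": 42})
print(reply)            # {'status': 'ok', 'order_id': 42}
print(len(broker.log))  # 1
```

A real broker would listen on a network endpoint or queue, but the recording and forwarding behavior is the same idea.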

API and microservices testing

Whether your environment is highly distributed, partially distributed, or mostly monolithic, the challenges of testing APIs, and the ways to overcome them, are not that different. Some challenges grow more severe as the number of integrations increases, but the basic approach is the same.

When considering an API or microservice as a black box, we can break its environment down into clients of the service and dependencies of the service. Testing your API as a black box means that your testing tool acts as a client of the service, and its job is to verify that it receives the correct response. Dependencies are the systems your service must integrate with to function properly. Testing optimizations are needed on both sides: clients and dependencies.

Before exploring these optimizations, let's start with the design phase, where planning your microservices and testing strategy should begin.

Microservice life cycle

Design Phase: Defining Clear Requirements

A widely accepted API development best practice is to implement service definitions during the design phase. For RESTful services, the OpenAPI specification is usually used.

Your API's consumers use these service definitions to understand what resources and operations your service supports and how your service expects to receive and send data. A REST service's definition contains a JSON schema that describes the structure of the message body so that client applications know how to consume your API.

Service definitions are not only important for helping other teams understand how your API works; they also have a positive impact on your testing strategy.

You can think of a service definition as a contract. This is the basis for starting to build your API governance strategy. Like any software testing, one of the biggest challenges in API testing is dealing with change.

As agile practices gain popularity, change has never been faster. When you are coordinating many teams that are each building APIs, and expecting all of those APIs to somehow work well together, enforcing those contracts is the first step toward untangling the mess.

Verify and enforce contracts

How to enforce a contract?

As part of any automated regression suite, you should check the service definitions written by your team for any errors. Swagger and OpenAPI have their own schemas that define how service definitions must be written. You can use them to automate these checks early in the API development lifecycle.

Then, in addition to validating the contract itself, you want to check that the response returned by your service also conforms to the contract. Your API testing framework should have built-in support for catching instances where an API returns a response that deviates from its service definition schema.
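As a sketch of what such a check looks like, here is a deliberately simplified, stdlib-only contract checker. It assumes a JSON-Schema-like dict with `required` fields and per-field `type` names; a real test suite would use a full JSON Schema validator rather than this hand-rolled version.

```python
import json

# Map JSON-Schema-style type names to Python types (simplified).
TYPES = {"string": str, "integer": int, "number": (int, float), "boolean": bool}

def conforms(response_body: str, schema: dict) -> list:
    """Return a list of contract violations; an empty list means conformance."""
    data = json.loads(response_body)
    errors = []
    for field in schema.get("required", []):
        if field not in data:
            errors.append(f"missing required field: {field}")
    for field, spec in schema.get("properties", {}).items():
        if field in data and not isinstance(data[field], TYPES[spec["type"]]):
            errors.append(f"field {field!r} is not of type {spec['type']}")
    return errors

# Hypothetical contract for a product resource.
schema = {
    "required": ["id", "name"],
    "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
}
print(conforms('{"id": 7, "name": "widget"}', schema))  # []
print(conforms('{"id": "7"}', schema))  # missing name, id has wrong type
```

Running a check like this against every response in your regression suite catches contract drift the moment it appears.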

Think of it this way. A car is made up of thousands of individual parts, all of which need to fit together perfectly.

If the team responsible for the power unit delivers an engine that deviates from the design specification, you may run into big problems when you try to connect a gearbox built by another team, because that team referenced the engine's design to see where the bolts need to line up.

That's what you're checking here. Good API governance can help you avoid these types of integration issues. Designing according to and adhering to these contracts should be one of the first focuses of your API testing practice.

Service definition contracts can also help your testing process become more resilient to change. The time required to refactor test cases for changes in the API can have a huge impact on testing, causing it to be pushed outside of the sprint.

This is both a quality risk and a schedule risk, since it adds testing time. An API testing framework should help teams refactor existing test cases in bulk when the API design changes, so they can keep up with the fast pace of agile sprints. A service contract and the right tools can make this far less painful.

Implementation Phase: Applying Best Practices

Microservices development is not a free pass to skip unit testing. Code is code, and unit testing is a fundamental quality practice. It helps teams find regression issues quickly and early, while they are still easy for developers to fix, no matter the software.

The ugly truth is that software teams with nonexistent or reactive unit testing practices tend to get poorer quality results. The overhead of unit testing is perceived as too high, so many managers and leaders do not prioritize it.

This is unfortunate, because the tool market has matured, making developers' lives easier and more efficient, and the cost-versus-quality trade-off of unit testing is far less stark than it used to be. Unit tests form the basis of a solid testing practice and should not be ignored.
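A unit test for a microservice looks like a unit test for any other code. The sketch below, using Python's standard `unittest` module, tests a hypothetical discount function from an order service; the function and its rules are illustrative, not from the article.

```python
import unittest

# Hypothetical business logic inside an order microservice.
def apply_discount(total_cents: int, percent: int) -> int:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents - (total_cents * percent) // 100

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(10_000, 25), 7_500)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(999, 0), 999)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(1000, 150)

if __name__ == "__main__":
    # exit=False lets the suite run inside larger scripts without exiting.
    unittest.main(argv=["apply_discount_test"], exit=False, verbosity=2)
```

Tests like these run in milliseconds and catch regressions long before the service is ever deployed.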

Additionally, software quality practices for developers have matured, with dedicated coding standards for API development. In 2019, the international nonprofit organization OWASP released the OWASP API Security Top 10.

Coding standards like these can help microservices teams avoid common security and reliability anti-patterns that can create business risks for their projects. Trying to adopt coding standards without tools is nearly impossible.

Fortunately, modern static analysis tools, also known as static application security testing (SAST) tools, keep up with industry standards and support them. Developers can scan code as it is written and as part of the continuous integration process to ensure nothing is missed.

Component Testing Phase: Using API Dependency Proxy

Component testing means testing your microservices individually. Achieving true isolation comes with some challenges, such as knowing how to handle a microservice's dependencies. Additionally, one of the hardest things to predict as a microservices developer is understanding exactly how other systems will use your API.

Client applications may discover creative uses for your microservice that its developers never considered. This is a blessing for the business and a curse for engineering, which is why it's so important to put effort into understanding API use cases.

API governance during the design phase is an important first step, but even with clearly defined contracts and automated schema validation of microservice responses, you can never fully predict how the requirements of your end-to-end product will evolve, or how those changes will impact the microservices in your domain.

As described earlier, recording, monitoring, and control, brought to life by service virtualization's message broker, are what make this practical: you can monitor and log the flow of messages between APIs, control where messages are sent, and automate component testing when you need to isolate downstream APIs.

Orchestrated and choreographed services

Orchestrated and choreographed (or reactive) services are fancy terms for synchronous and asynchronous messaging patterns.

If your service communicates through a message broker using protocols such as AMQP, Kafka, or JMS, then you are testing a choreographed, or reactive, service.

If you are testing REST or GraphQL interfaces, then you are dealing with an orchestrated, or synchronous, service.

For the purposes of this article, it doesn't matter which protocol and message exchange pattern you're dealing with. However, if your API testing framework doesn't support the protocol your organization chooses to adopt, you may find it more difficult to apply these principles to asynchronous messaging.

Capture client usage scenarios

It’s difficult for microservices teams to predict how other teams will use their APIs. We know that end-to-end, fully integrated testing is expensive and slow. Using the message broker feature of a service virtualization tool, you can record traffic from upstream client applications, capturing real-world usage scenarios and significantly improving your understanding of which tests should run in your CI/CD pipeline. Once recorded, the client applications no longer need to be present to reproduce this traffic.

In other words, recording enables you to replay these integration scenarios as automated regression tests, which is simpler and more manageable than asking clients to run tests that exercise your API indirectly. That's why tools that combine service virtualization with API testing are so popular: they can easily record this API traffic and then use it to drive API test scenarios under your control.
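The replay idea can be sketched in a few lines. Here the recorded log and the `service` function are illustrative stand-ins: each recorded request is re-sent to the current build of the service and the live response is compared with the recorded one.

```python
# Recorded traffic: request/response pairs captured by the broker (illustrative).
recorded = [
    {"request": {"op": "price", "sku": "A1"}, "response": {"price": 500}},
    {"request": {"op": "price", "sku": "B2"}, "response": {"price": 750}},
]

# Stand-in for the current build of the service under test.
PRICES = {"A1": 500, "B2": 750}

def service(req: dict) -> dict:
    return {"price": PRICES[req["sku"]]}

# Replay every recorded scenario and flag any response that drifted.
failures = [r for r in recorded if service(r["request"]) != r["response"]]
print(f"{len(recorded) - len(failures)}/{len(recorded)} replayed scenarios passed")
```

Any nonempty `failures` list means a real client scenario would now behave differently, which is exactly the regression signal you want from recorded traffic.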

The diagram shows a client application on the left generating API traffic, which flows through the message broker and is captured on the right as API test scenarios.

Make dependencies easy to manage

Testing your service can quickly become difficult when it relies on other APIs in the test environment, which creates issues with availability, realistic test data, and capacity.

This can push testing efforts outside the sprint and make it difficult for teams to identify integration issues early enough to have time to address them. This is a classic use case for service virtualization, a technology that mocks or simulates the responses of downstream APIs (creating a virtual version of the service). This isolation lets you fully test your API earlier and more easily in your CI/CD pipeline.

This diagram shows the costs, test data limits, and capacity constraints for QA staff, developers, and performance test engineers trying to test microservices in a real environment.

When a message broker is deployed in an environment, teams can record API traffic and then build virtual services that respond faithfully and realistically in place of the real microservices (including stateful transactions, where mocked dependencies must correctly handle PUT, POST, and DELETE operations) without the real dependencies needing to be available.

Stateful service virtualization is an important feature to create the most realistic virtual services covering all test cases.
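What "stateful" means here can be shown with a small sketch: a mocked REST dependency backed by an in-memory store, so a POST is visible to a later GET and a DELETE really removes the record. The resource shapes and status codes are illustrative assumptions, not any vendor's implementation.

```python
import uuid

class StatefulVirtualService:
    """A toy stateful virtual (mocked) REST dependency with an
    in-memory store, so writes are visible to subsequent reads."""
    def __init__(self):
        self._store = {}

    def handle(self, method: str, resource_id=None, body=None):
        if method == "POST":
            rid = str(uuid.uuid4())
            self._store[rid] = body
            return 201, {"id": rid, **body}
        if method == "GET":
            return (200, self._store[resource_id]) if resource_id in self._store else (404, None)
        if method == "PUT":
            if resource_id not in self._store:
                return 404, None
            self._store[resource_id] = body
            return 200, body
        if method == "DELETE":
            return (204, None) if self._store.pop(resource_id, None) is not None else (404, None)
        return 405, None

svc = StatefulVirtualService()
status, created = svc.handle("POST", body={"sku": "A1", "qty": 2})
rid = created["id"]
print(svc.handle("GET", rid))     # (200, {'sku': 'A1', 'qty': 2})
print(svc.handle("DELETE", rid))  # (204, None)
print(svc.handle("GET", rid))     # (404, None)
```

A stateless mock would return canned responses regardless of history; the in-memory store is what lets test scenarios exercise realistic create/read/update/delete sequences.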

This diagram shows QA staff, developers, and performance test engineers using service virtualization to test services in a real test environment.

Integration testing phase: controlling the test environment

Now let's widen the scope to integration testing. During this phase, sometimes called the system integration testing or SIT phase, the test environment is production-like to ensure no defects slip through.

You should expect the message broker to control whether your dependencies are connected in isolation (virtualized) or for real. Another aspect of control is making sure the message broker can be managed via an API. Organizations with highly mature CI/CD processes automate environment deployment and teardown workflows, where programmatic control of the message broker is a must-have.

When you are in the integration testing phase, what optimizations can you extract from the visibility (or observability) exposed by the message broker?

This is where monitoring capabilities are critical to a service virtualization solution that supports automated testing. Monitoring from the message broker will expose the inner workings of these complex workflows so that you can implement better tests that tell you where the problem is, not just that it exists.

For example, consider an order processing system that needs to check multiple downstream services (such as inventory, billing, and shipping systems) to fulfill the order. For a given test input, you can make assertions about certain behind-the-scenes behavior to help developers pinpoint why the test failed. When your team spends less time figuring out why a problem occurs, they have more time to actually fix it.
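The order processing example can be sketched as follows. The broker's monitoring data is modeled here as a simple list of recorded downstream calls; the service names and workflow are illustrative assumptions from the example, not a real system.

```python
calls = []  # what the broker's monitor would record

def call_downstream(service: str, payload: dict) -> dict:
    calls.append(service)  # record every downstream hop
    return {"service": service, "ok": True}

def place_order(order: dict) -> bool:
    # The order service must consult inventory, billing, and shipping in turn.
    for service in ("inventory", "billing", "shipping"):
        if not call_downstream(service, order)["ok"]:
            return False
    return True

assert place_order({"sku": "A1"})
# Assert not just the final result, but which downstream hops actually occurred:
assert calls == ["inventory", "billing", "shipping"]
print("order fulfilled via:", calls)
```

If a test fails, the recorded call sequence shows exactly which downstream hop was missing or out of order, pointing developers at the failing link rather than just the failing outcome.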

The diagram shows the partner integration process, purchase order workflow, product, billing, and shipping microservices.

Five tips for testing microservices

Here are five tips to help you develop a microservices testing strategy. Remember, these are just suggestions. As with all types of test plans, you need to consider the specifics of your setup.

  1. Think of each service as a software module. Perform unit tests on the service as you would any new code. In a microservices architecture, each service is considered a black box. Therefore, perform similar tests on each.
  2. Identify the essential links in the architecture and test them. For example, if there are strong links between the login service, the frontend that displays user details, and the details database, test these links.
  3. Don't just test happy path scenarios. Microservices can fail, and it is important to simulate failure scenarios to build resilience into the system.
  4. Try to test across phases. Experience shows that testers who combine multiple testing practices, starting during development and gradually expanding the scope of testing, not only increase the chance of exposing bugs but are also highly efficient. This is especially true in complex virtual environments, where minor differences between libraries and, despite the virtualization layer, the underlying hardware architecture can produce unforeseen and undesirable results.
  5. Use canary testing on new code, testing it with real users. Make sure all code is fully instrumented, and also use any monitoring provided by your platform provider. This covers both shift-left and shift-right testing, since you are also testing in the wild.
Demystifying Microservices

Microservices are here to stay. Unfortunately, many organizations fail to reap the benefits of the approach. It comes down to how difficult it is to test the interfaces (that is, the APIs) between distributed systems to achieve the expected quality, security, and performance.

We need a microservices testing approach that can discover, create, and automate component tests. The enabling technologies are message brokers and service virtualization, tightly integrated with a feature-rich API testing framework.

The message broker can log API traffic, perform monitoring to discover scenarios and use cases, and provide controls to manage and automate API test suites. Combined with service virtualization, automated microservice testing becomes an achievable reality.

Origin blog.csdn.net/m0_67129275/article/details/134871525