I recently came across testcontainers-java, a Java library that supports JUnit tests by providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container (check out https://testcontainers.org/ for details!).
We can use it to run integration tests for Kafka-based applications against an actual Kafka instance. This blog will show you how to use the library to test Kafka Streams topologies, along with the Kafka Streams testing utility classes (discussed in an earlier blog post).
Example available on GitHub
In simple terms, the testcontainers-java library allows you to spin up Docker containers programmatically. You could obviously use a Java Docker client directly, but testcontainers-java provides some benefits and ease of use on top. Its only prerequisite is Docker itself.
Kafka with testcontainers
With testcontainers-java, you can start an actual Kafka broker (or cluster) and run integration tests against it instead of an embedded version.
You can start off by using the GenericContainer API to spin up a Kafka container, e.g. using the Confluent Kafka Docker image. That would look something like this:
public GenericContainer kafka = new GenericContainer<>("confluentinc/cp-kafka")
.withExposedPorts(9092);
....
String host = kafka.getContainerIpAddress();
Integer port = kafka.getFirstMappedPort();
String bootstrapServer = host + ":" + port;
....
//use the bootstrapServer in tests...
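The bootstrapServer string can then be plugged into standard Kafka client properties. A small helper illustrating this (the class and method names here are my own, not from the library; the property keys are standard Kafka client configs):

```java
import java.util.Properties;

public class ClientProps {
    // Assemble minimal producer properties pointing at the container's
    // randomly mapped host:port. Pass the values obtained from
    // kafka.getContainerIpAddress() and kafka.getFirstMappedPort().
    static Properties forProducer(String host, int port) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", host + ":" + port);
        props.setProperty("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.setProperty("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```

Because the mapped port is random, never hardcode `9092` in the test itself; always read it from the container at runtime.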
We created a Kafka container from the Docker image, obtained the randomly generated port mapped to our local machine, and we're good to go. But we can do better!
testcontainers-java is flexible and supports the concept of ready-to-use modules. There is one available for Kafka already, and it makes things a little easier. Thanks to the KafkaContainer module, all we need to do is start the Kafka container, e.g. via a JUnit @Rule or @ClassRule, which will start it before the tests begin and tear it down after they end.
@ClassRule
public static KafkaContainer kafka = new KafkaContainer();
... or with @Before/@BeforeClass and @After/@AfterClass if you need more control.
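For the manual-lifecycle option, a minimal sketch (class name is hypothetical; requires Docker to actually run):

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.testcontainers.containers.KafkaContainer;

public class KafkaLifecycleTest {
    static KafkaContainer kafka;

    @BeforeClass
    public static void startKafka() {
        kafka = new KafkaContainer();
        kafka.start(); // blocks until the broker is ready to accept connections
    }

    @AfterClass
    public static void stopKafka() {
        kafka.stop(); // removes the throwaway container
    }
}
```

Using @BeforeClass/@AfterClass starts one container for the whole test class, which is noticeably faster than a per-test @Before/@After pair.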
Other noteworthy points include...
- You don't need to handle the Zookeeper dependency yourself, but the module is flexible enough to give you the option of using an external one if needed, e.g. KafkaContainer kafka = new KafkaContainer().withExternalZookeeper("zk-ext:2181");
- If you have containerized Kafka client applications, they can access the KafkaContainer instance as well
- Ability to select a specific version of the Confluent Platform, e.g. new KafkaContainer("5.4.0")
- Custom techniques, such as using a Dockerfile instead of referring to a Docker image, or using a DSL to programmatically build the Dockerfile
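For the last point, testcontainers provides ImageFromDockerfile, which lets you define the image in code. A minimal sketch (the base image and command here are arbitrary placeholders):

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class DockerfileExample {
    // Build a throwaway image from an in-code Dockerfile (via the DSL)
    // instead of pulling a pre-built image from a registry.
    public static GenericContainer<?> fromInlineDockerfile() {
        return new GenericContainer<>(
                new ImageFromDockerfile()
                        .withDockerfileFromBuilder(builder ->
                                builder.from("alpine:3.12")
                                       .cmd("sleep", "infinity")
                                       .build()));
    }
}
```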
Example: How to use this for testing Kafka Streams apps?
Make sure you have the required dependencies; for example, for Maven:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams-test-utils</artifactId>
<version>2.4.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.testcontainers</groupId>
<artifactId>kafka</artifactId>
<version>1.13.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-core</artifactId>
<version>1.3</version>
<scope>test</scope>
</dependency>
Here is an example. Use the @Before setup method to start the Kafka container and set the bootstrap server property for the Kafka Streams application:
public class AppTest {
KafkaContainer kafka = new KafkaContainer();
.....
@Before
public void setUp() {
kafka.start();
config = new Properties();
config.setProperty(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
config.setProperty(StreamsConfig.APPLICATION_ID_CONFIG, App.APP_ID);
.....
}
This is a simple topology ...
static Topology filterWordsLongerThan5Letters() {
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream = builder.stream(INPUT_TOPIC);
stream.filter((k, v) -> v.length() > 5).to(OUTPUT_TOPIC);
return builder.build();
}
... can be tested like this:
@Test
public void shouldIncludeValueWithLengthGreaterThanFive() {
topology = App.filterWordsLongerThan5Letters();
td = new TopologyTestDriver(topology, config);
inputTopic = td.createInputTopic(App.INPUT_TOPIC, Serdes.String().serializer(), Serdes.String().serializer());
outputTopic = td.createOutputTopic(App.OUTPUT_TOPIC, Serdes.String().deserializer(), Serdes.String().deserializer());
inputTopic.pipeInput("foo", "foobar");
assertThat("output topic was empty", outputTopic.isEmpty(), is(false));
assertThat(outputTopic.readValue(), equalTo("foobar"));
assertThat("output topic was not empty", outputTopic.isEmpty(), is(true));
}
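Note that the TopologyTestDriver above never actually talks to the broker; it is the container-backed bootstrap server in config that ties the test to real Kafka. For a true end-to-end check, you could run the topology with KafkaStreams against the container and produce/consume real records. A rough sketch, assuming the kafka container and config fields from the setup above, plus the usual Kafka client imports (timeouts and the group id are arbitrary choices of mine):

```java
@Test
public void shouldFilterAgainstRealBroker() throws Exception {
    // Run the topology for real, against the container's broker.
    KafkaStreams streams = new KafkaStreams(App.filterWordsLongerThan5Letters(), config);
    streams.start();
    try {
        // Producer pointed at the container's mapped address.
        Properties producerConfig = new Properties();
        producerConfig.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerConfig)) {
            producer.send(new ProducerRecord<>(App.INPUT_TOPIC, "foo", "foobar")).get();
        }

        // Consumer reading back from the output topic.
        Properties consumerConfig = new Properties();
        consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer");
        consumerConfig.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig)) {
            consumer.subscribe(Collections.singletonList(App.OUTPUT_TOPIC));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            assertThat(records.count(), is(1));
        }
    } finally {
        streams.close();
    }
}
```

This is slower than the TopologyTestDriver variant, so it makes sense to keep a handful of such end-to-end tests alongside the faster driver-based ones.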
That's it! This was a quick introduction to testcontainers-java, along with an example of how to use it alongside the Kafka Streams test utilities (full sample on GitHub).
from: https://dev.to//itnext/using-docker-to-test-your-kafka-applications-fmm