Recently I needed to monitor some business processes end to end. These processes are composed of several microservices, all written in Java with Spring. We want to understand the traffic and performance of every module involved in the whole business: how many requests were made in total, how many replies succeeded or failed, how long each step took, and so on. So I looked into how to expose statistical metrics from Java Spring applications, collect them centrally with Prometheus, and present them in different dashboards in Grafana.
First, let's define a simple business process. Suppose we have two Spring applications. The first provides an HTTP interface for business requests; after receiving a request, it sends the information carried in it to Kafka. The second subscribes to the Kafka topic, obtains the business data sent by application one, and processes it.
Application one
Create a new application on the start.spring.io website with the artifact name kafka-sender-example, and select Spring for Apache Kafka, Actuator, and Spring Web under Dependencies. Open the generated project and add a class named RemoteCommandController that implements an HTTP interface (the controller also uses Alibaba's fastjson to parse the request body, so that dependency needs to be added to the pom as well). The code is as follows:
package cn.roygao.kafkasenderexample;

import java.util.Collections;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.logging.Logger;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import com.alibaba.fastjson.JSONObject;

@RestController
public class RemoteCommandController {
    private final static Logger LOGGER = Logger.getLogger(RemoteCommandController.class.getName());

    @Autowired
    private KafkaTemplate<Integer, String> template;

    @PostMapping("/sendcommand")
    public ResponseEntity<Map<String, Object>> sendCommand(@RequestBody JSONObject commandMsg) {
        String requestId = UUID.randomUUID().toString();
        String vin = commandMsg.getString("vin");
        String command = commandMsg.getString("command");
        LOGGER.info("Send command to vehicle:" + vin + ", command:" + command);
        Map<String, Object> requestIdObj = Collections.singletonMap("requestId", requestId);
        ProducerRecord<Integer, String> record = new ProducerRecord<>("remotecommand", 1, command);
        try {
            template.send(record).get(10, TimeUnit.SECONDS);
        } catch (ExecutionException e) {
            LOGGER.info("Error");
            LOGGER.info(e.getMessage());
        } catch (TimeoutException e) {
            LOGGER.info("Timeout");
            LOGGER.info(e.getMessage());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            LOGGER.info(e.getMessage());
        }
        return ResponseEntity.accepted().body(requestIdObj);
    }
}
This code is very simple. It provides a POST /sendcommand interface. The caller provides the vehicle's VIN and the command to send, and after receiving the request the controller forwards the business request to a Kafka message topic. KafkaTemplate is used here to send messages. To configure it, define a configuration class named KafkaSender with the following code:
package cn.roygao.kafkasenderexample;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.IntegerSerializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaSender {
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("remotecommand")
                .build();
    }

    @Bean
    public ProducerFactory<Integer, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // See https://kafka.apache.org/documentation/#producerconfigs for more properties
        return props;
    }

    @Bean
    public KafkaTemplate<Integer, String> kafkaTemplate() {
        return new KafkaTemplate<Integer, String>(producerFactory());
    }
}
This code defines the Kafka server address, the message topic, and the rest of the producer configuration.
Run ./mvnw clean package to compile and package.
Application two
Create a new application on the start.spring.io website with the artifact name kafka-receiver-example, and select Spring for Apache Kafka and Actuator under Dependencies. Open the generated project and create a new class named RemoteCommandHandler to receive the Kafka messages. The code is as follows:
package cn.roygao.kafkareceiverexample;

import java.util.concurrent.TimeUnit;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.listener.adapter.ConsumerRecordMetadata;
import org.springframework.stereotype.Component;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

@Component
public class RemoteCommandHandler {
    private Timer timer;

    public RemoteCommandHandler(MeterRegistry registry) {
        this.timer = Timer
                .builder("kafka.process.latency")
                .publishPercentiles(0.15, 0.5, 0.95)
                .publishPercentileHistogram()
                .register(registry);
    }

    @KafkaListener(id = "myId", topics = "remotecommand")
    public void listen(String in, ConsumerRecordMetadata meta) {
        long latency = System.currentTimeMillis() - meta.timestamp();
        timer.record(latency, TimeUnit.MILLISECONDS);
    }
}
The constructor of this class takes a MeterRegistry and uses it to create a Timer, one of the meter types provided by Micrometer, which records duration information. The Timer is registered with the MeterRegistry.
The listen method subscribes to the Kafka message topic. For each message it reads the production timestamp from the message metadata, compares it with the current time to compute how long the message took from production to consumption, and records that latency with the timer. Following the earlier definition, the Timer then publishes the distribution at the configured percentiles.
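To make the percentile statistics concrete, here is a plain-JDK sketch (no Micrometer dependency; class and method names are illustrative) of what publishing the 0.15, 0.5, and 0.95 percentiles over a set of recorded latencies amounts to. Micrometer computes these values with an approximating histogram rather than by sorting all samples, but the idea is the same:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PercentileSketch {
    private final List<Long> samplesMs = new ArrayList<>();

    // Record one latency sample, as timer.record(...) does.
    public synchronized void record(long latencyMs) {
        samplesMs.add(latencyMs);
    }

    // Nearest-rank percentile over all recorded samples.
    public synchronized long percentile(double p) {
        List<Long> sorted = new ArrayList<>(samplesMs);
        Collections.sort(sorted);
        int rank = (int) Math.ceil(p * sorted.size());
        return sorted.get(Math.max(0, rank - 1));
    }

    public static void main(String[] args) {
        PercentileSketch sketch = new PercentileSketch();
        for (long ms = 1; ms <= 100; ms++) {
            sketch.record(ms); // latencies of 1..100 ms
        }
        System.out.println(sketch.percentile(0.15)); // 15
        System.out.println(sketch.percentile(0.5));  // 50
        System.out.println(sketch.percentile(0.95)); // 95
    }
}
```

Because Micrometer's percentiles are approximations computed per instance, they are most useful for a quick per-node view; for aggregation across instances the histogram buckets published by publishPercentileHistogram() are the better input.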
Similarly, we also need to define a Kafka configuration class, the code is as follows:
package cn.roygao.kafkareceiverexample;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

@Configuration
@EnableKafka
public class KafkaConfig {
    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(3);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // Note: use ConsumerConfig (not ProducerConfig) for consumer properties.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
Application two also needs the micrometer-registry-prometheus dependency so that Actuator can expose the /actuator/prometheus endpoint. Then add the following configuration to the application.properties file:
spring.kafka.consumer.auto-offset-reset=earliest
server.port=7777
management.endpoints.web.exposure.include=health,info,prometheus
management.endpoints.enabled-by-default=true
management.endpoint.health.show-details=always
Then run ./mvnw clean package to compile and package.
Start Kafka
Here I use Docker Compose to start Kafka. The content of the compose file is as follows:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.1.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-server:6.1.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_CONFLUENT_LICENSE_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_CONFLUENT_BALANCER_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:29092
      CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
      CONFLUENT_METRICS_ENABLE: 'true'
      CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'

  schema-registry:
    image: confluentinc/cp-schema-registry:6.1.0
    hostname: schema-registry
    container_name: schema-registry
    depends_on:
      - broker
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker:29092'
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081

  connect:
    image: cnfldemos/cp-server-connect-datagen:0.4.0-6.1.0
    hostname: connect
    container_name: connect
    depends_on:
      - broker
      - schema-registry
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
      CONNECT_REST_ADVERTISED_HOST_NAME: connect
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: compose-connect-group
      CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
      CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
      # CLASSPATH required due to CC-2422
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-6.1.0.jar
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
      CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR

  control-center:
    image: confluentinc/cp-enterprise-control-center:6.1.0
    hostname: control-center
    container_name: control-center
    depends_on:
      - broker
      - schema-registry
      - connect
      - ksqldb-server
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:29092'
      CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083'
      CONTROL_CENTER_KSQL_KSQLDB1_URL: "http://ksqldb-server:8088"
      CONTROL_CENTER_KSQL_KSQLDB1_ADVERTISED_URL: "http://localhost:8088"
      CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONTROL_CENTER_REPLICATION_FACTOR: 1
      CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
      CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
      CONFLUENT_METRICS_TOPIC_REPLICATION: 1
      PORT: 9021

  ksqldb-server:
    image: confluentinc/cp-ksqldb-server:6.1.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
      - connect
    ports:
      - "8088:8088"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      KSQL_BOOTSTRAP_SERVERS: "broker:29092"
      KSQL_HOST_NAME: ksqldb-server
      KSQL_LISTENERS: "http://0.0.0.0:8088"
      KSQL_CACHE_MAX_BYTES_BUFFERING: 0
      KSQL_KSQL_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      KSQL_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      KSQL_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      KSQL_KSQL_CONNECT_URL: "http://connect:8083"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_REPLICATION_FACTOR: 1
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: 'true'
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: 'true'

  ksqldb-cli:
    image: confluentinc/cp-ksqldb-cli:6.1.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - connect
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true

  ksql-datagen:
    image: confluentinc/ksqldb-examples:6.1.0
    hostname: ksql-datagen
    container_name: ksql-datagen
    depends_on:
      - ksqldb-server
      - broker
      - schema-registry
      - connect
    command: "bash -c 'echo Waiting for Kafka to be ready... && \
                       cub kafka-ready -b broker:29092 1 40 && \
                       echo Waiting for Confluent Schema Registry to be ready... && \
                       cub sr-ready schema-registry 8081 40 && \
                       echo Waiting a few seconds for topic creation to finish... && \
                       sleep 11 && \
                       tail -f /dev/null'"
    environment:
      KSQL_CONFIG_DIR: "/etc/ksql"
      STREAMS_BOOTSTRAP_SERVERS: broker:29092
      STREAMS_SCHEMA_REGISTRY_HOST: schema-registry
      STREAMS_SCHEMA_REGISTRY_PORT: 8081

  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.1.0
    depends_on:
      - broker
      - schema-registry
    ports:
      - 8082:8082
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
Run nohup docker compose up > ./kafka.log 2>&1 & to start it. Then open localhost:9021 in the browser to view Kafka-related information on the Control Center interface.
Run application one and application two, then call the POST http://localhost:8080/sendcommand interface to send a business request, for example with the following command:
curl --location --request POST 'http://localhost:8080/sendcommand' \
--header 'Content-Type: application/json' \
--data-raw '{
"vin": "ABC123",
"command": "engine-start"
}'
On the Kafka console you can see that a remotecommand message topic exists, and that one message has been produced and consumed.
Start Prometheus and Grafana
Prometheus and Grafana are also started with Docker Compose. The content of the compose file is as follows:
services:
  prometheus:
    image: prom/prometheus-linux-amd64
    #network_mode: host
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - ./config:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yaml'
    ports:
      - 9090:9090

  grafana:
    image: grafana/grafana
    user: '472'
    #network_mode: host
    container_name: grafana
    restart: unless-stopped
    links:
      - prometheus:prometheus
    volumes:
      - ./data/grafana:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    ports:
      - 3000:3000
    depends_on:
      - prometheus
Create a new config directory under the compose file directory to hold the Prometheus configuration file, with the following content:
scrape_configs:
  - job_name: 'Spring Boot Application input'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 2s
    static_configs:
      - targets: ['172.17.0.1:7777']
        labels:
          application: 'My Spring Boot Application'
The targets configuration here is the address exposed by application two (172.17.0.1 is the default Docker bridge gateway, which lets the Prometheus container reach an application running on the host), and metrics_path is the path from which metrics are scraped.
Create a new data/grafana directory under the compose file directory; it is mounted as Grafana's data directory. Note that you need to modify the directory permissions with chmod 777, otherwise Grafana will report a permission error.
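The directory preparation described above can be done in one step from the compose file directory, for example:

```shell
# Create the Prometheus config directory and the Grafana data directory,
# then open up permissions on the Grafana directory so the container
# user (uid 472) can write to it.
mkdir -p config data/grafana
chmod 777 data/grafana
```
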
Run nohup docker compose up > ./prometheus.log 2>&1 & to start them.
Open localhost:9090 to access the Prometheus page. Searching for kafka shows the kafka_process_latency metric reported by application two, with statistics for the 0.15, 0.5, and 0.95 percentiles as we defined.
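Because publishPercentileHistogram() also exports histogram buckets, quantiles can be computed on the Prometheus side as well, which allows aggregation across multiple application instances. A typical query (the metric name assumes Micrometer's default Prometheus naming, which appends the base unit of seconds) might look like:

```promql
histogram_quantile(0.95, rate(kafka_process_latency_seconds_bucket[5m]))
```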
Open localhost:3000 to access the Grafana page. Configure a data source that points at the Prometheus container's address, then Save & Test. Afterwards you can create a new dashboard and chart the kafka_process_latency metric in a panel.
[To be continued] Still to do: add a Counter metric for calls to the HTTP interface, and define more Grafana dashboards covering other service metrics.
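As a preview of the Counter idea, here is a plain-JDK sketch (Micrometer-free; class and method names are illustrative) of counting total, successful, and failed calls per endpoint. This is essentially what a Micrometer Counter tagged with the endpoint and an outcome label would track, and what Prometheus would then sum and rate over:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class CallCounters {
    // One monotonically increasing counter per (endpoint, outcome) pair,
    // analogous to a tagged Micrometer Counter.
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    public void increment(String endpoint, String outcome) {
        counters.computeIfAbsent(endpoint + "|" + outcome, k -> new LongAdder()).increment();
    }

    public long count(String endpoint, String outcome) {
        LongAdder adder = counters.get(endpoint + "|" + outcome);
        return adder == null ? 0L : adder.sum();
    }

    public static void main(String[] args) {
        CallCounters counters = new CallCounters();
        counters.increment("/sendcommand", "success");
        counters.increment("/sendcommand", "success");
        counters.increment("/sendcommand", "failure");
        System.out.println(counters.count("/sendcommand", "success")); // 2
        System.out.println(counters.count("/sendcommand", "failure")); // 1
    }
}
```

In the real application the increments would live in the sendCommand handler (success after template.send() completes, failure in the catch blocks), and the counters would be registered with the MeterRegistry so Prometheus scrapes them automatically.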