Why use Avro with Kafka - How to handle POJOs

adpap:

I have a Spring application that acts as my Kafka producer, and I was wondering why Avro is the best way to go. I read about it and everything it has to offer, but why can't I just serialize the POJO I created myself with Jackson, for example, and send it to Kafka?

I'm asking because POJO generation from Avro is not so straightforward. On top of that, it requires the Maven plugin and an .avsc file.

So, for example, I have a POJO called User that I created myself on my Kafka producer side:

public class User {

    private long userId;
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public long getUserId() {
        return userId;
    }

    public void setUserId(long userId) {
        this.userId = userId;
    }
}

I serialize it and send it to my user topic in Kafka. Then I have a consumer that has its own User POJO and deserializes the message. Is it a matter of space? Is it not also faster to serialize and deserialize this way? Not to mention the overhead of maintaining a Schema Registry.

cricket_007:

You don't need an AVSC file; you can use an AVDL file, which basically looks the same as a POJO with only the fields:

@namespace("com.example.mycode.avro")
protocol ExampleProtocol {
   record User {
     long id;
     string name;
   }
}

When you run the idl-protocol goal of the Maven plugin, it will generate this AVSC for you, rather than you writing it yourself:

{
  "type" : "record",
  "name" : "User",
  "namespace" : "com.example.mycode.avro",
  "fields" : [ {
    "name" : "id",
    "type" : "long"
  }, {
    "name" : "name",
    "type" : "string"
  } ]
}

And it will also place a SpecificData POJO User.java on your classpath for use in your code.
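As a reference point, code generation from the AVDL file can be wired into the build roughly like this; the plugin version and source/output directories below are assumptions to adjust for your project:

```xml
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.11.3</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>idl-protocol</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro</sourceDirectory>
        <outputDirectory>${project.basedir}/src/main/java</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```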


If you already have a POJO, you don't need to use AVSC or AVDL files. There are libraries to convert POJOs. For example, you can use Jackson, which is not only for JSON; you would likely just need to create a JacksonAvroSerializer for Kafka, or find out if one exists.
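As a rough sketch of that Jackson route, using the jackson-dataformat-avro module (the class names here are from that module; the wiring is one possible setup, not a Kafka serializer itself):

```java
import com.fasterxml.jackson.dataformat.avro.AvroMapper;
import com.fasterxml.jackson.dataformat.avro.AvroSchema;
import com.fasterxml.jackson.dataformat.avro.schema.AvroSchemaGenerator;

public class JacksonAvroExample {
    public static void main(String[] args) throws Exception {
        AvroMapper mapper = new AvroMapper();

        // Derive an Avro schema directly from the existing POJO
        AvroSchemaGenerator gen = new AvroSchemaGenerator();
        mapper.acceptJsonFormatVisitor(User.class, gen);
        AvroSchema schema = gen.getGeneratedSchema();

        User user = new User();
        user.setUserId(42L);
        user.setName("alice");

        // Serialize to Avro binary, then read it back into the POJO
        byte[] bytes = mapper.writer(schema).writeValueAsBytes(user);
        User back = mapper.readerFor(User.class).with(schema).readValue(bytes);
        System.out.println(back.getName());
    }
}
```

Wrapping the writer/reader pair in Kafka's Serializer/Deserializer interfaces would give you the JacksonAvroSerializer mentioned above.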

Avro also has a built-in library based on reflection.
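A minimal sketch of that reflect-based approach, applied to the existing User POJO (buffer handling kept inline for brevity):

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumWriter;

public class ReflectAvroExample {
    public static void main(String[] args) throws Exception {
        // Schema is inferred from the class via reflection -- no .avsc or .avdl needed
        Schema schema = ReflectData.get().getSchema(User.class);

        User user = new User();
        user.setUserId(42L);
        user.setName("alice");

        // Write the POJO as Avro binary using the inferred schema
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new ReflectDatumWriter<User>(schema).write(user, encoder);
        encoder.flush();

        System.out.println(schema.toString(true));
    }
}
```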


So, to the question: why Avro (for Kafka)?

Well, having a schema is a good thing. Think about RDBMS tables: you can explain the table and see all the columns. Move to NoSQL document databases, and they can contain literally anything; that is the JSON world of Kafka.

Let's assume you have consumers in your Kafka cluster that have no idea what is in a topic; they have to know exactly who/what has produced into it. They can try the console consumer, and if it's plaintext like JSON, then they have to figure out which fields they are interested in, then perform flaky HashMap-like .get("name") operations again and again, only to run into an NPE when a field doesn't exist. With Avro, you clearly define defaults and nullable fields.
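For example, an optional field with a default can be declared directly in the schema, so a consumer never hits a missing key (the email field here is a hypothetical addition to the User record):

```json
{
  "name" : "email",
  "type" : [ "null", "string" ],
  "default" : null
}
```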

You aren't required to use a Schema Registry, but it provides that kind of "explain topic" semantics for the RDBMS analogy. It also saves you from needing to send the schema along with every message, and from the expense of extra bandwidth on the Kafka topic. The registry is not only useful for Kafka, though, as it can also be used by Spark, Flink, Hive, etc. for data-science analysis surrounding streaming data ingest.
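A sketch of the producer-side configuration when pairing Confluent's Avro serializer with the registry; the broker and registry addresses are placeholder assumptions:

```java
import java.util.Properties;

public class AvroProducerConfig {
    public static Properties buildProps() {
        Properties props = new Properties();
        // Placeholder addresses -- adjust for your cluster
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.LongSerializer");
        // KafkaAvroSerializer registers/fetches schemas from the registry,
        // so only a small schema ID travels with each record instead of the full schema
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps());
    }
}
```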


Assuming you did want to use JSON, then try using MsgPack instead, and you'll likely see an increase in your Kafka throughput and save disk space on the brokers.


You can also use other formats like Protobuf or Thrift, as Uber has compared.
