Communicating with Kafka: What is the way to go?

eMagiz
Aug 10, 2021

Event Streaming can help you to identify and manage the data streams in your organization, and put them to use for real-time decision making, business intelligence and seamless data integration. In our previous blogs we discussed the value of event streaming and event stream processing for deriving real-time insights from your data.

However, to benefit from these advantages, your applications first need to be able to exchange data using Kafka. So how can we let our apps ‘speak’ the Kafka language? There are two possibilities: either we teach our apps to speak Kafka natively, or we use a translator between our apps and Kafka. In this blog, we will discuss the implications of both options as well as the requirements for implementation.

Option 1: Teaching your app to speak Kafka natively

When your app speaks Kafka natively, you implement the Kafka consumer and/or producer APIs directly in your app. Kafka has a dumb broker/smart consumer architecture, which means your app gains a great level of control over how messages are consumed and published. This also allows your app to take full advantage of Kafka features such as retention, with the ability to re-read any retained message, and near-real-time consumption, since your app interacts directly with the cluster at its own speed.
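To make this concrete, here is a minimal sketch of a native consumer that rewinds to the start of a partition to re-read retained events, using the standard Java client. The broker address (localhost:9092) and the topic name (orders) are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ReplayConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder cluster address
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Assign a partition directly and rewind to the earliest retained offset:
            // because Kafka retains events, the app can re-read history at its own pace.
            TopicPartition partition = new TopicPartition("orders", 0); // placeholder topic
            consumer.assign(List.of(partition));
            consumer.seekToBeginning(consumer.assignment());

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(2));
            records.forEach(r ->
                    System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
        }
    }
}
```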

But with great power comes great responsibility: when setting up a consumer or producer there are many configuration options that have to be tuned to the requirements of your app. We will highlight a few key configuration options that often need fine-tuning; a short configuration sketch follows each list.

For consumers:

  • Throughput and latency — Events in Kafka are delivered to consumers in (small) batches. Developers can set a minimum amount of data to be returned in one fetch, as well as a maximum time the broker may wait before answering a fetch request. A larger minimum batch improves throughput and overall efficiency, because fewer requests are needed, but it also increases latency; a shorter maximum wait time reduces latency at the cost of more, smaller requests.
  • Accounting for data loss — Depending on the durability requirements, consumers can take greater control over committing their consumption progress. The easiest approach is Kafka’s auto-commit mechanism: by increasing or reducing the commit interval, you make a trade-off between data duplication and data loss. If neither duplication nor loss (on the consumer side at least) is acceptable, you can instead use Kafka’s commit APIs to commit consumer progress manually.
  • Recovery — When one of your consumers fails, it has some time to recover so it can participate again in message consumption. If the recovery takes too long, however, a rebalance occurs and other group members take over its partitions. By shortening the heartbeat interval, you can check more often whether consumers are still alive and act sooner on failing consumers. Of course, this comes at the price of additional overhead.
  • Rebalancing and scalability — In Kafka, consumers can work together in groups to consume messages from topic partitions concurrently. While enabling consumer groups is easy, rebalancing work within a group can be tricky: during a rebalance, the partitions of a topic are reassigned among the members of a consumer group depending on the workload they can handle. Rebalancing impacts the performance of the group, and while it is needed in dynamic environments with variable workloads, static group membership and fewer rebalances can improve performance in less dynamic environments.
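The sketch below shows, for illustration, how these consumer options map onto the Java client’s configuration keys. The broker address, group id, topic name and the concrete values (64 KB fetches, a 15-second session timeout, and so on) are placeholders; the right values depend entirely on your own throughput, durability and recovery requirements.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class TunedConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "invoice-processors");      // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Throughput and latency: wait for at least 64 KB per fetch, but never longer than 100 ms.
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024);
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 100);

        // Accounting for data loss: disable auto-commit and commit manually after processing.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        // Recovery: declare a consumer dead after 15 s without heartbeats, sent every 3 s.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15_000);
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3_000);

        // Rebalancing: a static member id avoids a rebalance when this instance restarts quickly.
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "invoice-processor-1"); // placeholder

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("invoices")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println("processing " + r.value()));
                consumer.commitSync(); // commit only after processing succeeded
            }
        }
    }
}
```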

For producers:

  • Durability — To prevent message loss, you can configure your producer to await full or partial acknowledgement from the broker that the events have been received and replicated to a certain number of replicas. Awaiting this acknowledgement increases latency, but also increases durability.
  • Order — Enabling full acknowledgement also ensures that your events arrive on the broker in the order in which they are sent, which lets the producer send multiple messages simultaneously. If you do not use full acknowledgement, you cannot have simultaneous message deliveries if you also want to preserve message order.
  • Reliability — Transactions, in combination with idempotence, can be used to ensure exactly-once writes for events delivered across partitions. Transactions in Kafka ensure that the events within a transaction are produced exactly once, or not at all. Idempotence should also be enabled, as it ensures that retried deliveries never result in duplicate messages at the partition level. Overall, your choice of delivery semantics will impact your consumer and producer configurations as well as your performance.
  • Throughput and latency — Batching allows a producer to bundle multiple events into a single request to the broker, which increases latency but also increases throughput. Compression is supported as well: like batching it adds some latency, while increasing throughput thanks to smaller request sizes.
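Again for illustration, the following producer sketch shows how these options translate into the Java client’s configuration, including a small transactional write. The broker address, transactional id, topic names and tuning values are placeholders; treat it as a starting point, not as recommended settings.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class TunedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Durability: wait until all in-sync replicas have acknowledged the write.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        // Reliability: idempotence prevents duplicates on retry; a transactional id
        // enables exactly-once, all-or-nothing writes across partitions.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "order-producer-1"); // placeholder

        // Throughput and latency: batch up to 64 KB or wait at most 20 ms,
        // and compress batches to shrink request sizes.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("orders", "order-42", "created"));  // placeholder events
            producer.send(new ProducerRecord<>("invoices", "order-42", "billed"));
            producer.commitTransaction(); // both events become visible together, or not at all
        }
    }
}
```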

Configurations of your producers and consumers are seldom final and static. Make sure to monitor your Kafka solution regularly to ensure consumers and producers are working as expected. For instance, when managing Kafka through eMagiz, you can easily monitor your consumers to find out whether they are performant and able to consume messages without lag. Tools like eMagiz for managing your Kafka cluster can help you to identify problems in your data consumption and production and take appropriate action.
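As a rough illustration of what such monitoring measures, the sketch below uses Kafka’s admin client to compute consumer lag per partition: the gap between the latest offset on the broker and the group’s committed offset. The broker address and group id are placeholders.

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class ConsumerLagSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Offsets the group has committed so far (placeholder group id).
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("invoice-processors")
                         .partitionsToOffsetAndMetadata().get();

            // Latest offsets currently on the broker for the same partitions.
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                    admin.listOffsets(latestSpec).all().get();

            // Lag per partition = latest broker offset minus committed offset.
            committed.forEach((tp, offset) -> System.out.printf("%s lag=%d%n",
                    tp, latest.get(tp).offset() - offset.offset()));
        }
    }
}
```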

Option 2: Using a translator

While speaking Kafka natively offers the most flexibility to maximize throughput, latency and reliability, it is not always feasible or desirable to migrate applications to use Kafka’s native APIs.

One of the key use cases for Kafka is to deploy it as a centralized communication layer for all applications and services in an organization. To this end it is essential that not only native Kafka applications can be connected, but also legacy applications. Fortunately, there is a wide range of connectors available that can translate between many different communication protocols and Kafka. Most of these connectors are built with Kafka Connect, a framework that is offered as part of the Kafka open-source project. Kafka Connect offers a foundation for building scalable, distributed, and reliable connectors. You can use Kafka Connect’s APIs to build your own connector, but in many cases a pre-built Kafka Connect connector is already available. The advantage of pre-built connectors is that the consumer/producer settings are already optimized for the interfacing technology and that scaling is considered out of the box, so you only have to set up the settings that are specific to your implementation. Since Kafka Connect is a framework rather than a platform, deploying and configuring a Kafka Connect cluster does require cluster management or service supervision.
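As a small illustration, the sketch below registers one of Kafka’s bundled example connectors (the file source connector, which tails a local file and publishes each line to a topic) with a Kafka Connect cluster through its REST API. The Connect URL, connector name, file path and topic are placeholders for your own environment.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnectorSketch {
    public static void main(String[] args) throws Exception {
        // Connector definition: connector class, parallelism, and connector-specific settings.
        String connectorJson = """
            {
              "name": "orders-file-source",
              "config": {
                "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
                "tasks.max": "1",
                "file": "/tmp/orders.txt",
                "topic": "orders"
              }
            }
            """;

        // The Kafka Connect REST API listens on port 8083 by default; adjust for your cluster.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```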

In addition to Kafka Connect, there are also platforms for building connectors that interface with other applications. A platform allows you to build a connector between Kafka and other technologies without having to worry about how these connectors are deployed and managed. One example is a REST proxy / interface for Kafka. Such a REST proxy enables the production and consumption of messages over HTTP, allowing any application that can send and receive HTTP messages to communicate with Kafka. Integration platforms and API management platforms, such as eMagiz, can help you to quickly build a REST proxy for your Kafka cluster.
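To give an impression of what this looks like for the producing application, the sketch below posts an event to a hypothetical REST proxy endpoint. The URL and payload format are purely illustrative; the actual interface depends on the proxy or platform you put in front of Kafka.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical event payload; the real message format depends on your proxy.
        String event = """
            {"orderId": "order-42", "status": "created"}
            """;

        HttpRequest publish = HttpRequest.newBuilder()
                .uri(URI.create("https://kafka-proxy.example.com/topics/orders")) // placeholder URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(event))
                .build();

        // Any application that can issue an HTTP request can now publish to Kafka
        // without linking the native Kafka client libraries.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(publish, HttpResponse.BodyHandlers.ofString());
        System.out.println("proxy responded with HTTP " + response.statusCode());
    }
}
```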

What is the best option for you? The choice is yours!

Native Kafka consumers and producers, as well as connector frameworks and platforms such as Kafka Connect and eMagiz, are both great options for interfacing with Apache Kafka.

For applications and services with high demands on event consumption and delivery, such as banking applications, using the native Kafka APIs provides the most control over throughput, latency and reliability. For new applications, too, interfacing directly with Kafka can be an efficient way to integrate with a wide range of other applications with minimal effort.

For most existing applications, connectors are the easiest way to plug into the Kafka event management platform. Using these connectors, applications can quickly take advantage of the benefits of event streaming, such as data centralization, real-time and high-throughput data sharing, and data retention.

eMagiz can help you set up your Kafka solution and determine the best way to connect applications, consumers and producers, whether that is through speaking Kafka natively or using a translator. If you have questions about this or want to discuss the possibilities for your organization, you can reach out any time! We’re here to help you get up to speed with Kafka!

By Mark de la Court, Product owner Event management @ eMagiz
