Running a Kafka Consumer and Producer with Kerberos Authentication

(image: Franz Kafka)

No! No! No! Not this guy, we are talking about Apache Kafka.

(image: Apache Kafka logo)

What is Kafka

Kafka is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

Features:

  1. Scalability
  2. Fault-tolerant
  3. Multi-source
  4. Real-time streaming data
  5. Fast

Versions of my Ambari components

  • Ambari 2.7.1.0
  • HDP 3.0.1.0
  • HDF 3.2.0.0

Preparation

Update configuration properties

When you use Ambari to manage your Kafka cluster, you can skip most of the tedious installation and configuration steps. But you still need to change or add a few properties for Kerberos authentication.

  1. Change the property “listeners” in the “Kafka Broker” tab.
    Because Kerberos is used, the protocol of the listeners must be updated.

    listeners = SASL_PLAINTEXT://localhost:6667
  2. Update the inter-broker security protocol.
    Change your brokers’ protocol (“security.inter.broker.protocol”) in the “Advanced Kafka-broker” tab so that it matches the listener protocol above (SASL_PLAINTEXT).

  3. Add “KAFKA_OPTS” for the JAAS configuration file.
    After enabling Kerberos, Ambari sets up a JAAS (Java Authentication and Authorization Service) login configuration file for the Kafka client. Settings in this file are used by any client (consumer or producer) that connects to a Kerberos-enabled Kafka cluster.
    You don’t need to change the content of the JAAS configuration file; you just need to add a command to the “kafka-env template” in the “Advanced Kafka-env” tab.

    export KAFKA_PLAIN_PARAMS="-Djava.security.auth.login.config=/usr/hdp/3.0.1.0-187/kafka/conf/kafka_jaas.conf"
    export KAFKA_OPTS="$KAFKA_PLAIN_PARAMS $KAFKA_OPTS"

    Be sure to adjust the path above to the actual location of your JAAS file.
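For reference, the kafka_jaas.conf that Ambari generates typically contains a KafkaClient section along these lines (the keytab path and principal shown here are illustrative; yours are set by Ambari to match your cluster):

```
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/hostname@REALM.COM";
};
```

Any JVM started with the KAFKA_OPTS setting above will pick up this login context automatically.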

Create a Kafka topic

Creating a Kafka topic is a key step before running a consumer and a producer.
Besides the creation command, other useful topic commands are shown below.

  1. Creating

    /usr/hdp/3.0.1.0-187/kafka/bin/kafka-topics.sh --create --zookeeper ambari.com:2181 --replication-factor 1 --partitions 1 --topic flume-kafka
  2. Listing all topics

    /usr/hdp/3.0.1.0-187/kafka/bin/kafka-topics.sh --list --zookeeper ambari.com:2181
  3. Describing a specific topic

    /usr/hdp/3.0.1.0-187/kafka/bin/kafka-topics.sh --describe --zookeeper ambari.com:2181 --topic flume-kafka
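For example, describing the topic created above should print output similar to the following (the leader and replica broker ids depend on your cluster, so treat this as illustrative):

```
Topic:flume-kafka	PartitionCount:1	ReplicationFactor:1	Configs:
	Topic: flume-kafka	Partition: 0	Leader: 1001	Replicas: 1001	Isr: 1001
```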

Running a Consumer and a Producer

  1. Initiating Kerberos authentication for the Kafka client.

    kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/hostname@REALM.COM
  2. Running a consumer

    /usr/hdp/3.0.1.0-187/kafka/bin/kafka-console-consumer.sh --topic flume-kafka --zookeeper ambari.com:2181 --from-beginning --security-protocol SASL_PLAINTEXT

    The security protocol must be specified.

  3. Running a producer

    /usr/hdp/3.0.1.0-187/kafka/bin/kafka-console-producer.sh --topic flume-kafka --broker-list ambari.com:6667 --security-protocol SASL_PLAINTEXT

    The security protocol must be specified.
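The same settings apply when connecting programmatically instead of from the console tools. Below is a minimal sketch of the client configuration a Java client would need; the broker address matches the commands above, the configuration keys (security.protocol, sasl.kerberos.service.name) are standard Kafka client settings, and the class and method names are illustrative:

```java
import java.util.Properties;

public class KerberizedClientConfig {
    // Assembles client properties equivalent to the console commands above.
    // A real producer/consumer would pass these to the kafka-clients library;
    // here we only build and show the configuration.
    public static Properties clientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "ambari.com:6667");
        props.put("security.protocol", "SASL_PLAINTEXT");
        // Primary of the brokers' Kerberos principal ("kafka/_HOST@REALM")
        props.put("sasl.kerberos.service.name", "kafka");
        return props;
    }

    public static void main(String[] args) {
        // The JAAS file is supplied to the JVM the same way as for the
        // console tools, e.g.:
        //   -Djava.security.auth.login.config=/usr/hdp/3.0.1.0-187/kafka/conf/kafka_jaas.conf
        System.out.println(clientProps().getProperty("security.protocol")); // prints SASL_PLAINTEXT
    }
}
```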

Now you can use these terminal tools to test your consumer and producer.