Kafka Console Consumer with Kerberos Authentication


Kerberos-enabled clusters can pose some tricky challenges at times. I've had to deal with some of these myself.

If the Kafka cluster is Kerberos-enabled, you'll need to supply a jaas.conf file with the Kerberos details. Try following these steps (they worked for me):

  1. Create a jaas.conf file with the following contents:
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="<path-to-the-keytab-file>"
  principal="<kafka-principal>";
};

Note: I've assumed that the Kafka principal and the associated keytab are already created. If not, you'll need to create these first.
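If you set this up per environment, the jaas.conf file can also be generated from shell variables. A minimal sketch; the keytab path and principal below are placeholders, not values from your cluster:

```shell
# Placeholders -- substitute your own keytab path and principal.
KEYTAB=/etc/security/keytabs/kafka-client.keytab
PRINCIPAL=kafkaclient@EXAMPLE.COM

# Write the JAAS file; note the trailing semicolon after principal
# and the closing "};" -- both are required by the JAAS parser.
cat > /tmp/jaas.conf <<EOF
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="$KEYTAB"
  principal="$PRINCIPAL";
};
EOF

cat /tmp/jaas.conf
```

Because the heredoc delimiter is unquoted, the shell expands $KEYTAB and $PRINCIPAL when writing the file.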


  2. Create a properties file (say "consumer.properties") with the following contents:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka

  3. Then, at the terminal, run the following command:
$ export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-jaas.conf>"

  4. Execute the kafka-console-consumer script:
$ kafka-console-consumer --topic <topic-name> --from-beginning \
  --bootstrap-server <anybroker>:9092 --consumer.config <consumer.properties>

EDIT: Steps 3 and 4 can be combined into a single command if you prefer to keep everything as one entry in your command history.
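For reference, the combined form relies on the shell's one-shot VAR=value command syntax, which makes the variable visible only to that command's environment. The sketch below demonstrates the mechanism with sh -c and echo (the jaas.conf path is a placeholder); the real consumer invocation is shown as a comment since it needs a running Kerberized cluster:

```shell
# One-shot environment assignment: KAFKA_OPTS is set only for the child
# process -- exactly how the console consumer's JVM would receive it.
KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/jaas.conf" \
  sh -c 'echo "$KAFKA_OPTS"'

# The real combined command (requires a running, Kerberized cluster):
# KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-jaas.conf>" \
#   kafka-console-consumer --topic <topic-name> --from-beginning \
#   --bootstrap-server <anybroker>:9092 --consumer.config <consumer.properties>
```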

I hope this helps.

kafka-console-producer with Kerberos throws "security-protocol is not a recognized option"

My advice would be to put all the properties in a single file (e.g. client-ssl.properties) with the following content:

security.protocol=SASL_PLAINTEXT

and finally use --producer.config to pass the property file to the console producer:

/usr/hdp/3.1.0.0-78/kafka/bin/kafka-console-producer.sh --broker-list HOSTNAME:6667 --topic test_new_topic --producer.config client-ssl.properties

If you don't want to use a property file, you can use --producer-property to pass the security.protocol configuration:

/usr/hdp/3.1.0.0-78/kafka/bin/kafka-console-producer.sh --broker-list HOSTNAME:6667 --topic test_new_topic  --producer-property security.protocol=SASL_PLAINTEXT
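Either way, the effective client configuration is identical. A quick sketch that writes the one-line properties file consumed via --producer.config (the /tmp path is just an example location):

```shell
# Create the properties file passed to the console producer with
# --producer.config; equivalent to the single
# --producer-property security.protocol=SASL_PLAINTEXT flag above.
printf 'security.protocol=SASL_PLAINTEXT\n' > /tmp/client-ssl.properties

cat /tmp/client-ssl.properties
```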

Kafka Console Consumer on Kerberized Cluster: KRBError: Additional pre-authentication required, Server not found in Kerberos database

From the provided logs, the most significant information is:

>>>KRBError:
sTime is Wed Mar 28 07:37:59 EDT 2018 1522237079000
suSec is 467340
error code is 7
error Message is Server not found in Kerberos database
sname is "kafka"/local-dn-1.HADOOP.COM@HADOOP.COM
msgType is 30
KrbException: Server not found in Kerberos database (7)
Caused by: KrbException: Identifier doesn't match expected value (906)

18/03/28 07:38:53 DEBUG network.Selector: Connection with local-dn-1.HADOOP.COM/10.133.144.108 disconnected
javax.security.sasl.SaslException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTH_FAILED state. [Caused by javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]]
Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database Caused by: KrbException: Server not found in Kerberos database (7)
Caused by: KrbException: Identifier doesn't match expected value (906)

Furthermore, local-dn-1.HADOOP.COM, as well as all other nodes, needs to be resolvable (via DNS).

Your /etc/kafka/conf/producer-conf/kafka-client-jaas.conf has some entries that don't seem to fit together:

KafkaServer {
...
keyTab="/etc/kafka/conf/kafka.keytab"
principal="kafka/local-dn-1.hadoop.com@HADOOP.COM";
};
...
Client {
...
keyTab="/etc/kafka/conf/kafka.keytab"
principal="kafka/local-dn-1.hadoop.com.com@HADOOP.COM";
};

Based on this, I recommend checking the configuration of Kerberos authentication. Note the mismatched principals above: the KafkaServer entry uses kafka/local-dn-1.hadoop.com@HADOOP.COM, while the Client entry uses kafka/local-dn-1.hadoop.com.com@HADOOP.COM (a duplicated ".com"). It seems that Kerberos authentication for node local-dn-1 is not properly set up yet.

Connect to Kafka on Unix from Windows with Kerberos

Disclaimer: I'm not too familiar with Kafka, and that error message does not clearly hint at a Kerberos problem.

But given that this is a cross-realm situation, you will probably hit a Kerberos snag sooner or later...

From the MIT Kerberos documentation about the krb5.conf section [capaths]:

In order to perform direct (non-hierarchical) cross-realm authentication, configuration is needed to determine the authentication paths between realms.

A client will use this section to find the authentication path between its realm and the realm of the server.

In other words, you get a Kerberos TGT (ticket-granting ticket) for principal wtf@USERS.CORP.DMN but need a Kerberos service ticket for kafka/brokerhost.some.where@SERVERS.CORP.DMN. Each realm has its own KDC servers. Your Kerberos client (the Java implementation in this case) must have a way to "hop" from one realm to the other.


Scenario 1 >> both realms are "sibling" AD domains with mutual trust, and they use the default hierarchical relationship, meaning that there is a "parent" AD domain named CORP.DMN that is in the path from USERS to SERVERS.

Your krb5.conf should look like this...

[libdefaults]
default_realm = USERS.CORP.DMN
kdc_timeout = 3000
...

...

[realms]
USERS.CORP.DMN = {
    kdc = roundrobin.siteA.users.corp.dmn
    kdc = roundrobin.bcp.users.corp.dmn
}
SERVERS.CORP.DMN = {
    kdc = dc1.servers.corp.dmn
    kdc = dc2.servers.corp.dmn
    kdc = roundrobin.bcp.servers.corp.dmn
}
CORP.DMN = {
    kdc = roundrobin.corp.dmn
    kdc = roundrobin.bcp.corp.dmn
}

...assuming you have multiple AD Domain Controllers in each domain, sometimes behind DNS aliases doing round-robin assignment, plus another set of DCs on a separate site for BCP/DRP. It could be simpler than that :-)


Scenario 2 >> there is trust enabled but the relationship does not use the default, hierarchical path.

In that case you must explicitly define that "path" in a [capaths] section, as explained in the Kerberos documentation.
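A minimal sketch of such a [capaths] section, using the realm names from this example and assuming a direct trust between the two realms (in MIT krb5.conf syntax, a value of "." means the path is direct, with no intermediate realm; adapt this to your actual trust topology):

```
[capaths]
USERS.CORP.DMN = {
    SERVERS.CORP.DMN = .
}
SERVERS.CORP.DMN = {
    USERS.CORP.DMN = .
}
```

If the trust instead transits an intermediate realm, that realm's name replaces the "." on the right-hand side.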


Scenario 3 >> there is no trust between realms. You are screwed.

Or rather, you must obtain a different user that can authenticate in the same domain as the Kafka broker, e.g. xyz@SERVERS.CORP.DMN.

And maybe use a specific krb5.conf that states default_realm = SERVERS.CORP.DMN (I've seen weird behaviors of some JDK versions on Windows, for example).


Bottom line: you should ask your AD administrators for assistance. They may not be familiar with raw Kerberos conf, but they will know about the trust and about the "paths"; at that point it's just a matter of following the proper krb5.conf syntax.

Or, maybe, that conf has already been done by the Linux administrators; in that case you should ask for an example of their standard krb5.conf to check whether there is cross-domain stuff in there.

And of course you should enable Kerberos debug traces in your Kafka producer:

-Dsun.security.krb5.debug=true
-Djava.security.debug=gssloginconfig,configfile,configparser,logincontext

Just for the record, though not useful here: when using Kerberos over HTTP (SPNEGO) there's an additional flag, -Dsun.security.spnego.debug=true.


