I tried to add security to my Kafka cluster, following the documentation:
- kafka.apache.org/documentation/#security_sasl_scram
- docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_scram.html
I added the user with this command:
kafka-configs.sh --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
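Under the hood, kafka-configs.sh does not store the password itself; it derives SCRAM credentials (salt, iteration count, StoredKey, ServerKey) as defined in RFC 5802 and writes those to ZooKeeper. A rough sketch of the derivation (illustrative only; the salt and iteration count below are made up, not what Kafka generates):

```python
import base64
import hashlib
import hmac
import os


def scram_sha256_credentials(password: str, salt: bytes, iterations: int):
    """Derive the SCRAM-SHA-256 values a server stores (RFC 5802)."""
    # SaltedPassword = PBKDF2-HMAC-SHA256(password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()  # server keeps this, not the password
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return (base64.b64encode(stored_key).decode(),
            base64.b64encode(server_key).decode())


stored, server = scram_sha256_credentials("admin-secret", os.urandom(16), 4096)
print(len(base64.b64decode(stored)))  # SHA-256 digest: 32 bytes
```

This is why the credentials must exist in ZooKeeper before a broker can authenticate: the broker looks up these derived values, and if they are missing the handshake fails with "invalid credentials".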
I modified the server.properties:
broker.id=1
listeners=SASL_PLAINTEXT://kafka1:9092
advertised.listeners=SASL_PLAINTEXT://kafka1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
default.replication.factor=3
min.insync.replicas=2
log.dirs=/var/lib/kafka
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

Then I created the JAAS file:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="admin-secret";
};
I created the file kafka_opts.sh in /etc/profile.d:
export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf
But when I start Kafka, it throws the following error:
[2020-05-04 10:54:08,782] INFO [Controller id=1, targetBrokerId=1] Failed authentication with kafka1/kafka1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
I use the respective IP of each server in place of kafka1, kafka2, kafka3, zookeeper1, zookeeper2, and zookeeper3. Can someone help me with my issue?
Accepted answer
My main problem was this configuration:
zookeeper.connect=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181/kafka
This setting in server.properties was needed to keep the Kafka data organized under its own path (a chroot) in ZooKeeper, but it affects the way I need to execute the kafka-configs.sh command, so I will explain the steps I had to follow.
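The effect of the chroot can be sketched as follows: with the /kafka suffix, the brokers read and write all of their znodes, including the SCRAM credentials under config/users, below /kafka. A kafka-configs.sh call that omits the suffix writes the credentials to a path the brokers never look at. A minimal illustration (the config/users znode layout is the one Kafka uses; the helper function itself is hypothetical):

```python
def credentials_znode(chroot: str, user: str) -> str:
    """Build the ZooKeeper path where Kafka stores SCRAM credentials
    for `user`, given the chroot from zookeeper.connect ('' if none)."""
    return f"{chroot}/config/users/{user}"


# Brokers configured with zookeeper.connect=...:2181/kafka read from here:
print(credentials_znode("/kafka", "admin"))  # /kafka/config/users/admin

# kafka-configs.sh run against ...:2181 (no /kafka) writes here instead:
print(credentials_znode("", "admin"))        # /config/users/admin
```

The two paths never meet, so the broker finds no credentials for admin and inter-broker authentication fails.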
I downloaded ZooKeeper from the official site zookeeper.apache.org/releases.html.
I modified the zoo.cfg file and added the security configuration:
tickTime=2000
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
I created the JAAS file for ZooKeeper:
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="admin_secret";
};
I created the file java.env in /conf/ and added the following:
SERVER_JVMFLAGS="-Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf"
With these files you are telling ZooKeeper to use the JAAS file so that Kafka can authenticate to ZooKeeper. To validate that ZooKeeper is picking up the file, you only need to run:
zkServer.sh print-cmd

It will respond:
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg
"java" -Dzookeeper.log.dir="/opt/apache-zookeeper-3.6.0-bin/bin/../logs" ........ -Djava.security.auth.login.config=/opt/apache-zookeeper-3.6.0-bin/conf/zookeeper_jaas.conf ....... "/opt/apache-zookeeper-3.6.0-bin/bin/../conf/zoo.cfg" > "/opt/apache-zookeeper-3.6.0-bin/bin/../logs/zookeeper.out" 2>&1 < /dev/null

I downloaded Kafka from the official site www.apache.org/dyn/closer.cgi?path=/kafka/2.5.0/kafka_2.12-2.5.0.tgz.
I modified/added the following configuration in the server.properties file:
listeners=SASL_PLAINTEXT://kafka1:9092
advertised.listeners=SASL_PLAINTEXT://kafka1:9092
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:admin
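The last three lines enable authorization as well: with allow.everyone.if.no.acl.found=false, a resource that has no ACLs is denied to everyone except the super users. The decision logic is roughly as follows (an illustrative sketch of the semantics, not the AclAuthorizer source):

```python
def authorize(principal: str, acls_for_resource: set,
              super_users: set, allow_if_no_acl: bool = False) -> bool:
    """Sketch of Kafka's ACL decision for a principal on one resource."""
    # Super users bypass ACL checks entirely.
    if principal in super_users:
        return True
    # No ACLs on the resource: fall back to allow.everyone.if.no.acl.found.
    if not acls_for_resource:
        return allow_if_no_acl
    # Otherwise the principal needs a matching allow ACL.
    return principal in acls_for_resource


print(authorize("User:admin", set(), {"User:admin"}))  # True  (super user)
print(authorize("User:alice", set(), {"User:admin"}))  # False (no ACLs, denied)
```

With this setup, any client other than admin will be rejected until you grant it ACLs explicitly.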
I created the JAAS file for Kafka:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="admin_secret";
};
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="admin_secret";
};
One important thing to understand: the Client section must match the JAAS file in ZooKeeper, while the KafkaServer section is used for inter-broker communication.
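The pairing between the two files can be checked mechanically: ZooKeeper's DigestLoginModule defines its accounts through user_<name>="<password>" options in its Server section, and Kafka's Client section must present one of those accounts. A hypothetical sketch of the match (not a Kafka or ZooKeeper API):

```python
# Accounts defined in ZooKeeper's JAAS Server section:
# user_admin="admin_secret" means user "admin" with password "admin_secret".
zk_server_users = {"admin": "admin_secret"}

# Credentials Kafka presents from its JAAS Client section.
kafka_client = {"username": "admin", "password": "admin_secret"}

ok = zk_server_users.get(kafka_client["username"]) == kafka_client["password"]
print("match" if ok else "mismatch")  # match
```

If the Client credentials do not match a user_ entry on the ZooKeeper side, the broker cannot register itself in ZooKeeper at startup.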
I also need to tell Kafka to use the JAAS file; this can be done by setting the KAFKA_OPTS variable:
export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka_2.12-2.5.0/config/kafka_server_jaas.conf

Then run the following command:
kafka-configs.sh --zookeeper zookeeper:2181/kafka --alter --add-config 'SCRAM-SHA-256=[password=admin_secret]' --entity-type users --entity-name admin
As I mentioned before, my error was that I wasn't adding the /kafka part to the ZooKeeper address (note that everything that uses ZooKeeper needs the /kafka part appended at the end). Now, if you start ZooKeeper and Kafka, everything works.
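The rule can be summarized mechanically: take the chroot suffix from zookeeper.connect and append it to every ZooKeeper address you pass on the command line. A small illustrative helper (hypothetical, not part of Kafka's tooling):

```python
def with_chroot(hostports: str, chroot: str = "/kafka") -> str:
    """Append the chroot to a ZooKeeper connect string if it is missing."""
    return hostports if hostports.endswith(chroot) else hostports + chroot


print(with_chroot("zookeeper1:2181"))        # zookeeper1:2181/kafka
print(with_chroot("zookeeper1:2181/kafka"))  # zookeeper1:2181/kafka
```

Applying this to every --zookeeper argument (kafka-configs.sh, kafka-topics.sh, kafka-acls.sh, and so on) keeps the tools and the brokers looking at the same subtree.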