
Integrating Kafka with Kerberos Authentication

Setting Up Kerberos

Reference: Configuring GSSAPI | Confluent Documentation

Creating the KDC

https://github.com/gcavalcante8808/docker-krb5-server.git
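If you prefer, clone the repository instead of writing the compose file by hand; the keytab paths used later in this post assume it was cloned under /home/ubuntu:

git clone https://github.com/gcavalcante8808/docker-krb5-server.git
cd docker-krb5-server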

  1. mkdir kdc && cd kdc
  2. vim docker-compose.yml
  3. Paste in the contents below
  4. docker-compose up -d
  • The container runs supervisor, which occupies port 9001 and may conflict with MinIO. It is recommended to change the port in the supervisor config file (two occurrences); copy the file out with docker cp krb5:/etc/supr*.conf .
version: '2.2'

services:
  kdc:
    image: gcavalcante8808/krb5-server
    container_name: krb5
    build: .
    restart: unless-stopped
    # Avoid hostname resolution and hosts-mapping issues
    network_mode: host
    # networks:
    #   - kafka-kerbose
    # ports:
    #   - "88:88"
    #   - "464:464"
    #   - "749:749"
    environment:
      KRB5_REALM: EXAMPLE.COM
      KRB5_KDC: localhost
      TZ: Asia/Shanghai
    volumes:
      - ./data:/var/lib/krb5kdc
      - ./keytabs:/etc/security/keytabs
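To confirm the KDC container came up cleanly, a quick check with standard docker-compose commands:

docker-compose ps
# Follow the KDC logs and watch for startup errors
docker logs -f krb5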

Adding the Principal

kadmin.local -q 'addprinc -randkey kafka/ubuntu@EXAMPLE.COM'
kadmin.local -q "ktadd -k /etc/security/keytabs/kafka.keytab kafka/ubuntu@EXAMPLE.COM"

Adding krb5.conf on the Host

  • yum install krb5-workstation
  • vim /etc/krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
 default_realm = EXAMPLE.COM
 default_ccache_name = KEYRING:persistent:%{uid}

# The main changes are the kdc and admin_server addresses
[realms]
 EXAMPLE.COM = {
  kdc = 192.168.1.1:88
  admin_server = 192.168.1.1
 }

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
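If hostname resolution gets in the way (the network_mode: host comment in the compose file hints at this), the simplest workaround is a hosts entry on each machine involved; the IP and the hostname ubuntu below are just the example values used throughout this post:

# /etc/hosts
192.168.1.1   ubuntu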

Verifying That Kerberos Works

# Logs: /var/log/k*
kinit -V -kt /home/ubuntu/docker-krb5-server/keytabs/kafka.keytab kafka/ubuntu@EXAMPLE.COM
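If kinit succeeds, klist shows the cached ticket:

# List the cached Kerberos tickets
klist
# Optionally clear the ticket cache after testing
kdestroy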

Kafka Configuration Changes

server.properties

# Change to SASL_PLAINTEXT or SASL_SSL
listeners=SASL_PLAINTEXT://:19092
advertised.listeners=SASL_PLAINTEXT://:19092
# To enable both the authenticated listener (19092) and an unauthenticated one (19093):
#listeners=SASL_PLAINTEXT://:19092,PLAINTEXT://:19093
#advertised.listeners=SASL_PLAINTEXT://:19092,PLAINTEXT://:19093

security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
# If the principal is "kafka2/kafka1.hostname.com@EXAMPLE.COM"; then this should be kafka2
sasl.kerberos.service.name=kafka

kafka_server_jaas.conf

// Replace the paths with your own
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/home/ubuntu/docker-krb5-server/keytabs/kafka.keytab"
    principal="kafka/ubuntu@EXAMPLE.COM";
};
// ZooKeeper client authentication (not enabled here)
// Client {
//     com.sun.security.auth.module.Krb5LoginModule required
//     useKeyTab=true
//     storeKey=true
//     keyTab="/home/ubuntu/docker-krb5-server/keytabs/kafka.keytab"
//     principal="kafka/ubuntu@EXAMPLE.COM";
// };
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    serviceName=kafka
    useKeyTab=true
    storeKey=true
    keyTab="/home/ubuntu/docker-krb5-server/keytabs/kafka.keytab"
    principal="kafka/ubuntu@EXAMPLE.COM";
};

kafka-server-start.sh

  • During testing you can omit the -daemon flag so that logs go straight to the terminal, which is easier to watch.
export KAFKA_HOME=/home/ubuntu/kafka
# Replace the paths with your own
export KAFKA_OPTS=" -Dzookeeper.sasl.client=false -Dsun.security.krb5.debug=true -Djava.security.krb5.conf=/home/ubuntu/kafka/config/krb5.conf -Djava.security.auth.login.config=/home/ubuntu/kafka/config/kafka_server_jaas.conf "
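With those exports in place near the top of kafka-server-start.sh, start the broker in the foreground while testing (KAFKA_HOME as assumed above):

cd /home/ubuntu/kafka
# Foreground during testing; switch back to -daemon once authentication works
./bin/kafka-server-start.sh config/server.properties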

Kafka Standby Broker

  • Simply copy the first broker's installation; you only need to change broker.id, KAFKA_HOME, and the listeners-related settings in server.properties (see the sketch below).
  • The two brokers may share the same principal; for better security you can also give each broker its own principal.
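As a sketch, the second broker's server.properties might then differ only in lines like these (port 29092 is an arbitrary choice; any free port works):

broker.id=1
listeners=SASL_PLAINTEXT://:29092
advertised.listeners=SASL_PLAINTEXT://:29092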

Verifying by Producing and Consuming Manually

At least two brokers are needed; with only one, consumption will most likely not work (one likely cause is offsets.topic.replication.factor exceeding the broker count).

When no data can be consumed, first verify that the Kafka configuration is complete and that every node has been started.

Any error prompting for a password or similar is almost always a Kerberos configuration problem: the JAAS file, the environment variables, or forgetting to pass consumer.properties when consuming. With this many files involved, it is easy to miss one and end up with authentication failures.

Logs: $KAFKA_HOME/logs/server.log
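If the topic does not exist yet, it can be created explicitly before producing; --command-config passes the same SASL settings to the admin client, and a replication factor of 2 matches the two-broker setup (IP and topic name as in the commands below):

./bin/kafka-topics.sh --bootstrap-server 192.168.1.166:19092 --command-config config/consumer.properties --create --topic test --partitions 1 --replication-factor 2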

producer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
# Same sasl.kerberos.service.name as in server.properties
sasl.kerberos.service.name=kafka
  1. Export the KAFKA_HOME and KAFKA_OPTS settings shown above
  2. ./bin/kafka-console-producer.sh --broker-list 192.168.1.166:19092,192.168.1.166:19093 --topic test --producer.config config/producer.properties

consumer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
# Same sasl.kerberos.service.name as in server.properties
sasl.kerberos.service.name=kafka
  1. Export the KAFKA_HOME and KAFKA_OPTS settings shown above
  2. ./bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.166:19092,192.168.1.166:19093 --consumer.config config/consumer.properties --topic test --from-beginning

Once both are started, type messages into the producer terminal and they should show up in the consumer terminal.

Notes

Reference kdc.conf from the container

[kdcdefaults]
 kdc_listen = 88
 kdc_tcp_listen = 88

[realms]
 EXAMPLE.COM = {
  kadmin_port = 749
  max_life = 12h 0m 0s
  max_renewable_life = 7d 0h 0m 0s
  master_key_type = aes256-cts
  supported_enctypes = aes256-cts:normal aes128-cts:normal
  default_principal_flags = +preauth
 }

[logging]
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmin.log
 default = FILE:/var/log/krb5lib.log