
Installing Kafka with Helm

<h2 id="helm镜像库配置">helm镜像库配置</h2> helm repo add stable http://mirror.azure.cn/kubernetes/charts helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator helm repo list NAME URL stable http://mirror.azure.cn/kubernetes/charts local http://127.0.0.1:8879/charts incubator http://mirror.azure.cn/kubernetes/charts-incubator <h2 id="创建kafka和zookeeper的local-pv">创建Kafka和Zookeeper的Local PV</h2> <h3 id="创建kafka的local-pv">创建Kafka的Local PV</h3>

<h2 id="创建kafka和zookeeper的local-pv">Creating Local PVs for Kafka and ZooKeeper</h2>

<h3 id="创建kafka的local-pv">Creating the Kafka Local PVs</h3>

This deployment targets a local test environment, so Local Persistent Volumes are used for storage. First, create a StorageClass for local storage on the k8s cluster. local-storage.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```

```
kubectl apply -f local-storage.yaml
storageclass.storage.k8s.io/local-storage created

[root@master home]# kubectl get sc --all-namespaces -o wide
NAME            PROVISIONER                    AGE
local-storage   kubernetes.io/no-provisioner   9s
```
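A brief note on the binding mode: `WaitForFirstConsumer` delays binding a PVC to a PV until a pod using the claim is scheduled, which lets the scheduler honor each local PV's node affinity; claims will show as Pending until then. The mode on the created class can be confirmed with:

```bash
kubectl get sc local-storage -o jsonpath='{.volumeBindingMode}'
# expected: WaitForFirstConsumer
```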

Three Kafka brokers will be deployed across the three k8s nodes master, slaver1, and slaver2, so first create a Local PV for each broker on its node.

kafka-local-pv.yaml:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - slaver1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - slaver2
```

```
kubectl apply -f kafka-local-pv.yaml
```
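Before moving on, it is worth confirming that each PV is pinned to the intended node (a quick spot check; the grep context size is arbitrary):

```bash
kubectl describe pv datadir-kafka-0 | grep -A 4 'Node Affinity'
```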

Matching the Local PVs created above, create the backing directories:

On master, create the directory /home/kafka/data-0

On slaver1, create the directory /home/kafka/data-1

On slaver2, create the directory /home/kafka/data-2

```bash
# master
mkdir -p /home/kafka/data-0
# slaver1
mkdir -p /home/kafka/data-1
# slaver2
mkdir -p /home/kafka/data-2
```
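If passwordless SSH from master to the other nodes is available (an assumption about this environment), the three directories can also be created in one loop:

```bash
# the ordinal in the path matches the node: master->0, slaver1->1, slaver2->2
i=0
for host in master slaver1 slaver2; do
  ssh "$host" "mkdir -p /home/kafka/data-$i"
  i=$((i + 1))
done
```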

Check:

```
[root@master home]# kubectl get pv,pvc --all-namespaces
NAME                               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
persistentvolume/datadir-kafka-0   5Gi        RWO            Retain           Available           local-storage            12s
persistentvolume/datadir-kafka-1   5Gi        RWO            Retain           Available           local-storage            12s
persistentvolume/datadir-kafka-2   5Gi        RWO            Retain           Available           local-storage            12s
```

<h3 id="创建zookeeper的local-pv">Creating the ZooKeeper Local PVs</h3>

Three ZooKeeper nodes will be deployed across the same three k8s nodes master, slaver1, and slaver2, so first create a Local PV for each ZooKeeper node on its node.

zookeeper-local-pv.yaml:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - master
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - slaver1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - slaver2
```

```
kubectl apply -f zookeeper-local-pv.yaml
```

Matching the Local PVs created above, create the backing directories:

On master, create the directory /home/kafka/zkdata-0

On slaver1, create the directory /home/kafka/zkdata-1

On slaver2, create the directory /home/kafka/zkdata-2

```bash
# master
mkdir -p /home/kafka/zkdata-0
# slaver1
mkdir -p /home/kafka/zkdata-1
# slaver2
mkdir -p /home/kafka/zkdata-2
```

Check:

```
[root@master home]# kubectl get pv,pvc --all-namespaces
NAME                                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
persistentvolume/data-kafka-zookeeper-0   5Gi        RWO            Retain           Available           local-storage            5s
persistentvolume/data-kafka-zookeeper-1   5Gi        RWO            Retain           Available           local-storage            5s
persistentvolume/data-kafka-zookeeper-2   5Gi        RWO            Retain           Available           local-storage            5s
persistentvolume/datadir-kafka-0          5Gi        RWO            Retain           Available           local-storage            116s
persistentvolume/datadir-kafka-1          5Gi        RWO            Retain           Available           local-storage            116s
persistentvolume/datadir-kafka-2          5Gi        RWO            Retain           Available           local-storage            116s
```

<h2 id="部署kafka">Deploying Kafka</h2>

Write the values file for the kafka chart:

kafka-values.yaml:

```yaml
replicas: 3
persistence:
  storageClass: local-storage
  size: 5Gi
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
  replicaCount: 3
```

```
helm install --name kafka --namespace kafka -f kafka-values.yaml incubator/kafka
```

Check:

```
[root@master home]# kubectl get po,svc -n kafka -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
pod/kafka-0             1/1     Running   2          5m7s    10.244.1.24   slaver1   <none>           <none>
pod/kafka-1             1/1     Running   0          2m50s   10.244.2.16   slaver2   <none>           <none>
pod/kafka-2             0/1     Running   0          80s     10.244.0.13   master    <none>           <none>
pod/kafka-zookeeper-0   1/1     Running   0          5m7s    10.244.1.23   slaver1   <none>           <none>
pod/kafka-zookeeper-1   1/1     Running   0          4m29s   10.244.2.15   slaver2   <none>           <none>
pod/kafka-zookeeper-2   1/1     Running   0          3m43s   10.244.0.12   master    <none>           <none>

NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
service/kafka                      ClusterIP   10.101.224.127   <none>        9092/TCP                     5m7s   app=kafka,release=kafka
service/kafka-headless             ClusterIP   None             <none>        9092/TCP                     5m7s   app=kafka,release=kafka
service/kafka-zookeeper            ClusterIP   10.97.247.79     <none>        2181/TCP                     5m7s   app=zookeeper,release=kafka
service/kafka-zookeeper-headless   ClusterIP   None             <none>        2181/TCP,3888/TCP,2888/TCP   5m7s   app=zookeeper,release=kafka

[root@master home]# kubectl get pv,pvc --all-namespaces
NAME                                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS    REASON   AGE
persistentvolume/data-kafka-zookeeper-0   5Gi        RWO            Retain           Bound    kafka/datadir-kafka-2          local-storage            130m
persistentvolume/data-kafka-zookeeper-1   5Gi        RWO            Retain           Bound    kafka/data-kafka-zookeeper-0   local-storage            130m
persistentvolume/data-kafka-zookeeper-2   5Gi        RWO            Retain           Bound    kafka/data-kafka-zookeeper-1   local-storage            130m
persistentvolume/datadir-kafka-0          5Gi        RWO            Retain           Bound    kafka/data-kafka-zookeeper-2   local-storage            132m
persistentvolume/datadir-kafka-1          5Gi        RWO            Retain           Bound    kafka/datadir-kafka-0          local-storage            132m
persistentvolume/datadir-kafka-2          5Gi        RWO            Retain           Bound    kafka/datadir-kafka-1          local-storage            132m

NAMESPACE   NAME                                           STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
kafka       persistentvolumeclaim/data-kafka-zookeeper-0   Bound    data-kafka-zookeeper-1   5Gi        RWO            local-storage   129m
kafka       persistentvolumeclaim/data-kafka-zookeeper-1   Bound    data-kafka-zookeeper-2   5Gi        RWO            local-storage   4m36s
kafka       persistentvolumeclaim/data-kafka-zookeeper-2   Bound    datadir-kafka-0          5Gi        RWO            local-storage   3m50s
kafka       persistentvolumeclaim/datadir-kafka-0          Bound    datadir-kafka-1          5Gi        RWO            local-storage   129m
kafka       persistentvolumeclaim/datadir-kafka-1          Bound    datadir-kafka-2          5Gi        RWO            local-storage   2m57s
kafka       persistentvolumeclaim/datadir-kafka-2          Bound    data-kafka-zookeeper-0   5Gi        RWO            local-storage   87s
```
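Once all brokers reach Running, the cluster can be smoke-tested from a throwaway client pod. This is a sketch following the pattern in the chart's install notes; the confluentinc/cp-kafka image tag is an assumption, while the kafka, kafka-headless, and kafka-zookeeper service names match the services listed above:

```bash
kubectl -n kafka run testclient --rm -it --restart=Never \
  --image confluentinc/cp-kafka:5.0.1 -- bash

# inside the test pod: create a topic, then produce and consume a message
kafka-topics --zookeeper kafka-zookeeper:2181 --create \
  --topic test1 --partitions 1 --replication-factor 3
kafka-console-producer --broker-list kafka-headless:9092 --topic test1
kafka-console-consumer --bootstrap-server kafka:9092 --topic test1 --from-beginning
```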

<h2 id="Question">Issues</h2>

<h3 id="拉取最新的gcr.iogoogle_samplesk8szk异常">Pulling the latest gcr.io/google_samples/k8szk fails</h3>

The latest k8szk version is 3.5.5, and the image cannot be pulled (gcr.io is not reachable from this environment).

Workaround: pull the image from a mirror repository and retag it:

```bash
docker pull bairuijie/k8szk:3.5.5
docker tag bairuijie/k8szk:3.5.5 gcr.io/google_samples/k8szk:3.5.5
docker rmi bairuijie/k8szk:3.5.5
```
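The retagged image has to exist on every node that can schedule a zookeeper pod, because with a fixed tag the kubelet resolves the image from the node-local cache. With the same passwordless-SSH assumption as earlier:

```bash
for host in master slaver1 slaver2; do
  ssh "$host" "docker pull bairuijie/k8szk:3.5.5 \
    && docker tag bairuijie/k8szk:3.5.5 gcr.io/google_samples/k8szk:3.5.5 \
    && docker rmi bairuijie/k8szk:3.5.5"
done
```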

Alternatively, change the image version to k8szk:v3:

```
kubectl edit pod kafka-zookeeper-0 -n kafka
```
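Editing the pod only patches the running instance; the StatefulSet controller recreates pods from its own template, so the change is lost on the next restart. To make it stick, edit the StatefulSet instead (the kafka-zookeeper name follows from the pod names above):

```bash
kubectl -n kafka edit statefulset kafka-zookeeper
```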

<h3 id="使用k8szk镜像zookeeper一直处于crashloopbackoff状态">With the k8szk image, ZooKeeper stays in CrashLoopBackOff</h3>

```
[root@master home]# kubectl get po -n kafka -o wide
NAMESPACE   NAME                    READY   STATUS             RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
kafka       pod/kafka-0             0/1     Running            7          15m   10.244.2.15   slaver2   <none>           <none>
kafka       pod/kafka-zookeeper-0   0/1     CrashLoopBackOff   7          15m   10.244.1.11   slaver1   <none>           <none>

[root@master home]# kubectl logs kafka-zookeeper-0 -n kafka
Error from server: Get https://18.16.202.227:10250/containerLogs/kafka/kafka-zookeeper-0/zookeeper: proxyconnect tcp: net/http: TLS handshake timeout
```

Fetching logs through the API server fails here, so go to the slaver1 node and read the Docker container logs directly:

```
[root@slaver1 ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS                          PORTS   NAMES
51eb5e6e0640   b3a008535ed2   "/bin/bash -xec /con…"   About a minute ago   Exited (1) About a minute ago           k8s_zookeeper_kafka-zookeeper-0_kafka_4448f944-b1cd-4415-8abd-5cee39699b51_8
...

[root@slaver1 ~]# docker logs 51eb5e6e0640
+ /config-scripts/run
/config-scripts/run: line 63: /conf/zoo.cfg: No such file or directory
/config-scripts/run: line 68: /conf/log4j.properties: No such file or directory
/config-scripts/run: line 69: /conf/log4j.properties: No such file or directory
/config-scripts/run: line 70: $LOGGER_PROPERS_FILE: ambiguous redirect
/config-scripts/run: line 71: /conf/log4j.properties: No such file or directory
/config-scripts/run: line 81: /conf/log4j.properties: No such file or directory
+ exec java -cp '/apache-zookeeper-*/lib/*:/apache-zookeeper-*/*jar:/conf:' -Xmx1G -Xms1G org.apache.zookeeper.server.quorum.QuorumPeerMain /conf/zoo.cfg
Error: Could not find or load main class org.apache.zookeeper.server.quorum.QuorumPeerMain
```

Inspect the chart that helm fetched:

```
helm fetch incubator/kafka

ll
-rw-r--r-- 1 root root 30429 Aug 23 14:47 kafka-0.17.0.tgz
```
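Unpacking the archive and grepping the bundled zookeeper chart's values shows which image it actually uses (the charts/zookeeper path assumes the standard Helm dependency layout):

```bash
tar -zxf kafka-0.17.0.tgz
grep -A 2 'image:' kafka/charts/zookeeper/values.yaml
```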

After unpacking, it turns out that the chart's bundled zookeeper dependency uses the official zookeeper image rather than k8szk. The chart's /config-scripts/run script is written against that image's layout, which is why the k8szk container cannot find /conf/zoo.cfg or load the ZooKeeper classes.

The old kafka-values.yaml, with the image override:

```yaml
replicas: 3
persistence:
  storageClass: local-storage
  size: 5Gi
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
  replicaCount: 3
  image:
    repository: gcr.io/google_samples/k8szk
```

Removing the image override is all that is needed:

```yaml
replicas: 3
persistence:
  storageClass: local-storage
  size: 5Gi
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
  replicaCount: 3
```
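To roll the corrected values out to the existing release instead of reinstalling, a helm upgrade should suffice (Helm 2 syntax, matching the install command above):

```bash
helm upgrade kafka incubator/kafka -f kafka-values.yaml
```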

<h2 id="参考">References</h2>

使用helm在k8s上部署kafka (Deploying Kafka on k8s with Helm)

Apache ZooKeeper 服务启动源码解释 (Apache ZooKeeper service startup source code explained)

kubernetes-retired/contrib

kubernetes(k8s) helm安装kafka、zookeeper (Installing Kafka and ZooKeeper on Kubernetes with Helm)

helm安装kafka (Installing Kafka with Helm)

Source: 博客园 (cnblogs)

Author: hongdada

Link: https://www.cnblogs.com/hongdada/p/11424579.html
