In a test environment, the CoreDNS pod running on the Master node was found in CrashLoopBackOff state, which in turn broke a large number of business application pods. Its logs were full of the following error:
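A quick status check confirms which replicas are failing before digging into any single pod (a minimal sketch; the grep pattern simply assumes the default coredns naming):
# List the CoreDNS pods together with their restart counts
kubectl -n kube-system get pod -o wide | grep coredns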
# Inspect the pod's details and events
kubectl -n kube-system describe pod coredns-6dc69b487c-svxxx
# Tail the CoreDNS pod logs
kubectl -n kube-system logs -f --tail=200 coredns-6dc69b487c-svxxx
# Log output
Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
The log shows that the pod cannot establish a connection to 10.96.0.1: the request is rejected with "no route to host". So the pod IP range needs to be allowed through the firewall, but how do we find out which address ranges are involved? The usual first step is to look up the Service behind this address, because a Kubernetes cluster uses two separate ranges, the Service (ClusterIP) CIDR and the pod CIDR, and it is between these two that no route can be found:
kubectl get svc -A |grep 10.96.0.1
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
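So 10.96.0.1 is the ClusterIP of the kubernetes Service, i.e. the in-cluster address of the API server. To see the actual ranges configured for this cluster, you can read them from the control-plane flags (a sketch assuming a kubeadm setup, where the static pod manifests live under /etc/kubernetes/manifests):
# Service CIDR is set on the kube-apiserver
grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
# Pod CIDR is set on the kube-controller-manager
grep cluster-cidr /etc/kubernetes/manifests/kube-controller-manager.yaml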
Solution
The guess is that iptables is not letting the traffic through. Since firewalld had already been disabled before deployment, the next attempt is to flush iptables and open up the connection.
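Before wiping the rules, it can be worth confirming that a REJECT rule really is the cause: an iptables REJECT with icmp-host-prohibited is exactly what surfaces as "no route to host" on the client side (a read-only check, safe to run on the node):
# Look for REJECT rules in the filter table
iptables -L INPUT -n --line-numbers | grep -i reject
iptables -L FORWARD -n --line-numbers | grep -i reject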
# Run the following to flush iptables; kube-proxy re-creates the Service rules on its next sync
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
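At this point the node should be able to reach the API server's ClusterIP again; any HTTP response, even a 401/403, proves the route works, whereas the broken state fails with "no route to host" (assumes curl is installed on the node):
# Probe the apiserver Service IP from the node
curl -k https://10.96.0.1:443/healthz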
# Restart the pods by scaling the Deployment down and back up
kubectl scale deployment coredns --replicas=0 -n kube-system
deployment.extensions/coredns scaled
kubectl scale deployment coredns --replicas=1 -n kube-system
deployment.extensions/coredns scaled
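On newer clusters (kubectl 1.15+), the scale-down/scale-up pair can be replaced by a single command that achieves the same restart without touching the replica count:
# One-step alternative to the two scale commands above
kubectl -n kube-system rollout restart deployment coredns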
# Check the pod status again; it is now back to normal
kubectl get pod -A |grep core
kube-system coredns-7dc79b4f7c-clvz4 1/1 Running
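As a final check, it is worth confirming that DNS actually resolves from inside the cluster, not just that the pod shows Running (a sketch using the standard busybox test; busybox:1.28 is chosen because nslookup in later busybox images is known to misbehave):
# Resolve the kubernetes Service from a throwaway pod
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default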