kube-proxy status: Error

Hello, I am having a hard time setting up my cloud servers. I see that kube-proxy is in an Error state and flannel is stuck partway through initialization. Any suggestions?

user@norihiro1:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS              RESTARTS   AGE
default       pi-hndtw                                            0/1     ContainerCreating   0          21h
kube-system   coredns-576cbf47c7-jfpqc                            0/1     Completed           0          2d3h
kube-system   coredns-576cbf47c7-zx2c5                            0/1     Completed           0          2d3h
kube-system   etcd-norihiro1.mylabserver.com                      1/1     Running             3          24h
kube-system   kube-apiserver-norihiro1.mylabserver.com            1/1     Running             4          24h
kube-system   kube-controller-manager-norihiro1.mylabserver.com   1/1     Running             2          24h
kube-system   kube-flannel-ds-amd64-fgzx7                         0/1     Init:0/1            0          21h
kube-system   kube-flannel-ds-amd64-kg5n7                         0/1     Init:0/1            0          21h
kube-system   kube-flannel-ds-amd64-lcr8d                         0/1     Init:0/1            0          21h
kube-system   kube-flannel-ds-amd64-w9cw4                         0/1     Init:0/1            0          21h
kube-system   kube-proxy-2mt49                                    0/1     Error               0          2d3h
kube-system   kube-proxy-2nb7z                                    0/1     Error               0          2d3h
kube-system   kube-proxy-b7wpn                                    0/1     Error               0          2d3h
kube-system   kube-proxy-s6gdt                                    0/1     Error               0          2d3h
kube-system   kube-scheduler-norihiro1.mylabserver.com            1/1     Running             3          24h


user@norihiro1:~$ kubectl logs kube-proxy-2mt49 --namespace=kube-system
W1210 02:36:05.681368       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I1210 02:36:05.692621       1 server_others.go:148] Using iptables Proxier.
W1210 02:36:05.692801       1 proxier.go:317] clusterCIDR not specified, unable to distinguish between internal and external traffic
I1210 02:36:05.692966       1 server_others.go:178] Tearing down inactive rules.
I1210 02:36:05.738350       1 server.go:447] Version: v1.12.3
I1210 02:36:05.758035       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1210 02:36:05.758176       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1210 02:36:05.758244       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1210 02:36:05.758316       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1210 02:36:05.758537       1 config.go:202] Starting service config controller
I1210 02:36:05.758580       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1210 02:36:05.758715       1 config.go:102] Starting endpoints config controller
I1210 02:36:05.758742       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1210 02:36:05.858767       1 controller_utils.go:1034] Caches are synced for service config controller
I1210 02:36:05.858980       1 controller_utils.go:1034] Caches are synced for endpoints config controller
E1210 02:42:50.418795       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=13, ErrCode=NO_ERROR, debug=""
E1210 02:42:50.419245       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=13, ErrCode=NO_ERROR, debug=""
E1210 02:42:50.420681       1 reflector.go:251] k8s.io/client-go/informers/factory.go:131: Failed to watch *v1.Service: Get https://172.31.26.129:6443/api/v1/services?resourceVersion=224&timeoutSeconds=555&watch=true: dial tcp 172.31.26.129:6443: connect: connection refused
E1210 02:42:50.436755       1 reflector.go:251] k8s.io/client-go/informers/factory.go:131: Failed to watch *v1.Endpoints: Get https://172.31.26.129:6443/api/v1/endpoints?resourceVersion=923&timeoutSeconds=515&watch=true: dial tcp 172.31.26.129:6443: connect: connection refused
E1210 02:42:51.421204       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Service: Get https://172.31.26.129:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.26.129:6443: connect: connection refused
E1210 02:42:51.437181       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Endpoints: Get https://172.31.26.129:6443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.31.26.129:6443: connect: connection refused


user@norihiro1:~$ ip a
...
    inet 172.31.26.129/20 brd 172.31.31.255 scope global eth0
...


user@norihiro1:~$ sudo netstat -ln
....
tcp6       0      0 :::6443                 :::*                    LISTEN     
...

  • Will B
    12-12-2018

    Make sure that you do this on all of your nodes (masters and workers):

    echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p

    Since kube-proxy is failing, it looks like an issue with its inability to set up iptables routing.
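
    If the sysctl -p step complains that the key does not exist, the br_netfilter kernel module is probably not loaded; you can load it and re-check the value like this:

    sudo modprobe br_netfilter
    sudo sysctl -p
    sysctl net.bridge.bridge-nf-call-iptables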

  • Norihiro T
    12-13-2018

    Hello Will, thank you for the advice. That option was indeed not set. I set it and tried again, and now it shows a slightly different message.

    Anyway, if there is any quick way to set up the cluster, please advise me. I just want to practice for the exam.

    user@norihiro1:~$ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                                READY   STATUS              RESTARTS   AGE
    default       pi-hndtw                                            0/1     ContainerCreating   0          47h
    kube-system   coredns-576cbf47c7-jfpqc                            0/1     Completed           0          3d5h
    kube-system   coredns-576cbf47c7-zx2c5                            0/1     Completed           0          3d5h
    kube-system   etcd-norihiro1.mylabserver.com                      1/1     Running             7          2d2h
    kube-system   kube-apiserver-norihiro1.mylabserver.com            1/1     Running             7          2d2h
    kube-system   kube-controller-manager-norihiro1.mylabserver.com   1/1     Running             5          2d2h
    kube-system   kube-flannel-ds-amd64-fgzx7                         0/1     Init:0/1            0          47h
    kube-system   kube-flannel-ds-amd64-kg5n7                         0/1     Init:0/1            0          47h
    kube-system   kube-flannel-ds-amd64-lcr8d                         0/1     Init:0/1            0          47h
    kube-system   kube-flannel-ds-amd64-w9cw4                         0/1     Init:0/1            0          47h
    kube-system   kube-proxy-2mt49                                    0/1     Error               0          3d5h
    kube-system   kube-proxy-2nb7z                                    0/1     Error               0          3d5h
    kube-system   kube-proxy-b7wpn                                    0/1     Error               0          3d5h
    kube-system   kube-proxy-s6gdt                                    0/1     Error               0          3d5h
    kube-system   kube-scheduler-norihiro1.mylabserver.com            1/1     Running             6          2d2h
    user@norihiro1:~$ kubectl logs kube-proxy-2mt49 --namespace=kube-system
    W1210 02:36:05.681368       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
    I1210 02:36:05.692621       1 server_others.go:148] Using iptables Proxier.
    W1210 02:36:05.692801       1 proxier.go:317] clusterCIDR not specified, unable to distinguish between internal and external traffic
    I1210 02:36:05.692966       1 server_others.go:178] Tearing down inactive rules.
    I1210 02:36:05.738350       1 server.go:447] Version: v1.12.3
    I1210 02:36:05.758035       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
    I1210 02:36:05.758176       1 conntrack.go:52] Setting nf_conntrack_max to 131072
    I1210 02:36:05.758244       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
    I1210 02:36:05.758316       1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
    I1210 02:36:05.758537       1 config.go:202] Starting service config controller
    I1210 02:36:05.758580       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
    I1210 02:36:05.758715       1 config.go:102] Starting endpoints config controller
    I1210 02:36:05.758742       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
    I1210 02:36:05.858767       1 controller_utils.go:1034] Caches are synced for service config controller
    I1210 02:36:05.858980       1 controller_utils.go:1034] Caches are synced for endpoints config controller
    E1210 02:42:50.418795       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=13, ErrCode=NO_ERROR, debug=""
    E1210 02:42:50.419245       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=13, ErrCode=NO_ERROR, debug=""
    E1210 02:42:50.420681       1 reflector.go:251] k8s.io/client-go/informers/factory.go:131: Failed to watch *v1.Service: Get https://172.31.26.129:6443/api/v1/services?resourceVersion=224&timeoutSeconds=555&watch=true: dial tcp 172.31.26.129:6443: connect: connection refused
    E1210 02:42:50.436755       1 reflector.go:251] k8s.io/client-go/informers/factory.go:131: Failed to watch *v1.Endpoints: Get https://172.31.26.129:6443/api/v1/endpoints?resourceVersion=923&timeoutSeconds=515&watch=true: dial tcp 172.31.26.129:6443: connect: connection refused
    E1210 02:42:51.421204       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Service: Get https://172.31.26.129:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.26.129:6443: connect: connection refused
    E1210 02:42:51.437181       1 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Endpoints: Get https://172.31.26.129:6443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.31.26.129:6443: connect: connection refused
    user@norihiro1:~$ kubectl logs kube-flannel-ds-amd64-fgzx7 --namespace=kube-system
    Error from server (BadRequest): container "kube-flannel" in pod "kube-flannel-ds-amd64-fgzx7" is waiting to start: PodInitializing
    user@norihiro1:~$ 
    user@norihiro1:~$ ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
        link/ether 06:68:7b:4a:44:14 brd ff:ff:ff:ff:ff:ff
        inet 172.31.26.129/20 brd 172.31.31.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::468:7bff:fe4a:4414/64 scope link 
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:5f:83:61:e6 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 scope global docker0
           valid_lft forever preferred_lft forever

  • Will B
    12-13-2018

    Hmm, looks like it might be having trouble contacting the master. Are you sure that docker and kubelet are running on all your nodes, including the master?
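
    A quick way to check on each node (the service names here assume a standard kubeadm install with Docker):

    sudo systemctl status docker kubelet
    # If either one is stopped, start and enable it:
    sudo systemctl enable --now docker kubelet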


    If you don't have anything important in the cluster, it might help to just do kubeadm reset on each node and then re-do the kubeadm init and kubeadm join commands.
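
    For reference, a rough reset-and-rebuild sequence looks something like the following; the pod CIDR and flannel manifest URL assume the default flannel setup, and the token/hash are placeholders taken from your own kubeadm init output:

    # On every node (master and workers); this wipes the existing cluster state:
    sudo kubeadm reset

    # On the master only; 10.244.0.0/16 is flannel's default pod network:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Still on the master, refresh your kubeconfig and re-apply the flannel manifest:
    mkdir -p $HOME/.kube
    sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    # On each worker, run the join command that kubeadm init printed:
    sudo kubeadm join 172.31.26.129:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>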
