Kubernetes the Hard Way / pod creation issue


I'm following the Kubernetes the Hard Way course. All the steps work fine for me:


weave-net-ls6pb   2/2   Running   0   16m
weave-net-qk9dh   2/2   Running   0   16m


But when I try to run the nginx example, I have the following issue:

root@master2:~# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
nginx-75db9dbd68-5f6bs   0/1     ContainerCreating   0          17s
nginx-75db9dbd68-h4xhf   0/1     ContainerCreating   0          17s

When I do kubectl describe, I get this:

Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-9pmzp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9pmzp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From               Message
  ----    ------                 ----  ----               -------
  Normal  Scheduled              1m    default-scheduler  Successfully assigned nginx-75db9dbd68-h4xhf to worker2
  Normal  SuccessfulMountVolume  1m    kubelet, worker2   MountVolume.SetUp succeeded for volume "default-token-9pmzp"

Any idea why it couldn't start the nginx pods? The weave pods are created successfully.

Thank you
  • Will B
    10-19-2018

    Did you give it a minute or two to pull images? Sometimes it takes a moment to pull the image the first time an image is run on a particular worker. You might check again after a few moments to see if the pods start up. It looks to me like it was just in the process of creating the containers. Let me know if it's still not working after a minute or two!
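A quick way to see whether a slow image pull is the holdup is to watch the pod status and the event stream, which reports Pulling/Pulled (or failure) events as they happen:

```shell
# Watch pod status change in real time (Ctrl-C to stop)
kubectl get pods -w

# Or inspect the event stream for image pull / sandbox events
kubectl get events --sort-by=.metadata.creationTimestamp
```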

  • Samy Z
    10-19-2018

    Hi Will,

    Yes, I did the setup twice from scratch and I have the same problem!

    What I don't understand is that the weave pods are working fine, but not the nginx or any other application pods!




  • Samy Z
    10-19-2018

    When I ran kubectl describe pods nginx-586d68bd9-qp96b this morning, I got a different error:


    Events:
      Type     Reason                  Age                  From              Message
      ----     ------                  ----                 ----              -------
      Warning  FailedCreatePodSandBox  1m (x3307 over 12h)  kubelet, worker1  Failed create pod sandbox: rpc error: code = Unknown desc = failed to reserve sandbox name "nginx-586d68bd9-qp96b_default_bec7eeda-d35d-11e8-8abe-005056011ee6_0": name "nginx-586d68bd9-qp96b_default_bec7eeda-d35d-11e8-8abe-005056011ee6_0" is reserved for "6796c563d949cec54e218610c7df7a15b1661ab23e9b1c8dcaf8a659c98a1a0f"
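    This "name ... is reserved for ..." error usually means a stale pod sandbox container is stuck on that worker, so the kubelet can never reuse the name. Assuming Docker is the container runtime on the workers, one way to clear it is roughly:

    ```shell
    # Run on the affected worker (worker1 here).
    # Assumes Docker is the container runtime; adjust for containerd/CRI-O.

    # Find any leftover sandbox/containers for the stuck pod
    docker ps -a | grep nginx-586d68bd9-qp96b

    # Force-remove the container ID named in the error message
    docker rm -f 6796c563d949cec54e218610c7df7a15b1661ab23e9b1c8dcaf8a659c98a1a0f

    # Restart the kubelet so it recreates the sandbox cleanly
    sudo systemctl restart kubelet
    ```

    After that, deleting the stuck pod (kubectl delete pod nginx-586d68bd9-qp96b) lets the Deployment create a fresh one.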


  • Samy Z
    10-22-2018

    Hi @wboyd, any suggestions for me?

  • Samy Z
    10-22-2018

    With kubectl --v=8 logs nginx-586d68bd9-qp96b I got this:
    I1022 11:31:53.200588   14401 round_trippers.go:383] GET http://localhost:8080/api/v1/namespaces/default/pods/nginx-586d68bd9-qp96b/log
    I1022 11:31:53.200715 14401 round_trippers.go:390] Request Headers:
    I1022 11:31:53.200775 14401 round_trippers.go:393] Accept: application/json, */*
    I1022 11:31:53.200815 14401 round_trippers.go:393] User-Agent: kubectl/v1.10.2 (linux/amd64) kubernetes/81753b1
    I1022 11:31:53.208802 14401 round_trippers.go:408] Response Status: 400 Bad Request in 7 milliseconds
    I1022 11:31:53.208817 14401 round_trippers.go:411] Response Headers:
    I1022 11:31:53.208821 14401 round_trippers.go:414] Content-Type: application/json
    I1022 11:31:53.208827 14401 round_trippers.go:414] Date: Mon, 22 Oct 2018 15:31:53 GMT
    I1022 11:31:53.208832 14401 round_trippers.go:414] Content-Length: 209
    I1022 11:31:53.208869 14401 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"container \"my-nginx\" in pod \"nginx-586d68bd9-qp96b\" is waiting to start: ContainerCreating","reason":"BadRequest","code":400}
    I1022 11:31:53.209045 14401 helpers.go:201] server response object: [{
    "metadata": {},
    "status": "Failure",
    "message": "container \"my-nginx\" in pod \"nginx-586d68bd9-qp96b\" is waiting to start: ContainerCreating",
    "reason": "BadRequest",
    "code": 400
    }]
    F1022 11:31:53.209065 14401 helpers.go:119] Error from server (BadRequest): container "my-nginx" in pod "nginx-586d68bd9-qp96b" is waiting to start: ContainerCreating
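    That 400 response is actually expected: a container still in ContainerCreating has never started, so there are no container logs for the API server to return. At this stage the useful logs live on the worker itself, e.g. (assuming the kubelet runs as a systemd unit, as in this course):

    ```shell
    # Run on the worker the pod was scheduled to (worker2 here)

    # Kubelet messages about sandbox/CNI failures
    sudo journalctl -u kubelet --since "1 hour ago" | grep -iE 'sandbox|cni'

    # What the container runtime actually created (assumes Docker)
    sudo docker ps -a | head
    ```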


  • Samy Z
    10-22-2018

    More details:

    root@master2:~# kubectl get pod -o wide -n kube-system
    NAME              READY   STATUS    RESTARTS   AGE   IP               NODE
    weave-net-48w6j   2/2     Running   0          3d    10.221.109.143   worker2
    weave-net-4mk4k   2/2     Running   0          3d    10.221.109.142   worker1
    root@master2:~# kubectl get pod -o wide
    NAME                                READY   STATUS              RESTARTS   AGE   IP       NODE
    nginx-deployment-666865b5dd-4n799   0/1     ContainerCreating   0          4h    <none>   worker2


  • Samy Z
    10-23-2018

    Need help!


  • Ell M
    10-25-2018

    Could we take a look at the error logs? 


    kubectl logs ${POD_NAME} ${CONTAINER_NAME}

    If the container crashed previously, you can get logs from the prior instance with:


     kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}


  • Chad C
    10-25-2018

    Hey @samy @samyzemmouri,

    I think you have not met your minimum replicas available. Describe your deployments and check whether the Available condition is True:

    kubectl describe deployments

    The output will look like this if you have not met the minimum:
    Name:                   nginx
    Namespace:              default
    CreationTimestamp:      Thu, 25 Oct 2018 18:29:23 +0000
    Labels:                 <none>
    Annotations:            deployment.kubernetes.io/revision: 1
                            kubectl.kubernetes.io/last-applied-configuration:
                              {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"replicas":2,"selec...
    Selector:               run=nginx
    Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  run=nginx
      Containers:
       my-nginx:
        Image:        nginx
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type         Status  Reason
      ----         ------  ------
      Available    False   MinimumReplicasAvailable
      Progressing  True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   nginx-bcc4746c8 (2/2 replicas created)
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  62s   deployment-controller  Scaled up replica set nginx-bcc4746c8 to 2

    or maybe you haven't enabled IP forwarding on your nodes?
    sudo sysctl net.ipv4.conf.all.forwarding=1
    echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
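    To verify forwarding is actually on (the live kernel value, not just the config file), check it directly; 1 means enabled:

    ```shell
    # Current runtime values; both should print 1 on the worker nodes
    sysctl net.ipv4.conf.all.forwarding
    cat /proc/sys/net/ipv4/ip_forward
    ```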

    Or maybe you need to roll back your deployment (note that rollout undo takes the resource type):
    kubectl rollout undo deployment/nginx

    See this link for more about failed deployments:

    https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#failed-deployment

  • Chad C
    10-25-2018

    Or, if you'd like to wipe everything out and try again, here's what I did to achieve a successful deployment from start to finish (I just did this minutes ago):
    on k8s-master:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    on k8s-master:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    on k8s-master:
    kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
    on k8s-node1 (yours will be a different join command):
    sudo kubeadm join --token cd28a6.4cce7d9e2b012567 172.31.122.94:6443 --discovery-token-ca-cert-hash sha256:4db356ab25f3b07c9533d7799c5750e8c9debd486e48c41746dd7fa8f2437aae
    on k8s-node2 (yours will be a different join command):
    sudo kubeadm join --token cd28a6.4cce7d9e2b012567 172.31.122.94:6443 --discovery-token-ca-cert-hash sha256:4db356ab25f3b07c9533d7799c5750e8c9debd486e48c41746dd7fa8f2437aae
    on k8s-node1:
    sudo sysctl net.ipv4.conf.all.forwarding=1
    on k8s-node2:
    sudo sysctl net.ipv4.conf.all.forwarding=1
    on k8s-node1:
    echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
    on k8s-node2:
    echo "net.ipv4.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
    on k8s-master:
    cat << EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      selector:
        matchLabels:
          run: nginx
      replicas: 2
      template:
        metadata:
          labels:
            run: nginx
        spec:
          containers:
          - name: my-nginx
            image: nginx
            ports:
            - containerPort: 80
    EOF

    on k8s-master:
    kubectl expose deployment/nginx
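    To confirm the result, something like the following should show the pods Running and the service reachable (the service name nginx follows from the expose command above):

    ```shell
    # Pods should reach Running on the workers
    kubectl get pods -o wide

    # The expose command creates a ClusterIP service named nginx
    kubectl get svc nginx

    # From a node, curl the service's cluster IP on port 80
    curl -s "http://$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}'):80" | head
    ```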
