Quick notes from upgrading the Docker version in a k8s cluster

Quick notes from the field.
kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"93da878", GitTreeState:"clean", BuildDate:"2019-11-05T08:55:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"f774be9", GitTreeState:"clean", BuildDate:"2019-08-23T03:42:03Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
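
For the record, the Docker version on the node before and after the upgrade can be captured with:

docker version --format '{{.Server.Version}}'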

Without draining/evicting the node first, directly run:

systemctl stop kubelet

systemctl stop docker

Then upgrade the Docker version.
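
For reference, on a CentOS/RHEL node with the docker-ce yum repo already configured, the upgrade step might look roughly like this (the 19.03.15 version string is only an example, pick your actual target):

yum list docker-ce --showduplicates | sort -r        # see which versions the repo offers
yum install -y docker-ce-19.03.15 docker-ce-cli-19.03.15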

After the upgrade is done, run the following (not upgrading gives the same result: just stop the kubelet and then docker rm -f all containers; see the end-to-end sketch after these commands):

systemctl start docker

systemctl start kubelet
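
For completeness, the no-upgrade variant mentioned above would be, end to end (careful: docker rm -f here wipes every container on the node):

systemctl stop kubelet
docker rm -f $(docker ps -aq)    # force-remove all containers on the node
systemctl start kubelet

Then, from the controller, watch the node flip back to Ready:

kubectl get node tyyzt02 -w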


On the controller node, the pod status before the upgrade looked like this.

Some pods have a restart count greater than 0:

[root@tmast01 ~]# kubectl get pods -A -owide | grep tyyzt02
app           app-apollo-configservice-78c4599b4f-mclc4         1/1     Running   1          223d    10.64.100.73    tyyzt02          <none>           <none>
app           app-collector-manager-5d97bfcb5-gg6jw             1/1     Running   1          223d    10.64.100.77    tyyzt02          <none>           <none>
app           app-facade-f4cd99b4c-b5cjx                        1/1     Running   1          223d    10.64.100.72    tyyzt02          <none>           <none>
app           app-ns-collector-server-cc7dc858c-lt849           1/1     Running   2          223d    10.64.100.85    tyyzt02          <none>           <none>
app           app-ns-eureka-enhance-fdbc56676-rssj9             1/1     Running   0          78d     10.64.100.83    tyyzt02          <none>           <none>
app           app-ns-skywalking-oap-5d9675d87b-bst82            1/1     Running   0          78d     10.64.100.84    tyyzt02          <none>           <none>
app           app-query-server-6459b598d9-jnn9m                 1/1     Running   1          223d    10.64.100.76    tyyzt02          <none>           <none>
app           app-redis-cd444c98f-cwvvn                         1/1     Running   8          433d    10.64.100.70    tyyzt02          <none>           <none>
app           app-service-manager-5dd7bbd59-n5665               1/1     Running   1          223d    10.64.100.80    tyyzt02          <none>           <none>
app           app-ui-6fc8ccb7bd-94vfs                           1/1     Running   1          223d    10.64.100.82    tyyzt02          <none>           <none>
kube-system   logs-collector-8lrhq                              1/1     Running   11         325d    10.64.16.14     tyyzt02          <none>           <none>
kube-system   dswitch-agent-zqc9c                               1/1     Running   8          435d    10.64.16.14     tyyzt02          <none>           <none>
kube-system   kube-proxy-qscsh                                  1/1     Running   0          53d     10.64.16.14     tyyzt02          <none>           <none>
kube-system   node-local-dns-p472d                              1/1     Running   1          208d    10.64.16.14     tyyzt02          <none>           <none>
kube-system   smokeping-hhtrh                                   1/1     Running   9          435d    10.64.16.14     tyyzt02          <none>           <none>
[root@tmast01 ~]# 


But once the node is Ready again, you'll see:

[root@tmast01 ~]# kubectl get pods -A -owide | grep tyyzt02
app           app-apollo-configservice-78c4599b4f-mclc4         1/1     Running   0          223d    10.64.100.73    tyyzt02          <none>           <none>
app           app-collector-manager-5d97bfcb5-gg6jw             1/1     Running   0          223d    10.64.100.77    tyyzt02          <none>           <none>
app           app-facade-f4cd99b4c-b5cjx                        1/1     Running   0          223d    10.64.100.72    tyyzt02          <none>           <none>
app           app-ns-collector-server-cc7dc858c-lt849           1/1     Running   0          223d    10.64.100.85    tyyzt02          <none>           <none>
app           app-ns-eureka-enhance-fdbc56676-rssj9             1/1     Running   0          78d     10.64.100.83    tyyzt02          <none>           <none>
app           app-ns-skywalking-oap-5d9675d87b-bst82            1/1     Running   0          78d     10.64.100.84    tyyzt02          <none>           <none>
app           app-query-server-6459b598d9-jnn9m                 1/1     Running   0          223d    10.64.100.76    tyyzt02          <none>           <none>
app           app-redis-cd444c98f-cwvvn                         1/1     Running   0          433d    10.64.100.70    tyyzt02          <none>           <none>
app           app-service-manager-5dd7bbd59-n5665               1/1     Running   0          223d    10.64.100.80    tyyzt02          <none>           <none>
app           app-ui-6fc8ccb7bd-94vfs                           1/1     Running   0          223d    10.64.100.82    tyyzt02          <none>           <none>
kube-system   logs-collector-8lrhq                              1/1     Running   0          325d    10.64.16.14     tyyzt02          <none>           <none>
kube-system   dswitch-agent-zqc9c                               1/1     Running   0          435d    10.64.16.14     tyyzt02          <none>           <none>
kube-system   kube-proxy-qscsh                                  1/1     Running   0          53d     10.64.16.14     tyyzt02          <none>           <none>
kube-system   node-local-dns-p472d                              1/1     Running   0          208d    10.64.16.14     tyyzt02          <none>           <none>
kube-system   smokeping-hhtrh                                   1/1     Running   0          435d    10.64.16.14     tyyzt02          <none>           <none>


Hmm... the pod restart counts have all been reset to zero.
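
Presumably the kubelet recomputes restartCount from the containers it can still see in the runtime, and the upgrade (or the docker rm -f) removed all of them, so the counter starts over. Either way, the counter can be read directly with jsonpath instead of eyeballing the table:

kubectl get pod -n kube-system smokeping-hhtrh -o jsonpath='{.status.containerStatuses[0].restartCount}'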

[root@tmast01 ~]# kubectl describe pods -n kube-system smokeping-hhtrh
Name:                 smokeping-hhtrh
Namespace:            kube-system
Priority Class Name:  system-node-critical
Node:                 tyyzt02/10.64.16.14
Start Time:           Mon, 25 Nov 2019 15:57:24 +0800
Labels:               controller-revision-hash=5d57887bcc
                      k8s-app=smokeping
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   10.64.16.14
Controlled By:        DaemonSet/smokeping
Containers:
  smokeping:
    Container ID:  docker://80f7f82494811d96335ac32c38c273778778a1c3b05c95bbd61abe4bfe1a252e
    Image:         10.64.16.16/kube-system/smokeping:3ae4d3d
    Image ID:      docker-pullable://10.64.16.16/kube-system/smokeping@sha256:4b3b7d847afede994ee9797c98dd5080022aa0a1b0223d121b5b8a5574bf2c24
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/smokeping.sh
    State:          Running
      Started:      Tue, 02 Feb 2021 17:47:54 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     125m
      memory:  250Mi
    Requests:
      cpu:        125m
      memory:     250Mi
    Environment:  <none>
    Mounts:
      /var/lib/smokeping from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9pj8b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  data:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/smokeping
    HostPathType:  
  default-token-9pj8b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9pj8b
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type    Reason   Age    From              Message
  ----    ------   ----   ----              -------
  Normal  Pulling  8m11s  kubelet, tyyzt02  Pulling image "10.64.16.16/kube-system/smokeping:3ae4d3d"
  Normal  Pulled   7m28s  kubelet, tyyzt02  Successfully pulled image "10.64.16.16/kube-system/smokeping:3ae4d3d"
  Normal  Created  7m28s  kubelet, tyyzt02  Created container smokeping
  Normal  Started  7m28s  kubelet, tyyzt02  Started container smokeping


And the Events don't show anything particularly unusual either.
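
The same events can also be pulled without describe, sorted by time:

kubectl get events -n kube-system --field-selector involvedObject.name=smokeping-hhtrh --sort-by=.lastTimestamp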

Weird.

Just leaving a note here.


Later I ran the same test on k8s 1.18.6.

I found that the original pods were likewise brought back up by the kubelet on the node,

but the pod restart counters did not change at all (shouldn't they ideally have gone up by 1?).

And the related events likewise showed up in the pod events.
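
To make the before/after comparison less error-prone than reading the wide table, a snapshot of every counter on the node can be taken on each side of the test (note this prints only each pod's first container):

kubectl get pods -A --field-selector spec.nodeName=tyyzt02 -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[0].restartCount}{"\n"}{end}'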