Kubernetes is an open-source container-orchestration system for automating deployment, scaling and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
In this post, I will show you how to set up a Kubernetes cluster with one master and two slave nodes on virtual machines.
Environment
CentOS Linux release 7.6.1810 (64-bit)
Docker version 18.06.1-ce (Git commit: e68fc7a)
Kubernetes version: v1.13.2
Note that each machine needs a minimum of 2 available CPUs.
Hostname     IP            Role
k8s-master   172.16.3.136  k8s master node
k8s-node-1   172.16.3.140  k8s slave node
k8s-node-2   172.16.3.139  k8s slave node
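Before going further, it is worth confirming that each VM really exposes at least 2 CPUs, since kubeadm's preflight checks may refuse to continue otherwise. A quick check on each machine:

nproc
# or
grep -c ^processor /proc/cpuinfo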
Prerequisites
Set up the network on CentOS 7 with nmtui
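After configuring the interface in nmtui, you can sanity-check the result before moving on; a minimal check (interface names such as ens33 will differ per VM):

nmcli connection show   # the connection should be active
ip addr show            # verify the static IP
ip route                # verify the default gateway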
Back up the default yum source
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
Download CentOS-Base.repo into /etc/yum.repos.d/
CentOS 5:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-5.repo
CentOS 6:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
CentOS 7:
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
Update source
yum clean all && yum makecache && yum update
Stop and disable the firewall on all nodes
systemctl stop firewalld
systemctl disable firewalld
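Stopping firewalld is the simplest option for a lab cluster. If you would rather keep it running, a hedged sketch of opening the ports Kubernetes typically needs instead (port list per the Kubernetes documentation; adjust for your own add-ons):

# on the master
firewall-cmd --permanent --add-port=6443/tcp         # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd client/peer
firewall-cmd --permanent --add-port=10250-10252/tcp  # kubelet, scheduler, controller-manager
# on the slave nodes
firewall-cmd --permanent --add-port=10250/tcp        # kubelet
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort services
# flannel VXLAN traffic (all nodes)
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload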
Disable SELINUX on all nodes
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Disable swap on all nodes
swapoff -a
Comment out the swap entry in the /etc/fstab file and verify with free -m.
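A sketch of doing this non-interactively instead of editing /etc/fstab by hand (assumes the swap entry contains the word "swap"):

# comment out the swap entry so it is not re-enabled after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab
# verify: the Swap row should show 0 total
free -m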
Set the hostname on each node
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2
Add the following content to the /etc/hosts file on each node
172.16.3.136 k8s-master
172.16.3.140 k8s-node-1
172.16.3.139 k8s-node-2
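It does no harm to verify that each node can now resolve and reach the others by name, for example:

ping -c 1 k8s-master
ping -c 1 k8s-node-1
ping -c 1 k8s-node-2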
Create the /etc/sysctl.d/k8s.conf file with the following content
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
Apply the changes
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
Load the IPVS kernel modules required by kube-proxy on the slave nodes
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Install the recommended IPVS management tools
yum -y install ipset ipvsadm
Set up Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
Check the latest available Docker versions
yum list docker-ce.x86_64 --showduplicates | sort -r
docker-ce.x86_64    3:18.09.1-3.el7            docker-ce-stable
docker-ce.x86_64    3:18.09.0-3.el7            docker-ce-stable
docker-ce.x86_64    18.06.1.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.06.0.ce-3.el7           docker-ce-stable
docker-ce.x86_64    18.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    18.03.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.12.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.12.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.09.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.09.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.06.0.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.3.ce-1.el7           docker-ce-stable
docker-ce.x86_64    17.03.2.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.1.ce-1.el7.centos    docker-ce-stable
docker-ce.x86_64    17.03.0.ce-1.el7.centos    docker-ce-stable
Note that Kubernetes 1.12 requires Docker version 1.11.1 or later.
Install and launch Docker on each node
yum install -y --setopt=obsoletes=0 \
  docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker
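Before continuing, it can be useful to confirm which Docker version is actually running and which cgroup driver it uses (some kubeadm versions warn if the kubelet and Docker disagree on the driver); a quick check:

docker version --format '{{.Server.Version}}'
docker info | grep -i 'cgroup driver'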
Check that the FORWARD chain policy in the iptables filter table is ACCEPT
iptables -nvL
Chain INPUT (policy ACCEPT 19 packets, 1444 bytes)
 pkts bytes target     prot opt in     out     source               destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0
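If the FORWARD chain policy shows DROP instead of ACCEPT (newer Docker releases can set it to DROP when they manage iptables), Pod-to-Pod traffic across nodes will be silently dropped. A hedged workaround is to switch the policy back; note this does not survive a reboot unless you persist it (for example in a systemd unit or rc.local):

iptables -P FORWARD ACCEPT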
Setup Kubernetes
Install kubelet, kubectl and kubeadm on each node
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Note: check network access with curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64; if you cannot reach it, you may need a VPN.
Installation
yum makecache fast
yum install -y kubelet kubeadm kubectl
...
Installed:
  kubeadm.x86_64 0:1.13.2-0    kubectl.x86_64 0:1.13.2-0    kubelet.x86_64 0:1.13.2-0
Dependency Installed:
  cri-tools.x86_64 0:1.12.0-0    kubernetes-cni.x86_64 0:0.6.0-0    socat.x86_64 0:1.7.3.2-2.el7
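The repository always serves the newest packages, so the versions you get may drift from those shown here. If you want the installation to be reproducible, a hedged alternative is to pin the versions shown above (the -0 release suffix matches the packages in this repo):

yum install -y kubelet-1.13.2-0 kubeadm-1.13.2-0 kubectl-1.13.2-0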
Download the Docker images (pulled from mirror registries and retagged as k8s.gcr.io, since the official registry may be unreachable)
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.13.1         fdb321fd30a0   5 weeks ago     80.2MB
k8s.gcr.io/kube-scheduler            v1.13.1         ab81d7360408   5 weeks ago     79.6MB
k8s.gcr.io/kube-apiserver            v1.13.1         40a63db91ef8   5 weeks ago     181MB
k8s.gcr.io/kube-controller-manager   v1.13.1         26e6f1db2a52   5 weeks ago     146MB
k8s.gcr.io/coredns                   1.2.6           f59dcacceff4   2 months ago    40MB
k8s.gcr.io/etcd                      3.2.24          3cab8e1b9802   4 months ago    220MB
quay.io/coreos/flannel               v0.10.0-amd64   f0fad859c909   12 months ago   44.6MB
k8s.gcr.io/pause                     3.1             da86e6ba6ca1   13 months ago   742kB
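The pull/tag/rmi sequence above is repetitive; the same work can be scripted as a loop. A sketch using the exact image list and versions from this post:

#!/bin/bash
# Pull the control-plane images from a mirror registry, retag them as
# k8s.gcr.io so kubeadm finds them, then drop the mirror tags.
KUBE_VERSION=v1.13.1
images=(
  kube-apiserver:${KUBE_VERSION}
  kube-controller-manager:${KUBE_VERSION}
  kube-scheduler:${KUBE_VERSION}
  kube-proxy:${KUBE_VERSION}
  pause:3.1
  etcd:3.2.24
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi  "mirrorgooglecontainers/${img}"
done
# coredns and flannel come from different repositories, so handle them as before
docker pull coredns/coredns:1.2.6
docker tag  coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker rmi  coredns/coredns:1.2.6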
Initialize the Kubernetes cluster on the master node; see also the official kubeadm init documentation.
# auto start on boot
systemctl enable kubelet && systemctl start kubelet

kubeadm init \
  --kubernetes-version=v1.13.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=172.16.3.136
--kubernetes-version
Choose a specific Kubernetes version for the control plane.
--apiserver-advertise-address
The IP address the API Server will advertise it's listening on. Specify '0.0.0.0' to use the address of the default network interface.
--pod-network-cidr
Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
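The same settings can also be written into a configuration file and passed with kubeadm init --config, which is easier to keep in version control. A minimal sketch using the v1beta1 kubeadm API that shipped with 1.13 (field names are worth double-checking against the kubeadm documentation for your exact version):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.3.136
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml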
Detailed cluster initialization process
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.3.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.3.136 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.16.3.136 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.507237 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: p9e6q0.e55sga7ow6k05gpt
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.16.3.136:6443 --token p9e6q0.e55sga7ow6k05gpt --discovery-token-ca-cert-hash sha256:6d63ecaf8af2f27179d10535e9547cb089b70d0de6a3c9dac59181f716d49b87
Configure kubectl on the master node
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
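Exporting KUBECONFIG system-wide works fine when you administer the cluster as root. Alternatively, the init output above suggests the per-user setup, which is what you would use for a regular account:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config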
Install the Pod network
Download the Pod network config file kube-flannel.yml and create the Pod network with the following commands; see the Flannel documentation for reference.
mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
If this succeeds, check that the CoreDNS Pods are in the Running state
kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-fvzh6             1/1     Running   0          7m29s   10.244.0.2     k8s-master   <none>           <none>
kube-system   coredns-86c58d9df4-jcjgn             1/1     Running   0          7m29s   10.244.0.3     k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          6m39s   172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          6m43s   172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          6m29s   172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-jjqbx          1/1     Running   0          16s     172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-proxy-mlnkt                     1/1     Running   0          7m28s   172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          6m30s   172.16.3.136   k8s-master   <none>           <none>
If the kube-flannel-ds Pod gets stuck in Init:0/1, running kubeadm upgrade apply v1.13.2 can solve it.
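Before reaching for an upgrade, it is usually worth seeing what the Pod is actually blocked on; a couple of generic diagnostics (the Pod name below is just the one from the earlier output, substitute your own):

kubectl -n kube-system describe pod kube-flannel-ds-amd64-jjqbx
kubectl -n kube-system get events --sort-by=.metadata.creationTimestamp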
Check that the Kubernetes master node is Ready
kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   21m   v1.13.2
Slave node configuration
Add the slave nodes by running the following command on each slave node
systemctl enable kubelet.service
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

kubeadm join --token p9e6q0.e55sga7ow6k05gpt 172.16.3.136:6443 --discovery-token-ca-cert-hash sha256:6d63ecaf8af2f27179d10535e9547cb089b70d0de6a3c9dac59181f716d49b87
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "172.16.3.136:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.16.3.136:6443"
[discovery] Requesting info from "https://172.16.3.136:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.16.3.136:6443"
[discovery] Successfully established connection with API Server "172.16.3.136:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node-2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
If you forget the token, run the following command on the master node
kubeadm token list
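Tokens expire after 24 hours by default, so the original one may no longer be listed. In that case a new token, together with the full join command, can be generated in one step:

kubeadm token create --print-join-command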
Status validation
Run the following commands on the master node
kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   13m    v1.13.2
k8s-node-1   Ready    <none>   107s   v1.13.2
k8s-node-2   Ready    <none>   91s    v1.13.2
Check Pod status
kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-fvzh6             1/1     Running   0          14m     10.244.0.2     k8s-master   <none>           <none>
kube-system   coredns-86c58d9df4-jcjgn             1/1     Running   0          14m     10.244.0.3     k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          13m     172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          13m     172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          13m     172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-fl8k6          1/1     Running   0          3m14s   172.16.3.139   k8s-node-2   <none>           <none>
kube-system   kube-flannel-ds-amd64-jjqbx          1/1     Running   0          7m24s   172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-sh79b          1/1     Running   0          3m30s   172.16.3.140   k8s-node-1   <none>           <none>
kube-system   kube-proxy-mlnkt                     1/1     Running   0          14m     172.16.3.136   k8s-master   <none>           <none>
kube-system   kube-proxy-pf7fp                     1/1     Running   0          3m14s   172.16.3.139   k8s-node-2   <none>           <none>
kube-system   kube-proxy-t54fr                     1/1     Running   0          3m30s   172.16.3.140   k8s-node-1   <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          13m     172.16.3.136   k8s-master   <none>           <none>
Remove a node from the cluster
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
After removing the node from the cluster, reset the cluster on the master with the following command
kubeadm reset
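kubeadm reset does not clean up iptables or IPVS rules. If you intend to re-initialize the cluster afterwards, a hedged cleanup sketch for the node that was reset:

# flush the rules left behind by kube-proxy and Docker; they are re-created on the next init/join
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear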
We have now installed a Kubernetes cluster with one master and two slave nodes on virtual machines. For more information about Kubernetes, please see the official documentation.
Upgrade Kubernetes
kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.13.2
[upgrade/versions] Latest stable version: v1.13.4
[upgrade/versions] Latest version in the v1.13 series: v1.13.4

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     3 x v1.13.2   v1.13.4

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.1   v1.13.4
Controller Manager   v1.13.1   v1.13.4
Scheduler            v1.13.1   v1.13.4
Kube Proxy           v1.13.1   v1.13.4
CoreDNS              1.2.6     1.2.6
Etcd                 3.2.24    3.2.24

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.13.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.13.4.

_____________________________________________________________________
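As the plan output notes, kubeadm itself has to be updated to the target version before the upgrade can be applied cleanly (below it is forced instead). A sketch using the same yum repository (the -0 release suffix is assumed):

yum install -y kubeadm-1.13.4-0
kubeadm version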
Pull images on each node
wget https://raw.githubusercontent.com/openthings/kubernetes-tools/master/kubeadm/2-images/kubernetes-pull-aliyun-1.13.4.sh
sh kubernetes-pull-aliyun-1.13.4.sh

kubeadm upgrade apply v1.13.4 --force
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.13.4"
[upgrade/versions] Cluster version: v1.13.1
[upgrade/versions] kubeadm version: v1.13.2
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set:
    - Specified version to upgrade to "v1.13.4" is higher than the kubeadm version "v1.13.2". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.13.4"...
Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b
Static pod: kube-controller-manager-k8s-master hash: d4ff37ee76fe761a28f11175fd1c384e
Static pod: kube-scheduler-k8s-master hash: 44b569a35761491825f4e7253fbf0543
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests313173818"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-19-22-42-59/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b
Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b
Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b
Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b
Static pod: kube-apiserver-k8s-master hash: 85f4ac2525b7cadd9404226e752c8d3b
Static pod: kube-apiserver-k8s-master hash: 430c0fb23b278ceaadfb9440eb5667d0
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-19-22-42-59/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: d4ff37ee76fe761a28f11175fd1c384e
Static pod: kube-controller-manager-k8s-master hash: b6ca67226d47ac720e105375a9846904
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-03-19-22-42-59/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: 44b569a35761491825f4e7253fbf0543
Static pod: kube-scheduler-k8s-master hash: 4b52d75cab61380f07c0c5a69fb371d4
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.13.4". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
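As the last log line says, the control-plane upgrade does not touch the kubelets. To finish, upgrade the kubelet (and kubectl) packages on every node; a sketch, again assuming the -0 package release from the same repo:

yum install -y kubelet-1.13.4-0 kubectl-1.13.4-0
systemctl daemon-reload
systemctl restart kubelet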