
Installing Kubernetes 1.13 with kubeadm

青蛙小白 K8S中文社区 2019-12-18


kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release and adjusts its cluster-configuration practices along the way, so experimenting with kubeadm is a good way to learn the Kubernetes project's latest best practices for cluster configuration.

In the recently released Kubernetes 1.13, kubeadm's core features have reached GA, although high availability is not yet covered. Still, this shows that kubeadm is getting ever closer to being usable in production.

Our production Kubernetes clusters are highly available clusters deployed from binaries with Ansible. The point of trying out kubeadm in Kubernetes 1.13 here is to follow the upstream best practices for cluster initialization and configuration, and to further improve our Ansible deployment scripts.

1. Preparation

1.1 System configuration

Before installing, some preparation is needed. The two CentOS 7.4 hosts are:

cat /etc/hosts
192.168.61.11 node1
192.168.61.12 node2

If a firewall is enabled on the hosts, the ports required by the various Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, we disable the firewall on each node here:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

setenforce 0
vi /etc/selinux/config
SELINUX=disabled

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to make the changes take effect:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

1.2 Prerequisites for enabling IPVS in kube-proxy

Since IPVS has already been merged into the mainline kernel, the prerequisite for enabling IPVS mode in kube-proxy is to load the following kernel modules:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes, node1 and node2:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on each node (yum install ipset). To make it easy to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm (yum install ipvsadm), as shown below.
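For example, run the following once on every node:

yum install -y ipset ipvsadm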

If these prerequisites are not met, kube-proxy will fall back to iptables mode even though IPVS mode is enabled in its configuration.

1.3 Installing Docker

Since 1.6, Kubernetes has used the CRI (Container Runtime Interface). The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.

Add the Docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
   --add-repo \
   https://download.docker.com/linux/centos/docker-ce.repo

List the available Docker versions:

yum list docker-ce.x86_64  --showduplicates |sort -r
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable

Kubernetes 1.12 was validated against Docker versions 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09 and 18.06; note that the minimum Docker version supported by Kubernetes 1.12 is 1.11.1. Kubernetes 1.13 does not change the Docker version requirements. Here we install Docker 18.06.1 on each node:

yum makecache fast

yum install -y --setopt=obsoletes=0 \
  docker-ce-18.06.1.ce-3.el7

systemctl start docker
systemctl enable docker

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:

iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target     prot opt in     out     source               destination
   0     0 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   0     0 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
   0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
   0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
   0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Starting with version 1.13, Docker changed its default firewall rules and set the FORWARD chain of the iptables filter table to DROP, which breaks Pod-to-Pod communication across nodes in a Kubernetes cluster. With Docker 18.06 installed here, however, the default policy turns out to be ACCEPT again; it is unclear in which version this was changed back, because the 17.06 release we use in production still needs this policy adjusted manually.
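For older Docker releases (such as the 17.06 we run in production) that still leave the FORWARD policy at DROP, a minimal workaround is to reset the policy by hand after Docker starts, for example:

iptables -P FORWARD ACCEPT

To survive reboots this has to be persisted somewhere that runs after the docker service (for instance an ExecStartPost drop-in for docker.service, or rc.local); the exact mechanism is a site-specific choice.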

2. Deploying Kubernetes with kubeadm

2.1 Installing kubeadm and kubelet

Install kubeadm and kubelet on each node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy or some other way around the network restrictions.

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
yum makecache fast
yum install -y kubelet kubeadm kubectl

...
Installed:
  kubeadm.x86_64 0:1.13.0-0        kubectl.x86_64 0:1.13.0-0        kubelet.x86_64 0:1.13.0-0

Dependency Installed:
  cri-tools.x86_64 0:1.12.0-0      kubernetes-cni.x86_64 0:0.6.0-0  socat.x86_64 0:1.7.3.2-2.el7

The install output shows that three dependencies were pulled in as well: cri-tools, kubernetes-cni and socat.

  • The CNI dependency was bumped to 0.6.0 back in Kubernetes 1.9, and as the output above shows it is still 0.6.0 here

  • socat is a dependency of kubelet

  • cri-tools is the command-line tooling for the CRI (Container Runtime Interface)

Running kubelet --help shows that most of the kubelet's original command-line flags are now DEPRECATED, for example:

......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......

Instead, the official recommendation is to pass a configuration file with --config and to put what those flags used to configure into that file; see "Set Kubelet parameters via a config file" for details. Kubernetes does this to support Dynamic Kubelet Configuration; see "Reconfigure a Node's Kubelet in a Live Cluster".

The kubelet configuration file must be in JSON or YAML format; see the documentation referenced above for the details.

Since Kubernetes 1.8, swap must be disabled on the system; if it is not, the kubelet will fail to start with the default configuration.

To disable swap:

Run swapoff -a, edit /etc/fstab to comment out the swap mount so it is not remounted automatically, and confirm with free -m that swap is off. Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0, then run sysctl -p /etc/sysctl.d/k8s.conf to apply it. These steps are pulled together in the sketch below.
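A rough sketch for a node where swap really can be turned off (the sed expression for commenting out the swap entry in /etc/fstab is an assumption — check it against your actual fstab before using it):

swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab       # comment out the swap mount
echo 'vm.swappiness=0' >> /etc/sysctl.d/k8s.conf
sysctl -p /etc/sysctl.d/k8s.conf
free -m                                      # confirm swap shows 0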

Because the two hosts used for this test also run other services, disabling swap could affect them, so instead we adjust the kubelet configuration to remove this restriction. In earlier Kubernetes versions we removed the restriction with the kubelet startup flag --fail-swap-on=false. As analyzed above, Kubernetes no longer recommends startup flags and recommends the configuration file instead, so here we first try switching to the configuration-file approach.

Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, we find the following:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

This shows that the kubelet deployed by kubeadm uses the configuration file --config=/var/lib/kubelet/config.yaml, but checking the host reveals that neither /var/lib/kubelet nor this config.yaml exists yet. Presumably the file is generated automatically when kubeadm initializes the cluster, so if we do not disable swap, the first cluster initialization is bound to fail.

So we fall back to the kubelet startup flag --fail-swap-on=false to remove the mandatory swap-off check. Edit /etc/sysconfig/kubelet and add:

KUBELET_EXTRA_ARGS=--fail-swap-on=false
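An alternative we did not test here, sketched only as an illustration: kubeadm init can be driven by a configuration file that embeds a KubeletConfiguration, so failSwapOn could in principle be set there and end up in the generated /var/lib/kubelet/config.yaml. Something like:

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
EOF
# then: kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap

kubeadm's own preflight swap check would still have to be ignored, which is why this walkthrough simply sticks with KUBELET_EXTRA_ARGS.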

2.2 Initializing the cluster with kubeadm init

Enable the kubelet service on each node so that it starts on boot:

systemctl enable kubelet.service

Next, initialize the cluster with kubeadm. node1 is chosen as the Master Node; run the following command on node1:

kubeadm init \
  --kubernetes-version=v1.13.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11

Because we chose flannel as the Pod network add-on, the command above specifies --pod-network-cidr=10.244.0.0/16.

The command failed with the following error:

[init] using Kubernetes version: v1.13.0
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
       [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

The error message is "running with swap on is not supported. Please disable swap". Since we have decided to keep swap on and run the kubelet with swap checking disabled, we add --ignore-preflight-errors=Swap to ignore this error and run the command again:

kubeadm init \
  --kubernetes-version=v1.13.0 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.61.11 \
  --ignore-preflight-errors=Swap

[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
       [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.61.11]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.61.11 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.506551 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 702gz5.49zhotgsiyqimwqw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.61.11:6443 --token 702gz5.49zhotgsiyqimwqw --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f

The full output of the initialization is recorded above; from it you can basically see the key steps needed to install and initialize a Kubernetes cluster by hand.

The key parts are:

  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"

  • [certs] generates the various certificates and keys

  • [kubeconfig] generates the related kubeconfig files

  • [bootstraptoken] generates the token; write it down, it will be needed later when adding nodes to the cluster with kubeadm join

  • The following commands set up kubectl access to the cluster for a regular user:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Finally, it prints the command for joining nodes to the cluster: kubeadm join 192.168.61.11:6443 --token 702gz5.49zhotgsiyqimwqw --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f

Check the cluster status:

kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Confirm that all components are in the Healthy state.

If anything goes wrong during cluster initialization, the following commands can be used to clean up:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

2.3 Installing the Pod network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note that the flannel image referenced in this kube-flannel.yml is 0.10.0: quay.io/coreos/flannel:v0.10.0-amd64.

If a node has multiple network interfaces, then per flannel issue 39701 you currently need to use the --iface argument in kube-flannel.yml to specify the name of the host's internal-network interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<interface name> to the flanneld startup arguments:

......
containers:
     - name: kube-flannel
       image: quay.io/coreos/flannel:v0.10.0-amd64
       command:
       - /opt/bin/flanneld
       args:
       - --ip-masq
       - --kube-subnet-mgr
       - --iface=eth1
......

Use kubectl get pod --all-namespaces -o wide to make sure all Pods are in the Running state:

kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE    IP              NODE    NOMINATED NODE
kube-system   coredns-576cbf47c7-njt7l        1/1     Running   0          12m    10.244.0.3      node1   <none>
kube-system   coredns-576cbf47c7-vg2gd        1/1     Running   0          12m    10.244.0.2      node1   <none>
kube-system   etcd-node1                      1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-apiserver-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-controller-manager-node1   1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-flannel-ds-amd64-bxtqh     1/1     Running   0          2m     192.168.61.11   node1   <none>
kube-system   kube-proxy-fb542                1/1     Running   0          12m    192.168.61.11   node1   <none>
kube-system   kube-scheduler-node1            1/1     Running   0          12m    192.168.61.11   node1   <none>

2.4 Letting the master node run workloads

In a cluster initialized with kubeadm, Pods are not scheduled onto the Master Node for security reasons; in other words, the Master Node does not run workloads. This is because the master node node1 carries the taint node-role.kubernetes.io/master:NoSchedule:

kubectl describe node node1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Since this is a test environment, remove the taint so node1 can run workloads:

kubectl taint nodes node1 node-role.kubernetes.io/master-
node "node1" untainted

2.5 Testing DNS

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$

Once inside, run nslookup kubernetes.default to confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
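When the test is finished, the throwaway workload can be cleaned up; kubectl run created a Deployment named curl here, so for example:

kubectl delete deployment curl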

2.6 Adding nodes to the Kubernetes cluster

Now add the host node2 to the Kubernetes cluster. Since we also removed the mandatory swap-off check from the kubelet startup arguments on node2, the --ignore-preflight-errors=Swap argument is needed here as well. Run on node2:

kubeadm join 192.168.61.11:6443 --token 702gz5.49zhotgsiyqimwqw --discovery-token-ca-cert-hash sha256:2bc50229343849e8021d2aa19d9d314539b40ec7a311b5bb6ca1d3cd10957c2f \
--ignore-preflight-errors=Swap

[preflight] Running pre-flight checks
       [WARNING Swap]: running with swap on is not supported. Please disable swap
[discovery] Trying to connect to API Server "192.168.61.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.61.11:6443"
[discovery] Requesting info from "https://192.168.61.11:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.61.11:6443"
[discovery] Successfully established connection with API Server "192.168.61.11:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

node2 joined the cluster without a hitch. Run the following on the master node to list the nodes in the cluster:

kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    master   16m    v1.13.0
node2   Ready    <none>   4m5s   v1.13.0
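The bootstrap token used in the join command expires (24 hours by default). If you add more nodes after it has expired, a fresh join command can be printed on the master with:

kubeadm token create --print-join-command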

How to remove a node from the cluster: if you need to remove node2 from the cluster, run the following commands.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

On node1:

kubectl delete node node2

2.7 Enabling IPVS in kube-proxy

Edit the config.conf key of the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit cm kube-proxy -n kube-system
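Inside the editor, the config.conf data is a KubeProxyConfiguration document; the fragment to change looks roughly like this (a sketch — surrounding fields omitted), where the default empty value means iptables mode:

...
kind: KubeProxyConfiguration
mode: "ipvs"
...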

Then restart the kube-proxy Pods on each node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-pf55q                1/1     Running   0          9s
kube-proxy-qjnnc                1/1     Running   0          14s

kubectl logs kube-proxy-pf55q -n kube-system
I1208 06:12:23.516444       1 server_others.go:189] Using ipvs Proxier.
W1208 06:12:23.516738       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1208 06:12:23.516840       1 server_others.go:216] Tearing down inactive rules.
I1208 06:12:23.575222       1 server.go:464] Version: v1.13.0
I1208 06:12:23.585142       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1208 06:12:23.586203       1 config.go:202] Starting service config controller
I1208 06:12:23.586243       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1208 06:12:23.586269       1 config.go:102] Starting endpoints config controller
I1208 06:12:23.586275       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1208 06:12:23.686959       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1208 06:12:23.687056       1 controller_utils.go:1034] Caches are synced for service config controller

The log prints "Using ipvs Proxier", which confirms that IPVS mode is enabled.
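The IPVS rules themselves can also be inspected directly with the ipvsadm tool installed earlier, for example:

ipvsadm -Ln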

3. Deploying common Kubernetes components

More and more companies and teams are adopting Helm, the Kubernetes package manager, so we will also use Helm to install common Kubernetes components.

3.1 Installing Helm

Helm consists of the helm command-line client and the server-side tiller, and installing it is straightforward. Download the helm command-line tool to /usr/local/bin on the master node node1; here we use version 2.12.0:

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.0-linux-amd64.tar.gz
tar -zxvf helm-v2.12.0-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

To install the server-side tiller, this machine also needs kubectl and a kubeconfig file set up, so that kubectl can reach the apiserver and work normally. Node node1 already has kubectl configured.

Because the Kubernetes API server has RBAC access control enabled, we need to create a service account named tiller for tiller and bind an appropriate role to it; see Role-based Access Control in the Helm documentation for details. For simplicity we bind it directly to the built-in cluster-admin ClusterRole. Create rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Next, deploy tiller with helm:

helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

By default tiller is deployed into the kube-system namespace of the cluster:

kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}

Note that this step requires network access to gcr.io and kubernetes-charts.storage.googleapis.com. If they are not reachable, you can point helm at a tiller image in a private registry instead, e.g. helm init --service-account tiller --tiller-image <your-private-registry>/tiller:v2.11.0 --skip-refresh.

3.2 Deploying Nginx Ingress with Helm

To make it easy to expose services in the cluster and access them from outside, we next use Helm to deploy Nginx Ingress onto Kubernetes. The Nginx Ingress Controller is deployed on the cluster's edge nodes; for high availability of Kubernetes edge nodes, see my earlier write-up on highly available Kubernetes Ingress edge nodes on bare metal (based on IPVS).

We use both node1 (192.168.61.11) and node2 (192.168.61.12) as edge nodes and label them accordingly:

kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl label node node2 node-role.kubernetes.io/edge=
node/node2 labeled

kubectl get node
NAME    STATUS   ROLES         AGE   VERSION
node1   Ready    edge,master   24m   v1.13.0
node2   Ready    edge          11m   v1.13.0

The values file ingress-nginx.yaml for the stable/nginx-ingress chart:

controller:
  replicaCount: 2
  service:
    externalIPs:
      - 192.168.61.10
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - nginx-ingress
              - key: component
                operator: In
                values:
                  - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule

defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule

The nginx ingress controller's replicaCount is 2, so it will be scheduled onto the two edge nodes node1 and node2. The address 192.168.61.10 given in externalIPs is the VIP, which will be bound to kube-proxy's kube-ipvs0 interface.
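After the chart is installed in the next step, one way to confirm the VIP binding (assuming kube-proxy is really running in IPVS mode) is to look at the dummy interface on an edge node:

ip addr show kube-ipvs0 | grep 192.168.61.10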

helm repo update

helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx  \
-f ingress-nginx.yaml

kubectl get pod -n ingress-nginx -o wide
NAME                                             READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-85f8597fc6-g2kcx        1/1     Running   0          5m2s   10.244.1.3   node2   <none>           <none>
nginx-ingress-controller-85f8597fc6-g7pp5        1/1     Running   0          5m2s   10.244.0.5   node1   <none>           <none>
nginx-ingress-default-backend-6dc6c46dcc-7plm8   1/1     Running   0          5m2s   10.244.1.4   node2   <none>           <none>

If accessing http://192.168.61.10 returns the default backend, the deployment is complete.
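A quick check from any host that can reach the VIP looks like this (the 404 body shown is what the default backend normally returns, given here as the expected result):

curl http://192.168.61.10
default backend - 404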

In this case, however, the actual test showed that the VIP could not be reached, so kube-proxy came under suspicion. Looking at the kube-proxy logs, the following messages were being printed over and over:

I1208 07:59:28.902970       1 graceful_termination.go:160] Trying to delete rs: 10.104.110.193:80/TCP/10.244.1.5:80
I1208 07:59:28.903037       1 graceful_termination.go:170] Deleting rs: 10.104.110.193:80/TCP/10.244.1.5:80
I1208 07:59:28.903072       1 graceful_termination.go:160] Trying to delete rs: 10.104.110.193:80/TCP/10.244.0.6:80
I1208 07:59:28.903105       1 graceful_termination.go:170] Deleting rs: 10.104.110.193:80/TCP/10.244.0.6:80
I1208 07:59:28.903713       1 graceful_termination.go:160] Trying to delete rs: 192.168.61.10:80/TCP/10.244.1.5:80
I1208 07:59:28.903764       1 graceful_termination.go:170] Deleting rs: 192.168.61.10:80/TCP/10.244.1.5:80
I1208 07:59:28.903798       1 graceful_termination.go:160] Trying to delete rs: 192.168.61.10:80/TCP/10.244.0.6:80
I1208 07:59:28.903824       1 graceful_termination.go:170] Deleting rs: 192.168.61.10:80/TCP/10.244.0.6:80
I1208 07:59:28.904654       1 graceful_termination.go:160] Trying to delete rs: 10.0.2.15:31698/TCP/10.244.0.6:80
I1208 07:59:28.904837       1 graceful_termination.go:170] Deleting rs: 10.0.2.15:31698/TCP/10.244.0.6:80

This turned out to be the GitHub issue https://github.com/kubernetes/kubernetes/issues/71071: the recent change "IPVS proxier mode now supports connection based graceful termination" introduced a bug, so Kubernetes 1.11.5, 1.12.1~1.12.3 and 1.13.0 are all affected, meaning kube-proxy is effectively unusable in IPVS mode. Meanwhile, the project announced that 1.11.5, 1.12.3 and 1.13.0 fix the privilege-escalation vulnerability disclosed on December 4 (CVE-2018-1002105). If you are upgrading Kubernetes because of that vulnerability, be careful: check whether IPVS is enabled, to avoid the upgrade breaking cluster networking. Since our production version is 1.11 with IPVS already enabled, for now we have only upgraded the production master nodes to 1.11.5, while kube-proxy stays on 1.11.4.

The issue https://github.com/kubernetes/kubernetes/issues/71071 mentions that related PRs to fix this already exist, so we will have to keep an eye on the patch releases that follow 1.11.5, 1.12.3 and 1.13.0.

References

Installing kubeadm: https://kubernetes.io/docs/setup/independent/install-kubeadm/
Using kubeadm to Create a Cluster: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
Get Docker CE for CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/


Author: 青蛙小白

Original: https://blog.frognew.com/2018/12/kubeadm-install-kubernetes-1.13.html

