
Building a Highly Available Kubernetes Cluster with kubeadm

This article was published more than 2,059 days ago; its content may be out of date.


I. Introduction

With the release of Kubernetes v1.13, kubeadm officially reached GA and became suitable for production use; deploying Kubernetes clusters with kubeadm is the way things are heading. The Kubernetes image repositories now have a mirror on Alibaba Cloud in China, which makes deploying a cluster with kubeadm much simpler and easier. This article uses kubeadm to walk through a quick deployment of Kubernetes v1.13.2.

Note: do not focus solely on the deployment steps. If you are new to Kubernetes, it is recommended to first get familiar with a binary-based deployment before learning kubeadm.

II. Architecture

OS: CentOS 7.6
Kernel: 3.10.0-957.el7.x86_64
Kubernetes: v1.13.2
Docker-ce: 18.06
Recommended hardware: 2 CPU cores, 2 GB RAM

Keepalived keeps the apiserver IP highly available (VIP failover).
Haproxy load-balances requests across the apiservers.

To reduce the number of servers, haproxy and keepalived are co-located on node-01 and node-02.

Node name      Role     IP              Installed software
VIP            VIP      10.31.90.200
node-01        master   10.31.90.201    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
node-02        master   10.31.90.202    kubeadm, kubelet, kubectl, docker, haproxy, keepalived
node-03        master   10.31.90.203    kubeadm, kubelet, kubectl, docker
node-04        node     10.31.90.204    kubeadm, kubelet, kubectl, docker
node-05        node     10.31.90.205    kubeadm, kubelet, kubectl, docker
node-06        node     10.31.90.206    kubeadm, kubelet, kubectl, docker
service CIDR            10.245.0.0/16
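Two well-known addresses fall out of the service CIDR above: the first usable IP is reserved for the built-in kubernetes Service, and by kubeadm convention the tenth is used for cluster DNS. A simplistic sketch of that arithmetic for the /16 used here (it does not handle octet overflow or other prefix lengths):

```shell
# Derive the conventional cluster service IPs from the service CIDR.
cidr="10.245.0.0/16"
base="${cidr%/*}"              # 10.245.0.0
prefix="${base%.*}"            # 10.245.0
echo "apiserver ClusterIP: ${prefix}.1"
echo "cluster DNS ClusterIP: ${prefix}.10"
```

These are the 10.245.0.1 and 10.245.0.10 addresses that show up later in the ipvsadm output.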

III. Pre-deployment preparation

1. Disable SELinux and the firewall

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

2. Disable swap

swapoff -a
# also comment out any swap entries in /etc/fstab so swap stays off after a reboot

3. Add host entries on every server

cat >>/etc/hosts<<EOF
10.31.90.201 node-01
10.31.90.202 node-02
10.31.90.203 node-03
10.31.90.204 node-04
10.31.90.205 node-05
10.31.90.206 node-06
EOF

4. Create and distribute SSH keys

Create an SSH key pair on node-01.

[root@node-01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:26z6DcUarn7wP70dqOZA28td+K/erv7NlaJPLVE1BTA root@node-01

Distribute node-01's public key to enable passwordless login to the other servers.

for n in `seq -w 01 06`;do ssh-copy-id node-$n;done

5. Configure kernel parameters

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

sysctl --system

6. Load the ipvs modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

7. Add yum repositories

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

IV. Deploy keepalived and haproxy

1. Install keepalived and haproxy

Install keepalived and haproxy on node-01 and node-02.

yum install -y keepalived haproxy

2. Edit the configurations

Keepalived configuration

node-01 uses priority 100 and node-02 uses priority 90; the rest of the configuration is identical.

[root@node-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        feng110498@163.com
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.31.90.200/24
    }
}
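Since only the priority value differs between the two nodes, node-02's file can be generated mechanically from node-01's with a single substitution; a sketch, demonstrated on a minimal inline fragment rather than the real file:

```shell
# On node-01 you might run something like:
#   sed 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf \
#     | ssh node-02 'cat > /etc/keepalived/keepalived.conf'
# Demonstrated here on an inline fragment:
printf '    priority 100\n' | sed 's/priority 100/priority 90/'
```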

Haproxy configuration

The haproxy configuration is identical on node-01 and node-02. Here haproxy listens on port 8443 of 10.31.90.200: because haproxy runs on the same servers as the kube-apiservers, using 6443 for both would conflict.

global
    chroot /var/lib/haproxy
    daemon
    group haproxy
    user haproxy
    log 127.0.0.1:514 local0 warning
    pidfile /var/lib/haproxy.pid
    maxconn 20000
    spread-checks 3
    nbproc 8

defaults
    log global
    mode tcp
    retries 3
    option redispatch

listen https-apiserver
    bind 10.31.90.200:8443
    mode tcp
    balance roundrobin
    timeout server 900s
    timeout connect 15s

    server apiserver01 10.31.90.201:6443 check port 6443 inter 5000 fall 5
    server apiserver02 10.31.90.202:6443 check port 6443 inter 5000 fall 5
    server apiserver03 10.31.90.203:6443 check port 6443 inter 5000 fall 5

3. Start the services

systemctl enable keepalived && systemctl start keepalived 
systemctl enable haproxy && systemctl start haproxy
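Before moving on, it is worth confirming that the VIP is up and haproxy is accepting connections on 8443. A minimal reachability helper built on bash's /dev/tcp (any TCP client such as curl or nc works equally well); the VIP and port are the ones used in this article:

```shell
#!/bin/bash
# Prints "open" if a TCP connection to HOST:PORT succeeds, else "closed".
check_port() {
  local host=$1 port=$2
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    exec 3>&-
    echo open
  else
    echo closed
  fi
}
# On a cluster node:  check_port 10.31.90.200 8443   -> "open" once haproxy is up
# Demonstrated here against a port that should be closed on loopback:
check_port 127.0.0.1 1
```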

V. Deploy Kubernetes

1. Install the packages

kubeadm requires a compatible Docker version, so install one that matches.
Because releases move quickly, pin the exact version numbers; this article uses 1.13.2, and other versions have not been tested.

yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 ipvsadm ipset docker-ce-18.06.1.ce

# start docker
systemctl enable docker && systemctl start docker

# enable kubelet on boot
systemctl enable kubelet

2. Adjust the initialization configuration

Print the default configuration with kubeadm config print init-defaults > kubeadm-init.yaml, then adjust it for your environment.

[root@node-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.90.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.31.90.200:8443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
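One thing to be aware of: podSubnet is left empty above, while the stock flannel manifest applied later defaults to 10.244.0.0/16 for the pod network. If you want kubeadm, and the controller-manager's node CIDR allocation, to know about the pod network, you could set the two to match; a sketch of the networking stanza under that assumption:

```yaml
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"   # must match the Network value in kube-flannel.yml
  serviceSubnet: "10.245.0.0/16"
```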

3. Pre-pull the images

[root@node-01 ~]# kubeadm config images pull --config kubeadm-init.yaml 
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

4. Initialize

[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml    
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.12.0.1 10.31.90.201 10.31.90.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503955 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-01" as an annotation
[mark-control-plane] Marking the node node-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1

kubeadm init performs the following main steps:

  • [init]: initialize with the specified version.
  • [preflight]: run pre-flight checks and download the required Docker images.
  • [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; without it the kubelet cannot start, which is why the kubelet fails to run before initialization.
  • [certificates]: generate the certificates Kubernetes uses, stored in /etc/kubernetes/pki.
  • [kubeconfig]: generate the kubeconfig files in /etc/kubernetes; components use them to communicate with each other.
  • [control-plane]: install the master components from the YAML files in /etc/kubernetes/manifests.
  • [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.
  • [wait-control-plane]: wait for the master components deployed by control-plane to start.
  • [apiclient]: check the health of the master components.
  • [uploadconfig]: upload the configuration that was used to a ConfigMap.
  • [kubelet]: configure the kubelet via a ConfigMap.
  • [patchnode]: record the CRI socket information on the Node object as an annotation.
  • [mark-control-plane]: label the current node with the master role and taint it NoSchedule, so that by default regular Pods are not scheduled onto masters.
  • [bootstrap-token]: generate the bootstrap token; note it down, as it is needed later when adding nodes with kubeadm join.
  • [addons]: install the CoreDNS and kube-proxy add-ons.

5. Prepare the kubeconfig file for kubectl

By default kubectl looks for a config file in the .kube directory under the current user's home directory. Here we copy admin.conf, generated during the [kubeconfig] step of initialization, to .kube/config.

[root@node-01 ~]# mkdir -p $HOME/.kube
[root@node-01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node-01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

This file records the API server's address, so subsequent kubectl commands can connect to the API server directly.
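You can confirm which endpoint kubectl will talk to by pulling the server field out of the kubeconfig; a sketch, shown against an inline fragment (on node-01, run the same awk against $HOME/.kube/config):

```shell
# Extract the 'server:' value from a kubeconfig-style document.
awk '/server:/ {print $2}' <<'EOF'
clusters:
- cluster:
    server: https://10.31.90.200:8443
  name: kubernetes
EOF
```

It should print the load-balanced endpoint, https://10.31.90.200:8443, rather than any single master's address.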

6. Check component status

[root@node-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@node-01 ~]# kubectl get node
NAME      STATUS     ROLES    AGE   VERSION
node-01   NotReady   master   14m   v1.13.2

At this point the cluster has a single node; its role is master and its status is NotReady.

7. Deploy the remaining masters

From node-01, copy the certificate files to the other master nodes.

USER=root
CONTROL_PLANE_IPS="node-02 node-03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done

Run the join on the other masters; note the --experimental-control-plane flag.

kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1 --experimental-control-plane

Note: tokens have a limited lifetime. If the old token has expired, create a new one with kubeadm token create --print-join-command.
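The --discovery-token-ca-cert-hash value in the join command is not arbitrary either: it is the SHA-256 digest of the cluster CA certificate's DER-encoded public key, and it can be recomputed at any time with openssl. A sketch; here a throwaway CA is generated so the example is self-contained, whereas on a master you would point at /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA certificate (stand-in for /etc/kubernetes/pki/ca.crt).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# SHA-256 over the DER-encoded public key, as kubeadm defines the hash.
hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```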

8. Deploy the worker nodes

Run the join on node-04, node-05, and node-06; note that the --experimental-control-plane flag is omitted.

kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1

9. Deploy the flannel network plugin

The master is NotReady precisely because no network plugin has been deployed yet, so node-to-master networking is not fully functional. The most popular Kubernetes network plugins are Flannel, Calico, Canal, and Weave; flannel is used here.

[root@node-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

10. Check node status

All nodes are now in the Ready state.

[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   35m   v1.13.2
node-02   Ready    master   36m   v1.13.2
node-03   Ready    master   36m   v1.13.2
node-04   Ready    <none>   40m   v1.13.2
node-05   Ready    <none>   40m   v1.13.2
node-06   Ready    <none>   40m   v1.13.2

Check the pods:

[root@node-01 ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-89cc84847-j8mmg           1/1     Running   0          1d
coredns-89cc84847-rbjxs           1/1     Running   0          1d
etcd-node-01                      1/1     Running   1          1d
etcd-node-02                      1/1     Running   0          1d
etcd-node-03                      1/1     Running   0          1d
kube-apiserver-node-01            1/1     Running   0          1d
kube-apiserver-node-02            1/1     Running   0          1d
kube-apiserver-node-03            1/1     Running   0          1d
kube-controller-manager-node-01   1/1     Running   2          1d
kube-controller-manager-node-02   1/1     Running   0          1d
kube-controller-manager-node-03   1/1     Running   0          1d
kube-proxy-jfbmv                  1/1     Running   0          1d
kube-proxy-lvkms                  1/1     Running   0          1d
kube-proxy-qx7kh                  1/1     Running   0          1d
kube-proxy-xst5v                  1/1     Running   0          1d
kube-proxy-zfwrk                  1/1     Running   0          1d
kube-proxy-ztg6j                  1/1     Running   0          1d
kube-scheduler-node-01            1/1     Running   1          1d
kube-scheduler-node-02            1/1     Running   1          1d
kube-scheduler-node-03            1/1     Running   1          1d
kube-flannel-ds-amd64-87wzj       1/1     Running   0          1d
kube-flannel-ds-amd64-lczwm       1/1     Running   0          1d
kube-flannel-ds-amd64-lwc2j       1/1     Running   0          1d
kube-flannel-ds-amd64-mwlfq       1/1     Running   0          1d
kube-flannel-ds-amd64-nj2mk       1/1     Running   0          1d
kube-flannel-ds-amd64-wx7vd       1/1     Running   0          1d

Check the ipvs state:

[root@node-01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.245.0.1:443 rr
  -> 10.31.90.201:6443            Masq    1      2          0
  -> 10.31.90.202:6443            Masq    1      0          0
  -> 10.31.90.203:6443            Masq    1      2          0
TCP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0
TCP  10.245.90.161:80 rr
  -> 10.45.0.1:80                 Masq    1      0          0
TCP  10.245.90.161:443 rr
  -> 10.45.0.1:443                Masq    1      0          0
TCP  10.245.149.227:1 rr
  -> 10.31.90.204:1               Masq    1      0          0
  -> 10.31.90.205:1               Masq    1      0          0
  -> 10.31.90.206:1               Masq    1      0          0
TCP  10.245.181.126:80 rr
  -> 10.34.0.2:80                 Masq    1      0          0
  -> 10.45.0.0:80                 Masq    1      0          0
  -> 10.46.0.0:80                 Masq    1      0          0
UDP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0

At this point the Kubernetes cluster deployment is complete.