Quick Deployment with kubeadm
Software Environment
- OS: CentOS Linux release 7.3.1611 (Core)
- Docker: 18.06.0-ce
- Kubernetes
  - client: v1.11.2
  - server: v1.11.1
Node Plan
Hostname | Role | IP |
---|---|---|
node001 | master | 172.31.117.180 |
node002 | node | 172.31.117.179 |
node003 | node | 172.31.117.178 |
Network Plan
Network | CIDR |
---|---|
Node network | 172.31.112.0/20 |
Service network | 10.96.0.0/12 |
Pod network | 10.244.0.0/16 |
Deployment Workflow
- Initialize the system environment: hosts resolution and time synchronization
- Install Docker and configure a registry mirror (accelerator)
- Set up SSH mutual trust between the nodes (see the sketch below)
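One minimal way to set up the SSH trust from the master (hostnames taken from the node plan above; run as root and enter each node's password once):
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# for host in node002 node003; do ssh-copy-id root@$host; done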
Start the Deployment
Initialize the System Environment
# ntpdate ntp1.aliyun.com
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.31.117.180 node001
172.31.117.179 node002
172.31.117.178 node003
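kubeadm's preflight checks also refuse to run while swap is enabled, so disable it on every node as part of the system prep; a rough sketch (the sed command comments out uncommented fstab lines that mention swap and keeps a .bak backup):
# swapoff -a
# sed -i.bak '/swap/ s/^\([^#]\)/#\1/' /etc/fstab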
Check Kernel Parameters
# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
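If either value reads 0, the following sketch enables both settings persistently (assumes the br_netfilter module is available on this kernel):
# modprobe br_netfilter
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system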
Install Docker and the Mirror Accelerator (All Nodes)
Run the helper scripts for a quick setup:
# curl -s 123.206.25.230/scripts/docker/centos_73_install_docker_latest.sh | bash
# curl -s 123.206.25.230/scripts/docker/centos_73_install_k8s_latest.sh | bash
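These scripts are specific to this environment. If they are unreachable, a rough manual equivalent using the Aliyun mirrors looks like the following; the repo URLs and pinned versions here are assumptions, so verify them before use:
# yum install -y yum-utils
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
# yum install -y docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2
# systemctl enable docker kubelet
# systemctl start docker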
Initialize the Kubernetes Cluster
Master
Specify the Kubernetes version along with the Service and Pod CIDRs:
# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
If nothing goes wrong you will see output like the following; I've split it into stages for readability.
Preflight checks and image pulls
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0831 10:47:54.325501 20853 kernel_validator.go:81] Validating kernel version
I0831 10:47:54.325572 20853 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
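As the log itself suggests, the required images can be pulled ahead of time so the init step goes faster:
# kubeadm config images pull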
Write the kubelet configuration and start kubelet
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
Generate certificates
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.117.180]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node001 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node001 localhost] and IPs [172.31.117.180 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
Write component kubeconfigs and static Pod manifests
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
Bring up the control plane
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 41.501461 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
Mark the master
[markmaster] Marking the node node001 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node001 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
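The NoSchedule taint keeps ordinary Pods off the master. For a single-machine test cluster it can be removed (optional, and not needed for this three-node setup):
# kubectl taint nodes node001 node-role.kubernetes.io/master:NoSchedule-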
Set up the bootstrap token and RBAC
[bootstraptoken] using token: v7c44j.c1zwjs3ge5r9s3d6
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
Apply the CoreDNS and kube-proxy add-ons
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Initialization result
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
How other nodes join the cluster
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.31.117.180:6443 --token v7c44j.c1zwjs3ge5r9s3d6 --discovery-token-ca-cert-hash sha256:29c24b369db0b1139bff4c9b11b2a0520f578bd7259eb70e224ca611c2d8fee7
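Note that the bootstrap token expires after 24 hours by default; if it is lost or expired, a fresh join command can be printed on the master:
# kubeadm token create --print-join-command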
At this point the cluster is one step away from being usable. admin.conf is generated automatically during initialization and holds the API server endpoint and certificate credentials; kubectl must read this file to talk to the API server, so copy it into place:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
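If you are working as root, exporting KUBECONFIG works just as well:
# export KUBECONFIG=/etc/kubernetes/admin.conf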
After that, kubectl can query cluster resources; seeing output like the following means the cluster has been initialized successfully:
# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-77xd6 0/1 Pending 0 13m
kube-system coredns-78fcdf6894-vnxnl 0/1 Pending 0 13m
kube-system etcd-node001 1/1 Running 0 12m
kube-system kube-apiserver-node001 1/1 Running 0 12m
kube-system kube-controller-manager-node001 1/1 Running 0 12m
kube-system kube-proxy-h46jc 1/1 Running 0 13m
kube-system kube-scheduler-node001 1/1 Running 0 12m
Component health can also be checked with the kubectl get componentstatus command (kubectl get cs for short):
# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
The node, however, is not yet in the Ready state, because no network plugin has been installed:
# kubectl get node
NAME STATUS ROLES AGE VERSION
node001 NotReady master 28m v1.11.2
Install flannel:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
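The master-branch manifest changes over time, so pinning to a tagged release is more reproducible. v0.10.0 is assumed to be the current release for this era; verify the tag before relying on it:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml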
After a short wait, check again:
# kubectl get node
NAME STATUS ROLES AGE VERSION
node001 Ready master 31m v1.11.2
Verify that flannel is running:
# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
...
kube-system kube-flannel-ds-amd64-pbrgd 1/1 Running 0 1m
...
That essentially completes the master's setup; next, join the remaining nodes to the cluster.
Node
Join a node to the cluster:
# kubeadm join 172.31.117.180:6443 --token v7c44j.c1zwjs3ge5r9s3d6 --discovery-token-ca-cert-hash sha256:29c24b369db0b1139bff4c9b11b2a0520f578bd7259eb70e224ca611c2d8fee7
The output should look something like the following.
Preflight checks
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0831 11:22:30.242587 20761 kernel_validator.go:81] Validating kernel version
I0831 11:22:30.242645 20761 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
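The IPVS warning is harmless here: kube-proxy simply falls back to iptables mode. To silence it, the listed modules can be loaded beforehand (this sketch does not persist across reboots):
# for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $mod; done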
Connect to the API server and join
[discovery] Trying to connect to API Server "172.31.117.180:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.117.180:6443"
[discovery] Requesting info from "https://172.31.117.180:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.117.180:6443"
[discovery] Successfully established connection with API Server "172.31.117.180:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node002" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
Wait a moment after the join completes, then list the nodes on the master:
# kubectl get node
NAME STATUS ROLES AGE VERSION
node001 Ready master 34m v1.11.2
node002 NotReady <none> 13s v1.11.2
node003 NotReady <none> 8s v1.11.2
# kubectl get node
NAME STATUS ROLES AGE VERSION
node001 Ready master 35m v1.11.2
node002 Ready <none> 1m v1.11.2
node003 Ready <none> 1m v1.11.2
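As a final smoke test, a throwaway Deployment confirms that scheduling and Pod networking work across the nodes (the name nginx-test is arbitrary):
# kubectl run nginx-test --image=nginx --replicas=2
# kubectl get pods -o wide
# kubectl delete deployment nginx-test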