Many services today are containerized with Docker and then managed centrally with the Kubernetes (k8s) container orchestrator. Kubernetes is more complex than other orchestration tools, but it is also more powerful, which is why more and more companies are adopting it. In this guide, 智一面 walks you through installing a three-node Kubernetes cluster (master01, node01, node02) with kubeadm, using version 1.19.3.
1) Prepare three hosts:
master01: 192.168.32.107
node01: 192.168.32.109
node02: 192.168.32.110
2) On the master host
I have written the steps up as a shell script, with the important caveats called out inline. Read through the script first, then copy it to the master host and run it.
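Both scripts below permanently disable swap by commenting out the swap line in /etc/fstab. A pattern-based sed (which works regardless of the UUID on your machine) can be tried on a throwaway file first; the sample fstab content here is made up, the real scripts edit /etc/fstab:

```shell
# A made-up sample fstab; the real scripts edit /etc/fstab
printf '%s\n' \
  'UUID=aaaa-bbbb /     ext4 defaults 0 0' \
  'UUID=cccc-dddd swap  swap defaults 0 0' > /tmp/fstab.demo
# Comment out any uncommented line that mounts swap
sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After running this, the swap line is commented out while the root filesystem line is left untouched.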
#!/bin/bash
# master01 host: 192.168.32.107
# Disable swap for the current session
swapoff -a
# Permanently disable swap by commenting out the swap entry in /etc/fstab
# (the original sed matched a hard-coded UUID that only exists on my machine;
# matching on the swap mount entry works on any host)
sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /etc/fstab
# Pull the images the k8s components need; I am using v1.19.3 here,
# pick whichever version suits your needs
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.3
# Retag the images to k8s.gcr.io and delete the originals:
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.19.3 k8s.gcr.io/kube-proxy:v1.19.3
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.3 k8s.gcr.io/kube-apiserver:v1.19.3
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.3 k8s.gcr.io/kube-controller-manager:v1.19.3
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.3 k8s.gcr.io/kube-scheduler:v1.19.3
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker rmi -f registry.aliyuncs.com/google_containers/kube-proxy:v1.19.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.3
docker rmi -f registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker rmi -f registry.aliyuncs.com/google_containers/coredns:1.7.0
docker rmi -f registry.aliyuncs.com/google_containers/pause:3.2
# Initialize the master node
# NOTE: change the api-server IP address below to your own machine's IP!!!
kubeadm init \
--apiserver-advertise-address 192.168.32.107 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
# Check kubeadm init's exit status; if it is not 0, find out what failed,
# or first verify that the images can be pulled. If they pull fine, this can
# be ignored for now. ($? must be captured immediately after kubeadm init,
# before any other command overwrites it)
rc=$?
echo -n "status: ${rc} , "
echo -e '\033[32mOK, master initialization successful!\033[0m'
# Set up kubectl
# Run these in your home directory
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Enable shell auto-completion for kubectl
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
# Install the network plugin (the hosts entry below can be added on all three hosts)
# Add a hosts entry so raw.githubusercontent.com resolves:
cat << EOF >> /etc/hosts
199.232.68.133 raw.githubusercontent.com
EOF
echo -e "\033[32mdone!\033[0m"
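Note that the script above only adds the hosts entry; the network plugin itself still has to be applied on the master. Since the init used --pod-network-cidr=10.244.0.0/16 (flannel's default), a typical follow-up looks like this (a sketch; the manifest URL is the commonly used flannel location and may have moved since this was written):

```shell
# Apply the flannel CNI manifest on the master (run after kubeadm init succeeds)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Check that the flannel pods come up
kubectl get pods -n kube-system
```

Until a network plugin is applied, the nodes will stay in NotReady and CoreDNS pods will remain Pending.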
3) On the node1 host
#!/bin/bash
# This script is run on master01, node01 and node02
systemctl stop firewalld.service
systemctl disable firewalld.service
# Disable swap for the current session
swapoff -a
# Permanently disable swap (pattern-based, no hard-coded UUID)
sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /etc/fstab
# Install docker, version 18.06.3
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install -y docker-ce-18.06.3.ce-3.el7
systemctl start docker
systemctl enable docker
# Write (not append, which would corrupt the JSON on a re-run) the registry mirror config
cat << EOF > /etc/docker/daemon.json
{
"registry-mirrors": ["https://u8n2zdxj.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
# Set the hostname
hostnamectl set-hostname node01
# Add hosts entries
cat << EOF >> /etc/hosts
192.168.32.107 master01
192.168.32.109 node01
192.168.32.110 node02
EOF
# Tune kernel parameters (load br_netfilter first, otherwise the bridge-nf settings fail to apply)
modprobe br_netfilter
cat << EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p
# Add the Aliyun Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet and kubectl, pinned to the same version as the master (v1.19.3)
yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3
systemctl enable kubelet
rc=$?
echo -n "status: ${rc} , "
echo -e '\033[32mOK, shell script executed successfully!\033[0m'
4) On the node2 host
node2 runs exactly the same preparation script as node1; the only line that differs is the hostname:
hostnamectl set-hostname node02
5) Join the two node hosts to the cluster
Run the following command on each of the two nodes.
Note: the token you get will obviously differ from mine, so do not copy this verbatim. When the master finishes initializing, it prints a join token on the master's screen; 192.168.32.107:6443 is the master host's IP and port. If you missed it, run this on the master to generate a new one: kubeadm token create --print-join-command
The token expires after a while, so run the join on the two nodes promptly.
kubeadm join 192.168.32.107:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:d1d57b39e4da309096bca4784faf10d2b3ee7d9410ac83456e51a8b80e78b12d
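If you have a token but have lost the --discovery-token-ca-cert-hash value, it can be recomputed from the cluster CA certificate. A sketch, demonstrated here on a throwaway self-signed certificate so it runs anywhere; on the real master the input file is /etc/kubernetes/pki/ca.crt:

```shell
# Create a throwaway cert purely to show the pipeline end-to-end;
# on the master, skip this step and use /etc/kubernetes/pki/ca.crt instead
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' -days 1 \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null
# sha256 hash of the CA public key, in the form kubeadm join expects
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

The printed 64-character hex string is what goes after sha256: in the join command.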
6) At this point the kubeadm install is essentially complete
Check whether the nodes are ready: kubectl get nodes
Check the cluster component status: kubectl get cs
If the components show as unhealthy, the fix is:
cd /etc/kubernetes/manifests/
vim kube-controller-manager.yaml
vim kube-scheduler.yaml
In both yaml files above, comment out the --port=0 line: #- --port=0
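The manual vim edits can also be scripted with sed. A sketch, demonstrated on a throwaway file so it runs anywhere; on the master the targets are kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/:

```shell
# A minimal stand-in for the scheduler manifest (made-up content)
printf '%s\n' 'spec:' '  containers:' '  - command:' \
  '    - kube-scheduler' '    - --port=0' > /tmp/kube-scheduler.demo.yaml
# Comment out the --port=0 flag, preserving indentation
sed -i 's/^\([[:space:]]*\)- --port=0/\1#- --port=0/' /tmp/kube-scheduler.demo.yaml
grep 'port=0' /tmp/kube-scheduler.demo.yaml
```

The kubelet watches the manifests directory, so the static pods restart on their own after the edit.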
Then run kubectl get nodes again to confirm the nodes have joined the master.
7) With a single master, pods cannot be scheduled onto it by default. To change that:
Allow the master node to run pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
To disallow scheduling on it again:
kubectl taint nodes master01 node-role.kubernetes.io/master=:NoSchedule
Taint effect options:
NoSchedule: pods will never be scheduled here
PreferNoSchedule: the scheduler avoids this node when possible
NoExecute: new pods are not scheduled here, and existing pods on the node are evicted
After running kubectl taint nodes --all node-role.kubernetes.io/master- , restart the kubelet:
systemctl restart kubelet.service
8) Verify that images pull correctly:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -n kube-system -o wide
kubectl get pods
kubectl get pod,svc
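Once the Service is exposed, the nginx page can be fetched through any node's IP and the allocated NodePort. A sketch (the node IP is from this guide's setup; the port number is assigned by Kubernetes at expose time):

```shell
# Look up the NodePort that Kubernetes assigned to the nginx Service
PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
# Fetch the nginx welcome page through one of the nodes
curl -s "http://192.168.32.109:${PORT}" | head -n 4
```

If curl returns the nginx welcome page, scheduling, image pulls, and Service networking are all working.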
If a pod stays in Pending, something went wrong while k8s was scheduling it or pulling its image; inspect it with:
kubectl describe pod <pod-name>
For example: kubectl describe pod nginx-6799fc88d8-d7cbr
END