1 Install Helm 3
What is Helm? Think of Helm as the apt-get/yum of Kubernetes. A Helm repository holds only chart manifests, not images; the images themselves still come from an image registry such as hub.docker.com or a private registry.
For users, the benefit of Helm is that you no longer need to understand an application's Kubernetes YAML deployment files: Helm can download the application and install it on the cluster for you.
Helm's official site: https://helm.sh
1.1 Choosing a version
Which version of Helm should we install? Each Helm release is compiled against a specific set of Kubernetes versions.
The Helm documentation provides a compatibility table between Kubernetes and Helm versions at https://helm.sh/zh/docs/topics/version_skew/ (only part of it matters here). Since the Kubernetes version I installed is 1.22.6, I chose to download Helm 3.9.x.
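Before picking a release, it helps to confirm the exact version of the cluster you are targeting; a minimal check, assuming kubectl is already configured against the cluster:
# Print client and server versions; match the server version against the skew table
[bigdata@k8s-master ~]$ kubectl version --short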
1.2 Installing Helm
[bigdata@k8s-master ~]$ cd /opt/module/
# Download the release tarball
[bigdata@k8s-master module]$ wget https://get.helm.sh/helm-v3.9.4-linux-amd64.tar.gz
# Unpack the tarball
[bigdata@k8s-master module]$ tar -zxvf helm-v3.9.4-linux-amd64.tar.gz
# Move the helm binary from the unpacked directory onto the PATH
[bigdata@k8s-master module]$ sudo mv linux-amd64/helm /usr/local/bin/helm
# Verify
[bigdata@k8s-master module]$ helm version
1.3 Configuring repositories
- Once Helm is installed, you can add chart repositories.
# Add a public repository
[bigdata@k8s-master module]$ helm repo add bitnami https://charts.bitnami.com/bitnami
# Add the Alibaba Cloud repository
[bigdata@k8s-master module]$ helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# List the configured repositories
[bigdata@k8s-master module]$ helm repo list
- With the repositories added, you can list the charts available for installation:
[bigdata@k8s-master module]$ helm search repo bitnami
[bigdata@k8s-master module]$ helm search repo aliyun
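After adding or changing repositories, refresh the local chart index so that searches see the latest versions:
# Refresh the locally cached chart metadata for all repositories
[bigdata@k8s-master module]$ helm repo update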
2 Deploy NFS and mount the shared directory
Server cluster plan:

| NFS server | NFS client |
| ---------- | ---------- |
| k8s-master | k8s-node2  |
2.1 Install NFS on the server
# Install NFS on all server-side nodes
[bigdata@k8s-master module]$ sudo yum install -y nfs-utils
# Create the shared directory on the master
[bigdata@k8s-master module]$ sudo mkdir -p /data/harbor
# Set the permissions on the master (755 is a mode, so this is chmod, not chown)
[bigdata@k8s-master module]$ sudo chmod -R 755 /data/harbor
# Add the export to the configuration file
[bigdata@k8s-master module]$ sudo -- bash -c 'cat << EOF >> /etc/exports
> /data/harbor *(rw,sync,no_root_squash)
> EOF'
# Re-export the shares so the change takes effect
[bigdata@k8s-master module]$ sudo exportfs -r
# Check that the export is in place
[bigdata@k8s-master module]$ sudo exportfs
# Start rpcbind
[bigdata@k8s-master module]$ sudo systemctl start rpcbind
# Enable rpcbind at boot
[bigdata@k8s-master module]$ sudo systemctl enable rpcbind
# Start nfs
[bigdata@k8s-master module]$ sudo systemctl start nfs
# Enable nfs at boot
[bigdata@k8s-master module]$ sudo systemctl enable nfs
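Before moving on to the clients, it is worth confirming that the export is actually visible over the network; showmount ships with nfs-utils:
# List the directories exported by the master
[bigdata@k8s-master module]$ showmount -e k8s-master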
2.2 Install NFS on the client
Install NFS on all client nodes:
[bigdata@k8s-node2 ~]$ sudo yum -y install nfs-utils rpcbind
[bigdata@k8s-node2 ~]$ sudo systemctl enable rpcbind --now
# Mount the shared directory
[bigdata@k8s-node2 ~]$ sudo mkdir -p /data/harbor
[bigdata@k8s-node2 ~]$ sudo mount -t nfs k8s-master:/data/harbor /data/harbor
# Verify that the mount succeeded
[bigdata@k8s-node2 ~]$ df -h
Configure automatic mounting at boot:
[bigdata@k8s-node2 ~]$ sudo vi /etc/fstab
# device                 mount point   type  options   0 = no dump  0 = no fsck
k8s-master:/data/harbor  /data/harbor  nfs   defaults  0 0
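No reboot is needed to validate the new fstab entry; mount can re-read fstab directly (assuming the share is unmounted first):
# Unmount, then remount everything listed in /etc/fstab to test the entry
[bigdata@k8s-node2 ~]$ sudo umount /data/harbor
[bigdata@k8s-node2 ~]$ sudo mount -a
[bigdata@k8s-node2 ~]$ df -h | grep /data/harbor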
3 Install Harbor
3.1 Create a namespace
[bigdata@k8s-master module]$ kubectl create namespace harbor
3.2 Configure the default storage
Configure a StorageClass for dynamic provisioning on the master. Where do we find a YAML template? Surely we don't have to write the manifests by hand.
Kubernetes already provides templates, in the (now retired) external-storage repository on GitHub:
https://github.com/kubernetes-retired/external-storage
3.2.1 Download the YAML templates
The download may fail for network reasons; just retry until it succeeds.
# Skip this step if git is already installed
[bigdata@k8s-master module]$ sudo yum install -y git
# Clone external-storage; retry if the clone fails
[bigdata@k8s-master module]$ git clone https://github.com/kubernetes-retired/external-storage
3.2.2 Configure harbor-nfs-storage
Switch to the external-storage/nfs-client/deploy/ directory:
[bigdata@k8s-master module]$ cd external-storage/nfs-client/deploy/
Copy the following files and merge them into a single harbor-nfs-storage.yaml:
- class.yaml: defines the StorageClass
- deployment.yaml: points at the NFS server
- rbac.yaml: the roles and permissions the provisioner needs
3.2.3 Edit harbor-nfs-storage.yaml
Append the contents of class.yaml, rbac.yaml, and deployment.yaml to harbor-nfs-storage.yaml, then make five changes:
① change the metadata.name value;
② add a volume binding policy;
③ replace every namespace with harbor;
④ set the NFS server address;
⑤ set the NFS server's shared directory.
# Create an NFS StorageClass
# from external-storage/nfs-client/deploy/class.yaml -- begin
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: harbor-nfs-storage # ① renamed
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  # whether to archive the PV's contents when the PV is deleted
  archiveOnDelete: "false"
# ② volume binding policy
volumeBindingMode: Immediate
# whether to allow volume expansion
allowVolumeExpansion: true
# from external-storage/nfs-client/deploy/class.yaml -- end
---
# from external-storage/nfs-client/deploy/rbac.yaml -- begin
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # ③ replace with the namespace where the provisioner is deployed
  namespace: harbor
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # ③ replace with the namespace where the provisioner is deployed
    namespace: harbor
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # ③ replace with the namespace where the provisioner is deployed
  namespace: harbor
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # ③ replace with the namespace where the provisioner is deployed
  namespace: harbor
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # ③ replace with the namespace where the provisioner is deployed
    namespace: harbor
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# from external-storage/nfs-client/deploy/rbac.yaml -- end
---
# from external-storage/nfs-client/deploy/deployment.yaml -- begin
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # ③ replace with the namespace where the provisioner is deployed
  namespace: harbor
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2 # image mirrored on Alibaba Cloud
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes # mount point inside the container
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.220.101 # ④ the NFS server address
            - name: NFS_PATH
              value: /data/harbor # ⑤ the NFS server's shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.220.101
            path: /data/harbor
# from external-storage/nfs-client/deploy/deployment.yaml -- end
3.2.4 Create the persistent storage
[bigdata@k8s-master harbor]$ kubectl apply -f harbor-nfs-storage.yaml
[bigdata@k8s-master harbor]$ kubectl get pod -n harbor
nfs-client-provisioner-654fc7649f-v9wx9 1/1 Running 9 (9s ago) 12s
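As a sanity check, confirm that the StorageClass exists. If you also want it to act as the cluster-wide default (this section's goal is a default storage class), the standard Kubernetes annotation can be patched on as an optional step:
# Confirm the StorageClass was created
[bigdata@k8s-master harbor]$ kubectl get storageclass
# Optional: mark it as the cluster default
[bigdata@k8s-master harbor]$ kubectl patch storageclass harbor-nfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'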
3.3 Install Harbor from a configuration file
Because quite a few parameters need to be modified, it is clearer to download the chart package locally first and then edit the configuration.
3.3.1 Search for the chart
Search the most recent chart versions; we will install the fairly mature Harbor 2.5.4.
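The search below assumes the official Harbor chart repository has already been added; its upstream URL is https://helm.goharbor.io:
# Add the official Harbor chart repository (skip if it is already configured)
[bigdata@k8s-master ~]$ helm repo add harbor https://helm.goharbor.io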
[bigdata@k8s-master ~]$ helm search repo harbor/harbor -l | head -6
3.3.2 Download the chart
Use --version to pull a specific chart version; Harbor v2.5.4 corresponds to chart version 1.9.4.
[bigdata@k8s-master module]$ pwd
/opt/module
[bigdata@k8s-master module]$ helm pull harbor/harbor --version 1.9.4
# Unpack the chart package
[bigdata@k8s-master module]$ tar -zxvf harbor-1.9.4.tgz
[bigdata@k8s-master module]$ cd harbor/
[bigdata@k8s-master harbor]$ ls
cert Chart.yaml conf LICENSE README.md templates values.yaml
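If you only want to inspect the default configuration before editing, helm can print a chart's values without unpacking it:
# Print the chart's default values.yaml
[bigdata@k8s-master harbor]$ helm show values harbor/harbor --version 1.9.4 | less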
3.3.3 Modify values.yaml
values.yaml is long, so only the modified parts are shown. There are four changes in total:
① change type: ingress to type: nodePort;
② disable TLS (enabling it requires configuring certificates): change enabled: true to enabled: false;
③ change externalURL: https://core.harbor.domain to externalURL: http://192.168.220.101:30002;
④ replace every storageClass: "" with storageClass: "harbor-nfs-storage", the NFS-backed StorageClass created earlier in harbor-nfs-storage.yaml.
expose:
  # ① change type: ingress to type: nodePort
  type: nodePort
  tls:
    # ② disable TLS (enabling it requires certificates): change enabled: true to enabled: false
    enabled: false

# With nodePort and TLS disabled, point externalURL at one of the cluster's node IPs
# ③ change externalURL: https://core.harbor.domain to externalURL: http://192.168.220.101:30002
externalURL: http://192.168.220.101:30002

# Persistent storage: ④ replace every storageClass: "" with storageClass: "harbor-nfs-storage"
persistence:
  enabled: true # enable persistent storage
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry: # persistent volume for the registry
      existingClaim: ""
      storageClass: "harbor-nfs-storage" # the StorageClass from harbor-nfs-storage.yaml
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
    chartmuseum: # persistent volume for chartmuseum
      existingClaim: ""
      storageClass: "harbor-nfs-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
      annotations: {}
    jobservice: # the async job component
      jobLog:
        existingClaim: ""
        storageClass: "harbor-nfs-storage"
        subPath: ""
        accessMode: ReadWriteOnce
        size: 1Gi
        annotations: {}
      scanDataExports:
        existingClaim: ""
        storageClass: "harbor-nfs-storage"
        subPath: ""
        accessMode: ReadWriteOnce
        size: 1Gi
        annotations: {}
    database: # the PostgreSQL database component
      existingClaim: ""
      storageClass: "harbor-nfs-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    redis: # the Redis cache
      existingClaim: ""
      storageClass: "harbor-nfs-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
      annotations: {}
    trivy: # the Trivy vulnerability scanner
      existingClaim: ""
      storageClass: "harbor-nfs-storage"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
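As an alternative to editing values.yaml, the same four changes can be passed as command-line overrides; a sketch (the value paths match the chart keys shown above; repeat the storageClass override for the remaining components as needed):
[bigdata@k8s-master module]$ helm install harbor harbor/harbor --version 1.9.4 -n harbor \
  --set expose.type=nodePort \
  --set expose.tls.enabled=false \
  --set externalURL=http://192.168.220.101:30002 \
  --set persistence.persistentVolumeClaim.registry.storageClass=harbor-nfs-storage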
3.3.4 Install Harbor
[bigdata@k8s-master harbor]$ helm install harbor . -n harbor
[bigdata@k8s-master harbor]$ helm -n harbor ls
Pulling the images is very slow, so expect a long wait; in this case the installation took more than 40 minutes to complete.
Harbor is only reachable once the svc and pods are all in a healthy state.
[bigdata@k8s-master harbor]$ kubectl get pods -n harbor
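Since the images take a while to pull, you can watch the pods transition to Running (Ctrl-C stops the watch):
[bigdata@k8s-master harbor]$ kubectl get pods -n harbor -w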
4 Access via browser
4.1 Access through a browser
- Verify access in a browser (username/password: admin/Harbor12345).
(Screenshot: the Harbor UI after logging in.)
4.2 Access with the docker CLI
One of the main reasons for installing Harbor is to pull/push images with the docker CLI, so the first thing to test is whether an image can be pushed to Harbor.
The login attempt fails with an error:
[bigdata@k8s-master harbor]$ sudo docker login k8s-master:30002
Solution:
- Edit docker's daemon.json:
Add "insecure-registries": ["192.168.220.101:30002"] to the file.
[bigdata@k8s-master ~]$ sudo vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries": ["192.168.220.101:30002"]
}
- Reload the daemon
[bigdata@k8s-master ~]$ sudo systemctl daemon-reload
- Restart docker
[bigdata@k8s-master ~]$ sudo systemctl restart docker
- Log in again
[bigdata@k8s-master ~]$ sudo docker login 192.168.220.101:30002 -u admin -p Harbor12345
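Passing -p on the command line leaves the password in the shell history; docker also accepts the password on stdin, which is the safer pattern:
# Equivalent login that keeps the password out of the process list
[bigdata@k8s-master ~]$ echo Harbor12345 | sudo docker login 192.168.220.101:30002 -u admin --password-stdin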
4.3 Push an image
- Check which docker images exist locally
For example, the local machine has the image jimmidyson/configmap-reload.
# List the local images
[bigdata@k8s-master harbor]$ sudo docker images
- Push the image
[bigdata@k8s-master harbor]$ sudo docker tag jimmidyson/configmap-reload:v0.5.0 192.168.220.101:30002/library/configmap-reload
[bigdata@k8s-master harbor]$ sudo docker push 192.168.220.101:30002/library/configmap-reload
Using default tag: latest
The push refers to repository [192.168.220.101:30002/library/configmap-reload]
b061e643f13c: Pushed
6c199d37e39d: Pushed
latest: digest: sha256:91467ba755a0c41199a63fe80a2c321c06edc4d3affb4f0ab6b3d20a49ed88d1 size: 738
[bigdata@k8s-master harbor]$
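To confirm the round trip works, the pushed image can be pulled back from Harbor (library is Harbor's default public project):
[bigdata@k8s-master harbor]$ sudo docker pull 192.168.220.101:30002/library/configmap-reload:latest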