k8s Notes (1): Setting Up a v1.15.1 Cluster on CentOS 7

2020/03/11 k8s

1. Environment Preparation

1.1. Environment Overview

| Hostname | OS | IP | Docker version | kubelet version | kubectl version | flannel version | Role | Packages installed via yum |
|---|---|---|---|---|---|---|---|---|
| k8s-m1 | CentOS 7.7.1908 | 192.168.8.30 | 18.06.3-ce | v1.15.1 | v1.15.1 | | master | docker, kubelet, kubeadm, kubectl |
| k8s-n1 | CentOS 7.7.1908 | 192.168.8.31 | 18.06.3-ce | v1.15.1 | None | | node | docker, kubelet, kubeadm |
| k8s-n2 | CentOS 7.7.1908 | 192.168.8.32 | 18.06.3-ce | v1.15.1 | None | | node | docker, kubelet, kubeadm |
| harbor | CentOS 7.7 | 192.168.8.29 | 18.06.3-ce | | | | Docker Private Registry | |

Add hosts entries on all hosts:

$ cat >> /etc/hosts <<EOF
192.168.8.30 k8s-m1
192.168.8.31 k8s-n1
192.168.8.32 k8s-n2
192.168.8.29	harbor.test.cn
EOF
$ cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

$ uname -r
4.4.215-1.el7.elrepo.x86_64
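
A quick way to confirm the hosts entries resolve on every machine (an optional check, not part of the original notes):

getent hosts k8s-m1 k8s-n1 k8s-n2 harbor.test.cn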

1.2. Environment Deployment

#!/bin/bash
#set -e
#
# Desc: k8s host initialization script: installs required packages, tunes kernel parameters, etc.
# CreateDate: 2019-03-01 14:44:13
# LastModify: 
# Author: larry
#
# History:
#
#
# ---------------------- Script Begin ----------------------
#

# Set the hostname
echo -n "Enter hostname: "
read hostname

if [ -n "${hostname}" ];then
	echo "Setting hostname to [${hostname}]"
	hostnamectl set-hostname "$hostname"
fi

# Configure the base yum repo (Aliyun mirror)
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache fast
# Configure the Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
## Rebuild the yum cache
yum clean all
yum makecache fast
yum -y update



# Install dependencies
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

# Switch the firewall to iptables with an empty ruleset
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services
systemctl start iptables
systemctl enable iptables
iptables -F
service iptables save

# Disable SELinux and swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Tune kernel parameters (comments on their own lines so sysctl parses the file cleanly)
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# Do not use swap; only fall back to it when the system would otherwise OOM
vm.swappiness=0
# Do not check whether enough physical memory is available
vm.overcommit_memory=1
# Do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

# Set the timezone
# Set the system timezone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
systemctl stop postfix && systemctl disable postfix

# Configure rsyslogd and systemd-journald
mkdir /var/log/journal
# Directory for persistent log storage
mkdir /etc/systemd/journald.conf.d

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk usage: 10G
SystemMaxUse=10G
# Maximum size of a single log file: 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald

# Upgrade the kernel
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default (takes effect after a reboot)
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg

yum install yum-utils -y
package-cleanup --oldkernels

# Enable IPVS
modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules 
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
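
After the script finishes, a short sanity check (a minimal sketch, assuming the script ran without errors) confirms the key settings took effect; note that the upgraded kernel is only used after a reboot:

# Key kernel parameters
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.swappiness
# Swap should be off (0 total)
free -m | grep -i swap
# SELinux should report Permissive (Disabled after the next reboot)
getenforce
# After a reboot this should show the elrepo kernel
uname -r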

2. Installation

2.1. Install Docker

#!/bin/bash
# Install docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
	--add-repo \
	http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum update -y && yum install -y docker-ce
## Create the /etc/docker directory
mkdir -p /etc/docker

# Configure the docker daemon
cat > /etc/docker/daemon.json <<EOF
{
	"exec-opts": ["native.cgroupdriver=systemd"],
	"log-driver": "json-file",
	"log-opts": {"max-size": "100m"}
}
EOF

mkdir -p /etc/systemd/system/docker.service.d
# Reload systemd and restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
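
The daemon.json above switches docker to the systemd cgroup driver; a quick check (not part of the original script) that it took effect:

docker info | grep -i 'cgroup driver'
# Expected: Cgroup Driver: systemd
docker --version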

2.2. Install kubeadm, kubelet, and kubectl

#!/bin/bash
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1

systemctl enable kubelet.service  && systemctl start kubelet
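
A quick check that the pinned 1.15.1 packages were installed (kubelet will restart in a loop until kubeadm init/join provides its configuration, which is expected at this stage):

kubeadm version -o short
kubelet --version
kubectl version --client --short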

2.3. Load the Images

#!/bin/bash

## Pull the control-plane images from the Aliyun mirror and retag them with the k8s.gcr.io names
set -e

KUBE_VERSION=v1.15.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
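
Once the loop finishes, the retagged images should all be present under the k8s.gcr.io names that kubeadm expects; an optional cross-check:

# Images now present locally
docker images | grep 'k8s.gcr.io'
# Images kubeadm will look for
kubeadm config images list --kubernetes-version v1.15.1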

2.4. Initialize the Master Node

  1. Generate the default init configuration

    kubeadm config print init-defaults > kubeadm-config.yaml

  2. Edit the configuration file

    • Change advertiseAddress to this host's IP
    • Change kubernetesVersion to match the installed kubeadm version
    • Add podSubnet: "10.244.0.0/16". podSubnet and serviceSubnet are the subnets for pods and services respectively; the pod subnet is used later by the flannel network.
    • Append the IPVS-related settings at the end (a check for IPVS mode is shown after this list)
     ...
     kind: InitConfiguration
     localAPIEndpoint:
       advertiseAddress: 192.168.8.30
       bindPort: 6443
     ...
    	
     kubernetesVersion: v1.15.1
     networking:
       dnsDomain: cluster.local
       podSubnet: "10.244.0.0/16"
       serviceSubnet: 10.96.0.0/12
     scheduler: {}
     ---
     apiVersion: kubeproxy.config.k8s.io/v1alpha1
     kind: KubeProxyConfiguration
     featureGates:
       SupportIPVSProxyMode: true
     mode: ipvs
    
  3. Run the initialization

     kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
    

    The initialization output is saved to the kubeadm-init.log file.

    When output like the following appears, initialization has finished. Keep the kubeadm join command from the end of the output; it is needed when nodes join the cluster and is not easy to dig up again if lost.

     ...
     Your Kubernetes control-plane has initialized successfully!
    
     To start using your cluster, you need to run the following as a regular user:
    	
       mkdir -p $HOME/.kube
       sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
       sudo chown $(id -u):$(id -g) $HOME/.kube/config
    	
     You should now deploy a pod network to the cluster.
     Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
       https://kubernetes.io/docs/concepts/cluster-administration/addons/
    	
     Then you can join any number of worker nodes by running the following on each as root:
    	
     kubeadm join 192.168.8.30:6443 --token abcdef.0123456789abcdef \
         --discovery-token-ca-cert-hash sha256:bccd64d81892c59ddc0c5a571d601cc975d2d7fb58235ff3e69060284669062d
    

    Run the commands from the prompt:

      	mkdir -p $HOME/.kube
      	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      	sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  4. Verify

     $ kubectl get nodes
     NAME     STATUS     ROLES    AGE   VERSION
     k8s-m1   NotReady   master   12m   v1.15.1
    	
     $ kubectl get cs
     NAME                 STATUS    MESSAGE             ERROR
     controller-manager   Healthy   ok
     scheduler            Healthy   ok
     etcd-0               Healthy   {"health":"true"}
    

    This confirms the master node has been initialized.
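
Since the configuration above switches kube-proxy to IPVS mode, this can also be confirmed after initialization (a small check, assuming ipvsadm was installed by the init script in section 1.2):

# kube-proxy should report that it is using the IPVS proxier
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
# IPVS virtual servers for the service subnet (10.96.0.0/12) should be listed
ipvsadm -Ln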

2.5. Configure the flannel Network

Install flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check pod status:

$ kubectl get pod -n kube-system
NAME                             READY   STATUS     RESTARTS   AGE
coredns-5c98db65d4-9tshl         0/1     Pending    0          20m
coredns-5c98db65d4-p5twb         0/1     Pending    0          20m
etcd-k8s-m1                      1/1     Running    0          19m
kube-apiserver-k8s-m1            1/1     Running    0          19m
kube-controller-manager-k8s-m1   1/1     Running    0          19m
kube-flannel-ds-amd64-lqmzk      0/1     Init:0/1   0          3m57s
kube-proxy-rhwxc                 1/1     Running    0          20m
kube-scheduler-k8s-m1            1/1     Running    0          20m

After flannel and coredns finish initializing, check node status:

$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
k8s-m1   Ready    master   100m   v1.15.1
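
Optionally, the flannel VXLAN interface and the pod CIDR assigned to each node can be inspected (a quick check; the default backend in kube-flannel.yml is vxlan):

ip -d addr show flannel.1
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'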

2.6. Join the Worker Nodes

Run the following on each worker node (it is the command printed in the init log):

kubeadm join 192.168.8.30:6443 --token abcdef.0123456789abcdef \
	    --discovery-token-ca-cert-hash sha256:bccd64d81892c59ddc0c5a571d601cc975d2d7fb58235ff3e69060284669062d
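
If the join command from the init log was lost, or the bootstrap token has expired (tokens are valid for 24 hours by default), a new join command can be generated on the master:

kubeadm token create --print-join-command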

After the flannel pods on the worker nodes are up, check node status:

$ kubectl -n kube-system get po -l app=flannel -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
kube-flannel-ds-amd64-7sjmh   1/1     Running   0          18h   192.168.8.31   k8s-n1   <none>           <none>
kube-flannel-ds-amd64-lqmzk   1/1     Running   0          19h   192.168.8.30   k8s-m1   <none>           <none>
kube-flannel-ds-amd64-prlpg   1/1     Running   0          18h   192.168.8.32   k8s-n2   <none>           <none>

$ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-m1   Ready    master   20h   v1.15.1   192.168.8.30   <none>        CentOS Linux 7 (Core)   4.4.215-1.el7.elrepo.x86_64   docker://18.6.3
k8s-n1   Ready    <none>   18h   v1.15.1   192.168.8.31   <none>        CentOS Linux 7 (Core)   4.4.215-1.el7.elrepo.x86_64   docker://18.6.3
k8s-n2   Ready    <none>   18h   v1.15.1   192.168.8.32   <none>        CentOS Linux 7 (Core)   4.4.215-1.el7.elrepo.x86_64   docker://18.6.3

3. Install the Harbor Private Registry

3.1. Prerequisites

| Software | Version | Notes |
|---|---|---|
| Python | 2.7+ | |
| Docker | 1.10+ | |
| Docker Compose | 1.6.0+ | |

Install Docker (as in section 2.1).

Install Docker Compose:

#!/bin/bash
curl -L "https://github.com/docker/compose/releases/download/1.9.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose -v

3.2. Install Harbor

Harbor official website (link)

GitHub repository (link)

Download: https://github.com/vmware/harbor/releases/download/v1.2.0/harbor-offline-installer-v1.2.0.tgz

  1. Download

    wget https://github.com/vmware/harbor/releases/download/v1.2.0/harbor-offline-installer-v1.2.0.tgz

  2. Extract

    tar xf harbor-offline-installer-v1.2.0.tgz

  3. Edit the configuration

     cd harbor
     vim harbor.cfg
    	
    

    Key settings in the configuration file (a minimal example follows this list):

     hostname: the target host's domain name or fully qualified domain name (FQDN)
     ui_url_protocol: the protocol to use, http or https; defaults to http
     db_password: the root password of the MySQL database used for db_auth; change it for any production use
     max_job_workers: (default 3) the maximum number of replication workers in the job service. For each image replication job, a worker syncs all tags of a repository to the remote target. Increasing this allows more concurrent replication jobs, but each worker consumes a certain amount of network/CPU/IO, so choose the value according to the host's hardware resources.
     customize_crt: (on or off, default on) when on, the prepare script creates a private key and root certificate for generating/verifying the registry token
     ssl_cert: path to the SSL certificate; only applied when the protocol is set to https
     ssl_cert_key: path to the SSL key; only applied when the protocol is set to https
     secretkey_path: path of the key used to encrypt or decrypt the password of a remote registry in replication policies
    
  4. Create the HTTPS certificate and set up the certificate directory

     $ openssl genrsa -des3 -out server.key 2048
    	
     Generating RSA private key, 2048 bit long modulus
     ...................+++
     ...........................................................+++
     e is 65537 (0x10001)
     Enter pass phrase for server.key:
     # Enter a passphrase, 4-1023 bytes long
    	
     $ openssl req -new -key server.key -out server.csr
    	
     # Enter the passphrase set above
     Enter pass phrase for server.key:
     You are about to be asked to enter information that will be incorporated
     into your certificate request.
     What you are about to enter is what is called a Distinguished Name or a DN.
     There are quite a few fields but you can leave some blank
     For some fields there will be a default value,
     If you enter '.', the field will be left blank.
     -----
     # Enter country, province, city, company, hostname, email, etc.
     Country Name (2 letter code) [XX]:CN
     State or Province Name (full name) []:BJ
     Locality Name (eg, city) [Default City]:BJ
     Organization Name (eg, company) [Default Company Ltd]:gg
     Organizational Unit Name (eg, section) []:section
     Common Name (eg, your name or your server's hostname) []:harbor.test.cn
     Email Address []:larryzl@test.cn
     # The fields below can be left blank
     Please enter the following 'extra' attributes
     to be sent with your certificate request
     A challenge password []:
     An optional company name []:
    	
     $ cp server.key server.key.org
    	
     $ openssl rsa -in server.key.org -out server.key
     # Enter the passphrase set in the first step
     Enter pass phrase for server.key.org:
     writing RSA key
    	
     $ openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
     Signature ok
     subject=/C=CN/ST=BJ/L=BJ/O=gg/OU=section/CN=harbor.test.cn/emailAddress=larryzl@test.cn
     Getting Private key
    	
     $ mkdir  /data/cert
     $ chmod -R 777 /data/cert
     $ mv server.* /data/cert
    
  5. Install

    Run the install script in the harbor directory: ./install.sh
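
For reference, the handful of harbor.cfg settings changed for this setup would look roughly like the following (a sketch using the hostname and certificate paths from the steps above; everything else keeps its default):

hostname = harbor.test.cn
ui_url_protocol = https
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key
# db_password should be changed for any production use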

3.3. Testing

  1. Web UI test

    Visit https://yourdomain.com (the domain configured above; if it does not resolve, edit your hosts file). The default admin username/password is admin / Harbor12345.

  2. Image push test

    1. Point docker at the registry

      Edit daemon.json and add the insecure-registries option so docker will use this registry even without a trusted HTTPS/TLS certificate (the complete file is shown after this list):

       vim /etc/docker/daemon.json
       # Add:
       "insecure-registries": ["harbor.test.cn"]
      

      Restart docker:

       systemctl daemon-reload && systemctl restart docker
      
    2. Push test (note: all hosts must be able to resolve the harbor domain; add hosts entries if they cannot)

       # Pull a test image
       $ docker pull hello-world
      
       # Retag it for the private registry
       $ docker tag hello-world harbor.test.cn/library/hello-world:latest
      
       # Log in (username admin, password Harbor12345)
       $ docker login harbor.test.cn
       Username: admin
       Password:
       WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
       Configure a credential helper to remove this warning. See
       https://docs.docker.com/engine/reference/commandline/login/#credentials-store
      		
       Login Succeeded
      		
       # Push the image
       $ docker push harbor.test.cn/library/hello-world:latest
      
       # Pull the image from another machine
       $ docker pull harbor.test.cn/library/hello-world:latest
      		
      
    3. Management

      Run docker-compose stop / docker-compose start in the harbor directory to stop or start Harbor.

       $ docker-compose stop
       Stopping harbor-jobservice ... done
       Stopping nginx ... done
       Stopping harbor-ui ... done
       Stopping harbor-adminserver ... done
       Stopping registry ... done
       Stopping harbor-db ... done
       Stopping harbor-log ... done
      
       $ docker-compose start
       Starting log ... done
       Starting adminserver ... done
       Starting registry ... done
       Starting ui ... done
       Starting mysql ... done
       Starting jobservice ... done
       Starting proxy ... done
      
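
For reference, after adding the insecure registry the complete /etc/docker/daemon.json on the cluster nodes combines this option with the settings from section 2.1 (a sketch; merge it into your own file):

{
	"exec-opts": ["native.cgroupdriver=systemd"],
	"log-driver": "json-file",
	"log-opts": {"max-size": "100m"},
	"insecure-registries": ["harbor.test.cn"]
}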

3.4. Building a Secure Docker Private Registry (Optional)

A private registry is usually built to host an organization's own pool of Docker images, so its security mechanisms deserve particular attention. Security has two aspects:

  1. The authority of the Registry itself. This is the basic level of security: if a Registry matters, Docker clients that fetch images from it must be able to verify its authenticity through certificates.
  2. The security of the interaction between Docker clients and the Registry. If a Registry is exposed on the Internet but its images should only be accessible to one organization, verifying the Registry's authority alone is not enough; passwords or dual-way SSL are also needed so that only authorized Docker clients within the organization (the machines that build and upload images) can push, and only clients that need the images can pull.

The following shows how to use Let's Encrypt to build a free, secure Private Registry.

3.4.1. Install and Use Let's Encrypt

# Download the executable
$ wget https://dl.eff.org/certbot-auto

# Make it executable
$ chmod +x certbot-auto

# Replace *.xxx.com with your own domain
$ ./certbot-auto --server https://acme-v02.api.letsencrypt.org/directory -d "*.xxx.com" --manual --preferred-challenges dns-01 certonly

WARNING: unable to check for updates.
Creating virtual environment...
Installing Python packages...
Installation succeeded.
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): <enter your email address>

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: a

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for xxx.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running certbot in manual mode on a machine that is not
your server, please ensure you're okay with that.

Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.harbor.xxxx.cn with the following value:

Nd7pJtpSCc44bWj4ShXy-DtxsMOBriA0hX5oJUZ26ec

Before continuing, verify the record is deployed.
##
## NOTE: do not press Enter yet at this step; first add the corresponding DNS TXT record:
##  Nd7pJtpSCc44bWj4ShXy-DtxsMOBriA0hX5oJUZ26ec is the record value
## _acme-challenge.harbor is the TXT record name
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue
Waiting for verification...
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/harbor.tansuotv.cn/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/harbor.tansuotv.cn/privkey.pem
   Your cert will expire on 2020-06-10. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot-auto
   again. To non-interactively renew *all* of your certificates, run
   "certbot-auto renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le


Output like the above means the certificate has been issued; the certificate files appear under /etc/letsencrypt/archive/xxx.xxx.cn/:

$ ls /etc/letsencrypt/archive/xxx.xxx.cn/
cert1.pem  chain1.pem  fullchain1.pem  privkey1.pem

| File | Description |
|---|---|
| cert.pem | The server certificate |
| chain.pem | All certificates the browser needs except the server certificate, e.g. root and intermediate certificates |
| fullchain.pem | cert.pem and chain.pem combined |
| privkey.pem | The certificate's private key |

Create symlinks:

$ ln -s /etc/letsencrypt/live/xxx.xxx.cn/fullchain.pem /data/cert/docker.MySite.com.crt
$ ln -s /etc/letsencrypt/live/xxx.xxx.cn/privkey.pem /data/cert/docker.MySite.com.key

Edit harbor.cfg:

# Set the certificate files
ssl_cert = /data/cert/docker.MySite.com.crt
ssl_cert_key = /data/cert/docker.MySite.com.key

Run the prepare script to regenerate the configuration files:

$ ./harbor/prepare

Start the containers:

docker-compose up -d
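
Once the containers are up, the certificate actually served by Harbor's nginx can be verified from any client (a quick check; replace the domain with your own):

echo | openssl s_client -connect harbor.xxx.cn:443 -servername harbor.xxx.cn 2>/dev/null | openssl x509 -noout -subject -issuer -dates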
