CentOS 7: Kubernetes 1.8.4 Environment Setup

2018/01/09 k8s

1. Environment

IP address     Hostname          OS       Role
10.20.40.107   k8s-master1.com   centos7  master
10.20.40.112   k8s-node1.com     centos7  node
10.20.40.102   k8s-node2.com     centos7  node

Software versions:

kubernetes 1.8.4

docker 17.03.2.ce

2. Preparation

Perform the following steps on all hosts.

2.1. Configure the hosts

2.1.1. Set the host names

Run the matching command on each host:

hostnamectl --static set-hostname k8s-master1.com
hostnamectl --static set-hostname k8s-node1.com
hostnamectl --static set-hostname k8s-node2.com

2.1.2. Configure /etc/hosts

cat >> /etc/hosts << EOF
10.20.40.107 k8s-master1.com
10.20.40.112 k8s-node1.com
10.20.40.102 k8s-node2.com
EOF

2.1.3. Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
iptables -P FORWARD ACCEPT
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
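
If sysctl reports that the net.bridge keys do not exist, the br_netfilter kernel module is probably not loaded yet. A minimal fix (the module name is standard on CentOS 7, but verify against your kernel):

# load the module now and on every boot, then re-apply the sysctls
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p /etc/sysctl.d/k8s.conf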

2.1.4. Disable swap

swapoff -a

To disable swap permanently, comment out the swap entry in /etc/fstab:

vim /etc/fstab
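
Alternatively, a non-interactive sketch that comments out any uncommented swap line (fstab layouts vary, so double-check the file afterwards):

sed -i '/^[^#].*\sswap\s/s/^/#/' /etc/fstab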

2.2. Install Docker

Perform the following steps on all hosts.

Kubernetes 1.8.4 currently supports Docker up to 17.03.

2.2.1. Add the Docker CE repository

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast

2.2.2. Install the specified Docker version

yum install -y --setopt=obsoletes=0 \
	docker-ce-17.03.2.ce-1.el7.centos \
	docker-ce-selinux-17.03.2.ce-1.el7.centos

systemctl start docker && systemctl enable docker

2.2.3. Adjust the Docker configuration

vi /lib/systemd/system/docker.service
# add the following line to the [Service] section, alongside ExecStart
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
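
Note that edits under /lib/systemd/system can be overwritten by package upgrades. A safer sketch of the same change is a systemd drop-in override (the file name here is arbitrary):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/10-iptables-forward.conf << EOF
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF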

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# restart docker
systemctl daemon-reload && systemctl restart docker
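
After the restart, confirm that Docker actually picked up the systemd cgroup driver; a mismatch with the kubelet is a common cause of startup failures:

# should print "Cgroup Driver: systemd"
docker info 2>/dev/null | grep -i 'cgroup driver'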

3. Install Kubernetes

3.1. Environment preparation

3.1.1. Download the Kubernetes RPMs

#!/bin/sh
set -e

#download k8s rpm
cd /usr/local/src
mkdir -p k8s && cd k8s

wget https://github.com/TaurusFruit/docker_k8s1.8.4/raw/master/rpms/aeaad1e283c54876b759a089f152228d7cd4c049f271125c23623995b8e76f96-kubeadm-1.8.4-0.x86_64.rpm
wget https://github.com/TaurusFruit/docker_k8s1.8.4/raw/master/rpms/a9db28728641ddbf7f025b8b496804d82a396d0ccb178fffd124623fb2f999ea-kubectl-1.8.4-0.x86_64.rpm
wget https://github.com/TaurusFruit/docker_k8s1.8.4/raw/master/rpms/1acca81eb5cf99453f30466876ff03146112b7f12c625cb48f12508684e02665-kubelet-1.8.4-0.x86_64.rpm
wget https://github.com/TaurusFruit/docker_k8s1.8.4/raw/master/rpms/79f9ba89dbe7000e7dfeda9b119f711bb626fe2c2d56abeb35141142cda00342-kubernetes-cni-0.5.1-1.x86_64.rpm

3.1.2. Install and start the kubelet

#install k8s
cd /usr/local/src/k8s
yum -y localinstall *.rpm
yum -y install socat
systemctl daemon-reload && systemctl restart kubelet && systemctl enable kubelet && systemctl status kubelet
journalctl -u kubelet --no-pager
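
The kubelet must use the same cgroup driver as Docker. The kubeadm RPMs of this era configured it in a systemd drop-in; the path below is what those packages installed, but verify it on your system:

# expect: Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf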

3.1.3. Prepare the Docker images

gcr.io/google_containers/kube-apiserver-amd64  v1.8.4
gcr.io/google_containers/kube-controller-manager-amd64  v1.8.4
gcr.io/google_containers/kube-proxy-amd64  v1.8.4
gcr.io/google_containers/kube-scheduler-amd64  v1.8.4
quay.io/coreos/flannel    v0.9.1-amd64
gcr.io/google_containers/k8s-dns-sidecar-amd64  1.14.5
gcr.io/google_containers/k8s-dns-kube-dns-amd64  1.14.5
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64  1.14.5
gcr.io/google_containers/etcd-amd64  3.0.17
gcr.io/google_containers/pause-amd64  3.0

Pull the images to the local hosts with the following command:

curl https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/script/pull_k8s_img.sh | sh
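
gcr.io is often unreachable from domestic networks, so scripts like this one typically pull each image from a mirror registry and retag it to the gcr.io name that kubeadm expects. A minimal sketch of that pattern (the mirror address is a placeholder, not the actual content of the script above):

#!/bin/sh
set -e
# hypothetical registry mirroring the gcr.io/google_containers images
MIRROR=registry.example.com/google_containers
for IMG in kube-apiserver-amd64:v1.8.4 kube-controller-manager-amd64:v1.8.4 \
           kube-scheduler-amd64:v1.8.4 kube-proxy-amd64:v1.8.4 \
           etcd-amd64:3.0.17 pause-amd64:3.0; do
    docker pull $MIRROR/$IMG                                 # fetch from the mirror
    docker tag  $MIRROR/$IMG gcr.io/google_containers/$IMG   # retag to the expected name
    docker rmi  $MIRROR/$IMG                                 # drop the mirror tag
done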

3.2. Initialize the master

3.2.1. Run kubeadm init

kubeadm init --apiserver-advertise-address=10.20.40.107 --kubernetes-version=v1.8.4 --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master1.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.20.40.107]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 48.501492 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s-master1.com as master by adding a label and a taint
[markmaster] Master k8s-master1.com tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 8b6931.f94d5dd51751aca5
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 8b6931.f94d5dd51751aca5 10.20.40.107:6443 --discovery-token-ca-cert-hash sha256:de5b53564d2d7cbae8ed7ca2be77cd44b96f57f525a1e74a95681642cfb894d2
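
If `kubeadm init` fails partway through (for example because an image cannot be pulled), run `kubeadm reset` to clean up the partially initialized state before retrying:

kubeadm reset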

3.2.2. Configure kubectl access to the cluster

$ mkdir -p $HOME/.kube && \
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && \
	sudo chown $(id -u):$(id -g) $HOME/.kube/config
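
Alternatively, the root user can point kubectl at the admin kubeconfig directly:

$ export KUBECONFIG=/etc/kubernetes/admin.conf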

Check the cluster status:

$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   etcd-k8s-master1.com                      1/1       Running   0          2m        10.20.40.107   k8s-master1.com
kube-system   kube-apiserver-k8s-master1.com            1/1       Running   0          3m        10.20.40.107   k8s-master1.com
kube-system   kube-controller-manager-k8s-master1.com   1/1       Running   0          2m        10.20.40.107   k8s-master1.com
kube-system   kube-dns-545bc4bfd4-75742                 0/3       Pending   0          3m        <none>         <none>
kube-system   kube-proxy-8hf2w                          1/1       Running   0          3m        10.20.40.107   k8s-master1.com
kube-system   kube-scheduler-k8s-master1.com            1/1       Running   0          2m        10.20.40.107   k8s-master1.com

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

3.2.3. Install the pod network

$ kubectl apply -f https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/flannel/kube-flannel.yml

Re-running `kubectl get pod --all-namespaces -o wide` should now show kube-dns-545bc4bfd4-75742 in the Running state. If you run into problems, inspect with the following commands:

$ kubectl -n kube-system describe pod kube-dns-545bc4bfd4-75742
$ journalctl -u kubelet --no-pager
$ journalctl -u docker --no-pager

3.3. Configure the nodes

3.3.1. Add nodes

First install kubelet, kubeadm, etc. on each node as in section 3.1, then join the cluster.

Run the join command printed by `kubeadm init` on the master: `kubeadm join --token 8b6931.f94d5dd51751aca5 10.20.40.107:6443 --discovery-token-ca-cert-hash sha256:de5b53564d2d7cbae8ed7ca2be77cd44b96f57f525a1e74a95681642cfb894d2`

  • Look up the token value (see the note after this list if the token has expired):

      kubeadm token list | grep authentication,signing | awk '{print $1}'
    
  • Look up the discovery-token-ca-cert-hash value:

      openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    
  • Join the node:

      kubeadm join --token 8b6931.f94d5dd51751aca5 10.20.40.107:6443 --discovery-token-ca-cert-hash sha256:de5b53564d2d7cbae8ed7ca2be77cd44b96f57f525a1e74a95681642cfb894d2
    
  • Check the node status:

      $ kubectl get nodes
      NAME              STATUS    ROLES     AGE       VERSION
      k8s-master1.com   Ready     master    2h        v1.8.4
      k8s-node1.com     Ready     <none>    1h        v1.8.4
      k8s-node2.com     Ready     <none>    1h        v1.8.4
    
      $ kubectl get pod --all-namespaces -o wide
      NAMESPACE     NAME                                       READY     STATUS    RESTARTS   AGE       IP             NODE
      default       curl-6896d87888-tp9zv                      1/1       Running   0          1h        10.244.0.3     k8s-master1.com
      kube-system   calico-etcd-kr4cz                          1/1       Running   0          9m        10.20.40.107   k8s-master1.com
      kube-system   calico-kube-controllers-77789796b5-g2qp8   1/1       Running   0          9m        10.20.40.112   k8s-node1.com
      kube-system   calico-node-ggzff                          2/2       Running   0          9m        10.20.40.107   k8s-master1.com
      kube-system   calico-node-wjg7m                          2/2       Running   0          9m        10.20.40.112   k8s-node1.com
      kube-system   calico-node-wmtb2                          2/2       Running   0          9m        10.20.40.102   k8s-node2.com
      kube-system   etcd-k8s-master1.com                       1/1       Running   0          2h        10.20.40.107   k8s-master1.com
      kube-system   kube-apiserver-k8s-master1.com             1/1       Running   0          2h        10.20.40.107   k8s-master1.com
      kube-system   kube-controller-manager-k8s-master1.com    1/1       Running   0          2h        10.20.40.107   k8s-master1.com
      kube-system   kube-dns-545bc4bfd4-75742                  3/3       Running   0          2h        10.244.0.2     k8s-master1.com
      kube-system   kube-flannel-ds-22cvs                      1/1       Running   0          2h        10.20.40.107   k8s-master1.com
      kube-system   kube-flannel-ds-j9vq5                      1/1       Running   0          1h        10.20.40.102   k8s-node2.com
      kube-system   kube-flannel-ds-rf25s                      1/1       Running   0          1h        10.20.40.112   k8s-node1.com
      kube-system   kube-proxy-7k5vb                           1/1       Running   0          1h        10.20.40.102   k8s-node2.com
      kube-system   kube-proxy-8hf2w                           1/1       Running   0          2h        10.20.40.107   k8s-master1.com
      kube-system   kube-proxy-kzvjf                           1/1       Running   0          1h        10.20.40.112   k8s-node1.com
      kube-system   kube-scheduler-k8s-master1.com             1/1       Running   0          2h        10.20.40.107   k8s-master1.com
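
As the init output warned, bootstrap tokens expire after 24 hours by default. If `kubeadm token list` comes back empty when you join a node later, create a fresh token on the master (and re-derive the CA hash with the openssl command above):

kubeadm token create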
    

Note that by default the master does not schedule regular workloads, and using it as a node is not recommended. If you do want the master to act as a node, remove the taint:

$ kubectl taint nodes k8s-master1.com node-role.kubernetes.io/master-
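
To restore the default behavior later, re-apply the taint:

$ kubectl taint nodes k8s-master1.com node-role.kubernetes.io/master=:NoSchedule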

3.3.2. Remove a node

  • On the master:

      $ kubectl drain k8s-node1.com --delete-local-data --force --ignore-daemonsets
      $ kubectl delete node k8s-node1.com
    
  • On the node:

      kubeadm reset
      ifconfig cni0 down
      ip link delete cni0
      ifconfig flannel.1 down
      ip link delete flannel.1
      rm -rf /var/lib/cni/
    
  • Check the cluster nodes:

      kubectl get nodes
    

3.4. Install the Dashboard

3.4.1. Prepare the Dashboard image

Required image: gcr.io/google_containers/kubernetes-dashboard-amd64 v1.8.0

Pull it on every node:

curl https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/script/pull_k8s_dashboard_img.sh | sh

3.4.2. Deploy the Dashboard

$ kubectl apply -f https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/kubernetes-dashboard-amd64/kubernetes-dashboard.yaml
$ kubectl apply -f https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/kubernetes-dashboard-amd64/kubernetes-dashboard-admin.rbac.yaml

Confirm the Dashboard status:

$ kubectl get pod --all-namespaces -o wide
kube-system   kubernetes-dashboard-7486b894c6-rtjtp     1/1       Running   1          17h       10.244.3.2     k8s-node1.com

3.4.3. Access the Dashboard

https://10.20.40.107:30000
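
Port 30000 assumes the manifest above exposes the Dashboard service as a NodePort; confirm the actual port with:

$ kubectl -n kube-system get svc kubernetes-dashboard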

Or run the following on any machine with kubectl configured (for example, my Mac):

$ kubectl proxy

Then open: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Look up the login token:

$ kubectl describe -n kube-system secret/$(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin|awk '{print $1}')
Name:         kubernetes-dashboard-admin-token-gnhkq
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=kubernetes-dashboard-admin
              kubernetes.io/service-account.uid=1532a423-f45b-11e7-aef5-fa163ec63a6f

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1nbmhrcSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjE1MzJhNDIzLWY0NWItMTFlNy1hZWY1LWZhMTYzZWM2M2E2ZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.u2qs3MQFKh5uHknj79qHosCRdbgJzYtLi2PeMDh-vepXdBAi1Llm8A4CUJg-i3nP2bs0LogUYet-iFSL0ZYeMPt9E8LNCpVt3neVxoSjAqbhQigYGLSyizoMl9XAZodib6zYuLoEwnKoKqWZ3X5NOUdrLOaQWgwPyzzoLoenUTd_Axdy640U4eojNYu7jgok6lEzrgAWgZDdHP3Yf2MR10fR5HI_N5NDnU7LRTfm1SRkqPZmEk5K2EgBPT94WUlwFKN3EwNd7ZwEYuqWLmMxT_s17bi1m34211leTUnJ7iMZcncZxWXDJHXQh8UKa192deV7cHnKdgFyMkTI2kx1dw

3.5. Install Heapster

3.5.1. Prepare the images

Required images:

gcr.io/google_containers/heapster-amd64:v1.4.2
gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3

Pull them to the local hosts:

curl https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/script/pull_k8s_heapster_img.sh | sh

3.5.2. Deploy

$ kubectl apply -f https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/heapster-amd64/heapster-rbac.yaml
$ kubectl apply -f https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/heapster-grafana-amd64/grafana.yaml
$ kubectl apply -f https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/heapster-influxdb-amd64/influxdb.yaml
$ kubectl apply -f https://raw.githubusercontent.com/TaurusFruit/docker_k8s1.8.4/master/heapster-amd64/heapster.yaml
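
Once the kube-system pods are up, Heapster feeds the Dashboard graphs and also backs `kubectl top`; a quick sanity check (metrics may take a few minutes to appear):

$ kubectl -n kube-system get pods | grep -E 'heapster|influxdb|grafana'
$ kubectl top node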
