Kubernetes Binary Deployment: Single-Master Setup
Steps for a binary Kubernetes deployment:
1. Self-sign the etcd certificates
2. Deploy etcd
3. Install Docker on the nodes
4. Deploy flannel (write the subnet configuration into etcd first)

---------- master ----------

5. Self-sign the apiserver certificates
6. Deploy the apiserver component (token.csv)
7. Deploy the controller-manager (pointing at the apiserver certificates) and scheduler components

---------- node ----------

8. Generate the kubeconfigs (bootstrap.kubeconfig and kube-proxy.kubeconfig)
9. Deploy the kubelet component
10. Deploy the kube-proxy component

---------- join the cluster ----------

11. `kubectl get csr && kubectl certificate approve` to approve certificate issuance and join the cluster
12. Add a node
13. Verify with `kubectl get node`
Environment:
**Master node:**
- *CentOS 7-3: 192.168.18.128*

**Node nodes:**
- *CentOS 7-4: 192.168.18.129 (Docker)*
- *CentOS 7-5: 192.168.18.130 (Docker)*
Software packages required for this lab
Kubernetes release downloads: https://github.com/kubernetes/kubernetes/releases?after=v1.13.1
Master deployment:
```
[ ~]# mkdir k8s
[ ~]# cd k8s/
[ k8s]# mkdir etcd-cert
[ k8s]# mv etcd-cert.sh etcd-cert
[ k8s]# ls
etcd-cert  etcd.sh
[ k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[ k8s]# bash cfssl.sh
[ k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
[ k8s]# cd etcd-cert/

# Define the CA configuration
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# CA signing request
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Nanjing",
      "ST": "Nanjing"
    }
  ]
}
EOF

# Generate the CA certificate: produces ca-key.pem and ca.pem
[ etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/01/15 11:26:22 [INFO] generating a new CA key and certificate from CSR
2020/01/15 11:26:22 [INFO] generate received request
2020/01/15 11:26:22 [INFO] received CSR
2020/01/15 11:26:22 [INFO] generating key: rsa-2048
2020/01/15 11:26:23 [INFO] encoded CSR
2020/01/15 11:26:23 [INFO] signed certificate with serial number 58994014244974115135502281772101176509863440005

# Server certificate covering all three etcd nodes, for peer and client TLS
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.18.128",
    "192.168.18.129",
    "192.168.18.130"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing"
    }
  ]
}
EOF

# Generate the etcd server certificate: produces server-key.pem and server.pem
[ etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/01/15 11:28:07 [INFO] generate received request
2020/01/15 11:28:07 [INFO] received CSR
2020/01/15 11:28:07 [INFO] generating key: rsa-2048
2020/01/15 11:28:07 [INFO] encoded CSR
2020/01/15 11:28:07 [INFO] signed certificate with serial number 153451631889598523484764759860297996765909979890
2020/01/15 11:28:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
```
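To sanity-check the issued certificate — for example, that all three etcd IPs made it into the SAN list — it can be decoded with the cfssl-certinfo tool installed above:

```
[ etcd-cert]# cfssl-certinfo -cert server.pem    # prints the subject, SANs, and validity period
```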
Upload the following three tarballs, then unpack:
```
[ etcd-cert]# ls
ca-config.json  etcd-cert.sh                          server-csr.json
ca.csr          etcd-v3.3.10-linux-amd64.tar.gz       server-key.pem
ca-csr.json     flannel-v0.10.0-linux-amd64.tar.gz    server.pem
ca-key.pem      kubernetes-server-linux-amd64.tar.gz
ca.pem          server.csr
[ etcd-cert]# mv *.tar.gz ../
[ etcd-cert]# cd ../
[ k8s]# ls
cfssl.sh   etcd.sh                          flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[ k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[ k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[ k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[ k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
# Copy the certificates
[ k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
# The script now blocks, waiting for the other etcd members to join
[ k8s]# bash etcd.sh etcd01 192.168.18.128 etcd02=https://192.168.18.129:2380,etcd03=https://192.168.18.130:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
```
Now open a second terminal session to the master:
```
[ ~]# ps -ef | grep etcd
root   3479  1780  0 11:48 pts/0  00:00:00 bash etcd.sh etcd01 192.168.18.128 etcd02=https://192.168.18.129:2380,etcd03=https://192.168.18.130:2380
root   3530  3479  0 11:48 pts/0  00:00:00 systemctl restart etcd
root   3540     1  1 11:48 ?      00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.18.128:2380 --listen-client-urls=https://192.168.18.128:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.18.128:2379 --initial-advertise-peer-urls=https://192.168.18.128:2380 --initial-cluster=etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.129:2380,etcd03=https://192.168.18.130:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root   3623  3562  0 11:49 pts/1  00:00:00 grep --color=auto etcd
```
Copy the certificates and binaries to the nodes
```
[ k8s]# scp -r /opt/etcd/ root@192.168.18.129:/opt/
The authenticity of host '192.168.18.129 (192.168.18.129)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.129' (ECDSA) to the list of known hosts.
root@192.168.18.129's password:
etcd             100%  518   455.8KB/s   00:00
etcd             100%   18MB 105.0MB/s   00:00
etcdctl          100%   15MB 108.2MB/s   00:00
ca-key.pem       100% 1679     1.4MB/s   00:00
ca.pem           100% 1265   396.1KB/s   00:00
server-key.pem   100% 1675     1.0MB/s   00:00
server.pem       100% 1338   525.6KB/s   00:00
[ k8s]# scp -r /opt/etcd/ root@192.168.18.130:/opt/
The authenticity of host '192.168.18.130 (192.168.18.130)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.130' (ECDSA) to the list of known hosts.
root@192.168.18.130's password:
etcd             100%  518   816.5KB/s   00:00
etcd             100%   18MB  87.4MB/s   00:00
etcdctl          100%   15MB 108.6MB/s   00:00
ca-key.pem       100% 1679     1.3MB/s   00:00
ca.pem           100% 1265   411.8KB/s   00:00
server-key.pem   100% 1675     1.4MB/s   00:00
server.pem       100% 1338   639.5KB/s   00:00
```
Copy the etcd systemd unit file to the nodes
```
[ k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.18.129:/usr/lib/systemd/system/
root@192.168.18.129's password:
etcd.service   100%  923   283.4KB/s   00:00
[ k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.18.130:/usr/lib/systemd/system/
root@192.168.18.130's password:
etcd.service   100%  923   347.7KB/s   00:00
```
Node1 configuration
```
# Edit the configuration file
[ ~]# systemctl stop firewalld.service
[ ~]# setenforce 0
[ ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.129:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.129:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.129:2380,etcd03=https://192.168.18.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[ ~]# systemctl start etcd
[ ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:53:24 CST; 5s ago   # state is active (running)
```
Node2 configuration
```
# Edit the configuration file
[ ~]# systemctl stop firewalld.service
[ ~]# setenforce 0
[ ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.130:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.130:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.129:2380,etcd03=https://192.168.18.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[ ~]# systemctl start etcd
[ ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:55:24 CST; 5s ago   # state is active (running)
```
Verify cluster health from the master:
```
# Back on the master:
[ k8s]# cd etcd-cert/
[ etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379" cluster-health
member 9104d301e3b6da41 is healthy: got healthy result from https://192.168.18.129:2379
member 92947d71c72a884e is healthy: got healthy result from https://192.168.18.130:2379
member b2a6d67e1bc8054b is healthy: got healthy result from https://192.168.18.128:2379
cluster is healthy    # the cluster is healthy
```
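For a complementary view showing each member's name and peer/client URLs, the same TLS flags work with `member list`:

```
[ etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379" member list
```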
Deploy the Docker engine on both node servers
node1:
```
# Install dependencies
[ ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
# Add the Aliyun repository
[ ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE
[ ~]# yum install -y docker-ce
# Start Docker and enable it at boot
[ ~]# systemctl start docker.service
[ ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[ ~]# ps aux | grep docker
root   5551  0.1  3.6 565460 68652 ?     Ssl  09:13  0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root   5759  0.0  0.0 112676   984 pts/1 R+   09:16  0:00 grep --color=auto docker
# Configure a registry mirror
[ ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
# Enable IP forwarding (append, don't overwrite /etc/sysctl.conf)
[ ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
[ ~]# sysctl -p
[ ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[ ~]# systemctl restart docker
[ ~]# systemctl daemon-reload
[ ~]# systemctl restart docker
```
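To confirm the mirror and IP-forwarding settings took effect (optional check):

```
[ ~]# docker info | grep -A1 "Registry Mirrors"    # should list the Aliyun mirror
[ ~]# sysctl net.ipv4.ip_forward                   # should print net.ipv4.ip_forward = 1
```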
node2:
```
[ ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
[ ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[ ~]# yum install -y docker-ce
[ ~]# systemctl start docker.service
[ ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[ ~]# ps aux | grep docker
root   5570  0.5  3.5 565460 66740 ?     Ssl  09:18  0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root   5759  0.0  0.0 112676   984 pts/1 R+   09:18  0:00 grep --color=auto docker
[ ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
[ ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[ ~]# systemctl restart docker
[ ~]# systemctl daemon-reload
[ ~]# systemctl restart docker
```
Flannel network configuration
On the master, write the allocated Pod subnet into etcd for flannel to use
```
[ etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
```
Verify the stored configuration
```
[ etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
```
Copy the flannel package to all nodes
```
[ etcd-cert]# cd ../
[ k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.18.129:/root
root@192.168.18.129's password:
flannel-v0.10.0-linux-amd64.tar.gz   100% 9479KB  55.6MB/s   00:00
[ k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.18.130:/root
root@192.168.18.130's password:
flannel-v0.10.0-linux-amd64.tar.gz   100% 9479KB  69.5MB/s   00:00
```
Unpack on every node
```
# node1
[ ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
[ ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[ ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[ ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

# Write the flanneld options, pointing at the etcd cluster and its TLS certificates
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

# Systemd unit: run flanneld before docker and emit Docker options into subnet.env
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
```
Start the flannel network
```
[ ~]# bash flannel.sh https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
```
Configure Docker to use flannel
```
[ ~]# vim /usr/lib/systemd/system/docker.service
# Change the [Service] section as follows:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Save and quit with :wq

[ ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.32.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.32.1/24 --ip-masq=false --mtu=1450"
# bip sets the subnet Docker assigns to containers at startup
```
Restart the Docker service
```
[ ~]# systemctl daemon-reload
[ ~]# systemctl restart docker
```
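After the restart, docker0 should sit on the flannel-assigned subnet instead of Docker's default 172.17.0.1/16:

```
[ ~]# ip addr show docker0 | grep "inet "    # expect the --bip address from subnet.env, 172.17.32.1/24 on this node
```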
Check the flannel interface
```
[ ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.32.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::344b:13ff:fecb:1e2d  prefixlen 64  scopeid 0x20<link>
        ether 36:4b:13:cb:1e:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27  overruns 0  carrier 0  collisions 0
```
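Each flanneld instance leases a /24 out of 172.17.0.0/16 and records it in etcd. From the master (run in the etcd-cert directory so the relative cert paths resolve), the leases can be listed once the nodes are up; expect one entry per node, along the lines of `/coreos.com/network/subnets/172.17.32.0-24`:

```
[ etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379" ls /coreos.com/network/subnets
```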
node2:
```
[ ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
[ ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[ ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[ ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
```
Start the flannel network
```
[ ~]# bash flannel.sh https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
```
Configure Docker to use flannel
```
[ ~]# vim /usr/lib/systemd/system/docker.service
# Change the [Service] section as follows:
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Save and quit with :wq

[ ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.40.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.40.1/24 --ip-masq=false --mtu=1450"
# bip sets the subnet Docker assigns to containers at startup
```
Restart the Docker service
```
[ ~]# systemctl daemon-reload
[ ~]# systemctl restart docker
```
Check the flannel interface
```
[ ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.40.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::cc6f:baff:fe89:3b93  prefixlen 64  scopeid 0x20<link>
        ether ce:6f:ba:89:3b:93  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 240  overruns 0  carrier 0  collisions 0
```
Ping a container on the other node to prove that flannel is routing
```
[ ~]# docker run -it centos:7 /bin/bash
[ /]# yum install net-tools -y
[ /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.84.2  netmask 255.255.255.0  broadcast 172.17.84.255
        ether 02:42:ac:11:54:02  txqueuelen 0  (Ethernet)
        RX packets 18192  bytes 13930229 (13.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6179  bytes 337037 (329.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# Run the same on the other node and ping between the two centos:7 containers
```
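Assuming the container on the other node reports, say, 172.17.40.2 (a hypothetical address — substitute whatever its ifconfig actually shows), the cross-node test is:

```
[ /]# ping -c 3 172.17.40.2    # replies prove flannel is routing between the two nodes
```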
Deploy the master components
On the master, generate the apiserver certificates
```
[ k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[ k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
# Create a directory for the self-signed apiserver certificates
[ k8s]# mkdir k8s-cert
[ k8s]# cd k8s-cert/
[ k8s-cert]# ls
k8s-cert.sh

# Create the CA
[ k8s-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
[ k8s-cert]# cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Nanjing",
      "ST": "Nanjing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[ k8s-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/02/05 10:15:09 [INFO] generating a new CA key and certificate from CSR
2020/02/05 10:15:09 [INFO] generate received request
2020/02/05 10:15:09 [INFO] received CSR
2020/02/05 10:15:09 [INFO] generating key: rsa-2048
2020/02/05 10:15:09 [INFO] encoded CSR
2020/02/05 10:15:09 [INFO] signed certificate with serial number 154087341948227448402053985122760482002707860296

# Create the apiserver certificate. The annotations below are for the reader;
# JSON does not allow comments, so leave them out of the real file.
[ k8s-cert]# cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.18.128",    # master1 (CentOS 7-3)
    "192.168.18.140",    # master2 (mini-1)
    "192.168.18.100",    # VIP (for load balancing, set as needed)
    "192.168.18.141",    # lb (mini-2)
    "192.168.18.142",    # lb (mini-3)
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[ k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2020/02/05 11:43:47 [INFO] generate received request
2020/02/05 11:43:47 [INFO] received CSR
2020/02/05 11:43:47 [INFO] generating key: rsa-2048
2020/02/05 11:43:47 [INFO] encoded CSR
2020/02/05 11:43:47 [INFO] signed certificate with serial number 359419453323981371004691797080289162934778938507
2020/02/05 11:43:47 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").

# Create the admin certificate
[ k8s-cert]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
[ k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2020/02/05 11:46:04 [INFO] generate received request
2020/02/05 11:46:04 [INFO] received CSR
2020/02/05 11:46:04 [INFO] generating key: rsa-2048
2020/02/05 11:46:04 [INFO] encoded CSR
2020/02/05 11:46:04 [INFO] signed certificate with serial number 361885975538105795426233467824041437549564573114
2020/02/05 11:46:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").

# Create the kube-proxy certificate
[ k8s-cert]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
[ k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2020/02/05 11:47:55 [INFO] generate received request
2020/02/05 11:47:55 [INFO] received CSR
2020/02/05 11:47:55 [INFO] generating key: rsa-2048
2020/02/05 11:47:56 [INFO] encoded CSR
2020/02/05 11:47:56 [INFO] signed certificate with serial number 34747850270017663665747172643822215922289240826
2020/02/05 11:47:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
```
Generate the k8s certificates with the script
```
[ k8s-cert]# bash k8s-cert.sh
2020/02/05 11:50:08 [INFO] generating a new CA key and certificate from CSR
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 473883155883308900863805079252124099771123043047
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 66483817738746309793417718868470334151539533925
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. ...
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 245658866069109639278946985587603475325871008240
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. ...
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:09 [INFO] encoded CSR
2020/02/05 11:50:09 [INFO] signed certificate with serial number 696729766024974987873474865496562197315198733463
2020/02/05 11:50:09 [WARNING] This certificate lacks a "hosts" field. ...
[ k8s-cert]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[ k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[ k8s-cert]# cd ..
# Unpack the kubernetes tarball
[ k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[ k8s]# cd /root/k8s/kubernetes/server/bin
# Copy the key binaries
[ bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[ k8s]# cd /root/k8s
# Generate a random bootstrap token
[ k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9b3186df3dc799376ad43b6fe0108571
[ k8s]# vim /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# Format: token,user,uid,"group"
```
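Before starting the apiserver it is worth confirming that server.pem really carries every master and load-balancer address in its SAN list, since any address missing here will fail TLS verification later:

```
[ k8s-cert]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
```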
With the binaries, token, and certificates in place, start the apiserver
```
[ k8s]# bash apiserver.sh 192.168.18.128 https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
```
Check that the process started
```
[ k8s]# ps aux | grep kube
```
Inspect the generated configuration
```
[ k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379 \
--bind-address=192.168.18.128 \
--secure-port=6443 \
--advertise-address=192.168.18.128 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
```
The listening HTTPS port
```
[ k8s]# netstat -ntap | grep 6443
tcp   0   0 192.168.18.128:6443    0.0.0.0:*              LISTEN       8146/kube-apiserver
tcp   0   0 192.168.18.128:6443    192.168.18.128:56724   ESTABLISHED  8146/kube-apiserver
tcp   0   0 192.168.18.128:56724   192.168.18.128:6443    ESTABLISHED  8146/kube-apiserver
[ k8s]# netstat -ntap | grep 8080
tcp   0   0 127.0.0.1:8080         0.0.0.0:*              LISTEN       8146/kube-apiserver
# ...further lines omitted
```
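Port 6443 is the TLS endpoint and requires client certificates; 8080 is the local insecure port that kubectl, the scheduler, and the controller-manager use in this setup. A quick unauthenticated liveness check:

```
[ k8s]# curl http://127.0.0.1:8080/version    # prints the apiserver's version JSON if it is up
```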
Start the scheduler service
```
[ k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[ k8s]# ps aux | grep ku
postfix   6212  0.0  0.0  91732  1364 ?     S    11:29  0:00 pickup -l -t unix -u
root      7034  1.1  1.0  45360 20332 ?     Ssl  12:23  0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      7042  0.0  0.0 112676   980 pts/1 R+   12:23  0:00 grep --color=auto ku
[ k8s]# chmod +x controller-manager.sh
```
Start the controller-manager
```
[ k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
```
Check the master component status
```
[ k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
```
Node deployment
On the master, copy kubelet and kube-proxy to the nodes
```
[ k8s]# cd kubernetes/server/bin/
[ bin]# scp kubelet kube-proxy root@192.168.18.129:/opt/kubernetes/bin/
root@192.168.18.129's password:
kubelet      100%  168MB  81.1MB/s   00:02
kube-proxy   100%   48MB  77.6MB/s   00:00
[ bin]# scp kubelet kube-proxy root@192.168.18.130:/opt/kubernetes/bin/
root@192.168.18.130's password:
kubelet      100%  168MB  86.8MB/s   00:01
kube-proxy   100%   48MB  90.4MB/s   00:00
```
On node1 (upload node.zip to /root, then unpack)
```
[ ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  视频  文档  音乐
flannel.sh       initial-setup-ks.cfg                README.md  模板  图片  下载  桌面
[ ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
```
On the master
```
[ bin]# cd /root/k8s/
[ k8s]# mkdir kubeconfig
[ k8s]# cd kubeconfig/
```
Rename the kubeconfig.sh file and edit it
```
[ kubeconfig]# mv kubeconfig.sh kubeconfig
[ kubeconfig]# vim kubeconfig
# ----------------------- delete the following section -----------------------
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# -----------------------------------------------------------------------------

# Look up the token generated earlier (use the value from your own token.csv)
[ ~]# cat /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# In the script, hard-code that token into the client credential parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \
  --kubeconfig=bootstrap.kubeconfig
```
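For reference, bootstrap.kubeconfig is assembled from the standard four `kubectl config` subcommands; a sketch of that half of the script, assuming it takes the apiserver IP and the certificate directory as its two arguments (the variable names here are illustrative, not necessarily the script's own):

```
APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://$APISERVER:6443"

# Cluster entry: apiserver address plus the CA used to verify it
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Client credentials: the bootstrap token from token.csv
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \
  --kubeconfig=bootstrap.kubeconfig

# Context tying cluster and user together, then select it
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```

kube-proxy.kubeconfig follows the same pattern, but authenticates with the kube-proxy client certificate (`--client-certificate`/`--client-key`) instead of a token.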
Set the PATH environment variable (persist it in /etc/profile)
```
[ kubeconfig]# vim /etc/profile
# Append at the end of the file:
export PATH=$PATH:/opt/kubernetes/bin/
[ kubeconfig]# source /etc/profile
[ kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
[ kubeconfig]# kubectl get node
No resources found.
# No nodes have joined yet
```
Generate the kubeconfig files
```
[ kubeconfig]# bash kubeconfig 192.168.18.128 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[ kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
```
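To inspect what was written (the embedded certificate data is redacted in the output):

```
[ kubeconfig]# kubectl config view --kubeconfig=bootstrap.kubeconfig
```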
Copy the kubeconfig files to the nodes
```
[ kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.18.129:/opt/kubernetes/cfg/
root@192.168.18.129's password:
bootstrap.kubeconfig    100% 2168   2.2MB/s   00:00
kube-proxy.kubeconfig   100% 6270   3.5MB/s   00:00
[ kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.18.130:/opt/kubernetes/cfg/
root@192.168.18.130's password:
bootstrap.kubeconfig    100% 2168   3.1MB/s   00:00
kube-proxy.kubeconfig   100% 6270   7.9MB/s   00:00
```
Create the bootstrap role binding that lets kubelets request certificate signing from the apiserver (key step)
```
[ kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
```
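Without this binding the kubelet's bootstrap request would be refused. An optional check that it exists:

```
[ kubeconfig]# kubectl describe clusterrolebinding kubelet-bootstrap
```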
On node1
```
[ ~]# bash kubelet.sh 192.168.18.129
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
```
Check that the kubelet service started
```
[ ~]# ps aux | grep kube
root    8807  0.0  0.8 300512 16260 ?  Ssl  09:45  0:05 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.18.128:2379,https://192.168.18.129:2379,https://192.168.18.130:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root   35040  0.4  2.1 369632 40832 ?  Ssl  14:53  0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.18.129 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root   35078  0.0  0.0 112676   984 pts/1 S+ 14:54  0:00 grep --color=auto kube
[ ~]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-02-05 14:54:45 CST; 21s ago   # state is active (running)
```
On the master, check the node's certificate request
node1 automatically contacts the apiserver to request a certificate, so the request shows up on the master:

```
[ kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   18s   kubelet-bootstrap   Pending
# Pending: waiting for the cluster to issue this node's certificate

# Approve the request so the certificate is issued
[ kubeconfig]# kubectl certificate approve node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg

# Check the request again
[ kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   3m59s   kubelet-bootstrap   Approved,Issued
# Approved,Issued: the node has been admitted to the cluster
```
List the cluster nodes: node1 has joined successfully
```
[ kubeconfig]# kubectl get node
NAME             STATUS   ROLES    AGE     VERSION
192.168.18.129   Ready    <none>   6m54s   v1.12.3
```
On node1, start the kube-proxy service
```
[ ~]# bash proxy.sh 192.168.18.129
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[ ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-06 11:11:56 CST; 20s ago   # state is active (running)
```
Node2 deployment
On node1
Simply copy the finished /opt/kubernetes directory to the other node and adjust it
```
[ ~]# scp -r /opt/kubernetes/ root@192.168.18.130:/opt/
The authenticity of host '192.168.18.130 (192.168.18.130)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.130' (ECDSA) to the list of known hosts.
root@192.168.18.130's password:
flanneld                                 100%  238   572.7KB/s   00:00
bootstrap.kubeconfig                     100% 2168     4.9MB/s   00:00
kube-proxy.kubeconfig                    100% 6270    12.0MB/s   00:00
kubelet                                  100%  378   642.2KB/s   00:00
kubelet.config                           100%  268   565.0KB/s   00:00
kubelet.kubeconfig                       100% 2297     3.5MB/s   00:00
kube-proxy                               100%  191   396.6KB/s   00:00
mk-docker-opts.sh                        100% 2139     3.2MB/s   00:00
scp: /opt//kubernetes/bin/flanneld: Text file busy    # flanneld is already running on node2, so its binary can't be overwritten; harmless here
kubelet                                  100%  168MB  96.9MB/s   00:01
kube-proxy                               100%   48MB 108.9MB/s   00:00
kubelet.crt                              100% 2193     2.4MB/s   00:00
kubelet.key                              100% 1675     2.5MB/s   00:00
kubelet-client-2020-02-06-11-03-32.pem   100% 1277     2.2MB/s   00:00
kubelet-client-current.pem               100% 1277   684.2KB/s   00:00
# Copy node1's kubelet and kube-proxy service files to node2 as well
[ ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.18.130:/usr/lib/systemd/system/
root@192.168.18.130's password:
kubelet.service      100%  264   291.3KB/s   00:00
kube-proxy.service   100%  231   407.8KB/s   00:00
```

**On node2, make the changes. First delete the copied certificates; node2 will request its own shortly.**
```
[ ~]# cd /opt/kubernetes/ssl/
[ ssl]# rm -rf *
```
**Edit the three configuration files: kubelet, kubelet.config, and kube-proxy**
```
[ ssl]# cd ../cfg/
[ cfg]# vim kubelet
4 --hostname-override=192.168.18.130 \     # line 4: set to node2's IP address
[ cfg]# vim kubelet.config
4 address: 192.168.18.130                  # line 4: set to node2's IP address
[ cfg]# vim kube-proxy
4 --hostname-override=192.168.18.130       # line 4: set to node2's IP address
```
**Start the services**
```
[ cfg]# systemctl start kubelet.service
[ cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[ cfg]# systemctl start kube-proxy.service
[ cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
```
On the master, check node2's certificate request
```
[ k8s]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA   99s   kubelet-bootstrap   Pending
```
**Approve the request to join the cluster**
```
[ k8s]# kubectl certificate approve node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA
certificatesigningrequest.certificates.k8s.io/node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA approved
```
**List the nodes in the cluster**
```
[ k8s]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.130   Ready    <none>   28s   v1.12.3
192.168.18.129   Ready    <none>   26m   v1.12.3
# Both nodes have now joined the cluster
```