Kubernetes Multi-Node Binary Deployment (Hands-On Example)
Recap
To deploy a multi-node K8s cluster, you first need a working single-master K8s cluster. For details, see:
blog.csdn.net/caozhengtao1213/article/details/103987039
What This Post Covers
1. Deploy Master2
2. Deploy Nginx load balancing with the keepalived service
3. Point the node configuration files at the unified VIP
4. Create a Pod
5. Create the dashboard UI
Environment
Role | Address | Installed components |
---|---|---|
master | 192.168.142.129 | kube-apiserver kube-controller-manager kube-scheduler etcd |
master2 | 192.168.142.120 | kube-apiserver kube-controller-manager kube-scheduler |
node1 | 192.168.142.130 | kubelet kube-proxy docker flannel etcd |
node2 | 192.168.142.131 | kubelet kube-proxy docker flannel etcd |
nginx1(lbm) | 192.168.142.140 | nginx keepalived |
nginx2(lbb) | 192.168.142.150 | nginx keepalived |
VIP | 192.168.142.20 | - |
I. Deploy Master2
1. Copy the relevant directories from master to master2
- Turn off the firewall and SELinux enforcement
systemctl stop firewalld.service
setenforce 0
- Copy the kubernetes directory to master2
scp -r /opt/kubernetes/ root@192.168.142.120:/opt
- Copy the etcd directory (which contains the certificates) to master2
scp -r /opt/etcd/ root@192.168.142.120:/opt
- Copy the service unit files to master2
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.142.120:/usr/lib/systemd/system/
2. Edit the kube-apiserver configuration file on master2
vim /opt/kubernetes/cfg/kube-apiserver
# Change the IP addresses on lines 5 and 7 to master2's address
--bind-address=192.168.142.120 \
--advertise-address=192.168.142.120 \
3. Start the services and enable them at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
4. Append the kubernetes bin directory to PATH and apply it
vim /etc/profile
# Append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/

source /etc/profile
5. Check the nodes
kubectl get node
NAME              STATUS   ROLES    AGE      VERSION
192.168.142.130   Ready    <none>   10d12h   v1.12.3
192.168.142.131   Ready    <none>   10d11h   v1.12.3
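As an optional extra check that is not part of the original steps, master2 can also confirm that the control-plane components report healthy:
kubectl get cs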
II. Deploy Nginx Load Balancing with the keepalived Service
1. On both lbm and lbb: install the nginx service
- Copy the nginx.sh and keepalived.conf reference scripts to the home directory (they are used below)
# nginx.sh
# Quote the heredoc delimiter so yum's $basearch variable is written literally
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF

# stream block to be added to nginx.conf (layer-4 proxy to the apiservers)
stream {
   log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
   access_log /var/log/nginx/k8s-access.log main;
   upstream k8s-apiserver {
       server 10.0.0.3:6443;
       server 10.0.0.8:6443;
   }
   server {
       listen 6443;
       proxy_pass k8s-apiserver;
   }
}
# keepalived.conf
! Configuration File for keepalived

global_defs {
   # Recipient email addresses
   notification_email {
   }
   # Sender email address
   notification_email_from
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51    # VRRP router ID; must be unique per instance
    priority 100            # priority; set 90 on the backup server
    advert_int 1            # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.188/24
    }
    track_script {
        check_nginx
    }
}

mkdir /usr/local/nginx/sbin/ -p
vim /usr/local/nginx/sbin/check_nginx.sh

count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    /etc/init.d/keepalived stop
fi

chmod +x /usr/local/nginx/sbin/check_nginx.sh
- Edit the nginx.repo file
vim /etc/yum.repos.d/nginx.repo

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
- Install nginx
yum install nginx -y
- Add layer-4 (stream) forwarding to the two apiservers
vim /etc/nginx/nginx.conf
# Append the following after line 12

stream {
   log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
   access_log /var/log/nginx/k8s-access.log main;
   upstream k8s-apiserver {
       server 192.168.142.129:6443;    # master's IP address
       server 192.168.142.120:6443;    # master2's IP address
   }
   server {
       listen 6443;
       proxy_pass k8s-apiserver;
   }
}
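Before moving on to keepalived, it is worth validating the new configuration and starting nginx on both load balancers; the original post only starts nginx later during the recovery test, so treat this as an optional extra step using standard nginx/systemctl commands:
# Check the configuration syntax
nginx -t
# Start nginx and enable it at boot
systemctl start nginx
systemctl enable nginx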
2. Deploy the keepalived service
# Install keepalived
yum install keepalived -y

# Copy the keepalived.conf prepared earlier over the default configuration
cp keepalived.conf /etc/keepalived/keepalived.conf

vim /etc/keepalived/keepalived.conf
    script "/etc/nginx/check_nginx.sh"    # line 18: change the path to /etc/nginx/; the script itself is written below
    interface ens33                       # line 23: change eth0 to ens33 (check your NIC name with ifconfig)
    virtual_router_id 51                  # line 24: VRRP router ID; must be unique per instance
    priority 100                          # line 25: priority; set 90 on the backup server
    virtual_ipaddress {                   # line 31
        192.168.142.20/24                 # line 32: change the VIP to the 192.168.142.20 planned earlier
# Delete everything from line 38 onward

vim /etc/nginx/check_nginx.sh
# Count the running nginx processes
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
# If the count is 0, stop the keepalived service
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

chmod +x /etc/nginx/check_nginx.sh

# Start the service
systemctl start keepalived
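One point the line comments only hint at: on lbb, the backup load balancer, the same keepalived.conf should declare the backup role and a lower priority. A minimal sketch of the two lines that differ (my wording, consistent with the comments above):
    state BACKUP     # lbm keeps state MASTER
    priority 90      # lower than lbm's priority 100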
- Check the address information
ip a
# lbm addresses
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.142.140/24 brd 192.168.142.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.142.20/24 scope global secondary ens33    // the floating VIP currently lives on lbm
       valid_lft forever preferred_lft forever
    inet6 fe80::53ba:daab:3e22:e711/64 scope link
       valid_lft forever preferred_lft forever

# lbb addresses
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c9:9d:88 brd ff:ff:ff:ff:ff:ff
    inet 192.168.142.150/24 brd 192.168.142.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::55c0:6788:9feb:550d/64 scope link
       valid_lft forever preferred_lft forever
- Verify VIP failover
# Stop the nginx service on lbm
pkill nginx
# Check the service status
systemctl status nginx
systemctl status keepalived.service
# If the check below prints 0, keepalived has stopped itself
ps -ef |grep nginx |egrep -cv "grep|$$"
- Check the address information again
ip a
# lbm addresses
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.142.140/24 brd 192.168.142.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::53ba:daab:3e22:e711/64 scope link
       valid_lft forever preferred_lft forever

# lbb addresses
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c9:9d:88 brd ff:ff:ff:ff:ff:ff
    inet 192.168.142.150/24 brd 192.168.142.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.142.20/24 scope global secondary ens33    // the floating VIP has moved to lbb
       valid_lft forever preferred_lft forever
    inet6 fe80::55c0:6788:9feb:550d/64 scope link
       valid_lft forever preferred_lft forever
- Recovery
# Start nginx and keepalived again on lbm
systemctl start nginx
systemctl start keepalived
- The VIP floats back to lbm
ip a
# lbm addresses
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.142.140/24 brd 192.168.142.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.142.20/24 scope global secondary ens33    // the floating VIP is back on lbm
       valid_lft forever preferred_lft forever
    inet6 fe80::53ba:daab:3e22:e711/64 scope link
       valid_lft forever preferred_lft forever
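To confirm that traffic to the VIP really reaches an apiserver through nginx, a simple probe can be run from any host that can reach 192.168.142.20 (my addition; depending on the apiserver's anonymous-auth settings the reply may just be a 401/403 JSON error, which still proves the proxy path works):
curl -k https://192.168.142.20:6443/version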
III. Point the Node Configuration Files at the Unified VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)
cd /opt/kubernetes/cfg/

# Point every kubeconfig at the VIP
vim /opt/kubernetes/cfg/bootstrap.kubeconfig
    server: https://192.168.142.20:6443    # line 5: change to the VIP

vim /opt/kubernetes/cfg/kubelet.kubeconfig
    server: https://192.168.142.20:6443    # line 5: change to the VIP

vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
    server: https://192.168.142.20:6443    # line 5: change to the VIP
- Self-check after the replacement
grep 20 *
bootstrap.kubeconfig:    server: https://192.168.142.20:6443
kubelet.kubeconfig:      server: https://192.168.142.20:6443
kube-proxy.kubeconfig:   server: https://192.168.142.20:6443
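For kubelet and kube-proxy to actually start talking to the VIP they must re-read these files, so restart both services on node1 and node2 (my addition, assuming the systemd unit names used by the single-master guide):
systemctl restart kubelet.service
systemctl restart kube-proxy.service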
- On lbm, check nginx's k8s access log
tail /var/log/nginx/k8s-access.log
192.168.142.140 192.168.142.129:6443 - [08/Feb/2020:19:20:40 +0800] 200 1119
192.168.142.140 192.168.142.120:6443 - [08/Feb/2020:19:20:40 +0800] 200 1119
192.168.142.150 192.168.142.129:6443 - [08/Feb/2020:19:20:44 +0800] 200 1120
192.168.142.150 192.168.142.120:6443 - [08/Feb/2020:19:20:44 +0800] 200 1120
IV. Create a Pod
- Create a test Pod
kubectl run nginx --image=nginx
- Check its status
kubectl get pods
- Grant the cluster's anonymous user admin rights (this fixes the problem of not being able to view logs)
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
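With the binding in place, container logs become viewable; for example (substitute the Pod name printed by kubectl get pods above):
kubectl logs <pod-name>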
- Check the Pod's network details
kubectl get pods -o wide
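The wide output adds the Pod IP and the node it runs on; because the nodes sit on the flannel overlay, the nginx Pod can be reached directly from either node (placeholder IP, use the value from the IP column):
curl <pod-ip>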
V. Create the Dashboard UI
- On master, create the dashboard working directory
mkdir /k8s/dashboard
cd /k8s/dashboard
# Upload the official dashboard YAML files into this directory
- Create the dashboard resources; the order matters
# Authorize access to the API
kubectl create -f dashboard-rbac.yaml
# Secrets
kubectl create -f dashboard-secret.yaml
# Application configuration
kubectl create -f dashboard-configmap.yaml
# Controller
kubectl create -f dashboard-controller.yaml
# Service that exposes the dashboard for access
kubectl create -f dashboard-service.yaml
- Afterwards, confirm the resources were created in the kube-system namespace
kubectl get pods -n kube-system
- Check how to access the dashboard
kubectl get pods,svc -n kube-system
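If dashboard-service.yaml exposes the dashboard as a NodePort service (as the stock manifest used with this series typically does), the svc line in this output shows the mapped port, and the UI address is any node IP plus that port; the port below is a placeholder:
https://<node-ip>:<node-port>    # e.g. a node IP such as 192.168.142.130 plus the NodePort shown above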
- The dashboard can then be opened in a browser via a node IP (the steps below also fix the problem of Google Chrome refusing to open the page)
1. On master, write a script that self-signs a certificate
vim dashboard-cert.sh

cat > dashboard-csr.json <<EOF
{
   "CN": "Dashboard",
   "hosts": [],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "NanJing",
           "ST": "NanJing"
       }
   ]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard

kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
2. Apply the new self-signed certificate
bash dashboard-cert.sh /root/k8s/apiserver/
3. Edit the YAML file
vim dashboard-controller.yaml
# Append the following after line 47
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem
4. Redeploy
kubectl apply -f dashboard-controller.yaml
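To watch the redeployment finish, the rollout can be followed (my addition, assuming the Deployment in the stock dashboard-controller.yaml is named kubernetes-dashboard):
kubectl rollout status deployment/kubernetes-dashboard -n kube-system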
5. Generate a login token
- Generate the token
kubectl create -f k8s-admin.yaml
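The post does not show the contents of k8s-admin.yaml. A commonly used version, written here as a sketch consistent with the dashboard-admin-token-* secret that appears below, creates a dashboard-admin ServiceAccount bound to cluster-admin:
# Sketch of k8s-admin.yaml; the ServiceAccount name dashboard-admin is an assumption
cat > k8s-admin.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF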
- Locate the token secret
kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-drs7c        kubernetes.io/service-account-token   3      60s
default-token-mmvcg                kubernetes.io/service-account-token   3      55m
kubernetes-dashboard-certs         Opaque                                10     10m
kubernetes-dashboard-key-holder    Opaque                                2      23m
kubernetes-dashboard-token-crqvs   kubernetes.io/service-account-token   3      23m
- View the token
kubectl describe secret dashboard-admin-token-drs7c -n kube-system
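Equivalently, just the token string can be extracted with jsonpath (standard kubectl, my addition):
kubectl get secret dashboard-admin-token-drs7c -n kube-system -o jsonpath='{.data.token}' | base64 -d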
6. Copy and paste the token to log in to the dashboard UI
Thanks for reading!