Kubernetes2 - Deploying a K8s Cluster

I. Introduction

  1. Architecture reference

    See the earlier post in this series: Kubernetes1-K8s的简单介绍

  2. Instance architecture

  192.168.216.51 master  etcd

  192.168.216.53 node1

  192.168.216.54 node2

  3. Topology

(topology diagram from the original post; not reproduced here)

  4. Software versions

[ ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core) 
[ ~]# uname -a
Linux master 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[ ~]# 
[ ~]# docker version 
Client:
 Version:         1.13.1
 API version:     1.26

  For the exact kubernetes version, see the dependency list printed during the install below.

II. Deploying the software

  1. Set the hostnames

    1) Set each host's name as follows, running the matching command on each machine:

     hostnamectl set-hostname master
     #hostnamectl set-hostname etcd    # not used for now; etcd shares a node with master
     hostnamectl set-hostname node1
     hostnamectl set-hostname node2
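
    A quick hedged check that each box took its new name (hostnamectl ships with CentOS 7):

     hostnamectl status    # "Static hostname" should show master/node1/node2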

    2) Edit the hosts file

    All three hosts get the same hosts file:

[ yum.repos.d]# cat >>/etc/hosts<<eof
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
> 192.168.216.51 master
> 192.168.216.52 etcd   # the etcd VM misbehaved during the lab, so etcd shares a node with master for now
> 192.168.216.53 node1
> 192.168.216.54 node2
> eof
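
    To confirm name resolution works after the edit, a hedged check with standard tools:

getent hosts master node1 node2    # should echo the addresses from /etc/hosts
ping -c 1 node1                    # quick reachability test between hosts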

  2. Time synchronization

  Enable chronyd on all three nodes to keep their clocks in sync:

    systemctl start chronyd
    systemctl enable chronyd
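
  To confirm the clocks actually sync, chrony ships a client tool; a hedged check:

    chronyc sources -v    # '*' marks the source currently synced against
    chronyc tracking      # shows the current offset from the reference clock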

  3. Install the packages

  master/etcd: kubernetes flannel etcd

yum install  kubernetes etcd flannel ntp -y
Installed:
  etcd.x86_64 0:3.3.11-2.el7.centos   flannel.x86_64 0:0.7.1-4.el7   kubernetes.x86_64 0:1.5.2-0.7.git269f928.el7  

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-5.el7_7.2                 docker.x86_64 2:1.13.1-103.git7f2769b.el7.centos        
  docker-client.x86_64 2:1.13.1-103.git7f2769b.el7.centos  docker-common.x86_64 2:1.13.1-103.git7f2769b.el7.centos 
  kubernetes-client.x86_64 0:1.5.2-0.7.git269f928.el7      kubernetes-master.x86_64 0:1.5.2-0.7.git269f928.el7     
  kubernetes-node.x86_64 0:1.5.2-0.7.git269f928.el7        libnetfilter_cthelper.x86_64 0:1.0.0-10.el7_7.1         
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7_7.1          libnetfilter_queue.x86_64 0:1.0.2-2.el7_2               
  socat.x86_64 0:1.7.3.2-2.el7                            

Updated:
  ntp.x86_64 0:4.2.6p5-29.el7.centos                                                                                

Dependency Updated:
  ntpdate.x86_64 0:4.2.6p5-29.el7.centos                                                                            

Complete!
[ backup1]#

  node1/node2 (run the same install on both nodes):

[ ~]# yum install kubernetes flannel ntp -y 
[ ~]# yum install kubernetes flannel ntp -y

III. Configuration

  1. Configure etcd

    1) In /etc/etcd/etcd.conf, change the following three settings; each commented line shows the shipped default and the line below it the new value:

[ ~]# vim /etc/etcd/etcd.conf 
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.216.51:2379"
#ETCD_NAME="default"
ETCD_NAME="etcd"   
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.51:2379"

    2) What the settings mean

  ETCD_NAME="etcd"   #---name of this etcd node; with a single etcd machine the default "default" is fine too

  ETCD_DATA_DIR="/var/lib/etcd/default.etcd"   #---data directory

  ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.216.51:2379"    #---URLs etcd serves clients on, normally port 2379; 0.0.0.0 would listen on all interfaces

  ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.51:2379" #---the URLs advertised to clients

    3) Start the service

systemctl start etcd
systemctl status etcd
systemctl enable etcd

    4) Check that it is listening on port 2379

netstat -antup |grep 2379

    5) List the cluster members

[ ~]# etcdctl member list
8e9e05c52164694d: name=etcd peerURLs=http://localhost:2380 clientURLs=http://192.168.216.51:2379 isLeader=true
[ ~]#
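
    Beyond the member list, a hedged read/write smoke test confirms etcd is healthy and accepting data (etcdctl v2 syntax, the same flavor used elsewhere in this post):

etcdctl cluster-health         # every member should report "healthy"
etcdctl set /test "hello"      # write a scratch key
etcdctl get /test              # read it back
etcdctl rm /test               # clean up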

  2. Configure the master server

     1) Edit the shared kubernetes config file

    Change line 22 to point at the master:

[ ~]# vim /etc/kubernetes/config   

22 KUBE_MASTER="--master=http://192.168.216.51:8080"

    2) What the settings mean

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including

#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
#---whether errors go to a log file or to stderr (and thus the journal)
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
#---log verbosity
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
#---whether privileged containers may run; false forbids them
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.216.51:8080"
#---URL of the apiserver

     3) Edit the apiserver config file

    Change the values shown below:

[ ~]# vim /etc/kubernetes/apiserver 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
#KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
#---bind address; 0.0.0.0 listens on all interfaces
# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
#KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.216.51:2379"
#---address of the etcd service configured earlier
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
#---IP pool kubernetes allocates from: every pod and service it starts gets an address out of this range
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
#---no restrictions: all nodes may reach the apiserver and every request is admitted

# Add your own!
KUBE_API_ARGS=""

    4) kube-controller-manager config file

    Keep the defaults here; nothing needs changing yet.

[ ~]# cat /etc/kubernetes/controller-manager 
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
[ ~]#

    5) Configure kube-scheduler

[ ~]# vim /etc/kubernetes/scheduler 
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"
#---listen on all interfaces (the flag form --address=0.0.0.0 is what kube-scheduler expects; a bare "0.0.0.0" is not a valid argument)
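
  With all three master-side files in place, the master services must be started before the nodes can register; a minimal sketch, assuming the unit names shipped by the CentOS kubernetes RPMs:

systemctl restart kube-apiserver kube-controller-manager kube-scheduler
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
netstat -antup |grep 8080    # the apiserver should now be listening on 8080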

  3. Set up the etcd network

etcdctl mkdir /k8s/network        #---create /k8s/network to hold the flannel network configuration
etcdctl set /k8s/network/config '{"Network": "10.255.0.0/16"}'    #---store the value '{"Network": "10.255.0.0/16"}'; flannel carves the per-host docker subnets for the minions out of this range

[ ~]# etcdctl get /k8s/network/config

#---read back the network configuration stored in etcd
{"Network": "10.255.0.0/16"}
[ ~]#

  4. Configure flanneld on the master

[ ~]# vim /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.216.51:2379"

#---etcd URL; points at the running etcd server

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

#---etcd key prefix where the network config lives

# Any additional options that you want to pass
FLANNEL_OPTIONS="--iface=ens33"

#---bind flannel to this interface

[ ~]# systemctl restart flanneld

[ ~]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 17:21:54 CST; 42min ago
 Main PID: 12715 (flanneld)
   CGroup: /system.slice/flanneld.service
           └─12715 /usr/bin/flanneld -etcd-endpoints=http://192.168.216.51:2379 -etcd-prefix=/k8s/network --iface...

Oct 29 17:21:53 master systemd[1]: Starting Flanneld overlay address etcd agent...
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.022949   12715 main.go:132] Installing signal handlers
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.023985   12715 manager.go:149] Using interface w...6.51
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.024047   12715 manager.go:166] Defaulting extern....51)
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.048791   12715 local_manager.go:134] Found lease...sing
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.068556   12715 manager.go:250] Lease acquired: 1...0/24
Oct 29 17:21:54 master flanneld-start[12715]: I1029 17:21:54.069202   12715 network.go:98] Watching for new s...ases
Oct 29 17:21:54 master systemd[1]: Started Flanneld overlay address etcd agent.
Oct 29 17:38:56 master flanneld-start[12715]: I1029 17:38:56.822596   12715 network.go:191] Subnet added: 10....0/24
Oct 29 17:56:05 master flanneld-start[12715]: I1029 17:56:05.501411   12715 network.go:191] Subnet added: 10....0/24
Hint: Some lines were ellipsized, use -l to show in full.
[ ~]#

  Check the subnet information:

[ ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.216.51  netmask 255.255.255.0  broadcast 192.168.216.255
        inet6 fe80::3409:e73d:1ef:2e1  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::9416:80e8:f210:1e24  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::39cb:d8d1:a78b:9be1  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:1c:8b:39  txqueuelen 1000  (Ethernet)
        RX packets 124978  bytes 149317395 (142.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 47636  bytes 5511781 (5.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.255.16.0  netmask 255.255.0.0  destination 10.255.16.0
        inet6 fe80::1837:1885:18c6:5e52  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 178940  bytes 55467759 (52.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 178940  bytes 55467759 (52.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:23:a5:7c  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[ ~]#
[ ~]# cat /run/flannel/subnet.env
#---subnet info; a helper later rewrites subnet.env into the docker environment file /run/flannel/docker
FLANNEL_NETWORK=10.255.0.0/16
FLANNEL_SUBNET=10.255.16.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
[ ~]#
[ ~]# cat /run/flannel/docker 
DOCKER_OPT_BIP="--bip=10.255.93.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=10.255.93.1/24 --ip-masq=true --mtu=1472"
[ ~]#
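
  Docker only picks these options up after a restart; a hedged check that docker0 moved onto the flannel subnet and that flannel recorded its lease in etcd:

systemctl restart docker
ip addr show docker0                 # the inet address should fall inside FLANNEL_SUBNET
etcdctl ls /k8s/network/subnets      # one lease per host currently running flanneld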

  5. Configure node1

    1) Configure the flanneld service

[ ~]# vim /etc/sysconfig/flanneld 

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.216.51:2379"
#---URL of the etcd server
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"
#---etcd key prefix holding the network config
# Any additional options that you want to pass
#FLANNEL_OPTIONS="--iface=ens33"

    2) Configure the master address (shared by kube-proxy and the other services)

[ ~]# vim /etc/kubernetes/config

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.216.51:8080"
#---URL of the master apiserver

    3) Configure kube-proxy

    kube-proxy implements services: it forwards traffic from pods to the services inside the cluster.

    Keep the defaults here:

[ ~]# grep -v '^#' /etc/kubernetes/proxy


KUBE_PROXY_ARGS=""
[ ~]#

    4) Configure node1's kubelet

    The kubelet manages pods: the containers in each pod, their images, volumes, and so on.

[ ~]# vim /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
#---listen on all addresses
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.216.51:8080"
#---URL of the apiserver
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

    5) Start the services

[ ~]# systemctl restart flanneld kube-proxy kubelet docker
[ ~]# systemctl enable flanneld kube-proxy kubelet docker
[ ~]# systemctl status flanneld kube-proxy kubelet docker

   Note: kubelet may fail to start at this point; see the troubleshooting section at the end.

    6) Check ifconfig and the listening ports

[ ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.255.41.1 netmask 255.255.255.0 broadcast 0.0.0.0
ether 02:42:22:ac:66:2f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.216.53 netmask 255.255.255.0 broadcast 192.168.216.255
inet6 fe80::3409:e73d:1ef:2e1 prefixlen 64 scopeid 0x20<link>
inet6 fe80::9416:80e8:f210:1e24 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:79:23:62 txqueuelen 1000 (Ethernet)
RX packets 2490 bytes 802004 (783.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1853 bytes 397450 (388.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1472
inet 10.255.41.0 netmask 255.255.0.0 destination 10.255.41.0
unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 txqueuelen 500 (UNSPEC)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1 (Local Loopback)
RX packets 76 bytes 6004 (5.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 76 bytes 6004 (5.8 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:23:a5:7c txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[ ~]# netstat -antup |grep proxy
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      918/kube-proxy      
tcp        0      0 192.168.216.53:58700    192.168.216.51:8080     ESTABLISHED 918/kube-proxy      
tcp        0      0 192.168.216.53:58698    192.168.216.51:8080     ESTABLISHED 918/kube-proxy      
[ ~]#
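
    The kubelet itself should be listening too; a hedged check against its default port 10250:

netstat -antup |grep 10250    # kubelet's info server listens on 10250 by default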

    7) Verify

    On the master, check with kubectl:

[ ~]# kubectl get node
NAME      STATUS    AGE
node1     Ready     17h
[ ~]#

    node1 showing Ready (here for 17h) means the node joined successfully.

  6. Configure node2 (nearly identical to node1)

    1) Push the config files from node1 to node2

scp /etc/sysconfig/flanneld 192.168.216.54:/etc/sysconfig/
scp /etc/kubernetes/config 192.168.216.54:/etc/kubernetes/
scp /etc/kubernetes/proxy 192.168.216.54:/etc/kubernetes/
scp /etc/kubernetes/kubelet 192.168.216.54:/etc/kubernetes/

    2) Only one file needs a change

    Swap node1 for node2:

[ ~]# vim /etc/kubernetes/kubelet 

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node2"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.216.51:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

    3) Start the services and enable them at boot

systemctl restart flanneld kube-proxy kubelet docker
systemctl enable flanneld kube-proxy kubelet docker

  7. On the master, verify the new node joined

  A STATUS of Ready means the node has joined the cluster successfully.

[ ~]# kubectl get node
NAME      STATUS    AGE
node1     Ready     17h
node2     Ready     1m
[ ~]#
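
  The control-plane daemons can be checked from the master as well; a hedged extra check (componentstatuses is available in kubectl of this generation):

kubectl get componentstatuses    # scheduler, controller-manager and etcd-0 should all report Healthy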

IV. Troubleshooting

  For the kubelet startup failure:

  1. Look at the failure details

[ ~]# systemctl status -l kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2019-10-25 15:39:18 CST; 2s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 71003 ExecStart=/usr/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_API_SERVER $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBE_ALLOW_PRIV $KUBELET_POD_INFRA_CONTAINER $KUBELET_ARGS (code=exited, status=204/MEMORY)
 Main PID: 71003 (code=exited, status=204/MEMORY)

Oct 25 15:39:17 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=204/MEMORY
Oct 25 15:39:17 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:17 node1 systemd[1]: kubelet.service failed.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Oct 25 15:39:18 node1 systemd[1]: start request repeated too quickly for kubelet.service
Oct 25 15:39:18 node1 systemd[1]: Failed to start Kubernetes Kubelet Server.
Oct 25 15:39:18 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service failed.
[ ~]# journalctl -f -u kubelet
-- Logs begin at Thu 2019-10-24 19:20:13 CST. --
Oct 25 15:39:17 node1 systemd[1]: Started Kubernetes Kubelet Server.
Oct 25 15:39:17 node1 systemd[1]: Starting Kubernetes Kubelet Server...
Oct 25 15:39:17 node1 systemd[1]: kubelet.service: main process exited, code=exited, status=204/MEMORY
Oct 25 15:39:17 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:17 node1 systemd[1]: kubelet.service failed.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Oct 25 15:39:18 node1 systemd[1]: start request repeated too quickly for kubelet.service
Oct 25 15:39:18 node1 systemd[1]: Failed to start Kubernetes Kubelet Server.
Oct 25 15:39:18 node1 systemd[1]: Unit kubelet.service entered failed state.
Oct 25 15:39:18 node1 systemd[1]: kubelet.service failed.        

  2. Swap memory

      The fix is to disable swap:

         swapoff -a
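
      swapoff only lasts until the next reboot; to make it stick, also comment out the swap entry in /etc/fstab (a hedged sketch, crude on purpose: it comments every line mentioning swap, so inspect the file afterwards):

         sed -i '/swap/ s/^/#/' /etc/fstab    # comment out swap entries
         grep swap /etc/fstab                 # verify they are commented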

  3. kubelet defaults to the cgroupfs cgroup driver while the docker we installed uses systemd; the two must match, so change the driver.

  1) Method 1: this did not work here; docker failed to start after the change

# edit daemon.json
vi /etc/docker/daemon.json
# add this property (the file must remain valid JSON):
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

  2) Method 2: edit docker.service

# edit docker.service
vi /lib/systemd/system/docker.service
# find
--exec-opt native.cgroupdriver=systemd
# and change it to
--exec-opt native.cgroupdriver=cgroupfs
# then check whether the change took effect with: docker info
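
  systemd has to reload the unit before the change takes effect; a hedged verification:

systemctl daemon-reload
systemctl restart docker
docker info | grep -i cgroup    # should now report: Cgroup Driver: cgroupfs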

  4. VM trouble: redoing the setup fixed it

  If none of the above helps, it may be a VM problem like mine: restoring the VM image and redoing the whole setup made everything work.

V. Configuration flow summary:

(configuration flow diagram from the original post; not reproduced here)

Please credit the source when reposting: https://www.cnblogs.com/zhangxingeng/p/11721035.html

 
