Deploying Kubernetes with kubeadm
```
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```
Install the kubeadm and kubectl commands and the kubelet service from the repo above. On the master, the kubelet service is what launches static pods; all of the master components run as static pods.
- kubeadm init bootstraps a Kubernetes master node. You can add --image-repository to point at your own registry; see kubeadm init --help for details.
- kubeadm join bootstraps a Kubernetes worker node and joins it to the cluster.
- kubeadm upgrade upgrades a Kubernetes cluster to a newer version.
- kubeadm config: if you initialized the cluster with kubeadm v1.7.x or earlier, you need this to reconfigure the cluster before kubeadm upgrade can be used.
- kubeadm token manages the tokens used by kubeadm join.
- kubeadm reset reverts any changes kubeadm init or kubeadm join made to the host.
- kubeadm version prints the kubeadm version.
- kubeadm alpha previews a set of new features so feedback can be gathered from the community.
kubeadm init [flags]
Note that the kubernetesVersion field in the config file (or the --kubernetes-version command-line flag) determines which image versions are used.
To run kubeadm without network access, you need to pre-pull the core images for your chosen version:

Image name | v1.10 release branch version |
---|---|
k8s.gcr.io/kube-apiserver-${ARCH} | v1.10.x |
k8s.gcr.io/kube-controller-manager-${ARCH} | v1.10.x |
k8s.gcr.io/kube-scheduler-${ARCH} | v1.10.x |
k8s.gcr.io/kube-proxy-${ARCH} | v1.10.x |
k8s.gcr.io/etcd-${ARCH} | 3.1.12 |
k8s.gcr.io/pause-${ARCH} | 3.1 |
k8s.gcr.io/k8s-dns-sidecar-${ARCH} | 1.14.8 |
k8s.gcr.io/k8s-dns-kube-dns-${ARCH} | 1.14.8 |
k8s.gcr.io/k8s-dns-dnsmasq-nanny-${ARCH} | 1.14.8 |
coredns/coredns | 1.0.6 |

Here v1.10.x means "the latest patch release on the v1.10 branch", and ${ARCH} can be one of: amd64, arm, arm64, ppc64le, or s390x. If you run Kubernetes 1.10 or earlier and set --feature-gates=CoreDNS=true, you must also use the coredns/coredns image instead of the three k8s-dns-* images. From Kubernetes 1.11 onward, you can list and pull the relevant images with the kubeadm config images subcommands:

```
kubeadm config images list
kubeadm config images pull
```

Starting with Kubernetes 1.12, the k8s.gcr.io/kube-*, k8s.gcr.io/etcd, and k8s.gcr.io/pause images no longer require the -${ARCH} suffix.
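For the air-gapped case, the pull list can be generated with a small script. This is a sketch; KUBE_VERSION and ARCH are placeholder values to adjust to your target release:

```shell
#!/bin/sh
# Emit the pre-1.12 image names (with the -${ARCH} suffix) for a chosen
# release, ready to pipe into docker pull. The version/arch are examples.
KUBE_VERSION=v1.10.13
ARCH=amd64
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  echo "k8s.gcr.io/${img}-${ARCH}:${KUBE_VERSION}"
done
# etcd and pause are versioned independently of Kubernetes itself:
echo "k8s.gcr.io/etcd-${ARCH}:3.1.12"
echo "k8s.gcr.io/pause-${ARCH}:3.1"
```

On a connected machine you could then run something like `sh list-images.sh | xargs -n1 docker pull`, and move the images over with docker save / docker load.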
Installation
Master node
Port | Purpose | Used by |
---|---|---|
6443* | apiserver | all |
2379-2380 | etcd | apiserver,etcd |
10250 | kubelet | self, control plane |
10251 | kube-scheduler | self |
10252 | kube-controller-manager | self |
Worker node
Port | Purpose | Used by |
---|---|---|
10250 | kubelet | self, control plane |
30000-32767 | NodePort services** | all |
Adjust kernel parameters
Edit /etc/sysctl.conf (vim /etc/sysctl.conf) and add:
```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```
Disable the firewall and SELinux
```
systemctl disable firewalld
systemctl stop firewalld
systemctl stop iptables
systemctl disable iptables
setenforce 0
```
Then apply the new kernel settings:
sysctl -p
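One gotcha worth checking here: the two net.bridge.* keys only exist while the br_netfilter kernel module is loaded, so sysctl -p can fail with "No such file or directory" on a fresh host. A small check (a sketch; the modprobe lines are shown as comments and need root):

```shell
#!/bin/sh
# Verify br_netfilter is loaded before applying the bridge sysctls.
if grep -qw br_netfilter /proc/modules; then
  status=loaded
else
  status=missing
  # As root:
  #   modprobe br_netfilter                              # load it now
  #   echo br_netfilter > /etc/modules-load.d/k8s.conf   # persist across reboots
fi
echo "br_netfilter: $status"
```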
After switching the yum source to the repo shown at the top of this article, install the packages:
yum -y install docker kubelet kubeadm ebtables ethtool
daemon.json configuration:
```
{
  "insecure-registries": ["http://harbor.test.com"],
  "registry-mirrors": ["https://72idtxd8.mirror.aliyuncs.com"]
}
```
Create the file /etc/default/kubelet with:
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=systemd
At this point, starting kubelet will fail with an error (this is expected until kubeadm init generates the kubelet config):
```
failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
```
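The --cgroup-driver=systemd setting above must match the driver Docker actually uses, or kubelet will keep crash-looping even after init succeeds. A quick consistency check (a sketch; it assumes the docker CLI is installed and prints "unknown" if Docker isn't running):

```shell
#!/bin/sh
# Compare Docker's cgroup driver with the one configured for kubelet.
docker_driver=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}')
echo "docker cgroup driver: ${docker_driver:-unknown}"
# If this prints cgroupfs, either switch Docker to systemd in
# /etc/docker/daemon.json ("exec-opts": ["native.cgroupdriver=systemd"])
# or set --cgroup-driver=cgroupfs for kubelet instead.
```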
Start Docker first:

```
systemctl daemon-reload
systemctl start docker
```
Then check which images we need:
```
[ kubernetes]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.4
k8s.gcr.io/kube-controller-manager:v1.13.4
k8s.gcr.io/kube-scheduler:v1.13.4
k8s.gcr.io/kube-proxy:v1.13.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
```
(Note that the approach below no longer works on newer kubeadm versions, because flags such as print-default have been removed.)
Since these images are all hosted outside China, they won't pull reliably, so we need to switch the image source used for the installation.
First, generate a configuration file:
kubeadm config print-defaults --api-objects ClusterConfiguration >kubeadm.conf
In the config file, change:
imageRepository: k8s.gcr.io
to your own private registry:
imageRepository: docker.io/mirrorgooglecontainers
Sometimes you also need to adjust the kubernetesVersion field; in my installation I didn't.
Then run:
```
kubeadm config images list --config kubeadm.conf
kubeadm config images pull --config kubeadm.conf
kubeadm init --config kubeadm.conf
```
To view the current configuration:
kubeadm config view
The repository above doesn't provide the coredns image, so I pulled it and retagged it locally:
```
docker pull coredns/coredns:1.2.6
docker tag coredns/coredns:1.2.6 mirrorgooglecontainers/coredns:1.2.6
```
Now the local images look like this:
```
[ kubernetes]# docker images
REPOSITORY                                                 TAG       IMAGE ID       CREATED         SIZE
docker.io/mirrorgooglecontainers/kube-proxy                v1.13.0   8fa56d18961f   3 months ago    80.2 MB
docker.io/mirrorgooglecontainers/kube-apiserver            v1.13.0   f1ff9b7e3d6e   3 months ago    181 MB
docker.io/mirrorgooglecontainers/kube-controller-manager   v1.13.0   d82530ead066   3 months ago    146 MB
docker.io/mirrorgooglecontainers/kube-scheduler            v1.13.0   9508b7d8008d   3 months ago    79.6 MB
docker.io/coredns/coredns                                  1.2.6     f59dcacceff4   4 months ago    40 MB
docker.io/mirrorgooglecontainers/coredns                   1.2.6     f59dcacceff4   4 months ago    40 MB
docker.io/mirrorgooglecontainers/etcd                      3.2.24    3cab8e1b9802   5 months ago    220 MB
docker.io/mirrorgooglecontainers/pause                     3.1       da86e6ba6ca1   14 months ago   742 kB
```
Then we run:
kubeadm init --config kubeadm.conf
```
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/    # <-- the network add-on must be installed separately

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.11.90.45:6443 --token 2rau0q.1v7r64j0qnbw54ev --discovery-token-ca-cert-hash sha256:eb792e5e9f64eee49e890d8676c0a0561cb58a4b99892d22f57d911f0a3eb7f2
```
As the output shows, we need to run:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Otherwise kubectl will try the default port 8080 and fail with:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
```
[ ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
```
On the newer 1.14.1 release none of the configuration above is needed: after pushing the images to your own private registry, just run:
kubeadm init --image-repository harbor.test.com/k8snew
and follow the prompts to finish.
Below you can see that two pods are stuck in Pending, because I haven't deployed a network add-on yet (as noted above, the network must be installed separately):
```
[ ~]# kubectl get all --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-9f7ddc475-kwmxg         0/1     Pending   0          45m
kube-system   pod/coredns-9f7ddc475-rjs8d         0/1     Pending   0          45m
kube-system   pod/etcd-host5                      1/1     Running   0          44m
kube-system   pod/kube-apiserver-host5            1/1     Running   0          45m
kube-system   pod/kube-controller-manager-host5   1/1     Running   0          44m
kube-system   pod/kube-proxy-nnvsl                1/1     Running   0          45m
kube-system   pod/kube-scheduler-host5            1/1     Running   0          45m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP         46m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   45m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           <none>          45m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   0/2     2            0           45m

NAMESPACE     NAME                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-9f7ddc475   2         2         0       45m
```
Next we deploy the Calico network add-on.
From the kubeadm init output above we can see:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
That page covers the installation of the various network add-ons.
Download the yaml file (or apply it straight from the remote URL) and run:
kubectl apply -f calico.yml
```
.....
  image: harbor.test.com/k8s/cni:v3.6.0
.....
  image: harbor.test.com/k8s/node:v3.6.0
```
As shown above, replace the image fields with the local images I had already downloaded. If you cannot pull the upstream images, see my other article on pointing the default image registry at a mirror of your choice.
In the current release the network components have moved to v3.7.2.
Afterwards we can check:
kubectl get pod --all-namespaces
```
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7c69b4dd88-89jnl   1/1     Running   0          6m58s
kube-system   calico-node-7g7gn                          1/1     Running   0          6m58s
kube-system   coredns-f7855ccdd-p8g58                    1/1     Running   0          86m
kube-system   coredns-f7855ccdd-vkblw                    1/1     Running   0          86m
kube-system   etcd-host5                                 1/1     Running   0          85m
kube-system   kube-apiserver-host5                       1/1     Running   0          85m
kube-system   kube-controller-manager-host5              1/1     Running   0          85m
kube-system   kube-proxy-6zbzg                           1/1     Running   0          86m
kube-system   kube-scheduler-host5                       1/1     Running   0          85m
```
All pods are now running normally.
On the worker nodes
Disable SELinux, adjust the kernel parameters, and so on, just as on the master.
Then run:
kubeadm join 10.11.90.45:6443 --token 05o0eh.6andmi1961xkybcu --discovery-token-ca-cert-hash sha256:6ebbf4aeca912cbcf1c4ec384721f1043714c3cec787c3d48c7845f95091a7b5
To add another master node instead, append the --experimental-control-plane flag.
If you have lost the token, regenerate the join command with:
kubeadm token create --print-join-command
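If the --discovery-token-ca-cert-hash is also lost, it can be recomputed from the cluster CA certificate. This follows the standard procedure from the kubeadm join documentation, wrapped in a small helper:

```shell
#!/bin/sh
# Compute the sha256 hash of a CA cert's public key, in the form expected by
# kubeadm join's --discovery-token-ca-cert-hash sha256:<hash> flag.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
}
# On the master:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```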
```
journalctl -xe -u docker        # view the docker logs
journalctl -xe -u kubelet -f    # follow the kubelet logs
```
Dashboard installation
Download the image and the yaml file:
```
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
```
The yaml file looks like this:
```yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: harbor.test.com/k8s/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30003
  selector:
    k8s-app: kubernetes-dashboard
```
As shown above, the Service has been changed to NodePort, mapping the dashboard to port 30003 on the node.
After kubectl create, you can open https://nodeip:30003 directly to reach the dashboard.
In my testing, Chrome refused to open it, but Firefox worked.
At this point, however, a token is required to log in.
Next we generate the token.
admin-token.yml:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
```
After kubectl create -f:
```
kubectl get secret -n kube-system | grep admin | awk '{print $1}'
kubectl describe secret admin-token-8hn52 -n kube-system | grep '^token' | awk '{print $2}'   # replace the secret name with the actual one
```
Note down the generated token and you can log in.
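The two lookup steps can be combined into one pipeline. Since it normally runs against a live cluster, the text-processing part is demonstrated below on a captured sample line (the secret name is the one from the example above):

```shell
#!/bin/sh
# Pick the admin secret name out of `kubectl get secret` output; shown here
# on a sample line so the pipeline itself can be exercised offline.
sample='admin-token-8hn52   kubernetes.io/service-account-token   3   10m'
secret=$(echo "$sample" | grep admin | awk '{print $1}')
echo "$secret"
# Against a real cluster, the whole lookup in one go:
#   kubectl -n kube-system describe secret \
#     $(kubectl -n kube-system get secret | grep admin | awk '{print $1}') \
#     | grep '^token' | awk '{print $2}'
```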
After a server reboot, start the kubelet service and then start all the containers:
```
swapoff -a
docker start $(docker ps -a | awk '{print $1}' | tail -n +2)
```
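swapoff -a only lasts until the next reboot, which is why it has to be repeated here. To disable swap permanently, comment the swap entry out of /etc/fstab. The edit is sketched on a sample file; on a real host it targets /etc/fstab and needs root:

```shell
#!/bin/sh
# Comment out any swap entries so swap stays off after a reboot.
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-abcd /                    ext4 defaults 0 0
/dev/mapper/cl-swap none            swap defaults 0 0
EOF
sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```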
References
https://github.com/gjmzj/kubeasz
Heapster installation
The images come from the heapster installation described in another of my articles.
There are four yaml files in total (omitted here).
In grafana.yaml only the image needs changing:
```
...
#image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
image: harbor.test.com/rongruixue/heapster_grafana:latest
...
```
heapster-rbac.yaml needs no changes:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
```
heapster.yaml needs volume mounts added and the image changed, as follows:
```yaml
spec:
  serviceAccountName: heapster
  containers:
  - name: heapster
    #image: k8s.gcr.io/heapster-amd64:v1.5.4
    image: harbor.test.com/rongruixue/heapster:latest
    volumeMounts:
    - mountPath: /srv/kubernetes
      name: auth
    - mountPath: /root/.kube
      name: config
    imagePullPolicy: IfNotPresent
    command:
    - /heapster
    - --source=kubernetes:https://kubernetes.default?inClusterConfig=false&insecure=true&auth=/root/.kube/config
    - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
  volumes:
  - name: auth
    hostPath:
      path: /srv/kubernetes
  - name: config
    hostPath:
      path: /root/.kube
```
Notes:
- inClusterConfig=false: do not use the kubeconfig information from service accounts;
- insecure=true: a shortcut: trust the serving certificate presented by kube-apiserver without validating it;
- auth=/root/.kube/config: this is the key part! When not using a service account, the credentials in this auth file are used to authenticate against kube-apiserver. Without it, heapster cannot connect to the apiserver on port 6443.
Note that after installing the newer v1.14.1 release, this heapster build fails authentication; I switched to the following images instead:
```
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64            v1.5.4
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64   v1.5.2
registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64    v5.0.4
```
influxdb.yaml only needs the image changed:
```yaml
spec:
  containers:
  - name: influxdb
    #image: k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
    image: harbor.test.com/rongruixue/influxdb:latest
```
After kubectl create -f, the heapster pod reported an error.
The kubelets were missing the --read-only-port=10255 flag. Without it, kubelet does not open port 10255 at all, so once the dashboard is deployed, heapster complains that it cannot connect to port 10255 on the nodes:
```
E0320 14:07:05.008856   1 kubelet.go:230] error while getting containers from Kubelet: failed to get all container stats from Kubelet URL "http://10.11.90.45:10255/stats/container/": Post http://10.11.90.45:10255/stats/container/: dial tcp 10.11.90.45:10255: getsockopt: connection refused
```
Checking confirmed that none of the kubelets had port 10255 open. The relevant kubelet flags are:
Flag | Description | Default |
---|---|---|
--address | address the kubelet service listens on | 0.0.0.0 |
--port | port the kubelet service listens on | 10250 |
--healthz-port | port of the health check service | 10248 |
--read-only-port | read-only port, accessible without authentication or authorization | 10255 |
So we need to open this read-only port. On every machine, add it to the kubelet startup configuration before starting the service:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--read-only-port=10255"
With that, the installation succeeds:
```
[ heapster]# kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
host4   60m          6%     536Mi           28%
host5   181m         9%     1200Mi          64%
```