Configuring a private registry for Kubernetes with VMware Harbor

Tags (space separated): kubernetes series
- I. System environment configuration
- II. Installing and testing VMware Harbor
- III. Deploying a test nginx
I: System initialization
1.1 Host names
192.168.100.11 node01.flyfish
192.168.100.12 node02.flyfish
192.168.100.13 node03.flyfish
192.168.100.14 node04.flyfish
192.168.100.15 node05.flyfish
192.168.100.16 node06.flyfish
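One way to apply these mappings is to append them to /etc/hosts on every node and then set each node's own hostname (a sketch; run the hostnamectl line with the name that matches the node):

cat >> /etc/hosts <<EOF
192.168.100.11 node01.flyfish
192.168.100.12 node02.flyfish
192.168.100.13 node03.flyfish
192.168.100.14 node04.flyfish
192.168.100.15 node05.flyfish
192.168.100.16 node06.flyfish
EOF
hostnamectl set-hostname node01.flyfish   # use the matching name on each node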
1.2 Disable firewalld, flush iptables, and disable SELinux
Run on every node:

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services
systemctl start iptables && systemctl enable iptables
iptables -F && service iptables save

Disable swap and SELinux:

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
1.3 Install dependency packages
Install on all nodes:

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
1.4 Adjust kernel parameters for Kubernetes
Run on all nodes:

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# never use swap; it is only touched when the system is out of memory
vm.swappiness=0
# do not check whether physical memory is sufficient before allocating
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
1.5 Set the system time zone
# Set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond
1.6 Stop unneeded system services
systemctl stop postfix && systemctl disable postfix
1.7 Configure rsyslogd and systemd journald
mkdir /var/log/journal             # directory for persistent logs
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk usage 10G
SystemMaxUse=10G
# Maximum size of a single journal file 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
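A quick way to confirm that journald is now persisting to disk (assuming the restart above succeeded) is to check the journal directory and its reported disk usage:

ls /var/log/journal/
journalctl --disk-usage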
1.8 Upgrade the kernel to 4.4
The stock 3.10.x kernel on CentOS 7.x has bugs that make Docker and Kubernetes unstable, so upgrade it:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)"
reboot
# After the reboot, install the kernel source and header packages
yum --enablerepo=elrepo-kernel install kernel-lt-devel-$(uname -r) kernel-lt-headers-$(uname -r)
1.9 Prerequisites for enabling IPVS in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
II: Installing the VMware Harbor registry
2.1 Install Docker
Run on all nodes:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce-18.09.9-3.el7

Reboot the machine:

reboot

Check the kernel version:

uname -r

If it is still 3.10, set the default kernel again and reboot:

grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)" && reboot

If that still does not work, edit /etc/grub2.cfg and comment out the 3.10 menuentry so that the 4.4 kernel is used.

service docker start
chkconfig docker on

## Create the /etc/docker directory
mkdir /etc/docker

# Configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://node04.flyfish"]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

# Restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
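To check that the daemon picked up these settings, docker info should now report the systemd cgroup driver and list node04.flyfish under insecure registries (a quick verification, assuming Docker restarted cleanly):

docker info | grep -i "cgroup driver"
docker info | grep -i -A 2 "insecure registries"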
Upload docker-compose and harbor-offline-installer-v1.2.0.tgz to the Harbor node (node04.flyfish):

mv docker-compose /usr/bin/
chmod +x /usr/bin/docker-compose
tar -zxvf harbor-offline-installer-v1.2.0.tgz
mv harbor /usr/local/
cd /usr/local/harbor/

vim harbor.cfg
---
hostname = node04.flyfish
---
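Since the registry will be reached over https, the protocol and certificate entries in harbor.cfg should point at the certificate created in the next step (a sketch; these key names are taken from this Harbor release's harbor.cfg, so double-check them in your copy):

ui_url_protocol = https
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key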
Create the HTTPS certificate and set up the related directory permissions:

mkdir -p /data/cert/
cd /data/cert/

# Generate the private key (a passphrase-protected 2048-bit RSA key is assumed here; the passphrase is stripped below)
openssl genrsa -des3 -out server.key 2048
# Create the certificate signing request
openssl req -new -key server.key -out server.csr
# Remove the passphrase from the key
cp server.key server.key.org
openssl rsa -in server.key.org -out server.key
# Self-sign the certificate, valid for 365 days
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
chmod 777 -R *
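Because the certificate is self-signed, the other nodes rely on the insecure-registries entry in daemon.json above; an alternative is to let Docker trust the certificate directly by copying it into the per-registry trust directory on every node that will pull from Harbor (a sketch, assuming the hostname node04.flyfish used throughout):

mkdir -p /etc/docker/certs.d/node04.flyfish
scp node04.flyfish:/data/cert/server.crt /etc/docker/certs.d/node04.flyfish/ca.crt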
cd /usr/local/harbor/
./install.sh
Default login credentials: username admin, password Harbor12345
Check that Docker can log in to Harbor:

docker login https://node04.flyfish
Username: admin
Password: Harbor12345
docker pull wangyanglinux/myapp:v1
docker pull wodby/nginx
docker tag wodby/nginx node04.flyfish/library/wodby/nginx:v1
docker push node04.flyfish/library/wodby/nginx:v1
docker tag wangyanglinux/myapp:v1 node04.flyfish/library/myapp:v1
docker push node04.flyfish/library/myapp:v1
Remove the local copies of the images:

docker rmi -f wangyanglinux/myapp:v1
docker rmi -f node04.flyfish/library/myapp:v1
docker rmi -f wodby/nginx
docker rmi -f node04.flyfish/library/wodby/nginx:v1
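To confirm that the other nodes can also reach the registry (they will need to once Kubernetes schedules Pods from it), a pull from any node that has the daemon.json and /etc/hosts entries above should succeed (a sketch):

docker login node04.flyfish          # admin / Harbor12345
docker pull node04.flyfish/library/myapp:v1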
III: Deploying an externally accessible nginx
Test creating a Pod from the private registry (VMware Harbor):

kubectl run nginx-deployment --image=node04.flyfish/library/myapp:v1 --port 80 --replicas=1
kubectl get pods
kubectl get deploy,rs
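With the older kubectl used here, kubectl run --replicas generates a Deployment; a declarative equivalent is a small manifest that references the image in the private registry (a sketch; the names and labels are illustrative):

cat > nginx-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-deployment
  template:
    metadata:
      labels:
        run: nginx-deployment
    spec:
      containers:
      - name: nginx-deployment
        image: node04.flyfish/library/myapp:v1
        ports:
        - containerPort: 80
EOF
kubectl apply -f nginx-deployment.yaml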
Scale up the replicas:

kubectl get deploy
kubectl scale --replicas=3 deploy/nginx-deployment
Expose the Deployment as a Service:

kubectl expose deployment nginx-deployment --port=3000 --target-port=80
kubectl get svc
kubectl get pods -o wide
kubectl edit svc nginx-deployment
---
Change the service type from ClusterIP to NodePort
---
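The same change can be made non-interactively with kubectl patch instead of opening an editor (a sketch):

kubectl patch svc nginx-deployment -p '{"spec":{"type":"NodePort"}}'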
kubectl get svc,pods -o wide
Access from outside the cluster (30789 is the NodePort reported by kubectl get svc):

node02.flyfish:30789
node03.flyfish:30789
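A quick check from any machine that can resolve the node names (a sketch; use whatever NodePort was assigned above):

curl http://node02.flyfish:30789
curl http://node03.flyfish:30789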