Common Components, Kafka Cluster, Hadoop High Availability
1. Installing Zookeeper
Build a Zookeeper cluster and check the role of each server
Stop the Leader and check the roles again
1.1 Install Zookeeper
1) Edit /etc/hosts so that all cluster hosts can ping each other (configure on nn01, then sync to node1, node2, node3)
nn01 hadoop]# vim /etc/hosts
192.168.1.21 nn01
192.168.1.22 node1
192.168.1.23 node2
192.168.1.24 node3
2) Install java-1.8.0-openjdk-devel. It is already installed on the existing hadoop hosts, so it can be skipped here; a fresh machine would need it
3) Extract zookeeper and move it to /usr/local/zookeeper
nn01 ~]# tar -xf zookeeper-3.4.10.tar.gz
nn01 ~]# mv zookeeper-3.4.10 /usr/local/zookeeper
4) Rename the sample config file and append the following at the end
nn01 ~]# cd /usr/local/zookeeper/conf/
nn01 conf]# ls
configuration.xsl log4j.properties zoo_sample.cfg
nn01 conf]# mv zoo_sample.cfg zoo.cfg
nn01 conf]# chown root.root zoo.cfg
nn01 conf]# vim zoo.cfg # append at the end
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
server.4=nn01:2888:3888:observer
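As a quick sanity check of these entries, the server lines can be parsed with awk. In each entry, 2888 is the follower-to-leader data port and 3888 the election port; a trailing `:observer` marks a non-voting member. This is an illustrative sketch run on a throwaway copy, not part of the lab steps:

```shell
# Sketch: list each server and whether it votes, from the zoo.cfg entries above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
server.4=nn01:2888:3888:observer
EOF
# Split on '=' and ':'; field 2 is the host, field 5 (if any) is "observer".
roles=$(awk -F'[=:]' '/^server\./{print $2, ($5=="observer" ? "observer" : "participant")}' "$cfg")
echo "$roles"
rm -f "$cfg"
```

The three voting participants form the quorum; the observer on nn01 serves reads without affecting elections.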
5) Copy /usr/local/zookeeper to the other cluster hosts (run `jobs -l` on nn01 to make sure the sync has finished)
nn01 conf]# for i in {22..24}; do rsync -aSH --delete /usr/local/zookeeper/ 192.168.1.$i:/usr/local/zookeeper -e 'ssh' & done
[4] 4956
[5] 4957
[6] 4958
6) Create /tmp/zookeeper on every host
nn01 conf]# mkdir /tmp/zookeeper
nn01 conf]# ssh node1 mkdir /tmp/zookeeper
nn01 conf]# ssh node2 mkdir /tmp/zookeeper
nn01 conf]# ssh node3 mkdir /tmp/zookeeper
7) Create the myid file; the id must match the server.(id) entry for that hostname in the config file
nn01 conf]# echo 4 >/tmp/zookeeper/myid
nn01 conf]# ssh node1 'echo 1 >/tmp/zookeeper/myid'
nn01 conf]# ssh node2 'echo 2 >/tmp/zookeeper/myid'
nn01 conf]# ssh node3 'echo 3 >/tmp/zookeeper/myid'
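The host-to-id mapping used above can be made explicit with a small helper; `myid_for` is a hypothetical function name introduced only for illustration:

```shell
# Hypothetical helper: derive the myid value for a host so that it matches
# the server.N entries in zoo.cfg (node1->1, node2->2, node3->3, nn01->4).
myid_for() {
    case "$1" in
        node[1-3]) echo "${1#node}" ;;                      # nodeN uses id N
        nn01)      echo 4 ;;                                # the observer gets id 4
        *)         echo "unknown host: $1" >&2; return 1 ;;
    esac
}
myid_for node2    # prints 2
```

A mismatch between myid and the server.N entry prevents the node from joining the ensemble, so keeping the mapping in one place helps avoid typos.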
8) Start the service. A single node cannot report its status; the whole cluster must be up before status is available. Start the service manually on every host (nn01 shown as an example)
nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Note: checking the status right after starting zookeeper reports an error; more than half of the nodes must be running, after which the status check succeeds
9) Check the status
nn01 conf]# jps
13200 Jps
12690 NameNode
12882 SecondaryNameNode
13173 QuorumPeerMain
node1 ~]# jps
10641 DataNode
10823 Jps
10794 QuorumPeerMain
nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh status
Mode: observer
node1 ~]# /usr/local/zookeeper/bin/zkServer.sh status
Mode: follower
node2 ~]# /usr/local/zookeeper/bin/zkServer.sh status
Mode: leader
node3 ~]# /usr/local/zookeeper/bin/zkServer.sh status
Mode: follower
// stop nn01 and then check the roles of the other servers
nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh stop
... STOPPED
nn01 conf]# yum -y install telnet
nn01 conf]# telnet node3 2181
Trying 192.168.1.24...
Connected to node3.
Escape character is '^]'.
ruok // send the four-letter command ruok ("are you ok")
imokConnection closed by foreign host. // imok is the reply
10) Check the status through the client port with a script (on nn01)
nn01 conf]# /usr/local/zookeeper/bin/zkServer.sh start
nn01 conf]# vim api.sh
#!/bin/bash
function getstatus(){
    exec 9<>/dev/tcp/$1/2181 2>/dev/null      # open a TCP connection to the client port
    echo stat >&9                             # send the four-letter command 'stat'
    MODE=$(cat <&9 |grep -Po "(?<=Mode:).*")  # extract the Mode: line from the reply
    exec 9<&-                                 # close the descriptor
    echo ${MODE:-NULL}
}
for i in node{1..3} nn01;do
    echo -ne "${i}\t"
    getstatus ${i}
done
nn01 conf]# chmod 755 api.sh
nn01 conf]# ./api.sh
node1 follower
node2 leader
node3 follower
nn01 observer
2. Kafka Cluster Lab
Build a Kafka cluster on top of Zookeeper
Create a topic
Simulate a producer publishing messages
Simulate a consumer receiving messages
2.1 Build the Kafka Cluster
1) Extract the kafka tarball
Kafka only needs to be set up on node1, node2, node3
node1 ~]# tar -xf kafka_2.10-0.10.2.1.tgz
2) Move kafka to /usr/local/kafka
node1 ~]# mv kafka_2.10-0.10.2.1 /usr/local/kafka
3) Edit /usr/local/kafka/config/server.properties
node1 ~]# cd /usr/local/kafka/config
node1 config]# vim server.properties
broker.id=22
zookeeper.connect=node1:2181,node2:2181,node3:2181
4) Copy kafka to the other hosts and change broker.id (the ids must not repeat)
node1 config]# for i in 23 24; do rsync -aSH --delete /usr/local/kafka 192.168.1.$i:/usr/local/ & done
[1] 27072
[2] 27073
node2 ~]# vim /usr/local/kafka/config/server.properties
// change on node2
broker.id=23
node3 ~]# vim /usr/local/kafka/config/server.properties
// change on node3
broker.id=24
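Editing each copy by hand works, but the same change can be scripted with sed. The `set_broker_id` name and the temp-file demo below are illustrative only; the real file is /usr/local/kafka/config/server.properties on each node:

```shell
# Sketch: rewrite the broker.id= line in a server.properties file.
# Assumes the file already contains a broker.id= line, as the sample config does.
set_broker_id() {             # $1 = properties file, $2 = new id
    sed -i "s/^broker\.id=.*/broker.id=$2/" "$1"
}

cfg=$(mktemp)                 # demo on a throwaway copy, not the real config
echo 'broker.id=0' > "$cfg"
set_broker_id "$cfg" 23
grep '^broker.id=' "$cfg"     # prints broker.id=23
rm -f "$cfg"
```

Combined with ssh, this would let one loop assign 22/23/24 to node1/node2/node3 right after the rsync.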
5) Start the kafka cluster (on node1, node2, node3)
node1 local]# /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
node1 local]# jps // Kafka should appear
26483 DataNode
27859 Jps
27833 Kafka
26895 QuorumPeerMain
6) Verify the setup by creating a topic
node1 local]# /usr/local/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --zookeeper node3:2181 --topic aa
Created topic "aa".
7) Simulate a producer and publish messages (start the consumer in step 8 first, then type several messages)
node2 ~]# /usr/local/kafka/bin/kafka-console-producer.sh \
--broker-list node2:9092 --topic aa // type some data
ccc
ddd
8) Simulate a consumer and receive the messages
node3 ~]# /usr/local/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server node1:9092 --topic aa // the messages appear here immediately
ccc
ddd
Note: kafka is memory-hungry; you can stop it after finishing this lab
3. Hadoop High Availability
Configure high availability for Hadoop
Edit the configuration files
Configuring HA removes the NameNode single point of failure. Reuse the hadoop cluster built earlier and add one new host, nn02, with IP 192.168.1.25; the existing node4 host can be repurposed for this. The layout is shown in Figure 1.
3.1 Hadoop High Availability
Stop all services
ALL: stop all hadoop services, then reboot (node1, node2, node3)
ALL: clear /var/hadoop on every host (rm -rf /var/hadoop/* on node1, node2, node3)
1) Stop all services on nn01
nn01 ~]# cd /usr/local/hadoop/
nn01 hadoop]# ./sbin/stop-all.sh // stop all services
2) Start zookeeper (one host at a time; nn01 shown as an example)
nn01 hadoop]# /usr/local/zookeeper/bin/zkServer.sh start
nn01 hadoop]# sh /usr/local/zookeeper/conf/api.sh // check with the script written earlier
node1 follower
node2 leader
node3 follower
nn01 observer
3) Add a new machine nn02; the existing node4 can be renamed to nn02
node4 ~]# echo nn02 > /etc/hostname
node4 ~]# hostname nn02
4) Edit /etc/hosts
nn01 hadoop]# vim /etc/hosts
192.168.1.21 nn01
192.168.1.25 nn02
192.168.1.22 node1
192.168.1.23 node2
192.168.1.24 node3
5) Sync to nn02, node1, node2, node3
nn01 hadoop]# for i in {22..25}; do rsync -aSH --delete /etc/hosts 192.168.1.$i:/etc/hosts -e 'ssh' & done
[1] 14355
[2] 14356
[3] 14357
[4] 14358
6) Configure SSH trust
Note: nn01 and nn02 must reach each other without a password; nn02 must also reach itself and node1, node2, node3 without a password
nn02 ~]# vim /etc/ssh/ssh_config
Host *
GSSAPIAuthentication yes
StrictHostKeyChecking no
// copy nn01's public and private key to nn02
nn01 hadoop]# cd /root/.ssh/
nn01 .ssh]# scp id_rsa id_rsa.pub nn02:/root/.ssh/
7) Delete /var/hadoop/* on all hosts
nn01 .ssh]# rm -rf /var/hadoop/*
nn01 .ssh]# ssh nn02 rm -rf /var/hadoop/*
nn01 .ssh]# ssh node1 rm -rf /var/hadoop/*
nn01 .ssh]# ssh node2 rm -rf /var/hadoop/*
nn01 .ssh]# ssh node3 rm -rf /var/hadoop/*
8) Configure core-site
nn01 .ssh]# vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://nsdcluster</value>
// nsdcluster is an arbitrary name; it acts as a group, and clients access the group
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/hadoop</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>node1:2181,node2:2181,node3:2181</value> // zookeeper addresses
</property>
<property>
<name>hadoop.proxyuser.nfs.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.nfs.hosts</name>
<value>*</value>
</property>
</configuration>
9) Configure hdfs-site
nn01 ~]# vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>nsdcluster</value>
</property>
<property>
<name>dfs.ha.namenodes.nsdcluster</name>
// the names nn1 and nn2 are fixed built-in variables; nsdcluster contains nn1 and nn2
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nsdcluster.nn1</name>
// declares nn1; 8020 is nn01's RPC port
<value>nn01:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.nsdcluster.nn2</name>
// declares nn2; nn02's RPC port
<value>nn02:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.nsdcluster.nn1</name>
// nn01's HTTP port
<value>nn01:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.nsdcluster.nn2</name>
// nn02's HTTP port
<value>nn02:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
// path where namenode metadata is stored on the journalnodes
<value>qjournal://node1:8485;node2:8485;node3:8485/nsdcluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
// path where the journalnode stores its log files
<value>/var/hadoop/journal</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.nsdcluster</name>
// java class HDFS clients use to connect to the active namenode
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name> // use ssh as the fencing method
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name> // location of the private key
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name> // enable automatic failover
<value>true</value>
</property>
</configuration>
10) Configure yarn-site
nn01 ~]# vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name> // rm1 and rm2 map to nn01 and nn02
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>node1:2181,node2:2181,node3:2181</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yarn-ha</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>nn01</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>nn02</value>
</property>
</configuration>
11) Sync to nn02, node1, node2, node3
nn01 ~]# for i in {22..25}; do rsync -aSH --delete /usr/local/hadoop/ 192.168.1.$i:/usr/local/hadoop -e 'ssh' & done
[1] 25411
[2] 25412
[3] 25413
[4] 25414
12) Delete /usr/local/hadoop/logs on every machine to simplify troubleshooting
nn01 ~]# for i in {21..25}; do ssh 192.168.1.$i rm -rf /usr/local/hadoop/logs ; done
13) Sync the config (run `jobs -l` on nn01 to make sure the sync has finished)
nn01 ~]# for i in {22..25}; do rsync -aSH --delete /usr/local/hadoop 192.168.1.$i:/usr/local/hadoop -e 'ssh' & done
[1] 28235
[2] 28236
[3] 28237
[4] 28238
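Instead of polling with `jobs -l`, the background sync jobs can be awaited with the shell built-in `wait`. The loop below is a dry-run sketch of that pattern; the `:` no-op stands in for the real rsync so nothing is actually copied:

```shell
# Sketch: launch the per-host sync jobs in the background, then block
# until every one of them has exited before moving on.
for i in {22..25}; do
    : rsync -aSH --delete /usr/local/hadoop 192.168.1.$i:/usr/local/hadoop &   # ':' makes this a no-op dry run
done
wait                          # returns only after all background jobs finish
echo "sync finished"
```

This guarantees the configs are identical on all hosts before the format and start steps that follow.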
4. HA Verification
Initialize the cluster
Verify the cluster
4.1 Verify Hadoop HA
1) Initialize the ZK cluster
nn01 ~]# /usr/local/hadoop/bin/hdfs zkfc -formatZK
.. Successfully in the output indicates success
2) Start the journalnode service on node1, node2, node3 (node1 shown as an example)
node1 ~]# /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode...
node1 ~]# jps
29262 JournalNode
26895 QuorumPeerMain
29311 Jps
3) Format the namenode; the journalnodes on node1, node2, node3 must be started first
nn01 ~]# /usr/local/hadoop/bin/hdfs namenode -format
// Successfully in the output indicates success
nn01 hadoop]# ls /var/hadoop/
dfs
4) Sync the data to nn02's local /var/hadoop/dfs
nn02 ~]# cd /var/hadoop/
nn02 hadoop]# ls
nn02 hadoop]# rsync -aSH nn01:/var/hadoop/ /var/hadoop/
nn02 hadoop]# ls
dfs
5) Initialize the JNs
nn01 hadoop]# /usr/local/hadoop/bin/hdfs namenode -initializeSharedEdits
18/09/11 16:26:15 INFO client.QuorumJournalManager: Successfully started new epoch 1 // Successfully means a new epoch was started
6) Stop the journalnode service (node1, node2, node3)
node1 hadoop]# /usr/local/hadoop/sbin/hadoop-daemon.sh stop journalnode
node1 hadoop]# jps
29346 Jps
26895 QuorumPeerMain
4.2 Start the Cluster
1) On nn01: start the whole cluster
nn01 hadoop]# /usr/local/hadoop/sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
2) On nn02:
nn02 hadoop]# /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager
starting resourcemanager...
3) Check the cluster state
nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
active
nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn2
standby
nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1
active
nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm2
standby
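In a monitoring script, the two reported states can be reduced to the name of the active node. `active_nn` below is a hypothetical helper written for this sketch, not a hadoop command:

```shell
# Hypothetical helper: given the states printed by
# 'hdfs haadmin -getServiceState nn1' and '... nn2', name the active NameNode.
active_nn() {                 # $1 = state of nn1, $2 = state of nn2
    if   [ "$1" = active ]; then echo nn01
    elif [ "$2" = active ]; then echo nn02
    else echo none            # neither active: cluster is failing over or down
    fi
}
active_nn active standby      # prints nn01
```

The same pattern applies to rm1/rm2 with `yarn rmadmin -getServiceState`.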
4) Check that the nodes have joined
nn01 hadoop]# /usr/local/hadoop/bin/hdfs dfsadmin -report
Live datanodes (3): // three nodes should be listed
nn01 hadoop]# /usr/local/hadoop/bin/yarn node -list
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
node3:41977 RUNNING node3:8042 0
node1:39982 RUNNING node1:8042 0
node2:42212 RUNNING node2:8042 0
4.3 Access the Cluster
1) List and create
nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -ls /
nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -mkdir /aa // create /aa
nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -ls / // list again
Found 1 items
drwxr-xr-x - root supergroup 0 2019-03-14 16:32 /aa
nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -put *.txt /aa
nn01 hadoop]# /usr/local/hadoop/bin/hadoop fs -ls hdfs://nsdcluster/aa
// the files can also be listed this way
Found 3 items
-rw-r--r-- 2 root supergroup 86424 2019-03-14 16:33 hdfs://nsdcluster/aa/LICENSE.txt
-rw-r--r-- 2 root supergroup 14978 2019-03-14 16:33 hdfs://nsdcluster/aa/NOTICE.txt
-rw-r--r-- 2 root supergroup 1366 2019-03-14 16:33 hdfs://nsdcluster/aa/README.txt
2) Verify HA: stop the active namenode
nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
active
nn01 hadoop]# /usr/local/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
// querying nn1 again now reports an error
nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn2
// nn02 has switched from standby to active
active
nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1
active
nn01 hadoop]# /usr/local/hadoop/sbin/yarn-daemon.sh stop resourcemanager
// stop the resourcemanager
nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm2
active
3) Restore the nodes
nn01 hadoop]# /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
// start the namenode
nn01 hadoop]# /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager
// start the resourcemanager
Check:
nn01 hadoop]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
standby
nn01 hadoop]# /usr/local/hadoop/bin/yarn rmadmin -getServiceState rm1
standby
######################
Daily summary:
01: Hadoop common components
Hadoop is a software platform for analyzing and processing massive data; it is written in Java and provides a distributed infrastructure.
High reliability, high scalability, high efficiency, high fault tolerance, low cost.
Common components:
HDFS: distributed file system (core component, storage)
MapReduce: distributed computing framework (core component)
Yarn: cluster resource management system (core component)
Zookeeper: distributed coordination service
What if data is lost or a machine breaks? NameNode, SecondaryNameNode, and ResourceManager are all single points of failure (each holds data, logs, etc.); losing one can lose all the data.
zookeeper: real-time backup
A distributed, open-source application coordination service that guarantees transactional consistency of data across the cluster. It is used for:
distributed cluster locks (mutex and shared locks), unified naming service, and distributed coordination.
zookeeper: roles and election
Leader: master (can read and write); accepts all Follower proposal requests, coordinates the proposal votes, and exchanges internal data with all Followers.
Follower: slave (read-only); serves clients directly, votes on proposals, and exchanges data with the Leader.
Observer: slave; serves clients directly but does not vote on proposals; it also exchanges data with the Leader.
Election: a majority of n/2+1 machines elects the Leader; when the Leader dies a new election is held; if fewer than n/2+1 machines remain, the whole cluster stops working.
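The n/2+1 majority rule is simple integer arithmetic; a one-line sketch makes the failure-tolerance consequences concrete:

```shell
# Minimum number of voting servers that must survive for the ensemble
# to keep working: floor(n/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 3      # prints 2 -> a 3-node ensemble tolerates 1 failure
quorum 4      # prints 3 -> a 4-node ensemble still only tolerates 1
quorum 5      # prints 3 -> a 5-node ensemble tolerates 2
```

This is why ensembles use odd voter counts: adding a fourth voter raises the quorum without improving fault tolerance, which is also why nn01 joins only as a non-voting observer.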
zookeeper: how it works
Client reads: answered directly from each server's local database replica;
Client writes: forwarded to the Leader, which initiates a vote and collects the results; once a majority agrees, the Leader notifies all servers, applies the update in memory, and replies to the Client.
With too many voting Followers the vote messages grow and efficiency drops, which is why the Observer exists: it accepts client connections and forwards write requests to the Leader, but does not vote and only waits for the result.
High availability: stop the Leader and watch the server roles (a new Leader is elected)
02: Kafka cluster, a message queue used in development
Kafka: a distributed messaging system developed at LinkedIn, written in Scala; a kind of message-oriented middleware.
Purpose: decoupling, redundancy, better scalability, buffering; ordering guarantees, flexibility, peak shaving; asynchronous communication.
Message queue: add a cache for reads, add a queue for writes
Chained programs: A --send message--> B --execute--> C --execute--> ...
A sends messages (A1, A2, ...) into the queue, where they are processed one by one --> B --execute--> C --execute--> ...
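The decoupling idea can be illustrated with a plain named pipe: the producer enqueues its messages independently of when the consumer drains them. This is a toy sketch using a local FIFO, not Kafka:

```shell
# Toy queue: a producer writes messages into a FIFO in the background;
# the consumer reads them whenever it is ready.
q=$(mktemp -u)                # path for the FIFO (the file must not exist yet)
mkfifo "$q"
( for m in A1 A2 A3; do echo "$m"; done > "$q" ) &   # producer
msgs=$(cat "$q")              # consumer drains the queue
wait                          # reap the producer
rm -f "$q"
echo "$msgs"                  # prints A1 A2 A3 on separate lines
```

Kafka adds what the FIFO lacks: persistence, replication across brokers, and many independent consumers per topic.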
Kafka roles:
Producer: the producer, responsible for sending messages
Consumer: the consumer, responsible for reading and processing messages
Topic: the category of a message; different messages go into different categories
Partition: each Topic contains one or more Partitions
Broker: a Kafka cluster contains one or more servers
Kafka uses Zookeeper to manage the cluster configuration and elect the Leader.
03: Hadoop high availability
NameNode HA (ZKFC, the HA daemon):
1>. HDFS with NFS: add an NFS server mounted by both the active and standby NameNode (NFS is the bottleneck)
NFS HA options:
a. inotify+rsync: ~10G
b. drbd+heartbeat: ~300G
c. hdfs || ceph
2>. HDFS with QJM: no shared storage is needed, but every DN must know both NNs and send its block reports and heartbeats to both the active and the standby NN (more complex configuration, more data traffic). The edit data lives in the JournalNode service (a cluster acting as shared public storage that either NN can read and write). If the active NN dies, the standby takes over. Both NNs keep communicating with a set of independent processes called JNs (JournalNodes). When the Active node updates the namespace (the file-to-block mapping), it sends the edit log to a majority of the JNs. The Standby reads these edits from the JNs, keeps watching for log changes, and applies them to its own namespace. When a failover happens, the Standby makes sure it has read all edits from the JNs before promoting itself to Active, so that its namespace is fully in sync with the Active's.
Split-brain: when communication between the nodes is cut, different DNs may see two Active NNs (only one may be alive! the JNs enforce that only one live NN can write log records).
FailoverController: health monitoring and prompt switchover.
3> Verify HA: store files in the cluster, kill the Active NN, then check that data analysis still works and the files still exist.
#####################################
Script:
#!/bin/bash
func_zj(){                    # create and start the lab VMs from the node.qcow2 backing image
cd /var/lib/libvirt/images/
for i in nn01 node{1..4} nfsgw
do
qemu-img create -f qcow2 -b node.qcow2 $i.img 20G
sed "s,node,$i," /etc/libvirt/qemu/node.xml > /etc/libvirt/qemu/$i.xml
virsh define /etc/libvirt/qemu/$i.xml
virsh start $i
for j in {5..0}
do
echo $j
sleep 1
done
done
}
func_spawn(){                 # configure one VM over virsh console with expect
expect <<EOF
spawn virsh console $1
expect " " {send "\r"}
expect "login" {send "root\r"}
expect "Password" {send "123456\r"}
expect "#" {send "export LANG=en_US\r"}
expect "#" {send "growpart /dev/vda 1\r"}
expect "#" {send "xfs_growfs /\r"}
expect "#" {send "echo '# Generated by dracut initrd
DEVICE=eth0
ONBOOT=yes
IPV6INIT=no
IPV4_FAILURE_FATAL=no
NM_CONTROLLED=no
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.1.2$2
PREFIX=24
GATEWAY=192.168.1.254' > /etc/sysconfig/network-scripts/ifcfg-eth0\r"}
expect "#" {send "lsblk\r"}
expect "#" {send "echo $1 > /etc/hostname\r"}
expect "#" {send "rm -rf /etc/yum.repos.d/*.repo\r"}
expect "#" {send "echo '192.168.1.21 nn01
192.168.1.22 node1
192.168.1.23 node2
192.168.1.24 node3
192.168.1.25 node4
192.168.1.26 nfsgw' > /etc/hosts\r"}
expect "#" {send "reboot\r"}
expect "#" {send "reboot\r"}
expect "#" {send "exit\r"}
EOF
}
func_sj(){                    # configure every VM in turn
z=0
for k in nn01 node{1..4} nfsgw
do
let z=z+1
func_spawn $k $z
done
}
echo "start"
func_sj
for j in {3..0}
do
echo $j
sleep 1
done
func_sj
echo "done"