Installing and Deploying Kafka on CentOS 6.5
System Environment
Component | Version |
---|---|
CentOS | 6.5 (64-bit) |
zookeeper | 3.4.5 |
kafka | 2.10-0.8.1.1 |
Single-Node Installation
Download Kafka and extract it
tar zxvf kafka_2.10-0.8.1.1.tar.gz
cd kafka_2.10-0.8.1.1/
Start Kafka with the default configuration
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
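To confirm that both processes came up, a quick sanity check against the default ports (2181 for ZooKeeper, 9092 for Kafka) can be run; this is only a sketch and not part of the original steps:

# jps typically shows QuorumPeerMain (ZooKeeper) and Kafka
jps
# confirm the default listening ports are open
netstat -tlnp | grep -E '2181|9092'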
Create a topic named "test"
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# List topics
./bin/kafka-topics.sh --list --zookeeper localhost:2181
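If you want to see how the topic's partition and replica were assigned, the same script also supports --describe (a sketch; the output lists the leader, replicas, and ISR per partition):

./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test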
Start a console producer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Start a console consumer
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
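Whatever is typed into the producer terminal should show up in the consumer terminal. For a quick non-interactive round-trip test you can also pipe a message into the console producer (a sketch, assuming the broker is still on localhost:9092):

echo "hello kafka" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test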
Cluster Configuration
Create server-1.properties from server.properties and modify its parameters
cp config/server.properties config/server-1.properties
The main changes are:
broker.id=0
log.dirs=/home/hadoop/development/src/kafka_2.10-0.8.1.1/logs
zookeeper.connect=canbot130:2181,canbot131:2181,canbot132:2181
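Since zookeeper.connect now points at a three-node ensemble, it is worth verifying that each ZooKeeper server answers before starting any broker. A small sketch using ZooKeeper's built-in ruok four-letter command (assumes nc is installed):

for host in canbot130 canbot131 canbot132; do
  echo "== $host =="
  echo ruok | nc $host 2181; echo   # a healthy server replies "imok"
done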
After the changes are done, copy the directory to the other nodes
scp -r ./kafka_2.10-0.8.1.1/ hadoop@canbot131:/home/hadoop/development/src/
scp -r ./kafka_2.10-0.8.1.1/ hadoop@canbot132:/home/hadoop/development/src/
After copying, the broker.id in server-1.properties still needs to be adjusted on each node:
- 192.168.2.130: broker.id=0
- 192.168.2.131: broker.id=1
- 192.168.2.132: broker.id=2
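One way to make that per-node change without opening an editor is a quick sed edit (a sketch; run the matching line on the corresponding host, from inside the kafka_2.10-0.8.1.1 directory):

# on canbot131
sed -i 's/^broker.id=.*/broker.id=1/' config/server-1.properties
# on canbot132
sed -i 's/^broker.id=.*/broker.id=2/' config/server-1.properties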
Start Kafka
Start the broker on each of the three nodes: canbot130, canbot131, and canbot132.
./kafka_2.10-0.8.1.1/bin/kafka-server-start.sh ./kafka_2.10-0.8.1.1/config/server-1.properties &
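Started like this, the broker is only backgrounded in the current shell and will stop when the session ends. If it should survive logout, nohup with a redirected log is one option (a sketch; the log path is just an example):

nohup ./kafka_2.10-0.8.1.1/bin/kafka-server-start.sh ./kafka_2.10-0.8.1.1/config/server-1.properties > /home/hadoop/kafka-server.log 2>&1 &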
Create a Cluster Topic
[hadoop@canbot130 kafka_2.10-0.8.1.1]$ ./bin/kafka-topics.sh --create --zookeeper canbot130:2181 --replication-factor 3 --partitions 1 --topic test
The following output indicates the topic was created successfully:
Created topic "test".
List topics
[hadoop@canbot130 kafka_2.10-0.8.1.1]$ ./bin/kafka-topics.sh --list --zookeeper canbot130:2181
test
[hadoop@canbot130 kafka_2.10-0.8.1.1]$
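To see how the three replicas of the topic were spread across the brokers, --describe can be run against the cluster as well (a sketch; it prints the leader, replica list, and ISR for each partition, and the exact assignment will differ per cluster):

[hadoop@canbot130 kafka_2.10-0.8.1.1]$ ./bin/kafka-topics.sh --describe --zookeeper canbot130:2181 --topic test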
Create a producer
./bin/kafka-console-producer.sh --broker-list canbot130:9092 --topic test
This command starts the producer; a consumer will then be created on the canbot132 node to check whether the messages are consumed.
Create a consumer
Run the following on the canbot132 node:
./bin/kafka-console-consumer.sh --zookeeper canbot130:2181 --topic test
Producing messages ==> consuming messages
The producer on canbot130:
[hadoop@canbot130 kafka_2.10-0.8.1.1]$ ./bin/kafka-console-producer.sh --broker-list canbot130:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
"holl"
[2016-05-31 04:27:30,770] INFO Closing socket connection to /192.168.2.130. (kafka.network.Processor)
"hao xiang shi tong bu l haha"
"test kafka"
The output produced by the consumer on canbot132:
[hadoop@canbot132 kafka_2.10-0.8.1.1]$ ./bin/kafka-console-consumer.sh --zookeeper canbot130:2181 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[2016-05-31 04:27:22,328] INFO Closing socket connection to /192.168.2.132. (kafka.network.Processor)
holl
hao xiang shi tong bu l haha
test kafka
Errors Encountered
Error 1
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/1. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
    at kafka.utils.ZkUtils$.registerBrokerInZk(ZkUtils.scala:205)
    at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:57)
    at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:44)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:103)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
    at kafka.Kafka$.main(Kafka.scala:46)
    at kafka.Kafka.main(Kafka.scala)
Resolution: this error is caused by a duplicate broker.id, i.e. the id configured in server.properties (server-1.properties here) is already in use by another broker.
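To track down which IDs are already registered, you can look directly in ZooKeeper and then re-check the configuration on each node (a sketch; the zkCli.sh path assumes ZooKeeper 3.4.5 is unpacked as zookeeper-3.4.5, and "ls /brokers/ids" is typed inside the zkCli shell):

# list the broker IDs currently registered in ZooKeeper
zookeeper-3.4.5/bin/zkCli.sh -server canbot130:2181
ls /brokers/ids
# on each Kafka node, confirm the configured id is unique
grep '^broker.id' kafka_2.10-0.8.1.1/config/server-1.properties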