SnowFlake Distributed Auto-Increment IDs: A Cluster Implementation Based on Zookeeper
The simplest approach is to use UUIDs, but UUIDs are unordered.
Another option is a MySQL cluster, where per-node auto-increment step settings can keep the generated IDs unique and ordered across the cluster; however, whenever MySQL nodes are added or removed, the auto-increment step of every node has to be adjusted.
There are many distributed ID generation algorithms; Twitter's SnowFlake is a classic one, and the IDs it produces are roughly ordered overall.
For an introduction to how SnowFlake works, see the article 理解分布式id生成算法SnowFlake (Understanding the SnowFlake Distributed ID Generation Algorithm).
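As a quick refresher, a SnowFlake ID is a 64-bit long built from a millisecond timestamp, a data center ID, a worker ID and a per-millisecond sequence. The sketch below only illustrates the commonly used bit layout (1 unused sign bit, 41 timestamp bits, 5 data center bits, 5 worker bits, 12 sequence bits); it is not this project's IdWorker, and the epoch constant is just an example value.

```kotlin
// Simplified illustration of the usual SnowFlake bit layout (not the project's IdWorker).
fun composeId(timestampMs: Long, dataCenterId: Long, workerId: Long, sequence: Long): Long {
    val epoch = 1288834974657L                // arbitrary fixed start time (example value)
    return ((timestampMs - epoch) shl 22) or  // 41 timestamp bits above the lower 22 bits
            (dataCenterId shl 17) or          // 5 data center bits
            (workerId shl 12) or              // 5 worker bits
            sequence                          // 12 sequence bits
}
```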
As usual, the code first: SnowFlakeWithZK
With SnowFlakeWithZK you can easily set up an ID-generation service cluster; workerIds are managed through Zookeeper, so there is no need to keep adjusting the configuration of individual cluster nodes.
Usage
Installation
Download and unzip SnowFlakeWithZK-1.0.1.zip
Enter the extracted directory and run ./SnowFlakeWithZK.jar start
API
- GET http(s)://[host]:[port]/api/next/long returns the next ID as a long
- GET http(s)://[host]:[port]/api/next/hex returns the next ID in hexadecimal
- GET http(s)://[host]:[port]/api/next/bin returns the next ID in binary
- GET http(s)://[host]:[port]/api/parse/long/{id} parses a long ID
- GET http(s)://[host]:[port]/api/parse/hex/{id} parses a hexadecimal ID
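As a usage illustration, the sketch below calls the long endpoint using Java's built-in HttpClient from Kotlin; the host, port and response handling are assumptions, so adjust them to your deployment and to the actual response format.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Hypothetical address of a running SnowFlakeWithZK instance
    val base = "http://localhost:8080"
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder()
            .uri(URI.create("$base/api/next/long"))
            .GET()
            .build()
    // Print the raw body; the exact payload format depends on the service
    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    println("next id response: ${response.body()}")
}
```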
Standalone mode
Edit the RUN_ARGS parameter in SnowFlakeWithZK.conf and add --zookeeper.enable=false
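For illustration only, a standalone SnowFlakeWithZK.conf might contain a line like the one below (the file is sourced by the Spring Boot launch script; the workerId value is an example):

```shell
# SnowFlakeWithZK.conf -- standalone mode (example values)
RUN_ARGS="--zookeeper.enable=false --machineId.dataCenterId=16 --machineId.workerId=3"
```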
Cluster mode
With Zookeeper
Edit the RUN_ARGS parameter in SnowFlakeWithZK.conf and add --zookeeper.enable=true --zookeeper.url=[zookeeper-host]:[zookeeper-port]
Without Zookeeper
Edit the RUN_ARGS parameter in SnowFlakeWithZK.conf and add --zookeeper.enable=false --machineId.workerId=[your workerId]
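For illustration, the corresponding RUN_ARGS lines for the two cluster variants might look like this (host and IDs are placeholders):

```shell
# With Zookeeper managing the workerId
RUN_ARGS="--zookeeper.enable=true --zookeeper.url=zk-host:2181"

# Without Zookeeper: assign a distinct workerId to each instance yourself
RUN_ARGS="--zookeeper.enable=false --machineId.workerId=7"
```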
RUN_ARGS parameters
--server.port The HTTP port the service listens on
--machineId.dataCenterId Data center ID, 0~31, default 16
--machineId.workerId Instance ID, 0~31, default 0; takes effect only when --zookeeper.enable=false; different instances in the same data center must each use a distinct value
--zookeeper.enable Whether Zookeeper manages the workerId, default true
--zookeeper.url Zookeeper connection address, default localhost:2181; takes effect only when --zookeeper.enable=true
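These are ordinary Spring Boot command-line properties, so the same flags can also be passed directly when running the jar, for example (all values are examples):

```shell
java -jar SnowFlakeWithZK.jar \
  --server.port=8080 \
  --machineId.dataCenterId=16 \
  --zookeeper.enable=true \
  --zookeeper.url=localhost:2181
```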
Source code walkthrough
The project is built on Spring Boot and uses the @ConditionalOnProperty annotation to decide whether Zookeeper is used.
When zookeeper.enable is set to false, the worker is created from the machineId.workerId value in the configuration:
```kotlin
/**
 * Standalone configuration of the SnowFlake machine ID.
 *
 * Set zookeeper.enable = false
 */
@ConditionalOnProperty("zookeeper.enable", matchIfMissing = true, havingValue = "false")
@Configuration
class SingletonConfiguration {

    private val logger = LoggerFactory.getLogger(SingletonConfiguration::class.java)

    @Value("\${machineId.dataCenterId:16}")
    private var dataCenterId: Long = 16

    @Value("\${machineId.workerId:0}")
    private var workerId: Long = 0

    @Bean
    fun idWorker(): IdWorker {
        logger.info("Singleton Detected! Create IdWorker using SingletonConfiguration!")
        return IdWorker(workerId, dataCenterId)
    }
}
```
When zookeeper.enable is set to true, the application connects to Zookeeper at zookeeper.url, creates an ephemeral sequential node under a per-data-center parent node, and uses that node's sequence number as the workerId:
```kotlin
/**
 * Cluster configuration of the SnowFlake machine ID via Zookeeper.
 *
 * Set zookeeper.enable = true
 */
@ConditionalOnProperty("zookeeper.enable")
@Configuration
class ZKConfiguration {

    private val logger = LoggerFactory.getLogger(ZKConfiguration::class.java)

    @Value("\${zookeeper.url}")
    private lateinit var url: String

    @Value("\${machineId.dataCenterId:16}")
    private var dataCenterId: Long = 16

    @Bean
    @Primary
    fun idWorker(): IdWorker {
        logger.info("Zookeeper Detected! Create IdWorker using ZKConfiguration!")
        val client = CuratorFrameworkFactory.builder()
                .connectString(url)
                .sessionTimeoutMs(5000)
                .connectionTimeoutMs(5000)
                .retryPolicy(ExponentialBackoffRetry(1000, 3))
                .build()
        client.start()

        val parent = "/snowflake/$dataCenterId"
        val worker = "$parent/worker"
        client.checkExists().forPath(parent)
                ?: client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath(parent)

        // Derive the workerId from the sequence number of an ephemeral sequential node
        val name = client.create().creatingParentsIfNeeded().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(worker)
        val workerId = name.substring(worker.length).toLong()
        val idWorker = IdWorker(workerId, dataCenterId)

        // On reconnection, register a new node and refresh the workerId
        client.connectionStateListenable.addListener(ConnectionStateListener { _client: CuratorFramework, state: ConnectionState ->
            when (state) {
                ConnectionState.RECONNECTED -> {
                    val newName = _client.create().creatingParentsIfNeeded().withMode(CreateMode.EPHEMERAL_SEQUENTIAL).forPath(worker)
                    val newWorkerId = newName.substring(worker.length).toLong()
                    idWorker.workerId = newWorkerId
                    logger.info("ZK ReConnected. workerId changed: $newWorkerId")
                }
                ConnectionState.LOST, ConnectionState.SUSPENDED -> {
                    logger.warn("ZK is Abnormal. State is $state")
                }
                else -> {
                    logger.info("ZK State Changed: $state")
                }
            }
        })
        return idWorker
    }
}
```
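To make the workerId derivation concrete: Zookeeper appends a 10-digit, zero-padded sequence number to the node name, and the code strips the worker prefix and parses the rest. A minimal sketch with an illustrative node path:

```kotlin
fun main() {
    val worker = "/snowflake/16/worker"
    // Illustrative path as returned by create().withMode(EPHEMERAL_SEQUENTIAL)
    val createdPath = "/snowflake/16/worker0000000003"
    // Same extraction as in ZKConfiguration: drop the prefix, parse the zero-padded suffix
    val workerId = createdPath.substring(worker.length).toLong()
    println(workerId) // prints 3
}
```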
You can fork the project and easily integrate it into microservice frameworks such as Spring Cloud or Dubbo.
If this tool is helpful to you, a star is welcome.