HBase periodic compaction: the CompactionChecker class
The CompactionChecker chore periodically decides whether each store needs a compaction.
How often a given store is actually examined is governed by hbase.server.compactchecker.interval.multiplier (default 1000): the chore itself wakes up on the regionserver's regular schedule, but a store is only checked once every multiplier iterations.
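For a rough sense of how often a single store is checked, here is a minimal sketch of the interval arithmetic. It assumes the chore runs on hbase.server.thread.wakefrequency (commonly 10 s) with the default multiplier of 1000; both defaults are assumptions, check your own hbase-site.xml.

// Minimal sketch of the effective per-store check interval, assuming
// hbase.server.thread.wakefrequency = 10 s and
// hbase.server.compactchecker.interval.multiplier = 1000 (assumed defaults).
public class CompactionCheckInterval {
  public static void main(String[] args) {
    long wakeFrequencyMs = 10_000L;   // how often the chore itself runs
    long multiplier = 1_000L;         // iterations skipped between real checks of one store
    long perStoreCheckMs = wakeFrequencyMs * multiplier;
    System.out.println("A store is examined roughly every "
        + perStoreCheckMs / 1000 / 60 + " minutes"); // ~166 minutes (~2.8 hours)
  }
}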
@Override
protected void chore() {
  for (HRegion r : this.instance.onlineRegions.values()) {
    if (r == null) continue;
    for (Store s : r.getStores().values()) {
      try {
        long multiplier = s.getCompactionCheckMultiplier();
        assert multiplier > 0;
        if (iteration % multiplier != 0) continue;
        if (s.needsCompaction()) { // minor compaction check
          // Queue a compaction. Will recognize if major is needed.
          this.instance.compactSplitThread.requestSystemCompaction(r, s,
              getName() + " requests compaction");
        } else if (s.isMajorCompaction()) { // major compaction check
          if (majorCompactPriority == DEFAULT_PRIORITY
              || majorCompactPriority > r.getCompactPriority()) {
            this.instance.compactSplitThread.requestCompaction(r, s,
                getName() + " requests major compaction; use default priority", null);
          } else {
            this.instance.compactSplitThread.requestCompaction(r, s,
                getName() + " requests major compaction; use configured priority",
                this.majorCompactPriority, null);
          }
        }
      } catch (IOException e) {
        LOG.warn("Failed major compaction check on " + r, e);
      }
    }
  }
  iteration = (iteration == Long.MAX_VALUE) ? 0 : (iteration + 1);
}
The chore loops over all online regions and all of their stores.
For each store, if iteration % multiplier != 0 the store is skipped this round; only on every multiplier-th iteration is it actually examined.
Otherwise needsCompaction() is evaluated, which delegates to the compaction policy (RatioBasedCompactionPolicy by default).
A (minor) compaction is requested when the number of HFiles in the store that are not already being compacted is >= hbase.hstore.compaction.min.
public boolean needsCompaction(final Collection<StoreFile> storeFiles,
    final List<StoreFile> filesCompacting) {
  int numCandidates = storeFiles.size() - filesCompacting.size();
  return numCandidates >= comConf.getMinFilesToCompact();
}
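As a quick illustration of that threshold, the sketch below reproduces the same arithmetic outside of HBase. The value 3 for hbase.hstore.compaction.min is an assumption (it is the usual default), and the numbers in main() are made up.

// Minimal sketch of the needsCompaction() threshold, assuming
// hbase.hstore.compaction.min = 3 (assumed default).
public class NeedsCompactionSketch {
  static boolean needsCompaction(int storeFileCount, int filesCompacting, int minFilesToCompact) {
    int numCandidates = storeFileCount - filesCompacting; // files not already being compacted
    return numCandidates >= minFilesToCompact;            // same >= check as the policy
  }

  public static void main(String[] args) {
    System.out.println(needsCompaction(5, 0, 3)); // true  -> compaction requested
    System.out.println(needsCompaction(4, 2, 3)); // false -> only 2 free candidates
  }
}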
If a minor compaction is not needed, the checker then asks whether a major compaction is due.
HStore considers a major compaction when now minus the timestamp of the oldest store file (used as a proxy for the time of the last major compaction) exceeds the hbase.hregion.majorcompaction interval, with a random jitter added to that interval so that stores do not all major-compact at once after a restart.
If that condition holds and there is more than one HFile, a major compaction is triggered.
If that condition holds and there is exactly one HFile whose oldest timestamp is older than the TTL (i.e. the whole file has expired), a major compaction is also triggered.
public boolean isMajorCompaction(final Collection<StoreFile> filesToCompact) throws IOException {
  boolean result = false;
  long mcTime = getNextMajorCompactTime(filesToCompact);
  if (filesToCompact == null || filesToCompact.isEmpty() || mcTime == 0) {
    return result;
  }
  // TODO: Use better method for determining stamp of last major (HBASE-2990)
  long lowTimestamp = StoreUtils.getLowestTimestamp(filesToCompact);
  long now = System.currentTimeMillis();
  if (lowTimestamp > 0l && lowTimestamp < (now - mcTime)) {
    // now - (approximate) last major compaction time exceeds hbase.hregion.majorcompaction;
    // mcTime already includes a random jitter so restarts don't cluster major compactions.
    // Major compaction time has elapsed.
    long cfTtl = this.storeConfigInfo.getStoreFileTtl();
    if (filesToCompact.size() == 1) {
      // Single file
      StoreFile sf = filesToCompact.iterator().next();
      Long minTimestamp = sf.getMinimumTimestamp();
      long oldest = (minTimestamp == null) ? Long.MIN_VALUE : now - minTimestamp.longValue();
      if (sf.isMajorCompaction() && (cfTtl == HConstants.FOREVER || oldest < cfTtl)) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("Skipping major compaction of " + this
              + " because one (major) compacted file only and oldestTime " + oldest
              + "ms is < ttl=" + cfTtl);
        }
      } else if (cfTtl != HConstants.FOREVER && oldest > cfTtl) {
        // Only one HFile and its oldest cell is past the TTL: the whole file has expired,
        // so a major compaction is triggered to drop it.
        LOG.debug("Major compaction triggered on store " + this
            + ", because keyvalues outdated; time since last major compaction "
            + (now - lowTimestamp) + "ms");
        result = true;
      }
    } else {
      // More than one HFile: trigger a major compaction.
      if (LOG.isDebugEnabled()) {
        LOG.debug("Major compaction triggered on store " + this
            + "; time since last major compaction " + (now - lowTimestamp) + "ms");
      }
      result = true;
    }
  }
  return result;
}
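The jitter mentioned above comes from getNextMajorCompactTime(), which stretches hbase.hregion.majorcompaction by a per-store random offset so that regions do not all major-compact together after a restart. The sketch below only illustrates that idea and is not the actual HBase method; the 7-day period and 0.5 jitter are assumed defaults.

import java.util.Random;

// Minimal sketch (not the real HBase code) of a jittered major-compaction period,
// assuming hbase.hregion.majorcompaction = 7 days and
// hbase.hregion.majorcompaction.jitter = 0.5 (both treated as assumptions).
public class JitteredMajorCompactionPeriod {
  static final long BASE_PERIOD_MS = 7L * 24 * 60 * 60 * 1000; // hbase.hregion.majorcompaction
  static final double JITTER = 0.5;                            // hbase.hregion.majorcompaction.jitter

  // seed should be stable per store (e.g. derived from its store file names),
  // so the jittered value stays the same between successive checks
  static long nextMajorCompactTime(long seed) {
    if (BASE_PERIOD_MS <= 0) {
      return 0; // time-based major compactions disabled
    }
    long jitterRange = (long) (BASE_PERIOD_MS * JITTER);
    // deterministic per-store offset in roughly [-jitterRange/2, +jitterRange/2]
    long offset = new Random(seed).nextLong() % (jitterRange / 2 + 1);
    return BASE_PERIOD_MS + offset;
  }

  public static void main(String[] args) {
    // Two different stores end up with different effective periods,
    // so a cluster restart does not make them all major-compact at the same time.
    System.out.println(nextMajorCompactTime(42L));
    System.out.println(nextMajorCompactTime(1337L));
  }
}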