
Commit 22bec4f

fix: full compactions not scheduled under some circumstances (#26668)
Fix an issue introduced in 1.12.1 that caused some TSM files not to be scheduled for full compaction when they should have been. The user-visible symptom was unbounded disk-space growth for impacted shards.

This simplifies the compaction planning code by removing the PT_SmartOptimize and PT_NoOptimize special cases. They are no longer needed for performance thanks to the first-block caching code that was also added in #26432. Bugs in the PT_SmartOptimize and PT_NoOptimize cases could leave TSM files locked by the compaction planner as in-use, preventing them from ever being compacted. The optimized compaction hold-off is now handled entirely in Engine.compact rather than by the compaction planner.

Update test cases as necessary for these changes, and add checks for the issue that caused #26667.

Closes: #26667
1 parent: c67028a

File tree

3 files changed: +96 −272 lines

tsdb/engine/tsm1/compact.go (8 additions, 0 deletions)

```diff
@@ -781,6 +781,14 @@ func (c *DefaultPlanner) Release(groups []CompactionGroup) {
 	}
 }
 
+// InUseCount returns the number of files currently locked as in-use.
+// This method is primarily useful for test validation.
+func (c *DefaultPlanner) InUseCount() int {
+	c.mu.RLock()
+	defer c.mu.RUnlock()
+	return len(c.filesInUse)
+}
+
 // Compactor merges multiple TSM files into new files or
 // writes a Cache into 1 or more TSM files.
 type Compactor struct {
```
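The new `InUseCount` accessor exists so tests can verify that files acquired by a compaction plan are released afterwards, rather than staying locked forever (the bug this commit fixes). The following self-contained sketch mimics that bookkeeping pattern with a simplified planner type; the `planner`, `acquire`, and `release` names are illustrative, not the real DefaultPlanner API.

```go
package main

import (
	"fmt"
	"sync"
)

// planner mimics the in-use bookkeeping DefaultPlanner performs.
// This is a simplified sketch, not the real type.
type planner struct {
	mu         sync.RWMutex
	filesInUse map[string]struct{}
}

func newPlanner() *planner {
	return &planner{filesInUse: map[string]struct{}{}}
}

// acquire marks files as in-use so overlapping plans cannot reuse them.
func (p *planner) acquire(files []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, f := range files {
		p.filesInUse[f] = struct{}{}
	}
}

// release unmarks files once their compaction group finishes,
// mirroring DefaultPlanner.Release.
func (p *planner) release(files []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, f := range files {
		delete(p.filesInUse, f)
	}
}

// InUseCount matches the accessor added in this commit: a read-locked
// count of files currently held by in-flight compactions.
func (p *planner) InUseCount() int {
	p.mu.RLock()
	defer p.mu.RUnlock()
	return len(p.filesInUse)
}

func main() {
	p := newPlanner()
	p.acquire([]string{"000001-01.tsm", "000002-01.tsm"})
	fmt.Println(p.InUseCount()) // 2 while the group is in flight

	p.release([]string{"000001-01.tsm", "000002-01.tsm"})
	fmt.Println(p.InUseCount()) // 0 after release: nothing stays locked
}
```

A test can assert the count returns to zero after a plan/release cycle, which is exactly the leak the removed PT_SmartOptimize and PT_NoOptimize paths could cause.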
