
Conversation

@bhavanap1131-cmd

- Split the pipe-concatenated namespace regex (e.g. ns=~"nsA|nsB|nsC") using QueryUtils.splitAtUnescapedPipes:
  - Single value -> Equals("nsA")
  - Multiple values -> In(Set("nsA", "nsB", "nsC"))
- Per-namespace attribution: when an In(...) filter is present, build a metric group-by per namespace and increment the counters (timeSeriesScanned, cpuNanos) accordingly; existing behavior elsewhere is preserved.
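
For concreteness, here is a minimal, self-contained sketch of the split-and-convert step. The object, the NsFilter types, and the local splitAtUnescapedPipes below are simplified stand-ins for illustration only, not FiloDB's actual QueryUtils and Filter classes:

```scala
// Illustrative sketch only -- simplified stand-ins for FiloDB's
// QueryUtils.splitAtUnescapedPipes and the Equals / In column-filter
// values described in the summary above.
object NamespaceRegexSplit {
  sealed trait NsFilter
  case class Equals(value: String)   extends NsFilter
  case class In(values: Set[String]) extends NsFilter

  // Split on '|' characters that are not preceded by a backslash.
  def splitAtUnescapedPipes(regex: String): Seq[String] =
    regex.split("""(?<!\\)\|""").toSeq

  // Single value -> Equals, multiple values -> In, as in the summary above.
  def toFilter(nsRegex: String): NsFilter =
    splitAtUnescapedPipes(nsRegex) match {
      case Seq(single) => Equals(single)
      case many        => In(many.toSet)
    }

  def main(args: Array[String]): Unit = {
    println(toFilter("nsA"))         // Equals(nsA)
    println(toFilter("nsA|nsB|nsC")) // In(Set(nsA, nsB, nsC))
  }
}
```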

Comment on lines +424 to +426
// per-namespace lookup purely for accounting
val recsForNs = partKeyIndex.partKeyRecordsFromFilters(
  perNsFilters, chunkMethod.startTime, chunkMethod.endTime)


This correctly gets the per-ns query stats, but it's an expensive index lookup that we've already done for the full pipe-concatenated set of namespaces (i.e. to serve the query). IIUC this is effectively a second full scan.

Our options might be pretty limited here-- we can:

  1. In the SingleClusterPlanner, split LogicalPlans with pipe-concatenated namespaces into one plan per namespace. Materialize each, then concatenate/aggregate. Benefit: filodb clusters can still be sent namespace regexes; the total count of remote plans remains low. Cost: risky planner rework.
  2. In the ShardKeyRegexPlanner, set the max count of pipe-concatenated namespaces to 1. Benefit: stupid-simple configuration change. Cost: count of remote plans will increase; query fanout increases.

I'm leaning towards option 2 (at least as a first attempt). The benefit we've seen from reduced fanout has been negligible, so I don't expect query latencies to suddenly increase. Also, we can gradually scale the max count of pipe-concatenated namespaces back to 1 and abort if query latencies spike.
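
To make the option-2 tradeoff concrete, here is a toy sketch (not planner code; remotePlanFilters and maxPerPlan are hypothetical names, not FiloDB's actual setting) of how the cap on pipe-concatenated namespaces trades per-plan namespace count against remote-plan fanout:

```scala
// Toy illustration of the option-2 tradeoff; "maxPerPlan" is a hypothetical
// stand-in for the ShardKeyRegexPlanner's max pipe-concatenation setting,
// not an actual FiloDB configuration key.
object FanoutSketch {
  // Group namespaces into batches; each batch becomes one remote plan whose
  // namespace filter is the pipe-concatenation of the batch.
  def remotePlanFilters(namespaces: Seq[String], maxPerPlan: Int): Seq[String] =
    namespaces.grouped(maxPerPlan).map(_.mkString("|")).toSeq

  def main(args: Array[String]): Unit = {
    val ns = Seq("nsA", "nsB", "nsC")
    // Current behavior: one remote plan carrying ns=~"nsA|nsB|nsC".
    println(remotePlanFilters(ns, maxPerPlan = 3)) // List(nsA|nsB|nsC)
    // Option 2 (max = 1): one plan per namespace -- fanout grows from 1 to 3,
    // but per-namespace accounting no longer needs a second index scan.
    println(remotePlanFilters(ns, maxPerPlan = 1)) // List(nsA, nsB, nsC)
  }
}
```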

What do you think @amolnayak311?
