Conversation

@timvaillancourt
Contributor

@timvaillancourt timvaillancourt commented Sep 30, 2025

Description

Building on top of #18531, this PR addresses #18529 by adding support for EmergencyReparentShard to ignore a minority of lagging tablets ahead of the wait-for-relaylogs phase

Why? Severely lagging tablets can cause the ERS to fail due to a timeout waiting for relaylogs to apply, which before this PR must happen on ALL tablets 😬

Is it safe to ignore some candidates? Yes. Before we start the wait-for-relaylog phase we issue a StopReplicationAndGetStatus RPC, which stops the IO thread and returns the now-static GTID set of each candidate. This gives us an opportunity to "ignore" candidates we know will never be most-advanced, ahead of actually waiting for them to apply logs

This PR adds "modes" to ERS to support the existing and future-desired behaviours, and to allow the existing "all tablets" behaviour to remain the default.

The modes:

  1. ALL (default) - wait for all tablets to apply relaylogs, like <= v22
  2. MAJORITY - wait for only a majority of most-advanced tablets
  3. COUNT - wait for an exact number of tablets. Count specified by additional flag/RPC field
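The modes above boil down to "how many candidates must we wait on?". A minimal sketch of that decision, assuming hypothetical mode constants and a helper name (`tabletsToWaitFor`) that are illustrative only and not the actual protobuf enum or Vitess code:

```go
package main

import "fmt"

// WaitForRelayLogsMode is an illustrative stand-in for the mode enum this PR
// describes; the constant names mirror the PR description, not the real proto.
type WaitForRelayLogsMode int

const (
	ModeAll WaitForRelayLogsMode = iota
	ModeMajority
	ModeCount
)

// tabletsToWaitFor returns how many candidates each mode would wait on.
// total is the number of reachable candidates; count is only used by ModeCount.
func tabletsToWaitFor(mode WaitForRelayLogsMode, total, count int) int {
	switch mode {
	case ModeMajority:
		return total/2 + 1 // a strict majority of candidates
	case ModeCount:
		if count > total {
			return total
		}
		return count
	default: // ModeAll: the <= v22 behaviour, wait for everyone
		return total
	}
}

func main() {
	fmt.Println(tabletsToWaitFor(ModeAll, 5, 0))      // 5
	fmt.Println(tabletsToWaitFor(ModeMajority, 5, 0)) // 3
	fmt.Println(tabletsToWaitFor(ModeCount, 5, 2))    // 2
}
```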

I'm on the fence about whether keeping ALL as the default is the best idea, because the existing ALL behaviour is risky in itself. I think the MAJORITY behaviour is probably safer and an improvement, but I'm hesitant to make it the default just yet

If we decide to keep ALL as the default in v23, we should have a plan for MAJORITY becoming the default later. I've added a DEFAULT (0) mode to the protobuf to potentially support this transition. Thoughts appreciated!

Related Issue(s)

Resolves: #18529

Checklist

  • "Backport to:" labels have been added if this change should be back-ported to release branches
  • If this change is to be back-ported to previous releases, a justification is included in the PR description
  • Tests were added or are not required
  • Did the new or modified tests pass consistently locally and on CI?
  • Documentation was added or is not required

Deployment Notes

AI Disclosure

AI creeps me out

@vitess-bot
Contributor

vitess-bot bot commented Sep 30, 2025

Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

General

  • Ensure that the Pull Request has a descriptive title.
  • Ensure there is a link to an issue (except for internal cleanup and flaky test fixes); new features should have an RFC that documents use cases and test cases.

Tests

  • Bug fixes should have at least one unit or end-to-end test, enhancement and new features should have a sufficient number of tests.

Documentation

  • Apply the release notes (needs details) label if users need to know about this change.
  • New features should be documented.
  • There should be some code comments as to why things are implemented the way they are.
  • There should be a comment at the top of each new or modified test to explain what the test does.

New flags

  • Is this flag really necessary?
  • Flag names must be clear and intuitive, use dashes (-), and have a clear help text.

If a workflow is added or modified:

  • Each item in Jobs should be named in order to mark it as required.
  • If the workflow needs to be marked as required, the maintainer team must be notified.

Backward compatibility

  • Protobuf changes should be wire-compatible.
  • Changes to _vt tables and RPCs need to be backward compatible.
  • RPC changes should be compatible with vitess-operator
  • If a flag is removed, then it should also be removed from vitess-operator and arewefastyet, if used there.
  • vtctl command output order should be stable and awk-able.

@vitess-bot vitess-bot bot added NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work NeedsIssue A linked issue is missing for this Pull Request NeedsWebsiteDocsUpdate What it says labels Sep 30, 2025
@github-actions github-actions bot added this to the v23.0.0 milestone Sep 30, 2025
@timvaillancourt timvaillancourt added Type: Enhancement Logical improvement (somewhere between a bug and feature) Component: VTorc Vitess Orchestrator integration Component: vtctl and removed NeedsWebsiteDocsUpdate What it says NeedsBackportReason If backport labels have been applied to a PR, a justification is required NeedsIssue A linked issue is missing for this Pull Request labels Sep 30, 2025
@codecov

codecov bot commented Sep 30, 2025

Codecov Report

❌ Patch coverage is 60.57692% with 41 lines in your changes missing coverage. Please review.
✅ Project coverage is 67.51%. Comparing base (e87882e) to head (3e897f1).

Files with missing lines Patch % Lines
go/vt/vtorc/config/config.go 0.00% 32 Missing ⚠️
go/cmd/vtorc/cli/cli.go 0.00% 4 Missing ⚠️
go/vt/vtctl/reparentutil/util.go 91.66% 3 Missing ⚠️
go/vt/vtctl/grpcvtctldserver/server.go 84.61% 2 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main   #18707   +/-   ##
=======================================
  Coverage   67.51%   67.51%           
=======================================
  Files        1606     1606           
  Lines      263588   263681   +93     
=======================================
+ Hits       177953   178021   +68     
- Misses      85635    85660   +25     

☔ View full report in Codecov by Sentry.

@timvaillancourt timvaillancourt removed the NeedsDescriptionUpdate The description is not clear or comprehensive enough, and needs work label Sep 30, 2025
@timvaillancourt timvaillancourt marked this pull request as ready for review September 30, 2025 14:50
@timvaillancourt
Contributor Author

timvaillancourt commented Sep 30, 2025

Copying some conclusions from an offline discussion with @arthurschreiber:

  • As it stands, this PR will prevent a minority of lagging tablets from blocking ERS ✅
  • As this PR stands, ERS will still fail if a single tablet or a "majority" is picked and any of those tablets fails to apply its logs, because the code still expects 100% of the waited-on candidates to succeed 🟡
  • It would be ideal if ERS tried the "next-best" candidate. Today it does not 🟡
    • This means selecting 1+/majority may still be beneficial, once we know how to handle partial results
    • Call-out: in some cases picking a next-best candidate will result in an errant GTID. Today the code makes every effort to avoid this, to the point of erring on the side of failing the ERS. This is good for correctness but bad for availability, and users may weigh that tradeoff differently. It should probably be configurable

@mattlord
Member

mattlord commented Oct 1, 2025

The reason, AFAIUI, for the current behavior is to prevent any of the healthy tablets from becoming forever unhealthy/unusable due to the new primary not having binary logs covering/containing the GTIDs that the lagging replica(s) may still need (the new primary may have recently been restored from a backup and have minimal binary logs). Have you already thought about that in this context?

@timvaillancourt
Contributor Author

timvaillancourt commented Oct 1, 2025

The reason, AFAIUI, for the current behavior is to prevent any of the healthy tablets from becoming forever unhealthy/unusable due to the new primary not having binary logs covering/containing the GTIDs that the lagging replica(s) may still need (the new primary may have recently been restored from a backup and have minimal binary logs). Have you already thought about that in this context?

@mattlord I don't think that has changed, but I would appreciate you double-checking my assumption because that is very important functionality

One part of the code that could affect that was actually changed in #18531. Prior to that PR, the code called position.AtLeast(otherPos) directly on replication.Positions; that PR moved things to a wrapper (*reparentutil.RelayLogPositions) with the same method name (.AtLeast(...)) that compares 2 x replication.Positions: https://github.com/timvaillancourt/vitess/blob/main/go/vt/vtctl/reparentutil/replication.go#L55-L69

The TL;DR on that wrapper func: we do the same sort, but now prioritise positions with the most advanced SQL thread if the two combined sets are equal. In the end the same replication.Position comparison is used unchanged, and that is what decides which GTID set is larger
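A toy sketch of that tie-break, using plain integers as stand-ins for GTID positions. The type and field names (`relayLogPositions`, `combined`, `executed`) are illustrative only, not the real reparentutil types, and real comparisons go through replication.Position rather than integer ordering:

```go
package main

import "fmt"

// relayLogPositions is a toy analogue of the wrapper: it carries the combined
// relay-log position (what the IO thread fetched) and the SQL-thread
// (executed) position, here simplified to ints.
type relayLogPositions struct {
	combined int // combined relay-log position
	executed int // SQL-thread position
}

// atLeast compares combined positions first; if they are equal, the candidate
// with the more advanced SQL thread wins the tie.
func (p relayLogPositions) atLeast(other relayLogPositions) bool {
	if p.combined != other.combined {
		return p.combined > other.combined
	}
	return p.executed >= other.executed
}

func main() {
	a := relayLogPositions{combined: 10, executed: 9}
	b := relayLogPositions{combined: 10, executed: 7}
	fmt.Println(a.atLeast(b)) // true: equal combined sets, a's SQL thread is ahead
	fmt.Println(b.atLeast(a)) // false
}
```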

And in terms of being certain we're ignoring the right tablets: after StopReplicationAndGetStatus is run, replication is stopped, we know the GTID sets, and replication is not restarted until after the post-wait-for-relaylogs candidate selection. That selection uses the same sort logic as this optimization. So the idea is: because replication isn't moving and we know all the GTIDs each candidate could potentially apply when asked, we already know the GTID sets each candidate could have after the wait-for-relaylogs phase. This means they can be filtered before the apply phase instead of after. Or TL;DR: we already know the losers after running the StopReplicationAndGetStatus RPCs (via the After-field GTID sets)
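A minimal sketch of that pre-wait filter. GTID sets are modelled as toy sets of transaction ids and the names (`candidate`, `covers`, `keepMostAdvanced`) are hypothetical, not the real Vitess code, which works on replication.Position values; the point is only that with static sets we can rank candidates and drop the guaranteed losers before waiting on anyone:

```go
package main

import (
	"fmt"
	"sort"
)

// candidate pairs a tablet alias with the static GTID set returned in the
// After field of StopReplicationAndGetStatus (toy representation).
type candidate struct {
	alias string
	gtids map[int]bool
}

// covers reports whether a's GTID set contains all of b's.
func covers(a, b map[int]bool) bool {
	for g := range b {
		if !a[g] {
			return false
		}
	}
	return true
}

// keepMostAdvanced sorts candidates so that strictly-more-advanced GTID sets
// come first, then keeps only the top n. Because replication is stopped, the
// dropped candidates can never become most-advanced.
func keepMostAdvanced(cands []candidate, n int) []candidate {
	sort.SliceStable(cands, func(i, j int) bool {
		return covers(cands[i].gtids, cands[j].gtids) && !covers(cands[j].gtids, cands[i].gtids)
	})
	if n > len(cands) {
		n = len(cands)
	}
	return cands[:n]
}

func main() {
	cands := []candidate{
		{"zone1-101", map[int]bool{1: true, 2: true}},
		{"zone1-102", map[int]bool{1: true, 2: true, 3: true, 4: true}},
		{"zone1-103", map[int]bool{1: true}}, // badly lagging: ignored
	}
	for _, c := range keepMostAdvanced(cands, 2) {
		fmt.Println(c.alias) // zone1-102, then zone1-101
	}
}
```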

@mattlord mattlord self-assigned this Oct 2, 2025
@systay systay modified the milestones: v23.0.0, v24.0.0 Oct 8, 2025

Labels

Component: vtctl Component: VTorc Vitess Orchestrator integration Type: Enhancement Logical improvement (somewhere between a bug and feature)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Bug Report/RFC: lagging tablet(s) can cause EmergencyReparentShard to fail

3 participants