PlacementV2: Consolidate Placement into Scheduler #96
Conversation
Signed-off-by: Cassandra Coyle <[email protected]>
This seems like a hands-down win across the board. Consolidating the older Placement table into the newer Scheduler should significantly reduce network I/O, CPU, and memory usage, while benefiting from the robust performance improvements made to Scheduler over the last several releases.
Eligible voters: @dapr/maintainers-dapr
> - After that, subsequent changes are per-type: only the affected types must pause, update, and resume.
> Sidecar startup vs. steady-state
> - On new stream: Scheduler sends LOCK(all) → UPDATE(full snapshot: all types, versions per type) → UNLOCK(all).
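To make the startup sequence concrete, here is a minimal Go sketch of a sidecar replaying the three-message LOCK(all) → UPDATE(full snapshot) → UNLOCK(all) stream. All type, field, and function names (`TableOp`, `Sidecar`, `runSnapshot`, the actor-type names) are illustrative assumptions, not the real Scheduler API:

```go
package main

import "fmt"

type OpKind int

const (
	Lock OpKind = iota
	Update
	Unlock
)

// TableOp is one hypothetical message on the placement stream.
type TableOp struct {
	Kind     OpKind
	Versions map[string]uint64 // per-type table versions carried by an UPDATE
}

// Sidecar holds the state a daprd instance would keep per stream (sketch only).
type Sidecar struct {
	Paused   bool
	Versions map[string]uint64
}

func (s *Sidecar) Apply(op TableOp) {
	switch op.Kind {
	case Lock:
		s.Paused = true // pause placement-dependent calls
	case Update:
		for t, v := range op.Versions {
			s.Versions[t] = v // install the snapshot, type by type
		}
	case Unlock:
		s.Paused = false // resume actor invocations
	}
}

// runSnapshot replays the three-message startup sequence on a fresh sidecar.
func runSnapshot() *Sidecar {
	s := &Sidecar{Versions: map[string]uint64{}}
	for _, op := range []TableOp{
		{Kind: Lock},
		{Kind: Update, Versions: map[string]uint64{"OrderActor": 7, "CartActor": 3}},
		{Kind: Unlock},
	} {
		s.Apply(op)
	}
	return s
}

func main() {
	s := runSnapshot()
	fmt.Println(s.Paused, s.Versions["OrderActor"]) // false 7
}
```

The point of the sketch is that the full-table pause happens only once per stream; after that, the per-type flow below takes over.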
Do you think this will raise the startup time for daprd hosting actors? Or is it unchanged, since this was already happening in Placement v1?
The latter: startup time won't increase, because the full snapshot on new streams was already happening with the original Placement service.

We should actually see some improvement here, because we no longer need to hop between two separate control-plane services for actors and their reminders. With this proposal, everything funnels through the Scheduler service; fewer hops and connections should result in improvements.
> This enforces soft stickiness: per-type updates do not stop actors that still map to the local sidecar, which avoids
> unnecessary churn and short global pauses.
> So, during LOCK([T]) → UPDATE([T]) → UNLOCK([T]), only actors of types [T] that moved to a remote owner are drained and
> all others continue running. No namespace-wide drain.
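The soft-stickiness rule above can be sketched in Go: during LOCK([T]) → UPDATE([T]) → UNLOCK([T]), drain only those locally running actors of the updated types whose new owner is a remote host. The function, host names, and actor types here are hypothetical, not the actual daprd implementation:

```go
package main

import "fmt"

// actorsToDrain returns the IDs of locally active actors, restricted to the
// updated types [T], that the new table maps to a different owner. Actors of
// untouched types are never considered, so there is no namespace-wide drain.
func actorsToDrain(
	updatedTypes []string,
	local map[string][]string, // actorType -> locally active actor IDs
	owner func(actorType, id string) string, // lookup against the new table
	self string, // this sidecar's address
) []string {
	var drain []string
	for _, t := range updatedTypes {
		for _, id := range local[t] {
			if owner(t, id) != self {
				drain = append(drain, t+"/"+id)
			}
		}
	}
	return drain
}

func main() {
	local := map[string][]string{
		"OrderActor": {"a", "b"},
		"CartActor":  {"c"}, // not in [T]: never drained
	}
	// Pretend the new table moved only OrderActor/b to another host.
	owner := func(t, id string) string {
		if t == "OrderActor" && id == "b" {
			return "host-2"
		}
		return "host-1"
	}
	fmt.Println(actorsToDrain([]string{"OrderActor"}, local, owner, "host-1"))
	// Only OrderActor/b is drained; OrderActor/a and CartActor/c keep running.
}
```

This is what keeps per-type updates cheap: the pause is scoped to [T], and within [T] only the actors that actually moved pay any cost.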
This is a huge win 🎉
+1 binding

1 similar comment

+1 binding
Proposal to improve Actors and Workflows Reliability and Performance by consolidating Placement -> Scheduler