@abhinavDhulipala (Contributor) commented Oct 30, 2025

Summary & Motivation

After setting timeouts in my k8s job spec, those aren't respected on the Dagster wait side: the k8s pipes client currently always kills jobs at 24 hours, with no way to override or modify this behavior, because it inherits the default of the underlying wait API. Debatably, this should either be hard-set to 0 or be overridable with a default of 0. The number of toggles we already have for workload timeouts is confusing, and this layer in Dagster just adds to that confusion. I think everyone will agree that hard-killing a job after 24 hours, without notifying the user explicitly ahead of time or giving them a way to override it, is confusing, especially to new users trying out pipes for the first time.

This PR preserves the current default behavior but makes it overridable.

How I Tested These Changes

TBD. I want to first gather feedback on what we'd like the default behavior to be. In fact, I think we should parameterize all Dagster-side wait timeouts (launch_timeout, terminate_timeout, and poll_rate), perhaps as a single TimeoutConfig parameter.

On our local deployment, I subclassed the client and overrode the run method with an extra parameter. It seems to work well.
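To make the TimeoutConfig idea above concrete, here is a minimal sketch of what such a parameter object could look like. The field names, defaults, and validation are assumptions for discussion, not Dagster's actual API; only the 24-hour wait default reflects the behavior described above.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class TimeoutConfig:
    """Groups the Dagster-side wait knobs into one object.

    All names and defaults here are illustrative; the real client
    internals may differ. A timeout of None means "wait forever".
    """

    launch_timeout: Optional[float] = None      # seconds to wait for the pod to start
    wait_timeout: Optional[float] = 86400.0     # the current hard-coded 24h job wait
    terminate_timeout: Optional[float] = 60.0   # seconds to wait for cleanup on terminate
    poll_rate: float = 10.0                     # seconds between status polls

    def __post_init__(self) -> None:
        # Fail fast on nonsensical values instead of hanging in the wait loop.
        for name in ("launch_timeout", "wait_timeout", "terminate_timeout"):
            value = getattr(self, name)
            if value is not None and value <= 0:
                raise ValueError(f"{name} must be positive or None, got {value}")
        if self.poll_rate <= 0:
            raise ValueError(f"poll_rate must be positive, got {self.poll_rate}")


# A client's run method could then accept timeouts: Optional[TimeoutConfig] = None
# and fall back to TimeoutConfig() when the caller passes nothing.
no_limit = TimeoutConfig(wait_timeout=None)  # opt out of the 24h kill entirely
```

A single frozen config object keeps the run signature stable as new knobs are added, and makes "no timeout" an explicit, visible choice rather than an undocumented default.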

Changelog

  • Feature: parameterize k8s terminate wait timeout in K8sPipeClient

@abhinavDhulipala changed the title from "[k8s] paramtrize wait timeout in k8s pipe client" to "[k8s] parameterize wait timeout in k8s pipe client" on Oct 30, 2025
