⚡️ Speed up method InferencePipeline.init_with_workflow by 26% in PR #1759 (fix-modal-workspace-id-500)
#1761
⚡️ This pull request contains optimizations for PR #1759
If you approve this dependent PR, these changes will be merged into the original PR branch `fix-modal-workspace-id-500`.
📄 26% (0.26x) speedup for `InferencePipeline.init_with_workflow` in `inference/core/interfaces/stream/inference_pipeline.py`
⏱️ Runtime: 82.3 microseconds → 65.1 microseconds (best of 5 runs)
📝 Explanation and details
The optimized code achieves a 26% speedup through several strategic micro-optimizations that reduce computational overhead in hot paths:
Key Optimizations
1. String validation optimization in `get_workflow_specification()`
Replaced `re.match(r"^[\w\-]+$", workflow_id)` with a direct character-set lookup using `all(c in allowed for c in workflow_id)`.
2. File I/O optimization
Replaced the `local_file_path.exists()` check followed by an open with a direct `try`/`except` approach when opening the file.
3. FPS extraction streamlining in `WorkflowRunner.run_workflow()`
Reads `measured_fps` first, then falls back to `fps`.
4. Conditional assignment optimization in `init_with_custom_logic()`
Replaced the branching `desired_source_fps` assignment with a direct conditional expression: `desired_source_fps = max_fps if ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING else None`
Performance Impact Context
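The string validation change (item 1) can be sketched as follows. Only `workflow_id`, the regex, and the `all(c in allowed ...)` expression come from the description above; the precomputed `ALLOWED` set and the function names are illustrative. Note that Python's `\w` also matches non-ASCII word characters, so the two variants agree only for ASCII identifiers:

```python
import re
import string

# Hypothetical precomputed character set approximating the regex r"^[\w\-]+$"
ALLOWED = frozenset(string.ascii_letters + string.digits + "_-")

def is_valid_workflow_id_regex(workflow_id: str) -> bool:
    # Original approach: run the regex engine on every call
    return re.match(r"^[\w\-]+$", workflow_id) is not None

def is_valid_workflow_id_fast(workflow_id: str) -> bool:
    # Optimized approach: a plain membership test per character,
    # avoiding regex-engine overhead for short identifiers
    return bool(workflow_id) and all(c in ALLOWED for c in workflow_id)
```

For typical short workflow IDs the membership test avoids the regex machinery entirely, which is where micro-benchmarks tend to show the gain.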
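The file I/O change (item 2) is the classic EAFP pattern: attempt the open and handle the failure instead of stat-ing the path first. A hedged sketch with illustrative names; the real helper's name and return type may differ:

```python
import json
from pathlib import Path
from typing import Optional

def load_specification_checked(local_file_path: Path) -> Optional[dict]:
    # Original pattern: existence check, then open -- two filesystem calls,
    # plus a race window between the check and the open
    if local_file_path.exists():
        with local_file_path.open("r") as f:
            return json.load(f)
    return None

def load_specification_eafp(local_file_path: Path) -> Optional[dict]:
    # Optimized pattern: open directly and catch the failure,
    # saving one stat() call in the common (file exists) case
    try:
        with local_file_path.open("r") as f:
            return json.load(f)
    except FileNotFoundError:
        return None
```

Beyond speed, the EAFP version is also the more robust idiom, since the file cannot disappear between the check and the open.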
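Items 3 and 4 can be sketched together. The flag name `ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING` and the fields `measured_fps`/`fps` come from the description; the dict-based metadata and helper names are assumptions for illustration:

```python
from typing import Optional

# Flag name taken from the PR description; its real value lives in config
ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING = True

def extract_fps(video_metadata: dict) -> Optional[float]:
    # Prefer the measured FPS, falling back to the declared FPS
    # (note: `or` also treats a 0.0 measurement as missing)
    return video_metadata.get("measured_fps") or video_metadata.get("fps")

def resolve_desired_source_fps(max_fps: Optional[float]) -> Optional[float]:
    # A single conditional expression replaces a multi-branch assignment
    return max_fps if ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING else None
```

Both changes are branch-shape simplifications rather than algorithmic ones, which matches their small individual contribution to the overall speedup.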
Based on the function references, `InferencePipeline.init_with_workflow()` is called during pipeline initialization in video processing applications. While initialization happens only once per pipeline, the optimization particularly benefits scenarios where `get_workflow_specification()` is in the hot path. The test results show consistent 21-30% improvements across different initialization scenarios, with the string validation and file I/O optimizations providing the most substantial gains. These optimizations are especially effective for workflows using local file specifications, where the validation and file-access patterns are exercised frequently.
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, `git checkout codeflash/optimize-pr1759-2025-11-28T15.54.14` and push.