
add task manager #27

@flagman9040

Description


based on https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/main/modules_forge/main_thread.py

original author's comment in the source code:

This file is the main thread that handles all gradio calls for major t2i or i2i processing.
Other gradio calls (like those from extensions) are not influenced.
By using one single thread to process all major calls, model moving is significantly faster.

and its commit:
lllyasviel/stable-diffusion-webui-forge@f06ba8e

from the commit message, the original author said:
This will move all major gradio calls into the main thread rather than random gradio threads.
This ensures that all torch.module.to() calls are performed in the main thread, to completely avoid possible GPU fragmentation.
In my test, model moving is now 0.7 ~ 1.2 seconds faster, which means all 6GB/8GB VRAM users will get 0.7 ~ 1.2 seconds faster per image on SDXL.

  • classified
  • fix workflow (all tests have passed, but coverage does not work)
  • this way, gc.collect() will work as expected

extracted from PR AUTOMATIC1111#16484
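
for reference, a minimal sketch of the single-thread task-queue pattern described above (names like `Task`, `run_and_wait_result`, and `worker_loop` are illustrative placeholders, not necessarily the identifiers used in main_thread.py): gradio request handlers enqueue their work and block, while one dedicated worker thread executes every task in order, so all model moves happen on the same thread.

```python
import queue
import threading


class Task:
    """One unit of work handed from a gradio request thread to the worker."""

    def __init__(self, func, args, kwargs):
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.done = threading.Event()
        self.result = None
        self.exception = None

    def run(self):
        try:
            self.result = self.func(*self.args, **self.kwargs)
        except Exception as e:  # keep the error so the caller can re-raise it
            self.exception = e
        finally:
            self.done.set()


task_queue: "queue.Queue[Task]" = queue.Queue()


def run_and_wait_result(func, *args, **kwargs):
    """Called from a gradio request thread: enqueue func and block until the
    worker thread has finished running it, then return (or re-raise) its result."""
    task = Task(func, args, kwargs)
    task_queue.put(task)
    task.done.wait()
    if task.exception is not None:
        raise task.exception
    return task.result


def worker_loop():
    """The single thread that executes every major t2i/i2i call, so that all
    torch.module.to() moves are performed on one thread."""
    while True:
        task_queue.get().run()


threading.Thread(target=worker_loop, daemon=True).start()
```

a gradio callback would then call something like `run_and_wait_result(process_txt2img, request)` (hypothetical function name) instead of running the processing inline on its own request thread.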
