For chunking method "manual", vision model was not used #10290
In RagFlow v0.20.5, when choosing "manual" as the chunking method, whichever vision model I set in the default model settings didn't get picked up. I didn't test the other chunking methods; so far only "general" seems to work with img2txt models. Is this a bug, or does only the "general" chunking method use the vision model? For context, I tried both llama3.2-vision:latest and gpt4 mini.
Replies: 1 comment
This is intended behavior: in RagFlow v0.20.5, the "manual" chunking method is implemented as an empty stub with respect to image handling and never invokes a vision model. Only the "general" chunking method supports associating images with text chunks, and it can use the output of vision models such as img2txt, llama3.2-vision, or gpt4 mini when that output is available from upstream processing. If you need vision-model support for image-to-text conversion, use the "general" chunking method instead of "manual".
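To illustrate the difference, here is a minimal, hypothetical sketch (not RagFlow's actual code; all function and field names are invented for illustration). It shows why a configured vision model has no effect under "manual": the manual chunker simply ignores the images, while the general chunker routes each image through an img2txt-style call and attaches the caption to a chunk.

```python
def img2txt(image_bytes):
    """Stand-in for a vision-model call (e.g. llama3.2-vision or gpt4 mini)."""
    return f"<caption for {len(image_bytes)}-byte image>"

def chunk_general(text, images):
    # "general"-style behavior: split text into chunks, then attach a
    # vision-model caption to each chunk that has a corresponding image.
    chunks = [{"text": t, "image_text": None} for t in text.split("\n\n")]
    for i, img in enumerate(images):
        if i < len(chunks):
            chunks[i]["image_text"] = img2txt(img)
    return chunks

def chunk_manual(text, images):
    # "manual"-style behavior: images are ignored entirely, so the
    # vision model is never invoked no matter what is configured.
    return [{"text": t, "image_text": None} for t in text.split("\n\n")]

text = "First paragraph.\n\nSecond paragraph."
images = [b"\x89PNG...", b"\xff\xd8..."]

general_chunks = chunk_general(text, images)
manual_chunks = chunk_manual(text, images)
```

Under this sketch, `general_chunks` carries captions while `manual_chunks` never does, which matches the behavior you observed with both models you tried.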
