Hi,
I tried testing a PDF file, but when I wanted to query/ask it, I got an error that the context window limit was reached, so I believe you are adding the whole content of the file into the prompt. I can see this would be fine with Claude's 100K context window, but for GPT-4 it is a limitation at the moment.
Would it be possible to add support for a vector DB such as Supabase, which would store the embeddings and then be queried to retrieve the most relevant chunk(s) based on the user query?
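To illustrate what I mean, here is a minimal sketch of the retrieval step in pure Python. The function names and the toy 3-dimensional "embeddings" are made up for the example; in a real setup the embeddings would come from an embedding model and the similarity search would run inside the vector DB (e.g. Supabase/pgvector) rather than in application code:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_chunks(query_embedding, chunk_embeddings, chunks, k=2):
    # Rank stored chunks by similarity to the query and keep the top k.
    scored = sorted(
        zip(chunks, chunk_embeddings),
        key=lambda pair: cosine_similarity(query_embedding, pair[1]),
        reverse=True,
    )
    return [chunk for chunk, _ in scored[:k]]

# Toy embeddings standing in for real model output.
chunks = ["chunk A", "chunk B", "chunk C"]
embeddings = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0]]
query = [1.0, 0.0, 0.1]

print(top_k_chunks(query, embeddings, chunks, k=2))
# → ['chunk A', 'chunk C']
```

Only the retrieved chunks (plus the question) would then go into the prompt, which keeps the prompt size bounded regardless of the PDF's length.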