Hi!
Quick question about the proxy service in this repo.
Does the current implementation already support generating and storing embeddings via OpenAI (e.g. for RAG / semantic search), or does it only handle request forwarding right now?
If embeddings are supported:
- Which model(s) are you using?
- Where are the vectors stored (a DB, pgvector, something else)?
- Is there an endpoint we can call for embedding generation through the proxy? (rough sketch of what I mean below)
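
To make that last question concrete, this is the kind of call I'd like to be able to make through the proxy. The base URL, path, model name, and auth header here are placeholders, not assumptions about your implementation:

```python
# Hypothetical call through the proxy; the base URL, path, model name, and
# auth header are placeholders to illustrate the shape of the request.
import requests

resp = requests.post(
    "http://localhost:8080/v1/embeddings",  # proxy base URL (assumed)
    headers={"Authorization": "Bearer <api-key>"},
    json={"model": "text-embedding-3-small", "input": "example document text"},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))  # 1536 dimensions for text-embedding-3-small
```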
If it's not supported yet, do you have plans or a preferred approach for adding it? I'd be happy to contribute.
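
For what it's worth, here is the rough shape of what I could contribute, assuming OpenAI embeddings plus Postgres with the pgvector extension. Every name here (table, columns, model) is a placeholder, not a claim about your codebase:

```python
# Rough sketch only (table name, column names, and model are placeholders):
# generate an embedding with the OpenAI API and store it in Postgres using
# the pgvector extension.
from openai import OpenAI
import psycopg

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def embed_and_store(conn: psycopg.Connection, doc_id: str, text: str) -> None:
    """Embed `text` and upsert the vector into a pgvector column."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    vector = resp.data[0].embedding  # list of floats, 1536 dims for this model

    # pgvector accepts the '[v1,v2,...]' text representation on insert.
    vector_literal = "[" + ",".join(str(v) for v in vector) + "]"
    conn.execute(
        "INSERT INTO documents (id, content, embedding) VALUES (%s, %s, %s) "
        "ON CONFLICT (id) DO UPDATE "
        "SET content = EXCLUDED.content, embedding = EXCLUDED.embedding",
        (doc_id, text, vector_literal),
    )
    conn.commit()
```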
Thanks!