A minimal Python demo showing how to use [Redis LangCache](https://redis.io/docs/latest/solutions/semantic-caching/langcache/) with OpenAI to implement semantic caching for LLM queries.
This example caches responses based on semantic similarity, reducing latency and API usage costs.
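Semantic caching follows a simple flow: embed the incoming prompt, look for a previously answered prompt that is similar enough, and only call the chat model on a miss. The sketch below illustrates that flow using OpenAI embeddings and a plain in-memory store rather than the LangCache API, so it stays self-contained and runnable; the model names and the 0.90 similarity threshold are illustrative choices, not taken from this repo, and the actual LangCache integration lives in `main.py`.

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# In-memory "cache" of (embedding, prompt, response) tuples.
# LangCache replaces this with a managed, Redis-backed semantic cache.
_cache: list[tuple[list[float], str, str]] = []

SIMILARITY_THRESHOLD = 0.90  # illustrative; tune for your workload


def _embed(text: str) -> list[float]:
    # Embedding model is an illustrative choice.
    return client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding


def _cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)


def ask(prompt: str) -> str:
    query_vec = _embed(prompt)

    # 1. Cache hit: a stored prompt is semantically close enough.
    for vec, _, cached_response in _cache:
        if _cosine(query_vec, vec) >= SIMILARITY_THRESHOLD:
            return cached_response

    # 2. Cache miss: call the LLM, then store the response for next time.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    answer = completion.choices[0].message.content
    _cache.append((query_vec, prompt, answer))
    return answer


if __name__ == "__main__":
    print(ask("What is semantic caching?"))        # LLM call
    print(ask("Explain semantic caching to me."))  # likely served from cache
```

The second query is phrased differently but means the same thing, so it should match the cached entry and skip the chat completion entirely, which is where the latency and cost savings come from.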
---
## 📂 Project Structure
```
.
├── main.py # Main script for running the demo
├── requirements.txt # Python dependencies
├── .env.EXAMPLE # Example environment variable configuration
└── .env # Your actual environment variables (not committed)
```