└── Downstream failed --> try_cache fallback
```
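
A minimal sketch of this failover path, with hypothetical helpers (`call_downstream`, the degraded-URI list, and an in-memory cache map) standing in for CacheBolt's internals:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the downstream HTTP call; the error simulates a
// failed or timed-out backend.
fn call_downstream(uri: &str) -> Result<String, ()> {
    if uri.starts_with("/down") { Err(()) } else { Ok(format!("fresh body for {uri}")) }
}

// Sketch of the flow above: a URI marked as degraded skips the downstream
// call entirely, and a failed downstream call falls back to the cache.
fn handle(uri: &str, degraded: &[&str], cache: &HashMap<&str, &str>) -> Option<String> {
    if degraded.contains(&uri) {
        // should_failover: serve straight from cache
        return cache.get(uri).map(|s| s.to_string());
    }
    match call_downstream(uri) {
        Ok(body) => Some(body),
        Err(_) => cache.get(uri).map(|s| s.to_string()), // try_cache fallback
    }
}

fn main() {
    let cache = HashMap::from([("/down/orders", "cached body")]);
    println!("{:?}", handle("/down/orders", &[], &cache)); // Some("cached body")
}
```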

---

## 🔁 Probabilistic Cache Refreshing

To ensure cached responses stay fresh over time, CacheBolt supports **probabilistic refreshes**.
You can configure a percentage of requests that will intentionally bypass the cache and fetch a fresh version from the backend.

```yaml
cache:
  refresh_percentage: 10
```

In the example above, approximately 1 in every 10 requests to the same cache key will bypass the memory and persistent cache and trigger a revalidation from the upstream server.
The refreshed response is then stored again in both the memory and persistent storage backends.

This strategy helps:

- Keep long-lived cache entries updated
- Avoid cache staleness without needing manual invalidation
- Distribute backend load gradually and intelligently

If set to 0, no automatic refresh will occur unless the cache is manually purged.
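
As a rough illustration, a per-request decision along these lines produces the behaviour described above. The `should_refresh` helper and the use of the `rand` crate are assumptions for the sketch, not CacheBolt's actual code:

```rust
use rand::Rng;

// Hypothetical helper: returns true when this request should bypass the
// cache and revalidate against the upstream, based on `refresh_percentage`
// from the `cache:` section of the config.
fn should_refresh(refresh_percentage: u8) -> bool {
    if refresh_percentage == 0 {
        return false; // 0 disables automatic refreshes entirely
    }
    // A uniform draw in 0..100 bypasses the cache ~refresh_percentage% of the time.
    rand::thread_rng().gen_range(0..100u8) < refresh_percentage
}

fn main() {
    let bypasses = (0..1_000).filter(|_| should_refresh(10)).count();
    println!("{bypasses} of 1000 requests would bypass the cache (~10%)");
}
```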

---

## 🔧 Configuration

The config is written in YAML. Example:

```yaml
# 🔧 Unique identifier for this CacheBolt instance
app_id: my-service

# 🚦 Maximum number of concurrent outbound requests to the downstream service
max_concurrent_requests: 200

# 🌐 Base URL of the upstream API/backend to which requests are proxied
downstream_base_url: http://localhost:4000

# ⏱️ Timeout (in seconds) for downstream requests before failing
downstream_timeout_secs: 5

# 💾 Backend used for persistent cache storage
# Available options: gcs, s3, azure, local
storage_backend: s3

# 🪣 Name of the Google Cloud Storage bucket (used if storage_backend is 'gcs')
gcs_bucket: cachebolt

# 🪣 Name of the Amazon S3 bucket (used if storage_backend is 's3')
s3_bucket: my-cachebolt-bucket

# 📦 Name of the Azure Blob Storage container (used if storage_backend is 'azure')
azure_container: cachebolt-container

# 🧠 Memory cache configuration
cache:
  # 🚨 System memory usage threshold (%) above which in-memory cache will start evicting entries
  memory_threshold: 80

  # 🔁 Percentage of requests (per key) that should trigger a refresh from backend instead of using cache
  # Example: 10% means 1 in every 10 requests will bypass cache
  refresh_percentage: 10

# ⚠️ Latency-based failover configuration
latency_failover:
  # ⌛ Default maximum allowed latency in milliseconds for any request
  default_max_latency_ms: 3000

  # 🛣️ Path-specific latency thresholds
  path_rules:
    - pattern: "^/api/v1/products/.*"
      max_latency_ms: 1500
    - pattern: "^/auth/.*"
      max_latency_ms: 1000

# 🚫 List of request headers to ignore when computing cache keys (case-insensitive)