
Commit 16479ec

feat(provide): detailed ipfs provide stat (#11019)
* feat: provide stats
* added N/A
* format
* workers stats alignment
* ipfs provide stat --all --compact
* consolidating compact stat
* update column alignment
* flags combinations errors
* command description
* change schedule AvgPrefixLen to float
* changelog
* alignments
* provide stat description draft
* rephrased provide-stats.md
* linking provide-stats.md from command description
* documentation test
* fix: refactor provide stat command type handling
  - add extractSweepingProvider() helper to reduce nested type switching
  - extract lowWorkerThreshold constant for worker availability check
  - fix --lan error handling to work with buffered providers
* docs: add clarifying comments
* fix(commands): improve provide stat compact mode
  - prevent panic when both columns are empty
  - fix column alignment with UTF-8 characters
  - only track col0MaxWidth for first column (as intended)
* test: add tests for ipfs provide stat command
  - test basic functionality, flags, JSON output
  - test legacy provider behavior
  - test integration with content scheduling
  - test disabled provider configurations
  - add parseSweepStats helper with t.Helper()
* docs: improve provide command help text
  - update tagline to "Control and monitor content providing"
  - simplify help descriptions
  - make error messages more consistent
  - update tests to match new error messages
* metrics rename

  ```
  Next reprovide at:
  Next prefix:
  ```

  updated to:

  ```
  Next region prefix:
  Next region reprovide:
  ```

* docs: improve Provide system documentation clarity

  Enhance documentation for the Provide system to better explain how provider
  records work and the differences between sweep and legacy modes.

  Changes to docs/config.md:
  - Provide section: add clear explanation of provider records and their role
  - Provide.DHT: add provider record lifecycle and two provider systems overview
  - Provide.DHT.Interval: explain relationship to expiration, contrast sweep vs legacy behavior
  - Provide.DHT.SweepEnabled: rewrite to explain legacy problem, sweep solution, and efficiency gains
  - Monitoring section: prioritize command-line tools (ipfs provide stat) before Prometheus

  Changes to core/commands/provide.go:
  - ipfs provide stat help: add explanation of provider records, TTL expiration, and how sweep batching works

  Changes to docs/changelogs/v0.39.md:
  - Add context about why stats matter for monitoring provider health
  - Emphasize real-time monitoring workflow with watch command
  - Explain what users can observe (rates, queues, worker availability)

* depend on latest kad-dht master
* docs: nits

---------

Co-authored-by: Marcin Rataj <[email protected]>
1 parent f9dc739 commit 16479ec

11 files changed: +1474 −252 lines changed


core/commands/provide.go

Lines changed: 425 additions & 37 deletions
Large diffs are not rendered by default.
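
The Go changes themselves are not rendered here, but the reworked command help they add can be inspected from a node built with this commit; a quick way to do so (a sketch, assuming a standard Kubo install on PATH):

```sh
# View the updated command description and available flags
ipfs provide stat --help
```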

docs/changelogs/v0.39.md

Lines changed: 45 additions & 1 deletion
@@ -10,6 +10,8 @@ This release was brought to you by the [Shipyard](https://ipshipyard.com/) team.
 
 - [Overview](#overview)
 - [🔦 Highlights](#-highlights)
+- [📊 Detailed statistics for Sweep provider with `ipfs provide stat`](#-detailed-statistics-for-sweep-provider-with-ipfs-provide-stat)
+- [🪦 Deprecated `go-ipfs` name no longer published](#-deprecated-go-ipfs-name-no-longer-published)
 - [📦️ Important dependency updates](#-important-dependency-updates)
 - [📝 Changelog](#-changelog)
 - [👨‍👩‍👧‍👦 Contributors](#-contributors)
@@ -18,13 +20,55 @@ This release was brought to you by the [Shipyard](https://ipshipyard.com/) team.
 
 ### 🔦 Highlights
 
+#### 📊 Detailed statistics for Sweep provider with `ipfs provide stat`
+
+The experimental Sweep provider system ([introduced in
+v0.38](https://github.com/ipfs/kubo/blob/master/docs/changelogs/v0.38.md#-experimental-sweeping-dht-provider))
+now has detailed statistics available through `ipfs provide stat`.
+
+These statistics help you monitor provider health and troubleshoot issues,
+especially useful for nodes providing large content collections. You can quickly
+identify bottlenecks like queue backlog, worker saturation, or connectivity
+problems that might prevent content from being announced to the DHT.
+
+**Default behavior:** Displays a brief summary showing queue sizes, scheduled
+CIDs/regions, average record holders, ongoing/total provides, and worker status
+when resources are constrained.
+
+**Detailed statistics with `--all`:** View complete metrics organized into sections:
+
+- **Connectivity**: DHT connection status
+- **Queues**: Pending provide and reprovide operations
+- **Schedule**: CIDs/regions scheduled for reprovide
+- **Timings**: Uptime, reprovide cycle information
+- **Network**: Peer statistics, keyspace region sizes
+- **Operations**: Ongoing and past provides, rates, errors
+- **Workers**: Worker pool utilization and availability
+
+**Real-time monitoring:** For continuous monitoring, run
+`watch ipfs provide stat --all --compact` to see detailed statistics refreshed
+in a 2-column layout. This lets you observe provide rates, queue sizes, and
+worker availability in real-time. Individual sections can be displayed using
+flags like `--network`, `--operations`, or `--workers`, and multiple flags can
+be combined for custom views.
+
+**Dual DHT support:** For Dual DHT configurations, use `--lan` to view LAN DHT
+provider statistics instead of the default WAN DHT stats.
+
+> [!NOTE]
+> These statistics are only available when using the Sweep provider system
+> (enabled via
+> [`Provide.DHT.SweepEnabled`](https://github.com/ipfs/kubo/blob/master/docs/config.md#providedhtsweepenabled)).
+> Legacy provider shows basic statistics without flag support.
+
 #### 🪦 Deprecated `go-ipfs` name no longer published
 
 The `go-ipfs` name was deprecated in 2022 and renamed to `kubo`. Starting with this release, we have stopped publishing Docker images and distribution binaries under the old `go-ipfs` name.
 
 Existing users should switch to:
+
 - Docker: `ipfs/kubo` image (instead of `ipfs/go-ipfs`)
-- Binaries: download from https://dist.ipfs.tech/kubo/ or https://github.com/ipfs/kubo/releases
+- Binaries: download from <https://dist.ipfs.tech/kubo/> or <https://github.com/ipfs/kubo/releases>
 
 For Docker users, the legacy `ipfs/go-ipfs` image name now shows a deprecation notice directing you to `ipfs/kubo`.
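
As a quick reference for the workflow described in the changelog entry above, these are the invocations it mentions (a sketch; the exact output layout is not reproduced here):

```sh
# Brief summary (default behavior)
ipfs provide stat

# Complete metrics, organized into sections
ipfs provide stat --all

# Continuous monitoring, refreshed in a 2-column layout
watch ipfs provide stat --all --compact

# Individual sections; flags can be combined for custom views
ipfs provide stat --network --operations --workers

# LAN DHT statistics on Dual DHT configurations
ipfs provide stat --lan
```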

docs/config.md

Lines changed: 102 additions & 43 deletions
@@ -1910,10 +1910,17 @@ Type: `duration`
 
 ## `Provide`
 
-Configures CID announcements to the routing system, including both immediate
-announcements for new content (provide) and periodic re-announcements
-(reprovide) on systems that require it, like Amino DHT. While designed to support
-multiple routing systems in the future, the current default configuration only supports providing to the Amino DHT.
+Configures how your node advertises content to make it discoverable by other
+peers.
+
+**What is providing?** When your node stores content, it publishes provider
+records to the routing system announcing "I have this content". These records
+map CIDs to your peer ID, enabling content discovery across the network.
+
+While designed to support multiple routing systems in the future, the current
+default configuration only supports [providing to the Amino DHT](#providedht).
+
+<!-- TODO: See the [Reprovide Sweep blog post](https://blog.ipfs.tech/2025-reprovide-sweep/) for detailed performance comparisons. -->
 
 ### `Provide.Enabled`
 
@@ -1964,13 +1971,39 @@ Type: `optionalString` (unset for the default)
 
 Configuration for providing data to Amino DHT peers.
 
+**Provider record lifecycle:** On the Amino DHT, provider records expire after
+[`amino.DefaultProvideValidity`](https://github.com/libp2p/go-libp2p-kad-dht/blob/v0.34.0/amino/defaults.go#L40-L43).
+Your node must re-announce (reprovide) content periodically to keep it
+discoverable. The [`Provide.DHT.Interval`](#providedhtinterval) setting
+controls this timing, with the default ensuring records refresh well before
+expiration or negative churn effects kick in.
+
+**Two provider systems:**
+
+- **Sweep provider**: Divides the DHT keyspace into regions and systematically
+  sweeps through them over the reprovide interval. This batches CIDs allocated
+  to the same DHT servers, dramatically reducing the number of DHT lookups and
+  PUTs needed. Spreads work evenly over time with predictable resource usage.
+
+- **Legacy provider**: Processes each CID individually with separate DHT
+  lookups. Works well for small content collections but struggles to complete
+  reprovide cycles when managing thousands of CIDs.
+
 #### Monitoring Provide Operations
 
-You can monitor the effectiveness of your provide configuration through metrics exposed at the Prometheus endpoint: `{Addresses.API}/debug/metrics/prometheus` (default: `http://127.0.0.1:5001/debug/metrics/prometheus`).
+**Quick command-line monitoring:** Use `ipfs provide stat` to view the current
+state of the provider system. For real-time monitoring, run
+`watch ipfs provide stat --all --compact` to see detailed statistics refreshed
+continuously in a 2-column layout.
 
-Different metrics are available depending on whether you use legacy mode (`SweepEnabled=false`) or sweep mode (`SweepEnabled=true`). See [Provide metrics documentation](https://github.com/ipfs/kubo/blob/master/docs/metrics.md#provide) for details.
+**Long-term monitoring:** For in-depth or long-term monitoring, metrics are
+exposed at the Prometheus endpoint: `{Addresses.API}/debug/metrics/prometheus`
+(default: `http://127.0.0.1:5001/debug/metrics/prometheus`). Different metrics
+are available depending on whether you use legacy mode (`SweepEnabled=false`) or
+sweep mode (`SweepEnabled=true`). See [Provide metrics documentation](https://github.com/ipfs/kubo/blob/master/docs/metrics.md#provide)
+for details.
 
-To enable detailed debug logging for both providers, set:
+**Debug logging:** For troubleshooting, enable detailed logging by setting:
 
 ```sh
 GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug
@@ -1982,12 +2015,24 @@ GOLOG_LOG_LEVEL=error,provider=debug,dht/provider=debug
 #### `Provide.DHT.Interval`
 
 Sets how often to re-announce content to the DHT. Provider records on Amino DHT
-expire after [`amino.DefaultProvideValidity`](https://github.com/libp2p/go-libp2p-kad-dht/blob/v0.34.0/amino/defaults.go#L40-L43),
-also known as Provider Record Expiration Interval.
+expire after [`amino.DefaultProvideValidity`](https://github.com/libp2p/go-libp2p-kad-dht/blob/v0.34.0/amino/defaults.go#L40-L43).
+
+**Why this matters:** The interval must be shorter than the expiration window to
+ensure provider records refresh before they expire. The default value is
+approximately half of [`amino.DefaultProvideValidity`](https://github.com/libp2p/go-libp2p-kad-dht/blob/v0.34.0/amino/defaults.go#L40-L43),
+which accounts for network churn and ensures records stay alive without
+overwhelming the network with unnecessary announcements.
 
-An interval of about half the expiration window ensures provider records
-are refreshed well before they expire. This keeps your content continuously
-discoverable accounting for network churn without overwhelming the network with too frequent announcements.
+**With sweep mode enabled
+([`Provide.DHT.SweepEnabled`](#providedhtsweepenabled)):** The system spreads
+reprovide operations smoothly across this entire interval. Each keyspace region
+is reprovided at scheduled times throughout the period, ensuring announcements
+periodically happen every interval.
+
+**With legacy mode:** The system attempts to reprovide all CIDs as quickly as
+possible at the start of each interval. If reproviding takes longer than this
+interval (common with large datasets), the next cycle is skipped and provider
+records may expire.
 
 - If unset, it uses the implicit safe default.
 - If set to the value `"0"` it will disable content reproviding to DHT.
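
To make the interval semantics concrete, a small sketch using the standard `ipfs config` command (assuming it accepts these string-valued paths as shown; restart the daemon for changes to take effect):

```sh
# Show the currently configured value (unset means the implicit safe default)
ipfs config Provide.DHT.Interval

# Set an explicit reprovide interval (illustrative value, not a recommendation)
ipfs config Provide.DHT.Interval 20h

# Disable content reproviding to the DHT (stores the string "0")
ipfs config Provide.DHT.Interval 0
```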
@@ -2055,46 +2100,60 @@ Type: `optionalInteger` (non-negative; `0` means unlimited number of workers)
 
 #### `Provide.DHT.SweepEnabled`
 
-Whether Provide Sweep is enabled. If not enabled, the legacy
-[`boxo/provider`](https://github.com/ipfs/boxo/tree/main/provider) is used for
-both provides and reprovides.
-
-Provide Sweep is a resource efficient technique for advertising content to
-the Amino DHT swarm. The Provide Sweep module tracks the keys that should be periodically reprovided in
-the `Keystore`. It splits the keys into DHT keyspace regions by proximity (XOR
-distance), and schedules when reprovides should happen in order to spread the
-reprovide operation over time to avoid a spike in resource utilization. It
-basically sweeps the keyspace _from left to right_ over the
-[`Provide.DHT.Interval`](#providedhtinterval) time period, and reprovides keys
-matching to the visited keyspace region.
-
-Provide Sweep aims at replacing the inefficient legacy `boxo/provider`
-module, and is currently opt-in. You can compare the effectiveness of sweep mode vs legacy mode by monitoring the appropriate metrics (see [Monitoring Provide Operations](#monitoring-provide-operations) above).
-
-Whenever new keys should be advertised to the Amino DHT, `kubo` calls
-`StartProviding()`, triggering an initial `provide` operation for the given
-keys. The keys will be added to the `Keystore` tracking which keys should be
-reprovided and when they should be reprovided. Calling `StopProviding()`
-removes the keys from the `Keystore`. However, it is currently tricky for
-`kubo` to detect when a key should stop being advertised. Hence, `kubo` will
-periodically refresh the `Keystore` at each [`Provide.DHT.Interval`](#providedhtinterval)
-by providing it a channel of all the keys it is expected to contain according
-to the [`Provide.Strategy`](#providestrategy). During this operation,
-all keys in the `Keystore` are purged, and only the given ones remain scheduled.
+Enables the sweep provider for efficient content announcements. When disabled,
+the legacy [`boxo/provider`](https://github.com/ipfs/boxo/tree/main/provider) is
+used instead.
+
+**The legacy provider problem:** The legacy system processes CIDs one at a
+time, requiring a separate DHT lookup (10-20 seconds each) to find the 20
+closest peers for each CID. This sequential approach typically handles less
+than 10,000 CID over 22h ([`Provide.DHT.Interval`](#providedhtinterval)). If
+your node has more CIDs than can be reprovided within
+[`Provide.DHT.Interval`](#providedhtinterval), provider records start expiring
+after
+[`amino.DefaultProvideValidity`](https://github.com/libp2p/go-libp2p-kad-dht/blob/v0.34.0/amino/defaults.go#L40-L43),
+making content undiscoverable.
+
+**How sweep mode works:** The sweep provider divides the DHT keyspace into
+regions based on keyspace prefixes. It estimates the Amino DHT size, calculates
+how many regions are needed (sized to contain at least 20 peers each), then
+schedules region processing evenly across
+[`Provide.DHT.Interval`](#providedhtinterval). When processing a region, it
+discovers the peers in that region once, then sends all provider records for
+CIDs allocated to those peers in a batch. This batching is the key efficiency:
+instead of N lookups for N CIDs, the number of lookups is bounded by a constant
+fraction of the Amino DHT size (e.g., ~3,000 lookups when there are ~10,000 DHT
+servers), regardless of how many CIDs you're providing.
+
+**Efficiency gains:** For a node providing 100,000 CIDs, sweep mode reduces
+lookups by 97% compared to legacy. The work spreads smoothly over time rather
+than completing in bursts, preventing resource spikes and duplicate
+announcements. Long-running nodes reprovide systematically just before records
+would expire, keeping content continuously discoverable without wasting
+bandwidth.
+
+**Implementation details:** The sweep provider tracks CIDs in a persistent
+keystore. New content added via `StartProviding()` enters the provide queue and
+gets batched by keyspace region. The keystore is periodically refreshed at each
+[`Provide.DHT.Interval`](#providedhtinterval) with CIDs matching
+[`Provide.Strategy`](#providestrategy) to ensure only current content remains
+scheduled. This handles cases where content is unpinned or removed.
 
 > <picture>
 > <source media="(prefers-color-scheme: dark)" srcset="https://github.com/user-attachments/assets/f6e06b08-7fee-490c-a681-1bf440e16e27">
 > <source media="(prefers-color-scheme: light)" srcset="https://github.com/user-attachments/assets/e1662d7c-f1be-4275-a9ed-f2752fcdcabe">
 > <img alt="Reprovide Cycle Comparison" src="https://github.com/user-attachments/assets/e1662d7c-f1be-4275-a9ed-f2752fcdcabe">
 > </picture>
 >
-> The diagram above visualizes the performance patterns:
+> The diagram compares performance patterns:
 >
-> - **Legacy mode**: Individual (slow) provides per CID, can struggle with large datasets
-> - **Sweep mode**: Even distribution matching the keyspace sweep described with low resource usage
-> - **Accelerated DHT**: Hourly traffic spikes with high resource usage
+> - **Legacy mode**: Sequential processing, one lookup per CID, struggles with large datasets
+> - **Sweep mode**: Smooth distribution over time, batched lookups by keyspace region, predictable resource usage
+> - **Accelerated DHT**: Hourly network crawls creating traffic spikes, high resource usage
 >
-> Sweep mode provides similar effectiveness to Accelerated DHT but with steady resource usage - better for machines with limited CPU, memory, or network bandwidth.
+> Sweep mode achieves similar effectiveness to the Accelerated DHT client but with steady resource consumption.
+
+You can compare the effectiveness of sweep mode vs legacy mode by monitoring the appropriate metrics (see [Monitoring Provide Operations](#monitoring-provide-operations) above).
 
 > [!NOTE]
 > This feature is opt-in for now, but will become the default in a future release.
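
A minimal sketch of opting in to sweep mode and then using the monitoring tools the section above describes (assumes a local node with the default API address):

```sh
# Opt in to the sweep provider (boolean values require --json), then restart the daemon
ipfs config --json Provide.DHT.SweepEnabled true
ipfs shutdown
ipfs daemon &

# Command-line monitoring of the provider system
ipfs provide stat --all
watch ipfs provide stat --all --compact

# Long-term monitoring via the Prometheus endpoint
curl http://127.0.0.1:5001/debug/metrics/prometheus
```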

docs/examples/kubo-as-a-library/go.mod

Lines changed: 17 additions & 17 deletions
@@ -7,9 +7,9 @@ go 1.25
 replace github.com/ipfs/kubo => ./../../..
 
 require (
-	github.com/ipfs/boxo v0.35.1-0.20251016232905-37006871a40e
+	github.com/ipfs/boxo v0.35.0
 	github.com/ipfs/kubo v0.0.0-00010101000000-000000000000
-	github.com/libp2p/go-libp2p v0.44.0
+	github.com/libp2p/go-libp2p v0.43.0
 	github.com/multiformats/go-multiaddr v0.16.1
 )
 
@@ -82,7 +82,7 @@ require (
 	github.com/ipfs/go-ds-flatfs v0.5.5 // indirect
 	github.com/ipfs/go-ds-leveldb v0.5.2 // indirect
 	github.com/ipfs/go-ds-measure v0.2.2 // indirect
-	github.com/ipfs/go-ds-pebble v0.5.5 // indirect
+	github.com/ipfs/go-ds-pebble v0.5.3 // indirect
 	github.com/ipfs/go-dsqueue v0.1.0 // indirect
 	github.com/ipfs/go-fs-lock v0.1.1 // indirect
 	github.com/ipfs/go-ipfs-cmds v0.15.0 // indirect
@@ -98,7 +98,7 @@ require (
 	github.com/ipfs/go-peertaskqueue v0.8.2 // indirect
 	github.com/ipfs/go-test v0.2.3 // indirect
 	github.com/ipfs/go-unixfsnode v1.10.2 // indirect
-	github.com/ipld/go-car/v2 v2.16.0 // indirect
+	github.com/ipld/go-car/v2 v2.15.0 // indirect
 	github.com/ipld/go-codec-dagpb v1.7.0 // indirect
 	github.com/ipld/go-ipld-prime v0.21.0 // indirect
 	github.com/ipshipyard/p2p-forge v0.6.1 // indirect
@@ -115,15 +115,15 @@ require (
 	github.com/libp2p/go-doh-resolver v0.5.0 // indirect
 	github.com/libp2p/go-flow-metrics v0.3.0 // indirect
 	github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
-	github.com/libp2p/go-libp2p-kad-dht v0.35.1 // indirect
+	github.com/libp2p/go-libp2p-kad-dht v0.35.2-0.20251017193437-abd04263daac // indirect
 	github.com/libp2p/go-libp2p-kbucket v0.8.0 // indirect
 	github.com/libp2p/go-libp2p-pubsub v0.14.2 // indirect
 	github.com/libp2p/go-libp2p-pubsub-router v0.6.0 // indirect
 	github.com/libp2p/go-libp2p-record v0.3.1 // indirect
 	github.com/libp2p/go-libp2p-routing-helpers v0.7.5 // indirect
 	github.com/libp2p/go-libp2p-xor v0.1.0 // indirect
 	github.com/libp2p/go-msgio v0.3.0 // indirect
-	github.com/libp2p/go-netroute v0.3.0 // indirect
+	github.com/libp2p/go-netroute v0.2.2 // indirect
 	github.com/libp2p/go-reuseport v0.4.0 // indirect
 	github.com/libp2p/go-yamux/v5 v5.0.1 // indirect
 	github.com/libp2p/zeroconf/v2 v2.2.0 // indirect
@@ -141,7 +141,7 @@ require (
 	github.com/multiformats/go-multiaddr-dns v0.4.1 // indirect
 	github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
 	github.com/multiformats/go-multibase v0.2.0 // indirect
-	github.com/multiformats/go-multicodec v0.10.0 // indirect
+	github.com/multiformats/go-multicodec v0.9.2 // indirect
 	github.com/multiformats/go-multihash v0.2.3 // indirect
 	github.com/multiformats/go-multistream v0.6.1 // indirect
 	github.com/multiformats/go-varint v0.1.0 // indirect
@@ -177,7 +177,7 @@ require (
 	github.com/prometheus/common v0.66.1 // indirect
 	github.com/prometheus/procfs v0.17.0 // indirect
 	github.com/quic-go/qpack v0.5.1 // indirect
-	github.com/quic-go/quic-go v0.55.0 // indirect
+	github.com/quic-go/quic-go v0.54.1 // indirect
 	github.com/quic-go/webtransport-go v0.9.0 // indirect
 	github.com/rogpeppe/go-internal v1.14.1 // indirect
 	github.com/spaolacci/murmur3 v1.1.0 // indirect
@@ -212,22 +212,22 @@ require (
 	go.uber.org/zap/exp v0.3.0 // indirect
 	go.yaml.in/yaml/v2 v2.4.3 // indirect
 	go4.org v0.0.0-20230225012048-214862532bf5 // indirect
-	golang.org/x/crypto v0.43.0 // indirect
-	golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b // indirect
-	golang.org/x/mod v0.29.0 // indirect
-	golang.org/x/net v0.46.0 // indirect
+	golang.org/x/crypto v0.42.0 // indirect
+	golang.org/x/exp v0.0.0-20250911091902-df9299821621 // indirect
+	golang.org/x/mod v0.28.0 // indirect
+	golang.org/x/net v0.44.0 // indirect
 	golang.org/x/sync v0.17.0 // indirect
-	golang.org/x/sys v0.37.0 // indirect
-	golang.org/x/telemetry v0.0.0-20251008203120-078029d740a8 // indirect
-	golang.org/x/text v0.30.0 // indirect
+	golang.org/x/sys v0.36.0 // indirect
+	golang.org/x/telemetry v0.0.0-20250908211612-aef8a434d053 // indirect
+	golang.org/x/text v0.29.0 // indirect
 	golang.org/x/time v0.12.0 // indirect
-	golang.org/x/tools v0.38.0 // indirect
+	golang.org/x/tools v0.37.0 // indirect
 	golang.org/x/xerrors v0.0.0-20240903120638-7835f813f4da // indirect
 	gonum.org/v1/gonum v0.16.0 // indirect
 	google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 // indirect
 	google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5 // indirect
 	google.golang.org/grpc v1.75.0 // indirect
-	google.golang.org/protobuf v1.36.10 // indirect
+	google.golang.org/protobuf v1.36.9 // indirect
 	gopkg.in/yaml.v3 v3.0.1 // indirect
 	lukechampine.com/blake3 v1.4.1 // indirect
 )
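
For completeness, the kind of command sequence that typically produces the go-libp2p-kad-dht bump shown above (the `abd04263daac` hash is taken from the pseudo-version; treat this as a sketch, not the exact commands used in the commit):

```sh
# Pin the example module to the kad-dht commit referenced by the pseudo-version
cd docs/examples/kubo-as-a-library
go get github.com/libp2p/go-libp2p-kad-dht@abd04263daac
go mod tidy
```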
