
Commit 71031f8

Consul Content Updates (#1233)
* Consul Content Updates: add additional Consul content changed since #1228
* Fix quoting in jsonc files

1 parent 210da40 · commit 71031f8


1,188 files changed: +182,839 additions, −1,835 deletions


content/consul/v1.21.x/content/commands/debug.mdx

Lines changed: 5 additions & 0 deletions

@@ -40,6 +40,11 @@ flag to not retrieve it initially.
Additionally, we recommend securely transmitting this archive via encryption
or otherwise.

+## Port connectivity checks
+
+The `consul debug` command includes a built-in port connectivity check that runs on the host that executes the command. This check is similar to the one performed by the `consul troubleshoot ports` command, but it is integrated directly into the debug command's data capture process. Therefore, when you run `consul debug`, the resulting archive contains information about the status of the default Consul ports. This information can make it easier to diagnose network issues.
+
+There is no separate `-capture` flag because the port connectivity check always runs as a core part of the debug command. If you need to check port connectivity without collecting other debugging information, use the [`consul troubleshoot ports` command](/consul/commands/troubleshoot/ports).

## Usage

`Usage: consul debug [options]`
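The port connectivity check described in the hunk above amounts to attempting connections to Consul's default ports from the host running the command. A minimal Python sketch of that idea (not Consul's actual implementation; the port map and the `check_port` helper are illustrative, and only TCP is attempted here):

```python
import socket

# Default Consul port assignments (standard defaults; the exact set of
# ports `consul debug` checks is an assumption, not taken from the text).
DEFAULT_CONSUL_PORTS = {
    "server_rpc": 8300,
    "serf_lan": 8301,
    "serf_wan": 8302,
    "http_api": 8500,
    "dns": 8600,
}

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in sorted(DEFAULT_CONSUL_PORTS.items()):
        status = "open" if check_port("127.0.0.1", port) else "unreachable"
        print(f"{name} ({port}): {status}")
```

A closed or filtered port reports as `unreachable`, which mirrors the kind of status information captured in the debug archive.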

content/consul/v1.21.x/content/docs/automate/kv/store.mdx

Lines changed: 5 additions & 1 deletion

@@ -86,4 +86,8 @@ The following example deletes all the keys with the `redis` prefix using the `-r
```shell-session
$ consul kv delete -recurse redis
Success! Deleted keys with prefix: redis
-```
+```
+
+<Warning title="Security warning">
+To mitigate vulnerability [CVE-2025-11392], Consul does not allow path escapes, directory escapes, leading spaces, or trailing spaces in keys, beginning with Consul v1.22.0. If you have existing keys in this format and want to continue using them, set the `disable_kv_key_validation` parameter to `true` in the Consul agent configuration. We strongly recommend using validated keys unless you have a specific reason to disable validation for legacy compatibility.
+</Warning>
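The opt-out described in the warning above might look like the following agent configuration fragment. This is a sketch: the parameter name comes from the warning, but its top-level placement in the agent configuration file is an assumption.

```hcl
# Consul agent configuration fragment (HCL).
# Allows legacy KV keys containing path escapes or leading/trailing spaces.
# Only the parameter name comes from the docs; placement is assumed.
disable_kv_key_validation = true
```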

content/consul/v1.21.x/content/docs/connect/proxy/transparent-proxy/k8s.mdx

Lines changed: 20 additions & 1 deletion

@@ -98,6 +98,25 @@ spec:
      serviceAccountName: static-server
```

+### Transparent proxy and multi-port services
+
+Transparent proxy mode assumes that services have only one port. If you want multi-port services to be reachable from the service mesh, you must also declare explicit upstreams.
+
+In the following example, `source-service` has one port and is part of the service mesh. Its annotations include two upstream targets for the multi-port `target-service`, on ports `15000` and `20000`. Without explicitly declaring the ports as service upstreams, the multi-port `target-service` is unreachable from within the service mesh. For more information about the `connect-service-upstreams` annotation, refer to [Dial services across Kubernetes cluster](#dial-services-across-kubernetes-cluster).
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: source-service
+spec:
+  template:
+    metadata:
+      annotations:
+        consul.hashicorp.com/connect-inject: "true"
+        consul.hashicorp.com/connect-service-upstreams: "target-service.svc:15000,target-service.svc:20000"
+```
+
## Enable the Consul CNI plugin

By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization.

@@ -253,4 +272,4 @@ Note that when dialing individual instances, Consul ignores the HTTP routing rul

- Deployment configurations with federation across multiple datacenters, or a single datacenter spanning multiple clusters, must explicitly dial a service in another datacenter or cluster using annotations.

-- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol.
+- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol.
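For context, a multi-port `target-service` like the one referenced in the upstream annotation above might be declared as follows. This is a sketch: the ports mirror the annotation example, while the selector, port names, and everything else are illustrative.

```yaml
# Illustrative multi-port Kubernetes Service matching the upstream
# annotation example; only the name and port numbers come from the docs.
apiVersion: v1
kind: Service
metadata:
  name: target-service
spec:
  selector:
    app: target-service
  ports:
    - name: api
      port: 15000
      targetPort: 15000
    - name: metrics
      port: 20000
      targetPort: 20000
```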

content/consul/v1.21.x/content/docs/discover/vm.mdx

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ description: >-

# Discover services on virtual machines (VMs)

-This page provides an overview of Consul service discovery operations on virtual machines. After you register services with Consul, you can address them using Consul DNS to perform application load balancing and static service lookups. You can also create prepared queries for dynamic service lookups and service failover.
+This page provides an overview of Consul service discovery operations on virtual machines. After you register services with Consul, you can use Consul DNS to perform application load balancing and static service lookups. You can also create prepared queries for dynamic service lookups and service failover.

## Introduction

content/consul/v1.21.x/content/docs/error-messages/consul.mdx

Lines changed: 38 additions & 5 deletions

@@ -107,7 +107,7 @@ Error getting server health from "XXX": context deadline exceeded

</CodeBlockConfig>

-Make sure you are monitoring Consul telemetry and system metrics according to our [monitoring guide][monitoring]. Increase the CPU or memory allocation to the server if needed. Check the performance of the network between Consul nodes.
+Make sure you are monitoring Consul telemetry and system metrics according to our [telemetry documentation][monitoring]. Increase the CPU or memory allocation to the server if needed. Check the performance of the network between Consul nodes.

## Too many open files

@@ -131,14 +131,46 @@ Get http://localhost:8500/: dial tcp 127.0.0.1:31643: socket: too many open file

You need to increase the limit for the Consul user and maybe the system-wide limit. Refer to [this guide][files] for instructions to do so on Linux. Alternatively, if you are starting Consul from `systemd`, you could add `LimitNOFILE=65536` to the unit file for Consul. Refer to the [sample systemd file][systemd].

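The `LimitNOFILE` change mentioned above can also be made without editing the main unit file by using a systemd drop-in. A sketch, assuming the unit is named `consul.service`; the drop-in filename is illustrative:

```ini
# /etc/systemd/system/consul.service.d/limits.conf
# Raises the open-file limit for the Consul unit. Apply with:
#   systemctl daemon-reload && systemctl restart consul
[Service]
LimitNOFILE=65536
```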
-## Snapshot close error
+## Backup and restore errors
+
+### Snapshot close error

Our RPC protocol requires support for a TCP half-close in order to signal the other side that they are done reading the stream, since we don't know the size in advance. This saves us from having to buffer just to calculate the size.

-If a host does not properly implement half-close, you may see an error message `[ERR] consul: Failed to close snapshot: write tcp <source>-><destination>: write: broken pipe` when saving snapshots. This should not affect saving and restoring snapshots.
+If a host does not properly implement half-close, you may receive an error message when saving snapshots.
+
+<CodeBlockConfig hideClipboard>
+
+```log
+[ERR] consul: Failed to close snapshot: write tcp <source>-><destination>: write: broken pipe
+```
+
+</CodeBlockConfig>
+
+This error should not affect saving and restoring snapshots.

This has been a [known issue](https://github.com/docker/libnetwork/issues/1204) in Docker, but may manifest in other environments as well.

+### Snapshot restore error
+
+When restoring a Consul datacenter from a snapshot on new infrastructure, Consul returns the following error when there is a conflict between the `node_name` used in the new datacenter and the `node_id` value for those nodes.
+
+<CodeBlockConfig hideClipboard>
+
+```log
+[WARN] agent.fsm: EnsureRegistration failed: error="failed inserting node: Error while renaming Node ID: "<NEW_UUID>": Node name <NAME> is reserved by node <OLD_UUID> with name <NAME> (<IP>)"
+```
+
+</CodeBlockConfig>
+
+This error means that the new datacenter has at least one node with the same `node_name` as a node in the snapshot's datacenter, but with a different `node_id`. This represents a consistency issue.
+
+There are two possible workarounds:
+
+1. Save the UUID from the previous node's data directory, then re-use that same UUID when you first start the agent on the new node. You can configure node IDs for your Consul agent nodes with the [`node_id` configuration parameter](/consul/docs/reference/agent/configuration-file/node#_node_id).
+
+1. Always use unique node names for your Consul datacenters so that there is no risk of conflicts. You can configure node names for your Consul agent nodes using the [`node_name`](/consul/docs/reference/agent/configuration-file/node#_node) configuration parameter.
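The first workaround above can be expressed as an agent configuration fragment. This is a sketch: the parameter names come from the text, but the node name and UUID are placeholders you would replace with the values saved from the previous node.

```hcl
# Consul agent configuration fragment (HCL) pinning the node identity
# to match the node being replaced. Both values are placeholders.
node_name = "consul-server-1"
node_id   = "ebcc4a89-1f8b-4f2d-9b3e-0123456789ab"
```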
173+
142174
## ACL not found
143175

144176
If Consul returns the following error, this indicates that you have ACL enabled in your cluster but you aren't passing a valid token.
@@ -251,7 +283,8 @@ To resolve this error, you must manually issue the `consul reload` command or se
251283
[releases]: https://releases.hashicorp.com/consul/
252284
[files]: https://easyengine.io/tutorials/linux/increase-open-files-limit
253285
[certificates]: /consul/docs/secure/encryption/tls/enable/new/builtin
254-
[systemd]: /consul/tutorials/production-deploy/deployment-guide#configure-systemd
255-
[monitoring]: /consul/tutorials/day-2-operations/monitor-datacenter-health
286+
[systemd]: /consul/tutorials/production-vms/deployment-guide#configure-systemd
287+
[monitoring]: /consul/docs/monitor/telemetry/agent
256288
[bind]: /consul/commands/agent#_bind
257289
[jq]: https://stedolan.github.io/jq/
290+
[go-sockaddr]: https://pkg.go.dev/github.com/hashicorp/go-sockaddr

content/consul/v1.21.x/content/docs/error-messages/k8s.mdx

Lines changed: 53 additions & 1 deletion

@@ -111,4 +111,56 @@ spec:
      serviceAccountName: does-not-match
```

-</CodeBlockConfig>
+</CodeBlockConfig>
+
+## Unbound PersistentVolumeClaims
+
+If your Consul server pods are stuck in the `Pending` state, check if the PersistentVolumeClaims (PVCs) are bound to PersistentVolumes (PVs). If they are not bound, you will see an error similar to the following:
+
+<CodeBlockConfig highlight="7,14">
+
+```shell-session
+$ kubectl describe pods --namespace consul consul-server-0
+Name:         consul-server-0
+Namespace:    consul
+
+##...
+
+Status:       Pending
+
+##...
+
+Events:
+  Type     Reason            Age                  From               Message
+  ----     ------            ----                 ----               -------
+  Warning  FailedScheduling  3m29s (x3 over 13m)  default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
+```
+
+</CodeBlockConfig>
+
+There are two ways to resolve this issue. The fastest and simplest option is to use an up-to-date version of the Helm chart or `consul-k8s` tool to deploy Consul. The `consul-k8s` tool automatically creates the required PVs for you.
+
+If you cannot use a newer version of the Helm chart or `consul-k8s` tool, you can manually create the `StorageClass` object that governs the creation of PVs, and then specify it in the Consul Helm chart. For example, you can use the following YAML to create a `StorageClass` called `ebs-sc` for AWS EBS volumes:
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: ebs-sc
+provisioner: ebs.csi.aws.com
+volumeBindingMode: WaitForFirstConsumer
+parameters:
+  csi.storage.k8s.io/fstype: xfs
+  type: io1
+  iopsPerGB: "50"
+  encrypted: "true"
+```
+
+Finally, specify the [StorageClass](/consul/docs/reference/k8s/helm#v-server-storageclass) in the Consul Helm chart values and redeploy Consul to Kubernetes.
+
+```yaml
+##...
+server:
+  storageClass: "ebs-sc"
+##...
+```

0 commit comments
