
Conversation

@MicahBird

I love this project and hope this documentation helps! Please let me know if anything needs to be tweaked/adjusted :)

Owner

@qdm12 qdm12 left a comment


Awesome, thanks for the contribution 👍
Just a few comments to address 😉

@shepherdjerred

I would love to see this merged in! I followed these instructions and was able to easily set up gluetun on k8s.

@S0PEX

S0PEX commented Jan 4, 2024

> I would love to see this merged in! I followed these instructions and was able to easily set up gluetun on k8s.

Hey, do you mind sharing how you configured the other containers to use the sidecar?

@shepherdjerred

> I would love to see this merged in! I followed these instructions and was able to easily set up gluetun on k8s.

> Hey, do you mind sharing how you configured the other containers to use the sidecar?

Sure! Here's a Deployment where I put an application and gluetun onto the same Pod: https://github.com/shepherdjerred/servers/blob/main/cdk8s/dist/turing.k8s.yaml#L1862-L1967

With cdk8s: https://github.com/shepherdjerred/servers/blob/main/cdk8s/src/services/torrents/qbittorrent.ts#L39-L86

@S0PEX

S0PEX commented Jan 5, 2024

> I would love to see this merged in! I followed these instructions and was able to easily set up gluetun on k8s.

> Hey, do you mind sharing how you configured the other containers to use the sidecar?

> Sure! Here's a Deployment where I put an application and gluetun onto the same Pod: https://github.com/shepherdjerred/servers/blob/main/cdk8s/dist/turing.k8s.yaml#L1862-L1967
>
> With cdk8s: https://github.com/shepherdjerred/servers/blob/main/cdk8s/src/services/torrents/qbittorrent.ts#L39-L86

Thanks for getting back to me so quickly. The solution worked for me too. However, I'm thinking that with this setup, I won't be able to share the gluetun container with workloads that aren't in the same pod, right? I'm planning to check whether there's a good way to deploy gluetun separately and then set up other pods to use it as an egress network using labels.

@shepherdjerred

I'm not super experienced with Kubernetes, but that sounds correct. You could deploy one gluetun sidecar container per pod that needs the VPN, but maybe there's a better way.

@gjrtimmer

gjrtimmer commented Jan 20, 2024

@S0PEX @shepherdjerred I'm currently researching exactly what you're looking for. As far as I understand it, it should be possible to run gluetun separately. I'm currently trying to figure this out for a Nomad deployment, but it should be the same for both Nomad and Kubernetes: to run gluetun separately, it apparently needs to sit on a macvlan CNI network. I don't have it working yet, but I have at least figured out that people use the macvlan CNI driver for this and create a separate network for their VPN. Since both Nomad and Kubernetes can use CNI plugins, it should work for both.

Here is the clue I'm working from: a lot of homelabbers use the macvlan CNI to create a dedicated VPN network in their clusters, both on Kubernetes and on Nomad, and use it to redirect their traffic through Tailscale. I think the same principle should work for gluetun.

If you check blogs and repositories on GitHub, you'll see people using the macvlan CNI driver to create a special cluster-wide network that routes all traffic through a Tailscale VPN.

Hope this helps. Please ping me if you figure it out; I will do the same.

@S0PEX

S0PEX commented Jan 26, 2024

@gjrtimmer Thanks for the hint, I'll check out macvlan and see if I can get it working.

Comment on lines +27 to +28
containers:
- name: gluetun

@Kab1r Kab1r Apr 18, 2024


The better way to do this in newer versions of Kubernetes is to use native sidecar containers with a readiness probe. This can ensure that the gluetun sidecar starts and is healthy before the container being proxied is started.
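A minimal sketch of that pattern, assuming Kubernetes 1.29+ with native sidecar support and that the probes can reuse the image's Docker HEALTHCHECK command (/gluetun-entrypoint healthcheck, which may vary by gluetun version); names and values are placeholders:

spec:
  initContainers:
    - name: gluetun
      image: ghcr.io/qdm12/gluetun:latest
      # restartPolicy: Always is what turns an init container into a native sidecar
      restartPolicy: Always
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
      envFrom:
        - secretRef:
            name: gluetun # VPN provider credentials
      # the startup probe delays the main containers until gluetun reports healthy;
      # the readiness probe keeps gating Pod readiness afterwards
      startupProbe:
        exec:
          command: ["/gluetun-entrypoint", "healthcheck"]
        periodSeconds: 5
        failureThreshold: 30
      readinessProbe:
        exec:
          command: ["/gluetun-entrypoint", "healthcheck"]
        periodSeconds: 10
  containers:
    - name: myapp
      image: myapp:latest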


Isn't this a beta feature in v1.29 and not GA yet? It seems that it's still behind a feature gate, even in v1.30.



It is in beta, but the feature gate is on by default since v1.29


It's been a while and it would be great to see this done using an init container.


The init-container sidecar approach hit GA in 1.31, and I would be happy to see this implemented as well. If I have time, I will try to set this up.

@v1nsai

v1nsai commented May 20, 2024

I was able to access the UI using kubectl port-forward, but a LoadBalancer service never worked for me, and I know it's not user error on my part, since I was able to access the other container's UI just fine once I removed gluetun.

Here's the manifest I applied. If anyone sees why this won't work through the LoadBalancer, I'd love to hear it; I hate giving up, but it works with port forwarding, so that will do.

@banana-soldier

Thank you, this was really helpful. I was able to use Gluetun with browserless/chromium, which another container connects to with Puppeteer to run some routines.

@DreamingRaven

DreamingRaven commented Jun 25, 2024

Thanks for this pull request; it helped me get everything together and working, albeit slightly differently.

For anyone stumbling upon this to integrate with applications like qbittorrent, I have created a Helm chart that turns gluetun into an init-container-based sidecar, to enable binding to the tunnel interface in the same pod.

https://gitlab.com/GeorgeRaven/raven-helm-charts/-/tree/main/charts/qbittorrent?ref_type=heads

or using the gitlab package registry:

helm repo add raven https://gitlab.com/api/v4/projects/55284972/packages/helm/stable

The optional init container boils down to this: https://gitlab.com/GeorgeRaven/raven-helm-charts/-/blob/main/charts/qbittorrent/values.yaml?ref_type=heads#L28-L61

  initContainers:
  # optional gluetun VPN client sidecar
  # https://github.com/qdm12/gluetun
  # https://github.com/qdm12/gluetun-wiki/pull/7
  - name: gluetun # init sidecar for VPN connection
    image: "ghcr.io/qdm12/gluetun:latest" # <- you probably want this to be a set version
    restartPolicy: Always # turns this init container into a native sidecar (Kubernetes >= 1.29)
    imagePullPolicy: Always
    ports:
    - name: http-proxy
      containerPort: 8888
      protocol: TCP
    - name: tcp-shadowsocks
      containerPort: 8388
      protocol: TCP
    - name: udp-shadowsocks
      containerPort: 8388
      protocol: UDP
    envFrom:
    - secretRef:
        name: gluetun
        optional: false
    env:
    - name: TZ
      value: "Europe/London"
    - name: FIREWALL_DEBUG
      value: "on"
    - name: FIREWALL_INPUT_PORTS
      value: "8080" # <- the port for qbittorrent container otherwise blocked by gluetun firewall in same pod
    securityContext:
      capabilities:
        add:
        - NET_ADMIN

Specifically, FIREWALL_INPUT_PORTS enables a firewall rule that lets normal web traffic reach the qbittorrent server in the standard ingress > svc > pod manner of Kubernetes; otherwise the gluetun firewall blocks that traffic (for example, you trying to reach the qbittorrent UI) and liveness probes fail. This also uses envFrom, which lets a single secret populate many environment variables; that is useful if you encrypt your secrets with something like Bitnami sealed-secrets, as I do.

Hope this helps the next person looking to do this.
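For completeness, the secret referenced by envFrom above could be a plain Kubernetes Secret along these lines; the keys mirror gluetun environment variables used elsewhere in this thread and the values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: gluetun
type: Opaque
stringData: # stringData avoids manual base64 encoding
  VPN_SERVICE_PROVIDER: "<my provider>"
  VPN_TYPE: "wireguard"
  WIREGUARD_PRIVATE_KEY: "<priv key val>"
  WIREGUARD_ADDRESSES: "<IP val>"

A sealed-secrets setup, as mentioned above, encrypts the same keys into a SealedSecret that the controller turns back into this Secret in the cluster.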

@qdm12 force-pushed the main branch 3 times, most recently from 7894b72 to 440e806 on July 30, 2024 06:51
@holysoles
Contributor

holysoles commented Aug 9, 2024

Sharing some example config that ended up working for me, for those who would find it useful. Unsure why this PR isn't merged yet, but I'm glad it was here for reference!

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: ghcr.io/qdm12/gluetun:latest
          name: gluetun
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          ports:
            - containerPort: 9091
          env:
            - name: TZ
              value: '<timezone val>'
            - name: VPN_SERVICE_PROVIDER
              value: "<my provider>"
            - name: VPN_TYPE
              value: "wireguard"
            - name: WIREGUARD_PRIVATE_KEY
              value: "<priv key val>"
            - name: WIREGUARD_ADDRESSES
              value: "<IP val>"
            - name: FIREWALL_INPUT_PORTS
              value: "9091"
        - image: myapp:latest
          name: myapp

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
    - name: webserver
      port: 9091
      targetPort: 9091
      protocol: TCP
  externalIPs:
    - 192.168.1.99

@Mr-Philipp

> Sharing some example config that ended up working for me, for those who would find it useful. […]

That is some of the most important information needed by everyone running this in Kubernetes; I hope it gets merged soon.
Thank you for your contribution!

@bornav

bornav commented Mar 7, 2025

@holysoles The deployment looks good; what I'm unsure of is how you are forcing the pods to use the "gluetun" container's network?

@S0PEX

S0PEX commented Mar 8, 2025

@bornav
All containers within a Kubernetes pod share the same network namespace, meaning they have the same IP address, communicate with each other over localhost, and use the same network interfaces. Thus, if one container in the pod establishes a VPN connection, all network traffic from the pod, including traffic from other containers, is routed through the VPN.
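As a minimal illustration of that point (the names are placeholders and the VPN credentials are assumed to live in a secret called gluetun), any extra container in the same Pod egresses through the tunnel:

apiVersion: v1
kind: Pod
metadata:
  name: vpn-demo
spec:
  containers:
    - name: gluetun
      image: ghcr.io/qdm12/gluetun:latest
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
      envFrom:
        - secretRef:
            name: gluetun # VPN provider credentials
    - name: curl
      image: quay.io/curl/curl:latest
      command: ["sleep", "infinity"]

Running `kubectl exec vpn-demo -c curl -- curl -s https://ipinfo.io` should then report the VPN exit IP, because both containers share the Pod's network interfaces.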

@bornav

bornav commented Mar 8, 2025

> @bornav All containers within a Kubernetes pod share the same network namespace, meaning they have the same IP address, communicate with each other over localhost, and use the same network interfaces. Thus, if one container in the pod establishes a VPN connection, all network traffic from the pod, including traffic from other containers, is routed through the VPN.

I was not aware the network namespace was shared. Thanks!

sofmeright added a commit to sofmeright/Dungeon that referenced this pull request Oct 23, 2025
Two critical fixes for cross-seed cluster connectivity:

1. Add Pod CIDR (192.168.144.0/20) to FIREWALL_OUTBOUND_SUBNETS
   - Gluetun firewall sees Pod IPs after kube-proxy DNAT, not Service IPs
   - Must allow traffic to Pod CIDR for cluster service communication
   - Reference: qdm12/gluetun-wiki#7

2. Set DNS_KEEP_NAMESERVER=on and DOT=off (not delete)
   - Preserves Kubernetes DNS resolver for cluster service resolution
   - Disables DOT which requires external DNS (1.1.1.1) blocked by firewall
   - Critical for consistent cluster DNS resolution
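In gluetun environment-variable form, the two fixes in that commit amount to roughly the following (the Pod CIDR shown is the one from the commit; substitute your cluster's):

env:
  - name: FIREWALL_OUTBOUND_SUBNETS
    value: "192.168.144.0/20" # Pod CIDR, so cluster-internal traffic is allowed out
  - name: DNS_KEEP_NAMESERVER
    value: "on" # keep the Kubernetes resolver so cluster service names still resolve
  - name: DOT
    value: "off" # DNS over TLS needs an external resolver that the firewall would block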
@SisyphusMD

This discussion was incredibly helpful for me. I added a bit to what was discussed above, so I figured I should share my work as well. Essentially, I'm now running gluetun as a true sidecar container. I also had to come up with some sort of readiness check, and I wanted it to pass only if the VPN was connected, so I ping the IP address from my WIREGUARD_ADDRESSES environment variable, though I'm sure someone could come up with a better method of readiness and liveness probing.

Example Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: mynamespace
  labels:
    app: myapp
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: gluetun
          image: ghcr.io/qdm12/gluetun:v3.40.0
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          ports:
            - name: http
              containerPort: 8080 # port to reach myapp
              protocol: TCP
          envFrom:
            - configMapRef:
                name: myapp-configmap
          # Below restart policy turns an "init" container into a long-running sidecar.
          restartPolicy: Always
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "ping -c 1 $(echo $WIREGUARD_ADDRESSES | cut -d'/' -f1)"
            initialDelaySeconds: 5
            periodSeconds: 3
            timeoutSeconds: 2
            failureThreshold: 3
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "ping -c 1 $(echo $WIREGUARD_ADDRESSES | cut -d'/' -f1)"
            initialDelaySeconds: 10
            periodSeconds: 15
            timeoutSeconds: 2
            failureThreshold: 3
      # -- Connecting Other Containers --
      # Define other containers that you want to connect to the VPN.
      # When using Gluetun in a sidecar configuration, all other containers will use Gluetun's VPN connection.
      # For testing purposes, you can `kubectl exec -it -n mynamespace deploy/myapp -c curl-container -- sh` into the curl container and run `curl https://ipinfo.io` to check your connection!
      containers:
        - name: curl-container
          image: quay.io/curl/curl:latest
          command: ["sleep", "infinity"]

Example ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-configmap
  namespace: mynamespace
  labels:
    app: myapp
data:
  TZ: "Etc/UTC"
  VPN_SERVICE_PROVIDER: "custom"
  VPN_TYPE: "wireguard"
  WIREGUARD_ENDPOINT_IP: "X.X.X.X"
  WIREGUARD_ENDPOINT_PORT: "XXXX"
  WIREGUARD_PUBLIC_KEY: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  WIREGUARD_PRIVATE_KEY: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  WIREGUARD_PRESHARED_KEY: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
  WIREGUARD_ADDRESSES: "X.X.X.X/X"
  DNS_ADDRESS: "X.X.X.X"
  FIREWALL_OUTBOUND_SUBNETS: "10.244.0.0/16,192.168.0.0/20" # Include POD CIDR and possibly also LAN CIDR here
  FIREWALL_INPUT_PORTS: "8080" # myapp port
  # Below two ENVs are to maintain DNS resolution of cluster services
  DNS_KEEP_NAMESERVER: "on"
  DOT: "off"

Example Service:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: mynamespace
  labels:
    app: myapp
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
  type: ClusterIP

@qdm12
Owner

qdm12 commented Nov 13, 2025

Can you please try image tag :pr-2970 with both DNS_KEEP_NAMESERVER=off and DOT=on? This should fix K8s local DNS as well as sending DNS requests through the TLS+VPN tunnel correctly.
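For anyone testing that suggestion, relative to the examples above it amounts to roughly this (assuming the same ghcr.io registry; the :pr-2970 tag only exists while that pull request is open):

image: ghcr.io/qdm12/gluetun:pr-2970
env:
  - name: DNS_KEEP_NAMESERVER
    value: "off"
  - name: DOT
    value: "on"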
