Add Kubernetes Sidecar Networking Documentation. #7
Conversation
qdm12 left a comment:
Awesome, thanks for the contribution 👍
Just a few comments to address 😉
I would love to see this merged in! I followed these instructions and was able to easily set up gluetun on k8s.
Hey, do you mind sharing how you configured the other containers to use the sidecar?
Sure! Here's a Deployment where I put an application and gluetun into the same Pod: https://github.com/shepherdjerred/servers/blob/main/cdk8s/dist/turing.k8s.yaml#L1862-L1967 With cdk8s: https://github.com/shepherdjerred/servers/blob/main/cdk8s/src/services/torrents/qbittorrent.ts#L39-L86
Thanks for getting back to me so quickly. The solution worked for me too. However, I'm thinking that with this setup, I won't be able to share the gluetun container with workloads that aren't in the same pod, right? I'm planning to check whether there's a good way to deploy gluetun separately and then set up other pods to use it as an egress network using labels.
I'm not super experienced with Kubernetes, but that sounds correct. You could deploy one gluetun sidecar container per pod that needs the VPN, but maybe there's a better way.
@S0PEX @shepherdjerred I'm currently researching exactly what you two are looking for. As far as I understand it, it should be possible to run gluetun separately; I'm currently trying to figure it out for a Nomad deployment. It should work the same for both Nomad and Kubernetes, because running gluetun separately means it must be on a CNI network. Here is the clue I'm working from: a lot of homelabbers use the macvlan CNI driver to create a special cluster-wide VPN network, on both Kubernetes and Nomad, and route all their traffic through Tailscale with it. If you check blogs and repositories on GitHub, you'll see this pattern repeatedly. I'm thinking the same principle should work for gluetun. Hope this helps; please ping me if you figure it out, and I will do the same.
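For anyone who wants a concrete starting point for the macvlan idea above, here is a minimal sketch assuming the Multus meta-plugin is installed; the attachment name, master interface, and IP range are placeholders, not values from this thread:

```yaml
# Hypothetical Multus NetworkAttachmentDefinition creating a macvlan
# network that a standalone gluetun pod could attach to as a VPN egress.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vpn-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216"
      }
    }
```

A pod then attaches to it with the annotation `k8s.v1.cni.cncf.io/networks: vpn-net`, which gives it a second interface on that network.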
@gjrtimmer Thanks for the hint, I'll check it out.
```yaml
containers:
  - name: gluetun
```
The better way to do this in newer versions of Kubernetes is to use native sidecar containers with a readiness probe. This can ensure that the gluetun sidecar starts and is healthy before the container being proxied is started.
Isn't this a beta feature in v1.29 and not GA yet? It seems that it's still behind a feature gate, even in v1.30.
It is in beta, but the feature gate is on by default since v1.29.
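For clusters where the gate is not on by default, a sketch of enabling it explicitly via the kubelet configuration (the gate typically also needs `--feature-gates=SidecarContainers=true` on the API server):

```yaml
# KubeletConfiguration snippet enabling the SidecarContainers feature gate.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  SidecarContainers: true
```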
It's been a while and it would be great to see this done using an init container.
Native sidecar containers (restartable init containers) hit GA in v1.33, and I would be happy to see this implemented as well. If I have time, I will try to set this up.
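For reference, a hedged sketch of the native sidecar pattern discussed above. The readiness probe reuses gluetun's own healthcheck command (the one its Docker HEALTHCHECK runs); that command's availability in the image is an assumption, not something confirmed in this thread:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        # restartPolicy: Always turns this init container into a native
        # sidecar: it runs for the pod's whole lifetime, and its readiness
        # gates the startup of the regular containers below.
        - name: gluetun
          image: ghcr.io/qdm12/gluetun:latest
          restartPolicy: Always
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: VPN_SERVICE_PROVIDER
              value: "<my provider>"
            - name: VPN_TYPE
              value: "wireguard"
            # Remaining VPN_* variables omitted for brevity.
          readinessProbe:
            exec:
              # Assumed healthcheck entrypoint; keeps myapp from starting
              # until gluetun reports the VPN is up.
              command: ["/gluetun-entrypoint", "healthcheck"]
      containers:
        - name: myapp
          image: myapp:latest
```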
I was able to access the UI using port forwarding. Here's the manifest I applied; if anyone sees why this won't work through the LoadBalancer, I'd love to hear it. I hate giving up, but screw it, it works with port forwarding.
Thank you, this was really helpful. I was able to use gluetun with browserless/chromium, which another container connects to with Puppeteer to run some routines.
Thanks for this pull, this helped me get everything together and working, albeit slightly differently. For anyone stumbling upon this wanting to integrate with applications like qBittorrent, I have created a Helm chart that builds an init-container-based sidecar out of gluetun, to enable binding to the tunnel interface in the same pod: https://gitlab.com/GeorgeRaven/raven-helm-charts/-/tree/main/charts/qbittorrent?ref_type=heads (also available via the GitLab package registry). The optional init container boils down to this: https://gitlab.com/GeorgeRaven/raven-helm-charts/-/blob/main/charts/qbittorrent/values.yaml?ref_type=heads#L28-L61 Specifically, it adds a firewall rule so that normal web traffic is forwarded to the qBittorrent server in the standard ingress > svc > pod manner of k8s; otherwise the firewall blocks that traffic, such as you trying to access qBittorrent (and liveness probes fail, etc.). It also uses envFrom, which allows one secret to populate many environment variables; this is useful if you encrypt your secrets with something like Bitnami sealed-secrets, as I do. Hope this helps the next person looking to do this.
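On the envFrom point, a minimal sketch (the secret name here is hypothetical): a single Secret populates all of gluetun's environment variables at once.

```yaml
containers:
  - name: gluetun
    image: ghcr.io/qdm12/gluetun:latest
    envFrom:
      # Hypothetical Secret whose keys are gluetun variables such as
      # VPN_SERVICE_PROVIDER and WIREGUARD_PRIVATE_KEY.
      - secretRef:
          name: gluetun-env
```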
Sharing some example config that ended up working for me, for those who would find it useful. Unsure why this MR isn't merged yet, but glad it was here for reference!

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: ghcr.io/qdm12/gluetun:latest
          name: gluetun
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          ports:
            - containerPort: 9091
          env:
            - name: TZ
              value: '<timezone val>'
            - name: VPN_SERVICE_PROVIDER
              value: "<my provider>"
            - name: VPN_TYPE
              value: "wireguard"
            - name: WIREGUARD_PRIVATE_KEY
              value: "<priv key val>"
            - name: WIREGUARD_ADDRESSES
              value: "<IP val>"
            - name: FIREWALL_INPUT_PORTS
              value: "9091"
        - image: myapp:latest
          name: myapp
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
    - name: webserver
      port: 9091
      targetPort: 9091
      protocol: TCP
  externalIPs:
    - 192.168.1.99
```
That is some of the most important information needed by anyone running this in Kubernetes. I hope it gets merged soon.
@holysoles The deployment looks good. What I'm unsure of is: how are you forcing pods to use the "gluetun" container network?
@bornav Containers in the same pod share the pod's network namespace, so the app container's traffic already goes through gluetun without any extra wiring.
Was not aware the network namespace was shared. Thanks!
Two critical fixes for cross-seed cluster connectivity:

1. Add the Pod CIDR (192.168.144.0/20) to FIREWALL_OUTBOUND_SUBNETS
   - The gluetun firewall sees Pod IPs after kube-proxy DNAT, not Service IPs
   - Traffic to the Pod CIDR must be allowed for cluster service communication
   - Reference: qdm12/gluetun-wiki#7
2. Set DNS_KEEP_NAMESERVER=on and DOT=off (not delete)
   - Preserves the Kubernetes DNS resolver for cluster service resolution
   - Disables DOT, which requires external DNS (1.1.1.1) that is blocked by the firewall
   - Critical for consistent cluster DNS resolution
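Expressed as gluetun container environment variables, the two fixes above boil down to this sketch (the Pod CIDR is specific to that cluster; substitute your own):

```yaml
env:
  - name: FIREWALL_OUTBOUND_SUBNETS
    value: "192.168.144.0/20"  # that cluster's Pod CIDR
  - name: DNS_KEEP_NAMESERVER
    value: "on"   # keep the Kubernetes resolver for cluster DNS
  - name: DOT
    value: "off"  # DNS-over-TLS would need external DNS, blocked by the firewall
```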
This discussion was incredibly helpful for me. I added a bit to what was discussed above, so I figured I should share my work as well. Essentially, I'm now running gluetun as a true sidecar container. I also had to come up with some sort of readiness check, and I wanted it to pass only if the VPN was connected, so I'm using the IP address from my WIREGUARD_ADDRESSES env variable, though I'm sure someone could come up with a better method of readiness and liveness probing.

Example Deployment:
Example ConfigMap:
Example Service:
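The linked example manifests were not preserved in this thread; as a rough, hypothetical sketch of the readiness check described above (the tun0 interface name and the availability of a shell and the `ip` tool in the image are assumptions):

```yaml
containers:
  - name: gluetun
    image: ghcr.io/qdm12/gluetun:latest
    env:
      - name: WIREGUARD_ADDRESSES
        value: "10.2.0.2/32"  # placeholder address
    readinessProbe:
      exec:
        command:
          - /bin/sh
          - -c
          # Pass only once the tunnel interface carries the IP from
          # WIREGUARD_ADDRESSES (the /32 suffix is stripped before grep).
          - 'ip addr show tun0 | grep -q "${WIREGUARD_ADDRESSES%/*}"'
      periodSeconds: 15
```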
Can you please try image tag |
I love this project and hope this documentation helps! Please let me know if anything needs to be tweaked/adjusted :)