3-Node Kubernetes 1.34.1 Cluster with automated provisioning using Vagrant, VirtualBox, and Ansible.
- Kubernetes Version: 1.34.1
- Cluster: 1 control-plane + 2 workers
- Container Runtime: containerd 1.7.28
- CNI: Calico
- Base OS: Ubuntu 24.04
- Hypervisor: VirtualBox (for Linux x86_64/amd64)
This project includes complete automated provisioning. Running `vagrant up` will:
- ✅ Install and configure the containerd runtime
- ✅ Install the Kubernetes binaries (kubelet, kubeadm, kubectl)
- ✅ Initialize the control plane with kubeadm
- ✅ Install the Calico CNI
- ✅ Join the worker nodes to the cluster
- ✅ Untaint the control plane for workload scheduling
The cluster will be fully functional after provisioning completes.
Prerequisites:
- Linux system with x86_64/amd64 architecture
- VirtualBox 7.0+ (for Linux)
- Vagrant 2.4.0 or later
- Ansible 2.15.0 or later
- At least 10 GB RAM available (cluster uses ~10 GB total)
- At least 30 GB disk space
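Before running the setup script, a quick sanity check of the host can save time (a sketch; assumes the tools, if already installed, are on your PATH):

```bash
# Check tool versions and available resources on the host
vagrant --version                 # want 2.4.0+
VBoxManage --version              # want 7.0+
ansible --version | head -n 1     # want core 2.15.0+
free -h                           # want ~10 GB RAM free
df -h .                           # want ~30 GB disk free
```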
Run the automated setup script to check and install Vagrant, VirtualBox, and Ansible:
```bash
./setup-host.sh
```
This script will:
- ✅ Check for required tools (Vagrant, VirtualBox, Ansible)
- ✅ Install missing tools or update outdated versions
- ✅ Add your user to the `vboxusers` group if needed
- ✅ Verify minimum version requirements

Important: If the script adds you to the `vboxusers` group, you MUST log out and log back in before continuing.
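To confirm the group change took effect after re-login, a one-liner like this (standard coreutils) works:

```bash
# Prints the current user's groups; vboxusers should appear
id -nG "$USER" | grep -qw vboxusers && echo "OK: in vboxusers" || echo "Missing: log out and back in"
```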
Linux systems often have KVM (Kernel-based Virtual Machine) running by default, which conflicts with VirtualBox. Switch to VirtualBox mode:
```bash
./switch-to-virtualbox.sh
```
This script will:
- ✅ Stop KVM services (libvirtd, virtlogd, virtlockd)
- ✅ Unload KVM kernel modules (kvm_intel/kvm_amd, kvm)
- ✅ Load VirtualBox kernel modules (vboxdrv, vboxnetflt, vboxnetadp)
- ✅ Verify VirtualBox is functional
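If you prefer to see what the script does, a rough manual equivalent looks like this (a sketch; module names vary between Intel and AMD CPUs):

```bash
# Stop KVM services and swap kernel modules by hand
sudo systemctl stop libvirtd virtlogd virtlockd
sudo modprobe -r kvm_intel kvm            # use kvm_amd instead of kvm_intel on AMD
sudo modprobe -a vboxdrv vboxnetflt vboxnetadp
lsmod | grep -E "(kvm|vbox)"              # vbox modules should now be listed
```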
Start the cluster:
```bash
vagrant up
```
This will:
- Download the Ubuntu 24.04 Vagrant box (first time only)
- Create 3 VMs (k8s-cp, k8s-node-1, k8s-node-2)
- Run Ansible provisioning (all automated)
Expected time: 15-20 minutes (depending on your internet speed and system)
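If provisioning is interrupted partway (flaky network, a timeout), it can usually be resumed without destroying the VMs:

```bash
# Re-run Ansible provisioning on the already-created VMs
vagrant provision
# Or reload and re-provision a single node
vagrant reload k8s-node-1 --provision
```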
To verify the cluster, run the helper script:
```bash
./verify-cluster.sh
```
Or manually:
```bash
vagrant ssh k8s-cp -c "kubectl get nodes -o wide"
vagrant ssh k8s-cp -c "kubectl get pods -A"
```

| Component | Details |
|---|---|
| Kubernetes Version | 1.34.1 |
| Container Runtime | containerd 1.7.28 |
| CNI Plugin | Calico 3.28.0 |
| Base OS | Ubuntu 24.04 LTS |
| Control Plane IP | 192.168.57.10 |
| Worker 1 IP | 192.168.57.11 |
| Worker 2 IP | 192.168.57.12 |
| Pod Network CIDR | 10.244.0.0/16 |
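To double-check that the pod network CIDR was actually applied, you can inspect the kubeadm-config ConfigMap that kubeadm creates in kube-system:

```bash
# Should print: podSubnet: 10.244.0.0/16
vagrant ssh k8s-cp -c "kubectl -n kube-system get configmap kubeadm-config -o yaml | grep podSubnet"
```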
- Control Plane: 2 CPU, 6144 MB RAM
- Worker 1: 2 CPU, 2048 MB RAM
- Worker 2: 2 CPU, 2048 MB RAM
This repository includes scripts to easily switch between hypervisors without conflicts.
```bash
./switch-to-virtualbox.sh
```
Use this before running `vagrant up` or when you want to use VirtualBox VMs.
```bash
./switch-to-kvm.sh
```
Use this when you need KVM for other applications.
Important Notes:
- Always shut down running VMs before switching hypervisors
- Use `vagrant halt` to stop VirtualBox VMs before switching to KVM
- Use `virsh list --all` to check for running KVM VMs before switching to VirtualBox
- The two hypervisors cannot be active simultaneously (a safe-switch sequence is sketched below)
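A minimal safe-switch sequence combining those checks (a sketch using the scripts above):

```bash
# VirtualBox -> KVM
vagrant halt                   # stop all VirtualBox VMs first
./switch-to-kvm.sh

# KVM -> VirtualBox
virsh list --all               # make sure no KVM domains are running
./switch-to-virtualbox.sh
vagrant up
```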
```bash
# Start all nodes
vagrant up
# Start specific node
vagrant up k8s-cp
vagrant up k8s-node-1
vagrant up k8s-node-2
# Stop the cluster
vagrant halt
# Restart with re-provisioning
vagrant reload --provision
# Destroy the cluster
vagrant destroy -f
# Check cluster status
vagrant status
```

```bash
# SSH into nodes
vagrant ssh k8s-cp
vagrant ssh k8s-node-1
vagrant ssh k8s-node-2
# Run single command without interactive shell
vagrant ssh k8s-cp -c "kubectl get nodes"# From control plane node
vagrant ssh k8s-cp
# Inside control plane:
kubectl get nodes
kubectl get pods -A
kubectl get namespaces
kubectl cluster-info
# Deploy a test workload
kubectl create deployment nginx --image=nginx
kubectl get pods
```

Project structure:
```
vagrant-linux/
├── Vagrantfile              # VM definitions and provisioning orchestration
├── README.md                # This file
├── CLAUDE.md                # Project instructions for Claude Code
├── MANUAL_STEPS.md          # Manual configuration steps (reference)
├── setup-host.sh            # Install/check Vagrant, VirtualBox, Ansible
├── switch-to-virtualbox.sh  # Switch from KVM to VirtualBox
├── switch-to-kvm.sh         # Switch from VirtualBox to KVM
├── verify-cluster.sh        # Verify cluster health
└── playbooks/
    ├── common.yml           # System configuration for all nodes
    ├── binaries-only.yml    # Install Kubernetes binaries
    ├── containerd.yml       # Configure containerd runtime
    ├── control-plane.yml    # Initialize Kubernetes control plane
    ├── calico.yml           # Install Calico CNI
    ├── cilium.yml           # Install Cilium CNI (alternative, not used)
    ├── untaint.yml          # Allow scheduling on control plane
    ├── workers.yml          # Join workers to cluster
    └── k8s-join-command.sh  # Generated join command (created during provisioning)
```
Error: VirtualBox can't operate in VMX root mode
Solution: Switch to VirtualBox mode:
```bash
./switch-to-virtualbox.sh
```
Issue: Permission errors because your user is not in the `vboxusers` group
Solution: Add yourself and re-login:
```bash
sudo usermod -aG vboxusers $USER
# Log out and log back in
```
Issue: Join token expired (tokens last 24 hours)
Solution: Generate a new token on the control plane:
```bash
vagrant ssh k8s-cp
kubeadm token create --print-join-command
```
Copy the output to `playbooks/k8s-join-command.sh` and reprovision the workers:
```bash
vagrant provision k8s-node-1 --provision-with worker
vagrant provision k8s-node-2 --provision-with worker
```
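The token refresh can also be scripted from the host in one step (a sketch; assumes the vagrant user's kubeconfig on k8s-cp has admin rights and that `vagrant ssh -c` prints only the command output to stdout):

```bash
# Regenerate the join command and write it where the worker playbook expects it
vagrant ssh k8s-cp -c "kubeadm token create --print-join-command" > playbooks/k8s-join-command.sh
vagrant provision k8s-node-1 --provision-with worker
vagrant provision k8s-node-2 --provision-with worker
```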
vagrant ssh k8s-cp -c "kubectl get pods -n kube-system -l k8s-app=calico-node"Check 2: Verify containerd is running:
vagrant ssh k8s-node-1 -c "systemctl status containerd"Check 3: Check kubelet logs:
vagrant ssh k8s-node-1 -c "journalctl -u kubelet -f"Check: Node resources:
vagrant ssh k8s-cp -c "kubectl describe nodes"Solution: Increase node memory/CPU in Vagrantfile and reload:
vagrant reload
```
If the box download fails, check your internet connection and retry:
```bash
vagrant box add bento/ubuntu-24.04 --provider virtualbox
```
If everything is broken:
```bash
# Destroy everything
vagrant destroy -f
# Remove Vagrant boxes (optional)
vagrant box remove bento/ubuntu-24.04
# Clean VirtualBox VMs manually if needed
VBoxManage list vms
VBoxManage unregistervm <vm-name> --delete
# Start fresh
vagrant up
```

To upgrade the Kubernetes version:
- Edit `Vagrantfile` and update the `KUBERNETES_VERSION` variable
- Rebuild:
```bash
vagrant destroy -f && vagrant up
```
To change node resources:
- Edit the `NODES` array in `Vagrantfile`
- Modify the `cpus` or `memory` values
- Apply the changes:
```bash
vagrant reload
```
To add a worker node:
- Add the new node to the `NODES` array in `Vagrantfile`
- Add the node to the `/etc/hosts` section in `playbooks/common.yml`
- Start the new node (a quick join check is sketched below):
```bash
vagrant up <new-node-name>
```
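After the playbooks run, a quick check that the node actually joined (k8s-node-3 here is a hypothetical name):

```bash
vagrant up k8s-node-3                        # bring up the hypothetical new node
vagrant ssh k8s-cp -c "kubectl get nodes"    # it should appear, then go Ready
```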
To use kubectl from your host machine:
```bash
# Copy kubeconfig from control plane
vagrant ssh k8s-cp -c "cat ~/.kube/config" > ~/.kube/vagrant-k8s-config
# Use the config
export KUBECONFIG=~/.kube/vagrant-k8s-config
kubectl get nodes
```
Note: You may need to update the server address in the config from `127.0.0.1:6443` to `192.168.57.10:6443`.
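A one-liner sketch for that address fix (only needed if your copied config still points at 127.0.0.1):

```bash
# Rewrite the API server address in the copied kubeconfig
sed -i 's|https://127.0.0.1:6443|https://192.168.57.10:6443|' ~/.kube/vagrant-k8s-config
```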
Security notes:
- This cluster is for development/testing only
- Control plane is untainted (allows workload scheduling)
- No network policies configured by default
- No RBAC restrictions beyond defaults
- Do not expose this cluster to the internet
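If you want scheduling behavior closer to a production layout, the control-plane taint removed by untaint.yml can be restored at any time with standard kubectl (a sketch):

```bash
# Re-taint the control plane so ordinary workloads no longer schedule there
vagrant ssh k8s-cp -c "kubectl taint nodes k8s-cp node-role.kubernetes.io/control-plane:NoSchedule"
```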
Further reading:
- Kubernetes Documentation
- Calico Documentation
- containerd Documentation
- Vagrant Documentation
- Ansible Documentation
If you encounter issues:
- Check the Troubleshooting section
- Review logs: `vagrant ssh <node> -c "journalctl -u kubelet -f"`
- Check Vagrant status: `vagrant status`
- Verify hypervisor: `lsmod | grep -E "(kvm|vbox)"`
- Check VirtualBox: `VBoxManage list vms`
This project is provided as-is for educational and development purposes.
Made with ❤️ for learning Kubernetes on Linux