Zabbix on OpenShift - Proof of Concept

A production-ready Helm chart for deploying a complete Zabbix monitoring stack on OpenShift with advanced security features including mutual TLS (mTLS) authentication between all components.

Overview

This project implements a complete Zabbix monitoring infrastructure on OpenShift/Kubernetes using Helm charts. The deployment includes:

  • Zabbix Server with PostgreSQL backend
  • Zabbix Web Interface (Nginx + PHP)
  • Zabbix Proxy for distributed monitoring
  • Three types of agents deployed as DaemonSets on every node:
    • Agent2 (Active) - Connects directly to Zabbix Server
    • Agent2-Proxy (Active) - Connects to Zabbix Server via Proxy
    • AgentD (Passive) - Monitored on-demand via Proxy
  • Automated bootstrap for registering hosts and proxy
  • Full mTLS encryption for all inter-component communication

Key Features

  • Multi-version support - Works with Zabbix 6.0.x, 7.0.x, and 7.4.x
  • Automated certificate management - Self-signed CA and certificates generated automatically
  • Zero-touch deployment - Hosts and proxy auto-registered via bootstrap job
  • OpenShift security compliant - SCCs, RBAC, non-root containers
  • Production-ready - Persistent storage, health checks, proper resource limits
  • Flexible monitoring - Multiple agent types for different use cases


Architecture

High-Level Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                         OpenShift Cluster                           │
│                                                                     │
│  ┌──────────────┐         ┌──────────────┐                          │
│  │ Zabbix Web   │◄────────┤ Zabbix Server│◄─────────┐               │
│  │   (Nginx)    │         │  (PostgreSQL)│          │               │
│  └──────────────┘         └──────────────┘          │               │
│         ▲                         ▲                  │              │
│         │                         │                  │              │
│    [Route/Ingress]                │                  │              │
│                              ┌────┴────────┐    ┌────┴─────┐        │
│                              │   Proxy     │    │  mTLS    │        │
│                              │  (Active)   │    │ Direct   │        │
│                              └──┬───────┬──┘    │Connection│        │
│                                 │ mTLS  │       └─────▲────┘        │
│                         ┌───────┘       └──────┐      │             │
│                         │                      │      │             │
│  ┌──────────────────────┴──────────────────────┴──────┴────────┐    │
│  │                      Kubernetes Nodes                       │    │
│  │                                                             │    │
│  │  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐      │    │
│  │  │   Agent2    │    │Agent2-Proxy │    │AgentD-Proxy │      │    │
│  │  │  (Active)   │    │  (Active)   │    │  (Passive)  │      │    │
│  │  │             │    │             │    │             │      │    │
│  │  │  Direct     │    │   Via       │    │   Via       │      │    │
│  │  │  to Server  │    │   Proxy     │    │   Proxy     │      │    │
│  │  │   (mTLS)    │    │             │    │             │      │    │
│  │  └─────────────┘    └─────────────┘    └─────────────┘      │    │
│  │           (Deployed as DaemonSets on each node)             │    │
│  └─────────────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────────┘

Component Communication Flow

1. Zabbix Server (Central Hub)

  • Listens on port 10051 for connections
  • Receives active checks from Agent2 instances
  • Receives proxy data from Zabbix Proxy
  • All connections secured with mTLS (certificate authentication)
  • Connected to PostgreSQL database for data storage

2. Zabbix Proxy (Distributed Collector)

  • Operates in active mode (connects to Server)
  • Listens on port 10051 for agent connections
  • Forwards collected data to Zabbix Server
  • Communicates with Server using mTLS (see the sketch below):
    • TLS_ACCEPT=4 (certificate) - Proxy → Server connections are authenticated by certificate
    • TLS_CONNECT=1 (unencrypted) - Server → Proxy; unused here, since an active proxy always initiates the connection itself
  • Uses SQLite for local buffer storage
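
The daemon-side settings map to environment variables supported by the official zabbix-proxy images; a minimal sketch of the equivalent configuration (release name, namespace, and certificate file names are illustrative):

# Point the proxy's outbound (Proxy → Server) TLS at the mounted certificates
oc set env deployment/zabbix-7-4-proxy \
  ZBX_TLSCONNECT=cert \
  ZBX_TLSACCEPT=cert \
  ZBX_TLSCAFILE=/etc/zabbix/tls/ca.crt \
  ZBX_TLSCERTFILE=/etc/zabbix/tls/proxy.crt \
  ZBX_TLSKEYFILE=/etc/zabbix/tls/proxy.key \
  -n zabbix-monitoring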

3. Agent2 (Direct to Server)

  • Active agent - Initiates connection to Server
  • Deployed as DaemonSet (one pod per node)
  • Hostname: <node-name>-agent2
  • Sends metrics directly to Zabbix Server
  • mTLS Configuration:
    • TLS_CONNECT=4 (certificate)
    • TLS_ACCEPT=4 (certificate)

4. Agent2-Proxy (Via Proxy)

  • Active agent - Initiates connection to Proxy
  • Deployed as DaemonSet (one pod per node)
  • Hostname: <node-name>-agent2-proxy
  • Sends metrics to Proxy, which forwards to Server
  • mTLS Configuration:
    • TLS_CONNECT=4 (certificate)
    • TLS_ACCEPT=4 (certificate)
  • Use case: Distributed monitoring, network segmentation

5. AgentD-Proxy (Passive Agent)

  • Passive agent - Waits for Proxy to poll it
  • Deployed as DaemonSet (one pod per node)
  • Hostname: <node-name>-agentd-proxy
  • Proxy connects to agent on port 10050 to collect metrics
  • mTLS Configuration:
    • TLS_CONNECT=4 (certificate)
    • TLS_ACCEPT=4 (certificate)
  • Use case: Legacy monitoring, on-demand metrics collection

Security Model

Certificate-Based Authentication (mTLS)

All components authenticate using X.509 certificates:

┌──────────────┐
│  CA (Root)   │
│  ca.crt      │
└───────┬──────┘
        │
        ├─────────┬─────────┬─────────┐
        │         │         │         │
    ┌───▼───┐ ┌──▼───┐ ┌──▼────┐ ┌──▼───┐
    │Server │ │Proxy │ │Agent  │ │Web   │
    │ Cert  │ │Cert  │ │ Cert  │ │ UI   │
    └───────┘ └──────┘ └───────┘ └──────┘

Certificate Generation Process:

  1. Pre-install Hook - certgen-job runs before deployment
  2. CA Creation - Self-signed root CA certificate generated
  3. Component Certs - Individual certificates for server, proxy, agent
  4. Secret Storage - Certificates stored in Kubernetes Secret
  5. Volume Mounts - Certificates mounted into all component pods
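
In openssl terms the job's steps look roughly like this; a minimal sketch, with file names and subjects chosen for illustration rather than copied from the job's script:

# 1. Self-signed root CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=zabbix-ca" -keyout ca.key -out ca.crt

# 2. Key + CSR for one component (repeated for server, proxy, agents, web)
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=zabbix-server" -keyout server.key -out server.csr

# 3. Sign the component CSR with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out server.crt

# 4. Store the results in a Secret for pods to mount (name illustrative)
oc create secret generic zabbix-7-4-tls \
  --from-file=ca.crt --from-file=server.crt --from-file=server.key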

TLS Configuration Matrix:

Component      TLS_CONNECT       TLS_ACCEPT   Direction
Server         N/A               4 (cert)     Receives connections
Proxy          1 (unencrypted)   4 (cert)     Initiates to Server
Agent2         4 (cert)          4 (cert)     Initiates to Server
Agent2-Proxy   4 (cert)          4 (cert)     Initiates to Proxy
AgentD-Proxy   4 (cert)          4 (cert)     Receives from Proxy
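
To confirm the chain matches the matrix above, pull the certificates out of the generated Secret and verify them locally (Secret and data key names assumed per the defaults):

# Extract the CA and one component certificate
oc get secret zabbix-7-4-tls -n zabbix-monitoring \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > /tmp/ca.crt
oc get secret zabbix-7-4-tls -n zabbix-monitoring \
  -o jsonpath='{.data.server\.crt}' | base64 -d > /tmp/server.crt

# Every component certificate must chain to the same CA for mTLS to succeed
openssl verify -CAfile /tmp/ca.crt /tmp/server.crt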

Components

Core Components

Zabbix Server

  • Deployment: 1 replica
  • Image: zabbix/zabbix-server-pgsql
  • Database: PostgreSQL (StatefulSet)
  • Ports: 10051 (server)
  • Storage: PostgreSQL persistent volume (5Gi)

Zabbix Web UI

  • Deployment: 1 replica
  • Image: zabbix/zabbix-web-nginx-pgsql
  • Ports: 8080 (HTTP)
  • Access: OpenShift Route
  • Authentication: Admin/zabbix (default)

Zabbix Proxy

  • Deployment: 1 replica
  • Image: zabbix/zabbix-proxy-sqlite3
  • Ports: 10051 (proxy)
  • Mode: Active (connects to server)
  • Storage: SQLite (ephemeral)

Agent Components (DaemonSets)

Agent2 (Direct)

  • Image: zabbix/zabbix-agent2
  • Deployment: DaemonSet (all nodes)
  • Naming: <node-name>-agent2
  • Connection: Direct to Server (port 10051)
  • Mode: Active

Agent2-Proxy

  • Image: zabbix/zabbix-agent2
  • Deployment: DaemonSet (all nodes)
  • Naming: <node-name>-agent2-proxy
  • Connection: Via Proxy (port 10051)
  • Mode: Active

AgentD-Proxy

  • Image: zabbix/zabbix-agent
  • Deployment: DaemonSet (all nodes)
  • Naming: <node-name>-agentd-proxy
  • Connection: Via Proxy (port 10050)
  • Mode: Passive
  • Interface: Agent interface on port 10050

Supporting Components

PostgreSQL Database

  • Image: bitnami/postgresql
  • Deployment: StatefulSet
  • Storage: 5Gi persistent volume
  • Credentials: zabbix/zabbix (configurable)

Bootstrap Job

  • Image: bitnami/kubectl
  • Execution: Post-install/upgrade hook
  • Purpose: Auto-register proxy and hosts
  • API Version: Adapts to Zabbix 6.x vs 7.x
  • Tools: curl, jq for JSON-RPC API calls

Certificate Generator Job

  • Image: bitnami/kubectl
  • Execution: Pre-install/upgrade hook
  • Purpose: Generate CA and component certificates
  • Tools: openssl

Technologies & Tools

Container Platform

  • OpenShift 4.x / Kubernetes 1.24+
  • Helm 3.x for package management
  • kubectl for manual operations

Zabbix Stack

  • Zabbix Server 6.0.x / 7.0.x / 7.4.x
  • Zabbix Proxy (SQLite variant)
  • Zabbix Agent2 (modern agent)
  • Zabbix Agent (legacy agentd)
  • Zabbix Web (Nginx + PHP-FPM)

Data Storage

  • PostgreSQL 15+ (via Bitnami)
  • SQLite (proxy local storage)
  • Persistent Volumes (RWO)

Security

  • OpenSSL for certificate generation
  • mTLS (mutual TLS) for all connections
  • RBAC (ServiceAccount, RoleBinding)
  • Security Context Constraints (OpenShift)
  • Non-root containers (all components)

Automation

  • Helm Hooks for job orchestration
  • Zabbix API (JSON-RPC 2.0) for automation
  • curl + jq for API scripting
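
As an illustration of the pattern the bootstrap job scripts (Zabbix 7.x style; the web service name is assumed and would be resolvable from inside the cluster):

# Log in and capture the API token (7.x uses the 'username' parameter)
TOKEN=$(curl -s http://zabbix-7-4-web:8080/api_jsonrpc.php \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"user.login","params":{"username":"Admin","password":"zabbix"},"id":1}' \
  | jq -r '.result')

# List registered hosts, authenticating with a Bearer header
curl -s http://zabbix-7-4-web:8080/api_jsonrpc.php \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d '{"jsonrpc":"2.0","method":"host.get","params":{"output":["host"]},"id":2}' \
  | jq '.result'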

Networking

  • ClusterIP services (internal)
  • OpenShift Routes (external access)
  • Multus CNI (optional, for multi-network)

Project Structure

zabbix-helm-poc/
├── README.md                          # This file
├── BOOTSTRAP-DIFFERENCES.md           # API version differences (6.x vs 7.x)
├── multus-cm.yaml                     # Optional: Multus CNI configuration
│
└── manifests/                         # Helm chart root
    ├── Chart.yaml                     # Chart metadata
    ├── values.yaml                    # Default configuration values
    │
    └── templates/                     # Kubernetes manifests
        ├── _helpers.tpl               # Helm template helpers
        │
        ├── certgen-job.yaml          # Pre-install: Generate TLS certificates
        ├── zabbix-bootstrap-job.yaml # Post-install: Register hosts (Zabbix 7.0+)
        ├── zabbix-bootstrap-job-legacy.yaml  # Post-install: Register hosts (Zabbix <7.0)
        │
        ├── sa-rolebinding.yaml       # RBAC: ServiceAccount + RoleBinding
        │
        ├── postgres-statefulset.yaml # PostgreSQL database
        ├── postgres-service.yaml     # PostgreSQL service
        │
        ├── server-deployment.yaml    # Zabbix Server
        ├── server-service.yaml       # Server service (port 10051)
        │
        ├── proxy-deployment.yaml     # Zabbix Proxy
        ├── proxy-service.yaml        # Proxy service (port 10051)
        │
        ├── agent2-daemonset.yaml     # Agent2 (direct to server)
        ├── agent2-service.yaml       # Agent2 service
        │
        ├── agent2-daemonset-proxy.yaml     # Agent2 via proxy
        ├── agent2-service-proxy.yaml       # Agent2-proxy service
        │
        ├── agentd-daemonset-proxy.yaml     # AgentD (passive, via proxy)
        ├── agentd-service-proxy.yaml       # AgentD service
        │
        ├── web-nginx-pgsql.yaml      # Zabbix Web UI deployment
        ├── web-nginx-pgsql-service.yaml    # Web UI service
        └── route-web.yaml            # OpenShift Route for web access

Key Files Description

Chart Configuration

  • Chart.yaml - Helm chart metadata (name, version, description)
  • values.yaml - Default values for all configurable parameters
  • _helpers.tpl - Reusable template functions (naming, labels, version parsing)

Security & Bootstrap

  • certgen-job.yaml - Generates CA and all component certificates (runs first)
  • zabbix-bootstrap-job.yaml - Modern bootstrap for Zabbix 7.0+ (Bearer auth)
  • zabbix-bootstrap-job-legacy.yaml - Legacy bootstrap for Zabbix 6.x (auth in body)
  • sa-rolebinding.yaml - Kubernetes RBAC permissions for jobs

Backend Services

  • postgres-statefulset.yaml - Database with persistent storage
  • server-deployment.yaml - Main Zabbix Server process
  • proxy-deployment.yaml - Distributed proxy for agent collection

Monitoring Agents

  • agent2-daemonset.yaml - Active agent, direct to server
  • agent2-daemonset-proxy.yaml - Active agent via proxy
  • agentd-daemonset-proxy.yaml - Passive agent via proxy

Web Interface

  • web-nginx-pgsql.yaml - Nginx + PHP-FPM web frontend
  • route-web.yaml - OpenShift Route for external access

Prerequisites

For Local Development (CRC)

  1. OpenShift Local (CRC) 2.20+

    # Download from: https://developers.redhat.com/products/openshift-local/overview
    crc setup
    crc start --cpus 4 --memory 16384
  2. Helm 3.x

    # macOS
    brew install helm
    
    # Linux
    curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
  3. kubectl / oc CLI

    # macOS
    brew install openshift-cli
    
    # Login to CRC
    eval $(crc oc-env)
    oc login -u developer https://api.crc.testing:6443

For Production OpenShift

  1. OpenShift Cluster 4.10+
  2. Cluster Admin Access (for namespace creation, SCCs)
  3. Helm 3.x installed
  4. oc CLI configured with cluster credentials
  5. Storage Class for persistent volumes (RWO)

Resource Requirements

Component           CPU Request   Memory Request   Storage
Server              500m          512Mi            -
Proxy               250m          256Mi            -
Agent2 (per node)   100m          128Mi            -
PostgreSQL          500m          512Mi            5Gi
Web UI              250m          256Mi            -

Minimum Cluster: 2 vCPU, 8GB RAM, 10GB storage
Recommended: 4 vCPU, 16GB RAM, 20GB storage


Installation

Local Development (OpenShift CRC)

Step 1: Start CRC and Login

# Start OpenShift Local
crc start --cpus 4 --memory 16384

# Configure oc environment
eval $(crc oc-env)

# Login as developer
oc login -u developer https://api.crc.testing:6443

Step 2: Create Namespace

# Create namespace for Zabbix
oc new-project zabbix-monitoring

# Or use existing namespace
oc project zabbix-monitoring

Step 3: Clone Repository

git clone https://github.com/marsunin/zabbix-helm-poc.git
cd zabbix-helm-poc

Step 4: Install with Helm

# Install Zabbix 7.4.x (latest)
helm install zabbix-7-4 manifests \
  --create-namespace \
  --set zabbixVersion=7.4.4-alpine \
  --namespace zabbix-monitoring

# Or install Zabbix 6.0.x (legacy)
helm install zabbix-6-0 manifests \
  --create-namespace \
  --set zabbixVersion=6.0.42-alpine \
  --namespace zabbix-monitoring

Step 5: Access Web UI

# Get the route URL
oc get route -n zabbix-monitoring

# Open in browser
# Example: http://zabbix-web-zabbix-monitoring.apps-crc.testing

# Default credentials:
# Username: Admin
# Password: zabbix

Step 6: Verify Deployment

# Check all pods are running
oc get pods -n zabbix-monitoring

# Expected output:
# NAME                                           READY   STATUS
# zabbix-7-4-agent2-xxxxx                        1/1     Running
# zabbix-7-4-agent2-proxy-xxxxx                  1/1     Running
# zabbix-7-4-agentd-proxy-xxxxx                  1/1     Running
# zabbix-7-4-postgres-0                          1/1     Running
# zabbix-7-4-proxy-xxxxx                         1/1     Running
# zabbix-7-4-server-xxxxx                        1/1     Running
# zabbix-7-4-web-xxxxx                           1/1     Running

# Check hosts are registered
oc logs -n zabbix-monitoring job/zabbix-7-4-bootstrap

Production OpenShift Cluster

Step 1: Login to Cluster

# Login with token or credentials
oc login --token=<your-token> --server=https://api.cluster.example.com:6443

# Or with username/password
oc login https://api.cluster.example.com:6443 -u admin

Step 2: Create Project with Quotas

# Create namespace
oc new-project zabbix-production

# Set resource quotas (optional)
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: zabbix-quota
  namespace: zabbix-production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    persistentvolumeclaims: "5"
EOF

Step 3: Configure Values for Production

Create a custom values-production.yaml:

# values-production.yaml
zabbixVersion: "7.4.4-alpine"

replicaCount: 2  # High availability for web UI

postgresql:
  persistence:
    enabled: true
    size: 50Gi  # Larger storage for production
    storageClass: "fast-ssd"  # Your storage class
  auth:
    postgresPassword: "<strong-password>"
    password: "<strong-password>"
    username: "zabbix"
    database: "zabbix"

# Production security
openshift:
  runAsNonRoot: true
  
# Node selection (optional)
nodeSelector:
  node-role.kubernetes.io/worker: ""
  
tolerations:
  - key: "monitoring"
    operator: "Equal"
    value: "zabbix"
    effect: "NoSchedule"

Step 4: Install with Custom Values

# Install with production values
helm install zabbix-prod manifests \
  --create-namespace \
  --namespace zabbix-production \
  --values values-production.yaml \
  --wait \
  --timeout 10m

# Watch deployment
watch oc get pods -n zabbix-production

Step 5: Configure External Route (if needed)

# Create edge-terminated route with TLS
oc create route edge zabbix-web-secure \
  --service=zabbix-prod-web \
  --port=8080 \
  --hostname=zabbix.example.com \
  --namespace zabbix-production

Step 6: Production Verification

# Check all components
oc get all -n zabbix-production

# Verify TLS certificates
oc get secret -n zabbix-production | grep tls

# Check persistent volumes
oc get pvc -n zabbix-production

# View bootstrap logs
oc logs -n zabbix-production job/zabbix-prod-bootstrap

# Test database connection
oc exec -it zabbix-prod-postgres-0 -n zabbix-production -- \
  psql -U zabbix -d zabbix -c "SELECT COUNT(*) FROM hosts;"

Configuration

Helm Values Reference

Version Configuration

# Zabbix version - determines API compatibility and bootstrap job
zabbixVersion: "7.4.4-alpine"  # or "6.0.42-alpine"

The chart automatically:

  • Strips -alpine suffix for version comparison
  • Deploys correct bootstrap job (modern vs legacy API)
  • Uses matching container image tags
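
Conceptually the check amounts to the following (sketched in shell for illustration; the actual logic lives in _helpers.tpl as Go templates):

# Strip the image-tag suffix and compare the major version
ver="${ZABBIX_VERSION%-alpine}"   # "7.4.4-alpine" -> "7.4.4"
major="${ver%%.*}"                # "7"
if [ "$major" -ge 7 ]; then
  echo "render modern bootstrap job (Bearer auth)"
else
  echo "render legacy bootstrap job (auth in request body)"
fi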

Image Configuration

image:
  server:
    repository: zabbix/zabbix-server-pgsql
    pullPolicy: IfNotPresent
  proxy:
    repository: zabbix/zabbix-proxy-sqlite3
    pullPolicy: IfNotPresent
  agent2:
    repository: zabbix/zabbix-agent2
    pullPolicy: IfNotPresent
  agentd:
    repository: zabbix/zabbix-agent
    pullPolicy: IfNotPresent
  web:
    repository: zabbix/zabbix-web-nginx-pgsql
    pullPolicy: IfNotPresent

Database Configuration

postgresql:
  enabled: true
  image:
    repository: bitnami/postgresql
    tag: "latest"
    pullPolicy: IfNotPresent
  auth:
    postgresPassword: zabbix  # Change in production!
    username: zabbix
    password: zabbix           # Change in production!
    database: zabbix
  persistence:
    enabled: true
    size: 5Gi                  # Adjust for production
    storageClass: ""           # Use default or specify
  service:
    port: 5432

Service Ports

service:
  type: ClusterIP
  agent2Port: 10050      # Agent listening port
  agentdPort: 10050      # AgentD listening port
  serverPort: 10051      # Server listening port
  proxyPort: 10051       # Proxy listening port
  webPort: 8080          # Web UI port
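
All services are ClusterIP, so a quick way to reach one from a workstation is a port-forward (service name assumed per the examples above):

# Forward the web service locally and check that it responds
oc port-forward svc/zabbix-7-4-web 8080:8080 -n zabbix-monitoring &
curl -I http://localhost:8080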

TLS Configuration

tls:
  enabled: true  # Enable mTLS for all components
  # Certificates are auto-generated by certgen job
  # Stored in Secret: <release-name>-tls
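
For a throwaway lab install the flag can be turned off at install time; a sketch, assuming the templates honor the toggle:

# Disable mTLS (not recommended beyond a short-lived lab)
helm install zabbix-test manifests \
  --namespace zabbix-monitoring \
  --set tls.enabled=false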

OpenShift Security

openshift:
  runAsUser: null        # Let OpenShift assign UID
  fsGroup: null          # Let OpenShift assign GID
  runAsNonRoot: true     # Enforce non-root containers

Bootstrap Configuration (Advanced)

bootstrap:
  # Node list for host registration
  # If not provided, uses all cluster nodes
  nodes: "crc,worker-1,worker-2"
  
  # API credentials (default in Zabbix)
  username: "Admin"
  password: "zabbix"
  
  # Proxy name (auto-generated if not set)
  proxyName: "zabbix-proxy"
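
These map straight onto --set flags at install time, for example:

# Register specific nodes and a custom proxy name
# (commas inside a --set value must be escaped for Helm)
helm install zabbix-7-4 manifests \
  --namespace zabbix-monitoring \
  --set bootstrap.nodes="worker-1\,worker-2" \
  --set bootstrap.proxyName="edge-proxy"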

Environment-Specific Overrides

Development (CRC)

helm install zabbix manifests \
  --create-namespace \
  --set zabbixVersion=7.4.4-alpine \
  --set postgresql.persistence.size=2Gi

Staging

helm install zabbix manifests \
  --create-namespace \
  --set zabbixVersion=7.4.4-alpine \
  --set postgresql.persistence.size=10Gi \
  --set postgresql.persistence.storageClass=standard

Production

helm install zabbix manifests \
  --create-namespace \
  --set zabbixVersion=7.4.4-alpine \
  --set replicaCount=2 \
  --set postgresql.persistence.size=100Gi \
  --set postgresql.persistence.storageClass=fast-ssd \
  --set postgresql.auth.postgresPassword="$(openssl rand -base64 32)" \
  --set postgresql.auth.password="$(openssl rand -base64 32)"

Version Compatibility

Supported Zabbix Versions

Version   Bootstrap Job   API Auth                  Status
6.0.x     Legacy          user + auth in body       ✅ Tested
7.0.x     Modern          username + Bearer token   ⚠️ Compatible
7.4.x     Modern          username + Bearer token   ✅ Tested

API Differences (6.x vs 7.x)

The chart includes two bootstrap jobs that are conditionally deployed based on version:

Zabbix 7.0+ (Modern API)

  • Authentication: Bearer token in header
  • Login parameter: username
  • Proxy creation: name + operating_mode
  • Host assignment: monitored_by + proxyid

Zabbix 6.x (Legacy API)

  • Authentication: auth token in request body
  • Login parameter: user
  • Proxy creation: host + status
  • Host assignment: proxy_hostid

See BOOTSTRAP-DIFFERENCES.md for detailed API comparison.
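
Side by side, the two login payloads differ only in the parameter name; what changes afterwards is where the token travels:

# Zabbix 7.x - token from 'result' goes into an 'Authorization: Bearer' header
{"jsonrpc":"2.0","method":"user.login","params":{"username":"Admin","password":"zabbix"},"id":1}

# Zabbix 6.x - token is repeated in every request body as the top-level "auth" field
{"jsonrpc":"2.0","method":"user.login","params":{"user":"Admin","password":"zabbix"},"id":1}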

Upgrade Path

# From Zabbix 6.0 to 7.4
helm upgrade zabbix-6-0 manifests \
  --set zabbixVersion=7.4.4-alpine \
  --namespace zabbix-monitoring

# Chart automatically uses correct bootstrap job

⚠️ Warning: Zabbix Server database schema upgrades are automatic but irreversible. Backup database before major version upgrades.
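
One way to take that backup, reusing the exec pattern from the verification steps above (pod name per this install):

# Dump schema and data before upgrading; restore with psql if needed
oc exec zabbix-6-0-postgres-0 -n zabbix-monitoring -- \
  pg_dump -U zabbix -d zabbix > zabbix-pre-upgrade-$(date +%F).sql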


Troubleshooting

Common Issues

1. Pods Not Starting

Symptom: Pods stuck in Pending or CrashLoopBackOff

# Check pod status
oc get pods -n zabbix-monitoring

# Describe pod for events
oc describe pod <pod-name> -n zabbix-monitoring

# Check logs
oc logs <pod-name> -n zabbix-monitoring

Common causes:

  • Insufficient resources (CPU/memory)
  • PVC not bound (check oc get pvc)
  • Image pull errors (check image names)

2. TLS Certificate Errors

Symptom: Agents can't connect, "TLS handshake failed" errors

# Check if certificates were generated
oc get secret -n zabbix-monitoring | grep tls

# View certificate job logs
oc logs job/zabbix-7-4-certgen -n zabbix-monitoring

# Verify certificates are mounted
oc exec -it <pod-name> -n zabbix-monitoring -- ls -la /etc/zabbix/tls/

Solution: Delete and recreate to regenerate certificates

oc delete secret zabbix-7-4-tls -n zabbix-monitoring
helm upgrade zabbix-7-4 manifests -n zabbix-monitoring

3. Bootstrap Job Failed

Symptom: Hosts/proxy not registered in Zabbix UI

# Check bootstrap job logs
oc logs job/zabbix-7-4-bootstrap -n zabbix-monitoring

# Common issues:
# - "Not authorized" → Wrong API version/credentials
# - "Connection refused" → Server not ready yet
# - "Invalid params" → API version mismatch

Solution for version mismatch:

# Verify correct bootstrap job deployed
helm template manifests --set zabbixVersion=7.4.4-alpine | grep "kind: Job"

# Should see only one bootstrap job

4. Database Connection Issues

Symptom: Server logs show "Cannot connect to database"

# Check PostgreSQL is running
oc get pods -n zabbix-monitoring | grep postgres

# Test database connection
oc exec -it zabbix-7-4-postgres-0 -n zabbix-monitoring -- \
  psql -U zabbix -d zabbix -c "SELECT version();"

# Check server can resolve postgres service
oc exec -it <server-pod> -n zabbix-monitoring -- \
  nslookup zabbix-7-4-postgres

5. Web UI Not Accessible

Symptom: Route exists but returns 502/503

# Check route
oc get route -n zabbix-monitoring

# Check web pod is running
oc get pods -n zabbix-monitoring | grep web

# Check web pod logs
oc logs <web-pod-name> -n zabbix-monitoring

# Test internal connectivity
oc exec -it <web-pod> -n zabbix-monitoring -- \
  curl -I localhost:8080

Debug Mode

Enable verbose logging:

# Server debug logs
oc set env deployment/zabbix-7-4-server \
  ZBX_DEBUGLEVEL=4 \
  -n zabbix-monitoring

# View logs
oc logs -f deployment/zabbix-7-4-server -n zabbix-monitoring

Clean Reinstall

# Complete uninstall
helm uninstall zabbix-7-4 -n zabbix-monitoring

# Delete PVCs (data will be lost!)
oc delete pvc -l app=zabbix-postgres -n zabbix-monitoring

# Delete TLS secret
oc delete secret zabbix-7-4-tls -n zabbix-monitoring

# Reinstall
helm install zabbix-7-4 manifests \
  --create-namespace \
  --set zabbixVersion=7.4.4-alpine \
  --namespace zabbix-monitoring

Development

Testing Locally

# Lint Helm chart
helm lint manifests

# Template rendering (without installing)
helm template test manifests --set zabbixVersion=7.4.4-alpine

# Dry-run installation
helm install zabbix-test manifests \
  --create-namespace \
  --dry-run \
  --debug \
  --set zabbixVersion=7.4.4-alpine

Modifying Templates

# After editing templates, validate syntax
helm template manifests | oc apply --dry-run=client -f -

# Test with different versions
helm template manifests --set zabbixVersion=6.0.42-alpine | grep bootstrap
helm template manifests --set zabbixVersion=7.4.4-alpine | grep bootstrap

Adding New Agents

To add a fourth agent type (e.g., SNMP traps):

  1. Create new DaemonSet: templates/agent-snmp-daemonset.yaml
  2. Create service: templates/agent-snmp-service.yaml
  3. Add to values.yaml:
    image:
      agentSnmp:
        repository: zabbix/zabbix-agent-snmp
        pullPolicy: IfNotPresent
  4. Update bootstrap job to register new hosts
  5. Test deployment

Contributing

  1. Fork repository
  2. Create feature branch: git checkout -b feature/my-feature
  3. Make changes and test thoroughly
  4. Commit with descriptive message
  5. Push and create pull request

License

This project is provided as-is for educational and proof-of-concept purposes.

Support

For issues and questions, please open an issue in this repository.

Acknowledgments

  • Zabbix LLC for the monitoring platform
  • Red Hat OpenShift team
  • Helm community
  • Bitnami for container images

Last Updated: November 2025
Chart Version: 0.1.0
Tested On: OpenShift Local (CRC) 2.20, OpenShift 4.14
