fix(disk): restore updating boot disk. #2150
base: main
Conversation
Summary of Changes
Hello @Vaneixus, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a limitation in updating virtual machine boot disks by implementing a more robust mechanism for handling disk import operations. It ensures that necessary VM state transitions, such as shutdowns and reboots, are correctly orchestrated when disk configurations are modified, particularly for bootable volumes. The changes improve the reliability and functionality of disk management within the system.
Highlights
- Boot Disk Update Restoration: The pull request restores the ability to update boot disks by correctly handling the `ImportFrom` parameter. Previously, `ImportFrom` was not sent for existing disks due to issues with re-importing boot disks, but now it is conditionally sent, triggering a necessary VM shutdown and reboot.
- VM State Management for Disk Updates: The VM update process has been refined to properly manage the virtual machine's state during disk updates. A new `shutdownBeforeUpdate` flag is introduced to ensure the VM is gracefully shut down before applying disk changes that require it, and then restarted if it was originally running.
- Function Signature Update: The `Update` function in `proxmoxtf/resource/vm/disk/disk.go` now returns an additional boolean value, `shutdownBeforeUpdate`, to indicate whether a VM shutdown is required before applying disk changes. This change is propagated to calling functions and tests.
This is currently untested, but the end goal is to restore updating of disks already attached to a VM via import-from. It should work for all other disks, but the boot disk needs the VM to be offline before it gets updated.
Code Review
This pull request aims to enable updating a boot disk, which involves re-importing it. The changes in proxmoxtf/resource/vm/disk/disk.go correctly identify that this operation requires the VM to be shut down. However, the implementation in proxmoxtf/resource/vm/vm.go has a flaw: while it correctly shuts down the VM for the update, it fails to restart it afterward, even if the VM is configured to be running. This leaves the resource in an inconsistent state. My review includes a critical fix to ensure the VM is restarted when required, restoring it to its configured state after the update.
I might need to add another parameter to the disk update function to identify boot disks.
Signed-off-by: Marco Attia <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Marco Attia <[email protected]>
Signed-off-by: Marco Attia <[email protected]>
Force-pushed from 963342a to 7d32487
bpg left a comment
Hey @Vaneixus 👋🏼!
Thank you for looking into that, and contributing the fix, but I have a few questions.
```go
if disk.ImportFrom != nil && *disk.ImportFrom != "" {
	rebootRequired = true
	shutdownBeforeUpdate = true
	tmp.DatastoreID = disk.DatastoreID
	tmp.ImportFrom = disk.ImportFrom
	tmp.Size = disk.Size
}
```
this part was specifically removed here
so I think this change will break the VM update scenario when the VM has an imported disk.
What was the reason for adding this back?
Hello again @bpg, the reason this was re-added is that the boot disk cannot be changed while the VM is running; you can only change it while it isn't, at least that's what I remember being the case back when I opened the first PR. This PR is currently untested; I just opened it to show my progress, and I should probably have marked it as a draft. That's also why I moved the start-up/shutdown code, though I will probably have to change that part again. Basically, the whole idea is to shut down the VM so the boot disk can be changed. I just haven't had the opportunity to test this again; I will be working on this PR again on the weekend.
proxmoxtf/resource/vm/vm.go (Outdated)
```go
e = vmAPI.UpdateVM(ctx, updateBody)
if e != nil {
	return diag.FromErr(e)
}
```
Sorry, I didn't quite get the reason for moving vmUpdate after vmStart / vmShutdown. This will affect the ability to start / stop a VM by updating started=true|false.
It's untested; this is very much an incomplete code change. I need to switch this to a draft, I just forgot to do it.
Signed-off-by: Marco Attia <[email protected]>
Force-pushed from 4006809 to 43d1733
Signed-off-by: Marco Attia <[email protected]>
/gemini review
Code Review
This pull request introduces a fix to allow updating a VM's boot disk by re-importing it, which correctly identifies that a VM shutdown is required. The changes are well-propagated from the disk update logic to the main VM update function, including logic to prevent redundant shutdowns. My review includes a suggestion to fix a logic issue in the VM start/stop handling and to add test coverage for the new disk update scenario.
```diff
 // Call the Update function
-_, err = Update(ctx, client, nodeName, vmID, resourceData, planDisks, currentDisks, updateBody)
+_, _, err = Update(ctx, client, nodeName, vmID, resourceData, planDisks, currentDisks, updateBody)
```
The Update function signature has changed, and this test was updated to compile. However, there is no test case that covers the new functionality of updating a disk with import_from. It would be valuable to add a new test to ensure that when import_from is changed on an existing disk, the shutdownBeforeUpdate flag is correctly returned as true from the Update function.
Hello @bpg, I already ran a very basic update on an existing cluster, and it seems to work according to PVE's logs; feel free to run a test and check it out. I haven't actually verified the content inside the disks, but Proxmox logged a new import entry, so it should have replaced the disk successfully. I will probably test other scenarios, such as no change, but the basic functionality seems to be working. I will also probably have another change or two to make, such as handling the case where the VM is already shut down, since I am not handling that yet.
Signed-off-by: Marco Attia <[email protected]>
Signed-off-by: Marco Attia <[email protected]>
Force-pushed from a2ea017 to 911c591
Sorry for the delays; lack of time, plus wanting to test this extensively to make sure it works correctly, took some time. I am currently running the last of my tests, so it should be ready to go. I am able to have the boot HDD replaced even if the VM is already running: just stop it, update the disk, then start the VM again. I will need to update the documentation to mention that, so no one updates the disk expecting the VM to keep running. As for the old disk, it merely gets detached; nothing gets permanently deleted. Edit: this time I tested with actual uploaded images carrying a changing creation date, so I can check whether the VM is indeed running the same or a newer build of my test image.
TF Deployment Logs: the order in the node logs is incorrect for some reason, or rather it might be unrelated, but if we look at the cluster jobs we can see it executed the shutdown, the update, and then the start-up, in that order. And if we check the TF logs, it's an in-place update, so the other disks aren't modified or destroyed; the VM is safe to use in disk-update scenarios. As a last thing to do, I still need to add the test Gemini recommended.
Signed-off-by: Marco Attia <[email protected]>
Force-pushed from cbbedd9 to 21b9694
Signed-off-by: Marco Attia <[email protected]>
Good evening @bpg, I have finished my testing. I have run the scenarios above and added the following:
/gemini review
Code Review
This pull request correctly restores the ability to update a boot disk by changing the import_from attribute. The core logic involves detecting this change, triggering a VM shutdown, and then applying the disk update. The related changes in vm.go to handle the shutdown signal and avoid duplicate shutdowns are also well-implemented. I've found a minor typo in the documentation and some issues in the new test case that should be addressed to ensure correctness and clarity. Overall, this is a good fix for the issue.
```go
// Create plan disks (what terraform wants)
importFrom2 := "local:import/test2.qcow2"
planDisks2 := vms.CustomStorageDevices{
	"scsi0": &vms.CustomStorageDevice{
		Size:        types.DiskSizeFromGigabytes(10), // Same as current
		DatastoreID: &datastoreID,
		ImportFrom:  &importFrom2,
	},
	"scsi1": &vms.CustomStorageDevice{
		Size:        types.DiskSizeFromGigabytes(5), // Different from current (5 -> 20)
		DatastoreID: &datastoreID,
	},
}

// Mock update body to capture what gets sent to the API
updateBody2 := &vms.UpdateRequestBody{}

// Force HasChange to return true by setting old and new values
err = resourceData.Set(MkDisk, []interface{}{
	map[string]interface{}{
		mkDiskInterface:   "scsi1",
		mkDiskDatastoreID: "local",
		mkDiskSize:        5, // Old size
		mkDiskSpeed:       []interface{}{},
	},
})
require.NoError(t, err)

err = resourceData.Set(MkDisk, []interface{}{
	map[string]interface{}{
		mkDiskInterface:   "scsi1",
		mkDiskDatastoreID: "local",
		mkDiskSize:        20, // New size
		mkDiskSpeed:       []interface{}{},
	},
})
require.NoError(t, err)

// Call the Update function
shutdownBeforeUpdate, _, err = Update(ctx, client, nodeName, vmID, resourceData, planDisks2, currentDisks, updateBody2)
require.NoError(t, err)
require.True(t, shutdownBeforeUpdate)

// Check that only the changed disk (scsi1) is in the update body
// scsi0 should NOT be in the update body since it hasn't changed
require.NotNil(t, updateBody)

// The update body should only contain scsi0, not scsi1
// Note: We can't directly inspect the updateBody content in this test framework,
}
```
This new test case has a few issues that could be improved for clarity and correctness:

- The comment on line 378 is incorrect, as the size for `scsi1` is not changed in `planDisks2`.
- The assertion on line 414 uses `updateBody` instead of `updateBody2`.
- The test would be more robust if it explicitly asserted which disks are present (or not) in `updateBody2`.
Here is a suggested refactoring of this test case to address these points and make the intent clearer:
```go
// --- Test Case 2: Change import_from on a disk ---
// This test case verifies that changing the import_from field on a disk
// correctly triggers a shutdown.
importFrom2 := "local:import/test2.qcow2"
planDisks2 := vms.CustomStorageDevices{
	"scsi0": &vms.CustomStorageDevice{
		Size:        types.DiskSizeFromGigabytes(10), // Same as current
		DatastoreID: &datastoreID,
		ImportFrom:  &importFrom2, // This is the change
	},
	"scsi1": &vms.CustomStorageDevice{
		Size:        types.DiskSizeFromGigabytes(5), // Same as current
		DatastoreID: &datastoreID,
	},
}

// Mock update body to capture what gets sent to the API
updateBody2 := &vms.UpdateRequestBody{}

// Force HasChange to return true for the disk block by simulating a change.
// This is necessary because the test framework doesn't automatically detect
// the change in `planDisks2` vs `currentDisks`.
err = resourceData.Set(MkDisk, []interface{}{
	map[string]interface{}{
		mkDiskInterface:   "scsi0",
		mkDiskDatastoreID: "local",
		mkDiskSize:        10,
		mkDiskImportFrom:  "local:iso/disk.qcow2", // Old value
		mkDiskSpeed:       []interface{}{},
	},
})
require.NoError(t, err)

err = resourceData.Set(MkDisk, []interface{}{
	map[string]interface{}{
		mkDiskInterface:   "scsi0",
		mkDiskDatastoreID: "local",
		mkDiskSize:        10,
		mkDiskImportFrom:  importFrom2, // New value
		mkDiskSpeed:       []interface{}{},
	},
})
require.NoError(t, err)

// Call the Update function
shutdownBeforeUpdate, _, err = Update(ctx, client, nodeName, vmID, resourceData, planDisks2, currentDisks, updateBody2)
require.NoError(t, err)
require.True(t, shutdownBeforeUpdate, "Shutdown should be required when import_from changes")

// Check that the update body contains the disk with the import_from change (scsi0)
// and does not contain the unchanged disk (scsi1).
require.NotNil(t, updateBody2)
require.Contains(t, updateBody2.CustomStorageDevices, "scsi0", "Update body should contain the changed disk scsi0")
require.NotContains(t, updateBody2.CustomStorageDevices, "scsi1", "Update body should not contain the unchanged disk scsi1")
require.Equal(t, importFrom2, *updateBody2.CustomStorageDevices["scsi0"].ImportFrom)
}
```

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Marco Attia <[email protected]>
Signed-off-by: Marco Attia <[email protected]>




Contributor's Note

- [ ] `/docs` for any user-facing features or additions.
- [ ] `/fwprovider/tests` for any new or updated resources / data sources.
- [ ] `make example` to verify that the change works as expected.

Proof of Work
Community Note
Closes #0000 | Relates #0000