Tree: 86f847a39a

Branches:
10.1-testing
99888-virtio-zero-init-c9s
block
coverity
master
stable-0.10
stable-0.11
stable-0.12
stable-0.13
stable-0.14
stable-0.15
stable-1.0
stable-1.1
stable-1.2
stable-1.3
stable-1.4
stable-1.5
stable-1.6
stable-1.7
stable-10.0
stable-10.1
stable-10.2
stable-2.0
stable-2.1
stable-2.10
stable-2.11
stable-2.12
stable-2.2
stable-2.3
stable-2.4
stable-2.5
stable-2.6
stable-2.7
stable-2.8
stable-2.9
stable-3.0
stable-3.1
stable-4.0
stable-4.1
stable-4.2
stable-5.0
stable-6.0
stable-6.0-staging
stable-6.1
stable-7.2
stable-7.2-staging
stable-8.0
stable-8.0-staging
stable-8.1
stable-8.2
stable-9.0
stable-9.1
stable-9.2
staging
staging-0.0
staging-10.0
staging-10.1
staging-10.2
staging-7.2
staging-8.0
staging-8.1
staging-8.2
staging-9.0
staging-9.1
staging-9.2
staging-mjt-test
stsquad-hotfix
tracing

Tags:
initial
release_0_10_0
release_0_10_1
release_0_10_2
release_0_5_1
release_0_6_0
release_0_6_1
release_0_7_0
release_0_7_1
release_0_8_1
release_0_8_2
release_0_9_0
release_0_9_1
staging-mjt-test
trivial-patches-pull-request
v0.1.0
v0.1.1
v0.1.3
v0.1.4
v0.1.5
v0.1.6
v0.10.0
v0.10.1
v0.10.2
v0.10.3
v0.10.4
v0.10.5
v0.10.6
v0.11.0
v0.11.0-rc0
v0.11.0-rc1
v0.11.0-rc2
v0.11.1
v0.12.0
v0.12.0-rc0
v0.12.0-rc1
v0.12.0-rc2
v0.12.1
v0.12.2
v0.12.3
v0.12.4
v0.12.5
v0.13.0
v0.13.0-rc0
v0.13.0-rc1
v0.13.0-rc2
v0.13.0-rc3
v0.14.0
v0.14.0-rc0
v0.14.0-rc1
v0.14.0-rc2
v0.14.1
v0.15.0
v0.15.0-rc0
v0.15.0-rc1
v0.15.0-rc2
v0.15.1
v0.2.0
v0.3.0
v0.4.0
v0.4.1
v0.4.2
v0.4.3
v0.4.4
v0.5.0
v0.5.1
v0.6.0
v0.6.1
v0.7.0
v0.7.1
v0.8.1
v0.8.2
v0.9.0
v0.9.1
v1.0
v1.0-rc0
v1.0-rc1
v1.0-rc2
v1.0-rc3
v1.0-rc4
v1.0.1
v1.1-rc0
v1.1-rc1
v1.1-rc2
v1.1.0
v1.1.0-rc2
v1.1.0-rc3
v1.1.0-rc4
v1.1.1
v1.1.2
v1.2.0
v1.2.0-rc0
v1.2.0-rc1
v1.2.0-rc2
v1.2.0-rc3
v1.2.1
v1.2.2
v1.3.0
v1.3.0-rc0
v1.3.0-rc1
v1.3.0-rc2
v1.3.1
v1.4.0
v1.4.0-rc0
v1.4.0-rc1
v1.4.0-rc2
v1.4.1
v1.4.2
v1.5.0
v1.5.0-rc0
v1.5.0-rc1
v1.5.0-rc2
v1.5.0-rc3
v1.5.1
v1.5.2
v1.5.3
v1.6.0
v1.6.0-rc0
v1.6.0-rc1
v1.6.0-rc2
v1.6.0-rc3
v1.6.1
v1.6.2
v1.7.0
v1.7.0-rc0
v1.7.0-rc1
v1.7.0-rc2
v1.7.1
v1.7.2
v10.0.0
v10.0.0-rc0
v10.0.0-rc1
v10.0.0-rc2
v10.0.0-rc3
v10.0.0-rc4
v10.0.1
v10.0.2
v10.0.3
v10.0.4
v10.0.5
v10.0.6
v10.0.7
v10.0.8
v10.1.0
v10.1.0-rc0
v10.1.0-rc1
v10.1.0-rc2
v10.1.0-rc3
v10.1.0-rc4
v10.1.1
v10.1.2
v10.1.3
v10.1.4
v10.2.0
v10.2.0-rc1
v10.2.0-rc2
v10.2.0-rc3
v10.2.0-rc4
v10.2.1
v2.0.0
v2.0.0-rc0
v2.0.0-rc1
v2.0.0-rc2
v2.0.0-rc3
v2.0.1
v2.0.2
v2.1.0
v2.1.0-rc0
v2.1.0-rc1
v2.1.0-rc2
v2.1.0-rc3
v2.1.0-rc4
v2.1.0-rc5
v2.1.1
v2.1.2
v2.1.3
v2.10.0
v2.10.0-rc0
v2.10.0-rc1
v2.10.0-rc2
v2.10.0-rc3
v2.10.0-rc4
v2.10.1
v2.10.2
v2.11.0
v2.11.0-rc0
v2.11.0-rc1
v2.11.0-rc2
v2.11.0-rc3
v2.11.0-rc4
v2.11.0-rc5
v2.11.1
v2.11.2
v2.12.0
v2.12.0-rc0
v2.12.0-rc1
v2.12.0-rc2
v2.12.0-rc3
v2.12.0-rc4
v2.12.1
v2.2.0
v2.2.0-rc0
v2.2.0-rc1
v2.2.0-rc2
v2.2.0-rc3
v2.2.0-rc4
v2.2.0-rc5
v2.2.1
v2.3.0
v2.3.0-rc0
v2.3.0-rc1
v2.3.0-rc2
v2.3.0-rc3
v2.3.0-rc4
v2.3.1
v2.4.0
v2.4.0-rc0
v2.4.0-rc1
v2.4.0-rc2
v2.4.0-rc3
v2.4.0-rc4
v2.4.0.1
v2.4.1
v2.5.0
v2.5.0-rc0
v2.5.0-rc1
v2.5.0-rc2
v2.5.0-rc3
v2.5.0-rc4
v2.5.1
v2.5.1.1
v2.6.0
v2.6.0-rc0
v2.6.0-rc1
v2.6.0-rc2
v2.6.0-rc3
v2.6.0-rc4
v2.6.0-rc5
v2.6.1
v2.6.2
v2.7.0
v2.7.0-rc0
v2.7.0-rc1
v2.7.0-rc2
v2.7.0-rc3
v2.7.0-rc4
v2.7.0-rc5
v2.7.1
v2.8.0
v2.8.0-rc0
v2.8.0-rc1
v2.8.0-rc2
v2.8.0-rc3
v2.8.0-rc4
v2.8.1
v2.8.1.1
v2.9.0
v2.9.0-rc0
v2.9.0-rc1
v2.9.0-rc2
v2.9.0-rc3
v2.9.0-rc4
v2.9.0-rc5
v2.9.1
v3.0.0
v3.0.0-rc0
v3.0.0-rc1
v3.0.0-rc2
v3.0.0-rc3
v3.0.0-rc4
v3.0.1
v3.1.0
v3.1.0-rc0
v3.1.0-rc1
v3.1.0-rc2
v3.1.0-rc3
v3.1.0-rc4
v3.1.0-rc5
v3.1.1
v3.1.1.1
v4.0.0
v4.0.0-rc0
v4.0.0-rc1
v4.0.0-rc2
v4.0.0-rc3
v4.0.0-rc4
v4.0.1
v4.1.0
v4.1.0-rc0
v4.1.0-rc1
v4.1.0-rc2
v4.1.0-rc3
v4.1.0-rc4
v4.1.0-rc5
v4.1.1
v4.2.0
v4.2.0-rc0
v4.2.0-rc1
v4.2.0-rc2
v4.2.0-rc3
v4.2.0-rc4
v4.2.0-rc5
v4.2.1
v5.0.0
v5.0.0-rc0
v5.0.0-rc1
v5.0.0-rc2
v5.0.0-rc3
v5.0.0-rc4
v5.0.1
v5.1.0
v5.1.0-rc0
v5.1.0-rc1
v5.1.0-rc2
v5.1.0-rc3
v5.2.0
v5.2.0-rc0
v5.2.0-rc1
v5.2.0-rc2
v5.2.0-rc3
v5.2.0-rc4
v6.0.0
v6.0.0-rc0
v6.0.0-rc1
v6.0.0-rc2
v6.0.0-rc3
v6.0.0-rc4
v6.0.0-rc5
v6.0.1
v6.1.0
v6.1.0-rc0
v6.1.0-rc1
v6.1.0-rc2
v6.1.0-rc3
v6.1.0-rc4
v6.1.1
v6.2.0
v6.2.0-rc0
v6.2.0-rc1
v6.2.0-rc2
v6.2.0-rc3
v6.2.0-rc4
v7.0.0
v7.0.0-rc0
v7.0.0-rc1
v7.0.0-rc2
v7.0.0-rc3
v7.0.0-rc4
v7.1.0
v7.1.0-rc0
v7.1.0-rc1
v7.1.0-rc2
v7.1.0-rc3
v7.1.0-rc4
v7.2.0
v7.2.0-rc0
v7.2.0-rc1
v7.2.0-rc2
v7.2.0-rc3
v7.2.0-rc4
v7.2.1
v7.2.10
v7.2.11
v7.2.12
v7.2.13
v7.2.14
v7.2.15
v7.2.16
v7.2.17
v7.2.18
v7.2.19
v7.2.2
v7.2.20
v7.2.21
v7.2.22
v7.2.3
v7.2.4
v7.2.5
v7.2.6
v7.2.7
v7.2.8
v7.2.9
v8.0.0
v8.0.0-rc0
v8.0.0-rc1
v8.0.0-rc2
v8.0.0-rc3
v8.0.0-rc4
v8.0.1
v8.0.2
v8.0.3
v8.0.4
v8.0.5
v8.1.0
v8.1.0-rc0
v8.1.0-rc1
v8.1.0-rc2
v8.1.0-rc3
v8.1.0-rc4
v8.1.1
v8.1.2
v8.1.3
v8.1.4
v8.1.5
v8.2.0
v8.2.0-rc0
v8.2.0-rc1
v8.2.0-rc2
v8.2.0-rc3
v8.2.0-rc4
v8.2.1
v8.2.10
v8.2.2
v8.2.3
v8.2.4
v8.2.5
v8.2.6
v8.2.7
v8.2.8
v8.2.9
v9.0.0
v9.0.0-rc0
v9.0.0-rc1
v9.0.0-rc2
v9.0.0-rc3
v9.0.0-rc4
v9.0.1
v9.0.2
v9.0.3
v9.0.4
v9.1.0
v9.1.0-rc0
v9.1.0-rc1
v9.1.0-rc2
v9.1.0-rc3
v9.1.0-rc4
v9.1.1
v9.1.2
v9.1.3
v9.2.0
v9.2.0-rc0
v9.2.0-rc1
v9.2.0-rc2
v9.2.0-rc3
v9.2.1
v9.2.2
v9.2.3
v9.2.4
2429 Commits (86f847a39aef93bcfbea65a702ba76762ae54d61)

commit bc38dc2f5f (1 year ago)
migration: refactor ram_save_target_page functions

Refactor the legacy and multifd ram_save_target_page functions into one.
Besides simplifying the code, this frees the 'migration_ops' object from
usage, so it is expunged.

Signed-off-by: Prasad Pandit <pjp@fedoraproject.org>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-ID: <20250127120823.144949-3-ppandit@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit a10b37c553 (1 year ago)
migration: Trivial cleanup on JSON writer of vmstate_save()

Two small cleanups in the same section of vmstate_save():

- Check vmdesc before the "mixed null/non-null data in array" logic, to be
  crystal clear that it's only about the JSON writer, not the vmstate itself
  in the migration stream.
- Since we have the is_null variable now, use it to replace a check.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-17-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 3dde8fdbad (1 year ago)
migration: Merge precopy/postcopy on switchover start

Now, after all the cleanups, we can finally merge the switchover startup
phase into one single function for precopy/postcopy.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-16-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 4881411136 (1 year ago)
migration: Always set DEVICE state

DEVICE state was introduced back in 2017:

  https://lore.kernel.org/qemu-devel/20171020090556.18631-1-dgilbert@redhat.com/

Quoting from Dave's cover letter, when the pre-switchover phase is enabled,
the state transitions look like this:

  The precopy flow is:
    active->pre-switchover->device->completed
  The postcopy flow is:
    active->pre-switchover->postcopy-active->completed

To supplement the above, when the cap is not enabled:

  The precopy flow is:
    active->completed
  The postcopy flow is:
    active->postcopy-active->completed

It works for us, though we have some code just to special-case these state
transitions, so the DEVICE state is currently special only to precopy, and
only conditionally. A quick discussion with Libvirt developers showed that
this may not be necessary. IOW, it seems okay to make the DEVICE state
generic, so that we don't have over-complicated state machines. It not only
aligns all the migration state machines and helps clean up the code path,
especially the pre-switchover handling (see the patch itself); a side
benefit is that we can unconditionally have a specific state marking the
switchover phase, which might be helpful for debugging too.

This patch makes the DEVICE state always present, marking that the source
QEMU is switching over. The state machine is then always as simple as:

  active -> [pre-switchover ->] device -> [postcopy-active ->] complete

After the change, no matter whether pre-switchover or postcopy is enabled
or not, we always have the DEVICE state showing the switchover phase. When
pre-switchover is enabled, we'll have an extra stage before it; when
postcopy is enabled, an extra stage after it.

A few qtests need touch-ups in the QEMU tree for this change:
- A few iotest outputs (194, 203, 234, 262, 280)
- Teach libqos's migrate() about the "device" state

Cc: Jiri Denemark <jdenemar@redhat.com>
Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: Dr. David Alan Gilbert <dave@treblig.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-15-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 455c1963d3 (1 year ago)
migration: Cleanup qemu_savevm_state_complete_precopy()

qemu_savevm_state_complete_precopy() is now never used in postcopy; clean
it up, since in_postcopy is now unconditionally false there.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-14-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 15c2ffa0b7 (1 year ago)
migration: Unwrap qemu_savevm_state_complete_precopy() in postcopy

Postcopy has invoked qemu_savevm_state_complete_precopy() twice for a long
time, and that caused way too much confusion. Let's clean this up and make
postcopy easier to read.

It's actually fairly straightforward: postcopy starts by saving the
non-postcopiable iterables, then later it saves again with non-iterables
only. Moving these two calls out makes everything much easier to follow;
otherwise it's very unclear what qemu_savevm_state_complete_precopy() did
in either of the calls.

No functional change intended.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-13-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 46b0155ecf (1 year ago)
migration: Notify COMPLETE once for postcopy

Postcopy invokes qemu_savevm_state_complete_precopy() twice, which means it
issues the COMPLETE notification twice, and likewise fires twice the
tracepoints marking precopy complete. Move that notification (along with
the tracepoint) out to the caller, so that postcopy only notifies once,
right at the start of the switchover phase from precopy. While at it,
rename it to suit the file it now lives in.

For precopy, there should be no functional change except that the
tracepoint has a name change.

For the other two users of qemu_savevm_state_complete_precopy(), namely
qemu_savevm_state() and qemu_savevm_live_state(), the notifier shouldn't
matter because they're not precopy at all. In these two contexts (aka
"savevm" and "colo"), the precopy notifiers will sometimes still be
invoked, but that's outside the scope of this patch.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-12-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit a880ddd8ce (1 year ago)
migration: Take BQL slightly longer in postcopy_start()

This paves the way for a follow-up patch to modify migration states at the
end of postcopy_start(), which should better happen with the BQL held, so
that there's no way of concurrent cancellation.

We'll do slightly more work with the BQL held, but it's really trivial;
hopefully nothing will really change with this. A side benefit is that we
can drop another explicit lock() in the failure path.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-11-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit ec611bd731 (1 year ago)
migration: Drop cached migration state in migration_maybe_pause()

I can't see why we must cache the state now that we've avoided the possible
CANCEL race: that's the only thing I can think of that could modify the
migration state concurrently with the migration thread itself. Make all the
state updates happen unconditionally; then we don't need to cache the state
anymore.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-10-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 1f9b657cae (1 year ago)
migration: Adjust locking in migration_maybe_pause()

In migration_maybe_pause() QEMU may yield the BQL before waiting on a
semaphore. However, it yields the BQL too early, which gives the main
thread a window to quickly take the BQL and modify the state to CANCELLING.

To prevent such a race condition altogether, always update the migration
states within the BQL. That makes sure no concurrent cancellation can ever
happen.

With that, there's a chance we can remove the extra parameter in
migration_maybe_pause() that updates the active state, but that'll be done
separately later.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-9-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 40004007e6 (1 year ago)
migration: Adjust postcopy bandwidth during switchover

Precopy always uses unlimited bandwidth during switchover, which makes
sense: switchover is critical, and nobody wants to throttle bandwidth
during the VM blackout.

Surprisingly, postcopy didn't do that. There is one line in the middle of
the postcopy switchover that switches to postcopy's specified
max-postcopy-bandwidth, but applying it mid-switchover is strange.

This patch makes both modes always use unlimited bandwidth for switchover,
and applies the postcopy max bandwidth only after the switchover has
completed.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-8-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 89011a702f (1 year ago)
migration: Synchronize all CPU states only for non-iterable dump

Do a one-shot CPU sync in qemu_savevm_state_complete_precopy_non_iterable()
instead of coding it separately in two places.

Note that in the context of qemu_savevm_state_complete_precopy(), this
patch is also an optimization for the postcopy path, in that we can avoid
syncing CPUs twice during switchover: before this patch, postcopy_start()
invoked qemu_savevm_state_complete_precopy() twice, and each invocation
tried to sync the CPU info, while only one of them is needed.

For background snapshots, no functional change is intended.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-7-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 4822128693 (1 year ago)
migration: Drop inactivate_disk param in qemu_savevm_state_complete*

This parameter is only used by one caller, which is the genuine precopy
complete path (migration_completion_precopy). The parameter was introduced
in

commit 812145fcf7 (1 year ago)
migration: Avoid two src-downtime-end tracepoints for postcopy

Postcopy can trigger this tracepoint twice, while only the first one is
valid. Avoid triggering the second one, just like what we do when
recording the total downtime.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-5-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 9cde9b435a (1 year ago)
migration: Optimize postcopy on downtime by avoiding JSON writer

postcopy_start() is the entry point where postcopy starts; once it is
reached, the source QEMU will not dump the VM description, i.e. the JSON
writer is now garbage. We could leave it to be cleaned up when migration
completes; however, while the JSON writer object is present, vmstate_save()
will still try to construct the JSON objects for the VM descriptions, even
though they'll never be used if this is postcopy.

To save those cycles, release the JSON writer earlier for postcopy. Then
vmstate_save() will be smart enough to skip the JSON object constructions
completely. This logically reduces downtime, because all such JSON
construction happens during the postcopy blackout.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-4-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit a55090db2a (1 year ago)
migration: Do not construct JSON description if suppressed

The QEMU machine has a property "suppress-vmdesc". When it is enabled, QEMU
stops attaching the JSON VM description at the end of the precopy migration
stream (postcopy is never affected, because postcopy never attaches it).
However, even when it's suppressed by the user, the source QEMU still
constructs the JSON description, which is a complete waste of CPU and
memory resources. To avoid that, only create the JSON writer object if
suppress-vmdesc is not specified.

Luckily, vmstate_save() already supports vmdesc==NULL, so only a few
remaining spots need to be prepared for vmdesc possibly being NULL now.

While at it, move the init/destroy of the JSON writer object to the start
and end of the migration: the JSON writer object is a sub-struct of the
migration state, and it looks like the only object that was dynamically
allocated and destroyed within the migration process. Make it the same as
the rest of the objects migration uses.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-3-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>
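
A minimal sketch of the conditional construction this describes, assuming
the JSON writer lives in the migration state (the `s->vmdesc` field name is
an assumption for illustration; json_writer_new() and vmstate_save() are
existing QEMU interfaces):

```c
/* Create the VM-description writer only when it will actually be sent;
 * 's->vmdesc' is an assumed field name, for illustration only. */
if (should_send_vmdesc()) {
    s->vmdesc = json_writer_new(false);   /* compact, not pretty-printed */
}

/* ... later, per device: vmstate_save() already copes with vmdesc == NULL,
 * so the JSON construction is skipped entirely when suppressed. */
vmstate_save(f, se, s->vmdesc);
```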

commit 013c6e1f42 (1 year ago)
migration: Remove postcopy implications in should_send_vmdesc()

should_send_vmdesc() has a hack inside (not reflected in the function
name): it tries to detect the global postcopy state, and that affects the
value it returns.

It's easier to keep the helper simple by only checking the suppress-vmdesc
property. Then:

- On the sender side of its usage, there's already an in_postcopy variable
  we can use: postcopy doesn't send a vmdesc at all, so directly skip
  everything for postcopy.
- On the recv side, by the time vmdesc processing is reached it must be
  precopy code already, hence that hack check never took effect anyway.

No functional change intended, except a trivial side effect: the source
QEMU will start to avoid running some JSON helpers in the postcopy path,
but that only shrinks the postcopy blackout window a bit, with no other
side effects.

Signed-off-by: Peter Xu <peterx@redhat.com>
Tested-by: Jiri Denemark <jdenemar@redhat.com>
Reviewed-by: Juraj Marcin <jmarcin@redhat.com>
Link: https://lore.kernel.org/r/20250114230746.3268797-2-peterx@redhat.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>
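
After this change, the helper reduces to a pure property check. A sketch of
that shape, assuming the standard machine-state accessors (the exact
post-patch body may differ):

```c
/* Sketch: should_send_vmdesc() reduced to the suppress-vmdesc property
 * only; postcopy is now handled by the callers. */
static bool should_send_vmdesc(void)
{
    MachineState *machine = MACHINE(qdev_get_machine());

    return !machine->suppress_vmdesc;
}
```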

commit 624e6e654e (1 year ago)
migration: cpr-transfer mode

Add the cpr-transfer migration mode, which allows the user to transfer a
guest to a new QEMU instance on the same host with minimal guest pause
time, by preserving guest RAM in place (albeit with new virtual addresses
in new QEMU) and by preserving device file descriptors. Pages that were
locked in memory for DMA in old QEMU remain locked in new QEMU, because the
descriptor of the device that locked them remains open.

cpr-transfer preserves memory and device descriptors by sending them to
new QEMU over a unix domain socket using SCM_RIGHTS. Such CPR state cannot
be sent over the normal migration channel, because devices and backends
are created prior to reading the channel, so this mode sends CPR state
over a second "cpr" migration channel. New QEMU reads the cpr channel
prior to creating devices or backends. The user specifies the cpr channel
in the channel arguments on the outgoing side, and in a second -incoming
command-line parameter on the incoming side.

The user must start old QEMU with the '-machine aux-ram-share=on' option,
which allows anonymous memory to be transferred in place to the new process
by transferring a memory descriptor for each ram block. Memory-backend
objects must have the share=on attribute, but memory-backend-epc is not
supported.

The user starts new QEMU on the same host as old QEMU, with command-line
arguments to create the same machine, plus the -incoming option for the
main migration channel, like normal live migration. In addition, the user
adds a second -incoming option with channel type "cpr". This CPR channel
must support file descriptor transfer with SCM_RIGHTS, i.e. it must be a
UNIX domain socket.

To initiate CPR, the user issues a migrate command to old QEMU, adding a
second migration channel of type "cpr" in the channels argument. Old QEMU
stops the VM, saves state to the migration channels, and enters the
postmigrate state. New QEMU mmap's the memory descriptors, and execution
resumes.

The implementation splits qmp_migrate into start and finish functions.
Start sends CPR state to new QEMU, which responds by closing the CPR
channel. Old QEMU detects the HUP, then calls finish, which connects the
main migration channel.

In summary, the usage is:

  qemu-system-$arch -machine aux-ram-share=on ...

  start new QEMU with "-incoming <main-uri> -incoming <cpr-channel>"

  Issue commands to old QEMU:
    migrate_set_parameter mode cpr-transfer

    {"execute": "migrate", ...
        {"channel-type": "main"...}, {"channel-type": "cpr"...} ... }

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Link: https://lore.kernel.org/r/1736967650-129648-17-git-send-email-steven.sistare@oracle.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit b3698869f4 (1 year ago)
migration: cpr-transfer save and load

Add functions to create a QEMUFile based on a unix URI, for saving or
loading, for use by cpr-transfer mode to preserve CPR state.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/1736967650-129648-16-git-send-email-steven.sistare@oracle.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit e3965dc352 (1 year ago)
migration: VMSTATE_FD

Define VMSTATE_FD for declaring a file descriptor field in a
VMStateDescription.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/1736967650-129648-15-git-send-email-steven.sistare@oracle.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>
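
As an illustration, a field declared this way might look like the sketch
below. Only the VMSTATE_FD macro comes from this commit; the struct and
field names are hypothetical:

```c
/* Hypothetical state carrying a file descriptor across CPR. */
typedef struct DemoState {
    int fd;
} DemoState;

static const VMStateDescription vmstate_demo = {
    .name = "demo",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (VMStateField[]) {
        VMSTATE_FD(fd, DemoState),   /* fd travels out-of-band, via SCM_RIGHTS */
        VMSTATE_END_OF_LIST()
    },
};
```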

commit b5779dc7cf (1 year ago)
migration: SCM_RIGHTS for QEMUFile

Define functions to put/get file descriptors to/from a QEMUFile, for qio
channels that support SCM_RIGHTS. Maintain ordering such that put(A),
put(fd), put(B) followed by get(A), get(fd), get(B) always succeeds. Other
get orderings may succeed, but are not guaranteed.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/1736967650-129648-14-git-send-email-steven.sistare@oracle.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>
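
A sketch of the ordering contract described above; the accessor names
qemu_file_put_fd()/qemu_file_get_fd() are assumed from the commit subject,
while qemu_put_be32()/qemu_get_be32() are existing QEMUFile helpers:

```c
/* put(A), put(fd), put(B) on the sender ... */
static void send_side(QEMUFile *f, int fd)
{
    qemu_put_be32(f, 0xAAAA);     /* put(A) */
    qemu_file_put_fd(f, fd);      /* put(fd): sent out-of-band via SCM_RIGHTS */
    qemu_put_be32(f, 0xBBBB);     /* put(B) */
}

/* ... must be matched by get(A), get(fd), get(B) on the receiver. */
static void recv_side(QEMUFile *f)
{
    uint32_t a = qemu_get_be32(f);    /* get(A) */
    int fd = qemu_file_get_fd(f);     /* get(fd): guaranteed in this order */
    uint32_t b = qemu_get_be32(f);    /* get(B) */
    (void)a; (void)b; (void)fd;
}
```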

commit 2862b6b924 (1 year ago)
migration: incoming channel

Extend the -incoming option to allow an @MigrationChannel to be specified.
This allows channels other than 'main' to be described on the command
line, which will be needed for CPR.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Acked-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/1736967650-129648-13-git-send-email-steven.sistare@oracle.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit f2374f0fc3 (1 year ago)
migration: enhance migrate_uri_parse

Export migrate_uri_parse for use outside of migration internals, and
define a method migrate_is_uri that indicates when migrate_uri_parse
should be used.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/1736967650-129648-12-git-send-email-steven.sistare@oracle.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>
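
A sketch of how a caller outside migration might use the exported pair; the
signatures here are assumed from the commit description, not verified
against the final patch:

```c
/* Parse 'str' into a MigrationChannel when it is classic URI syntax,
 * e.g. "unix:/tmp/migrate.sock"; otherwise leave it to other parsers. */
static MigrationChannel *parse_target(const char *str, Error **errp)
{
    MigrationChannel *channel = NULL;

    if (migrate_is_uri(str)) {
        if (!migrate_uri_parse(str, &channel, errp)) {
            return NULL;
        }
    }
    return channel;
}
```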

commit e7d79011a4 (1 year ago)
migration: cpr-state

CPR must save state that is needed after QEMU is restarted, when devices
are realized. Thus the extra state cannot be saved in the migration
channel, as objects must already exist before that channel can be loaded.
Instead, define auxiliary state structures and vmstate descriptions, not
associated with any registered object, and serialize the aux state to a
cpr-specific channel in cpr_state_save. Deserialize in cpr_state_load
after QEMU restarts, before devices are realized. Provide accessors for
clients to register file descriptors for saving. The mechanism for passing
the fds to the new process will be specific to each migration mode, and is
added in subsequent patches.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/1736967650-129648-8-git-send-email-steven.sistare@oracle.com
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit ed19620846 (1 year ago)
migration: fix -Werror=maybe-uninitialized

../migration/savevm.c: In function ‘qemu_savevm_state_complete_precopy_non_iterable’:
../migration/savevm.c:1560:20: error: ‘ret’ may be used uninitialized [-Werror=maybe-uninitialized]
 1560 |     return ret;
      |            ^~~

Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-ID: <20250114104811.2612846-1-marcandre.lureau@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
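
The conventional fix for this class of warning is to give the variable a
defined initial value; a simplified stand-in (the actual patch may have
restructured the branches instead):

```c
/* Initialize 'ret' up front so every exit path returns a defined value. */
static int demo_complete(void)
{
    int ret = 0;
    /* ... conditional work may or may not assign ret ... */
    return ret;
}
```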

commit a523bc5216 (1 year ago)
multifd: bugfix for incorrect migration data with qatzip compression

When QPL compression is enabled on the migration channel and the same
dirty page changes from a normal page to a zero page in the iterative
memory copy, the dirty page will not be updated to a zero page again on
the target side, resulting in incorrect memory data on the source and
target sides.

The root cause is that the target side does not record the normal pages
in the receivedmap.

The solution is to add ramblock_recv_bitmap_set_offset on the target side
to record the normal pages.

Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Jason Zeng <jason.zeng@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241218091413.140396-4-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 2588a5f99b (1 year ago)
multifd: bugfix for incorrect migration data with QPL compression

When QPL compression is enabled on the migration channel and the same
dirty page changes from a normal page to a zero page in the iterative
memory copy, the dirty page will not be updated to a zero page again on
the target side, resulting in incorrect memory data on the source and
target sides.

The root cause is that the target side does not record the normal pages
in the receivedmap.

The solution is to add ramblock_recv_bitmap_set_offset on the target side
to record the normal pages.

Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Jason Zeng <jason.zeng@intel.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241218091413.140396-3-yuan1.liu@intel.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
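
Both fixes hinge on the same receive-side rule: every page actually written
must be marked in receivedmap, so that a later zero-page update for the
same offset is not skipped. A simplified sketch of that rule, where
ramblock_recv_bitmap_set_offset() is the call the fixes add and the
surrounding function is illustrative only (the real call sites live in the
respective multifd compression backends):

```c
/* Simplified receive path for a normal (non-zero) page: after copying it
 * into guest memory, record it in receivedmap. */
static void demo_recv_normal_page(RAMBlock *block, ram_addr_t offset,
                                  void *host, const void *data, size_t size)
{
    memcpy(host, data, size);
    ramblock_recv_bitmap_set_offset(block, offset);  /* the missing call */
}
```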

commit cdc3970f85 (1 year ago)
multifd: bugfix for migration using compression methods

When compression is enabled on the migration channel and the pages
processed are all zero pages, these pages will not be sent and updated on
the target side, resulting in incorrect memory data on the source and
target sides.

The root cause is that all compression methods call
multifd_send_prepare_common to determine whether to compress dirty pages,
but multifd_send_prepare_common does not update the IOV of MultiFDPacket_t
when all dirty pages are zero pages.

The solution is to always update the IOV of MultiFDPacket_t, regardless of
whether the dirty pages are all zero pages.

Fixes:

commit 35049eb0d2 (1 year ago)
migration: Fix arrays of pointers in JSON writer

Currently, if an array of pointers contains a NULL pointer, that pointer
will be encoded as '0' in the stream. Since the JSON writer doesn't define
a "pointer" type, that '0' will now be a uint8, which is different from
the original type being pointed to, e.g. struct. (We're further calling
the uint8 "nullptr", but that's irrelevant to the issue.)

That mixed-type array shouldn't be compressed, otherwise data is lost, as
the code currently makes the whole array take the type of its first
element:

  css = {NULL, NULL, ..., 0x5555568a7940, NULL};

  {"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
   "version": 1, "fields": [
    ...,
    {"name": "css", "array_len": 256, "type": "nullptr", "size": 1},
    ...,
  ]}

In the above, the valid pointer at position 254 got lost among the
compressed array of nullptr.

While we could disable array compression when a NULL pointer is found, the
JSON part of the stream still contributes to downtime, so we should avoid
writing unnecessary bytes to it.

Keep the array compression in place, but if NULL and non-NULL pointers are
mixed, break the array into several type-contiguous pieces:

  css = {NULL, NULL, ..., 0x5555568a7940, NULL};

  {"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
   "version": 1, "fields": [
    ...,
    {"name": "css", "array_len": 254, "type": "nullptr", "size": 1},
    {"name": "css", "type": "struct", "struct": {"vmsd_name": "s390_css_img", ... }, "size": 768},
    {"name": "css", "type": "nullptr", "size": 1},
    ...,
  ]}

Now each type-discontiguous region becomes a new JSON entry. The reader
should interpret this as a concatenation of values, all part of the same
field.

Parsing the JSON with analyze-migration.py now shows the proper data being
pointed to at the places where the pointer is valid, and "nullptr" where
there's NULL:

  "s390_css (14)": {
      ...
      "css": [
          "nullptr",
          "nullptr",
          ...
          "nullptr",
          {
              "chpids": [
                  {
                      "in_use": "0x00",
                      "type": "0x00",
                      "is_virtual": "0x00"
                  },
                  ...
              ]
          },
          "nullptr",
      }

Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-7-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 9867c3a7ce (1 year ago)
migration: Dump correct JSON format for nullptr replacement

QEMU plays a trick with null pointers inside an array of pointers in a
VMSD field. See

commit f52965bf0e (1 year ago)
migration: Rename vmstate_info_nullptr

Rename vmstate_info_nullptr from "uint64_t" to "nullptr". This vmstate
actually reads and writes just a byte, so the proper name would be uint8.
However, since this is a marker for a NULL pointer, it's convenient to
have a more explicit name that can be identified by the consumers of the
JSON part of the stream.

Change the name to "nullptr" and add support for it in the
analyze-migration.py script. Arbitrarily use the name of the type as the
value of the field, to avoid the script showing 0x30 or '0', which could
be confusing for readers.

Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-5-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 2aead53d39 (1 year ago)
migration: Remove unused argument in vmsd_desc_field_end

Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20250109185249.23952-3-farosas@suse.de>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 8597af7615 (1 year ago)
migration/block: Rewrite disk activation

This patch proposes a flag to maintain disk activation status globally. It
mostly rewrites disk activation management for QEMU, including COLO and
the QMP command xen_save_devices_state.

Backgrounds
===========

We have two problems with disk activations: one resolved, one not.

Problem 1: disk activation recovery (for switchover interruptions)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When migration is either cancelled or fails during switchover, especially
after the disks have been inactivated, QEMU needs to remember to
re-activate the disks again before the VM starts. It used to be done
separately in two paths: one in qmp_migrate_cancel(), the other in the
failure path of migration_completion(). It used to be fixed in different
commits, all over the place in QEMU. These are the relevant changes I saw
(I'm not sure if it's the complete list):

- In 2016, commit

commit 8c97c5a476 (1 year ago)
migration/block: Fix possible race with block_inactive

The src QEMU sets block_inactive=true very early, before the invalidation
takes place. That means if something goes wrong between setting the flag
and reaching qemu_savevm_state_complete_precopy_non_iterable(), where the
invalidation work is done, the block_inactive flag will be inconsistent.
For example, consider when qemu_savevm_state_complete_precopy_iterable()
fails: block_inactive is set to true even though all block drives are
still active.

Fix that by only updating the flag after the invalidation is done.

There is no Fixes tag for any commit, because it's not an issue as long as
bdrv_activate_all() is re-entrant on all-active disks: a false-positive
block_inactive brings nothing worse than "trying to activate the blocks,
but they're already active". However, let's still do it right to avoid the
flag being inconsistent with reality.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-6-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 61f2b48998 (1 year ago)
migration/block: Apply late-block-active behavior to postcopy

Postcopy never cared about late-block-active, yet there's no mention in
the capability that it doesn't apply to postcopy. Considering that we
_assumed_ late activation is always good, do it for postcopy too,
unconditionally, just like precopy.

After this patch, the behavior should be unified across all paths.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-5-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit fca9aef1c8 (1 year ago)
migration/block: Make late-block-active the default

The migration capability 'late-block-active' controls when the block
drives will be activated. If enabled, block drives are only activated when
the VM starts: either right away when the src runstate was "live" (RUNNING
or SUSPENDED), or postponed until qmp_cont().

Let's do this unconditionally. There's no harm in delaying activation of
the block drives. Meanwhile there's no ABI breakage if the dest does it,
because the src QEMU has nothing to do with it.

IIUC we could have avoided introducing this cap in the first place, but
it's still not too late to just always enable it. The cap is now a
candidate for removal, but that's left for later patches.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206230838.1111496-4-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 7815f69867 (1 year ago)
migration: Add helper to get target runstate

In 99% of cases, after QEMU migrates to the dest host, it tries to detect
the target VM runstate using global_state_get_runstate(). There's one
outlier so far, which is Xen, which won't send the global state. That's
the major reason why the global_state_received() check was always there
together with global_state_get_runstate().

However, it's utterly confusing why global_state_received() has anything
to do with "let's start the VM or not". Provide a helper to explain it;
then we have a unified entry point for getting the target dest QEMU
runstate after migration.

Suggested-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241206230838.1111496-2-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
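
A sketch of the helper's plausible shape; the name and the RUN_STATE_RUNNING
fallback for the Xen case are assumptions, not the verbatim patch
(global_state_received() and global_state_get_runstate() are the existing
helpers named above):

```c
/* Unified entry point for the target runstate after migration. */
static RunState migration_get_target_runstate(void)
{
    /* Xen does not send the global state; assume a running target then. */
    if (!global_state_received()) {
        return RUN_STATE_RUNNING;
    }
    return global_state_get_runstate();
}
```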

commit b93d897ea2 (1 year ago)
migration/multifd: Fix compat with QEMU < 9.0

Commit

commit baab4473db (1 year ago)
migration/multifd: Document the reason to sync for save_setup()

It's not straightforward to see why the src QEMU needs to sync multifd
during the setup() phase. After all, there's no page queued at that point.

For old QEMUs, there's a solid reason: EOS requires it to work. It's less
clear for new QEMUs, which do not treat EOS messages as sync requests. One
will figure that out only when this is conditionally removed; in fact, the
author did try it out.

Logically we could still avoid doing this on new machine types; however,
that needs a separate compat field, which can be overkill for some trivial
overhead in the setup() phase. Let's instead document it completely, to
avoid someone else trying this again and doing the debugging one more
time, or anyone being confused about why this ever existed.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206224755.1108686-8-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 1aa81c3098 (1 year ago)
migration/multifd: Cleanup src flushes on condition check

The src flush condition check is over-complicated, and it would get
further out of control once postcopy is involved.

In general, we have two ways to do the sync: the legacy way or the modern
way. Legacy uses per-section flushes; modern uses per-round flushes.
Mapped-ram always uses the modern, per-round flush.

Introduce two helpers, which greatly simplify the code and hopefully make
it readable again.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206224755.1108686-7-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
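
A sketch of what the two helpers plausibly look like, assuming the existing
migrate_multifd()/migrate_mapped_ram()/migrate_multifd_flush_after_each_section()
option accessors; the helper names and exact bodies are assumptions:

```c
/* Legacy sync: flush at every RAM section boundary (per-section). */
static bool multifd_ram_sync_per_section(void)
{
    if (!migrate_multifd()) {
        return false;
    }
    if (migrate_mapped_ram()) {
        return false;   /* mapped-ram never uses the legacy flush */
    }
    return migrate_multifd_flush_after_each_section();
}

/* Modern sync: flush once per completed dirty-bitmap round (per-round). */
static bool multifd_ram_sync_per_round(void)
{
    if (!migrate_multifd()) {
        return false;
    }
    if (migrate_mapped_ram()) {
        return true;    /* mapped-ram is always per-round */
    }
    return !migrate_multifd_flush_after_each_section();
}
```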

commit de695b1399 (1 year ago)
migration/multifd: Remove sync processing on postcopy

Multifd never worked with postcopy, at least not so far. Remove the sync
processing there, because it's confusing and those syncs should never
appear. Now, if RAM_SAVE_FLAG_MULTIFD_FLUSH is observed, we fail hard
instead of trying to invoke multifd code.

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241206224755.1108686-6-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit e5f14aa5fe (1 year ago)
migration/multifd: Unify RAM_SAVE_FLAG_MULTIFD_FLUSH messages

A RAM_SAVE_FLAG_MULTIFD_FLUSH message should always correlate with a sync
request on the src. Unify the message into one place, and conditionally
send it only when necessary.

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241206224755.1108686-5-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 604b4749c5 (1 year ago)
migration/ram: Move RAM_SAVE_FLAG* into ram.h

Firstly, we're going to use the multifd flag in multifd code soon, so
keeping it in ram.c won't work. Secondly, we have a separate RDMA flag
dangling around, which is definitely not obvious: there's one comment that
helps, but not too much.

Put all the RAM save flags together, so nothing gets overlooked, and add a
section explaining why we can't use bits over 0x200. Remove
RAM_SAVE_FLAG_FULL, as it's already unused in QEMU, as the comment
explained.

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20241206224755.1108686-4-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>

commit 10801e08ac (1 year ago)
migration/multifd: Allow to sync with sender threads only

Teach multifd_send_sync_main() to sync with the sender threads only.

We already have such a request: when mapped-ram is enabled with multifd,
no SYNC messages are pushed to the stream when multifd syncs the sender
threads, because there are no destination threads waiting for them. The
whole point of the sync is to make sure all threads have finished their
jobs.

So fundamentally we have requests to do the sync in different ways:

- either sync the threads only,
- or sync the threads and also notify the destination side.

Mapped-ram did it already because of the use_packet check in the sync
handler of the sender thread. That works. However, it may stop working
when e.g. VFIO starts to reuse multifd channels to push device states. In
that case VFIO has a similar "thread-only sync" request, but we can't
check a flag, because such sync requests can still come from RAM, which
needs the on-wire notifications.

Pave the way for that by allowing multifd_send_sync_main() to specify
which kind of sync the caller needs. We can use it for mapped-ram already.

No functional change intended.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Message-Id: <20241206224755.1108686-3-peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
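
One plausible shape for that request parameter, assuming an enum-based API;
the names below are assumptions for illustration, not necessarily the
committed ones:

```c
/* Hypothetical request type: which kind of sync the caller needs. */
typedef enum {
    MULTIFD_SYNC_LOCAL,   /* sync sender threads only (e.g. mapped-ram) */
    MULTIFD_SYNC_ALL,     /* sync threads and notify the destination */
} MultiFDSyncReq;

int multifd_send_sync_main(MultiFDSyncReq req);

/* mapped-ram: no destination threads to notify
 *     multifd_send_sync_main(MULTIFD_SYNC_LOCAL);
 * normal RAM migration: keep the on-wire notification
 *     multifd_send_sync_main(MULTIFD_SYNC_ALL);
 */
```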

commit 1d457daf86 (1 year ago)
migration/multifd: Further remove the SYNC on complete

Commit

commit d127294f26 (1 year ago)
migration/multifd: Fix compile error caused by page_size usage

From Commit

commit 3bdb1a75f1 (1 year ago)
migration: Unexport migration_is_active()

After being removed from VFIO and the dirty limit, migration_is_active()
no longer has any users outside the migration subsystem; in fact, it's
only used in migration.c. Unexport it, and also relocate it so it can be
made static.

Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Tested-by: Joao Martins <joao.m.martins@oracle.com>
Link: https://lore.kernel.org/r/20241218134022.21264-8-avihaih@nvidia.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>

commit 844ed0f762 (1 year ago)
migration: Drop migration_is_device()

After being removed from VFIO, migration_is_device() no longer has any
users. Drop it.

Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Acked-by: Peter Xu <peterx@redhat.com>
Tested-by: Joao Martins <joao.m.martins@oracle.com>
Link: https://lore.kernel.org/r/20241218134022.21264-7-avihaih@nvidia.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>

commit 32cad1ffb8 (1 year ago)
include: Rename sysemu/ -> system/

Headers in include/sysemu/ are not only related to system *emulation*;
they are also used by virtualization. Rename the directory to system/,
which is clearer. Files were renamed manually, then mechanically changed
using sed.

Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Lei Yang <leiyang@redhat.com>
Message-Id: <20241203172445.28576-1-philmd@linaro.org>

commit 0017f5713a (1 year ago)
migration: Use device_class_set_props_n

Export the migration_properties array size from options.c; use that to
feed device_class_set_props_n. We must remove DEFINE_PROP_END_OF_LIST so
the count is correct.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Lei Yang <leiyang@redhat.com>
Link: https://lore.kernel.org/r/20241218134251.4724-15-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
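
A sketch of the pattern described, with one illustrative property; the
exported-count symbol name is an assumption, while device_class_set_props_n
and the store-global-state property come from the commit's context:

```c
/* options.c: counted property array, no DEFINE_PROP_END_OF_LIST() marker. */
const Property migration_properties[] = {
    DEFINE_PROP_BOOL("store-global-state", MigrationState,
                     store_global_state, true),
    /* ... the rest of the migration properties ... */
};
const size_t migration_properties_count = ARRAY_SIZE(migration_properties);

/* migration.c: class init feeds the explicit count. */
static void migration_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    device_class_set_props_n(dc, migration_properties,
                             migration_properties_count);
}
```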