
[Qemu-devel] vpc max table entries calculation error


From: Nick Owens
Subject: [Qemu-devel] vpc max table entries calculation error
Date: Tue, 29 Nov 2016 13:07:17 -0800

I'm writing to discuss an issue in QEMU's VPC image creation.

When creating a dynamic-type VPC image with qemu-img like so:

$ qemu-img create -f vpc vhd.vhd 100M
Formatting 'vhd.vhd', fmt=vpc size=104857600

and then inspecting the file (with a tool I wrote; the values are also easy to
see in a hex dump):

$ vhd-inspect ./vhd.vhd
MaxTableEntries: 51
BlockSize: 2097152
PhysicalSize: 104865792
VirtualSize: 104865792
PhysicalSize/BlockSize = 50

We see that MaxTableEntries (51) differs from PhysicalSize/BlockSize (50).

The VHD specification ([1] or [2]) says:

"Max Table Entries
This field holds the maximum entries present in the BAT. This should be
equal to the number of blocks in the disk (that is, the disk size divided
by the block size)."

However, in the QEMU function 'create_dynamic_disk' in block/vpc.c, one
additional block is added to the calculation of num_bat_entries (which becomes
MaxTableEntries on disk):

num_bat_entries = (total_sectors + block_size / 512) / (block_size / 512);
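
For the 100M image above this works out as follows (assuming total_sectors is
the CHS-rounded size, 104865792 / 512 = 204816 sectors, and block_size is
2097152 bytes, i.e. 4096 sectors):

    num_bat_entries = (204816 + 4096) / 4096 = 51

whereas plain integer division, 204816 / 4096, truncates to 50, which matches
the PhysicalSize/BlockSize value shown above.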

So I tried to fix this by removing the extra '+ block_size / 512'.
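The attempted change would look roughly like this (just a sketch, not a tested
patch):

    num_bat_entries = total_sectors / (block_size / 512);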
However, that seems to break an assumption in 'vpc_open', namely this check:

        computed_size = (uint64_t) s->max_table_entries * s->block_size;
        if (computed_size < bs->total_sectors * 512) {
            error_setg(errp, "Page table too small");
            ret = -EINVAL;
            goto fail;
        }
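
Presumably that is why the check fails for the 100M image above once the extra
block is removed: 50 * 2097152 = 104857600, which is less than
bs->total_sectors * 512 = 104865792, so vpc_open reports "Page table too
small".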

On the other hand, if I create the dynamic VPC image using '-o force_size', the
disk size computation ends up slightly different, apparently because CHS
geometry is not used, and the check passes.
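
(Presumably that is because with force_size the virtual size stays at exactly
104857600 bytes, an exact multiple of the 2 MiB block size, so 50 BAT entries
are enough: 50 * 2097152 = 104857600.)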

I am not sure what the right fix is here, as the vpc code is rather messy, but
I do think this is a bug: the incorrect MaxTableEntries causes other tools to
miscompute the real disk size. When these dynamic-type VHDs with incorrect
MaxTableEntries are converted to fixed-type and uploaded to Microsoft Azure,
the hypervisor rejects the image.

Does someone have an idea about the correct way to fix this?

[1] https://technet.microsoft.com/en-us/virtualization/bb676673.aspx
[2] https://docs.google.com/document/d/1RWssryIPuH_5isISxu9cGisyOfAV8s1_-e-YhhiF-jY/edit

