

From: Alexander Graf
Subject: Re: [Qemu-devel] Slow kernel/initrd loading via fw_cfg; Was Re: Hack integrating SeaBios / LinuxBoot option rom with QEMU trace backends
Date: Tue, 11 Oct 2011 11:19:25 +0200

On 11.10.2011, at 11:15, Avi Kivity wrote:

> On 10/11/2011 10:23 AM, Daniel P. Berrange wrote:
>>  - Application sandbox, directly boots the regular host's kernel and
>>    a custom initrd image. The initrd does not contain any files except
>>    for the 9p kernel modules and a custom init binary, which mounts
>>    the guest root FS from a 9p filesystem export.
>> 
>>    The kernel is < 5 MB, while the initrd is approx 700 KB compressed,
>>    or 1.4 MB uncompressed. Performance for the sandbox is even more
>>    critical than for libguestfs. Even tens of milliseconds make a
>>    difference here. The commands being run in the sandbox can be
>>    very short-lived processes, executed reasonably frequently. The
>>    goal is to have an end-to-end runtime overhead of < 2 seconds. This
>>    includes libvirt guest startup, qemu startup/shutdown, BIOS time,
>>    option ROM time, and kernel boot & shutdown time.
>> 
>>    The reason for using a kernel/initrd instead of a bootable ISO
>>    is that building an ISO takes time itself, and we need to be
>>    able to easily pass kernel boot arguments via -append.
>> 
>> 
>> I'm focusing on the last use case, and if the phase of the moon
>> is correct, I can currently execute a sandbox command with a total
>> overhead of 3.5 seconds (if using a compressed initrd), of which
>> the QEMU execution time is 2.5 seconds.
>> 
>> Of this, 1.4 seconds is the time required by LinuxBoot to copy the
>> kernel+initrd. If I use an uncompressed initrd, which I really want
>> to do to avoid decompression overhead, this increases to ~1.7 seconds.
>> So the LinuxBoot ROM is ~60% of total QEMU execution time, or 40%
>> of total sandbox execution overhead.
> 
> One thing we can do is boot a guest and immediately snapshot it, before it 
> runs any application-specific code.  Subsequent invocations will MAP_PRIVATE 
> the memory image and COW their way through it.  This avoids the kernel 
> initialization time as well.

That doesn't allow modification of -append, and it gets you into a pretty 
bizarre state when updating your host files, since you then have two different 
paths: full boot and restore. That's yet another potential source of bugs.
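
(For reference, the flow Avi describes would presumably look something like
saving the state once from the monitor and starting every later guest from
that image; the file name and exec transport here are only illustrative:

  (qemu) migrate "exec:cat > /tmp/sandbox-state.img"

  qemu ... -incoming "exec:cat /tmp/sandbox-state.img"

Since -append is part of the saved state, the command line is frozen at
snapshot time, which is exactly the issue.)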

> 
>> 
>> For comparison I also did a test building a bootable ISO using ISOLinux.
>> This required 700 ms for the boot time, which is approximately half the
>> time required for a direct kernel/initrd boot. But you then have to add
>> on the time required to build the ISO on every boot, to add custom kernel
>> command line args. So while the ISO is currently faster than LinuxBoot,
>> there is still non-negligible overhead here that I want to avoid.
> 
> You can accept parameters from virtio-serial or some other channel.  Is there 
> any reason you need them specifically as *kernel* command line parameters?

That doesn't work for kernel parameters. It also means things would have to be 
rewritten needlessly. Sometimes we can't easily change the way parameters are 
passed into the guest either, for example when running a random (read: old, 
think of RHEL5) distro installation initrd.
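
(The virtio-serial alternative would presumably be something like the
following, with the socket path and port name made up purely for
illustration:

  qemu ... -device virtio-serial \
           -chardev socket,id=args0,path=/tmp/sandbox-args.sock,server,nowait \
           -device virtserialport,chardev=args0,name=org.example.sandbox.args

with the guest reading its parameters from the corresponding
/dev/virtio-ports/ entry. But none of that ends up on the kernel command
line, which is precisely what these use cases need.)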

And I don't see why we would have to shoot yet another hole into the guest 
just because we're unwilling to fix an interface that is perfectly valid but 
horribly slow.
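
For context, the interface in question is just the plain direct-boot
invocation Daniel describes above, roughly (the paths, mount tag and root
arguments below are only illustrative):

  qemu ... -kernel vmlinuz -initrd sandbox-initrd.img \
           -append 'root=sandboxfs rootfstype=9p rootflags=trans=virtio' \
           -fsdev local,id=root9p,path=/path/to/rootfs,security_model=passthrough \
           -device virtio-9p-pci,fsdev=root9p,mount_tag=sandboxfs

That is the path worth making fast, rather than working around it.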


Alex



