
Re: [Qemu-devel] Windows and I/O size


From: Vadim Rozenfeld
Subject: Re: [Qemu-devel] Windows and I/O size
Date: Tue, 8 Jan 2013 12:15:59 +0200
User-agent: KMail/1.13.7 (Linux/3.3.0-rc5+; KDE/4.6.5; x86_64; ; )

On Tuesday, January 08, 2013 11:47:54 AM Peter Lieven wrote:
> Am 08.01.2013 um 10:29 schrieb Vadim Rozenfeld <address@hidden>:
> > On Tuesday, January 08, 2013 10:53:44 AM Peter Lieven wrote:
> >> Am 08.01.2013 um 09:50 schrieb Vadim Rozenfeld <address@hidden>:
> >>> On Tuesday, January 08, 2013 10:16:48 AM Peter Lieven wrote:
> >>>> Hi all,
> >>>> 
> >>>> I came across the fact that Windows seems to split requests greater
> >>>> than 64KB into pieces, leading to a lot of IOPS on the storage side.
> >>>> 
> >>>> Can anyone think of a way to merge them before sending them to e.g.
> >>>> an iSCSI storage? A 64KB I/O size is not optimal for e.g. large
> >>>> sequential operations against an iSCSI target.
> >>>> 
> >>>> Thank you,
> >>>> Peter
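For illustration, a minimal sketch of the kind of merging asked about above: coalescing a request that starts exactly where the previous one ends, up to the target's transfer limit. The struct and function names are made up for this sketch and do not come from QEMU or the viostor driver.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical request descriptor; the names are invented for this sketch. */
struct io_req {
    uint64_t offset;   /* byte offset on the target */
    uint32_t len;      /* length in bytes */
};

/*
 * Merge 'next' into 'cur' if it starts exactly where 'cur' ends and the
 * combined size stays within the target's maximum transfer size.
 */
static bool try_merge(struct io_req *cur, const struct io_req *next,
                      uint32_t max_xfer)
{
    if (next->offset != cur->offset + cur->len) {
        return false;             /* not contiguous, keep requests separate */
    }
    if ((uint64_t)cur->len + next->len > max_xfer) {
        return false;             /* merged request would exceed the limit */
    }
    cur->len += next->len;        /* extend the pending request */
    return true;
}

A real block layer would also have to respect alignment and queue-depth constraints; this only shows the contiguity check.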
> >>> 
> >>> Hi Peter.
> >>> Is it viostor? Which version? The most recent one is able to handle
> >>> 256K blocks.
> >> 
> >> Not the most recent one. I will try 0.1.49 now.
> >> 
> >> 256KB is still not that much, but definitely better than 64KB. Are these
> >> Windows limits?
> > 
> > Not exactly. It comes from the driver itself. Actually, with indirect
> > buffer support in virtio, the sky is the limit.
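For context, the indirect buffer feature mentioned above is the virtio indirect descriptor mechanism: a single ring entry can point at a separate table of descriptors, so one request can carry many scatter-gather segments instead of one per ring slot. A rough sketch of the descriptor layout, following the virtio specification (flag values as defined there):

#include <stdint.h>

/* Ring descriptor as defined by the virtio specification (fields are
 * little-endian on the wire; plain integers here for brevity). */
struct vring_desc {
    uint64_t addr;    /* guest-physical address of the buffer, or of an
                         indirect descriptor table */
    uint32_t len;     /* buffer length, or size of the indirect table */
    uint16_t flags;   /* combination of the flags below */
    uint16_t next;    /* index of the next descriptor in the chain */
};

#define VRING_DESC_F_NEXT      1   /* chain continues at 'next' */
#define VRING_DESC_F_WRITE     2   /* device writes into this buffer */
#define VRING_DESC_F_INDIRECT  4   /* 'addr' points to a table of descriptors */

With VRING_DESC_F_INDIRECT set on a ring entry, 'addr' points at a guest-allocated table of vring_desc entries, so the per-request segment count is bounded by what the driver allocates rather than by the ring size.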
> 
> would it be possible to make this value user adjustable in the driver
> settings?
> 
Technically, yes.
> In the meantime I can confirm that the IOPS for sequential reads (writes not
> tested) have dropped to a quarter, as expected.
> 
> I think 256K is a reasonable value. What was the reason to choose it?
The Windows cache manager operates with 256 KB blocks.
Cheers,
Vadim.
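As a way to check what limit the guest actually advertises after updating the driver, the standard Windows storage property ioctl reports the adapter's maximum transfer length. A sketch, assuming a disk reachable as \\.\PhysicalDrive0 (adjust the path; this queries the generic Windows storage stack and is not viostor-specific):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Example path; point it at the disk backed by viostor. */
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", 0,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    STORAGE_PROPERTY_QUERY query = { StorageAdapterProperty,
                                     PropertyStandardQuery };
    STORAGE_ADAPTER_DESCRIPTOR desc;
    DWORD bytes;

    if (DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY,
                        &query, sizeof(query), &desc, sizeof(desc),
                        &bytes, NULL)) {
        printf("MaximumTransferLength: %lu bytes\n", desc.MaximumTransferLength);
        printf("MaximumPhysicalPages:  %lu\n", desc.MaximumPhysicalPages);
    } else {
        fprintf(stderr, "DeviceIoControl failed: %lu\n", GetLastError());
    }

    CloseHandle(h);
    return 0;
}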
> 
> Thank you,
> Peter
> 
> >> I have found docs on the net saying that Windows splits everything up
> >> into 64KB requests. Is this info outdated?
> >> 
> >> thank you,
> >> Peter
> >> 
> >>> Best regards,
> >>> Vadim.


