From: Artyom Tarasenko
Subject: Re: [Qemu-devel] Debian 7.8.0 SPARC64 on qemu - anything i can do to speedup the emulation?
Date: Thu, 20 Aug 2015 12:40:52 +0200

On Thu, Aug 20, 2015 at 7:22 AM, Dennis Luehring <address@hidden> wrote:
> On 19.08.2015 at 16:41, Artyom Tarasenko wrote:
>>
>> And if I completely disable optimizer (// #define
>> USE_TCG_OPTIMIZATIONS in tcg.c), it's still quite faster:
>>
>> real    14m17.668s
>> user    14m10.241s
>> sys     0m6.060s
>
>
> I also ran my tests without USE_TCG_OPTIMIZATIONS.
>
> qemu 2.4.50, NetBSD 6.1.5 SPARC64
>
> "without-optimization" means //#define USE_TCG_OPTIMIZATIONS
> (the define commented out)
>
> pugixml compile: (without-optimization is faster)
> with-optimization: ~2:51.2
> without-optimization: ~2:14.1
>
> prime.c runtime: (without-optimization is faster)
> with-optimization: ~11 sec
> without-optimization: ~9.9 sec
>
> stream results (with-optimization gives better results)

Ok, this makes sense. Optimized code runs faster, but it takes more
time to translate.
The question is whether TCG can translate less while running g++.
Maybe just increase the TB cache?
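
(In theory that is what the -tb-size command-line option is for; the
value is given in MB, so something like

  qemu-system-sparc64 -tb-size 1024 <usual options>

should request a 1 GB translation buffer. But, as noted below, the
value doesn't seem to reach tcg_init here, so take this only as a
pointer to where the knob lives.)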

I see that it always uses the default TB buffer size (tcg_init in
accel.c is called with an uninitialized variable).
And the default is 25% of the machine memory (size_code_gen_buffer in
translate-all.c). I tried increasing this to 50% and observed that
tb_flushes don't happen during the g++ run. Nevertheless, QEMU is
still busy translating code.
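
For reference, the default sizing logic looks roughly like this (a
paraphrased sketch of size_code_gen_buffer, not the verbatim
translate-all.c code; ram_size and the MIN/MAX macros are QEMU's own):

  static inline size_t size_code_gen_buffer(size_t tb_size)
  {
      if (tb_size == 0) {
          /* No size was requested: default to a quarter of the
             guest's RAM, clamped to the allowed range below. */
          tb_size = (unsigned long)(ram_size / 4);
      }
      if (tb_size < MIN_CODE_GEN_BUFFER_SIZE) {
          tb_size = MIN_CODE_GEN_BUFFER_SIZE;
      }
      if (tb_size > MAX_CODE_GEN_BUFFER_SIZE) {
          tb_size = MAX_CODE_GEN_BUFFER_SIZE;
      }
      return tb_size;
  }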

Why does that happen? I'd expect the TBs to be mostly re-used at some
point while running the same process.
Aurelien, Richard?

Artyom


