Re: [Qemu-devel] A question regarding QEMU.
From: Lluís
Subject: Re: [Qemu-devel] A question regarding QEMU.
Date: Thu, 17 Feb 2011 17:04:26 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.0.50 (gnu/linux)
Chung Hwan Kim writes:
> Two other students and I have formed a team for a project called
> "Accelerating Dynamic Binary Translation with the GPUs". As the name of
> the project suggests, our main idea is to parallelize the Dynamic Binary
> Translation (DBT) process and speed it up on GPUs using the NVIDIA
> CUDA library.
AFAIK, DBT is fairly control-flow-intensive code, so you'll probably
run into lots of branch-divergence problems and performance will
suffer a lot, even if you use instruction template tables (like in
QEMU's PPC target).
Nonetheless, I think the newer Fermi models have fewer problems with
that, but it's still an architecture designed for control-flow-homogeneous
parallel code.
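To make the divergence concern concrete, here is a toy cost model (plain Python, not real GPU code; the warp size and per-path cycle counts are made-up illustrative numbers). It assumes the usual SIMT behavior: a warp runs in lockstep, so when its threads take different paths through a branch, the hardware executes each taken path serially and the warp pays the sum of the path costs.

```python
# Toy model of SIMT branch divergence: a warp of threads executes in
# lockstep, so a divergent branch makes the warp run every taken path
# serially -- its cost is the sum of the path costs, not the maximum.

WARP_SIZE = 32

def warp_cycles(path_per_thread, path_cost):
    """Cycles a warp spends when each thread picks one code path."""
    taken = set(path_per_thread)          # distinct paths actually taken
    return sum(path_cost[p] for p in taken)

# Hypothetical costs for two code paths in a translator inner loop.
path_cost = {"fast": 10, "slow": 100}

# Homogeneous warp: every thread takes the same path -> 10 cycles.
uniform = warp_cycles(["fast"] * WARP_SIZE, path_cost)

# Divergent warp: a single thread takes the slow path -> 110 cycles,
# even though 31 of 32 threads only needed the fast path.
divergent = warp_cycles(["fast"] * 31 + ["slow"], path_cost)

print(uniform, divergent)   # 10 110
```

Decode-style code, where each instruction can take a different path through the translator, tends to look like the divergent case.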
In any case, I'm not sure what the real cost of translation is relative
to execution; it all depends on the kind of applications you're running.
But the computation on the GPU had better show a huge speedup over the
current approach, or otherwise the data transfers to/from the GPU will
dominate the cost, especially if they're small transfers.
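A quick back-of-envelope check of that trade-off (all numbers below are assumptions for illustration, not measurements of any real PCIe link or GPU): offloading only pays off when the CPU-side time exceeds the transfer time plus the accelerated compute time.

```python
# Break-even model for GPU offload (illustrative numbers, not
# measurements): offload wins only if transfer + GPU compute time
# is less than the original CPU time.

def offload_worthwhile(cpu_time_s, gpu_speedup, bytes_moved,
                       pcie_bw_bytes_per_s=8e9, latency_s=10e-6):
    """True if offloading beats staying on the CPU."""
    # One transfer to the device and one back, each paying a fixed
    # launch/DMA latency plus bandwidth-limited copy time.
    transfer = 2 * (latency_s + bytes_moved / pcie_bw_bytes_per_s)
    gpu_total = transfer + cpu_time_s / gpu_speedup
    return gpu_total < cpu_time_s

# A large batch of translation work amortizes the transfer cost...
print(offload_worthwhile(cpu_time_s=1e-2, gpu_speedup=10.0,
                         bytes_moved=1e6))    # True
# ...but shipping one small block at a time does not: the fixed
# per-transfer latency alone eats the entire CPU-side budget.
print(offload_worthwhile(cpu_time_s=1e-5, gpu_speedup=10.0,
                         bytes_moved=1e3))    # False
```

This is why batching many translation blocks per transfer matters at least as much as the raw kernel speedup.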
Lluis
--
"And it's much the same thing with knowledge, for whenever you learn
something new, the whole world becomes that much richer."
-- The Princess of Pure Reason, as told by Norton Juster in The Phantom
Tollbooth