[Qemu-devel] Re: Status update


From: Eduard - Gabriel Munteanu
Subject: [Qemu-devel] Re: Status update
Date: Thu, 1 Jul 2010 22:30:34 +0300
User-agent: Mutt/1.5.20 (2009-06-14)

On Wed, Jun 30, 2010 at 09:37:31AM +0100, Stefan Hajnoczi wrote:
> On Tue, Jun 29, 2010 at 6:25 PM, Eduard - Gabriel Munteanu
> <address@hidden> wrote:
> > On the other hand, we could just leave it alone for now. Changing
> > mappings during DMA is stupid anyway: I don't think the guest can
> > recover the results of DMA safely, even though it might be used on
> > transfers in progress you simply don't care about anymore. Paul Brook
> > suggested we could update the cpu_physical_memory_map() mappings
> > somehow, but I think that's kinda difficult to accomplish.
> 
> A malicious or broken guest shouldn't be able to crash or corrupt QEMU
> process memory.  The IOMMU can only map from bus addresses to guest
> physical RAM (?) so the worst the guest can do here is corrupt itself?
> 
> Stefan

That's true, but it's fair to be concerned about the guest itself.
Imagine it runs some possibly malicious apps which program the hardware
to do DMA. That should be safe when an IOMMU is present.

But suppose the guest OS changes mappings and expects the IOMMU to
enforce them as soon as the invalidation commands have completed. The
guest then reclaims the old space for other uses. This leaves an
opportunity for those processes to corrupt that space or read sensitive
data from it.

If the guest OS is prone to changing mappings during DMA, some process
could continually set up, e.g., IDE DMA write transfers in the hope of
exposing useful data it can't otherwise read. The buffer can be
poisoned beforehand to see whether anyone took the bait and wrote into
that space.

Actually, I'm not so sure changing mappings during DMA is stupid, as
the OS might want to reassign devices (where this is possible) to
various processes. Reclaiming mappings also seems normal when a process
dies during DMA, since the kernel has no way of telling whether the DMA
completed (or even started).


        Eduard



