Re: [Qemu-devel] QEMU redesigned for MPI (Message Passing Interface)
From: Paul Brook
Subject: Re: [Qemu-devel] QEMU redesigned for MPI (Message Passing Interface)
Date: Tue, 17 Nov 2009 12:20:28 +0000
User-agent: KMail/1.12.2 (Linux/2.6.30-2-amd64; KDE/4.3.2; x86_64; ; )
> > The practical example below will explain it completely:
> >
> > 1) we take 4 common modern computers - CoreQuad + 8 GB Memory.
> > 2) we assemble a standard Linux cluster with 16 cores and 32G memory.
> > 3) and now - we run the only one virtual guest system, but give it ALL
> > available resources.
If the guest isn't aware of this discontinuity, then performance will really
suck. Generally speaking you have to split jobs anyway, the same as you would
on a regular cluster; the SSI just makes migration and programming a little
easier.
If you don't believe me, then talk to anyone who's used large SSI systems
(e.g. SGI Altix). These systems have dedicated hardware assist and an
interconnect designed for SSI operation, and you still have to be fairly
selective about how you use them.
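Paul's point that workloads must be partitioned explicitly, just as on a regular cluster, can be sketched with a toy scatter/reduce in plain Python. This is not real MPI or QEMU code; the function names merely mimic the MPI_Scatter / MPI_Reduce pattern for illustration:

```python
# Toy illustration of explicit job splitting on a cluster: partition the
# data, do local work per "node", then combine partial results. This is
# the pattern MPI makes explicit, rather than relying on transparent
# shared memory across nodes.

def scatter(data, nranks):
    """Split data into one chunk per rank, roughly like MPI_Scatter."""
    chunk = (len(data) + nranks - 1) // nranks
    return [data[i * chunk:(i + 1) * chunk] for i in range(nranks)]

def node_work(chunk):
    """Work done locally on each node; here, a simple partial sum."""
    return sum(chunk)

def reduce_sum(partials):
    """Combine per-node results, roughly like MPI_Reduce with MPI_SUM."""
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    partials = [node_work(c) for c in scatter(data, nranks=4)]
    print(reduce_sum(partials) == sum(data))  # True
```

The key design point: each node only ever touches its own chunk, so there is no implicit remote-memory traffic; an SSI that hides the node boundary removes none of this partitioning work, it only hides where the slow accesses happen.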
> What you're describing is commonly referred to as a Single System
> Image. It's been around for a while and can be found in software-only
> variants (pre-Xen VirtualIron, ScaleMP) and hardware-assisted ones (IBM, 3leaf).
Or, better still, do it at the OS level (e.g. OpenSSI).
Paul