qemu-devel

Re: [Qemu-devel] osdep.c patch (FreeBSD hosts)


From: Juergen Lock
Subject: Re: [Qemu-devel] osdep.c patch (FreeBSD hosts)
Date: Sun, 1 Jun 2008 15:15:07 +0200
User-agent: Mutt/1.5.17 (2007-11-01)

On Fri, May 30, 2008 at 01:03:05AM +0200, Juergen Lock wrote:
> On Fri, May 30, 2008 at 12:57:13AM +0200, Juergen Lock wrote:
> > On Thu, May 29, 2008 at 11:54:31PM +0200, Fabrice Bellard wrote:
> > > Is it really needed to mmap() the RAM on FreeBSD ? This is a Linux
> > > specific hack, and it may even be obsolete with recent Linux kernels.
> > > 
> > Hmm actually I don't know...  You think the...
> > 
> > > > +#else
> > > > +    ptr = mmap(NULL, 
> > > > +               size, 
> > > > +               PROT_WRITE | PROT_READ, MAP_PRIVATE|MAP_ANON, 
> > > > +               -1, 0);
> > > > +#endif
> > 
> > could be replaced by just malloc?  Or would there be align issues too?
> 
> ...and I just checked the manpage: our malloc page-aligns too (for sizes >=
> pagesize), so at least alignment shouldn't be an issue.

Ok, and I just tested the following patch and it worked for me:

Index: qemu/osdep.c
@@ -83,7 +83,9 @@
 
 #if defined(USE_KQEMU)
 
+#ifndef __FreeBSD__
 #include <sys/vfs.h>
+#endif
 #include <sys/mman.h>
 #include <fcntl.h>
 
@@ -94,6 +96,7 @@
     const char *tmpdir;
     char phys_ram_file[1024];
     void *ptr;
+#ifndef __FreeBSD__
 #ifdef HOST_SOLARIS
     struct statvfs stfs;
 #else
@@ -155,7 +158,9 @@
         }
         unlink(phys_ram_file);
     }
+#endif
     size = (size + 4095) & ~4095;
+#ifndef __FreeBSD__
     ftruncate(phys_ram_fd, phys_ram_size + size);
     ptr = mmap(NULL,
                size,
@@ -165,6 +170,13 @@
         fprintf(stderr, "Could not map physical memory\n");
         exit(1);
     }
+#else
+    ptr = malloc(size);
+    if (ptr == NULL) {
+        fprintf(stderr, "Could not allocate physical memory\n");
+        exit(1);
+    }
+#endif
     phys_ram_size += size;
     return ptr;
 }



