From: eb
Subject: [Commit-gnuradio] r5043 - in gnuradio/branches/developers/eb/ibu: mblock/src/lib pmt/src/lib
Date: Wed, 18 Apr 2007 13:34:39 -0600 (MDT)

Author: eb
Date: 2007-04-18 13:34:38 -0600 (Wed, 18 Apr 2007)
New Revision: 5043

Modified:
   gnuradio/branches/developers/eb/ibu/mblock/src/lib/mb_message.h
   gnuradio/branches/developers/eb/ibu/pmt/src/lib/pmt_int.h
Log:
Disabled local allocators for now.  At least on GNU/Linux, the default
malloc implementation is faster.  I suspect that malloc is being smart
about multi-threaded allocation, but haven't looked at the
implementation yet.
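For context, here is a minimal sketch of the compile-time switch pattern these headers use: a class-specific operator new/delete backed by a fixed-size, mutex-protected free list, compiled in only when the macro is 1. The names simple_pool and msg_blob are illustrative, not the actual mblock/pmt classes, and error handling is omitted.

#include <cstddef>
#include <cstdlib>
#include <mutex>
#include <vector>

#define USE_LOCAL_ALLOCATOR 0   // define to 0 or 1, as in the headers below

// A trivial thread-safe free-list pool for fixed-size blocks.  Blocks are
// never returned to the system; they are recycled for later allocations.
class simple_pool {
public:
  explicit simple_pool(std::size_t item_size) : d_item_size(item_size) {}

  void *malloc() {
    std::lock_guard<std::mutex> guard(d_mutex);
    if (!d_free.empty()) {
      void *p = d_free.back();
      d_free.pop_back();
      return p;
    }
    return std::malloc(d_item_size);    // fall back to the system allocator
  }

  void free(void *p) {
    std::lock_guard<std::mutex> guard(d_mutex);
    d_free.push_back(p);                // keep the block for reuse
  }

private:
  std::size_t         d_item_size;
  std::mutex          d_mutex;
  std::vector<void *> d_free;
};

class msg_blob {
public:
#if (USE_LOCAL_ALLOCATOR)
  // Flag == 1: allocation of msg_blob objects goes through the pool.
  static void *operator new(std::size_t) { return pool().malloc(); }
  static void operator delete(void *p, std::size_t) { pool().free(p); }
#endif
  // Flag == 0: the overrides above are compiled out and plain ::operator new
  // (i.e. the system malloc) is used -- the behavior this commit selects.
  char payload[64];

private:
#if (USE_LOCAL_ALLOCATOR)
  static simple_pool &pool() {
    static simple_pool p(sizeof(msg_blob));
    return p;
  }
#endif
};

The point of the change below is simply that flipping the macro to 0 compiles the local-allocator path out, so comparing the pool against glibc's malloc needs only a rebuild.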



Modified: gnuradio/branches/developers/eb/ibu/mblock/src/lib/mb_message.h
===================================================================
--- gnuradio/branches/developers/eb/ibu/mblock/src/lib/mb_message.h   2007-04-18 18:23:06 UTC (rev 5042)
+++ gnuradio/branches/developers/eb/ibu/mblock/src/lib/mb_message.h   2007-04-18 19:34:38 UTC (rev 5043)
@@ -24,7 +24,7 @@
 #include <mb_common.h>
 #include <iosfwd>
 
-#define MB_MESSAGE_LOCAL_ALLOCATOR 1   // define to 0 or 1
+#define MB_MESSAGE_LOCAL_ALLOCATOR 0   // define to 0 or 1
 
 class mb_message;
 typedef boost::shared_ptr<mb_message> mb_message_sptr;

Modified: gnuradio/branches/developers/eb/ibu/pmt/src/lib/pmt_int.h
===================================================================
--- gnuradio/branches/developers/eb/ibu/pmt/src/lib/pmt_int.h   2007-04-18 18:23:06 UTC (rev 5042)
+++ gnuradio/branches/developers/eb/ibu/pmt/src/lib/pmt_int.h   2007-04-18 19:34:38 UTC (rev 5043)
@@ -31,7 +31,7 @@
  * See pmt.h for the public interface
  */
 
-#define PMT_LOCAL_ALLOCATOR 1          // define to 0 or 1
+#define PMT_LOCAL_ALLOCATOR 0          // define to 0 or 1
 
 class pmt_base : boost::noncopyable {
 protected:




