
[Help-gnunet] Re: Inserting large amounts of content?


From: Per Kreuger
Subject: [Help-gnunet] Re: Inserting large amounts of content?
Date: Tue, 22 Apr 2003 00:34:57 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3) Gecko/20030313



Christian Grothoff wrote:
The cause for this is known. Currently, GNUnet cannot make exact predictions about how big the database will be, so we put in some values from experiments conducted by Eric -- and made them even a bit more conservative. Yes, that should be fixed, but you can certainly work around it.

How?
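
Presumably the conservative values enter some space estimate of roughly the following shape. This is only an illustrative sketch; the overhead factor and quota figures are made-up numbers, not GNUnet's actual values or code:

    # Illustrative sketch of a pessimistic space estimate (made-up numbers,
    # not GNUnet's real values or code).
    OVERHEAD_FACTOR = 3.0   # assumed worst-case blow-up of content on disk
    QUOTA_MB = 1024         # assumed configured database quota

    def space_left(content_inserted_mb):
        """Remaining quota under a pessimistic size prediction."""
        estimated_db_mb = content_inserted_mb * OVERHEAD_FACTOR
        return QUOTA_MB - estimated_db_mb

    # With a factor of 3, the quota looks exhausted after about 341 MB of
    # content, even if the database on disk is actually much smaller.
    print(space_left(341))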



What is probably worse is the very distinct degradation of insertion speed.
I enclose a PostScript file with the insertion rate listed and graphed
over the several hours it took to insert the 946 files.


Some degradation is to be expected with any database. You got a larger degradation due to the uneven distribution of content to buckets (see below).


All my files had the same MIME type, and libextractor extracts keywords for that type, which I used in the insertion.

The insertion rate with an empty database was on the order of 600K/sec. This quickly goes down to about 100K/sec and then slowly deteriorates to less than 50K/sec after filling about half the allocated storage.
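
A rough script along the following lines is enough to reproduce this kind of measurement. The bare "gnunet-insert FILE" invocation is an assumption here; the exact flags depend on the GNUnet version and on how keywords are supplied:

    # Time each gnunet-insert call and print the running insertion rate.
    # The bare "gnunet-insert <file>" command line is an assumption;
    # adjust flags (keywords, indexing mode) to match your setup.
    import os
    import subprocess
    import sys
    import time

    def insert_and_log(directory):
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if not os.path.isfile(path):
                continue
            size_kb = os.path.getsize(path) / 1024.0
            start = time.time()
            subprocess.run(["gnunet-insert", path], check=True)
            elapsed = time.time() - start
            rate = size_kb / elapsed if elapsed > 0 else float("inf")
            print("%s\t%.0f KB\t%.0f KB/sec" % (name, size_kb, rate))

    if __name__ == "__main__":
        insert_and_log(sys.argv[1])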



Can you be more specific about 'crashes gnunetd' (see FAQ on bug reporting...).


I'll try the latest CVS and see if the same type of behaviour is present before running gnunet-check. If so, I'll report it as a bug.

With the latest CVS the insertion-rate behaviour is initially the same: going from 558K/sec to 100K/sec after about 30 minutes and 500M of content indexed.
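
As a quick sanity check, 500M of content indexed in roughly 30 minutes corresponds to an average rate of about 280K/sec, which sits between those two endpoints:

    # Average rate implied by the reported figures: ~500 MB in ~30 minutes.
    content_kb = 500 * 1024
    seconds = 30 * 60
    print(content_kb / seconds)   # roughly 284 KB/sec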


It may be that this type of testing is premature. You mentioned that you are working heavily on the storage mechanism. Is this type of testing at all useful for you, or would you rather I wait until the next release?

        piak




