From: David Kuehling
Subject: Re: [Help-gnunet] Error uploading file
Date: 17 Dec 2005 18:38:08 +0100
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.4

Hi,

>>>>> "Christian" == Christian Grothoff <address@hidden> writes:

> See, not the issue.  Still I would think performance will likely be
> not-so-great with such a huge DB (indexing is good for performance!).

Since GNUnet data blocks are accessed in random order, I thought there
would not be much difference between accessing random blocks from
indexed files and accessing random blocks in a database...
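
To make it a bit more concrete what I mean by "accessing random blocks
from indexed files": it is basically just a seek plus a fixed-size read.
Here is a rough standalone sketch (not GNUnet code, and the block size
is only a placeholder, not GNUnet's real one):

    /* sketch: read one block at a random position from an indexed file */
    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK_SIZE 1024  /* placeholder, not GNUnet's actual block size */

    int main(int argc, char **argv) {
      if (argc != 3) {
        fprintf(stderr, "usage: %s FILE BLOCKNUMBER\n", argv[0]);
        return 1;
      }
      FILE *f = fopen(argv[1], "rb");
      if (f == NULL) {
        perror("fopen");
        return 1;
      }
      unsigned long block = strtoul(argv[2], NULL, 10);
      char buf[BLOCK_SIZE];
      /* the whole "index lookup" is just a seek followed by a read */
      if (fseek(f, (long)(block * BLOCK_SIZE), SEEK_SET) != 0 ||
          fread(buf, 1, BLOCK_SIZE, f) != BLOCK_SIZE) {
        fprintf(stderr, "seek failed or short read\n");
        fclose(f);
        return 1;
      }
      printf("read block %lu (%d bytes)\n", block, BLOCK_SIZE);
      fclose(f);
      return 0;
    }

A database lookup for the same block should not behave very differently,
as long as the access pattern is random either way.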

> I'm not sure I understand this.  The *memory* used by gnunet?  You'll
> just use much more disk space (making a copy of the files in the DB)
> instead of linking to existing content.  This will cost you DB access
> performance...

But I don't want to keep the files I publish on my "server" computer.  I
just upload them into the database and remove them afterwards.  BTW, the
gnunet-insert manpage is somewhat unclear about what indexing means; it
reads:

    Since 0.6.2 GNUnet will make a copy of the file in the directory
    specified in gnunet.conf.

That sounds as if the files will _always_ be copied, which seems like a
bad idea: it would then be quite difficult to keep track of the amount
of storage used by GNUnet's indexed content...
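
(To make sure we are talking about the same thing: what I have been
doing is forcing full insertion instead of indexing.  If I remember the
gnunet-insert options correctly, that is the difference between

    # default: index the file, i.e. keep referring to the on-disk copy
    gnunet-insert some-big-file

    # what I do: full insertion, all blocks go into the database,
    # so the original file can be deleted afterwards
    gnunet-insert -n some-big-file

but please correct me if the option name is off.)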

> can you try applying the following patch:
[..]

I applied your patch, and this is the result:

Dec 17 17:54:53 WARNING: Datastore full (2149286829/2147483648) and
  content priority too low to kick out other content.  Refusing put.

You are right, GNUnet considers the database full.  It would be neat if
gnunet-insert displayed that message, too...  But why is my database
limited to only 2 GB?  What can I do to boost it to 10 GB?
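
(For what it's worth, 2147483648 bytes is exactly 2 GiB, i.e. a quota of
2048 MB.  I would guess the limit comes from a quota setting in
gnunet.conf; going from memory, something along these lines should raise
it to 10 GB:

    # from memory -- the exact option/section name may well be wrong
    # quota in MB, i.e. 10 GB
    DISKQUOTA = 10240

but I may be misremembering the name, so please correct me.)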

BTW, one of the reasons for doing full inserts of all files was that
this way GNUnet would be able to make smart decisions about what content
to drop when the 10 GB limit is reached.  It seems that the natural
fading of priorities is quite slow for my node (and for the limited 2 GB
database size)...

regards,

David
-- 
GnuPG public key: http://user.cs.tu-berlin.de/~dvdkhlng/dk.gpg
Fingerprint: B17A DC95 D293 657B 4205  D016 7DEF 5323 C174 7D40




