guix-devel

Re: Use guix to distribute data & reproducible (data) science


From: Roel Janssen
Subject: Re: Use guix to distribute data & reproducible (data) science
Date: Sat, 17 Feb 2018 23:21:10 +0100
User-agent: mu4e 0.9.18; emacs 25.1.1

Amirouche Boubekki writes:

> Hello again Ludovic,
>
> On 2018-02-09 18:13, address@hidden wrote:
>> Hi!
>> 
>> Amirouche Boubekki <address@hidden> skribis:
>> 
>>> tl;dr: Distribution of data and software seems similar.
>>>        Data is more and more important in software and reproducible
>>>        science. Data science ecosystem lakes resources sharing.
>>>        I think guix can help.
>> 
>> I think some of us especially Guix-HPC folks are convinced about the
>> usefulness of Guix as one of the tools in the reproducible science
>> toolchain (that was one of the themes of my FOSDEM talk).  :-)
>> 
>> Now, whether Guix is the right tool to distribute data, I don’t know.
>> Distributing large amounts of data is a job in itself, and the store
>> isn’t designed for that.  It could quickly become a bottleneck.
>
> What does it mean technically that the store “isn't designed for that”?
>
>> That’s one of the reasons why the Guix Workflow Language (GWL)
>> does not store scientific data in the store itself.
>
> Sorry, I did not follow the engineering discussion around GWL.
> Looking up the web brings me [0]. That said the question I am
> asking is not answered there. In particular there is no rationale
> for that in the design paper.
>
> [0] http://lists.gnu.org/archive/html/guix-devel/2016-10/msg01248.html
>
>> I think data should probably be stored and distributed out-of-band 
>> using
>> appropriate storage mechanisms.
>
> Then, in a follow up mail, you reply to Konrad:
>
>>> Konrad Hinsen <address@hidden> skribis:
>> 
>> [...]
>> 
>>> It would be nice if big datasets could conceptually be handled in the
>>> same way while being stored elsewhere - a bit like git-annex does for
>>> git. And for parallel computing, we could have special build daemons.
>> 
>> Exactly.  I think we need a git-annex/git-lfs-like tool for the store.
>> (It could also be useful for things like secrets, which we don’t want
>> to have in the store.)
>> 

To answer your question:
> What does it mean technically that the store “isn't designed for that”?

I speak only from my own experience with “big data sets”, so it may be
different for other people, but we use a separate storage system for
storing large amounts of data.  This separate storage is fault-tolerant
and optimized for large files, trading higher file-access latency for a
lower financial footprint.

If we were to put data inside the store, we would need a storage system
optimized for both low-latency access to small files and high storage
capacity.  Such a system is extremely expensive.

Another issue I faced when providing datasets in the store is that
it's quite easy to end up with duplicated copies of the same dataset.

For example, suppose I use the GNU build system to extract a tarball
that contains a couple of files.  Whenever a change to another package
affects the GNU build system, the data package will be rebuilt,
producing another copy of the same data.

One could use the trivial build system instead, but tar and gzip are
still needed to unpack the tarball.  Any change to these tools
duplicates the datasets as well.  This is not ideal.
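To make the duplication issue concrete, here is a minimal sketch of what
such a data package might look like with the trivial build system.  The
package name, URL, and hash are placeholders, not a real package:

```scheme
;; A minimal sketch, not a real package: the name, URI, and hash are
;; placeholders.  `tar' and `gzip' refer to the usual Guix package
;; variables from (gnu packages base) and (gnu packages compression).
(define-public example-dataset
  (package
    (name "example-dataset")
    (version "1.0")
    (source (origin
              (method url-fetch)
              (uri "https://example.org/example-dataset-1.0.tar.gz")
              (sha256
               (base32
                "0000000000000000000000000000000000000000000000000000"))))
    (build-system trivial-build-system)
    (arguments
     `(#:modules ((guix build utils))
       #:builder
       (begin
         (use-modules (guix build utils))
         (let ((out    (assoc-ref %outputs "out"))
               (source (assoc-ref %build-inputs "source"))
               (tar    (assoc-ref %build-inputs "tar"))
               (gzip   (assoc-ref %build-inputs "gzip")))
           ;; Even this minimal builder needs tar and gzip to unpack
           ;; the source; a change to either yields a new derivation,
           ;; and thus another copy of the same data in the store.
           (setenv "PATH" (string-append gzip "/bin"))
           (invoke (string-append tar "/bin/tar") "xvf" source)
           (copy-recursively "example-dataset-1.0" out)))))
    (native-inputs
     `(("tar" ,tar)
       ("gzip" ,gzip)))
    (synopsis "Placeholder dataset packaged in the store")
    (description "Illustrative example only.")
    (license license:cc0)))
```

Because tar and gzip appear as inputs, upgrading either changes the
derivation, and the store ends up holding a second, byte-identical copy
of the unpacked dataset.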

Kind regards,
Roel Janssen


