[Monotone-devel] Re: wrapping up the changeset branch
From: graydon hoare
Subject: [Monotone-devel] Re: wrapping up the changeset branch
Date: Mon, 18 Oct 2004 14:55:43 -0400
User-agent: Mozilla Thunderbird 0.8 (X11/20040913)
Asger Ottar Alstrup wrote:
> But maybe you can tell me the status of these things given the recent
> developments:
> - Storing large files ~ 1 GB
it cannot do this at the moment. it requires the upgrade to sqlite 3,
which we have not performed yet.
> - Efficient working with large files - i.e. not having them three times
> in memory in common operations
no work has been done on this.
> - Efficient distribution of large files
I don't know what sort of efficiency gain you might be thinking of.
perhaps something to do with shared sub-fragments, or simply avoidance
of copying?
> - Reduce hashing time on large files
no work has been done on this.
> - Footprint on each person's machine should be a function of what they
> have checked out, not the history or what is in the repository
there has been a minor improvement here on the changeset branch, which
is that netsync knows how to pull both "forwards" (starting from the
oldest historical record) and also "backwards", when it realizes it is
fetching a disconnected subgraph. this "backwards" fetching is quite a
bit more efficient when it kicks in, since it matches the direction in
which deltas are kept in the underlying storage layer.
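to make that concrete, here's a rough sketch in C++ (not the actual
netsync or storage code; every name in it is hypothetical) of the work
each direction implies when the store keeps the newest version in full
plus reverse deltas:

#include <cstddef>
#include <iostream>

struct Stats {
  std::size_t stored_deltas_reused = 0;     // sent straight out of the store
  std::size_t reverse_deltas_applied = 0;   // applied only to rebuild versions
  std::size_t forward_deltas_recomputed = 0;
};

// store layout: the full text of revision N-1 (the head) plus one reverse
// delta (i+1 -> i) for each of revisions 0 .. N-2.
struct ReverseDeltaStore {
  std::size_t revisions;
};

// backwards pull: send the full head, then stream the stored reverse deltas
// exactly as they sit in the store.
Stats pull_backwards(const ReverseDeltaStore &store) {
  Stats s;
  s.stored_deltas_reused = store.revisions - 1;
  return s;
}

// forwards pull: send the full oldest revision, then forward deltas. the
// store holds none of these, so each non-head revision is first rebuilt in
// full by walking reverse deltas back from the head, and a forward delta is
// then recomputed for each edge. (an incremental rebuild shortens the walk,
// but still reuses no stored delta.)
Stats pull_forwards(const ReverseDeltaStore &store) {
  Stats s;
  for (std::size_t i = 0; i + 1 < store.revisions; ++i) {
    s.reverse_deltas_applied += store.revisions - 1 - i; // rebuild revision i
    s.forward_deltas_recomputed += 1;                    // delta i -> i+1
  }
  return s;
}

int main() {
  ReverseDeltaStore store{100};
  Stats b = pull_backwards(store);
  Stats f = pull_forwards(store);
  std::cout << "backwards: " << b.stored_deltas_reused << " stored deltas reused, "
            << b.reverse_deltas_applied << " rebuild applications\n"
            << "forwards:  " << f.stored_deltas_reused << " stored deltas reused, "
            << f.reverse_deltas_applied << " rebuild applications, "
            << f.forward_deltas_recomputed << " forward deltas recomputed\n";
}

with 100 revisions the backwards pull reuses all 99 stored deltas
outright, while the naive forwards pull performs 4950 rebuild
applications and recomputes 99 forward deltas.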
but towards the goal you have in mind, it's theoretically feasible to
have the "backwards" fetch stop early, retrieving only a small-ish
subgraph near the heads of a branch, perhaps all nodes under the least
dominator of the current heads. I say theoretically rather than
practically, because we haven't built any such feature yet.
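for concreteness, here's a rough sketch in C++ of how such a cutoff
could be picked (this is not monotone code; the graph, the names and
the single-root assumption are all hypothetical): compute for each
revision the set of revisions that dominate it, intersect those sets
across the current heads, and take the common dominator nearest the
heads:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <map>
#include <set>
#include <string>
#include <vector>

using Revision = std::string;
using ParentMap = std::map<Revision, std::vector<Revision>>; // node -> parents

int main() {
  // example history:  A -> {B, C} -> D -> {E, F};  the heads are E and F.
  ParentMap parents = {
    {"A", {}}, {"B", {"A"}}, {"C", {"A"}},
    {"D", {"B", "C"}}, {"E", {"D"}}, {"F", {"D"}},
  };
  std::vector<Revision> heads = {"E", "F"};

  // topological order: every parent placed before its children.
  std::vector<Revision> order;
  std::set<Revision> placed;
  while (order.size() < parents.size())
    for (auto const &[node, ps] : parents) {
      if (placed.count(node)) continue;
      bool ready = true;
      for (auto const &p : ps) if (!placed.count(p)) { ready = false; break; }
      if (ready) { order.push_back(node); placed.insert(node); }
    }

  // dominators(n) = {n} union intersection of dominators(parent) over parents;
  // a dominator of n lies on every path from the root down to n.
  std::map<Revision, std::set<Revision>> dominators;
  for (auto const &node : order) {
    std::set<Revision> d;
    bool first = true;
    for (auto const &p : parents.at(node)) {
      if (first) { d = dominators[p]; first = false; continue; }
      std::set<Revision> tmp;
      std::set_intersection(d.begin(), d.end(),
                            dominators[p].begin(), dominators[p].end(),
                            std::inserter(tmp, tmp.begin()));
      d = tmp;
    }
    d.insert(node);
    dominators[node] = d;
  }

  // common dominators of all heads; the one latest in topological order is
  // the least dominator, i.e. the natural cutoff for a shallow fetch.
  std::set<Revision> common = dominators[heads.front()];
  for (auto const &h : heads) {
    std::set<Revision> tmp;
    std::set_intersection(common.begin(), common.end(),
                          dominators[h].begin(), dominators[h].end(),
                          std::inserter(tmp, tmp.begin()));
    common = tmp;
  }
  Revision cutoff;
  for (auto const &node : order)
    if (common.count(node)) cutoff = node;

  std::cout << "shallow-fetch cutoff: " << cutoff << "\n"; // prints D
}

on the example graph the cutoff comes out as D, so a shallow pull would
fetch only D, E and F rather than the whole history.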
-graydon