
Re: [Gnu-arch-users] Smart server question


From: Aaron Bentley
Subject: Re: [Gnu-arch-users] Smart server question
Date: Thu, 21 Apr 2005 09:03:42 -0400
User-agent: Mozilla Thunderbird 0.6 (X11/20040530)

Szilard Hajba wrote:
> I've read some messages from the archive about this project, but the
> last message on the subject was more than a year ago. Has anything
> happened since then?

Not really.

> But there are some commands that work on the archive which are
> extremely slow. I have made about 20 tags on my imported project.
> Creating the first tag took about 5 minutes (!) and transferred
> several megabytes from the server.

A tag revision is very cheap -- several kilobytes at most.  However, a
tag from another archive also triggers a cacherev by default.  This is
to ensure that the archive's revisions do not depend on external
sources.  Auto-cachereving can be disabled, though.
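
Roughly, the decision looks like this (a hypothetical sketch; the names
are made up, not tla's actual code):

        /* Hypothetical sketch (not tla's source) of the decision that
         * makes the first tag from a foreign archive expensive. */
        #include <stdio.h>
        #include <string.h>

        /* The tag changeset itself is tiny; the cost is archiving a
         * full cached copy of the source revision when it lives in
         * another archive. */
        static int
        tag_needs_auto_cacherev (const char *tag_archive,
                                 const char *source_archive)
        {
          /* Foreign source => cacherev by default, so the new archive's
           * revisions never depend on an external archive being
           * reachable. */
          return strcmp (tag_archive, source_archive) != 0;
        }

        int
        main (void)
        {
          /* prints 1: tagging across archives triggers a cacherev */
          printf ("%d\n",
                  tag_needs_auto_cacherev ("me@example.com--2005",
                                           "you@example.com--2005"));
          /* prints 0: tagging within one archive stays cheap */
          printf ("%d\n",
                  tag_needs_auto_cacherev ("me@example.com--2005",
                                           "me@example.com--2005"));
          return 0;
        }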

> The remaining 19 tags took about 1 second (after I sshed to the
> server and ran the tag commands locally :)

Assuming they were from the same archive, they would have been fairly
quick remotely too.

> Another example is cacherev. If I want to make a cached revision on
> the server, it's much faster to do it locally than over a slow
> network.

A smart server would only need cacherevs to be performed for the
tag-from-external-archive case.  If there's a cacherev or import at
base-0, no further cachereving should be required, because a smart
server can build revisions on demand.
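
To make the "build on demand" point concrete, here's a hypothetical
sketch (illustrative names, not tla internals) of what a smart server
could do instead of storing intermediate cachedrevs:

        /* Hypothetical sketch: with an import or cacherev at base-0,
         * any patch level can be reconstructed by replaying changesets,
         * so stored cachedrevs for intermediate revisions become
         * unnecessary. */
        #include <stdio.h>

        static void
        build_revision_on_demand (const char *version, int patch_level)
        {
          int level;

          /* Start from the full tree stored at base-0 ... */
          printf ("unpack %s--base-0\n", version);

          /* ... then apply each changeset up to the requested level.
           * A real server might cache recent results rather than keep
           * a cachedrev for every revision in the archive. */
          for (level = 1; level <= patch_level; level++)
            printf ("apply %s--patch-%d\n", version, level);
        }

        int
        main (void)
        {
          build_revision_on_demand ("hello--main--1.0", 3);
          return 0;
        }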

> Now the suggestions:

> As I read the archives, I noticed that you had several discussions
> about smart servers, but as far as I know there is no usable
> solution.

> Suggestion 1: why don't you make the transport layer pluggable?

I believe it's better to target the archive layer, not the transport layer. Operations on the transport (pfs) layer are done in terms of filenames. Operations on the archive layer are done in terms of revisions, versions, packages and categories.
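
To illustrate the difference, here is a hypothetical sketch of the kind
of interface each layer exposes (the field names are made up, not taken
from tla's headers):

        /* Transport (pfs) layer: everything is a filename and bytes. */
        struct pfs_ops
        {
          int (*get_file) (void *fs, const char *path, const char *dest);
          int (*put_file) (void *fs, const char *path, const char *src);
          int (*make_dir) (void *fs, const char *path);
        };

        /* Archive layer: requests are phrased in archive terms, which
         * is what a smart server would want to see. */
        struct archive_ops
        {
          int (*list_categories) (void *arch);
          int (*list_versions) (void *arch, const char *category,
                                const char *package);
          int (*get_revision) (void *arch, const char *revision,
                               const char *dest_dir);
          int (*put_changeset) (void *arch, const char *revision,
                                const char *changeset_dir);
        };

A server plugged in at the archive layer gets asked for revisions; one
plugged in at the pfs layer only ever gets asked for files.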

> If it were possible to develop several transport layers, and they
> were usable with your standard tla distribution, then there would be
> no need to wait for a perfect idea before implementing a really cool
> smart server. We could do a simple smart server with an sftp-like
> file system interface and some additional accelerator instructions.

You can always implement your new archive type using the existing pfs
code.  This has the same advantage: you could start simple and add
accelerator instructions later.
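
For instance (again hypothetical, not tla's real code), a new archive
type could start out as a thin wrapper around the pfs-backed routines
and override individual operations as the server learns to accelerate
them:

        /* Hypothetical sketch of "start simple, add accelerators
         * later".  Names are illustrative. */
        #include <stdio.h>

        struct archive_impl
        {
          void (*get_revision) (const char *revision);
          void (*list_revisions) (const char *version);
        };

        /* Generic implementations built on the existing pfs code. */
        static void
        pfs_get_revision (const char *revision)
        {
          printf ("fetch the files for %s over a dumb transport\n",
                  revision);
        }

        static void
        pfs_list_revisions (const char *version)
        {
          printf ("read the directory listing under %s\n", version);
        }

        /* One accelerated operation the smart server understands. */
        static void
        smart_get_revision (const char *revision)
        {
          printf ("ask the server to build and stream %s\n", revision);
        }

        int
        main (void)
        {
          /* Start from the generic, filename-based implementation ... */
          struct archive_impl smart =
            { pfs_get_revision, pfs_list_revisions };

          /* ... and override only what the server accelerates. */
          smart.get_revision = smart_get_revision;

          smart.list_revisions ("hello--main--1.0");
          smart.get_revision ("hello--main--1.0--patch-3");
          return 0;
        }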

> This would call for one more thing, in addition to the above:

> Suggestion 2: extend your abstract filesystem API with optional
> higher-level commands.

This would mess up the layering in all kinds of ways, and make the simple abstract filesystem API quite complicated.

For example, the cacherev command now works like this (simplified):
        create_pristine_dir(dir)
        fs->build_revision(dir, archive, revision)
        fs->put_cached(dir, archive, revision)

The cacherev command is redundant for a smart server, except when the smart server does not have enough information to produce a cacherev. So the current approach looks better to me.

Note that
a) create_pristine_dir and build_revision are the same operation
b) the fs objects used for build_revision and put_cached are unrelated.

You could create an optional method for the fs interface,
cacherev(archive, revision) and modify the above command like this:
        if (fs->cacherev) {
            /* the transport provides an accelerated cacherev */
            fs->cacherev(archive, revision)
        } else {
            /* generic path: build the tree locally, then store it
               as the cached revision */
            create_pristine_dir(dir)
            fs->build_revision(dir, archive, revision)
            fs->put_cached(dir, archive, revision)
        }

Cachedrevs can be implicit for a smart server, as long as it has enough info to build them. So fs->cacherev(archive, revision) is either redundant or doesn't have enough information to complete successfully.

By all means, build a smart server. But there's no significant advantage to building it on top of the pfs layer, and that would mess up the layering.

Aaron



