Re: Getting Started with Hurd-L4
From: Sam Mason
Subject: Re: Getting Started with Hurd-L4
Date: Mon, 25 Oct 2004 23:33:01 +0100
User-agent: Mutt/1.5.6i
Neal H. Walfield wrote:
>> I basically mean a running program/server/module would ask physmem
>> for a new container of the appropriate size;
>
>Containers are filled by a task. (However, there is an optimization
>to create a container and commit a number of pages to it
>simultaneously.)
I think that's what I meant, I'm just picking the wrong words!
>> it would then give this container to the server module, who would put
>> the data into it, or get the data out;
>
>The client does not give the container to the server as it would not
>be able to get it back. The client gives the server access to the
>container, including the right to uninterruptibly lock the container
>for a time period.
That's definitely what I meant. I'm still not quite used to this
terminology yet!
>(This way the server can be sure that the client
>will not remove the container while it is filling it or corrupting it
>before it takes a copy into its cache and allows the client to know
>that it can get its resources back eventually.)
I would assume that a different API would be used for things like
emulating "mmap" then? How would this work for two mutually
untrusting tasks here?
A brief scan over your presentation suggests you cover this; I'll have
a more detailed look in a bit.
>Containers don't need to be dumped. They are designed to be low
>overhead and could be reused (but needn't be).
Right again... I've even skimmed over the code that handles all
that as well!
Is it possible to have several requests using the same "container"
being processed at the same time (I would guess they would have to
come from the same task, because of the locking)? I'm guessing (and
hoping) the answer is: that's up to the API that's layered on top of
this lower-level abstraction.
>> Assuming I've got all that right, there will be quite a few trips
>> through the kernel involved. For a basic file system operation
>> there's going to be, at very least, three processes involved in
>> getting a block to disk - the client, file-system and device driver.
>
>And physmem, of course.
I think I detailed that in my little table; but yes, lots of little
calls into physmem!
>If you go to disk it doesn't matter how slow it is. The fast path is
>where the data is in core.
OK, bad example with reading/writing data. I would imagine that the
system will spend an enormous amount of time in IPC (large in
comparison to a monolithic kernel, but hopefully still small in
comparison to the useful work being done), so anything that can be
done to reduce this overhead would be good. Again, this is coming
from a position of total naivete!
>Unless a thread is bound to a specific CPU, it is unlikely you will
>get any cross cpu IPCs.
?? I'm confused! I read that as "if a thread is bound to a specific
CPU, cross-CPU IPC is likely". Is that right? It seems rather
counter-intuitive!
I would expect that if a thread is bound to a specific CPU there would
be little external state that would need to be communicated to other
CPUs. It may be to do with what we're talking about though. By
"thread" I was really talking about the common flow of execution in
order to get some work (I'm reluctant to use the word "task" here)
done. Although I would expect this work to be processed by several
individual threads each in their own task.
I'd probably expect something like "Scheduler Activations" [1] to
actually be in charge of scheduling the individual threads inside the
tasks.
>You may change CPUs if you are preempted (or
>block) but that is different.
That makes a bit more sense! But even so, I'd still hope that
decisions like that were delegated to something like scheduler
activations. I'm not sure if there has been more recent work on this
subject that I don't know about. Any pointers would be appreciated!
I've just noticed that I'm coming with a lot of preconceived ideas
about how I think this thing should be implemented. Please tell me to
be quiet if it's getting a bit over the top and I'll try and do a
little more listening!
>Once you go to disk overhead from executing code is negligible.
Definitely agree about that, disk IO was a bad example.
>I did a presentation at Waterloo two years ago about the virtual
>memory subsystem. You can find it here [1]. There is no text to go
>along with it so you will have to ask questions but the diagrams
>should help.
There seems to be a recording of your presentation as well; I'll give
it a listen in a bit...
Cheers,
Sam
[1] http://citeseer.ist.psu.edu/anderson92scheduler.html