
Re: [Gomp-discuss] CIL representation ...


From: Lars Segerlund
Subject: Re: [Gomp-discuss] CIL representation ...
Date: Wed, 12 Mar 2003 15:42:55 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.2.1) Gecko/20030311 Debian/1.2.1-10


 OK, I'll try to comment as we go along.

Biagio Lucini wrote:
On Wed, 12 Mar 2003, Sébastian Pop wrote:


12.6.7 defines volatile reads and writes, which is exactly what we have
to deal with when OpenMP says shared variables: in other words, variables
visible across all threads.


I think (tell me if I am wrong) that for a shared memory system (I believe
that we can't be "general" to the extent of tackling all concurrency
models in one go) the general building blocks are
a) shared variables
b) private variables
c) locks
d) synchronisation


Yes, these are the building blocks. However, the representation in a DAG or similar (SSA form) is very likely not to follow this format: in order to do synchronisation and optimisation on the parallel code and sections you might want to rearrange things a bit, which I believe would be hard to do in the CIL format.


This, together with the fork/join mechanism, is all a thread has to know
about. And once you speak in these terms, you should be able to tackle any
SM/NUMA-based parallel paradigm, be it OpenMP, POSIX threads, whatever.

Symmetric, yes. NUMA, a little more doubtful, since then you have to make considerations about the cost of access to different variables (the NON-UNIFORM part).

As a subcase, you should be able to build any OpenMP functionality with a
combination of those. I don't know if this is a minimal subset (you may
argue that you can deal with shared variables in terms of locks, for
instance), but it should be a good compromise between completeness and
complexity of implementation. POSIX already has those built in, and there
is a POSIX threads library available for Linux, so my suggestion goes along
the line of "posixising" OpenMP or whatever SM scheme. The suggestion is
as follows:
1) in the frontend phase, we collect and organise this information
2) in the GENERIC-GIMPLE phase, we build threads as structures whose
components are the information collected in (1) and the code that the
thread must execute
3) optimising -> not yet a clue for it :-)

There are quite a lot of clues in Diego's paper: algorithms in pseudo-code operating on DAGs.


At the moment we lack all the code for doing anything. Some time ago, I
proposed to simulate (1) and (2) with some script (Perl, Python,
Ruby, whatever the interested people are more familiar with). I think that
just by trying to implement a design you realise what is wrong with it, so I
bring forward that proposal again.


Now this is a good proposal. However, perhaps we can use C and the GENERIC definitions from the gcc source tree, and code the algorithms for operations on the tree in C (in order to reuse them later), while doing the framework (frontend, and backend, probably only dumping of pseudo-code/trees) in any of the suggested languages.

I do think that perhaps we could prototype some actions without thinking about GENERIC. Now to the really great question: which scripting language?

I would like to have some Tk available, so I would propose Perl or Tcl, but I don't know what you people are familiar with?
 Suggestions welcome.


Furthermore, it defines the behaviour of an
optimising compiler with respect to transformations on CIL.


When I read the paper, I found that it imposes a set of constraints on what the compiler may do, and says quite a lot about what the compiler has to ensure; and it's precisely to enforce constraints of this form that a graph is the better representation. That is, it's a functional specification, which I consider on par with the OpenMP spec: it doesn't go into implementation details.


This could be a starting point for (3), though at a first reading
yesterday evening (maybe I was sleepy?) I could not find anything interesting.


As I said earlier, I looked at silicon compilation (which uses the SSA form in order to handle concurrency and timings at the chip level), and they have quite a few optimisations which I think we can adapt.

Question: how does the actual optimising phase of gcc deal with POSIX
threads (if it deals with them at all)?


It doesn't; it's a library and totally ignored by the compiler. Thus you will get interesting behaviour using GNU threads (1 vs 2), Linux threads (native), or a number of different libraries.

Biagio



_______________________________________________
Gomp-discuss mailing list
address@hidden
http://mail.nongnu.org/mailman/listinfo/gomp-discuss





