freepooma-devel

RE: [pooma-dev] Assigning to internal guards


From: James Crotinger
Subject: RE: [pooma-dev] Assigning to internal guards
Date: Wed, 25 Sep 2002 10:29:35 -0600

We considered this, and it was on the list of things to work on someday. If I remember correctly, the plan was to figure out whether the layouts all matched, including the numbers of internal guards, and if so, generate patch iterates that performed the operations on the entire allocated domains, not just the physical domains. I believe we already determine whether layouts match on a physical-domain basis, so this didn't sound like a big extension. Indeed, the simplest form of this would be to check that all operands have exactly the same layout.

Of course, in multiple dimensions you generally don't want internal guards on anything you're not going to stencil, so in practice it isn't clear how often this optimization would really pay off. (Internal guards can consume a lot of memory in 3D.)

        Jim


-----Original Message-----
From: Richard Guenther [mailto:address@hidden]
Sent: Wednesday, September 25, 2002 3:51 AM
To: address@hidden
Subject: [pooma-dev] Assigning to internal guards

Hi!

I'd like to have internal guards computed rather than communicated in
simple cases like

 x.all() = 1.0;

or even

 x(I) = a(I) + b(I);

so that after the operation, x.fillGuards() will do nothing. (Is this
equivalent to having the dirty flag cleared after the operation, or is the
dirty flag overloaded, as I suspect, with the handling of relations?)

I can achieve at least the assignment case by creating a special layout for
x which contains overlapping patches with no guards, but then I get extra
guard communication at other places, which I don't really understand. The
layout is created using a custom partition based on grid partition

OGridPartition<1>:
  blocks_m = [4]
  internalGuards_m:
      upper       1
      lower       1
  num_m = 4
  grid_m = (empty)

the resulting layout is

GridLayout 1 on global domain [-1:65:1]:
   Total subdomains: 4
   Local subdomains: 2
  Remote subdomains: 2
        Grid blocks: [4]
  Global subdomain = {[-1:16:1]: allocated=[-1:16:1], con=0, aff=0, gid=0, lid=0}
  Global subdomain = {[15:32:1]: allocated=[15:32:1], con=0, aff=0, gid=1, lid=1}
  Global subdomain = {[31:48:1]: allocated=[31:48:1], con=1, aff=-1, gid=2, lid=-1}
  Global subdomain = {[47:65:1]: allocated=[47:65:1], con=1, aff=-1, gid=3, lid=-1}
   Local subdomain = {[-1:16:1]: allocated=[-1:16:1], con=0, aff=0, gid=0, lid=0}
   Local subdomain = {[15:32:1]: allocated=[15:32:1], con=0, aff=0, gid=1, lid=1}
  Remote subdomain = {[31:48:1]: allocated=[31:48:1], con=1, aff=-1, gid=2, lid=-1}
  Remote subdomain = {[47:65:1]: allocated=[47:65:1], con=1, aff=-1, gid=3, lid=-1}
 hasInternalGuards_m, hasExternalGuards_m 0 0
 internalGuards_m 0-0
 externalGuards_m 0-0
 gcFillList_m


Does anyone have other/better ideas for reducing communication? I'm still
unable to find where the computation domain for the patches is computed and
where the dirty flag is handled - it seems to be spread over the whole
code...

Any hints?
    Thanks, Richard.

--
Richard Guenther <address@hidden>
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
The GLAME Project: http://www.glame.de/

