octave-maintainers

Re: [fem-fenics] MPI parallelisation


From: Eugenio Gianniti
Subject: Re: [fem-fenics] MPI parallelisation
Date: Sun, 3 Aug 2014 12:47:13 +0000


On 31 Jul 2014, at 22:37, Eugenio Gianniti <address@hidden> wrote:


On 31 Jul 2014, at 21:31, Marco Vassallo <address@hidden> wrote:


On 31 Jul 2014, at 20:54, "Eugenio Gianniti" <address@hidden> wrote:
>
> > Hi Eugenio,
> >
> > we mark the subdomains in the mesh.oct files in order to be consistent with the mesh representation in the msh pkg. In fact, the (p, e, t) representation contains this information, so we keep it in fem-fenics as well. I do agree with you that it is not widely used, but, for example, in the msh_refine function it is necessary in order to give back to Octave a refined mesh with all the subdomains available (if they were present in the non-refined mesh).
>
> I noticed that they are also used to apply DirichletBC. Indeed, I currently have parallel assembly working, and running a full Neumann problem yields the same solution in both serial and parallel execution. On the other hand, DirichletBCs do not work in parallel due to the missing markers, and DOLFIN 1.4.0 still does not support them, so problems with Dirichlet boundary conditions cannot be solved in parallel (more precisely, the code runs to the end, but the solution is garbage).
>
> After going through the DOLFIN code, I figured out that dolfin::DirichletBC can also be instantiated with a MeshFunction identifying subdomains as an argument. I would then move the information from the Mesh itself to two MeshFunctions, one for the boundary facets and one for the region identifiers. I wonder where it is better to store such objects. Should I just add them as members of the mesh class, or implement a new class to wrap MeshFunction? Probably with the first approach the only change visible to the user would be a new mesh argument needed by DirichletBC.
>
> Eugenio
>

Hi Eugenio,
it seems that you are making really good progress towards a good solution.

I have two points:
1) Why do you think that the code should work with a separate meshfunction?

A couple of the examples distributed with DOLFIN use this approach. In version 1.3.0 you can find "subdomains", which just shows how to use MeshFunction to store markers. In version 1.4.0 there is also "subdomains-poisson", which further uses such MeshFunctions to define DirichletBCs, although this one is available only as a Python sample. Both examples also run without problems in parallel.
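
To fix ideas, here is a rough C++ sketch in the spirit of those demos (not a verbatim copy of either; it assumes a "Poisson.h" header generated by FFC from a UFL form file, as in the DOLFIN C++ demos):

    // A rough sketch in the spirit of the "subdomains" demos (not a verbatim
    // copy).  Assumes a "Poisson.h" header generated by FFC from a UFL form
    // file, as in the DOLFIN C++ demos.
    #include <dolfin.h>
    #include "Poisson.h"

    using namespace dolfin;

    // Subdomain selecting the left edge of the unit square
    class Left : public SubDomain
    {
      bool inside(const Array<double>& x, bool on_boundary) const
      { return on_boundary && near(x[0], 0.0); }
    };

    int main()
    {
      UnitSquareMesh mesh(32, 32);
      Poisson::FunctionSpace V(mesh);

      // Facet markers live in a MeshFunction, not in the Mesh itself
      MeshFunction<std::size_t> boundaries(mesh, mesh.topology().dim() - 1, 0);
      Left left;
      left.mark(boundaries, 1);

      // DirichletBC built from the MeshFunction and a marker value
      Constant zero(0.0);
      DirichletBC bc(V, zero, boundaries, 1);

      return 0;
    }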

2) Provided that the proposed solution works, I don't think that we should change the user interface. We should look for a convenient way of translating the info from a mesh to a meshfunction when building it. Probably there is some FEniCS method which could do something like this, or we can ask on the FEniCS mailing list.

I don't think I have understood what you are saying. If your point is that the information already present in the (p, e, t) mesh produced by the msh package needs to be extracted and added to the fem-fenics representation, that's what I want to do. Basically, instead of setting markers in the Mesh object I would set the corresponding values of a MeshFunction. Anyway, this object needs to be available when constructing DirichletBC, so I must find a way of passing it to that oct-file. If you meant something else, I need you to explain it once more :).
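
Just to make the intent concrete, a hypothetical helper (not actual fem-fenics code) could look like the sketch below, assuming the per-entity labels have already been extracted from the (p, e, t) data; in parallel the local entity numbering differs from the global one, so a real implementation would also need to remap indices:

    // Hypothetical helper, not actual fem-fenics code: copy per-entity labels
    // (already extracted from the msh (p, e, t) data) into a MeshFunction
    // instead of storing them in the Mesh itself.
    #include <dolfin.h>
    #include <vector>

    dolfin::MeshFunction<std::size_t>
    markers_to_meshfunction(const dolfin::Mesh& mesh,
                            const std::vector<std::size_t>& labels,
                            std::size_t dim)
    {
      // One marker per mesh entity of topological dimension dim,
      // initialised to zero
      dolfin::MeshFunction<std::size_t> mf(mesh, dim, 0);
      for (std::size_t i = 0; i < labels.size() && i < mf.size(); ++i)
        mf[i] = labels[i];
      return mf;
    }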

As a side note, I’ve seen that it is possible to attach a MeshFunction to a Mesh using MeshData, so maybe this could be an interesting alternative. However, I still need to go through the code and figure out if it works in parallel, since in the FEniCS book there’s just a small paragraph about it.

Eugenio

HTH

marco



I pushed to my repository some changesets that allow problems to be solved in parallel via MPI (a rough sketch of the underlying DOLFIN workflow is included below). Namely:
  * ufl.m and all the import_ufl_*.m are executed only by the main process, since they perform IO;
  * in parallel execution meshes are built from the p-e-t input argument and then distributed among processes;
  * matrices and vectors are assembled locally and then gathered to obtain the global ones on the main process;
  * Mesh.oct can now return as output arguments up to two meshfunctions storing facet or cell markers, since in parallel runs Meshes cannot store that information themselves with the current implementation of the DOLFIN library.
Tomorrow I will provide more details on my blog.
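
For context, this is roughly the DOLFIN-side workflow these changesets build on (a sketch only: it assumes a "Poisson.h" generated by FFC from a form file whose linear form has a coefficient named f, and it leaves out the gathering of the assembled matrix and vector onto the main process):

    // Sketch of the DOLFIN assembly these changesets parallelise.  Assumes
    // "Poisson.h" generated by FFC from a form file whose linear form has a
    // coefficient named f.  Under MPI the assembled A and b are distributed
    // objects; the extra step of gathering them onto the main process is not
    // shown here.
    #include <dolfin.h>
    #include "Poisson.h"

    using namespace dolfin;

    int main()
    {
      UnitSquareMesh mesh(32, 32);        // distributed automatically under MPI
      Poisson::FunctionSpace V(mesh);

      // Facet markers kept in a MeshFunction, as discussed above
      MeshFunction<std::size_t> boundaries(mesh, mesh.topology().dim() - 1, 0);
      DomainBoundary whole_boundary;
      whole_boundary.mark(boundaries, 1);

      Constant zero(0.0), source(1.0);
      DirichletBC bc(V, zero, boundaries, 1);

      Poisson::BilinearForm a(V, V);
      Poisson::LinearForm L(V);
      L.f = source;

      // Assemble and apply the boundary condition; this part works unchanged
      // in serial and in parallel runs
      Matrix A;
      Vector b;
      assemble(A, a);
      assemble(b, L);
      bc.apply(A, b);

      return 0;
    }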

Currently, when using MPI, one must use the meshfunction returned by Mesh.oct in order to provide DirichletBC.oct with the markers identifying subsets of the boundary. There are a couple of points that could be discussed. First of all, as of now DirichletBC.oct also accepts a meshfunction identifying subdomains as an argument. I used this approach because I did not find an alternative for the storage of this information: MeshData, the class I mentioned previously, does not help with this task, as far as I understand. Anyway, dolfin::DirichletBC offers a constructor with the same signature as the current DirichletBC.oct, so this is not really unusual.
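
As an illustration of that constructor, a hypothetical sketch (not the actual DirichletBC.oct code) could build one dolfin::DirichletBC per requested marker value, all referring to the same facet meshfunction; note that with the reference-based constructor the caller must keep the function space, the value and the markers alive:

    // Hypothetical sketch, not the actual DirichletBC.oct code: one
    // dolfin::DirichletBC per requested marker value, all based on the same
    // facet MeshFunction.  The reference-based constructor does not take
    // ownership, so V, g and facet_markers must outlive the returned BCs.
    #include <dolfin.h>
    #include <memory>
    #include <vector>

    std::vector<std::shared_ptr<dolfin::DirichletBC>>
    make_bcs(const dolfin::FunctionSpace& V,
             const dolfin::GenericFunction& g,
             const dolfin::MeshFunction<std::size_t>& facet_markers,
             const std::vector<std::size_t>& labels)
    {
      std::vector<std::shared_ptr<dolfin::DirichletBC>> bcs;
      for (std::size_t label : labels)
        bcs.push_back(std::make_shared<dolfin::DirichletBC>(V, g, facet_markers, label));
      return bcs;
    }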

A second aspect is that meshfunction is currently very raw. There is no way to display it and no function to build it as users might need… Moreover, its name is somewhat misleading, as dolfin::MeshFunction is a template class, whilst meshfunction wraps only dolfin::MeshFunction<std::size_t>. For the first issues I think I can easily provide functionality such as importing/exporting a MeshFunction from/to file, a descriptive line for display as in other fem-fenics types, etc. For the latter, I do not know whether it would be meaningful to make meshfunction itself a template class, or to just provide three wrappers for <std::size_t>, <bool> and <double>, which are the most recurring instances in the FEniCS documentation. Or even just leave it as it is and state this in the package documentation.
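
For reference, these are the instantiations in question as they would appear on the DOLFIN side (a sketch; DOLFIN also offers convenience subclasses such as CellFunction<T> and FacetFunction<T>):

    // The MeshFunction instantiations that recur most often in the FEniCS
    // documentation, created over cells or facets of a given mesh.
    #include <dolfin.h>

    void meshfunction_examples(const dolfin::Mesh& mesh)
    {
      const std::size_t D = mesh.topology().dim();

      dolfin::MeshFunction<std::size_t> subdomains(mesh, D);      // region markers
      dolfin::MeshFunction<std::size_t> boundaries(mesh, D - 1);  // facet markers
      dolfin::MeshFunction<bool>        flags(mesh, D);           // e.g. refinement flags
      dolfin::MeshFunction<double>      cell_data(mesh, D);       // per-cell scalar data
    }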

Eugenio
