Re: [fem-fenics] MPI parallelisation
From: Eugenio Gianniti
Subject: Re: [fem-fenics] MPI parallelisation
Date: Sun, 27 Jul 2014 23:13:19 +0000
On 16 Jul 2014, at 14:46, Eugenio Gianniti <address@hidden> wrote:
> Dear all,
>
> I am currently stuck in the implementation of the MPI parallelisation in
> fem-fenics. I wrote some code for the parallel assembly of matrices, but if I
> try to run the examples it crashes with DOLFIN internal errors even before
> this new code is executed. I quickly recall what I know about the issue:
>
> 1. MPI in itself does not seem to be the cause, since everything works when
> running with mpirun -n 1 (basically I understand that DOLFIN internally checks
> the number of processes, not whether MPI has been initialised, to decide
> whether it needs to “act parallel”; see the minimal sketch after this quoted
> message);
>
> 2. just to be sure, I ran the equivalent C++ implementations of these
> examples, and everything works fine in parallel, too;
>
> 3. trying different examples (in Octave) leads to different errors, spread
> all around DOLFIN’s code base, sometimes even before the call to assemble;
>
> 4. whilst different examples cause different errors, the same example always
> leads to the same error.
>
> From the information I could extract, my guess is that the issue may arise
> from the dynamic loading of the Octave functions that reference DOLFIN code.
> However, I do not know how to further troubleshoot the problem. So I wonder
> if someone has already dealt with a similar issue and has any advice to share
> or if someone more familiar with Octave’s internals can provide me with any
> helpful insight.
>
> Thanks,
> Eugenio
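
To make point 1 above concrete, here is a minimal plain-MPI sketch (not
fem-fenics code, just an illustration) of the distinction between MPI being
initialised and more than one process actually running, which is what DOLFIN
appears to look at:

  // Minimal illustration (plain MPI, not fem-fenics): MPI can be fully
  // initialised while the communicator still contains a single process.
  #include <mpi.h>
  #include <cstdio>

  int main (int argc, char **argv)
  {
    MPI_Init (&argc, &argv);

    int initialized = 0, size = 0;
    MPI_Initialized (&initialized);         // 1 even with mpirun -n 1
    MPI_Comm_size (MPI_COMM_WORLD, &size);  // processes actually running

    std::printf ("MPI initialised: %d, processes: %d\n", initialized, size);

    MPI_Finalize ();
    return 0;
  }
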
I have worked further on this issue and now have an implementation that can
solve a system on a mesh loaded from file, as long as no DirichletBCs are
involved. In this simplified setting, the parallel solution appears to be the
serial one divided by the number of processes.
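
One way to pin down where that factor enters is a sanity check along these
lines, sketched with plain MPI (b_local is a hypothetical stand-in for the
locally owned entries of the assembled right-hand side; the actual values would
come from the fem-fenics assembly):

  // Hedged sanity-check sketch: sum the locally owned RHS entries on every
  // process, reduce them, and compare with the serial run.  A mismatch by
  // exactly the number of processes would suggest that contributions are
  // being averaged or scaled instead of summed.
  #include <mpi.h>
  #include <cstdio>
  #include <numeric>
  #include <vector>

  int main (int argc, char **argv)
  {
    MPI_Init (&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    MPI_Comm_size (MPI_COMM_WORLD, &size);

    // stand-in for the locally owned entries of the assembled RHS vector
    std::vector<double> b_local = {1.0, 2.0, 3.0};

    double local_sum = std::accumulate (b_local.begin (), b_local.end (), 0.0);
    double global_sum = 0.0;
    MPI_Allreduce (&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD);

    if (rank == 0)
      std::printf ("global RHS sum on %d processes: %g\n", size, global_sum);

    MPI_Finalize ();
    return 0;
  }
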
Moreover, I also tried to fix the problem of meshes defined with the msh
package. DOLFIN provides a handy method to distribute the mesh, but marked
subdomains are not supported in version 1.3.0: indeed, Mesh.oct does mark a
subdomain, but I cannot figure out why it does, so I would ask Marco to explain
some details of that, please.
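
For reference, a minimal stand-alone sketch of that distribution step, written
against my reading of the DOLFIN 1.3 C++ API (the tiny MeshEditor mesh is only
a stand-in for what Mesh.oct builds from the msh data, and whether that
construction should happen on every process or on one only before distributing
is part of what I would like to clarify):

  // Sketch of the DOLFIN 1.3 mesh distribution step, not the actual
  // fem-fenics code.  Any markers stored in the mesh domains are not
  // carried over by the distribution in 1.3.0.
  #include <dolfin.h>
  #include <cstdio>

  int main (int argc, char **argv)
  {
    dolfin::init (argc, argv);   // let DOLFIN set up MPI/PETSc

    dolfin::Mesh mesh;

    // Build a tiny mesh locally, as Mesh.oct does with the msh data.
    // Here it is built on process 0 only, mirroring what the built-in
    // meshes do; whether a replicated mesh is also acceptable is one of
    // the details still to be checked.
    if (dolfin::MPI::process_number () == 0)
      {
        dolfin::MeshEditor editor;
        editor.open (mesh, 2, 2);   // topological and geometric dimension
        editor.init_vertices (4);
        editor.init_cells (2);
        editor.add_vertex (0, 0.0, 0.0);
        editor.add_vertex (1, 1.0, 0.0);
        editor.add_vertex (2, 1.0, 1.0);
        editor.add_vertex (3, 0.0, 1.0);
        editor.add_cell (0, 0, 1, 2);
        editor.add_cell (1, 0, 2, 3);
        editor.close ();
      }

    // The "handy method": partition and distribute the mesh.
    if (dolfin::MPI::num_processes () > 1)
      dolfin::MeshPartitioning::build_distributed_mesh (mesh);

    std::printf ("local cells: %d\n", (int) mesh.num_cells ());
    return 0;
  }
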
Eugenio