octave-maintainers

Re: [fem-fenics] MPI parallelisation


From: Marco Vassallo
Subject: Re: [fem-fenics] MPI parallelisation
Date: Mon, 28 Jul 2014 08:53:25 +0200




On Mon, Jul 28, 2014 at 1:13 AM, Eugenio Gianniti <address@hidden> wrote:

On 16 Jul 2014, at 14:46, Eugenio Gianniti <address@hidden> wrote:

> Dear all,
>
> I am currently stuck on the implementation of MPI parallelisation in fem-fenics. I wrote some code for the parallel assembly of matrices, but if I try to run the examples it crashes with DOLFIN internal errors even before this new code is executed. I will quickly recall what I know about the issue:
>

Moreover, I also tried to fix the problem of meshes defined with the msh package. DOLFIN provides a handy method to distribute the mesh, but marked subdomains are not supported in version 1.3.0: indeed, Mesh.oct does mark a subdomain, but I cannot figure out why it does, so I need Marco to explain some details of that, please.

Hi Eugenio,

we mark the subdomains in the mesh.oct files in order to be consistent with the mesh representation in the msh package. In fact, the (p, e, t) representation contains this information, so we keep it in fem-fenics as well. I agree with you that it is not widely used, but, for example, the msh_refine function needs it in order to return to Octave a refined mesh with all the subdomains available (if they were present in the non-refined mesh).
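As a rough sketch of the convention being discussed (assuming the msh package follows the usual PDE-toolbox-style (p, e, t) layout, where the last row of t carries the subdomain label of each element; the mesh data and helper names below are purely illustrative, not taken from fem-fenics):

```python
# Illustrative (p, e, t) triangle mesh, assuming the PDE-toolbox-style
# layout: p holds node coordinates, t holds 3 vertex indices per
# triangle plus, in its last row, the subdomain label of each element.

# A tiny two-triangle mesh on the unit square, split into two subdomains.
p = [[0.0, 1.0, 1.0, 0.0],   # x-coordinates of the 4 nodes
     [0.0, 0.0, 1.0, 1.0]]   # y-coordinates of the 4 nodes

t = [[1, 1],    # first vertex of each triangle (1-based, Octave style)
     [2, 3],    # second vertex
     [3, 4],    # third vertex
     [1, 2]]    # last row: subdomain label of each triangle


def subdomain_labels(t):
    """Return the subdomain label of each element (last row of t)."""
    return t[-1]


def elements_in_subdomain(t, label):
    """0-based indices of the elements carrying a given subdomain label."""
    return [i for i, s in enumerate(t[-1]) if s == label]


print(subdomain_labels(t))           # -> [1, 2]
print(elements_in_subdomain(t, 2))   # -> [1]
```

This is why a refinement routine such as msh_refine must carry the markers through: if the last row of t were dropped, the refined mesh handed back to Octave would lose the subdomain information present in the original mesh.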

What do you mean when you say that they are not supported in FEniCS 1.3.0? By the way, FEniCS 1.4.0 is now available, so we should later check whether fem-fenics is compliant with it.

HTH

marco


Eugenio

