espressomd-users

Re: [ESPResSo-users] Problem MPI+GPU using LBM


From: Axel Arnold
Subject: Re: [ESPResSo-users] Problem MPI+GPU using LBM
Date: Mon, 01 Jul 2013 11:56:51 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130620 Thunderbird/17.0.7

Yes, in the master branch the LBGPU code has been pretty much reworked. I just noticed that 3.2 is not on the build server, so it may well be that we overlooked some errors there. So, can you check out the git version ( git clone git://github.com/espressomd/espresso.git )?
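In case it helps, here is a rough sketch of checking out and building the development version with CUDA support; the bootstrap/configure steps and the --with-cuda flag are just the usual autotools route and may need adjusting for your setup:

git clone git://github.com/espressomd/espresso.git
cd espresso
./bootstrap.sh            # generate the configure script
./configure --with-cuda   # enable the GPU (CUDA) code
make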

Thanks,
Axel

On 07/01/2013 11:45 AM, Markus Gusenbauer wrote:
Hi Axel,

I am using 3.2.0 (Latest release) from the ESPResSo download section. Is there a later release on git?

Markus



On 01.07.2013 11:35, Axel Arnold wrote:
... meaning, the current git master...

Axel

On 07/01/2013 11:34 AM, Axel Arnold wrote:
Hi Markus,

which version are you using? A number of restructuring efforts are currently going on in the LBGPU code, so it is quite important that you use the latest version.

Axel

On 07/01/2013 11:17 AM, Markus Gusenbauer wrote:
Hi all,

I've tried to run a simple simulation using 2 CPUs + GPU. I have a channel with lbfluid, and from the left I impose a certain velocity. Without lbboundary the simulation runs fine; as soon as I add an lbboundary it crashes:

[mgusenbauerMint13:12827] *** An error occurred in MPI_Bcast
[mgusenbauerMint13:12827] *** on communicator MPI_COMMUNICATOR 3
[mgusenbauerMint13:12827] *** MPI_ERR_TRUNCATE: message truncated
[mgusenbauerMint13:12827] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 12827 on
node mgusenbauerMint13 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------


Here is the tcl-script:


setmd time_step 0.1
setmd skin 0.2
thermostat off

setmd box_l 40 40 100

# four planar walls bounding the channel in x and y (flow along z)
lbboundary wall normal 1 0 0 dist 0.5 type 501
lbboundary wall normal -1 0 0 dist -39.5 type 501
lbboundary wall normal 0 1 0 dist 0.5 type 501
lbboundary wall normal 0 -1 0 dist -39.5 type 501


# GPU lattice-Boltzmann fluid
lbfluid gpu grid 1 dens 1.0 visc 1.5 tau 0.1 friction 0.5

set i 0
while { $i < 100 } {
    puts "$i / 100 \r"

    # prescribe the inlet velocity on the z = 0 plane of fluid nodes
    for { set iii 0 } { $iii < 40 } { incr iii } {
        for { set jjj 0 } { $jjj < 40 } { incr jjj } {
            for { set kkk 0 } { $kkk < 1 } { incr kkk } {
                lbnode $iii $jjj $kkk set u 0.0 0.0 0.1
            }
        }
    }

    integrate 1
    incr i
}


Same simulation works fine using MPI+CPU. Any ideas?
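For comparison, the CPU run uses the identical script with only the lbfluid line changed; a sketch, assuming the CPU implementation is used when the gpu keyword is left out:

lbfluid grid 1 dens 1.0 visc 1.5 tau 0.1 friction 0.5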

Markus

--
JP Dr. Axel Arnold
ICP, Universität Stuttgart
Pfaffenwaldring 27
70569 Stuttgart, Germany
Email: address@hidden
Tel: +49 711 685 67609