
Re: [ESPResSo-users] CUDA error


From: Axel Arnold
Subject: Re: [ESPResSo-users] CUDA error
Date: Thu, 15 Dec 2011 16:25:16 +0100
User-agent: KMail/1.13.5 (Linux/2.6.34.10-0.4-desktop; KDE/4.4.4; x86_64; ; )

Hi!

On Thursday 15 December 2011 15:04:38 Farnoosh Farahpoor wrote:
> Could not allocate gpu memory at lbgpu.cu:1507.
> CUDA error: no CUDA-capable device is detected

The error means what it says: there is no CUDA-capable device accessible to 
you on the computer you are using. There are several possible reasons for that:
- you don't have a CUDA-capable GPU
- you don't have write permissions to /dev/nvidiactl (a quick check is 
sketched after this list)
- your NVIDIA driver module is outdated
- you are on a computing center like Juelich, where you need to explicitly 
ask for GPUs, otherwise the NVIDIA driver disables them for you
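
If you are not sure which of these applies, a quick check of the device nodes 
can be done directly from the Tcl prompt. This is only a minimal diagnostic 
sketch; the /dev/nvidia* node names are the usual ones on Linux and are not 
something ESPResSo itself requires:

# diagnostic sketch: check that the NVIDIA device nodes exist and are writable
# (run this in tclsh or at the Espresso prompt on the node where the GPU job runs)
set nodes [glob -nocomplain /dev/nvidia*]
if {[llength $nodes] == 0} {
    puts "no /dev/nvidia* device nodes found - no GPU, or driver not loaded"
}
foreach dev $nodes {
    puts "$dev writable: [file writable $dev]"
}

If the nodes are missing or not writable, fix the driver or the permissions 
first; ESPResSo can only use a GPU that the driver actually exposes.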

> 2- "myconfig.h" file is like this:
> it seems that there is no LB_BOUNDARIES compiled in and so when I run my
> code I get this error:

No, that is a bug: LB_BOUNDARIES_GPU is simply never written there, even if it 
is compiled in. It will be fixed in the next release; thanks for reporting it.

> LB_BOUNDARIES not compiled in!
>     while executing
> "lbboundary pore center $x_pore $y_pore $z_pore axis $x_axis $y_axis
> $z_axis radius $rad length [expr $leng/2] type 2"
> 
> When I add "#define LB_BOUNDARIES" to the "myconfig.h" file I get rid of this
> error. Could this issue lead to bad results in my simulation?

Yes, because that way you add the boundary to the CPU LB fluid, not the GPU one. 
Most likely you are calling the lbboundary command before the lbfluid gpu command. 
Since the CPU solver is the default, all settings issued before lbfluid gpu go to 
the CPU solver; only after that switch do they reach the GPU.
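
For illustration, a minimal sketch of the intended order (the lbfluid 
parameter values below are placeholders, not taken from your script):

# select the GPU LB fluid first ...
lbfluid gpu agrid 1.0 dens 1.0 visc 1.0 tau 0.01
# ... and only then define the boundary, so it is registered with the GPU solver
lbboundary pore center $x_pore $y_pore $z_pore \
    axis $x_axis $y_axis $z_axis \
    radius $rad length [expr $leng/2] type 2

With this order it is LB_BOUNDARIES_GPU, not LB_BOUNDARIES, that needs to be 
compiled in.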

Many regards,
Axel

-- 
JP Dr. Axel Arnold
ICP, Universität Stuttgart
Pfaffenwaldring 27
70569 Stuttgart, Germany
Email: address@hidden
Tel: +49 711 685 67609


