RE: [ESPResSo] Diffusion of a probe particle


From: Limbach, Hans Joerg, LAUSANNE, NRC-FS
Subject: RE: [ESPResSo] Diffusion of a probe particle
Date: Fri, 10 Aug 2007 17:51:05 +0200

Hi again,

Some clarifying words on the MSD (Mean Square Displacement) in Espresso (hopefully):

There are two implementations of the MSD in Espresso:
1) g123, which calculates the MSD of a polymer system (implemented in
calc_g123; statistics_chains.c). This calculates several different MSDs
related to polymeric systems and was implemented by Bernward. For single
particles it can be used by setting the chain length to 1; otherwise it
is tied to a specific chain topology.

2) vanhove, which calculates the van Hove autocorrelation function and,
as a byproduct, also the MSD (implemented in calc_vanhove;
statistics.c). This is the method of choice for arbitrary systems (see
the sketch below).
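
For illustration, a minimal Tcl sketch of the configs/vanhove route. The
particle type (0), the binning parameters and the sampling interval are
made-up placeholders, and the exact argument list of "analyze vanhove"
may differ between Espresso versions, so please check the analyze
documentation of your version:

  # store configurations for the dynamic analysis
  for {set i 0} {$i < 1000} {incr i} {
      integrate 100      ;# sampling interval: adapt to your system
      analyze append     ;# push the current configuration onto the configs array
  }
  # van Hove correlation for particles of type 0, binned on r in [0, 10]
  # with 100 bins; the MSD comes back as part of the result list
  set result [analyze vanhove 0 0.0 10.0 100]

For the chain-based route (1) one would first have to define the
topology with something like "analyze set chains 0 [setmd n_part] 1"
(chain length 1); the exact call again depends on your version.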

Both use the configs array to calculate dynamic quantities. This can
indeed become very slow when the configs array is large. But I do not
see a way to speed this up on the implementation side (if you have an
idea then let me know, since of course I would also be happy if the
vanhove analysis ran faster). I guess even if you comment out the
vanhove stuff and just calculate the MSD it will not become
significantly faster. I do NOT recommend mixing that with the
integrator!!! (The distances calculated here are different distances
than the ones being calculated for forces or energies. So there are no
synergies possible!!!)
The algorithm should scale linearly with the number of particles but
quadratically with the number of stored configurations, since every
stored configuration is compared with every other one (all time origins
for all lag times).

Tips for speed:
Store only the coordinates of the particles you are interested in, and
only at time intervals that are interesting for you. This of course
means that the analysis has to be done after the simulation (see the
sketch below).
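
A minimal sketch of that: write out only the probe particle's trajectory
for later post-processing. The particle id, file name and sampling
interval are arbitrary placeholders:

  set probe_id 0                 ;# hypothetical id of the probe particle
  set traj [open "probe_traj.dat" "w"]
  for {set i 0} {$i < 10000} {incr i} {
      integrate 100              ;# store only every 100 steps
      # "part ... print pos" returns the unfolded position, which is
      # what the MSD needs
      puts $traj "[setmd time] [part $probe_id print pos]"
  }
  close $traj

The MSD can then be computed from this file with any external tool after
the run.
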
If you need the MSD over several decades in time it makes sense to
split the calculation: e.g. create a trajectory for 1000 tau in 1 tau
steps and a second trajectory for 100000 tau in 100 tau steps. Calculate
the vanhove (MSD) for the two and combine them in your final plot.
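
A rough sketch of that splitting, again with made-up intervals and
assuming (please check your version) that "analyze remove" without
arguments empties the configs array:

  # short-time series: store every 1 tau (here assumed to equal 100 steps)
  for {set i 0} {$i < 1000} {incr i} { integrate 100; analyze append }
  set msd_short [analyze vanhove 0 0.0 10.0 100]
  analyze remove          ;# drop the stored configurations
  # long-time series: store every 100 tau
  for {set i 0} {$i < 1000} {incr i} { integrate 10000; analyze append }
  set msd_long [analyze vanhove 0 0.0 100.0 100]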

Further Tips:
Watch out that your system has no center of mass motion!
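
If there is a drift, it can be removed at the Tcl level; a minimal
sketch, assuming equal particle masses and contiguous particle ids
0 .. n_part-1:

  set n [setmd n_part]
  set vx 0.0; set vy 0.0; set vz 0.0
  for {set i 0} {$i < $n} {incr i} {
      set v [part $i print v]
      set vx [expr {$vx + [lindex $v 0]}]
      set vy [expr {$vy + [lindex $v 1]}]
      set vz [expr {$vz + [lindex $v 2]}]
  }
  set vx [expr {$vx/$n}]; set vy [expr {$vy/$n}]; set vz [expr {$vz/$n}]
  # subtract the mean velocity from every particle
  for {set i 0} {$i < $n} {incr i} {
      set v [part $i print v]
      part $i v [expr {[lindex $v 0]-$vx}] [expr {[lindex $v 1]-$vy}] \
                [expr {[lindex $v 2]-$vz}]
  }

Alternatively, subtract the center-of-mass displacement from each stored
configuration during the analysis.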

ToDo:
If you think this could be helpful, one could extend the configs
concept to store only a selected set of particles instead of the whole
system, which could help with on-the-fly calculations of dynamic
variables in general.

Best regards,
Hanjo 

> -----Original Message-----
> From: address@hidden 
> [mailto:address@hidden On Behalf Of Olaf Lenz
> Sent: lundi, 6. août 2007 20:56
> To: ESPResSo users' mailing list
> Subject: Re: [ESPResSo] Diffusion of a probe particle
> 
> Hi!
> 
> Axel Arnold wrote:
> >> so that I get a very significant time overhead. The only solution to
> >> this that I see would be to implement the measurement of the MSD in C
> >> so that the simulation does not have to return to the Tcl level. This
> >> would however neither be simple nor in the spirit of ESPResSo, so I
> >> would like to avoid this.
> > 
> > Well, why is it not in the spirit of Espresso?
> 
> It is not in the spirit of ESPResSo, as it would mean to add 
> a measurement to the main loop (i.e. into the integrate 
> command). All other measurements are explicit Tcl commands 
> that are called outside the integrate command. So far, 
> Espresso doesn't have any definition of an interface for this 
> kind of thing.
> 
> If I wanted to implement it in C, I see the problem of making it
> flexible enough: maybe one doesn't want to trace the MSD of all
> particles, but just a subset - or several distinct subsets. Or maybe
> one doesn't want to trace the MSD in each step, but only every x-th
> step. In my case, I do not want to measure the total MSD, but a
> direction-dependent MSD, etc. etc.
> Usually, this kind of problem can be handled on the Tcl script level.
> Here, it would be necessary to implement all of this in C. Note that I
> have not touched the issue of parallelization at all so far...
> 
> > I mean, the MSD would be a nice feature. 
> 
> BTW, the MSD is computed as a by-product when computing the 
> van-Hove autocorrelation function. However, as this function 
> is REALLY slow for large datasets, I'm currently doing it in 
> Tcl, anyway.
> 
> > Yes, they are. Currently, we build up the Verlet list according to one
> > interaction range for all particles, i.e., the interaction range is
> > dictated by the most long-ranged interaction. That is not ideal:
> > assuming you have two huge colloids in your system, the interaction
> > range for all particles will be twice the size of the two colloids.
> > Even for particles that do not interact at all, you actually get the
> > largest necessary Verlet cutoff.
> 
> Indeed. That was one of the first things I noticed about 
> ESPResSo, as I was working on binary hard-sphere mixtures at 
> that time...
> 
> > How many cells did you then have? Espresso loops over cells, and then
> > over the particles in the cells. Assuming that you have an interaction
> > range of 2, you have in the worst case 500^3 = 125,000,000 cells, which
> > would not even fit into memory. In practice, that doesn't happen,
> > since the number of cells is limited by max_num_cells, but depending
> > on what you put there, the cell-looping can still take quite some
> > time. On the other hand, a small max_num_cells, which is the default,
> > causes large cells, which also makes things really slow.
> 
> Indeed. I have tried several values of max_num_cells (between 100 and
> 10000), to no avail. To my surprise, this didn't seem to influence the
> computing time very much.
> 
> > Well, I wanted to calculate the MSD for completely free particles;
> > there were no interacting ones. So, I simply commented out the force
> > calculation... Not nice, but it helps.
> 
> Ok, this would work for free particles. However, this is of course
> only the first of a number of preparatory experiments, later ones of
> which include obstacles.
> 
> >> Using a large box length, I have also tried to disable the Verlet
> >> lists. From my understanding, this should speed up the simulation,
> >> as no Verlet list update would ever be required and the default cell
> >> lists should be small enough not to contain any neighboring
> >> particles. However, this did not seem to be the case. Instead, the
> >> simulation was significantly slower (factor 3 or so). Can anybody
> >> explain this to me?
> > 
> > Hmm, that is again a question of max_num_cells, and how many cells
> > you actually had. From what you tell, I guess you have the default
> > max_num_cells. Can you try playing around with it, like setting it to
> > 10000 or more?
> 
> As I said, I did that. But still: when I deactivate the 
> Verlet lists, this would mean that we do not need to do any 
> Verlet list update.
> Instead, we would only need to loop over the neighboring cells.
> Well, this might indeed be the problem: there are 27 
> neighboring cells, each of which is probably empty, but still 
> has to be taken into account in the loop. On the other hand, 
> the Verlet list is most probably also empty, but it is only a 
> single list.
> 
> Cheers
>       Olaf


