[gnugo-devel] GnuGo trains NeuroGo (was: Bug report gnugo-3.3.8)
From: Arend Bayer
Subject: [gnugo-devel] GnuGo trains NeuroGo (was: Bug report gnugo-3.3.8)
Date: Mon, 18 Nov 2002 15:40:36 +0100 (CET)
Markus Enzenberger wrote:
(quite a while ago...)
> I already set level to 0. If you say that GnuGo was already using
> its persistent reading cache, then there is probably not much room
> for gaining speed.
>
> Right now, GnuGo manages fewer than 3-5 calls to examine_position
> per second at level 0 on my 1GHz Duron. To make NeuroGo training
> feasible I would need at least 10-20 evaluations per second.
In case you are still interested in doing this:
1. The speed-ups by Dave to accumulate_influence() should have a big
impact on level 0. (In GNU Go since version 3.3.9.)
2. You should compile GNU Go with MAX_BOARD = 9, because all our
data structures, board loops, etc. are optimized for the common case
where the board size is actually equal to MAX_BOARD.
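For point 2, the build might look like the following sketch. This assumes
MAX_BOARD ends up as a plain #define in config.h after running configure;
the exact file and default value can differ between versions, so check your
own tree first:

```shell
# Sketch: build GNU Go with a 9x9 board baked in.
# Assumption: configure generates config.h containing "#define MAX_BOARD 19".
./configure
sed -i 's/#define MAX_BOARD 19/#define MAX_BOARD 9/' config.h
make
```

If your version exposes a configure option for the board size instead,
prefer that over editing the generated header.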
3. I only now realized that GNU Go does not fully benefit from
persistent caching the way you are using it. What you should do is
eliminate all calls to purge_persistent_reading_cache() and
purge_persistent_owl_cache() (they are called during examine_position),
and instead call them yourself once whenever you play a move for real.
The reason for this is that the purge_persistent_..._cache functions
throw out entries that are currently invalid but could become valid
again once you undo the move.
4. There may very well be bottlenecks at level 0 that we are not
aware of at all, because we almost exclusively test at the default
level 10.
Arend