Re: [Monotone-devel] Speedup chances


From: Jack Lloyd
Subject: Re: [Monotone-devel] Speedup chances
Date: Fri, 2 May 2008 15:17:48 -0400
User-agent: Mutt/1.5.11

On Fri, May 02, 2008 at 09:05:06PM +0200, Christof Petig wrote:
> 
> just some notes for myself on the performance problems with OE:
> 
> - hex_decode is used extensively by roster.cc: parse_marking (and expensive)
> - get_roster_version is taking 90% of the time
> - hex_decode is taking 25% of the time
> - hex_encode is taking 15% of the time (both use malloc extensively)
> 
> Most probably a hand-coded hex decoder for a fixed length of 40 digits
> would not take 10000 cycles per call 8-O

Worth noting that transforms.cc is (was? I don't have a checkout
handy) creating a new Pipe/Filter every time it ran a hex/base64
transform. (And, IIRC, there was a comment to the effect of "Yeah,
this is really slow, but it's easy, and if it turns out to be a
problem maybe it'll get fixed".)

OTOH, for fixed-length inputs/outputs you could do a hella fast
encoding by just dropping a 40 byte buffer onto the stack and encoding
directly into it, avoiding any dynamic memory allocation. (BTW,
Hex_Encoder::encode and Hex_Decoder::decode can do the basic transform
(just your typical table lookup implementation, nothing clever), so
the only code you need around them is a for loop.)
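
Something along those lines, roughly (just a sketch, assuming 40-digit
SHA-1 ids; encode_hex_id/decode_hex_id are made-up names, not
monotone's or Botan's actual functions):

#include <cstddef>
#include <cstring>
#include <stdexcept>

static const char HEX_DIGITS[] = "0123456789abcdef";

// Encode 20 raw bytes into a caller-supplied 40-char buffer (e.g. on
// the stack), so there is no dynamic allocation at all.
inline void encode_hex_id(const unsigned char in[20], char out[40])
{
    for (std::size_t i = 0; i != 20; ++i)
    {
        out[2*i]   = HEX_DIGITS[in[i] >> 4];
        out[2*i+1] = HEX_DIGITS[in[i] & 0x0f];
    }
}

// Decode 40 hex digits into 20 raw bytes; your typical table lookup,
// nothing clever.
inline void decode_hex_id(const char in[40], unsigned char out[20])
{
    static signed char table[256];
    static bool table_built = false;
    if (!table_built)   // lazy init; a real version would build this statically
    {
        std::memset(table, -1, sizeof(table));
        for (int c = '0'; c <= '9'; ++c) table[c] = (signed char)(c - '0');
        for (int c = 'a'; c <= 'f'; ++c) table[c] = (signed char)(c - 'a' + 10);
        for (int c = 'A'; c <= 'F'; ++c) table[c] = (signed char)(c - 'A' + 10);
        table_built = true;
    }
    for (std::size_t i = 0; i != 20; ++i)
    {
        signed char hi = table[(unsigned char)in[2*i]];
        signed char lo = table[(unsigned char)in[2*i+1]];
        if (hi < 0 || lo < 0)
            throw std::runtime_error("bad hex digit in id");
        out[i] = (unsigned char)((hi << 4) | lo);
    }
}

The caller just puts char buf[40] / unsigned char raw[20] on the stack
and passes it in, so there's no malloc anywhere on the fast path.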

Do you have a sense of how many unique values are being encoded/decoded?
Wondering if memoization would help. (OTOH a hash_map lookup might
actually be slower than just re-encoding it.)
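
If it did turn out to be worth trying, a minimal sketch might be
(decode_hex_cached is a made-up name, std::map stands in for whatever
hash_map is available, and it reuses the decode_hex_id sketch above):

#include <map>
#include <string>
#include <vector>

// Cache decoded ids keyed by their hex form; only a win if the same
// 40-digit ids come back often enough that the lookup is cheaper than
// just re-decoding 20 bytes.
const std::vector<unsigned char> & decode_hex_cached(const std::string & hex)
{
    static std::map<std::string, std::vector<unsigned char> > cache;
    std::map<std::string, std::vector<unsigned char> >::iterator i = cache.find(hex);
    if (i != cache.end())
        return i->second;
    std::vector<unsigned char> raw(20);      // assumes hex.size() == 40
    decode_hex_id(hex.c_str(), &raw[0]);     // the fixed-length decoder above
    return cache.insert(std::make_pair(hex, raw)).first->second;
}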

-Jack



