From: Eric Blake
Subject: Re: [Qemu-devel] [PATCH 4/4] json-streamer: Limit number of tokens in addition to total size
Date: Thu, 29 Oct 2015 17:35:38 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

On 10/29/2015 12:27 PM, Markus Armbruster wrote:
>> Sounds like we have some quadratic (or worse) scaling in the parser.
>> Worth fixing some day, but I also agree that we don't have to tackle it
>> in this series.
> 
> I believe it's linear with a criminally negligent constant (several KiB
> per token).  The first hog is actually fairly obvious: we use one QDict
> per token.

Wow.  I just read through the code, and you're right.  We are passing
around QDicts right and left (one per token, with 4 keys, which means
several mallocs per token), and then creating a QList of those tokens.
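
For concreteness, each token ends up as roughly this (my paraphrase of
json-streamer.c; key names from memory, so treat it as illustrative
rather than a verbatim quote):

    /* One QDict per token: the dict itself plus a QObject per key,
     * so several mallocs for every token the lexer emits. */
    QDict *dict = qdict_new();
    qdict_put(dict, "type", qint_from_int(type));     /* token type */
    qdict_put(dict, "token", qstring_from_str(str));  /* token text */
    qdict_put(dict, "x", qint_from_int(x));           /* column */
    qdict_put(dict, "y", qint_from_int(y));           /* line */
    qlist_append(parser->tokens, dict);  /* and a QList of all of them */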

Prior to commit 65c0f1e9, when we were actually incrementing the
refcount of each token at each level of nesting (as part of copying
context, giving definitely quadratic behavior), the refcounts at least
ensured that tokens would be cleaned up at the end.  But I'm hard
pressed to see the refcount of a token going above one in the current
code, which means we gain nothing by using QDict for its reference
counting.  For that matter, JSON parses in a single forward pass - the
code talks about needing to backtrack in different contexts, but JSON
has no ambiguities that require backtracking to try multiple alternate
rules.  It seems the code is overengineered because it was adapted from
a parser for another language, where backtracking across several
alternate parses actually makes sense.
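
To make the "no backtracking" point concrete: one token of lookahead is
always enough in JSON, because the next token's type alone picks the
production.  Something like this dispatch is all a recursive-descent
parser needs (sketch only; the parse_* and peek helpers are hypothetical
names, the JSON_* types are from json-lexer.h):

    /* JSON is LL(1): dispatch on the next token's type, never rewind. */
    static QObject *parse_value(JSONParserContext *ctxt)
    {
        switch (parser_peek_token_type(ctxt)) {  /* hypothetical helper */
        case JSON_LCURLY:  return parse_object(ctxt);   /* { ... } */
        case JSON_LSQUARE: return parse_array(ctxt);    /* [ ... ] */
        case JSON_STRING:
        case JSON_INTEGER:
        case JSON_FLOAT:
        case JSON_KEYWORD: return parse_literal(ctxt);  /* true/false/null */
        default:           return NULL;                 /* parse error */
        }
    }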

I suspect that using a C struct per token, and a C array of those
structs, would go a long way towards alleviating memory abuse per token.
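
Something along these lines, maybe (rough sketch, names invented; not a
worked-out patch):

    #include <glib.h>
    #include <string.h>

    /* Hypothetical replacement: one malloc per token instead of a
     * QDict plus a QObject per field. */
    typedef struct JSONToken {
        int type;       /* token type from the lexer */
        int x, y;       /* source column and line, for error messages */
        char str[];     /* token text, stored inline */
    } JSONToken;

    static JSONToken *json_token_new(int type, int x, int y,
                                     const char *lexeme, size_t len)
    {
        JSONToken *token = g_malloc(sizeof(*token) + len + 1);

        token->type = type;
        token->x = x;
        token->y = y;
        memcpy(token->str, lexeme, len);
        token->str[len] = '\0';
        return token;
    }

    /* Collect these in one growable array (say, a GPtrArray), so
     * cleanup is a single loop instead of QObject refcounting. */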

Are you tackling that, or would you like me to take a stab at it while
you work on flushing my pending qapi patches?

> 
>> I'm assuming you temporarily patched check-qjson to use larger constants
>> when you hit your ~100K token testing?  Because I am definitely seeing a
>> lot of execution time spent on large_dict when running tests/check-qjson
>> by hand, in relation to all the other tests of that file, but not
>> minutes worth.  Care to post the diff you played with?
> 
> I tested on a slow machine.

If you stuck with the default gen_test_json(gstr, 10, 100), I guess it
was the malloc pressure on a low-memory machine that made it so much
slower than what I'm seeing.

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
