Re: [ft-devel] Splitting up the GSoC project
From: Werner LEMBERG
Subject: Re: [ft-devel] Splitting up the GSoC project
Date: Tue, 30 May 2017 08:13:34 +0200 (CEST)
> > Although I'm unsure of the feasibility of it, my initial thought
> > was to sum the pixel intensity differences between the baseline
> > and test glyphs. Supposedly, these values could be stored for
> > every glyph (although maybe not for 12 million) in an index, and
> > we could consider these values when displaying the glyphs in the
> > browser.
>
> As for quantifying differences - just summing pixel differences
> isn't a good measure.
I agree.
> The differences are likely to be corner pixels or whole edges; i.e.,
> a corner or a whole edge has moved slightly. I'd probably suggest
> somehow taking connectedness into account - e.g., if you draw a
> square of 4x4 pixels, the 4 corner pixels being slightly
> darker/lighter is less important than a whole edge of 4 pixels
> being darker/lighter. So you weight the difference by how many
> nearby pixels also show differences. [...]
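A rough sketch of that weighting idea, assuming bitmaps are equal-sized 2D lists of pixel intensities (the function name and the `1 + neighbours' weight are illustrative, not part of any existing code):

```python
def weighted_difference(baseline, current):
    """Sum of intensity differences, each weighted by local connectedness.

    A differing pixel counts more when its 8-neighbours also differ, so
    a whole shifted edge scores higher than a few isolated corner pixels.
    """
    h, w = len(baseline), len(baseline[0])
    # Mark every pixel whose intensity differs between the two bitmaps.
    diff = [[baseline[y][x] != current[y][x] for x in range(w)]
            for y in range(h)]
    total = 0
    for y in range(h):
        for x in range(w):
            if not diff[y][x]:
                continue
            # Count differing 8-neighbours; more neighbours -> more weight.
            neighbours = sum(
                diff[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
                and 0 <= y + dy < h and 0 <= x + dx < w)
            total += abs(baseline[y][x] - current[y][x]) * (1 + neighbours)
    return total
```

With this weighting, a 4x4 square whose whole top edge changes scores higher than one where only the 4 isolated corners change, even though both touch 4 pixels.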
I think we need a two-level setup.
. Compute checksums based on the actual glyph bitmaps. Everything
  that's identical will be ignored. This step only needs a list of
  checksums for the baseline and current version; assuming that
  checksums are unique it would be sufficient to test whether a
  `current' checksum is already present in the `baseline' list.
. Compute glyph images for both the baseline and the current version
  (probably using a cache of already computed glyph bitmaps), then
  apply further analysis to derive a `distance' between the baseline
  and current version. A main task of the GSoC project is to find
  good heuristics for this job, and Hin-Tak's suggestions might be
  one possibility. Ideally, the HTML interface would allow selection
  of different heuristics, applying different algorithms. However, I
  have no idea whether this is necessary at all.
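The two-level setup might be sketched as follows; none of these names are FreeType API, the MD5 checksum is only one possible choice, and bitmaps are again plain 2D lists of pixel intensities:

```python
import hashlib

def bitmap_checksum(bitmap):
    """Level 1: checksum of a rendered glyph bitmap."""
    return hashlib.md5(bytes(p for row in bitmap for p in row)).hexdigest()

def changed_glyphs(baseline_sums, current_sums):
    """Glyph indices whose `current' checksum is absent from the baseline.

    `baseline_sums` is an iterable of checksums, `current_sums` maps a
    glyph index to its checksum.  Set membership is O(1) per glyph, so
    the expensive image comparison below only runs on glyphs that
    actually changed.
    """
    known = set(baseline_sums)
    return [idx for idx, chk in current_sums.items() if chk not in known]

# Level 2: a registry of distance heuristics, so the HTML interface
# could offer a choice between several algorithms by name.
HEURISTICS = {}

def heuristic(name):
    """Decorator registering a distance function under `name'."""
    def register(func):
        HEURISTICS[name] = func
        return func
    return register

@heuristic("summed")
def summed_difference(baseline, current):
    # The naive per-pixel sum criticised above; better heuristics
    # (e.g. Hin-Tak's connectedness weighting) would register here too.
    return sum(abs(b - c)
               for brow, crow in zip(baseline, current)
               for b, c in zip(brow, crow))

def distance(name, baseline, current):
    return HEURISTICS[name](baseline, current)
```

The registry keeps level 2 pluggable: the browser front end would only need to pass the selected heuristic's name through to `distance'.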
Werner