freetype-devel

Re: [ft-devel] various questions about implementing ttf


From: Jan Bruns
Subject: Re: [ft-devel] various questions about implementing ttf
Date: Mon, 5 Dec 2016 14:24:52 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.4.0


On 05.12.2016 at 07:57, Werner LEMBERG wrote:
>> I'm currently writing on some ft-"equivalent" pascal lib.  I've
>> decided to do this from scratch, 100% fresh code base.

> Why?  This sounds like reinventing the wheel, in particular, ignoring
> FreeType's intelligence forged over 20 years...

> I would rather rewrite *FreeType* in Pascal, probably defining a
> better, uniform, and more modern interface.  I could even imagine the
> opposite to texk, the main implementation today of TeX, which uses a
> very simple Pascal-to-C translator (followed and preceded by various
> patches) to avoid rewriting Knuth's original code...

I personally think it's always good to have choices. Using FT
is currently the only advice you get when it comes to
platform-independent text rendering. Applications typically link
it dynamically, even when they would prefer pixel-wise
consistent output.

A second implementation also helps to verify the specification,
identify documentation holes, and understand what you're doing
(besides just choosing).

In my experience, it's often much easier and faster to rewrite
things completely than to try to mimic the behaviour of some
existing code.

I don't see much reason to change FT's source language. My
computer has a C compiler installed. But even translating just
the FT headers is a lot of work, and the only benefit you get
in exchange is the hope that the lib can be used from one
specific additional language. (I think I've seen non-portable
structures somewhere, but it was probably something that can be
worked around.)

Just think of all these link-tables... they can easily
become much larger than native code doing the same thing.


>> 1. Are there fonts designed for the purpose of debugging
>>    interpreters?

> Not really, AFAIK.  You can however compare the results of FreeType's
> `ttdebug' program with your stuff.  Additionally, Microsoft's VTT is
> also available.

Great. This will help me.

>> 2. Scaling: Really make the interpreter work on device-space units?
>>   Wouldn't it be more natural to scale by pointsize, and probably
>>   have some "intended ppem hint" about a final renderer transform
>>   just for fonts that are really interested in subpixel fitting?

> Well, you have to follow the OpenType specification if you are going
> to interpret TrueType bytecode...

I've already spent days cross-searching the specs for such details.
The sections about scaling FUnits don't mention direction-dependent
device resolutions and the like.

The sections about MPPEM, however, don't explain its meaning.

And no one ever said it's illegal to sample the glyph polygons at
resolutions the interpreter was not informed about. This is
even a recommended method of producing greyscales. So we're
definitely talking about virtual device units.

After reading the specs, you still don't know what the MPPEM result
should be. You can still assume what I said above: scale FUnits
using the point size, and communicate the intended sampling
through MPPEM.

>> 3. CVT-table (from font-file):
>>   spec unclear (apple says 4 bytes per elem),

> Unclear?  4 bytes per element is correct.

According to

https://www.microsoft.com/typography/otspec/cvt.htm

it's an array of FWORDs, which in turn are 16-bit FUnits:

https://www.microsoft.com/typography/otspec/otff.htm


>>   OT/MS say FUnits and tabs often have odd multiples of 2

> CVT values are basically stored as FUnits but scaled to the current
> PPEM before being used (note that it gets tricky if the horizontal
> and vertical scaling factors are different).

And what is this trick?

> What do you mean with `odd multiples'?

For example, "DejaVuSans.ttf" has a cvt table length of 510 bytes,
enough to store exactly 255 (an odd number) 16-bit values,
or 127.5 four-byte values.

>>   How to scale? By device independent pointsize?

> The basic unit for bytecode is 1/64th pixel, and you *always*
> scale by PPEM, not pointsize – while the latter is absolute, it
> has no connection to the output device resolution.

But the CVT-entries don't have their own direction-vector.

>> 4. Interpreter/Hinter, initializations: How to Reset or keep states
>>    between program invocations?  Fresh init for any invocation, or
>>    reset to what "prep" left?

> It's quite simple: `fpgm' is run once per font, `prep' is always run
> after the graphic state changes (e.g., new PPEM, AA instead of B/W,
> etc.)

This is what the specs talk about. What I still don't know is what
to do when switching from one program to another.

For example, assume the prep program has left the projection vector
at a non-default value when it finished. Should subsequent glyph
programs all start up with that projection vector, or should they
start up with the default?

>>    What about zone0 init?

> Missing in the spec, but it is always reset and re-initialized in the
> MS implementation before a glyph gets rendered, IIRC.

Hmm, OK, but there are fragments of a memory in my brain that I might
have read exactly the opposite somewhere, something like:
let the prep program govern the storage AND CVT values for the glyph
programs. But I can't find it again; maybe the memory fragment is wrong.

>> 5. Interpreter/Hinter, dual projection vector: Is it really used
>>    somewhere?

> Of course.  Mainly for diagonal lines.

I have already implemented most bytecode commands, and
none of them makes use of this state, not even those that
according to some specs would use it (without mentioning how).

For example, the GC[a] command as described here:
https://developer.apple.com/fonts/TrueType-Reference-Manual/RM05/Chap5.html#GC

In the OpenType specs, the dual projection vector doesn't
appear in the list of used states for GC[a].

So does it perhaps have something to do with those old
angle-related commands that are already obsolete?

>> 6. Renderer, dropout control: Is the test "continue to intersect
>>    scanlines" a pixel-based process, like just checking if the
>>    neighbor scan line segments have been touched by a contour?  Or
>>    do we already have more sophisticated tests implemented today?

> Dropout control is badly worded in the specification, and I wasn't
> able to make FreeType behave exactly the same as MS, unfortunately.
> It's extremely sensitive to rounding issues...

Thanks anyway.

Regards

Jan Bruns



