
different results for similar shape

Posted: 27 Mar 2022
by Identity-Letters
Hey,

my quoteright and quotedblright get totally different results, even though their shapes and LSB/RSB values are completely identical.

Any idea?

Re: different results for similar shape

Posted: 27 Mar 2022
by Identity-Letters
Using KO 1.11 and Glyphs 3.0.5 (3120).

Re: different results for similar shape

Posted: 27 Mar 2022
by Identity-Letters
I just decomposed these elements. Now everything works as expected.

Re: different results for similar shape

Posted: 21 Apr 2022
by Tim Ahrens
Sorry about this! Not sure whether it was caused by the fact that you had components. Composites are generally supported by KO. Can you still reproduce the problem (using an old file)?

My feeling is that you made edits to the font while Kern On was running, and Kern On did not properly update the internal engine. Sorry, this is something I need to look into in more detail. Certain types of edits are not immediately reflected, sometimes you need to close and re-start Kern On to make sure it has the correct shapes internally.

Re: different results for similar shape

Posted: 24 May 2022
by Kostas
I purchased KO yesterday and started trying it out, and I am observing the exact same issue: the double quotes (left and right) clash with all the glyphs to their left and do not follow the model of the single quotes.

Re: different results for similar shape

Posted: 24 May 2022
by Tim Ahrens
Hello Kostas, would you mind sending me the .glyphs file? Then I can have a closer look, and hopefully find the root of the problem. Thanks!

Re: different results for similar shape

Posted: 20 Sep 2022
by chrisjansky
Hello Tim, I think I have a very similar problem with comma/quotesinglbase.

While KO is running everything seems OK, but once I click "Kern On" to generate all kerning pairs and open the file again, some kerns are completely off and, from what I can see, are compressed into odd groups (like comma and 7 together, etc.).

I'll send you the problematic .glyphs file right away.

Thanks,
Christian

Re: different results for similar shape

Posted: 20 Sep 2022
by Tim Ahrens
Thanks for sending the file.

The second pair in your example is a-quotesinglbase. This pair is not in Kern On’s internal list of pairs; it is simply not in the system. As you can see, none of the “Model”, “Auto”, “Ind[ependent]” buttons are activated. Kern On thinks a-quotesinglbase will never be used, which is why it may get a strange value after decomposition. I know KO is a bit radical in this respect; I will try to tweak the class generation system so as to respect “non-existing” pairs somewhat.

If a-quotesinglbase is important to you then you can explicitly activate it by clicking the “Auto” button. This makes it a “user-set autopair”. As a result, it will be autokerned, and a-quotesinglbase will be considered a relevant pair by the class generation system.

Do you think a-quotesinglbase is generally relevant and should be included by default? I am open to discussing this (and to adding it to the built-in list of pairs if necessary).
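
For readers less familiar with class kerning, here is a minimal illustrative sketch in Python. This is not Kern On’s internal code; all class names and values below are invented. It only shows why a pair that is “not in the system” can still end up with a value: with no exception of its own, it simply inherits whatever its first/second classes define.

Code:
# Purely illustrative: how a class-based kerning lookup resolves a pair.
# A dedicated glyph-glyph exception wins; otherwise the class value applies.
first_class  = {"a": "@A_FIRST"}                                    # hypothetical grouping
second_class = {"bracketright": "@CLOSERS", "quotesinglbase": "@CLOSERS"}

class_kerning = {("@A_FIRST", "@CLOSERS"): -40}   # suits a-bracketright
exceptions    = {}                                # a-quotesinglbase never got its own value

def kern_value(first, second):
    if (first, second) in exceptions:             # dedicated glyph-glyph value
        return exceptions[(first, second)]
    key = (first_class.get(first), second_class.get(second))
    return class_kerning.get(key, 0)              # otherwise the class value, or zero

print(kern_value("a", "bracketright"))    # -40, the intended value
print(kern_value("a", "quotesinglbase"))  # -40 as well: the pair inherits the class value,
                                          # which is how a "non-existing" pair can collide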
(Attachment: Screenshot 2022-09-20 at 14.54.26.png)

Re: different results for similar shape

Posted: 20 Sep 2022
by Tim Ahrens
Btw, this is one of the sources I used when compiling the list of pairs:
https://en.wikipedia.org/wiki/Quotation ... mary_table

As you can see, quotesinglbase doesn’t seem to be used as a closing quotation mark for any Latin-based language. If there is something I missed I’m happy to learn about it, though.

Re: different results for similar shape

Posted: 20 Sep 2022
by chrisjansky
Thanks for the quick reply, Tim.

Yes, I agree that "a-quotesinglbase" is not a real use case, and so I am completely fine with it being left out/ignored by KO. The problem is when these "nonexistent" pairs get faulty kerning, as shown. I had the same problem earlier today with "emdash-eogonek", which is arguably a real combination, and tried fixing it by specifically assigning it a "Model = 0" value.

Might it be the case that some of the models contradict each other? It worries me that the kerning engine generates values that make the glyphs collide, and assigning all such combinations as "Auto/Model" to make them unkerned doesn't seem feasible.

Re: different results for similar shape

Posted: 20 Sep 2022
by Tim Ahrens
chrisjansky wrote (20 Sep 2022):
> "emdash-eogonek" which is arguably a real combination
I don’t think it is, as eogonek does not occur at the beginning of words.

Re: different results for similar shape

Posted: 20 Sep 2022
by chrisjansky
Touché. Another reason it should not get kerned though, correct?

Another example I've spotted just now: exclamdown Je-cy gets overkerned (a non-existent pair), while exclamdown J doesn't.

Re: different results for similar shape

Posted: 21 Sep 2022
by Tim Ahrens
I don’t quite understand what you are worried about.

Re: different results for similar shape

Posted: 21 Sep 2022
by chrisjansky
Tim Ahrens wrote:
> I don’t quite understand what you are worried about.

Fair enough, I will try to word it a bit differently: while the KO engine is running,
- pairs that should be kerned look fine
- pairs that should be ignored (e.g. a-quotesinglbase) also look fine
- note that a-quotesinglbase is not user-selected as Auto or Model, i.e. left untouched (see screenshot)

Once I click "Kern On" to let the engine do its thing and generate all remaining pairs, something is fumbled along the way and a-quotesinglbase actually is kerned.

The fact that it is kerned at all is somewhat confusing (not what I expected before processing), but even more so considering it results in a collision (probably because it is misinterpreted as belonging to the same group as bracketright, which is "Auto" in a-bracketright, i.e. left untouched by me). See screenshot no. 2.

Hope I've explained it better this time.

Re: different results for similar shape

Posted: 22 Sep 2022
by Tim Ahrens
From Kern On’s point of view, “ignored” means “the value does not matter”. It seems when you say “should be ignored” you mean “should have a value of zero”?

When Kern On finalizes the font (after you click “Kern On”), it first autokerns all relevant glyph-glyph pairs (those pairs it considers “real-world” pairs). If you are curious you can see this intermediate result if you set 0 kB as the limit: This will generate pure glyph-glyph pairs without class kerning. If we didn’t care about the data size then we could stop here, without the need to generate classes. However, the data size generally does matter, either for webfonts, or because of the 65 kB limit for desktop fonts (unless you use extension kerning).

What happens next is the generation of classes. This is essentially a kind of (potentially) lossy compression. Because the pairs usually don’t fit into the given data size losslessly, Kern On has to find compromises, and it tries to retain the values as well as possible. It has to discard some of the information, however, which means some pairs will end up with the class kerning value (which may be zero or non-zero) even though the autokerning value is different.

This may mean that a pair is un-kerned (i.e. zero value) although it should be kerned (i.e. non-zero value), as there is not enough “space” left for the glyph-glyph exception that would define the pair’s value. It could also be the other way round: A pair might get a non-zero class kerning although it should have a zero value – we would need a zero-value exception, for which there is not enough space.

These are essentially compression artifacts (see Wikipedia). You have probably seen this in strongly compressed JPEG images, when we see ripples where there should be a plain, even colour. The lossy compression algorithm seems to “add data” but in fact, we are seeing information the compression did not manage to eliminate (because of lack of space).

In this sense “ignored” pairs may also get a non-zero kerning value, which seems like added information, but in fact these are cases where the compression algorithm could not afford the bytes that would be necessary to remove this data. Hope this makes sense.
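
To make this concrete, here is a toy Python model of the trade-off. This is not Kern On’s actual algorithm; every pair, number and weighting below is invented. A group of pairs shares one class value, only a limited number of glyph-glyph exceptions fit into the size budget, and “ignored” pairs carry no weight, so their zero-value exceptions are the first to be dropped.

Code:
# Toy model of lossy class-kerning compression (invented data, not Kern On's code).
autokerned = {                        # values the autokerner would ideally want
    ("a", "bracketright"):   -40,
    ("a", "parenright"):     -30,
    ("a", "quotesinglbase"):   0,     # a pair considered irrelevant ("value does not matter")
}
relevance = {                         # 0.0 means the value does not matter
    ("a", "bracketright"):   1.0,
    ("a", "parenright"):     1.0,
    ("a", "quotesinglbase"): 0.0,
}
class_value = -38                     # class-to-class value chosen for this group
exception_budget = 1                  # the size limit only leaves room for one exception

# Keep the exceptions whose weighted deviation from the class value is largest.
candidates = sorted(autokerned,
                    key=lambda p: relevance[p] * abs(autokerned[p] - class_value),
                    reverse=True)
exceptions = {p: autokerned[p] for p in candidates[:exception_budget]}

for pair in autokerned:
    print(pair, "ideal:", autokerned[pair], "shipped:", exceptions.get(pair, class_value))
# a-quotesinglbase would need a zero-value exception; since the budget is spent
# on relevant pairs, it ships with -38 instead -- a compression artifact.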

Re: different results for similar shape

Posted: 22 Sep 2022
by chrisjansky
Thanks for the comprehensive explanation. That's basically how I imagined it to be, and it makes sense from a programmer's perspective.

Considering how the algorithm operates, it's to be expected that some pairs will get compressed into similar-enough classes and thus will have some kerning (n-comma and n-quotesinglbase, for example) because they simply look very alike.

I now understand *why* it happens, but can we agree that these colliding pairs should not be outputted into the final font? Mathematically speaking, I know it *can* happen, but I feel we are skipping the question of whether it *should* happen in a tool like this.

From a font user's perspective, finding pairs that actually overlap because of excessive kerning (like a-quotesinglbase) will most definitely look like a bug.

Can you advise on how to prevent such behaviour? Should I assign "Model = 0" for those colliding pairs? And if so, how will I know that some other pairs will not get merged into a different group that causes a collision? I feel like I have no control over this.

Re: different results for similar shape

Posted: 22 Sep 2022
by SCarewe
You essentially don't have control, no.

What might reassure you is that, with all the data crunching of hundreds of millions of letter combinations from real-world applications, Kern On has a pretty solid idea of what will actually appear in the real world.

Ask yourself: which user will actually, seriously, type the combination a„ ?

The trade-off you need to make, in exchange for Kern On saving you weeks of kerning work and, on top of that, delivering an even better result overall, is that you need to trust it in terms of what to kern and what not to. It's a common type designer's habit to consider all theoretically possible cases equally and to detach oneself from what is actually used in reality ;)

Re: different results for similar shape

Posted: 23 Sep 2022
by chrisjansky
Hmm, that is extremely disappointing then.

I appreciate your explanation. While I am completely on board with the idea that I have to forgo some level of control and trust KO to make sound decisions on what constitutes a pair worth kerning, it's quite a different beast altogether to get *faulty* kerning in unneeded pairs. I'd much rather get *no* kerning in these pairs.

I've been a strong advocate of KO in the face of users who say "Oh, wait a minute, why is there no kerning in questionmark-ccaron?" (a situation that I, as a native Czech speaker, know would never happen), since I respect how the algorithm works and I am fine with having "minimally kerned" fonts that aren't bloated by bogus kerning no one will ever use. I actually love that idea.

Hence I usually respond "if you want to type something that basically never happens in a real language, you have to kern it yourself"; it's usually a one-time thing for a headline, so the matter is settled. On the other hand, saying "if you want to type something that basically never happens in a real language, you have to FIX the kerning yourself" (i.e. override the faulty kerning to make the glyphs stop touching each other) seems rude and is quite unacceptable for me to ship in a retail font.

I am specifically referring to those pairs that end up colliding/overlapping each other. How do you justify this to the end user?

Anyway, as this discussion only seems to reiterate "just trust the system", I have one last question:

Tim Ahrens wrote:
> I know KO is a bit radical in this respect; I will try to tweak the class generation system so as to respect “non-existing” pairs somewhat.

Is that something you could look into, please, Tim? I initially got the impression that there was hope this could be amended somehow in the engine itself.

Thanks.

Re: different results for similar shape

Posted: 30 Sep 2022
by chrisjansky
Tim, I'd appreciate your response on this. Thanks.

Re: different results for similar shape

Posted: 30 Sep 2022
by Tim Ahrens
Okay, first thing:
chrisjansky wrote (22 Sep 2022):
> Can you advise on how to prevent such behaviour? Should I assign "Model = 0" for those colliding pairs?
No. The best solution would be to turn them into a user-set autopair, as I explained above.

Re: different results for similar shape

Posted: 30 Sep 2022
by Tim Ahrens
chrisjansky wrote (22 Sep 2022):
> but can we agree that these colliding pairs should not be outputted into the final font?
They are colliding because the exceptions are not outputted into the final font. I tried to explain that above.

Re: different results for similar shape

Posted: 30 Sep 2022
by Tim Ahrens
SCarewe wrote (22 Sep 2022):
> You essentially don't have control, no.
Well, you do have control (via user-set autopairs) but you don’t have a preview. This seems to be the main problem here.

Re: different results for similar shape

Posted: 30 Sep 2022
by Tim Ahrens
Chris, it seems you have a simple solution for your problem in mind, which I fail to see. Sorry about this. I am sure Kern On will improve (in many different aspects) over the coming years, but I don’t see any solution I can implement in a snap.

Re: different results for similar shape

Posted: 30 Sep 2022
by chrisjansky
Thank you for a candid response, Tim.

As unfortunate as it is, I will keep my fingers crossed for a remedy in a future update.

Re: different results for similar shape

Posted: 30 Sep 2022
by SCarewe
I still don't see why you care about these cases. Who will ever run into them?

Re: different results for similar shape

Posted: 30 Sep 2022
by chrisjansky
Different strokes, I guess.

I just don't feel comfortable shipping those in a commercial font. I have found several just by typing some combinations; none of them from "common use" so far, but I can never be sure, as I don't know how many other colliding combinations there are.

I can imagine typing "e—ę" and being taken aback by the kerning.