Behind the kb limit

tiro_j
Posts: 20
Joined: 26 Jan 2021

Behind the kb limit

Post by tiro_j »

Every time I use Kern On, I have a happy time defining models, setting up special spacing sets, previewing auto results, and then there’s this moment when I click the Kern On button and am presented with the ‘Limit kerning to ________kB’ — and I freeze.

I have no sense at all what effect the number I enter here will have on my kerning results. To clarify: I understand exactly how KO works and that it applies prioritisation and class kerning based on how large or small this number is. But the relationship of a particular number here to a particular output is opaque to me: I can’t anticipate the results, so I can’t make an informed decision about what value to enter. This is especially so because I work with so many different scripts, some of which require huge amounts of kerning.

I suspect the only way to develop a feel for this, to be able to anticipate results, is to use KO a lot and to experiment with lots of different kB limit settings. I wonder if anyone feels they have developed such a feel? Any insights?
Tim Ahrens
Site Admin
Posts: 424
Joined: 11 Jul 2019

Re: Behind the kb limit

Post by Tim Ahrens »

Agreed, selecting the kerning volume is a bit of a strange question to ask the user, but after all, KO is a little robot, so it needs instructions.

I am planning to re-work the dialogue somewhat. As far as I can see, we have several distinct scenarios:
  • Exporting single-style desktop fonts. In this case, there is no rational reason not to use the full 64kB available in the font without extension kerning.
  • Exporting single-style webfonts. Probably we want to limit the size of the WOFF2 files, but by how much? What’s the reasoning? Hard to say; maybe spending 1/4 of the file size on kerning is appropriate? Maybe less?
    The real challenge here, though, is that we need to look at file size after compression. In other words, how can we fit “as much kerning as possible” into a given compressed data size? I expect this to be a real rabbit hole and I haven’t started to do proper research on this. Who knows, maybe pure glyph-glyph kerning (without any class-class kerning) is more efficient as it can be compressed well? (Un-compressed CFF fonts result in smaller WOFFs as well.) Or, will it work better to use the existing system and tweak the selection of class kerning and glyph kerning pairs? Also, “palettizing” the kerning – so as to make the data more compressible – has a demonstrable effect, and at JAF we have been using this technique for many years (see this). This also includes applying a threshold to the class kerning (which doesn’t have any effect on the raw kerning data size). I hope KO can offer a thoroughly thought-through, optimized strategy for webfonts at some point.
  • Exporting variable fonts. This means the given 64kB are for the whole font, so we practically have only a certain portion for each master. Oh, wait: wherever the kerning value is the same for all (or some) masters we can save a few bytes each. This means there is some potential for lossy compression. Another rabbit hole, I guess.
  • Exporting variable webfonts. Oh dear, I am not even starting to think about this one, yet.
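The “palettizing” idea for webfonts can be illustrated with a toy sketch (assumed numbers and a simplified record layout; zlib stands in for the Brotli stage inside WOFF2): snapping kern values to a coarser grid leaves the raw byte count unchanged but shrinks the value alphabet, so the same data compresses smaller.

```python
import random
import struct
import zlib

random.seed(1)

# Hypothetical flat kerning data: (first glyph, second glyph, value)
# records with values in -120..120 font units.
pairs = sorted({(random.randrange(600), random.randrange(600)):
                random.randint(-120, 120) for _ in range(4000)}.items())

def serialize(records, step=1):
    # Pack records big-endian; snapping values to multiples of `step`
    # is the "palettizing" -- same byte count, smaller value alphabet.
    return b"".join(struct.pack(">HHh", g1, g2, step * round(v / step))
                    for (g1, g2), v in records)

raw = serialize(pairs)         # un-palettized values
snapped = serialize(pairs, 5)  # values snapped to a 5-unit grid

assert len(raw) == len(snapped)  # raw size is identical ...
print(len(zlib.compress(raw, 9)), len(zlib.compress(snapped, 9)))
# ... but the snapped data compresses to fewer bytes.
```

Whether the 5-unit grid is visually acceptable is a separate question, of course; the point is only that the compression gain is measurable before any bytes are removed.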
KO should rather not try to guess which scenario it is working for, so the dialogue will probably ask for a choice.
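For a rough sense of scale of that 64 kB limit, here is a back-of-the-envelope sketch (simplified: coverage tables, PairSet offsets and class definitions are not counted):

```python
# Assumed GPOS record sizes for kerning with a single X-advance value.
PAIR_VALUE_RECORD = 4    # second glyph ID (2 bytes) + value (2 bytes)
SUBTABLE_LIMIT = 65536   # 16-bit offsets => 64 kB per subtable
                         # (without Extension lookups)

# Flat glyph-glyph kerning: roughly 4 bytes per pair,
# so on the order of 16k pairs fit at most (ignoring overhead).
max_flat_pairs = SUBTABLE_LIMIT // PAIR_VALUE_RECORD
print(max_flat_pairs)  # 16384

# Class kerning (PairPos format 2): the value matrix alone costs
# class1count * class2count * 2 bytes, regardless of how many
# glyph pairs the classes cover.
def class_matrix_bytes(class1_count, class2_count):
    return class1_count * class2_count * 2

print(class_matrix_bytes(100, 100))  # 20000
```

This is why class kerning pays off so dramatically for large scripts: a 100 × 100 class matrix covers far more glyph pairs than ~16k flat pairs ever could, in less than a third of the budget.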
Tim Ahrens
Site Admin
Posts: 424
Joined: 11 Jul 2019

Re: Behind the kb limit

Post by Tim Ahrens »

That said, in order to get a feeling for the lossiness of the final kerning depending on the chosen kB limit, we can make use of the fact that the comparison function literally refers to the kerning stored in the other font and compares it to the immediate, un-compressed live autokerning values in the current font.

I’d recommend the following:
  • Finalize your font with a certain kB size. Save the font.
  • Duplicate the file.
  • Open both files.
  • Start KO for one file.
  • Use “Compare to”, which practically shows you the compression loss (in reverse, of course).
This should give you an idea of whether the kerning is acceptable in terms of faithfulness to the ideal autokerning values.
tiro_j
Posts: 20
Joined: 26 Jan 2021

Re: Behind the kb limit

Post by tiro_j »

Thanks, Tim.

With regard to target output format, that is something I would be more likely to handle as downstream data processing. At the stage I am running KO, I am not thinking about specific output formats, and those output formats are going to be handled in our build process from a single source: I wouldn’t be running KO multiple times with different settings for different formats.

What I am finding with the kB settings is that it takes a while to find the sweet spot in which everything I want kerned is kerned, but kerned using a relatively small set of class-based kerns reflecting shared values for similar shapes. If I set the kB limit too low, I end up with some things not kerned; if I set it too high, I end up with so much variation in kerning values that class kerning doesn’t help much.

In terms of controls, there are options I think would be useful. One is being able to include or exclude certain kinds of kern. I noticed that in Latin kerning I get a lot of lowercase-to-cap and smallcap-to-cap kerning for intercaps, which normally I do not bother to kern and would happily exclude. If people want to use intercaps, they can damn well manually kern them! :)

Do you have a written description of the compression steps that KO goes through? I am hoping that part of the process is rounding similar values to a common value so that more opportunities result for class kerning. That being so, being able to prioritise that kind of compression would be helpful. In general, being able to prioritise different compression methods seems to me a useful way to interact with KO and provide the little robot with instructions.
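The kind of rounding described above could be sketched like this (a hypothetical illustration, not KO’s actual algorithm): kern values that differ by no more than a threshold are merged onto a shared representative, so more pairs can end up sharing a class value.

```python
def merge_values(values, threshold=4):
    """Snap each kern value to the representative of its cluster.

    Greedy single pass over the sorted distinct values: a new cluster
    starts whenever the gap to the current representative exceeds
    the threshold.
    """
    mapping = {}
    rep = None
    for v in sorted(set(values)):
        if rep is None or v - rep > threshold:
            rep = v
        mapping[v] = rep
    return [mapping[v] for v in values]

kerns = [-82, -80, -79, -78, -55, -54, -30, -29, -28, -27]
merged = merge_values(kerns)
print(sorted(set(kerns)))   # 10 distinct values before merging
print(sorted(set(merged)))  # [-82, -55, -30] -- 3 values afterwards
```

Ten distinct values collapse to three, at the cost of shifting some pairs by up to the threshold; exposing that threshold as a user control is exactly the kind of prioritisation knob suggested above.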