Feedback/Questions after kerning 8 masters

matloutre
Posts: 2
Joined: 26 Feb 2023

Feedback/Questions after kerning 8 masters

Post by matloutre »

Hello! I've just done the kerning on an 8-master, 3-axis variable font with Kern-On and I'm pretty happy with the result so far. It took a couple of days to get everything to fall into place, but now I feel confident enough in the models to let Kern-On do the work (with 150-250 models per master, a lot of them being 0 models).

I thought I'd share my experience/feedback here and also ask a few questions.

First, it seems Kern-On likes to tell me that things are too tight when they're exactly the same. It doesn't happen all the time or in every master (so I guess it's a particular model that's causing the issue), but for instance if I have "on" and "bn" and both pairs have exactly the same sidebearings and a 0 model, it sometimes will throw one out for being too tight/loose. Which leads me to a few questions/suggestions:

- If the model does what I want on a pair, should I set the auto value as a model to "enforce" it? When is it beneficial and when is it not? Kern-On likes to suggest new pairs that I like the look of already, should I set those? When does it become unnecessary?

- I never want Kern-On to throw out my basic pairs which define the rhythm (HH Hn HO OO nn oo no etc.). Could we have a way to set pairs as "preferred" or "locked"? I don't mind Kern-On telling me it can't let me set a model the way I want if it contradicts a "locked" model.

On choosing models, I find Kern-On is usually in the right ballpark for most pairs. I rarely have to adjust the auto model by more than 5-10 units and often it's debatable whether I've actually improved on the suggestion! So...

- Is there a way to flag/list "unhelpful" models, either models that seem too restrictive or seem to be on the edge/causing conflicts?

- My number one annoyance is setting a model (either on purpose or by accident, adjusting the spacing in the font etc.) and Kern-On throwing out 10-15 models that I have to click "try again" on (mostly in vain) to find out it wants to get rid of my HO pair. I found Cmd-Z pretty iffy with Kern-On, so could we have a way for Kern-On to flag when it's about to make big changes/throw out models and stop it in its tracks by reverting the offending change (rather than having to manually re-set all the now independent models)? At the moment, I can sometimes go out of the slider ranges and nothing happens, and sometimes it's a small disaster, so alternatively: a way to point out that there are "non-breaking" options outside of the range the slider shows?

- I've enjoyed using the spacing feature to check on things like furniture and symbols. I've read that you hope to make KO remember the sidebearing you enter (such as /H/n/d) and that would certainly be helpful (though much like Glyphs' own sidebearing references, it might be nice to be able to flatten them to a figure again). Would it be possible to have a way to apply it to all masters at once? With 8 masters, it was a touch tedious to copy-paste everything.

On cleaning-up/maintenance of models: I found that over the 8 masters, depending how long I spent, I could add new models for ages with no sense of whether I was improving the overall model, or changing some existing auto settings:

- Could we have a glimpse of things like "changing this will make pairs such as so and xo tighter", so we know the repercussions/ramifications of setting a model?

- I'm certain I have more models than I really need to get the same/similar result. What about flagging those and telling the user they can leave X pairs on auto and get the same result? The fewer the models, the easier it is to maintain/understand what you're doing.

- I'm not a huge fan of kerning pairs of +/- 2 units. It's not a big deal and probably just me liking things "neat", but could we have an option on export to squash those? (Or maybe that's happening internally already?)

Lastly, setting the size of the kerning pairs on export is neat, but again I think I'd like a better understanding of the process and outcomes: "Kern-On generated 17,000 pairs but to fit in the size limit, it only kept 10,000. Pairs such as Xj and zQ have been removed and close pairs such as Xo and xo have been merged by averaging". Or maybe an export log showing what's been kept/removed/compressed?

Anyway, thanks for the tool. It can sometimes be hard to know if you're making the right choice with kerning/spacing and I feel that Kern-On is a good safeguard which can make me more confident that my choices are within reasonable bounds.
Eben Sorkin
Posts: 38
Joined: 27 Apr 2021

Re: Feedback/Questions after kerning 8 masters

Post by Eben Sorkin »

I have often found myself wishing for similar stuff.
Gagaramond
Posts: 2
Joined: 01 Mar 2023

Re: Feedback/Questions after kerning 8 masters

Post by Gagaramond »

I would like to second all of this.
Tim Ahrens
Site Admin
Posts: 407
Joined: 11 Jul 2019

Re: Feedback/Questions after kerning 8 masters

Post by Tim Ahrens »

Thanks for taking the time to write such thorough feedback! It has taken me a while to write a thorough answer ;-)

Some questions are not easy to answer without seeing the font. Is it a sans or a serif design?
matloutre wrote: 26 Feb 2023 with 150-250 models per master
That seems like a lot of models. Again, without seeing your design, it is difficult to say whether so many pairs are necessary.
matloutre wrote: 26 Feb 2023 First, it seems Kern-On likes to tell me that things are too tight when they’re exactly the same. It doesn’t happen all the time or in every master (so I guess it’s a particular model that’s causing the issue), but for instance if I have “on” and “bn” and both pairs have exactly the same sidebearings and a 0 model, it sometimes will throw one out for being too tight/loose.
If b and o have the same sidebearings but Kern On insists bn vs on is inconsistent then that means the (right-hand side) shape of b and o is not identical. To see what exactly is going on, it would be helpful to have the font.
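If you want to rule out the trivial explanation first, the Macro panel can list the sidebearings across all 8 masters. A minimal sketch, assuming the standard Glyphs Python API (and note that identical RSBs still do not guarantee identical right-hand shapes):

from GlyphsApp import Glyphs

# List the right sidebearings of b and o in every master,
# to confirm they really are identical everywhere.
font = Glyphs.font
for master in font.masters:
    rsb_b = font.glyphs['b'].layers[master.id].RSB
    rsb_o = font.glyphs['o'].layers[master.id].RSB
    note = '' if rsb_b == rsb_o else '  <-- differs'
    print('%s: b RSB = %g, o RSB = %g%s' % (master.name, rsb_b, rsb_o, note))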
matloutre wrote: 26 Feb 2023 Which leads me to a few questions/suggestions:
- If the model does what I want on a pair, should I set the auto value as a model to “enforce” it? When is it beneficial and when is it not?
As a general principle, if you see many tick-marks then that means the minimum-maximum span is large and it is probably a good idea to set it as a model, as it will fill a rather large gap in the cloud of models. In other words, it probably has different shapes from the other models.

If you are unsure what the perfect kerning value is for a pair then don’t set it as a model. Of course, you are the designer and you should have an opinion on the right kerning value for each pair, but I honestly encounter cases where I am not so sure, and then I’d rather not set the model and let KO determine the right amount of kerning from my other models, which I am sure about.
matloutre wrote: 26 Feb 2023 Kern-On likes to suggest new pairs that I like the look of already, should I set those? When does it become unnecessary?
- I never want Kern-On to throw out my basic pairs which define the rhythm (HH Hn HO OO nn oo no etc.). Could we have a way to set pairs as “preferred” or “locked”? I don’t mind Kern-On telling me it can’t let me set a model the way I want if it contradicts a “locked” model.
Interesting idea. As a general principle, if Kern On has to remove pairs, it will remove newly set models. If you then insist by using “Try again” or simply setting the model again (which has the same effect), it will get higher priority. If you do this multiple times the priority will increase further, which sometimes helps determine a third pair that is the actual culprit.

Generally, don’t use more models than necessary, and set a model only if you are really sure this is the right kerning value.
matloutre wrote: 26 Feb 2023 On choosing models, I find Kern-On is usually in the right ballpark for most pairs. I rarely have to adjust the auto model by more than 5-10 units and often it’s debatable whether I’ve actually improved on the suggestion! So...
In that case it is better not to set the model.
matloutre wrote: 26 Feb 2023 - Is there a way to flag/list “unhelpful” models, either models that seem too restrictive or seem to be on the edge/causing conflicts?
Yes, these are the warnings with an orange dot. Not sure what exactly “too restrictive” means. Can you explain in more detail? The idea is that models restrict the values of the other pairs so that would be a good thing.
matloutre wrote: 26 Feb 2023 - My number one annoyance is setting a model (either on purpose or by accident, adjusting the spacing in the font etc.) and Kern-On throwing out 10-15 models that I have to click “try again” on (mostly in vain) to find out it wants to get rid of my HO pair.
You mean, when setting the kerning value via Glyphs? If Kern On throws out 10 models then that probably means it is not an entirely new model because that would be removed instead. (The only exception would be if it is a new letter-letter model that causes special spacing models to mismatch.) Hard to say without seeing an example.

That said, it’s an interesting idea to give models (newly set, or with a changed value) that come from the Glyphs UI (instead of the KO palette) a very low priority, no matter whether they are new or not, because they are likely to be accidental. I guess that will be easy enough to implement.
matloutre wrote: 26 Feb 2023 I found Cmd-Z pretty iffy with Kern-On, so could we have a way for Kern-On to flag when it’s about to make big changes/throw out models and stop it in its tracks by reverting the offending change (rather than having to manually re-set all the now independent models)?
Cmd-Z is not really supported with Kern On. I’d love to but in some cases it would be unclear what exactly you want to undo. Instead, the little menu next to the “Compare to” button offers a KO-specific undo, which should help you undo big changes.
matloutre wrote: 26 Feb 2023 At the moment, I can sometimes go out of the slider ranges and nothing happens, and sometimes it’s a small disaster, so alternatively: a way to point out that there are “non-breaking” options outside of the range the slider shows?
That’s true, the range is currently not strictly correct, as it would require some additional calculation time to determine the exact limits (it would practically have to simulate what happens). It’s on my to-do list to make this more precise, though. That said, I somehow like the current behaviour as it gives you an idea when you are leaving the “comfort zone” and potential contradictions start to come up. If you move to the edge, it will often give you a bit more leeway. Although this proves that the initial limit was not correct, I think this behaviour is also useful in practice.
matloutre wrote: 26 Feb 2023 - I’ve enjoyed using the spacing feature to check on things like furniture and symbols. I’ve read that you hope to make KO remember the sidebearing you enter (such as /H/n/d) and that would certainly be helpful (though much like Glyphs’ own sidebearing references, it might be nice to be able to flatten them to a figure again). Would it be possible to have a way to apply it to all masters at once? With 8 masters, it was a touch tedious to copy-paste everything.
Interesting idea! I will pick that up when I continue working on the auto-spacing feature(s).
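In the meantime, part of this can be approximated outside of Kern On with plain Glyphs metrics keys from the Macro panel. A minimal sketch, assuming the standard Glyphs Python API (the glyph name is just a hypothetical example):

from GlyphsApp import Glyphs

# Give a glyph the same sidebearing reference in every master at once,
# then optionally "flatten" the keys back to plain numbers.
font = Glyphs.font
glyph = font.glyphs['section']   # hypothetical example glyph

glyph.leftMetricsKey = '=H'      # glyph-level keys apply to all masters
glyph.rightMetricsKey = '=H'
for master in font.masters:
    glyph.layers[master.id].syncMetrics()  # recalculate LSB/RSB from the keys

# Dropping the keys leaves the last synced values in place:
glyph.leftMetricsKey = None
glyph.rightMetricsKey = None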

Again, it would be interesting to see the font. 8 masters seems a lot, are you sure they are really necessary? If you really spent the time on designing 8 masters then it is natural that this creates proportionally more work for the kerning – it is simply a very large project. If not all 8 masters are independently designed (i.e. they were created more systematically, with strong interrelations between them) then there is some redundancy, and you can probably work with fewer masters. Hard to say without seeing the font. My impression is that some designers use more masters than necessary because they are not aware that you can leave out some “corners of the cube”. The minimum number of masters you need is the number of axes plus 1, and rarely more – with 3 axes, that is 4 masters, not the full 2^3 = 8 corner masters.

You may also want to use “Interpolate master kerning” in KO. Do you know the feature? Whether this works with your font depends on the master set-up, of course.
matloutre wrote: 26 Feb 2023 On cleaning-up/maintenance of models: I found that over the 8 masters, depending how long I spent, I could add new models for ages with no sense of whether I was improving the overall model, or changing some existing auto settings:
- Could we have a glimpse of things like “changing this will make pairs such as so and xo tighter” so we know the repercussions/ramifications of setting a model
Use the “Compare to” function for this purpose:

1. save the current file using Cmd+S

2. duplicate the .glyphs file in Finder (Cmd+D)

3. open the copy of the file (in addition to the actual working file)

4. in the actual working file, set the new model

5. use “Compare to”

This will show an overview of the strongest changes caused by the new model.

I usually don’t do this after each model but per “session”. It is interesting to see the changes made since I started up Kern On.

I was considering making this a built-in function but it is not a high priority because the current system seems to work quite well and is even more flexible and transparent.
matloutre wrote: 26 Feb 2023 - I’m certain I have more models than I really need to get the same/similar result. What about flagging those and telling the user they can leave X pairs on auto and get the same result? The fewer the models, the easier it is to maintain/understand what you’re doing.
Yes, definitely, and this is also on my to-do list. Pointing out superfluous models would be good.
matloutre wrote: 26 Feb 2023 - I’m not a huge fan of kerning pairs of +/- 2 units. It’s not a big deal and probably just me liking things “neat”, but could we have an option on export to squash those? (Or maybe that’s happening internally already?)
There have been discussions on this forum about this topic, like here.

I hope it’s okay if I don’t elaborate here and now.
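That said, if you want to squash tiny pairs yourself, a Macro panel script along these lines should do it. A rough sketch, assuming the standard Glyphs Python API and the usual layout of the kerning dictionary (class-name or glyph-ID keys); Kern On’s export may treat such pairs differently:

from GlyphsApp import Glyphs

THRESHOLD = 2  # treat |kern| <= 2 units as noise
font = Glyphs.font

for master in font.masters:
    if master.id not in font.kerning:
        continue
    doomed = []
    for left, rights in font.kerning[master.id].items():
        for right, value in rights.items():
            if abs(value) <= THRESHOLD:
                doomed.append((left, right))
    for left, right in doomed:
        # kerning keys are class names ('@MMK_...') or glyph IDs, but
        # removeKerningForPair() expects class names or glyph names
        if not left.startswith('@'):
            left = font.glyphForId_(left).name
        if not right.startswith('@'):
            right = font.glyphForId_(right).name
        font.removeKerningForPair(master.id, left, right)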
matloutre wrote: 26 Feb 2023 Lastly, setting the size of the kerning pairs on export is neat, but again I think I’d like a better understanding of the process and outcomes: “Kern-On generated 17,000 pairs but to fit in the size limit, it only kept 10,000. Pairs such as Xj and zQ have been removed and close pairs such as Xo and xo have been merged by averaging”. Or maybe an export log showing what’s been kept/removed/compressed?
I will write a detailed explanation of what Kern On does as it finalizes the kerning. It’s probably not exactly like you suggest but it may be possible to generate some sort of report. Thanks for the suggestion, I will keep that in mind.
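Just to illustrate the general principle – and this is purely a toy sketch, not what Kern On actually does – a pair budget could, for instance, be honoured by keeping the strongest pairs and dropping the weakest first:

# Toy illustration only, NOT Kern On's actual algorithm.
def trim_kerning(pairs, budget):
    """pairs: {(left, right): value}; keep at most `budget` pairs."""
    ranked = sorted(pairs.items(), key=lambda item: abs(item[1]), reverse=True)
    return dict(ranked[:budget]), dict(ranked[budget:])

kept, dropped = trim_kerning(
    {('T', 'o'): -70, ('X', 'j'): -3, ('z', 'Q'): 2}, budget=2)
print(sorted(dropped))  # [('z', 'Q')] – the weakest pair is dropped first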

Some additional thoughts from myself:

As I keep getting feedback from users, and use Kern On myself, it is slowly becoming clearer what potential issues are, and where the journey might lead.

Maybe the underlying challenge is the following. It seems there are two possible approaches to the interaction between the designer and KO:

(1) Designer: “I am a competent human, and only I can judge the right amount of kerning. The computer can, at best, pick up the (perfect) models I set and kern the rest according to my instructions.” Kern On: “Very well, I will respect your perfect models and apply them as sensibly as I can. May I ask for a few more models to be set by your immaculate human judgment?”

(2) Designer: “I generally trust Kern On and I just want to quickly kern my font without much effort. Besides, I do have a decent visual judgment but I am not perfect. Plus, if you could throw in some autospacing that would be nice. Just do what’s right.” Kern On: “Very well, I will try to figure out sensible autokerning even for the less tightly defined pairs (and special spacing). Here are the models that seem to be inconsistent with the others. Which glyphs would you like autospaced?”

Of course, most users – including myself – have some sort of mix of both attitudes.

Initially, before Kern On was widely used, I mostly had (1) in mind, and this was also the feedback from early testers. I was simply assuming all designers would be extremely suspicious about anything automatically determined, and very confident about their own judgment. Now I am realizing users are also unhappy about KO acting too robotic and unforgiving, and taking all input literally. I need to shift more towards (2), without giving up (1), of course.
matloutre
Posts: 2
Joined: 26 Feb 2023

Re: Feedback/Questions after kerning 8 masters

Post by matloutre »

Hi Tim,

Thank you for the detailed and thorough answer. Very insightful! I've replied inline (didn't manage to get the nice quote blocks you used) to what I thought needed an answer; for everything else: thank you!

The font is a sans-serif design with width, weight and opsz axes. I can send you the font via email if that's useful?

> If b and o have the same sidebearings but Kern On insists bn vs on is
> inconsistent then that means the (right-hand side) shape of b and o is not
> identical.

That makes sense and I suspect that's correct: the shape of the bowl of b and o is not identical. I suppose that's linked to your idea of the two approaches – who knows better? In an ideal world, I suspect that if the bowls are not identical then assigning the same sidebearing would be incorrect, but my understanding is that it's common practice/good enough if they're closely linked and look/feel the same.

> As a general principle, if Kern On has to remove pairs,
> it will remove newly set models. If you then insist by using “Try again” or
> simply setting the model again (which has the same effect), it will get
> higher priority. If you do this multiple times the priority will increase
> further, which sometimes helps determine a third pair that is the actual
> culprit.

I did notice the "try again" behaviour, which is pretty smart. But I suppose, linked to the complexity of using undo/Cmd-Z, that heuristic is sometimes not applicable? "Locking" some pairs might avoid these cases, and would help the designer point KO in a direction they feel more comfortable with.

> Not sure what exactly “too
> restrictive” means. Can you explain in more detail? The idea is that models
> restrict the values of the other pairs so that would be a good thing.

That made a lot of sense to me when I wrote it, but I can see now it's a bit obscure. I think what I meant was linked to this idea of the "impact" of a model. Maybe showing a list of the models that have an impact on the current pair you're looking at (besides the two at the boundary)? That said, maybe perceiving the orange warnings in this light already helps as it is.


> You mean, when setting the kerning value via Glyphs? If Kern On throws out
> 10 models then that probably means it is not an entirely new model because
> that would be removed instead. (The only exception would be if it is a new
> letter-letter model that causes special spacing models to mismatch.) Hard
> to say without seeing an example.

Yes, when setting spacing (not kerning) via Glyphs. For instance, if I want to adjust the sidebearing after seeing that KO wants to kern something I feel shouldn't be kerned (like "bo"). If I make changes then, it sometimes panics (and if I make a typo and type 100 instead of 10, it's worse haha). I guess changing the spacing on a letter that has models attached to it will ripple the changes through, which might cause conflicts/contradictions.

I suppose changing the spacing via KO could solve this issue, but I think this also affects "zero models", which then become "-2" (presumably to preserve the visual distance: adding 1 unit to each sidebearing of o widens the oo gap by 2, which a -2 kern undoes). So would there be a way to change the spacing in KO without affecting zero models? For example: "oo" is zero and I want "Ho" to be zero; I decide to change the sidebearing of "o", but I want "oo" to still be zero.

Maybe the idea of "locked" pairs could help here too?


> Cmd-Z is not really supported with Kern On. I’d love to but in some cases
> it would be unclear what exactly you want to undo. Instead, the little menu
> next to the “Compare to” button offers a KO-specific undo, which should
> help you undo big changes.

Thanks for clarifying!

> That’s true, the range is currently not strictly correct, as it would
> require some additional calculation time to determine the exact limits (it
> would practically have to simulate what happens). It’s on my to-do list to
> make this more precise, though. That said, I somehow like the current
> behaviour as it gives you an idea when you are leaving the “comfort zone”
> and potential contradictions start to come up. If you move to the edge, it
> will often give you a bit more leeway. Although this proves that the
> initial limit was not correct, I think this behaviour is also useful in
> practice.

Yes, the behaviour is super useful, I think – it's really good to see that there are several levels of usable models at the boundaries. It's just about having some visibility into that, but I suspected it would mean doing "all" the calculations.

> Again, it would be interesting to see the font. 8 masters seems a lot, are
> you sure they are really necessary? If you really spent the time on
> designing 8 masters then it is natural that this creates proportionally
> more work for the kerning – it is simply a very large project.

Very true, and though I feel the 8 masters are warranted – because of (slightly) unusual design decisions at the extremes – I'm sure someone else could have pulled it off with fewer. I guess I'm not complaining about having to do the 8 masters; KO makes the work easier. It's maybe more about having options to apply things across masters (for instance, you might want the same basic zero models for all masters). In the grand scheme of things, this is nitpicking haha.

> You may also want to use “Interpolate master kerning” in KO. Do you know
> the feature? Whether this works with your font depends on the master
> set-up, of course.

I'm aware of it, but I don't think I fully understand what it is or whether it's appropriate to my design space.

> Use the “Compare to” function for this purpose:
> I was considering making this a built-in function but it is not a high
> priority because the current system seems to work quite well and is even
> more flexible and transparent.

That's neat, I haven't tried that yet. I can see how that answers the question "per session", but I suppose what I was trying to suggest is more "transparency" into what KO is doing, so having this per glyph, at the moment of deciding whether or not to set a model, would help the designer choose whether it's worth setting. It's good to have the visual "notches" to know that there's a hole in the model space, but knowing which strings it'll pull on could be useful (though I expect... difficult).

> Some additional thoughts from myself: [...]
> (1) Designer: “I am a competent human, and only I can judge the right
> amount of kerning. [...]
>
> (2) Designer: “I generally trust Kern On and I just want to quickly kern my
> font without much effort. Besides, I do have a decent visual judgment but I
> am not perfect. Plus, if you could throw in some autospacing that would be
> nice. Just do what’s right.” [...]
>
> Of course, most users – including myself – have some sort of mix of both
> attitudes.
> I need to shift more towards (2), without giving up (1), of course.

If that's helpful: I generally agree with that. I'm maybe more of a (2) on average, but there are times when (1) comes in strongly. My perception was that KO was already a pretty good mix of both approaches, but it might just need "escape hatches" for those strong (1) moments.

> Initially, before Kern On was widely used, I mostly had (1) in mind, and
> this was also the feedback from early testers. I was simply assuming all
> designers would be extremely suspicious about anything automatically
> determined, and very confident about their own judgment. Now I am realizing
> users are also unhappy about KO acting too robotic and unforgiving, and
> taking all input literally.

My perception is that there is a point where KO does a "good enough" job that I'm happy to let it go, but:
- There are areas where I would like to be uncompromising as a designer (setting models is already a great concept for that, and the "locked" models idea could help assert that further).
- Any mechanism that lets the designer see what KO is thinking on a global level will help them trust KO for the parts of the font they're less worried about. After using it for a while, I knew I wouldn't have to review every single pair, but any outward signals would help.
- It's hard to know when it's good enough. Seeing the "health" of the model space – whether the models are likely to "overfit", or whether you have too many – could help?