Allow use of multiple fonts acting together like a fallback stack,
where if a glyph is not found in one it can be retrieved from another.
---
Let's start with an example that might not resonate with most readers, but it does the job well. Imagine that you live in North Sumatra and speak a Batak language, which is not supported in Blender. The best font you can find is [[ https://fonts.google.com/noto/specimen/Noto+Sans+Batak | Noto Sans Batak ]]:
{F11752692}
But when you install it you find that Blender is not usable, because this font contains //only// Batak glyphs and nothing else: no Latin (A-Z) characters, no symbols, nothing. This patch changes the experience from what you see on the left to what you see on the right:
{F11752746,width=100%}
We ship fonts that contain most of the world's characters - 54,000 of them. We allow users to change these fonts, but any font a user selects completely //replaces// ours rather than //augmenting// them. Selecting ANY other font therefore gives you less glyph coverage. If a font does not contain a needed character you will see a blank square (tofu) instead. Missing symbols that we use in the interface, like ← or ⌘ or ⇧, could result in confusion. Even worse, some language-specific fonts - almost the entire Noto family - have **no Latin characters at all**! So you can select an ideal font for your language that is unusable with Blender.
With this patch you always have **all** characters available no matter what font you select. //Your font and ours are treated as one//.
For testing you'd need to add a "fallback" subfolder to your datafiles/fonts folder and place some fonts in it. The following archive contains two files outside of "fallback" - replacements for our default fonts - along with a bunch of fallbacks: enough font files to cover all of the [[ https://en.wikipedia.org/wiki/List_of_languages_by_total_number_of_speakers | top 44 languages by number of speakers ]]. This represents about **1.5 billion more people who can view their language in Blender**.
{F12935010}
Okay, but what is with the huge `unicode_blocks` structure?
When looking for a glyph I don't want to literally go from font to font to font. Instead, each font carries a set of "coverage bits": four 32-bit flags in which each bit indicates that the font has substantial coverage of a particular Unicode block. A set bit isn't a guarantee of complete coverage of that block, but it is a very good hint that the font probably contains the glyph you need.
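As a rough sketch (all names here are illustrative, not the identifiers actually used in the patch), the per-font coverage could be stored and tested like this:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch only. Each font records which Unicode blocks it
 * substantially covers as four 32-bit flag words, i.e. up to 128 bits. */
typedef struct FontCoverage {
  uint32_t bits[4];
} FontCoverage;

/* Mark block `block_index` (0-127) as covered by this font. */
static void coverage_set(FontCoverage *cov, int block_index)
{
  cov->bits[block_index >> 5] |= (uint32_t)1 << (block_index & 31);
}

/* Does this font claim substantial coverage of block `block_index`? */
static bool coverage_test(const FontCoverage *cov, int block_index)
{
  return (cov->bits[block_index >> 5] & ((uint32_t)1 << (block_index & 31))) != 0;
}
```

Four 32-bit words mirror the `ulUnicodeRange1-4` coverage fields that OpenType fonts already declare in their OS/2 table, which is presumably where these hints come from cheaply, without scanning every glyph.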
This means that when looking for a character code I can binary-search the `unicode_blocks` structure to quickly find the coverage bit corresponding to the range containing that character code. Then we look only in fonts that have that coverage bit set, stopping at the first one that has the glyph. If it is not found anywhere we fall back to the "last resort" font, or finally to the `.notdef` glyph.
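The lookup step can be sketched as follows, with a tiny sample table standing in for the real `unicode_blocks` structure (which is much larger, and whose field names may differ):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of the block table. Entries must be sorted by
 * code point so the lookup can binary-search them. */
typedef struct UnicodeBlock {
  uint32_t first;   /* first code point in the block */
  uint32_t last;    /* last code point in the block */
  int coverage_bit; /* index into the per-font coverage flags */
} UnicodeBlock;

static const UnicodeBlock unicode_blocks[] = {
    {0x0000, 0x007F, 0}, /* Basic Latin */
    {0x0080, 0x00FF, 1}, /* Latin-1 Supplement */
    {0x1BC0, 0x1BFF, 2}, /* Batak */
    {0x2190, 0x21FF, 3}, /* Arrows */
};

/* Binary search for the block containing `charcode`.
 * Returns the block's coverage bit, or -1 if no block matches. */
static int charcode_to_coverage_bit(uint32_t charcode)
{
  int lo = 0;
  int hi = (int)(sizeof(unicode_blocks) / sizeof(unicode_blocks[0])) - 1;
  while (lo <= hi) {
    const int mid = (lo + hi) / 2;
    if (charcode < unicode_blocks[mid].first) {
      hi = mid - 1;
    }
    else if (charcode > unicode_blocks[mid].last) {
      lo = mid + 1;
    }
    else {
      return unicode_blocks[mid].coverage_bit;
    }
  }
  return -1;
}
```

With the bit in hand, the fallback loop only needs to test each candidate font's coverage flags before ever opening its file, which is what keeps the scan cheap.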
It also means we don't have to touch font files unnecessarily. With FreeType caching {D13137} we can have fonts registered without their faces loaded. So if you never use a particular font, like Javanese, it never has to be kept in memory **at all**: there is no penalty for having good coverage, and the quickest access goes to the characters you actually use.