Allow multiple fonts to act together as a fallback stack: if a glyph is not found in one font it can be retrieved from another.
A nice overview is also here: https://devtalk.blender.org/t/discussion-of-d12622-fallback-font-stack/23502
Let's start with an example that might not resonate with most viewers, but does the job well. Imagine that you live in North Sumatra and speak a Batak language, which Blender does not support. The best font you can find is Noto Sans Batak:
But when you install it you find that Blender is unusable. That is because this font contains only Batak glyphs and nothing else: no Latin (A-Z) characters, no symbols, nothing. This patch changes the experience from what you see on the left to what you see on the right:
We ship fonts that contain most of the world's characters, about 54,000 of them. We allow users to change these fonts, but when they do, the new font completely replaces ours rather than augmenting it. Selecting ANY other font therefore gives you less glyph coverage. If a font does not contain a needed character you see a blank square ("tofu") instead. Missing symbols that we use in the interface, like ← or ⌘ or ⇧, can cause real confusion. Even worse, some language-specific fonts (almost the entire Noto family) contain no Latin characters at all, so a font that is ideal for your language can be unusable with Blender.
With this patch you always have all characters available no matter what font you select. Your font and ours are treated as one.
For testing you will need to ADD the contents of the following zip archive to the files in your datafiles/fonts folder:
The archive contains 26 separate font files, enough to cover all of the top 44 languages by number of speakers. That represents about 1.5 billion more people who can view their language in Blender.
Okay, but what is with the huge unicode_blocks structure?
When looking for a glyph I don't want to literally go from font to font to font. Instead, each font stores a set of "coverage bits": four 32-bit flags in which each bit indicates substantial coverage of one Unicode block. A set bit is not a guarantee of complete coverage, but it is a very good hint that the font probably contains the glyph you need.
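A minimal sketch of that idea in C (the type and function names here are illustrative, not the patch's actual API; the real patch mirrors the four 32-bit `ulUnicodeRange` fields of the OpenType OS/2 table):

```c
#include <stdbool.h>
#include <stdint.h>

/* One entry of a (sorted) Unicode block table. */
typedef struct UnicodeBlock {
  uint32_t first;   /* First code point in the block. */
  uint32_t last;    /* Last code point in the block. */
  int coverage_bit; /* Index into the 128 coverage bits, or -1 if untracked. */
} UnicodeBlock;

/* Per-font coverage: 4 x 32 = 128 one-bit hints, one per tracked block. */
typedef struct FontCoverage {
  uint32_t bits[4];
} FontCoverage;

/* Does this font claim substantial coverage of the block with this bit? */
static bool font_has_coverage_bit(const FontCoverage *cov, int bit)
{
  if (bit < 0 || bit >= 128) {
    return false;
  }
  return (cov->bits[bit >> 5] >> (bit & 31)) & 1;
}
```

Testing a single bit this way is much cheaper than asking FreeType whether a face actually contains the glyph, which is why it works as a first-pass filter.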
This means that when looking up a character code I can binary-search the unicode_blocks structure to quickly find the coverage bit for the range containing that code. We then look only in fonts that have that coverage bit set, stopping at the first match. If the glyph is still not found we look in the "last resort" font, and show the ".notdef" glyph if it is not found anywhere.
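The binary search itself is the standard range variant; a self-contained sketch (with a hypothetical `BlockRange` type standing in for the real table entries):

```c
#include <stddef.h>
#include <stdint.h>

/* A simplified stand-in for one unicode_blocks entry. */
typedef struct BlockRange {
  uint32_t first; /* First code point in the block. */
  uint32_t last;  /* Last code point in the block. */
} BlockRange;

/* Binary-search a table sorted by `first` for the range containing `c`.
 * Returns NULL if the code point falls in no tracked block. */
static const BlockRange *block_search(const BlockRange *blocks, int n, uint32_t c)
{
  int lo = 0, hi = n - 1;
  while (lo <= hi) {
    int mid = lo + (hi - lo) / 2;
    if (c < blocks[mid].first) {
      hi = mid - 1;
    }
    else if (c > blocks[mid].last) {
      lo = mid + 1;
    }
    else {
      return &blocks[mid];
    }
  }
  return NULL;
}
```

With a few hundred blocks this resolves a code point to a coverage bit in under ten comparisons, so the per-glyph overhead is negligible.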
This means we don't have to touch font files unnecessarily. With FreeType caching D13137: BLF: Implement FreeType Caching we can have fonts without faces loaded. If you never use a particular font, like Javanese, it never has to be kept in memory at all. So there is no penalty for having good coverage, and the quickest access goes to the characters you actually use.
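The lazy-loading part can be sketched like this (a hedged illustration, not the patch's code; `LazyFont` and `lazy_font_face` are made-up names, and the `face` pointer stands in for an `FT_Face` that the real code would create with `FT_New_Face`):

```c
#include <stddef.h>

/* A fallback-stack entry whose FreeType face is created only on demand. */
typedef struct LazyFont {
  const char *filepath;
  void *face;     /* Stands in for FT_Face; NULL until first use. */
  int load_count; /* For illustration: how many times we hit the disk. */
} LazyFont;

/* Return the face, loading it on first request only. */
static void *lazy_font_face(LazyFont *font)
{
  if (font->face == NULL) {
    font->load_count++;
    /* Real code: FT_New_Face(library, font->filepath, 0, &face). */
    font->face = (void *)font;
  }
  return font->face;
}
```

Fonts whose coverage bits never match a requested character are never opened, which is what keeps a 26-file fallback stack cheap.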
