It’s hard for people not of that era to understand how important the Seybold publication was: the vanguard of computers and printing, and the genius of Jonathan Seybold in connecting the two with his Seybold Seminars. It’s no exaggeration to say that desktop publishing (DTP) and web browsers would not have happened the way they did without him and his father John Seybold. That’s why I was thrilled to find the Oral History of Jonathan Seybold courtesy of the Computer History Museum. There are lots of mentions of Seybold out there but very little history. It’s great that his contribution and history are on record now: required viewing if you have any interest in computers and print and how they came together. A revolution that changed print forever.
Today is a great day for Japanese typography: Morisawa and Shaken announced they will co-develop an OpenType version of the Shaken font library (English press release here), due for release in 2024 to celebrate the 100th anniversary of the Japanese typesetter they created together. The founders of Morisawa (Nobuo Morisawa) and Shaken (Mokichi Ishii) co-created the first modern Japanese typesetter in 1924 but soon split into two separate family companies. By the late 1970s Shaken had grown into the dominant force of the Japanese pre-press market with the largest and most sought-after font library. In the 1980s it started to unravel.
Shaken never made the transition to digital pre-press and PostScript fonts; Morisawa did, with a very profitable licensing agreement with Adobe. When Shaken announced OpenType fonts at the 2011 International eBook Expo, they were a has-been company run into the ground by sheer greed. They never delivered on that promise. As the former Shaken lead font engineer told me, there was no font engineering talent left in the company to do the job of re-creating the proprietary-format digital library in OpenType.
Now that Shaken is finally free of the founding family (since 2018), they are cutting a deal with Morisawa, who have the talent and font engineering expertise to bring the Shaken font library into the digital era. Morisawa even has Jiyukobo, creators of the Hiragino Japanese system fonts used in macOS and iOS, which it bought in 2019. An interesting side story: Apple negotiated with Shaken to purchase their library shortly after Steve Jobs returned, but it never came to be. Jeff Martin should be proud of today’s announcement.
It’s hard to overstate how important this development is. Imagine if the Linotype library, or everyday standards like Helvetica, New York, etc., had never been licensed as digital fonts…until now. I doubt the first release will include OpenType Variable Fonts due to cost and time constraints; Morisawa has yet to release anything in that format.
The co-development team will also have to prioritize and edit, as the Shaken library is huge and only a small subset ever made it onto proprietary Shaken digital typesetters. There are huge glyph variation and feature holes to fill. Just getting a simplified basic Shaken library into OpenType format will be a tremendous job.
The 2024 delivery date is important in more ways than the 100th anniversary of Japanese typesetting. With Shaken selling off everything they can over the past 2 years, 2024 is when the last Shaken digital typesetters go out of service. Shaken will stop pretending to be a font developer, cut loose their last remaining 100 customers, and live on as a real estate holding company. Morisawa is the only listed contact on the co-development announcement; they will eventually buy out the Shaken library.
But that’s a story for another day. Today is a celebration. After nearly 100 years of separation, 2 halves of a whole are coming together again. In Requiem for Shaken I wrote, “When the last person turns out the lights at Shaken KK, I hope they open the vaults and set the Shaken font library free. Only by taking flight and having a life of its own can it ever hope to live on in the hearts and imaginations of future Japanese designers.” Japanese designers finally have their font legacy back.
It took me a while to fully appreciate the issue Twitter user Yoshimasa Niwa was describing. At first glance I, like many others, assumed that setting Japanese above English in the iOS preferred language list would solve his app library sorting issue.
Then I realized that wasn’t his point at all. The app in the screenshot is Yahoo Japan’s ‘Norikae Annai’ transit app, one of the most popular free standalone transit apps in Japan; I use it all the time. It’s a Japanese app with a Japanese name, but the basic iOS English sorting algorithm ignores this and assumes all Chinese characters everywhere must follow modern mainland China’s Simplified Chinese rules for reading and sorting.
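The locale dependence is easy to demonstrate with ICU, the collation engine most platforms build their sorting on. Here is a minimal sketch using the PyICU bindings; the app name list is illustrative, and the exact orderings vary by ICU version, but the ja and zh tailorings disagree on how kanji sort:

```python
# Requires PyICU: pip install PyICU
import icu

# A mix of Japanese and English app names, like an iOS app library.
# 乗換案内 is the Norikae Annai transit app mentioned above.
names = ["乗換案内", "Safari", "天気", "Maps"]

# The same strings sort differently depending on the collation locale.
ja = icu.Collator.createInstance(icu.Locale("ja"))  # Japanese rules
zh = icu.Collator.createInstance(icu.Locale("zh"))  # Simplified Chinese rules

print(sorted(names, key=ja.getSortKey))
print(sorted(names, key=zh.getSortKey))
```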
Treating all kanji as Simplified Chinese is as ridiculous as assuming that all Roman-based character sets everywhere must follow modern Italian reading and sorting rules. Westerners often assume the kanji culture flow was always one way from China. It was not: readings and usages diverged, and Japanese-created kanji like shitsuke 躾 traveled the other way over the centuries. The same is true for other cultures that adapted the Chinese writing system for their languages.
It amounts to cultural destruction by the neglect and ignorance of large western-based technology companies who think things are ‘good enough’, or treat them as bugs to fix in a later software update that usually never arrives. Modern computer software has pretty much destroyed traditional kanji culture publishing this way, with many countries abandoning mainstream traditional vertical text layout for western-style layout because ‘it’s easier’, i.e. western tech companies couldn’t be bothered getting Asian language typography right. All these years later, web browsers still can’t do vertical text worth a damn.
A veteran Japanese font engineer whose entire career was devoted to preserving high-end Japanese typography in the digital age recently told me, “I don’t think anybody cares anymore.” In the end it all too often comes down to this: ‘I don’t care’ cultural death by ‘I don’t care’ companies who have the money and power to care.
That’s a bitter irony in an age that purports to champion cultural diversity.
2020 is the coming-out party for Apple-designed OpenType variable fonts: both the SF Pro and SF Compact system fonts and the all-new New York font ship in iOS 14, watchOS 7 and macOS 11. Apple-created variable font technology is not new, of course. It has been around since the QuickDraw GX days, along with the TrueType GX-enhanced Skia font. It was due to be standard in Mac OS Copland system fonts, including a Japanese variable font created by FontWorks. Then Steve Jobs returned to Apple and everything changed.
Yes, it has taken 25 years for an Apple-created technology to make it into the basic system. It proves my long-stated belief that font technology doesn’t matter unless it is built into every nook and cranny of the OS foundation. The TrueType GX Skia variable font has been with us all this time, but it only matters now because the SF Pro system font has gone variable.
Why Is It Taking So Long?
iOS 14 and macOS 11 variable font basics are covered in an excellent WWDC20 video, ‘The Details of UI Typography’. It’s important to remember that while OpenType variable font technology is ‘world ready’, at this stage it only applies to Roman-based font sets. It’s going to be a long time before we see a Japanese language system font in variable format.
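For anyone who wants to poke at what ‘variable’ means concretely, the design axes a variable font exposes can be inspected with fontTools, the standard Python font library. A minimal sketch; the file path is a placeholder, not where Apple actually ships SF Pro:

```python
# Requires fontTools: pip install fonttools
from fontTools.ttLib import TTFont

# Placeholder path; any OpenType variable font will do.
font = TTFont("SF-Pro.ttf")

# The 'fvar' table declares the design axes, e.g. wght (weight), opsz (optical size).
for axis in font["fvar"].axes:
    print(axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue)
```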
There are many reasons. In the WWDC20 video Loïc Sander of the Apple design team drops a big hint when he explains that while digital technology (PostScript fonts) “gave us a lot more flexibility in handling text,” it also “made typography a bit more crude than it used to be.” The statement shows how clueless designers and engineers outside of Japan can be about Japanese fonts and typography.
While ‘a bit more crude’ might be true for Roman-based fonts and text layout, PostScript fonts completely broke traditional Japanese font design and composition models. Everything was thrown out because Adobe made no accommodation for needs outside western typography when creating the PostScript font DTP foundation.
Another big problem was that Adobe’s relations with Japanese PostScript licensees in the 1990s were not healthy. Adobe stuck with closed print-device font licensing for far too long and discouraged independent font production wherever it could. Because of this, digital font progress in Japan was slow and very expensive.
Here are some challenges facing Japanese variable fonts.
Once Upon a Time
One basic flaw of OpenType font outline technology is that it is extremely inefficient for kanji glyph production and storage: every glyph has to be created and stored separately, and that doesn’t scale well. This is why OpenType CJK fonts on tiny devices like Apple Watch are a match made in hell. One solution to this problem is stroke fonts, which use a library of basic glyph parts to efficiently build complex glyphs.
Stroke fonts are a perfect fit for kanji font production and for small, constrained devices like Apple Watch because reusable parts don’t take up precious resources. On the desktop, stroke fonts can do weight variations over the full range from Light through Ultra Bold without losing typographic details, all in a single 4 MB font, while an equivalent OpenType variable font can weigh in around 18 MB.
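No public stroke font format survives to point at, so here is a purely hypothetical sketch of the storage argument: parts are stored once, and each glyph is just references plus placement. All names and numbers below are made up for illustration:

```python
# Hypothetical sketch of stroke/part reuse, not a real font format.
# Each part (a radical or stroke group) is stored once as outline data.
parts = {
    "kuchi": "...outline data...",  # 口 component
    "ki":    "...outline data...",  # 木 component
    "mi":    "...outline data...",  # 未 component
}

# Each glyph is only references + placement: a few bytes, not full outlines.
glyphs = {
    "味": [("kuchi", {"x": 0.0, "y": 0.2, "scale": 0.45}),
           ("mi",    {"x": 0.5, "y": 0.0, "scale": 0.9})],
    "林": [("ki", {"x": 0.0, "y": 0.0, "scale": 0.5}),
           ("ki", {"x": 0.5, "y": 0.0, "scale": 0.5})],
}

print(f"{len(parts)} stored parts serve {len(glyphs)} glyphs (and thousands more)")
```

Storage grows with the number of shared parts rather than the number of glyphs, and a weight change applied to a part is inherited by every glyph that uses it.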
The technology has been around for a long time and was supported up until Mac OS 9, but lost out when Apple quietly dropped the QuickDraw GX-derived Open Font Scaler architecture in the migration from classic Mac OS to Mac OS X.
While stroke fonts are not supported in the current Apple OS lineup, on the font tool side the technology has appeared in software such as the classic Mac OS Gaiji Master from FontWorks. The lead engineer of that effort is currently working independently on a similar gaiji glyph tool for Windows based on stroke font technology, one much more advanced than the old and long-unavailable FontWorks software. I plan to cover developments in a future post.
The Japanese Font Production Challenge
The Hiragino iOS/macOS Japanese system font was not created by Apple. It was licensed from Screen Holdings (SH) and originally created by the independent font design studio Jiyukobo in the early 1990s. There is much more work involved in creating a Japanese font compared to Roman-based languages: hand-drawn glyphs are created, scanned and cleaned up for digital production.
The Adobe-Japan1-7 glyph collection requires 23,060 glyphs for a single weight; multiply that by the different weights of one family and you get an idea how massive the undertaking is. From Osamu Torinoumi, one of the key designers of the Apple-licensed Hiragino font, on its creation:
On average, one person would (hand) draw 12 or 13 glyphs a day, which is not much change of pace from the days of creating block type…the whole process, from start to finish, took three years.
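A rough back-of-envelope, my own math rather than Torinoumi’s: at 12 to 13 glyphs per person per day, one weight of 23,060 glyphs is roughly 23,060 ÷ 12.5 ≈ 1,845 person-days, or more than seven person-years at 250 working days a year, before multiplying by weights. A three-year schedule only works with a team drawing in parallel.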
One might think that a single CJK (Chinese-Japanese-Korean) font sharing a common design could streamline the process, but this is a huge misconception. Each culture has centuries of different design aesthetics that good design must incorporate: what looks good to a Chinese designer and works well in Chinese text design looks terrible in a Japanese context. I have yet to see a decent digital ‘kana’ design from a Chinese font designer. Osamu Torinoumi on the differences in creating the Simplified Chinese Hiragino Sans GB:
“We worked with the Adobe GB 1-4 character set (29,064 glyphs) at 2 weights. Basically we had to finish one weight in 6 months. One year for the entire project. At first we only thought we would be there as backup, but Screen kept passing us all the questions from Beijing. It turned out to be a lot more work than we anticipated.”
Jiyukobo sent all the original Hiragino design data to Hanyi Keyin through Screen, and they adapted the designs for China. Torinoumi said one of the major differences is that Chinese design demands that Gothic (sans serif) characters mimic handwritten style, meaning the character should sit slightly off center within the virtual body. “Even after the project was over I still didn’t understand the difference between the Japanese and Chinese ‘kokoro’ (心) glyphs, which the Chinese designers insisted were different.”
The Variable Font UI Challenge
Finally we get to a problem on the Apple OS platform side that has been around since the GX days: how to present advanced typography features in a useful, easy-to-understand system UI that works everywhere. What works on macOS obviously won’t work on iOS, but iPadOS will need some degree of advanced typography feature access. Sliders have their place, but I agree with Adobe Type Senior Manager Dan Rhatigan, who made a very good point in his TYPO Talk 2016 presentation: there has to be a better UI control concept out there.
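Whatever the control ends up looking like, underneath it has to do the same job: pin each axis to a value and produce an instance. A minimal sketch of that step using fontTools’ instancer module; the file names are placeholders:

```python
# Requires fontTools: pip install fonttools
from fontTools.ttLib import TTFont
from fontTools.varLib.instancer import instantiateVariableFont

font = TTFont("SomeVariable.ttf")  # placeholder path

# Pin the weight axis to a semibold value; axes not listed keep their ranges.
instantiateVariableFont(font, {"wght": 600}, inplace=True)
font.save("SomeVariable-SemiBold.ttf")
```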
Japanese typography is unique in that it has preserved its own print ‘moji bunka’ cultural history and vision that China and Korea have largely abandoned in the face of a western-centric computer culture that all too often pretends to care about such things but does not. If it did, we’d have vertical text in web browsers by now that actually works. I hope a rich text culture can be preserved and conveyed to future generations, even in such small details as a well designed and executed Japanese variable font for computers and smart devices.
With Unicode adding more and more useless emoji, and seemingly doing little else, it’s time to ask an important question: what the fuck is the Unicode Consortium supposed to be doing anyway?
It’s time to dust off Howard Oakley’s excellent blog post ‘Why we can’t keep stringing along with Unicode’ and think about the Normalization problem for file names and the Glyph Variation problem of CJK font sets. The two fit together surprisingly well; my take is they must be tackled as one thing to find a solution. Let’s look at the essential points Oakley makes:
Unicode is one of the foundations of digital culture. Without it, the loss of world languages would have accelerated greatly, and humankind would have become the poorer. But if the effect of Unicode is to turn a tower of Babel into a confusion of encodings, it has surely failed to provide a sound encoding system for language.
Neither is normalisation an answer. To perform normalisation sufficient to ensure that users are extremely unlikely to confuse any characters with different codes, a great many string operations would need to go through an even more laborious normalisation process than is performed patchily at present.
Pretending that the problem isn’t significant, or will just quietly go away, is also not an answer, unless you work in a purely English linguistic environment. With increasing use of Unicode around the world, and increasing global use of electronic devices like computers, these problems can only grow in scale…
Having grown the Unicode standard from just over seven thousand characters in twenty-four scripts, in Unicode 1.0.0 of 1991, to more than an eighth of a million characters in 135 scripts now (Unicode 9.0), it is time for the Unicode Consortium to map indistinguishable characters to the same encodings, so that each visually distinguishable character is represented by one, and only one, encoding.
The Normalization Problem and the Glyph Variation Problem
As Oakley explains earlier in the post, the problem for file system naming boils down to the fact that Unicode represents many visually identical characters using different encodings. Older file systems like HFS+ used normalization to resolve the problem, but it is incomplete and inefficient. Modern file systems like APFS avoid normalization to improve performance.
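The classic demonstration, sketched in Python: two encodings of the same visible character compare as different strings until normalization steps in.

```python
import unicodedata

# "が" as one precomposed code point vs. base "か" plus a combining voicing mark.
composed   = "\u304C"        # が
decomposed = "\u304B\u3099"  # か + combining dakuten

print(composed == decomposed)                                # False
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```

HFS+ normalized file names on the way in; APFS as originally shipped stores the bytes as given, so ‘が.txt’ saved by two different apps can end up as two different files.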
Glyph variations are the other side of the coin. Instead of identical-looking characters with different encodings, we have different-looking characters that are variations of the same ‘glyph’. They share one encoding but have to be distinguished as variation 1, 2, 3, etc. of the parent glyph. Because this is a CJK problem, western software developers traditionally see it as a separate issue for the OpenType partners to solve and not worth considering.
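Unicode’s mechanism on this side is the Ideographic Variation Sequence: an invisible variation selector after the base code point picks the variant. A sketch using the often-cited 葛 example; whether you actually see two different glyphs depends on font and renderer support:

```python
import unicodedata

base = "\u845B"             # 葛
vs17 = base + "\U000E0100"  # 葛 + variation selector 17, selects a variant glyph

print(len(base), len(vs17))                        # 1 2: selector is a code point
# Normalization deliberately leaves variation selectors alone:
print(unicodedata.normalize("NFC", vs17) == vs17)  # True
```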
Put another way, there needs to be an unambiguous 1-to-1 mapping on one side and an unambiguous 1-1/1-2/1-3-to-1 mapping on the other. The problems are two sides of the same coin and must be solved together. Unicode has done a good job of mapping things, but it is way past time for it to evolve beyond that and tackle bigger things: lose the western-centric problem-solving worldview (i.e. fix western encoding issues first, deal with CJK issues later) and start solving problems from a truly global viewpoint.