Apple Pay Octopus close to launch…or maybe not

In the on-again, off-again relationship between Octopus Cards Limited and Apple Pay, predicting a service launch is risky business. The last reliable statement came from Octopus Cards Limited CEO Sunny Cheung on September 19, saying that Apple Pay Octopus would not launch on iOS 13 release day but would “start as soon as possible within the year.”

Take it for what it’s worth, but a reliable source tweeted the following leak from beta testers to me today:

According to an internal note leaked by beta testers, Octopus Customer Service has yet to receive training for Octopus on Apple Pay, and they are advised not to call the hotline before the project goes online officially. The note was released on 11/10. However, a subsequent email sent to beta testers says that the official launch is coming “very soon”. Considering that the project has been “coming soon” since July, I’m not sure what to make of this “very soon” wording.

I also hear from other sources that OCL is cracking down on Apple Pay Octopus beta test leaks by limiting access and shutting out some testers, unsuccessfully I might add. There have been so many leaks the only thing we don’t know is the launch date. The beta tester crackdown may be OCL’s way of keeping the launch date under wraps so that the press event launch still has a surprise or two.

To use a Donald Rumsfeldian turn of phrase, do we have a known known or an unknown known? Well, we do have iOS 13.2 due on October 30, but at this point any iOS point release has nothing to do with an Apple Pay Octopus launch. The leak is almost two weeks old; hopefully Octopus Customer Service staff are being trained for the launch. Other than that, all we have is… a known unknown.


Unicode needs a new Mission

With Unicode adding more and more useless emoji, and seemingly doing little else, it’s time to ask an important question: what the fuck is the Unicode Consortium supposed to be doing anyway?

It’s time to dust off Howard Oakley’s excellent blog post Why we can’t keep stringing along with Unicode, and think about the Normalization problem for file names and the Glyph Variation problem of CJK font sets. These problems fit together surprisingly well, and my take is that they must be tackled together to find a solution. Let’s take a look at the essential points that Oakley makes:

Unicode is one of the foundations of digital culture. Without it, the loss of world languages would have accelerated greatly, and humankind would have become the poorer. But if the effect of Unicode is to turn a tower of Babel into a confusion of encodings, it has surely failed to provide a sound encoding system for language.

Neither is normalisation an answer. To perform normalisation sufficient to ensure that users are extremely unlikely to confuse any characters with different codes, a great many string operations would need to go through an even more laborious normalisation process than is performed patchily at present.

Pretending that the problem isn’t significant, or will just quietly go away, is also not an answer, unless you work in a purely English linguistic environment. With increasing use of Unicode around the world, and increasing global use of electronic devices like computers, these problems can only grow in scale…

Having grown the Unicode standard from just over seven thousand characters in twenty-four scripts, in Unicode 1.0.0 of 1991, to more than an eighth of a million characters in 135 scripts now (Unicode 9.0), it is time for the Unicode Consortium to map indistinguishable characters to the same encodings, so that each visually distinguishable character is represented by one, and only one, encoding.

The Normalization Problem and the Glyph Variation Problem
As Oakley explains earlier in the post: the problem for file system naming boils down to the fact that Unicode represents many visually-identical characters using different encodings. Older file systems like HFS+ used Normalization to resolve the problem, but it is incomplete and inefficient. Modern file systems like APFS avoid Normalization to improve performance.
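
To make the file name problem concrete, here is a minimal sketch, in Kotlin on the JVM using java.text.Normalizer, of two visually identical Japanese strings that compare as different until one of them is normalized. The example strings are my own illustration, not from Oakley’s post:

    import java.text.Normalizer

    fun main() {
        // Two visually identical spellings of ポ (katakana "po"):
        // precomposed U+30DD vs. base U+30DB + combining handakuten U+309A
        val precomposed = "\u30DD"
        val decomposed = "\u30DB\u309A"

        // Raw comparison fails: same appearance, different code points
        println(precomposed == decomposed)   // false

        // After NFC normalization the two collapse to the same encoding
        val nfc = Normalizer.normalize(decomposed, Normalizer.Form.NFC)
        println(nfc == precomposed)          // true
    }

A file system that stores names byte-for-byte without normalizing can happily treat these as two different files, which is exactly the confusion Oakley describes; a file system that does normalize pays the cost of that extra pass on every name it touches.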

Glyph variations are the other side of the coin. Instead of identical-looking characters using different encodings, we have different-looking characters that are variations of the same ‘glyph’. They have the same encoding but have to be distinguished as variation 1, 2, 3, etc. of the parent glyph. Because this is a CJK problem, western software developers traditionally see it as a separate problem for the OpenType partners to solve and not worth considering.
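
The glyph variation side can be sketched the same way. The example below uses 葛 (U+845B), a commonly cited case, with an Ideographic Variation Sequence: a variation selector appended to the base character selects a specific glyph of the same code point. Whether a given selector is registered, and whether it actually renders differently, depends on the IVD registration and on font support, so treat this as an illustration only:

    import java.text.Normalizer

    fun main() {
        // Base character 葛 (U+845B) and the same character followed by
        // VARIATION SELECTOR-17 (U+E0100), written here as a UTF-16 surrogate pair
        val base = "\u845B"
        val withSelector = "\u845B\uDB40\uDD00"

        // Same base code point; the variant is distinguished only by the selector
        println(base.codePointAt(0) == withSelector.codePointAt(0))            // true
        println(withSelector.codePointCount(0, withSelector.length))           // 2 code points

        // Normalization does not strip variation selectors, so the strings stay distinct
        println(Normalizer.normalize(withSelector, Normalizer.Form.NFC) == base)  // false
    }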

Put another way, there needs to be an unambiguous one-to-one mapping for visually identical characters, and an unambiguous variant mapping (1-1, 1-2, 1-3, and so on) for glyph variations of a single encoding. I say the problems are two sides of the same coin and must be solved together. Unicode has done a good job of mapping things, but it is way past time for Unicode to evolve beyond that and tackle bigger things: lose the western-centric problem-solving worldview and start solving problems from a truly global viewpoint.

On The Media

Tim has been on a roll recently. Not that Tim, the other Tim: Tim Pool. When YouTube and Twitter started purging ‘conservative’ Japanese content that wasn’t breaking any content rules, following what they were already doing in America, Tim Pool was the only online journalist reporting it.

I don’t always agree with Tim’s politics or watch every video post, but I always keep an eye on him. His reports on the devolution of mainstream media, and how social media like YouTube and Twitter contribute to that decline, are on the nose. Another thing I like about Tim is that he believes in positive engagement and calling things as he finds them. This sets him apart from his former Vice News colleagues: Tim has not lost the ability to think critically and objectively; he questions everything and tries to examine both sides of an issue. To me this is healthy.

And Tim knows when to play the YouTube de-ranking guessing game because he knows there are more important things to report on than wasting time fighting YouTube. His milquetoast reports are considered so dangerous by YouTube that real YouTube humans review his every video and suppress the ones they don’t like.

One disturbing trend that social media drives is what I call cut and paste narrative journalism. Part of it is driven by the need for clicks and what big media thinks will sell. I see this frequently in mainstream western reporting on Japan that likes to portray Japan in a negative light. Here’s a recent piece written by Ian Bremmer for Time titled Why the Japan-South Korea Trade War Is Worrying for the World, where you can see cut and paste narrative journalism in action.

The opening sentence is a setup: “but it’s the trade spat between Japan and South Korea that signals the larger troubles ahead for the world.” This is Bremmer’s opinion, nothing else, and puts him squarely in the South Korea supporters club. There are plenty of economic experts who will tell you that Japan–South Korea trade volume isn’t nearly as important as the media makes it out to be.

Skipping the next few sentences of regurgitated history told only from the South Korean side, we arrive at the crucial sentence:

“Frustrated with the proceedings and determined to put pressure on Moon’s government to intervene in some way, Japan strengthened restrictions on several high-tech exports to South Korea in July and downgraded South Korea’s status as a trusted trading partner in August.”

This is classic cut and paste narrative. It substitutes opinion for fact while presenting it as fact. Bremmer removes all the context of Japanese claims that South Korea was violating UN sanctions on North Korea, among many other things, leading up to the export restrictions. Instead of crucial context we get: Japan is frustrated. Really? Can you prove that, Ian?

The rest of the piece deflates from there into a half-hearted denouncement of President Trump’s foreign policy, without naming Trump, as if Bremmer can’t decide whether it’s a good or bad thing for the U.S. to play the world’s policeman.

I find it hard to stay well informed with big media these days. Big media is still important but sifting the good from the bad is a lot more work. Unfortunately I don’t think it’s going to get easier.

Japanese Text Layout for the Future* (hint: there isn’t one)

I finally had time to catch the ATypI Tokyo 2019 presentation by Adobe’s Nat McCully. He covers a topic that I have covered in depth many times before: the (sad) state of CJK typography. As Nat points out, most software developers and system engineers talk about CJK support as typography without any idea of what it means. Throwing CJK glyphs on a screen is not typography; they are not the same thing at all.

The defining feature of CJK typography and layout in general, and Japanese typography in particular, is that space is an essential composition element equal to text and graphics, with fine space control that goes way beyond a baseline. Instead of thinking about how much space should be between text, flip it around and think about how much text should be between the space. Baseline font metrics will never deliver great CJK typography because there are too many limitations. So everybody implements the missing stuff on the fly, and everybody does it differently. Unfortunately, the irony of it all is that Adobe played a huge role in how these limitations played out in the evolution of digital fonts, desktop publishing (DTP), and the situation we have today.

QuickDraw GX was probably the only time in computer history that fonts, the layout engine, and the base OS came together to solve these limitations for all language systems, treating all language typography as equal from the bottom up. Parts of that effort survived, such as Apple’s San Francisco variable system font based on the TrueType GX model, and the inclusion of the TrueType GX model as the base technology for OpenType Variable fonts. Nice as this is, it’s only a tiny sliver of the GX vision pie that survived; all the other baseline font metric and CJK typography limitations still exist. Outside of a handful of people like Nat at Adobe, and the Adobe CJK typography ghetto approach of keeping all the good stuff corralled in InDesign J, very little is being done to address them.

Call me a pessimist, but after 20 years of watching things slide sideways, I don’t see much hope for the future evolution of great CJK typography on digital devices. Most western software development people think that having CJK glyphs on a screen is ‘good enough’ CJK typography, end of story.

Already I see the OpenType Variable Font effort devolving into a bauble for web developer geeks, always stuck in demo-hell, never going mainstream. It is the same story for quality CJK typography on digital devices. When the current Adobe CJK leaders like McCully and Ken Lunde, who have devoted their careers to fixing these problems, reach retirement age, I think it will be the end of an era. In many ways we are already there.

Apple prides itself on having good typography but cannot be bothered with such Japanese typography basics as not mixing Gothic and Ryumin Japanese font styles, as seen here in the Photos app.

Pixel 4 goes cheap instead of deep

As I tweeted earlier today, the updated Pixel Phone Help hardware pages tell the whole story: if you purchased your Pixel 4, 3a or 3 phone in Japan, a FeliCa chip is located in the same area as the NFC.

This is a little misleading because, as FeliCa Dude pointed out in tweets, the Pixel 3 uses the global NFC PN81B ‘all in one chip’ from NXP. There is no separate ‘FeliCa chip’:

All the Pixel 3 devices have an eSE…A teardown of the global edition Pixel 3 XL (G013C) reveals an NXP PN81B.

FeliCa Dude

Pixel 4 teardowns will certainly reveal a PN81B or similar all-in-one NFC chip from NXP. Google could have gone global NFC with Pixel 4 and given Android users everywhere access to Google Pay Suica. Unfortunately, Google went cheap instead of deep, sticking with the same Pixel 3 policy of only buying FeliCa keys for JP Pixel models.

Why is Google turning off FeliCa on Pixel models outside of Japan? I doubt it is a licensing restriction because the whole point of NXP PN81 is having all the global NFC licensing pieces, NFC A-B-F/EMV/FeliCa/MIFARE, all on one chip, all ready to go. It could have something to do with Google Pay Japan. For Apple Pay Japan, Apple licensed all the necessary technology and built it into their own Apple Pay.

Instead of that approach, Google Pay Japan is a kind of candy wrapper around the existing ‘Osaifu Keitai’ software from Docomo and FeliCa Networks, and all of the existing Osaifu Keitai apps from Mobile Suica to iD to QUICPay. That’s why having an ‘Osaifu Keitai’ Android device is a requirement for using Google Pay Japan. Perhaps Google is content to candy wrap things instead of retooling it all as basic Google Pay functionality and letting Android OEMs benefit from that.
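
For illustration only, here is a rough Kotlin sketch of how an Android app might guess whether it is running on an Osaifu Keitai device: check the NFC-F host card emulation feature flag and probe for the Mobile FeliCa Client package (com.felicanetworks.mfc). Both checks are heuristics I am assuming here, not an official Google Pay or FeliCa Networks API, and neither proves the embedded secure element is actually provisioned with FeliCa keys:

    import android.content.Context
    import android.content.pm.PackageManager

    // Heuristic, not an official API: a device that exposes NFC-F HCE and ships
    // the Mobile FeliCa Client package is very likely an Osaifu Keitai device.
    fun looksLikeOsaifuKeitai(context: Context): Boolean {
        val pm = context.packageManager

        val hasNfc = pm.hasSystemFeature(PackageManager.FEATURE_NFC)
        val hasHceF = pm.hasSystemFeature(PackageManager.FEATURE_NFC_HOST_CARD_EMULATION_NFCF)

        val hasMobileFelicaClient = try {
            pm.getPackageInfo("com.felicanetworks.mfc", 0)
            true
        } catch (e: PackageManager.NameNotFoundException) {
            false
        }

        return hasNfc && (hasHceF || hasMobileFelicaClient)
    }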

Whatever the reason, the moral of this story is that Google Pay Suica will not be a transit option for inbound Android users during the 2020 Tokyo Olympics. Unfortunately, the Android equivalent of the global NFC iPhone has yet to appear.