On my monoglot Macintosh…


The Mac has always done a pretty good job of dealing with different languages, ever since the advent of WorldScript in System 7.1, way back in 1992 – groundbreaking stuff, especially when you consider that the PC world didn’t really get it right until Windows 2000. And Mac OS X’s multilingual functionality is unparalleled: great Unicode support, beautiful fonts, fluid input methods, all the interface languages included in every copy of the operating system (unlike Windows, where Microsoft makes you pay a fortune for language packs), and supremely simple localisation, making it a breeze to extend your software to a wider audience. For developers and users alike, it’s a great platform for the world that doesn’t speak English.

And since day one, the Mac has been a talker. When Steve Jobs demoed the Macintosh in 1984, he was proud to showcase its speech synthesiser, because it was pretty special stuff. Fast-forward twenty years, and the Mac can still talk in the same now-semi-famous voice, as well as various others, reading documents and web pages with impressive accuracy. And to improve accessibility, Tiger is bringing a technology Apple are calling VoiceOver into the fold, using the speech synthesiser as a fully comprehensive screen reader. This should be a boon for a sizeable number of users.

But it only works in English. Although there was support for Mac OS 9 to speak Chinese (and possibly other languages as well, of which I am unaware), Mac OS X only speaks English. Attempts to input French result in very painful output, and were I better versed in other languages, I would experiment with them too, doubtless with similar results. Basically, though its reading and writing skills are excellent, the Mac is bound to fail Elementary French – it can’t talk.

Nor is the challenge insurmountable. Apple have obviously done it for Chinese in the past, and Microsoft, apparently by licensing technology from Lernout & Hauspie (since acquired, it seems, by ScanSoft), have done it at least for Japanese, and probably for other languages as well (although the capability is sadly wasted, because nothing apart from Microsoft Word appears to take advantage of it). So why has Apple put speech synthesis out to pasture since the advent of Mac OS X?

The main reason is, of course, pretty obvious. Wonderful though it is that the Mac can talk, it’s not its biggest selling point. Yes, it speaks, reading things aloud with a choice of voices, but you’re not going to choose it over Windows on that basis, unless you have extremely peculiar needs. Until now, the return on investment would simply not have been sufficient to justify developing (or licensing) foreign language speech synthesis.

But now we have VoiceOver. And given that Apple’s second biggest market is in Japan, the fact that the Mac only does English starts to look like a bit of an oversight. It’s a bit pathetic that Apple has devoted time to translating the page on VoiceOver into Japanese, with only a couple of parenthesised additions to note that it only works in English. At least the French page redirects to the English one, so the French are under no illusion as to where Apple stands on the whole issue.

Ethnocentricity has always been a problem in computing, which has traditionally emanated primarily from the United States. And in many ways, Apple has done much to combat this, even providing a British version of the Mac OS up until Mac OS X, as well as, obviously, a multitude of other localisations. But it seems to me a bit much for them to trumpet VoiceOver’s brilliance to the Japanese, when the people who will actually be able to use it (blind, English-speaking Mac users living in Japan) probably number fewer than one hundred.

Speech synthesis and (from what I can tell) VoiceOver are good. But that’s not enough. Apple ought to be making them Insanely Great.



Ear Mail can speak Japanese, and speaks it well, but it is an app made just to turn your text emails into audio files and put them into iTunes or your iPod. If someone could dissect the awesome speech module in this app and apply it system-wide, or at least to text editors or web browsers, it would solve half the problem.

Here is the link to Ear Mail if anyone wants to take a stab at it.



While it is not exactly a Japanese VoiceOver, and does not even speak Japanese, the Proloquo software I developed includes very high-quality voices in a language of your choice (including French and other European languages) and offers many features for low-vision (not blind) users. It requires Mac OS X 10.3 or higher.


…I just discovered speech synthesis this morning and am completely thrilled by it. I’m one of those audio-centered people, vastly preferring to listen rather than to read. However, being an academic and having to read Latin, French, Spanish, etc. fairly often, I immediately tried out the speech synthesis on a French news site, and the results were, as Gareth has indicated, pretty much worthless. Ditto for Spanish, Latin, etc. It cannot be denied that the speech synthesizer is a real godsend; but it would be even more divine if we could at least have two or three (or five or six) other languages. How hard could it be?…


I’m left in an interesting quandary myself. You see, I do not see. I am a visually impaired college student, and I am going to be going to school in Tokyo for the coming year. Currently I use a WinXP machine with a screen magnifier and a screen reader. The screen reader is completely capable of handling Japanese – all hiragana and katakana, and most kanji that it encounters. It read through asahi.com fairly easily. The voice synthesizer is an out-of-license Japanese synthesizer from IBM under the ViaVoice brand. The first step for Apple is not necessarily getting the voice; it’s getting the engine to process all three of the alphabets fluidly. With that, I am left to make a decision: I am purchasing a lighter notebook than my 8 lb beast, and I would desperately like to go with a Mac. But I need something that can read Japanese text to me. So, as with the rest, I anxiously await.

Gareth Potter

I can’t see that _that_ many Japanese people are clamouring for their Macs to talk to them. I know there’s a serious paucity of native English speakers over there, but replacing them with Vicki, Fred or Bruce isn’t really the way to go.

Funny if a sizeable number did, though. I’d like to meet a Japanese person who has been vocally trained by the Mac Speech Synthesiser. Probably more amusing than my Japanese friend who spent a year in the north of England.

As to the difficulty of doing it, the answer is yes and no. On the one hand, there’s none of the inconsistency between characters and pronunciation if you have the hiragana; but in the absence of that (say, with names), it would be a nightmare, because there’s no way (as far as I know, anyway) at this stage to attach metadata to a specific word in a block of text to give kanji a reading.
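The problem can be sketched in a few lines (hypothetical names and data, not any real engine’s API): the same kanji admits several valid readings, so plain text alone gives a synthesiser no way to pick the right one without extra annotation.

```python
# Hypothetical sketch of the kanji-reading ambiguity. The readings below
# are real, but the data structure and function names are illustrative only.
READINGS = {
    "東": ["tou", "higashi", "azuma"],   # on-reading, kun-reading, name reading
    "生": ["sei", "shou", "nama"],       # a kanji notorious for its many readings
}

def possible_readings(kanji: str) -> list[str]:
    """Return the candidate readings for a single kanji, or [] if unknown."""
    return READINGS.get(kanji, [])

# For 東, a synthesiser sees three candidates and, with bare text,
# no metadata telling it which one the author intended.
print(possible_readings("東"))
```

With furigana (or some equivalent per-word metadata) attached, the choice is unambiguous; without it, the engine is guessing.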

I’ve never extensively tested Microsoft’s Japanese synthesiser, so I don’t know how well it copes with random names.


You’re overlooking the number of Japanese natives with a desire to become fluent in English – a great majority of them.

That being said, I’m really dying for Japanese voice synthesis as well. It shouldn’t be too hard either, I would imagine, since Japanese pronunciation is extremely straightforward compared to English.


Make no mistake — most pages in non-English languages link back to the English descriptions of the features. Try rewriting an apple.com/macosx/… URL to apple.com/fr/macosx/… (or it, or jp, etc.) and see for yourself.

Comments are closed.