At TechCrunch’s event in Shenzhen last month, we had a chance to test out the WT2, a clever and ambitious device from startup TimeKettle. It’s a pair of wireless earpieces; each person in a multilingual conversation wears one, and they translate what’s said into the language spoken by each participant. Essentially it’s a Babel fish, though admittedly a rough draft of one.
The devices live in a little charging case, and when you want to speak to someone who doesn't know your language, you take them out: one goes in your ear, one in theirs. They pair automatically with an iOS app as soon as they're removed from the case, and the app begins monitoring for speech.
When you speak in English, there’s a short delay and then your interlocutor hears it in Mandarin Chinese (or whatever other languages are added later). They respond in Chinese, and you hear it in English — it’s really that simple.
Of course there are translation apps that do something similar already, but this ultra-simple method of sharing earpieces means there’s no fuss or interface to deal with. You talk as if talking to someone who speaks your language, complete with eye contact and ordinary gestures.
This is the main thing Wells Tu, TimeKettle's founder, wanted to achieve. He and I talked (in English and in Chinese) about the complexity of communication and how important things like body language are. Simplicity of operation was also important, he said, if you were going to use the WT2 with people who'd never seen or used one.
Right now the device is very much a prototype, although the design and chipset used are more or less final. It fit pretty well in my ear, just like you’d expect a bulkier-than-usual Bluetooth headset to.
It worked quite well, as long as you kept your translation expectations realistic: complex speech and idiom don't survive quick machine translation, but you can still get a lot across. The main issue I had was latency, which left Wells and me staring at each other silently for a three-count while the app did its work. But the version I used wasn't optimized for latency, and the team is hard at work reducing it.
“We’re trying to shorten the latency to 1-3 seconds, which needs lots of work in optimization of the whole process of data transmission between the earphones, app and server,” Wells said.
The spotty wireless connection at the venue didn’t help, either — you’ll need a solid data connection for this, at least until offline translation becomes good enough.
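To make that concrete, here's a minimal, purely illustrative sketch in Python of the round trip each utterance has to make: from one earpiece to the phone over Bluetooth, up to a translation server and back, then out to the other earpiece. The function names and timings are my own assumptions, not TimeKettle's actual pipeline or API; the point is simply that every hop adds to the pause Wells is trying to shrink.

```python
import time

# Hypothetical sketch of the round trip described above.
# Function names and timings are illustrative assumptions, not TimeKettle's API.

def capture_speech(earpiece_id: str) -> bytes:
    """Record a short utterance from one earpiece over Bluetooth."""
    time.sleep(0.3)  # Bluetooth transfer to the phone app
    return b"raw-audio"

def translate_via_server(audio: bytes, source: str, target: str) -> bytes:
    """Send audio to a cloud service for recognition, translation, and speech synthesis."""
    time.sleep(1.5)  # network round trip plus server-side processing
    return b"translated-audio"

def play_back(audio: bytes, earpiece_id: str) -> None:
    """Stream the synthesized translation out to the other earpiece."""
    time.sleep(0.3)  # Bluetooth transfer back out

def relay_utterance(speaker: str, listener: str, source: str, target: str) -> float:
    """One conversational turn; returns the end-to-end delay in seconds."""
    start = time.time()
    audio = capture_speech(speaker)
    translated = translate_via_server(audio, source, target)
    play_back(translated, listener)
    return time.time() - start

if __name__ == "__main__":
    delay = relay_utterance("earpiece-A", "earpiece-B", "en", "zh")
    print(f"End-to-end delay: {delay:.1f}s")  # every hop contributes to the pause
```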
The WT2 isn’t ready for market, exactly, but Wells and I agreed that even if the first version isn’t perfect, well, someone’s got to do it, or no one will. This kind of tech will be ubiquitous in the future, but first it has to be rare, weird, and only work 3/4 of the time. In service of the goal of improving communication across language barriers, I’m perfectly happy to applaud each step along the way.
You can learn more about the WT2 at its website, and keep an eye out for the Kickstarter the company plans to launch next month.