Text 2.0 tracks where you’re looking and, based on a number of factors, triggers one of several context-sensitive actions. I have to wonder whether it’s something human beings would really appreciate. The simple fact is this: we don’t interact with things using our eyes. That’s what our hands are for. And that’s why the next generation of books and magazines is going to be both rich and tactile. While you could certainly train yourself to “click” with your eyes, I’m skeptical that it would ever be preferable to a simple touch-based interface. When the eye is the only or the best input, it’s a go; in every other case, any action you might take with the eye (getting a word definition, say) could be done just as easily with a quick gesture, and with much less room for error.
It’s a cool concept being worked on by some very smart people, and I can think of quite a few applications for it off the top of my head. Kids learning to read would be a perfect example. But the team clearly has some hurdles ahead if this is to be anything more than an academic project.
Read more at CrunchGear.