Devices on our bodies will multiply. Sensors, cameras, input methods, and displays will work their way into our clothing. They’ll listen for commands and whisper in our ears. Our environment will respond to us in new and interesting ways. The proliferation of large displays and projection technologies will relegate the small display on our phone to private use or a constrained set of tasks. A new layered interaction model of touch, voice, and gesture will emerge, and sharing will become as important as consumption: the continuous exchange of what we are doing, where we are, and who we are with. This exchange will again feed into our collective memory, attaching to our legacy and creating a new kind of patina effect. It won’t be the same as physical degradation, yet it will offer fresh cues that allow for more meaningful navigation and recall.
Emerging tools and services will help translate our needs and desires into cloud-based automation. They will proactively work on our behalf, guided by our permission and divining our intent. Existing services such as Google’s Prediction API, which offers pattern-matching and trainable machine learning capabilities to developers, and IFTTT, which offers an intuitive, user-friendly, cloud-based rules engine expressed in simple “if this, then that” terms, are representative of the trend toward empowering more automated, if not quite yet artificial, intelligence for our digital alter-egos.
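To make the “if this, then that” pattern concrete, here is a minimal sketch of such a rules engine in Python. This is a hypothetical illustration, not IFTTT’s or Google’s actual API: the `Rule`, `RulesEngine`, and the porch-light example are invented for explanation only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical "if this, then that" rule: a trigger condition
# paired with an action to run when the condition holds.
@dataclass
class Rule:
    name: str
    trigger: Callable[[Dict], bool]   # "if this": predicate over an incoming event
    action: Callable[[Dict], None]    # "then that": effect to perform


class RulesEngine:
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def add(self, rule: Rule) -> None:
        self.rules.append(rule)

    def handle(self, event: Dict) -> None:
        # Evaluate every rule against the incoming event and fire any that match.
        for rule in self.rules:
            if rule.trigger(event):
                rule.action(event)


# Example rule: "if I arrive home after dark, then turn on the porch light."
engine = RulesEngine()
engine.add(Rule(
    name="porch-light",
    trigger=lambda e: e.get("place") == "home" and e.get("hour", 0) >= 19,
    action=lambda e: print("turning on porch light"),
))
engine.handle({"place": "home", "hour": 21})  # trigger matches, action fires
```

The point of the sketch is the division of labor: the service holds the rules and watches the event stream on our behalf, so the automation runs in the cloud rather than on the device in our pocket.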