Smart memory seen as cure for user-interface bottlenecks
PORTLAND, Ore.—Next-generation augmented reality displays will employ smart recognition of their users, melding context-sensitive voice-and-gesture commands with full awareness of the surrounding people, places and things, according to John Kispert, CEO of Spansion Inc., who gave the keynote address at the Globalpress Electronics Summit 2012 last week. But smart memory will be needed to wean these advanced user interfaces off their dependence on cloud connectivity, Kispert said. "The bottleneck to next-generation user interfaces is local memory," said Kispert. "The most advanced user interfaces use cloud assets to first recognize their user with facial recognition, then adjust the context by switching preferences and augmented-reality displays that are aware of a user's surroundings."
However, in the future, according to Kispert, local memory assets will substitute for cloud-based connectivity so that context-aware augmented-reality displays react faster and do not have to be online to function properly. The Hansen Report on Automotive Electronics (January 2012), for instance, cited the sluggish response of cloud-based user interfaces as the number one complaint of automobile users. Likewise, advanced voice-based user interfaces such as Apple's Speech Interpretation and Recognition Interface (Siri) require wireless cloud access to work and still provide response times measured only in seconds. Local memory assets, on the other hand, can provide context-aware augmented-reality displays that not only recognize their user in milliseconds, but also provide instant recognition of the other people, places and things in the immediate surroundings.
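As a purely illustrative sketch rather than anything described in the keynote, the local-first architecture Kispert points to amounts to answering recognition queries from on-device memory and falling back to the cloud only for unknown inputs. The profile store, function names and timing figures below are hypothetical:

import time

# Hypothetical local store of recognition data (face signatures,
# user preferences) held in on-device nonvolatile memory.
LOCAL_PROFILES = {
    "face:alice": {"context": "driving", "hud_layout": "navigation"},
}

def recognize(face_signature, cloud_lookup=None):
    """Local-first recognition: answer from local memory in milliseconds,
    fall back to the cloud only when the signature is unknown."""
    start = time.monotonic()
    profile = LOCAL_PROFILES.get(face_signature)
    if profile is None and cloud_lookup is not None:
        # Cloud round trip: typically hundreds of milliseconds to seconds,
        # and unavailable when the device is offline.
        profile = cloud_lookup(face_signature)
        LOCAL_PROFILES[face_signature] = profile  # cache for next time
    elapsed_ms = (time.monotonic() - start) * 1000
    return profile, elapsed_ms

The design choice is simply that the common case (a known user in a known context) never leaves the device, which is what turns seconds into milliseconds and removes the connectivity requirement.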
"User-interface convergence today has eliminated the need for manuals and instructions--people just know how things work," said Kispert. "But head-up displays, recognition algorithms and context switching needs to be able to rely on local memory to work well regardless of whether wireless cloud assess are available or not."
Not surprisingly, Spansion specializes in the NOR flash memories that provide nonvolatile configuration storage for modern user interfaces, which Kispert claims will be critical to weaning context-aware augmented-reality interfaces off exclusive dependence on cloud-based assets. By storing the information that allows devices to recognize their user, switch contexts, and identify the other people, places and things in a user's surroundings, next-generation high-density NOR flash will solve the memory bottleneck that makes cloud-based augmented-reality user interfaces sluggish, according to Kispert. The result, he said, will be safer, more secure displays that superimpose relevant tactical information in real time.
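Again purely as an illustration, and not anything Spansion has disclosed, persisting that recognition and context data in a local nonvolatile region is what lets an interface come back up instantly without a cloud round trip. In the sketch below a small JSON file is a hypothetical stand-in for a NOR flash store:

import json
from pathlib import Path

# Hypothetical stand-in for an on-device NOR flash region: a small file
# holding the nonvolatile configuration a context-aware UI needs offline.
FLASH_STORE = Path("uiconfig.json")

def save_context(user_id, context):
    """Persist the user's recognition/context data to local storage."""
    data = json.loads(FLASH_STORE.read_text()) if FLASH_STORE.exists() else {}
    data[user_id] = context
    FLASH_STORE.write_text(json.dumps(data))

def load_context(user_id):
    """Restore context at startup with no cloud round trip."""
    if not FLASH_STORE.exists():
        return None
    return json.loads(FLASH_STORE.read_text()).get(user_id)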
User interfaces have evolved from keyboard and mouse to user-aware voice control, but need smarter memory to make the jump to augmented-reality displays that don't depend on cloud connectivity.
Tags: User Interface, NOR Flash, Augmented Reality, Heads-Up Display, EETimes, NextGenLog, Electronics