News

Context-Aware Smartphones

Future generations of smartphones will be context aware, tracking your behavior, providing information about the immediate environment, and anticipating your intentions.
Figure. Enkin, a 3D navigation system built for Google’s Android smartphone, provides a live mode via a video feed (left), a landscape mode (top right), and a map mode.

The problem with today’s smartphones is that there’s nothing terribly smart about them. Even the most souped-up devices amount to little more than stripped-down PCs, with tiny screens and maddening keyboards. The era of the PC-in-your-pocket may soon give way to a new type of smartphone, however, that is less of a portable workstation and more of a personal digital assistant that anticipates what you need to know, and when.

The next generation of context-aware smartphones will take advantage of the growing availability of built-in physical sensors and better data exchange capabilities to support new applications that not only keep track of your personal data, but can also track your behavior and—this is where the truly smart part will finally come into play—anticipate your intentions.

Many smartphones already contain the basic building blocks for context awareness: physical sensors such as GPS receivers, accelerometers, and light sensors, coupled with operating systems that allow developers to create their own applications. What has been missing, however, is the middleware that will enable applications to juxtapose information about your physical location with data from other applications. “A lot of the basic technology has existed for years,” says Erick Tseng, product manager for Google’s Android. “The challenge has been that developers didn’t really have access to those devices.”

The first wave of context-aware applications will focus on pulling in data keyed to the user’s physical location. “Your phone is going to be a sensor,” says Ken Dulaney, a distinguished analyst at Gartner Group. Equipped with magnetic compasses, accelerometers, and GPS, smartphones will begin to support augmented reality applications that draw on detailed information about the location and orientation of your phone. The result will be a type of portable data X-ray device that reveals information about any location at which you point your phone.
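To make the sensing half of this concrete, here is a minimal sketch, written against Android’s SensorManager API, of how an application can fuse accelerometer and magnetometer readings into a compass heading. The sensor calls are real Android API; the class itself is illustrative and not drawn from any product described here.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class HeadingTracker implements SensorEventListener {
    private final float[] gravity = new float[3];      // latest accelerometer reading
    private final float[] geomagnetic = new float[3];  // latest magnetometer reading
    private float azimuthDegrees;                      // which way the phone is pointing

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        float[] rotation = new float[9];
        float[] orientation = new float[3];
        // Fuse the two readings into a rotation matrix, then extract the azimuth.
        if (SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotation, orientation);
            azimuthDegrees = (float) Math.toDegrees(orientation[0]);
            // Combined with a GPS fix, the application now knows both where
            // the phone is and which way it is pointing.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}
```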

San Francisco-based GeoVector is one of several new companies developing software to take advantage of more accurate location-sensing data. “We want to convert the phone into a virtual mouse that you can use to click on the real world,” says Pam Kerwin, GeoVector’s head of strategic business development. Using a combination of latitude and longitude data, provided by GPS, and information about the user’s orientation, which is gleaned from an electronic compass, the software creates a virtual vector that can superimpose location-specific data on a mobile phone screen.
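GeoVector has not published its implementation, but the geometry it describes is standard. The sketch below, in plain Java, computes the initial great-circle bearing from the user’s GPS fix to a point of interest and checks whether that bearing lies along the compass heading; the 15-degree pointing tolerance is an assumption for illustration.

```java
// Plain-Java sketch of the "virtual vector" test: given the user's GPS
// fix, a point of interest, and the compass heading, decide whether the
// phone is pointed at the point. The tolerance is assumed, not a
// GeoVector parameter.
public class VirtualVector {
    static boolean isPointedAt(double userLat, double userLon,
                               double poiLat, double poiLon,
                               double headingDegrees) {
        double lat1 = Math.toRadians(userLat);
        double lat2 = Math.toRadians(poiLat);
        double dLon = Math.toRadians(poiLon - userLon);
        // Initial great-circle bearing from the user to the point of interest.
        double y = Math.sin(dLon) * Math.cos(lat2);
        double x = Math.cos(lat1) * Math.sin(lat2)
                 - Math.sin(lat1) * Math.cos(lat2) * Math.cos(dLon);
        double bearing = (Math.toDegrees(Math.atan2(y, x)) + 360) % 360;
        // Smallest angular difference between bearing and heading (0..180).
        double diff = Math.abs(bearing - headingDegrees) % 360;
        if (diff > 180) diff = 360 - diff;
        return diff < 15;
    }
}
```

A phone sweeping the horizon would run this test against nearby entries in a location database and surface whichever ones pass.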

Similarly, Enkin—a 3D navigation application built on Google’s Android—lets users scan the physical environment using GPS and compass data. In live mode, it works with a phone’s built-in camera to project onscreen annotations of physical locations atop a live video feed, querying a database of locations tagged with latitude and longitude coordinates. Point the device at a hospital building, for instance, and the label “Hospital” appears superimposed over the video image in real time.
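Enkin’s rendering internals are not described here, but the basic overlay step can be sketched: map the angular offset between the phone’s heading and the bearing to a landmark (computed as in the previous sketch) onto a horizontal pixel position. The field-of-view and screen-width parameters are assumptions.

```java
// Sketch of placing a label in a live-mode overlay. The center of the
// screen corresponds to the compass heading; the screen edges correspond
// to plus or minus half the camera's field of view. Callers should skip
// labels whose offset exceeds half the field of view.
public class OverlayLayout {
    static int labelX(double bearingToLandmark, double headingDegrees,
                      double cameraFovDegrees, int screenWidthPx) {
        // Normalize the angular offset into the range [-180, 180).
        double offset = ((bearingToLandmark - headingDegrees + 540) % 360) - 180;
        return (int) (screenWidthPx / 2.0 + (offset / cameraFovDegrees) * screenWidthPx);
    }
}
```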


Future smartphones will provide information via a live video feed about locations at which the phone is pointed.


While advanced location awareness will open up new avenues for application development, this kind of physical sensing constitutes the most simplistic level of context awareness. As smartphone platforms mature, they will start to merge physical location data with information about user behavior—drawn from calendars, email, text messages, and other applications—to begin modeling your intentions. “The trend is toward long chains of integration,” says Dulaney.

For example, a context-aware phone might know that you’re sitting inside a movie theater and automatically mute itself. And if your calendar knows that you’re about to leave for a meeting on the other side of town, your phone could query a public traffic database and determine that you’re going to be gridlocked, then notify colleagues that you’ll be delayed. Or if you’re traveling and it’s dinnertime, your smartphone might suggest a nearby restaurant based on your location and previous dining history.
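As a flavor of how such rules might look, here is a minimal sketch of the first scenario. AudioManager and its ringer-mode constants are real Android API; the Place and CalendarEvent types are hypothetical stand-ins for whatever context sources the phone integrates.

```java
import android.content.Context;
import android.media.AudioManager;

// Hypothetical context sources; a real system would draw these from
// location sensing and the user's calendar.
interface Place { String category(); }
interface CalendarEvent { String title(); }

class TheaterRule {
    // Silence the ringer when location or calendar says "movie".
    static void apply(Context ctx, Place here, CalendarEvent now) {
        boolean inTheater = "movie_theater".equals(here.category());
        boolean atAMovie = now != null && now.title().toLowerCase().contains("movie");
        if (inTheater || atAMovie) {
            AudioManager audio =
                    (AudioManager) ctx.getSystemService(Context.AUDIO_SERVICE);
            audio.setRingerMode(AudioManager.RINGER_MODE_SILENT); // real Android call
        }
    }
}
```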

Researchers at PARC, based in Palo Alto, CA, have proposed four levels of system awareness: basic context awareness (including location, time, and other details of the physical environment); behavior awareness (typing, walking, standing, or clicking a button); activity awareness (shopping, dining, or traveling); and intent awareness (predicting the future). “Intention modeling is the Holy Grail for context awareness,” says Bo Begole, a principal scientist at PARC who manages ubiquitous computing initiatives. Begole’s team has been working on a prototype mobile application called Magitti (a contraction of “magic lens” and “graffiti”) that monitors users’ behaviors across a wide swath of applications on both desktop and mobile devices, including calendars, email, text messages, and Web browsing history, to give users recommendations of where to go and what to do in any particular location.

In a similar vein, Intel researchers have developed a prototype of a smartphone-based system that draws data from multiple applications and matches it against location data. One demonstration shows a user walking into a conference room, which automatically triggers the location-aware smartphone to check the user’s calendar, determine whether the user is in charge of the meeting and, if so, request that the conference room computer display his or her stored presentation on the room’s projector.
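The shape of that chain is easy to express in code. In the sketch below, every type is a hypothetical stand-in; the point is the integration pattern, not any published Intel API.

```java
// Hypothetical interfaces standing in for the calendar, the meeting
// record, and the conference room's computer.
interface Meeting {
    String organizerId();
    String slidesUri();
}
interface MeetingCalendar {
    Meeting currentMeetingIn(String roomId); // null if nothing is scheduled now
}
interface RoomProjector {
    void present(String slidesUri);
}

class ArrivalHandler {
    // Invoked when location sensing places the user inside a room.
    static void onEnteredRoom(String userId, String roomId,
                              MeetingCalendar calendar, RoomProjector projector) {
        Meeting meeting = calendar.currentMeetingIn(roomId);
        if (meeting != null && userId.equals(meeting.organizerId())) {
            projector.present(meeting.slidesUri()); // show the stored presentation
        }
    }
}
```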


Major Hurdles

For now, most of these context-aware mobile applications remain in the R&D stage. To bring contextual computing to the masses, phone manufacturers and software vendors will need to overcome some major hurdles. Not the least of these is the lack of open standards for exchanging context data between applications. In the absence of such standards, several companies are pursuing their own implementations, which demonstrate how this kind of contextual integration might eventually work.

Google’s Android relies on a middleware-like system of “intents” and “listeners” tied to each application. For example, an incoming phone call could send out an intent that could be picked up by any number of listening applications—like the phone receiver, email, or calendar—which could then notify other applications of the person’s activity. Users can establish the priority of listeners by setting preferences in individual applications. Developers can also create their own intents and listeners within any application.
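The listener half of this model can be illustrated with Android’s actual broadcast API. The receiver below reacts to the system’s incoming-call broadcast, which requires the READ_PHONE_STATE permission and a manifest registration for the android.intent.action.PHONE_STATE action; the com.example.USER_ACTIVITY action it rebroadcasts is a hypothetical name for notifying other context-aware components.

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.telephony.TelephonyManager;

public class IncomingCallListener extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        // The system includes the new call state as a string extra.
        String state = intent.getStringExtra(TelephonyManager.EXTRA_STATE);
        if (TelephonyManager.EXTRA_STATE_RINGING.equals(state)) {
            // Rebroadcast the activity under a hypothetical action name so
            // that other listening applications can react in turn.
            Intent activity = new Intent("com.example.USER_ACTIVITY");
            activity.putExtra("activity", "receiving_call");
            context.sendBroadcast(activity);
        }
    }
}
```

In Android’s own vocabulary these listeners are broadcast receivers matched by intent filters, which is what the article’s “intents” and “listeners” map onto.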


With intention modeling, users will receive recommendations based on both their computer and mobile device activity and on their current location.


At Intel, researchers have proposed a Context Aware Computing framework that allows for platform-level integration of contextual data that relies on an aggregator component managing context information from multiple sources. An access control policy enforcer manages permissions, while an analyzer executes rules, allowing clients to establish algorithms for interpreting context information. At the hardware level, a simple dynamic link library enables the phone to access sensors transmitting information about data such as location, orientation, and identity. At the software level, a set of Web services allows the phone to interface with applications anywhere in the cloud of accessible applications. Finally, a level of middleware glues the whole thing together, joining data from the sensors and applications with user input, storing contextual information, and allowing the phone to share that data across applications or even between different devices. The framework would operate more effectively, however, if it were integrated into an operating system. “We think these services could be driven further down into the system,” says Lester Memmott, a software architect in Intel’s Software Pathfinding and Innovation Division.
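Rendered as Java types, the framework’s shape might look something like the sketch below. These definitions are a rough illustration of the components named above (sources, aggregator, policy enforcer, analyzer), not Intel’s published API.

```java
import java.util.ArrayList;
import java.util.List;

// One piece of context data, e.g. ("location", coordinates, time of reading).
record ContextItem(String type, Object value, long timestampMs) {}

interface ContextSource {            // wraps a sensor library or a Web service
    ContextItem read();
}
interface AccessPolicyEnforcer {     // gates which clients may see which data
    boolean mayRead(String clientId, String contextType);
}
interface Analyzer {                 // runs a client's interpretation rules
    void evaluate(List<ContextItem> current);
}

class Aggregator {                   // the middleware glue described above
    private final List<ContextSource> sources = new ArrayList<>();
    private final AccessPolicyEnforcer policy;

    Aggregator(AccessPolicyEnforcer policy) { this.policy = policy; }

    void register(ContextSource source) { sources.add(source); }

    // Collect the current context on behalf of a client, applying policy,
    // then hand the visible items to that client's analyzer.
    void dispatchTo(String clientId, Analyzer analyzer) {
        List<ContextItem> visible = new ArrayList<>();
        for (ContextSource source : sources) {
            ContextItem item = source.read();
            if (policy.mayRead(clientId, item.type())) {
                visible.add(item);
            }
        }
        analyzer.evaluate(visible);
    }
}
```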

These prototype implementations allow developers to explore the possibilities of integrating contextual data, but the era of cross-platform interoperability seems a long way off. In today’s fractured smartphone market, a slew of competing operating systems battle for dominance, including those of Apple, Microsoft, Research in Motion, and Symbian, along with several Linux-based platforms such as Google’s Android, Intel’s Moblin, and LiMo. Beyond the operating systems lies a babel of competing standards, including Bluetooth, the Digital Living Network Alliance, Universal Plug and Play, and ZigBee, all requiring developers to create custom data interfaces for each type of device.

Right now, there seems to be little incentive to cooperate on developing context-aware applications. However, one ray of hope may come from the Open Handset Alliance, whose members—including Google, Intel, LG, Motorola, Samsung, Sprint, T-Mobile, and other major players in the mobile phone market—are taking tentative steps toward exploring a common framework for sharing contextual data.

Beyond the technical and business hurdles, perhaps the greatest challenge will come from the most unpredictable variable of all: users. In a world where smartphones start to collect and share more personal data about users’ behaviors and preferences, people may begin to feel increasingly uncomfortable with the idea of a smart device that seems to be tracking their daily activities. Regardless of whether such concerns are well founded, they may prove difficult to overcome.

“We need to address these concerns in a responsible way,” says Memmott. How can developers help to ameliorate users’ privacy concerns? “Transparency is the key,” says Begole, who advocates giving users control over their data and especially providing users with tools to manage the sharing and presentation of personal data.

If developers and manufacturers can work together to overcome these technical, business, and behavioral obstacles, the next generation of mobile devices may finally start living up to their brainy potential. The term “smartphone” may still be a misnomer, however: Smart they will undoubtedly be, but will anyone still care about the phone?


