MobileAR

From XLDB



Mobile Augmented Reality

Augmented Reality (AR) superimposes graphical elements on real images to augment the information provided to the user. It can be considered an intermediate technique between the simple capture of reality and fully immersive virtual reality environments. The first AR experience was developed by Ivan Sutherland in 1968 [Sutherland68], but its use has only recently become common, thanks to considerable technological advances, especially in mobile devices.

AR still presents some challenges, especially when used on mobile devices with small screens. Such devices, like smartphones and tablets, are becoming more widespread thanks to their portability, connection to mobile networks, increasing processing power and low cost, and they currently integrate several types of sensors, such as a camera, GPS and a digital compass. Both the widespread use of mobile devices and the broad range of application areas of AR justify the research we intend to perform to solve problems still faced when implementing this technique.

Since AR has to display graphics on top of real objects, there are two essential aspects to address: tracking, which identifies the objects in the real image over which the information should be displayed; and visualization, which covers the handling of the virtual elements to be drawn.

The project team has been working on visualization on mobile devices, both on maps and in AR, in the context of project PTDC/EIA/69765/2006, "Geo-Referenced Information Visualization". In MobileAR we want to continue this work by studying aspects related to tracking and visualization, in order to develop modules that can be integrated into an AR system for mixed (indoor and outdoor) environments.

Regarding tracking, the possible solutions can be divided into two major groups [Carmigniani2010]: those based on image analysis, which identify fiducial markers or natural objects, and those based on location and orientation sensors installed on the device, such as the GPS and the compass. The identification of fiducial markers is used in various AR applications, but it requires placing these marks on the objects, which may be visually obtrusive and is not always feasible.

The recognition of natural objects is computationally more demanding and requires a prior collection of images of the objects to be recognized. Tracking based on GPS and compass uses only capabilities already integrated in the mobile device, but cannot be used inside buildings. One of the objectives of this project is to create a mobile AR platform for mixed environments, enabling mobile AR applications that support transitions between outdoor and indoor environments (and vice versa) based only on the sensors installed in mobile devices. The challenge is to develop techniques to calculate the user's location and orientation inside a building from the last known outdoor measurements and information about the indoor environment.
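As an illustration of the sensor-based approach described above, the following sketch shows how a GPS position and a compass heading can place a point of interest (POI) on screen. This is our own minimal example, not the project's implementation; the function names, field of view and screen width are illustrative assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def screen_x(user_lat, user_lon, heading_deg, poi_lat, poi_lon,
             fov_deg=60.0, screen_width=1080):
    """Horizontal pixel position of a POI under a simple pinhole-camera
    approximation, or None if the POI lies outside the camera's field of view.
    fov_deg and screen_width are hypothetical device parameters."""
    # Angle of the POI relative to where the camera is pointing, in (-180, 180]
    rel = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
           - heading_deg + 540.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2.0:
        return None  # off-screen: no direct overlay possible
    return (rel / fov_deg + 0.5) * screen_width
```

A POI straight ahead of the user maps to the centre of the screen; one behind the user is reported as off-screen, which is exactly the case the off-screen cues discussed below must handle.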

Regarding visualization, there are several aspects to consider. One is the diversity of the surrounding environments, especially outdoors, where it is not possible to control all objects; this sometimes creates situations in which virtual elements are not distinguishable from the observed images. It is therefore important to study the adaptations that the virtual elements must undergo to make them distinguishable from the background image while still maintaining the semantics associated with their graphical attributes.

The addition of information to the real image does not have to be restricted to objects in the user's field of view. Including cues about the existence of relevant objects outside the field of view (off-screen objects) is an important aid to navigating the information space reachable by the user. Furthermore, in large information spaces, it is convenient to highlight the information that is most relevant to support the user's choices.

This project will develop an AR platform for mobile devices, for both outdoor and indoor environments, using tracking based on sensors integrated in the device itself, as well as visualization mechanisms that incorporate: automatic adaptation of symbology depending on the characteristics of the background image, representation of the relevance of objects, and cues to off-screen objects.
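One common way to realize off-screen cues is to clamp the ray from the screen centre towards the object's projected position to the screen border, and draw an indicator (e.g. an arrow) there. The helper below is a hypothetical sketch of that geometric step, not the project's actual visualization module:

```python
def edge_cue(target_x, target_y, width, height):
    """Return the screen-border position where an off-screen cue should be
    drawn for an object projected at (target_x, target_y), or None if the
    object is already on screen. Coordinates are in pixels."""
    cx, cy = width / 2.0, height / 2.0
    if 0 <= target_x <= width and 0 <= target_y <= height:
        return None  # on-screen: the object itself is visible, no cue needed
    dx, dy = target_x - cx, target_y - cy
    # Smallest scale that moves the centre-to-target vector onto a border
    scale = min(
        cx / abs(dx) if dx else float("inf"),
        cy / abs(dy) if dy else float("inf"),
    )
    return (cx + dx * scale, cy + dy * scale)
```

For example, an object projected far to the right of a portrait screen yields a cue position on the right border, at the height of the object's direction, so the user knows which way to turn the device.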


Research Team


Past Members


Publications

Paulo Pombinho, Ana Paula Afonso, Maria Beatriz Carmo, Mixed Environment Adaptive System for Point of Interest Awareness. Workshop on Location Awareness for Mixed and Dual Reality, in conjunction with IUI 2012, February 2012.

Paulo Pombinho, Maria Beatriz Carmo, Ana Paula Afonso, Hugo Aguiar, Location and Orientation Based Queries on Mobile Environments. International Journal of Computer Information Systems and Industrial Management Applications (3), 788-795, 2011.

Pedro Silva, Paulo Pombinho, Ana Paula Afonso, Tiago Goncalves, Rubi: An Open Source Android Platform for Mobile Augmented Reality Applications. Workshop on Mobile Augmented Reality: Design Issues and Opportunities, MobileHCI '11, the 13th International Conference on Human-Computer Interaction with Mobile Devices and Services, September 2011.

Paulo Pombinho, Maria Beatriz Carmo, Ana Paula Afonso, Hugo Aguiar, Location and Orientation Based Point of Interest Search Interface. Proceedings of MobileHCI 2010, the 12th International Conference on Human-Computer Interaction with Mobile Devices and Services, September 2010.

