Over the last few years, disciplines such as Virtual & Augmented Reality, Computer Aided Design, and Human Factors have undergone a profound evolution, driven largely by ever-increasing computational power and by the availability of new hardware and software technologies. Arbitrarily shaped non-flat displays, portable augmented reality glasses, micro-projection units, portable marker-less gesture tracking systems, increasingly accurate biometric sensors, and brain-machine interfaces are becoming commodities. Technologies once available only in large research labs are now entering the mainstream market, causing a profound paradigm shift at the level of the general public. This shift requires not only adapting old interaction schemas but also developing brand-new interaction mechanisms that can account for the complexities of this technological ecosystem.
In his talk, Dr. De Amicis presented his research on how users perceive, understand, and interact with real and digital objects during an immersive experience. First, he discussed results concerning the design of a multimodal experience that combines sketches, gestures, and speech to accomplish design tasks in different contexts, ranging from lightweight near-to-the-eye displays to large-scale displays and immersive environments. He also presented his research on the design of complex geo-intelligent visualization systems, products, and related services, addressing aspects such as interactive spatial data infrastructures, context awareness, and highly dynamic interaction. The second part of his talk described his current and future research on how Mixed Reality enables hybrid spaces with varying degrees of interaction, allowing the user to enter a highly effective information environment that heightens data awareness and understanding.