Natural User Interaction

NatUseInt-Project

In both demonstrator projects, we will focus on issues related to natural user interaction. Human-focused analysis aims to continuously monitor the current status of the users involved, including body movement, gestures, and facial expressions. Building on our existing expertise, we will develop enabling techniques to support natural interaction among multiple people in a virtual/augmented environment, such as speaker identification, 3D-audio-based speaker localisation, human action recognition, and facial expression recognition. It is in the area of natural user interaction that we will explore the use of sensors and the Internet of Things (IoT) paradigm.

While most AR/VR applications focus on bringing virtual or augmented experiences to individuals, more challenging scenarios, such as coordinating human-human interactions at one remote site and integrating them into a centralised virtual environment, have seldom been addressed. We anticipate that far greater potential can be unleashed from AR/VR with support for complex human-human interaction. Our several well-equipped labs provide the physical spaces needed to explore applications at the forefront of such scenarios and to develop novel enabling techniques.

Context-based data presentation aims to reduce both the cognitive load on users and the bandwidth demand caused by Big Data, in particular high-dimensional, heterogeneous data. This can be achieved with data summarisation or dimension-reduction techniques, which extract the essence of the data and make better use of display space.
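To illustrate the dimension-reduction idea, the minimal sketch below projects high-dimensional feature vectors onto their top principal components (standard SVD-based PCA) so they can be rendered in a compact 2D view. The array shapes, variable names, and the choice of PCA are our own illustrative assumptions, not part of the project's codebase.

```python
import numpy as np

def pca_project(features: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Project high-dimensional feature vectors onto their top
    principal components (SVD-based PCA).

    features: (n_samples, n_dims) array, e.g. per-frame body-pose
    or expression descriptors. Returns (n_samples, n_components).
    """
    # Centre the data so the principal axes pass through the mean.
    centred = features - features.mean(axis=0)
    # SVD of the centred data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    # Keep only the leading components for a compact 2D/3D display.
    return centred @ vt[:n_components].T

# Hypothetical example: 500 frames of 96-dimensional pose features
# reduced to 2D for display on a shared screen.
rng = np.random.default_rng(0)
pose_features = rng.normal(size=(500, 96))
projected = pca_project(pose_features, n_components=2)
print(projected.shape)  # (500, 2)
```

In practice the project might use any summarisation method; PCA is shown here only because it is a simple, widely known instance of reducing dimensionality to fit limited display space and bandwidth.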

A/Prof Craig Jin


Image credit: The Explainer explains . . . by Ahd Photography via Attribution-NonCommercial-NoDerivs 2.0 Generic licence.