Network Operations

In the demonstrator projects, we will explore the ability of network operators to deliver and broadcast immersive, interactive, real-time AR/VR via the network. Recent marked advances in computer graphics and display technologies have enabled rapid growth in locally-operated AR/VR, i.e., AR/VR in which everything is computed and generated locally without requiring internet access. While these locally-operated AR/VR applications provide immersive and realistic experiences with various types of real-world data, current technologies do not permit immersive, interactive AR/VR experiences to be rendered and controlled via the internet.

Despite significant advances in networking technologies, real-time AR/VR applications still present challenges. For example, visual information in the form of true 3D graphics data is very difficult for network operators to handle because the size of the data is unbounded. If the visual information consists of 2D images (such as static images or movies), the upper bound on the required network bandwidth is known and the current network infrastructure can provide sufficient bandwidth to support transmission. However, no such upper bound can be defined for true 3D graphics data, because the user's viewpoint can change interactively and almost instantaneously. It is also difficult to link the data streaming rate to the amount of data available: data must be available to support many different points of view, yet not every point of view will actually be explored. In summary, there has been no practical implementation of AR/VR over existing Wide Area Networks.
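As a rough illustration of why the 2D case is tractable, the peak bandwidth of a conventional video stream can be bounded from its resolution, frame rate and bit depth. The figures below (1080p at 60 fps and a nominal 200:1 compression ratio) are illustrative assumptions only, not project requirements, and there is no analogous closed-form bound for free-viewpoint 3D scene data.

```python
# Back-of-the-envelope bandwidth bound for a 2D video stream.
# All figures (resolution, frame rate, compression ratio) are assumed
# example values, not measurements from this project.

width, height = 1920, 1080        # pixels
bits_per_pixel = 24               # 8-bit RGB
frames_per_second = 60

raw_bps = width * height * bits_per_pixel * frames_per_second
print(f"Uncompressed upper bound: {raw_bps / 1e9:.2f} Gbit/s")    # ~2.99 Gbit/s

# With a hypothetical 200:1 codec compression ratio the stream fits
# comfortably within existing broadband links.
compression_ratio = 200
compressed_bps = raw_bps / compression_ratio
print(f"Compressed estimate: {compressed_bps / 1e6:.1f} Mbit/s")  # ~14.9 Mbit/s

# No comparable bound exists for interactive 3D graphics data: the data
# required depends on which viewpoints the user actually explores.
```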

In this project we explore two primary AR/VR networking innovations: (i) data encoding and (ii) multicast-like transmission over the unicast-based network.

The first area of research is new data encoding methods that encode 3D visual data, 3D audio data, haptic data, and other data required for AR/VR into a modified 2D image. The advantage of pursuing 2D image streaming (movie streaming) for AR/VR is that we can build upon existing 2D image streaming techniques. We will study and develop methods that enable network operators to estimate the bandwidth capacity required for an interactive, real-time AR/VR display, and investigate how sufficient bandwidth can be provided for interactive, real-time AR/VR using existing or near-future network infrastructure.

The second area of research is enabling multicast-like transmission of AR/VR data using the unicast-based network. The clear advantage of the multicast framework is that the server only needs to deliver/broadcast the data once. While modern network infrastructure is capable of multicast data transmission, there is no business model to accurately charge for multicast network usage, which may span infrastructure operated by different network operators. For this reason, we will study and develop techniques for multicast-like transmission of AR/VR data via the unicast-based network. This research will utilize key innovations developed at USyd. Simplified sketches of both ideas are given below.
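To make the first idea concrete, the sketch below packs per-pixel colour, depth and a haptic intensity channel into a single multi-plane 2D frame that an ordinary video pipeline could carry. The layout (RGB, 16-bit depth split across two byte planes, one haptic plane) and the 10 m depth range are hypothetical illustrations, not the encoding the project will actually develop.

```python
import numpy as np

def pack_frame(rgb, depth_m, haptic):
    """Pack colour, depth and haptic data into one 8-bit 'modified 2D image'.

    rgb     : (H, W, 3) uint8 colour image
    depth_m : (H, W) float32 depth in metres
    haptic  : (H, W) float32 haptic intensity in [0, 1]

    Illustrative layout: the packed frame is (H, W, 6) --
    channels 0-2 colour, 3-4 depth quantised to 16 bits, 5 haptic.
    """
    h, w, _ = rgb.shape
    depth_q = np.clip(depth_m / 10.0, 0.0, 1.0)           # assume a 10 m max range
    depth16 = (depth_q * 65535).astype(np.uint16)

    packed = np.empty((h, w, 6), dtype=np.uint8)
    packed[..., 0:3] = rgb
    packed[..., 3] = (depth16 >> 8).astype(np.uint8)      # depth high byte
    packed[..., 4] = (depth16 & 0xFF).astype(np.uint8)    # depth low byte
    packed[..., 5] = (np.clip(haptic, 0.0, 1.0) * 255).astype(np.uint8)
    return packed
```

For the second idea, a minimal sketch of multicast-like delivery over a unicast-only network is an application-layer fan-out: the server encodes each frame once and then sends the same payload to every subscribed client over ordinary unicast sockets. The addresses and the use of UDP are placeholders, and real frames would need fragmentation and loss handling, which are omitted here.

```python
import socket

def unicast_fanout(payload: bytes, subscribers):
    """Send one encoded AR/VR frame to every subscriber over plain unicast UDP.

    subscribers: iterable of (host, port) pairs, e.g. [("10.0.0.2", 9000), ...].
    The payload is produced once; only the per-client send is repeated,
    mimicking multicast semantics on a unicast-based network.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for addr in subscribers:
            sock.sendto(payload, addr)
    finally:
        sock.close()
```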

A/Prof Craig Jin


Image credit: Virtual Reality Demonstrations by Knight Center for Journalism in the Americas, University of Texas at Austin via Creative Commons Attribution 2.0 Generic licence.