Broadcast Augmented Reality 2: The live experience

When we were creating the first BroadcastAR experience way back in 2011, we aimed for an application that would provide a great interactive experience to the audience with no barrier to entry. The key was to have immersive, close-to-photorealistic content that lets the viewer forget about what is really going on in the background. We obviously couldn’t record dinosaurs roaming around in front of a green screen, and since we had a world-class 3D team waiting to work on experiences, 3D content was the clear choice. The content quality was never really a question, but it doesn’t matter how great the 3D team is if the technology delivering the content isn’t up to the challenge.

Photorealistic rendering in real time?

Looking into the different technologies we could use, we realised that although there were ways to use a game engine to run the animation, the results they offered were not very close to what we wanted to achieve. Unreal and CryEngine were still fairly inaccessible for our budget, and the learning curve was too steep for a project with a relatively tight deadline. Cinder and openFrameworks were quite capable tools for real-time effects, but the level of character animation, the shaders and the effects we planned to use made them impractical, at least at our own level of experience with them. We used and loved Unity3D for mobile AR and small games, but the toolset it offered back then was not really suitable for near-photorealistic rendering. To make a long story short, we decided to pre-render the scenes in Autodesk Maya, and it’s safe to say that it was the best decision in terms of quality. On the delivery side, this let us focus on the operators’ user experience and on giving them the most flexible and powerful tool possible for playing the content.

[Image: BroadcastAR large-screen Jurassic Park augmented reality experience in a shopping centre]

Challenges

The quality of our licensed experiences is exceptional, and watching people interact with our virtual content, the level of immersion and presence it offers often surprises even us. Still, we have had our challenges with this approach over the years.

  1. Because the content is pre-rendered, it is hard and often costly to customize or modify: movie-quality CGI means long and expensive render times.

  2. Because the content is rendered from a specific camera angle, the installation needs a close-to-matching camera angle on location, which requires a specific footprint range to work properly. Thanks to BroadcastAR’s tools we can adapt the content to a wide range of different locations, but limitations remain; for example, the characters can’t make use of the real environment.

To be completely honest, the above challenges rarely turned into deal-breaking problems, but they kept us looking for a more flexible solution, one that would be more forgiving of different venue layouts and provide more variety in animation and interaction.

[Image: BroadcastAR augmented reality experience in a shopping centre featuring dinosaurs]

Photorealistic rendering in real time? (again)

As I mentioned, we’ve used Unity3D for mobile AR (for the 2013 edition of the Guinness World Records book and the Isle of Wight app, among others), so we are familiar with it and comfortable working with it, even though we’ve had our differences. Time and time again we revisited the idea of using it for BroadcastAR, to “live render” our content. Whenever new content was in the works, our CD and CEO would come to me and ask, “Is Unity there yet?” And for a long time I had to say, “Close, but not just yet.” In truth, we already had the Unity basis of the next generation, BroadcastAR 2.0. While we had tools like Marmoset Skyshop, and there were other plugins like Alloy that provided physically based shading, it was the release of Unity 5 that opened the floodgates, and we went full speed on developing the new version.

With Unity providing the engine that runs BroadcastAR, we can achieve realistic rendering on the camera image using the Standard shader, high-quality textures, real-time global illumination and, in some cases, custom shaders. While it’s still not hard to tell the difference between pre-rendered and real-time rendered CGI, the additional benefits of real-time content mean the overall experience gets even better.
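To give a rough idea of how the live camera image ends up behind the real-time rendered characters, here is a minimal sketch, assuming a simple setup where a full-screen RawImage shows the capture device’s feed and the scene camera draws the virtual content over it. The field names and resolution are illustrative, not our production configuration.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Minimal sketch: a live camera feed displayed behind real-time rendered content.
// A full-screen RawImage on a background canvas shows the capture device,
// while the normal scene camera draws the virtual characters on top.
public class CameraFeedBackground : MonoBehaviour
{
    [SerializeField] private RawImage backgroundImage; // full-screen RawImage behind the 3D content

    private WebCamTexture feed;

    private void Start()
    {
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogWarning("No capture device found.");
            return;
        }

        // Use the first available capture device; a broadcast camera would
        // typically arrive through a dedicated capture card instead.
        feed = new WebCamTexture(WebCamTexture.devices[0].name, 1920, 1080, 30);
        backgroundImage.texture = feed;
        feed.Play();
    }

    private void OnDestroy()
    {
        if (feed != null)
        {
            feed.Stop();
        }
    }
}
```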

Challenges (again)

Needless to say, we ran into some challenges along the way. Just to mention a couple:

Because we use a number of assets from the Asset Store as well as custom-made plugins, we had to select a rendering path that would work with most of them, and then find a way to make the rest play nicely with it.

BroadcastAR usually runs on two screens: a large LED screen that shows the main experience full screen, and a control screen with the user interface and a preview of the LED content for the operator. We had to find a way to mirror the full-screen output on the control monitor without a performance cost (one way to approach this is sketched at the end of this list).

Recording the experiences had to be seamless and could not have any effect on performance.

Although we like the new GUI system that Unity 5 offers, we simply had to separate the GUI from the main experience, opting for a WPF control application.

Communication and synchronization between the control app and Unity, or the remote control and Unity, or the remote control and the control app, or the remote control and the control app and Unity, or… you get the idea.

Imagine this list going on for about two more pages...
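To illustrate the mirroring challenge mentioned above, one possible approach is to render the experience once into a RenderTexture and show that same texture full screen on the LED display and as a smaller preview on the control screen, so the preview is essentially free. The sketch below assumes Unity’s multi-display support and uses placeholder names; it is not our actual implementation.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: render the main experience once into a RenderTexture, then show that
// texture full screen on the LED display and as a small preview on the control
// screen. The scene is rendered a single time, so mirroring costs almost nothing.
public class DualDisplayMirror : MonoBehaviour
{
    [SerializeField] private Camera experienceCamera;  // renders the AR scene
    [SerializeField] private RawImage ledScreenImage;  // full-screen image on a canvas targeting display 1
    [SerializeField] private RawImage controlPreview;  // small preview on a canvas targeting display 2

    private RenderTexture sharedOutput;

    private void Start()
    {
        // Activate the second monitor if one is connected
        // (multi-display activation only works in standalone builds).
        if (Display.displays.Length > 1)
        {
            Display.displays[1].Activate();
        }

        sharedOutput = new RenderTexture(1920, 1080, 24);
        experienceCamera.targetTexture = sharedOutput;

        // Both UI elements reference the same texture, so the operator preview
        // adds no extra rendering work.
        ledScreenImage.texture = sharedOutput;
        controlPreview.texture = sharedOutput;
    }
}
```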

[Image: BroadcastAR large-screen Jurassic Park augmented reality experience in a shopping centre]

Advantages

In spite of all of the above, moving to real-time rendering offers the flexibility and variety we were looking for.

Using Unity’s animation system, we can create a huge variety of motions for the characters. It no longer has to be the same animation playing over and over; we can mix animations based on viewer interactions. For example, by giving the characters a field of view, they can become aware of a viewer in a certain spot and react to the user’s motions and gestures (a rough sketch follows these examples).

In the same way, by adding behaviours to the characters, they can react differently to different situations.

Randomizing certain characters can lead to different animations as well. To use a dinosaur example, imagine a poor Iguanodon accidentally walking in on a T-Rex and then getting chased off the screen (whatever happens off-screen is best left unknown).
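As a rough illustration of the viewer-aware behaviour described above, a character script might look something like the sketch below. The parameter names, animator parameters and thresholds are made up for the example; the real characters use considerably more elaborate logic.

```csharp
using UnityEngine;

// Sketch: a character that notices a tracked viewer inside its field of view
// and reacts, falling back to randomised idle variations otherwise.
public class ViewerAwareCharacter : MonoBehaviour
{
    [SerializeField] private Transform viewer;          // viewer position from the tracking system
    [SerializeField] private Animator animator;
    [SerializeField] private float fieldOfView = 70f;   // degrees
    [SerializeField] private float awarenessRange = 6f; // metres
    [SerializeField] private int idleVariations = 3;

    private float nextIdleChange;

    private void Update()
    {
        Vector3 toViewer = viewer.position - transform.position;
        bool viewerInSight =
            toViewer.magnitude < awarenessRange &&
            Vector3.Angle(transform.forward, toViewer) < fieldOfView * 0.5f;

        // Blend to a "curious" animation while someone stands in view.
        animator.SetBool("ViewerInSight", viewerInSight);

        // When nobody is in sight, rotate through random idle variations so the
        // loop never looks exactly the same twice.
        if (!viewerInSight && Time.time >= nextIdleChange)
        {
            animator.SetInteger("IdleVariant", Random.Range(0, idleVariations));
            nextIdleChange = Time.time + Random.Range(4f, 8f);
        }
    }
}
```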

Positioning the Unity camera to match any real camera is a huge benefit. We can even have a moving real camera and match its movement with the virtual camera.
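Matching the virtual camera to a physical one mostly comes down to reproducing its pose and field of view. Here is a small sketch, assuming the simple pinhole camera model; the focal length and sensor size are placeholder values that would come from the actual broadcast camera and lens.

```csharp
using UnityEngine;

// Sketch: derive the virtual camera's vertical field of view from the real
// camera's lens parameters. Position and rotation would be set to the measured
// pose of the physical camera (and updated every frame if that camera moves).
public class PhysicalCameraMatch : MonoBehaviour
{
    [SerializeField] private Camera virtualCamera;
    [SerializeField] private float focalLengthMm = 35f;
    [SerializeField] private float sensorHeightMm = 24f; // e.g. full-frame sensor height

    private void Start()
    {
        // Pinhole model: fov = 2 * atan(sensorHeight / (2 * focalLength))
        float fovRadians = 2f * Mathf.Atan(sensorHeightMm / (2f * focalLengthMm));
        virtualCamera.fieldOfView = fovRadians * Mathf.Rad2Deg;
    }
}
```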

Changing the lighting in real time helps blend the virtual characters more easily into the real-world environment.
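A lighting tweak can be as simple as easing a key light toward values set by the operator or measured at the venue. This tiny sketch uses made-up parameter names and defaults:

```csharp
using UnityEngine;

// Sketch: ease the key light toward a target colour and intensity so virtual
// characters blend with the venue's actual lighting conditions.
public class LiveLightingControl : MonoBehaviour
{
    [SerializeField] private Light keyLight;
    [SerializeField] private Color targetColor = Color.white;
    [SerializeField] private float targetIntensity = 1.2f;
    [SerializeField] private float blendSpeed = 0.5f;

    private void Update()
    {
        keyLight.color = Color.Lerp(keyLight.color, targetColor, blendSpeed * Time.deltaTime);
        keyLight.intensity = Mathf.Lerp(keyLight.intensity, targetIntensity, blendSpeed * Time.deltaTime);
    }
}
```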

We can also customise certain characters in real time to match the user’s input.

And of course, we can use the same content across different platforms, mobile AR and VR being the two we’re most excited about.
