More selected projects

A Virtual-Physical world interaction (The Dogville experiment)

What kinds of conversations can emerge between two users, one in the physical world and the other in the virtual one? Referencing the film "Dogville", I built this project around the notion of "taking control": the player in control could decide either to help or to exploit the other (at a later, more narrative-driven stage), depending on how the players choose to act.

produced by: Elli (Elisavet) Koliniati

Introduction

For this project I decided to create an interactive experience between two users. One user wears a VR headset, while the other interacts with their physical environment.

For the physical environment I created an interactive projection that triggers events inside the VR environment, affecting what the VR player sees, while events inside VR also update what the projection mapping shows.

Concept and background research

The player in the physical world sees a projected "map" in front of them, on which the VR player appears as a moving green dot. They can also see yellow dots representing moving NPCs (Non-Player Characters) in the VR environment; these use an AI pathfinding system and make the projection mapping of Dogville feel inhabited.

The basic interaction the user can make is to trigger animations with a moving light in the scene, which also triggers events in VR (making a light move in VR as well). For example, the user can point their light at a car drawn on the map and immediately see it move across it. This is done by triggering the car's movement inside Unity, where it uses the same AI pathfinding system as the NPCs; the car's movement in OpenFrameworks then follows accordingly, which makes it feel far more alive than coding its behaviour in OpenFrameworks directly.
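
As a rough sketch of how that trigger could look on the OpenFrameworks side (the OSC address "/car/move", the car's rectangle and the variable names are placeholders of mine, and the ofxOscSender is assumed to be set up as in the Technical section below):

```cpp
// Somewhere in ofApp::update(), after the brightness tracker has produced lightPos.
// carArea is the car's rectangle on the projected map (placeholder coordinates).
ofRectangle carArea(300, 200, 120, 60);
if (carArea.inside(lightPos.x, lightPos.y) && !carTriggered) {
    ofxOscMessage m;
    m.setAddress("/car/move");     // hypothetical address that Unity listens for
    sender.sendMessage(m, false);  // Unity then sets the car's NavMeshAgent destination
    carTriggered = true;           // fire once until the car has finished its path
}
```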

The goal was to test some basic interactions and see how an interactive projection mapping in OpenFrameworks could take advantage of Unity's features, and vice versa.

The real goal, however, is to explore the dynamics that can emerge from these interactions and the sense of control they create. Both players can give each other inputs and affect what the other sees. The player with an overview of the whole map from the start seems to be in a more advantageous position than the VR player who actually inhabits it. When relationships of power start to emerge, how will the players behave? That is the actual question behind this whole attempt, though it still has a long way to go before potential users can provide any answers.

Technical

For this project I worked on the communication between OpenFrameworks and Unity via OSC messaging. But in order to have actual interaction between the physical input and the OpenFrameworks->Unity pipeline, I also needed to implement computer vision.
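
In practice that means the OpenFrameworks app holds both an OSC sender and a receiver via the ofxOsc addon. A minimal setup sketch (the IP address and port numbers here are placeholders, not the ones the project actually uses):

```cpp
// ofApp.h (excerpt) - requires the ofxOsc addon
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();

    ofxOscSender   sender;     // OpenFrameworks -> Unity
    ofxOscReceiver receiver;   // Unity -> OpenFrameworks
};

// ofApp.cpp (excerpt)
void ofApp::setup() {
    // Placeholder host/ports: Unity listens on 7000, OpenFrameworks listens on 7001.
    sender.setup("192.168.1.10", 7000);
    receiver.setup(7001);
}
```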

For the player in the physical world I used:

- OpenFrameworks to do the interactive projection mapping

- A camera with brightness tracking, used to follow the light the player moves in order to interact with both the projection mapping and the VR scene (see the sketch below).
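
A minimal sketch of that brightness tracking, assuming a plain ofVideoGrabber and a brightest-pixel search (the member names, resolution and camera-to-projection mapping are assumptions of mine; a thresholding or ofxOpenCv approach would work just as well):

```cpp
// In ofApp: ofVideoGrabber grabber; ofVec2f lightPos;
// grabber.setup(640, 480) is called once in ofApp::setup().
void ofApp::update() {
    grabber.update();
    if (grabber.isFrameNew()) {
        ofPixels& pix = grabber.getPixels();
        float maxBrightness = 0;
        // Scan the frame and keep the brightest pixel - with a torch or phone light
        // this is a workable approximation of where the player is pointing.
        for (size_t y = 0; y < pix.getHeight(); y++) {
            for (size_t x = 0; x < pix.getWidth(); x++) {
                float b = pix.getColor(x, y).getBrightness();
                if (b > maxBrightness) {
                    maxBrightness = b;
                    lightPos.set(x, y);
                }
            }
        }
        // lightPos is in camera space; it still needs to be mapped into the
        // projection's coordinate space (e.g. via a calibration/homography)
        // before being compared against regions of the map.
    }
}
```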

For the player in VR I used:

- An Oculus Quest VR headset; the player's position is sent back to OpenFrameworks over OSC and depicted on the projection mapping as a moving green dot.

- Unity to develop the VR experience

- Unity's AI navigation system (NavMesh Agents) to introduce some moving NPCs into the scene and make it feel more "alive"; these are also depicted as moving yellow dots in the projection mapping (see the receiving sketch below).
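
On the OpenFrameworks side, the incoming positions are drained from the OSC receiver each frame and drawn as the green and yellow dots. A sketch of what that could look like (the OSC addresses, argument layout, Unity world bounds and the scaling to the map are assumptions of mine):

```cpp
// In ofApp: glm::vec2 playerDot; std::vector<glm::vec2> npcDots; (receiver set up as above)
void ofApp::update() {
    while (receiver.hasWaitingMessages()) {
        ofxOscMessage m;
        receiver.getNextMessage(m);
        if (m.getAddress() == "/player/pos") {
            // Assumed layout: two floats, the player's x/z in Unity world space.
            playerDot.x = ofMap(m.getArgAsFloat(0), -10, 10, 0, ofGetWidth());
            playerDot.y = ofMap(m.getArgAsFloat(1), -10, 10, 0, ofGetHeight());
        } else if (m.getAddress() == "/npc/pos") {
            // Assumed layout: NPC index followed by its x/z position.
            int id = m.getArgAsInt32(0);
            if (id >= 0 && id < (int)npcDots.size()) {
                npcDots[id].x = ofMap(m.getArgAsFloat(1), -10, 10, 0, ofGetWidth());
                npcDots[id].y = ofMap(m.getArgAsFloat(2), -10, 10, 0, ofGetHeight());
            }
        }
    }
}

void ofApp::draw() {
    // ... the projection-mapped Dogville map is drawn first ...
    ofSetColor(ofColor::green);
    ofDrawCircle(playerDot.x, playerDot.y, 8);   // the VR player
    ofSetColor(ofColor::yellow);
    for (auto& npc : npcDots) {
        ofDrawCircle(npc.x, npc.y, 6);           // the NavMesh-driven NPCs
    }
}
```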

Future development

Now that I have the basic mechanics working and both programs "speaking" to each other, I would like to dive deeper into what sorts of interactions I want to enable.

One step is to develop a narrative that immerses the players in a setting where they have to make decisions, so that the whole thing feels more like a game.

A second step would be to go deeper into the computer vision aspect and have the computer recognize specific objects placed on the projection-mapped surface. These recognized objects could then be translated into spawned objects inside VR.

A third step could be to have VR actions trigger some real-world effects, like a light turning on or off, possibly involving some Arduino communication as well.

Self evaluation

I managed to reach my initial goal by getting the basic interactions working. However, this is still a very basic concept and setup (no matter how complex it felt to me) and it doesn't yet give users a meaningful experience. This was a test to see what works; the real challenge is to actually make something out of it.

References

Dogville by Lars von Trier (obviously) for the concept

Unity/OSC implementation:

https://github.com/jorgegarcia/UnityOSC

https://thomasfredericks.github.io/UnityOSC/

OpenFrameworks/OSC

Unity-OpenFrameworks-OSC working together example!

https://github.com/nicoversity/unity_ios_of_osc/blob/master/README.md