More selected projects

Between

“Between” is a real-time digitalised performance that happens across two different programming environments: openFrameworks and Processing. The dancer’s movement is captured by the depth camera embedded in the Kinect sensor and transformed into a point cloud in Processing, forming the silhouette of the dancer. Relevant motion data is then sent to openFrameworks to control several properties of the virtual world. The two visual outputs take place simultaneously, delivering a sensation poised between the distinction and the ambiguity of the visual comparison.

produced by: Yishuai Zhang

 

Introduction

“Between” is a technoetic performance that seeks to explore consciousness and connectivity through digital, telematic and spiritual means, embracing both interactive and psychoactive technologies. A special form of performance is introduced into the choreography, combining movements from modern techno dance parties with the traditional Chinese form of Taijiquan. The audience simultaneously perceives two sets of visualisations while exploring the underlying concept of “between”.

 

Concept and background research

The concept of “between” starts with the intention to reveal the mysterious interconnection between our physical and metaphysical reality. A meditative state of being, induced by the specific choreography, is intended to evoke biophotonic coherent patterning, which functions as an information network: it intertwines with the other senses, detects correlated stimuli across modalities, and fuses them into a single percept before their interpretation happens. (Academic research on this topic was conducted as the final project for computational art and theory.) In “Between”, the idea is to present two distinct visualisations that resonate with each other in an uncanny fashion, conveying a sense between clarity and ambiguity, physicality and virtuality, interior exploration of consciousness and exterior expression of the body.

Technical / Self-evaluation

Two programming techniques define the concept of this project and directly influence the visual outcomes.

In Processing, the screen output represents the human being’s existence in physical form. In order to relate the dance to the other world built in openFrameworks, I apply an average-position tracking technique to loosely capture the dancer’s spatial movement, and have Processing send the data to openFrameworks as OSC messages.
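A minimal Processing sketch of this step is shown below. It assumes the OpenKinect-for-Processing and oscP5 libraries; the depth thresholds, the OSC address “/dancer/avg” and the port numbers are placeholders rather than the exact values used in the performance.

```java
import org.openkinect.processing.*;
import oscP5.*;
import netP5.*;

Kinect kinect;
OscP5 osc;
NetAddress ofApp;                        // the openFrameworks receiver

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.initDepth();
  osc = new OscP5(this, 9000);           // local listening port (unused here)
  ofApp = new NetAddress("127.0.0.1", 12345);
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();
  float sumX = 0, sumY = 0, count = 0;
  for (int x = 0; x < kinect.width; x++) {
    for (int y = 0; y < kinect.height; y++) {
      int d = depth[x + y * kinect.width];
      if (d > 300 && d < 1500) {         // keep points inside the dancer's range
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count > 0) {
    OscMessage m = new OscMessage("/dancer/avg");
    m.add(sumX / count);                 // average x of the tracked silhouette
    m.add(sumY / count);                 // average y
    osc.send(m, ofApp);
  }
}
```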

In openFrameworks, the received data in the buffer is organised and mapped into different groups and values, each independently triggering and controlling a different function. The world, filled with spherical and cubic objects, consists of three main components: the light source (rotation, intensity, position), the materials of the 3D objects, and the movement and interaction of the objects (alignment, cohesion and separation). These properties are influenced by the data sent from Processing while the dancer moves in real space, as in the sketch below.
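The following single-file openFrameworks sketch shows the receiving side of this mapping, assuming the ofxOsc addon. The address “/dancer/avg”, the port and the mapping ranges are placeholders, and the two flocking weights simply stand in for the full agent system described above.

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    ofLight light;
    float cohesion = 1.0f, separation = 1.0f;   // placeholder flocking weights

    void setup() {
        receiver.setup(12345);                  // same port Processing sends to
        light.setPointLight();
    }

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            if (m.getAddress() == "/dancer/avg") {
                float x = m.getArgAsFloat(0);   // dancer's average x (0..640)
                float y = m.getArgAsFloat(1);   // dancer's average y (0..480)
                // map the dancer's position onto properties of the virtual world
                light.setPosition(ofMap(x, 0, 640, -500, 500),
                                  ofMap(y, 0, 480, -400, 400), 300);
                cohesion   = ofMap(x, 0, 640, 0.2f, 2.0f);
                separation = ofMap(y, 0, 480, 0.2f, 2.0f);
            }
        }
        // cohesion / separation would then feed the agents' steering update
    }

    void draw() {
        ofEnableDepthTest();
        ofEnableLighting();
        light.enable();
        ofTranslate(ofGetWidth() / 2, ofGetHeight() / 2);
        ofDrawSphere(0, 0, 0, 100);             // stand-in for the sphere/cube agents
        light.disable();
        ofDisableLighting();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```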

I am generally satisfied with this project at this stage, because this artefact effectively complements my current research on computational art theory. Through this project I have gradually clarified the intention and direction of my artistic practice. The instantaneous occurrence of the digitalised figures brings the concept of performance art beyond its spatial-temporal scale and opens up more possibilities for integrating technologies into artistic practice at this particular moment.

In the future, I will focus more on simulations of reality and of various energy fields, using technology to discover their intricate interconnections. The underlying metaphysical reality can then be actively explored and discussed through interdisciplinary collaboration between art and science.

Future development

One further development of this project could be improving the methods of capturing and handling the data passed between the two programs. Instead of the average-position tracking technique, machine-learning-based movement tracking could be used in one program to bring greater accuracy and a higher level of coherence to the visual expression in the other.

Another potential development could be the integration of auditory output: the same data could be fed to ofxMaxim to generate real-time sound, as sketched below. Furthermore, a biometric sensor such as a heartbeat sensor could be attached to the dancer, providing real-time data to drive the sound. This approach would theoretically complete the concept of the project and practically add more depth to the performance. Ideally, a second person apart from the dancer could act as a sound performer, devoting more attention to the aesthetics of the music.
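A hedged sketch of that auditory extension with ofxMaxim is given below; the variable dancerX stands in for the OSC data described earlier, and the pitch range is an arbitrary placeholder. It uses the older float-buffer audio callback common in ofxMaxim examples.

```cpp
#include "ofMain.h"
#include "ofxMaxim.h"

class ofApp : public ofBaseApp {
public:
    maxiOsc tone;              // one sine oscillator from ofxMaxim
    float dancerX = 0.5f;      // 0..1, would be updated from the received OSC data

    void setup() {
        maxiSettings::setup(44100, 2, 512);              // sample rate, channels, buffer
        ofSoundStreamSetup(2, 0, this, 44100, 512, 4);   // stereo output
    }

    void audioOut(float* output, int bufferSize, int nChannels) {
        float freq = ofMap(dancerX, 0, 1, 80, 880);      // position -> pitch
        for (int i = 0; i < bufferSize; i++) {
            float sample = tone.sinewave(freq);
            output[i * nChannels]     = sample;          // left
            output[i * nChannels + 1] = sample;          // right
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```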

References

Andy Lomas. “particleLab5”, coding reference from Computational Form and Process, Week 6: Artefacts and Pipelines. https://learn.gold.ac.uk/course/view.php?id=12881#section-8

Andy Lomas. “particleLab6”, coding reference from Computational Form and Process, Week 7: Multi-Agent Systems. https://learn.gold.ac.uk/course/view.php?id=12881#section-8

Theo Papatheodorou. “OSC Receiving”, coding reference from Workshops in Creative Coding, Week 15: OSC Messages. https://learn.gold.ac.uk/course/view.php?id=12859&section=16

Daniel Shiffman. “Kinect Point Cloud example”. http://shiffman.net/p5/kinect/

Daniel Shiffman. “Tracking the average location beyond a given depth threshold”. http://shiffman.net/p5/kinect/, https://github.com/shiffman/OpenKinect-for-Processing