It's a dialogue between shader and reality. The metamorphosis of touch probes people's fragile sense of the boundary between the dematerialized shader and the actual screen. The screen can be regarded as a spectrum of the experiencer's volatile emotions, distorting its limits and immersing the spectator in a brief detachment from reality. No experience goes uncounted; creating an immaterial boundary is the process by which people wander through their own hearts. It all starts when the screen seduces you into interacting with it.
produced by: Friendred
The video documents the whole process of making this project, including fabricating, sawing, sewing, programming, testing, etc.
Two Sketch Plans (before I started the SHADER project)
1. Create a massive spandex interactive installation. There are a few problems I need to consider:
a. The spandex fabric size: the fabric usually comes in only 1.5 meter widths. If I want to create a 180 * 300 screen, I have to sew pieces together, and dealing with the seam will be a problem.
b. I need a good-quality projector, high lumen (at least 2600), and the projection distance needs to stay within a 2-3 meter range; otherwise it will be hard to fit in one room.
c. I need to build a strong support, since the performer needs to touch the screen with her body; if the frame isn't commensurate with the force it has to tolerate, it will be rickety and the screen will fall down.
d. The Kinect detects depth roughly in the 500-4000 mm range, so keep that in mind.
e. Then, last but not least, drawing on the screen and creating the sounds.
2. Also use spandex material, but put it in front of a TV instead of using the real screen.
a. The first problem is that I need a new projector with a short throw distance; the one I saw on Amazon is good, with a projection distance of 0.5-1 m.
b. I need to buy one blush-pink TV.
c. The merit is that I can probably get a more accurate detection range, but the distance still needs to be longer than 500 mm.
d. Also, I don't need to build the heavy support.
Basically, I used the Kinect to capture a depth image and read the grayscale depth values as brightness pixels. Then I used a certain range of pixel brightness to trigger each single scene.
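The triggering idea above can be sketched in a few lines of standalone C++ (names and thresholds are my own illustration, not the project's actual code): each depth pixel is an 8-bit grayscale value, and a scene fires when enough pixels fall inside that scene's brightness band.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: a brightness band that corresponds to one scene's
// trigger depth range in the grayscale depth image.
struct Band { uint8_t lo, hi; };

// Count how many pixels of the grayscale depth image fall inside a band.
int countInBand(const std::vector<uint8_t>& depthGray, Band b) {
    int n = 0;
    for (uint8_t v : depthGray)
        if (v >= b.lo && v <= b.hi) ++n;
    return n;
}

// A scene is "triggered" when the count passes a threshold; requiring more
// than one pixel helps filter out single-pixel sensor noise.
bool sceneTriggered(const std::vector<uint8_t>& depthGray, Band b, int minPixels) {
    return countInBand(depthGray, b) >= minPixels;
}
```

In the real project this loop would run over the Kinect's depth frame every update; here a plain vector stands in for the frame so the sketch stays self-contained.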
The fabric I chose is spandex. Through measuring, I found the spandex can be stretched out by more than 20 cm, which is quite good for depth detection. The problem is that the aspect ratio of the projector I used is 16:9, but the proportion of the Kinect 2 depth image is roughly 4:3 (approximately 640 * 480 px). Thus, when mapping to the real screen, I could not get accurate position output. The tricky way to solve it is to map both to the same ratio. In this project I also needed to deal with noise, as the legs of the timber structure can influence the Kinect's detection. So the solution ended up being to map a specific area of the gray depth image onto the frame whose mapping proportion had already been adjusted. Consequently, I wrote this in the nested for loop where I go through each pixel of the depth image:
posX = ofMap(x, 128, 382, 1609.5, 310.5);
In total there are 8 different scenes, all expressing lexical distortions between shader and real life, blurring the boundary and submerging the audience into a 'transitional' environment.
In the first scene, when you put your hand in the top-left corner, the iridescent metamorphosis becomes blurred and the drape fades away. By contrast, if you slightly press your hand toward the right, peristaltic, dynamic drapes come out.
In the second scene, the ripple follows your hand position as long as you press the screen, and goes back to its original place when you release your hand. Use one hand here rather than both; otherwise the flickering ripple will jump between your hands, no matter which one is touching.
In the third scene, the interaction is simply sliding your hands between the left and right corners. Then you will see the cloth wireframe flutter lightly, following your hands' orientation.
Scene 4: the black-and-white moving strips can make people feel vertigo, but you can control them. If you press your hand, or whichever part of your body, into the middle of the screen (I don't literally mean insert it), you can slow them down. The middle part also becomes the area of greatest concentration. If you move toward the left side, you send a cluster of intense lighting strips toward the right side, and vice versa.
Scene 5: the interaction in this scene is an overlapping 'computational glitch' that comes out from far away, commensurate with your position. The glitch shows up on the right part of the screen if you touch the left area.
Scene 6: in this part, what you need to do is pick a place on the screen and press it. The hole comes out fast if you press hard; otherwise it stops, or comes out slowly, in accordance with how much force you apply to the dematerialized screen.
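One way to read that press-harder-grows-faster behavior as code (my own sketch, with illustrative names and numbers, not the project's implementation): the stretch depth the Kinect sees is remapped to a growth rate, so a hard press on the spandex grows the hole quickly and a light press barely moves it.

```cpp
// Hypothetical sketch: map how far the spandex is pushed in (mm of
// stretch, up to roughly the 200 mm the fabric tolerates) onto a hole
// growth rate in pixels per frame. Constants are illustrative.
float holeGrowthRate(float stretchMm) {
    const float maxStretch = 200.0f;   // spandex stretches > 20 cm
    const float maxRate    = 12.0f;    // fastest growth, px per frame
    if (stretchMm <= 0.0f) return 0.0f;            // no press, no growth
    if (stretchMm > maxStretch) stretchMm = maxStretch;
    return maxRate * stretchMm / maxStretch;       // linear in press depth
}
```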
Scene 7: this one also explores space and time as you interact with the screen. You interact by moving your hand along the x axis; the matrix transforms from stable to volatile, mapped from 0 to the width of the screen.
In this part, I used OpenCV to find the contours. To be more specific, I get the position of each blob's centroid, feed those numbers into a particle system, and draw a bunch of polylines. Compared with the other scenes, this one can be triggered at multiple positions at the same time.
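The centroid step can be sketched without OpenCV (a minimal version of my own, assuming the blob has already been extracted as a list of pixel coordinates rather than an OpenCV contour): the centroid is just the average of the blob's x and y coordinates.

```cpp
#include <utility>
#include <vector>

// Minimal blob centroid: average the x and y coordinates of the blob's
// pixels. In the project these points would come from OpenCV's contour
// finder; here the blob is a plain vector so the sketch is self-contained.
std::pair<float, float> blobCentroid(const std::vector<std::pair<int,int>>& pts) {
    float sx = 0.0f, sy = 0.0f;
    for (const auto& p : pts) { sx += p.first; sy += p.second; }
    return { sx / pts.size(), sy / pts.size() };
}
```

Each centroid found per frame could then seed one emitter of the particle system, which is what lets several touch positions trigger polylines at once.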