An interactive video installation that portrays the absence of the sense of belonging.
Concept Development Videography Computer Vision Programming Projection Mapping Audio Editing
Processing MadMapper After Effects Adobe Audition
Feb 2019 - May 2019
This installation is a digital portrayal of the artists and their continuous search for a sense of belonging.

Both artists left their home country at a very young age, and the experience of constantly moving to new cities has accustomed them to the feeling of not being deeply rooted in any particular place.

In this installation, the artists and the cities they have lived in are projected onto two leaves floating on the surface of moving water. The piece uses the metaphor of “fu ping” (Mandarin for duckweed), a plant that floats on the surface of still or slow-moving bodies of fresh water, as a symbol of the artists’ rootlessness.
The concept of the installation originated from the personal experience of me and my designer friend Haiyi Huang. Having had to acclimate to different environments and cultures early in life, we have both constantly taken on new identities and are constantly looking for a place where we feel we belong. Hustling, bustling videos of the cities we have lived in are projected in the shape of our bodies, contrasting with our lackadaisical movements projected on the leaves, to emphasize our helplessness amid the swift shifting of our environment. The leaves floating on the moving water stand for “fu ping” (Mandarin for duckweed), which we use as a metaphor for our sense of rootlessness.
The physical objects used in the installation, the leaves and the water, are a physical-metaphorical representation of our personal feelings. On their own, however, these objects cannot fully illustrate those feelings. By overlaying digital projection onto the physical objects, the stories of the piece come alive, and the feelings we are trying to convey are translated more vividly. The piece also pushes the boundaries of traditional display and video sculpture with moving physical objects by tracking the leaves and projecting onto them.
For the installation, we planned the following setup: a ceramic or stone bowl filled with clear tap water, with a motor creating movement in the water. Two leaves would then be placed on the moving water. A motion tracker would track where the leaves were, and the videos of us and the cities we have lived in would be projected onto the tracked leaves. We also planned to record an audio narrative to give the audience context and emotional grounding for the installation.
We shot videos of ourselves lying on a white canvas.
Then I adjusted the contrast, edited the clips of the cities we had lived in, and masked them within the shape of our bodies: the darker a body part appeared, the more clearly the city footage showed through.
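The masking step can be sketched as a per-pixel luminance key. This is a minimal illustration in Python, not the actual After Effects pipeline: the function name is hypothetical, and frames are modeled as nested lists of 0–255 grayscale values.

```python
def luma_mask(body_frame, city_frame):
    """Weight each city pixel by the darkness of the matching body pixel:
    the darker the body pixel (0 = black), the more city footage shows.
    Frames are same-sized 2D grids of 0-255 grayscale values."""
    out = []
    for body_row, city_row in zip(body_frame, city_frame):
        out.append([round(c * (1 - b / 255)) for b, c in zip(body_row, city_row)])
    return out

# A black body pixel (0) lets the city pixel through fully;
# a white one (255) suppresses it entirely.
print(luma_mask([[0, 255]], [[200, 200]]))  # [[200, 0]]
```

In the real edit this weighting was done per color channel on video frames; the principle is the same.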
We exported the file as an mp4 file for projection mapping.
When we started projection mapping, we were intrigued by how cute the miniature versions of us looked.
We made sure the projection lined up with the bowl and the leaves.
The hardest part was tracking the leaves and projecting the miniature versions of us onto them. It sounds easy, but it was hard to do. Why?
I tried blob detection, keying on each of the following image features in turn:
I programmed the tracking in Processing, sent the frames to MadMapper via Syphon, and projection-mapped them onto the leaves.
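The core of the tracking can be sketched as thresholded blob detection with centroids. This is a simplified stand-in for the Processing sketch, written in plain Python with the frame as a 2D grid of brightness values; the function name and threshold are illustrative assumptions, and the real version ran on live webcam frames.

```python
from collections import deque

def find_blobs(frame, threshold=200):
    """Return (x, y) centroids of 4-connected regions of bright pixels,
    e.g. light-colored leaves against a darker background."""
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= threshold and not seen[y][x]:
                # Flood-fill one connected bright region (breadth-first).
                queue, pts = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pts.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and frame[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Centroid of the region = where to draw the projected video.
                blobs.append((sum(p[1] for p in pts) / len(pts),
                              sum(p[0] for p in pts) / len(pts)))
    return blobs

frame = [
    [0,   0,   0,   0, 0],
    [0, 255, 255,   0, 0],
    [0,   0,   0,   0, 0],
    [0,   0,   0, 255, 0],
]
print(find_blobs(frame))  # [(1.5, 1.0), (3.0, 3.0)]
```

Each centroid then becomes the anchor point for drawing one miniature video before the frame is sent on to MadMapper.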
But it was very challenging because...
Eventually, after a great deal of fine-tuning, we got it working reasonably well. But as shown in the images, the miniature versions of us would move along a thin scrolling line generated by the projector, and we could not figure out why or eliminate it.
To make the tracking more reliable, I tried tracking the leaves with the infrared camera on a Kinect (as we did not have access to a dedicated infrared camera) to eliminate the factors of color and brightness.
The result was even worse than color-based blob detection. The water refracted the infrared light, and the Kinect's IR camera is designed mainly for depth sensing, so the detected location was offset and the tracking was very unreliable. The depth difference between the container and the leaves could not be sensed accurately because of reflection and refraction at the water's surface, and the two sat at nearly the same distance from the camera no matter how much I raised the leaves above the water.
We realized that tracking a body with the Kinect is easy, but tracking something in water is another story. Our discovery was confirmed when we consulted Dan Shiffman for technical help: he explained that when the Kinect is used to track objects on the surface of water, or on anything reflective, its detection becomes inaccurate.
To work around the problem, we replaced the ceramic bowl with a clear one and placed the webcam below it, capturing the leaves from below with our earlier color-based blob detection.
Although it was not 100% stable and we had to adjust the lighting, it was definitely more reliable than using the Kinect or placing the webcam above. Light from the projector could still interfere with the webcam through the transparent bowl, so we placed more leaves in the bowl to block the stray projector light.
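One detail the camera-below setup introduces is a coordinate conversion: the webcam sees the leaves from underneath, so its image is mirrored relative to the projector's view from above, and its resolution differs from the output canvas. A hypothetical helper (names and default resolutions are assumptions; in practice the final alignment was done in MadMapper) might look like:

```python
def cam_to_canvas(x, y, cam_w=640, cam_h=480, out_w=1280, out_h=720, mirror_x=True):
    """Map a blob centroid from webcam pixels to output-canvas pixels.
    mirror_x accounts for the webcam looking up from below the bowl,
    which flips the image horizontally relative to the projected view."""
    if mirror_x:
        x = cam_w - 1 - x
    return (x * out_w / cam_w, y * out_h / cam_h)

# A blob at the camera's left edge lands near the canvas's right edge.
print(cam_to_canvas(0, 0))  # (1278.0, 0.0)
```

A uniform scale plus mirror like this is only a first approximation; lens distortion and refraction through the water would still need the hand-tuning described above.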
In addition, we added audio narration. We wanted to create a stronger emotional connection between visitors and our piece through our stories for the showcase at the Spring Show 2019.
Many visitors at the Spring Show said the audio made the piece more compelling and that they felt stronger emotions through the stories we told. With the audio narration, the piece feels more complete.
Ultimately, I came across a possible solution while working for the Currents New Media Festival in Santa Fe, New Mexico, last summer.
After hearing me talk about the problem with this piece, Neil Mendoza, one of the new media artists I helped troubleshoot for, suggested painting the bottoms of the leaves with infrared paint and using an actual infrared camera (instead of the Kinect's IR camera) to capture the reflection from below.
I look forward to trying this next time.