MovementPro

by Aphelion

Discover your artistic potential, express your personality, and find an audience that appreciates you.


Issue: The fast-paced nature of contemporary life has brought growing mental health challenges. MovementPro lets users release pent-up energy and provides a supportive stage on which to explore their creativity and show their true selves.

Concept: MovementPro is a movable device consisting of a screen, an LED light group, a laptop, and a web camera. (Picture1) The camera captures the user's movement, and a model (an anime girl) on the screen follows the same movement. (Picture2) MovementPro immerses both the audience and the user in a stage-like virtual atmosphere, and users are encouraged to move to rhythmic electronic music. The audience can also clap and cheer for the user to light up the stage and trigger visual effects. (Picture3) This interaction visualises the audience's encouragement to the user; in turn, the user feels that encouragement while exploring different movements. The technology passes positive emotion from the audience to the user, and it creates an environment that promotes self-expression, letting personality show through movement characteristics.

Experience: The interaction starts with the user standing in a designated area, after which all of the user's movements are projected onto the anime girl model through body-tracking technology. The system gradually immerses everyone in an engaging virtual stage world through music and dynamic scenes. When the user first steps onto the stage and the slow music begins, they may feel shy, exploring a few movements to the accompaniment of the music and the audience's applause. As time goes on, however, the user is drawn into the virtual world by the system's encouragement, gradually transforming from a shy, tentative performer into a confident, free-moving one by the end of the music. (Picture4)

Technical Description

In the early stages of the prototype, we planned to use motion capture to deliver our core experience: letting users see a 3D model move with them in real time. We surveyed the available motion capture solutions and found that they fall into three main categories. The first is optical motion capture, which requires multiple infrared cameras to track wearable markers on the body; the collected data are then mapped onto a three-dimensional model. The second is inertial motion capture, which relies on body-worn inertial sensors. Because our team knew very little about sensors, we did not adopt this approach. The third is motion capture using computer vision. We chose this technique because it has a relatively low barrier to learning and use, it is already widely applied, and it requires only a single camera, with no camera arrays or wearables. (Picture5)

We chose Unity for our visualisations. Several computer vision plug-ins available for Unity solve the data-communication problem, and Unity's Asset Store also allowed us to design and produce the desired user experience. (Picture6)

After deciding on the visualisation solution, we set out to find a viable motion capture project. At first we found OpenPose, an external library with a Unity plugin, but we failed to get the plugin working. As a result, we turned to another motion capture technology, MediaPipe, an open-source computer vision library whose official documentation provides a Pose Landmark Model.
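As an illustration, a minimal capture loop using MediaPipe's Python pose solution might look like the sketch below. The confidence thresholds and the webcam index are assumptions for illustration, not our tuned values.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# Index 0 is assumed to be the installation's web camera.
cap = cv2.VideoCapture(0)

# Illustrative confidence thresholds; the real values may differ.
with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # 33 landmarks, each with normalised x, y, z and a visibility score.
            for lm in results.pose_landmarks.landmark:
                print(lm.x, lm.y, lm.z, lm.visibility)

cap.release()
```

Each processed frame yields 33 body landmarks, which are what we forward to Unity in the next step.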


After MediaPipe was installed and running, we started thinking about how to transfer its data to Unity. UDP communication turned out to be a good solution (a minimal sender is sketched at the end of this section).

For our first demonstration of the prototype we had not yet found a usable 3D model solution, so we used a ball-and-stick model. We later found a workable solution based on a 3D model, which we believed would be more attractive to users: it is difficult for users to stay engaged with our prototype while watching a ball-and-stick figure follow their actions, so we considered a 3D model necessary. Once the MediaPipe data were collected, processed, and mapped onto the model, the model synchronised with the user's actions in real time.

We then built a stage in Unity so that the user feels they are standing at its centre, and we turned the lights off; parts of the lighting are unlocked gradually as the user performs on stage. To encourage users to keep playing with our prototype instead of simply moving and leaving after two or three seconds, we developed spoken and on-screen prompts to motivate them. At the end of the session, the user receives a rating from the system, which encourages other audience members to step onto the stage. (Picture7)

In addition, we wanted to involve the audience in the experience: they can compliment the user by clapping and cheering. Voice prompts tell the audience to clap during the user's performance so that the darkened stage unlocks more content. We also made a volume monitor using an Arduino board, an LED light strip, and a sound sensor; the LED strip flashes as the audience claps and cheers, in view of the user (a sketch of how the readings could reach Unity follows below). (Picture8)
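For the UDP link, a minimal Python sender might look like the following sketch. The port number (5005) and the JSON payload format are assumptions for illustration; our actual protocol may differ. On the Unity side, a UdpClient would receive and parse these packets.

```python
import json
import socket

# Address of the Unity listener; the port number is an assumed placeholder.
UNITY_ADDR = ("127.0.0.1", 5005)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_landmarks(landmarks):
    """Serialise one frame of MediaPipe pose landmarks and send it to Unity."""
    payload = json.dumps([
        {"x": lm.x, "y": lm.y, "z": lm.z} for lm in landmarks
    ])
    sock.sendto(payload.encode("utf-8"), UNITY_ADDR)
```

UDP suits this use case because a dropped frame is harmless: the next frame's landmarks simply overwrite it, and low latency matters more here than guaranteed delivery.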
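For the volume monitor, one way the clap readings could reach Unity is via the laptop: assuming the Arduino prints one sound-sensor reading per line over serial, a small forwarder could relay the values to Unity on a second UDP port. The serial port name, baud rate, and port 5006 below are all placeholder assumptions.

```python
import socket
import serial  # pyserial

# Placeholder serial port and baud rate; real values depend on the setup.
ser = serial.Serial("COM3", 9600, timeout=1)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
UNITY_ADDR = ("127.0.0.1", 5006)  # assumed port for audience-volume packets

while True:
    line = ser.readline().strip()
    if not line:
        continue
    try:
        volume = int(line)  # one integer sensor reading per line (assumed)
    except ValueError:
        continue  # skip partial or malformed lines
    # Forward the reading so Unity can unlock stage content as volume rises.
    sock.sendto(str(volume).encode("utf-8"), UNITY_ADDR)
```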