Mr. Mood is an educational tool that aims to give pre-schoolers ('targeted audience') a playful experience in identifying and managing emotions.
The project has two parts: Mr. Mood himself and a storybook.
The storybook consists of six chapters, one for each emotion (happy, angry, sad, scared, disgust, surprise), and introduces Mr. Mood in more detail. Each chapter contains relevant prompts that help guide the targeted audience to interact with Mr. Mood. The storyline is written so that, at the end of each chapter, they must make Mr. Mood happy once again. This teaches them to make up for what they have done, or to comfort someone, when they identify a negative emotion.
Mr. Mood has been designed to be more than a toy: a friend to our targeted audience. The interactions our targeted audience performs are based on the storybook prompts and discussion points, which relate to real-life situations, such as hurting someone, which can make them angry. Depending on the interaction, Mr. Mood displays colours to indicate his current emotion.
Technical Description
The design of Mr. Mood was originally inspired by Baymax, a fictional character from the movie Big Hero 6 who is well known and popular across age groups, and especially liked by our targeted audience. We drew sketches of our protagonist, Mr. Mood, and iterated on them based on feedback from our targeted audience. To make Mr. Mood inviting rather than creepy or intrusive, we designed larger eyes and a wider mouth. After discussing and agreeing on Mr. Mood's outline in sketch form, we used Adobe Illustrator to produce the final design. We chose white for Mr. Mood's body because it represents neutrality; other colours represent the various emotions, and a coloured body could confuse the user when the visual feedback appears.

During the initial stage of development, we used white cotton fabric, a button, and white thread for the outer material, and latex foam for the stuffing. However, the fabric was too thin to hold the various technology components, so we had to sew two layers together on each side to reinforce it. After conducting material research, we switched to minky fleece, which is thicker and stronger. It has a soft, furry texture similar to the materials used in existing soft toys, making it malleable, safe, and suitable for our targeted audience.
Early desk research found that parents and guardians can use storybooks and stories to help our targeted audience communicate how they feel, which supports the self-developmental learning they need to succeed in life. Meanwhile, our targeted audience is drawn to engaging stories they can relate to. Hence, we decided to make a storybook that incorporates notes for educators and parents, the interactions to perform with Mr. Mood, and prompts and discussion points to help our targeted audience identify and manage emotions.
The story was created based on existing storybooks on emotions from various resources, such as YouTube videos, Pinterest, and other educational storybook websites. Since reading the story independently would be difficult for some of our targeted audience, engaging them in conversation while reading benefits their learning process. We therefore invite parents, guardians, and educators to read the book aloud and guide children through the various interactions with Mr. Mood based on the story prompts.
The colours used for each emotion are based on desk research and feedback from user testing. Each emotion is represented visually on Mr. Mood with a different LED light colour, matching the corresponding chapter of the storybook. For instance, yellow was used for the happy chapter because yellow represents happiness and joy, and green was used for the disgust chapter because it suits our targeted audience's cognition better.
Mr. Mood is controlled by an Arduino Mega microcontroller connected to a colour sensor, a force sensor, a shake sensor, a capacitive sensor, and buttons that capture the various interactions of our targeted audience. The colour sensor detects objects such as the little bear, the socks, and the present. The force sensor detects squeezing, and the shake sensor detects shaking. We also used conductive thread and conductive fabric to detect stroking. One button detects hugs, and another switches the colour sensor on and off, so that it does not continuously send data to the Arduino and interfere with the other sensors and components.
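As a sketch of how the colour-sensor readings could be mapped to the story objects, the classification step might look like the following standalone C++ function. The thresholds are illustrative placeholders, not the calibrated values from the actual build:

```cpp
#include <string>

// Map a raw RGB reading from the colour sensor to one of the story
// objects. The thresholds below are illustrative placeholders; real
// values would come from calibrating against the actual bear, socks,
// and present under the exhibition lighting.
std::string classifyObject(int r, int g, int b) {
    if (r > 200 && g > 200 && b > 200) return "clean sock";   // bright/white
    if (r < 90 && g < 90 && b < 90)    return "dirty sock";   // dark
    if (r > g && r > b)                return "present";      // red-dominant (assumed wrapping colour)
    if (g > r && g > b)                return "little bear";  // green-dominant (assumed)
    return "unknown";
}
```

In the real build this function would run on each fresh sensor reading while the colour sensor's enable button is held on.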
The interactions detected by the sensors are sent to the Arduino microcontroller for processing.
We then used LED strips, Adafruit NeoPixel strips, a DFPlayer, a speaker, and servos to provide visual and auditory feedback: changing lights, sounds, and the movement of the eyebrows and mouth represent Mr. Mood's various emotions.
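The mapping from emotion to LED colour can be sketched as a simple lookup. Yellow-for-happy and green-for-disgust follow the storybook chapters described above; the remaining colours are assumptions for illustration:

```cpp
#include <cstdint>
#include <tuple>

enum class Emotion { Happy, Angry, Sad, Scared, Disgust, Surprise };

// RGB colour shown on the LED strips for each emotion. Yellow (happy)
// and green (disgust) match the storybook chapters; the other colours
// are illustrative assumptions, and white is the neutral fallback.
std::tuple<uint8_t, uint8_t, uint8_t> emotionColour(Emotion e) {
    switch (e) {
        case Emotion::Happy:    return {255, 255, 0};  // yellow (per storybook)
        case Emotion::Angry:    return {255, 0, 0};    // red (assumed)
        case Emotion::Sad:      return {0, 0, 255};    // blue (assumed)
        case Emotion::Scared:   return {128, 0, 128};  // purple (assumed)
        case Emotion::Disgust:  return {0, 255, 0};    // green (per storybook)
        case Emotion::Surprise: return {255, 128, 0};  // orange (assumed)
    }
    return {255, 255, 255};                            // white = neutral
}
```

On the Arduino, the returned triple would be pushed to the NeoPixel strip while the servos pose the eyebrows and mouth for the same emotion.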
Source Code
Final Statement
The final product consisted of Mr. Mood, which housed the Arduino microcontroller; the colour sensor, which senses the colour of the objects designed to be given to Mr. Mood; the magnet sensor, which activates the servo motor inside the present to open it; the LED light strips, which indicate the various emotions; and the buttons, which activate the colour sensor. Mr. Mood was accompanied by a storybook that helped users understand all the interactions with Mr. Mood and educated them about identifying and managing emotions, along with the various objects (little bear, dirty sock, clean sock, and the present) that change his expression through the colour sensor's detection of their different colours.
Public Response and Feedback
The final product attracted a multitude of people of different ages and successfully provided a playful and enjoyable environment, judging by the laughter of all who interacted with Mr. Mood. Most importantly, children and their parents interacted with Mr. Mood as intended, with parents reading the storybook and prompting their child to interact with Mr. Mood. The children quickly learned to identify which interaction would make Mr. Mood feel a certain emotion and then how to manage that emotion appropriately. Mr. Mood was so attractive to children that many hugged him and walked around with him.
While Mr. Mood attracted children and parents with its appearance, faculty members were intrigued by the technology used and how the concept addressed the problem. A faculty member of ITEE noted that the Arduino is considered outdated hardware, suggesting that the speed issues were mainly caused by the Arduino having insufficient RAM to accommodate all the sensors used.
Outcomes & Next Steps
While the final product received much positive appraisal, many adjustments were made to deliver the intended experience. First and foremost, the colour sensor gave inaccurate readings when exposed to too much light, which caused Mr. Mood to display incorrect expressions. The code was adjusted to mitigate the inaccuracy as much as possible. To confirm the cause, we switched back to the original code in low-light conditions: Mr. Mood worked smoothly there, suggesting that light exposure caused the issue.
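One plausible mitigation for this ambient-light sensitivity, not necessarily the exact adjustment made in the project, is to classify on channel ratios rather than absolute intensity, since bright light inflates all channels roughly together:

```cpp
// Normalise raw colour channels by overall brightness so that
// classification depends on hue ratios rather than absolute intensity.
// This is one plausible way to reduce the effect of bright ambient
// light, not necessarily the exact adjustment made in the project.
struct Ratios { double r, g, b; };

Ratios normalise(int r, int g, int b) {
    double total = r + g + b;
    if (total <= 0.0) return {0.0, 0.0, 0.0};
    return {r / total, g / total, b / total};
}
```

With ratios, a white sock read under bright light and under dim light produces roughly the same (1/3, 1/3, 1/3) signature even though the raw channel values differ greatly.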
Additionally, the pressure sensor posed an issue. It was fairly slow in detecting the squeezing of Mr. Mood's leg because the sensor itself is rather small: users had to squeeze Mr. Mood's legs a couple of times before finding the right spot to activate it.
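A small detector that requires a few consecutive above-threshold samples could make the squeeze both easier to trigger (by lowering the threshold) and robust to noise. The values here are placeholders rather than the project's calibration:

```cpp
// Debounce-style squeeze detector for a force-sensing resistor.
// The ADC threshold and required sample count are placeholder values,
// not the calibration used in the actual build.
struct SqueezeDetector {
    int threshold;    // ADC level counted as pressure (placeholder)
    int holdSamples;  // consecutive samples required (placeholder)
    int count = 0;
    explicit SqueezeDetector(int th = 300, int hold = 3)
        : threshold(th), holdSamples(hold) {}
    // Feed one ADC sample; returns true once a sustained squeeze is seen.
    bool update(int reading) {
        count = (reading > threshold) ? count + 1 : 0;
        return count >= holdSamples;
    }
};
```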
On top of that, the shake sensor was far too insensitive. Even when users shook Mr. Mood vigorously, the sensor often did not register any shakes; only when Mr. Mood was shaken manually at the precise spot where the sensor was installed did he express the correct feedback.
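One way to make shake detection more forgiving of where the sensor sits inside the plush body is to count tilt-switch pulses over a sliding window and lower the required count. This is a sketch under assumed timing values, not the project's code:

```cpp
#include <vector>

// Report a shake only when enough tilt-switch pulses arrive within a
// sliding time window. Lowering minPulses or lengthening windowMs makes
// detection more forgiving of where the sensor sits inside the plush
// body. The default values are illustrative assumptions.
bool detectShake(const std::vector<long>& pulseTimesMs, long nowMs,
                 long windowMs = 1000, int minPulses = 4) {
    int count = 0;
    for (long t : pulseTimesMs)
        if (nowMs - t <= windowMs) ++count;
    return count >= minPulses;
}
```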
Further steps should focus on making the technology smoother, faster, and more automated. All interactions currently work as intended, but there were delays and inconsistencies in the feedback.