arrange: Improving Family Sleep Using AR
An Augmented Reality Platform for Tangible Objects
As part of a collaboration between CMU and Philips, I worked with a team of 3 designers on a design brief titled "Screens & Beyond". The challenge was to design a digital sleep solution for family health. Most sleep solutions today focus on the individual sleep experience, but what might a system designed specifically for a family's sleep health look like? The prompt gave us the freedom to explore all sorts of questions about defining families, sleep, and health, while also encouraging us to design with future-facing technologies in mind.
To that end, we designed arrange: a holistic digital platform that combines digital information, delivered through augmented reality, with tangible objects. With arrange, family members can connect the information they find most meaningful to the objects that surround them, in the contexts in which they want that information. It combines the intuitiveness of the physical world with the richness of the digital world. The arrange system consists of two components: a mobile application to tag objects with new information and access previous tags, and the physical objects that reveal information upon physical interaction.
Tools: Interviews, Pen & Paper, Sketch, InVision, Hololens, Unity, Leap Motion, Adobe Creative Suite, Video
Team: Allison Huang, Makenzie Cherban, Vikas Yadav (CMU Design)
Role: AR prototyping, video production, design ideation, mobile UX
Link: Project blog
To explore these questions, we challenged ourselves to each come up with scenarios (written or sketched) where a family’s sleep could be disturbed and our solution could support the family. We scoped down to a scenario where one member of the family was ill, although we purposefully didn’t specify the family situation or the type/level of illness to allow for further exploration. In this early phase, our scenarios ended up showing where a designed intervention could prevent or alleviate a sleep disruption in situations that ranged from an oncoming cold to a child in the hospital.
We scheduled interviews with five participants: parents of different ages with children in different life stages and in different professions. Almost all had some pretty dramatic stories about how health-related issues affected their family’s sleep. Two spoke of respiratory issues that their children suffered from in their sleep. On a less dire note, some parents also talked about how seemingly innocuous sicknesses affected both their sleep and the sleep of their children: coughing can keep people awake, and stomach bugs are common sleep disruptors for everyone involved. Four out of our five participants mentioned that either a family member or they themselves had experienced a potential sleep disorder. One research participant spoke of a potential sleep disorder that went overlooked, even after his wife spoke with him about it, until he saw his own sleep data by using a sensor while sleeping. More interview takeaways can be found here.
We also came up with four design principles based on our research and the early explorations we had already done in the problem space.
Moving forward, we began exploring new ways to communicate data, specifically through augmented reality, which isn’t restricted to two dimensions. We referred often to work coming out of MIT Media Lab, particularly Hiroshi Ishii’s musicBottles and Sublimate and Valentin Heun’s Reality Editor. These projects brought tangible objects to life in a digital world, connecting the physicality of our environments to information that is usually hidden beneath two-dimensional screens. Top of mind as we began to prototype were questions such as:
What would actually be helpful for families as a unit? What could be used to improve sleep quality, whether for the user or for the user’s family members? What would fade nicely into the background or be integrated effortlessly into people’s lives?
How do you prototype when designing for augmented reality (AR)? This was an important question our team grappled with while ideating solutions to tackle the problem of sleep in families. It should come as no surprise that we started the process much like any other design process: by picking tools that would allow us to prototype cheaply and quickly. Paper, scissors, tape, markers, and other simple objects were good enough for our early explorations.
We wanted to achieve two things during this process: a) gain an understanding of the affordances and possibilities of AR and b) test which of our ideas merited further exploration through higher fidelity forms of prototyping. The former was important because none of us had worked with AR in the past. And apart from trying out games such as Pokémon Go, there weren’t any AR experiences we could point to in our daily lives for inspiration. The latter was important because we had a long list of needs and pain points to possibly address from our research, but very little intuition for which problems to prioritize — or which ones we could reliably address given our project’s time constraints.
We created a few video prototypes (edited in After Effects and Premiere) to explore some of our early ideas of how physical interactions might reveal different pieces of data in augmented reality. Three of them began to explore our idea of using an aura as a signifier, which ended up carrying all the way through to our final solution. We also created one prototype that augmented a meaningful object and leveraged a pre-existing affordance of that particular object: another idea that made it all the way through. Finally, the fifth video prototype showed the ability to use gestural interactions to move a hologram around, which did not make it into the final solution; after creating both video and working prototypes, purely gestural interactions no longer seemed compelling.
For one of our technical prototypes, we attached augmented reality content to two different blocks of wood. Moving each block would in turn generate movement in the digital content. We relied on Unity and Vuforia, a platform for creating AR markers, to paste two unique markers (in the form of image targets) onto the two blocks.
Though simple and technically crude, the experiment helped us experience what it would be like to natively combine AR content with specific physical objects. We were able to get a sense of the field of view of the AR content while twisting or turning around the blocks of wood, as well as the kinds of things one might expect when bringing the blocks together (should the AR content disappear when stacking one block on top of another?). This prototype also helped us compare direct touch manipulation to the gestural pinch and move experience of the Microsoft HoloLens. HoloLens lets users select or activate holograms using air-tap gestures and absolute movement of the hand in space. It was important to get a sense of how our interaction technique ideas stacked up to this first generation commercial AR product.
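The core idea behind this prototype — AR content that follows a tracked physical object — can be sketched in a few lines. The sketch below is illustrative only (plain Python, not Unity or Vuforia code; all names are assumptions): once the marker's pose is known, the hologram's world position is found by composing the marker's pose with a fixed local offset.

```python
import math

def rotate_y(v, angle):
    """Rotate a 3D point around the Y axis by `angle` radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def content_world_pose(marker_pos, marker_yaw, local_offset):
    """World position of AR content pinned to a tracked marker.

    marker_pos   -- tracked (x, y, z) of the image target
    marker_yaw   -- tracked rotation of the target around Y, in radians
    local_offset -- where the hologram sits relative to the marker
    """
    dx, dy, dz = rotate_y(local_offset, marker_yaw)
    return (marker_pos[0] + dx, marker_pos[1] + dy, marker_pos[2] + dz)

# A hologram floating 0.1 m above a wood block follows the block as it
# is moved and turned, because its position is re-derived every frame:
pose = content_world_pose((1.0, 0.0, 0.0), math.pi / 2, (0.0, 0.1, 0.0))
```

In Unity this composition happens for free by parenting the content to the tracked image target's transform; the sketch just makes the underlying relationship explicit.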
We've written a whole post about prototyping for AR here.
The prototyping process led us to create arrange: a holistic digital platform for families that places embedded information at people's fingertips — connecting the information we find most meaningful to the objects that surround us. It combines the intuitiveness of the physical world with the richness of the digital world. The arrange system consists of two components: a mobile application and pre-existing physical objects tagged with information in AR.
The mobile app enables users to photograph and tag objects with personalized AR information. The basic interaction involves taking a picture of an object, then going through a series of steps to customize the AR content the user wants to combine with the object. They can choose from a variety of categories, which range from sleep and health to messaging and games. arrange also gives users the ability to assign tags to specific family members, and to preview and confirm holograms. Once an object has been tagged, it is cataloged within the app — users can see the most recent objects they’ve tagged as well as edit and activate older tags.
Through multiple iterations in the wireframing phase, we were able to simplify the tagging process. In earlier iterations, users were able to select multiple categories and assign them to one object. This interaction became cumbersome, and ultimately we settled on one category per object.
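The tag record implied by this flow can be sketched as a small data structure. The sketch below is an assumption, not the app's actual data model (all names are hypothetical); it captures the one-category-per-object rule we settled on and the catalog of recent tags:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative category set, drawn from the ranges mentioned above.
CATEGORIES = {"sleep", "health", "messaging", "games"}

@dataclass
class Tag:
    object_photo: str                  # photo used to identify the object
    category: str                      # exactly one category per object
    assigned_to: List[str] = field(default_factory=list)  # family members
    active: bool = True
    timeout_s: Optional[int] = None    # hologram auto-hide, if set

    def __post_init__(self):
        # Later iterations allowed only one category per object,
        # so the category is a single validated value.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

@dataclass
class Catalog:
    tags: List[Tag] = field(default_factory=list)

    def add(self, tag: Tag) -> None:
        self.tags.append(tag)

    def recent(self, n: int = 5) -> List[Tag]:
        # Most recently tagged objects first, as in the app's catalog view.
        return list(reversed(self.tags))[:n]
```

Making the category a single value rather than a list mirrors the simplification described above: the multi-category interaction was cumbersome, so the model enforces one category per object.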
Finally, we created an interactive prototype in InVision to test interaction flows and demonstrate its intended use.
Once an object has been tagged, it emits an aura in AR, signaling to users that embedded information is present in that object. The aura’s glow is a subtle, peripheral signifier that provides the first level of information in our communication hierarchy: that there is an active tag nearby. The viewer can use the intuitive affordances of touch to reveal the full hologram in AR, showing data relevant to their family and home or messages that family members leave for each other.
Objects with active tags can be combined to reveal new pertinent information. For example, if an object has been assigned to include the sleep quality of a loved one and another object has been assigned to monitor the room’s temperature, a user can bring the two objects together to combine the AR and reveal the loved one’s body temperature. After the AR has been viewed, the hologram can either time out, as specified by the user setting up the tag, or it can be double-tapped to turn it off.
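The reveal and combine behavior described above amounts to a small state machine per hologram plus a pairing rule between tags. The sketch below is a hedged illustration under assumed names (the pairing table and state labels are not from the actual system):

```python
# Illustrative pairing rule: bringing two tagged objects together may
# reveal new information, keyed by the unordered pair of categories.
COMBINATIONS = {
    frozenset({"sleep", "health"}): "loved one's body temperature",
}

def combine(category_a, category_b):
    """New information revealed when two tagged objects are brought together."""
    return COMBINATIONS.get(frozenset({category_a, category_b}))

class Hologram:
    """Lifecycle of one tag's AR content: aura -> revealed -> off."""

    def __init__(self, timeout_s=None):
        self.state = "aura"           # tagged object glows until touched
        self.timeout_s = timeout_s    # set by the user when creating the tag

    def touch(self):
        if self.state == "aura":
            self.state = "revealed"   # touch reveals the full hologram

    def double_tap(self):
        self.state = "off"            # explicit dismissal

    def tick(self, elapsed_s):
        # Time out back to off, if the tag was set up with a timeout.
        if (self.state == "revealed" and self.timeout_s is not None
                and elapsed_s >= self.timeout_s):
            self.state = "off"
```

Keeping the aura as a distinct state matches the communication hierarchy: the glow is always the first, peripheral level of information, and the full hologram only appears on deliberate touch.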