I worked as a full-stack Product Designer at Realeyes. I was responsible for all product-related design and research tasks, working closely with the development teams and providing the product team with user insights.
Realeyes measures how people feel as they watch video content online, enabling brands, agencies, and media companies to optimize their content and target their videos at the right audiences.
People share access to their webcams while they watch ads. Using computer vision and machine learning, Realeyes tracks their facial expressions and reports in real time on how the audience truly feels about the content.
End users can analyze the emotion and attention data collected about viewers on a dashboard, making the dashboard the key interface between the client and the service Realeyes provides.
One of the main user personas is an agency-based marketing professional working to optimize an ad for their client. The other main persona is an employee at a large brand using the service to understand how their campaign performs. Both users want to identify the emotional triggers in the ad that drive the analytics.
I focused on the second persona, since they typically have less training in marketing and can use extra help in interpreting the results on the dashboard. The challenge was to present the results in an actionable way.
For example, users were always interested in seeing the viewers’ expressions of happiness throughout the video. But they didn’t know what to do if there was a sudden fall in happiness, or if the ad induced a consistently low level of happiness. In the former case, they asked whether the scene should be cut. In the latter case, they wondered whether they should start over with new creative work.
These are quite subjective questions, and without much marketing expertise the end user may not be able to make informed decisions. We conducted user interviews with current and potential clients in this role and found that most were confused and intimidated by the complexity of the interface. They couldn’t understand everything on the UI and couldn’t make sense of the data.
After making this observation, I started to design and iterate on a new dashboard that gives the user more actionable answers to their questions. We introduced a simple checklist that diagnoses what the user’s ad is doing well and how it could be improved. We also added Natural Language Processing data, so users can see what kinds of positive and negative thoughts viewers had when answering open-ended questions after watching the media.
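As an illustration of the kind of processing involved, here is a minimal sketch that buckets open-ended answers by sentiment using the off-the-shelf VADER model from NLTK. This is an assumption for demonstration purposes only, not Realeyes’ actual NLP pipeline, and the sample answers are invented.

```python
# Hypothetical sketch only: Realeyes' actual NLP pipeline is not public.
# Buckets open-ended survey answers into positive and negative lists
# using NLTK's off-the-shelf VADER sentiment model.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

answers = [
    "The ad made me smile, the puppy scene was adorable.",
    "Way too long, I lost interest halfway through.",
]

positive, negative = [], []
for answer in answers:
    # compound ranges from -1 (most negative) to +1 (most positive);
    # 0.05 is the conventional VADER cutoff for a positive reading
    score = sia.polarity_scores(answer)["compound"]
    (positive if score >= 0.05 else negative).append((score, answer))

print("Positive:", positive)
print("Negative:", negative)
```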
The new dashboard enables the user to effectively compare different aspects of the video across different versions and against their competitors. They use it to improve their campaign for the target audience and increase sales.
Another important aspect of the Realeyes platform is how it interacts with our viewer participants. These are the people around the world who are paid to watch media provided by Realeyes’ clients and answer questions. This is how Realeyes collects emotion and attention data for clients.
The main problem we face is that participants often begin the process of earning money for their time but then drop out of the session when prompted to share their webcam. Clearly, people are uncomfortable being watched, even when we emphasize that only the algorithm will ever watch the video.
I conducted a lot of usability tests with users from different countries. We knew that cultural norms, for example the different emphasis on privacy among German and American participants, play an important part. The interviews confirmed this impression, suggesting a difficult but effective way to minimize dropout: tailoring the UI and copy by location. On top of that, we also chose to differentiate between mobile and desktop users, as sketched below.
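To make the tailoring concrete, here is a minimal sketch of how locale- and device-aware consent copy could be selected, assuming a simple lookup table. The keys, strings, and fallback below are invented for the example, not the production implementation.

```python
# Illustrative sketch, not the production implementation: consent copy
# selected by (locale, device), with a US-English desktop fallback.
# The keys and strings below are invented for the example.
CONSENT_COPY = {
    ("de-DE", "desktop"): "Nur unser Algorithmus wertet Ihr Video aus.",
    ("en-US", "desktop"): "Only our algorithm will ever see your video.",
    ("en-US", "mobile"): "Only software sees your video. Tap to continue.",
}

def consent_copy(locale: str, device: str) -> str:
    # fall back to the generic variant when no tailored copy exists
    return CONSENT_COPY.get((locale, device), CONSENT_COPY[("en-US", "desktop")])

print(consent_copy("de-DE", "desktop"))
```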
I also created a popup survey asking mid-session leavers why they decided to stop participating. We received hundreds of answers, which let me focus on the main problems and iterate on different designs with usability tests. A/B tests showed a significant improvement in conversion rates, validating this research-oriented approach to design.
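For readers curious what “significant” means in this context, the sketch below shows the kind of check typically behind such an A/B result: a two-proportion z-test on webcam-consent conversion. The counts are made up for illustration; the real figures are not public.

```python
# Hedged sketch of the kind of check behind such an A/B result: a
# two-proportion z-test on webcam-consent conversion. The counts are
# made up; the real numbers are not public.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 486]  # participants who shared their webcam (A, B)
sessions = [1000, 1000]   # participants who reached the webcam prompt

stat, p_value = proportions_ztest(conversions, sessions)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would suggest the new flow's higher consent rate is not chance
```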
I also improved an internal tool that operations uses to turn client specifications into data collection work. For instance, clients may be interested in how a specific demographic reacts to their ad: Chinese women with dogs, aged 30-45. Often the client is interested in multiple overlapping and intersecting groups; in addition to those Chinese women, they may also want to test their ad on East-Asian parents, for example. Operations had difficulty launching data collection with these complex restrictions on participation, and they were solving the problem in an ad hoc way by asking developers for bespoke solutions for every case.
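To illustrate the underlying problem, here is a hypothetical sketch, not the internal tool itself, of how such overlapping groups can be modeled as composable predicates so that one participant can count toward several quotas at once. The fields and segment definitions are invented for the example.

```python
# Invented example, not the internal tool: demographic segments modeled
# as composable predicates, so one participant can count toward several
# overlapping quotas at once.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Participant:
    country: str
    region: str
    gender: str
    age: int
    has_dog: bool
    is_parent: bool

Predicate = Callable[[Participant], bool]

SEGMENTS: dict[str, Predicate] = {
    "Chinese women with dogs, aged 30-45": lambda p: (
        p.country == "CN" and p.gender == "F"
        and 30 <= p.age <= 45 and p.has_dog
    ),
    "East-Asian parents": lambda p: p.region == "East Asia" and p.is_parent,
}

def matching_segments(p: Participant) -> list[str]:
    """Return every quota this participant can count toward."""
    return [name for name, pred in SEGMENTS.items() if pred(p)]

p = Participant("CN", "East Asia", "F", 38, has_dog=True, is_parent=True)
print(matching_segments(p))  # qualifies for both overlapping segments
```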
I was asked to set up a system for this workflow, and saw it through from interviews with various teams to prototyping to testing. We created an MVP in a short time frame and improved its value by iterating on its flexible basic structure. We significantly reduced the time it takes to launch new data collection and made operations between teams run much more smoothly.