How can we help visually impaired individuals use our products and feel more self-reliant and confident? That’s the question The Estée Lauder Companies (ELC) set out to answer in 2022 with the help of AI and augmented reality (AR).
The result is the Voice-enabled Makeup Assistant (VMA), now fully launched in the UK and US, and soon to debut worldwide. The project also earned ELC a 2023 CIO 100 Award in IT Excellence.
“We set out to address a significant business problem in the beauty industry, and that’s the lack of accessible solutions,” says Michael Smith, ELC SVP and CIO. “There are 2.2 billion people globally that have some type of visual impairment. There’s over two million in the UK alone, which is our pilot market. But it’s more than just a business opportunity. It’s also something that aligns with our mission to be the most inclusive and diverse beauty company in the world, both for our employees and all our consumers.”
Christopher Aidan, ELC’s VP of innovation and emerging technologies, adds that visually impaired people often must rely on others for help, so the vision for VMA was to use ELC’s existing Augmented Reality Immersive Application (ARIA) platform, which leverages AR, AI, and machine learning (ML) algorithms to analyze makeup on a user’s face. VMA then uses voice guidance to help the user create their ideal look.
VMA is a mobile app that uses the voice preferences the user has already configured on their device, while also offering options to customize the voice and its speed through the accessibility settings.
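The article doesn’t describe VMA’s internals, but as a rough illustration of that approach on iOS, a guidance speaker that defaults to the device’s configured voice while exposing a user-adjustable speed might look like the following sketch (the class name and structure are assumptions for illustration, not ELC’s actual code):

```swift
import AVFoundation

// Hypothetical sketch, not ELC's implementation: speak guidance through the
// system speech synthesizer, defaulting to the voice the user has already
// configured on the device and exposing a speech-rate control.
final class GuidanceSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    // User-adjustable speech rate; starts at the system default.
    var rate: Float = AVSpeechUtteranceDefaultSpeechRate

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.rate = rate
        // Leaving `utterance.voice` nil keeps the device's current default
        // voice, so the app matches what the user already knows.
        synthesizer.speak(utterance)
    }
}
```

Deferring to the system synthesizer rather than shipping a custom voice is one way an app can preserve the setup users already rely on.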
Getting the AI right
While Aidan’s team was able to build the app relatively quickly, the research stages to get to that point proved challenging. With inclusivity as the project’s guiding principle, the team recognized the app had to accommodate the shapes, sizes, shades, and unique features of any potential user’s face. Ensuring the AI could adapt to that full range of skin tones, facial features, and hair made the training process more complex than initially anticipated, and required a widely diverse set of faces in the training corpus.
The original design had the user take a selfie, which the algorithm analyzed to assess how evenly the makeup was applied before offering guidance. It didn’t take long, though, for the team to switch to real-time video, letting the app scan the user’s face continuously. If the video shows the user has applied foundation or lipstick unevenly, for instance, the app provides verbal descriptions of the specific areas that need a touch-up and guidance to correct the issue. The user can then make adjustments and rescan, and the app confirms when everything is correctly applied.
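As a rough sketch of that scan-and-feedback loop (the Vision-based face detection, the region list, and the evenness flag below are illustrative assumptions, not ELC’s actual model):

```swift
import CoreVideo
import Vision

// Hypothetical sketch of a per-frame analysis step: find the face in a
// video frame, then report which regions still need a touch-up. A real
// model would sample color uniformity inside each landmark region; the
// regions and evenness values here are placeholders for the control flow.
struct RegionFeedback {
    let region: String   // e.g. "left cheek"
    let isEven: Bool
}

func analyzeFrame(_ pixelBuffer: CVPixelBuffer) -> [RegionFeedback] {
    let request = VNDetectFaceLandmarksRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])

    guard let face = (request.results as? [VNFaceObservation])?.first,
          face.landmarks != nil else {
        return []   // no face detected; the app could prompt a re-center
    }

    return ["left cheek", "right cheek", "lips", "forehead"].map {
        RegionFeedback(region: $0, isEven: true)   // placeholder verdicts
    }
}
```

Each uneven region could then be handed to the voice layer as a spoken touch-up prompt, with the loop re-running after the user adjusts.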
The first priority, however, was for Aidan and his team to engage directly with the visually impaired community. “We really wanted to gain an understanding of what their unique needs were, their pain points and preferences, and what they desired from our products,” he says. “We pulled together focus groups and asked questions, but mostly listened to them about their personal experiences with makeup and technology.”
Importantly, he says, some focus group members were completely blind, some had varying degrees of low vision, and others retained strong peripheral vision. This let his team gather insight from a wide range of individual experiences and question its own assumptions.
“We assumed that a natural, more humanistic sounding voice would be the preference for the implementation, but the user research confirmed that familiarity was actually most important to our users,” Aidan says. “Whatever they had set up on their device is what they wanted to experience.”
The team also partnered with internal advocacy groups at ELC, external advocacy groups, and experts in accessibility and inclusivity, combining their insights with focus group feedback to gather requirements for VMA. The team then drew on that user research for everything from naming the application to getting the tone of voice just right.
“Throughout the design, build, and test phases, their feedback was informing our decisions, even the small features like being able to adjust the speed of the virtual assistant’s speech,” Aidan says.
A work in progress
The team has continued to monitor feedback since the app’s initial UK launch in January, and Aidan notes that even with extensive testing before release, users have since surfaced new issues.
“We started getting questions no one had asked in all the early interviews,” he says. “What about when I remove product? Can you tell me if I did a good job removing it?”
Smith says ELC is measuring the success of VMA by user feedback, which has been as positive as it’s been constructive.
“One user said, ‘This is one of those apps where I’m going to wonder how I ever lived without it,’” Smith says. “The app allows them to feel empowered and to try out new products and avoid feeling like they have to rely on other people when they might feel hesitant to even ask. There’s no judgment from the app; it’s just honesty.”
Smith also says VMA doesn’t only benefit those who are visually impaired. Young people who don’t have an adult in their lives to support them, for instance, or who don’t feel comfortable asking one, could use the app to learn how to apply makeup.
“I would challenge other CIOs to put a priority on creating more accessible and inclusive products,” he says. “When you design for all abilities and people in an inclusive way, you benefit everyone.”