It’s impossible to go a day without talking or hearing about AI, and about what makes it powerful and potentially dangerous. We’re well past the hype phase and now operate in a world where the technology is a permanent fixture in our lives.
Against this backdrop, everyone has an opinion on how to make AI more inclusive and ethical, and on who should be responsible for the safety of its innumerable users. Safety and ethics are paramount given the speed at which new models are built and brought to market, and the oversight needed to ensure those models are fair, unbiased, and equitable.
There are big questions to ask, and many were raised at the inaugural conference of the International Association for Safe and Ethical AI, held last month in Paris. Experts from academia, civil society, industry, media, and government discussed and debated the latest developments in AI safety and ethics, but the representation of neurodivergent perspectives in AI development went unexamined. That is a huge oversight, especially considering that 70 million people in the US alone learn and think differently, including many in tech.
Technology should be built for and serve all, so how do we make sure future AI models are accessible and unbiased if neurodivergent representation isn’t considered? It all starts at the development stage.
AI accessibility: no longer a novelty
The good news is we aren’t starting from scratch, but there’s still a long way to go before accessibility is synonymous with the development of ethical and inclusive AI. In the last few years, we’ve seen conscious efforts from companies like Apple and Google, who’ve created and delivered mobile offerings like Apple’s Live Speech and Eye Tracking, as well as Google’s Guided Frame and Lookout. But accessibility in tech is still viewed as a niche offering.
That’s why the Understood Assistant was developed and trained by experts who focus on people with learning and thinking differences, with the goal of making our vast content library more accessible. It includes a voice-to-text feature for asking questions, for instance, and its clear, concise responses are written at an eighth-grade reading level.
But there’s much more to do, and it’ll be an uphill battle for some organizations. Discussions around inclusivity remain contentious, even as AI continues to dominate content, business, and how people live their lives. Still, we must recognize and respect that everyone’s brain is wired differently. Here’s how to do that:
- The data and teams behind AI development must reflect cognitive diversity. If chatbots and AI models are always fed biased and flawed data, the results will always be biased and flawed. The best way to ensure these datasets are trustworthy, inclusive, and unbiased is to include cognitively diverse developers. A neurodivergent team also makes it easier to explore a wider range of use cases and the risks associated with applications. When you engage neurodivergent people at the development stage, you create a team that understands and prioritizes diverse ways of thinking, learning, and working. And that benefits all users.
- Companies should hire from a diverse talent pool. According to Gartner research, 75% of organizations whose decision-making teams reflect a diverse and inclusive culture, with a particular emphasis on cognitive diversity, have benefited from enhanced productivity, innovation, and positive financial outcomes. An Accenture report backs this up, finding that US economic output could improve by up to $25 billion if 1% more people with disabilities entered the workforce.
- Users want tech that helps them survive and thrive. New data from EY found that 85% of neurodivergent employees think gen AI creates a more inclusive workplace. It’s therefore incumbent on more companies to level the playing field: cast a wider net to include a broader range of employees, and provide the tools they need to thrive and to generate more accurate and robust datasets. Gen AI can also go a long way toward helping neurodivergent workers with everyday tasks like productivity, quality assurance, and time management. The same EY report found that, as a result of gen AI, 65% of respondents felt confident about their work, 64% had more clarity at work, 61% were relieved AI could help remove distressing obstacles at work, and 30% felt more accepted at work.
As leading companies edge closer to artificial general intelligence (AGI), which some predict could arrive by 2027, early and strategic involvement in shaping the AI landscape is critical. But we must remember that rapid AI evolution demands caution. In many cases, there are more questions than answers, but it’s vital that we prioritize the safety and inclusivity of AI so the technology delivers what people with learning and thinking differences need and deserve.