The technological revolution of artificial intelligence (AI) has taken the spotlight this year, and thinking about our work at Three Hands Insight, I’ve been reflecting on the questions that matter most to us:
How could artificial intelligence impact those who are more ‘vulnerable’ and underserved in society?
How can we ensure this technology serves everyone, especially those often left behind?
How are these new products and services developed with the needs of ALL customers in mind?
In the last few months, I’ve been immersing myself in discussions and talks with experts from a range of fields – Silicon Valley tech developers, entrepreneurs, digital ethicists, social commentators, and public servants. They shared varied perspectives, some hopeful, some worrying.
There is consensus that this technology brings a great opportunity: to address social issues, to fill gaps and get things right where we humans fall short, and to enhance our work rather than replace it – for example, by assisting radiologists in diagnosing faster, or by expediting drug discovery to treat diseases.
Long before ChatGPT took the headlines, AI was already being used in tools we are very familiar with, such as GPS navigation, voice assistants (Alexa and Siri), and the speech recognition tools in our phones. What sets the current technology apart is the fast pace at which it is developing and becoming more capable – even its developers have said it is scary to witness.
It is therefore essential to think through how to use it responsibly, and to consider its potential risks and ethical implications. My main learning has been that the focus should not simply be on what this technology can do, but on how it is developed and how it is used.
The use of AI across different sectors presents great opportunities to offer the right support to those who are marginalised and to empower vulnerable people – there are already impactful applications, such as screen readers and speech recognition software. However, the voices of these groups must be at the core of any new technological development. This means actively engaging diverse people, envisioning desirable outcomes, and maintaining ongoing conversations with society, right from the start of the design process. Businesses need to listen to their customers, shifting the focus from the technology to the people it is supposed to serve.
A great example of inclusive design – not specifically in AI – is the creation of the ‘Hero Arm’ by Open Bionics, the world’s first clinically approved 3D-printed bionic hand for children and adults. Open Bionics initially aimed to create a prosthetic with exceptional technical features, but input from children with limb differences challenged the designers’ assumptions, redirecting the focus towards aesthetics, playfulness, and the psychological comfort of wearing a prosthetic. The designers learned that customisation and a superhero aesthetic mattered more than hyper-functionality. And they now test every new idea with a lived expert, Tilly. This highlights how involving end-users in design leads to truly impactful and useful solutions.
As more businesses invest in incorporating AI into their services and products, it is imperative to have discussions about prioritising its inclusive and ethical design and development. Otherwise, there is a significant risk of perpetuating existing inequalities and causing harm to some groups of people. For example, some AI tools have been found to have unintentional bias built into them, introduced by the documents and data used to train them. These bedtime stories created by a large language model show how this type of AI may reinforce gender stereotypes, and these AI-generated work letters reveal potential gender biases.
So what should companies consider when incorporating new ‘intelligent’ tools?
- Is it designed around the people it is supposed to serve? Are you including lived experts’ needs and concerns at the development stage?
- What outcomes would be most beneficial for your customers?
- What are the potential risks for your customers and how are these mitigated?
- How are you testing your product before it is launched or implemented?
- How are you monitoring it and checking in with your customers post-launch?
- Who will be responsible for addressing any negative outcomes?
This phase is a learning curve for everyone and I’m genuinely curious about how companies are thinking about the societal impact and the ethical implications of embedding AI in their products and services.
I believe in the power of inclusive, ethical technology, and I’d like to collaborate with initiatives that are developing innovative tools with the needs of all customers in mind, so they work for everyone. One way to do this is by weaving the insights and experiences of our Lived Experts Research Community into the design and testing of new technology.
Ultimately, our collective responsibility is clear: to harness the potential of AI while ensuring it doesn’t exacerbate existing disparities. Let’s steer AI towards inclusivity and empathy, shaping a future where no one is left out in the tech revolution.
Get in touch to share your thoughts and explore how we could collaborate!
The author, Lucia Bertello, is Head of Social Insight at Three Hands.
Featured image created by Magic Media, Canva’s generative AI.