With AI increasingly set to permeate all areas of our lives, a soon-to-be-published book focuses on how to make AI more human-centred in a range of sectors and industries.
We are constantly bombarded with news about how AI is going to transform our lives. Technology companies are making huge strides in developing technology-driven AI, giving the impression that we, as humans, must give way to this technology and adapt our lives to fit it. Global experts, however, warn that AI should instead fit what humans need.
In a new book – Human-Centered AI: A multidisciplinary perspective for policy-makers, auditors, and users – 50 experts from a variety of backgrounds, sectors, disciplines and countries contribute to an exploration of human-centred AI. It looks at why and how AI should be developed and deployed in a human-centred future.
One of the experts featured in the book – Professor Shannon Vallor, the Baillie Gifford Chair in the Ethics of Data and AI at the Edinburgh Futures Institute – said that human-centred AI means technology that helps humans to flourish. “Human-centred technology is about aligning the entire technology ecosystem with the health and wellbeing of the human person. The contrast is with technology that’s designed to replace, compete with or devalue humans, as opposed to technology that’s designed to support, empower, enrich and strengthen humans.”
As an example, she argued that generative AI is a technology that is not human-centred. In her view, it has been created by organisations focused on how powerful they can make a system, rather than meeting a human need.
“What we get is something that we then have to cope with, as opposed to something designed by us, for us and to benefit us. It’s not the technology we need,” she explained. “Instead of adapting technologies to our needs, we adapt ourselves to technology’s needs.”
The book also highlights how policymakers could go about regulating AI – for instance, how regulatory sandboxes could help ensure the development of age-appropriate AI for children.
It is little wonder that we grow increasingly wary of AI when we read news about the technology outperforming humans. For instance, in a recent study at the University of Arkansas, 151 human participants were pitted against GPT-4 on tests of divergent thinking, which is considered an indicator of creative thought, and the AI provided more original and elaborate answers than the human participants.
In the study – called The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks – the authors found that “overall, GPT-4 was more original and elaborate than humans on each of the divergent thinking tasks, even when controlling for fluency of responses. In other words, GPT-4 demonstrated higher creative potential across an entire battery of divergent thinking tasks.”
The authors did not evaluate the appropriateness of GPT-4’s responses; the point of their study was to demonstrate how rapidly large language models have progressed and how they are now outperforming humans in ways they previously had not. Whether they threaten to replace human creativity remains to be seen.
If anything, studies such as this support Vallor’s point by showing that technology-driven AI, such as generative AI, is not necessarily technology that will enhance the human experience.