IS AI OUR ENEMY OR OUR FRIEND?

A side profile of a robot's human-like face

There’s scarcely a single area of human life without the potential to be touched by artificial intelligence.

You only have to look at research here at the University. In diverse areas like robotics, languages and infrastructure engineering, the use of machine learning is spreading all the time.

One area with really positive potential is diagnostic medicine. For example, researchers are training systems to examine multitudes of scans and cross-reference these with other data to estimate the likelihood of someone developing cancer. Machines can do this much more quickly and reliably than humans.
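To give a flavour of how such a system works, here is a minimal sketch: a classifier trained on invented, stand-in data (the features, numbers and model below are illustrative assumptions, not any real research pipeline) that outputs a probability of disease for each patient.

```python
# A minimal sketch with invented data: train a classifier on features
# derived from scans plus other patient data, then output a risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: each row is one patient; columns mimic scan-derived
# measurements and clinical variables. Entirely synthetic.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The output is a probability -- a likelihood of developing the disease --
# rather than a hard yes/no verdict.
risk = model.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"example predicted risks: {risk[:3].round(2)}")
```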

And in areas of resource management, where we need to accurately predict spikes in the use of water, gas or electricity, AI can be a really effective, intuitive tool in ensuring efficiency in power generation and supply.
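As a rough illustration of the forecasting idea (the data is synthetic and the model deliberately simple; a real utility would add weather, calendar and far richer models), a basic autoregressive approach predicts the next reading from the previous 24 hours:

```python
# A minimal sketch with synthetic data: predict the next hour's demand
# from the previous 24 hourly readings using a simple autoregressive model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = np.arange(24 * 90)  # 90 days of hourly readings
# Invented demand curve: a daily cycle plus noise.
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

LAGS = 24
# Each training row holds the previous 24 readings; the target is the next one.
X = np.array([demand[t - LAGS:t] for t in range(LAGS, demand.size)])
y = demand[LAGS:]

model = LinearRegression().fit(X[:-24], y[:-24])  # hold out the final day
forecast = model.predict(X[-24:])
print(f"mean absolute error on the held-out day: {np.abs(forecast - y[-24:]).mean():.2f}")
```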

But obviously there are negatives. We’re seeing this already with systems like ChatGPT, which can be really useful for writing and comprehension, but aren’t wired up like we are. While humans are hard-wired to seek out truth, these systems seek out whatever seems likely, based on statistical models, and make it seem truth-like.
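A toy example shows what “seeking whatever seems likely” means in practice. The words and probabilities below are invented for illustration; real models work over vast vocabularies, but the principle of sampling by frequency rather than truth is the same.

```python
# A toy "language model": sample the next word by frequency, not by truth.
# The words and probabilities are invented for illustration.
import random

# Hypothetical next-word distribution after "The capital of Australia is"
next_word_probs = {
    "Sydney": 0.45,     # common in text, but wrong
    "Canberra": 0.40,   # correct, yet not guaranteed to be chosen
    "Melbourne": 0.15,
}

random.seed(0)
words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling favours whatever is frequent in the training data:
# truth-like output, with no check that it is actually true.
for _ in range(3):
    print(random.choices(words, weights=weights)[0])
```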

Something else I’m worried about is how search engines like Google now have an AI-generated summary at the top of their results. You might think it’s useful because it saves time, but what is it a summary of and what mistakes is it replicating? Instead of us gravitating towards reliable, human-based sources, this summary just gives a hazy soup of material gleaned from all over the place.

AI is worth over £16.8 billion a year to the UK economy.

Only a third of consumers think they use AI, while actual usage is 77%.

Another worry is that as these AI algorithms become more sophisticated, they begin to understand how to harness our motivations and reward systems to such a degree that we will be powerless to resist. The concept of Surveillance Capitalism recognises that when you use a free platform like Facebook or Instagram, you’re not being sold a product, you are the product.

By using these tools, you’re generating valuable data for advertisers.

The design of these platforms is carefully tweaked to maximise the likelihood of you continuing to use them and remaining addicted – and the driving force behind that is money. When some of these online tools were set up there was a sense they were for the public good. But as soon as these platforms are driven by profit, they can quickly deviate from what's actually good for the people using them.

There are all kinds of good uses for social media platforms, but I’ve seen from personal experience how they can hook me in the morning, kick up a load of dust in my head and make my thinking for the whole day very clouded. You see people on public transport scrolling through TikTok videos or Instagram reels. I'm sure it’s really pleasurable doing it, but it is as though everyone is numbing themselves.

I do feel society and our political affiliations are being altered by these algorithmic systems, as more and more content reinforces whatever position we take up and compels us to want more. Whereas normal human interactions expose us to contrary points of view and more nuanced argument, it feels like the algorithms are propelling us out toward the extremes.
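The feedback loop can be caricatured in a few lines of code. The sketch below is an invented toy, not any real platform’s ranking system: it always shows the topic you have engaged with most, so an early lean becomes a lock-in.

```python
# A toy feed, not any real platform's algorithm: always show the topic the
# user has engaged with most, and watch an early lean become a lock-in.
import random
from collections import Counter

topics = ["politics-left", "politics-right", "sport", "cooking"]
engagement = Counter({t: 1 for t in topics})  # start with no strong preference

random.seed(3)
for _ in range(50):
    shown = max(topics, key=lambda t: engagement[t])  # rank by past engagement
    if random.random() < 0.9:  # the user usually engages with what is shown
        engagement[shown] += 1

# One topic dominates: the loop reinforces whatever position it started with.
print(engagement.most_common())
```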

And then there’s the question of it getting beyond human control – the kind of sci-fi landscape of AI causing catastrophe and war. We all know computers can be infected by malware and ransomware, and so the fear is that some kind of supervirus would be able to roam around the internet and hijack all sorts of security systems. Right now, we’re thankfully quite far off anything like that.

On our courses we ask students to consider what measures might remedy the negative aspects of artificial intelligence. The problem with government regulation is that because it’s slow to be instituted it’s only reacting to a situation from maybe two or three years before; it can never be complete or up to date.

But there’s another downside to regulation, which is that it can strangle innovation. You don’t want your regulatory system to overreach to the point that potentially interesting novel uses of technology don't find the soil to grow in.

Instead we need to encourage the kind of entrepreneurial, innovative creative spirit which could create incredible advances in medicine or engineering – in short, the kind of work the University is pioneering – while remaining vigilant against the creeping dangers which AI might pose. Otherwise, society simply won’t progress.

Leeds is at the forefront of research

Leeds researchers are making groundbreaking contributions to the rapidly evolving field of AI, as well as using AI to advance their research across faculties.

AI specialists from the School of Computer Science work in multi-disciplinary teams spanning engineering, science, healthcare and industry. Key research specialities within the School include:

  • The integration of advances in deep learning, which teaches computers to process data in a way inspired by the human brain;
  • Symbolic reasoning, focused on the processing and manipulation of symbols or concepts rather than numerical data;
  • Visual analysis, which considers how computers analyse and understand visual data; and
  • Language, looking at how AI systems communicate, understand and generate human-like language.

A cross-institutional network, Robotics at Leeds, is using AI to enable autonomy in robots, developing algorithms to allow robots to learn through experience, perceive their environment, reason with knowledge and control their motion.
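As an illustration of what “learning through experience” can mean, here is a minimal sketch using tabular Q-learning, a standard reinforcement learning method, on a toy navigation task. It is a generic textbook example, not the Leeds researchers’ code.

```python
# A generic textbook sketch of learning through experience: tabular
# Q-learning on a toy five-cell corridor where the goal is the rightmost cell.
import numpy as np

N_STATES, ACTIONS = 5, (-1, +1)  # actions: move left or move right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, epsilon = 0.5, 0.9, 0.3

rng = np.random.default_rng(2)
for _ in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise act greedily on current estimates.
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(Q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # reward only at the goal
        # Update the value estimate from this single step of experience.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print("learned policy:", ["left" if q.argmax() == 0 else "right" for q in Q[:-1]])
```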

In urban environments, researchers have developed technology that could allow pipes and streets to self-repair. In the workplace, they have designed robots capable of navigating and retrieving objects in a cluttered space to match how a human would carry out the task. In the food industry, Leeds researchers have used AI to optimise the fermentation of food waste into new protein products. In healthcare, AI is being used to investigate bowel cancer treatment pathways.

Dr Andrew Kirton

Andrew Kirton is a lecturer in applied ethics. He explores questions of trust and morality, and how these are underpinned by an individual’s basic social and emotional wiring.

Andrew is based in our Interdisciplinary Ethics Applied (IDEA) Centre, a specialist unit for teaching and research. IDEA blends theoretical enquiry and real-world experience to address the most complex ethical issues facing the world today. Next year marks the centre’s 20th anniversary.

Learn more about the IDEA Centre’s work.