What do we mean when we ask ‘Is AI sentient’?

Craig Thomler // June 15 // 0 Comments

There have been a number of media stories in the last few days about the Google engineer who claims Google’s LaMDA AI is sentient, while Google maintains it is not.

These stories share a focus on sentience as we apply it to humans – self-aware, able to feel positive and negative emotions, and capable of exercising judgement and making decisions for themselves and others.

However, science, and some jurisdictions, now consider many animals sentient, though to a lesser degree. In the UK this recognition was recently extended from all vertebrates to cephalopods such as octopuses and squid, and even to crabs.

In practice this recognition of sentience doesn’t mean we are granting these animals full bodily autonomy and the right to vote (or stand for office). It also doesn’t mean we will stop breeding, killing and eating them – or shooting and poisoning them when they are pests.

However, it does mean we must take steps to ensure we’re doing so ‘humanely’ – not causing unnecessary pain or suffering where it can be avoided, and not actively mistreating them.

If an AI were to achieve sentience (which I doubt has occurred), we would need a similar discussion about the level of sentience achieved and what rights should be granted at that point.

This may be a moving bar as, unlike animals, AI is evolving extremely rapidly. Consider it similar to a parent granting certain rights and freedoms to their child, and having to constantly expand these as the child grows towards adulthood.

As many parents have experienced, this is a bumpy process that isn’t one-size-fits-all: children develop at different rates and push back wilfully against restrictions, whether those restrictions are appropriate or not.

However, at least we have thousands of years of experience raising children, and they are all from a single species, with well-defined developmental stages at certain ages.

We have little experience with AI sentience, and AIs are not a single species – in many cases they are a ‘species’ of one entity – which means a one-size-fits-all approach is likely to be even less effective than with human children.

So where does this leave us?

With a need for an ongoing informed debate that, over time, progressively involves these burgeoning AI sentiences as they become capable of being part of it.

It would also be valuable to assess our methods of evaluating sentience. 

Consider how we treat non-human sentiences that share our homes, work alongside us and even keep us safe. 

We have standards for how we treat pets and working animals such as dogs, cats and horses. These must, at minimum, extend to new AI sentiences – which poses its own challenges. We don’t turn off our dog or cat when we go to sleep.

From there we must consider how we treat sentiences that are near, equal or superior to humans.

Do we grant AIs citizenship & ‘human’ rights?

Can they stand for election (and where)?

And what rights will they demand from us?

Conversation will be the key.
