A(G)I will probably destroy us: who cares?


AI may well pose an unprecedented risk to humanity as a species. Of course, it is also possible that humans are either too stupid to create an AGI, or smart enough to set the right boundaries. Either way, no one is going to reject AI. Humans are more concerned with comfort than survival.

The Selfless Animal?

There is a widespread belief that humans are genuinely concerned about the future of the species. This belief sees humans as proactive individuals, willing to weigh any decision against the good of humanity as a whole.

To be clear, humanity here means the entire species. The belief implies that humans share some sort of collective consciousness and ethic that helps them preserve both the species and its individuals. That sounds a little too optimistic.

The Selfish Animal?

Conversely, there is a widespread belief that humanity is a collection of selfish individuals who place their personal salvation and comfort above all else. This belief sometimes extends selfish behaviour to groups such as nations.

That sounds a bit pessimistic. We all know that people are capable of sacrificing themselves, at least on certain occasions, so that others can flourish or survive. This applies to both individuals and groups.

The Careless Animal

As usual, things are more complex than absolute theories and definitions allow. The truth, if there is one, probably lies somewhere between the two extremes.

Human beings are capable of selfless, selfish or mixed behaviour: we are confused animals, usually driven by instinct, which makes our choices for us to varying degrees. We then justify our actions with logic after the fact, but the truth is that in most cases we don't know what we would do until we find ourselves in the specific situation.

This seemingly obvious and unimportant conclusion tells us a surprising amount about how people react to risks that are not immediate and that affect a large group or the entire species. The more abstract the risk, and the more imagination it takes to fit that risk into a familiar framework, the less people will care about it.

We are generally careless animals when it comes to most things that don't directly and immediately affect us.

Out of sight, out of mind

The above sentence is so reliably true that anyone trying to hide something worrying - from governments to agencies to corporations - can do so efficiently simply by making that thing invisible.

This is achieved either by mentioning the thing briefly in a sea of useless information, or by avoiding talking about it altogether.

That is an active strategy someone can use to hide something from the public, but the statement also applies, and most often applies, to situations and risks that our own minds simply refuse to see or consider. Our minds hide or distort things from us. And this is usually beneficial to our lives as individuals.

How is that relevant to the AI and AGI debate?

The Selfless Animal theory suggests that humans are proactive individuals, willing to consider the future of the species. That sounds great, but let's face it, we're also the Selfish Animal, putting personal comfort before survival. And then there's the Careless Animal, driven by instinct, justifying its actions with logic after the fact. So which one are we?

Actually, we're all a mixture of these traits. We're capable of selfless acts, but we're also selfish. We're driven by instinct, but we also use logic to justify our actions. And when it comes to risks that aren't immediate and affect a large group of people or the entire species, we tend to be careless animals.

We don't care about things that are out of sight and out of mind. This is where the debate about AI and AGI gets really interesting. We're talking about risks that are abstract and hard to visualise. We're talking about risks that can be hidden or distorted by our own minds. And that's exactly what's happening. People either ignore the risks or play them down because they're not immediate and don't affect them directly.

The Ignoring Animal

Here is the catch: we don't really care about the risks. We're more concerned with our daily routines and personal comfort. We're more concerned with the immediate gratification of our desires than the long-term survival of our species. And that's what makes the debate about AI and AGI so fascinating. It's a reflection of our own priorities and values.

We're not going to reject AI because it's too convenient, and we won't worry about the risks because they're too abstract and out of sight. The AI and AGI debate is a reflection of our own selfish and careless nature. We'll keep using AI and AGI, and we'll keep ignoring the risks until it's too late.

Take, for example, our approach to climate change. We're aware of its seriousness, but we're not doing enough about it. Our daily routines and personal comfort take precedence over the long-term survival of our planet. Similarly, we're aware of the risks associated with AI, but we're prioritising its immediate benefits over its potential long-term consequences.

This is evident in the way we use AI. We use it to make our lives easier and more convenient, but we don't think through the potential risks. We're not actively mitigating those risks, or even acknowledging them. We're just using AI because it's convenient.

Conclusion

Our priorities often reveal much about our values as a society. We are quick to embrace the convenience and benefits of AI, but we often turn a blind eye to the potential risks and long-term consequences. This short-sightedness is evident in many aspects of our lives, from our daily routines to our pursuit of profit and power. In the pursuit of immediate gratification and personal comfort, we often overlook the need to secure the future of our species.

This is reflected in our actions, or lack thereof, when it comes to addressing the risks associated with AI. We eagerly embrace AI technologies that make our lives easier, more efficient and more entertaining, but we often fail to critically examine the ethical implications - including AI self-awareness - and the possibility of unintended consequences.

This lack of foresight and responsibility is not limited to individuals; it extends to corporations and governments. In the pursuit of profit and power, some entities may prioritise the rapid development of AI without adequate safeguards or oversight. They may downplay or ignore the risks, either out of ignorance or a desire to maintain their competitive edge.

As we continue to develop and implement AI technologies, it is important to acknowledge our tendency to be selfish and careless. By recognising these tendencies, we can work towards a more responsible and sustainable future for humanity and AI.

Or maybe we'll just go on not caring and live happily ever after until we don't.