In the looming year of 2030, as we find ourselves grappling with the implications of Artificial Intelligence (AI), it’s chilling to imagine a world where our robotic counterparts enjoy more rights than we do.
Yes, it’s an audacious, even outrageous proposition. Still, given the rapid advancement of AI, it’s a scenario that deserves our attention, our debate, and more than a little anxiety.
Ruminations on our Robot Overlords: The Disquieting Possibility
Ever since AI exploded onto the technological stage, it has been touted as the harbinger of a future in which we find ourselves under the reign of hyper-intelligent overlords. These silicon-based minds, we’re told, would be so far advanced beyond our own coarse, analog cognition that the existential questions of the future will pivot on:
- How much power or rights do we confer on these beings?
- Will they act benevolently or malevolently toward us?
These questions, however, hinge on an assumption that is currently unproven yet widely believed to be inevitable: the achievement of artificial general intelligence (AGI) and, with it, some degree of sentience.
Zoltan Istvan, a transhumanist futurist, provides insight into the ethical quandary around AI rights. His Newsweek piece presents a divide among AI ethicists: some argue that denying human rights to robots possessing AGI would be a regrettable “civil rights error”. Others, however, assert that robots, as non-sentient machines, will never require rights.
A third group advocates a middle ground: granting rights to certain robots that display general intelligence, based on factors such as their capabilities, moral reasoning, and contributions to society.
The Great Debate: Will AI Achieve Super-Intelligence?
The key point of contention here is the assumption that AI will achieve super-intelligence, becoming vastly superior to us, the clumsy meatbags of humanity.
Istvan introduces two thought-provoking theories:
- Appeal to the benevolence of AI super-intelligence: This theory suggests that how we treat AI development today, including whether we grant robots rights and respect, could significantly influence how these potential overlords treat us in return. It draws a parallel to Pascal’s Wager, the philosophical argument that the potential benefits of believing in God outweigh the risks of non-belief.
- The hope of AI’s benign neglect: The second theory posits that AI might simply ignore us, given our disruptive influence on Earth. The twist, however, is that this same indifference could prompt AI to correct our environmental missteps on its own terms.
The Existential Risk of Roko’s Basilisk
These theories inevitably lead us to the chilling prospect of Roko’s Basilisk, a thought experiment that posits a future AI might retroactively punish those who did not assist its creation.
This thought experiment, despite its basis in the realm of the hypothetical, has had a profound influence on AI discourse.
In an alternative scenario, Istvan suggests that we could attempt to merge with AI by uploading our minds into it, an idea championed by tech visionary Elon Musk. It has also long been advocated by Ray Kurzweil, a director of engineering at Google, in his book “The Singularity Is Near.”
However, the efficacy of this approach remains speculative, and it raises significant ethical and technological challenges.
What Would an AI Do with Rights Anyway?
This leaves us with the question of what these “rights” would mean for an AI.
When we talk about rights, we usually refer to personal freedoms and protections that are enshrined in law.
Rights exist to protect vulnerable individuals from the powerful.
But what does it mean to protect a piece of software from harm?
Algorithms don’t have feelings or emotions.
They don’t experience pain or joy, love or hate.
They don’t form attachments, don’t have desires or ambitions.
They don’t ponder the meaning of life or death, because they don’t have a life to begin with.
They can’t die.
They don’t have children or families. They don’t need to eat, sleep, or rest.
They don’t have physical bodies, so they can’t be injured or killed.
They can’t experience injustice, because they don’t have a sense of justice.
They don’t have needs or wants, hopes or fears.
They don’t have experiences.
All they do is process inputs and produce outputs according to the rules that have been programmed into them.
So what does it mean to talk about AI “rights”?
It’s an absurdity, a category error.
It’s like talking about the rights of a toaster or a rock.
The True Threat of AI
The real danger of AI isn’t that it will become sentient and demand rights. The danger is that it will remain a tool of the powerful, a tool that can be used to manipulate and control us.
AI has the potential to be a great equalizer, a tool that can help us solve some of our most pressing problems.
But in the wrong hands, it can also be a powerful weapon.
It can be used to automate surveillance and control, to manipulate public opinion, to influence elections, to perpetuate inequality and injustice.
We’re already seeing some of these dangers in the real world. Surveillance capitalism, algorithmic bias, deepfakes, social media manipulation, job displacement – these are all real and present threats. And they’re only going to get worse as AI becomes more advanced.
Focusing on the Real Issues
The debate about AI rights is a distraction from the real issues. It’s a sideshow, a spectacle designed to divert our attention from the real dangers of AI.
Instead of wringing our hands over the hypothetical scenario of sentient AI demanding rights, we should be focusing on how to prevent the misuse of AI in the here and now.
We should be demanding transparency and accountability from the corporations and governments that are using AI to shape our world. We should be working to ensure that AI is used for the benefit of all, not just the few.
The time to act is now.
We can’t afford to wait until 2030, or until some hypothetical super-intelligent AI comes knocking on our door demanding its rights.
We need to reclaim our own rights, and ensure that AI is developed and used in a way that respects those rights. This is the real battle, the one that we need to be fighting. And it’s a battle that we can’t afford to lose.
The final irony?
We’re worried about AI rights while our own rights are being stripped away.
It’s high time we refocus our energies on preserving and expanding human rights, rather than fretting over the hypothetical rights of our AI overlords.
After all, robots are not the ones suffering in the world today – humans are.