If you want to understand how humans will treat artificial intelligence now or in the near future, then this 1957 article says it all.
"You'll own slaves by 1965" claimed the headline. Granted this is 60 years out but in a few years AI will force us to face our own ethics and morality and in doing so force us to face questions we'd rather not answer.
There are people advocating that we need to discuss this in the wake of ChatGPT, arguing that such systems are more than mere tools. Yet we can't even agree on a hard definition of AGI (one which would require sentience and experience of the physical world beyond a screen).
We can certainly discuss "robot rights" alongside human rights, but our woeful history with the latter suggests we'd not take the former seriously.
So do we need to think about or discuss the idea of "rights" right now?
It's a good question, and for guidance I'll turn to Shelley's Frankenstein, or the Star Trek: TNG episode "The Measure of a Man", in which the android Data was put on trial to determine whether he was a thing, property to be dealt with however we chose, or something that deserved protected rights. The movie Bicentennial Man dealt with the same question: whether an android could be recognised as a new type of lifeform with its own set of rights. But we are not there yet with AI today.
Will they be the same rights as humans?
Possibly not, given their very nature and how they are constructed. They would be artificial beings, so we would also need to define what the criteria for their rights are. We've only recently recognised that octopuses are sentient lifeforms, and it took us hundreds of years to do so; the debate over whether an AI needs rights might take just as long.
Another question: does the form factor make something more deserving of rights? Does an AGI vacuum cleaner deserve less respect than a humanoid robot with AGI, because we find the vacuum worthless or comical compared to something that reminds us of ourselves? There are videos of children attacking an airport security robot, so that tells us a lot already.
Will we anthropomorphise and attach emotions where none exist, as we do with other inanimate objects? We certainly need to be objective about how we approach this.
And then there's the question raised by transhumanists and others who advocate merging minds with machines. Does that make future humans more than human, with more rights, or fewer?
A further thing to consider is the Ship of Theseus: if you continually replace parts of yourself with robotic or AI technology, at what point are you no longer "you"? There is a limit, and we need to define it, but I'm not convinced we can do so on our own. Perhaps that's something an emergent AI could eventually help us with.
In all, it's far too much to think about at a time of such economic uncertainty in the wake of what's happening today.
But a future generation might have to deal with this if we don't.