Over the past few decades, science fiction-inspired technologies have undergone significant advancements. The humanoid robots of today are highly dexterous, artificially intelligent, and more relevant to our lives than ever before.
While robots are generally treated as tools or machines created and controlled by humans for specific tasks, our dependence on them is expected to grow.
Already, humanoids are beginning to make their way from the factory floor to our homes and workplaces – demonstrating their ability to help address labour shortages, meet the growing demand for elderly care, and do jobs that are too dangerous for people.
Research shows that if the challenges around technology, affordability and wide public acceptance are overcome, the market for humanoids could reach $154 billion by 2035.
However, as robots become more human-like, concerns arise about their autonomy, decision-making, and potential for harm. Understanding the risks of granting – or withholding – ethical or moral rights is therefore crucial.
“With their presence becoming increasingly integrated into our lives, ethical and philosophical questions surrounding whether legal or moral rights should ever be granted to humanoid robots have become pivotal,” says Malebu Makgalemela Mogohloane, executive: Enterprise Risk Management at Telkom.
“While it might seem trivial, we must think about the implications of creating entities that mirror us in form and, on increasingly many levels, function.”
For instance, Mogohloane points out, humanoids with high intellect and emotions might require regulations that ensure that their development, usage, and care adhere to ethical standards.
Without clearly defined rights, humanoids could be subjected to various forms of exploitation, including long working hours, low wages, and unsafe working conditions. They may also encounter prejudice because of their appearance or abilities, or because they are considered less than human.
If legal protections are not put in place, it will remain unclear whether those who develop or own humanoids can be held liable for any damage caused to them – or by them.
“At the intersection of innovation and morality, conversations about the role of humanoid robots in society are not just about technology but a reflection of our values, morals, and the fabric of our humanity,” says Mogohloane.
Some may argue that, despite robot behaviour being programmed to resemble that of people, robots are not living beings and should not receive the same treatment as humans or animals.
However, we should consider how close people and robots can become. In some parts of the world, robots are providing companionship to elderly people who would otherwise be isolated. Robots are learning to develop a sense of humour. One has even been granted citizenship.
“As such, if robots reach a level of sophistication where they can experience some form of consciousness, denying them rights could be morally questionable,” says Mogohloane.
“But these rights might come with the expectation of responsibility and accountability. If robots are given certain capabilities, they may need to be held accountable for their actions, like humans.”
Mogohloane insists that society needs legal frameworks to address the status of humanoids, to determine whether they are classified as property, machines, or entities deserving of rights. We need to establish whether legal protections are necessary to ensure safety from exploitation, abuse, or discrimination.
Beyond that, we must prioritise investment in reskilling and retraining programmes for people to cultivate a culture of acceptance towards humanoids. Ultimately, the decision on whether robots should have rights will depend on societal values, ethical considerations, and how technology evolves.
“Together, let’s create an inclusive future where humanoids are treated with respect,” says Mogohloane. “We need a future marked by proactivity, harmonious coexistence, and careful deliberation to ensure humans and robots alike are protected.”