Should there be laws regulating humanoid robots?

2018-09-10 15:08 #0 by: Niklas

With robots and artificial intelligence getting more sophisticated, are our current laws enough? Imagine having a private conversation with someone, for instance at a party, and later finding out that the "person" you spoke with was a robot. Or imagine going on a date not knowing that you are actually meeting a very sophisticated AI-powered humanoid. Or imagine finding out that your shrink is a robot.

I would act differently and say different things to a robot than to a real human. Therefore I want to know if the person I talk to on the phone is a software program instead of a human. I want to know if the people I meet are not human.

Today, humanoid robots and digital assistants aren't good enough to fool most people. That will change. They will be capable of fooling anyone into believing that the person at the other end of the phone is real, or that the shop clerk you are flirting with is a person interested in you, not a robot doing everything it can to sell you something.

Is it okay for software and hardware to mimic people without us knowing which is which? Do you think we need to adapt our laws to a future where humans and robots live side by side, looking and acting the same? Just as it is a crime to pose as someone else, shouldn't it be illegal to pose as a human when you are not one? In my opinion, letting an object imitate a human would be fraud if there is a risk of confusion.

What do you think? Do we need to adjust our laws? Is this covered by the laws we already have?

(Photo by Parker Johnson on Unsplash)

2018-09-11 22:57 #1 by: jordan

It seems so... futuristic? That we can have this kind of discussion. It worries me that, just as data people put online can be accessed and stored by anyone or anything, the same could happen with data collected by robots. So I imagine we might need more privacy laws governing what data robots are allowed to store.

2018-09-12 07:29 #2 by: Niklas

Artificial intelligence and machine learning are here and improving very quickly. In the beginning they will mostly be used for good. Once we have accepted and gotten used to them, some malicious people will start using them for bad things. That's why I think we should have proper laws in place beforehand.

#1: Don't you think we need laws stopping computers and robots from impersonating humans unless they openly disclose it?

2018-09-12 13:30 #3 by: Tammie

Yes, I strongly believe that we do need that. Robots can also have abilities that humans don't have. A human would need a separate device to record a conversation, but a robot is essentially a computer and could be recording audio or taking pictures without a person's knowledge.

Happy creating!


Host of Paints and Crafts

2018-09-13 20:03 #4 by: jordan

#2 Maybe in a similar fashion to how GDPR affected websites in Europe? Yes, that would be a good idea, in my opinion.

2018-09-14 09:34 #5 by: Evelina

Yes, and we should also consider who has power and control over the robots. They could be used to gain power in the world, for example by generating propaganda or for marketing purposes. Therefore, an individual needs to know whether they are dealing with a robot, so they don't take what the robot says for granted.

#1 Jordan has a good point: if robots are able to collect data from their environment, such as recording conversations or tracking human behaviour, it could be detrimental, even to the point of threatening democratic societies. And robots will be able to get hacked too, which is a scary thought.

2018-09-14 11:58 #6 by: Niklas

Does anyone know of a country that already has laws like this in place?

2018-09-18 13:40 #7 by: Niklas

Here's an early example of why we have to make sure anonymous use of biometric cloning is forbidden. A reporter trained Lyrebird, an artificial intelligence-based software program, to clone his voice. He then called his mom using pre-written phrases and had a short phone conversation with her.

The voice isn't perfect, but it's close, and good enough to fool someone who knows you well.

2018-10-01 10:59 #8 by: Niklas

I found this article from Axios. They have a different view than mine. They seem to think the problem lies with the people developing the technology, not with the laws. I don't think there is a problem with developing the technology, but with how it can be used.

Why it matters: Increasingly accessible tools for creating convincing fake videos are a "deadly virus," said Hany Farid, a digital-forensics expert at Dartmouth. "Worldwide, a lot of governments are worried about this phenomenon. I don't think this has been overblown."

» Why AI academics are enabling deepfakes - Axios

2018-10-01 19:10 #9 by: jordan

#8 I do think that the people developing it could be a problem. As sad as it is, there will be people out there who will do anything for personal gain.

