Niklas
2018-09-10 17:08

Should there be laws regulating humanoid robots


With robots and artificial intelligence getting more sophisticated, are our current laws enough? Imagine having a private conversation with someone, for instance at a party, and later finding out that the "person" you spoke with was a robot. Or imagine going on a date not knowing that you are actually meeting a very sophisticated AI-powered humanoid. Or imagine finding out that your shrink is a robot.

I would act differently and say different things to a robot than to a real human. Therefore I want to know if the person I talk to on the phone is a software program instead of a human. I want to know if the people I meet are not human.

Today, humanoid robots and digital assistants aren't good enough to fool most people. That will change. They will be capable of fooling anyone into believing that the person at the other end of the phone is a real person, or that the shop clerk you are flirting with is someone genuinely interested in you, not a robot doing everything it can to sell you something.

Is it okay for software and hardware to mimic people without us knowing what is what? Do you think we need to adapt our laws to a future where humans and robots are living side by side, looking and acting the same? Just like it is a crime to pose as someone else, shouldn't it be illegal to pose as a human when you are not? In my opinion it would be fraud to let an object imitate a human if there is a risk of confusion.

What do you think? Do we need to adjust our laws? Is this covered by the laws we already have?

(Photo by Parker Johnson at Unsplash)


Best regards, Niklas 🎈

jordan
2018-09-12 00:57
#1

It seems so… futuristic? That we can have this kind of discussion. What worries me is that, just as data about humans that is put online can be accessed and stored by anyone or anything, robots could collect data the same way. So I imagine more privacy laws regarding what robots are able to store as data might be needed.

Niklas
2018-09-12 09:29
#2

Artificial intelligence and machine learning are here and getting good very fast. In the beginning they will mostly be doing good. Once we have accepted them and gotten used to them, some bad people will start using them for bad things. That's why I think we should have proper laws in place beforehand. #1: Don't you think we need laws stopping computers and robots from impersonating humans unless they openly advertise it?


Best regards, Niklas 🎈

Tammie
2018-09-12 15:30
#3

Yes, I strongly believe that we do need that. Robots can also have abilities that humans don't have. A human would have to use a separate device to record a conversation, but a robot is essentially a computer and can be recording audio or taking pictures without a person's knowledge.

Happy creating!

Tammie

Host of Paints and Crafts

jordan
2018-09-13 22:03
#4

#2 Maybe in a similar fashion to how GDPR affected websites in Europe? Yes, that would be a good idea IMO.

Evelina
2018-09-14 11:34
#5

Yes, and there's also the question of who has power and control over the robots. They could be used by people to gain power in the world, by generating propaganda, for marketing purposes, etc. Therefore, an individual needs to know whether it's a robot in order not to take for granted what the robot says.

#1 Jordan has a good point: if robots are able to collect data from their environment, such as recording conversations, tracking human behaviour, etc., it could be detrimental, for instance by threatening democratic societies. And the robots will be able to get hacked too, which is a scary thought.

Niklas
2018-09-14 13:58
#6

Does anyone know of a country that already has laws like this in place?


Best regards, Niklas 🎈

Niklas
2018-09-18 15:40
#7

Here's an early example of why we have to make sure anonymous use of biometric cloning is forbidden. A reporter trained Lyrebird, an AI-based software program, to mimic his voice. He then called his mom, using pre-written phrases, and had a short phone conversation with her. The voice isn't perfect, but it's close, and good enough to fool someone who knows you well.


Best regards, Niklas 🎈

Niklas
2018-10-01 12:59
#8

I found this article from Axios. They have a different view than I do. They seem to think the problem is the people developing the technology, not the laws. I don't think there is a problem with developing the technology, but with how it can be used.

Why it matters: Increasingly accessible tools for creating convincing fake videos are a "deadly virus," said Hany Farid, a digital-forensics expert at Dartmouth. "Worldwide, a lot of governments are worried about this phenomenon. I don't think this has been overblown."

» Why AI academics are enabling deepfakes - Axios


Best regards, Niklas 🎈

jordan
2018-10-01 21:10
#9

#8 I do think that the people developing it could be a problem. As sad as it is, there will always be people out there who will do anything for personal gain.

Niklas
2018-10-02 10:44
#10

I think that will more likely happen with people using the technologies once they have been developed.


Best regards, Niklas 🎈

Niklas
2019-08-23 13:10
#11

The New York Times has a short video in which Claire Wardle explains what deepfakes are and the risks they come with. I think it is worth watching, as it directly relates to what I wrote in the original post.

» Opinion | This Video May Not Be Real - The New York Times


Best regards, Niklas 🎈

Tammie
2019-08-26 05:40
#12

I think the use of robots is worrisome and the potential for misuse is really high.

Happy creating!

Tammie

Host of Paints and Crafts

jordan
2019-08-26 17:06
#13

#11 I find deepfakes really concerning, even at this point where the technology is fairly primitive. There have already been horror stories, and technology like that is only going to improve, making things like online verification even more important to safeguard people from the threats that may arise.

Niklas
2019-08-27 10:25
#14

Online verification/identification is an excellent idea, and not just for this, but for every situation where it is vital to know who you are dealing with.


Best regards, Niklas 🎈

jordan
2019-08-27 23:53
#15

I know that in Korea you have to link your government ID (I guess it is like the P number in Sweden) to any activity you do online. Maybe that is a model that others may follow?

Niklas
2019-08-28 11:23
#16

If by "any activity" they mean important government stuff, I agree. Estonian citizens (and e-residents) also have digital ID cards for everything from banking to starting companies and filing tax returns.


Best regards, Niklas 🎈

jordan
2019-08-28 22:43
#17

I think the Korean version links your name to your ID on everything, including social media (I believe one of the main platforms is called Naver?), although it doesn't seem to stop trolls on websites such as Ilbe. The Estonian system seems like a good halfway point, acting more for convenience than for monitoring people.

Niklas
2019-08-29 11:12
#18

Yes, the Estonian way seems more European. 😀


Best regards, Niklas 🎈

Niklas
2019-09-06 11:06
#19

Here's the next example of fraud using cloned voices.

» Scammer Successfully Deepfaked CEO's Voice To Fool Underling Into Transferring $243,000


Best regards, Niklas 🎈

[Joab]
2019-10-10 09:32
#20

Oh boy, how many new types of criminality our modern technology introduces. This one bothers me more than the others, since it will give us a society where you must be more and more suspicious of everyone and everything.
