Gigaom AI Minute – January 11

How do you teach a machine morals? There is, of course, no data set on which to train it. There is no agreed-upon scripture, and even if there were, there would be no agreed-upon interpretation of that scripture. There is no single moral precept that can be universally applied in all situations.

Even if a single person wanted to teach an AI their own values, their own moral code, there are so many exceptions, provisos, and situational judgments that it probably can't be done. So if all of this is true, then a machine can never be a moral agent; moral responsibility will always rest with the human behind the machine.
