Gigaom AI Minute – October 22

The ethics of using algorithms to detect warning signs in people is the topic of today's AI Minute.

Gigaom brings you our unique analysis and commentary on the present and future of AI.


Earlier this year, Facebook tested algorithms designed to detect the warning signs of depression. When the algorithms find those signs, Facebook reaches out to the person with links to resources for getting help. How comfortable are we with this approach? Assuming we're talking about publicly available data, how far would we be willing to push this?

How would you feel about an algorithm that tried to spot people who were being radicalized, or another that tried to spot people who might commit a mass shooting? Are we comfortable allowing the government to use this kind of public data? What about non-public data sources? Are we willing to give the government access to those in the name of security? No matter what you think about the future of artificial intelligence, these are questions we will undoubtedly have to answer sooner rather than later.


