Gigaom brings you our unique analysis and commentary on the present and future of AI.
Earlier this year, Facebook tested algorithms designed to detect the warning signs of depression. If the algorithms find such signs, Facebook reaches out to the person with links to resources for getting help. How comfortable are we with this approach? Assuming we're talking about publicly available data, how far would we be willing to push it?
How would you feel about an algorithm that tried to spot people who were being radicalized, or one that tried to spot people who might commit a mass shooting? Are we comfortable allowing the government to use this kind of public data? What about non-public data sources? Are we willing to give the government access to those in the name of security? No matter what you think about the future of artificial intelligence, these are questions we will undoubtedly have to answer sooner rather than later.