AI’s biggest risk factor: Data gone wrong

Artificial intelligence and machine learning promise to radically transform many industries, but they also pose significant risks — many of which are yet to be discovered, given that the technology is only now beginning to be rolled out in force.

There have already been a number of public, and embarrassing, examples of AI gone bad. Microsoft’s Tay went from innocent chatbot to crazed racist in just a day, corrupted by Twitter trolls. Two years ago, Google had to censor image searches for keywords like “gorilla” and “chimp” because they returned photos of African-Americans, and the problem still hasn’t been fully fixed in its Google Photos app.

U.S. phone users least likely to switch after security breach

Globally, 47 percent of consumers would switch their mobile phone carrier in the event of a security breach, up 7 percent from last year, but only 29 percent of Americans would do the same, according to a new Nokia survey of more than 20,000 customers.

Mexico ranked the highest, at 73 percent.

Part of the reason for the difference is that customers in different markets have different attitudes toward the relationship between themselves, their phones, and their carriers.

"In some markets, people think that they own the device, and the operator is just the connection," said Giuseppe Targia, vice president of the security and IoT business unit at Nokia Group.
