J. Nathan Matias of MIT published the results of a study that outsourced to readers the job of signaling to algorithms that a source requires fact-checking. His group calls this approach “AI nudging”, the principle being that “we can persuade algorithms to behave differently by persuading people to behave differently”.
“Across the internet, people learn to live with AI systems they can’t control. For example, Uber drivers tweak their driving to optimize their income. Our collective behavior already influences AI systems all the time, but so far, the public lacks information on what that influence actually is. These opaque outcomes can be a problem when algorithms perform key roles in society, like health, safety, and fairness. To solve this problem, some researchers are designing “society-in-the-loop” systems. Others are developing methods to audit algorithms. Yet neither approach offers a way to manage the everyday behavior of systems whose code we can’t control. Our study with r/worldnews offers a third direction; we can persuade algorithms to behave differently by persuading people to behave differently.”