I’m so the wrong person to be asking this, Asher.
Perspective is a brand-new machine learning project for spotting flames (“toxic comments”) online.
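For the curious, using Perspective boils down to a single REST call. Here is a minimal sketch of building the request, assuming the public Comment Analyzer endpoint; `YOUR_API_KEY` is a placeholder, and the network call itself is left commented out:

```python
import json

# Perspective's Comment Analyzer endpoint (v1alpha1 at the time of writing);
# YOUR_API_KEY is a placeholder, not a real credential.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON payload Perspective expects: the comment itself,
    plus the attributes we want scored (here, just TOXICITY)."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

payload = build_request("You are a wonderful person.")
print(json.dumps(payload))

# To actually send it, you would POST the payload, e.g. with requests:
# response = requests.post(API_URL, json=payload)
```

The response comes back with a toxicity probability per attribute; it is that single number, applied at scale, that all the fuss below is about.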
We are already on a site that uses a lot of bots (bots trained through machine learning) to assign topics, detect grammar mistakes in questions, and flag near-duplicate questions.
Are they useful? Yes.
Are they a replacement for human intervention? As anyone who’s spent more than 5 minutes on here knows, no. They are still quite fallible, because these are AI-hard problems.
Now. Moderation is a hugely controversial topic on Quora. People are very unhappy with moderation outcomes, and protest it to the skies.
Will people be more happy if it is substantially done by bot (if it isn’t already)? No. They will completely lose their shit. You know it, I know it. If Quora is already doing it, there’s an excellent reason why they’re keeping shtum about it.
Will they be right to? We know that in some domains, robots do better than humans. Grading essays, for example. (And boy, is there a shitstorm about that in the letters pages of newspapers.)
But note that moderation is something that needs sensitivity and judgement. Note how unhappy people are with the crude decision-making of bots now on Quora—bots actually are helpful, but only to get you 80% of the way there, and people complain endlessly (and rightly) about the 20% crap that’s left over.
At best, bots would be a backup of what happens now in moderation, with community reporting. They could find more potential infractions. They would find a hell of a lot more false alarms. They would either make for much more work for human mods (because there’ll be more crap to wade through), or else they will actually replace human mods—and if you think people are unhappy now, wait till the bots start unilaterally banning people. The revolt that would trigger really would impact Quora’s bottom line, because it wouldn’t be just the odd false positive, it’d be a bot bloodbath.
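The false-alarm worry is just base-rate arithmetic. A toy calculation, with made-up but plausible numbers (1% of comments genuinely toxic, a bot that catches 95% of them but also flags 5% of the innocent ones), shows why the flag queue fills up with crap:

```python
# All numbers here are illustrative assumptions, not Quora or Perspective stats.
base_rate = 0.01            # fraction of comments that are genuinely toxic
recall = 0.95               # fraction of toxic comments the bot catches
false_positive_rate = 0.05  # fraction of innocent comments the bot flags anyway

true_flags = recall * base_rate
false_flags = false_positive_rate * (1 - base_rate)
precision = true_flags / (true_flags + false_flags)

print(f"Of every 100 flagged comments, about {precision * 100:.0f} are actually toxic.")
# With these numbers, roughly 16 in 100 flags are real; the other ~84 are noise
# a human mod still has to wade through.
```

Because toxic comments are rare, even a decent classifier drowns its true hits in false alarms; that is the maths behind the bot bloodbath.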
You should also bear in mind that Perspective is barely out of the Google research lab; it would take a lot of tweaking to become reliable at enterprise scale. Quora is likely prone to Not Invented Here syndrome, like many a startup. But I wouldn’t blame them in this case: if Quora know what they are doing, they have their own research lab going, looking into developing their own bot smarts. Both because they know their own problem space better (one hopes), and because that kind of research capital—and the training data we all volunteer for it—is the kind of asset they really can monetise.