Could the Quora bots pass the Turing Test by being mistaken for stupid uneducated humans?

Some might query whether this question is insincere, and a pretext for complaining about Moderation by Bot.

I am not of that number. Our interaction with Quora Moderation is a preview of our interaction with Artificial Intelligence in general, as it becomes more and more widespread. We’re getting more of it here on the Quoras, because Quora has drunk the Bot/Machine Learning Kool-Aid, and thinks it the solution to all their scalability problems (modulo The tribunal of the marshals). But where Quora staggers, others are following, and I hear Messenger chatbots are all the rage now in small business.

A decade or so ago, I remember seeing a documentary on what our interaction with AI was likely to be. The talking head pointed out that a generalised AI, like Sci-Fi expects, was not going to happen in a hurry; what would happen soon would be AI trained in very specific niches, and ignorant of anything outside that niche. That you would walk away from the bank, muttering “That ATM was pretty dumb, wasn’t it.”

That Moderation bot was pretty dumb, wasn’t it.

So the bots you encounter in the near future will, in the general case, pass the Turing Test and fail the Turing Test in the same way ELIZA did way back when: they’ll be fine as long as you’re talking within their domain of expertise, and they’ll fall apart as soon as you try to hold a general conversation with them. Though the domain of expertise has broadened massively since the 60s.
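To make the ELIZA comparison concrete, here’s a minimal sketch of how such a narrow-domain bot works. Everything in it is invented for illustration (the banking niche, the rules, the canned responses); it’s a toy in the ELIZA pattern-matching tradition, not anyone’s production chatbot.

```python
import re

# A toy ELIZA-style bot for a hypothetical banking niche: each rule is a
# regex paired with a response template, and captured groups are spliced
# into the template. All rules and responses are invented for illustration.
RULES = [
    (r"\bbalance\b", "Your current balance is $1,234.56."),
    (r"\bwithdraw\b\D*(\d+)", "Dispensing ${0}. Please take your cash."),
    (r"\b(?:lost|stolen)\b.*\bcard\b",
     "I have frozen your card and ordered a replacement."),
]

# ELIZA's actual fallback trick: when nothing matches, deflect contentlessly.
DEFLECTION = "I'm sorry, I can only help with account enquiries."

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFLECTION

print(reply("What's my balance?"))          # in-domain: passes for a teller
print(reply("Can I withdraw 50 dollars?"))  # in-domain: passes for a teller
print(reply("Do you ever feel lonely?"))    # out-of-domain: the mask slips
```

Within its three rules it can hold up its end of the conversation; one step outside them and it’s reduced to the same deflection every time, which is exactly how ELIZA both passed and failed back in the day.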

So much for the general case. Specifically for Quora, there’s a couple of catches:

“Our moderation process emphasizes rule-based decisions that are fair and consistent. Every moderation decision on the site must be based on an existing policy. All we care about are policies; we don’t make decisions based on the substantive nature of the content that a user has published.”

(It’s possible that I’ve extrapolated more from that statement than Marc meant; but that interpretation suits me.)

Stupid uneducated humans, as the OP describes them, would lack the ability to apply context, equity, discretion and judgement. Many moderation judgements seem to their recipients to show exactly that lack, as is routinely protested of BNBR judgements, especially when the recipients can’t divine what their supposed infraction was. So you can in fact see where the Turing Test comes into it.

(See e.g. discussion in comments of Habib Fanny: Yet another violation. Can’t use a historical quotation that features the word “nigger,” apparently. by Nick Nicholas on The Insurgency. Was it because Habib was citing Lee Atwater? That would indeed be stupid. Was it, rather, because he was taunting Republicans? Possibly likelier, and a better judgement—but not what most readers assumed, including myself.)
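To illustrate why a pure policy engine produces judgements like that, here is a hypothetical sketch (my invention, not Quora’s actual system): a moderation rule that flags any post containing a blocklisted term, with no notion of quotation, attribution, or intent.

```python
# Hypothetical context-free moderation rule: flag any post containing a
# blocklisted term. An invented illustration, not Quora's actual engine.
BLOCKLIST = {"slur1", "slur2"}  # placeholder tokens standing in for real slurs

def moderate(post: str) -> str:
    tokens = {word.strip(".,;:!?\"'").lower() for word in post.split()}
    if tokens & BLOCKLIST:
        # Same verdict whether the term is quoted history or a direct insult.
        return "BNBR violation"
    return "OK"

# A historical quotation and a direct insult get identical treatment:
# the rule sees tokens, not intent.
print(moderate('Atwater, 1981: "You start out in 1954 by saying slur1."'))
print(moderate("You, sir, are a slur1."))
```

Perfectly consistent, and consistently incapable of distinguishing Habib citing Atwater from Habib abusing someone: that distinction is precisely the judgement the rule has no slot for.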

The irony is that bots, if anything, might, might just do a better job of at least some of context, equity, discretion and judgement. Consistency, at least, they would nail. But the existence of The tribunal of the marshals still tells me that Quora aren’t trusting the bots (and subcontractors) to moderate everyone.

  • We have in fact had a reverse Turing Test with Quora Moderation recently. We had assumed that the abysmal job of moderating content in the new Anonymous system was because we weren’t getting the promised individual vetting of content, and it was being left to bots. In fact, as revealed in Anonymous Screening by Jack Fraser on The Insurgency, we have been getting vetting of content, by spectacularly incompetent subcontractors.
    • As Jennifer Edeburn pointed out in comments, a bot would have done a far more consistent job of vetting content than the subcontractors did.
