How do Quora’s algorithms “understand” irony?

I agree with Dion Shaw’s answer: detecting irony is a subtle skill, which requires you to deduce, from real-world knowledge, that the speaker intends the complete opposite of what they’re literally saying, and that they think it’s appropriate to do so because they regard the question as not worth answering literally (typically because they regard the answer as obvious).

Computers aren’t doing well at detecting irony in general, and the problem is AI-hard. (Real Artificial Intelligence, with a social and intentional factor, not just machine learning.) In fact, the one paper I read about it recently was as crude as it possibly could be: it only got so far as working out that the person was speaking an untruth, ergo, Irony! But of course lying, error, and irony are not the same thing at all, even if all of them reduce to the same truth-conditional fact, namely that the statement is not true.
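
To make the flaw concrete, here is a minimal toy sketch of that style of detector, assuming nothing beyond the idea described above: flag any statement that contradicts a small fact base as irony. The fact base, function name, and example sentences are all invented for illustration and are not taken from the paper.

```python
# Toy "untruth means irony" detector: flag any statement that conflicts
# with a small fact base. Everything here is made up for illustration.

FACTS = {
    "the sky is blue": True,
    "rain is wet": True,
    "mondays are a public holiday": False,
}

def naive_irony_detector(statement: str) -> str:
    """Label a statement 'irony' whenever it conflicts with known facts."""
    truth = FACTS.get(statement.lower())
    if truth is None:
        return "unknown"          # no world knowledge, so no verdict
    return "irony" if truth is False else "literal"

# The flaw: lying, honest error, and irony all look identical here,
# because all three reduce to "the statement is not true".
print(naive_irony_detector("Mondays are a public holiday"))  # -> "irony"?
# It could just as easily be a lie or a mistaken belief.
```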

Quora’s algorithms, sadly, are not in the business of extracting truth from ironic answers, which is at least part of the reason why Joke Answers are frowned upon. I have to say, I find it difficult to see how Quora’s algorithms are extracting meaning from the wide range of answers given here at all. But they don’t have to; they merely have to understand upvotes, credentials, and the social networks of users.
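
As an illustration of that last point, here is a hypothetical scoring sketch in which an answer is ranked purely on social signals, without its text ever being read. The fields, weights, and function name are invented for the example and are not Quora’s actual ranking.

```python
# Hypothetical social-signal ranking: the answer's text never appears,
# so the system needs no "understanding" of irony at all.
from dataclasses import dataclass

@dataclass
class Answer:
    upvotes: int
    author_has_credential: bool
    author_followers: int

def social_signal_score(a: Answer) -> float:
    """Score an answer from upvotes, credentials, and network size only."""
    return (
        1.0 * a.upvotes
        + 5.0 * (1 if a.author_has_credential else 0)
        + 0.01 * a.author_followers
    )

ironic = Answer(upvotes=120, author_has_credential=True, author_followers=3000)
literal = Answer(upvotes=40, author_has_credential=False, author_followers=500)
print(social_signal_score(ironic) > social_signal_score(literal))  # True
```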
