We often hear words like “Artificial Intelligence” and “Machine Learning” being thrown around in terms of their capacity to steal our jobs and take over the world. But we rarely stop to discuss the impact that this technology can have in keeping us safe.
We live in a context where almost 1.5 billion people log in to their Facebook accounts daily. When we add to that the time spent on other social media platforms, such as YouTube, Twitter, LinkedIn and Instagram, the power of social media to significantly impact our world becomes scarily apparent.
In an attempt to minimize the negative impacts, social media companies employ humans to manually sift through and flag articles, videos and other posts that could violate their codes of conduct. In fact, Facebook claims that by the end of 2018 it will employ over 20,000 people to do exactly that. However, the sheer volume of content these platforms carry is making it clear that relying on humans alone is not a sustainable solution – and there is a technological alternative. YouTube recently announced that while it is expanding its human moderation team, it is also increasing its reliance on AI, and the results are undeniable.
In a recent company blog post, YouTube’s CEO Susan Wojcicki claimed that “Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours a week to assess”. Their experience so far suggests that AI moderation is not only more efficient, but also limits the emotional burden on employees who would otherwise struggle through hours of disturbing posts.
If you’re still cynical, I urge you to consider this – Facebook currently has 2.13 billion monthly active users and aims to have 20,000 safety and security specialists. That works out to roughly one moderator per 100,000 users, expected to monitor all the articles, videos, images and messages shared by those accounts.
If AI technology can be used to strengthen this effort, you won’t hear me objecting!