How These Top Social Networking Platforms are Using AI to Fight Spam
The number of social media users is growing by leaps and bounds with each passing day. The user count increased by a whopping 202 million between April 2018 and April 2019, which amounts to a new social media user every 6.4 seconds!
This bulk of users on social media platforms creates a playground for social spam.
Wikipedia defines social spam as – “Unwanted spam content appearing on social networking services, social bookmarking sites, and any website with user-generated content (comments, chat, etc.). It can be manifested in many ways, including bulk messages, profanity, insults, hate speech, malicious links, fraudulent reviews, fake friends, and personally identifiable information.”
Every year, it is growing in terms of variety and the level of complexity as exemplified by the following scenarios:
- According to a HubSpot survey, 47% of respondents witnessed increased spam in their social media feeds.
- The Cambridge Analytica scandal saw actors misuse Facebook to spread misinformation and generate content in a bid to influence the 2016 U.S. presidential election.
- The prevalence of trolls on Twitter was reportedly so rampant that it dissolved the platform’s potential deal with Disney in 2016.
- Security loopholes in Snapchat’s API led to the Find Friends exploit, exposing users who were listed as private and enabling mass account creation. Snap even issued an apology, but the damage was already done.
We could go on about the increase in spam on social networks. But the burning question at the moment is: what can be done to nip it in the bud? One solution being adopted far and wide by social media giants is the use of AI (Artificial Intelligence) and ML (Machine Learning).
Here’s a sneak peek at some of the actions they are taking to combat this issue:
What About Instagram?
The issue of hate speech and offensive comments is not new to Instagrammers. It is so widespread on the content-sharing platform that Instagram roped in an AI system called DeepText to tackle it. For instance, it introduced a keyword filter that automatically removed offensive words from the feed, targeting words that were “often reported as inappropriate” as well as custom keywords set by users. The system was trained on at least two million comments categorized into various segments, including racism and bullying. The same system also assists in spam detection by combining data assets with human input: it identifies fake accounts and cleans up the comments they post on photos and videos. The platform continues to gather more data sets to enhance the system’s accuracy.
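Conceptually, a keyword filter like the one described above can be sketched in a few lines. The blocklist and function below are illustrative stand-ins, not Instagram's actual system, which is trained on millions of labeled comments:

```python
import re

# Illustrative list only; the real filter is learned from
# millions of comments reported as inappropriate.
DEFAULT_BLOCKLIST = {"spamword", "insult"}

def filter_comment(comment: str, custom_keywords: set[str] = frozenset()) -> bool:
    """Return True if the comment should be hidden from the feed.

    Combines a platform-wide blocklist with per-user custom keywords,
    mirroring the two-tier filtering described above.
    """
    blocklist = DEFAULT_BLOCKLIST | set(custom_keywords)
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in blocklist for word in words)

print(filter_comment("what an insult"))    # True: matches the default list
print(filter_comment("nice photo"))        # False
print(filter_comment("buy now", {"buy"}))  # True: matches a custom keyword
```

A real system would replace the word lookup with a trained classifier, but the hide-or-show decision flow stays the same.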
What About Twitter?
Twitter still relies on humans playing a major role in combating spam (especially harassment) alongside AI. Twitter’s plight stems mainly from the fact that it needs to strike a balance between free speech and healthy conversations on the microblogging platform. For now, automation plays a role in certain situations, and the platform’s users help train the AI. The platform gathers data on how often an account is muted, blocked, reported, retweeted, liked, or replied to. With these signals, the AI can recognize an account that has already been blocked by a large number of people and flag it for moderators to take action. The AI can also distinguish between negative and positive interactions, which helps the platform curate the experience for its users. In another crucial step toward healthier conversations, Twitter announced the acquisition of U.K.-based artificial-intelligence startup Fabula AI. Fabula has developed the ability to analyze “very large and complex data sets,” which can detect various manipulation tactics on the network and identify patterns that other machine-learning techniques can’t.
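The signal-gathering approach can be illustrated with a toy scoring function. The weights and threshold here are invented for the sketch; Twitter's real models are far more nuanced:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Per-account interaction counts, as described above."""
    blocks: int = 0
    mutes: int = 0
    reports: int = 0
    likes: int = 0
    replies: int = 0

def should_flag_for_review(s: AccountSignals, threshold: float = 50.0) -> bool:
    """Weigh negative signals against positive ones and flag the account
    for a human moderator when negatives dominate. All weights and the
    threshold are illustrative assumptions."""
    negative = 3 * s.blocks + 2 * s.reports + 1 * s.mutes
    positive = 0.5 * s.likes + 0.5 * s.replies
    return negative - positive > threshold

# An account blocked by 40 people gets flagged despite some likes.
print(should_flag_for_review(AccountSignals(blocks=40, reports=10, likes=20)))  # True
```

The key idea from the text survives the simplification: the decision is not fully automated; crossing the threshold only queues the account for moderator action.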
What About Facebook?
Social media giant Facebook has been battling spam for quite a while, and it is steadily fortifying the platform against bots and spam. To begin with, it uses behavioral data and human intervention to stop the deluge of bots. Next, it relies on AI to automatically detect fake accounts based on signals such as the number of accounts per device. One rule is simple: Facebook’s AI labels an account as a bot when it signs up and sends more than 100 friend requests within a minute. Facebook has also expanded its fact-checking program to include images and videos, using AI from AdVerif.ai. This helps the platform detect flagged images, run reverse image lookups, and check for alterations. Moreover, Bloomsbury AI also sits on its list of acquisitions, which will help the social network battle spam by bringing natural language processing capabilities along with Cape, an AI technology that reads documents and answers questions based on their content.
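The 100-requests-per-minute rule lends itself to a simple sliding-window check. The class below is a hypothetical sketch of that heuristic, not Facebook's implementation:

```python
from collections import deque
import time

class FriendRequestRateCheck:
    """Label an account a likely bot if it sends more than `limit`
    friend requests within `window` seconds, mirroring the
    100-per-minute rule described above (details are assumptions)."""

    def __init__(self, limit: int = 100, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.timestamps: deque = deque()

    def record_request(self, now: float = None) -> bool:
        """Record one friend request; return True if the account
        now looks like a bot."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop requests that fell outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.limit

checker = FriendRequestRateCheck()
# Simulate 101 requests spaced half a second apart (~50 s total).
flags = [checker.record_request(now=t * 0.5) for t in range(101)]
print(flags[-1])  # True: 101 requests landed inside one 60-second window
```

A sliding window is preferable to fixed one-minute buckets here, since a burst straddling a bucket boundary would otherwise slip through.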
What About YouTube?
YouTube is another social media platform plagued by trolls who wreak havoc in the comments sections of users’ videos. The platform uses a version of the Perspective AI moderation tools developed by Alphabet’s Jigsaw. These filter toxic comments and curb the online harassment that ruins conversations. The AI quickly gathers information from the labels, and moderators can then take action, filtering out comments that the algorithm flags as toxic.
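A Perspective-style moderation pass can be sketched as a score-and-threshold filter. The `score_toxicity` function below is a crude stand-in invented for this sketch; the real Perspective API is a hosted service that returns a learned toxicity probability:

```python
def score_toxicity(comment: str) -> float:
    """Stand-in scorer: fraction of words matching a toy marker list,
    scaled into [0, 1]. The real model is a trained classifier."""
    toxic_markers = {"idiot", "stupid", "hate"}
    words = comment.lower().split()
    hits = sum(w.strip(".,!?") in toxic_markers for w in words)
    return min(1.0, hits / max(len(words), 1) * 3)

def moderate(comments: list, threshold: float = 0.7) -> list:
    """Return the comments flagged as toxic for moderator review."""
    return [c for c in comments if score_toxicity(c) >= threshold]

flagged = moderate(["great video!", "you idiot", "I hate this stupid clip"])
print(flagged)  # ['you idiot', 'I hate this stupid clip']
```

As with the other platforms, the threshold only routes comments to humans; raising it trades missed toxicity for fewer false positives.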
What About LinkedIn?
Professional networking platform LinkedIn has a user base spanning more than 200 nations and territories. One of the major issues plaguing the site is fake profiles, which are used to contact professionals and procure information, leading to a poor user experience. The platform paired its human review system with AI and machine learning, drawing on the fake-account reports made by its members.
Here’s what the firm had to say:
- Between January and June 2019, they took action on 21.6 million fake accounts.
- They prevented 19.5 million fake accounts from being created at registration. About 95% of them were stopped automatically, with no access to the platform.
- They restricted 2 million fake accounts before members reported them. This was made possible by pairing human review with artificial intelligence and machine learning.
AI is Getting There
AI and ML are delivering some of the most effective anti-spam solutions across today’s social media platforms.
However, certain limitations mean human intervention is still needed at some point or another. We should also not forget that spammers and the mechanisms they use are becoming more sophisticated every day. Therefore, staying ahead of them in the game is not an option but a necessity.
How AI will shape up in the coming days to combat more sophisticated forms of social spam is something we will have to wait and watch.
What are your thoughts on this?