
Twitter announces new measures to fight spam and malicious automation

Twitter introduced new measures to fight abuse and trolls, new policies on hateful conduct and violent extremism, and is bringing in new technology and staff to fight spam and abuse

PCQ Bureau

Every day, people go to Twitter to see what’s happening in India and around the world. One of the most important parts of Twitter’s focus on improving the health of conversations on the platform is ensuring people have access to credible, relevant, and high-quality information on Twitter.


To help move towards this goal, Twitter introduced new measures to fight abuse and trolls, new policies on hateful conduct and violent extremism, and is bringing in new technology and staff to fight spam and abuse.

New Processes for Fighting Malicious Automation and Spam

Twitter fights spam and malicious automation strategically and at scale. The platform’s focus is increasingly on proactively identifying problematic accounts and behaviour rather than waiting until Twitter receives a report.


Twitter focuses on developing machine learning tools that identify and take action on networks of spammy or automated accounts automatically. This lets Twitter tackle attempts to manipulate conversations on the platform at scale, across languages and time zones, without relying on reactive reports.
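
To make this concrete, the sketch below shows, in Python, the general shape of behaviour-based scoring: combine a few signals into a suspicion score and challenge accounts that cross a threshold. The features, thresholds, and function names here are illustrative assumptions only; Twitter's actual machine learning systems are not public.

```python
# Hypothetical sketch of behaviour-based account scoring.
# Features and thresholds are invented for illustration, not Twitter's model.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    tweets_per_hour: float        # sustained posting rate
    identical_tweet_ratio: float  # share of tweets duplicating other tweets
    follows_per_day: int          # bulk-following signal
    account_age_days: int


def automation_score(a: AccountActivity) -> float:
    """Combine behavioural signals into a 0..1 suspicion score."""
    score = 0.0
    if a.tweets_per_hour > 30:
        score += 0.4
    if a.identical_tweet_ratio > 0.8:
        score += 0.3
    if a.follows_per_day > 400:
        score += 0.2
    if a.account_age_days < 2:
        score += 0.1
    return min(score, 1.0)


def challenge_candidates(accounts: dict[str, AccountActivity],
                         threshold: float = 0.6) -> list[str]:
    """Return account ids whose score crosses the challenge threshold."""
    return [acct_id for acct_id, activity in accounts.items()
            if automation_score(activity) >= threshold]


if __name__ == "__main__":
    sample = {
        "acct_1": AccountActivity(55, 0.9, 600, 1),   # bot-like behaviour
        "acct_2": AccountActivity(2, 0.05, 10, 900),  # normal usage
    }
    print(challenge_candidates(sample))  # ['acct_1']
```

A real system would learn such weights rather than hand-set them, and, as the article notes, would also look for coordination across networks of accounts rather than scoring each account in isolation.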

Twitter has seen positive results from its investments in this space:

  • In May 2018, Twitter’s systems identified and challenged more than 9.9 million potentially spammy or automated accounts per week. That’s up from 6.4 million in December 2017, and 3.2 million in September 2017.
  • Due to technology and process improvements during the past year, Twitter is now removing 214% more accounts for violating its spam policies on a year-on-year basis.

  • At the same time, the average number of spam reports Twitter received through its reporting flow continued to drop, from an average of approximately 25,000 per day in March to approximately 17,000 per day in May. Twitter has also seen a 10% drop in spam reports from search as a result of its recent changes. These decreases in reports received mean people are encountering less spam in their timeline, search, and across the Twitter product.

  • Twitter is also moving rapidly to curb spam and abuse originating via Twitter’s APIs. In Q1 2018, the platform suspended more than 142,000 applications in violation of its rules — collectively responsible for more than 130 million low-quality, spammy tweets. Twitter has maintained this pace of proactive action, removing an average of more than 49,000 malicious applications per month in April and May. It is increasingly using automated and proactive detection methods to find misuses of the platform before they impact anyone’s experience. More than half of the applications Twitter suspended in Q1 were suspended within one week of registration, many within hours.

These numbers show that Twitter’s tools are working: the platform is preventing or catching more of this activity before users ever see it on Twitter.


Platform manipulation and spam are evolving challenges that the platform continues to face, and Twitter is striving to be more transparent with users about this work. It is sharing four new steps it is taking to address these issues:

1) Reducing the visibility of suspicious accounts in Tweet and account metrics

A common form of spammy and automated behaviour is following accounts in coordinated, bulk ways. Often accounts engaged in these activities are successfully caught by Twitter’s automated detection tools (and removed from its active user metrics) shortly after the behaviour begins.


But now Twitter has started updating account metrics in near-real time: for example, the number of followers an account has, or the number of likes or Retweets a Tweet receives, will be correctly updated when Twitter takes action on accounts.

So, if Twitter puts an account into a read-only state (where the account can’t engage with others or Tweet) because the systems have detected it behaving suspiciously, the platform now removes it from follower figures and engagement counts until it passes a challenge, like confirming a phone number.

It will also display a warning on read-only accounts and prevent new accounts from following them to help prevent inadvertent exposure to potentially malicious content. If the account passes the challenge, its footprint will be restored (though it may take a few hours). Twitter is working to make these protections more transparent to anyone who may try to interact with an account in this read-only state.
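
A minimal sketch of that metric adjustment, assuming a simple in-memory model (the challenged set and followers mapping here are invented purely for illustration):

```python
# Sketch: followers currently in a challenged (read-only) state are excluded
# from the visible follower count and restored once they pass the challenge.
challenged: set[str] = set()           # accounts locked pending a challenge
followers: dict[str, set[str]] = {     # account -> set of follower ids
    "popular_account": {"fan_1", "fan_2", "suspicious_9"},
}


def visible_follower_count(account: str) -> int:
    """Follower count with challenged accounts filtered out."""
    return sum(1 for f in followers[account] if f not in challenged)


# The system detects suspicious behaviour and locks the follower's account.
challenged.add("suspicious_9")
print(visible_follower_count("popular_account"))  # 2

# The owner confirms a phone number, passing the challenge; the footprint returns.
challenged.discard("suspicious_9")
print(visible_follower_count("popular_account"))  # 3
```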


As a result of these improvements, some people may notice their own account metrics change more regularly. This is an important shift in how Twitter displays Tweet and account information to ensure that malicious actors aren’t able to artificially boost an account’s credibility permanently by inflating metrics like the number of followers.

In the coming weeks, Twitter will share updates on additional steps it is taking to reduce the impact of this sort of activity on the platform.

2) Improving the Twitter signup process

To make it harder to register spam accounts, Twitter is also going to require new accounts to confirm either an email address or phone number when they sign up for Twitter. This is an important change to defend against people who try to take advantage of the platform’s openness.

Twitter will be working closely with its Trust & Safety Council and other expert NGOs to ensure this change does not hurt someone in a high-risk environment where anonymity is important. This will roll out later in the year.
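
The rule itself is straightforward; a hedged sketch of the activation check, using a hypothetical SignupRequest record, might look like this:

```python
# Illustrative sketch of the signup rule described above: a new account must
# confirm either an email address or a phone number before it becomes active.
# Field and function names are assumptions made for this example.
from dataclasses import dataclass


@dataclass
class SignupRequest:
    username: str
    email_confirmed: bool = False
    phone_confirmed: bool = False


def can_activate(req: SignupRequest) -> bool:
    """An account activates only once one contact channel is verified."""
    return req.email_confirmed or req.phone_confirmed


print(can_activate(SignupRequest("new_user")))                        # False
print(can_activate(SignupRequest("new_user", phone_confirmed=True)))  # True
```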

3) Auditing existing accounts for signs of automated signup

Twitter is also conducting an audit to secure a number of legacy systems used to create accounts. The platform’s goal is to ensure that every account created on Twitter has passed some simple, automatic security checks designed to prevent automated signups. The new protections the platform has developed as a result of this audit have already helped Twitter prevent more than 50,000 spammy signups per day.
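
Twitter has not described these checks, but one plausible example, assumed purely for illustration, is throttling bursts of registrations from a single source address:

```python
# Hedged sketch of one "simple, automatic security check" against automated
# signups: rejecting bursts of registrations from one source address.
# The window and limit are invented for illustration.
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 3600
MAX_SIGNUPS_PER_WINDOW = 5

_recent_signups: dict[str, deque] = defaultdict(deque)


def allow_signup(source_ip: str, now: Optional[float] = None) -> bool:
    """Reject a signup if the source has registered too many accounts recently."""
    now = time.time() if now is None else now
    history = _recent_signups[source_ip]
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()                 # drop attempts outside the window
    if len(history) >= MAX_SIGNUPS_PER_WINDOW:
        return False                      # burst of signups -> likely automated
    history.append(now)
    return True


# Six rapid signups from one address: the sixth is blocked.
print([allow_signup("203.0.113.7", now=1000.0 + i) for i in range(6)])
# [True, True, True, True, True, False]
```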

As part of this audit, Twitter is imminently taking action to challenge a large number of suspected spam accounts that were caught as part of an investigation into misuse of an old part of the signup flow. These accounts are primarily “follow spammers”, which in many cases appear to have automatically or bulk-followed verified or other high-profile accounts suggested to new accounts during the platform’s signup flow.

As a result of this action, some people may see their follower counts drop; when it challenges an account, follows originating from that account are hidden until the account owner passes that challenge. This does not mean accounts appearing to lose followers did anything wrong; they were the targets of spam that the platform is now cleaning up. Twitter has recently been taking more steps to clean up spam and automated activity and is working to be more transparent about these kinds of actions.

4) Expanding its malicious behaviour detection systems

Twitter is also now automating some processes where it sees suspicious account activity, like exceptionally high-volume tweeting with the same hashtag, or mentioning the same @handle without a reply from the account being mentioned. These tests vary in intensity: at a simple level, the account owner may be asked to complete a simple reCAPTCHA process or a password reset request. More complex cases are automatically passed to Twitter’s team for review.
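
A rough sketch of that tiered response, with invented signals and thresholds rather than Twitter’s real rules:

```python
# Sketch: triage suspicious activity into escalating responses.
# Signals and thresholds are assumptions made for illustration.
from dataclasses import dataclass


@dataclass
class ActivitySignals:
    hashtag_tweets_per_hour: int   # same hashtag, very high volume
    unsolicited_mentions: int      # @mentions with no reply from the target
    distinct_targets: int          # how many different accounts were mentioned


def choose_action(s: ActivitySignals) -> str:
    if s.unsolicited_mentions > 200 and s.distinct_targets > 100:
        return "manual_review"       # complex case: pass to a human team
    if s.hashtag_tweets_per_hour > 100 or s.unsolicited_mentions > 50:
        return "password_reset"      # stronger automated challenge
    if s.hashtag_tweets_per_hour > 30:
        return "recaptcha"           # simple automated challenge
    return "no_action"


print(choose_action(ActivitySignals(40, 5, 3)))       # recaptcha
print(choose_action(ActivitySignals(250, 300, 150)))  # manual_review
```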

What Users Can Do

There are important steps users can take to protect their security on Twitter:

  • Enable two-factor authentication. Instead of only entering a password to log in, users will also enter a code sent to their mobile phone. This extra verification helps ensure that only the account owner can access the account.
  • Regularly review any third-party applications. Users can review and revoke access for applications by visiting the Apps tab in their account settings on the website.
  • Don’t re-use a password across multiple platforms or websites. Have a unique password for each of the accounts.
  • Users can also use a FIDO Universal 2nd Factor (U2F) security key for login verification when signing into Twitter.

Additionally, if someone believes they may have been incorrectly actioned by one of Twitter’s automated spam detection systems, they can use Twitter’s appeals process to request the review of their case.

Next Steps

Going forward, Twitter is continuing to invest across the board in its approach to these issues, including leveraging machine learning technology and partnerships with third parties. Twitter will soon announce the results of its Request for Proposals for public health metrics research.

These issues are felt around the world, from elections to emergency events and high-profile public conversations. As Twitter has stated in recent announcements, the health of the public conversation on Twitter is a critical metric by which the platform will measure its success in these areas.
