Twitter has announced a range of actions intended to bolster efforts to fight spam and “malicious automation” (aka bad bots) on its platform — including increased security measures around account verification and sign-up; a historical audit to catch spammers who signed up when its systems were more lax; and a more proactive approach to identifying spam activity in order to reduce its impact.
It says the new steps build on previously announced measures to fight abuse and trolls, and new policies on hateful conduct and violent extremism.
The company has also recently been publicly seeking new technology and staff to fight spam and abuse.
All of which is an attempt to turn around Twitter’s reputation for being awful at tackling abuse.
“Our focus is increasingly on proactively identifying problematic accounts and behavior rather than waiting until we receive a report,” Twitter’s Yoel Roth and Del Harvey write in the latest blog update. “We focus on developing machine learning tools that identify and take action on networks of spammy or automated accounts automatically. This lets us tackle attempts to manipulate conversations on Twitter at scale, across languages and time zones, without relying on reactive reports.”
“Platform manipulation and spam are challenges we continue to face and which continue to evolve, and we’re striving to be more transparent with you about our work,” they add, after giving a progress update on the performance of Twitter’s anti-spambot systems, which they say picked up more than 9.9M “potentially spammy or automated accounts” per week in May, up from 6.4M in December 2017 and 3.2M in September 2017.
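Twitter doesn’t detail the models behind those numbers, but the gist of feature-based spam scoring is easy to sketch. The snippet below is a toy illustration (the features, training data and classifier choice are all invented for the example, not drawn from Twitter’s systems):

```python
# Toy sketch of ML-based spam-account scoring; everything here is
# illustrative, not Twitter's actual features or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-account features: [account_age_days, tweets_per_day,
#                        following-to-followers ratio, duplicate-tweet ratio]
X_train = np.array([
    [1200.0, 4.0,   0.9,  0.01],   # long-lived account, normal activity
    [900.0,  2.0,   1.1,  0.00],
    [2.0,    400.0, 80.0, 0.95],   # brand new, tweeting in bulk
    [1.0,    350.0, 60.0, 0.90],
])
y_train = np.array([0, 0, 1, 1])   # 1 = spammy/automated

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_account = np.array([[3.0, 500.0, 95.0, 0.97]])
print(model.predict_proba(new_account)[0, 1])  # estimated spam probability
```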
Among the welcome — if very long overdue — changes is an incoming requirement for new accounts to confirm either an email address or phone number when they sign up, in order to make it harder for people to register spam accounts.
“This is an important change to defend against people who try to take advantage of our openness,” they write. “We will be working closely with our Trust & Safety Council and other expert NGOs to ensure this change does not hurt someone in a high-risk environment where anonymity is important. Look for this to roll out later this year.”
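That kind of gate is simple to express in code. Here’s a minimal sketch of signup activation requiring a confirmed contact channel; the names and flow are hypothetical, not Twitter’s actual code:

```python
# Illustrative sketch of gating account activation on a confirmed contact
# channel; the names and flow are invented, not Twitter's actual code.
from dataclasses import dataclass

@dataclass
class Signup:
    handle: str
    email_confirmed: bool = False
    phone_confirmed: bool = False

def can_activate(signup: Signup) -> bool:
    # New accounts must confirm at least one contact channel to proceed.
    return signup.email_confirmed or signup.phone_confirmed

print(can_activate(Signup("bot123")))                       # False
print(can_activate(Signup("human", email_confirmed=True)))  # True
```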
The company has also been wading into its own inglorious history of spam failure by conducting audits of legacy sign-up systems to try to clear bad actors off the platform.
Well, better late than never, as they say.
Twitter says it’s already identified “a large number” of suspected spam accounts as a result of investigating misuse of an old part of its signup flow — saying these are “primarily follow spammers”, i.e. spambots that automatically or bulk-followed verified or other high-profile accounts at the point of sign-up.
And it says it will be challenging these accounts to prove its ‘spammer’ classification wrong.
As a result, it warns, some users may see a drop in their follower counts.
“When we challenge an account, follows originating from that account are hidden until the account owner passes that challenge. This does not mean accounts appearing to lose followers did anything wrong; they were the targets of spam that we are now cleaning up,” it writes. “We’ve recently been taking more steps to clean up spam and automated activity and close the loopholes they’d exploited, and are working to be more transparent about these kinds of actions.”
“Our goal is to ensure that every account created on Twitter has passed some simple, automatic security checks designed to prevent automated signups. The new protections we’ve developed as a result of this audit have already helped us prevent more than 50,000 spammy signups per day,” it adds.
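To picture why follower counts can dip during this kind of cleanup, here’s a hypothetical sketch of the hiding behaviour Twitter describes, where follows from challenged accounts are excluded from the visible count (the account states and names are invented for illustration):

```python
# Hypothetical illustration of why follower counts dip during a cleanup:
# follows from challenged accounts are hidden until the challenge passes.
followers = {
    "alice":  "active",
    "bob":    "active",
    "bot_77": "challenged",  # suspected follow-spammer, mid-challenge
    "bot_91": "challenged",
}

visible = sum(1 for state in followers.values() if state != "challenged")
print(f"visible followers: {visible} of {len(followers)}")  # 2 of 4
```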
As part of this shift in approach to reduce the visibility and power of spambots by impacting their ability to bogusly influence genuine users, Twitter has also tweaked how it displays follower and like counts across its platform — saying it’s now updating account metrics in “near-real time”.
So it warns users they may notice their account metrics changing more regularly.
“But we think this is an important shift in how we display Tweet and account information to ensure that malicious actors aren’t able to artificially boost an account’s credibility permanently by inflating metrics like the number of followers,” it adds — noting also that it’s taking additional steps to reduce spammer visibility which it will have more to say about “in the coming weeks”.
Another change Twitter is flagging up now is an expansion of its malicious behavior detection systems. On this it says it’s automating some processes where it sees suspicious account activity — such as “exceptionally high-volume tweeting with the same hashtag, or using the same @handle without a reply from the account you’re mentioning”.
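A detection rule like that maps naturally onto a sliding-window counter. The sketch below flags an account that uses the same hashtag at an exceptionally high rate; the window size and threshold are invented, since Twitter doesn’t publish its limits:

```python
# Illustrative sliding-window heuristic for "exceptionally high-volume
# tweeting with the same hashtag"; the window and threshold are invented.
import time
from collections import defaultdict, deque

WINDOW_SECS = 3600   # look at the last hour of activity
MAX_PER_TAG = 100    # illustrative per-hashtag ceiling

hashtag_events = defaultdict(deque)  # (account, hashtag) -> timestamps

def record_and_check(account, hashtag, now=None):
    """Record one use of `hashtag` by `account`; return True if suspicious."""
    now = time.time() if now is None else now
    q = hashtag_events[(account, hashtag)]
    q.append(now)
    while q and now - q[0] > WINDOW_SECS:  # drop events outside the window
        q.popleft()
    return len(q) > MAX_PER_TAG

# Simulate a burst: the 101st use of the same tag inside an hour trips it.
for i in range(101):
    flagged = record_and_check("spam_account", "#trending", now=1000.0 + i)
print(flagged)  # True
```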
And while that’s clearly great news for anyone who hates high-volume spam — and the damage spamming can very evidently do — it’s also a crying shame it’s taken Twitter this long to take these kinds of obvious problems seriously.
Better late than never is pretty cold comfort when you consider the ugly social divisions that malicious entities have fueled by being so freely able to misappropriate the amplification power of social media. Because tech CEOs were essentially asleep at the wheel — and deaf to the warnings being sounded about their tools for years.
There’s clearly a human cost to platforms prioritizing growth at the expense of wider societal responsibilities, as Facebook has also been realizing of late.
And while both these companies may be trying to clean house now, they have no quick fixes for mending rips in the social fabric that were exacerbated as a consequence of the at-scale spreading of fake news and worse enabled by their own platforms.
Though, in March, Twitter CEO Jack Dorsey put out a call for ideas to help it capture, measure and evaluate healthy interactions on its platform and the health of public conversations generally — saying: “Ultimately we want to have a measurement of how it affects the broader society and public health, but also individual health, as well.”
So a differently striped, more civically minded Twitter is seeking to emerge from the bushes.
Twitter users who fall foul of its new automated malicious behavior checks can expect to have to pass some sort of ‘no, actually I am human’ test — which it says will “vary in intensity”, giving examples such as a simple reCAPTCHA process, at the lowest friction end, or a slightly more arduous password reset request.
“More complex cases are automatically passed to our team for review,” it adds.
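Twitter hasn’t published how it picks between those options, but a plausible reading is an escalation ladder keyed to a risk score. A hypothetical sketch (the scores and cut-offs are made up for illustration):

```python
# Hypothetical escalation ladder for challenges that "vary in intensity";
# the risk scores and cut-offs are invented for illustration.
def pick_challenge(risk_score: float) -> str:
    if risk_score < 0.3:
        return "none"            # activity looks fine, no challenge
    if risk_score < 0.6:
        return "recaptcha"       # low-friction human check
    if risk_score < 0.9:
        return "password_reset"  # more arduous verification
    return "human_review"        # complex cases go to the review team

for score in (0.1, 0.5, 0.7, 0.95):
    print(score, "->", pick_challenge(score))
```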
There’s also an appeals process for users who believe they have been incorrectly IDed by one of the automated spam detection systems — letting them request a case review.
Another welcome, if tardy, addition: Twitter has added support for stronger two-factor authentication. Users will now be able to use a USB security key (via the U2F open authentication standard) for login verification when signing in.
It urges users to enable 2FA if they haven’t already, and to regularly review third-party apps attached to their account, revoking access they no longer wish to grant.
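For a sense of how a login flow might prefer the strongest available second factor, here’s a conceptual sketch; the helper and field names are hypothetical rather than Twitter’s API:

```python
# Conceptual sketch of preferring the strongest available second factor
# at login; the field names here are hypothetical, not Twitter's API.
def second_factor_for(user: dict) -> str:
    if user.get("u2f_keys"):       # hardware security key registered (U2F)
        return "security_key"      # phishing-resistant challenge
    if user.get("totp_enabled"):
        return "totp_code"         # authenticator-app code
    if user.get("phone_verified"):
        return "sms_code"          # weakest fallback
    return "none"                  # 2FA not enabled; Twitter urges enabling it

print(second_factor_for({"u2f_keys": ["key-1"], "phone_verified": True}))
# -> security_key
```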
The company finishes by saying it will continue to invest “across the board” to try to tackle spam and malicious automated activity, including by “leveraging machine learning technology and partnerships with third parties” — saying: “These issues are felt around the world, from elections to emergency events and high-profile public conversations. As we have stated in recent announcements, the public health of the conversation on Twitter is a critical metric by which we will measure our success in these areas.”
The results of a Request for Proposals for public health metrics research which Twitter called for earlier this year will be announced soon, it adds.
from Social – TechCrunch https://ift.tt/2IvDs4v