Project Counsel Media is a division of Luminative Media. We cover cybersecurity, digital technology, legal technology, media, and mobile technology.



About Luminative Media: our intention is to delve deeper into issues, at greater length and with more historical and social context, in order to illuminate pathways of thought that are not possible to pursue through the immediacy of daily media. For more on our vision, please click on our logo.

________________

Meta's latest Quarterly Adversarial Threat Report:
some wild stories from their counter-espionage team


Why every platform ought to pay attention to the company's latest report on threat hunting


BY:
Anthony Nicci
Cybersecurity Attorney / Reporter
PROJECT COUNSEL MEDIA



5 August 2022 (Mykonos, Greece) - Our team is prepping this weekend for the digital tech/digital media “unconference” in Mykonos, which has been on hiatus for a few years. It has been running for quite a while and was started quite by accident by a group of digital media sensei who happen to be in or near Mykonos almost every summer. It draws attorneys, CEOs, managers, developers, executives, tech/tool providers, investors, etc. from a very wide range of companies and institutions, most of them connected to the TMT (technology, media, and telecom) industries, but also a large percentage of scientists and all-around bright thinkers. It just grows organically, pure word-of-mouth.

And, yes, everybody eschews the moniker “thought leader” 🤮. It is just a bunch of brainiacs chatting.

Our boss, Greg Bufithis, has hosted some sessions (venues are scattered) for quite some time. The term “unconference” has been applied, or self-applied, to a wide range of gatherings that try to avoid one or more aspects of a conventional conference (such as high fees, sponsored presentations, and top-down organization). It is participant driven. Greg wrote about it here.

A lot of the conversations do revolve around digital tools and technologies, especially the advances made to engage with customers. And this year there will be a first: a bit of a tutorial on forensic extraction tools for mobile devices, led by one of our eDiscovery vendor partners and one of our cybersecurity vendor partners. The topic is pulled pretty much from the headlines in the U.S., where on 6 January 2021 we saw a failed insurrection/revolution (or is it actually still in progress? 😎), followed by a massive attempted cover-up of involvement and responsibility that included the wiping of all January 6, 2021 messages from the mobile phones of top Pentagon officials, plus the mobile phones of major operatives at two other major Federal government agencies. American exceptionalism indeed.

One other major area to be discussed, based on some emails I have seen circulating, is platform power. In the original reckoning over platform power that took place after the 2016 election, attention focused on a handful of big platforms: Facebook, Instagram, YouTube, and to a lesser extent, Twitter. All of those companies invested significantly in trust and safety after the election, making it harder for adversaries to make inroads. At the same time, a handful of major new platforms have popped up since then, each of which has varying levels of enforcement capability. As a result, influence operations now target more platforms simultaneously than ever before.

So herein a few notes on some interesting bits from Meta's latest Quarterly Adversarial Threat Report.

We have written exabytes (as have many others) about the most salient ways bad actors manipulate public perception through platforms. While not all of them will come as much of a surprise to dedicated security professionals, they do give food for thought to plenty of platform employees, regulators, and journalists looking to understand what new forces may come into play in the midterm elections and beyond. 

The latest analysis comes from Meta and its team of employees working on threat intelligence, disruption, and cyber espionage investigations. Its Quarterly Adversarial Threat Report, which you can find here, is the latest installment in a five-year effort by the company to account for the various ways people try to manipulate tech platforms. While the company’s work originates with its own apps, most prominently Facebook and Instagram, increasingly the report covers threats that span a wide swathe of the consumer internet. 

The rise of open-source and commercially available malware has made it easier for people to mount attacks. Years ago, building software that could extract all the text messages on your phone, along with your contacts, location history, and other sensitive data, was a somewhat specialized affair. 

These days, it’s becoming less so. In its report, Meta documents finding a group of hackers in Pakistan known as APT36 that targeted government officials, military personnel, and human rights activists. The goal was to get them to install malware, and it wasn’t hard: the attackers simply downloaded from GitHub a free tool known as XploitSPY - “developed by a group of self-reported ethical hackers in India” - and lightly modified it:

“It democratizes access to more sophisticated capabilities that maybe you're not able to build yourself”.

A second, potentially more worrisome aspect of the rise of off-the-shelf malware is that it makes it more difficult for companies like Meta to figure out who is behind the attacks. Malware created by state actors often carries in its code telltale signs of who developed it; when everyone is using the same code, though, platforms lose an important signal:

“It lets you hide in the noise. If a bunch of different threat actors are throwing the same malware all over the internet, it makes it harder for analysts to pull together exactly who is behind it.”
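To make that concrete, here is a toy sketch - our construction, not Meta's tooling - of why shared malware muddies attribution. Analysts often cluster samples by code similarity; when unrelated actors all ship lightly modified builds of the same open-source tool, every sample looks like every other sample and the signal degrades. Everything below (the n-gram metric, the byte strings, the names) is a hypothetical stand-in:

```python
# Toy illustration of code-similarity attribution, using byte n-gram Jaccard
# similarity as the comparison metric (real pipelines use far richer features).
# All "samples" below are made-up placeholder strings, not real malware.

def ngrams(data: bytes, n: int = 4) -> set:
    """Set of all n-byte substrings in a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity between two samples: 0.0 (disjoint) to 1.0 (identical)."""
    na, nb = ngrams(a), ngrams(b)
    return len(na & nb) / len(na | nb) if (na | nb) else 0.0

# Two actors reusing one off-the-shelf tool, plus one bespoke implant.
shared_tool = b"read_sms();upload_contacts();track_gps();"
actor_a = shared_tool + b"c2=a.example.net;"
actor_b = shared_tool + b"c2=b.example.org;"
bespoke = b"custom_loader();inject_payload();beacon_home();"

print(f"actor A vs actor B: {similarity(actor_a, actor_b):.2f}")  # high: shared code
print(f"actor A vs bespoke: {similarity(actor_a, bespoke):.2f}")  # low: distinct code
```

The two actors score nearly identically against each other despite being unrelated - which is exactly the “hiding in the noise” problem the quote describes.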

Incidentally, if you’re wondering why Microsoft-owned GitHub is hosting code that can be used to target government officials and other high-value targets in this way, so am I. It certainly doesn't make our lives any easier out there. But it's also the nature of the internet - it's very hard to contain something like this. I'm not sure you could make it disappear.

GitHub's official response?

"Our Acceptable Use Policies were developed to ensure our platform can accommodate dual-use software, and we assume positive intention and use of these projects to promote and drive improvements across the ecosystem. We do not allow use of GitHub in direct support of unlawful attacks that cause technical harm, and we actively investigate abuse reports and quickly take action where content violates our terms".

Two key tactics for silencing dissent - brigading and mass-reporting - are growing in popularity. An important objective of people who spend all day thinking about how to abuse tech platforms is to silence their political enemies. One way to do this is to get a crowd of people to harass your enemies - a practice known as brigading. Another is to get that crowd to falsely report your enemies for platform policy violations they never committed - mass-reporting.

Brigading and mass-reporting are two sides of the same coin, and both are on the rise.

In the most recent quarter, the company found a network of 2,800 accounts, groups and pages in Indonesia that tried to force a bunch of Wahhabi Muslims off the platform. It’s clear from the tactics they used that they were quite determined. From the report:

"To conceal their activity and avoid detection, the individuals in this network would replace letters with numbers when posting about their targets. They, at times, created fake accounts that impersonated real people and then used them to report authentic users for impersonation".

Meanwhile, Meta also found a Hindu nationalist campaign of 300 Facebook and Instagram accounts in India that organized harassment against activists, comedians, and actors, among other groups. Here’s how it worked:

"These accounts would call on others to harass people who posted content that this group deemed offensive to Hindus. The members of this network would then post high volumes of negative comments under the targets’ posts. In response, some people would hide or delete their posts leading to celebratory comments claiming a successful raid.”

Neither tactic is new, exactly, but as more networks adopt them, platforms have had to change how they respond. In the old days, the company would remove harassing posts and accounts one by one. Now it actively looks for groups that are setting up coordinated campaigns, whether they are doing it on Facebook, in Telegram groups, on Discord servers, or elsewhere. Enforcement is moving from content-by-content removal to treating the network holistically.
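What does treating the network holistically look like in practice? One simple signal - sketched below with made-up data, and our construction rather than Meta's actual method - is unusually heavy overlap in which targets a set of accounts pile onto:

```python
# Illustrative sketch: flag account pairs whose commented-on targets overlap
# heavily, a crude proxy for coordination. Accounts and targets are fictional.

from itertools import combinations

# Hypothetical data: which accounts commented on which targets' posts.
activity = {
    "acct1": {"comedian", "activist", "actor"},
    "acct2": {"comedian", "activist", "actor"},
    "acct3": {"comedian", "activist"},
    "acct4": {"gardening_page"},
}

def coordinated_pairs(activity: dict, min_overlap: int = 2):
    """Yield (account, account, shared targets) where overlap is suspicious."""
    for a, b in combinations(activity, 2):
        overlap = activity[a] & activity[b]
        if len(overlap) >= min_overlap:
            yield a, b, overlap

for a, b, shared in coordinated_pairs(activity):
    print(f"{a} and {b} both targeted: {sorted(shared)}")
```

Organic users rarely converge on the same handful of victims; a harassment network does, and that shared footprint is what survives even when each individual comment looks innocuous.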

Every attack is a cross-platform attack now

As I noted above, in the original reckoning over platform power that took place after the 2016 election, attention focused on a handful of big platforms: Facebook, Instagram, YouTube, and to a lesser extent, Twitter. In that era, influence operations rarely needed to target more than a few platforms at once.

But now ... a shift.

For instance, take a troll farm Meta discovered operating out of St. Petersburg, Russia this quarter. It began operating around the start of Russia’s invasion of Ukraine, having advertised for “spammers, commenters, content analysts, designers and programmers.” It was exposed by a brave undercover Russian investigative reporter.

The troll farm wasn’t very good, Meta says - its efforts to create fake accounts were easily foiled. At the same time, it’s notable just how wide a net the Russians cast:

"Our investigation found attempts at driving comments to people’s content on Instagram, Facebook, TikTok, Twitter, YouTube, LinkedIn, VKontakte and Odnoklassniki. It appears that hired 'trolls' worked in shifts seven days a week, with a daily brief break for lunch. According to public reporting, they were divided into teams specializing on particular platforms they were meant to 'spam'. The operation had an overt and a covert component. Overtly, they ran a Telegram channel that regularly called on its followers to go to particular accounts or posts by public figures or news media and flood them with pro-Russia comments. Covertly, they ran fake accounts that posted such comments themselves — likely to make it look as if their crowdsourcing had been effective".

This is fairly standard Russian “perception hacking” - working to make it seem like the country's influence operations are more effective than they actually are. At the same time, the report shows that no one company can mitigate the threat that even a D-list troll farm can pose. The more platforms an operation runs on, the harder it is for any single defender to counter.
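Incidentally, the shifts-and-lunch-break detail is more than color: posting rhythms are a classic open-source signal for spotting professionalized operations. Organic users post around the clock; an office of paid trolls produces a sharp workday block. A toy sketch, using hypothetical timestamps:

```python
# Toy illustration (our sketch, not Meta's method): build an hour-of-day
# histogram of post timestamps. A shift-based operation shows a concentrated
# workday block, often with a dip at lunch, as the report describes.

from collections import Counter
from datetime import datetime

def hour_histogram(timestamps: list[str]) -> Counter:
    """Count posts per hour of day from ISO-8601 timestamps."""
    return Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

# Hypothetical posting times clustered in office hours.
posts = ["2022-03-01T09:05:00", "2022-03-01T10:12:00", "2022-03-01T11:48:00",
         "2022-03-01T14:03:00", "2022-03-01T15:27:00", "2022-03-01T16:55:00"]

for hour, count in sorted(hour_histogram(posts).items()):
    print(f"{hour:02d}:00  {'#' * count}")
```

None of this is conclusive on its own, but combined with account-creation patterns and the kind of cross-platform coordination described above, it helps investigators separate a paid operation from a genuine grassroots pile-on.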

Representatives from the big platforms meet with each other every two weeks to exchange information about new threats and what can be done to stop them. They also meet regularly with government agencies.

That’s important to know, particularly given that there are still few regulations - if any - covering what threats platforms should be required to look for, or what they must do when they discover influence operations. Given how intensely the world reacted to news of Russian troll farms in 2016, you would think we would have more to show for the furor by this point than some voluntary industry working groups.

But the proliferation of platforms carries a silver lining, too: it is harder to run an operation across a dozen platforms than on just one. And attackers' hit rates are getting weaker and weaker, so we are seeing them work harder for less return.

Still a very long way to go, but the increased attention can only help.

* * * * * * * * * * * * * * * 

To read this post and our other musings, please visit our blog by clicking here


* * * * * * * * * * * * * * *