Proximity and power: civil society’s role in democratizing spyware research

Ovi

Threat intelligence today is a commodity. It is monetized, gated, and shaped to fit the needs of commercial clients before communities. It is collected and curated to be baked into security products and to protect shareholders. Before it defends anybody, it has to turn a profit. Before it acts as a mechanism for human defense, it is licensed, packaged up, or sold back to those who need it most.

Civil society exists at the far edge of this economy. We sit there, hands outstretched, waiting for the overflow. If you're lucky, you might get help from a civic lab with spare cycles. But in most of the world, access to high-fidelity threat intelligence and forensics still depends on who you know, how visible your threat is, or what infrastructure you can afford to trust.

This isn't a bug in the system. It is the system.

There are extraordinary people doing vital work in this space. Their research has exposed state-sponsored surveillance, documented the technical mechanics of commercial spyware, and supported some of the most at-risk communities in the world. But the current architecture is fragile. The pipeline of collection, triage, analysis, attribution, and publication is narrow and centralized. A small number of vital groups have become the primary endpoints for a tidal wave of global abuse. I mean that not as an accusation but as a reflection of where capacity has historically concentrated, driven by a lack of global resources and the squeezing of civil society by capital. It cannot be the long-term model.

The bottleneck is not just scale; it is access. Victims often have no direct route to investigate potential compromise. Local organizations are forced to escalate incidents through opaque or overburdened channels to under-resourced helplines and labs. There is no widely available open-source infrastructure for self-triage, because there is no such infrastructure in the first place. There is no "public health" equivalent in digital forensics. And even where tools do exist, they are often complex, under-maintained, or demand technical skills well beyond the reach of most users. We have made spyware detection an expert-only space, despite its impact being deeply personal and widely distributed.

Even the language of threat intelligence has been enclosed. The naming of threat actors, ostensibly meant to bring clarity, has become fractured, vendor-specific, and proprietary. Naming conventions and threat actor artwork become brand identities. These aren't just aliases; they're product lines.

Yes, this is 'Rainbow Ronin', iVerify's threat actor naming and artwork for NSO Group. A Pegasus in a kimono. Yes, that is what you are seeing.

The same group might be called three different things depending on which firm is publishing. This isn’t a taxonomic accident. It’s the direct result of proprietary clustering: each vendor defines its own actor sets based on internal telemetry and data models, and those mappings become product differentiators. You mean you've not heard of Salt Typhoon?! Gasps. What should be shared knowledge becomes intellectual property. The naming collisions we see are not just inconvenient—they’re features of the market. Confusion is a moat. Exclusivity is a business model.
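
To make the collision concrete, here is a minimal sketch in code. The aliases are drawn from public vendor reporting on the actor most widely tracked as APT29; treat the mapping as illustrative rather than authoritative, since vendor clusters rarely line up one-to-one.

```python
# A minimal sketch of the alias problem: one actor, many vendor brand names.
# Aliases come from public reporting and are illustrative; vendor clusters
# rarely map onto each other cleanly.
VENDOR_ALIASES = {
    "Mandiant": "APT29",
    "CrowdStrike": "Cozy Bear",
    "Microsoft": "Midnight Blizzard",  # formerly tracked as "Nobelium"
    "F-Secure": "The Dukes",
}

# A shared, community-maintained registry would invert this: one handle,
# many vendor aliases, resolved in the open.
CANONICAL = "APT29"
ALIAS_TO_CANONICAL = {alias.lower(): CANONICAL for alias in VENDOR_ALIASES.values()}

def normalize(name: str) -> str | None:
    """Resolve a vendor-specific name to the shared community handle."""
    return ALIAS_TO_CANONICAL.get(name.lower())

for vendor, alias in VENDOR_ALIASES.items():
    print(f"{vendor:11} reports on {alias!r:20} -> {normalize(alias)}")
```

Four firms, four brands, one actor. Nothing in that table is secret; what's missing is a shared registry that anyone can resolve against.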

All of this reflects the same underlying dynamic: threat intelligence is still treated as a commodity first and a human right second. Even when researchers are motivated by justice, the surrounding ecosystem (legal, commercial, technical) tends to reinforce exclusivity. Intelligence is circulated in closed feeds, under NDAs, behind paywalls, or through private briefings. TLP:RED is rarely about protecting the intelligence; it is about protecting intellectual property. If you're outside the perimeter, you're in the dark, waiting for a tip.

Recently I was thinking about incident response firms: how different their model once was, and how high-quality their threat intelligence remained. One of the clearest counterexamples to the telemetry-obsessed approach we see today comes from Mandiant... before it got swallowed by Google (ofc). Mandiant wasn't a data platform. It didn't rely on deploying agents across millions of endpoints or hoovering up oceans of DNS logs and cloud telemetry. What it had was access: direct, hands-on engagement with compromised systems, and the discipline to investigate them deeply. Its visibility came not from scale, but from proximity. And that proximity was powerful enough to uncover SUNBURST, the backdoor in SolarWinds Orion and one of the most consequential state-sponsored campaigns in recent history, along with many others.

This principle matters more than ever. Because in civil society, we have proximity. We are the ones being targeted. Journalists, lawyers, organizers, exiles, researchers, defenders—we sit directly in the blast radius of mercenary spyware and nation-state surveillance. These attacks aren’t abstract. They arrive through our phones, our messages, our networks. We don’t need to guess what infection looks like. We live inside the proximity window. What we lack is the capacity to act on it.

If visibility comes from proximity, then civil society already holds some of the most critical threat intelligence in the world—data that could save lives. It’s just not being analyzed, processed, or shared, because the infrastructure to do so doesn’t exist. We have the visibility. We don’t have the tools.

That's why decentralizing spyware research isn't just a matter of scale or efficiency; it's a strategic imperative. Every infected device, every strange crash log, every compromised communication is a potential signal. If we can build capacity for local triage, remote forensics, and actionable threat data sharing, securely and on our own terms, I think we have a chance at inverting the power structure. To be clear, this isn't about replacing experts in reverse engineering or vulnerability research. It's about enabling local capacity through technology, so that detection starts from the ground up.
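
As one hedged illustration of what sharing on our own terms could look like: an indicator expressed in the open STIX 2.1 format, which any community lab can publish and any other can consume without a vendor feed in the loop. Every value below is a placeholder, not a real indicator.

```python
# A minimal sketch of community indicator sharing using the open STIX 2.1
# format: plain JSON, no paywall, no NDA. All values are placeholders.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspicious domain observed during local device triage",
    "description": "Placeholder produced by a hypothetical community lab.",
    "indicator_types": ["malicious-activity"],
    "pattern": "[domain-name:value = 'example-exfil-node.test']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```

Because the format is an open standard, the same object can move between a victim's device, a local lab, and a global researcher without ever passing through a proprietary feed.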

Decentralization allows us to operationalize our proximity. It means turning victims into investigators, observers into analysts. It means converting lived experience into collective defense. It doesn’t require massive infrastructure. It requires accessible open-source tools, methodologies, and protocols built for trust rather than control. The signals are already there. The infections are already happening. What’s missing is the architecture that allows us to respond at the edge.

*Capacity building!* I hear you call. Capacity building is important, but I think it will only take us so far. Trainings, workshops, and rapid-response engagements help in the short term, and they are absolutely needed, but they are fundamentally extractive if not paired with long-term infrastructure. We've seen this cycle repeat: a crisis unfolds, support is delivered, a few skills are transferred, and then the ecosystem resets. Without public infrastructure (tools, protocols, and methodologies that outlive the grant cycle), we will be stuck rebuilding the same capabilities over and over again.

Some argue you can't investigate complex spyware threats without training. But that's exactly what embedded infrastructure is designed to solve. We don't teach people to write their own encryption algorithms; we give them Signal. We don't expect them to design their own anonymity networks; we give them Tor. We don't ask them to engineer their own secure operating systems; we give them Tails. These technologies have become embedded responses to systemic risks. Using them doesn't require deep technical expertise. It requires access, openness, context, and trust, not constant retraining.

What if the same could be true for digital forensics? What if, with the right design, threat investigation didn't have to be expert-only? Tools can guide users through anomaly detection. Devices can be queried for signs of compromise with local, privacy-preserving workflows. Remote forensics, when implemented securely and ethically, can remove the need for physical access altogether, enabling experts to support victims across borders or under threat without compromising safety or control. This is crucial in civil society contexts, where meeting in person can be dangerous or impossible. We can bake in forensic visibility the same way we baked in end-to-end encryption: by making it a feature, not a skillset.
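
As a sketch of what a guided, local-only check might look like: the snippet below scans an iOS shutdown.log (as captured in a sysdiagnose) for processes running from filesystem locations that legitimate software rarely uses, a heuristic popularized by open tools like the Mobile Verification Toolkit. The log-line format and path prefixes are assumptions for illustration; a real tool would ship curated, maintained indicators.

```python
# A local, privacy-preserving triage sketch: flag shutdown.log entries for
# processes running from unusual locations. Nothing leaves the device; the
# analysis is a plain text scan. Format and paths are illustrative.
import re
import sys
from pathlib import Path

# Locations legitimate iOS software rarely executes from.
SUSPICIOUS_PREFIXES = ("/private/var/tmp/", "/private/var/db/")

# shutdown.log records clients that delayed shutdown, one per line,
# roughly: "remaining client pid: 543 (/private/var/tmp/implant)".
CLIENT_LINE = re.compile(r"remaining client pid:\s*(\d+)\s+\((.+)\)")

def triage(log_path: Path) -> list[tuple[str, str]]:
    """Return (pid, binary path) pairs seen running from suspicious locations."""
    hits = []
    for line in log_path.read_text(errors="replace").splitlines():
        match = CLIENT_LINE.search(line)
        if match and match.group(2).startswith(SUSPICIOUS_PREFIXES):
            hits.append((match.group(1), match.group(2)))
    return hits

if __name__ == "__main__":
    findings = triage(Path(sys.argv[1]))  # e.g. shutdown.log from a sysdiagnose
    for pid, path in findings:
        print(f"possible anomaly: pid {pid} ran from {path}")
    if not findings:
        print("no known-suspicious paths found (which is not proof of absence)")
```

The specific heuristic matters less than the shape of the thing: a few dozen lines of open code that a guided tool can run on-device, with the user deciding what, if anything, gets shared onward.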

I think our goal shouldn't be to endlessly "train up" new responders just to keep pace. Some training will always be needed, but it should reinforce capacity, not substitute for missing infrastructure. The real aim is to equip communities with the forensic and analytical technology to respond without relying on constant retraining. That only happens when knowledge and tooling become embedded and open, when they become public interest infrastructure.

We know what the future should look like: threat intelligence that saves lives by flowing outward from communities, not downward from platforms. Forensics that begin at the point of compromise, not the point of sale. Infrastructure that is shaped by proximity, not by capital. Tools that don't close-source their methodologies or demand command-line juggling, but are accessible to anybody, anywhere, who needs them.

This is not a problem of innovation. It is a problem of will, of power, and of control. The private sector has built defensive infrastructure at planetary scale—but it was never built for us, because it was never built by us. Civil society doesn’t need a parallel product stack. We need autonomy. That means investing not just in capacity, but in public, civic infrastructure that decentralizes investigation, redistributes knowledge, and embeds threat intelligence into the communities most targeted—not just most visible.

There is no technical reason this future can’t exist. The only question is whether we choose to build it—and for whom.

https://BARGHEST.asia
https://github.com/BARGHEST-ngo

Onward,
Ovi