Incentivizing anti-abuse proactivity among online service providers

Good and bad actors alike use online services provided by different kinds of companies, such as content or DNS hosting, email sending and receiving, or domain name registration. The good guys, on the one hand, use them for legitimate purposes: running businesses that sell products or services, helping connect people, bringing music, graphics, or video content to users, providing information, and so on. We all use these services frequently and are thankful that they exist.

On the other hand, bad guys need these services so they can, for example, run their phishing campaigns, launch disruptive attacks, distribute malware, exfiltrate data, steal people's money through fraud, or distribute child abuse material. Through their malicious (and, depending on the jurisdiction, illegal) activity, they abuse the services and, in doing so, most frequently violate the terms and conditions defined by the providers.

When do bad guys abuse a service? For example, when they register domain names for phishing, send spam campaigns, or launch large volumetric attacks that consume a victim's bandwidth or degrade a network operator's.

Where service providers have a legal obligation to act on certain types of activity, as with anything related to child abuse material, they act swiftly, removing any and all associated content and reporting to law enforcement as appropriate. But where such legal obligations don't exist, some service providers are not as quick to act as a reasonable user would expect, even when their own terms and conditions allow them to. I am specifically referring to cases where there is evidence demonstrating that a customer is abusing their service for something malicious (in particular, phishing, distribution of malware, botnet command and control, fraud, and scams) that is likely to harm users.

In the absence of a legal obligation, most providers lack incentives to be proactive on anti-abuse (heck, many times they can't even afford a dedicated resource) because, they believe, doing so will reduce their already low profit margins and drive customers away, reducing revenue and shrinking market share. And to be honest, their profit margins are indeed low, so this is a legitimate concern.

How can this be addressed? How can the entire ecosystem, or a relevant portion of it, move in a direction where proactive anti-abuse work is not punished with negative financial consequences: increased operating costs, a smaller market share, and, overall, less profitability? What if, for example, there were tax incentives for companies that implement a minimum baseline of proactive anti-abuse? What if they could identify themselves as proactive on anti-abuse and gain deserved trust and recognition from their customers, which would likely increase loyalty to their brands? What if there were a publicly available registry, similar to MANRS or KINDNS, of service providers that voluntarily decide to be effectively proactive against abuse, for the entire world to see? How else could they be incentivized to be proactive?

For better or worse, businesses are about making a profit, so if there is no financial (and reputational) incentive, in the absence of legal obligations it's unreasonable to expect that service providers will change their stance from being moderately or completely reactive, or uninterested, to being truly, voluntarily proactive. Obviously, this doesn't apply to the few that are themselves malicious, criminal, or friendly toward threat actors.

Much has been said about obligations to perform takedowns or suspensions when valid reports with evidence are submitted to service providers. Isn't it time to discuss an even better scenario? Think about this: when anti-abuse work is done reactively, via takedowns and suspensions that happen after abuse reports are submitted, the threat actors have already launched their campaigns and victimized users; that is precisely what allowed the harmful traffic to be seen, analyzed, identified as such, and then reported. In other words, for the victims, the takedowns and suspensions of the malicious infrastructure used to attack them are always too late.

I am not being naïve when I write this post. I am all too aware that there is no silver bullet, and I am all too aware of the complexities of the problem: too many jurisdictions, too many interests, too much money to be made, and sometimes (fortunately not so frequently) interests that are not benign and will simply not change regardless of any possible incentives or legal obligations.

This post is for the sake of the discussion, which I believe is worth having. What should be the way forward? A registry of proactive anti-abuse champions could somewhat easily be created and maintained, and it could have global coverage (or regional, or country-specific, whatever the preference may be). Then again, who should create and maintain it? Who could be the one organization with enough global recognition, and geopolitically neutral, to help drive the definition of the criteria for service providers to be awarded the 'anti-abuse champion' seal and added to the registry?

And what about the financial incentives? What country would be willing to take the lead and set an example for the rest of the world? Would Europe take the first regional initiative? Would the Organisation of Islamic Cooperation or the Organization of American States take the lead in their regions? What about the ASEAN countries? And how could these incentives, should they ever exist in a relevant number of countries, be standardized internationally?

Lastly, for absolute clarity, this is not about freedom of expression and censorship, or about potential trademark or copyright violations. The courts are there for those issues.

Is anyone interested in discussing this? If yes, please do reach out to carlos.alvarez at first.org .

Published on FIRST POST: Jan-Mar 2024