title: Possible routes for distributed anti-abuse systems
date: 2017-04-04 18:00
author: Christine Lemmer-Webber
tags: federation, anti-abuse
slug: possible-distributed-anti-abuse
---
I work on federated standards and systems, particularly
ActivityPub.
Of course, if you work on this stuff, every now and then the question
of "how do you deal with abuse?" very rightly comes up.
Most recently Mastodon has gotten
some attention, which is great!
But of course, people are raising the question: can federated systems
really protect people from abuse?
(It's not the first time it's come up, either; at LibrePlanet in 2015 a
number of us held a "social justice for federated free software systems"
dinner and were discussing these things then.)
It's an important question to ask, and I'm afraid the answer is,
"not reliably yet".
But in this blogpost I hope to show that there may be some hope for
the future.
A few things I think you want out of such a system:
- It should actually be decentralized.
It's possible to run a mega-node that everyone screens their content
against, but then what's the point?
- The most important thing is for the system to prevent attackers from
being able to deliver hateful content.
In a social system, an attack succeeds when the attacker's message gets
across, so that's exactly what we don't want to happen.
- But who are we protecting, and against what?
It's difficult to know, because even very progressive groups often don't
anticipate who they need to protect; "social justice" groups of the past
have often been exclusionary against other groups until they found out they
needed to be otherwise (e.g., in each of these important social movements,
some prominent members had trouble including other social justice groups:
racist suffragists, civil rights activists exclusionary against gay and
lesbian groups, gay and lesbian groups exclusionary against transgender
individuals...).
The point is: if we haven't gotten it all right in the past, we might not
get it all right in the present, so the most important thing is to allow
communities to protect themselves from hate.
Of course, keep in mind that no technology system is going
to be perfect; these are all imperfect tools for mitigation.
But the technical decisions you make do affect who is empowered in a
system, so it's still important to work on these, even though none of
them are panaceas.
With those core bits down, what strategies are available?
There are a few I've been paying close attention to
(keep in mind that I am an expert in zero of these routes at present):
- Federated Blocklists: The easiest "starter" route.
And good news!
If you're using the
ActivityPub standard,
there's
already a Block activity,
and you could build up group-moderated collections of people to block.
A decent first step, but I don't think it gets you very far; for one thing,
being the maintainer of a public blocklist is a risky activity;
trolls might use that information to attack you.
That and merging/squashing blocklists might be awkward in this system.
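To make this a little more concrete, here's a minimal sketch of what that
might look like, assuming plain ActivityStreams 2.0 Block activities; the
merging helper and all the actor names are hypothetical, not part of any
standard:
```python
# A minimal sketch: Block activities as plain ActivityStreams 2.0 objects,
# plus a hypothetical helper for squashing community blocklists together.

block_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Block",
    "actor": "https://example.social/users/alyssa",    # who is blocking
    "object": "https://elsewhere.example/users/troll"  # who gets blocked
}

def merge_blocklists(*blocklists):
    """Squash several group-moderated blocklists into one set of actor IDs."""
    merged = set()
    for blocklist in blocklists:
        merged.update(activity["object"] for activity in blocklist)
    return merged

community_a = [block_activity]
community_b = [{**block_activity,
                "object": "https://elsewhere.example/users/spammer"}]
print(merge_blocklists(community_a, community_b))
```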
- Federated reputation systems:
You could also take it a step further by using something like the
Stellar consensus protocol
(more info in paper form
or even
a graphic novel).
Stellar is a cryptographically signed ledger. Okay, yes, that makes it a
kind of blockchain (which will make some people's eyes glaze over, but
technically a signed git repository is also a blockchain), but it's not
necessarily restricted to cryptocurrencies... you can track any kind of
transaction with it.
Which means we could also track blocklists, or even less binary
reputation systems! But what's most interesting about Stellar is that
it's also federated... and in this case, federation means you can
choose which groups you trust... but due to math-y concepts that
I occasionally totally get upon being explained to me and then forget the
moment someone asks me to explain them to someone else, consensus is still
enforced within the "slices" of groups you are following.
You can imagine maybe the needs of an LGBT community and a Furry
community might overlap, but they might not be the same, and maybe you'd
be subscribed to just one or both, or neither.
Or pick your other social groups, go wild.
That said, I'm not sure how to make these "transactions" not public in
this system, so it's all very out in the open; but since there's a
voting system built in, maybe particular individuals won't be as liable
to attack as individuals maintaining a blocklist are.
Introducing a sliding-scale "social reputation system" may also introduce
other dangerous problems, though I think Stellar's design is probably the
least dangerous of all of these, since it will probably still keep abusers
out of a particular targeted group while leaving
marginalized-but-not-recognized-by-larger groups avenues to set up
their own slices as well.
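To give some flavor of the slices idea, here's a toy model of quorum
slices (my own simplification for illustration, with made-up node names;
Stellar's actual protocol does much more than this):
```python
# A rough sketch of the "quorum slice" idea from federated Byzantine
# agreement: each node names slices of nodes it trusts, and a set of nodes
# is a quorum when it contains at least one full slice for each member.

slices = {
    "lgbt.example":  [{"lgbt.example", "furry.example"}],
    "furry.example": [{"furry.example", "lgbt.example"}],
    "loner.example": [{"loner.example", "bigcorp.example"}],
}

def is_quorum(nodes, slices):
    """True if every node in `nodes` has some slice contained in `nodes`."""
    return all(
        any(slice_ <= nodes for slice_ in slices[node])
        for node in nodes
    )

print(is_quorum({"lgbt.example", "furry.example"}, slices))  # True
print(is_quorum({"lgbt.example", "loner.example"}, slices))  # False
```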
- "Charging" for distributing messages:
Hoo boy, this one's going to be controversial!
Hoo boy, this one's going to be controversial!
This was suggested to me by someone smart in the whole distributed
technology space.
It's not necessarily what we would normally consider real money that
would be charged to distribute things... it could be a kind of "whuffie"
cryptocurrency that you have to pay.
Well, the upside to this is that it would keep low-funded abusers out of a
system... the downside is that you've now basically powered your
decentralized social network through pay-to-play capitalism.
Unfortunately, even if the cryptocurrency is just some "social media fun
money", imaginary currencies have a way of turning into real currencies;
see paying for in-game currency in any massively multiplayer game ever.
I don't think this gives us the power dynamics we want in our system, but
it's worth noting that "it's one way to do it"... with serious side
effects.
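Just to illustrate the mechanism (and only the mechanism), here's a
deliberately tiny sketch; the names and the "postage" rate are made up,
and nothing here is a real currency:
```python
# A toy "pay to deliver" model: the sender's balance is debited per
# recipient before a message is accepted for delivery.

balances = {"alyssa": 10, "spammer": 1}
POSTAGE = 1  # cost per recipient, in "social media fun money"

def deliver(sender, recipients, message, balances):
    cost = POSTAGE * len(recipients)
    if balances.get(sender, 0) < cost:
        return False  # too broke to spam
    balances[sender] -= cost
    for recipient in recipients:
        print(f"{recipient} <- {sender}: {message}")
    return True

deliver("alyssa", ["ben", "eva"], "hi!", balances)       # delivered
deliver("spammer", ["ben", "eva"], "BUY NOW", balances)  # rejected
```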
- Web of trust / Friend of a Friend networks:
Well researched in crypto systems, though nobody's built really good
UIs for them.
Still, there's a lot of potential if the system were somehow made friendly
and didn't require showing up to a nerd-heavy "key-signing party"...
if marking whom you trust and whom you don't (and not just in terms of
verifying keys) were built in as an elegant part of the UI, then yes, I
think this could be a good component for recognizing whom you might allow
to send you messages.
There are also risks in having these associations be completely public,
though I think web of trust systems don't necessarily have to be
public... you can recurse outward from the individuals you do already
know.
(Edit: My friend ArneBab
suggests that looking at how Freenet handles its web of trust
would be a good starting point for someone wishing to research
this.
I have 0 experience with Freenet, but
here are
some
resources.)
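As a sketch of what "recursing outward from the individuals you already
know" might mean, here's a toy graph walk (not any real web-of-trust
implementation, and the names are invented):
```python
# Accept a sender if they're reachable through chains of trust up to a
# few hops long, starting from people you already know.

from collections import deque

trust = {  # who directly trusts whom
    "me":     {"alyssa", "ben"},
    "alyssa": {"eva"},
    "ben":    {"eva", "lem"},
    "eva":    set(),
    "lem":    set(),
}

def trusted(me, sender, trust, max_hops=2):
    """Breadth-first search out to max_hops through the trust graph."""
    frontier, seen = deque([(me, 0)]), {me}
    while frontier:
        person, hops = frontier.popleft()
        if person == sender:
            return True
        if hops < max_hops:
            for friend in trust.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    frontier.append((friend, hops + 1))
    return False

print(trusted("me", "eva", trust))       # True: friend of a friend
print(trusted("me", "stranger", trust))  # False: no chain of trust
```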
- Distributed recommendation systems:
Think of recommender systems in
(sorry for the centralized system references)
Amazon, Netflix, or any of the major social networks
(Twitter, Facebook, etc.).
Is there a way to tell if someone or some message may be relevant to you,
depending on who else you follow? Almost nobody seems to be doing research
here, but not quite nobody; here's one paper:
Collaborative Filtering with Privacy.
Would it work?
I have no idea, but the paper's title sure sounds compelling.
(Edit:
ArneBab also points out that
credence-p2p might also be useful
to look at.
Relevant papers here.)
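To show the basic shape of collaborative filtering (without any of the
privacy machinery that paper is actually about, so treat this as
illustration only, with invented data):
```python
# Score an unknown account by how people similar to you rated it.

from math import sqrt

# +1 = follows/likes, -1 = blocked, 0 = no opinion yet
ratings = {
    "me":     {"a": 1, "b": 1, "troll": 0},
    "alyssa": {"a": 1, "b": 1, "troll": -1},
    "rando":  {"a": -1, "b": -1, "troll": 1},
}

def similarity(u, v):
    """Cosine similarity over accounts both users have rated."""
    keys = [k for k in ratings[u] if ratings[u][k] and ratings[v].get(k)]
    dot = sum(ratings[u][k] * ratings[v][k] for k in keys)
    norm = sqrt(sum(ratings[u][k] ** 2 for k in keys)) * \
           sqrt(sum(ratings[v][k] ** 2 for k in keys))
    return dot / norm if norm else 0.0

def predict(user, account):
    """Similarity-weighted average of everyone else's opinion."""
    others = [u for u in ratings if u != user and ratings[u].get(account)]
    total = sum(similarity(user, u) * ratings[u][account] for u in others)
    return total / len(others) if others else 0.0

print(predict("me", "troll"))  # negative: people like me blocked them
```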
- Good ol' Bayesian filtering:
Unfortunately, I think there are too many alternate routes of attack
for just processing a message's statistical contents to be good enough,
though I think it's probably a good component of an anti-abuse system.
In fact, maybe we should be talking about solutions that can use multiple
components, and be very adaptive...
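The core of such a filter is pleasantly small; here's a sketch trained on
toy data, the same way people train their spam filters by marking
spam/ham:
```python
# Naive Bayes over words, trained on messages the user has marked
# abusive vs. fine.

from collections import Counter
from math import log

abusive = ["you are a cuck", "die cuck die"]
fine    = ["nice post", "you are great"]

def train(docs):
    counts = Counter(word for doc in docs for word in doc.split())
    return counts, sum(counts.values())

ab_counts, ab_total = train(abusive)
ok_counts, ok_total = train(fine)
vocab = set(ab_counts) | set(ok_counts)

def score(message):
    """Log-odds that a message is abusive, with add-one smoothing."""
    log_odds = log(len(abusive) / len(fine))
    for word in message.split():
        p_ab = (ab_counts[word] + 1) / (ab_total + len(vocab))
        p_ok = (ok_counts[word] + 1) / (ok_total + len(vocab))
        log_odds += log(p_ab / p_ok)
    return log_odds  # > 0 leans abusive, < 0 leans fine

print(score("you cuck"))   # positive: leans abusive
print(score("nice post"))  # negative: leans fine
```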
- Distributed machine learning sets:
Probably way too computationally expensive to run in a decentralized
network, but maybe I'm wrong.
Maybe this can be done in the right way, but I get the impression
that without the training dataset it's probably not useful?
Prove me wrong!
But I also just don't know enough about machine learning.
Has the right property of being adaptive, though.
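Purely as speculation, here's one shape this could take: each node trains
a small model on its own local markings, and nodes share and average
weights rather than sharing the messages themselves. This is my guess at
an approach, not an established design:
```python
# Toy "train locally, average weights" sketch: no raw messages change hands.

from math import exp

def local_update(weights, examples, lr=0.1):
    """One pass of logistic-regression-style updates on local data."""
    for features, label in examples:  # features: dict of word -> count
        score = sum(weights.get(w, 0.0) * c for w, c in features.items())
        pred = 1 / (1 + exp(-score))
        for w, c in features.items():
            weights[w] = weights.get(w, 0.0) + lr * (label - pred) * c
    return weights

def average(models):
    """Average weights across peers."""
    words = {w for m in models for w in m}
    return {w: sum(m.get(w, 0.0) for m in models) / len(models)
            for w in words}

node_a = local_update({}, [({"cuck": 2}, 1), ({"nice": 1}, 0)])
node_b = local_update({}, [({"die": 1}, 1), ({"post": 1}, 0)])
print(average([node_a, node_b]))  # merged knowledge from both nodes
```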
- Genetic programs:
Okay, I hear you saying,
"what?? genetic programming?? as in programs that evolve?"
It's a field of study that has quite a bit of research behind it,
but very little application in the real world... but it might be a good
basis for filtering systems in a federated network
(I'm beginning to explore this but I have no idea if it will bear fruit).
Programs might evolve on your machine and mine which adapt to the changing
nature of social attacks.
And best of all, in a distributed network, we might be able to send our
genetic anti-abuse programs to each other... and they could breed and make
new anti-abuse baby programs!
However, for this to work the programs would have to carry part of the
information of their "experiences" from parent to child.
After all, a program isn't very likely to randomly stumble onto the
discovery that a hateful group has started using "cuck" as a slur.
But programs keep information around while they run, and it's possible that
parent programs could teach wordlists and other information to their
children, or to other programs.
And if you already have a trust network, your programs could propagate their
techniques and information with each other.
(There's a risk of a side channel attack though: you might be able to
recover some of the content of information sent/received by inspecting
the wordlists and other data being passed around by these programs.)
(You'd definitely want your programs sandboxed if you took this route,
and I think it would be good for filtering only... if you expose output
methods, your programs might start talking on the network, and who knows
what would happen!)
One big upside to this is that if it worked, it should work in a
distributed system... you're effectively occasionally bringing the
anti-abuse hamster cages together now and then.
However, you do get into an ontology problem... if these programs are
making up wordlists and binding them to generated symbols, you're
effectively generating a new language.
That's not too far from human-generated language, and so at that point
you're talking about a computer-generated natural language... but I think
there may be evolutionary incentive to agree upon terms.
Setting up the "fitness" of the program (same with the machine learning
route) would also have to involve determining what filtering is useful /
isn't useful to the user of the program, and that's a whole challenging
problem domain of its own (though you could start with just manually
marking correct/incorrect the way people train their spam filters with
spam/ham).
But... okay, by now this sounds pretty far-fetched, I know, but I think it
has some promise... I'm beginning to explore it with a derivative of some
of the ideas from
PushGP.
I'm not sure if any of these ideas will work, but I think this one is the
most entertainingly exciting and the most crazy at the same time.
(As an aside, I also think there's untapped potential for
roguelike AI driven by genetic algorithms...)
There's definitely one huge downside to this though, even if it
was effective (the same problem machine learning groups have)...
the programs would be nearly unreadable to humans!
Would this really be the only source of information you'd want to trust?
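To make the evolutionary loop concrete, here's a toy that evolves
wordlist-based filters rather than actual programs; a PushGP-style system
would evolve real program code, so this is a much simpler stand-in (all
data here is invented):
```python
# Candidate filters are wordlists, fitness is accuracy against the user's
# markings, and children inherit ("are taught") their parents' wordlists,
# with mutation.

import random

LEXICON = ["cuck", "die", "nice", "post", "great", "loser"]
labeled = [("die cuck die", True), ("nice post", False),
           ("you are great", False), ("cuck loser", True)]

def fitness(wordlist):
    """How often 'flag iff any listed word appears' matches the user."""
    hits = sum(any(w in msg.split() for w in wordlist) == abusive
               for msg, abusive in labeled)
    return hits / len(labeled)

def breed(mom, dad):
    """Child inherits each parent's words, plus an occasional mutation."""
    child = {w for w in mom | dad if random.random() < 0.7}
    if random.random() < 0.3:
        child.add(random.choice(LEXICON))
    return child

population = [set(random.sample(LEXICON, 2)) for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]
    population = parents + [breed(*random.sample(parents, 2))
                            for _ in range(6)]

print(max(population, key=fitness))  # likely includes "cuck" and "die"
```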
- Expert / constraint based systems:
Everyone's super into "machine learning" based systems right now, but
it's hard to tell what on earth those systems are doing, even
when their results are impressive (not far off from genetic algorithms,
as above! but genetic algorithms may not require the same crazy large
centralized datasets that machine learning systems tend to need).
Luckily there's a whole other branch of AI involving "expert systems" and
"symbolic reasoning" and etc.
The most promising of these I think is the
propagator model by Sussman / Radul / and many others
(if you've seen the constraint system in SICP, this is a grandchild of
that design).
One interesting thing about the propagator model is that it can come to
conclusions from exploring many different sources, and it can tell
you how it came to those conclusions.
These systems are incredible and under-explored, though there's a catch:
usually they're hand-wired, or the rules are added manually (which is
partly how you can tell where the conclusions came from, since the
symbols for those sources may be labeled by a human... but who knows,
maybe there's a way to map a machine's concept of some term to a human's
anyway).
I think this probably won't be adaptive enough for the fast-changing
world of different attack structures... but! but! we've explored a lot of
other ideas above, and maybe you have some combination of a
reputation system, and a genetic programming system, and etc, and this
branch of study could be a great route to glue those very differing
systems together and get a sense of what may be safe / unsafe from
different sources... and at least understand how each source, on its
macro level, contributed to a conclusion about whether or not to trust a
message or individual.
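Here's a greatly simplified sketch of that gluing idea: cells carry both a
value and the provenance of how it was computed, so the final verdict can
explain itself. (This is only inspired by the propagator model; the real
thing is far more general, and everything below is invented for
illustration.)
```python
# Cells hold a value plus the provenance of how it was computed; a
# hand-wired rule glues very different anti-abuse signals together.

class Cell:
    def __init__(self, name):
        self.name, self.value, self.why = name, None, None

    def add(self, value, why):
        self.value, self.why = value, why

def combine_verdict(blocklist, bayes, trust, out):
    """Combine signals into a verdict that can say where it came from."""
    if blocklist.value:
        out.add("reject", f"blocklist says so (via {blocklist.why})")
    elif trust.value:
        out.add("accept", f"sender is trusted (via {trust.why})")
    else:
        verdict = "reject" if bayes.value > 0 else "accept"
        out.add(verdict, f"bayes score {bayes.value} (via {bayes.why})")

blocklist, bayes, trust, verdict = map(Cell, ["blocklist", "bayes",
                                              "trust", "verdict"])
blocklist.add(False, "community blocklist v42")
trust.add(False, "no chain of trust found")
bayes.add(0.81, "trained on my local markings")
combine_verdict(blocklist, bayes, trust, verdict)
print(verdict.value, "--", verdict.why)  # reject -- bayes score 0.81 ...
```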
Okay, well that's it I think!
Those are all the routes I've been thinking about.
None of these routes are proven, but I hope that gives some evidence that
there are avenues worth exploring... and that there is likely hope for
the federated web to protect people... and maybe we could even do it
better than the silos.
After all, if we could do filtering just at or nearly at the level of the
big orgs (which isn't as good as I'd like), that's already a win: it would
mean we could protect people while also preserving the autonomy of
marginalized groups... who aren't very likely to be well protected by
centralized regimes if push really does come to shove.
I hope that inspires some people!
If you have other routes that should be added to this list or you're
exploring or would like to explore one of these directions, please
contact me.
Once the W3C Social Working Group wraps up, I'm set to be co-chair of the
follow-on Social Community Group, and this is something we want to
explore there.
Update: I'm happy to see that
the Matrix folks
also see this as "the single biggest existential threat" and
"a problem that the whole decentralised web community has in common"...
apparently they already have been looking at the Stellar approach.
More from their
FOSDEM talk slides.
I agree that this is a problem facing the whole decentralized web, and
I'm glad / hopeful that there's interest in working together.
Now's a good time to be implementing and experimenting!