A presentation given by Robert W. Gehl, @rwg@aoir.social, at FediForum (online), September 11, 2025.
The corresponding slides are available here.
Note: I do not consent to the scraping of this content for training AI or any other purpose.
Hello, everyone! And thank you for coming to this virtual book launch event! Thanks to Johannes and the FediForum for hosting, as well. In preparing for this, I needed a bit of inspiration, so I turned, as so many others have, to the...
...Inspirational Skeletor account on Mas.to. Indeed, Skeletor! The possibilities are all in front of us.
That turns out to be a good description of a core idea in my new book, Move Slowly and Build Bridges, which was published by Oxford University Press last month. In the book, I take seriously the idea that, at least when it comes to social media, another world is possible.
(As an aside, the cover art comes courtesy of the artist Marcia X -- I want to give a shoutout to Marcia for agreeing to supply artwork for the cover!) Ok, so back to this idea that another social media world is possible...
For many years, ever since my doctoral work in the late Aughts, I've been studying alternative social media -- the work of activists who took a critical look at the Facebooks and Twitters of the world and said, "we can do better." We can make a better social media world. I've referred to this sort of work as "critical reverse engineering" -- encountering an object like Facebook, taking it apart through critique, and then attempting to build something better. For years, my idea of better was something that would be privacy-conscious, that would throw off what we now call "surveillance capitalism."
In the early 2010s, I started a project called the Social Media Alternatives Project, where students and I collected data on alternative social media.
This research program took me to intriguing places, such as diaspora*, Quitter, and one of my favorite examples, Twister, a totally peer-to-peer microblogging system.
This line of research even took me onto the Dark Web, where I wrote about people setting up open source social networking sites where folks could socialize through their pseudonyms -- sites that were built as a direct response to the emphasis on 'real world identities' and surveillance on Facebook.
Throughout this line of research, I firmly believed that the way out of corporate social media surveillance would be achieved through largely technical means -- the proper implementation of decentralization, the latest encryption tools, and free and open source software. And I still believe those things are important.
But as Inspirational Skeletor tells us, Whatever it is you're seeking won't come in the form you're expecting.
Which brings me to Mastodon. I first encountered Mastodon in 2017, joining Mastodon.xyz. Much as I had other alternative social media -- Twister, diaspora* -- I began studying Mastodon in more detail as part of the Social Media Alternatives Project. And as I write about in the book, it checked a lot of boxes: it was federated. It was free and open source software. And it was (and remains) predominantly free of online behavioral marketing and other surveillance capitalism techniques.
But there is more here than the laudable technical achievements of Mastodon. Mastodon -- and the broader fediverse -- have challenged my presumptions about what alternative social media might be. In my PhD training, I studied Science and Technology Studies, a field of inquiry that, in part, focuses on how technical elements and social elements are mutually constitutive. This falls under the label "sociotechnical." In this line of thinking, social practices, politics, and ways of thinking are not determined by technical change, nor do they purely produce technical change. Rather, the two constantly interact. When it comes to alternative social media, one cannot study Mastodon and the ActivityPub-based fediverse without accounting for social practices as much as the technical details.
We spend a great deal of time these days discussing protocols, such as ActivityPub and ATProto. We also see an emerging slogan, "People over Protocols (and Protocols over Platforms)." But one thing we can also do is consider the human stories of technical protocols.
So that brings me to my first sociotechnical story. This story is taken from my book about Mastodon and the ActivityPub-based fediverse. It's a story of protocols and the people who make them.
I have a chapter of the book focusing on the development of ActivityPub. I've talked about this in previous presentations, but quickly here I will say that one reason I focus on Mastodon in the book is because Mastodon was important in the history of ActivityPub. Through a series of historical accidents, Mastodon was the largest and most well-known implementer of the ActivityPub protocol, and this implementation prompted the W3C to give the Social Web Working Group more time to finish the ActivityPub spec -- an extension now known as the "Mastodon extension."
Besides that bit of history, I also argue in the book that ActivityPub is an incredible technical achievement. Consider the stress-test it survived when Musk bought Twitter. While admins who lived through this faced long hours, in the end, ActivityPub proved more than capable of absorbing waves of hundreds of thousands of new people, as well as tens of thousands of new servers. I think that was a triumphant moment for ActivityPub as a technical protocol. But when I asked several of the authors of ActivityPub about the process, I heard something in addition to a sense of achievement.
One part of that story that may not be discussed much is the trauma that the authors of the protocol each told me about -- independently of one another. I interviewed Amy Guy, Evan Prodromou, and Christine Lemmer-Webber, and they each declared that the process of writing, testing, and guiding the ActivityPub spec through the World Wide Web Consortium process was intensely difficult and even deeply distressing.
This is a story that doesn't appear in technical documents -- it doesn't even really appear in meeting minutes -- but has certainly happened across much of the history of the development of technologies we take for granted. This happy little ActivityPub diagram covers up a great deal of human pain.
So a lesson of sociotechnical story #1 is: as we consider protocols, consider the stories of the people who made them.
But there's even more to say about the sociotechnical aspects of the technical elements of the fediverse. I have two more stories to tell. Let me set them up by talking a bit more about "protocol" and "federation." As I discuss in the book (and as I'm sure many of you are aware), "protocol" certainly has a technical definition -- a set of highly specified rules that structure the transfer of data between computers. But there are other meanings of protocol, meanings I drew on in the book. Let's turn to the trusty Oxford English Dictionary.
If we go to the Oxford English Dictionary, we see definition 6.a. Here, "protocol" is tied to "The official rules of etiquette to be observed by the head of state and other dignitaries... the procedure governing diplomatic occasions... the observance of this."
Another meaning has to do with treaties. Meaning 3.a: "a draft of a diplomatic document... signed by the parties concerned, of agreed provisions to be embodied in a formal treaty." Hold this idea of treaty in mind for a moment, and let me turn to another technical term.
"Federation" also has a technical meaning. For example, in databases, it can refer to the distribution of data across many independent databases, often with some unified view into them. Arguably we can think of the Fediverse in this way.
But of course, federation is a much, much older term: borrowed from the Latin _foedus_, meaning a treaty, an agreement among peoples. Here I think of Indigenous confederacies, such as the Haudenosaunee, or the Council of Three Fires, on the land where I am currently residing.
This idea of federation is also tied to the term "covenant" -- an agreement among groups of people. Note the "League of Nations Covenant" here.
So that brings me to sociotechnical story #2: a story of the covenantal fediverse.
In 2023, I was asked to help start a Mastodon instance for the Association of Internet Researchers (or AoIR for short), which is an academic organization of people studying the Internet. AoIR was no longer interested in contributing to Twitter after Musk purchased it. One of the first things I did as a Mastodon admin was help my instance members establish a Code of Conduct. This is a very common practice across much of the fediverse. Codes of conduct are ethical documents that help condition the practices of the instance. But there is no technical reason to set these up, and indeed there's no requirement to have Codes of Conduct in any Free and Open Source software project. You're just as free to ignore this. Where did this common cultural practice come from?
It came from the activism of people like Coraline Ada Ehmke and others who argued that the tech sector had to move past naive visions of meritocracy. Ehmke argues that meritocracy actually excludes more people than it includes -- and indeed, the low number of women and people of color in FOSS development has long been a problem. Ehmke and her allies were arguing for Codes of Conduct at technology conferences in the 2010s (and indeed, FediForum has a code of conduct as a result.)
Ehmke's activism came at a key moment in fediverse history, because as we know Mastodon development started in 2016, right at the time the Codes of Conduct activism was happening. The Mastodon project not only adopted a code of conduct for the development side, but also for Mastodon.social, and other early Mastodon instances followed suit. It has since become a common practice on the fediverse. These rules provide ethical guidelines for social media use. But they are incredibly flexible -- unlike the pretensions of Meta or X, who believe that their rules are globally applicable, instances-as-communities can set their own rules -- their own local covenants.
But these rules don't just provide the basis for content moderation on individual fediverse servers. The fediverse is a network of tens of thousands of small servers, where individuals can interact with others on different servers.
You might wonder: if folks on my server are well moderated, but a server full of trolls appears on the network, what good are my local rules? What happens when trolls from outside my server harass folks on my server? Let's return to the idea of treaties in protocols. Fediverse servers also include the ability to block other servers. And the decision to block a server is often based on the existence -- or lack -- of a strong code of conduct on other servers.
That Troll Server likely has no code of conduct, or at least one that says "anything goes!" But there is no guarantee other servers will maintain their connection to it. Indeed, throughout the history of the fediverse, alt-right troll servers have been isolated by the rest of the network.
Indeed, as multiple research projects have shown, there is common language emerging from the various codes of conduct found across the fediverse -- broadly speaking, the promise of active moderation against hate. So while local instances can set their own rules, we do see an emerging ethical consensus, exemplified in documents like the Mastodon server covenant.
Similarly, consider another covenant, the Fedipact, created by vanta rainbow black in reaction to Meta's adoption of ActivityPub. Fedipact signatories (full disclosure, my instance is a signatory) agree not to federate with Threads. Whether you agree with the Fedipact might hinge on whether you agree with their assertion that Meta has historically been a bad actor as a corporation. For their part, as vanta told me, she loves the fediverse for accepting her and for its active moderation against hate. They contrast this with Meta's Facebook, where she received death threats and, when they reported such posts, Meta did nothing at all.
The lesson of sociotechnical story #2 is: Federation is a diplomatic practice as much as it is a technical one. This means we must consider ethics and politics as much as technical practice.
Related to this is sociotechnical story #3, which involves listing and blocking. While blocking instances individually can be effective, the ease with which someone can set up an ActivityPub instance means that the network can get waves of bad actors.
Marcia X and Ginger created the #fediblock hashtag to help others name and identify bad actors on the fediverse. In doing so, they were engaging in a long history of feminist hashtag activism.
Later, after years of advocacy, Mastodon implemented the ability to mass import blocklists, making projects like The Bad Space, Oliphant's lists, and later the IFTAS list extremely valuable for content moderation. Neither #fediblock nor CSV files are particularly 'high-tech' -- in fact, in an interview, Marcia told me she considered #fediblock to not really be a technical intervention. And yet, these are valuable parts of the fediverse's sociotechnical protocols.
However, while blocklists are a powerful tool, they are controversial -- in fact, this is nothing new. We can go back to Twitter blocklists to see arguments about them. Indeed, any sort of moral listing is controversial.
Listing is, in fact, an ancient technology, and it brings with it two problems: lists do not contain their own selection criteria, and they can seemingly expand infinitely. The first problem leads to people attacking the creators of lists, rather than the practices that gave rise to the lists in the first place. The second problem, the fear of infinite growth, leads to people believing that eventually a listmaker will put them on the list with no hope of being removed.
Indeed, in my interviews with people like Marcia X, Ro (who created the Bad Space), and Oliphant, they all reported receiving hate for making things like #fediblock and advocating for blocklists. Given that Ro is Black and Marcia is Afro-Caribbean, often the attacks were racist. Such attacks led to Ro's original instance, Playvicious.social, shutting down in 2020.
In spite of this, people like Ro, Marcia, and others have given back to the fediverse. #fediblock is still in active use to this day, and is a valuable tool to watch for early indications of unethical actors. The Bad Space is not just one list, but a collection of multiple lists, governed by multiple people. IFTAS has followed suit, developing well-governed lists and mechanisms to remove any instances that are listed unfairly.
In fact, by the time I was asked to help start AoIR.social and serve as an admin in 2023, it was easy to import blocklists to make sure instance members were protected from racist and transphobic instances. And I regularly check #fediblock to keep an eye on new developments so the members of AoIR.social can go about their business of talking about the Internet.
So the lesson I take from sociotechnical story #3: Maintaining a covenantal fediverse is a constant process of struggle. What may be easy now often covers up histories of such struggles.
Today I've talked about the covenantal fediverse’s shared ethical program, considering how technical protocols get produced, how codes of conduct shape federation connections, how moderation practices are discussed and shared among admins, and about debates over blocklists and preemptive blocking. A question remains: What exactly is the ethical program of the fediverse? Reflecting on these practices, I believe that the covenantal fediverse relies on what moral theorists call “the ethics of care.” The ethics of care is about relationships, not individual choices. The relationships can be with other people, of course, but also with machines as well as the natural world.
In other words, care is also a sociotechnical practice, and it can easily be found at every level of the fediverse: people are caring for the codebase, caring for ActivityPub, caring for their instances, watching Sidekiq as waves of people arrive.
They moderate, they post. They add alt text to images. They add content warnings.
They set up channels for mutual aid, and they receive mutual aid. They set up instances with their own money, and they donate money to support instance admins.
And as we know, there are dangers of care. Some folks burn out and leave. They shut down instances or stop acting as moderators. Some leave forever, and some come back. Some express their care through flame wars and acrimony.
But they also have fun: they share jokes, and they watch movies together (shoutout to #Monsterdon!)
Thinking about the fediverse's ethics of care reminds me of the late Stuart Hall's observation that "there are no guarantees." By this he meant that any given social change is not inevitable, that it takes conscious and active struggle. If we believe the fediverse is a means of escaping surveillance capitalism and the corporate domination of our sociality, then the fediverse will require constant care and struggle. As I like to say, it reminds us that "social media" involves sociality, and sociality -- living with others -- is not done through pushing buttons, but is done through communication. And there's no guarantee it will survive. But given what I've seen in my study of the fediverse, at the very least the people struggling here are showing us that...
Inspirational Skeletor is right: another social media world is possible.
Thank you!