[This is an edited and redacted email rant that I sent to a friend back in April 2022, in response to Elon Musk getting involved with Twitter, and Musk’s statements about content moderation. My friend is a lawyer, and not very familiar with Twitter, but he was interested and receptive to the idea that Musk might make Twitter a free-speech zone. (If you know Twitter already, skip “How Twitter Works”)]
[My friend:] “I do like Musk’s idea of opening the moderation algorithm for public view, unless there’s some compelling reason not to do so.”
There are indeed compelling reasons not to do so. They are:
1) it’s not an “algorithm”, per se, and
2) to the extent that it is an algorithm, as soon as you specify exactly how the Twitter algorithm either moderates or ranks results, you open yourself up to exploitation from spammers and abusers.
(By the way, there’s a reason why you and Elon potentially agree about this, and I disagree. It’s because you and Elon have something in common. The thing you and Elon have in common is that you both have no idea what you’re talking about, whereas I, by contrast, do know what I’m talking about. 🙂 Fighting search engine spam (analogous to Twitter spam) was my job for about three years. Feed ranking (analogous to Twitter ranking) was my job at LinkedIn. I suspect that during that time Elon was learning a lot about online payment systems and rockets and batteries and he knows a lot more about those things than I do. But no matter how brilliant Elon thinks he is, he knows some things that he has spent time learning about and not other things that he hasn’t spent any time on. It shows.)
So anyway, Elon is on record that he will do two things if he takes over Twitter:
1) Publish the Twitter moderation algorithm
2) Get rid of spammers and spam bots
(hmm…. do human spammers have free speech in Elon’s view? Or does everyone have free speech except for, you know, Obviously Bad people?)
Absent a radical reconfiguration of what Twitter is and how it works (which I sketch below), these two goals are incompatible. Elon doesn’t realize that they are incompatible. Why doesn’t he realize this? Because he doesn’t know what he is talking about.
This is like a politician promising to eliminate all zoning (because freedom) and also promising to stop certain kinds of businesses from being located near schools. Or promising to promote the renewable energy transition and also promising to keep solar-panel tariffs high for our union workers.
I’ll explain how Twitter works (basic features), then moderation, then ranking. At the end I’ll sketch out Elon’s real options in terms of moderation and algorithms and publishing them.
How Twitter works
[Skip if you’re familiar with Twitter.] The mechanics of Twitter are that you sign up for an account that allows you to both tweet and view tweets by others. Notably, there is no hard verification of identity (a point of controversy) except for famous high-visibility people, who can apply for a “blue checkmark” identity verification. This means that if you see a Twitter account with the name “Elon Musk” and it has a blue checkmark next to it, then you can conclude that it’s actually the Elon Musk you think it is (or possibly a delegate using his account). If you’re a normal user you just sign up with an email account (which is not disclosed publicly in your tweets) and you can give yourself whatever name or title you want (including “NotElonMusk” or “ElonMuskParody”, or just “Elon Musk”).
Once you have a Twitter account, you can choose to “follow” other users. This means that (by default) their tweets will show up in your feed. You can also interact with tweets in various ways: clicking on them, reading them, “liking” them, replying to them, and “retweeting” them, which means that you re-broadcast someone else’s tweet out as a tweet from your own account. These actions on tweets also affect how those tweets are ranked.
Retweets are how tweets achieve virality. If I tweet something, and you retweet my tweet, and then Elon notices your retweet of my tweet and retweets it too, then all of a sudden I either have 15 minutes of positive Internet fame due to love from Elon fanboys, or I have millions of people who hate me and 0.01% of those people are sending me death threats, which means that I’m getting hundreds of death threats, and some of those people figure out exactly who I am and what my home address is, and are tweeting my home address with an image of a gunsight. (But I digress.)
What you actually see in your feed is increasingly ML-determined, as opposed to just a list of tweets from people you follow. I’ll get into that more in the Ranking section.
Moderation
Let’s start with the most basic premise: Twitter must police the content that is on its platform.
There is no real disagreement about this – only about what policing, how much policing, and how the policing should work. Elon is pushing a (not very well thought-through) proposal for minimal policing – he thinks that Twitter should block all the content that is actually illegal, but allow all content that’s legal. Sounds good!
OK, so let’s think about how we’re going to implement Elon’s proposal. Imagine for a moment that you’re head of Content Moderation at Twitter, or you’re Chief Counsel and both the Legal team and the Content Moderation team report up to you. You’re responsible for making the rubber meet the road, so you not only have to determine how the policy works, but are also responsible for the budgets, hiring, management supervision, etc.
Starting implementation proposal: since we want to allow legal tweets and block illegal tweets, we should look at the tweets and determine which ones are illegal. We have highly trained legal talent, so this should be straightforward.
But … most hand-wavy thoughts and opinions about moderation don’t bother to confront the scale challenges of something like Twitter. I don’t have much sympathy for Mark Zuckerberg, but I was sympathetic to his plight when he was being grilled by AOC in a hearing, and she was basically saying “What do you mean you’re not going to take down all the disinformation from Facebook?” and he was trying to explain that the sheer scale of Facebook usage made that challenging. Her mental model was clearly that you just look at the bad stuff, and if it’s bad then you take it down. What’s so hard about that?
Quick sanity check on scale, with some Fermi estimation (which I know is your favorite kind of estimation, especially because it is a good counter to Magical Thinking):
- Twitter has 353 million active users, and processes 500 million tweets per day.
- How long should it take for a legal expert to look at a tweet and determine whether it’s illegal? A minute per tweet? OK, so let’s say 30 seconds, which means 120 tweets per hour.
- How much do we pay our experts? Let’s say $100/hour, fully loaded.
- This means that we can evaluate tweets for legality at an average cost of about $0.83 per tweet.
- So moderation of 500 million tweets a day will cost roughly $415M per day, or $151 billion/year, in moderation labor costs.
- Twitter’s revenue in 2021 was ~$5 billion, so this approach would mean spending about 30x its revenues on moderation. So this approach as posed will not work.
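Here is that same back-of-the-envelope arithmetic as a tiny script, just to make the orders of magnitude easy to poke at. Every input is one of the rough assumptions from the bullets above, not a real Twitter figure, and the script keeps the unrounded cost per tweet, so its totals land a hair above the rounded numbers I quoted:

```python
# Back-of-the-envelope cost of purely human, per-tweet legal review.
# All inputs are the rough assumptions from the bullets above.

TWEETS_PER_DAY = 500_000_000    # assumed tweet volume
SECONDS_PER_REVIEW = 30         # assumed review time per tweet
EXPERT_COST_PER_HOUR = 100.0    # assumed fully-loaded hourly cost

reviews_per_hour = 3600 / SECONDS_PER_REVIEW              # 120 tweets/hour
cost_per_tweet = EXPERT_COST_PER_HOUR / reviews_per_hour  # ~$0.83
cost_per_day = cost_per_tweet * TWEETS_PER_DAY            # ~$417M/day
cost_per_year = cost_per_day * 365                        # ~$152B/year

print(f"${cost_per_tweet:.2f} per tweet")
print(f"${cost_per_day / 1e6:,.0f}M per day")
print(f"${cost_per_year / 1e9:.0f}B per year")
```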
Also, note that implementing this seriously would mean that you’re turning a real-time system into a non-real-time system. Users would post tweets, which would then go into a moderation queue, and some unknown period of time later the tweet would be either published or rejected. Then someone else might reply, and that reply would be held up in moderation for a while. This would turn a “social network” into an unsocial network, and remove much of its interactive appeal.
OK, so that’s a reductio ad absurdum of a moderation system that is per-tweet and purely human-driven. Now let’s look at a system that’s purely algorithmic.
This might worry you from a job-security point of view, but the state of the art of ML text analysis has not gotten to the point where you can feed a tweet into a black box and have it tell you, with perfect accuracy, whether the tweet is illegal. Sure, you could train an ML model on a large corpus of legal tweets and illegal tweets, and get something that does a reasonable job of separating them, but it will have a certain percentage of Type I and Type II errors. Should we delete all the tweets that are flagged by this model? Well, if we do, we’ll be blocking some content that’s perfectly legal, and that’s not what Elon wants. We’ll get fired! Also, plenty of illegal content will slip through.
Anecdote: when I was at Yahoo! Search in the early 2000’s, a colleague of mine spent some evenings and weekends training an ML model to detect child-porn web pages. He took this to the lawyers, who praised him up and down for his civic-mindedness in working on this, and then said “You can’t run this”.
I’m making up the figures, but let’s say that it had a 10% rate of each kind of error. The problem, as the lawyers described it, was that, given that child porn is rare, most of the pages flagged by the model will actually not be child porn. So you will have hundreds of millions of pages flagged as potential child porn. You can’t just delete them all, because that’s a sizable portion of your whole index. But under the law at that time, if Yahoo! had any reason to believe that it had found child-porn web pages, it had to report the offending content to law enforcement within 48 hours. Which would mean Yahoo! staff reviewing hundreds of millions of web pages with a 48-hour turnaround. Which meant… sorry, no child-porn detector. See No Evil, Hear No Evil, etc.
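To see why rarity is the killer here, it’s worth running the toy base-rate arithmetic. The numbers below are as made-up as the 10% error rate above; they’re just there to show that when the bad pages are a tiny fraction of a huge index, almost everything the detector flags is a false positive:

```python
# Toy base-rate arithmetic: why a detector with a "small" error rate
# drowns you in false positives when the target content is rare.
# All numbers here are invented for illustration.

index_size = 4_000_000_000    # assumed pages in the index
prevalence = 1e-5             # assumed fraction of truly bad pages
false_positive_rate = 0.10    # flags an innocent page 10% of the time
true_positive_rate = 0.90     # catches a truly bad page 90% of the time

bad_pages = index_size * prevalence              # 40,000
good_pages = index_size - bad_pages

flagged_bad = bad_pages * true_positive_rate     # 36,000
flagged_good = good_pages * false_positive_rate  # ~400 million

precision = flagged_bad / (flagged_bad + flagged_good)

print(f"{flagged_bad + flagged_good:,.0f} pages flagged")    # ~400 million
print(f"{precision:.3%} of flagged pages are actually bad")  # under 0.01%
```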
OK, now in your imagined role as Chief Content Officer / Chief Counsel for Twitter, it’s time to get real, and to not let the perfect be the enemy of the good. We need an approach to eliminating illegal content that is pretty good, stops most of the bad tweets, lets through most of the good tweets, and spends less money per year than Twitter takes in as revenue. There are some obvious steps we can take to get better moderation bang for the buck:
- Make a hybrid human/machine system. Algorithms flag particularly suspicious content which is kicked up to humans for review. Though less accurate, this is a lot cheaper than having humans review everything.
- Use cheaper reviewers – rather than $100/hour legal experts, use mostly $10/hour less-trained judges – maybe offshore. Have a tiered system where less-expert judges can pass the tricky cases upward.
- Enlist your users to help – put a “flag this tweet for review” button on every tweet. Put those flagged tweets at the head of the review queue.
- Enforce against bad accounts rather than bad tweets (i.e. aggregate at the user level rather than tweet level). If a user sends an illegal tweet, warn them; if they send another one, suspend their account.
In Fermi-estimation style, there are probably 2 to 3 independent cost reductions of 90% in the above list, so maybe that takes the cost of our moderation program from $150B down to between $150M and $1.5B? That’s still a sizable expense, but not one that can’t possibly be paid for.
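The stack-up of those savings is just multiplication (the 90% factors are assumptions, of course, not measurements):

```python
# Stacking two or three assumed, independent ~90% cost reductions
# on the ~$150B/year figure from the earlier estimate.
baseline = 150e9   # $/year, purely human per-tweet review

two_reductions = baseline * 0.1 * 0.1            # ~$1.5B/year
three_reductions = baseline * 0.1 * 0.1 * 0.1    # ~$150M/year

print(f"${three_reductions / 1e6:.0f}M to ${two_reductions / 1e9:.1f}B per year")
```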
The main takeaway I want us to have from the above discussion is that although Twitter aspires to a stricter standard than just legality (Twitter Community Guidelines, or whatever they call it) the broad outlines of how moderation works will be roughly the same whether the standard is minimum legality or the Guidelines.
What I mean by “it’s not an algorithm, per se” is that the results that any given Twitter user sees are there by virtue of a mixture of:
- pure algorithms (no human in the loop)
- human manual intervention (i.e. someone decides that a given user should be banned), and
- machine learning that attempts to algorithmically generalize from human judgments (for example, accounts whose tweets disproportionately use words associated with already-banned accounts might in turn be ranked lower).
- machine learning that does shallow review of a large number of tweets, and flags a minority of tweets for further human review.
Anecdote: when I tweeted something along the lines of “If you’re stupid enough to believe [thing X], then you might as well kill yourself now!”, I instantly received a message from Twitter that I seemed to be advocating suicide, which was counter to the Guidelines, and so my Twitter account was being suspended for 6(?) hours, during which time I wouldn’t be able to tweet. The fact that I received this ban instantly after tweeting suggests that I was banned by an algorithm, not a person, operating without immediate human review, which implies that at least some temporary bans are purely algorithmic. I suspect that permanent bans require human review, though I don’t know for sure.
So the overall behavior of the system is a complex combination of algorithms (a mix of well-understood hand-written rules and ML-trained models), human judgments, and algorithms trained on human judgments. It’s difficult to see what “opening the moderation algorithm for public view” would even mean. What are you going to do: describe the whole human moderation process as part of describing the algorithm?
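If you tried to write down just the automated skeleton of such a system, it would look something like the sketch below. To be clear, this is my guess at the general shape, not Twitter’s actual code; every function, threshold, and category here is invented for illustration:

```python
# A guessed-at skeleton of a hybrid moderation pipeline. The scoring
# functions and thresholds are toy stand-ins, not anything Twitter
# actually uses; the point is the structure, not the details.
from dataclasses import dataclass

@dataclass
class Tweet:
    author_id: str
    text: str

# Toy stand-in for hand-written rules (real systems use URL blocklists,
# banned-phrase lists, rate limits, and so on).
RULE_PHRASES = {"buy followers now", "click to win"}

def rule_score(tweet: Tweet) -> float:
    return 1.0 if any(p in tweet.text.lower() for p in RULE_PHRASES) else 0.0

# Toy stand-in for an ML model trained on past human moderation decisions.
def model_score(tweet: Tweet) -> float:
    suspicious = {"giveaway", "crypto", "dm me"}
    hits = sum(w in tweet.text.lower() for w in suspicious)
    return min(1.0, hits / 3)

def moderate(tweet: Tweet) -> str:
    score = max(rule_score(tweet), model_score(tweet))
    if score >= 0.95:
        return "auto-action"          # no human in the loop (like my instant suspension)
    if score >= 0.5:
        return "human-review-queue"   # a person decides; their decision becomes training data
    return "publish"                  # the vast majority of tweets

print(moderate(Tweet("u1", "Huge crypto giveaway, DM me!")))   # -> auto-action
```

Note that publishing this in full would mean publishing the rule lists, the model, and the thresholds, which is where the adversarial trouble discussed below starts.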
By the way – a tangent that I thought would be amusing to explore. If the above seems like obscurantism to you, let me ask you, as a lawyer: what is the algorithm that describes how criminal sentences are determined, and why hasn’t that algorithm been published? It seems like a matter that would be of great public interest, and if that algorithm were published then it might greatly increase public confidence in the institutions that determine criminal sentences. Instead, as I understand it, it’s some weird mix of “guidelines” (i.e. soft rules) and mandatory minimums (i.e. hard rules), and prosecutorial discretion including plea bargains (i.e. human judgment) and judicial discretion (i.e. human judgment), which at the end of the day ends up with very little transparency for those who are sentenced. Also, as has been much discussed in public forums, the latitude that is afforded to both prosecutors and judges opens the door to their biases (both conscious and unconscious), which might express themselves in left-right terms, but in practice might be even more likely to express themselves as biases around ethnicity or social class. Now, possibly you think that the discretionary aspects are actually a bad idea and that it should be crisply algorithmic and transparent (e.g. anyone convicted of murder gets exactly 13.5 years in prison, no matter who they are or what the circumstances), but that is not what it’s like now, and presumably there are historical reasons why that is the case, and there are arguments in favor of having human judgment in the loop, which arguments have at least partially carried the day.
OK, but let’s say that we want to be as transparent as possible about the automated parts of the algorithm, especially the parts that flag tweets for human review. Why don’t we post the code for those so that everyone can understand how they work?
The problem is that with respect to some “bad” tweeters, moderation is an adversarial process. Twitter wants to moderate them (so they don’t behave badly) and the tweeters want to avoid that moderation and do their bad stuff anyway. Publishing information about how they are detected and moderated just gives them ammunition to figure out how not to be detected.
So let’s say that Twitter has a rudimentary child porn detector that flags tweets for review. How does it work? Well, say that Twitter has constructed a list of 50 words that seem to be highly correlated with child-porny content, and if a tweet has more than 2 of those words, then it gets flagged for review. It publishes that “algorithm” and the list of words. Great, says the child-porn aficionado – I’ll just avoid those words in my tweets, and use synonyms, misspellings, etc. Or maybe Twitter has a fancy machine-learned model built on a training set of child-porny tweets. To fully specify how it works, it publishes the model. Great, says the aficionado – I’ll just run my porny tweets through that model before posting to make sure that the model won’t trigger.
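Concretely, that published word-list “algorithm” and the evasion are each a few lines. Everything below is invented (and sanitized) for illustration; the point is how little work the evasion takes once the detector is public:

```python
# The hypothetical published detector: flag a tweet for human review if it
# contains more than two words from a fixed list. The word list here is a
# sanitized placeholder standing in for the imagined 50 real ones.
FLAGGED_WORDS = {"badword1", "badword2", "badword3"}

def flag_for_review(text: str) -> bool:
    return sum(w in FLAGGED_WORDS for w in text.lower().split()) > 2

# The bad actor's counter-move once the rule and the list are public:
# swap in misspellings/synonyms, and check against the detector before posting.
EVASIONS = {"badword1": "b4dword1", "badword2": "baddwordd2", "badword3": "bad_word3"}

def evade(text: str) -> str:
    rewritten = " ".join(EVASIONS.get(w, w) for w in text.lower().split())
    assert not flag_for_review(rewritten)   # "run it through the model before posting"
    return rewritten
```

The same move works against the fancier published model: you just keep tweaking your text until the model stops firing.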
It’s a war between the moderators and the bad actors, and (as Germany found out when the UK broke all their codes) it’s hard to win a war when the other side knows all your secrets.
Anecdote: When I was in charge of anti-webspam algorithms at Yahoo! Search in the mid-2000’s, I would occasionally go to what then were called “webmaster” conferences, for people who made websites and published content on the web. For many of these folks, their businesses lived and died by how well their websites ranked on the major search engines. So they were extremely interested in learning how web search worked, and how spam moderation and anti-spam ranking worked, so they could rank as high as possible, in a white-hat kind of way, without getting flagged by Yahoo! as spammers and then getting banned or de-ranked.
One example of spam from the time was link-spamming: since search engines take inbound links as a positive signal, you might be tempted to buy a bunch of sites and have them all link to the site you’re trying to boost. Yahoo said that it viewed this technique as spam, and too many inlinking sites that appeared to be controlled by the same entity would get you deranked or banned. On the other hand, there are plausible reasons to cross-link (like your museum site links to your gift-shop site and vice versa – that should be OK).
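At its crudest, the detection side of that boils down to counting how many of a site’s inbound links trace back to one apparent owner. The sketch below is a caricature, not Yahoo!’s actual method; the ownership signal (shared registrant, shared hosting, whatever) is assumed to come from somewhere else, and the threshold is exactly the kind of number we refused to publish:

```python
# Caricature of link-spam detection: how many of a site's inbound links
# come from sources that appear to share a single owner? Not Yahoo!'s
# actual method; the "owner" mapping is assumed to exist upstream.
from collections import Counter

THRESHOLD = 100   # some cutoff; exactly the number we'd never state publicly

def max_links_from_one_owner(inlink_sites: list[str], owner_of: dict[str, str]) -> int:
    owners = Counter(owner_of.get(site, site) for site in inlink_sites)
    return max(owners.values(), default=0)

def looks_like_link_spam(inlink_sites: list[str], owner_of: dict[str, str]) -> bool:
    return max_links_from_one_owner(inlink_sites, owner_of) > THRESHOLD
```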
I would be on panel discussions where some of these extremely interested webmasters would ask specific questions about how ranking or spam detection worked, like “So how many cross-linking sites can I create without being considered a link-spammer?” It’s a reasonable question, but we knew that if we said “More than 100 and you’re a link spammer” we would instantly see gazillions of link networks with exactly 100 sites in each. I learned that at these panels I could say absolutely nothing substantive about how anything worked, even though learning how things worked was the reason the webmasters were there. “What’s the secret of ranking high on Yahoo! Search? Make good content that people like.”, I would say, unhelpfully. Webmasters found that frustrating, and so did I, and I soon lost my enthusiasm for going to webmaster conferences.
Ranking
In the very early days of Twitter, your feed was a strictly chronologically-sorted list of all the tweets from all the accounts you follow. This was fairly quickly replaced with a “best” sorting, not strictly chronological, with results ranked by an estimation (mostly machine-learned) of how interesting the tweets will be to you specifically. In practice this works well, and this ranking-based prioritization allows users to follow many accounts and have the “best” tweets forefronted.
All sorts of signals can feed into this ranking, including both personalized and global signals. For example, if I follow you and Elon Musk and our friend Jeff, and I tend to interact with your tweets and Elon Musk’s tweets a lot, but not so much Jeff’s tweets, then yours and Elon’s tweets will be ranked higher in my feed than Jeff’s tweets are. If I follow many many people then quite possibly there are people I follow whose tweets will never come to my attention in practice. Also, tweets from people I follow that have also gotten a lot of likes and retweets are likely to rank high, as will tweets that have also been liked/retweeted by people I follow. There can also be content analysis – like tweets on (ML-detected) topics that I tend to interact with show up higher. So maybe [climate investor]’s tweets that are about climate change or angel investing show up higher in my feed than his tweets about living in Seattle, based on the terms that show up in the tweets and my history of interaction with those terms.
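In caricature, the ranking is a learned scoring function over a bag of signals like the ones just listed, followed by a sort. The feature names, weights, and the simple linear form below are all my inventions; the real thing is a much bigger model over many more signals, but the shape is roughly this:

```python
# Caricature of feed ranking: score each candidate tweet with a learned
# function over personalized and global signals, then sort descending.
# Feature names, weights, and the linear form are illustrative inventions.
from dataclasses import dataclass

@dataclass
class Candidate:
    author_affinity: float     # how much this viewer interacts with this author
    social_proof: float        # liked/retweeted by people the viewer follows
    global_engagement: float   # likes/retweets from everyone (log-scaled)
    topic_affinity: float      # ML-detected topic vs. the viewer's history
    recency: float             # fresher is higher

WEIGHTS = {
    "author_affinity": 2.0,
    "social_proof": 1.5,
    "topic_affinity": 1.2,
    "global_engagement": 1.0,
    "recency": 0.8,
}

def score(c: Candidate) -> float:
    return sum(weight * getattr(c, name) for name, weight in WEIGHTS.items())

def rank_feed(candidates: list[Candidate]) -> list[Candidate]:
    return sorted(candidates, key=score, reverse=True)
```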
The interesting thing about this kind of ML-based ranking is that if you do it well, then it really works. By “works” I mean that people will tend to use and interact with such a ranked feed a lot more, and will also report much more satisfaction with the product than if you just sort everything chronologically. (This kind of optimization is also dangerous and possibly polarizing, as we’ve seen with Facebook and the destruction of the political system in the U.S. It all depends on what you are optimizing for.)
Anecdote: One of my responsibilities when I was at LinkedIn in the mid-2010s was the ranking function for the LinkedIn feed, which is very similar to the Twitter ranking problem. This ranking function was trained with all sorts of signals as input, some of them global and some of them personalized, and it sorted the feed by an ML-based assessment of how likely the viewing user was to click and interact. Among the signals was recency of the post, so, other things being equal, you’re likely to see more fresh content than stale content.
The fact that this ML-based ranking actually worked was a challenge for product managers, who viewed their job as improving the Feed by implementing product theories as new ranking policies. An example theory was that people want to read more news on the weekend, and more non-news content during the week (or vice versa). So we had to test a version of the Feed where news content was ranked slightly higher on the weekend, and slightly lower during the week. By “test” here, I mean A/B test – we took the user population, divided it randomly into two buckets, and showed the existing version to bucket A and this weekend/weekday variant to bucket B. This was the standard way to vet potential ranking improvements.
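The bucketing mechanics themselves are simple; something like the sketch below, which is the generic pattern rather than LinkedIn’s actual system (a hash of user and experiment IDs gives each user a stable, effectively random assignment):

```python
# Generic A/B bucketing: deterministically split users into two stable,
# effectively random buckets, serve each a different ranking variant,
# then compare engagement metrics. Generic pattern, not LinkedIn's code.
import hashlib

def bucket(user_id: str, experiment: str) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

def rank_for(user_id, candidates, baseline_ranker, variant_ranker):
    if bucket(user_id, "weekend-news-boost") == "B":
        return variant_ranker(candidates)   # e.g. news boosted on weekends
    return baseline_ranker(candidates)      # the existing ML ranking

# Afterwards you compare clicks, likes, comments, dwell time, etc.
# between bucket A and bucket B over the experiment window.
```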
And guess what: all the engagement metrics went down for bucket B. This could be simply because the theory about news and weekends was wrong. But, more subtly, it could be that to the extent that weekend news is favored by users, that effect is already captured by the ML algorithm (since its input features include day of the week and classifications of types of content) and balanced against all the other predictive factors. Forcing one factor to have a higher weight than it had in the existing implementation pushed the overall function off its sweet spot.
Similarly, product managers would channel user complaints that the feed was stale, meaning that too many results were 3 or 5 days old rather than fresh today. So we would A/B-test a recency-boosted version. Guess what? To the extent that users preferred freshness over other factors like relevance, that was already reflected in the algorithm, which balances all the different signals. So the recency-boosted version did worse. In general, hand-created policies that overrode the ML ranking gave worse results by the metrics.
So this kind of ML ranking really works, and is really good at optimizing for whatever target you give it. If you have it optimize for clicks and vituperative argumentation, then stand back.
Which tweets you see in your Twitter feed has become increasingly ML-driven. Until fairly recently, you would see a ranked list of tweets from people you follow. Now, Twitter is beginning to infer tweets that you might be interested in that are not drawn from the strict set of people you follow. These will get injected into your feed, usually tagged with an explanation about the heuristic that was used – like “based on your likes” (for tweets on subjects that you seem to respond to) or “liked by people you know” (for Tweets that are not from people you follow directly but are one hop away in your social network). Somewhat annoyingly, you can unfollow people intentionally but then still end up seeing their tweets.
Anecdote: I decided that I was not that interested in X’s tweets about Y, and so I unfollowed him. But I saw just about the same number of tweets from him as before, due to “based on your likes” and “liked by people you know”. Luckily there are options called “block” (I stop seeing his tweets and he can’t see mine, and if he tries to look at mine he’s loudly told that I blocked him) and “mute” (I just stop seeing his tweets, all of them, but he can still look at mine and he’s none the wiser). I chose “mute”.
The takeaway here is that the tweets you see are not just a time-sorted list of tweets from people you follow, but are ranked and to some extent even selected by opaque machine-learned algorithms, and in practice users like Twitter a lot better when this is the case, even though the transparency of the system and its controllability by the user decline, which other things being equal is a bad thing.
So what could Elon actually do?
Let’s take Elon at his word: he wants greater transparency, greater “freedom of speech”, and also to do away with spammers and spambots. Let’s take those goals as seriously as we can, despite the fact that they conflict with each other. How should he proceed?
First of all, he could decide that the moderation standard will be legality, or as close to that minimal standard as he can get.
Now he has already departed from that by saying that he will get rid of spammers and spambots. Let’s say that I set up 100 independent Twitter accounts all under my control, and I use NLP algorithms to generate many fake tweets all on the same theme (as Russia famously did, for example), and I make those accounts follow various Twitter users and like their tweets in the hope of getting liked and followed back, and have each of my 100 accounts generate thousands of tweets per day. Now, I don’t believe that there is anything illegal about doing this, but Elon wants such accounts to be banned because he rightfully believes that the overall Twitter experience would be much better without spambots.
So it seems that Musk is proposing a “Musk community standard” which is, minimally, that you must not post illegal content and you must not set up spambot accounts. It turns out that there’s no way in principle to look at a tweet or account and determine whether it’s spamming – you have to use algorithms, which have Type I and Type II errors. So we’re both going beyond pure legality and introducing some lack of transparency into the system if we combat spambots.
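What such an algorithm actually looks at are statistical tells of coordination: machine-like posting volume, near-duplicate text across accounts, batches of freshly created accounts, lopsided follow graphs. A sketch, with every threshold and weight invented, and with the caveat that each of these signals also fires on some perfectly legitimate accounts (hence the errors):

```python
# Sketch of a spambot-network heuristic. Thresholds and weights are
# invented; every signal here also fires on some legitimate accounts,
# which is exactly where the Type I and Type II errors come from.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    near_duplicate_ratio: float      # fraction of tweets nearly identical to other accounts'
    account_age_days: int
    follower_following_ratio: float  # followers divided by accounts followed

def spam_score(a: Account) -> float:
    score = 0.0
    if a.tweets_per_day > 500:             # machine-like volume
        score += 0.4
    if a.near_duplicate_ratio > 0.5:       # mostly copy-pasted content
        score += 0.4
    if a.account_age_days < 30:            # freshly created
        score += 0.1
    if a.follower_following_ratio < 0.01:  # follows thousands, followed by almost nobody
        score += 0.1
    return score   # e.g. suspend above 0.8, send to human review above 0.5
```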
But that’s it – let’s hold the line there. Legal non-spambot tweets only – but otherwise anything goes.
Now everybody, and I mean everybody, who has ever run a social or community-driven site (e.g. Yishan Wong) thinks that this is a terrible idea, and will lead to mayhem in the real world. Like the scenario I mentioned earlier: I post a tweet critical of Musk, you retweet it, Musk retweets your retweet, but adds a sarcastic comment, 1000 Musk fanboys retweet that, 1 Musk fanboy who happens to also be a white supremacist retweets one of those saying “this guy [me] isn’t just a Musk-hater he’s a n*****-lover”, and someone retweets that and says “Who is this n*****-lover anyway?”, and someone researches me (aka “doxxes” me) and says “Found him! He’s T C in San Francisco, he works at Adobe, and his address is 2115 X, and his phone number is Y”, and the reply says “San Francisco! Of course he’s probably a liberal f****t as well as a n*****-lover”, and someone posts a photo of me with gunsight superimposed, and someone else says “How long are we going to put up with liberal f*****s like this anyway?”, and someone else says “Not another day. Tonight’s the night. 7pm. Be there.” and that tweet gets 1500 likes and retweets.
With your lawyerly eye, can you look at that chain of events and say if or when the tweets start to be illegal? I don’t know, maybe at some point it’s illegal hate speech? Or maybe at the end there’s incitement to riot, or something like that? I do not know myself. But wherever you end up drawing that line, I suspect that the basic chain could be modified to happen in basically the same way with a little more circumspection to happen just this side of the law. I would also guess that much fewer such chains run to some bad real-world completion in a world where the Community Guidelines are enforced, and people who have apparently whipped other people up into violent frenzies get permanent Twitter bans even when they don’t receive criminal charges. (Now before you do the both-siderist thing, I am making no claims here about bias or the lack of it. My only claim is that even imperfect and possibly-biased enforcement of something like the Twitter Community Guidelines (a stricter standard than legality) is going to interrupt more mayhem-chains like this that lead to real-world harassment and violence than enforcement of a minimum-legality standard. Maybe the Guidelines will interrupt 96% of white-supremacist mayhem chains as opposed to only 93% of BLM mayhem chains, because of the (hypothesized) leftist bias of Twitter moderators, which will be terribly unfair to white supremacists, and will add to their already-long list of grievances. I don’t know and (for purposes of this argument) I don’t care. I just argue that there will be less mayhem.)
My suspicion is that in practice Musk would start with legality+nospam as the standard, and would find himself forced by bad behavior to add more enforced behavioral guidelines over time, until he arrived at the Extended Musk Community Guidelines which, although they might be more minimal than the current Twitter guidelines, would not be as simple as he has envisioned.
Also, for all the reasons stated, I don’t think that he can “publish the moderation algorithm”. Even if it’s about legality alone it will be a complicated human/machine hybrid, and publishing the algorithmic pieces that flag potentially-illegal content will simply make it easier for bad actors to avoid being flagged.
What can Musk do to make the system more transparent? A lot, actually. Much of it would be around removing ML algorithms and replacing them with simple publicly-stated rules. The simplest change would be to forswear the “best” (non-chronological, machine-learned) ranking of the tweets you see, and replace it with a simple chronological listing of all tweets from accounts you follow (and only from accounts you follow). This would be a worse user experience (as I’m sure Twitter established by A/B test when it launched the first version of best ranking), but it would be more transparent, more user-controllable, and less exploitable by spammers. Even though chronological ranking is already selectable by users as an option (and very few people choose it, and many who do eventually switch back), it would change the overall character of Twitter if chronological were the only option for everyone. It will be interesting to see what happens if he does it. Usage and user enthusiasm would decline, and presumably revenue along with it, but maybe that’s OK. He has already said that he’s not doing this for the money.
Another change Elon could make is to identity and anonymity (and he has talked about doing this). As I said, some elites can opt into blue-check verification, where they provide proof of identity and Twitter marks them as being who they say they are, but most Twitter users are effectively anonymous if they want to be. They can choose a Twitter name and handle that has nothing to do with their real name and safely say whatever they want (unless and until they are “doxxed”). One of my favorite accounts to follow is @Angry_Staffer, who was allegedly an anonymous staffer in the Obama White House when he started the account.
Clearly some people feel freed by anonymity to be meaner than they otherwise might be, and Musk has argued that if you required everyone to be identified with their real name, as verified by documents, then they would behave better. (Facebook does something like this.) With that said, it’s pretty thorny. Can political dissidents tweet (whether from China, Saudi Arabia, Russia, Ukraine, or the U.S.)? Can people tweet and ask for help about their abusive marriages, bad bosses, or mental health challenges? It’s clearly full of tradeoffs. I will admit that there’s something that seems a bit off about making Twitter a haven for free speech (including political speech) but first making sure that your government knows who you are and can hold you accountable for what you say.
I’ll leave you with this thought from a NYT article:
“Mr. Musk himself has had a rocky relationship with online speech. This year, he tried to quash a Twitter account that tracked his private jet, citing personal and safety reasons.”
This is the guy who might now get total control of Twitter. Do you think that a Twitter account that tracks Musk’s private jet is going to have more free speech latitude after Musk buys Twitter than it did before?