More than 13 million users have joined the ranks of the latest buzzy social media platform Bluesky, the self-proclaimed "billionaire-proof" micro-blog that is taking on Elon Musk's X.
Since the presidential election — and Musk's hard right turn — X users have begun leaving the platform en masse, harkening back to the early days of Musk's acquisition and the search for Twitter alternatives. Bluesky (also known as Bluesky Social) has gained about one million new members a day ever since, shooting to the number one spot on the App Store.
The social networking site is "designed to not be controlled by a single company," the company writes. "Traditional social networks are often closed platforms with a central authority. There’s a small group of people who control those companies, and they have total control over how users can use the platform and what developers can build." Its mission, in summary, is to host a social network made in the image of the early internet, one created by the people, for the people.
Who owns Bluesky?
Bluesky is the brainchild of former Twitter CEO Jack Dorsey. It began in 2019 as an internal Twitter initiative exploring a move of the platform onto open-source infrastructure. Dorsey left his former company in 2021, and the Bluesky team's work quickly evolved into a decentralized, open-source alternative to modern social media behemoths. Bluesky became an independent company in 2021 and officially launched onto the scene in 2022; Dorsey sat on Bluesky's board until earlier this year.
In May, Dorsey unceremoniously left Bluesky's leadership, deleting his account and urging users to go back to (or stay on) X instead of joining the burgeoning social media site. Dorsey had previously expressed support for Musk's takeover and redirection of X, praising the billionaire's take on "anti-censorship" and making the case for an "algorithm-free" world. He later invested millions into a crypto-aligned social network known as Nostr.
Since then, the platform has remained under the supervision of CEO Jay Graber, whom Dorsey appointed in 2021 after bringing her in to consult on the internal Bluesky project. Graber is a software engineer specializing in decentralized technology and an alternative-banking advocate, having previously worked in both bitcoin mining and the development of the privacy-focused cryptocurrency Zcash. A vocal opponent of Big Tech, she previously launched a Facebook Events alternative called Happening, described as a user-first event-coordination website.
The rest of Bluesky's board includes Jeremie Miller (the inventor of Jabber/XMPP, an open communication protocol for instant messaging), Mike Masnick (founder of the blog Techdirt and the "Silicon Valley oracle" behind the Copia Institute think tank), and Kinjal Shah (a general partner at the venture capital firm Blockchain Capital).
What are Bluesky's community guidelines?
Bluesky summarizes its community guidelines using three company principles: empowering user choice, cultivating a welcoming environment, and evolving with feedback. What does this mean in practice? That's harder to pinpoint. For the most part, Bluesky's guidelines are pretty straightforward. The platform outlines content users are prohibited from sharing, including materials from hate groups or "proscribed terror groups," child sexual abuse material, and content facilitating sexual exploitation. It also prohibits predatory behavior, data theft, doxxing, the spreading of misleading information, scams, and copyright infringement, as well as hate speech or harassment based on race, gender, religion, ethnicity, nationality, disability, or sexual orientation. Bluesky does not expressly mention misgendering or deadnaming protections; the removal of similar protections on its rival X drew criticism.
The platform relies on user reports for posts, accounts, and direct messages, and offers blocking features. Bluesky does not let users make their profiles private: all posts, likes, blocks, and mutelists are public, while direct messages are private.
What is Bluesky's privacy policy?
Bluesky does collect personal information, including birthdates and emails, when provided. According to its privacy policy, Bluesky reserves the right to use this data for administrative and marketing purposes, and may share this information with third parties.
Graber has said that the platform has no plans to introduce "traditional advertising" onto the site, an essential revenue-generating component of most modern social media platforms that often involves harvesting or selling personal data to create "microtargeted" advertisements for users. Bluesky has also pledged never to use user data to train generative AI systems. "We do not use any of your content to train generative AI, and have no intention of doing so," wrote the company, which has yet to introduce gen AI features onto the platform.
Bluesky abides by the principle of "age-gating" its users, requiring each account to register a birthdate upon signing up. After incorrectly stating in an interview with the BBC that the age minimum was 18, Graber and Bluesky later clarified that the minimum is actually 13. Graber has stated that Bluesky does not support ID verification as a matter of user privacy; ID verification laws are popular among politicians advocating for child safety regulations but have stoked major privacy concerns among experts.
"Child safety is extremely important for Bluesky," the company told the BBC. "You must be at least 13 years old to sign up for an account, and anyone under 18 using Bluesky has additional settings applied to ensure that the content they see is safe for minors."
How does Bluesky approach content moderation?
According to the platform, Bluesky's moderation efforts focus on creating an "ecosystem of third-party providers" that can be "composed and customized" to shape each user's experience. Essentially, Bluesky layers traditional moderation with community-engineered tools: fact-checking features (similar in spirit to X's Community Notes), services built by third parties, and custom algorithms chosen by the user. Moderation isn't done by a central authority but by server administrators, meaning different servers apply different moderation, and users can subscribe to these different feeds at will.
The site uses three levels of moderation: automated filtering (for illegal or objectionable material), manual administrator actions (to filter out or flag content for users), and this "community labeling." On Nov. 25, the platform announced it would quadruple its content moderation team from 25 to 100 people, addressing growing concern about the increase in child sexual abuse material and other harmful content as users flocked to the site.