Hosting a Personal Mastodon⌗
I’m part of the T_____r diaspora.
Something about paying $8 per month to the world’s richest product manager didn’t feel right.
Unfortunately, the few big Mastodon hosts had already suspended registration by the time I decided to move.
There are lots of smaller instances available, but I’ve been burned by T______r and I’m genuinely worried there are some problems ahead for the flotilla of fediverse hosts. The administrators of these instances are clearly dedicated and competent, but how can I be sure they’ll stay online?
So, I decided to set up an instance just for me and host it on my own domain. If I fuck it up, it’s only my own fault.
Interzone⌗
That’s this domain. It’s named after the hallucinatory mixing pot from William S. Burroughs’s Naked Lunch. A Taiwanese TLD just felt right.
The hardware behind this Mastodon instance is:
- A droplet hosting the Mastodon web, sidekiq, and streaming servers (2GB RAM, 2 CPU, 60GB SSD);
- A managed PostgreSQL instance (1GB RAM, 1 CPU, 10GB SSD);
- A managed Redis instance (1GB RAM, 1 CPU, 10GB SSD); and,
- A managed Spaces bucket (250GB) with a CDN.
The combined cost of all of this is about $52, or two take-out meals, per month.
Is that overkill for a single-user instance? Most definitely.
I could have colocated the PostgreSQL and Redis instances with the servers, and stored and hosted content from the machine itself. But I want this instance to last, I’m a SWE, not an SRE, and I just don’t trust myself to properly manage my own databases.
The distance between docker-compose down and disaster is pretty small.
Setting up your own⌗
I’ll assume you’re comfortable with the cloud and Linux, and focus on the trickier parts of the setup that I haven’t seen covered anywhere else.
The basics:
- Sign up for DigitalOcean if you haven’t already, and create a new project to collect all of your Mastodon assets;
- Spin up the droplet, databases, and spaces bucket that I mentioned in the last section (a CLI sketch follows this list);
- Follow the official guide but skip anything that involves setting up PostgreSQL or Redis on your own machine.
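Here’s that CLI sketch for spinning the resources up, if you’d rather not click through the console. It’s a rough outline using doctl and the AWS CLI: the names, region, and size slugs are my own placeholders, and the Space is just as easy to create in the console instead:
# Droplet for the web, sidekiq, and streaming servers (2 CPU, 2GB RAM, 60GB SSD)
$ doctl compute droplet create mastodon --region <YOUR-REGION> --size s-2vcpu-2gb --image ubuntu-22-04-x64 --ssh-keys <YOUR-SSH-KEY-FINGERPRINT>
# Managed PostgreSQL and Redis (1 CPU, 1GB RAM each)
$ doctl databases create mastodon-pg --engine pg --region <YOUR-REGION> --size db-s-1vcpu-1gb
$ doctl databases create mastodon-redis --engine redis --region <YOUR-REGION> --size db-s-1vcpu-1gb
# Spaces bucket, created through the S3-compatible API (uses the key pair from the Spaces section below)
$ aws s3api create-bucket --bucket <BUCKET-NAME> --endpoint-url https://<YOUR-REGION>.digitaloceanspaces.com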
Before you run RAILS_ENV=production bundle exec rake mastodon:setup in that guide, check the sections below for some DigitalOcean-specific gotchas.
PostgreSQL⌗
The database was pretty easy to set up, with a couple of gotchas:
- Create an empty postgres database before starting the setup script, because the connection check is really a “does a postgres database exist there?” check;
- Create a non-superuser account for mastodon, but grant it permission to create databases; the setup script will fail if its database already exists, so don’t create that one yourself (a psql sketch of both steps follows this list).
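Here’s a rough sketch of both steps, run as the cluster’s admin user. The doadmin user, defaultdb database, and port placeholder are assumptions based on DigitalOcean’s defaults, so swap in whatever your console shows:
$ psql "postgresql://doadmin:<ADMIN-PASSWORD>@<YOUR-PG-HOST>:<YOUR-PG-PORT>/defaultdb?sslmode=require" <<'SQL'
-- empty database so the setup script's connection check passes
CREATE DATABASE postgres;
-- non-superuser role; CREATEDB lets the setup script create the mastodon database itself
CREATE ROLE mastodon WITH LOGIN CREATEDB PASSWORD '<A-STRONG-PASSWORD>';
SQL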
With all that done, the connection parameters in the DigitalOcean console work just fine.
Redis⌗
The low-level Redis client used by Mastodon does not support TLS, so you’ll need to set up a tunnel. I chose stunnel4 for this - it ships as a Debian package and setup is pretty straightforward:
$ apt install stunnel4
Then, put the following into /etc/stunnel/redis-client.conf:
pid = /var/run/stunnel4/redis-client.pid
[redis]
; run in client mode: accept plaintext locally, speak TLS upstream
client = yes
; local port Mastodon will connect to
accept = 6379
; the managed Redis host and TLS port from the DigitalOcean console
connect = <YOUR-VPC-REDIS-URL>:<YOUR-REDIS-PORT>
And enable the stunnel4 service:
$ systemctl enable --now stunnel4
Now, when you’re running the Mastodon setup script, provide localhost and 6379 as the host and port for Redis, and the password from the DigitalOcean console.
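To check the tunnel is actually carrying traffic before you run the setup script (this assumes redis-cli is installed, e.g. from the redis-tools package):
$ redis-cli -h 127.0.0.1 -p 6379 -a <YOUR-REDIS-PASSWORD> ping
PONG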
DigitalOcean Spaces⌗
This is another S3 API-compatible clone that manages to have a much better UI than the AWS offering:
- Make sure to enable the CDN for your space. You don’t need to provide a custom subdomain for it;
- You’ll need an access and secret key pair for its API; it’s not obvious, but you can generate one from the “API” section in the DigitalOcean console.
When you’re setting up Mastodon, enter whatever garbage you want when it asks for your S3 bucket details. It doesn’t matter yet.
Before you run your server, edit your .env.production and update the values to look like:
S3_ENABLED=true
S3_BUCKET=<BUCKET-NAME>
S3_ENDPOINT=<BUCKET-ENDPOINT>
S3_ALIAS_HOST=<BUCKET-NAME>.<REGION>.cdn.digitaloceanspaces.com/<BUCKET-NAME>
AWS_ACCESS_KEY_ID=<API-ACCESS-KEY>
AWS_SECRET_ACCESS_KEY=<API-SECRET-KEY>
Yeah, the S3_ALIAS_HOST setting has the bucket name twice. I know it’s weird, but you need it, fixing it would take a pull request to Mastodon, and I’m too lazy.
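If you want to confirm the key pair and bucket work before restarting anything, the AWS CLI talks to Spaces just fine; the aws command and environment-variable auth here are my own assumption, not something Mastodon needs:
$ AWS_ACCESS_KEY_ID=<API-ACCESS-KEY> AWS_SECRET_ACCESS_KEY=<API-SECRET-KEY> AWS_DEFAULT_REGION=<REGION> aws s3 ls s3://<BUCKET-NAME> --endpoint-url https://<REGION>.digitaloceanspaces.com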
Feedback is a gift⌗
Thanks for reading. If you have any corrections/pain, hit me up at @[email protected].