Welcome back to The Deep Dive, the show where we grab that huge stack of technical
docs
and just boil it all down for you.
Today we are popping the hood on self-hosting, specifically we're going to tackle
the communication
platform called Stote.
And we are really going to connect the dots.
Our mission today is, I think, pretty simple.
We want to build a roadmap for a beginner.
The official instructions for deploying Stote, which is a full stack chat platform,
can look,
well, they can look pretty intimidating.
Like a foreign language.
Exactly.
So we're going to distill the critical how and, maybe more importantly, the why of
running
this thing on your own server using that massive complexity shield we call Docker.
Self-hosting is really the path to digital independence, but that journey, it
always starts
with a lot of setup.
We're going to show you not just which button to press, but what's actually
happening when
you press it.
And we'll focus on the security side because that's where people usually get tripped
up.
Before we jump in though, this deep dive is brought to you by Safe Server.
Safe Server handles software hosting and supports digital transformation.
They give you that solid foundation you need for a project like this one.
To learn more about how they can help you out, just head over to www.safeserver.de.
Again, that's www.safeserver.de.
Okay, let's unpack this right away.
When a beginner hears self-hosting, they're probably thinking of, I don't know, one
little
app.
Right.
If you look at the material for Stote, it looks like something way bigger.
What are we actually deploying here?
You've hit on the most important point right at the start.
You are not deploying a single application, I mean, not even close.
The configuration in the docs, it deploys a complete ready-to-go digital ecosystem.
You're getting the backend API server, the web front end, a dedicated file server, and a metadata proxy.
Wait, a dedicated file server and a metadata proxy?
Why do you need a separate proxy just for metadata and images?
That's a great question.
You could try to make the main server do it, but it would just grind to a halt, and
it's
a security risk.
The proxy is like a shield.
Its main job is to fetch things like link previews.
So when someone pastes a YouTube link, the proxy goes and gets that thumbnail and
title,
but it does it anonymously and securely.
It takes the load off your main app and hides your server's IP.
It's just good digital hygiene.
I see.
So we're talking about running four different services all at once.
That sounds like a full-time job for a sysadmin a decade ago.
How does Docker make this possible for a beginner?
Docker's the only reason this is possible for a beginner.
It's the magic.
Instead of you having to install a specific database version and Node.js and
configure
all the network ports.
A total nightmare.
A total nightmare.
Yeah.
Docker just wraps it all up.
The docker-compose.yml file is basically a recipe that says, hey, run these four
things,
connect them this specific way, and don't bother me with the details.
It turns chaos into a single command.
It packages the chaos.
I like that.
Okay.
Let's talk reality.
Someone's spinning up a new server for this.
What are the minimum specs we're talking about?
The docs are pretty realistic here.
For a functional instance, even a small one, you need a machine with at least two vCPUs
and two gigabytes of memory.
And what happens if you go with less?
You're just going to have a bad time.
Things will crash.
Performance will be terrible.
All those services need a little room to breathe.
And for the operating system, they recommend Ubuntu Server.
That's what they use in production, so it's the safest bet.
Got it.
Two vCPUs, two gigs of RAM, Ubuntu.
So we've got our server.
We're connected via SSH.
Before we even type Git or Docker, you always say we need to lock it down.
What happens if I just skip that part?
You're starting with known vulnerabilities.
Running apt update and apt upgrade right away is just foundational.
But the firewall, that's the critical step.
The UFW thing.
The Uncomplicated Firewall, yeah.
If you forget to configure it, you just put a giant kick me sign on the internet
for every
automated bot out there.
But if I'm just running this for my small team, is it really that big of a threat?
It's not overkill, it's survival.
The threat isn't targeted, it's automated.
If you don't set UFW to deny incoming traffic by default, and then specifically open up SSH, HTTP,
and HTTPS, and only those, you might be exposing your database port to the whole world.
Wow.
Okay.
You're basically shrinking your attack surface down to just the three doors you
absolutely
need open.
That makes sense.
Moving the defense line right up to the front door.
And what about securing SSH itself?
Well, if you set up an SSH key, which you really, really should, the next logical
step
is to just turn off password authentication completely.
You just edit a config file for that?
Yep.
In /etc/ssh/sshd_config.
Passwords are always the weakest link.
By using only cryptographic keys, you basically eliminate the entire threat of brute force
password
attacks.
It's a huge security win.
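As a rough sketch, assuming a standard Ubuntu OpenSSH install, the change looks something like this; make absolutely sure your key login works before you apply it.

# Disable password logins in the SSH daemon config
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
# Reload the daemon so the change takes effect (the service is usually called "ssh" on Ubuntu)
sudo systemctl restart ssh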
Excellent.
Okay.
Server's locked down.
It's updated.
Now we need the tools for the job.
What do we have to install before we can even think about Stote?
Two main things.
First, you need Git, obviously, to pull the code.
And second, you need the entire Docker suite.
Not just Docker itself.
No.
You need docker-ce, containerd.io, and the Docker Compose plugin.
You need the whole stack because Docker Compose is the tool that's going to orchestrate
all
the different containers.
It's the conductor.
Exactly.
It's the conductor of the container orchestra.
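On Ubuntu, installing those tools looks roughly like this; the Docker package names come from Docker's own apt repository, which you need to add first by following Docker's install docs.

# Git from Ubuntu's repositories
sudo apt install git
# Docker Engine, containerd, and the Compose plugin from Docker's apt repository
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose-plugin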
So dependencies are installed.
We use Git clone to get the self-hosted repository.
Now we're in the folder.
For someone who hates editing config files, what's the shortcut the docs give us?
The elegant answer is a simple shell script.
You just run the generate_config.sh script with your domain, and that's it.
It creates the core config files for you, and it prefills your domain name
everywhere
it needs to be.
It's the easiest way to get started.
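Put together, and with the repository URL and domain as placeholders you'd swap for your own, the steps look something like this:

# Placeholder URL -- use the project's actual self-hosted repository
git clone https://example.com/stote/self-hosted.git
cd self-hosted
# Generates the core config files with your domain prefilled
./generate_config.sh chat.example.com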
That does lower the barrier quite a bit.
So we've run the script, but before we launch, what are some of the other knobs we
can tweak?
This is where you get a lot of control.
You can immediately enable things like email verification to stop spam accounts.
Or you can add a captcha to the signup page.
Oh, that's useful.
Or maybe you want to use your own S3 bucket for file storage.
You can set that up right away.
These aren't just details.
They're choices that define how your community works.
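Just as a pointer, not a recipe: the exact key names for those options live in the generated Revolt.toml and its example file, so a quick way to find the sections you'd be editing is something like this.

# Locate the settings for email verification, captcha, and S3 file storage
grep -n -i 'smtp\|captcha\|s3' Revolt.toml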
Okay, so we've set our configuration, the moment of truth.
What's the command to actually launch this whole thing, and how do we make sure it
stays
running?
The final step is docker compose up.
Yeah.
Here's the trick.
The first time, you run it without the -d flag.
You run it in the foreground.
Why?
Because you want to see the logs.
Oh.
A flood of text is going to fill your screen, and you want to watch it for, say, 30
seconds.
You're looking for errors.
Is the database connecting?
Is the API starting?
It's your preflight check.
The preflight check.
I like that.
Exactly.
Once you see everything looks stable, you hit Ctrl plus C to stop it, and then
you run the real command: docker compose up -d. That little -d means detached, and it runs everything
in the background, so it stays up even after you log out.
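In commands, that two-step launch looks like this:

# First launch in the foreground so you can watch the logs for errors
docker compose up
# ...watch for about 30 seconds, then press Ctrl+C once things look stable...
# Relaunch detached so it keeps running after you log out
docker compose up -d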
And we're live, but probably still on test settings.
Let's talk customization.
The docs make a big deal about replacing every instance of local.stote.chat with
your real
domain.
Why is that so important?
It's dangerous to miss this because of how modern browsers work.
The default config uses unencrypted HTTP.
If you deploy that, browsers will just block it.
You'll get security warnings, things won't load, it'll be completely broken.
So you have to manually find and replace it in the file?
In both Revolt.toml and .env.web, yes.
And when you do, you have to switch the protocols.
What do you mean?
Your public URL needs to be HTTPS, not HTTP.
And this is one people always forget.
The real-time communication has to use secure web sockets, so WSS, not WS.
If you miss that WSS part, your chat just won't work.
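One way to do that find-and-replace from the shell, assuming the default placeholder really is local.stote.chat as in the docs and with chat.example.com standing in for your real domain; check both files by hand afterwards.

# Swap the placeholder domain for yours and upgrade the protocols at the same time
sed -i 's|http://local.stote.chat|https://chat.example.com|g' Revolt.toml .env.web
sed -i 's|ws://local.stote.chat|wss://chat.example.com|g' Revolt.toml .env.web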
That's a huge gotcha.
Okay.
Let's talk about control.
Say I want to run a private, members-only instance.
How do I make it invite-only?
So you start by editing Revolt.toml and setting invite_only to true.
That's the easy part.
You've locked the front door.
But how do people get in?
Right.
It doesn't automatically create any invites.
You have to generate the codes yourself, manually.
And how technical is that?
It forces you to get your hands dirty.
You have to go directly into the database.
You run docker compose exec database mongosh to open the Mongo shell.
And then you run a database command to literally insert an invite code into the
invites collection.
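Roughly, and with the service name assumed to match the compose file, that looks like this:

# Open a Mongo shell inside the database container (service name assumed to be "database")
docker compose exec database mongosh
# Inside mongosh, switch to the app's database and insert a code --
# the exact database name and document shape can differ between versions:
#   use <app-database>
#   db.invites.insertOne({ _id: "my-invite-code" })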
Wow.
Okay.
So you're not just clicking a button in a web UI.
Nope.
This is a key moment for any new admin because it shows you that for real
operational control,
you're manipulating the database directly.
That's a whole other level of technical depth.
That really drives home the commitment.
Speaking of which, let's talk updates.
What's the standard process when new code is released?
It's a three-step dance, usually.
First you run git pull to get the latest config changes.
Second, this is crucial, you manually compare your config file with a new example
one to
see what's changed.
You can't just skip that.
You really can't.
Then you pull the new images with docker compose pull and finally restart with
docker compose up -d.
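As a sketch, the three-step dance from the shell; the example config filename here is an assumption, so check what the repo actually ships.

# 1. Pull the latest compose file and config templates
git pull
# 2. Compare your live config against the updated example before restarting
diff Revolt.toml Revolt.example.toml
# 3. Fetch the new images and restart in the background
docker compose pull
docker compose up -d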
But that simple process doesn't always work, which brings us to the big warnings.
All right, we're moving from the happy setup phase to the cold reality of
maintenance.
What are the big pitfalls the source material is screaming about?
The first one is about the database.
The docs explicitly warn you: do not add a ports mapping like 27017:27017 to expose your database
to the internet.
That database has everything, user data, messages, all of it.
Exposing it is a catastrophic security failure.
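Concretely, that means the database service in docker-compose.yml should not carry a public port mapping at all; here's a quick way to check that nothing on the host is listening on 27017.

# The database service should NOT have anything like:
#   ports:
#     - "27017:27017"
# Verify the host isn't exposing it (no output means nothing is listening publicly):
sudo ss -tlnp | grep 27017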
So if you see a tutorial telling you to do that, just close the tab.
Run the other way.
Absolutely.
The second and maybe even bigger hurdle is that sometimes you have to perform
manual
data migrations.
What does that mean?
It means there's a breaking change.
The source material gives a perfect example from September 30, 2024.
The data structure changed so much that you couldn't just restart.
You had to manually start a temporary container, shell into it, and run special
scripts to
convert your database to the new format.
Wait, so if I missed that announcement and just ran the normal update command, what
would
happen? My instance would just break.
It would almost certainly fail to start.
The new code wouldn't be able to read the old database.
This is what separates self-hosting from a SaaS product.
You are now the on-call engineer responsible for these high-stakes data procedures.
That's a huge commitment.
And it wasn't just data, right?
There were config file changes, too.
Correct.
Look at the November 28, 2024 update.
They just renamed some sections in the config file.
The api.vapid section became pushd.vapid.
If you didn't see that and manually change your file, your push notifications would
just
silently stop working.
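Purely as an illustration of what a rename like that means in the file; take the exact table names from the project's changelog, not from here.

# Old table in Revolt.toml:        New table after the update:
#   [api.vapid]                      [pushd.vapid]
#   private_key = "..."              private_key = "..."
# Quick check for whether your file still uses the old name:
grep -n 'vapid' Revolt.toml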
So you have to be constantly reading the change log.
You have to have an active relationship with it, or essential features will just
break
under you.
And why all this diligence?
What kinds of security flaws are actually being patched that make this so critical?
Oh, the list of advisories tells a pretty clear story.
Back in June 2024, there was a bug that allowed unrestricted account creation.
Unpatched, your server becomes a spam factory.
In December, a vulnerability could let someone crash your server with a simple
denial of
service attack.
So real availability risks, what about user data?
Any close calls there?
Absolutely.
The February 2025 advisories are pretty eye-opening.
One of them notes that webhook tokens were basically public, which could let people
mess
with your integration.
Yikes.
But even worse was another one from that month.
A bug in the message fetching could be exploited to download the entire message
history of
a channel, completely bypassing the normal limit.
Wow.
That right there proves that staying up to date isn't a suggestion.
It's mandatory if you care about user trust.
Exactly.
You are the one responsible for protecting your users from these things, and the
only
way to do that is to stay on top of these very technical, very manual updates.
Incredible.
So we've gone from a blank server through Docker, set up security, customized it,
and
now we've stared into the abyss of long-term operational commitment.
The goal was to give a clear roadmap.
Docker makes the launch easy, but it's the ongoing diligence that's the true cost
of
digital independence.
So what's the final takeaway for someone listening who is thinking about going down
this road?
I think given what we've seen, the need for these frequent manual migrations, the
serious
security advisories from data leaks to denial of service, the real question you
have to
ask yourself is this, what is the true long-term commitment in time and technical
skill required
to maintain your digital independence with a platform this dynamic?
It's not really a hobby.
It's more like a part-time job.
A crucial thought to end on.
Thank you for joining us for this deep dive.
And thanks again to Safe Server for supporting the show.
Safe Server can handle your software hosting and supports digital transformation.
We'll see you next time for the next deep dive into the sources.