Today's Deep-Dive: SerpBear
Ep. 203

Episode description

This episode discusses SerpBear, an open-source tool designed for tracking Google search rankings and conducting keyword research. It addresses the challenges of manually checking keyword positions and highlights the need for an efficient solution. SerpBear allows users to track unlimited keywords and domains, but the actual search capabilities depend on external scraping services that may have their own limits or costs. The tool utilizes third-party scrapers to check rankings and offers features like email notifications for position changes, an API for data integration, and keyword research through a Google Ads test account and Google Search Console integration.

Users can set up SerpBear by deploying the software, connecting it to a scraping service, and adding keywords for tracking. While the software itself can be hosted for free on certain platforms, users are responsible for costs associated with scraping services. The document emphasizes the trade-off between control and convenience, encouraging users to consider their needs regarding setup complexity and cost structure. In conclusion, SerpBear provides a flexible option for those willing to manage their own SEO tracking, with a focus on data control and customizability.

Transcript
0:00

So you've got a website, right?

0:01

And you really wanna know how it's doing on Google

0:03

for the keywords that actually matter to your audience.

0:07

Sounds simple.

0:08

Yeah, sounds simple, but then you try to figure out

0:10

where you actually land on that search results page,

0:13

the SERP, for maybe dozens, even hundreds of keywords,

0:18

and checking manually every day.

0:20

Forget it.

0:21

Oh, absolutely, it's just, it's not feasible.

0:23

Way too time consuming.

0:24

Exactly, and finding a tool that does it

0:26

without costing a fortune or overwhelming you

0:30

with a million features you'll never use, that's tough too.

0:33

It's easy to get buried in information.

0:35

It really is.

0:36

I mean, understanding your ranking is so basic

0:38

to knowing if your website's even working,

0:40

if people are finding you,

0:42

but just getting that info reliably, efficiently.

0:45

Yeah, that's often the tricky part.

0:47

And that difficulty,

0:48

that's exactly why we do the deep dive.

0:50

We take the source material that you, the listener,

0:52

share with us, and for this one,

0:54

we've got notes from a GitHub repo

0:56

and some intro docs about a particular tool.

0:58

All right, and we act as your guides, basically.

1:00

We pull out the main points,

1:01

explain how it all hangs together,

1:03

and help you get up to speed fast,

1:05

sticking really closely

1:06

to what's actually in those documents.

1:08

So our mission for this deep dive,

1:10

we're unpacking an open source tool called SerpBear.

1:13

Based only on the sources we have,

1:15

we're gonna look at what it is,

1:17

what features the docs say it has,

1:19

get a handle on how it actually works behind the scenes,

1:22

and walk through the setup steps they lay out.

1:23

We really wanna make this clear,

1:25

especially if maybe you're just starting out

1:26

with this kind of SERP tracking software,

1:29

trying to keep it beginner friendly.

1:31

Absolutely.

1:32

Think of this as your quick guide

1:34

to understanding the specific approach

1:36

to tracking your search rankings,

1:38

all based on the info provided.

1:40

Okay, let's do it.

1:40

But first, before we jump in,

1:42

a huge thank you to the supporter of this deep dive,

1:45

SafeServer.

1:46

Yes, thanks, SafeServer.

1:47

SafeServer is a fantastic partner.

1:50

They can handle the hosting for software

1:52

just like the one we're discussing, SerpBear,

1:54

and really support you in your whole digital transformation.

1:58

You can find out more over at www.safeserver.de.

2:03

That's www.safeserver.de.

2:06

Check them out.

2:10

Okay, so, SerpBear, let's unpack it.

2:14

Right at the top, the sources use two key phrases,

2:17

open source and search engine position tracking

2:20

and keyword research app.

2:22

So the main idea seems to be, well,

2:24

tracking where your keywords rank on Google

2:26

and telling you when things change.

2:28

It's the core description from the sources, yeah.

2:30

It's built to be open source,

2:31

so the code's out there for anyone to see or tweak,

2:33

and it's really focused on Google position tracking

2:36

and helping with keyword research.

2:38

Okay, so not just tracking where you are now

2:40

but also helping find new keywords.

2:41

That's interesting.

2:42

Now the docs list several features.

2:44

One jumps out immediately, unlimited keywords.

2:47

Yeah, that sounds pretty good, doesn't it?

2:48

Yeah.

2:49

Especially compared to a lot of paid tools

2:50

that have strict limits.

2:51

It really does.

2:52

The sources say word for word almost,

2:54

unlimited domains and unlimited keywords

2:56

for tracking SERP positions.

2:58

But, and this is important,

3:01

we need to connect that claim

3:02

with other bits in the documentation.

3:05

There's like a little asterisk footnote.

3:06

Ah, okay.

3:07

The fine print.

3:08

Sort of.

3:09

It comes up again in the getting started bit

3:11

and a comparison table they include.

3:12

It clarifies things.

3:14

Okay, so based on the sources,

3:15

what's the real deal with unlimited?

3:18

Well, the sources explain that while SerpBear itself,

3:21

the software, doesn't count the number of keywords

3:23

you put in, the actual number of searches

3:26

it can run each day or month.

3:28

That depends entirely on the external scraping service

3:32

you hook it up to.

3:33

Ah, okay.

3:34

So SerpBear doesn't do the searching itself.

3:37

Not directly, no.

3:38

And that asterisk footnote,

3:41

it qualifies unlimited lookups by saying something

3:43

like free up to a limit and points to the free plan

3:47

of a service like Scraping Robot as an example,

3:49

which apparently gives you 5,000 lookups a month.

3:52

Right, I see.

3:53

So the software can handle unlimited,

3:55

but the work of checking the ranks is done by another service

3:58

and that service might have limits or costs.
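
(For a quick back-of-the-envelope sense of scale, our own arithmetic rather than a figure from the docs: on that 5,000-lookups-per-month free tier with daily checks, that's roughly 5,000 / 30, so around 160 keywords you could refresh every day; switch to weekly checks and the same quota covers well over a thousand keywords.)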

4:00

That's a really key distinction.

4:02

Exactly.

4:03

It really highlights the whole model of this tool, doesn't it?

4:05

You get the software, which is flexible,

4:07

but you need to bring your own engine, so to speak,

4:10

to actually do the Google checking.

4:12

Yeah, it does.

4:13

And that ties right into how it works,

4:15

which the sources also cover.

4:16

So how does SerpBear check those rankings?

4:19

According to the docs.

4:21

It uses third-party website scrapers or proxy IPs.

4:26

The sources explain it like this.

4:28

Think of a scraper as a sort of automated mini browser.

4:31

It goes to Google, types in your keyword,

4:33

and then scans the results page to find your website.

4:37

SerpBear itself just tells one of these external services

4:39

to do that.

4:40

The docs list examples like Scraping Robot, SerpApi,

4:43

and other SERP APIs, or they mention you could

4:46

use your own set of proxies if you have them.
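
(To make the "scans the results page to find your website" idea concrete, here's a tiny conceptual sketch in TypeScript. It's our own illustration, not code from the SerpBear repo: once a scraping service hands back the list of result URLs for a keyword, working out your position is just a matter of locating your domain in that list.)

    // Conceptual sketch only (not SerpBear source): given the result URLs a scraping
    // service returned for one keyword, find a domain's 1-based position, or null if absent.
    function findPosition(resultUrls: string[], domain: string): number | null {
      const index = resultUrls.findIndex((url) => new URL(url).hostname.endsWith(domain));
      return index === -1 ? null : index + 1;
    }

    // Example: returns 2
    findPosition(["https://other.org/page", "https://your-site.com/post"], "your-site.com");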

4:48

Gotcha.

4:49

So SerpBear is like the control panel,

4:51

and it sends the orders out to one of these external scraping

4:54

services.

4:55

That makes total sense now why the number of checks

4:57

depends on that service.

4:58

Precisely.

4:58

That's the core way it gets the ranking info.

5:01

OK.

5:01

What about some of the other features mentioned?

5:03

You said notifications earlier?

5:04

Right.

5:04

Email notifications.

5:05

The sources say you can get alerts about position changes,

5:08

keywords moving up or down, and you can choose how often.

5:11

Daily, weekly, or monthly.

5:13

Useful, but how does it send those emails?

5:16

Ah, well the sources say you need to set up SMTP details.

5:20

SMTP.

5:21

For someone maybe not familiar, what's that in simple terms?

5:23

Sure.

5:24

SMTP just stands for Simple Mail Transfer Protocol.

5:27

It's basically the standard internet language computers

5:30

use to send emails to each other.

5:32

So you need to tell SerpBear how to talk to an email sending

5:35

service.

5:35

Like connecting it to Mailgun or something similar.

5:38

Exactly.

5:39

The sources actually mention Elastic Email or SendPulse

5:43

as examples.

5:44

And note that they have free options to get started.

5:46

So you plug those details into SerpBear settings.
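
(If SMTP is brand new to you, here is roughly what those details do in practice. This is a generic TypeScript illustration using the widely used nodemailer library, not SerpBear's own notification code; the host, port, and credentials are placeholders you would get from a provider such as Elastic Email or SendPulse.)

    // Generic SMTP illustration (not SerpBear internals): the same host/port/username/password
    // details you would paste into SerpBear's notification settings.
    import nodemailer from "nodemailer";

    const transporter = nodemailer.createTransport({
      host: "smtp.example-provider.com", // placeholder: your provider's SMTP server
      port: 587,
      auth: { user: "your-smtp-username", pass: "your-smtp-password" },
    });

    await transporter.sendMail({
      from: "alerts@your-site.com",
      to: "you@your-site.com",
      subject: "Keyword position changes",
      text: "Example alert body.",
    });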

5:49

OK, makes sense.

5:50

So you need an email service for the alerts.

5:52

The docs also mention an API.

5:54

They do, yeah.

5:54

A built-in API.

5:57

The documentation suggests this is useful

5:59

if you want to pull your ranking data out of SerpBear

6:01

and maybe feed it into other tools you use,

6:03

like custom dashboards or marketing reports.

6:06

Right, getting the data out for other purposes.
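
(As a rough idea of what pulling data out over the API could look like, here's a minimal TypeScript sketch. The endpoint path, query parameter, and authorization header are hypothetical placeholders for illustration only; the repo's documentation defines the actual routes and auth scheme.)

    // Hypothetical sketch: read tracked-keyword data from a self-hosted SerpBear instance.
    // The /api/keywords path, "domain" parameter, and header format are assumptions,
    // not the documented API; check the repo for the real routes.
    const res = await fetch("http://localhost:3000/api/keywords?domain=your-site.com", {
      headers: { Authorization: "Bearer YOUR_SERPBEAR_API_KEY" },
    });
    if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
    const keywords = await res.json();
    console.log(keywords);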

6:09

Cool.

6:09

What about the keyword research part?

6:11

How does that work according to the sources?

6:13

So for this, the sources say it integrates with a Google Ads

6:16

test account.

6:18

That's an interesting detail, the test part.

6:20

Yeah, why a test account, do they say?

6:22

The docs don't really elaborate on the why.

6:25

They just specify using a test account.

6:27

But through that connection, SerpBear

6:29

can apparently auto-suggest keyword ideas

6:31

based on your site's content, and it can also

6:33

show you monthly search volume data for keywords.

6:36

It helps you find what people are actually searching for.

6:39

Interesting.

6:39

Using a test ads account for volume data.

6:43

OK.

6:44

And then there's also integration with Google Search

6:46

Console, GSC.

6:47

That's Google's own thing for site owners.

6:49

Yeah, and the sources suggest this is a pretty powerful one.

6:52

By linking SerpBear to your GSC account,

6:55

you can apparently pull in actual data,

6:57

like how many clicks and impressions

6:59

your site got for specific keywords directly

7:02

from Google's own reporting.

7:03

Oh, wow.

7:04

So that goes beyond just rank, right?

7:05

That's real traffic data.

7:06

Exactly.

7:07

So let's just say it helps you see real visit counts,

7:09

find new keywords you didn't even know you were ranking for,

7:12

and spot your top performing pages and keywords by country,

7:15

all using that real GSC data.

7:17

Nice.

7:18

So you get the rank position from the scrapers, maybe

7:21

search volume estimates from the Google Ads test account,

7:24

and then the actual performance data from GSC.

7:27

That paints a much fuller picture, doesn't it?

7:29

It does, according to the sources.

7:31

It brings different data points together in one interface.

7:33

What about using this on your phone?

7:35

Any mention of that?

7:36

Yep, the sources mention a mobile app.

7:38

Specifically, they call it a PWA, a Progressive Web App.

7:42

Ah, OK.

7:43

So not a native App Store app, but one that works well

7:46

in a mobile browser.

7:47

Pretty much.

7:48

It means you can access SerpBear on your phone or tablet

7:50

through the browser.

7:51

But it's designed to feel more like using a dedicated app.

7:54

Smoother experience on mobile.

7:56

Good to know.

7:57

And let's circle back to cost for a second.

7:59

The sources make a point about zero cost to run.

8:02

Yes, that's highlighted.

8:04

The documentation specifically says

8:06

you can run SerpBear on platforms like mogenius.com

8:09

or Fly.io for free.

8:11

So that's about the hosting cost for the SerpBear software

8:14

itself, right?

8:15

Not the scraping costs we talked about.

8:17

Exactly.

8:17

It refers to the cost of actually running

8:19

the application, the infrastructure for SerpBear.

8:22

On those specific platforms, the sources

8:24

say that cost can be zero.

8:26

It just reinforces that open source self-hosted model.

8:29

You deploy it, you run it, and here are some potentially

8:32

free places to do that.

8:33

OK.

8:34

So zero cost to host the app itself on certain platforms.

8:39

Potentially free, up to a point.

8:41

Or paid costs for the scraping service.

8:44

It's definitely a different kind of cost structure

8:46

than just paying one monthly fee for everything.

8:49

Precisely.

8:49

It's a key difference in the approach described

8:51

in the source material.

8:53

You trade off convenience for control and potentially lower

8:55

direct software costs.

8:57

And one last feature mentioned, exporting.

8:59

Oh, yeah.

9:00

Simple but important.

9:01

Export CSV lets you download your keyword data

9:03

into a spreadsheet file, which is always

9:05

handy for doing your own analysis or making reports.
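
(For a flavor of what doing your own analysis on that export might look like, here's a small TypeScript sketch that counts keywords ranking in the top 10. The "position" column name is a guess for illustration; check the headers of your actual CSV, and use a real CSV parser if your data contains quoted commas.)

    // Sketch: count exported keywords currently ranking in the top 10.
    // Assumes a simple comma-separated file with a "position" column; adjust to your export.
    import { readFileSync } from "node:fs";

    const [header, ...rows] = readFileSync("serpbear-export.csv", "utf8").trim().split("\n");
    const posIdx = header.split(",").indexOf("position"); // assumed column name
    const topTen = rows.filter((row) => Number(row.split(",")[posIdx]) <= 10).length;
    console.log(`${topTen} of ${rows.length} keywords rank in the top 10`);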

9:07

OK, great.

9:08

So that seems to cover the main features the sources lay out.

9:11

The tracking via external scrapers, the notifications,

9:15

the API, keyword research using that Google Ads Test account

9:19

link, the really valuable GSC integration for real data,

9:23

mobile access via PWA, potentially free hosting,

9:27

and data export.

9:29

Seems like a good summary based on the docs.

9:31

Now, let's say someone listening is thinking,

9:33

OK, this sounds kind of interesting,

9:35

maybe worth trying.

9:36

How do you actually get started?

9:38

The sources give a step-by-step path, right?

9:40

They do.

9:40

They lay out a sequence.

9:41

First step, deploy and run the app.

9:44

Right.

9:44

This isn't just a website you sign up for.

9:45

You actually have to get the software code

9:47

and run it somewhere, self-hosted.

9:49

Exactly.

9:50

The sources mention using Docker,

9:52

which is a popular way to package and run software,

9:55

or maybe running it without Docker,

9:56

depending on your setup.

9:57

But yeah, step one is getting it running.
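
(For a feel of what "deploy and run" typically involves with Docker, here's a minimal Docker Compose sketch. The image name and environment variables are written to match what the SerpBear repository publishes, but treat them as assumptions and copy the exact values from the repo's own compose example rather than from this transcript.)

    # Minimal Docker Compose sketch for self-hosting SerpBear (verify names against the repo).
    services:
      serpbear:
        image: towfiqi/serpbear          # assumed published image name
        restart: unless-stopped
        ports:
          - "3000:3000"
        environment:
          - USER=admin                   # web UI login user
          - PASSWORD=change-me           # web UI login password
          - SECRET=replace-with-a-long-random-string
          - APIKEY=replace-with-a-random-api-key
          - NEXT_PUBLIC_APP_URL=http://localhost:3000
        volumes:
          - serpbear_data:/app/data      # persists the SQLite database
    volumes:
      serpbear_data: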

9:59

OK, you get it running.

10:00

Then what?

10:00

Step two, access the app and log in.

10:03

Once it's running, you go to its web address and log in.

10:06

Step three, add your first domain.

10:08

Tell it which website you actually want to start tracking.

10:11

Pretty straightforward so far.

10:12

Add your site.

10:13

What's next?

10:14

Step four is key, and it loops back to how it works.

10:18

You need to get a free API key from a provider

10:21

like Scraping Robot, or choose a paid one.

10:23

This is getting that engine we talked

10:25

about, the scraping service.

10:26

Exactly.

10:27

The sources say you can skip this

10:28

if you're using your own proxies.

10:30

But for most people, it sounds like grabbing an API

10:32

key from one of these services is the way to go.

10:34

And then you have to tell SerpBear to use that key.

10:37

Correct.

10:37

Step five, set up the scraping API or proxy details

10:41

inside SerpBear's settings.

10:44

This is where you connect your running SerpBear app

10:46

to that external service so it can actually

10:48

start checking the rankings.

10:49

OK.

10:49

Connect the software to the data source.

10:51

Makes sense.

10:52

Then you add your keywords.

10:53

Yep.

10:53

Step six, add your keywords.

10:55

The specific search terms you want

10:57

SerpBear to monitor for the domain you added earlier.

11:00

And there were some optional steps, too.

11:02

Right.

11:02

Step seven is optional.

11:04

Set up those SMTP details in the settings

11:06

if you want the email notifications.

11:08

Like we said, connect it to an email sending service,

11:10

maybe one of the free ones the sources mentioned.

11:12

Got it.

11:13

For the alerts.

11:14

And step eight, also optional, integrate the Google Ads

11:18

test account and Google Search Console.

11:21

That's if you want the keyword research data

11:23

and those deeper insights about actual clicks

11:26

and impressions from GSC.

11:27

OK.

11:27

So the basic setup is get the app running, add your site,

11:31

connect it to a scraper, add your keywords.

11:33

The emails and the deeper Google integrations

11:36

are extras you can configure after that core setup.

11:40

That seems to be the flow outlined in the sources.

11:42

Yeah.

11:42

It really walks you through that self-hosted process

11:45

and highlights the need for that external scraping component.

11:48

It definitely shows you're more hands-on with this kind of tool.

11:51

You have control, but also the responsibility

11:53

for setting up these different pieces.

11:56

And finally, just a quick note.

11:57

The sources briefly mentioned the tech stack.

11:59

Oh, yeah.

12:00

Anything interesting?

12:01

It's built with Next.js, which is a popular web framework.

12:04

And it uses SQLite for the database.

12:06

Just a little technical detail included.

12:08

OK, good to know.

12:09

So we've kind of taken SerpBear apart now

12:12

based purely on the GitHub notes and docs you shared.

12:15

If we boil it all down, what's the main takeaway

12:18

from this deep dive?

12:19

Well, based strictly on these sources,

12:21

SerpBear looks like a potentially powerful open

12:23

source option if you want to host your own Google ranking

12:26

tracker.

12:27

It seems to offer the core features you'd

12:29

need: tracking, alerts, unlimited keyword capacity, anyway.

12:32

Right, capacity being the keyword there.

12:35

Exactly, plus those useful integrations

12:37

with GSC and Google Ads for deeper data.

12:40

But the big thing is the model.

12:42

You deploy it, you run it, and crucially, you

12:45

need to connect it to and potentially pay

12:47

for a separate scraping service to actually do

12:49

the rank checking, even if the SerpBear software itself,

12:52

and maybe it's hosting on certain platforms, can be free.

12:55

So it puts you in the driver's seat for your data,

12:57

maybe offers a different cost model than typical subscriptions,

13:00

but it definitely requires more setup.

13:02

It's a trade-off, isn't it?

13:04

It really is.

13:05

And that brings up a good thought for you, the listener,

13:07

to ponder.

13:08

Given this model: open source software, self-hosted,

13:12

relying on external scrapers with their own free or paid

13:15

tiers, how does that really stack up

13:17

against the more traditional all-in-one paid SEO software

13:21

where they handle everything behind one subscription fee?

13:24

Yeah, what are the real trade-offs for you?

13:26

Think about control versus convenience,

13:29

cost structure versus simplicity,

13:31

and maybe the technical skills needed versus ease of use.

13:34

It's a different way to get that crucial SERP data.

13:37

Definitely something to consider based

13:38

on your own needs and resources.

13:40

And if you do want to dig into the nitty gritty

13:42

of deploying SerpBear or compare those scraping services

13:45

the sources mentioned, then checking out the actual source

13:48

documentation in the GitHub repository

13:50

would be your next step.

13:50

Absolutely.

13:51

The sources are always the place to go for the full details.

13:54

And one last time, a big thank you

13:55

to SafeServer for supporting this deep dive.

13:58

Remember, they can help with hosting software like SerpBear

14:00

and generally support your digital transformation efforts.

14:04

Find out more at www.safeserver.de.

14:07

That's www.safeserver.de.

14:14

Thanks again to SafeServer, and thank you

14:15

for joining us on this deep dive into SerpBear.

14:18

We hope unpacking these sources helped clarify

14:20

what this tool is all about.

14:21

We'll catch you on the next one.
