Let's Discuss Scrapebox

Status
Not open for further replies.


Why would anybody want to write an ebook about how big a massive fail Scrapebox is?
 
One gripe I have about this sweet ass tool is that when you switch to slow poster and then minimize, the tab lights up as if it needs your attention or it's done or something. I find myself repeatedly bringing it back up and going "oh yeah".

Other than that I love it. And by love it I mean I love barely using it and watching all of my sites vanish from the SERPs. Even my bookmarks disappear.


Here's a kingofsp/turbo collaborative tip (although he doesn't know it yet)

Read kingofsp's post Extremely Effective White Hat Link Building Method | Off-White Hat

Then go to scrapebox

1. Paste your relevant url into SB.
2. Run Link extractor Addon (external links only).
3. Export those links back into SB.
4. Run Alive Check addon.

and VOILA! There's your links to send to site owners. All automated and shit.
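If you're wondering what those four steps actually boil down to, here's a rough plain-Python sketch of the same idea. To be clear, this is not Scrapebox's code, just an illustration; the user-agent string, the timeout, and the "alive = HTTP status under 400" rule are my own assumptions:

```python
# Rough sketch of the Scrapebox steps: extract a page's external links
# (steps 1-3), then check which of them are still alive (step 4).
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen, Request

class LinkExtractor(HTMLParser):
    """Collect href values from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def external_links(page_url):
    # Steps 1-3: fetch the page, keep only links pointing off-site.
    req = Request(page_url, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req).read().decode("utf-8", "replace")
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(page_url).netloc
    return sorted({urljoin(page_url, link) for link in parser.links
                   if urlparse(urljoin(page_url, link)).netloc not in ("", host)})

def alive(url):
    # Step 4: treat any HTTP response below 400 as "alive".
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "Mozilla/5.0"})
        return urlopen(req, timeout=10).status < 400
    except Exception:
        return False
```

The surviving list from `alive()` is the set of dead-link opportunities you'd send to site owners.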
 

too bad scrapebox sucks a fattie or else this would be a good tip
 
Still don't understand how to use the multi-line ones. If we use the "Pligg" one as the example, each line is its own query that will produce an entirely different results page?! How am I supposed to combine them? You mention "chaining", but how exactly is it supposed to work in practice? Also, looking at the "Guestbook" footprint from the same example, it seems to me each line is missing an operator like "inurl:"... Does it seem buggy to you too?

What you may not realize is that xRumer comes with its own scraper, Hrefer. It can take a huge list of footprints like the ones in that blog post, scrape the results from each query, and then compile them all into one text doc.

As far as it being "buggy": he's just giving you the footprint. Using inurl: or leaving it off will give you different results, and therefore a broader population to work with.
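To make the multi-line/chaining point concrete, here's a hypothetical sketch of what a scraper does with a footprint file: each footprint line becomes its own query (paired with each keyword), and the result URLs from all the queries are merged into one deduped pool. The footprints and keywords below are made-up examples, not taken from the blog post:

```python
# Made-up example footprints; an operator like inurl: can simply be
# prepended on a per-line basis, which is why some lines "lack" it.
footprints = [
    '"Powered by Pligg"',
    'inurl:story.php',
]
keywords = ["gardening", "homebrew"]

def build_queries(footprints, keywords):
    # Every footprint line x every keyword = one query each.
    return [f"{fp} {kw}" for fp in footprints for kw in keywords]

def merge(results_per_query):
    # "Chaining": the scraped URLs from all queries end up in one
    # combined, order-preserving, deduplicated pool.
    pool, seen = [], set()
    for urls in results_per_query:
        for url in urls:
            if url not in seen:
                seen.add(url)
                pool.append(url)
    return pool

queries = build_queries(footprints, keywords)  # 4 separate searches
```

Each query is sent to the search engine separately; no single query combines the lines.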
 
After download, Google spidered my home IP address and I forgot to put no-index meta tags on FileZilla. This led to Google scanning logs & finding all of my sites. They preemptively banned every site I had.

Glad to hear from someone who understands this whole thing (it has been "whoosh" for most). Many people don't even know how to set no-index meta tags on FileZilla.

Count me amongst those that don't even know about this. Can either of you explain this topic in a little more detail?
 
Sounds like you know nothing about the massive footprint Filezilla leaves all over your sites/hosting accounts then? lol

you're pretty much fucked unless you format your internet and download the whole thing again. and re-upload your sites onto a clean install, but with the no-index meta tags set this time.

probably best not to use ftp at all in future, to be safe. change the number plates on your computer too and paint it a different color if you can afford it. scratch off the serial numbers etc you know the drill. the usual nanobot tracing precautions.
 

It's interesting that you brought up Hrefer. It seems you can do more advanced scraping with Hrefer than with Scrapebox, but I can't get Hrefer to come anywhere near the speed Scrapebox scrapes at.

Any tips?
 
Dchuk + Guerilla,

Firezilla??? Waaaaah?

I thought Firezilla was an ftp program, not something to do with scraping software?

[insert newb smiley]
 
Filezilla, not Firezilla. You need to open up more ports and increase your upload settings if you want to run hrefer at full throttle.

You can find more info about it on the Filezilla forums.
 

Works for me
 
When picking some BE posts with a keyword list (no custom footprints), I'm not getting a lot of relevant results for those keywords. I do get thousands of results, but not on topic. Any idea why this is happening?
 
From this moment forth this thread shall be about bunnies.

[bunny image]
 