Let's Discuss Scrapebox

> I'm just about ready to give you $57 so you stop asking the same question over and over again.

Ha ha, it's not just the money, though, but the time to invest in learning a tool and correctly integrating it into your process. Really, the only reason I asked was to get to that latter part.

Nothing beats hands-on experience... if buying Scrapebox gives you a reference point, then go for it. At $57 it's nearly a throwaway investment.

But if Market Samurai is making you happy (and profitable), then keep using it.

I agree. As for Market Samurai, I'm feeling ambivalent about it now. It helped me start thinking about the process for targeted SEO, but I can certainly see its limitations.

Thanks to everyone for sharing your insights!
 


> Ha ha, it's not just the money, though, but the time to invest in learning a tool and correctly integrating it into your process. Really, the only reason I asked was to get to that latter part.

> I agree. As for Market Samurai, I'm feeling ambivalent about it now. It helped me start thinking about the process for targeted SEO, but I can certainly see its limitations.

All tools are intentionally limited in scope in order to complete their designed task. This whole discussion boils down to learning the theory before you abstract the technique. That's it.
 
> All tools are intentionally limited in scope in order to complete their designed task. This whole discussion boils down to learning the theory before you abstract the technique. That's it.

I just re-read the whole thread and I think I found what I was missing...

Here are my thoughts on Scrapebox:

If you had to look up what a proxy was when you first fired it up, you shouldn't use Scrapebox
If you had to look up what a footprint was when you first fired it up, you shouldn't use Scrapebox
If you heard of Akismet for the first time when you first fired it up, you shouldn't use Scrapebox

Everything that tool does should be done manually before doing it in an automated way. That goes for all tools, actually.

^^ This

I must confess to not knowing enough about footprints. My experience with SEO so far has been running a fully whitehat blog network, with all backlinks natural: no overt backlink manipulation, and I've never been hit with the kind of penalties people talk about here. Now I'm looking to take my game to the next level, obviously without putting my aged domains at risk...

What do you recommend for reading up on footprints and how they're used? I started some Googling, but the info is all over the place. What do you think might be a typical "Scrapebox footprint" likely to trigger penalties? Is there still a way to use the auto-posting functionality without generating an obvious footprint?

Thanks!
 
It is all about theory first, then practice later.

I actually buy a lot of tools and also program my own. I look at it this way: I have been doing Internet marketing in some form since 1999 and web design/programming since 1996, so my theories and ideas have had time to solidify and be proven. At this point, I might use a tool simply to speed up the process behind my theory/idea/method, not to spit out answers I have no background on.

Also, what if you come across a tool that is just shit, but you don't know it because you don't really know how it came across its information to begin with?
 
> What do you recommend for reading up on footprints and how they're used? I started some Googling, but the info is all over the place. What do you think might be a typical "Scrapebox footprint" likely to trigger penalties? Is there still a way to use the auto-posting functionality without generating an obvious footprint?

OK, there are two kinds of footprints that you seem to be mixing up. First, there are website footprints that you use to FIND targets to post your links on, and then there are the footprints you LEAVE as you start posting links.

I'm not going to go into detail here, but I'll get you started. Go visit a bunch of WordPress-powered blogs and look for stuff that is consistently present on all of the comment pages: a piece of text, a URL string, etc. Then figure out how to search for that specifically...
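To make that concrete, here's a minimal sketch of turning a footprint into a search you can actually run. The footprint string is the classic WordPress example and is purely illustrative, and the keyword is a hypothetical niche term, so swap in whatever you actually find:

```python
from urllib.parse import quote_plus

# Illustrative footprint: text that shows up on most WordPress comment pages.
footprint = '"Powered by WordPress" "Leave a Reply"'
keyword = "gardening"  # hypothetical niche keyword to narrow the results

query = f"{footprint} {keyword}"
print("https://www.google.com/search?q=" + quote_plus(query))
```

Paste the printed URL into a browser; every hit is a page matching that footprint.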

In regards to leaving footprints, just don't be dumb. Rotate names and IPs, clear your cache out. Basic cover-your-tracks stuff.
 
Question: When manually finding "backlink opportunities", what's the trick for finding the higher-PR gems? If this is a trade secret or some shit then I guess I shouldn't expect an answer, but I've been curious about this for a while now. Obviously the tools can do it, so there's probably a manual way as well. Up to this point I've never cared all that much about PR; I just tracked down good, niche-related places to get links, which still works well, without worrying about PR.
 
Scrape a list, use Scrapebox to check PR, check Alexa, etc.

Export to CSV, pull it into a spreadsheet program, and start sorting the data. Keep only the high-PR/high-Alexa stuff.
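If the spreadsheet step is new to you, here's a rough sketch of the same sort in Python. The column names ("URL", "PR", "Alexa") and filename are assumptions about what the export contains, so match them to your actual CSV header:

```python
import csv

def pagerank(row):
    """Parse the PR column, treating blanks or non-numeric values as 0."""
    try:
        return int(row["PR"])
    except (KeyError, ValueError):
        return 0

# Assumed export filename and columns: "URL", "PR", "Alexa".
with open("scrapebox_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Sort by PR descending and keep only PR >= 4 (an arbitrary cutoff).
keepers = sorted((r for r in rows if pagerank(r) >= 4),
                 key=pagerank, reverse=True)

with open("high_pr.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(keepers)
```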
 
The only thing Scrapebox will not tell you is if a site is do-follow or no-follow.
 
Thank you guys for your patience with me!

I know I must come across as a total n00b to all of you making a living off advanced SEO, but I am committed to mastering this stuff!

> OK, there are two kinds of footprints that you seem to be mixing up. First, there are website footprints that you use to FIND targets to post your links on, and then there are the footprints you LEAVE as you start posting links.

> I'm not going to go into detail here, but I'll get you started. Go visit a bunch of WordPress-powered blogs and look for stuff that is consistently present on all of the comment pages: a piece of text, a URL string, etc. Then figure out how to search for that specifically...

> In regards to leaving footprints, just don't be dumb. Rotate names and IPs, clear your cache out. Basic cover-your-tracks stuff.

Great point. Though I suppose both "FIND" and "LEAVE" footprints can be documented with the same notation?

Speaking of which, I found this interesting post with examples: SEO Footprints » iXrumer. I understand the first few "one-liner" footprints from that list are single queries you can build from the "Advanced Search" box in Google. What I'm a bit confused about is how to manually execute the multi-line footprints from that example, for instance "Example of Footprints of Pligg for Google.com". Do you run each of them separately and then do a logical AND or OR on the page or domain results? How is it realistic to execute something that complex at scale without automation tools? Can I just paste those examples into Scrapebox "as is" and just get the result (of course, I'd want to understand what steps it runs to generate it)?

Now, I can see how n00b Scrapebox users can get themselves in trouble leaving obvious footprints when auto-posting if they just blast 10,000 comments to WordPress blogs - like some earlier posters in this thread. As for your advice on not leaving footprints - did you mean that it should always be done manually, or, applied properly, could Scrapebox still be used to auto-post (on a smaller scale) without ringing the alarms?

> It is all about theory first, then practice later.

> I actually buy a lot of tools and also program my own. I look at it this way: I have been doing Internet marketing in some form since 1999 and web design/programming since 1996, so my theories and ideas have had time to solidify and be proven. At this point, I might use a tool simply to speed up the process behind my theory/idea/method, not to spit out answers I have no background on.

> Also, what if you come across a tool that is just shit, but you don't know it because you don't really know how it came across its information to begin with?

I agree. That was my experience with many tools. Sometimes, though, playing with tools can help you learn the process.

As far as Market Samurai goes, I find it very helpful and transparent when it comes to scraping keyword stats/suggestions, competitors' on-page stats, and backlink counts, but it looks like its "backlink opportunities" feature is too limited and obfuscated to be useful.

> Question: When manually finding "backlink opportunities", what's the trick for finding the higher-PR gems? If this is a trade secret or some shit then I guess I shouldn't expect an answer, but I've been curious about this for a while now. Obviously the tools can do it, so there's probably a manual way as well. Up to this point I've never cared all that much about PR; I just tracked down good, niche-related places to get links, which still works well, without worrying about PR.

FYI - if you use the SEOquake plugin for Firefox or Chrome, it will show you SEO stats (PR, backlink counts, Alexa rank, etc.) on every results page and will let you sort the results by any factor, including PR, right there.
 
> The only thing Scrapebox will not tell you is if a site is do-follow or no-follow.

Besides manually doing Ctrl+U, Ctrl+F for "nofollow" on every single blog, is there a fast way to eliminate nofollow blogs from a scrape?
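For what it's worth, that check is easy to script. Here's a crude sketch (assumes you have requests and beautifulsoup4 installed, and a file of scraped URLs; note it flags a page as dofollow if *any* link lacks rel="nofollow", so it will produce false positives on blogs where only the comment links are nofollowed):

```python
import requests
from bs4 import BeautifulSoup

def has_dofollow_link(url):
    """Crude check: does the page contain any link without rel="nofollow"?"""
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        return False
    soup = BeautifulSoup(html, "html.parser")
    return any("nofollow" not in (a.get("rel") or [])
               for a in soup.find_all("a", href=True))

# Assumed input: one URL per line, e.g. a Scrapebox export.
with open("scraped_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    if has_dofollow_link(url):
        print(url)
```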
 
> if you use the SEOquake plugin for Firefox or Chrome, it will show you SEO stats
> (PR, backlink counts, Alexa rank, etc.) on every results page and will let you
> sort the results by any factor, including PR, right there

I read on the plugin page that you should only use SEOquake on one page at a time.
People were suggesting that if you kept using it on search results pages, Google would do bad things to you.
 
> Speaking of which, I found this interesting post with examples: SEO Footprints » iXrumer. I understand the first few "one-liner" footprints from that list are single queries you can build from the "Advanced Search" box in Google. What I'm a bit confused about is how to manually execute the multi-line footprints from that example, for instance "Example of Footprints of Pligg for Google.com". Do you run each of them separately and then do a logical AND or OR on the page or domain results? How is it realistic to execute something that complex at scale without automation tools? Can I just paste those examples into Scrapebox "as is" and just get the result (of course, I'd want to understand what steps it runs to generate it)?

Did you try those footprints in Google yet? You can just paste them right in there and see what comes back; nothing secret going on, just chaining of functions...

There are some fundamentals underneath all of these techniques that really, really need to be understood before you load up 100k URLs and click spam.

I really do, on a very serious note, recommend that everyone only use Scrapebox for research purposes (i.e., don't fucking post with it) until they understand what's really going on.
 
> Did you try those footprints in Google yet? You can just paste them right in there and see what comes back; nothing secret going on, just chaining of functions...

> There are some fundamentals underneath all of these techniques that really, really need to be understood before you load up 100k URLs and click spam.

> I really do, on a very serious note, recommend that everyone only use Scrapebox for research purposes (i.e., don't fucking post with it) until they understand what's really going on.

I tried the one-liners, just pasting them into the Google address bar - I definitely get the idea.

I still don't understand how to use the multi-line ones, though. If we use the "Pligg" one as the example, each line is its own query that will produce an entirely different results page?! How am I supposed to combine them? You mention "chaining", but how exactly is that supposed to work in practice? Also, looking at the "Guestbook" footprint from the same example, it seems to me each line is missing an operator like "inurl:"... Does it seem buggy to you too?

I definitely get the point about taking every precaution before even thinking of auto-posting. It's nice to have the feature, and it could be used judiciously, but thoughtless spamming is definitely a no-no. The first step is to think about the footprint of your blast: every observable factor, especially the speed of link acquisition, but also your posting IPs, your domains, and the footprint of the targeted sites.

Seems like there is no way to avoid leaving some sort of footprint with ANY auto-poster (e.g. if all you do is use Scrapebox for blog commenting). It's just a matter of how easy it is to detect and penalize, and whether it may (or may not) blend in with the rest of your link profile... For example, posting through any "pre-packaged" Scrapebox proxy IPs is probably shooting yourself in the head...
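On the speed-of-acquisition point, the bare minimum for any automated run is randomizing timing and identities instead of blasting at machine speed. A toy sketch of the idea only; post_comment is a hypothetical stand-in for whatever posting step you actually use, and the proxies are placeholder addresses:

```python
import random
import time

names = ["Anna", "Mark", "Jenny"]                    # rotate commenter names
proxies = ["203.0.113.5:8080", "198.51.100.7:3128"]  # placeholder proxy IPs

def post_comment(url, name, proxy):
    # Hypothetical stand-in for the actual posting step.
    print(f"posting to {url} as {name} via {proxy}")

with open("targets.txt") as f:  # assumed file: one target URL per line
    for line in f:
        post_comment(line.strip(), random.choice(names), random.choice(proxies))
        # Wait minutes to hours between posts, not seconds.
        time.sleep(random.uniform(600, 7200))
```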
 
> I still don't understand how to use the multi-line ones, though. If we use the "Pligg" one as the example, each line is its own query that will produce an entirely different results page?! How am I supposed to combine them? You mention "chaining", but how exactly is that supposed to work in practice? Also, looking at the "Guestbook" footprint from the same example, it seems to me each line is missing an operator like "inurl:"... Does it seem buggy to you too?

Those are all just different ways to find Pligg sites. They aren't supposed to be used together; each one will return a different subset of Pligg sites that share that same footprint...
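In other words, each line is an independent query, and your target list is just the union of whatever each one returns. A sketch of that idea; the footprint lines are illustrative and scrape_serp is a hypothetical placeholder for whatever actually fetches the results:

```python
footprints = [
    '"Powered by Pligg"',                # illustrative lines only;
    'inurl:pligg "Submit a New Story"',  # use the list from the actual post
]

def scrape_serp(query):
    """Hypothetical placeholder: return the result URLs for one query."""
    return []  # plug in your scraper here

targets = set()  # a set, so URLs found by several footprints collapse to one
for fp in footprints:
    targets.update(scrape_serp(fp))

print(f"{len(targets)} unique targets")
```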
 
Neat to see a few thousand new backlinks in Yahoo Site Explorer overnight, but probably all of them are nofollow, so is it useless, or...?
 
> Those are all just different ways to find Pligg sites. They aren't supposed to be used together; each one will return a different subset of Pligg sites that share that same footprint...

OK, makes sense. It seems, though, that some of these expressions might produce quite a few false positives.

But I get that the general idea is to take a UNION of all the result sets returned by each alternative line in a footprint.

Time to get the tool and run some tests!
 
One gripe I have about this sweet-ass tool is that when you switch to the slow poster and then minimize it, the tab lights up as if it needs your attention or is done or something. I find myself repeatedly bringing it back up and going "oh yeah".

Other than that I love it. And by love it I mean I love barely using it and watching all of my sites vanish from the SERPs. Even my bookmarks disappear.
 