Thank you guys for your patience with me!
I know I come across as a total noob to all of you making a living off advanced SEO, but I am committed to mastering this stuff!
Ok, there are two kinds of footprints that you seem to be mixing up. First, there are website footprints that you use to FIND targets to post your links on, and then there are the footprints you LEAVE as you start posting links.
I'm not going to go into detail here, but I'll get you started. Go visit a bunch of WordPress-powered blogs and look for stuff that is consistently present on all of the comment pages: a piece of text, a URL string, etc. Then figure out how to search for that specifically...
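To make that concrete, here are a few classic WordPress comment-page footprints (illustrative only - swap in whatever strings you actually find on the pages):

    "Powered by WordPress" "Leave a Reply" your keyword
    inurl:wp-comments-post.php "your keyword"
    "You may use these HTML tags and attributes" your keyword

wp-comments-post.php is the script stock WordPress comment forms submit to, and the quoted strings appear on default comment pages, so queries like these surface blogs in your niche with open comment forms.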
In regards to leaving footprints, just don't be dumb. Rotate names and IPs, clear out your cache. Basic cover-your-tracks stuff.
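The idea is just to never reuse the same name/IP/user-agent combo predictably. A minimal sketch in Python, with made-up names and proxy IPs (all placeholders, use your own lists):

    import random

    # Hypothetical placeholders - substitute your own pools.
    names = ["mike87", "sarah_k", "dave_lincoln"]
    user_agents = [
        "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_2) AppleWebKit/534.52.7",
    ]
    proxies = ["11.22.33.44:8080", "55.66.77.88:3128"]

    def pick_identity():
        # Random choice, not round-robin: a fixed rotation is itself a pattern.
        return random.choice(names), random.choice(user_agents), random.choice(proxies)

    for _ in range(3):  # pretend we're making 3 posts
        name, ua, proxy = pick_identity()
        print("would post as %s via %s using UA %s..." % (name, proxy, ua[:40]))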
Great point. Though I suppose both "FIND" and "LEAVE" footprints can be documented with the same notation?
Speaking of which, I found this interesting post with examples:
SEO Footprints » iXrumer. I understand the first few "one-liner" footprints from this list are single queries you can build from the "Advanced Search" box in Google. What I am a bit confused about is how to manually execute the multi-line footprints from that example, for instance "Example of Footprints of Pligg for Google.com". Do you run each of them separately and then do a logical AND or OR on the page or domain results? How is it realistic to execute something that complex at scale without automation tools? Can I just paste those examples into Scrapebox "as is" and get the result (understanding, of course, what steps it runs to generate it)?
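My current understanding, and please correct me if I'm wrong: each line is its own query, you run them all, then combine the results by domain - AND to find sites matching every footprint, OR to just grow the target list. Manually that's only realistic for a handful of queries; tools like Scrapebox just loop the footprint list (optionally crossed with your keywords) through the harvester and de-dupe for you. In rough Python, with a stub search() standing in for the actual scrape/API call and made-up Pligg footprints:

    from urllib.parse import urlparse

    def search(footprint):
        # Stub standing in for a real Google scrape or API call.
        sample_results = {
            '"Powered by Pligg"': ["http://site-a.com/submit", "http://site-b.com/upcoming"],
            'inurl:register.php "pligg"': ["http://site-b.com/register.php", "http://site-c.com/register.php"],
        }
        return sample_results.get(footprint, [])

    def domains(urls):
        # Collapse page-level results to unique domains.
        return {urlparse(u).netloc for u in urls}

    footprints = ['"Powered by Pligg"', 'inurl:register.php "pligg"']
    result_sets = [domains(search(fp)) for fp in footprints]

    and_targets = set.intersection(*result_sets)  # domains matching ALL footprints
    or_targets = set.union(*result_sets)          # domains matching ANY footprint

    print("AND:", and_targets)  # {'site-b.com'}
    print("OR:", or_targets)    # {'site-a.com', 'site-b.com', 'site-c.com'}

Is that roughly what happens under the hood?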
Now, I can see how n00b Scrapebox users can get themselves in trouble leaving obvious footprints when auto-posting, like some earlier posters in this thread who just blast 10,000 comments to WordPress blogs. As for your advice on not leaving footprints - did you mean link building should always be done manually, or that Scrapebox, applied properly, could still be used to auto-post (on a smaller scale) without ringing any alarms?
It is all about theory first, then practice later.
I actually buy a lot of tools and program my own as well. The way I look at it, I have been doing Internet marketing in some form since 1999 and web design/programming since 1996, so my theories and ideas have had time to solidify and be proven. At this point, I might use a tool simply to speed up my theory/idea/method, not to spit out answers I have no context or background for.
Also, what if you come across a tool that is just shit, but you don't know it because you don't really know how it came by its information to begin with?
I agree - that was my experience with many tools. Sometimes, though, playing with tools can help you learn the process.
As for Market Samurai, I find it very helpful and transparent when it comes to scraping keyword stats/suggestions, on-page competition stats, and backlink counts, but its "backlink opportunities" feature looks too limited and opaque to be useful.
Question: when manually finding "backlink opportunities", what's the trick for finding the higher-PR gems? If this is a trade secret or some shit then I guess I shouldn't expect an answer, but I've been curious about this for a while now. Obviously the tools can do it, so there's probably a manual way as well. Up to this point I've never cared all that much about PR; I just tracked down good, niche-related places to get links, which still works well, without worrying about PR.
FYI - if you use the SeoQuake plugin for Firefox or Chrome, it will show you SEO stats (PR, backlink counts, Alexa rank, etc.) right on every results page and let you sort the results by any factor, including PR.