Personally, I'm waiting on Dchuk to write a Scrapebox ebook.
That should be his niche case study site haha
Personally, I'm waiting on Dchuk to write a Scrapebox ebook.
Why would anybody want to write an ebook about how big a massive fail Scrapebox is?
One gripe I have about this sweet ass tool is that when you switch to slow poster and then minimize it, the taskbar tab lights up as if it needs your attention or it's done or something. I find myself repeatedly bringing it back up and going "oh yeah".
Other than that I love it. And by love it I mean I love barely using it and watching all of my sites vanish from the SERPs. Even my bookmarks disappear.
"SCRAPEBOX - The Truth will SHOCK You!
How Scrapebox ruined my life and other confessions of back alley crackwhore abortions."
Here's a kingofsp/turbo collaborative tip (although he doesn't know it yet):
Read kingofsp's post Extremely Effective White Hat Link Building Method | Off-White Hat
Then go to scrapebox
1. Paste your relevant URL into SB.
2. Run the Link Extractor addon (external links only).
3. Export those links back into SB.
4. Run the Alive Check addon.
and VOILA! There's your links to send to site owners. All automated and shit.
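For anyone who wants to see what those four steps actually do under the hood, here's a rough stdlib-only Python sketch of the same workflow. The function names are mine, not Scrapebox internals, and this is an illustration of the idea, not a replacement for the addons:

```python
# Sketch of the "link extractor + alive check" workflow from the steps above.
# Pure standard library; names are illustrative, not Scrapebox internals.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import Request, urlopen

class ExternalLinkExtractor(HTMLParser):
    """Collects href values pointing to a different domain (step 2: external links only)."""
    def __init__(self, base_domain):
        super().__init__()
        self.base_domain = base_domain
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        domain = urlparse(href).netloc
        # Keep only links whose domain differs from the page we scraped
        if domain and domain != self.base_domain:
            self.links.append(href)

def extract_external_links(page_url, html):
    """Steps 1-3: pull the external links out of a page's HTML."""
    parser = ExternalLinkExtractor(urlparse(page_url).netloc)
    parser.feed(html)
    return parser.links

def is_alive(url, timeout=10):
    """Step 4: a basic alive check -- does the URL answer with an HTTP status < 400?"""
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "Mozilla/5.0"})
        return urlopen(req, timeout=timeout).status < 400
    except Exception:
        return False
```

Feed `extract_external_links` the HTML of your relevant page, filter the result through `is_alive`, and what's left is your outreach list.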
too bad scrapebox sucks a fattie or else this would be a good tip
I still don't understand how to use the multi-line ones. If we take the "Pligg" one as the example, each line is its own query that will produce an entirely different results page, so how am I supposed to combine them? You mention "chaining", but how exactly is that supposed to work in practice? Also, looking at the "Guestbook" footprint from the same example, it seems to me each line is missing an operator like "inurl:". Does it seem buggy to you too?
After download, Google spidered my home IP address, and I forgot to put no-index meta tags on FileZilla. This led to Google scanning my logs and finding all of my sites. They preemptively banned every site I had.
Glad to hear from someone who understands this whole thing (it has been "whoosh" for most). Many people don't even know how to set no-index meta tags on FileZilla.
Count me amongst those that don't even know about this. Can either of you explain this topic in a little more detail?
What you may not realize is that XRumer comes with its own scraper, Hrefer. It can take a huge list of footprints, such as the one in that blog post, scrape the results for each query, and then compile all the results into a text file.
As far as it being "buggy": he's just giving you the footprint. Using inurl: or not will give you different results, and therefore a broader population to work with.
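To make the "chaining" part concrete: as I understand it, each footprint line is a query template, and the tool pairs every footprint with every keyword in your list, scraping each combination and merging the results. A minimal Python sketch (the footprints and keywords here are my own made-up examples, not from the blog post):

```python
from itertools import product

# Illustrative footprints and keywords -- substitute your own lists
footprints = [
    '"powered by pligg"',
    'inurl:pligg "submit a new story"',
]
keywords = ["gardening", "crossfit"]

# "Chaining" = every footprint crossed with every keyword,
# each combination becoming one search query to scrape.
queries = [f"{fp} {kw}" for fp, kw in product(footprints, keywords)]
for q in queries:
    print(q)
```

Two footprints times two keywords gives four queries; the scraper runs each one and compiles every results page into a single harvested list.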
Any tips?
Increase your upload speed in the Filezilla settings.
Filezilla, not Firezilla. You need to open up more ports and increase your upload settings if you want to run hrefer at full throttle.
You can find more info about it on the Filezilla forums.