How To Use SER To Get Public Proxies For Scrapebox
ron
SERLists.com
Hey guys,
Somebody just asked me about this, so I thought I would spell out what I do.
The issue is that SB's harvesting of public proxies sucks - that guy could learn a thing or two from @sven. But when I need to check indexing or PR in Scrapebox, this is what I do to get a bunch of public proxies FAST:
First, here's where it all happens:
- Shut down SER
- Create a duplicate project, and stick in a different URL just in case it tries to post
- Set all real projects to inactive; the dummy project is the only active one
- Check all the boxes above - make sure the path where the proxies get written is somewhere convenient, like the desktop
- Save and start the dummy project
- After about 3-10 minutes, the text file appears with about 150 proxies - hit Scrapebox quickly after that, because public proxies burn out lightning fast, especially as you use them in Scrapebox
- You will get about 2-4 iterations of checking done in SB, but as you know, proxies burn out very fast, hence the multiple iterations
- You reach a point where the public proxies are no longer good and you only have a small number of index or PR checks remaining
- When I get the unchecked list in SB down to about 30, I substitute my real proxies (or my IP) to finish those final 30 or whatever
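The steps above hand you a raw text file, and it pays to clean it before loading it into Scrapebox. Here is a minimal sketch of that cleanup - this is not part of SER or Scrapebox, and the `ip:port` one-per-line format is an assumption about what the export looks like:

```python
import re

# Hypothetical path - point this at wherever you told SER to write the export.
PROXY_FILE = "proxies.txt"

# Assumes each line is a bare "ip:port" entry.
PROXY_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}):(\d{1,5})$")

def clean_proxies(lines):
    """Keep well-formed ip:port entries, drop duplicates, preserve order."""
    seen = set()
    out = []
    for line in lines:
        entry = line.strip()
        m = PROXY_RE.match(entry)
        if not m or entry in seen:
            continue
        if not (0 < int(m.group(2)) < 65536):  # sane port range only
            continue
        seen.add(entry)
        out.append(entry)
    return out

if __name__ == "__main__":
    with open(PROXY_FILE) as f:
        for proxy in clean_proxies(f):
            print(proxy)
```

Deduping matters here because burned-out public proxy lists tend to repeat entries, and Scrapebox wastes an iteration on every duplicate it retests.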
I have run into a situation where the proxy list didn't publish to my desktop. I went into the tab above, started clicking options off and back on, and that triggered the publishing of the text file.
So this is how I do big indexing checks and PR checks. Everybody talks about this and that source for public proxies. You don't need any of it. This is way faster than running SB to find proxies - a complete waste of time by comparison: 5 minutes versus 1 hour. Plus you get way better public proxies than SB finds.
So +1 to SER and @sven on this. It kicks ass.
Ron
Comments
I'll have to try this out next time I've got some hardcore index checking action going on!
Cheers.