URLs Checked, Then Posting Fails / Says No Contact Form
JudderMan
UK
I have scraped huge lists with Scrapebox (as I was just burning proxies using the built-in scraper), imported a fairly large one (600k), let the tool run to check them, and then started posting. I am getting thousands of failed posts due to "no contact form". How can this be if the URLs have already been checked?
Also, yesterday I had an insanely good success rate (a 160k list scraped with GSA WC, 120k posted successfully) with only 5 dedicated proxies (German). Then last night it tailed off to 1k posted from 120k (not the same 120k) and kept burning proxies (50 dedicated USA and 15 German), all from the same great proxy provider. Even after lowering the threads to 3 each (scrape/post), it just couldn't handle it.
Is the tool best left to scrape on its own, with only a handful of keywords? Am I asking too much by throwing 2000+ keywords into the mix?
Also, a question for the users of the tool: is there a sweet spot for the number of threads, or is it like SER, where it mostly depends on the proxies? I'm running 300 threads on 65 dedicated proxies. On SER, for instance, I used to run 10x threads to proxies.
It's awesome, by the way; love it. It was an instant purchase once I saw that you'd launched it.
Comments
Also, when trying to add scraped lists (from Scrapebox) via clipboard or file, it seems to only allow 230-260k URLs. I've checked with various lists ranging from 500k to 17m. If this is the limit, that's fine; I will chunk the lists into 200k files.
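For anyone else hitting the same import ceiling, a minimal sketch of the chunking approach mentioned above: split a URL list into files of at most 200k lines each before importing. The file naming (`.partN.txt`) is just a placeholder convention, not anything the tool requires.

```python
# Sketch: split a large URL list into fixed-size part files
# so each one stays under the import size that worked reliably.
from itertools import islice

def chunk_file(path, lines_per_chunk=200_000):
    """Write numbered part files of at most lines_per_chunk lines each.

    Returns the number of part files created.
    """
    part = 0
    with open(path, encoding="utf-8") as src:
        while True:
            # islice pulls the next batch of lines without loading the whole file
            chunk = list(islice(src, lines_per_chunk))
            if not chunk:
                break
            part += 1
            with open(f"{path}.part{part}.txt", "w", encoding="utf-8") as out:
                out.writelines(chunk)
    return part
```

Each part file can then be imported separately via the tool's file-import option.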
Which leads me on to my next question: if I add URLs from a list I've created, how can I make the tool use only those and not scrape for more? I have tried check-only, then post-only, but the target-site count goes up. Or is this because it doesn't allow more than 250k imported URLs, and the rest are filtered in once the first 250k is processing? @Sven, I'd love to hear what you think.
--
Import of any file size should work... please send the file that does not import correctly.
--
If you start a project without scraping, it is not scraping. However, if a domain change is detected and you use the option to add that as a new target, it would increase the counter as well.