
Checked, but Posting Fails / Says No Contact Form

edited April 2018 in GSA Website Contact
I have scraped huge lists with Scrapebox (as I was just burning proxies using the in-built scraper), imported a fairly large one (600k), let the tool run to check them, then started posting, and I am getting thousands of failed posts due to "no contact form". How can this be if they've already been checked?

Also, yesterday I had an insanely good success rate (160k list scraped with GSA WC and 120k posted successfully) with only 5 dedicated proxies (German), and then last night it tailed off to 1k posted from 120k (not the same 120k) and kept burning proxies (50 dedicated USA and 15 German), all from the same great proxy provider. Even after lowering the threads to 3 each (scrape/post), it just couldn't handle it.

Is the tool best left to scrape on its own, with only a handful of keywords? Am I asking too much by throwing 2000+ keywords into the mix?

Also, a question for the users of the tool: is there a sweet spot for the number of threads to use, or is it like SER where it mostly depends on the proxies? I'm running 300 threads on 65 dedicated proxies. I used to use 10x threads per proxy on SER, for instance.

It's awesome btw, love it. It was an instant purchase once I'd seen you'd launched it.


  • SvenSven
    Your proxies might be bad? I have no clue without details on the progress. Manually check if there is a contact form (double-click on the status cell) with proxies off.
  • I test them regularly. They're from a very reputable seller (he's on here), high quality. They seem to get banned on the crappier search engines, so I might just stick to a handful of SEs; that must be it. I will do as you say and check, though.
  • Quick update on this: the tool seems a bit quick to report proxies as down/banned when they're not. Not sure why. I have tested them while the tool was running, and again after stopping it, using several methods (not just the in-built proxy tester), and they are all good, but the error message still comes up. I kept the tool running and it seems fine, but I can't tell whether it actually is.

    Also, when trying to add scraped lists (from Scrapebox) via clipboard or file, it seems to only allow 230-260k URLs. I've checked with various lists ranging from 500k to 17m. If this is the limit, that's fine; I will chunk the lists into 200k files.

    Which leads me to my next question. If I add URLs from a list I've created, how can I make the tool use only those and not scrape for more? I have tried check-only then post-only, but the target-site count still goes up. (Or is this because it doesn't allow more than 250k imported URLs, and the rest are filtered in once the first 250k is processing?) @Sven I'd love to hear what you think.
  • SvenSven
    Banned message: indeed, that message comes up on a guess of a ban; I will optimize this.
    Import of any file size should work... please send the file that does not import correctly.
    If you start a project without scraping, it is not scraping ;) However, if a domain change is detected and you use the option to add that as a new target, it would increase the counter as well.
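As a workaround for the import issue discussed above, chunking a large Scrapebox URL list into smaller files before importing can be sketched as below. This is a minimal, generic example; the file name, chunk size, and output naming are assumptions, not anything built into GSA Website Contact.

```python
def split_list(path, chunk_size=200_000):
    """Split a large line-based URL list into files of at most
    chunk_size lines each, named <path>.part1, <path>.part2, ...
    Returns the number of chunk files written."""
    part, out, count = 0, None, 0
    with open(path) as f:
        for line in f:
            # Open a fresh output file every chunk_size lines
            if count % chunk_size == 0:
                if out:
                    out.close()
                part += 1
                out = open(f"{path}.part{part}", "w")
            out.write(line)
            count += 1
    if out:
        out.close()
    return part

# Hypothetical usage: split a 600k-line list into 200k-line chunks
# split_list("scraped_urls.txt")  # would produce .part1 .. .part3
```

Each resulting file stays under the ~250k import ceiling the poster observed, so the chunks can be imported one at a time.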