
Poor results when scraping - proxy issue or not?

Hey,
I'm scraping a lot with gscraper to lighten the ban risk on my private proxies. A deduped (by domain) list of about 100k gathered overnight is what I normally end up with when using SER footprints.
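For reference, domain-level deduplication of a scraped URL list can be sketched like this (a minimal Python sketch of the general idea, not gscraper's actual implementation; the URLs are hypothetical):

```python
from urllib.parse import urlparse

def dedupe_by_domain(urls):
    """Keep only the first URL seen per host (domain-level dedupe)."""
    seen = set()
    result = []
    for url in urls:
        host = urlparse(url).netloc.lower()
        if host and host not in seen:
            seen.add(host)
            result.append(url)
    return result

urls = [
    "http://example.com/page1",
    "http://example.com/page2",   # same domain -> dropped
    "http://another.org/post",
]
print(dedupe_by_domain(urls))
# -> ['http://example.com/page1', 'http://another.org/post']
```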
I then run them through a second copy of SER to identify and sort in (into a separate .txt file). Normally it's around a 50% identify rate on the first run.
I take this list (approx. 50k) and feed a project with it (with the scraped engines enabled, of course).
On lucky days I get 50-100 submitted(!) out of this. "No engine matches" dominates the log.
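To put numbers on that drop-off, using the best-case figures from above:

```python
scraped = 100_000   # deduped domains gathered overnight
identified = 50_000 # ~50% identify rate on the first SER run
submitted = 100     # best case on a lucky day

print(f"identify rate: {identified / scraped:.1%}")    # 50.0%
print(f"submit rate:   {submitted / identified:.2%}")  # 0.20%
print(f"overall yield: {submitted / scraped:.2%}")     # 0.10%
```

So even on a good day only about 1 in 500 identified URLs ends up submitted, which is the gap the question is about.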
I tried it with different proxies and a lower thread count, which didn't help much. They're proxy-hub ones.
Any ideas what may cause this? Or is this a normal result from scraping?
Best Regards