
GSA SER Says “No targets left” Even Though I Imported Millions of Identified URLs — What’s Going On?

Hey guys,

I really need help figuring this out because something is not adding up at all.

I imported a little over 6 million identified URLs into one project.


I ticked all platforms, disabled search engines, and set the project to use my global site list → Identified folder.

After less than 2 days, I get this message:

“No targets to post to (no search engines chosen, no url extraction chosen, no scheduled posting)”

There is no way SER processed 6 million URLs in under 48 hours with LPM 20–30 and 100 threads. Not even close.

So I tried again…

This time I created separate projects per platform (Articles, Blog Comments, Guestbooks, etc.).


Instead of letting SER pull from the global site list automatically, I imported platform-specific URLs into each project manually.

Same result — after a short period:
“No targets left to post”

How can this be possible when I have almost 3 million Article URLs alone, and tons more across other platforms?

Did anyone run into this before?
Is SER skipping URLs?
Is there some filtering, footprint detection, or caching behavior deleting targets I’m not aware of?

Any insight would help because this makes absolutely no sense right now.

And in the last screenshot the options are the same as above; the only difference is that all platforms are ticked and the project is fetching from the global site list "Identified" folder instead of having the URLs imported into the project list directly.


Comments

  • That is absolutely normal. Just because you have a large identified list does not mean that GSA can build a backlink on those websites. For example, probably 90% of websites today use WordPress. GSA will identify those websites correctly and add them to the Article - Wordpress category, but of course GSA will only be able to post to and get a verified backlink from a very, very, very small percentage of those websites.
  • borisake said:
    That is absolutely normal. Just because you have a large identified list does not mean that GSA can build a backlink on those websites. For example, probably 90% of websites today use WordPress. GSA will identify those websites correctly and add them to the Article - Wordpress category, but of course GSA will only be able to post to and get a verified backlink from a very, very, very small percentage of those websites.

    I understand that just because a URL is in the Identified list it doesn’t mean SER will be able to post to it — that part is clear.

    But that’s not the issue I’m talking about.

    My point is:
    SER could not have processed all those URLs in under 2 days, regardless of whether they are postable or not.

    Processing = loading the URL, checking the engine footprint, parsing it, and determining whether it’s usable. Even if 90% are trash or unpostable, SER still needs to touch them.

    With 6+ million URLs, LPM 20–30, and 100 threads, there is simply no way SER went through the entire list that fast. That is why the “No targets left” message makes no sense.

    For example:
    Even at an unrealistic rate of 100 LPM, it would still take almost 42 hours just to look at 250k URLs — not millions. And I’m nowhere near 100 LPM.

    So the question is:

    Why is SER stopping early as if it already processed everything?

    – Is it skipping URLs?
    – Is it ignoring whole chunks of the list?
    – Is it hitting a cache limit or memory limit?
    – Is there deduping going on across projects?
    – Is global site list conflicting with project-imported lists?
    – Is SER filtering by engine footprint incorrectly?

    That’s what I’m trying to figure out.

    I know the list isn’t fully postable — but SER should still go through all 6 million URLs, and it clearly isn’t. That’s the actual problem.

    If anyone has experienced SER stopping early like this or skipping large portions of site lists, I’d appreciate some insight.

  • cherub (SERnuke.com)
    Are these identified urls or identified domains? Because there is a huge difference. You could have 10k identified urls on 1 domain, but for most non-Blog comment engines that will still be 1 submission/verification, and after that submission SER will skip any other identified urls on that domain. Unless you have scheduled posting enabled, which you don't seem to.
  • cherub said:
    Are these identified urls or identified domains? Because there is a huge difference. You could have 10k identified urls on 1 domain, but for most non-Blog comment engines that will still be 1 submission/verification, and after that submission SER will skip any other identified urls on that domain. Unless you have scheduled posting enabled, which you don't seem to.

    Hi @cherub ,

    They are identified URLs and domains, but duplicates are already cleared.

    I’m using the auto dedupe option in SER — so only unique URLs are kept for Blog Comments, Guest Posts, Image Comments, and Trackbacks, and unique domains are kept for all other engines.

    My understanding is that SER automatically knows which engines require URL-level deduping and which require domain-level, so when I select the auto option, it deletes:

    • duplicate URLs for engines like blog comments, guestbooks, image comments, trackbacks

    • duplicate domains for engines that only allow one submission per domain (articles, wikis, social networks, etc.)

    Is that correct?

    Because if so, then the list I’m feeding SER is already reduced to unique targets per engine, which means SER should still have far more targets available than what it reports when it stops and says “No targets left.”

    Just want to confirm that I’m understanding the deduping logic correctly.
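
    If that's right, the collapse would look roughly like this (just a minimal sketch of the logic as I picture it, not SER's actual code, and the engine grouping in the set is my assumption):

        # Sketch: URL-level engines keep every unique URL, while domain-level
        # engines keep only one entry per domain. Engine names are illustrative.
        from urllib.parse import urlparse

        URL_LEVEL_ENGINES = {"blog comment", "guestbook", "image comment", "trackback"}

        def dedupe(targets, engine):
            seen, kept = set(), []
            for url in targets:
                key = url if engine in URL_LEVEL_ENGINES else urlparse(url).netloc
                if key not in seen:
                    seen.add(key)
                    kept.append(url)
            return kept

        urls = [
            "http://example-blog.com/post-1#comments",
            "http://example-blog.com/post-2#comments",
            "http://example-blog.com/contact",
        ]
        print(len(dedupe(urls, "blog comment")))  # 3 -> every URL is its own target
        print(len(dedupe(urls, "article")))       # 1 -> the whole domain is one target

    That's also cherub's point in practice: thousands of identified URLs on one domain can still boil down to a single target for article-type engines.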



  • googlealchemist (Anywhere I want)
    Actually came on here with a similar issue from some more recent bulk testing, as things seem weird to me as well...with even more identified domains than you...and I'm getting the same error message about no more targets to post to really, really quickly. And that's totally de-duped on a domain level, not just URL. I've tried both options like you too...using the global site list for the Identified folder/list in the options, as well as importing them as target URLs into my test project directly. Same thing.
  • googlealchemist (Anywhere I want)
    Anonymous said:
    cherub said:
    Are these identified urls or identified domains? Because there is a huge difference. You could have 10k identified urls on 1 domain, but for most non-Blog comment engines that will still be 1 submission/verification, and after that submission SER will skip any other identified urls on that domain. Unless you have scheduled posting enabled, which you don't seem to.

    Hi @cherub ,

    They are identified URLs and domains, but duplicates are already cleared.

    I’m using the auto dedupe option in SER — so only unique URLs are kept for Blog Comments, Guest Posts, Image Comments, and Trackbacks, and unique domains are kept for all other engines.

    My understanding is that SER automatically knows which engines require URL-level deduping and which require domain-level, so when I select the auto option, it deletes:

    • duplicate URLs for engines like blog comments, guestbooks, image comments, trackbacks

    • duplicate domains for engines that only allow one submission per domain (articles, wikis, social networks, etc.)

    Is that correct?

    Because if so, then the list I’m feeding SER is already reduced to unique targets per engine, which means SER should still have far more targets available than what it reports when it stops and says “No targets left.”

    Just want to confirm that I’m understanding the deduping logic correctly.



    Just trying to troubleshoot here...and I doubt it's the main issue, but I have to ask...what's the ballpark % split of identified URLs/domains per category? I.e. how many of your total unique domains (not URLs) are in the blog comment/guestbook/trackback category...vs how many are in the article/social network/wiki category?




  • Hey @googlealchemist

    I added it in the first post. For the Article platform it's mostly (around 90%) WordPress engine sites, and they suck badly: most of them are not postable, have registration disabled or no form to submit, and even when you do submit, the links rarely get verified. That engine is highly moderated, so there's no use submitting to it; no matter what kind of content you use, the links will get deleted eventually.