
No Luck with GSA SER, Need Help

I recently purchased GSA SER and am trying to figure out Blog Comment posting.

I made 10 Web 2.0 properties with two articles on each one. On the first day, using the GSA SER + GSA Captcha Breaker trials, I successfully got several links to each property. Now I am stuck: the verified-links count is growing much more slowly. On day 5, 3 of the 10 projects have only 3-4 backlinks.

I tried several settings and different keywords, with no luck.

My setup:
Threads: 300
Public proxies. Search for new ones every 1 minute; test = Google (Search).
Proxy used: Search Engines and Submissions.
Captcha: GSA Captcha Breaker, then Manual (manual solving disabled today).
83 key phrases, mostly 2 words each. Manually selected, not harvested.
Try searching with similar looking keywords is ON
Anchor Text = variations of blog author name.
Search Engines: all English.
Analyze and post to competitors' backlinks is ON (enabled today).

Captcha Breaker stats for the first half of today: 10/23 solved (43%)

Comments

  • Public proxies may be the issue here.
  • Deeeeeeee (the Americas)
    edited August 2019
    Hi, KirillZ, nice to meet you! :) I'm another user of GSA products, just like you. And, just like you, I'm still learning. I don't have answers, but maybe I can help clarify a little?

    Threads: 300 - That's a LOT of threads for so few results. So, yup, something's up.

    Proxy used: Search Engines and Submissions. Do you test your scraping set of public proxies against Google? Do you keep them in two sets - one for posting, and the ones that have passed Google in another? You can sort them into public and private! AND... the proxies you use to post/verify may be blocked! So two potential issues - maybe BOTH!

    Captcha: GSA Breaker then Manual (today disabled). Do you mean you disabled GSA-CB, or the manual-on-fail option? Is Captcha Breaker receiving the CAPTCHAs but failing? Or are there very few that even need to be decoded? (Just scroll up in the CB interface and you'll see.)

    83 key phrases, mostly 2 words each. Manually selected, not harvested. - Seems like a good amount. I've used fewer; some users use way more. But this seems OK. Do you have them separated the right way? (Otherwise it's just ONE LONG keyword! See the short sketch at the end of this comment.)

    Try searching with similar looking keywords is ON - No issue, though after a while this may get very full of phrases and you may want to clear it. But not this early!

    Anchor Text = variations of blog author name. - That shouldn't be an issue!

    Search Engines: all English. - Are you posting in Russian? If you're using English, no issue!

    Analyze and post to competitors backlinks is ON (today) - As far as I understand, this is NOT going to perform SOLELY this function. Just sometimes.

    Captcha Breaker for half of this day: 10/23 (43%) - That answers my question above: only 23 CAPTCHAs in half a day means very few targets are being reached. But solving roughly half of them is good, so it's not that GSA-CB isn't set up to communicate with SER. That part works, apparently.

    Are you getting any error messages?

    You must have the right engines selected, otherwise you'd get NO results, not just a few.

    The CAPTCHAs are being solved. Everything else looks good. Perhaps you aren't finding targets? Scraping issue?

    I'm no expert, but I have used SER for a little while, so I hope this has helped.
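
    A quick illustration of the keyword-separation point above. This is only a sketch, not SER's actual parsing code (that isn't public), and the comma delimiter is an assumption based on how the keyword field is usually filled in:

    ```python
    # Rough sketch: why the separators matter.
    # With correct separators, each phrase becomes its own search keyword;
    # without them, the whole field is treated as ONE long keyword and
    # almost nothing will ever match it.

    raw_ok = "coffee beans, coffee drink, roasted coffee"
    raw_bad = "coffee beans coffee drink roasted coffee"  # separators missing

    def parse_keywords(field):
        # assumed comma-separated input; strip whitespace, drop empties
        return [kw.strip() for kw in field.split(",") if kw.strip()]

    print(parse_keywords(raw_ok))   # ['coffee beans', 'coffee drink', 'roasted coffee']
    print(parse_keywords(raw_bad))  # ['coffee beans coffee drink roasted coffee']  <- one long keyword
    ```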
  • Deeeeeeee said:

    Proxy used: Search Engines and Submissions. Do you test your scraping set of public proxies against Google? Do you keep them in two sets - one for posting, and the ones that have passed Google in another? You can sort them into public and private! AND... the proxies you use to post/verify may be blocked! So two potential issues - maybe BOTH!
    GSA SER checked the proxies it found using the "Google (Search)" test. I see no reason to manually split the scraped proxies into "public" and "private", because they will be banned sooner or later and I'll be forced to do the separation again myself.

    Now 4 of 10 projects report "maybe blocked by search engines, no scheduled postings" and I have a lot of zeros in the log:

    20:25:26: [ ] 000/000 [Page END] results on wotbox for KeywordLuv with query coffee roaster "This site uses KeywordLuv."
    20:25:26: [ ] 000/000 [Page END] results on google BM for HubSpot with query "Posted @ Allowed tags: link, bold, italics" "Comments" fresh roast
    20:25:26: [ ] 000/000 [Page END] results on Exalead for KeywordLuv with query "Incoming Search terms" KeywordLuv coffee roaster
    20:25:26: [ ] 000/000 [Page END] results on Omgili for KeywordLuv with query coffee roaster "Incoming Search terms" KeywordLuv
    20:25:26: [ ] 000/001 [Page END] results on Yandex.com for KeywordLuv with query coffee roaster "Incoming Search terms" KeywordLuv

    But the proxy list has never been depleted. I always have several hundred listed as "good".
    Deeeeeeee said:

    Captcha: GSA Breaker then Manual (today disabled). Do you mean you disabled GSA-CB, or the manual-on-fail option? Is Captcha Breaker receiving the CAPTCHAs but failing? Or are there very few that even need to be decoded? (Just scroll up in the CB interface and you'll see.)

    I mean that today I disabled all manual CAPTCHA solving on my side. Yesterday I did some of the work manually and got some backlinks. Today I have fewer links, maybe because the manual input is disabled. It is hard to say - the numbers are too small!

    I think I am sending too few captcha-solving jobs to GSA Captcha Breaker. Few targets to post to = little work for Captcha Breaker.

    Please look at my keyword list. Today I tried this:

    coffee beans, coffee drink, roasted coffee, coffee roast, coffee roaster, coffee roastery, fresh roast, coffee Ethiopia, coffee Sudan, arabica coffee, robusta coffee, coffee flavor, roasted beans, coffee temperature, coffee water, coffee ice, coffee caffeine, coffee ground, espresso, french press, french press coffee, latte coffee, iced coffee, coffee consumption, coffee drinking, coffee brewer, coffee maker, coffee machine, coffee diseases, coffee quality, coffee consumption, coffee beverage, roasted brewed, organic coffee, green coffee, unroasted coffee, coffee pot, coffee break, coffee Colombia, coffee Guatemala, coffee Venezuela, coffee export, coffee import, robusta arabica, coffee blend, arabica blend, robusta blend, arabica caffeine, robusta caffeine, robusta crema, espresso crema, coffee flavor, coffee aroma, coffee taste, espresso taste, espresso aroma, arabica taste, arabica flavor, arabica aroma, robusta aroma, robusta taste, robusta flavor, kopi luwak, home roasted coffee, coffee bean color, coffee dark roast, oily coffee beans, coffee light roast, coffee medium light roast, coffee medium roast, coffee medium dark roast, coffee dark roast, very dark coffee roast, coffee roast flavor, darker roasts, stronger coffee flavor, dark roast coffee, decaffeination, coffee decaffeination, decaffeinated coffee, non-caffeinated coffee, decaf, decaffeination process
    This is totally not a secret :) It is for T2 properties, which are "general" blogs, and it's for testing and learning. I can put any keywords here if you have a better idea.

    While I was writing this reply, the 4 of 10 projects with the "maybe blocked by search engines, no scheduled postings" message became OK :) Now it has paused two of them because of "100 submissions reached", but the Last Verified URLs list is completely empty!

    I also attached my bad-words list.
  • Deeeeeeee (the Americas)
    edited August 2019
    "...GSA SER checked the proxies it found using "Google (Search)" test..."

    I think that with SER you should first check that the proxies are indeed anonymous, and only then test them against Google. (But this doesn't seem to be your issue?? There is a rough sketch of both checks at the end of this comment.)

    In GSA Proxy Scraper, you can set it up to test against both Google and anonymity (and more), automatically. Then you can automatically select only those that pass the criteria you choose.

    -_-_-_-_-_-_-_-_-_

    "I see no reasons to manually break the scrapped proxies into "public" and "private"

    Hmm... Proxies that are good for scraping are a much smaller subset of total proxies, I've found. So I did this to keep such proxies assigned to the tasks that best suit them. I guess you have a point, though... you could use them for both if you're ONLY collecting those that pass Google. I did not - I kept any that were anonymous, even if they failed Google.

    -_-_-_-_-_-_-_-_-_

    "Now 4 of 10 projects reported "maybe blocked by search engines, no scheduled postings"

    So then you need to re-test your proxies, no? If they WERE OK with Google and now are not, maybe they've been blocked since you imported them?

    -_-_-_-_-_-_-_-_-_

    Wow...better bad word list than I have. Good job!!  Lots of weird words. I'm happy I don't know what some of this is. :) Don't want to even know what sites have stuff like that.

    14:27:47: [-] 001/940 filter "poker" matches content on http://www.xxxxxx.com

    Plus, if a target were blocked by the filter, you'd see something like this... ↑↑↑↑↑↑

    -_-_-_-_-_-_-_-_-_

    20:25:26: [ ] <b>000/000</b> [Page END] results on Exalead for KeywordLuv with query "Incoming Search terms" KeywordLuv coffee roaster

    Hmm...to me, this is new.  So bear with me, we're learning together! :)

    What is KeywordLuv??

    OK... it's a type of blog-comment engine. (That is my terrible weak spot - I definitely need to pay attention to the engines already!!)

    Here I need to learn as well. It looks like SER is searching on the "Exalead" engine, trying to identify KeywordLuv blogs that match the term "coffee roaster", using the footprint "Incoming Search terms". (There is a rough sketch of how such a query might be assembled at the end of this comment.)

    Now, I am asking Qs just as you are. Does SER always look for a single identifying footprint for a type of engine?

    Now I need help, too!!! Anyway, thanks for helping me focus on something I also need to pay attention to, in order to ever improve how I work with SER! :)

    Do any experienced users know of a GSA forum post or blog somewhere you can direct me to, so I can finally learn more about this?? Many thanks! :)
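
    To make the proxy-testing idea above a bit more concrete, here is a rough Python sketch of the two checks discussed (anonymity first, then Google). The endpoints, the sample proxy address, and the captcha heuristic are assumptions for illustration only; GSA Proxy Scraper and SER run their own tests internally.

    ```python
    import requests

    # Hypothetical public proxy, for illustration only.
    PROXY = "http://203.0.113.10:8080"
    proxies = {"http": PROXY, "https": PROXY}

    def is_anonymous(timeout=10.0):
        """Check 1: does the proxy hide our real IP?
        Uses a public 'what is my IP' endpoint; any similar service would do."""
        real_ip = requests.get("https://api.ipify.org", timeout=timeout).text.strip()
        seen_ip = requests.get("https://api.ipify.org", proxies=proxies, timeout=timeout).text.strip()
        return seen_ip != real_ip

    def passes_google(timeout=10.0):
        """Check 2: does Google still answer a plain search through this proxy?
        A banned proxy typically gets a 429/503 or a captcha page instead."""
        r = requests.get("https://www.google.com/search",
                         params={"q": "test"}, proxies=proxies, timeout=timeout)
        return r.ok and "captcha" not in r.text.lower()

    if __name__ == "__main__":
        try:
            print("anonymous:", is_anonymous(), "| google ok:", passes_google())
        except requests.RequestException as exc:
            print("proxy dead or unreachable:", exc)
    ```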
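
    And on the footprint question: below is a guess at the mechanics, assembling a search query from an engine footprint plus a project keyword, which seems to match the log lines quoted above. This is not SER's actual code; my understanding is that an engine definition usually ships several footprints rather than a single one, but treat that as an assumption until someone who knows the engine files confirms it.

    ```python
    import itertools

    # Hypothetical footprints for a KeywordLuv-style blog-comment engine;
    # the real engine definitions inside SER carry their own footprint lists.
    footprints = ['"This site uses KeywordLuv."', '"Incoming Search terms"']

    # Project keywords (from the comma-separated list in the project).
    keywords = ["coffee roaster", "fresh roast"]

    def build_queries(footprints, keywords):
        """Pair every footprint with every keyword into one search query,
        e.g.  "Incoming Search terms" coffee roaster  -> sent to Exalead, Google, etc."""
        for fp, kw in itertools.product(footprints, keywords):
            yield f"{fp} {kw}"

    for query in build_queries(footprints, keywords):
        print(query)
    ```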


  • Looks interesting :) Now it prompts me about articles when I uncheck and then re-check Blog Comment. Everything else is disabled.

    OK, now I have some optimism.


  • Now, I am asking Qs just as you are. Does SER always look for a single identifying footprint for a type of engine?

    ---

    I saw on YouTube that there is some kind of footprint/signature editor in GSA. While trying to find that editor, I noticed a difference between my project settings and the ones in the pictures above. I did not find the editor, but...

    After I checked all the engines manually, I got two links! ;)


    Wow...better bad word list than I have. Good job!!  Lots of weird words. I'm happy I don't know what most of this is. Don't want to even know what sites have stuff like that.

    My list is from the BlackHatWorld forum. I think it is a good idea to dig into GSA's internals and find the footprints it uses for commenting, test them with general high-volume keywords, refine them, and add some extra footprints from BHW.

    I own a Scrapebox license and found that custom footprints give larger lists. But right now I have no private proxies (my budget is limited), so I can't use Scrapebox at full scale. Public proxies get banned by Google very fast.

    Unlike Scrapebox, GSA does other work between search queries, so public proxies may stay alive for a longer time. (A rough sketch of that idea is below.)
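
    To illustrate why spreading the queries out helps: a minimal sketch of a per-proxy cooldown, so that no single public proxy hits a search engine too often. The cooldown value and the rotation scheme are arbitrary assumptions, only there to show the idea - not how SER actually schedules its work.

    ```python
    import time
    from collections import deque

    COOLDOWN = 60.0  # seconds between queries through the same proxy (arbitrary)

    class ProxyRotator:
        """Hand out proxies round-robin, but never reuse one before its cooldown
        has passed - roughly the effect of doing other work between searches."""
        def __init__(self, proxies):
            self.queue = deque((p, 0.0) for p in proxies)  # (proxy, last_used)

        def next_proxy(self):
            proxy, last_used = self.queue.popleft()
            wait = COOLDOWN - (time.monotonic() - last_used)
            if wait > 0:
                time.sleep(wait)  # SER would fill this slot with posting/verifying instead
            self.queue.append((proxy, time.monotonic()))
            return proxy

    rotator = ProxyRotator(["1.2.3.4:8080", "5.6.7.8:3128"])  # made-up addresses
    for query in ['"Incoming Search terms" coffee roaster',
                  '"Incoming Search terms" fresh roast']:
        print("searching via", rotator.next_proxy(), "->", query)
    ```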




  • Buddy, add me on Skype: Intellab5. Get private proxies first, then I will set up GSA for you using AnyDesk.
  • Stellario said:
    Buddy, add me on Skype: Intellab5. Get private proxies first, then I will set up GSA for you using AnyDesk.
    Thanks for the offer! Could you please recommend a private proxy source and estimate the proxy count I need? I am going to start with 70 Web 2.0 properties with a daily limit of "5 verifications per day" for each.
