
How long does it take you guys to index contextual links? With or without services?

Just wondering what the average times are for indexing contextual links.

Engines/Platforms don't matter, just rough ideas either with or without services.

Ignoring unique content and whatever.

So then, for future reference, I'll know whether a link is just taking too long or is never going to index no matter what.

At the moment, no matter what I do, it seems really hard to get links indexed; hardly any of mine are so far, except the odd few out of hundreds/thousands.
This is using dofollow blog comments, guestbooks & image comments as T3.


  • shaun
    I usually wait 10 days, with the blog comment T3 I am happy if 30% is indexed these days. Elite link indexer gets around 60% indexed in the same time frame.
  • Hmm, 10 days isn't bad to be fair, and 30% is alright considering it's just from a T3.

    So if my tier 1 pauses at 250/day verified, what would be best for T2 & T3 to get the best chance of indexing the T1?
  • shaun
    Earlier this year I was getting 80% indexed with a T3 :(.

    How do you mean for your T2 and T3? Like platform selection?
  • Woah, that's huge. I mean verified per day on T2, so I don't give T3 too much to try and index up the chain, if you get me.
  • shaun
    Ah like submits per day you mean?

    Everyone has their own way I guess, and I can see why this is confusing, because it's never broken down to explain what they mean. You see stuff like 10:100:1000 and it doesn't say if that's per URL, per tier total, or what. Until recently I also used a ratio similar to that, but there seems to be a growing link retention problem with the domains on the lists (probably because they are getting hit so hard).

    So instead of building say 10 domains per day and then scaling up from there, I spread the load a little better like this.

    T1 - 50 Links per day
    T2 - 25 links per day per T1 link
    T3 - 10 links per day per T2 link

    Before my time people used to say 5 links per day, and as a noob I just adopted that, but sites can take much more than that. Also, with link retention and indexing, the 50 built per day ends up being much lower, but because you are building 50 instead of say 5, it doesn't matter as much if you lose a T1 link. Whereas if you went 10 on T1 and 50 on T2, losing a T1 link means taking much more of a hit.
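    Those per-link ratios compound fast. A quick sketch of the daily totals they imply (pure arithmetic, assuming every submitted link sticks, which in practice it won't):

```python
# Daily totals implied by the 50 / 25 / 10 per-link tier ratios above.
t1_per_day = 50        # T1 links built per day
t2_per_t1 = 25         # T2 links per day, per T1 link
t3_per_t2 = 10         # T3 links per day, per T2 link

t2_per_day = t1_per_day * t2_per_t1   # 1,250 T2 links/day
t3_per_day = t2_per_day * t3_per_t2   # 12,500 T3 links/day
print(t1_per_day, t2_per_day, t3_per_day)
```

    Losing one T1 link out of 50 costs you 2% of the structure; losing one out of 10 costs you 10%, which is the point being made above.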

    I'm not sure if you were one of the guys who were hit when it went down, but a few of my own sites have this problem, as well as a few guys I used to talk to in PM. It was a pretty big domain supported by most web 2.0 tools. I used to use it a lot on my RX based sites for T1; then it went offline and everything below it was instantly lost. It's kind of the same thing, but you expect SER articles to go down much quicker.
  • Yeah, when it died I got slapped. Not sure if that contributed to the G slap, but unnatural link loss might have been a factor.

    Hmm, good point. Instead of just running limits per project, setting them per tier URL makes much more sense!

    Well, I'll give this a shot. Thanks mate, you're always here to help and I appreciate it!
  • @shaun, another thing. What is the best way for me to mass check indexing? Is Scrapebox reliable with proxies?
  • 710fla ★ #1 GSA SER VERIFIED LIST
    @shaun do you send just the last tier of contextuals to an indexer or all the contextuals you create?
  • @Anth20 - I use it for (somewhat) mass checking with proxies. There are quite a few false negatives, but it does a decent job overall. You could probably rerun everything that comes up as not indexed a couple times and get even better accuracy.
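    The rerun idea can be sketched like this; `is_indexed` here is a hypothetical stand-in for whatever check you actually run (Scrapebox export, manual query, etc.):

```python
def recheck(urls, is_indexed, passes=3):
    """Re-run the index check a few times; once any pass says a URL is
    indexed, keep it, so only persistent negatives stay 'not indexed'."""
    indexed, pending = set(), list(urls)
    for _ in range(passes):
        still_pending = []
        for u in pending:
            if is_indexed(u):
                indexed.add(u)
            else:
                still_pending.append(u)
        pending = still_pending
    return indexed, pending
```

    The point is just that false negatives wash out over repeated passes, while true negatives survive every pass.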
  • @shaun - Really helpful post, but what do you mean by saying you don't build 10 domains per day? For some reason I can't connect that to the rest of what you wrote.
  • shaun
    I use Scrapebox for index checking. If I use a T3 to index, I don't send anything to an indexer; if I don't use a T3, then everything is sent to an indexer.
  • shaun
    Like 10 domains on T1, scaling to 20 on T2, scaling to 30 on T3, for example.
  • @shaun @redrays

    When I checked with Scrapebox, I exported the "indexed" URLs and then manually checked them. Basically, if anything comes up in the search for that specific URL, Scrapebox says it's indexed, which isn't necessarily true.

  • @Anth20 - that's not good to hear. Do you have a feel for how many false positives you're getting per batch of urls?
  • @redrays Not much of an idea yet, I will do a thorough check now and report back.
  • shaun
    Thanks for pointing this out mate. Back when I first started using SER, I remember comparing a few different methods of index checking, but indexing was so much easier back then, so when I manually checked they were usually indexed anyway.
  • No worries. Well, after checking a lot of tier 1 contextuals, Scrapebox narrowed them down to 27 "indexed" (yeah, that's bad), and out of those 27 links only 19 are actually indexed after a manual check.

    So on a bigger scale of index checking, I think SB isn't that good. It's alright for rough ideas, maybe.

    When checking in SB, probably 70% of links checked will return either a similarly patterned URL or something similar in the results, and it tells us the link is indexed because something was found.

    Isn't there a way we can do an index check with

    That way, it's directly checking just the url, not if it's seen anywhere or something similar shows up

  • shaun
    The only SER instance I have with URLs in it that will be indexed is locked in on a scheduled post blast until tomorrow so I have no targets to test on :(.

    If Scrapebox just puts the URL through Google and says positive if it gets a hit, then that explains the difference between how long Scrapebox and SER can run without interruption on the same proxies. I'm guessing SER might use inurl:"" or something to force Google to only return that exact URL, while Scrapebox just runs the regular URL as a query.
  • Yeah, exactly that. Even SER doesn't give us good index checking results: 90% go green, then upon manually checking none of them are even indexed, hmm.
  • shaun
    At the end of the day, rank increases are the best indication of how SER is doing, but the index rates on the tiers were a good quick indicator, since rank changes take seemingly random time frames. It would still be nice to have a way to know exactly how much is indexed.
  • Yeah, and it's still good to know that what we're doing to get links discovered kinda works. Otherwise we might be pointlessly making links and wasting time and resources.

    Does having links indexed matter? Someone mentioned they don't need to be indexed to help towards ranking a site, they just need to be crawled? @shaun
  • shaun
    No idea now :P.

    I guess you could make two sites targeting the exact same keywords with the exact same structure, try to index one set of links and just crawl the other, and track the difference.
  • shaun
    @Anth20 could you do me a favor and check some of the false positives with inurl:"" in Google, with your URL between the quotes?

    I have tested a batch of URLs. Some of them just show niche-related indexed pages from the same domain when the plain URL is put into Google, but when it is put in with inurl:"" the page shows up, as that filters the rest out.

    These URLs are aged though so they may all actually be indexed, I only manually checked 10 out of the batch.

  • @Anth20 - it's been debated a lot, on this forum (here, for example) and elsewhere. I feel like it matters, based on what I've experienced and the emphasis others put on it, but who really knows for sure?
  • Running through a check of 786 contextual urls, reporting back when done @shaun @redrays

    Will do the inurl:""

    Usually I either do these:
    or just the url (manually checking it and seeing if the exact url is there)

    Should be 5mins or so..
  • Ok so SB gives me this result:

    After manually checking with inurl:"" (checked 10 URLs), 5 are indexed and 5 are not.

    Not sure of another way of manually checking, but I hit a few captchas in Google for repetitive suspicious browsing, so I'm not gonna carry on.

    So it's still 50/50 false positives. Any ideas for other ways to manually check? I wanna check all of them if I can.
  • Ok so I just came up with an awesome idea!

    Take your list of contextual links and:
    add inurl:" (as a prefix)
    add " (as a suffix)

    Now you have a list of your contextuals in the form inurl:"your-url"

    Put them in as your keywords and scrape only Google; if they return results, they are indexed!
    Good idea? Seems to be correct with the ones I'm checking so far, let me know.

    @shaun @redrays
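    The prefix/suffix step above can also be done outside Scrapebox in a few lines; the URLs here are made-up placeholders, not real links:

```python
# Wrap each verified URL in an inurl:"..." query, ready to paste in as
# Scrapebox keywords for a Google-only scrape. URLs below are placeholders.
urls = [
    "http://example.com/contextual-link-1",
    "http://example.org/contextual-link-2",
]

queries = ['inurl:"{}"'.format(u) for u in urls]
for q in queries:
    print(q)
```

    Each line of output is one keyword; any query that returns results corresponds to an indexed URL, per the trick described above.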
  • @Anth20 - sounds like a good solution to me, I'll have to give it a try the next time I'm running these checks.
  • shaun
    This man flu is killing me, my head is totally pounding right now :(.

    I'm going to have a nap, then see how I feel, and try to jump on this. But that sounds decent; I know another tweak that might work, and I will try it when I wake up again if my head's any better.

    So doing it with that tweak is matching up to what the index checker says is indexed?
  • @redrays @shaun

    Yeah well basically this is doing what we do manually, and if results are returned those ones are indexed :)