
Index check at root domain level.

shaun https://www.youtube.com/ShaunMarrs
Evening @Sven,

Part of my weekly servicing involves getting rid of contextual domains whose root domains are not indexed in Google. If the root domain isn't indexed it suggests the domain has been penalised and deindexed, so removing it stops the user building out a T2 or T3 beneath it when there is a high chance it will never get indexed and give any benefit. I have a way to do this with ScrapeBox, but it takes a while.

Is there any chance you could add this to your to-do list, even if it's at the bottom? Basically something in settings where you can scan the contextual platforms, run an index check on their root domains, and then be presented with the option to remove the ones that are not indexed?
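For reference, the first half of that check is just trimming each verified URL down to its root domain and deduplicating. A rough Python sketch of that step is below; the file names are illustrative, not anything SER exports by default:

```python
from urllib.parse import urlparse

def root_of(url: str) -> str:
    """Trim a verified URL down to its host, e.g.
    'http://blog.example.com/members/foo/' -> 'blog.example.com'."""
    return urlparse(url).netloc.lower()

# 'contextual_verified.txt' stands in for an export of one verified folder.
with open("contextual_verified.txt") as f:
    roots = sorted({root_of(line.strip()) for line in f if line.strip()})

# One unique host per line, ready for an index check at root level.
with open("root_domains.txt", "w") as f:
    f.write("\n".join(roots))
```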

Cheers

Shaun

Comments

  • Sven www.GSA-Online.de
    you mean this index check should be done using robots.txt?
  • shaun https://www.youtube.com/ShaunMarrs
    @sven no, like say in the verified URLs folder I have ....


    I then put them in ScrapeBox and trim to root, so it would be .....


    Instead of just one URL, though, I do it for one file at a time, so it could have like 300 domains. I then index check them at that level and remove the ones that are indexed. I save the ones that aren't indexed to a file, load the original list back into ScrapeBox and then use the "remove URLs containing entries from file" feature with the not-indexed list to purge the potentially dead domains.

    I used to also alive check the deindexed domains, but it seems Google will keep dead URLs in its index for some time, so I dropped that step.

    Also, I used to merge all the contextual verified folders into one big file for the process, but realised pretty quickly that reverifying the list after the purge loses a fair few targets.
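    A rough sketch of the check-and-save step described above, assuming the root domains were already trimmed and saved one host per line. is_indexed() is deliberately left as a stub to be backed by whatever checker is actually used (ScrapeBox, an indexing API, or a scraped site: query; scraping Google directly gets rate-limited quickly), and the file names are illustrative:

    ```python
    def is_indexed(domain: str) -> bool:
        """Placeholder: should return True when a 'site:<domain>' query
        returns at least one result. Back it with whatever index checker
        you actually use (ScrapeBox, an SEO API, etc.)."""
        raise NotImplementedError

    # Roots produced by the trim-to-root step, one host per line.
    with open("root_domains.txt") as f:
        roots = [line.strip() for line in f if line.strip()]

    # Keep only the hosts the index check returns nothing for.
    not_indexed = [d for d in roots if not is_indexed(d)]

    # Save them so they can be used as a removal filter against the
    # original verified list, mirroring the ScrapeBox steps above.
    with open("not_indexed_domains.txt", "w") as f:
        f.write("\n".join(not_indexed))
    ```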
  • shaun https://www.youtube.com/ShaunMarrs
    edited July 2016
    @sven, just realised that if you could add a feature to the "remove from URL list" option under tools where it removes URLs containing xxxx, then this could be used to make the hybrid method with ScrapeBox work much faster :).


    Basically the user could merge all the contextual verifieds into one file, run the steps above, and then run the final deindexed file through the "remove URLs containing xxx" option against the verified files and it would purge them. It would be a good idea to have a pop-up with the engines, like the remove duplicates one, so you can skip blogs and such.

    So if the url 


    is pushed through the "remove URLs from verifieds containing xxxx", it would purge


    due to the URL containing the trimmed-to-root version of it, and it would save the user a shit ton of time compared to the current way I do it.
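    In plain terms, that "remove URLs containing entries from file" pass boils down to the filter sketched below, assuming the verified list is exported to a text file and the not-indexed domains from the previous step are one per line (all paths illustrative):

    ```python
    from urllib.parse import urlparse

    # Hosts whose root domains failed the index check, one per line.
    with open("not_indexed_domains.txt") as f:
        dead_hosts = {line.strip().lower() for line in f if line.strip()}

    # Keep only verified URLs whose host is NOT in the dead list.
    with open("contextual_verified.txt") as f:
        kept = [url for url in (line.strip() for line in f)
                if url and urlparse(url).netloc.lower() not in dead_hosts]

    with open("contextual_verified_purged.txt", "w") as f:
        f.write("\n".join(kept))
    ```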