New OnPage SEO Function - Need Feedback

The latest updates of GSA Keyword Research have all been for a new module that offers functionality other tools would charge you 200 USD per month for. However, I am in need of some feedback here. So far @s4nt0s (as always, thanks a lot!) was the only one giving his thoughts on this, and I'm happy he did. Without him, I wouldn't have thought of coding this.
Below is the screenshot of this new module:

You can reach it by:
1. Double-click a keyword to open the "Keyword Competition" screen
2. Click Compare //Later updates will probably include both tables in one//
Now what you have here is a table with your site, the top results from the search engine, and the average (change the compare algorithm at the top) of the top 3/5/10.
The icons and colors indicate how you should change each row's value on your page compared to the top 3/5/10 to reach a better ranking.
Please let me know what other values are important to you (disable the option "Hide unused Items" at the top to see all available data). I'm currently searching for all possible meta/link tag values there are (even those not important for SEO).
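In other words, the comparison presumably averages each metric over the top N results and flags which direction to move. Here is a minimal Python sketch of that idea (the function name and the three-way verdict are my assumptions, not the module's actual code):

```python
def compare_metric(your_value, top_values, top_n=3):
    """Average a metric over the top-N pages and suggest a direction."""
    sample = top_values[:top_n]
    avg = sum(sample) / len(sample)
    if your_value < avg:
        return avg, "increase"
    if your_value > avg:
        return avg, "decrease"
    return avg, "keep"
```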
Comments
http://www.seo-hero.tech
I advise you to think carefully about it; it is invaluable.
More ngram information (a patent from Google): Using Ngram Phrase Models to Generate Site Quality Scores
It is nothing more or less than the recipe for the success of a site compared to its competitors. I imagine that it could work with SER to add ngrams to our articles (in accordance with their titles) in order to make them excellent, and to allow the creation of incredibly good semantic tiers (T1 < T2 < T3) ....
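To make the ngram idea concrete, here is a minimal Python sketch of extracting the most frequent word n-grams from a page's text (the tokenization rule and cutoffs are arbitrary assumptions on my part):

```python
import re
from collections import Counter

def top_ngrams(text, n=2, k=15):
    # most frequent word n-grams in a page's visible text
    words = re.findall(r"[\w'-]+", text.lower())
    grams = zip(*(words[i:] for i in range(n)))
    return Counter(" ".join(g) for g in grams).most_common(k)
```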
Outbound links to social networks.
Whether the site/page uses an analytics tracker.
Long-tail ... but I think ngram is better.
https://moz.com/learn/seo/on-page-factors
This forum page : https://validator.w3.org/nu/?doc=https%3A%2F%2Fforum.gsa-online.de%2Fdiscussion%2F27390%2Fnew-onpage-seo-function-need-feedback%2Fp1
TheBestIndexer (^^) : https://validator.w3.org/nu/?doc=https%3A%2F%2Fthebestindexer.com%2F
It is to get a general idea, and that too is an option that adds value.
For example:
http://www.siteliner.com (full website)
https://www.duplichecker.com/ (copy/paste article)
https://smallseotools.com/plagiarism-checker/ (copy/paste article)
https://copywritely.com/plagiarism-checker/ (copy/paste article)
What do you think of moving the word count next to the density, for better readability?
For speed, the server-side response latency...
I don't know if the IP class might interest you?
I just sent you a message about the ngrams ^^
The real gains, though, are made with TF/IDF or NLP keyword suggestions. While some parameters like the average number of words are helpful, most parameters don't matter. Having a few more h2s and a few fewer h3s than the competitor doesn't mean shit.
Here's a recent post about SurferSEO and how it works and suggests keywords based on TF/IDF and Google NLP, and here's some code on how to run URLs/content against Google's NLP API. For meaningful results, the boilerplate content should be removed prior to analysis, e.g. with https://github.com/buriy/python-readability
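For reference, stripping boilerplate with the linked python-readability library looks roughly like this (the URL is a placeholder; a minimal sketch, not the post's exact code):

```python
# pip install readability-lxml requests
import requests
from readability import Document

html = requests.get("https://example.com/article", timeout=10).text
doc = Document(html)
print(doc.title())    # extracted page title
print(doc.summary())  # main-content HTML with boilerplate stripped
```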
I'd consider buying GSA Keyword Research if this was going to be implemented. Any plans for this?
For example, if, for a compared keyword, we find in the best-ranking sites an identical group of words containing it, it is absolutely necessary to include that group in our site. This group of words is strongly linked to the context of the keyword.
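That idea could be sketched like this in Python: keep only the n-grams containing the keyword that appear in every top-ranking text (the function name and n-gram length are my assumptions):

```python
import re

def shared_phrases(keyword, competitor_texts, n=3):
    # n-grams containing the keyword that occur in every competitor text
    phrase_sets = []
    for text in competitor_texts:
        words = re.findall(r"[\w'-]+", text.lower())
        grams = {" ".join(g) for g in zip(*(words[i:] for i in range(n)))}
        phrase_sets.append({g for g in grams if keyword.lower() in g})
    return set.intersection(*phrase_sets) if phrase_sets else set()
```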
....
The recipe for success
PS: Personally, I already bought it, because if this function happens Sven should increase the price of the software; it will gain a lot of value.
What are the limitations of the free trial?
Very sincerely, I keep telling Sven to increase the price of his software, because it is really not the kind of tool that I would like to see in the hands of my competitors ^^
1) Is there a reason you implemented ngram rather than TF/IDF (or WDF/IDF as it's called in Germany)?
2) Any plans to enrich the suggested keywords with results from Google's NLP API?
These tools have already existed for many years, e.g. Page Optimizer Pro, Cora, Surfer SEO. There are even free ones such as https://www.seobility.net/en/tf-idf-keyword-tool/, although that one is limited to content only, while the former compare far more parameters.
I guess your competitors have been using them for a long time already
If you have ideas, this is the right place to explain them
Edit: No, I do not live under a rock ^^ but it is true that I don't use them much. Having said that, they are generally limited or quite expensive, while we have all the search results at hand. With the right ideas, you can have it all in one piece of software; just explain it to Sven.
Edit: On the other hand, there are only single words there; it is more interesting to recover everything that performs best, whatever the length. Besides, whatever name the sites use for it, I do not see what could be better.
Edit: Does anyone have a link where it would be possible to retrieve the semantics of a word or group of words? Maybe with the Google Knowledge Graph API?
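If it helps, the Google Knowledge Graph Search API can be queried over plain HTTP; a minimal sketch (requires an API key, and the field handling is simplified):

```python
import requests

def kg_lookup(query, api_key, limit=5):
    # query the Google Knowledge Graph Search API for related entities
    resp = requests.get(
        "https://kgsearch.googleapis.com/v1/entities:search",
        params={"query": query, "key": api_key, "limit": limit},
    )
    resp.raise_for_status()
    for item in resp.json().get("itemListElement", []):
        result = item.get("result", {})
        print(result.get("name"), "-", result.get("description", ""))
```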
1) TF/IDF is known as WDF/IDF in Germany and consists of two parts: WDF stands for "within-document frequency", while IDF stands for "inverse document frequency".
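For illustration, one common WDF/IDF formulation in Python (the exact normalization varies between tools; this is a sketch, not GSA's implementation):

```python
import math
from collections import Counter

def wdf(term, words):
    # within-document frequency: log-dampened term count,
    # normalized by the (log of the) document length
    freq = Counter(words)[term]
    return math.log2(freq + 1) / math.log2(len(words) + 1)

def idf(term, documents):
    # inverse document frequency across the scraped SERP pages,
    # where each document is a list of words
    df = sum(1 for doc in documents if term in doc)
    return math.log(len(documents) / max(df, 1))

def wdf_idf(term, words, documents):
    return wdf(term, words) * idf(term, documents)
```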
2) I think it would be great to separately show keywords which the Google NLP API considers relevant to the first 3 SERP URLs. I would only display "new" keywords (and topics) which are not already listed in the ngram (or tf*idf) list. This would possibly give us an edge over the competition, since it will show keywords considered relevant by Google but missing from the top 3 sites. Here's a tutorial with fully working code on how to achieve this: https://sashadagayev.com/systematically-analyze-your-content-vs-competitor-content-and-make-actionable-improvements/
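Pulling relevant entities from the Google NLP API looks roughly like this (requires the google-cloud-language package and Cloud credentials; a minimal sketch, not the tutorial's exact code):

```python
# pip install google-cloud-language  (needs Google Cloud credentials)
from google.cloud import language_v1

def relevant_entities(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(request={"document": document})
    # salience measures how central an entity is to the text
    return [(e.name, round(e.salience, 3)) for e in response.entities]
```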
Edit: I would also like to know if everything is scraped only once at the start (all the HTML of each page), in order to avoid the use of proxies in the event of multiple manipulations.
It could also be interesting to know which platform (WordPress, Drupal ...) the pages of the sites use.
A classification of the loading time as well.
If the proxy takes a little time to be found, the results can still be displayed without delay; just explain what is happening and that the rest will load right after.
It could show all the URLs a site has, with various technical information about each URL (a minimal sketch of such checks follows this list), like:
Status code (useful to find errors)
Indexability (to see if the URL is blocked by robots.txt or by other means)
Inbound links (number of links a given URL receives from the rest of the site; this should be exportable in some way). This is useful for building silos. It would also be useful to visualize these links in some way, but I'm still thinking about how to do it.
Inbound links that are unique to the examined page only
Outbound links (number of links a URL has to an external domain, this also should be exportable)
Outbound links (unique)
Percentage of
Response time
Redirect type
Redirect URLs
Structured data info like Errors, warnings
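As mentioned above the list, here is a minimal Python sketch of how a few of these per-URL checks could be gathered (status code, robots.txt indexability, response time, redirect chain); the structure is an assumption about how such a crawler might look, not how the tool works:

```python
import time
import requests
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

def check_url(url, user_agent="Mozilla/5.0"):
    # fetch robots.txt for the host and check whether the URL is allowed
    parsed = urlparse(url)
    robots = RobotFileParser(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    robots.read()
    start = time.time()
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    return {
        "status_code": resp.status_code,
        "allowed_by_robots": robots.can_fetch(user_agent, url),
        "response_time_s": round(time.time() - start, 3),
        "redirect_chain": [(r.status_code, r.url) for r in resp.history],
    }
```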
A more content-focused research tab could contain information about (a parsing sketch follows this list):
Page title (showing the whole page title to spot possible errors, plus character length with pixel width and an optional red/green indicator)
Meta description
Meta keywords
H1
H2
Content word count
Size
Last modified
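A minimal BeautifulSoup sketch of pulling these on-page elements from a fetched page (field names are my own; a sketch under those assumptions, not what the tool does internally):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def extract_onpage(html):
    soup = BeautifulSoup(html, "html.parser")
    meta_desc = soup.find("meta", attrs={"name": "description"})
    meta_keys = soup.find("meta", attrs={"name": "keywords"})
    text = soup.get_text(" ", strip=True)
    return {
        "title": soup.title.get_text(strip=True) if soup.title else "",
        "meta_description": meta_desc.get("content", "") if meta_desc else "",
        "meta_keywords": meta_keys.get("content", "") if meta_keys else "",
        "h1": [h.get_text(strip=True) for h in soup.find_all("h1")],
        "h2": [h.get_text(strip=True) for h in soup.find_all("h2")],
        "word_count": len(text.split()),
        "size_bytes": len(html.encode("utf-8")),
    }
```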
These are all the things I can think of as useful right now, but others may chime in and say what their favorites are. Also, you can sneak a peek at Screaming Frog and see what other technical stuff they show. They probably do that for a reason ;D
Thanks @Sven for the next "GREAT" tool
For Keyword Competition: before the matrix, we do not have the information (age of the domain and other data) for the domain being compared, since we enter it by clicking Compare. It may be necessary to insert a step just before.