Main differences/benefits of using PI vs SER directly other than resources?

The main argument I've seen is to keep SER free JUST for posting links rather than spending any of its resources on importing/identifying links...
Other than that, are there many benefits to using the standalone PI software vs just the import/identify function in the settings/sitelists area of SER?
More detailed filtering of lists, or is it that much quicker, or is it better at identifying targets vs what SER will identify, etc.?
I've had both for many years, but haven't used PI for a long time.
Thanks
Comments
So if you scrape things like blog comments and guestbooks, those pages are loaded with external links that are potential working targets for SER.
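As a rough illustration of what that link extraction looks like (this is just a sketch using Python's standard library, not how PI actually implements it — the page domain and sample HTML are made up):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ExternalLinkParser(HTMLParser):
    """Collects href targets that point outside the scraped page's own domain."""
    def __init__(self, page_domain):
        super().__init__()
        self.page_domain = page_domain
        self.external = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        domain = urlparse(href).netloc
        # Keep only absolute links to other domains -- the commenters' sites
        if domain and domain != self.page_domain:
            self.external.add(href)

# Hypothetical scraped blog-comment page
html = '''
<p>Nice post! <a href="http://example-seo-site.com/page">my site</a></p>
<p><a href="/about">About</a>
   <a href="http://another-target.net/">another commenter</a></p>
'''
parser = ExternalLinkParser("myblog.com")
parser.feed(html)
print(sorted(parser.external))
```

Every one of those external URLs is a site someone already managed to drop a link on, which is why comment/guestbook scrapes are such a rich source of new targets.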
The other benefit of using PI is that it will put sites into multiple engine files, as some sites match more than one engine. For example, blog comment pages can also be WordPress sites, and Xenforo sites can be both forum and URL-redirect targets.
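Conceptually, the identification step just checks a page against every engine's footprints and files the URL under each one that matches. A minimal sketch (the footprint table below is invented for illustration — real engine definitions live in the GSA engine files):

```python
# Hypothetical footprint table; a URL can legitimately match several engines.
ENGINE_FOOTPRINTS = {
    "Blog Comment": ["leave a comment", "powered by wordpress"],
    "WordPress":    ["powered by wordpress", "wp-content"],
    "Forum":        ["xenforo"],
    "URL Redirect": ["xenforo", "redirect"],
}

def identify_engines(page_html):
    """Return every engine whose footprint appears in the page source."""
    text = page_html.lower()
    return [engine for engine, marks in ENGINE_FOOTPRINTS.items()
            if any(mark in text for mark in marks)]

page = '<div class="wp-content">Powered by WordPress ... Leave a comment</div>'
print(identify_engines(page))
```

Here the same page lands in both the Blog Comment and WordPress site lists, which is exactly the multi-engine filing behavior described above.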
In fact, a lot of the sites you end up scraping can become redirects. Even sites like youtube.com, bing.com, xhamster.com, etc. can all be turned into redirect URLs with GSA SER.