♦♦♦ Beta Testers Needed For Stand Alone Identify And Sort In ♦♦♦

Hey everyone,

I'm looking for around 5 people to beta test the standalone identify and sort in tool before its final release.
Here's a screenshot:

Some features:
- Ability to run multiple identify and sort-in campaigns simultaneously.
- Identify and sort in based on keywords in metadata, <title>, visible text, etc.
- Ability to monitor a folder in real time, so as your external scraper is scraping (e.g. Scrapebox/GScraper), PI will automatically read from the output file and identify and sort in the URLs (a rough sketch of this kind of monitoring is shown after this list).
- Easily export projects to .SL
- Identify custom platforms
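For anyone wondering what the real-time folder monitoring might look like under the hood, here is a minimal sketch in Python. It is not PI's actual implementation; the watched path and the identify() stub are hypothetical placeholders. The idea is simply to tail the scraper's output file and hand each newly appended URL to whatever does the identification.

```python
import os
import time

WATCH_FILE = r"C:\scraper\output\harvested_urls.txt"  # hypothetical path to the scraper's output file

def identify(url: str) -> None:
    """Placeholder for the per-URL identify-and-sort-in step."""
    print("identified:", url)

def follow(path: str, poll_seconds: float = 2.0):
    """Yield lines appended to `path` as they arrive, like `tail -f`."""
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        f.seek(0, os.SEEK_END)            # start at the end; only new output matters
        while True:
            line = f.readline()
            if line:
                yield line.strip()
            else:
                time.sleep(poll_seconds)  # wait for the scraper to write more

if __name__ == "__main__":
    for url in follow(WATCH_FILE):
        if url:
            identify(url)
```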
I'm only looking for people that have the time to really test and provide feedback on bugs, improvements, etc. If you don't have the time, please don't apply.
If you're interested, please post in this thread and list what operating system you're able to test it on. Once I have 5 people picked out, I'll start a private chat with the 5 selected members and we can discuss improvements/bugs there and this thread will be closed.
Thanks!
Comments
I currently have GScraper running on a dedi from GreenCloudVPS. I would like to be a beta tester for this tool. I usually split my scraped URLs into dummy projects to sort them automatically, but I wanted to see the difference in link count if I use your software.
Looking forward to beta testing this.
PS: While watching the video, I noticed that there's no option to use proxies. Might as well add it as one of the features.
How much will this cost?
Perhaps there could be some discounts for owners of a GSA SER license?
Alright!
Perhaps those who have other GSA software could get a deeper discount?
With PR updates never happening again, this would be a great idea.
I downloaded the trial. It seems easy enough to use, BUT with only a small list (61K URLs) it timed out only a third of the way through. The program gives the impression that restarting it will resume the job, but it looks like it went back to the beginning. If I want to process the rest of that file, do I need to mess about with the input file to delete the URLs that have already been processed?
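Until resuming is confirmed to work, one possible workaround is to diff the input file against whatever PI has already identified and feed the remainder back in. A minimal sketch, assuming hypothetical file names for the 61K input and for the URLs processed so far:

```python
def load_urls(path: str) -> set:
    """Read a text file of URLs, one per line, into a set."""
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        return {line.strip() for line in f if line.strip()}

already_done = load_urls("identified_so_far.txt")    # hypothetical: URLs PI already processed
remaining = [u for u in load_urls("input_61k.txt")   # hypothetical: the original input list
             if u not in already_done]

with open("input_remaining.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(remaining))

print(len(remaining), "URLs left to process")
```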
E.g. sometimes I take the trackback or blog comment verified URL lists from GSA SER and scrape the outbound links on those pages (as we know, there are often many outbound links on them), then I send them to the Scrapebox link extractor addon to scrape all the outbound links. The result is a list of 500k or more potential websites, which I send to GSA SER.
If GSA Platform Identifier could do this (scrape outbound links from verified URLs -> filter/identify the URLs -> send them to GSA), I think it would save us work (no need to buy more proxies for scraping URLs) and be simple and fast. (Just an idea.)
Scrapebox link extractor
This way I never have to buy many proxies to scrape new URLs.
What do you think about it?
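To make the outbound-link step of that workflow concrete, here is a minimal sketch using requests and a simple href regex in place of the Scrapebox link extractor addon. The verified URL is a hypothetical example, and this is only an illustration of the idea, not how PI or Scrapebox actually do it.

```python
import re
import requests
from urllib.parse import urlparse

# crude pattern for absolute http(s) links in an href attribute
HREF_RE = re.compile(r'href=["\'](https?://[^"\'\s>]+)', re.IGNORECASE)

def outbound_links(page_url: str, timeout: int = 10) -> set:
    """Fetch a page and return only links pointing to other hosts."""
    try:
        html = requests.get(page_url, timeout=timeout).text
    except requests.RequestException:
        return set()
    own_host = urlparse(page_url).netloc
    return {u for u in HREF_RE.findall(html)
            if urlparse(u).netloc and urlparse(u).netloc != own_host}

if __name__ == "__main__":
    verified = ["http://example.com/blog/post-1"]   # hypothetical verified URLs from SER
    collected = set()
    for url in verified:
        collected |= outbound_links(url)
    print("collected", len(collected), "outbound links")
```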