Yeah, I know some of you guys have been waiting for me to come out with this. Now that CB is here and relatively stable, I want to shift the conversation to a feature I think will solve a LOT of the big issues we're having with submission quality.
SER is essentially a plain-text database of lists segmented into platforms. Something I've noticed is that the larger my database gets (now 2.5 million... yikes!), the more threads get tied up filtering the low-PR targets out of my global list on each run. Sometimes half an hour goes by before I get anything higher than a PR2.
So here's a no-brainer: what we need is a way to sort and send selections of links to our own separate, named lists. Lists that are totally separate from the global list, because otherwise SER just churns through hundreds of thousands of low-PR links in the global list looking for high enough PRs. In my case, it's sometimes an hour before it stumbles on a few decent targets. At least, that's what it's like with a big fatty-mcfatterson list.
So is it just me, or does the PR filtering process seem grossly redundant? It wastes all of those threads rechecking PRs from the global list, which SER should already have recorded when it first identified the platform (or at least on the first submission attempt).
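To make the point concrete, here's a rough sketch of the caching behavior I'm asking for: record the PR the first time a target gets checked, then reuse it on every later run instead of burning a thread on a recheck. The file name and the PR-lookup callback are just placeholders I made up for illustration, not anything SER actually exposes.

```python
# Hypothetical sketch: check PR once per URL, then reuse the stored value.
import json
import os

CACHE_FILE = "pr_cache.json"  # made-up file name for the example

def load_cache(path=CACHE_FILE):
    """Load the URL -> PR map from disk, or start empty."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def get_pr(url, cache, fetch_pr):
    """Return the cached PR for url; call the expensive lookup only on a miss."""
    if url not in cache:
        cache[url] = fetch_pr(url)  # the slow network check happens once
    return cache[url]

def save_cache(cache, path=CACHE_FILE):
    """Persist the map so the next run skips every recheck."""
    with open(path, "w") as f:
        json.dump(cache, f)
```

With something like this, the second run against the same 2.5 million targets would be pure dictionary lookups instead of thirty minutes of rechecking.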
I've been wondering for a while now... why can't I specifically target high-PR links instead of having to filter through the whole global list all over again each time? Why not use a real database like every other submission tool instead of churning through a plain-text grab-bag of platform links?
I love the platform segmentation, but now how about quality segmentation?
If I want to submit to my highest-quality links efficiently, the only way I know to do this is to filter every one of my global list segments with Scrapebox and set up an entirely separate instance of SER to churn through it.
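That Scrapebox round-trip is basically a one-time split by PR threshold, which is trivial once the PRs are already recorded. A minimal sketch of the "named quality list" idea, assuming a URL-to-PR map like the cache above (the threshold and names are arbitrary):

```python
# Hypothetical sketch: split a plain-text platform list into a high-quality
# named list and a leftover list, using already-recorded PR values.
def split_by_pr(links, pr_of, min_pr=4):
    """Split links into (high, low) around a PR threshold.

    links:  iterable of URLs from one platform segment
    pr_of:  dict mapping URL -> recorded PR (unknown URLs count as PR 0)
    """
    high = [u for u in links if pr_of.get(u, 0) >= min_pr]
    low = [u for u in links if pr_of.get(u, 0) < min_pr]
    return high, low
```

Run that once per platform segment, write `high` out as its own named list, and a project pointed at it never touches the low-PR churn again. That's all I'm really asking SER to do internally.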