OK, I'm quite pissed, so instead of creating a rage thread I'll channel the rage into something useful.
For the last couple of days I've been scraping contextual URLs for my latest project. At ~95% scraping progress my computer crashed. After turning it back on, I realised that the SER scraper doesn't save the URLs to your custom .txt file in real time; it only saves once the run is finished or aborted.
So how about making it save every ~1-5% of progress? That way, instead of losing the whole list, you'd only lose the last 5% or so.
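To make the idea concrete, here's a minimal sketch of the behaviour I'm asking for: a buffered writer that appends scraped URLs to the output file every N results instead of once at the end, so a crash costs at most the last unflushed batch. This is just an illustration in Python (the class and parameter names are mine, nothing to do with SER's internals):

```python
import os

class IncrementalURLWriter:
    """Appends URLs to a .txt file in batches so a crash loses at most one batch."""

    def __init__(self, path, flush_every=100):
        self.path = path
        self.flush_every = flush_every  # e.g. roughly 1-5% of the expected total
        self.buffer = []

    def add(self, url):
        self.buffer.append(url)
        if len(self.buffer) >= self.flush_every:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # Append and fsync so the batch survives a hard crash or power loss.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write("\n".join(self.buffer) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.buffer.clear()
```

Usage would be: call `add()` for each scraped URL, then one final `flush()` when the run finishes or is aborted, to write out the last partial batch.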