Diji1 has answered your question perfectly. Unfortunately, I believe it's against the forum rules to talk about other forums here, so I cannot do that. It's not too hard to find, though.
Thank you for the compliments. I'll keep it updated as often as I can. Happy scraping! I'm sure you'll find tons of untapped sites, especially with the other languages.
Yes, that is on my to-do list. However, not many platforms have a "foreign" version, so it may be quite limited. I'll make sure to get some out after I add a few more new languages (which are coming very soon).
@Yurium: Just using the articles footprints, SB crashed loading all the keywords (memory error).
What I did was split the articles footprints and the (EN) keywords into 10 files, each with the same number of entries.
Then I opened 10 instances of Scrapebox, loaded the keywords-1 list, added the footprints-1 list and scraped. Rinse and repeat for the other nine (a scripted version of this split is sketched below).
Whilst Scrapebox is a "Swiss Army tool" for many things, it's crap with large lists and URL scraping. I'd recommend paying the money for GScraper: it does the same thing as above, only at speeds that have to be seen to be believed, and with much larger lists. It leaves Scrapebox in the dust as far as scraping URLs goes.
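If you'd rather script the splitting step than do it by hand, something like the following works. This is a minimal sketch, not from the thread: the file names, the 10-way split and the UTF-8 encoding are all my assumptions.

```python
# Split a large keywords/footprints file into 10 roughly equal chunks
# so each Scrapebox instance can load one part without running out of
# memory. File names, chunk count and encoding are assumptions.
from pathlib import Path

def split_file(src: str, parts: int = 10) -> None:
    lines = Path(src).read_text(encoding="utf-8").splitlines()
    size = -(-len(lines) // parts)  # ceiling division
    for i in range(parts):
        out = Path(f"{Path(src).stem}-{i + 1}.txt")
        out.write_text("\n".join(lines[i * size:(i + 1) * size]),
                       encoding="utf-8")

split_file("keywords.txt")    # -> keywords-1.txt ... keywords-10.txt
split_file("footprints.txt")  # -> footprints-1.txt ... footprints-10.txt
```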
@OnkelMicha Some of our lists have millions of keywords, so importing and scraping ALL of them in one go would cause certain programs to crash. Try splitting them up into 10 smaller lists and scraping them one by one, as Diji1 recommended. That would be a lot more efficient than scraping an entire list in one go.
@gtsurfs Yes, you can, but that would be a lot less efficient than using a dedicated scraper, as SER's main purpose is to create links, not to scrape. I highly recommend getting a scraper to use alongside SER; if you know what you're doing, you can easily earn the money back. If you really are on a tight budget, then I guess you could do that. Just load a list up when creating a project.
@AlexR My footprint list consists of the ones I've managed to collect over the years (my personal footprint list) as well as the ones in SER. There are a lot more footprints out there than the ones included in SER, such as those for edu/gov sites.
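For anyone unfamiliar with the mechanics: a footprint is a search-operator string that gets paired with each keyword to build the queries a scraper sends to Google. Here is a rough sketch of that pairing; the footprints and keywords shown are generic illustrations, not entries from the list being sold here.

```python
# Combine every footprint with every keyword into search queries.
# These footprints are generic examples (not from the seller's list).
from itertools import product

footprints = [
    'site:.edu inurl:blog "post a comment"',   # edu-style footprint
    '"powered by wordpress" "leave a reply"',  # platform footprint
]
keywords = ["gardening", "home brewing"]

queries = [f"{fp} {kw}" for fp, kw in product(footprints, keywords)]
print("\n".join(queries))  # 2 footprints x 2 keywords = 4 queries
```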
@Cherub I imported some of the foreign-language KW lists into GScraper, and even after converting them to UTF-8 I am getting a lot of garbled characters. Any suggestions? Thanks
@sweeppicker Why did you convert them to UTF-8? I've already encoded the non-ISO languages for scraping on Google, so you can just load those files up directly; they are equivalent versions. The reason you are seeing garbled characters is that your text editor does not have the proper language pack to display them.
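If you want to confirm that a file is intact and the garbling is purely a display problem, decoding the raw bytes directly is one way to check. A minimal sketch; the file name and the candidate encodings are assumptions, and a clean decode is only a heuristic.

```python
# Decode the raw bytes directly to see whether the file content is
# intact regardless of what your editor displays. The file name and
# the encodings to try are assumptions for illustration.
from pathlib import Path

raw = Path("keywords.txt").read_bytes()  # hypothetical foreign-language list
for enc in ("utf-8", "utf-16", "cp1251"):
    try:
        text = raw.decode(enc)
        print(f"{enc}: decodes cleanly; starts with: {text[:40]!r}")
        break
    except UnicodeDecodeError:
        print(f"{enc}: failed to decode")
```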
@joland I process all orders manually to prevent any leaking of the lists. Please be patient; as promised, it'll be sent out WITHIN 24 hours.
Comments
I would like to see a Korean KW list, if possible.
Don't make me wait so long...
Bought your list this morning; still waiting to receive it.
Thank you.
An invoice? For what?
Just sent out the keyword lists to all buyers. Will add a Korean keyword list very soon.