This is an app that takes a URL from the old web and looks for it on SAFEnetwork. OK, it’s not there, right? But wait… it then offers to fetch it from the old web and store it on SAFEnetwork, where it will then be available to anyone else who tries to access it (using this app).
It’s not trivial - for example, who pays to store uploaded content?
Also, any dynamic content would potentially be quickly out of date, so this might need to be detectable, and an option provided to have it updated (also to browse earlier stored versions).
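One way that “is this copy stale?” check could work is to hash the stored copy against a fresh fetch, keeping earlier versions around for browsing. This is just a sketch of the idea; the class and names are made up for illustration, not an existing API:

```python
# Illustrative sketch: detect stale copies by content hash and keep
# earlier stored versions. All names here are assumptions, not a real API.

import hashlib

def content_key(data: bytes) -> str:
    """Identify a fetched snapshot by its SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()

class VersionedCopy:
    def __init__(self):
        self.versions = []            # list of (key, data), oldest first

    def store(self, data: bytes) -> bool:
        """Store a fresh fetch; returns True if it was a new version."""
        key = content_key(data)
        if self.versions and self.versions[-1][0] == key:
            return False              # unchanged -> stored copy is current
        self.versions.append((key, data))
        return True

copy = VersionedCopy()
copy.store(b"<html>v1</html>")    # first import
copy.store(b"<html>v1</html>")    # unchanged, detected as up to date
copy.store(b"<html>v2</html>")    # dynamic content changed -> new version
```

The version list doubles as the “browse earlier stored versions” feature: each entry is an immutable snapshot keyed by its hash.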
(BTW, the inspiration for this comes from http://twitter.com/Sci_hub, which does this for scientific papers that have been pulled behind prohibitively expensive paywalls. Most papers are already available, but when one isn’t, Sci-Hub automatically goes and finds it using keys donated by people who happen to have access and want everyone to have it for free.)
The real open web. No more “walled gardens”. This could be really cool.
Just to be a little more subversive: I could see some browser/SAFE plug-ins for a WikiLeaks-type upload (or download, depending on how you view it) too. But the places where those “secret” things are kept would never allow plug-ins of this kind.
Wouldn’t it be possible for outproxies to be established? Of course a SAFE exit node could be seized and attacked much in the same way Tor’s exit nodes are, but that hasn’t stopped them so far. The greater level of security and the Sybil immunity make it, IMO, a much better option to do this with SAFE.
As for payment for your original proposal, it could be pooled together by interested parties. Say I want to import foopocks.com. I post my desire to have it imported simply by adding the URL to the app. The app informs me of the estimated cost, and I tell it the maximum I’m willing to pay. As others add the same URL, all parties are informed of the growing pool of investors, how much each would have to pay, and an estimate of how many more requesters need to join the pool before the share falls within what each user can afford.
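That pool estimate is simple arithmetic. Here’s a rough sketch, assuming an equal split among pledgers; the function name, the pledge amounts, and the equal-split rule are all my assumptions, not anything defined by SAFE:

```python
# Hypothetical sketch of the pooled-funding estimate described above.
# Equal-split rule and all numbers are illustrative assumptions.

def pool_status(estimated_cost, pledges):
    """pledges: list of per-user maximum contributions (e.g. in safecoin)."""
    per_head = estimated_cost / len(pledges)          # equal split
    if all(p >= per_head for p in pledges):
        return {"funded": True, "share": per_head, "needed": 0}
    # Estimate how many investors in total (each pledging at least the
    # smallest current pledge) would bring the equal share within reach.
    floor_pledge = min(pledges)
    n = len(pledges)
    while estimated_cost / n > floor_pledge:
        n += 1
    return {"funded": False, "share": per_head, "needed": n - len(pledges)}

# 90 coins needed; three people willing to pay at most 20, 30, and 25:
status = pool_status(90.0, [20.0, 30.0, 25.0])
# the 30-coin equal share exceeds the 20-coin pledge, so the app would
# report roughly two more requesters needed before everyone can afford it
```

If the estimated cost drops (say the site turns out smaller than guessed), the same function just reports the pool as funded.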
So it’s simple:

1. You open the app.
2. It tells you to paste the URL.
3. It submits it to the decentralized database.
4. It informs you of the URL’s import status (how many others have requested it, whether it has already been imported, which site elements need updating, how much it would cost to import, etc.).
5. It gives you the option of chatting/messaging the others who requested the import, so you can negotiate how much each will pay.
6. The app then gathers the total coin necessary to import the site from the investor pool.
7. It downloads the site and informs all interested parties.

It could also be designed to take a tiny portion of the transaction to compensate the developer.
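The steps above could be sketched roughly like this. Everything here is a stand-in assumption: the class, the 1% developer fee, and the cost figure are made up for illustration, and the actual fetch/PUT/notify step is only a comment:

```python
# Minimal sketch of the request -> pool -> import flow described above.
# The database, cost estimator, and fetcher are stand-ins, not SAFE APIs.

DEV_FEE = 0.01  # hypothetical 1% cut to compensate the developer

class ImportRequest:
    def __init__(self, url, estimated_cost):
        self.url = url
        self.estimated_cost = estimated_cost
        self.pledges = {}            # user -> max amount willing to pay
        self.imported = False

    def pledge(self, user, max_amount):
        self.pledges[user] = max_amount

    def status(self):
        return {"url": self.url,
                "requesters": len(self.pledges),
                "pledged": sum(self.pledges.values()),
                "cost": self.estimated_cost * (1 + DEV_FEE),
                "imported": self.imported}

    def try_import(self):
        total_needed = self.estimated_cost * (1 + DEV_FEE)
        if not self.imported and sum(self.pledges.values()) >= total_needed:
            # here the app would gather the coins, fetch the site via an
            # outproxy, PUT it to SAFE, and notify all requesters
            self.imported = True
        return self.imported

req = ImportRequest("http://foopocks.com", 100.0)
req.pledge("alice", 60.0)
req.try_import()          # pool short of cost + fee, nothing happens
req.pledge("bob", 45.0)
req.try_import()          # pool now covers it -> import proceeds
```

The negotiation step (5 above) would just adjust the pledge amounts before `try_import` is attempted again.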
Could this copying of “walled garden” data be a way to port people over from existing apps like facebook, etc, until they got to critical mass and full adoption happened? My mind is in brainstorm mode… Have to think on this one a bit.
Like I said in the other thread, this could be some huge marketing if we found a way to “sync things up”. I would be concerned with anonymity though. It’s something we would have to think long and hard about. I’m willing to put some brain power behind the idea though, if not actual code (but something like this might persuade me to put fingers to keyboard and put out some code).
Hmm, I’m not sure what you mean. Anything a user does on the clearnet is subject to surveillance. If they want to upload a clearnet site to SAFE, they need only apply for it via the app suggested above. Once all of the previously stated conditions are met, the app proceeds to find a viable outproxy to grab the data from the clearnet. The outproxy assumes all of the risk but is of course protected by plausible deniability. So basically anonymous users pay, and the outproxy stores the copy of the clearnet site for them. Am I missing something? Kinda likely, considering how I torture my brain.
Funding said server would be a thing. Initially I thought about having the external server do a full crawl, but as @Tonda notes, you’d have no way of knowing how much content you’d get (and therefore cost).
So maybe just discrete URLs? (Then of course there is the question of images… you could leave them to be interpreted by the browser, but then that’s not SAFE.) Which is another cost concern. I guess you’d have to aggregate the PUT cost across all requests…?
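For the cost side, a back-of-envelope estimate could count PUTs from the byte sizes of the page plus its assets, then divide across requesters. The chunk size and per-PUT price below are pure assumptions for illustration, not real SAFE figures:

```python
# Rough sketch: estimate PUT cost for a discrete URL plus its images,
# then aggregate it across all requests. Constants are assumptions.

import math

CHUNK_SIZE = 1024 * 1024     # assume 1 MiB of data per PUT
PRICE_PER_PUT = 0.001        # hypothetical coin price per PUT

def estimate_puts(resource_sizes):
    """resource_sizes: byte sizes of the page and each fetched asset."""
    return sum(math.ceil(size / CHUNK_SIZE) for size in resource_sizes)

def estimate_cost(resource_sizes, requesters):
    total = estimate_puts(resource_sizes) * PRICE_PER_PUT
    # aggregate the PUT cost across all requests, as suggested above
    return total, total / max(requesters, 1)

# a 40 KB page with two images (300 KB and 1.5 MB), three requesters:
total, per_user = estimate_cost([40_000, 300_000, 1_500_000], 3)
```

Because the resource list is only known after a fetch, the app would probably do a cheap HEAD pass first to gather sizes before quoting anyone a price.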
I need to look at the API examples to see what’s what these days. I haven’t had a look since September, but I’ve been meaning to get back on the horse as the MVP approacheth.
I was wondering: would these sites need to reside with an owner? I initially thought yes, but then: who can prove they own any site?
Also: anonymity, as @chadrickm noted. To what level should the app strip out scripts (tracking… ad-blocking)? If we just GET the raw site, it’ll come with all that baggage. Though I suppose that’s no different from just looking at it in your browser normally.
But if the point here is keeping it secure (and why else would you be checking this via safe), this would be needed.
HMMMMmmm. Ha. Interesting though. (Sorry for the train-of-thought post.)
Thanks for the ping, @happybeing; you’ve inspired me to get back at reading API examples!
Well worth it then! And thanks for your comments. Lots of issues to think about, aren’t there? I think it could start very simple - but with some ideal goals in the back of the mind - and then listen to feedback, wants, etc.
I’ve been playing with web scrapers recently, so I know there are some really good tools that would make it easy to prototype this aspect.
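Even without those tools, the “discrete URL plus its images” fetch is only a few lines with the Python standard library. This is a toy prototype, not production scraping (no error handling, no robots.txt courtesy), and the URL in the comment is a placeholder:

```python
# Toy prototype: fetch one page and collect the absolute URLs of its
# images, using only the Python standard library.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class ImageFinder(HTMLParser):
    """Collects absolute img src URLs from a parsed HTML page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.images.append(urljoin(self.base_url, value))

def fetch_page_and_assets(url):
    """Fetch a discrete URL and return (html, list of image URLs)."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    finder = ImageFinder(url)
    finder.feed(html)
    return html, finder.images

# usage: html, images = fetch_page_and_assets("http://example.com/")
```

Each returned image URL could then be fetched the same way and fed into the PUT-cost estimate before anything is stored.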
I ran a web scraper back in the day from Carnegie Mellon (a friend of mine went there). We were essentially pulling sites and extracting specific data after parsing (something like what Google was doing at the time). Eventually the site we started with blocked us because they thought we were attacking them. My friend didn’t get into trouble, but we were worried for a time. Fun days!