Interface between the SAFE network and standard Internet

I’m wondering what the prospects are, or will be, for any kind of interface between the SAFE network and traditional WWW/Internet-based communications. Once the SAFE network gets off the ground and becomes widely adopted, it seems there would need to be some way of passing data between old and new. Can this be done securely?


I’ve been curious about a good way to port info easily as well. But I know there will be browser extension(s) to view public data on the SAFE network from the legacy web. A Firefox browser extension, I believe.


Hey Nigel,

Thanks for that, it’s good to know :wink:


One could serve public and personal data from MaidSafe by sharing it over WWW or FTP and such.

Securely? Yes: HTTPS, or FTP with TLS/SSL. But why, and what’s the point?
People are designing a new protocol that addresses many problems at once, and now you’re asking how to access it in a legacy way.

Hi Janitor,

My question was really about methods of (relatively) easily importing data and apps etc. from the existing vast web / network data onto the SAFE network. I have no interest in hybridising for the sake of it, clearly the aim would be to migrate everything over to SAFE as much as possible.


Exactly. I’m not at all interested in the legacy system either, but that doesn’t mean the legacy system doesn’t have good data worth migrating. And I think the extensions are a good way to introduce people from the legacy system to SAFE and make that transition easier for them.


Hi CMT,

Okay, then security isn’t really important, since you’re pulling data from the Web and posting it to the SAFE network; if you’re getting the right data, nothing stands between your Web/FTP/etc. client and the SAFE network.

If you just wanted to create public repos of files, you could write a three-line shell script to wget the data, PUT it on the SAFE network, and return or share its SAFE address (or keep a simple DB of URL, filename and SAFE address).
Obviously that would be slow and take a long time, so a better program that could command and control several nodes to do this in parallel would be beneficial for multi-TB files or data sets.
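For what it’s worth, a minimal sketch of that wget-and-PUT idea might look like the following. Note the `safe` CLI name, its `files put` subcommand, and the `mirror-db.csv` file are all assumptions for illustration, not a real published interface; substitute whatever client your SAFE node actually exposes.

```shell
#!/bin/sh
# Sketch only: `safe files put` is a hypothetical upload command that is
# assumed to print the resulting SAFE address on stdout.

# mirror_url URL: fetch a file from the legacy web, PUT it on the SAFE
# network, and append "URL,filename,SAFE address" to a simple CSV "DB".
mirror_url() {
  url="$1"
  file=$(basename "$url")
  wget -q -O "$file" "$url"            # fetch from the legacy web
  addr=$(safe files put "$file")       # hypothetical: upload, capture address
  printf '%s,%s,%s\n' "$url" "$file" "$addr" >> mirror-db.csv
}
```

To crudely parallelize, you could feed a URL list through something like `xargs -P` so several transfers run at once, which is roughly the "several nodes in parallel" idea above.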
