Science on the SAFE Network: CERN, SETI, protein folding problems

Just saw this story being promoted by Professor Brian Cox. I can’t wait until the full storage and computing power of the world is available to the world on SAFEnet.

I wonder how donating resources would work, though. I could donate Safecoin, but it would be nice to donate processing and space directly to these projects on the SAFENet, somehow bypassing the coin. A charity mode, perhaps, where the coins go directly to the organisation and enable it to carry out its work? Would that work? The general public would download a “charity farmer”: the processing and storage of their computer is used on behalf of the organisation, and any coins earned go to CERN, for example, which could then spend them on processing and data services. What do you think?

Edit: the projects would use processing and storage when they need it, instead of relying on today’s downloadable apps being left running all the time.
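For what it’s worth, here is a rough sketch of how a “charity farmer” configuration could look: the vault farms Safecoin as normal, but some or all of the reward is paid out to an organisation’s wallet instead of the farmer’s own. Every name and number here (`FarmerConfig`, the `safe://` addresses, the split function) is an invented assumption for illustration, not an actual SAFE API.

```rust
// Hypothetical sketch of a "charity farming" configuration: the vault earns
// Safecoin as usual, but rewards are redirected to an organisation's wallet.
// All names and addresses are invented for illustration only.

struct FarmerConfig {
    /// Address that receives the donated share of farming rewards.
    reward_address: String,
    /// Fraction of rewards to redirect, 0.0..=1.0 (1.0 = donate everything).
    donation_share: f64,
}

/// Split a farming reward between the charity address and the farmer's own wallet.
fn split_reward(cfg: &FarmerConfig, own_address: &str, reward: f64) -> Vec<(String, f64)> {
    let donated = reward * cfg.donation_share;
    let kept = reward - donated;
    vec![
        (cfg.reward_address.clone(), donated),
        (own_address.to_string(), kept),
    ]
}

fn main() {
    // Example: donate 100% of farming rewards to a (hypothetical) CERN wallet.
    let cfg = FarmerConfig {
        reward_address: "safe://cern-donations".to_string(),
        donation_share: 1.0,
    };
    for (addr, amount) in split_reward(&cfg, "safe://my-wallet", 2.5) {
        println!("pay {:.2} Safecoin to {}", amount, addr);
    }
}
```

A “charity mode” in the vault UI could then simply be this configuration with `donation_share` set to 1.0.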

7 Likes

Hmmm - if someone made a decentralized computing app :thinking: - that app could be built so that you set a rate at which you are willing to rent out your computation power, or dedicate your resources to one user / pool of users. *just-thinking-out-loud-here*

…or, like Bitcoin mining, it could be set up so that you simply donate resources - and if someone attaches a fee, their job gets computed first (with the option to dedicate resources only / primarily to a certain user, since you aren’t earning anything anyway).
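As a toy illustration of that scheduling idea - fee-paying jobs run first, and among the free ones a “dedicated” user (say, a science project you want to donate to) takes priority - here is a minimal sketch. The `Job`/`Scheduler` types and the priority rule are assumptions, just thinking out loud in code.

```rust
// Sketch: a priority queue where jobs with higher fees run first, and jobs
// from a dedicated user beat other zero-fee jobs. All types are illustrative.

use std::cmp::Ordering;
use std::collections::BinaryHeap;

struct Job {
    owner: String,
    fee: u64, // fee offered (0 = pure donation)
}

struct PrioritizedJob {
    priority: (u64, u8), // (fee, dedicated flag) - higher runs first
    job: Job,
}

impl PartialEq for PrioritizedJob {
    fn eq(&self, other: &Self) -> bool { self.priority == other.priority }
}
impl Eq for PrioritizedJob {}
impl Ord for PrioritizedJob {
    fn cmp(&self, other: &Self) -> Ordering { self.priority.cmp(&other.priority) }
}
impl PartialOrd for PrioritizedJob {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
}

struct Scheduler {
    dedicated_user: Option<String>,
    queue: BinaryHeap<PrioritizedJob>,
}

impl Scheduler {
    fn push(&mut self, job: Job) {
        let dedicated = self
            .dedicated_user
            .as_deref()
            .map_or(0, |u| (u == job.owner.as_str()) as u8);
        self.queue.push(PrioritizedJob { priority: (job.fee, dedicated), job });
    }
    fn next(&mut self) -> Option<Job> {
        self.queue.pop().map(|p| p.job)
    }
}

fn main() {
    let mut s = Scheduler { dedicated_user: Some("CERN".into()), queue: BinaryHeap::new() };
    s.push(Job { owner: "alice".into(), fee: 0 });
    s.push(Job { owner: "CERN".into(), fee: 0 });
    s.push(Job { owner: "bob".into(), fee: 5 });
    while let Some(j) = s.next() {
        println!("run job from {} (fee {})", j.owner, j.fee);
    }
}
```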

2 Likes

I would have thought the SAFENetwork providing a distributed, secure data store would be beneficial for this. Both inputs and outputs could be stored on the SAFENetwork, which would allow a simple client to distribute compute.
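Something like this is all the client would need to be: pull an input from the network, compute, push the output back. The `NetworkStore` below is just a HashMap standing in for SAFE storage, and the addresses and placeholder computation are made up for illustration.

```rust
// Sketch of a trivially simple compute client that treats the network as a
// shared data store: fetch an input chunk, process it, store the output.
// The HashMap and addresses are stand-ins, not real SAFE APIs.

use std::collections::HashMap;

type NetworkStore = HashMap<String, Vec<u8>>;

/// Fetch an input by address, run the (project-specific) computation,
/// and store the output under a derived address.
fn process_work_unit(store: &mut NetworkStore, input_addr: &str) -> Option<String> {
    let input = store.get(input_addr)?.clone();
    // Placeholder computation; a real client would run the project's kernel here.
    let output: Vec<u8> = input.iter().map(|b| b.wrapping_add(1)).collect();
    let output_addr = format!("{input_addr}/result");
    store.insert(output_addr.clone(), output);
    Some(output_addr)
}

fn main() {
    let mut store = NetworkStore::new();
    store.insert("safe://cern/work-unit-1".into(), vec![1, 2, 3]);
    if let Some(addr) = process_work_unit(&mut store, "safe://cern/work-unit-1") {
        println!("result stored at {addr}");
    }
}
```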

5 Likes

Absolutely. As soon as SAFENet is launched, the data storage alone would be an amazing contribution to these projects. An app would need to be written to…

1 - farm space and generate coins
2 - distribute user compute outside the SAFENet to solve problems
3 - distribute Safecoins to clients that have data to upload

Of course, feeding your app with Safecoin should also be an option. A rough sketch of what such an app might look like is below.
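Here is a compact sketch of the three-part app outlined above: farm space to earn coin, hand compute work out to volunteers, and pay coin to clients who have data to upload. Every type and number here is a placeholder assumption, not an actual SAFE API.

```rust
// Hypothetical science app: earns Safecoin by farming, dispatches compute
// work, and reimburses clients' upload (PUT) costs from its balance.

struct ScienceApp {
    balance: f64,                     // Safecoin earned by farming
    pending_work: Vec<String>,        // compute jobs waiting for volunteers
    upload_queue: Vec<(String, f64)>, // (client, upload cost to reimburse)
}

impl ScienceApp {
    // 1 - farm space and generate coins
    fn record_farming_reward(&mut self, reward: f64) {
        self.balance += reward;
    }
    // 2 - distribute compute work to a volunteer outside the network
    fn next_work_unit(&mut self) -> Option<String> {
        self.pending_work.pop()
    }
    // 3 - pay Safecoin to clients that have data to upload
    fn fund_uploads(&mut self) {
        while let Some((client, cost)) = self.upload_queue.pop() {
            if self.balance < cost {
                self.upload_queue.push((client, cost));
                break;
            }
            self.balance -= cost;
            println!("reimbursed {cost} to {client}");
        }
    }
}

fn main() {
    let mut app = ScienceApp {
        balance: 0.0,
        pending_work: vec!["work-unit-1".into()],
        upload_queue: vec![("client-a".into(), 1.5)],
    };
    app.record_farming_reward(2.0);
    if let Some(job) = app.next_work_unit() {
        println!("dispatching {job}");
    }
    app.fund_uploads();
}
```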

2 Likes

Yes, this was one of the aspects that attracted me to Project SAFE. The only problem is the type of data involved. It may not be much of a concern for CERN’s budget, but a lot of HPC data is something you would never want to spend a precious resource like SAFE storage on. What I mean is that although the data is important, it only has temporary value, and there is a lot of it; you might only want to keep a reduced set of it forever. This is why I really hope MaidSafe and the community can agree on a simple, very low-cost method for storing “temporary” private unique data - something that costs much less than standard immortalized data, since it isn’t intended to be kept for long. Neo mentioned something in another thread about how to tag temporary and unique data. One thought I had was adding a “/_tmp” folder to the SAFE-NFS that would handle data differently: a different PUT cost, or some kind of reimbursement once the data is deleted. There are a lot of security concerns and other details that would need to be planned for carefully, though.
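To make the economics concrete, here is a back-of-the-envelope sketch of what a discounted temporary PUT with a deletion refund might look like. The rates, names and refund rule are pure guesses for illustration, not anything MaidSafe has proposed.

```rust
// Sketch of "/_tmp" pricing: temporary data is PUT at a discounted rate and
// part of the cost is refunded when the data is deleted. All numbers are
// invented assumptions to illustrate the idea.

const CHUNK_SIZE: u64 = 1024 * 1024; // 1 MiB

struct PutPricing {
    permanent_cost_per_chunk: f64, // Safecoin per chunk, kept forever
    temp_cost_per_chunk: f64,      // discounted rate for /_tmp data
    temp_refund_fraction: f64,     // fraction refunded on deletion
}

fn chunks_for(bytes: u64) -> u64 {
    (bytes + CHUNK_SIZE - 1) / CHUNK_SIZE
}

fn temp_put_cost(p: &PutPricing, bytes: u64) -> f64 {
    chunks_for(bytes) as f64 * p.temp_cost_per_chunk
}

fn deletion_refund(p: &PutPricing, bytes: u64) -> f64 {
    temp_put_cost(p, bytes) * p.temp_refund_fraction
}

fn main() {
    let p = PutPricing {
        permanent_cost_per_chunk: 1.0,
        temp_cost_per_chunk: 0.25,
        temp_refund_fraction: 0.5,
    };
    let bytes = 10 * CHUNK_SIZE; // a 10 MiB intermediate dataset
    println!("permanent cost:   {}", chunks_for(bytes) as f64 * p.permanent_cost_per_chunk);
    println!("temporary cost:   {}", temp_put_cost(&p, bytes));
    println!("refund on delete: {}", deletion_refund(&p, bytes));
}
```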

1 Like

Then store it in MDs and delete them when finished. A chunk stores 1 MB, an MD stores 1 MB, and they both cost the same to store. Seeing as it’s temp data, surely storage code can be developed so that an MD address is generated for each piece of data being stored.
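Something like this, perhaps: split the temp data into 1 MB pieces, generate an address for each, write each piece to its own MD, and delete them all when the job is done. A HashMap stands in for the network and `DefaultHasher` for the real addressing scheme; both are just placeholders for illustration.

```rust
// Sketch: temporary data split into 1 MiB pieces, each stored in its own
// Mutable Data (MD) at a generated address, then deleted when finished.

use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

const MD_CAPACITY: usize = 1024 * 1024; // both chunks and MDs hold 1 MiB

/// Derive an MD address from the piece's content (stand-in hashing scheme).
fn md_address(piece: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    piece.hash(&mut h);
    h.finish()
}

/// Store `data` as a set of temporary MDs; returns the addresses written.
fn store_temp(network: &mut HashMap<u64, Vec<u8>>, data: &[u8]) -> Vec<u64> {
    data.chunks(MD_CAPACITY)
        .map(|piece| {
            let addr = md_address(piece);
            network.insert(addr, piece.to_vec());
            addr
        })
        .collect()
}

/// Delete the temporary MDs once the computation no longer needs them.
fn delete_temp(network: &mut HashMap<u64, Vec<u8>>, addrs: &[u64]) {
    for addr in addrs {
        network.remove(addr);
    }
}

fn main() {
    let mut network = HashMap::new();
    // ~3 MiB of scratch data -> 4 MDs
    let data: Vec<u8> = (0..(3 * MD_CAPACITY + 17)).map(|i| (i % 251) as u8).collect();
    let addrs = store_temp(&mut network, &data);
    println!("stored {} temporary MDs", addrs.len());
    delete_temp(&mut network, &addrs);
    println!("remaining after delete: {}", network.len());
}
```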

2 Likes

Yes, the MD is probably a good starting point. This definitely is mutable data, but it is also sacrificial data to some extent: a situation where you need a lot of capacity, performance and overwrites, but not for very long. The “/_tmp folder” concept I mentioned was from a user-interface perspective. Anything uploaded or saved to the “/_tmp” folder could automatically be created as an MD with the code you describe, although this is probably an app-level implementation. I’m not very clear on MD deletion, though; I’ve read through the following but wasn’t sure whether deleting an MD really frees up the storage resource in the network for someone else to use, or just removes one’s ability to reconstitute the data.

It seems to me that for some use cases, MD might be too valuable, or overkill. Selfishly (since my resources are small), I was also dreaming about some way to store large quantities of temporary data in vault RAM only (with maybe just a little redundancy for stability) via the caching mechanism, at a much reduced cost - more like a “safe ram”. It also seems like the extra complexity of a lower-cost demi-PUT for temporary/sacrificial data might have “symbiotic” benefits for both the users and the network (the user gets a lower PUT cost, and the network accumulates less junk). This line of reasoning mostly applies to future developments in a computation module that might focus more on speed than on safety. Anyhow, this was just a brainstorm to throw a few daydreams out there. Sorry for hijacking the thread.
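Just to put the “safe ram” daydream in concrete terms, here is a rough sketch: a piece of temporary data held only in the RAM caches of a couple of vaults, charged at a fraction of a normal PUT. The replica count, vault selection and cost formula are all invented numbers, nothing more.

```rust
// Sketch of a "safe ram" demi-PUT: temporary data lives only in the RAM of a
// few vaults (small redundancy, no persistence) at a fraction of the cost of
// a normal PUT. Everything here is an invented assumption.

use std::collections::HashMap;

const RAM_REPLICAS: usize = 2;    // small redundancy for stability only
const FULL_PUT_COST: f64 = 1.0;   // notional cost of a normal, persistent PUT
const RAM_COST_FACTOR: f64 = 0.1; // demi-PUT: a tenth of the price per replica

struct Vault {
    name: String,
    ram_cache: HashMap<u64, Vec<u8>>,
}

/// Place `data` in the RAM cache of the first RAM_REPLICAS vaults and
/// return the (discounted) cost charged to the uploader.
fn ram_put(vaults: &mut [Vault], addr: u64, data: &[u8]) -> f64 {
    for vault in vaults.iter_mut().take(RAM_REPLICAS) {
        vault.ram_cache.insert(addr, data.to_vec());
    }
    FULL_PUT_COST * RAM_COST_FACTOR * RAM_REPLICAS as f64
}

fn main() {
    let mut vaults: Vec<Vault> = (0..4)
        .map(|i| Vault { name: format!("vault-{i}"), ram_cache: HashMap::new() })
        .collect();
    let cost = ram_put(&mut vaults, 42, b"intermediate simulation state");
    println!("charged {cost} vs {FULL_PUT_COST} for a persistent PUT");
    for v in &vaults {
        println!("{} holds {} cached items", v.name, v.ram_cache.len());
    }
}
```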