We can bring this back over to the thread on how to better market SAFE, but it's a more general question.
I am the typical user who has never heard of SAFE before, but I have friends who think it's great. I go to log in and am told I should contribute according to my typical use case. So if I generally use a laptop for an hour a night, I would expect SAFE to take up about as much RAM and drive space as an average major application while my system is on. If I want to know more, I learn that SAFE makes use of spare CPU and GPU cycles, spare drive space, and spare bandwidth while I am using it: whatever I don't need while my system is on goes to support the network. I learn that I can adjust some of these parameters myself if I see a need, and that I am encouraged to leave the system on when I can, to earn safecoin or donate resources. Also, a coin-earning system will have some minimum parameters, and if I have a powerful enough system I am free to leave the coin app running in the background while I run my usual applications.
If I have a phone, it is the same set of expectations, scaled down: average phone-app size and power draw, and so on down through the Internet of Things.
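To make the "adjustable parameters, scaled per device" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, the default numbers, and the scaling factors are all made up for illustration and do not reflect any actual SAFE client configuration.

```python
# Hypothetical user-adjustable contribution caps -- all names and numbers
# are illustrative, not taken from any real SAFE client.
from dataclasses import dataclass

@dataclass
class ContributionCaps:
    max_ram_mb: int          # RAM the background node may use
    max_disk_gb: int         # spare drive space offered to the network
    max_bandwidth_kbps: int  # spare bandwidth offered to the network
    max_cpu_percent: int     # share of spare CPU cycles

# "Typical laptop user" defaults: roughly one major application's footprint.
laptop_defaults = ContributionCaps(
    max_ram_mb=512,
    max_disk_gb=20,
    max_bandwidth_kbps=500,
    max_cpu_percent=25,
)

def scale_for_device(caps: ContributionCaps, factor: float) -> ContributionCaps:
    """Scale every cap for a device class, e.g. 0.1 for a phone."""
    return ContributionCaps(
        max_ram_mb=int(caps.max_ram_mb * factor),
        max_disk_gb=int(caps.max_disk_gb * factor),
        max_bandwidth_kbps=int(caps.max_bandwidth_kbps * factor),
        max_cpu_percent=min(100, int(caps.max_cpu_percent * factor)),
    )

phone_caps = scale_for_device(laptop_defaults, 0.1)
print(phone_caps)
```

The point of the sketch is only that the same expectation (footprint comparable to an ordinary app) can be expressed once and scaled per device class, from laptop down to phone and IoT.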
So all this seems to work, but it would also imply that if I am accessing the open-source equivalent of Google Street View running distributed (once the distributed-computing function comes online), I am not burning safecoin. In the normal use case I am contributing enough back to the network, through efficient utilization, to avoid a computation bill. Let me stress that: I am contributing as I go, covering my own use case through a more efficient, distributed use of resources.
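The "contributing as I go" claim can be stated as simple arithmetic: ordinary usage stays affordable as long as the background contribution rate at least matches the consumption rate. The rates below are invented purely for illustration; nothing here reflects actual safecoin economics.

```python
# Back-of-the-envelope model of "contribute as you go". Both rates are
# made-up placeholders, not real safecoin figures.
EARN_PER_HOUR = 1.0         # hypothetical safecoin earned per hour online
BROWSE_COST_PER_HOUR = 0.8  # hypothetical cost of ordinary distributed-app use

def net_balance(hours_online: float, hours_browsing: float) -> float:
    """Earnings from background contribution minus ordinary usage costs."""
    return hours_online * EARN_PER_HOUR - hours_browsing * BROWSE_COST_PER_HOUR

# An hour a night, all of it spent browsing: contribution still covers usage,
# leaving a small surplus.
print(net_balance(hours_online=1.0, hours_browsing=1.0))
```

Under this toy model the normal user never sees a computation bill; a bill only appears when consumption (e.g. heavy compute jobs) outpaces what routine background contribution earns.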
Now if I want supercomputer time to run an experiment, that's not the case, and I can expect to pay a good amount of safecoin for access. That might not run in a distributed fashion at all, but instead be access to some special 'server'-like capacity... does that even work? And if I want the distributed equivalent of a supercomputer, I pay extra safecoin.