I LOVE it. For our marketing messages, take each of the above points, create a marketing punch line around it, and publicize it: push it through all channels of social media and blockchain enthusiasts.
I meant: use each of the lines above individually, create a punch line around it, and then push it through all channels. That's 15 powerful punch lines right here.
I also think we need liquidity: we need our coin listed on one MAJOR exchange. HitBTC alone is not going to cut it. Listing on a big exchange is critical for price appreciation.
“Running a machine learning app and letting the network do all the calculations” is nonsense. This is extremely hard to do (at least with current technology and algorithms).
The best we have so far is MPC (Multi-Party Computation), which is already several orders of magnitude slower than native speed.
Loads of machine learning is already done in the cloud. Think of NVIDIA's CUDA cores and now Google's TPUs. That stuff is very much distributed: not in a geographical way, but certainly across different cards, etc.
There's also work on homomorphic systems combined with machine learning, something that could become big on systems like SAFE. The fact that there isn't a click-and-go solution today doesn't rule out anything for the future. And with projects like Folding@home doing distributed computing for 18 years now, there's proof that this distributed stuff can be done.
Distributing calculations on aggregated data is one story. That’s what NVIDIA and Google are doing.
Doing distributed calculations over distributed data is a completely different story. Except for summary statistics, there's not much one can do (in realistic time and complexity).
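To make the "summary statistics" point concrete, here is a minimal sketch (node names and values are made up for illustration) of why a statistic like the mean distributes trivially over partitioned data while something as simple as a median already does not:

```python
# Each "node" holds its own private partition of the data.
node_data = {
    "node_a": [2.0, 4.0, 6.0],
    "node_b": [1.0, 3.0],
    "node_c": [5.0, 7.0, 9.0, 11.0],
}

# A global mean needs only (partial_sum, count) from each node:
# tiny messages, one round, no raw data leaves any node.
partials = [(sum(xs), len(xs)) for xs in node_data.values()]
total = sum(s for s, _ in partials)
count = sum(c for _, c in partials)
global_mean = total / count

# A global median, by contrast, has no fixed-size per-node summary:
# in general you must gather (or repeatedly query) the raw data.
all_values = sorted(v for xs in node_data.values() for v in xs)
global_median = all_values[len(all_values) // 2]
```

The mean fits the "ship a small summary, combine centrally" pattern; most interesting machine learning computations behave more like the median.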
In addition, distributed data should also be encrypted (otherwise it would be public data that anyone could download and run algorithms on). In that case only homomorphic encryption (HE) and MPC make sense as viable solutions. But training a machine learning model in encrypted state (with HE or MPC) can be orders of magnitude slower than training at native speed. Hence a neural network that usually trains in a day (and I am being an optimist, considering the data is distributed) would take ~100 days to train when that data is also encrypted.
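For readers unfamiliar with MPC, here is a toy sketch of additive secret sharing, the basic trick behind many MPC protocols. It is purely illustrative (the parties, values, and modulus are made up, and this is not any real protocol), but it shows where the overhead comes from: every arithmetic step turns into share manipulation plus communication between parties.

```python
import secrets

P = 2**61 - 1  # a public prime modulus

def share(x, n):
    """Split secret x into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three parties, each holding a private value it never reveals.
private_values = [17, 25, 8]
n = len(private_values)

# Each party splits its value and sends one share to every party.
all_shares = [share(v, n) for v in private_values]

# Party i locally adds the shares it received (column i).
local_sums = [sum(col) % P for col in zip(*all_shares)]

# Publishing only the local sums reveals the total and nothing else.
total = reconstruct(local_sums)
```

Secure addition is this cheap; it is multiplications (the bulk of neural network training) that force extra communication rounds per operation, which is where the orders-of-magnitude slowdown comes from.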
Folding@home covers only one aspect of data analytics: every node in the FAH network receives a chunk of data and an algorithm that can be run independently on each chunk (e.g. detecting spikes in time series, modes, etc.). That's why such a task is easy to parallelize and distribute.
This is not the case for general purpose machine learning.
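The FAH-style pattern above can be sketched in a few lines. The spike detector, threshold, and data here are invented for the example; the point is structural: each chunk is processed with no cross-chunk state, so the work maps trivially onto independent workers.

```python
from concurrent.futures import ThreadPoolExecutor

def count_spikes(chunk, threshold=10.0):
    """Runs on one chunk alone; needs nothing outside its chunk."""
    return sum(1 for v in chunk if v > threshold)

series = [1, 2, 15, 3, 4, 20, 5, 6, 30, 7, 8, 9]
chunks = [series[i:i + 4] for i in range(0, len(series), 4)]

# The local thread pool stands in for the worker machines of a
# FAH-like network; only chunks and small results cross the wire.
with ThreadPoolExecutor() as pool:
    per_chunk = list(pool.map(count_spikes, chunks))

total_spikes = sum(per_chunk)
```

Training a typical machine learning model breaks this pattern because the update for one chunk depends on the state produced by all the others.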
I think there are more options here, such as Bulletproofs and STARKs with range proofs, and more as we move along. (BTW, many folks will not know all the acronyms you are using; it may be worth expanding them.)
There is a lot of momentum now in neuroevolution and open-ended AI, which in fact does allow easy parallelisation; systems such as EC-Star are good examples. So some issues with gradient search etc. can be approached much more easily via co-operative methods. Then objective-less functions, as opposed to fitness or performance functions, are also extremely interesting. If you check out some of the stuff www.sentient.ai is playing with, you may find it as interesting as I do.
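Why neuroevolution parallelises so easily can be shown with a tiny (1, λ) evolution strategy; this is a hedged sketch with a made-up toy objective standing in for an expensive model evaluation, not any particular system's algorithm:

```python
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(0)

def fitness(x):
    # Toy objective peaking at x = 3; stands in for evaluating a
    # candidate network on a task.
    return -((x - 3.0) ** 2)

parent, sigma, lam = 0.0, 0.5, 16
with ThreadPoolExecutor() as pool:
    for _ in range(200):
        # Mutate the parent into lam candidates.
        offspring = [parent + random.gauss(0, sigma) for _ in range(lam)]
        # Each fitness evaluation is fully independent: this map is
        # the part that scatters across machines with no gradients
        # or shared state to synchronise.
        scores = list(pool.map(fitness, offspring))
        parent = offspring[scores.index(max(scores))]
```

Only a scalar score per candidate comes back each generation, which is exactly the chunk-out, small-result-back shape discussed above.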
In any case, what I mean is that distributing chunks of stuff to computers to work on is, in this case, applicable to AI. Whether we call it machine learning (which I do) is perhaps debatable, but it is nevertheless interesting.
Absolutely agree with you: distributing chunks is still a good way to parallelize machine learning tasks (whenever possible). I also like to call it machine learning, as AI doesn't really mean anything (I'm sure we don't need to do marketing here).
My point was that not all machine learning is about distributing chunks.
Will look at sentient.ai and continue this conversation.
Nice to meet you, David. I “met” you via your self-authentication and self-encrypting data papers.
And boy, have they patented it. Only to keep it open.