As I read the documentation trying to better understand how the network functions internally, and especially after reading RFC 47 – MutableData and RFC 48 – Authorise apps on behalf of the owner to mutate data, I'm getting the idea that the Maid-Manager is like a smart contract which verifies who sent each request by checking signatures, then calls another smart contract (the Data-Manager) to perform further verifications based on the requester and signatures, and so forth…
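That chain of per-persona checks could be sketched roughly like this. This is only an illustration of the idea, not the real vault code: the class names mirror the personas in the RFCs, but the request shape is invented, and an HMAC tag stands in for the real public-key signature so the example runs with the standard library alone.

```python
import hmac
import hashlib

class Request:
    """Toy request: the tag stands in for the client's signature."""
    def __init__(self, client_id, payload, tag):
        self.client_id = client_id
        self.payload = payload
        self.tag = tag

class DataManager:
    """Second stage: persists the mutation once identity is verified."""
    def __init__(self):
        self.store = {}
    def handle(self, request):
        self.store[request.client_id] = request.payload
        return "stored"

class MaidManager:
    """First stage: verifies who sent the request, then forwards it."""
    def __init__(self, keys, data_manager):
        self.keys = keys              # client_id -> shared key (toy PKI)
        self.next_stage = data_manager
    def handle(self, request):
        key = self.keys.get(request.client_id)
        if key is None:
            return "refused"          # unknown client
        expected = hmac.new(key, request.payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, request.tag):
            return "refused"          # tag (signature stand-in) invalid
        return self.next_stage.handle(request)

key = b"client-secret"
mm = MaidManager({"alice": key}, DataManager())
payload = b"PUT chunk 42"
good = Request("alice", payload, hmac.new(key, payload, hashlib.sha256).digest())
bad = Request("alice", payload, b"forged")
print(mm.handle(good))  # stored
print(mm.handle(bad))   # refused
```

The point is just the shape of the pipeline: each persona does one narrow check and hands the request on, much like one contract calling another.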
Would it be accurate to see the SAFE Network as a set of smart contracts with an embedded distributed storage at the very bottom layer?
I’m aware I may be very wrong here, just thought it would be interesting to see what you think/understand.
I saw it from miles away when I finally understood the complexity of the system last fall. The Maid-Manager acts in a similar fashion to an escrow service. From there you can add more and more functionality, but you don't want to put too much processing on the Maid-Manager, so it has to stay core, basic functionality. The contract processing sits on the user side, to remove the processing burden from the Maid-Managers; hence your thinking about the verification process. So you're not the only one here thinking about this.
I don’t think I understand well enough to answer that, but it is not hard to imagine ways in which packets of computation (including smart contracts) could be processed on this network.
For example, imagine a type of mutable data that indicates it is not to be stored by its group, but read, processed, and the results returned. So instead of farming by retrieving a chunk, the vaults read the data (inputs and algorithm), do the computation, and return the result. As with chunks, comparing the results across a quorum ensures the work has been done and is not faked.
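A minimal sketch of that quorum check, under the assumptions above: several vaults run the same agreed function on the same inputs, and the group only accepts a result that a quorum of them report. The function, the quorum size, and the one deliberately faulty vault are all illustrative.

```python
from collections import Counter

def honest_vault(inputs):
    return sum(inputs)        # the agreed algorithm

def faulty_vault(inputs):
    return sum(inputs) + 1    # a vault trying to fake the work

def group_compute(vaults, inputs, quorum):
    """Accept the most common result only if a quorum reported it."""
    results = Counter(v(inputs) for v in vaults)
    value, votes = results.most_common(1)[0]
    if votes >= quorum:
        return value
    raise RuntimeError("no quorum reached")

vaults = [honest_vault, honest_vault, honest_vault, faulty_vault]
print(group_compute(vaults, [1, 2, 3], quorum=3))  # 6
```

The faked result is simply outvoted, exactly as a bad chunk copy would be when a group compares what it holds.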
Alternatively, I believe that zkSnarks provide another option where individual nodes could prove that they carried out the computation correctly.
Then, there’s no reason why a larger computation can’t be created and stored as a long series of linked data, with a payment (contract) sent to the network which rewards each group that handles and computes one step in the chain. This whole computation could, I imagine, be built up from a single basic computational capability.
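That chained idea could be sketched as below. Nothing here comes from any RFC: the group names, the per-step functions, and the reward accounting are all hypothetical, just to show one value flowing through a series of linked steps with each handling group credited per link.

```python
# Each entry: (group that handles this link, function applied at that step).
steps = [
    ("group-a", lambda x: x + 10),
    ("group-b", lambda x: x * 2),
    ("group-c", lambda x: x - 3),
]

def run_chain(steps, value, reward_per_step=1):
    """Run each step in order, crediting the handling group per link."""
    ledger = {}
    for group, fn in steps:
        value = fn(value)  # this group's share of the computation
        ledger[group] = ledger.get(group, 0) + reward_per_step
    return value, ledger

result, payouts = run_chain(steps, 5)
print(result)   # ((5 + 10) * 2) - 3 = 27
print(payouts)  # {'group-a': 1, 'group-b': 1, 'group-c': 1}
```

Each link only needs the single base capability (apply one function, pass the value on), which is the point: complex jobs decompose into many simple, individually verifiable steps.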
As you can probably tell, I haven’t thought this through and don’t know about the tricky parts of this kind of thing, but the basics seem to be catered for.