Could the computation be done by an app using atomic operations? The operations I describe could also require interactions (signing) by other people running the app, if needed, to provide the other side of the transaction.
That question came from what I see in Safe's atomic operations.
Maybe we can provide a subset of smart contract functionality with atomic operations, which I would see as essential for other purposes anyhow, and then expand on it as updates to the code are made.
Safe does not have a blockchain, verified by all the nodes, in which operations are recorded. So maybe a redesign of smart contracts is needed, so that data objects can be used in place of the blockchain.
As you may have realised, I am not up on the ins and outs of smart contracts.
I have no idea about any of the technical side, but if SAFE is going to have distributed computing I can definitely see a big upside for adoption in doing it by supporting EVM.
I think that question requires substantial research on our part to answer. I gather Intercoin is about issuing local currencies and then converting them to a common currency to spend outside of that local community. Forgive me, I have only briefly read over the material, so apologies if that is incorrect.
If the local currencies were created within the Safe Network as ledgers (with sequential data types), it would seem possible to swap between them. On Safe Network, it is trivial to create such a ledger, and network storage space is the only limit on their quantity. Each asset could have its own ledger, whether an NFT or a local currency.
Given that the rules of a ledger could be defined when it is created, transfers could be conditional on the buyer, the seller and an escrow signing the transaction. There could be a common escrow between multiple ledgers, allowing assets to be transferred only when both buyer and seller have signed the transactions on both ledgers.
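As a rough sketch of that idea, the signing rules might look like the following, where `Transfer`, `settle_swap` and the party names are all made up for illustration (this is not a Safe Network API):

```python
class Transfer:
    def __init__(self, ledger, sender, recipient, amount):
        self.ledger = ledger
        self.sender = sender
        self.recipient = recipient
        self.amount = amount
        self.signatures = set()

    def sign(self, party):
        self.signatures.add(party)

    def approved(self, required):
        # A transfer is valid only once every required party has signed.
        return required <= self.signatures


def settle_swap(leg_a, leg_b, escrow):
    # The common escrow signs both legs only after buyer and seller have
    # signed both transfers, so neither leg can settle on its own.
    parties = {leg_a.sender, leg_a.recipient}
    for leg in (leg_a, leg_b):
        if not parties <= leg.signatures:
            return False
    leg_a.sign(escrow)
    leg_b.sign(escrow)
    return True


# Alice sells an NFT on one ledger; Bob pays in a local currency on another.
nft_leg = Transfer("art-ledger", "alice", "bob", 1)
pay_leg = Transfer("coin-ledger", "bob", "alice", 50)
for leg in (nft_leg, pay_leg):
    leg.sign("alice")
    leg.sign("bob")
settled = settle_swap(nft_leg, pay_leg, "escrow")
```

Until the escrow co-signs, neither leg meets its required signature set, which is what makes the two-ledger swap all-or-nothing.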
In a smart contract, my understanding is that the escrow could be the network itself. That is, you define the contract in code and let a network agent automatically apply the escrow steps (in this use case).
My question would be - how necessary is this, especially for day 1? I understand it is desirable to have a network agent that can do this, but is it critical?
For context, we could have a pool of possible escrows assigned to a swap. These could be automated user apps, run from any device connected to the network. For the sake of simplicity, the buyer or seller could settle a fee with the escrow out of band (there is probably a better way).
Given that the seller and buyer would use an app to initiate the trade and the escrow would automatically approve, the exchange would happen quickly. The escrow has an incentive to build a good reputation and earn fees, so it will be driven to process the swap promptly.
All keys of all parties remain private and the network does not process anything on behalf of the users. Nodes all around the world don't need to re-run the same contract over and over to reach consensus. Transactions can be near network speed, rather than waiting for blocks.
I mused on some of this here too:
Of course, it would be desirable to have a way to run more complex distributed compute tasks and smart contracts. I feel this should be a longer-term goal though, with a lot of thought put into the hows and whys. Maidsafe have proven that they are happy and able to forge new ways to solve difficult distributed problems. I'm unconvinced it is necessary to delay a release to accomplish these things though; there is plenty of time to add extra functionality in the future.
One other thing to mention is AT2, which is the foundation for Safe Network Token transfers. These are initiated by the sender and authorised by the section, to be applied atomically.
I'm not sure what the long-term plans for AT2 are, but if it were in some way programmable, atomic swaps of other network assets would seem feasible. Perhaps these could be NFTs (in the form of local currencies or otherwise) or other digital content.
Can common smart contract use cases be provided using this technology? What other key use cases are delivered? Is general distributed processing able to deliver these and more in the long run? I think there are bigger questions here too.
We have taken it further now to include more data types (not just cash). When you see us refer to BRB (Byzantine Reliable Broadcast), that is what we do. It's an easy ELI5, so I will try in a few bullets:
- Client sends the next update to the Elders.
- Elders check that it is sequential (they hold some state, like a vector clock or even a simple counter) and sign an approval.
- Client gathers the approvals and forms a signed "commit" message.
- Client sends the commit to all Elders to sign, and the deed is done. [The counter is updated etc.]
[The above is simplified a good bit.]
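A toy sketch of those bullets, assuming a plain counter for the state and set membership standing in for cryptographic signatures (both simplifications):

```python
class Elder:
    def __init__(self, name):
        self.name = name
        self.counter = 0  # per-account sequence state, simplified to one counter

    def approve(self, proposed):
        # Sign only updates that are strictly sequential.
        return self.name if proposed == self.counter + 1 else None

    def commit(self, proposed, approvals, total):
        # Apply only with a supermajority (more than 2/3) of elder approvals.
        if len(approvals) > (2 / 3) * total and proposed == self.counter + 1:
            self.counter = proposed
            return True
        return False


elders = [Elder("e%d" % i) for i in range(7)]

# Steps 1-2: client proposes the next update; each elder checks and signs.
proposed = 1
approvals = {e.approve(proposed) for e in elders} - {None}

# Steps 3-4: client forms the signed "commit" and sends it to every elder.
committed = all(e.commit(proposed, approvals, len(elders)) for e in elders)

# A replayed or forked update with the same sequence number gathers no approvals.
replay = {e.approve(1) for e in elders} - {None}
```

Once the counter has advanced, no conflicting update with the same sequence number can gather a quorum, which is the fork-prevention property described below.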
If a client were to try to double-spend (or fork data), it is prevented, as they need approval from 2/3 of the Elders to commit the message. That is, if they try to give the update to only a few Elders and process a transaction, they have locked themselves out of any further updates, so they must complete the transaction.
That is fork prevention; however, we also have fork resolution mechanisms in place for cases where a fork has no side effects and can be resolved.
Both are important to have for data, as not all data has the same requirements. For example, a fork-prevented editor or game would fail, as everything would need to be ordered and sequential; if we want concurrency at the data level, then we need to handle forks. This is why CRDTs are great for the latter but so far can never be used for cryptocurrencies etc., where you need that extra wee bit of order/atomicity.
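To illustrate the distinction, a grow-only set (one of the simplest CRDTs) merges concurrent forks trivially, while a currency balance has no safe merge; the example below is illustrative only:

```python
def merge_gset(a, b):
    # Grow-only set CRDT: merge is just union, so concurrent
    # edits on different replicas always converge.
    return a | b

replica1 = {"doc-edit-1"}
replica2 = {"doc-edit-2"}  # concurrent edit made on another device
merged = merge_gset(replica1, replica2)


def spend(balance, amount):
    # A balance is not a CRDT: two concurrent spends of the same
    # coins cannot both be honoured, so updates must be ordered.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

balance = 10
fork_a = spend(balance, 8)  # one fork spends 8
fork_b = spend(balance, 8)  # a concurrent fork spends the same 8
# Both forks are individually valid, yet 8 + 8 exceeds the original 10:
# there is no union-like merge that reconciles them.
```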
Thanks, David. I've studied the code for sn_client/transfer_actor and its interactions with sn_transfers, and I can see the above logic applying. I've yet to dive deeper than sn_client to see what is happening at the Elder end, but the sn_client/sn_transfers code is all very clear and nice to read. Great to see!
I'll see if I can stumble across the non-cash data types using the same process in my readings. I'm just walking through different parts of the code at a time, currently.
I suppose what I'd like to understand is what shape the client update takes and how the Elders validate it for approval. I'm sure I'll get to that in the code, but, for example, checking the balance is specific to a cash transfer, etc.
I'm also wondering if they could be combined. So we could approve a transfer of X from A to B, then approve a transfer of Y from B to C, then commit them both to the network as an atomic operation. Given they could be initiated by different users, and X and Y could be managed by different sections, I'm assuming this may not be trivial.
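A rough sketch of what such a combined commit might look like, with each dict standing in for the balances one section manages (hypothetical names, and glossing over the real cross-section coordination problem):

```python
def section_approve(balances, sender, amount):
    # Each section validates only the transfer on the asset it manages.
    return balances.get(sender, 0) >= amount


def atomic_commit(legs):
    """legs: list of (balances, sender, recipient, amount).
    Apply all legs or none: every section must approve before
    anything is written, so a failed leg leaves no partial state."""
    if not all(section_approve(b, s, a) for b, s, r, a in legs):
        return False
    for balances, sender, recipient, amount in legs:
        balances[sender] -= amount
        balances[recipient] = balances.get(recipient, 0) + amount
    return True


section_x = {"A": 5, "B": 0}  # asset X, managed by one section
section_y = {"B": 3, "C": 0}  # asset Y, managed by another section

ok = atomic_commit([(section_x, "A", "B", 5), (section_y, "B", "C", 3)])
# A has nothing left, so a second combined commit is refused outright.
failed = atomic_commit([(section_x, "A", "B", 1), (section_y, "B", "C", 1)])
```

The hard part this glosses over is exactly the concern above: in a real network the two approvals come from different sections, so some cross-section protocol would be needed to make the final write atomic.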
I agree with the previous discussion that smart contracts are both ultimately desirable and should be delayed until after launch.
It might almost be easy, I imagine: there could be a standard container format (e.g. JSON) that specifies the processor/environment/VM and version, and contains the actual smart contract code (no need to compile to bytecode; space isn't a problem here). There could also be a way to take this public template, load it with injectable variables (e.g. the xorurl of a public blob that contains your payment_address, etc.), giving a compiled xorurl that could serve as your specific version of the contract.
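A sketch of what such a container and its variable injection could look like, with all field names, the `{{variable}}` placeholder syntax, and the xorurl invented for illustration:

```python
# Hypothetical container format: names the VM and version, carries the
# contract source, and lists which variables callers must inject.
template = {
    "vm": "wasm",
    "vm_version": "1.0",
    "code": "pay({{payment_address}}, {{price}})",
    "variables": ["payment_address", "price"],
}


def instantiate(tmpl, values):
    # Replace each {{name}} placeholder with the caller's value; the result
    # could then be stored as an immutable public blob with its own xorurl.
    code = tmpl["code"]
    for name in tmpl["variables"]:
        code = code.replace("{{%s}}" % name, values[name])
    return dict(tmpl, code=code, variables=[])


instance = instantiate(template, {
    "payment_address": "safe://hypothetical-payment-blob",  # made-up xorurl
    "price": "10",
})
```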
If we were to mimic, say, the EVM, then the contract could refer by xorurl to a specific public blob that could be a Java or WebAssembly implementation, etc., which would be verifiable and battle-tested. You could even reference a blockchain-like appendable data type for state. One could then pay the network to run this contract on one of a few default VMs, and so forth.
Just back-of-the-napkin thinking here, but it seems to me that the network itself puts so many of the prerequisites in place that there must be an efficient and simple (compared to the network itself) design for this. A basic smart contract VM could even be one of the first demos for a more general plugin architecture.
Probably an unpopular opinion, but I'd rather see a smart contract system (network-agreed logic) that is built to be native to, and align with, the current design of Safe Network.
It's more a gut feeling than a logical reason, unfortunately, so for now it's a somewhat uninformed opinion. Really interesting to read through this thread, though. If @maidsafe were so inclined to design it, I can't help but wonder what their design approach would be, and whether it would have more security or consumer protection by allowing granular permissions (perhaps fewer wallets being depleted or hacked), etc.
I feel like going with a VM is the easy route and certainly more extensible, but most of what is out there is Wild West and not for everyone anyway, at least when it comes to DeFi.
@GregMagarshak I like your enthusiasm, and I doubt you need to convince many people here re: smart contracts. The forum is filled with many great insights on the topic.
Re: adopting the EVM, we should take a step back and think about what a "smart contract" is. Isn't it just a script, one that runs not on a server owned by some entity, but is instead hosted and run in a way that is irrepressible (assuming a correct implementation)? Basically, an unstoppable script.
So if we go back to that underlying definition of a smart contract as an unstoppable/irrepressible script, it becomes obvious that such scripts are an expected evolution of a network that is itself designed to be unstoppable. It further becomes obvious that such scripts on Safe should be consistent with the network's performance- and privacy-centric design. Stated differently, unstoppable scripts as implemented on blockchains have certain limitations, inherited from the limitations of blockchains. It would make sense to re-imagine unstoppable scripts for the Safe Network itself (all the key ingredients are already in place).
The very interesting Polkadot network uses Wasm for forkless upgrades, which is the closest I have seen to the idea of Safe Network autonomous upgrades that was discussed here a few years back.
Forkless Upgrades
By using Wasm in Substrate, the framework powering Polkadot, Kusama, and many connecting chains, the chains are given the ability to upgrade their runtime logic without hard forking. Hard forking is a standard method of upgrading a blockchain that is slow, inefficient, and error prone due to the levels of offline coordination required, and thus the propensity to bundle many upgrades into one large-scale event. By deploying Wasm on-chain and having nodes auto-enact the new logic at a certain block height, upgrades can be small, isolated, and very specific.
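The mechanism in that quote can be modelled in a few lines: the new runtime is published on-chain, and every node enacts it at the same agreed block height, so no out-of-band coordination is needed (purely a toy model, not Substrate's actual API):

```python
class Node:
    def __init__(self):
        self.runtime = "v1"
        self.scheduled = None  # (height, runtime) once an upgrade is on-chain

    def see_upgrade(self, height, runtime):
        self.scheduled = (height, runtime)

    def process_block(self, height):
        # Every node enacts the new logic at exactly the same height,
        # so there is never a moment when honest nodes disagree: no fork.
        if self.scheduled and height >= self.scheduled[0]:
            self.runtime = self.scheduled[1]
            self.scheduled = None


nodes = [Node() for _ in range(5)]
for n in nodes:
    n.see_upgrade(100, "v2")  # the new Wasm runtime is deployed on-chain
for n in nodes:
    n.process_block(99)
runtime_before = nodes[0].runtime  # unchanged just before the scheduled height
for n in nodes:
    n.process_block(100)  # all nodes auto-enact the new logic together
```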
No, not forced dynamic updates, so no pushback AFAIK. They have refined several years of governance lessons that played out in one big evolutionary explosion across various blockchain projects, addressing as many of the previous problems as they could in a clear, democratic, repeatable way. It is still early days and it has not stood the test of time, but it appears to be a very promising way forward for maintaining a dynamic and upgradable system.
Here is a high-level overview of how stake-weighted voting, an elected council and referendums work together to make irreversible upgrades to the core:
I highly recommend it as an example to learn from when considering Safe Network's update mechanics. The lessons from the blockchain space show that it becomes a nightmare to get stakeholders to agree to upgrades once a network is released and generating significant real-world value. It almost appears to be a common attack vector: drum up division so that no upgrade ever goes through, or, if one must go through, divide and conquer the community. Polkadot discussions cite this as a motivation for their current governance-managed update design.
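The stake-weighted voting part of that governance scheme reduces to a weighted tally; this sketch ignores the real system's turnout thresholds and conviction multipliers:

```python
def tally(votes):
    """votes: list of (stake, approve) pairs. Each vote counts in
    proportion to the stake behind it; the referendum passes on a
    simple weighted majority."""
    ayes = sum(stake for stake, approve in votes if approve)
    nays = sum(stake for stake, approve in votes if not approve)
    return ayes > nays


# One large staker outweighs two smaller ones: 100 aye vs 80 nay.
result = tally([(100, True), (40, False), (40, False)])
```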
Back on the topic of smart contracts, here is an introduction to why Polkadot chose WebAssembly and why they consider it superior to the EVM standard (which I agree with: forget the EVM and go with Wasm all the way; the developer base will be much larger than the EVM/Solidity space).
Hey @mav, have you heard of Discrete Log Contracts (DLCs)? It's supposedly a multisig-based solution for adding smart contracts on top of Bitcoin. In the linked podcast they say it's easy to port to other projects. Is this something you think is viable for Safe Network?
This is all I found on GitHub with a quick search, but it has some papers linked in the README.
@Sotros25 If I were to advocate partnering with any project, it would be Tezos. There are some decentralized oracles that are on, or are about to launch on, Tezos, I believe. They are less likely to be too big for their britches to turn down a partnership where they run a Safe node, or a proxy to sign signatures as a bridge to clearnet oracle data. Just throwing out an idea/suggestion in case it could be fruitful.
Hopefully this is true. I suspect that Tezos may want to see at least a Fleming or Beta release to take the conversation seriously. A partnership would be great, however!
Guaranteed finality on each contract execution means we could avoid things like gas fees to run contracts… nodes which run the code can set arbitrary execution times on their own systems… much like setting your own vault size…