Monetary value as data object or projection of ledger entries

I get what you mean.

But if you carry that logic to the rest of the operations in the SAFE network then the whole of the network is vulnerable.

But this is what a lot of the programming and innovative workings of SAFE are about. The consensus model does what the ledger model does with everyone holding a copy of the ledger. Except the consensus model does not need everyone to agree, just the members of the section/group.

In effect if you feel that the consensus system is inadequate then the whole of the SAFE network is inadequate since the consensus mechanism is the very basis of the security of the network. It is what stops anyone from interfering with data stored on the network.

Then the coin transfers require 2 sections/groups to perform the transfer.

Maybe do a study on consensus (search on that word) and you can then understand further what it is all about.

tl;dr

consensus is what makes the “single” party incorrect. It is multiple parties, unrelated to each other apart from forming the section/group, that make the decisions.

2 Likes

I’m jumping in and addressing your OP without having read the rest of the posts, because I don’t want to forget my perspectives. Mea culpa if others have already presented a more sensible perspective.

Safecoin can definitely be viewed as a data object, but there is a perspective by which you can view it as a ledger, to wit:

The “safecoin space” is a ledger of 2^32 slots. Each slot either has an existing safecoin or it doesn’t. If it has one, or one is created to go there per the rules of safecoin creation, it has a cryptographic owner listed. The state of that ownership can only be changed by the owner cryptographically.
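The "ledger of slots" view above can be sketched in a few lines. This is a toy model under loud assumptions: the slot map, `sign`, and the shared-secret hash standing in for real asymmetric signatures are all illustrative, not the actual safecoin design.

```python
import hashlib

TOTAL_SLOTS = 2 ** 32  # the safecoin address space

# Toy model: slot -> owner id, absent if no coin exists there yet.
slots = {}

def sign(secret, message):
    # Stand-in for a real cryptographic signature.
    return hashlib.sha256((secret + message).encode()).hexdigest()

def create_coin(slot, owner_id):
    if slot in slots:
        return False              # a coin already occupies this slot
    slots[slot] = owner_id
    return True

def transfer(slot, new_owner_id, signature, owner_secret_lookup):
    owner = slots.get(slot)
    if owner is None:
        return False              # no coin at this slot
    expected = sign(owner_secret_lookup[owner], f"{slot}->{new_owner_id}")
    if signature != expected:
        return False              # only the current owner can authorise a change
    slots[slot] = new_owner_id
    return True
```

The point of the sketch is just the invariant: a slot's owner field can only move forward with the current owner's signature.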

I don’t think that’s quite the truth of things. It has to do with how consensus is reached and how actions are overseen. While one node may get the lead on writing a safecoin change, it has to be done per a cryptographic signature that no one has but the safecoin owner. If that node sought to execute something different it would be caught out, whether it had seniority or not. To make such an illicit change would require getting control of a quorum of nodes in that group, at an exact time, coordinating what would add up to an insignificant gain in itself. If you study up on the consensus form (which can take a bit of study and head work) you’ll start to get the picture.

May I commend to you the SAFE Network School podcast, especially Class IV and Class IX, and perhaps VIII also. With disjoint groups and a couple of other advances, there may be some variance from the current model, but the basic concepts apply.

The School series is not a perfect tool (having been put together by a technical simpleton), but anyone who isn’t familiar with the different aspects that make the SAFE Network different should benefit from them, at least at a high level.

Enough tooting my horn. Hope that helps. Now I’ll read all the other replies and decide if I should delete this one :grinning:.

[Edit] One additional thought while reading the other points is to remember that what the network does with safecoin is what the network does, period, except that safecoin is a reserved data type which the network accepts and gives in exchange for resources. The handling of safecoin is therefore not a special security case. All data is handled the same, including safecoin. That might be considered as a slight to the security of safecoin, but I think it is really elevating the security of DATA at large up to the level of currency.

9 Likes

Hi neo, thanks for the long reply! With all respect, it seems a little bit handwavy to me. I’m sorry, I do not mean to be rude. It’s just, I have been reading about the consensus model and the disjoint sections and data chains option B and even coding some simulations. I am aware of the concepts, it is not the cause of my question. I now wonder about technical details of a very specific part.

I do not like to use these kinds of words because they are used in so many polemical situations, but it is a strawman argument that safecoin is not trying to be a ledger. I never claimed that :slight_smile: I never asked for that either :slight_smile:

There are two specific points I describe. I am describing how I see them solved with one method, but I cannot see in detail how they are solved with the other.

You are getting close to the actual subject here:

But, again, then how? Technically. In detail. (That’s what I wonder, I do not require you to answer it, I just wonder if anyone knows?).

(I need to get down to the kitchen and make some waffles with my spouse :slight_smile: I’ll write more soon. Again, sorry neo for the short reply and rough formulations, wifey calls. fergish, thanks! I’ll come back later and reply :slight_smile: ).

1 Like

I think what @oetyng is lacking is a nuts and bolts description of how a transaction is carried out. I can’t provide this atm, but it occurs to me a misconception may be that there is ever one data object in one place, which makes it vulnerable to one controlling node.

Perhaps this is avoided by each approved operation needing to be carried out on every copy of the data? I can see how that would avoid the perceived vulnerability, but I’m not sure if this is the actual means by which it is achieved.

Alternatively I think @fergish may have hit the nail on the head with this, so a bit more detail around it might be the key:

Anyway, I think this is the area needing clarification.

2 Likes

Sorry, not trying to be. It’s just that the consensus model is what SAFE uses to secure data, which includes SAFEcoin objects. And if you feel after study this is not good enough, then what can I say? I tried to give an overview of how consensus is not a “single point” of agreement, but is in fact many points that have to agree or it doesn’t happen. Also, I would not have explained so much if I were being handwavy.

Not intended as such. I thought if you knew (or confirmed it is such) then it would aid your understanding. Also, if the intention is to be similar to physical coin transfers, then relating it to a ledger in any way, shape or form, unless just as a pure comparison, is just talk.

Well, the code is not written, but basically the section/group of nodes handling the safecoin object refuses any other operations until the current one completes. The section/group checks that the request for transfer is valid, then asks for further consensus from the other section/group of nodes, and if all is OK the owner of the coin is changed. These sections/groups are unrelated nodes that have to agree on an action before it can occur. This is a huge discussion of the how, and it would be best to follow the links others have given and/or do a search on consensus.
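The flow just described (lock the coin, validate locally, confirm with a second section, only then change the owner) could be sketched like this. `Section`, `transfer_coin`, and the vote counting are illustrative stand-ins for code that, as neo says, is not yet written:

```python
# Toy sketch of a two-section transfer; not the real routing/vault API.

class Section:
    def __init__(self, size, quorum):
        self.size, self.quorum = size, quorum

    def agrees(self, request):
        # Every honest node checks the request independently and votes.
        votes = sum(1 for _ in range(self.size) if request["valid"])
        return votes >= self.quorum

def transfer_coin(coin, request, home_section, remote_section, locks):
    if coin["id"] in locks:
        return "busy"                         # refuse concurrent operations
    locks.add(coin["id"])                     # lock until this transfer completes
    try:
        if not home_section.agrees(request):
            return "rejected"
        if not remote_section.agrees(request):  # the second section confirms
            return "rejected"
        coin["owner"] = request["new_owner"]    # only now is ownership changed
        return "done"
    finally:
        locks.discard(coin["id"])
```

The lock-then-confirm shape is the point; the real network replaces the boolean `valid` flag with signature checks and message rounds between the sections.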

Also calling my addition to what others said as handwavy is well … maybe showing that consensus doesn’t ring true with you yet. And since SAFE is built on the consensus model, not much of the security will ring true until your understanding of it grows.

1 Like

Somehow I missed out on the existence of this site. I listen to your podcasts on the forum when they come out but didn’t know you’d done a set of backgrounders collected as a ‘School’. Have now seen the error of my ways and added it to my 10 key facts post.

5 Likes

I should not have used those words, they are too loaded. It turns the discussion the wrong way :slight_smile: Let’s get back to the technical details.

I’ll just clarify: I wanted to say that the actual detail I am elaborating around, is at a very basic technical level, how in practice a specific step is performed, and that at the very point of writing the data that seals the transaction, there is no exact information - as I have seen - about that step. And that with a smart contract scenario, that brings up some questions.

So, what I think is not coming forth in my texts, is that I am not questioning safecoin or its design or its rationale.
I am not advocating ledger instead and I am not saying it should be one way or another. I think this is a disclaimer that needed to go on top. My bad for missing that.
I did however point out that ledger systems, like accounting, have been tested for a long time, and just like we like to refer to natural systems and assume that just by their mere longevity they have proven to be robust and carry an innate elegance/simplicity, I also thought that maybe (maybe) this is true for accounting too, since, although not ants-old, it actually is one of the oldest inventions of humans. But, that was with a large maybe.

But, now that I have explained that, let’s go back to the technical details. That is still the main focus.

There is a final step, in a transaction that is done when a smart contract is to be enforced. I described how I see that potentially happen, I pointed out the exact location in a supposed rough algorithm, where I see how two different systems would handle it differently, and that it seemed more trivial to me with one of them, and that I was still wondering about the last step in the safecoin implementation.

So, there’s a lot of eventual consistency in the network. I am quite familiar with eventual consistency, and that it does not require atomic operations - on the higher level. But, we’re still at a very high level here, we are going much more concrete:

You describe a situation here

I am in the details in the very last part of that sentence: the owner of the coin is changed.

I believe, if you read my texts again (if you feel up to it :slight_smile: ), with that in mind, my reasoning would make more sense.

In any type of system, that specific step is present, so I guess we can start with the details of it, then we can zoom out and see how that fits in to the larger concepts.

@fergish is, just as @happybeing observantly notes, getting close to this too:

I recognize this model in the very first post. It is just that, this is still not the exact description. And I am aware that safecoin is still in R&D (also mentioned in OP).
We’re quite close here to describing it, it seems like the knowledge is there. I’ll give it a try from this description above.

  1. A node gets a leading write of a safecoin change.
  2. This node does this per cryptographic signature of the safecoin owner. (hm?)
  3. This node has the power to write some other data than intended, but will be punished if doing so.

  1. This seems to say that there is an order, there is a leading write and then some following writes. What are the steps here? What does the write consist of? Is it a signature added to a field of an MD?
  2. I think this part is fuzzy. The leading writer uses the public key of the owner to sign what? This is not how I understand signatures, you use your own private key to sign something, it is then verified by others with the corresponding public key.
  3. Here we are touching one out of the two specific points I have been elaborating around. One node having the power to do an arbitrary write. We assume that it will not happen because it will not want to be punished.

First of all, is this even a correct representation?

If not, how are the exact steps carried out? I mean, the exact steps of this part might not have been decided, and I have not seen them in the general descriptions, so I do not require it to exist, but if it is out there it would be very nice to see it.

For anyone else who is wondering, as soon as I know the exact steps of MD mutations as a result of a smart contract condition being met, and being enforced by a group in SAFENet, I will explain it here. Maybe someone else will come first, but if not, I will lay it out here. That is a promise :slight_smile:

3 Likes

Because it has not been written. But I gave you the process that will be followed. It is atomic.

If you call it a smart contract then we’ve been using them in operating system programming for a long time. Atomic operations in operating system design/programming have been a necessary fact since the very first time-sharing OSes written in the ’60s. I’ve written a few myself and the atomic operation is very basic in structure. You accept the request and lock out any more occurring till the first one is finished. Look up semaphores in relation to OS design.

In an operating system on a single CPU it was: turn off interrupts - set semaphore - turn on interrupts - do operation - turn off interrupts - clear semaphore - turn on interrupts. Then CPUs came with an atomic semaphore instruction that could not be interrupted by interrupts or DMA, which could then be used on multi-core systems to do atomic operations. In other systems an operating system call is usually used to do it. In networking, a queue of requests is often used and the process handling those requests has a lock mechanism. In databases it’s very similar, where a lock is implemented within the database code.
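The semaphore idea translates directly into any language with threads. Here is a minimal sketch where a `threading.Lock` plays the role of the semaphore / uninterruptible instruction, making the check-then-update on a balance atomic (the balance/withdraw scenario is just an illustration):

```python
import threading

balance = {"value": 100}
lock = threading.Lock()

def withdraw(amount):
    with lock:                    # "set semaphore": no other withdraw runs now
        current = balance["value"]
        if current < amount:
            return False
        # Without the lock, two threads could both read 100 here and
        # both write 99, losing one withdrawal.
        balance["value"] = current - amount
        return True               # "clear semaphore" on leaving the with-block

threads = [threading.Thread(target=withdraw, args=(1,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all 50 threads run, the balance is exactly 100 - 50 = 50; the lock guarantees no interleaved read-modify-write is lost.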

So, like those, in SAFE the section/group will first check the request and, if valid, lock the requests for that particular coin and prevent another one for that coin starting. Then the consensus process starts (insert a volume of description here - i.e. go to github or other docs and spend a week - yes, dismissive, but truly it will take that long to grasp the workings and security of it). I described the basis of this process already. The final stage of changing ownership is the same as for any other MD object and can be examined on github now, or via a search of this forum on consensus.

Yes it is difficult to get reasons or what a person is driving at in a couple of paragraphs.

So no SAFE will not be what crypto sees as a smart contract. It is a consensus of at least 64 computers in two section/groups checking all the things that need checking and crypto signing messages that will allow or deny the transfer.

To call it a smart contract is to imply that a lot of work is being performed by multiple sections of code.

But NO, it is using the consensus mechanism that is at the heart of SAFE. Maybe you could call the heart of SAFE’s security (consensus) one big distributed smart contract machine, but that implies that someone writes a contract and the system operates on it with a good deal of complexity. But no, it is a very simple system that uses the network consensus to do what it does for all other data objects, except with an extra step of confirming the transaction with another section/group of nodes.

I don’t have the exact process but it is in the github repository under the consensus mechanism. That is why I keep saying: search on consensus and study it up. It’ll take a lot of reading, and to describe it here is not a simple task.

The ownership changing is the same as any other object that has its owner changed.

If it was as simple as giving it to you in a paragraph then I would, but it is an in-depth discussion in and of itself. Personally I don’t know the precise ownership-change code well enough to give you chapter and verse for it. But I do know from what you wrote that you do not yet have enough knowledge of the consensus mechanism to go into that depth here.

Maybe @tfa or @mav can explain it in one paragraph; I cannot. I would ping David or another dev too, but they are too busy.

3 Likes

Perhaps I can help a little. Group consensus with owned data (such as safecoin) means this:

  1. A user / peer requests an action (could be an ownership change, or mutating the data inside the MD, etc.)
  2. This action is sent to the address of the MD
  3. The X nodes closest to that address then:
  • Confirm the signatures etc. are OK
  • Vote for the action to proceed
  • Gather votes from the group responsible
  • If enough votes accumulate then the data is changed.

The whole network or any user can then confirm this has happened and the MD has this new “state”. It can only go to one new state at a time (they are versioned) via the group consensus. So several attempts to change this state (like doublespend) will fail and only one state change will occur. You will see an advanced version of this that allows branches of possible next state in Data Chains part 1. An overview is here
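The versioned, one-state-at-a-time behaviour described above can be sketched as follows. `MutableData`, `try_mutate`, and the vote count are illustrative names, not the real routing types:

```python
# Toy sketch: a versioned MD where a state change only lands with quorum,
# and a second change built against the same old version (a doublespend
# shape) is rejected because the version has already moved on.

class MutableData:
    def __init__(self, owner):
        self.owner = owner
        self.version = 0

def try_mutate(md, action, votes, quorum):
    # The action carries the version it was built against; stale actions lose.
    if action["expected_version"] != md.version:
        return False
    if votes < quorum:
        return False              # not enough of the group signed off
    md.owner = action["new_owner"]
    md.version += 1               # the MD can only go to one new state at a time
    return True
```

Two competing actions against version 0 cannot both succeed: whichever gathers quorum first bumps the version, and the other then fails the version check.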

Using versions is a CRDT type - see here - albeit this also requires consensus (the group agreement). There are more advanced methods that can be employed here, but it’s also a reason for not actually deleting stuff internal to the network, although in the API it may look like it is deleted. If you alter or remove a version, the CRDT-like behaviour fails.

Then to add a little bit more for perspective.

The group closest to an address (responsible for changing it) are called the Data Managers. They cannot choose their own location on the network; that is randomised on join and also on every relocation (see the node age RFC).

A wee bit deeper (sry)

Now every node involved in the above consensus agreement (the Data Manager) is also an address on the network that has a set of Close Nodes, just like data. So these have another set of close group nodes surrounding them. This group is the Node Manager group. So if a node does bad things or requires monitoring, this group does that. As they are then the Close Group to the node, they can also agree the node’s fate, just like agreeing that a data element should change. So each node must follow the rules, and the rules are set but enforced by a set of nodes close to the address of the “thing” in question (node/data etc.), where each node is allocated work and rules to follow by the network.

This all happens with every change to any data or network membership (nodes join/leave etc.). So each node “sees” their own network (XOR distance means each node has a unique distance to every other node), but it’s a view close enough to obtain consensus like this. When we want X nodes close to an address, the network view from that address means there are strictly X nodes close to it: no two nodes share a distance, so each node is a unique distance away, and there can be only X nodes close if there are at least X network nodes in total. We can then set X to a figure that means the network traffic is not huge, but large enough that security is maintained.

This is the trade-off we always consider. Things like larger groups that don’t all hold all the data each are great, or larger groups where nodes connect to only a few members (i.e. connect to a strongly secure connected graph and not to each member), or even where some nodes more trusted than others can reduce traffic flows (node age is promising there). At the moment we go for high security and forgo performance to get to launch, but these are the kinds of things that will increase efficiency at no meaningful reduction of security as we move on.
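The "X nodes closest to an address under XOR distance" idea above is compact enough to show directly. Because XOR distances from a given address are all distinct, the set of X closest nodes is unambiguous (`close_group` and the sample ids are illustrative):

```python
# Sketch: pick the X nodes closest to an address by XOR distance.

def close_group(address, node_ids, x):
    # Sorting by n XOR address gives a strict order: no two distinct node
    # ids can have the same XOR distance to the same address.
    return sorted(node_ids, key=lambda n: n ^ address)[:x]

nodes = [0b0001, 0b0100, 0b0111, 0b1010, 0b1101, 0b1111]
group = close_group(0b0110, nodes, 3)
# Distances to 0b0110: 0b0111 -> 1, 0b0100 -> 2, 0b0001 -> 7, others further.
```

This strict ordering is why "the group responsible for an address" is well defined for every address on the network.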

Hope that helps a bit.

11 Likes

@oetyng, I found this post from a couple of years ago where someone has basically said what David just did above but perhaps a little more concise. Just click on the quote title and it should take you to the post in the topic. Actually you might find the whole topic interesting even though some of it is outdated.

Also when you read what I quoted remember that group size was 4 for testing and is in no way suitable security wise.

@dirvine, is the following quote accurate enough?

6 Likes

Thank you for taking the time to write to me @neo. I think we are talking past each other a little bit though. And @dirvine, thanks a lot. I would have to improve my formulations, so that you don’t have to spend time explaining things that I was not really addressing (from my view at least). I am sorry about that. But I really appreciate that detailed answer, David!


I spent last week translating the logic in Prefix.rs, XORName.rs and routing_table/mod.rs from Rust to C# (because I wanted to use the real logic in my simulations, and because I find it easier to understand by implementing it myself), and the weekend before that I implemented a simple DataChains simulation from this description: https://forum.safedev.org/t/data-chains-general-rules-in-an-eventually-consistent-network/687. I read about CRDTs in another topic where this was discussed the other week, and I am familiar with node aging and churn and how this prevents sybil attacks and so on. I do know the consensus model, I know how it works, I know what it does. But my question is about something else. I hope I will be able to explain exactly what my question is about.
I do not claim to understand all just because I read it, far from. I’m just saying that I am not as unfamiliar with it as you might think. I do realize that my inability to clearly communicate my question is probably not helping in that matter. I will keep trying though :slight_smile:

Skip this part if you don't want to read the long background

I do not refer to SAFENet as the smart contract. I am talking about something akin to implementing a smart contract framework in/on SAFENet. Which to me, in the absolute simplest form, would be an app with a specific purpose, communicating with other nodes running the same app, and reaching consensus on something. So, not a general purpose smart contract framework, but one single app, with a single purpose, that reaches consensus about the output from the operations of its source code. A DApp.

When I come to the end of the explanation, I think also the mentions of atomicity, 2-phase commits and eventual consistency might make more sense. But we’ll see if I manage to express the thinking clearly enough.

As I started the topic, I have been thinking about “smart contract” implementation in SAFENet; execution of code by nodes, and reaching consensus upon the output and performing some action as a result of it. (<- Here is the crux, but we will come to this.) The way monetary value is represented, was part of the crux. But again, we will get there.
I will detail it as far as I can, so that I can point at the exact location where the logic supposedly sits, that I am trying to specify and reason about.

So, we have code that shall be executed on various nodes. For reaching consensus about the output we can very well use much of the logic SAFE already has in place for creating and managing groups.
If we want to duplicate this (i.e. we know that maidsafe developers are focusing on other things now, and will not incorporate this into network at this point), we create an app, which reuses the same logic. Now, stay with me and let’s not focus on the duplication of the logic. We assume we can reuse the ideas, and adapt to our specific needs.
So the functionality is about managing groups, much in the same way as in SAFENet, but additionally running code with some input and voting on the result of running this code.

Let’s say I have a matching app for ads (just an example). (Note, this can be done in many many ways, and the point is not that, but in the end, where the specific action is to be enforced by a group. We will get there, I will tell you when we are there, but it seems I need to describe a lot more of the background. So we’re at the background stage now.)
A number of users in the network run this app. Every time a user places an ad, there is some logic run, in this user’s app, with regard to how to proceed with the operation. There could be some initial MD stored, and then messages are sent out to a group with responsibility for this ad. Let’s say that is the case.
The members of group G receive a message saying: “At address S this MD is stored, of type this and that, which means you will run logic L with the input in fields X, Y, Z in the MD”.
Each member runs the logic, produces output in some MD we name B, and each of them signs the output they have calculated.
When N out of M signatures are available, we can consider the output stored in B to be valid. This output can be used in subsequent operations, and so it can go on…
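That run-sign-accumulate flow can be sketched as below. `logic`, `signed_result`, and `consensus_output` are hypothetical names, and the hash stands in for a real signature; the shape, not the crypto, is the point:

```python
import hashlib

def logic(x, y, z):
    # Stand-in for the app's deterministic matching logic L.
    return x + y * z

def signed_result(node_id, inputs):
    result = logic(*inputs)
    # Toy "signature": each node's hash over its own id and its result.
    sig = hashlib.sha256(f"{node_id}:{result}".encode()).hexdigest()
    return result, sig

def consensus_output(node_ids, inputs, n):
    results = {}
    for node in node_ids:
        result, sig = signed_result(node, inputs)
        results.setdefault(result, set()).add(sig)
    for result, sigs in results.items():
        if len(sigs) >= n:        # N distinct signatures over the same output
            return result
    return None                   # no output reached N-of-M agreement
```

Because the logic is deterministic, honest nodes all produce the same output, and N matching signatures over that output are what make it "valid" in the MD B.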
Let’s get concrete again.
We had a matching app. One user places an ad. N nodes in a group responsible for this ad will run code that finds the perfect match for this ad. They will vote on this. The output is thus a matching ad. It could be an address stored in an MD named B.
Let’s say this app has some other functionality too. That when an ad is matched, there is some monetary transaction being made. (We’re getting close to the crux, but yet far away).

Now, I am trying to understand the exact details of how the monetary transaction is being executed.

Here below we have all the steps in the above, and this is all just the way I have understood it to be required.

But, again, the very last sentence: then the data is changed. This, exactly here, is the crux I have been circling around from the very start. Nothing else.

How, in technical detail, is this data changed? Is one node responsible for actually updating the MD?
I have tried to imagine what is going on there, because I think this very part is not implemented in code (multisig, transfer of ownership), but I think it would be simplest if this part is explained, confined to that single step: then the data is changed. That is where the transaction is sealed. This is what I have been talking about all the time.
We are so close now :slight_smile:

Maybe there is no exact description of this, but if there is, it would be really nice to know it.

5 Likes

Thanks a lot for that link @neo. I’m delving into it later.

This here is about the same thing that I tried to describe in detail in one of the previous posts.
Just the algo for how this is achieved with an MD, step by step. I made a sloppy suggestion, just to hint at what level of description I am thinking about.
Like this part: the group must sign off on any change for it to be valid. It is like the ledger example, in that there is no single operation by a node changing the state of an MD. Rather, the state change is proposed, and then is considered valid as the last required signature is written.

I will look further in that topic and see if I can find such a description there.

2 Likes

No problem, think of it like this. There are say 8 nodes that all need to agree, and 5 of those 8 are enough. When you ask for the data, even 3 could be wrong (or be the only ones correct, for a few seconds), but the 5 answers you get back are the value at this precise time. So if the data is “changing” it will look like the last value up until 5 of the 8 agree, then it’s the new value. The whole group (the remaining ones) will then come to consensus with the 5 already there.
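The 5-of-8 read described above boils down to taking the majority answer among the replies, provided it reaches quorum. A minimal sketch (`read_value` is an illustrative name):

```python
from collections import Counter

def read_value(replies, quorum):
    # The most common reply is the current value, but only if at least
    # `quorum` replicas agree; otherwise there is no stable answer yet.
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= quorum else None

# 5 nodes already hold the new value "v2"; 3 still lag on the old "v1".
replies = ["v2", "v2", "v1", "v2", "v2", "v1", "v2", "v1"]
```

With 5 agreeing replies and a quorum of 5, "v2" is the value at this precise time, even while 3 replicas are behind.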

The data chain then enforces this network-wide; by that I mean the neighbour sections agree with the value. This is repeated for every value in the network “around” the location that values are stored in. This is the part that gets hard to envisage, but think of it like this:

Humans in Australia think that John Smith lives at 2 main street Troon, however to really know you speak to somebody close to main street Troon. They tell you exactly who lives there.

The network is like that, to find the truth you go to that part of the network, everyone else may have an idea, but that part knows the answer.

For Immutable Data it does not matter to be close, as its value is hashed to produce its name. So the data would in fact be “John Smith, 2 Main Street, Troon” and the name is the hash of this. But Mutable Data can have changed content, so we must visit Troon to see what it currently is. Not that it cannot change the second we look at it, but in the case of safecoin etc., if you are looking to see if you own it, then it cannot change without you, no matter what the network does (unless it’s hacked completely, like a >50% bitcoin takeover etc.). In this case it’s a mix of network consensus and you owning the private key that ensures that when the data belongs to you, it stays that way until you decide otherwise.
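The immutable-vs-mutable distinction above is easy to show: an immutable chunk is named by the hash of its content, so the name itself pins the value, while mutable data sits at a fixed address whose current value must be fetched from the group responsible. A sketch (`immutable_name` is an illustrative helper, and SHA-256 stands in for whatever hash the network actually uses):

```python
import hashlib

def immutable_name(content: bytes) -> str:
    # The name IS the hash of the content: same content, same name, always.
    return hashlib.sha256(content).hexdigest()

record = b"John Smith, 2 Main Street, Troon"
name = immutable_name(record)

# Any change to the content yields a completely different name,
# so an immutable chunk can never silently change under its name.
changed = immutable_name(b"Jane Smith, 2 Main Street, Troon")
```

This is why nobody needs to "visit Troon" for immutable data: if what you fetch hashes to the name you asked for, it is the right data.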

[Edit - Maybe the concise way to say this is “The data is changed when a client requests the data. The result it gets is from at least quorum of the group and that is the current correct answer.”

So in short each node changes the data, and if more than 50% of them have, then the data is changed. Data chains enforce this across more than one group and, a bit like a merkle tree (not in structure, but in purpose), will make sure that going from a known start point (say the genesis block) you can traverse the tree to ensure the nodes that say this is the data are in fact nodes that can be cryptographically proven to be the correct nodes to attest that this is the data"]

PS: If you think you are not asking clearly, you have met the guy who cannot answer clearly :smiley: but we will get there, honest we will :smiley:

14 Likes

David’s last explanation sounds like what I was thinking here:

In other words, there is not one copy, but several nodes each looking after a copy, who all update their copy when a change has been agreed as valid. So no single node can fake the change or corrupt the value - it will always be out voted by the other nodes looking after that data.

7 Likes

@oetyng I don’t know if you caught this. Each of those nodes has/keeps a copy of the MD data object, and each node changes its copy when consensus is met. Data chains ensure a requester gets the latest version of the MD in case some nodes have not yet updated it, i.e. avoiding race conditions etc.

5 Likes

To answer the immediate question as directly and quickly as seems possible, there’s a ‘version’ value for each mutable data that’s incremented when it’s changed. This is used (as neo says) like a flag for committing changes and to prevent duplicate simultaneous changes.

See routing mutable_data L370 for where entry_version is incremented when updates happen.

This is actually referred to in a comment below on L429

For updates and deletes, the mutation is performed only if the entry version of the action is higher than the current version of the entry.

Owner is changed by calling clear() then insert(new_owner) - see L571 fn change_owner
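The two behaviours just pointed at (version-gated mutation, and owner change as clear-then-insert) can be mirrored in a toy model. `Entry` and `Owners` are illustrative stand-ins, not the Rust types in routing:

```python
# Toy mirror of routing's mutable_data behaviour as described above.

class Entry:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def update(self, new_value, action_version):
        # "performed only if the entry version of the action is higher
        # than the current version of the entry"
        if action_version <= self.version:
            return False
        self.value, self.version = new_value, action_version
        return True

class Owners:
    def __init__(self, owner):
        self.owners = {owner}

    def change_owner(self, new_owner):
        self.owners.clear()          # clear() ...
        self.owners.add(new_owner)   # ... then insert(new_owner)
```

The version check is what turns the version field into the commit flag: two actions built against the same version cannot both land.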

Can the atomicity of this mechanism be broken? A detailed read of the code is required (however, that is not actually performed in this post, despite the title of the next section).


Explaining By Reading The Code

Now, there’s also the broader issue of how this actually works and why a single node can’t just change the value.

The key word when discussing updates to mutable data is accumulation.

The transaction starts life being signed by the client then broadcast to the network. This is a very ‘raw’ state, not having accumulated any network ‘credibility’ besides that of the client signature. It’s routed to the appropriate section of the network, where it gradually accumulates (or doesn’t accumulate) credibility from nodes in that section. That credibility comes in the form of a signature from each vault in the section saying the transaction is valid according to that vault.

The place in the code where this happens is in the vault code, mutable_data_cache; the whole file is really short and easy to read, so there are no particular methods to highlight.

The decision of each individual vault whether to sign-off on the credibility or not for a particular mutation happens in fn validate_concurrent_mutations

This decision starts life in each vault as a ‘pending write’ - see fn insert_pending_write.

Inserts the given mutation as a pending write. If the mutation doesn’t conflict with any existing pending mutations and is accepted (rejected is false), returns MutationVote to send to the other member of the group. Otherwise, returns None.

The caching of mutabledata by each vault in the section is what keeps any single vault from changing it - all the other vaults in the section would reject any future mutation since it wouldn’t match their cached version. (Yes, a can of worms has just been opened.)
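Why caching blocks a lone vault can be shown in a few lines: every vault checks a proposed mutation against its own cached copy, so a mutation based on a value only one vault holds never accumulates quorum (`Vault`, `sign_off`, and `accumulate` are illustrative names, not the real vault API):

```python
# Toy sketch of accumulation against cached copies.

class Vault:
    def __init__(self, cached_value):
        self.cached = cached_value

    def sign_off(self, mutation):
        # A vault only adds its signature if the mutation's base value
        # matches what the vault itself has cached.
        return mutation["base"] == self.cached

def accumulate(vaults, mutation, quorum):
    signatures = sum(v.sign_off(mutation) for v in vaults)
    return signatures >= quorum

vaults = [Vault("v1") for _ in range(8)]   # the section's cached copies
```

A rogue vault proposing a mutation against a value nobody else caches gathers at most its own signature, far short of quorum.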

The whole file at vault/data_manager/cache.rs is worth reading to better understand the accumulation process.

The accumulation module is fairly stable and may also be worth reading.

Anyhow, these are some pointers to technical entry points if that’s the desired approach. It’s not an answer, more a finger pointing roughly in that direction.


Explaining By Analogy

Mutating data is a bit like rolling a big stone down a hill.

A transaction entering the network is considered ‘valid’ if it has the correct signature from the client. This is the first point of validation. It’s like getting the right stone and the right person to the top of the hill. Without that prerequisite, nothing further can happen.

The transaction is routed to the section responsible for it. This is like the person starting to push on the rock to roll it down the hill.

The first vault in the section to be handed the transaction checks that it’s valid both in itself and against the vault’s own cached value of the mutabledata object. This is like the rock slowly gathering momentum just off the top of the hill. Once the transaction reaches the section, it begins accumulating signatures toward quorum (like the stone begins accumulating momentum).

If these checks pass, the vault sends the transaction with its signed approval to the other vaults in the section. Those vaults do the same checks and report their signed approval to all the other vaults in the section. The transaction incrementally accumulates credibility like the stone incrementally accumulates momentum rolling down the hill.

When the transaction reaches quorum, the mutation in cache is ‘saved to disk’ of the vaults responsible for it. Any future vault in the section not caching the new value will be treated as misbehaving. This is like the stone hitting a tree on the way down the hill, reaching the inevitable conclusion of the journey.

Transactions are discrete in that they are a ‘single data point’ (ie the broadcasting part), but the event of being ‘saved’ is not. Despite that, the outcome is final once reached.

There can be multiple transactions pending accumulation for the same mutabledata at the same time (eg multiple stones simultaneously rolling down the hill aimed at the same tree). The one that reaches quorum first is the one that’s committed.
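That ‘first to quorum wins’ rule might be modelled, very crudely, like this (the quorum constant and transaction names are made up; ‘first’ is modelled simply as ‘has reached quorum’):

```rust
// Crude model of competing mutations on the same object: each gathers
// votes independently, and the one that reaches quorum is committed.
use std::collections::HashMap;

const QUORUM: usize = 5; // eg a majority of an 8-vault section

// Returns the transaction (if any) that has accumulated a quorum of votes.
fn winner<'a>(votes: &HashMap<&'a str, usize>) -> Option<&'a str> {
    for (tx, n) in votes {
        if *n >= QUORUM {
            return Some(*tx);
        }
    }
    None
}

fn main() {
    let mut votes = HashMap::new();
    votes.insert("tx_a", 5); // reached quorum first: committed
    votes.insert("tx_b", 3); // still pending; rejected once tx_a commits
    assert_eq!(winner(&votes), Some("tx_a"));
    println!("committed: {:?}", winner(&votes));
}
```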


Explaining By Changing Perspective

Consider some other questions which dig into the nuances of the transaction mechanism (these are equally applicable to understanding the nuances of blockchain transactions):

  • What happens if multiple simultaneous transactions are competing to be committed (ie a race condition happens)?

  • How irreversible are transactions once committed (eg in bitcoin consider zero confirmations and orphan blocks)?

  • How would someone other than the owner spend the coins?

  • How can the consensus mechanism be broken and what is the effect of that (eg in bitcoin 51% vs hardfork are two different attacks on consensus with different effects)?

  • How are new coins created and what prevents them being created illegally?

  • Can coins be deleted or burned and how would this happen?

There are answers to all these questions but they’re too long for this particular post. Use them as new ways of looking into the problem - it might shed some similar light in a different way to the question of ‘how is data committed’.

Best of all (but technically challenging) try to break it. Run a private network and ‘steal’ some mutable data. It very well may be possible; only testing will confirm it.

15 Likes

Great to read, just awesome :smiley:
And, that what you described, yes, it is now clear to me :smiley:

@mav, wow, thanks. Just superb. The clarity with which you perceive and accordingly respond is stunning.

@happybeing, yes this seems to be what actually clears the misconceptions.

Less important things

Something was missing in my picture.
It is really obvious now.

I was thinking, how could this 1 object be securely updated. But of course it is 8 objects being updated, and then the rest comes naturally. (I.e. the correct value is the one most agreed upon: we actually query 8 sources for the value, and from the answers we can conclude by quorum which is currently true.)
It’s not that I didn’t know that there are 8 copies (and used to be 4). Just, something sent me off 359 degrees… (that means, standing right next to it and just not seeing it) Anyway. :slight_smile:
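That ‘query 8 copies and conclude by quorum’ idea can be sketched as a simple majority vote (the replica count and values here are invented for illustration):

```rust
// Sketch of "query all replicas, trust the majority": a value is accepted
// only if a strict majority of the queried replicas report it.
use std::collections::HashMap;

fn majority_value(replies: &[&str]) -> Option<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for r in replies {
        *counts.entry(*r).or_insert(0) += 1;
    }
    // Require a strict majority, so a split vote yields no answer.
    counts
        .into_iter()
        .find(|&(_, n)| n > replies.len() / 2)
        .map(|(v, _)| v.to_string())
}

fn main() {
    // One replica is out of date (or lying); the other seven agree.
    let replies = ["v2", "v2", "v2", "v1", "v2", "v2", "v2", "v2"];
    assert_eq!(majority_value(&replies), Some("v2".to_string()));
    println!("agreed value: {:?}", majority_value(&replies));
}
```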

And regarding nodes getting into an inconsistent state, that would only happen if they lost power right after, say for example, this line. But it doesn’t matter, since if a node loses power it will be wiped and restarted, and so the inconsistent state is wiped along with all of its other data. So it’s a non-existent problem.
(In data chain scenario I do not know what would happen there at reconnect, but I am pretty sure it has been covered.)

However, to run the example app I described, the background issue I started with, which sent me a bit off, is actually not yet solved. Even though the problem was not exactly where I was first looking.

To have a DApp evaluate logic, i.e. to have network-agreed logic, it currently seems to me that a parallel infrastructure would be needed, which, if you want to use the SAFE consensus model, would mean duplicating a lot of the existing algorithms.

I can write code that makes n instances of an app communicate on the SAFE network, and reach a quorum about a calculation. The manifestation of the quorum having been met is definitely crude, but it’s a start.
But, when an MD is about to change owner as a result of this, then it becomes more tricky.

As long as owners.Len() is not allowed to be > 1 at least :slight_smile:

I will need to read through some more parts to map out what I’m missing. I’ll be back with that.

6 Likes

This functionality is likely to be incorporated at the network level, but some time after the launch. We had a bit of a chat about doing it as an app here:

I say the above because I recall David saying this, and realised it was a much better way to go than building it on top. Unfortunately I don’t have a link to David’s post, but I’m clear he anticipated providing a way to do this using the built-in consensus mechanism.

4 Likes

Came across that one before. I was thinking about a smaller scope, single purpose app, and not a computation app.

So just distribution of a particular app’s logic.

Yes, some time after launch it will probably be implemented at the network level, but to have a DApp ready for launch some other solution would be needed. Hard to say whether a solution can be provided with reasonable effort, considering that the network will have this built in after some time.
Preferably it would be abstracted away so that it could be swapped out for the network implementation later; also hard to say if that can be done in a satisfying manner.

6 Likes

At the current stage of development of the SAFE network, I agree.

This is a post which seems related to your thoughts on smart contracts (replace structured data with mutable data, the concept remains identical):

https://forum.safedev.org/t/multisig-revamp-for-structureddata-appendabledata/135/49

I believe more complex ownership_test functions will be required in the future.

Ownership testing may take the form ownership_test(owners, signatures, script) so the evaluation of ownership is independent of the data object itself.
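As a loose sketch of that shape, with the script parameter reduced to a plain m-of-n threshold and signature ‘verification’ faked by name matching (so this shows only the control flow, not real cryptography; duplicate-signature handling is also ignored for brevity):

```rust
// Hypothetical m-of-n ownership test. A 'signature' here is just a
// string naming an owner; real code would verify cryptographic
// signatures against owner public keys.
fn ownership_test(owners: &[&str], signatures: &[&str], required: usize) -> bool {
    // Count how many provided signatures correspond to a listed owner.
    let valid = signatures.iter().filter(|s| owners.contains(*s)).count();
    valid >= required
}

fn main() {
    let owners = ["alice", "bob", "carol"];
    // 2-of-3 multisig: any two owners may authorise the mutation.
    assert!(ownership_test(&owners, &["alice", "carol"], 2));
    assert!(!ownership_test(&owners, &["alice"], 2));
    assert!(!ownership_test(&owners, &["mallory", "eve"], 2));
    println!("2-of-3 test passed");
}
```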

Also want to add @oetyng I really like the way you’ve approached this topic. Threads like this are why this is such a great community, so thanks for being part of it.

9 Likes