Seems like there will be trouble if people keep asking for a block that is 6 blocks old. Maybe a majority of elders say they have seen the new block and mark the 6 deep block as absolute. But the newest block hasn’t propagated yet to some others, so they vote their 5 deep block as unclear. Would they then be punished? They didn’t do anything wrong. If they are not punished, what would their incentive be to mark anything as absolute?
Yes, not very simple for Bitcoin. I think there is an issue in setting the “truth threshold” (unsure → sure) used for punishment. If it is block based, you are going to have the same issue of different nodes receiving blocks at different times. It doesn’t matter if the threshold is 6 blocks or 100 blocks. If it is time based (is block X seconds old), likewise you have different elders receiving the request at different times and therefore reaching different conclusions. Distributed consensus is certainly not simple.
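To make the threshold problem concrete, here is a tiny illustrative sketch (entirely hypothetical, just modelling the scenario above): two elders apply the same "6 blocks deep = absolute" rule, but because the newest block hasn't propagated to one of them, they honestly reach different verdicts about the same block.

```python
# Hypothetical illustration of the "truth threshold" problem described above.

def verdict(block_height: int, local_tip: int, threshold: int = 6) -> str:
    """Classify a block as 'absolute' or 'unclear' from this elder's local view."""
    depth = local_tip - block_height
    return "absolute" if depth >= threshold else "unclear"

block = 100
elder_a = verdict(block, local_tip=106)  # has seen the newest block -> 6 deep
elder_b = verdict(block, local_tip=105)  # newest block not yet propagated -> 5 deep

# Both elders followed the rule honestly, yet they disagree.
assert elder_a == "absolute" and elder_b == "unclear"
```

Punishing elder B here would punish honest behaviour, which is exactly the incentive problem raised above.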
So what is the incentive to ever commit to anything as final?
Okay … sorry I missed this thread when it started. Very interesting.
I’ve read all the way through and I think I understand the theory here. I would simply call this an oracle system. It has many uses.
Just off the bat, as Bitcoin blockchain was mentioned many times as an example: Why is the blockchain needed at all? Assuming oracles exist, my integrated SAFE bitcoin wallet can simply query for the results of my bitcoin addresses with the [bitcoin-address-query-plugin]. The plugin has the blockchain and deals with it. This is like any lightweight bitcoin wallet where the blockchain is managed by a centralized server - but in this instance, the data is verified by the network from multiple servers.
So we don’t need to deal with blockchain issues at all. Done.
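The client-side idea can be sketched roughly like this (the plugin name above is hypothetical, and so is this logic: it just shows "query several independent sources and accept only a majority answer"):

```python
# Sketch: a lightweight wallet asks several elders running the
# (hypothetical) bitcoin-address-query-plugin for the same balance
# and trusts only an answer a majority agrees on.
from collections import Counter

def majority_answer(responses, quorum_fraction=0.5):
    """Return the answer most responders agree on, or None if no majority."""
    counts = Counter(responses)
    answer, votes = counts.most_common(1)[0]
    if votes / len(responses) > quorum_fraction:
        return answer
    return None

# Five elders answer a balance query; one is faulty or lagging.
responses = [150_000, 150_000, 150_000, 150_000, 149_000]  # satoshis
assert majority_answer(responses) == 150_000
```

This is the same trust model as a light wallet, except the answer is cross-checked across multiple servers instead of taken from one.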
Yeah, that would be a great one. And a nice way to integrate the existing torrents and torrent users into Safe Network.
There are tons of things this could be used for … it goes well beyond smart contracts.
Very clever @Antifragile !!
@dirvine - have you seen this thread yet? If not wondering if you could have a peek. I’m very curious if this is do-able or not and how hard would it be to implement. Suggest you start from @JPL summary post #37 above.
The problem would be how the elders source their information. If you have them all source a certain API X then you would expect them all to return the same value, but maybe some elders are unable to access the API due to ISP-level restrictions; they should then return a time-out or "unable to retrieve" status, so they don't get punished for a null value or for being too slow.
How about DNS poisoning, where the IP of the destination has been poisoned by a man-in-the-middle attack? The elder will get punished while it just did its job and is now being reported as malicious. This could be an attack vector: if those elders are punished, people able to exploit this could force many elders to suddenly shut down.
I believe a high success-return rate should be the leading determinant, because we can never be 100% certain; those with the best consistency, success in reaching consensus, and speed should weight the final answer. Or the return data should carry a status of "indecisive" along with the percentage of peers that agreed.
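That weighting idea could look something like this sketch (an assumed design, not a spec: reputation weights and the "indecisive" status are the suggestions from above):

```python
# Sketch: weight each elder's answer by a reputation score, and return
# "indecisive" plus the agreement level when no answer clears a threshold.

def weighted_consensus(answers, threshold=0.66):
    """answers: list of (value, reputation_weight).
    Returns (status, best_value, agreement_fraction)."""
    total = sum(weight for _, weight in answers)
    tally = {}
    for value, weight in answers:
        tally[value] = tally.get(value, 0.0) + weight
    best_value, best_weight = max(tally.items(), key=lambda kv: kv[1])
    agreement = best_weight / total
    status = "ok" if agreement >= threshold else "indecisive"
    return status, best_value, agreement

# Three reliable elders agree; one low-reputation elder dissents.
answers = [("42", 0.9), ("42", 0.8), ("42", 0.7), ("17", 0.2)]
status, value, agreement = weighted_consensus(answers)
assert status == "ok" and value == "42"
```

If the dissenting weight were larger, the same call would return "indecisive" with the agreement percentage, which the client could then surface to the user.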
Also it would be great if certain services could pay the network to cache (as ImmutableData) certain calls if necessary, so the call can be compared or used to yield even higher confidence.
I don't think that is how it works. The elders that choose to run these plugins source them themselves. They could be from the same open-source repo, or (less likely?) they could be coded by the node manager themselves. If ISPs are blocking access somehow, then the elder can run the code on a local server - I would expect that to be the default in almost any case. Latency would be an issue if you are querying a service on the Internet.
My understanding of the Elder setup:
- The Elder node sets up these services (servers); personally I suspect these would all be on local machine(s) of the Elder.
- The Elder defines the location of these services in a Safe Network config file. If they have a powerful enough machine, they might run these all on the same box (this will also depend on the resources needed by the particular plugin), and if so all that would be in the config for the plugin is the local port number of the service.
- Services offered by the Elder are broadcast by and in Safe Network.
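The steps above might look something like this once the config is parsed (every field name here is hypothetical; no such config format exists yet):

```python
# Sketch of an elder's parsed plugin config: plugin name -> local service
# endpoint. Names and structure are assumptions for illustration only.

PLUGIN_CONFIG = {
    "bitcoin-address-query": {"host": "127.0.0.1", "port": 8332},
    "price-feed":            {"host": "127.0.0.1", "port": 9001},
}

def service_endpoint(plugin_name: str) -> str:
    """Resolve a plugin name to the local service this elder runs for it."""
    entry = PLUGIN_CONFIG[plugin_name]
    return f"{entry['host']}:{entry['port']}"

assert service_endpoint("bitcoin-address-query") == "127.0.0.1:8332"
```

Queries for a plugin the elder advertises would then be dispatched to the matching local port.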
I don't see any need for the network to have any sort of full compute layer with your proposal @Antifragile - compute can be done with plugins. In fact, with plugins we could emulate other crypto VMs like Ethereum's, and also create much simpler or more powerful VM plugins.
All Safe Network needs to concern itself with is determining and returning consensus results (and I don't mean to make light of this - I'm sure there would be a lot of consideration/effort involved in developing this additional API for Safe) and managing the Elders' behaviour.
That reminds me a bit of the idea behind DLCs in Bitcoin, where oracles just publish signed results and it is up to the people making the contract which sources they want to use in their contract.
It seems that using elders for consensus wouldn’t be required for a DLC oracle styled approach. It could just be like a separate reputation per operator per plugin at the application level. You’d just need some index of which nodes support a given plugin.
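The application-level index and per-operator-per-plugin reputation mentioned above could be as simple as this sketch (hypothetical data structures and update rules, just to make the idea concrete):

```python
# Sketch: which nodes support a given plugin, plus a separate reputation
# score per (operator, plugin). Update rules are illustrative guesses.
from collections import defaultdict

plugin_index = defaultdict(set)          # plugin name -> node ids offering it
reputation = defaultdict(lambda: 1.0)    # (node id, plugin) -> score

def register(node_id: str, plugin: str):
    """Record that a node offers a plugin."""
    plugin_index[plugin].add(node_id)

def record_result(node_id: str, plugin: str, agreed_with_majority: bool):
    """Nudge reputation up when a node matched the majority, down otherwise."""
    key = (node_id, plugin)
    reputation[key] *= 1.05 if agreed_with_majority else 0.8

register("node-a", "price-feed")
register("node-b", "price-feed")
record_result("node-a", "price-feed", True)
record_result("node-b", "price-feed", False)
assert reputation[("node-a", "price-feed")] > reputation[("node-b", "price-feed")]
```

A client could then pick high-reputation operators from the index for a given plugin, DLC-style, without any elder consensus being required.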
@Antifragile do you propose this is part of the core network code, or would it be enough just to prove you are an elder on the Safe Network and adopt the plugins as if in a parallel network?
I think I know the answer, because punishment wouldn't be the same if not integrated into core code. The beauty seems to be piggybacking off the network's age-based reputation/promotion/demotion to get an equal amount of security and decentralization.
As an aside, I had an idea for how to make media content properly tagged etc., so that media apps aren't just getting filled with junk or improperly labeled content. Part of that off-the-wall brainstorm was to run basically a parallel Safe Network where nodes redundantly process some of the same content to reach consensus on whether the info about the content is correct; if so, they get rewarded a token (the mining or farming bit), and that token could be given value at market or otherwise, but I won't get into that right now.
But it seems to me this piggybacking plug-in idea could work much better and lower the complexity for anyone wanting to create a similar plugin.
I think the real attraction here for me is being able to suck content and information properly from the clear net to Safe Network in a way that is at least somewhat reliable, continuous, decentralized, and incentivized.
Regardless of whether it's integrated or running in parallel, being required to provably be an elder gives even more reason and incentive to be an elder, as it increases network utility by serving valuable information via node age/elder status. Imo.
I wonder what you and others think about whether DBC altcoins could also be awarded as further incentive for this kind of processing…
Look at Chainlink. I don't think it gives a flying F about bear markets. So I think you make a great point there. Of course some plugins might do better than others, but data is data and it needs to port to Safe Network, even for it to act as a bridge of reliable data for the clear net once things progress far enough.
I've only spent a couple of hours trying to figure out a way for smart contracts to work, sprinkled with some of the snippets I've picked up skimming this thread over time. Though I did think about this a lot in the past and concluded it was hard, and that it would need to replicate or reuse a secure consensus mechanism such as we now have (Sections with splitting and node aging on good behaviour) etc.
Clearly that is to a degree in line with using Elders with plugins.
I don't claim to have understood everything in this thread, so I have definitely missed stuff that might be important, but without a definitive description of how this can work, all in one place and explained to a level of detail like a white paper, it is very hard to tell whether this flies or not. Hence I've tried to figure out how it could work from first principles. But as noted, only for a couple of hours.
So far I can’t see how some key aspects could work, while I can imagine solutions to other parts.
Here are some blocks I'm stumbling over:
how are contracts discovered by the network? I can imagine creating a smart contract, sending it to the network, and it arriving at a section which knows what to do with it, but this quickly runs into other problems, such as…
how does the network discover nodes which are capable of running a particular contract (or plugin)?
As I tried to solve questions like this I ran into other stumbling blocks.
One reason I’ve taken this approach is that I don’t understand how using Elders can work. For example, if you punish Elders in terms of network status you risk undermining the network by opening up attack vectors. You could punish them by blacklisting them from running a particular plugin as @Antifragile has suggested, but this may also introduce problems. How is this status stored, managed etc?
In fact I don't see how any scheme that interferes with normal network functions can work; even having Elders run arbitrary plugins could be a problem in this respect, so I'm concerned about that, but it's not the only problem.
I concluded that using Elders with plugins doesn’t necessarily solve the problem, though as noted this may be because I don’t fully understand how the issues I’m stumbling on plan to be dealt with.
I'm also not sure you need to use Elders at all, so long as you can solve the problems I've encountered early on. If so, I think you can allow any willing nodes to participate, and minimise the workload of Elders by pushing most of the work out to the client requesting the computation: such as selecting nodes from those willing to run a particular contract, and wrapping this up with a DBC for payment and the parameters needed to specify exactly what is required. This would be sent to the participating nodes, who reply to the Section which matches the hash of the contract specification published by the client. Results are validated according to the contract, and the DBC can be used to reward Section Elders and each participating node which fulfilled the request according to the criteria. I think there may be scope to get the client to do more of this work (eg allocating the rewards and creating DBCs to distribute them), but this is a start.
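The client-driven flow described above can be sketched like this (all names, fields, and the routing rule are assumptions for illustration, not Safe Network APIs):

```python
# Sketch of the client-driven compute request: the client publishes a
# contract specification, and the hash of that spec determines which
# Section receives and validates the results.
import hashlib

def contract_address(spec: bytes) -> str:
    """Hash of the contract specification; results are routed to the
    Section whose address space matches this hash."""
    return hashlib.sha256(spec).hexdigest()

def build_request(spec: bytes, payment_dbc: str, min_node_age: int) -> dict:
    """Bundle everything a participating node needs, as described above."""
    return {
        "contract_hash": contract_address(spec),
        "payment": payment_dbc,        # DBC covering the reward pool
        "min_node_age": min_node_age,  # filter for willing nodes
    }

req = build_request(b"compute: sum of inputs", "dbc-placeholder", min_node_age=5)
assert len(req["contract_hash"]) == 64  # sha256 hex digest
```

The point of hashing the spec is that every participant independently derives the same Section address, so no Elder coordination is needed up front.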
So the problem seems to be about what I listed as stumbling blocks, and may not need to rely on Elders running the computation at all, which I think would be preferable. There's a question of how to punish nodes that don't behave, such as by spamming the contract, but I think that's solvable. For example, the client could set a minimum node age and require any willing nodes to have put up a stake in the form of a DBC in order to be considered, which can be forfeited should a node not provide a result which meets some validation criteria.
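The stake idea might settle out like this sketch (hypothetical rules: a willing node must meet a minimum age and lock a stake; nodes whose result fails validation forfeit it):

```python
# Sketch of stake-based punishment for a client-driven contract.

def settle(participants, valid_result, min_age=5):
    """participants: list of (node_id, age, stake, result).
    Returns the stake payout per admitted node; bad results forfeit."""
    payouts = {}
    for node_id, age, stake, result in participants:
        if age < min_age:
            continue  # never admitted, so no stake was at risk
        payouts[node_id] = stake if result == valid_result else 0  # forfeit
    return payouts

nodes = [("a", 9, 10, "42"), ("b", 7, 10, "42"), ("c", 8, 10, "spam")]
assert settle(nodes, valid_result="42") == {"a": 10, "b": 10, "c": 0}
```

Reward distribution (from the DBC) would be layered on top of the returned stakes, which this sketch leaves out.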
I think what I’ve realised from this exercise is the importance of a more detailed design paper, an RFC or even code. Maybe someone has thought deeply enough to do this? Until then I’m not sure we can know if a particular approach is feasible.
Speaking only for myself, I’m just a backseat coder, and not a good one either! I think this thread is a great brainstorm and perhaps an interested and decent coder like yourself @happybeing will come along and begin attacking the problems here and fleshing something out. Probably not a rush in any case as this would definitely be a post-beta upgrade.
From what I understand of this proposal it seems like it would have less impact on the network than a full in-built decentralized VM and I find that very attractive as I’d really like to see consensus computing and oracles on Safe in the future - but wouldn’t want to ‘break’ the network’s speed and efficiency to have that functionality.
Thanks. I don’t understand the distinction between smart contract and plugin, both are inputs, code and outputs verified by the network in some way but maybe the answer to this is to define more precisely what the difference is in your scenario.
The network-owned file is interesting and sounds like it might work both with your Elder + plugin approach and with my more general any-node approach. Essentially it is a central database of nodes and their willingness to perform an optional class of operation (plugin or smart contract, however they are defined).
I don’t agree that Elders can be punished by removing their status because this interferes with the core security mechanisms of the network and we should avoid that wherever possible. I agree that using Elders gives some protection against bad behaviour but I don’t think this is necessary, and think we can avoid interfering with the network’s security mechanisms as explained.
How can you enforce this? I expect people will take the easy option and have them run on the same machine. Does this matter?
I’ll think about it more later, thanks again.
Thanks that’s helpful clarification.
I think the scheme I outlined achieves this and gives greater flexibility (and market efficiency - as in cost benefit), and doesn’t require interference with network operation (eg by demoting Elders).
Yes, that can work. I don’t think it’s necessarily better than a scheme that’s open to everyone with the trade off controlled by the client, though it may be simpler which is also a plus!
Oh generic plug-in framework… how I want you so.
Would be a great BGF project eventually.
The main purpose of an Elder-run plug-in network is to port info from the clear net and/or act as oracles in a decentralized manner.
I'm thinking that Elders should have to prove to the plug-in and the network, with their sk (secret key?), that they are operating both.
These groups of Elders running plugins should probably also have to provide redundant material to the group, so it can agree that the result is indeed what was requested by users within the network.
Is there any form of punishment for Elders giving wrong info?
Should the Elders receive further incentive for this service and what should the reward be? SN Token?
An app specific token that apps utilizing the general plug-in framework could reward?
I imagine there is a client app and an elder plugin. The app the client uses should query multiple elders running the plugin … and reward those that give the consensus answer.
Just don’t reward them.
Could be either - open to the app & plugin developer.
Not sure that any framework is needed here? Seems nodes can run what they like in addition to the network and provide oracle info … do elders necessarily need to be involved? This is just nodes sharing with nodes and forming their own consensus.
Of course a standard bit of code for running an oracle plugin + client app would be great.
I was thinking a little about this, and to make it as seamless as possible I thought it'd be better if any SN Dapp that offers clear-net requests, or knows when it's making one, simply passes it on to a specific plug-in.
That way clients just interact with what they want on the network, and if what they happen to request is sent to a plug-in for a clear-net query then they don't have to know any better. As long as there are no privacy concerns (which I don't think there are), it would be the most user-friendly option IMO.
Granted, if the plugins were Dapp-specific then maybe it could be hard for some plugins to gain traction amongst Elders. So you might have a point that just nodes coming to consensus could be more inclusive. But the point is that there is already an element of trust established by piggybacking off network Elders that may want to earn more SN Tokens and/or another token.
Could it be as simple as: as long as they are returning the same results, it's good enough? Collusion could be a lot easier on plugins, but if bad participants are constantly being weeded out it would probably be sufficient.
I’m sure something important can be learned from already existing oracle networks and how they handle things which I’ve actually never looked into. Something I’ll have to get to.
I think that is a good option too, and the most flexible. Personally I would like the option to have a token mined upon successful retrieval of specific information, and the ability to set parameters of the token supply, mining reward over time, mining difficulty, etc.; perhaps those could be part of a DBC API that feeds into a plug-in API.
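The "mining reward over time" parameter could be as simple as this sketch (purely illustrative: a Bitcoin-style halving schedule standing in for whatever emission curve a plugin author might configure):

```python
# Sketch: a configurable token-emission schedule for a plug-in token.
# Parameter names and the halving model are assumptions for illustration.

def reward_at(epoch: int, initial_reward: float = 50.0,
              halving_every: int = 210_000) -> float:
    """Reward paid per successful retrieval at a given epoch,
    halving every `halving_every` epochs."""
    return initial_reward / (2 ** (epoch // halving_every))

assert reward_at(0) == 50.0
assert reward_at(210_000) == 25.0
assert reward_at(420_000) == 12.5
```

Supply cap and difficulty would be further parameters layered on top; the point is just that these knobs could be exposed to plug-in authors through such an API.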
Not sure how it would all fit together to be honest but I definitely know what functionality I’d like to see.