Update 06 April, 2023

What is VRF?

VRF is a new term to me, so from Wikipedia:

In cryptography, a verifiable random function (VRF) is a public-key pseudorandom function that provides proofs that its outputs were calculated correctly. The owner of the secret key can compute the function value as well as an associated proof for any input value. Everyone else, using the proof and the associated public key (or verification key[1]), can check that this value was indeed calculated correctly, yet this information cannot be used to find the secret key.[2]

A verifiable random function can be viewed as a public-key analogue of a keyed cryptographic hash[2] and as a cryptographic commitment to an exponentially large number of seemingly random bits.[3] The concept of a verifiable random function is closely related to that of a verifiable unpredictable function (VUF), whose outputs are hard to predict but do not necessarily seem random.[3][4]

The concept of a VRF was introduced by Micali, Rabin, and Vadhan in 1999.[4][5] Since then, verifiable random functions have found widespread use in cryptocurrencies, as well as in proposals for protocol design and cybersecurity.
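
In code terms, that definition boils down to a prove/verify pair over a keypair. Here is a minimal sketch of the shape in Rust (illustrative names only, not any particular crate's API):

```rust
/// Sketch of the VRF interface implied by the definition above.
/// Illustrative names only, not any particular crate's API.
pub trait Vrf {
    type SecretKey;
    type PublicKey;
    type Proof;

    /// The secret-key holder computes the pseudorandom output
    /// *and* a proof that it was computed correctly.
    fn prove(sk: &Self::SecretKey, input: &[u8]) -> (Vec<u8>, Self::Proof);

    /// Anyone holding the public key can check that `output` really is
    /// the function's unique value at `input`; neither the output nor
    /// the proof helps recover the secret key.
    fn verify(
        pk: &Self::PublicKey,
        input: &[u8],
        output: &[u8],
        proof: &Self::Proof,
    ) -> bool;
}
```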

9 Likes

So all this is happening under the stable set work, but is there still a stable set concept? I thought that the stable set was mainly composed of trusted and stable Elders. So will nodes with consistent uptime in a close group, which would have been Elders because of node aging, just be considered archive nodes?

7 Likes

Fabulous update! And @dirvine’s extensive explanations in the stream are really, really valuable. Looks like it’s starting to come together for real.

15 Likes

It’s no longer needed. It was a way to stabilise membership for consensus. Now we just use good old group consensus. We just repurposed the repo as a landing zone for all this work, but we will likely cut it all over to the main repo in just over a week’s time.

17 Likes

Okay, this was my initial assumption, but then I remembered the mention of archive nodes being easy to add now, and wondered if the stable set was the way to differentiate archive nodes.

Where my mind is now boggling is how much nodes need to be trusted and what will allow a node to be an archive node. It seems the weighting of trust or good behavior is gone now that node aging has been stripped out.

If there is enough redundancy in the data, and the network can handle replication under heavy churn, then really there is no concern, correct? It’s that simple, if I’ve got that right, I guess.

Pretty amazing if true.

11 Likes

This is a neat thing. We don’t need to trust an archive node. We ask for data, and the data is either self-validating (chunks) or client-signed and DBC-backed.

The archive node has it or does not have it, but it cannot create it (unless it buys it :smiley: :smiley: )

To get these on the libp2p network, we just have nodes advertise themselves as archive nodes and they get every chunk stored and can retrieve it for us.
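
To make “self-validating” concrete, here is a minimal sketch assuming a SHA-256 content address; the real network’s hash and address types may differ:

```rust
use sha2::{Digest, Sha256}; // sha2 = "0.10"; the real network's hash may differ

/// A chunk is self-validating: its network address is the hash of its
/// content, so a client can verify data returned by an untrusted node.
fn chunk_is_valid(address: &[u8], content: &[u8]) -> bool {
    Sha256::digest(content).as_slice() == address
}

fn main() {
    let content = b"some immutable chunk";
    let address = Sha256::digest(content); // address = hash(content) at store time

    assert!(chunk_is_valid(&address, content)); // honest reply verifies
    assert!(!chunk_is_valid(&address, b"forged bytes")); // a forgery cannot
    println!("chunk verified against its address");
}
```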

Same with DAG audit nodes. However, here they would gossip any double-spend attempts to the whole network, thereby invalidating the DBC and helping to ensure it is unspendable. So a backup, if you like; one we likely don’t need, but for the super-careful types it will be beneficial to know there are layers of security above just group consensus.
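
A rough sketch of that audit idea, with made-up `DbcId` and `SpendFingerprint` types standing in for the real ones: remember the first spend seen for each DBC and flag any conflicting second spend for gossiping:

```rust
use std::collections::HashMap;

type DbcId = [u8; 32]; // made-up stand-in for a DBC's public identifier
type SpendFingerprint = [u8; 32]; // e.g. a hash of the signed spend

/// Remembers the first spend seen for each DBC id.
#[derive(Default)]
struct AuditNode {
    seen: HashMap<DbcId, SpendFingerprint>,
}

impl AuditNode {
    /// Returns the conflicting pair if `spend` contradicts a spend we
    /// have already recorded for this DBC.
    fn observe(
        &mut self,
        id: DbcId,
        spend: SpendFingerprint,
    ) -> Option<(SpendFingerprint, SpendFingerprint)> {
        if let Some(&first) = self.seen.get(&id) {
            // The same spend twice is harmless; a *different* spend of
            // the same DBC is a double-spend attempt worth gossiping.
            return (first != spend).then_some((first, spend));
        }
        self.seen.insert(id, spend); // first spend seen for this DBC
        None
    }
}

fn main() {
    let mut node = AuditNode::default();
    let id = [1u8; 32];
    assert!(node.observe(id, [2u8; 32]).is_none()); // first spend: fine
    assert!(node.observe(id, [2u8; 32]).is_none()); // duplicate: fine
    assert!(node.observe(id, [3u8; 32]).is_some()); // conflict: gossip it
}
```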

i.e. we have our trust in data, but not in individual nodes.

18 Likes

Any idea how much the API will change in the coming weeks? I was about to take @happybeing’s hints and attempt to redo gooey in Rust using the API instead. It is going to be a major learning curve for me, so I don’t want to be banging my head unnecessarily :upside_down_face:

6 Likes

What’s the best place to start looking at the APIs currently available?

6 Likes

Well, almost anyone would be better equipped to answer that, as I have never tried to use an API. I guess clues can be taken from the CLI and here

3 Likes

I love this update and the explanation. I feel sorry for you and the team that you had to go through so many twists and turns to get to where you are now but I’m sure it was worth it. It was never going to be a linear process like building a wall or walking a known trail. More like a jungle it seems!

I also love that there will be no more distinction between nodes that have been online for a long time and those that haven’t. I’d been thinking for a while that, as it stood, after a few years the vast majority of the Elders would be running in datacentres or a cloud service, run by people very experienced in keeping infrastructure running. The centralisation would be extreme and make the system vulnerable to attack, continental-scale internet disconnections and failure of entire AWS Regions (other cloud services are available).

Power cuts, hardware failure and house moves would keep all the ‘normal’ home users down at Adult or below. To say nothing of the difficulties of maybe not being able to do an OS upgrade, or even a quick router firmware update, without losing age. There would be much less incentive for most potential users to add storage and compute to the network than if they can start earning right away and it doesn’t matter if their setup is offline for a bit.

I’m excited for you, the team and everyone here!

16 Likes

If it was me, I would start with the CLI and get familiar with the things you want to use, then look at how the CLI code does this. The CLI uses the Rust API so you may be able to cut and paste things you want to implement.

@Josh I doubt much will change, at least not in ways that make it difficult to keep your code in sync. You will learn a lot regardless, even if you give up in disgust :wink:, because most of your learning won’t be with the API but with everything else you build on top of it.

So stop procrastinating and bite that bullet!

I think the biggest ‘changes’ will be in things built to make the API easier to use. Not necessarily by MaidSafe: anyone starting with the API is likely to have to do extra work for common operations, and that work can be put in a library and shared to make things easier for those who follow.

I think that building gooey could be an excellent way to identify those operations and a step towards creating such a helper library.
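
Purely as a hypothetical illustration of such a helper (none of these names come from MaidSafe’s code), the pattern is to wrap a multi-step raw operation in one shared call:

```rust
use std::io;

/// Stand-in for whatever low-level operations the real Rust API exposes.
pub trait RawApi {
    fn put_chunk(&self, bytes: &[u8]) -> io::Result<[u8; 32]>;
}

/// The kind of shared convenience wrapper described above: one call
/// that composes a common multi-step operation (split a blob, store
/// each chunk) so every app author doesn't reinvent it.
pub fn put_blob<A: RawApi>(api: &A, blob: &[u8]) -> io::Result<Vec<[u8; 32]>> {
    const CHUNK_SIZE: usize = 1024 * 1024; // illustrative 1 MiB chunks
    blob.chunks(CHUNK_SIZE).map(|c| api.put_chunk(c)).collect()
}

// Dummy backend so the sketch runs without a network.
struct InMemory;
impl RawApi for InMemory {
    fn put_chunk(&self, bytes: &[u8]) -> io::Result<[u8; 32]> {
        let mut addr = [0u8; 32];
        addr[0] = (bytes.len() % 251) as u8; // fake address for the demo
        Ok(addr)
    }
}

fn main() -> io::Result<()> {
    let addrs = put_blob(&InMemory, &vec![0u8; 3 * 1024 * 1024])?;
    assert_eq!(addrs.len(), 3); // 3 MiB blob -> three 1 MiB chunks
    Ok(())
}
```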

12 Likes

I feel it will get simpler in many ways, but the network will do very little really. It’s gonna be the data type APIs that everyone will care about, and mostly client side.

I would give it a couple of weeks and then we can see the direction much of this goes in. There should be a massive focus on API when we get this up and running.

22 Likes

Awesome update and explanations!

Will this be exposed and user controllable?

Does this mean the network still needs to be single-protocol (IPv4)? Or does the use of libp2p open doors for dual-stack in the future?

9 Likes

It does, but that requires a lot of thinking, as nodes need to be able to contact each other. So if one node is IPv6 and the close group is all IPv4, the network breaks as of now. Later, though, we may have some clever ideas, us or the libp2p community that is :slight_smile:
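
A small sketch of that reachability problem using libp2p-style multiaddr strings (plain `&str` here to keep it dependency-free):

```rust
/// Filter a peer's advertised addresses (libp2p multiaddr strings,
/// kept as plain &str here) to those an IPv4-only node can dial.
fn dialable_by_ip4_only(addrs: &[&str]) -> Vec<String> {
    addrs
        .iter()
        .filter(|a| a.starts_with("/ip4/"))
        .map(|a| a.to_string())
        .collect()
}

fn main() {
    // A dual-stack peer stays reachable from an IPv4-only close group...
    let dual = ["/ip4/203.0.113.7/tcp/4001", "/ip6/2001:db8::7/tcp/4001"];
    assert_eq!(dialable_by_ip4_only(&dual).len(), 1);

    // ...but an IPv6-only peer is invisible to it: the "network breaks"
    // case described above.
    let v6_only = ["/ip6/2001:db8::7/tcp/4001"];
    assert!(dialable_by_ip4_only(&v6_only).is_empty());
}
```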

16 Likes

This is a fantastic update. Waking up indeed. Perhaps time to follow more closely once again :slight_smile:

18 Likes

Thank you for your input!

6 Likes

Indeed - will @Cantankerous ever post?
I am so jealous of that username, I want it for myself

10 Likes

Will @NotOpinionated do?

6 Likes

Why would you want it? :crazy_face:

5 Likes

So folk know what to effin expect, ya dobber!!!

5 Likes