Let's nail this down. We need a solid breakdown of how SAFE works. The Freenet peeps want to know.

I suspect @freedomuser’s point is that if a node is not getting GET requests for the data it holds, it’s not getting paid. So it would be tempted to drop data it isn’t getting paid for, in the hope that the freed space fills up with data it will get paid for - that is, data that is popular and gets lots of GET requests. If this is true, then data will drop out of the network.

If accesses of data are free, how do you prevent a storage node from running a bunch of GET requests from another client to get more funds?

1 Like

Nah. Rust isn’t that complicated. Many of us have picked it up in very short order.

There are really compelling reasons to use Rust. Do you know anything about it?

As far as I understand it, nodes are rated based on how long they have served data. A node would likely suffer from having to reset itself.

See here:

Because they can’t know what is stored on their node, and so can’t choose what data to GET.
See here:

Right. It should be simple to keep a least-recently-used list of the data in the store and drop the oldest when the store is full, in the hope of storing new data that will be requested more often. What incentive is there not to do this?
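
For concreteness, here's a minimal Rust sketch (made-up names, nothing to do with the actual vault code) of what such a "rational" eviction strategy would look like; the replies further down explain why it backfires:

```rust
use std::collections::{HashMap, VecDeque};

/// A hypothetical "rogue" vault store that evicts the least recently
/// requested chunk when it runs out of space. Purely illustrative.
struct RogueStore {
    capacity: usize,
    chunks: HashMap<String, Vec<u8>>, // chunk name -> chunk data
    order: VecDeque<String>,          // front = least recently requested
}

impl RogueStore {
    fn new(capacity: usize) -> Self {
        RogueStore { capacity, chunks: HashMap::new(), order: VecDeque::new() }
    }

    /// Record a GET for `name`: move it to the back (most recently used).
    fn touch(&mut self, name: &str) {
        if let Some(pos) = self.order.iter().position(|n| n.as_str() == name) {
            let n = self.order.remove(pos).unwrap();
            self.order.push_back(n);
        }
    }

    /// Store a new chunk, silently dropping the least requested one if full.
    fn put(&mut self, name: String, data: Vec<u8>) {
        if self.chunks.len() >= self.capacity {
            if let Some(oldest) = self.order.pop_front() {
                self.chunks.remove(&oldest); // the "cheating" step
            }
        }
        self.order.push_back(name.clone());
        self.chunks.insert(name, data);
    }
}

fn main() {
    let mut store = RogueStore::new(2);
    store.put("chunk-a".into(), vec![1]);
    store.put("chunk-b".into(), vec![2]);
    store.touch("chunk-b");               // chunk-b was requested, chunk-a wasn't
    store.put("chunk-c".into(), vec![3]); // evicts chunk-a, the least requested
    assert!(!store.chunks.contains_key("chunk-a"));
    println!("quietly dropped the unpopular chunk");
}
```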

They’re rated based on serving specific pieces of data, or just serving any data in general? We’re talking about nodes dropping some data, not all.

From:

It is also worth noting that a node can be dropped from the network if it acts outside the purview of the network parameters.

EDIT: Bear in mind that the info linked above is from an old thread. It may not be completely current, and I’m not the one to ask specifically about that. What I do know is that these questions have been discussed here at length, and there is a solution (the solution linked above may still be it).

Here’s the wiki. I’m not sure whether all of the info here is completely current, but I think this pertains to what you are asking about:
https://safenetwork.wiki/en/Vaults_(How_it_works)
"All communications on the SAFE Network are carried out through close groups of 32 nodes. This prevents a rogue node(s) from behaving maliciously. It is not possible for a User to choose their own node ID, or to decide where their data is stored. This is calculated by the network. Every time a node disconnects from the network and reconnects, it is assigned a totally new and random ID."
As far as I know, if a vault keeps deleting data that is unpopular, then it would be violating the SAFE Network's rules. The close group of 32 nodes would then remove the node, or at least lower its rank, either wiping out its income or reducing it.

Could anybody who is certain on these points chime in?

Asked by whom? Other nodes? How do they know the node doesn’t have the data if they haven’t requested it? And if they just request a hash or something similar to make the determination, we had the data at one point, so we could just precompute that hash and keep it around for when it’s requested.

Alright, so other nodes now think that storage space is used, but it’s not. What’s to stop me from either setting up a new node using that space, or better yet, alter the node to say I have more space than I actually do, so it doesn’t matter if they reduce the space they think I have?

What stops someone from continuously reconnecting to the network until they get the IDs they want?

What if the income is already spent?

Again, from https://safenetwork.wiki/en/Vaults_(How_it_works), and again with the proviso that this may not be completely current:

"Once consensus is reached, the DataManager passes the chunks to thirty-two DataHolderManagers, who in turn pass the chunks for storage with DataHolders. If a DataHolderManager reports that a DataHolder has gone offline, the DataManager decides, based on rankings assigned to Vaults, into which other Vault to put the chunk of data.

This way the chunks of data from the original file are constantly being monitored and supported to ensure the original data can be accessed and decrypted by the original User."

This raises the question that you are getting at: Sure, if a vault goes offline, then the data is reallocated. But what if only some chunks disappear from that vault? Do the DataHolderManagers recognise this?

I’m fairly certain that they would recognise it, but I’m not 100% sure they do, or how. This is a question for the devs, or for forum members who know more than I do.

The ID by itself gains you nothing. Over time, as you serve data and behave according to the network rules, your rank increases, and with it your yield in Safecoin from GET requests. Switching off resets this (as you now have a new ID), so your income level will be starting from scratch.

No, your previous income isn’t wiped out. Farming nodes earn income according to an algorithm that takes into account how highly they are ranked. Ranking develops over time, so you are starting from scratch every time you get removed from the network or you drop your vault and start over. Your income then falls to the lowest level (far below the network average).

2 Likes

You only are rewarded for serving data, and you can only serve data that you have.

If you pretend to have data you don’t have, then when you fail to serve it you will get demoted (so you get less data), and you also won’t be rewarded for serving it.

Which ID do you want? How would you know?

If you disconnect you lose data and have to start fresh… That is a pretty big disincentive…

Income isn’t tit for tat, remember. You are paid via a lottery system for being a useful node on the network; you are not paid per byte served or anything of that nature. The more you serve, the better your ranking and the more lottery tickets you get, but there isn’t a one-to-one relationship between serving files and the amount you are paid.
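
To make the "lottery tickets" idea concrete, here's a minimal Rust sketch assuming a made-up reward model where a node's chance of winning each reward event is proportional to its rank. This is just an illustration of the idea, not the actual SAFE farming algorithm:

```rust
// Rank-weighted lottery sketch: more rank = more "tickets", but there is
// no per-byte payment and no guarantee of winning any given draw.

struct Node {
    name: &'static str,
    rank: u32, // grows as the node serves data and behaves well
}

/// Draw a winner: a node's chance is rank / total_rank.
fn draw_winner(nodes: &[Node], random: u64) -> &Node {
    let total: u64 = nodes.iter().map(|n| n.rank as u64).sum();
    let mut pick = random % total; // pretend `random` is a good random number
    for n in nodes {
        if pick < n.rank as u64 {
            return n;
        }
        pick -= n.rank as u64;
    }
    unreachable!()
}

fn main() {
    let nodes = [
        Node { name: "long_lived_vault", rank: 90 },
        Node { name: "fresh_restarted_vault", rank: 1 }, // reset its ID, rank starts over
    ];
    // The fresh vault still *can* win, but with roughly 1% of the chance.
    println!("winner: {}", draw_winner(&nodes, 42).name);
}
```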

2 Likes

Data in SAFE vaults will be transient in nature, constantly moving from vault to vault in response to demand. Gaming the system by resetting would be counterproductive in this context. Keep in mind that there are both private and public forms of data. Public data could possibly be monitored on your vault, unless it is layered with keys that only data managers hold; in that case, you would know nothing. Deterministically chosen data managers would, for a short period, be the only key holders. These keys then change when new data managers are chosen. This is just an example of how this might be solved. The lead devs on this team have poured countless hours into coming up with even better methods. @dirvine can attest to this.

Holding data permanently has many great benefits, most notably machine learning. The theory, from my understanding, is that storage capacity is outpacing storage needs. In addition to using Safecoin to incentivize committing resources to the network, the relatively small entry barrier increases the chances that many non-specialized machines with unused space will add even greater capacity. Safecoin prices will also encourage the community at large to add capacity to the network in order to lower the price. This fuels perpetual growth. I predict, based on the activity in this forum and the enthusiasm I have witnessed, that we will reach 100TB within the first 6 months after SAFE goes live. Growth will continue exponentially as the tech gains more traction.

1 Like

This thread is getting sexy! I’m learning things I didn’t even know after a few months of being here. This is turning out to be everything I hoped for! @freedomuser, your questions are freaking great!!! Very thought-provoking :smile: I beg you not to stop until they are ALL answered. Bonus points if you can find a serious flaw. We should create a bounty for this, @happybeing.

4 Likes

I recently attended an in-person AMA with Eric S. Raymond, and he commented on Rust saying that he was excited to see the development thereof and thought that it had promise to be the “C replacement that we have been waiting for”. Very exciting times.

He also said that “Decentralization is long overdue”. I’m still kicking myself that I didn’t mention MaidSafe to him when I had the chance.

1 Like

Whatever you do, @Tonda, don’t let this list fall by the wayside!

In a roundabout way it basically is the client, in that it handles all the translation between what a given APP wants to do and the network.

It also handles all authentication that an APP might request, removing any authorization from the hands of individual (rogue) developers and placing it squarely in front of the user.

BELOW: A couple of topics that haven’t been touched on that I think are important to the network. However, I’m not sure what relevance they may have to the comparison with Freenet.

Browsing the network

I’d also try to squeeze it in there somewhere that browsing the network doesn’t even require authentication. It is completely free and accessible to anyone.

Public vs. Private data

Putting aside the “economy wars” for a moment, I would definitely mention the differences between public and private data.

Sharing data

Think private shares between entities. It makes project collaboration, even proprietary collaboration, much, much easier.

Also, instantaneous directory sharing. Bye-bye bittorrent.

Lastly

You see that triangle up in the top-left corner of your screen? @dirvine has made three rock-solid points that I’ve come to count on him repeating time and time again:

1. True data security (physical and otherwise)
2. Autonomous network (not AI)
3. Self authentication (but you already have that)

Good luck, and I will certainly contribute more the further this goes on. (Have you thought about setting up a GitHub repo where we can fiddle with it?)

1 Like

By a zero-knowledge proof. The manager randomly sends a nonce, and the node returns the hash of the nonce plus the data. If you don’t have the data you fail the test, and the node is expelled from the network.
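
A minimal Rust sketch of that challenge-response, assuming the checking node can compute (or cross-check with other holders) the expected answer. The names and the non-cryptographic std hasher are just for illustration; a real network would use a cryptographic hash such as SHA-256:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// What the storage node must answer: hash(nonce || chunk).
/// It can only produce this if it still holds the full chunk;
/// caching just hash(chunk) is not enough, because the nonce is fresh.
fn prove_possession(nonce: u64, chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    nonce.hash(&mut h);
    chunk.hash(&mut h);
    h.finish()
}

/// The checker computes the same value (or compares answers from the
/// other replica holders) and penalises nodes that answer wrongly.
fn verify(nonce: u64, expected_chunk: &[u8], answer: u64) -> bool {
    prove_possession(nonce, expected_chunk) == answer
}

fn main() {
    let chunk = b"...one chunk of self-encrypted data...";
    let nonce = 0xDEAD_BEEF_u64; // fresh random value per challenge

    let honest = prove_possession(nonce, chunk);
    assert!(verify(nonce, chunk, honest));

    // A node that deleted the chunk and only kept hash(chunk) can't answer.
    let cheater_guess = 0;
    assert!(!verify(nonce, chunk, cheater_guess));
    println!("honest node passes, cheater fails");
}
```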

The entire network operates analogously to social insects, especially ants. Each node has a little work to do, controls other nodes, and is controlled by other nodes.
Each node adopts different roles according to the information it receives and acts according to that role.
Any non-conforming behaviour is punished.

3 Likes

Welcome to the forum, hope you get some answers. There is a ton of papers/data/sites/podcasts and more, so signposting is going to be what many folks use to answer these questions.

SAFE is a big proposition, and just as people cannot teach Latin in a sentence, this needs a lot of digging. You’re in good company here though, so dig in and explore and enjoy it.

Not really, as system-level languages go. Rust is very much like C/C++ for the 21st century in many ways. It is very like C++ with concepts (traits), and it removes the ability to create memory issues relating to ownership. There is no undefined behaviour, which is a massive pull for any engineer to pay attention to.

Rust was a massive change for us and not something done lightly, but it has proven to be a massive win. If you can read C/C++ you can read Rust, and interestingly, if you can read Ruby/Python you can read Rust once you understand stack and heap allocation.
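
A tiny example of the ownership point, nothing MaidSafe-specific: the compiler rejects at compile time the kind of use-after-free that C/C++ would only catch (if at all) at run time:

```rust
fn consume(chunk: Vec<u8>) -> usize {
    // `consume` takes ownership of `chunk`; it is freed when this returns.
    chunk.len()
}

fn main() {
    let chunk = vec![0u8; 1024]; // heap allocation, owned by `chunk`
    let len = consume(chunk);    // ownership moves into `consume`
    println!("stored {} bytes", len);

    // println!("{}", chunk.len()); // <- would not compile: value was moved,
    //                              //    so no dangling access is possible.
}
```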

I firmly believe an engineer cannot ignore such things; it would be like a cart mechanic ignoring cars coming along, really. It is that big of a change.

3 Likes

If you are one of the closest nodes to a piece of data, you’re the one storing one of its chunks. Every few seconds, other nodes will ask you to do a little hash puzzle in which you need to prove that you have the chunk. They’ll just send you a few bits that have to be added to the chunk, and you provide them with the hash of this combination. You can’t just delete unpopular files in the hope of making more money, because if you delete a chunk you will fail the puzzle and get deranked. The network will constantly check that you are indeed storing the chunks that you are close to.

When you go offline, they’ll notice as well, and now there are only 3 copies of that chunk left, so they pick a new vault that’s closest to the data and it will store the chunk. Vaults get rewarded in Safecoin when they actually get a request to deliver a chunk and they serve it. So people pay one time to PUT data on the network; after that, the data will move over time across any number of vaults. No one needs to pay again for that, and it gives all of those vaults a chance to farm Safecoin when that chunk is requested.
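
Here's a rough Rust sketch of that "keep the copy count topped up" behaviour, with invented names and a target of 4 replicas as described above; it isn't the actual DataManager code:

```rust
use std::collections::HashSet;

const TARGET_COPIES: usize = 4;

struct ChunkRecord {
    name: String,
    holders: HashSet<String>, // vault IDs currently holding a copy
}

/// Called when a holder is detected offline: forget it, then top the
/// replica count back up by picking new vaults for the chunk.
fn handle_holder_offline(rec: &mut ChunkRecord, offline: &str, candidates: &[String]) {
    rec.holders.remove(offline);
    for vault in candidates {
        if rec.holders.len() >= TARGET_COPIES {
            break;
        }
        if !rec.holders.contains(vault) {
            // In the real network the choice is based on closeness and rank;
            // here we just take candidates in order.
            rec.holders.insert(vault.clone());
        }
    }
}

fn main() {
    let mut rec = ChunkRecord {
        name: "chunk-abc".to_string(),
        holders: ["v1", "v2", "v3", "v4"].iter().map(|s| s.to_string()).collect(),
    };
    let candidates = vec!["v5".to_string(), "v6".to_string()];
    handle_holder_offline(&mut rec, "v2", &candidates); // v2 went offline
    assert_eq!(rec.holders.len(), TARGET_COPIES);
    println!("{} copies of {} again", rec.holders.len(), rec.name);
}
```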

Nothing. But you can’t pick your own group. Let’s say we have 100K users in the beginning, all in groups of 32 nodes. That’s 3125 groups. And because those groups are already full with 32 nodes, there’s no chance to target them. So you need to wait for a group with a spot left, which will probably be a group that’s forming with new users at the same time you try to connect. So the chance of you and 30 other friends taking over a group is quite small. You’re all connecting to the network, but you need to wait for a group to “pick you up” and provide you an address. You can’t pick your own. So you get the opportunity to join from a group, and it will only last for a short time. Your friends will get the opportunity from another group.

Just a nitpick, but as I understand it, if there are 100K users there are 100K groups. A group is defined from the perspective of the subject node. If your vault and my vault are “close”, we are in each other’s manager groups, but each of our manager groups will have members that the other hasn’t got.
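
A small Rust sketch of that point: closeness is XOR distance between IDs, so every subject (a node or a data name) gets its own close group. Tiny 8-bit IDs and a group size of 4 are used just to keep the example short (the real network uses 32):

```rust
const GROUP_SIZE: usize = 4; // the real network uses 32

/// The GROUP_SIZE node IDs closest to `target` by XOR distance.
fn close_group(target: u8, mut nodes: Vec<u8>) -> Vec<u8> {
    nodes.sort_by_key(|&id| id ^ target);
    nodes.truncate(GROUP_SIZE);
    nodes
}

fn main() {
    let nodes: Vec<u8> = vec![3, 17, 42, 99, 130, 200, 251, 7];

    // Two different "subjects" each get their own close group.
    let group_a = close_group(5, nodes.clone());
    let group_b = close_group(250, nodes);
    println!("close to   5: {:?}", group_a);
    println!("close to 250: {:?}", group_b);
}
```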

3 Likes

Actually, it means you become part of the manager group of the data chunk. You don’t store it yourself, but you pick two (random) vaults from the network and tell them to store the chunk for you. Your vault maintains a keep-alive connection with those two vaults and forwards any GETs for that chunk to them. If one goes offline, you’ll pick another random vault to store the chunk.

There are actually three manager groups for every data chunk like this: one defined by hash(chunk), another by hash(hash(chunk)), and another by hash(hash(hash(chunk))). When a client sends a GET to the network, that GET is sent to all three manager groups, and thus to 3*2=6 vaults (or at least to 4; I’m ignoring the possibility of sacrificial chunks not being stored here).
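
As a sketch of how those three addresses could be derived (using std's hasher and short u64 "names" purely for illustration, not the real routing code):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn name_of(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// The three network addresses whose close groups manage this chunk.
fn manager_addresses(chunk: &[u8]) -> [u64; 3] {
    let a = name_of(chunk);            // hash(chunk)
    let b = name_of(&a.to_be_bytes()); // hash(hash(chunk))
    let c = name_of(&b.to_be_bytes()); // hash(hash(hash(chunk)))
    [a, b, c]
}

fn main() {
    let chunk = b"some self-encrypted chunk";
    let [a, b, c] = manager_addresses(chunk);
    // A GET for this chunk can be sent towards any (or all) of these
    // addresses, so losing one group doesn't lose the data.
    println!("manager groups near: {:016x}, {:016x}, {:016x}", a, b, c);
}
```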

3 Likes