Launch of a community safe network

I can’t see the image atm, but it can’t be the content of a vault. All data, including public data, is encrypted in a vault, so neither the owner nor anyone else can tell what it is.

2 Likes

That is probably unencrypted data (small enough for a single chunk, so no self-encryption?) in a _public container (probably put there by the SAFE Web Hosting Manager).

Two special cases are _public which is used to store unencrypted data (the container is encrypted even if its content is not), and _publicNames which is used to store references to the Public IDs owned by the account.

Let me look for an old question that I asked: what is the added value of encrypting such data, since you have to find the XOR address first to access it anyway? So e.g. using the encrypted container _documents instead of _public.
Here is (part of) the answer: the vault owners (of that chunk) can see its contents, for what it’s worth.
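
To make the “you have to find the XOR address first” point concrete, here is a rough sketch of how I understand an immutable chunk getting its address (my own toy code, not taken from safe_client_libs, and the exact hash routing uses may differ):

```rust
// Sketch only: an immutable chunk's XOR name derived as a hash of its
// (possibly encrypted) content. The real code lives in routing/safe_client_libs.
use sha3::{Digest, Sha3_256};

/// A 32-byte XOR name, i.e. a point in the network's address space.
fn xor_name(chunk_content: &[u8]) -> [u8; 32] {
    let mut name = [0u8; 32];
    name.copy_from_slice(&Sha3_256::digest(chunk_content));
    name
}

fn main() {
    let public_page = b"<html>hello safe</html>";
    // Without the entry in _public (or some other pointer to it) you have no
    // way to guess this address, even though the chunk itself is unencrypted.
    println!("chunk lives at XOR address {:02x?}", xor_name(public_page));
}
```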

Edit:
my old question
container reference in safe_client_libs src code

Please correct if I’m wrong in the explanation above.

2 Likes

To be clear: small immutable data (<= 3072 bytes) are currently stored unencrypted in the vaults.

This is a problem, but it will be corrected with something similar to what I described as “constant indirection with encryption” in the past. As a side effect it will also prevent a vault operator from gaming the farming reward by issuing GET commands on the chunks he owns.
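
Roughly how I picture it, as a sketch only (simplified names and a placeholder cipher, not the actual design):

```rust
// Sketch of "constant indirection with encryption" (my simplification):
// encrypt each small chunk with a key derived from its plain content and
// store it under an address derived from the *encrypted* bytes. The vault
// then only ever holds ciphertext, at an address that cannot be linked back
// to the public address clients GET.
use sha3::{Digest, Sha3_256};

fn hash(data: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    out.copy_from_slice(&Sha3_256::digest(data));
    out
}

/// Placeholder cipher: XOR with a hash-derived keystream.
/// A real implementation would use a proper authenticated cipher.
fn xor_keystream(key: &[u8; 32], data: &[u8]) -> Vec<u8> {
    data.chunks(32)
        .enumerate()
        .flat_map(|(i, block)| {
            let mut material = key.to_vec();
            material.extend_from_slice(&(i as u64).to_le_bytes());
            let pad = hash(&material);
            block
                .iter()
                .zip(pad.iter())
                .map(|(b, p)| b ^ p)
                .collect::<Vec<u8>>()
        })
        .collect()
}

fn main() {
    let plain = b"small public chunk, currently stored as-is";
    let public_address = hash(plain);        // what clients know and GET
    let key = hash(&hash(plain));            // derivable from the plain content
    let stored = xor_keystream(&key, plain); // what the vault actually holds
    let vault_address = hash(&stored);       // where it is held: the indirection
    // The vault operator only sees `stored` and `vault_address`; without the
    // plain content he can recover neither the key nor `public_address`, so he
    // can neither read the chunk nor GET his own chunks to farm rewards.
    println!(
        "public {:02x?}... -> vault {:02x?}...",
        &public_address[..4],
        &vault_address[..4]
    );
}
```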

4 Likes

It’s not ideal to have even public data stored in plain text in my opinion. You could easily write a program to automatically cat through the contents of your vault and uncover all sorts of things that had been assumed by the authors to be secret. Not sure if that’s an education problem or a technology one though.
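
Something along these lines would do it (the chunk store path is just an example; point it at wherever your vault keeps its chunks):

```rust
// Quick-and-dirty scan of a vault's chunk store for chunks that look like plain text.
use std::fs;

/// Rough heuristic: a chunk "looks like text" if most of its bytes are printable ASCII.
fn looks_like_text(bytes: &[u8]) -> bool {
    if bytes.is_empty() {
        return false;
    }
    let printable = bytes
        .iter()
        .filter(|&&b| b == b'\n' || b == b'\r' || b == b'\t' || b == b' ' || b.is_ascii_graphic())
        .count();
    printable * 100 / bytes.len() > 90
}

fn main() -> std::io::Result<()> {
    let chunk_store = "/home/safe/vault/chunk_store"; // example path only
    for entry in fs::read_dir(chunk_store)? {
        let path = entry?.path();
        if path.is_file() {
            let bytes = fs::read(&path)?;
            if looks_like_text(&bytes) {
                println!("{}: {} bytes of readable text", path.display(), bytes.len());
            }
        }
    }
    Ok(())
}
```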

3 Likes

It has survived and the link still works. The contact list should be updated in the safe_vault.crust.config file.
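
For anyone editing it by hand, the part that matters is the hard_coded_contacts list. The exact entry format and the surrounding fields depend on the crust version, so treat this as an illustration only (addresses and port are placeholders):

```json
{
  "hard_coded_contacts": [
    "x.x.x.x:5483",
    "y.y.y.y:5483"
  ]
}
```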

I think Maidsafe should consider adding this target to their safe_vault releases (in addition to the existing Linux, macOS, and Windows binaries).

2 Likes

Raspberry Pi is by far the most popular of these cheap SBCs.
But are there others among the more popular ones that need a separately compiled executable?

I’d say a build for RPi Raspbian (i.e. armhf) is a good start, and then later maybe a 64-bit version (e.g. for the Pine64). I guess see what’s popular. It’s easy enough for community builds to be made, but I agree at least one ARM build by Maidsafe could be handy.
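
For anyone who wants to try a community build, cross-compiling from an x86 Linux box is roughly this (assumes the arm-linux-gnueabihf GCC toolchain is installed for linking; armv7-unknown-linux-gnueabihf covers the Pi 2/3, the original Pi/Zero would need arm-unknown-linux-gnueabihf, and a 64-bit board like the Pine64 would be aarch64-unknown-linux-gnu):

```
# add the 32-bit ARM hard-float target used by Raspbian on Pi 2/3
rustup target add armv7-unknown-linux-gnueabihf

# point cargo at the cross linker, e.g. in .cargo/config:
#   [target.armv7-unknown-linux-gnueabihf]
#   linker = "arm-linux-gnueabihf-gcc"

# then build from the safe_vault source tree
cargo build --release --target armv7-unknown-linux-gnueabihf
```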

1 Like

There is another person here who is running that version on an Odroid HC2, if I read their posts correctly.

1 Like

Found the post of the other person who got @bart’s Pi version working on an Odroid

2 Likes

No, not found. The SAFE Browser says “No content found at requested address”, and yes, I checked that other sites are there.

3 Likes

Thanks @draw
Unfortunately, even though my connection tests at 30+ Mbits/sec up, it can only achieve 3 Mbits/sec up to the UK and 10.5 Mbits/sec up to LA in the USA. And my resource check only gets to 66% with 110 seconds to go and then uploads no more. Not sure why it stops at 66% when there are still 110 seconds remaining. Any ideas @tfa?

I am the only one using the link, the upload speed is pretty constant at 420-450 KBytes/sec, and it dies at about the 110-seconds-to-go mark.
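
(For scale: 420-450 KBytes/sec × 8 bits/byte is only about 3.4-3.6 Mbits/sec, which lines up with the ~3 Mbits/sec I measured to the UK.)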

Maybe @mav can check his connection if he has a chance, since he is on the NBN too. Could also be my ISP having issues with their overseas links.

2 Likes

I’ve been trying to connect periodically since the change to the bootstrap nodes, I think about a month ago.

I haven’t been able to connect since that change. There’s no clear reason why, at least not from the logs.

1 Like

I’m guessing they are in the UK/Europe, and Australia does not have many good links to that part of the world, which is perhaps the reason I got around 3 Mbits/sec to the UK and around 10 Mbits/sec to LA.

1 Like

Well, trying again today I realised why it stopped at the 110 seconds remaining. It’s to do with the peers that the node communicates with (approx. 1 per 30 seconds). Since there were only 9, it stopped at 110 seconds to go because there were no more peers to resource test against. Today there are 10 peers and it stops at 80 seconds to go.

Thus if it gets to 14 peers I would get to approx 90-96%, unless some of them end up in the USA, in which case I should be able to join the network as a node. Maybe this is the reason, @mav, that you also cannot connect.
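
(The arithmetic behind that guess: going from 9 peers to 10 moved the stop point from 110 seconds to go to 80 seconds to go, i.e. each extra peer adds roughly 30 seconds of testing, which matches the approx. 1 peer per 30 seconds above.)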

EDIT: And today I am not so sure that is quite right either.
@DGeddes, can you find someone to explain how the resource test actually works and how the number of peers affects it?

And @DGeddes, can I ask that the resource test bandwidth requirement be reduced by some 20%, since a connection with a tested uplink speed of over 30 Mbits/sec cannot successfully complete it? It seems to require a datacentre link of over 50 Mbits/sec to succeed. Then once the change is in, we could use that build to succeed.

7 Likes

Hi @neo,
Been away on holiday so getting back up to speed this morning. I’ll check this out and get back to you.
David.

3 Likes

Sorry we’ve taken a while, @neo. This one kinda slipped through the cracks :sweat_smile: . One of us will look into it and try to give you an answer tomorrow :smiley:

1 Like

I don’t agree with this request. The resource proof duration depends on both the CPU power and the network bandwidth. Did you try a vault with a more powerful CPU?

Making the test easier would allow less powerful nodes to connect, and since its inception the community network has contained nodes that are not powerful enough.

The test is not deterministic, and these nodes manage to connect by trying for hours or even days. Then they easily crash when there is some new activity (like a node disappearance or relocation). But we need nodes that are reliable under exceptional conditions, so to filter out the unreliable nodes I would suggest the contrary: raise the difficulty.

Current requirements are the same as in past Maidsafe test networks, which allowed vaults at home. Maybe the problem is related to the low number of contact nodes: at the beginning there were 9 of them and now only 4.

Then there is a practical problem with changing the limit: the test is specified by the existing nodes and not by the joining node. So we would need to start a new network, and I don’t feel like it.

I had to reset my home vault yesterday and today, and had no trouble reconnecting.

What I wonder is whether, in the new routing, the requirements will be maintained or relaxed. It’s evident that, if the current ones are maintained, it will leave out a significant number of possible vaults.

2 Likes

I am not asking for it to be reduced from the 6 Mbits/sec up it was meant to be, but to reduce it a little from the 50 Mbits/sec it is now.

The resource proof was supposed to be 6 Mbits/sec up and it’s over 50 Mbits/sec at the moment. Either there is a problem with it, or they upped it for their own Alpha 2 network. For most people the download is not the issue, since download speed is usually much higher than upload speed, and it’s download speed that matters for relocations (data comes down to the node, while the upload comes from multiple nodes) and for filling up the vault.

That is the reason I asked for it to be reduced back to what it was. David also said that the 6 Mbits/sec up was more than they expected it needed to be and that they would reduce it later on.

Also, the resource proof seems to do sequential uploads, whereas many people can upload much faster via multiple connections, and considering a node is potentially uploading multiple kinds of data (chunks, PARSEC messages, other messages), resource checking using one sequential upload connection is also an incorrect check.
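
(For example, ten parallel streams at the ~3 Mbits/sec I can get on a single long-distance connection would be ~30 Mbits/sec aggregate, yet a single sequential stream never gets past that ~3 Mbits/sec.)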

So I respectfully disagree that it should be kept at a seemingly >= 50 Mbits/sec sequential upload test.

2 Likes