New Members: Start Here!

#21

I hadn’t seen the Primer. At a quick glance, it’s better than what I saw before… but it’s far from ideal.

Having a “complex project” argues against making the code the priority. You can “just code” a simple project. On a complex project, you have to understand how everything fits together, and you have to be sure that everybody working on it shares that understanding.

It’s not all about me understanding it. It’s also about me, and others, having reason to believe that you understand it.

When I read SAFE Network documents, I tend to worry that a lot of decisions were made because, to put it in an extreme and over-provocative way, “We can’t think of an attack that breaks this in 5 minutes, so it must be impregnable, and anyway we have code to write”.

There’s so little argument given for why most of the claimed properties actually hold… but the more complexity you have in the system, the less likely it is that they do in fact hold.

I keep getting feelings that Sybil attacks and exit scams are lurking… but nothing ever mentions them or how they’re truly prevented. I see a lot of talk about anonymity and information hiding, but nothing of the form “the adversary can’t infer X from Y because some clear reason”, or “we assume that players A, B, and C aren’t colluding”.

I glanced over the white paper on the consensus protocol when it came out, and I saw a lot of complexity, an overwhelming amount of mechanistic detail, and not a lot of space spent on specific explanations of what attacks it resisted, what attacks it fell to, or why. Not even a list of in-scope and out-of-scope adversaries or attacks.

Maybe there’s really no way to fix this, but in case it’s useful, here are some specific issues that jumped to mind as I skimmed (much of) the Primer. Maybe there are answers to all of these. Probably there are answers to most of them. But I’m not sure that “the project is too complicated” is a reassuring excuse for not having explanations.

  • Seniority seems like a very bad way to assign trust… and there’s no clear explanation of why trust is necessary or what trust is granted. What can a voting node do that makes it necessary to have it trustworthy? What options were considered for limiting trust? I see that the network uses measurements like “total this or that type of chunk count”, and security relies on them. I assume the voting has something to do with setting those. Which means it’s a serious issue and deserves a better answer than “we trust older nodes because”.

  • Sections seem awfully small. On the other hand, there doesn’t seem to be any explanation of how even the small sections guard against, for example, partition attacks. Or of what happens with partition attacks further up the hierarchy, for that matter. So how was the section size chosen, and what does happen if somebody tries to partition things? And the split-merge system must be incredibly complicated and attackable… where’s the detailed analysis that shows it will survive adversarial behavior?

  • The proxy system really has the feel of something thrown in without a lot of analysis. Even a three-layer proxy system like Tor has some real vulnerabilities. What’s the adversary model here? What are the risks? What other mitigations were considered?

  • We’re told “The group of Vaults to which the user is connected might know a little about what the user is doing on the network…”. What, exactly, do they know, and what can they do with it? Has anybody thought hard, from an adversarial perspective, about what the exposures are here? Have they written down those thoughts, so that some later change doesn’t violate an important assumption? Whatever information is disclosed to vaults, what’s the reason they need to know it?

  • Followed by “but they can only identify the user by their XOR address and not their IP. In this way, complete anonymity is assured.” Boy, does that get my spider sense tingling. I’ve been sometimes watching, sometimes actively working on, Internet anonymity since 2000… and I would never write a phrase like “complete anonymity is assured” without backing it up with a mathematical, or at least quasi-mathematical, proof.

    For example, what underlies the assumption that nobody can discover bindings between IP addresses and “XOR addresses”? Or between “XOR addresses” and other identifying information, perhaps in the actual data the user exchanges with the network? What does “anonymity” even mean here? The properties you’re really looking for are of the form “Alice can’t tie information X about Bob to information Y about Bob” (where X or Y may or may not be the name “Bob”, and the connection between X and Y may be made by inference through some intermediate information). Well, what X’s and Y’s are we talking about?

  • The next part about “Multilayered encryption” is also kind of worrying. “Several extra layers are active when people use direct messaging or create a public profile”. Well, great… but what is each of those layers for? Throwing in more layers doesn’t help unless you understand what value each of them brings and how they interact.

  • In the same vein, “The network is meant to be as ‘zero knowledge’ as possible” is scary. Either you’re zero knowledge under some set of identified assumptions, or you’re not. And if you make a sweeping statement like “Farmers cannot possibly figure out what chunks from which file they are storing”, then you have to provide an argument for why that’s true. “Cannot possibly” is an incredibly extreme claim, and making it without proof sounds like snake oil.

  • There’s no real explanation of how farming works or how it’s protected from any particular attack or class of attacks. And when I go try to read the Safecoin RFC, it assumes I know a ton of details about entities that aren’t even mentioned in the Primer… and on a skim it seems incomplete even if I did know them. And it looks like Safecoin is just an account with a “Client Manager”, which is presumably trusted because it’s an old node or old nodes.

  • Centralized, hardcoded bootstrap nodes are a point of attack, but could of course be fixed easily.

… and on maybe less adversary-oriented issues…

  • The chunking protocol is pretty basic. Why was it chosen? What problem does chunking solve here? What does “most likely” really mean in “most likely on machines distributed around the world”? How do you assure that that’s “likely”? How are the machines chosen? Why chunking and not some other random approach, say fountain codes or whatever? And what does the “self encryption” protect against, anyway?

  • There’s no explanation of how any of this interacts with the “SAFE Network fundamental” that all data are immutable and undeleteable. And that’s especially of interest because I assume that particular rule is actually aspirational or metaphorical. You physically can’t keep everything for ever and there’s no real reason to want to.

11 Likes
#22

Hey @jbash, thanks for the feedback. The tone is a bit provocative but I think your points are perfectly reasonable. As far as documentation goes, this is something we know definitely needs work. A few other team members and I have been working hard over the past month to create and collate more up-to-date and rigorous documentation, though there’s still a lot of work to do there and it does take away from development time somewhat.

I feel like these are all excellent points and I’ll definitely be taking your feedback to heart. I specifically think we could use more diagrams, charts, maps, etc. in our current documentation – so, thanks! :+1:

15 Likes
#23

Hi @jbash :slight_smile: Boy, that’s a lot of interesting questions you’re asking. Let me try and help with some of them. Note that some of the moving parts of the SAFE Network are still being settled, so we don’t have a definite answer to all of these questions.

Part of the questions seem to criticise the primer by mistaking it for something it isn’t. It’s not documentation of the network. It’s an introduction to some of the parts that constitute the network, so you have an entry point for diving deeper into the various whitepapers etc. The primer was kindly contributed by members of the community as a soft introduction to the SAFE Network, so it’s not surprising that it stays at a high level. It’s not supposed to be the only place that contains all the answers to all the questions you may have.

Seniority seems like a very bad way to assign trust…

I think you’re speaking of node ageing. Basically, seniority (as in: this node has provided resources to the Network over time) grants trust, as a Sybil protection mechanism. So only nodes with a certain age are trusted to be part of the consensus groups. In any consensus group, we expect to always have fewer than 1/3 malicious nodes (more could stall consensus). So no individual node is ever fully trusted. Only a supermajority of agreeing nodes can make decisions that affect the network structure. With node ageing, we make it harder to control a supermajority of decision makers. Note that there is a mechanism for relocating nodes which relocates “younger” nodes more often; this avoids an attack where the adversary would just spawn many nodes and try to target a specific consensus group.
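A minimal sketch of the supermajority rule described above. The age threshold and the group data are hypothetical, purely illustrative; the point is just that a decision needs strictly more than 2/3 of the age-eligible voters:

```python
# Toy sketch (not MaidSafe code): a BFT-style quorum check for a section's
# voting group. A decision is accepted only when strictly more than 2/3 of
# the eligible voters (nodes above an assumed age threshold) agree, which
# tolerates strictly fewer than 1/3 malicious voters.

AGE_THRESHOLD = 5  # hypothetical minimum node age to become a voter

def eligible_voters(nodes):
    """Nodes old enough to take part in consensus (hypothetical rule)."""
    return [n for n in nodes if n["age"] >= AGE_THRESHOLD]

def decision_accepted(nodes, votes_for):
    voters = eligible_voters(nodes)
    return 3 * votes_for > 2 * len(voters)  # strict supermajority: > 2/3

nodes = [{"id": i, "age": a} for i, a in enumerate([2, 6, 7, 9, 3, 8, 10])]
# 5 nodes are old enough to vote; 4 of 5 votes is > 2/3, 3 of 5 is not.
print(decision_accepted(nodes, 4))  # True
print(decision_accepted(nodes, 3))  # False
```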

Sections seem awfully small

I agree. The RFCs just throw out some placeholder numbers so we can think through the problems and run testnets, but these numbers will actually be decided based on data such as the probability of the adversary owning a section in different situations. This is one of the areas where more work is needed.
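The kind of analysis mentioned here can be sketched as a back-of-the-envelope calculation. Assuming (idealistically; real relocation is not uniform) that a section is a uniform random sample of the network, the adversary's share of a section follows a hypergeometric distribution. All numbers below are made up:

```python
# Back-of-the-envelope sketch: chance that an attacker controlling M of the
# network's N nodes gets at least k of the s slots in a randomly sampled
# section (hypergeometric distribution; uniform sampling is an idealization).
from math import comb

def p_at_least(N, M, s, k):
    """P(attacker holds >= k of the s section slots)."""
    return sum(comb(M, i) * comb(N - M, s - i)
               for i in range(k, s + 1)) / comb(N, s)

N, M, s = 10_000, 1_000, 8                    # 10% of nodes are malicious
stall  = p_at_least(N, M, s, (s + 2) // 3)    # at least 1/3: can stall consensus
hijack = p_at_least(N, M, s, -(-2 * s // 3))  # at least 2/3: can control decisions
print(f"P(stall a section)  ~ {stall:.4f}")
print(f"P(hijack a section) ~ {hijack:.6f}")
```

Even this crude model shows why section size matters: stalling a small section is orders of magnitude easier than hijacking it, and both get harder as s grows.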

The proxy system really has the feel of something thrown in without a lot of analysis.

What we’re saying here is: a proxy system is needed because it would be undesirable for all the vaults you’re connecting to to know your IP address and metadata such as how often you’re fetching data. It may not be specified in a lot of detail right now, but we are open to improving as we go and taking cues from other open source projects like Tor if needed.

We’re told “The group of Vaults to which the user is connected might know a little about what the user is doing on the network…”. What, exactly, do they know, and what can they do with it?

The close group will see encrypted packets coming from and to any node in the same group. They can see data flowing to and from other nodes, but they don’t necessarily know whether the node requesting the data is requesting it for itself or on behalf of another node. They could make assumptions, though, and try to gather metadata, such as how many packets a given public ID seems to request and when.

Followed by “but they can only identify the user by their XOR address and not their IP. In this way, complete anonymity is assured.” Boy, does that get my spider sense tingling. I’ve been sometimes watching, sometimes actively working on, Internet anonymity since 2000… and I would never write a phrase like “complete anonymity is assured” without backing it up with a mathematical, or at least quasi-mathematical, proof.

You can probably call it pseudonymity. But your XOR address also changes over time as you get relocated, so even if you somehow managed to tie a given node’s PublicID to a certain identity, that would only be of value until the next time their node is relocated in the network.

The next part about “Multilayered encryption” is also kind of worrying. “Several extra layers are active when people use direct messaging or create a public profile”. Well, great… but what is each of those layers for?

This is one where the primer doesn’t try to answer all these questions. To give you an element of an answer: data is encrypted with itself through the process of self_encryption (I’ll let you research it, as covering it here would make this post far too long). Network communications are encrypted at the communication layer with crust.

In the same vein, “The network is meant to be as ‘zero knowledge’ as possible” is scary. Either you’re zero knowledge under some set of identified assumptions, or you’re not. And if you make a sweeping statement like “Farmers cannot possibly figure out what chunks from which file they are storing”, then you have to provide an argument for why that’s true. “Cannot possibly” is an incredibly extreme claim, and making it without proof sounds like snake oil.

Again, the primer isn’t the kind of documentation you expect it to be. It’s meant to be accessible as an entry point for people of all levels of prior knowledge. The point here is that farmers only store chunks that are encrypted. They don’t have access to the key. They “can’t possibly” read that data (until someone breaks encryption).

There’s no real explanation of how farming works or how it’s protected from any particular attack or class of attacks. And when I go try to read the Safecoin RFC, it assumes I know a ton of details about entities that aren’t even mentioned in the Primer… and on a skim it seems incomplete even if I did know them. And it looks like Safecoin is just an account with a “Client Manager”, which is presumably trusted because it’s an old node or old nodes.

This is one of the moving parts. We actually have a few competing RFCs where we discuss different implementations. RFCs are quite detailed and require a lot of prior knowledge because they’re the bits we use to actually design the network. Given the kind of questions you ask, I think the RFCs would be a better place for you to look for info than the primer alone. It will require a deep dive, though, I’m afraid. Also keep in mind that some older RFCs contain approaches that may have been superseded by more recent RFCs. We do what we can to keep everything up-to-date and useful, but it’s not always easy.

Centralized, hardcoded bootstrap nodes are a point of attack, but could of course be fixed easily.

Indeed.

The chunking protocol is pretty basic. Why was it chosen? What problem does chunking solve here?

It allows packets to have a manageable size (in networking terms).

And what does the “self encryption” protect against, anyway?

If, for instance, a user’s packets were only encrypted with that user’s private key, and a malicious actor running a vault somehow stole their private key, they could try to decrypt all of the packets stored on that vault and see if any of them belongs to that user. With self_encryption, you would need to steal the private key plus the data map for the data the user stored on the network, and decrypt everything in the right order, which you can’t do as easily.
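The design choice described above can be illustrated with a toy sketch. This is NOT the real self_encryption crate (which uses a real cipher and more careful key derivation); the XOR keystream and the neighbour-hash key rule below are stand-ins, purely to show why decryption needs the data map (the ordered list of chunk hashes), not just a stolen key:

```python
# Toy illustration of the self-encryption idea: each chunk's key is derived
# from the hashes of its neighbouring chunks, so nothing can be decrypted
# without the "data map" (the ordered list of chunk hashes).
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def xor_stream(data, key):
    # Throwaway XOR keystream, standing in for a real cipher such as AES.
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += h(key + counter.to_bytes(8, "big"))
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def self_encrypt(data, chunk_size=16):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    hashes = [h(c) for c in chunks]                      # the "data map"
    n = len(chunks)
    enc = [xor_stream(c, hashes[(i - 1) % n] + hashes[(i + 1) % n])
           for i, c in enumerate(chunks)]
    return enc, hashes

def self_decrypt(enc, hashes):
    n = len(enc)
    return b"".join(xor_stream(c, hashes[(i - 1) % n] + hashes[(i + 1) % n])
                    for i, c in enumerate(enc))

enc, data_map = self_encrypt(b"some file contents, split into several chunks")
assert self_decrypt(enc, data_map) == b"some file contents, split into several chunks"
```

Note how a vault holding `enc[i]` alone learns nothing: the key material lives in the data map, which only the data owner keeps.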

There’s no explanation of how any of this interacts with the “SAFE Network fundamental” that all data are immutable and undeleteable. And that’s especially of interest because I assume that particular rule is actually aspirational or metaphorical. You physically can’t keep everything for ever and there’s no real reason to want to.

The fundamentals are a description of the aims of the project.

You physically can’t keep everything for ever and there’s no real reason to want to.

You can if Moore’s law for storage continues. Of course, this one is aspirational, though. Closer to launch, we will have documentation explaining exactly in what circumstances we expect data might get lost (easy example: all of the network nodes leave and only one node ever comes back; the data is lost for sure).

I hope this was somewhat helpful. It’s always great to receive feedback. Hopefully as time goes we can address many of your concerns, such as building up a comprehensive and accessible centralised documentation point that answers everything in one place. (It’s what https://hub.safedev.org/ is aspiring to become)

26 Likes
#24

Thanks for the detailed explanations @pierrechevalier83! I will add them to the frequently asked questions on the Bulgarian site :slight_smile:

10 Likes
#25

Thanks for the feedback. A lot of info is on this forum, and I try hard to ensure questions are answered where possible. I agree that words like guaranteed, definite, never, cannot, etc. are not helpful. I also think, though, that

is equally misleading (forever, infinity, everything, etc.).

I will try to give some more info on some of your concerns and see if it helps/makes sense.

This would be a worrying thing if people thought that, so we need to figure out why they get that impression.

There has been a huge amount of argument mind you.

I agree completely. This is always a concern

PARSEC is a consensus algorithm, so it does not solve any attack; it is purely for consensus (agreement). It does claim to do this with less than 1/3 Byzantine nodes in the group of valid voters. What it does is mathematically and cryptographically prove an agreed order of events.

There is an RFC for this (node age) and we will also be releasing much more info that is clearer (I hope), as marketing has its sights set on Sybil explainers. Seniority, in this case, means a node has been doing the right thing for longer than another node. Therefore it is naturally more likely than a newer node to do the next thing correctly. It is also more invested in a working network. Some projects use PoW or PoS as Sybil resistance (the former also as a kind of consensus).


At the moment they are very small. These will be larger in beta, but we are searching for mathematical models (we are currently modelling via simulations) to further prove the assumptions about size versus security. It depends on age and the number of non-Elders versus Elders (the voting nodes), so there are a few moving parts: nothing to worry about, but certainly something to be clear about.

Proxy is misused at times. Currently bootstrap nodes act as proxies as well, but not as true proxies; this has only been for tests. There will be bootstrap nodes (a randomly populated local cache) that only bootstrap a node (allow it to find its group). Then the client will use one ID to connect to the group close to that ID (that ID’s Client Managers). Client-to-client messages are encrypted; client-to-group (network address) messages are encrypted hop to hop (in fact crust also encrypts every single message from a node, but that is a lower layer).

Not sure what you mean in terms of mitigate here? Do you have something in mind? [The proxy is not an exit node type device if that is what you mean].

This is definitely an area we do need to detail much more accurately.

Yes, I agree “complete anonymity” is as useful as “completely secure” or “unhackable”. I am not so sure of quasi math though :slight_smile: But yes we can do much better.

This may be linked (misleading language, maybe), but it is not that anybody can discover the binding; only the first hop (the proxies, or Client Managers to be more precise) sees the IP. As this is a recursive network and not an iterative (Kademlia-like) one, a message is forwarded via the XOR network after the first hop.

In this case, there is an important consideration. The ID you choose to connect with (say X) can be a throwaway ID; it should certainly not be tied in any way to another ID. So if “Bob” wants to send “Alice” some data, he can encrypt that to Alice and send it via the network. In this case neither Bob’s nor Alice’s ID needs to be known to anyone else; in fact, their keys can be purely for that single communications link (Alice to Bob). Again, the details here can be greatly simplified and formalised.

This is more detail, such as Crust connections (IP hops), overlay hops (the XOR space) and endpoint encryption etc.

This is definitely not completed, and the final design will be slightly different from the RFC; however, the RFC encompasses the main thrust of the algorithm, which is a supply/demand mechanism. This allows data to be stored forever and the network to balance resources effectively. That RFC shows the process using sacrificial chunks (which we may not have in Beta, but we will use a similar mechanism to measure resources).
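The supply/demand idea might be sketched like this. The pricing formula is hypothetical (not the Safecoin RFC's actual algorithm); it only shows the feedback loop: as spare capacity shrinks, storing gets more expensive, which rewards farmers more and attracts more vaults:

```python
# Hypothetical supply/demand sketch: the price to store data rises as spare
# capacity shrinks, nudging the network back toward balance.

def store_cost(used, total, base_cost=1.0):
    """Price per chunk grows as the network fills up (illustrative formula)."""
    spare = max(total - used, 1)
    return base_cost * total / spare

print(store_cost(used=100, total=1000))  # plenty of space: cheap
print(store_cost(used=900, total=1000))  # nearly full: much pricier
```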

Agreed; however, even now they should never be used except for the first run of the software, unless a user is given endpoints from a friend etc.

I am not sure self-encryption is so basic. In any case, chunking is there to break huge files into parts. These parts have no link on the network (i.e. you cannot easily say which chunk belongs with which other chunk) unless the data map (the map of links) is made public. Small, obfuscated and encrypted parts are more difficult to identify and can also be protected from rainbow attacks with a small tweak. Also, we do not want files to be decryptable, even in decades to come. There are a few self encryption docs that cover much of this, though.

Hopefully some of this helps. Documentation is always a nightmare, we probably have too much as opposed to too little, but many efforts are in place to address better documentation as we move towards launch. I would hope this forum is also a good place for questions and I hope you feel that it is as well, the longer you are here :+1:

Edit - ninja’d by @marcin and @pierrechevalier83 :slight_smile: cool

21 Likes
#26

Very interesting and valuable debate. Sometimes it takes a provocative line to prompt clarification of issues and ideas that have been floating around but not necessarily recorded in an easily accessible way.

@jbash I am one of the authors of the Primer. The impetus behind it was exactly the situation you describe but at a higher level: information was scattered about here and there and needed pulling together in one place so it would be easier to see how all the pieces fit together. Those of us behind the Primer are not particularly technical so our effort was intended for a more general audience - therefore I’m sure the language isn’t quite right in engineering terms. It’s not intended to be technical documentation so your spider senses are overreacting somewhat.

At some stage, God-of-time willing, we plan to update the Primer - a lot has happened in the last year and it’s getting a bit long in the tooth. When we do, a chapter on defence methods against the various types of attack you mentioned is certainly on the agenda. I may ask you to sense-check it if you’re willing.

21 Likes
#27

I’m impressed with the level of care in these responses. Thanks. That increases confidence, not just mine but probably that of future readers.

Since we seem to be discussing substance, a few responses. You can ignore any questions here if you don’t feel like answering them; the idea here isn’t to burn a lot of people’s time in one-off answers for idle hecklers like me.

Quotes are mixed willy-nilly in what’s below.

“We can’t think of an attack that breaks this in 5 minutes, so it must be impregnable, and anyway we have code to write”.

This would be a worrying thing if people thought that, so we need to figure out why this is the case?

Well, I, at least, start out predisposed to think the worst, because I’ve seen it in so many other projects.

But I think that what brings it to mind here is that I feel like I keep seeing statements that have the feel of “We do (simplified description of X), so $some_sweeping_property_way_beyond_what_X_can_address is absolutely guaranteed”.

PARSEC is a consensus algorithm. So it does not solve any attack, it is purely for consensus (agreement). It does claim to do this with less than 1/3 of Byzantine nodes in the group of valid voters.

That definitely solves an attack, namely an attempt by a group of bad actors consisting of less than 1/3 of the nodes to screw up whatever consensus gets agreed upon. I don’t think Byzantine failures are even all that likely in any really “plausible” system… unless you’re under intentional attack. So it seems to me that consensus is basically all about resisting attack.

… but the 1/3 property immediately begs the question of how you know who are “valid voters”. So we get to the next bit.

this node provided resources to the Network over time grants trust as a Sybil protection mechanism

and

Seniority, in this case, means a node has been doing the right thing for longer than another node. Therefore naturally it is more likely than a newer node that it will do the next thing correctly.

Until the network grows large, is it actually that expensive to provide a large proportion of its total resources? Why couldn’t I set up 100 (or 1000) fake nodes early on, provide most of the bandwidth for the whole network for a while, run for a year if necessary, gain most of the trust, and go from there? I only have to provide the resources used by the real nodes, so large numbers of fake nodes aren’t expensive for me.

… and once I’ve captured the network, how easy is it to get it back from me? I assume that the measurements of resources provided come from attestations from already trusted nodes, so once my avatars control the consensus, it seems like they could control it forever, or at least until some huge software adjustment happened.

Admittedly, you may win if the attacker doesn’t think ahead and invest while it’s cheap. Assuming, of course, that they can’t do something else to screw up the network and end up being seen as “senior” in part or all of it.

it would be undesirable for all vaults you’re connecting to to know your IP address and meta-data such as: how often are you fetching data.

… but your proxy knows that information. Why’s it better for the proxy to know it than for the actual vault to know it?

What other mitigations were considered?

Not sure what you mean in terms of mitigate here? Do you have something in mind?

If the proxy exists to keep a vault from knowing something about a client, then that means it’s there to mitigate the risk of the vault learning that thing. An example of an alternative mitigation might be to change the protocol between the client and the vault such that that thing is intrinsically not disclosed.

I don’t have anything specific in mind, because I truly don’t know what the proxy is trying to protect.

As an example of an alternative for something you might actually want to protect, you could in theory use PIR-like techniques to avoid disclosing which chunks a client (or another vault) was requesting. That’s just an example; I don’t think that’s actually feasible in practice, although I also don’t know that much about the subject.

they don’t necessarily know whether the node requesting the data is requesting it for themselves or for another node who’s asking them. They could make assumptions though and try to gather meta-data about when a given public ID seems to request how many packets and such.

Yeah, and it seems as though methods like that are usually pretty effective for people trying to break anonymity. This is the sort of thing that Tor et al have to deal with all the time.

even if you somehow managed to tie a given node’s PublicID to a certain identity, that would only be of value until the next time their node is relocated in the network.

The usual approach to that is to try to identify some kind of unique observable fingerprint for the node: apparent network location, activity patterns, oddball unique behaviors, even things like packet turnaround delays. Then you can identify it every time it returns to the network, and tie all its “identities” together. This is admittedly a lot harder if there are a lot of nodes and they act very similarly to one another… but attacks on anonymity systems have managed to pull a lot out of surprisingly small leaks, and it’s surprisingly hard not to leak. Heck, you can sometimes extract a device’s crypto key by timing its responses.

You can also run big Sybil attacks to get a more global view of what’s going on in the network, so you can correlate various messages related to what a node is doing. Which means that it’s dangerous to assume that nobody actually does have a global view.

So if “Bob” wants to send “Alice” some data, he can encrypt that to Alice and send it via the network.

Usually the problem isn’t a single message. It’s having the elements of a pattern of activity linked to one another… and perhaps eventually to something that more or less gives away a “real name”.

The point here is that farmers only store chunks that are encrypted. They don’t have access to the key. They “can’t possibly” read that data (until someone breaks encryption).

If I pad and encrypt a file before I inject it, and don’t share the key, then I know it’s confidential from the nodes storing it (and that’s equally true for DropBox).

… but what the primer says is that the farmers can’t determine which data they’re hosting. That’s actually a really important property, because the network is inevitably going to handle a significant amount of “forbidden” content. Copyright infringement, hate speech, terrorist propaganda, child porn, classified leaks, trade secrets, bomb making instructions, malware, whatever.

If a node operator can identify pieces of such content, it’s relatively easy for the legal and social environment to put pressure on the operator to filter it. That not only directly undermines some of the network goals, but it could introduce so many costs and so much complexity that it became impractical to run the network at all.

Suppose files are accessed by hash, and perhaps also searchable by name or metadata. It seems to me that if I as a node operator become aware that the file with hash, name, or metadata X is a “forbidden file”, I’m likely to be able to figure out whether I have any parts of it, just by trying to retrieve it myself. It may be hard for me to be sure, but in some ways that might even be worse. If I’m sure a chunk is part of an evil file, I can just drop it. What do I do if it just looks like a chunk is probably part of an evil file?

You mention “data maps”, which I assume are chunk lists and which you seem to say are secret, so maybe it’s not that simple, but there may still be an analogous concern.
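The check described in the last two paragraphs is trivially cheap if a data map (chunk-hash list) for a known "forbidden" file ever leaks or is published. A sketch with made-up data, purely to illustrate the attack surface:

```python
# If a node operator obtains the chunk-hash list of a known forbidden file,
# a set intersection against locally stored chunk hashes tells them whether
# they host any part of it. All data here is hypothetical.
import hashlib

def sha(b):
    return hashlib.sha256(b).hexdigest()

stored_chunk_hashes = {sha(b"chunk-a"), sha(b"chunk-b"), sha(b"chunk-c")}
forbidden_data_map  = [sha(b"chunk-b"), sha(b"chunk-z")]

hosted = stored_chunk_hashes.intersection(forbidden_data_map)
print(f"hosting {len(hosted)} chunk(s) of the forbidden file")
```

Keeping data maps secret blocks this particular check, but only for files whose maps never circulate; any publicly shared file's map is, by definition, public.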

It allows for packets to have a manageable size (in terms of networking)

and

In any case chunking is to break huge files into parts.

OK, but fountain coding would also break files down. There are probably lots of other approaches that I don’t even know about. You can even play games where you weave two or more files together into a common set of “chunks”. So why this form of chunking in particular?

(i.e. you cannot say which chunk belongs to what other chunk, easily)

There’s a lot of stuff hiding in that word “easily”… :slight_smile:

You can if Moore’s law for storage continues.

It won’t, any more than Moore’s law for compute did.

Documentation is always a nightmare, we probably have too much as opposed to too little,

You’re probably right. It may be more profitable to ask whether it’s the right documentation.

Anyway, thanks very much for your attention to all of this. It helps. And I know this stuff can be pretty thankless.

10 Likes
#28

Thanks for your analysis. You raise problems that have been addressed again and again on this forum. Some have a clear answer, while others involve a permanent discussion and search for possible solutions. I would be delighted if someone knowledgeable would analyse in depth aspects such as observable fingerprints and ways to mitigate these problems.

On the control of the network by an attacker you can read topics like these:




Privacy and questionable content is a recurring theme:





And sustainability concerns too:



12 Likes
#29

Wow. Thanks for the research.

I’m going to shut up now for fear of provoking somebody to do a bunch more work. I won’t be posting anything more unless I come up with something that I think truly hasn’t been thought of.

7 Likes
#30

One of the main reasons I still love this project is how open and receptive the Maidsafe team is!

Ask an honest question and get an honest answer!

6 Likes
#31

Just quickly on this point (as you say, we are on our own deadlines, but we appreciate the input). This is true: with any network, you can get in early and have more chance of a takeover. However, here, as the network grows the sections grow and dilute such early attackers. So the cost of attack will continually increase, as you will need to keep adding nodes. But for sure, the earlier this attack, the better for the attacker.
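The dilution argument above can be illustrated with a toy model (all numbers hypothetical): an attacker joins early with a fixed stake of nodes; if honest membership keeps growing, the attacker's share shrinks unless they keep paying to add nodes.

```python
# Toy dilution model: a fixed attacker stake versus a growing honest network.

def attacker_share(attacker_nodes, honest_nodes):
    return attacker_nodes / (attacker_nodes + honest_nodes)

honest = 1_000
attacker = 500           # attacker joins early with a big stake
for year in range(4):
    print(f"year {year}: attacker share = {attacker_share(attacker, honest):.1%}")
    honest *= 3          # assumed growth rate; the attacker stands still
```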

It really just forwards messages. Vaults on the route will know nothing about it, but the destination group will need to verify the authority of the mutation request (not for Gets, only for mutations).
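
A toy sketch of that split of responsibilities (my own illustration, not MaidSafe code; an HMAC stands in for the client's real signature): relays treat packets as opaque bytes, and only the destination checks authority, and only for mutations.

```python
import hashlib
import hmac
import json

# Toy stand-in: an HMAC plays the role of the client's signature.
CLIENT_KEY = b"client-secret"

def sign(msg: dict) -> bytes:
    payload = json.dumps(msg, sort_keys=True).encode()
    return hmac.new(CLIENT_KEY, payload, hashlib.sha256).digest()

def relay(packet: bytes) -> bytes:
    """Intermediate vaults just forward opaque bytes - they inspect nothing."""
    return packet

def destination_handle(packet: bytes) -> str:
    envelope = json.loads(packet)
    msg, tag = envelope["msg"], bytes.fromhex(envelope["tag"])
    if msg["op"] == "get":
        return "served"                      # reads need no authority check
    if hmac.compare_digest(tag, sign(msg)):  # mutations must prove authority
        return "mutated"
    return "rejected"

def send(msg: dict) -> str:
    packet = json.dumps({"msg": msg, "tag": sign(msg).hex()}).encode()
    return destination_handle(relay(relay(packet)))

print(send({"op": "get", "addr": "abc"}))  # served
print(send({"op": "put", "addr": "abc"}))  # mutated
```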

Anyhow, I just noticed you received a bunch of links below your questions. However, we are aware our docs are lacking in areas, so please feel free to critique, poke around, and try to find flaws. It is what will make us all better. So balance reading and poking; we do appreciate it all. At times I will answer fast, with typos, and possibly seemingly abruptly; that is just being busy, so pull me up if you think I am too abrupt or quick with responses. We are an open bunch of folks, so no worries, and thanks again. Happy reading.

7 Likes
#32

Hi @jbash,

There’s really no need for you, or anyone, to shut up :smile: in this community!

As I mentioned before… we really appreciate your feedback because it’s going to help us get better in areas that need improving (like the documentation and how we disseminate the information contained in it).

Welcome aboard and I look forward to reading your next post.

David.

7 Likes
#33

It depends what the fountain codes are intended for. On the client side, for splitting large files into smaller parts, fountain codes would be fine. On the network side, using fountain codes to reduce storage and bandwidth requirements (compared to pure redundancy) has been discussed at length in Are Erasure Codes better than Replication for the SAFE network. I address redundancy vs erasure codes in the linked post #94, in the section for p19 (see quote below). Not a topic for beginners, but you seem to be good with analysis, so hopefully you can add some more insight to that topic.

The secure messaging algorithm for traversing xor space makes it much less practical to track and repair files via erasure coding.

For this reason I think erasure codes are fundamentally unsuited to being used at the network layer of the SAFE network. However they may still be useful at the client / app layer.
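
For readers new to the replication-vs-erasure-coding trade-off discussed in that thread, here is a generic back-of-the-envelope comparison (illustrative parameters only, nothing SAFE-specific): erasure coding buys similar durability for far less storage overhead, at the cost of the coordination and repair complexity described above.

```python
from math import comb

def p_survives(n: int, k: int, p: float) -> float:
    """P(at least k of n independent fragments remain available),
    with each fragment's host alive independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9  # illustrative per-node availability

# Plain replication: 8 full copies, any 1 suffices -> 8x storage overhead.
print("8x replication:", p_survives(8, 1, p))

# Erasure coding: 8 fragments, any 4 rebuild the file -> 2x storage overhead.
print("RS(4 of 8):   ", p_survives(8, 4, p))
```

The catch, as the quote notes, is that erasure-coded fragments must be tracked and re-encoded on loss, which interacts badly with routing over XOR space; replication only needs "copy the chunk again".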


The RFCs are good to read, and I personally consider them the very top of the tree of truth with regard to documentation. I have pretty much stopped using any other documents for technical information (mind you, I'm currently heavily focused on the backend and almost nothing API/client-side, so YMMV).

Looking forward to hearing your thoughts, and enjoy the reading (I know I did when I first came to this community).

11 Likes
split this topic #34

2 posts were merged into an existing topic: Introduce yourself

#36

Hello world! Just arrived here after hearing about it on Mozilla's IRL Podcast - Decentralize It - Imagine an internet free from corporate goliaths. I downloaded and installed the SAFE browser for Linux and was redirected here to build up to Trust Level 1.

I'm a retired IT manager and tech enthusiast with over 30 years' experience dabbling in a variety of systems and ecosystems. My latest project was to root and customize a couple of old phones with resources from the XDA-Developers group and a new privacy-oriented phone OS start-up called /e/ - your data is YOUR data.

I am totally impressed with what I've seen so far (a half-hour in) of the MaidSafe project and am looking forward to further exploration.

Regards,

David Baril
Almonte CA-ON

19 Likes
#37

Glad to have you aboard, David.
/e/ looks like an interesting project (struggling to see how to pronounce it when speaking, tbh, but that's probably me more than anything else).
The community here are a great bunch and I’m sure you’ll fit in very nicely.

Any questions then you know what to do…

David.

3 Likes
#38

The /e/ folks themselves are having a discussion about the branding - a combination of pronunciation, searchability and trademark issues, if you can believe it. Nonetheless, a privacy-centric mobile OS is a good idea. Here in Canada, the carriers are lobbying for the right to monetize user data - yet another instance of surveillance capitalism. But that discussion is for another forum I suppose.

4 Likes
#39

Hello. Completely new to MaidSafe here. I am CTO of Oracle-D, a community and dApp-building project on the STEEM blockchain. We are one of the biggest projects on STEEM, with a lot of reach and reputation there. We have been exploring how the SAFE network could be utilised to help us solve a big issue, and would welcome a conversation and potential collaboration with SAFE developers.
It's been very interesting walking through the forums while I waited for my basic trust level - a very novel way of ensuring someone is serious.

14 Likes
#40

Hello.
My impressions.

  1. I speak Spanish - do we have a Spanish forum or guide? The world is its people and their differences, language included.
  2. I'm excited to be here; I see a good project, organisation, and order. About the levels... I think the language makes them quite difficult. Anyway, I hope I can become part of this community.

I have 2 questions:

  1. How can I run a SAFE node?
  2. If I run my node for a year or two, building a reliable and secure node, but my computer dies... do I lose my trust level?

Thank you and best regards. Success!

3 Likes
#41

Hola! We have a place for Spanish speakers:

Unfortunately my Spanish doesn’t go beyond ordering a beer :frowning_face:

11 Likes