Will bandwidth requirements end up causing centralization?

In other words, as the amount of data that needs to be stored in a decentralized way keeps growing (movies, pictures, audio, etc.), the bandwidth required to distribute all that data keeps growing too.

Eventually the only places with enough bandwidth and storage capacity to hold the new internet would be the ISPs, which essentially puts us right back where we are now?

Although the difference between then and now would be that at least the data would be decentralized among the ISPs and encrypted.

However, taking things a bit further, eventually the bandwidth requirements would be more than even the ISPs could cope with, at which point the internet would just get slower and slower until it became unusable?

Isn’t it the other way around? I mean, if you concentrate more data in one place you need more bandwidth to serve it, whereas by spreading it around each node needs less bandwidth, because it is storing and serving up less data.
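
As a back-of-the-envelope sketch of that scaling (every number here is an illustrative assumption, not a measured value):

```python
# Back-of-envelope: average serving load per node as data is spread out.
# Every figure is an illustrative assumption, not a measured SAFE value.

def per_node_gb_per_day(total_demand_gb_per_day, node_count):
    """Average serving traffic per node if GET requests spread evenly."""
    return total_demand_gb_per_day / node_count

total_demand = 1_000_000  # assume the whole network serves ~1 PB of downloads per day

for nodes in (1, 1_000, 100_000, 10_000_000):
    print(f"{nodes:>10,} nodes -> {per_node_gb_per_day(total_demand, nodes):>12,.2f} GB/day each")
```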

It’s the spreading-it-around part that I’m worried about: that part requires tons of bandwidth before any of it even needs to be served up (which then uses even more bandwidth).

Ah, I see, but I don’t think centralising will help much unless you own a very large proportion of the network.

All those centralised vaults will be mostly communicating with vaults outside your data centre. This means the data centre will need enormous bandwidth to the rest of the world, which is both a technical challenge and a cost that will make it harder to turn a profit.
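
To put a rough number on that, assuming the vaults each node talks to are effectively random across the address space (roughly what XOR addressing gives you):

```python
# Toy estimate of how much of a centralised operator's vault traffic has to
# leave the building, assuming peers are effectively random across the
# address space. Illustrative only.

def external_traffic_fraction(owned_fraction):
    """Expected share of peer traffic that crosses the data-centre border."""
    return 1.0 - owned_fraction

for owned in (0.001, 0.01, 0.10, 0.50):
    print(f"own {owned:6.1%} of the network -> ~{external_traffic_fraction(owned):.1%} external traffic")
```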

Yeah, so in that case the bandwidth requirements keep growing until first home users and then ISPs can’t afford the bandwidth. Then you get centralization as well, because only the few people left able to run the system remain?

Why do you assume that the bandwidth for a home user will grow and become too much? I don’t see why this would be the case.

But even if some home users drop out, I’ve explained why it would not result in ever increasing centralisation - the cost of centralising would be increasingly high, forcing farming reward up, which in turn makes it profitable for smaller farmers and home users who have lower bandwidth and other costs. This means that there’s a balance, rather than a consistent move towards centralisation.
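
As a toy illustration of that feedback - and this is emphatically not the real SAFE reward algorithm, just invented numbers showing why a drop in small farmers need not turn into a one-way slide:

```python
# Toy negative-feedback model of the balance described above. NOT the real
# SAFE reward algorithm; the numbers are invented purely for illustration.

farmers = 8_000          # start below the assumed demand level
cost_per_farmer = 1.0    # assumed monthly cost of running a vault (arbitrary units)
demand = 10_000.0        # assumed demand the network has to meet

for month in range(12):
    reward = demand / farmers          # scarcer supply -> higher reward per farmer
    profit = reward - cost_per_farmer
    print(f"month {month:2d}: farmers={farmers:6d}, reward={reward:.3f}, profit={profit:+.3f}")
    farmers = max(1, farmers + int(2_000 * profit))  # join when it pays, leave when it doesn't
```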

I just learned about MaidSafe this morning, got excited about it and started the process of installing it on my PC. After reading some forums and docs I discovered that I would need around a 1 TB/month data cap just to serve a 1 GB store of data. Reading more, I found other users with the same problem where there are data caps, like in the States; the answer for them was tough luck. It got me thinking that the more data is uploaded to MaidSafe, the more needs to be distributed. That distribution means more people need to store and copy it, which means the bandwidth requirements, which are already too high for me (I have a 250 GB cap and a 100 Mbit/s line), will likely become too high for other people as well.
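
To put those figures side by side (the "~1 TB/month to serve a 1 GB store" number is just the forum estimate I found, not something that has been measured):

```python
# Putting the quoted figures side by side. The per-GB serving estimate is a
# forum figure, not a measured value.

SECONDS_PER_MONTH = 30 * 24 * 3600

line_mbit_s = 100                      # my line speed
data_cap_gb = 250                      # my monthly data cap
est_gb_month_per_gb_stored = 1_000     # the ~1 TB/month estimate for a 1 GB store

line_capacity_gb = line_mbit_s / 8 * SECONDS_PER_MONTH / 1_000  # MB -> GB
print(f"100 Mbit/s running flat out:   ~{line_capacity_gb:,.0f} GB/month")
print(f"monthly data cap:               {data_cap_gb} GB/month")
print(f"estimated need per GB stored:   {est_gb_month_per_gb_stored} GB/month")
print(f"store size the cap would allow: ~{data_cap_gb / est_gb_month_per_gb_stored:.2f} GB")
```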

I was looking at buying some MaidSafe crypto, but the way I feel now I can’t see how this project can succeed long term. It will become a victim of its own success: the more people use it, the higher the bandwidth requirements, which means the fewer people use it.

We are building a network appliance that can be installed in individual homes for collective farming, inspired by decentralized mining pools. The machine will include a layer-3 network accelerator to maximize its farming profit. We believe this kind of device will help prevent farming centralization.

You are taking an overly negative interpretation IMO.

For example, suppose you are correct that with the current network design and reward model too many smaller home farmers would be squeezed out (which I don’t accept, by the way, because nobody knows the bandwidth requirements and your figures are essentially made up / speculation). Even without having given it serious thought, I can think of things that could be done to tackle this - rewarding nodes for different tasks suited to their abilities, for example (in fact, I think it was David Irvine who suggested that, so not really my idea).

Well, do you suppose MaidSafe will hold their hands up - after eleven years - and say, damn, we can’t make this work? The reality is that they’ve solved this kind of problem multiple times along the way, and invented new techniques in order to do it.

Self Encryption is one example, which makes it possible to do several things that were previously impossible, such as logging into a network without servers (your credentials never leave your machine), keeping data unintelligible while still being able to deduplicate it, and so on.
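
As a very rough illustration of the idea - and this is not MaidSafe’s actual self_encryption code; the chunk size, key derivation and toy XOR "cipher" are just stand-ins:

```python
# Much-simplified sketch of self-encryption: split data into chunks and derive
# each chunk's key from the hashes of its neighbouring chunks, so identical
# files produce identical encrypted chunks (deduplicable) while any single
# chunk is unintelligible on its own. NOT MaidSafe's actual algorithm.
import hashlib

CHUNK_SIZE = 1024  # assumed fixed chunk size, just for the sketch

def keystream(key, length):
    """Toy keystream by repeated hashing (stand-in for a real cipher)."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def self_encrypt(data):
    plain = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    hashes = [hashlib.sha256(c).digest() for c in plain]
    encrypted = []
    for i, chunk in enumerate(plain):
        # key for chunk i comes from the two preceding chunks' hashes
        # (wrapping around at the start), so no chunk is self-describing
        key = hashlib.sha256(hashes[i - 1] + hashes[i - 2]).digest()
        encrypted.append(bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk)))))
    return hashes, encrypted  # the hashes act as the "data map" you keep

# Identical content -> identical encrypted chunks, hence deduplication:
_, chunks_a = self_encrypt(b"the same film uploaded twice " * 200)
_, chunks_b = self_encrypt(b"the same film uploaded twice " * 200)
assert chunks_a == chunks_b
```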

So I don’t think you’ve made the case that this is an issue, but even if it is that certainly does not mean the network will fail.

I found it very difficult to find any good information on how much bandwidth is required per month; all I got was the estimate I mentioned before. However, the current minimum upload requirement is 6 Mbit/s, which by my estimate works out to around 32 GB per month. I’m guessing the download volume will be a lot more than that, so a few hundred GB per month sounds about right.
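
To show how much that figure depends on how busy the link actually is, here is the raw conversion at a few utilisation levels (the utilisation figures are pure guesses - nobody knows what a real vault’s duty cycle will be, which is exactly the problem):

```python
# How a 6 Mbit/s upload link turns into monthly volume at different
# utilisation levels. The utilisation values are arbitrary assumptions.

SECONDS_PER_MONTH = 30 * 24 * 3600
link_mbit_s = 6

for utilisation in (0.01, 0.05, 0.25, 1.00):
    gb_per_month = link_mbit_s / 8 * utilisation * SECONDS_PER_MONTH / 1_000
    print(f"link busy {utilisation:4.0%} of the time -> ~{gb_per_month:,.0f} GB/month")
```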

As you say, it’s speculation, but that’s part of the problem. It would be nice if there were some real stats around to work out:

  1. The minimum data cap required on average per month for, say, 1 GB of storage
  2. How much this has grown as usage increases, i.e. some historical trend data

Without this sort of information, it seems like what’s required is a large investment of time and potentially money, without any good stats to work from other than some anecdotal comments found when searching the forums.

The reason such statistics aren’t around is that they are impossible to calculate. Yes, it would be nice - brilliant, even - to know, but we don’t and we can’t.

@mav and @bluebird have made some measurements and visualisations based on the early test networks, but these are still not going to answer what the real SAFEnetwork will be like. For that we will have to wait and see.

I wonder what would stop a denial-of-service attacker on a high-speed connection from simply generating and uploading enormous amounts of random data on the fly, 24x7. That data would then need to be replicated and distributed across the nodes of the network, slowing the whole system down and filling everyone’s hard drives with rubbish.

Anyway, it’s new tech and I’m sure the dev guys have thought of these things already, given they’ve had 11 years to think about them. I was just hoping someone would answer with something more than "it would be brilliant if we knew the answers to these things". I’m sure the people who have been working on this for 11 years know, but I guess we just have to wait and see.

I think you are throwing mud in the hope that some will stick, because it’s not hard to see that having to pay to upload makes your denial-of-service attack very expensive, and frankly it won’t bother the network, because the network doesn’t care whether you upload garbage or the crown jewels. You pay for the resources you use, and it’s up to you what you do with them.
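
To make that cost explicit with some placeholder numbers (SAFE’s PUT price is set dynamically by the network, so the price per GB below is invented, not a real figure):

```python
# Rough cost of the flooding attack described above, under an *assumed* price
# per GB uploaded. The price is a placeholder; the real PUT price is dynamic.

SECONDS_PER_DAY = 24 * 3600

attack_mbit_s = 1_000          # attacker saturates a 1 Gbit/s link around the clock
assumed_price_per_gb = 0.05    # placeholder price in dollars per GB uploaded

gb_per_day = attack_mbit_s / 8 * SECONDS_PER_DAY / 1_000
print(f"junk pushed: ~{gb_per_day:,.0f} GB/day")
print(f"daily bill:  ~${gb_per_day * assumed_price_per_gb:,.0f} at ${assumed_price_per_gb}/GB, paid to the farmers storing it")
```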

As for being sure the devs know the answers to these questions, you make no sense to me. How can they possibly know details like this before the network is finished? They would need to know the final design, how fast the components could carry out their tasks, and how the different elements would perform together, along with many external and unpredictable factors such as how fast people would come and use the network, what they would use it for, and with what kind of equipment etc.

They can make estimates and speculate like the rest of us, but there’s not a lot of point, because there’s no way to know - which is why I said it would be brilliant: it simply isn’t possible.