So I’ve heard that if you do farming on the SAFE Network you’ll use a lot of bandwidth. Is this true? Bitcoin now has a problem with how big the blockchain is, i.e. the blockchain is taking up too much storage. I hope the SAFE Network won’t have a problem because it uses too much bandwidth.
For the last couple of tests which had home vaults, the minimum was set to a 6 Mbit/s uplink.
This was a high figure set to ensure the tests were (hopefully) not affected by bandwidth problems.
We have been told that this will reduce at some stage and then also it will be controlled by the network.
But yes, if you have a 512 Kbit/s uplink then you would be a slow node and may even hinder the flow of data through the network.
OK, so what if ISPs limit the amount of bandwidth we get every month, or charge us per GB of bandwidth used? Would that mean that in addition to the electricity and equipment costs of farming there’s also a bandwidth cost?
In Australia there’s something called the NBN (National Broadband Network), which gives users VERY fast internet; however, the bandwidth is limited. So there’ll be a compromise between speed and total allowed bandwidth in a given time. The slower ADSL plans are unlimited in bandwidth (most of them, although a fair-go policy still applies).
So I’m wondering: realistically, if you were farming with, let’s say, a 1 TB solid-state drive, what bandwidth would you expect? Can anyone give me a ballpark figure, from the minimum to the maximum expected bandwidth? And what if you had a 100 TB farm with SSDs? Would it increase linearly by 100 times, or would bandwidth consumption increase exponentially?
That type of connection, unfortunately, would not be great for running many services at all. My own hope is that companies/countries stop doing that, as it’s really holding people back: not just us and our project, but everyone. Working from home, sharing videos, hangouts etc. all cost bandwidth, but hopefully as satellites and the like take over/assist/compete this becomes less of an issue. In any case, bandwidth limits are a killer, especially as the cost of overuse is generally completely silly.
TPG and some others offer unlimited plans on the NBN. I have unlimited 100 Mbit/s with Optus (NOT NBN, so a slow uplink).
I think it would be wise to look around. TPG runs a lot of its own fibre interconnect in Australia and its own undersea overseas links, so in my experience they have been reasonable with their speeds overseas and in AU, barring of course any of the bad connections that plague all ISPs.
But yes, limited bandwidth is the problem for us Aussies.
There are a number of personas that vaults take on, including caching and consensus duties, so the scaling from a 100 MB vault to 1 TB to 100 TB will not be linear. Another factor is that a 100 TB vault will take a very long time before it is near full. Your vault storage only generates/uses bandwidth when it stores a chunk or retrieves a chunk, and in order to be asked to retrieve a chunk it has to have stored that chunk first. So bandwidth usage will increase with the vault/node’s age too.
Alright, I see, thanks! But even with unlimited bandwidth a fair-go policy still applies, and I don’t know how long I could keep using it if I used an absurd amount of bandwidth every month. Which is why I wanted to ask for a rough estimate of bandwidth for, say, a full 100 TB vault versus a full 1 TB vault. Any rough ideas?
You would really need to know:
- How many vaults were running
- How many users were using the system
- How full your vault was (the network decides)
So an estimate is very difficult to make. Over time we will know using historic data, but not so much with guessing, I’m afraid. Sorry we cannot help much; it’s like the question of how much it costs to store X GB: the network will tell us, as it’s calculating the costs at that time.
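Just to make the shape of the question concrete, here is a toy back-of-envelope model. Every number in it (chunk size, fill level, retrieval rate) is a made-up assumption for illustration, since the real values are decided by the network:

```python
# Toy back-of-envelope estimate of a vault's monthly retrieval traffic.
# ALL numbers below are illustrative assumptions, NOT real network values.

CHUNK_SIZE_MB = 1.0       # assumed average chunk size
STORED_GB = 1000          # a "full" 1 TB vault
MONTHLY_GET_RATE = 0.05   # assume 5% of stored chunks are fetched each month

stored_chunks = STORED_GB * 1024 / CHUNK_SIZE_MB
monthly_get_mb = stored_chunks * MONTHLY_GET_RATE * CHUNK_SIZE_MB
monthly_get_gb = monthly_get_mb / 1024

print(f"Stored chunks: {stored_chunks:.0f}")
print(f"Estimated monthly GET traffic: {monthly_get_gb:.1f} GB")
# Note: scaling this to a 100 TB vault is NOT a simple 100x multiply.
# Relay/caching/consensus duties add overhead, and a big vault fills
# slowly, so its retrieval traffic grows with age rather than instantly.
```

The point of the sketch is only that retrieval traffic is proportional to stored-and-requested chunks, which is why nobody can give a firm figure without knowing how full the vault is and how busy the network is.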
I was on TPG ADSL2+ for a time and downloading a lot, TBs per month (often maxed-out downloads), and TPG didn’t bat an eyelid. For NBN, though, maxing out your upload 24/7 is not a good idea under “fair use” policies. But from memory there will be an option to limit upload bandwidth for a node in the node software, @dirvine? In any case, there is software that can limit bandwidth usage per application.
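For what it’s worth, most of that bandwidth-limiting software uses some variant of a token bucket. This is just a generic illustration of the idea, not the node software’s actual mechanism:

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: allow roughly `rate` bytes/sec
    sustained, with bursts of up to `capacity` bytes."""

    def __init__(self, rate, capacity):
        self.rate = rate              # refill rate, bytes per second
        self.capacity = capacity      # maximum burst size in bytes
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Return True if nbytes may be sent now, otherwise False."""
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Limit uploads to ~128 KB/s with bursts of up to 256 KB.
bucket = TokenBucket(rate=128 * 1024, capacity=256 * 1024)
print(bucket.consume(200 * 1024))  # fits in the initial burst
print(bucket.consume(200 * 1024))  # bucket is nearly empty now
```

A sender would simply hold packets back (or sleep) whenever `consume` returns False, which caps sustained upload without touching the rest of the system.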
I’d suggest that running a smaller vault first is the way to go, and seeing how that goes. You might find that even a 1 TB ISP plan is plenty for a vault of a few GB. Then increase your vault size (or add another vault) until it reaches what you think is the limit.
Remember that running a second vault (and more) increases bandwidth simply because each is running a cache and performing normal node operations. So it would be better to limit the number of vaults and increase their size instead.
Apologies for my ignorance, but surely just because you have a full vault, it doesn’t mean that all the owners of each of the fractions of files will be accessing them 24/7?
I mean, for example, if you uploaded all of your digital photos to the network, 99.9% of the time they would just sit there idle, not being accessed, wouldn’t they?
Although, if music and video files are shared across the network via a streaming service… then I guess there would be more activity.
A stat from a while back in document management was something like: 90% of data is looked at within 30 minutes of creation and never again. I don’t recall the exact figures, but we will have them somewhere. Yes, 30 minutes after creation the vast, vast majority of data is rarely, if ever, accessed (barring bots etc.).
Yes, and David gave the why.
The reason I mentioned two or more vaults, though, is that a vault is actually a node and is also doing other work for the network: things like caching, acting as a “hop” node for a chunk going from one place to another, and other node tasks. All these things take bandwidth, most likely more than the actual vault storage retrievals. So having two vaults as opposed to one bigger vault will use more bandwidth, maybe even 80-90% more.
Is the Net Neutrality issue in the US likely to cause problems if the ISPs get their own way?
I’m not in the USA, but my understanding is that it would mean ISPs give priority to routes that pay them more. So if you want your streaming faster, the streaming company pays the ISP to give its servers priority.
Now, for SAFE this would mean that in most cases its traffic would be given “normal” priority, behind the premium services. If SAFE takes off, then most traffic will be this way, and the ISP’s total bandwidth will be shared amongst SAFE users/vaults, with no one using the premium services.
At which point the ISP starts to farm Safecoin and gives priority to the traffic to and from its own farming equipment.
Apologies for my ignorance too, but if it’s never accessed again, how can you make sure all the data is still there and hasn’t disappeared?
The network automatically keeps track of all of the pieces and ensures that multiple copies are alive and well.
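That “keeps track and repairs” idea can be sketched very roughly like this. It is a toy illustration of replica maintenance in general, not SAFE’s actual algorithm, and all the names and the replica count are made up:

```python
# Toy illustration of replica repair: drop holders that went offline,
# then copy the chunk to new nodes until the target count is restored.
# NOT SAFE's actual algorithm; all names/numbers are assumptions.

TARGET_REPLICAS = 4  # assumed target; the real network chooses its own

def repair(replicas, live_nodes):
    """replicas: {chunk_id: set of node ids holding it}.
    Returns a new mapping with each chunk topped back up to
    TARGET_REPLICAS live holders (where enough nodes exist)."""
    repaired = {}
    for chunk, holders in replicas.items():
        alive = holders & live_nodes                 # discard dead holders
        spare = sorted(live_nodes - alive)           # candidates for new copies
        while len(alive) < TARGET_REPLICAS and spare:
            alive.add(spare.pop(0))                  # "copy" chunk to a new node
        repaired[chunk] = alive
    return repaired

nodes = {"n1", "n2", "n3", "n4", "n5"}
state = {"chunkA": {"n1", "n2", "n3", "n6"}}         # n6 has gone offline
state = repair(state, nodes)
print(sorted(state["chunkA"]))
```

The repair copies are themselves a source of background bandwidth for vaults, even for data nobody is actively reading.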
I’m meeting all these people buying Bitcoin, and now there are 875 differently named coins. It’s like philosophy.
I just wanted to learn and then implement. Thanks!
I have a 1 Mbit/s connection; will it be enough?
1 Mbit/s upstream? From what I’ve read, that would be on the low side…
That should be enough to use SAFE, but until we have vaults at home (Alpha 4, or maybe later) we won’t know what the cutoff for vaults is. It was set artificially high for the previous at-home vault test.