5,000,000,000 hours of video watched per day
500,000 hours of video uploaded per day
or 10,000x more video hours watched per day than hours uploaded.
GETs are 1,000x the PUTs, because each MB is in 1,000 chunks: one PUT for a 1 MB upload, but 1,000 GETs when that 1 MB is requested.
So we get 10,000 requests × 1,000 GETs per request = 10,000,000 GETs per PUT.
So with YouTube's consumption there should be enough payment from the 10,000,000 GETs for each PUT.
200 W power consumption for a desktop PC.
An average cost per kWh in the world is about 20 US cents, and that's the UK price.
$0.20 / 1000 = $0.0002 per Wh → × 200 W → $0.04 per hour of electricity.
720 MB for an hour of full-HD video means that per day we get 350,000,000 MB, or 360 TB per day of storage.
Let's focus on one day. 360 TB of storage needed; let's say each vault has 100 GB of space, so it's 360,000 GB / 100 GB = 3,600 vaults.
10,000,000 GETs / 3,600 vaults = 2,777 GETs per vault per hour, so at $0.04 per hour × 3,600 vaults = $144 in electricity cost per hour of video uploaded.
Around 720 MB for an hour of full-HD video, so
144 / 720 = $0.20 per MB uploaded just to pay for the electricity, in the YouTube scenario. For a GB it's $200. That's a quick calculation of one day's input and output for the YouTube-only scenario.
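The arithmetic above can be sketched in a few lines of Python. All figures are this post's assumptions, including the 1,000-chunks-per-MB premise that gets questioned further down the thread:

```python
# Back-of-envelope sketch of the YouTube-only scenario.
WATCH_HOURS_PER_DAY = 5_000_000_000
UPLOAD_HOURS_PER_DAY = 500_000
CHUNKS_PER_MB = 1_000           # premise questioned later: chunks are 1 MB each
MB_PER_VIDEO_HOUR = 720         # full-HD estimate
VAULT_SIZE_GB = 100
PC_WATTS = 200
USD_PER_KWH = 0.20

gets_per_put = (WATCH_HOURS_PER_DAY // UPLOAD_HOURS_PER_DAY) * CHUNKS_PER_MB  # 10,000,000
storage_gb_per_day = UPLOAD_HOURS_PER_DAY * MB_PER_VIDEO_HOUR / 1_000         # 360,000 GB
vaults = storage_gb_per_day / VAULT_SIZE_GB                                   # 3,600
usd_per_vault_hour = PC_WATTS / 1_000 * USD_PER_KWH                           # $0.04
usd_per_uploaded_hour = usd_per_vault_hour * vaults                           # $144
usd_per_mb = usd_per_uploaded_hour / MB_PER_VIDEO_HOUR                        # $0.20
```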
The price model assumes the user already has a PC running 24/7, or doesn't care much about electricity cost, or if they do, they have a low-power device that might draw just 20 W with a single HDD. Then it would be just 2 cents per MB uploaded.
A high-speed connection on each vault also reduces this price:
with 1 PUT = 10,000 GETs it would be 0.002 cents
with 100 Mbps per vault it would be 0.00003 cents
Interesting. I have had the thought that a GET maybe should cost 1/10 of a PUT, compared with those watch-hours-per-upload numbers you show.
I know GETs are supposed to be free, but I can't escape the thought that everything should have a cost: "there is no free lunch". Farmers perform work delivering GET requests, and a cost for GET requests might also give a lower PUT price.
Are there more people than me having similar thoughts?
In my head there are also numbers like $0.5/GB per PUT, $0.05/GB per GET; I don't know if that would be reasonable or not, just feelings.
That is true for old PCs and for some of today's gaming rigs and high-performance workstations. A normal PC today is around 60-80 W.
And there are tons of low-power computers available. For example, I have a small server with 32 TB of storage and power consumption under 60 W at medium load. A Raspberry Pi with a connected HDD will be around 10 W, with an SSD even less.
Then the Safe Network would lose one of its main benefits. If you think storing some random file on the Safe Network is too expensive, then store only the files that are rare or expensive to lose. In general it should be more expensive than storing on your own RAID array, but accessible from every place connected to the network, forever.
1.- The size of the chunk doesn't change the ratio between PUTs and GETs. If a video is seen, on average, 10,000 times, that will be the final ratio between PUTs and GETs.
2.- You don't count the cache effect: a small percentage of videos have hundreds of millions of viewers while others have few or none, and for these popular videos the cache substantially modifies the number of GETs a farmer will receive.
3.- 200 W of consumption to manage 100 GB is, these days, ridiculous.
4.- Codecs improve year by year, and 720 MB for a standard 1080p video is too much.
5.- An amateur farmer will use his (paid-for) computer while using it for other things. He uses his (paid-for) internet connection and the spare space of his (paid-for) hard disk. So the real cost of being a farmer is, basically, zero.
6.- A professional farmer will ensure that his equipment is optimised and that the costs of both purchase and use are as low as possible.
I thought 1 MB was split into 1,000 chunks… False memory, something I read that makes you remember things that are not true… There is a documentary about that.
OK, so disregard the 1,000x GETs; the 2,777 would simply become 2 or 3 GETs per vault. So even with the cheapest PC at 10 W it's $0.002 per hour × 3,600 vaults = $7.20 per hour, so 7.2 / 720 is $0.01 per MB? So $10 per GB, and that is only for the electricity cost.
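With the corrected single-1-MB-chunk assumption, the revised figures work out like this (10 W device, same $0.20/kWh; still only the electricity component):

```python
WATTS = 10                 # cheapest low-power device
USD_PER_KWH = 0.20
VAULTS = 3_600
MB_PER_VIDEO_HOUR = 720

usd_per_vault_hour = WATTS / 1_000 * USD_PER_KWH        # $0.002 per vault-hour
usd_per_uploaded_hour = usd_per_vault_hour * VAULTS     # $7.20 per hour uploaded
usd_per_mb = usd_per_uploaded_hour / MB_PER_VIDEO_HOUR  # $0.01 per MB
usd_per_gb = usd_per_mb * 1_000                         # $10 per GB, electricity only
```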
And I count electricity because people don't have their PC on 24/7: most people shut the PC down at night, and most people use their PC for a few hours a day, not even all day.
Later I will try to make a model of how that progresses day by day, out to infinity.
1080p on YouTube is about 2.3 Mbps as MP4 H.264 and 2.5 Mbps as WEBM VP9, but the most popular YouTubers already use 2k, 4k or 8k. The WEBM VP9 codec has bitrates of:
2k ~ 8.6 Mbps
4k ~ 17.3 Mbps
8k ~ 21.2 Mbps
And if they want to keep the highest quality, like H.265, it is way more. So the quality of content is going to follow common bandwidth speeds, and with ongoing fiber-optic adoption, 5G LTE… it will push uploaders to use only high-quality videos with a bigger average bitrate than today, even with next-generation codecs. https://www.tutorialguidacomefare.com/test-video-quality-720p-1080p-1440p-2160p-max-bitrate-which-compresses-youtube/
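Those bitrates translate into storage per hour of video like this (Mbps → MB/hour at 8 bits per byte; the bitrates are the ones quoted above):

```python
def mb_per_hour(mbps):
    """Storage in MB for one hour of video at the given bitrate in Mbps."""
    return mbps / 8 * 3600

for label, mbps in [("1080p VP9", 2.5), ("2k VP9", 8.6),
                    ("4k VP9", 17.3), ("8k VP9", 21.2)]:
    print(f"{label}: {mb_per_hour(mbps):,.0f} MB/hour")
```

Notably, even 1080p VP9 at 2.5 Mbps comes out around 1,125 MB/hour, which is above the 720 MB/hour figure used earlier in the thread.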
As you know, GETs are paid for by PUTs, so it isn’t free. It is just paid for indirectly.
It isn't dissimilar to Google funding free apps, such as YouTube, with adverts. The consumer isn't directly paying for anything and there is no guarantee there will be sufficient advert revenue, but it seems to work out.
With PUTs paying for GETs, we can at least see that there is a connection between uploading something and it being downloaded. It will be interesting to see how it pans out.
That is why your estimate is off base. "The YouTube-only scenario" is an edge case with an extreme read:write ratio. The diversity of other online activities will balance things out. Some applications will write lots of chunks and not read them again for 10 years, or never read them at all, implying an extreme PUT:GET ratio in the other direction.
Intuitively, it's hard to ignore the fact that there will likely be more GETs than PUTs even when considering all online activities. This follows from the simple observation that it is easier to consume content than to produce it. I suspect that the ratio will be close to what ISPs provide for upstream vs. downstream bandwidth, since they have likely already examined the economics of it. In the USA I believe that ratio is usually 10:1 unless you have fiber or a business-class connection. Averaging over all connections might push the GET:PUT ratio closer to 4:1, presuming a Pareto 80:20 rule with a twist. The ubiquitous ability for anyone to passively earn PtP rewards might also affect these ratios.
I love that you’ve done this exercise, but the results are not correct.
Is there a source for these figures?
Just a typo: 350 should be 360. In numbers and calculations typos are sort of critical… in this particular case it doesn't affect the result, but do be careful.
What you’re saying is 3600 new vaults per day would be needed to allow 360TB/day of upload. The way this figure is used later on is incorrect.
This is important to clarify because 3600 is not the network size, but that’s how the figure is used later.
The calculation also implies that existing vaults are full so uploads must be handled by new vaults, which I think was not your intention.
When calculating GETs per vault it must account for total network size.
A few things.
10M GETs is not right, since there's message routing: each GET from client to vault must do log2(total_vaults) hops, and every hop incurs the work of a GET. 10M chunks served to clients would be much more than 10M GET requests overall.
The units don’t match. 10M GETs comes from ‘gets per put’, 3600 vaults is ‘new vaults per day’ and the result of 2777 is ‘gets per vault per hour’. The numbers are incorrect in the first place, but the units are also a real mess.
10M ‘gets per put’ is used to calculate an hourly rate but the 10M GETs are spread out over a long time, not just one hour.
3600 ‘new vaults per day’ is incorrectly used as a substitute for the total network size.
It’s a handy calculation to do but the numbers and units are wrong, and with no source to back up the initial claim this is a really misleading result.
This figure is incorrect, and obviously so based on existing storage pricing. To put it in real-world context, Amazon Simple Storage Service (S3) is 2.3c per GB for PUTs and 1c per GB for GETs (source). So where is the 10,000x factor coming from that results in 200 dollars per GB? Even accounting for electricity costs, multiple redundancy, and the work of routing, there's no way they accumulate to 10,000x more. Maybe I'm wrong on this? I'd be interested to know whether people think storage on SAFE would reflect existing norms or not.
I can’t say I have a good method for doing the calcs otherwise I’d put it here. It gets really difficult and based on too many assumptions. Would be interested to see you take a second stab at this.
These people will not be vaults.
100 GB per vault (your assumption)… then resetting that once per day by turning vaults off and on… no way vault operators will accept that amount of churn and load on their network connection, nor will the network accept that amount of work.
I’m sorry if this post comes across as negative; this really is a great exercise to run through and I appreciate you putting in the effort to do so. Please do keep working at it, I think it’s worth pursuing despite the initial errors.
I think the biggest improvement can be made when calculating the total work in each hour. The dollar per work is already much closer to being correct so is affecting the result less than the total work numbers.
As I said before to you, the chunk size is one MB and stored as a single 1 MB chunk. Not in fragments.
Also needed are the rates for very popular and trending videos. This is an extremely important metric due to caching in the network.
Let's assume, per some old analysis, that 35-50% of YouTube downloads are for trending and popular videos. Thus caching operates for this percentage.
Average hops is log2(total sections). NOTE: sections, because each hop is back through one node of each section in the return path. Even if it is nodes instead of sections, then multiply the amount by 1.5.
Let's use 40% as the popular/trending video traffic. The average hop count for cached GETs is nominally 1/4 of the full average, since caching occurs closer to the receiving node.
For this YouTube example we HAVE to assume SAFE is worldwide with on the order of 100 million nodes: ~5% of the world's internet users running vaults. At approximately 100 nodes per section, that makes 1 million sections.
log2(1,000,000) = 20
60% of gets have 20 hops
40% of gets have 5 hops
This is an average of 14 hops per get request
Each hop is similar to the work needed for a get.
so we get 10,000 requests × 1 GET per request ==> 10,000 GETs per PUT
Now, taking caching and hops into account, this becomes
14 hops × 10,000 GETs ==> 140,000 hop-equivalents per PUT (one hop is equivalent to a GET with no hops)
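The weighted-hops arithmetic works out as follows (a sketch assuming 1,000,000 sections and 40% of traffic served from cache at roughly a quarter of the full hop count):

```python
import math

SECTIONS = 1_000_000
full_hops = round(math.log2(SECTIONS))           # log2(1,000,000) ≈ 20
cached_hops = full_hops / 4                      # 5: caching is nearer the requester
avg_hops = 0.6 * full_hops + 0.4 * cached_hops   # 0.6*20 + 0.4*5 = 14
gets_per_put = 10_000
hop_work_per_put = avg_hops * gets_per_put       # 140,000 hop-equivalents per PUT
```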
That's nice, BUT the user is already using the computer, so the electricity cost, using your figures, is ZERO.
A better measure is the incremental cost of the drive and modem while transmitting. This is on the order of 2 to 5 watts MAX.
==> max of 5/1000 kW × $0.20/kWh × 24 hours ==> $0.024 max per day
==> min of 2/1000 kW × $0.20/kWh × 24 hours ==> $0.0096 min per day
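As a sketch, the incremental-cost range above (2-5 W extra draw, $0.20/kWh, 24 hours/day):

```python
USD_PER_KWH = 0.20

def usd_per_day(extra_watts):
    """Daily cost of the extra power drawn by drive + modem while farming."""
    return extra_watts / 1_000 * USD_PER_KWH * 24

high = usd_per_day(5)   # $0.024 per day
low = usd_per_day(2)    # $0.0096 per day
```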
There are no metrics for the percentage on mobiles, and many mobiles use lower resolution since the phone's data quota is restricted. This more than compensates for the higher resolutions, as most videos are viewed on phones nowadays during people's idle time.
Let's just use your estimate, even though the hours viewed per day seem high.
5 GHr viewed/day ==> 3,600,000,000,000 MB viewed/day ==> 3.6 tera chunks/day (1 MB each) ==> 50.4 tera hops/day
500 KHr uploaded/day ==> 360,000,000 MB uploaded/day ==> 360 M chunks/day ==> 5.04 G hops/day
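The daily totals above in sketch form (1 MB chunks, 720 MB per video hour, 14 average hops):

```python
MB_PER_VIDEO_HOUR = 720
AVG_HOPS = 14

viewed_chunks = 5_000_000_000 * MB_PER_VIDEO_HOUR   # 3.6e12 chunks/day
uploaded_chunks = 500_000 * MB_PER_VIDEO_HOUR       # 3.6e8 chunks/day
viewed_hops = viewed_chunks * AVG_HOPS              # 5.04e13 = 50.4 tera hops/day
uploaded_hops = uploaded_chunks * AVG_HOPS          # 5.04e9 hops/day
```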
For storage there is a significant amount of deduplication, because a significant number of copyright-free duplicate videos are uploaded to YouTube. As this figure is unknown we cannot account for it, except to know the final figures will be noticeably smaller.
Consider that any computer built in the last 12 months has at least a 1 TB hard drive; even laptops are 1 TB or more. So 100 GB as a typical vault is very much an underestimate. We can safely use 500 GB as the average vault for 2019, and more for 2020 and 2021.
The worldwide network this whole scenario is based on has a minimum of 100 million vaults; at an average of 500 GB that gives us 50,000,000,000 GB ==> 50 million TB.
360 TB/day becomes 2,880 TB/day stored, since there are 8 copies of each chunk.
This represents 0.00576% of the total storage needed per day
OR 2.1% of the total storage for 1 whole year of youtube upload.
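Checking those storage percentages (100 M vaults × 500 GB total capacity, 8 copies of each chunk):

```python
total_network_gb = 100_000_000 * 500    # 5e10 GB ==> 50 million TB
daily_stored_gb = 360 * 8 * 1_000       # 2,880 TB/day expressed in GB
daily_fraction = daily_stored_gb / total_network_gb
yearly_fraction = daily_fraction * 365

print(f"{daily_fraction:.5%} per day, {yearly_fraction:.1%} per year")
```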
At that rate there is no significant number of new vaults required per day
That does not provide meaningful information, since new data is not directed only to new vaults.
Uploads are "randomly" spread across all vaults.
Now let's consider the 100 million vaults and the incremental cost to run them.
100,000,000 nodes × $0.024/node/day ==> 2.4 million dollars worldwide per day
If we attribute ALL of this cost to YouTube-style traffic, then we can work out a cost of operation. This is not really valid, since the network is also being used for a lot more than YouTube.
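For illustration only (since, as noted, attributing the whole network's cost to one use case is not really valid), dividing the daily network cost by the daily upload volume gives a rough upper bound:

```python
nodes = 100_000_000
usd_per_node_per_day = 0.024                           # max incremental cost from above
network_usd_per_day = nodes * usd_per_node_per_day     # $2.4 million/day
uploaded_gb_per_day = 360_000                          # 360 TB/day
usd_per_uploaded_gb = network_usd_per_day / uploaded_gb_per_day
print(f"${usd_per_uploaded_gb:.2f} per uploaded GB")   # rough upper bound only
```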