Bitcoin eyes privacy https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-May/017898.html
From the article:
Simply owning more hard drive storage does not necessarily equal more effective mining power on the [filecoin] network.
there appears to be no transparent way at the network level for retail investors to see how much of their purchased storage hard drive is actually effective mining power.
This is an important point they’re making. What it means is the network can say “you are not allowed to decide how much work you are going to do”. This is the network limiting itself rather than letting the users say “we want to put this much value into the network”. Imagine rocking up to filecoin with 100 TB and only being allowed to fill 5 TB. That would be super frustrating.
It would be a shame for SAFE to repeat this mistake. Rather than say there are 8 redundant copies of chunks (and no more), we should be aiming to say there is a minimum of 8 redundant copies but there may be more if farmers see value in more redundancy (like the idea in rewardable cache / deterministic cache).
If users want to put 1 PB of drive space to work (assuming they have the bandwidth etc to sustain it) but the network only needs 100 TB for chunks, what do you think those users will do with the remaining 900 TB? They’ll mine filecoin, or chia, or burstcoin, and to me that is the real waste. It’s far less wasteful to have them use that space to increase redundancy and security and performance on SAFE. And it’d be a FU to the other networks, our excess data is worth more to farmers than your primary data.
I just really don’t want to see farmers turned away from deploying their resources. It doesn’t make sense to me, especially since they’ll find some other way to use them which would be better spent on SAFE.
This may need its own topic!
I’ve not read the article, but is the point of limiting participation to help decentralise and also to maintain efficiency? If so it isn’t clear to me that’s a bad approach, and isn’t SAFE aiming to do the same thing one way or another (eg by limiting rewards for those who can deploy more resources by keeping it cost effective for those with just a normal PC)?
I think we want to reward participation/capacity up to the value it contributes, but if all extra capacity is rewarded regardless of this value, that’s both inefficient and poor distribution of rewards, and may undermine decentralisation.
At some point, simply adding redundancy is waste, so there must be a tailing off or some way of limiting rewards. So to me it’s about choosing the best mechanism we can, and striking the best balance etc.
What happens if there is no limit to the resources you can put on the network? Racket.
Bitcoin currently has more hashrate than ever before, but fewer people are mining it than ever before. Whom are the miners protecting against with all this hashrate? Themselves, because only they have the ASICs to attack the bitcoin network…
Do we want someone like Google to own 10% of the space in the SAFE Network?
Good point. I guess this works if the delay in joining gives time for more participants to ‘join the queue’ which leads to greater diversity.
But if the delay in joining just pushes the attacker into a queue I feel like it just moves the problem of decentralisation from the network to the queue. There’s still the problem of how to pick the next node from the queue in a way that achieves decentralisation.
As for efficiency, I feel this is a tricky one to manage, since the network would need to understand external factors of efficiency which are probably better managed by the node operators (eg by selecting how many extra chunks to cache). For example, is 8 the most efficient amount of redundancy? What if performance is better with 20 chunks, does this justify increasing the fixed amount of redundancy? Does client efficiency matter (ie more redundancy and fewer hops) as well as node efficiency (ie less redundancy and more hops)? Just spitballing here… my main point is where the decision about efficiency should sit, in the network or with the node operators, and can that be a sliding scale?
If surplus redundancy is rewarded less (on some sliding scale eg reward less if the chunk is further away from the node) then it allows the operator to decide when the reward becomes inefficient rather than leaving it only for the network to decide. The main difference I’m aiming for is the operator can decide to store as much or as little as they like. I think that makes sense.
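To make the sliding scale concrete, here's a toy sketch of a distance-decayed reward. All names and the decay constant are made up for illustration; the real algorithm would have to be worked out by the network designers:

```python
def reward_for_chunk(base_reward, xor_distance, decay=0.5):
    """Toy sliding-scale reward: a chunk held further from a node's
    'home' address (bucketed XOR distance) pays progressively less,
    so each operator can decide for themselves at what point storing
    extra surplus copies stops being worth their disk and bandwidth."""
    # Geometric decay: each extra distance bucket halves the reward.
    return base_reward * (decay ** xor_distance)

# A node weighing up a surplus copy three buckets away:
# reward_for_chunk(10, 0) -> 10.0  (primary copy, full reward)
# reward_for_chunk(10, 3) -> 1.25  (surplus copy, much smaller reward)
```

The point of the sketch is just that the network publishes the curve, and the operators, not the network, decide where on the curve storing more stops being economical.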
What if that redundancy also increases performance? How does the network decide how to weigh the cost of extra redundancy vs the benefit of the performance? I feel this is most sensible to leave for farmers and clients to decide…
Moderators, feel free to move to a new topic or into the rewardable cache topic if you feel it necessary.
Not sure why you wouldn’t. Does this cause a problem? 10% seems like an ok amount, not able to cause consensus problems, could cause disruption if they left but not catastrophic loss… I dunno if 10% is ok or not, maybe more is still ok, maybe less is better…?
People want maximum profit. This will lead to over-optimization, which will be fine 99% of the time and bad when it turns out that there is a pandemic and the only country that produces masks is China.
If Google has 10% of the network and Microsoft has 10% and Apple has 10%, what stops them from colluding and racketeering the remaining 70% of the network? Of course not for something big. Just for that little thing they only want this one time. “Why are you evil and want to reduce the network's resources by 30% instead of giving us this one less-evil thing?”
If you want to own 10% of the SAFE Network, you probably also need 10% of the total SAFE Network's bandwidth. If that's even possible…?
To my mind, this is a direct result of the fact that bitcoin block reward is all-or-nothing, and there is no good way to reward those who contribute proportionally small amounts of work. (Mining pools approximate this, but introduce their own frictions via minimum payouts, fees, centralization, etc). If there was such a built-in proportional payout mechanism, then we should see a very long tail of miners, which would act as a hugely decentralized check on the big guys.
The same should be true of most any work/reward network. Hopefully in SAFE, we can make the promise that you will be rewarded in direct proportion to your contributed resources/work.
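As a toy sketch of what such a built-in proportional payout could look like (the function and node names are invented for illustration, not part of any actual SAFE design):

```python
def proportional_payout(total_reward, work_by_node):
    """Toy proportional payout: every contributor gets a share of the
    reward in direct proportion to the work/resources they provided,
    instead of a winner-takes-all block reward."""
    total_work = sum(work_by_node.values())
    return {node: total_reward * work / total_work
            for node, work in work_by_node.items()}

# Even a tiny contributor earns something, keeping the long tail alive:
payouts = proportional_payout(100.0, {"big_farm": 990, "hobbyist": 10})
# payouts -> {"big_farm": 99.0, "hobbyist": 1.0}
```

The hobbyist's payout is small but nonzero, which is exactly the long tail that all-or-nothing block rewards destroy.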
Consider the memory hierarchy of a modern computer. Cache is king when it comes to performance. The farming algorithm should be weighted so that all nodes who return a chunk get a reward, but the fastest vault to return it to the client and the vault closest to the chunk receive the lion's share. A simple inverse distance metric (space-time based) can accommodate most of this.
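A minimal sketch of what that inverse space-time weighting might look like, assuming invented inputs (response latency and bucketed XOR distance) and made-up names throughout:

```python
def farming_weight(latency_ms, xor_distance, eps=1.0):
    """Toy 'space-time' weight: every responder earns something, but
    low latency (fast return to the client) and low XOR distance
    (vault close to the chunk) dominate. The inverse metric keeps
    the weight finite via eps."""
    return 1.0 / (eps + latency_ms * (1 + xor_distance))

def split_reward(total, responders):
    """responders: list of (node_id, latency_ms, xor_distance).
    Splits the reward in proportion to each responder's weight."""
    weights = {n: farming_weight(lat, d) for n, lat, d in responders}
    norm = sum(weights.values())
    return {n: total * w / norm for n, w in weights.items()}
```

Under this sketch a fast, close vault takes most of the reward, yet a slow, distant vault still receives a sliver, so nobody is turned away.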
The real problem with mining pools is that it's cheaper for me to buy Bitcoin than to mine it…
If there is no restriction of the provided resource in the SAFE Network, the same will happen. It will be cheaper for me to buy Safecoin from Google than to be a farmer…
But if there is a limit, people will sell their remaining free resources to another SAFE network. This is not necessarily a bad thing. There are enough free resources on the planet for 10 SAFE networks. (the other storage networks are not our competition because Storj is just a -10% discount token, you can pay with fiat; SIA is just too complicated for the average person)
I personally think that the more SAFE networks there are, the better (for our species, obviously not personally for us because we may lose our investment… ). In the long run, the best SAFE network will survive and dominate the others.
People don’t mine bitcoin because they have to pay $5 in electricity to earn $1 of bitcoin. It’s nothing to do with granularity and never will be. It’s about the cost.
Farming / mining will probably always converge on two kinds of operators: a) whoever has access to the cheapest or most efficient resources and b) whoever has enough initial capital to sustain the risks of present costs vs future gains.
I admit this seems kinda shitty, but fixing the granularity doesn’t solve the cost / capital problems. SAFE addresses these problems by using a common resource that everyone has fairly similar amounts of (spare disk space, spare internet bandwidth).
But this use of ‘spare’ resources also exposes a risk. Bitcoin mining is relatively centralised because access to cheap energy is relatively centralised. Now think of bandwidth supply - it’s even worse than energy supply for centralisation! So I feel the risk of centralisation is still there… the way we think of it now with consumers having ‘spare bandwidth’ and ‘spare disk’ is probably going to seem quaint after a few years of the network going live, a bit like bitcoin running on ‘spare gpus’. Run through a few iterations of what ‘spare’ really means in the network, how the slack gets taken up by economic competition, how new types of spare resources will come into play, how that slack gets taken up by economic competition… when something becomes valuable it can no longer be called ‘spare’.
I think the vision for efficiency compared to blockchains can be achieved, but my thought experiments have unfortunately never arrived at an ideal fully distributed democratic system. However we can achieve degrees-of-better, which I am sure we will get with this network. This includes better granularity of reward.
Who are these “people” you speak of? Sounds theoretical, over-generalized, ivory-towerish.
I will grant that some/many people will act as you say, but…
I am speaking from personal experience. I have solar power here, with plenty of excess each day during spring, summer and fall. I also have various computing devices sitting unused, up to and including an older GPU mining rig, all capable of generating hashes. Both the hardware and the electricity are sunk costs for me, so any return would be profit. I would happily mine some bitcoin if it would pay anything, even micro sub-satoshi amounts on a regular basis. But it doesn't, because the granularity of payouts (dust, payout minimums and fees) of mining pools means I would never in my lifetime receive a single payout as difficulty increases. I find that irritating.
I have been around since before ASIC mining. This is not theoretical for me. Now, you might say that I am an exception to the rule because it is uneconomical (at present prices) etc, etc. That's fine… but there can be many exceptions, and together we would represent the [missing] long tail.
btw, a thought just occurred to me: I wonder if there are any btc mining pools that payout via lightning channels yet? Such a pool could send micro btc…
Just sell them for Bitcoin… You will make more money… I was mining with an ASIC, and the sad truth is that you made 50% of the profit in the first week.
Let me see if I can clarify a bit why I think granularity and the long tail is so important.
For me, the power of cryptocurrency is that it is money of and for the people, not the state.
As such, it needs to be distributed fairly amongst the people.
The fairest way to distribute new money, in any system, even fiat, is to pay for valuable work performed.
Imagine we have a government fiat system where new money is spent into the economy in payment to those who provide services for the government.
Now, let’s suppose that after some clever lobbying, minimum thresholds are put on these payouts, such that only large companies could perform enough work/services to receive a payment.
Now, individuals must work for a large company and accept their wages. The company keeps most of the profit. Individuals are disempowered compared to when they were independent service providers. Or perhaps more likely, they choose not to engage in that industry at all.
Winner: big companies. Loser: individuals.
And it gets worse. Because every so often, the government holds an election where the service providers vote, in accordance to their work performed. In the old days, there were millions of individuals that could act as a check on the relatively few big companies. But now, most of those individuals are gone: either hired by the companies or doing other things. So the big companies are easily able to maneuver gov policies even more to their favor.
Let me say this in yet another way:
If we want everyone to be able to participate in the economy, they need to be able to earn by providing resources in the economy. It should not be only the first-comers, who then grow to a scale where they can price everyone else out.
Think of all the people who heard about bitcoin in 2014+ and were excited to try mining, only to learn that it's useless unless you have deep pockets. That's a very different experience from those who started in 2010 or so…
I think it's very important that someone can always participate in the SAFE economy simply by:
- installing the SAFE software
- providing resources, ie farming
- receiving farming reward(s)
- performing network actions, eg PUTs, that require SafeCoin
If a time comes when I have to BUY SafeCoin to upload a file because I simply can't make any via farming even after weeks, then I will decide the economic model is broken.
With cryptocurrencies you can only do one thing: sell them. Who has the paper money to buy them? The rich. So any cryptocurrency that becomes popular will be bought and owned mainly by the rich, not the people.
40% of all bitcoin is controlled by less than 2000 addresses… “The People’s money”…
Safecoin is different because, in addition to selling it, you can use it for the life of your virtual self. Every virtual person is currently a slave of the large corporations. They give their slaves a little space to live in… and reap all the profits from their labor.
Safecoin gives us the opportunity to own virtual property. It sets us free. But only if its adoption grows along with the eating of “the fat” (Safecoin). If we let the big ones come first and eat all the meat from the bone, we will end up like “the People's money” - Bitcoin…
Precisely. These are not spare resources, they are resources, period. And I agree that the network should take as much as people are willing to give. The whole concept of sacrificial chunks for maximum redundancy, or using all the resources not suitable for primary vault status as a cache for improved performance, seems like an obviously good idea given the right farming algorithm.
- Good luck with that.
- What makes you so sure that it is ideal? Consider some objectives (ie min cost, max performance, max participation, min centralization, max security, max data integrity). There are a lot of equally optimal systems on the Pareto curve that could address these. In most complex systems a hierarchy/structure will usually arise, so I wouldn't necessarily frown on something that doesn't equate to a grey-goo amorphous blob type of fully p2p distributed network.
Lol, hope you don’t mind me highlighting this but it is so perfect and succinct.
I agree that if you want to contribute you shouldn't be turned away. It might be a lot of the same data, but if it is providing more speed or security while encouraging full participation, then that should happen. One thing I wonder: what kind of impact could churn events and data relocations have on network stress or performance under this scheme? Negative? Positive? Neutral?
If there is enough redundancy they probably won't matter. IMO the network will need to be stressed 100%, 24/7 anyway; to not do so would be a waste of available resources.