Update August 5th, 2021

It's not arbitrage, since there is no triangle involved between two suppliers and two receivers. With e.g. two exchanges you have arbitrage.

The issue is the unintended consequences and attack vectors that come with unlimited quotes.

Everything else looks great from here. Quotes are a great idea and a great way to get a set price before uploading a chunk of your file, and I cannot see any issues, since it is basically "get a price first, then upload at that price".

It's the unintended consequences of unlimited-time quotes, of which I mentioned just a few. There are, and will be, more if this is implemented with unlimited quotes. Businesses learnt the hard way, but they learnt; let's not make the same mistakes with Safe, and instead use the wisdom gained from the painful experience of business over the last century.

A quote valid for a period (measured in network events) is the safest and quickest way to circumvent the potential issues.

7 Likes

Could even make this a part of the quote disclaimer.

“Your upload will be interrupted if the current store cost exceeds quote+x% for any chunk in the quote.”
“The reasons for this can be manifold, and you will be able to resume uploads when this abnormal situation abates.”
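That interruption rule is easy to state precisely. A minimal sketch, assuming a hypothetical tolerance `x` and illustrative names (nothing here is the real Safe client API):

```python
# Hypothetical sketch of the "quote+x%" interruption rule described above.
# TOLERANCE and should_interrupt are illustrative names, not real Safe API.

TOLERANCE = 0.10  # the "x%" margin allowed above the quoted price


def should_interrupt(quoted_price: float, current_cost: float,
                     tolerance: float = TOLERANCE) -> bool:
    """Pause the upload when the live store cost for a chunk
    exceeds the quoted price by more than the tolerance."""
    return current_cost > quoted_price * (1 + tolerance)


# Example: quoted at 100, live cost 115 -> interrupted; live cost 105 -> allowed.
```

The client could then surface the disclaimer text above whenever this check trips, and retry once the live cost falls back inside the band.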

Sounds like a reasonable alternative to “time” limiting it

1 Like

Used quite differently though, so not really the same?

One will be hit all the time, the other…

The issue isn’t so much the limiting itself.
It’s not unlikely that something will show up which in practice becomes a limit in some way.

But putting in “fixes” based on vague catastrophe scenarios…

The idea of stabilising the costs is not mine - it’s just a collective conclusion from seeing that the StoreCost queries as they worked were not practical, and would not be practical with batch uploads.
As a first version of that, the quote is perpetually valid as a result of having been section-signed.

There has so far been no reason to add any limits to that.

So, development moves on, and nothing is fixed prematurely. It’s KISS.

7 Likes

If you have storage space that is so large that you can amass random data that will be difficult for the network to store… (which means you’re a whole data network yourself?) and you are willing to pay for it continuously, betting that maybe the price will go up so much that the payment, relative to the data size, will be utterly insufficient…
(And while you amass random data, the network is also growing. Are you growing faster than the network?)

Then you could do so.

Will you?

You need to keep that data, because that’s the only data you will be able to upload with the quote.

Then if you do.
Then what will happen?

4 Likes

You’d use your newly freed up exabytes of storage to farm and try and recoup your costs of clogging up the network for a wee bit.

4 Likes

Yeah, somehow, that seems like a more profitable use of your massive data network.

It’s funny how there always seems to be “someone” out there, ready to throw away all sorts of money, time and resources for dubious gain. As if they had no better ways to make money with those resources…

1 Like

Data can be generated deterministically from a base key, so the attacker doesn’t need to store it on his PC.
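For the sake of argument, here is what that could look like: a sketch (function and parameter names are illustrative) that expands a small base key into any chunk on demand, so nothing but the key ever sits on disk:

```python
# Sketch of the point above: the data can be regenerated on demand
# from a small seed, so it never has to be stored locally.
import hashlib


def chunk_from_seed(base_key: bytes, index: int, size: int = 1024) -> bytes:
    """Deterministically expand (base_key, index) into `size` bytes
    by chaining SHA-256 over an internal counter."""
    out = bytearray()
    counter = 0
    while len(out) < size:
        out += hashlib.sha256(base_key
                              + index.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:size])
```

The same `(base_key, index)` pair always reproduces the same chunk, so a quoted xorname can be matched later without keeping the data around.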

2 Likes

Right.

So, walk me through it then.

How much data do you intend to generate and then self encrypt and then get quotes for? How large pieces at a time?
Will you generate all at once, or fill up now and then as the network grows larger and you realise you need a heavier hammer?

Then, what sort of event will you be waiting for, how large of a price diff? How long will you be waiting for that? I mean when do you decide that it’s not worth waiting for anymore?

Then, what will you do in case of this event, pay everything at once, generate all the data, and then upload? Will you have the infrastructure ready all the time to be able to catch the moment, or are you going to find it on the fly?

(@Antifragile for you as well)

2 Likes

There’s some information missing here.

You run quote harvesting… meaning that over days, weeks, months, years, you ask for quotes and keep the cheapest ones? And you run this on a botnet the whole time.

Then at a given time x, you pay for them all at once?

Then upload all at once.

How do you intend to reuse your quotes? Pay for them again?

You have the receipts for paying, so better store them in that case.

And then you will be uploading again and again, using that receipt.
But it’s the same data. It won’t increase used storage any more. So your gunpowder is spent.

1 Like

I’m not seeing the payoff for this attack. The network is what - temporarily out of service? Destroyed? What’s the payoff?

What do you think this kind of attack would cost - please give some numbers and the workings out?

Sounds to me like the so called Google Attack. Does anyone still worry about that?

2 Likes

Okay, so when do you pay for them then?

What event are you waiting for? A large price difference (how large)?

Every quote will include an xorname per chunk.
31,250 xornames (32 bytes each) occupy 1 MB.
To keep the quote as small as possible relative to the data, you’d use the max chunk size of 1 MB per chunk.

How large of a quote (in size) will the network be able to handle?

You’ll be streaming say a 100MB quote to 7 Elders at a time? Seems like a number that wouldn’t be too small or too large.
That’s 100 x 31250 = 3.125M xornames.
In parallel you’ll do this with every section in the network.
How many sections do we use in the example? Let’s say 10,000?

10k x 7 x 100MB.

But each Elder is the limit: if they receive and process at 10 MB/s (note: megabytes, not megabits), then you have 10 s for each quote of 3.125M xornames per section.

You’re connected to 10k sections, and 7 Elders in each, so every 10s you are processing 10k x 7 x 100MB = 7000 GB worth of quotes.

You need 700 GB/s bandwidth, and you would produce quotes for 10k x 3.125M xornames / 10s = 3.125 bn xornames / s. It would be 1000 quotes per second.

Each quote requires a DBC payment. So you connect to 1000 different sections every second, to reissue 1k DBCs into the recipients of the quote, and then send those in to the same sections, and get 1k receipts back.
It’s possible that you could get the processing of each payment done in a second (perhaps a bit optimistic). And you’ll do this in parallel.
So in 2 seconds you’ll have 1000 receipts. 500 receipts per s.

Each xorname would give you 1MB data upload.
Each receipt is 3.125M xornames, so that’s 500 x 3.125 TB per second, let’s say 1500 TB/s, that you now have clearance to upload.

This has to fit in 10k sections, so you’ll be sending 150 GB per section, and you’ll be sending to 3 Elders this time, so 3 x 150 GB per section. 450 GB/s x 10k = 4500 TB/s

So, continuously, as you get receipts for your payments, this is the amount of data you try to push into the network per second: 4500 TB. (What was the estimated cost per GB? $0.5? 1.5k TB/s, so $750,000 per second worth of data. Okay. Cool.)

Now, each Elder was processing, what, 10 MB/s, so that batch of 150 GB will take an Elder 150,000/10 = 15,000 seconds, around 4 hours. So that’s just $750,000 per 4 hours, or around $4.5 million per day. For the data only.
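The figures above can be sanity-checked with a quick script. All inputs are the post's own assumptions (32-byte xornames, 100 MB quotes, 10,000 sections, 7 Elders, 10 MB/s per Elder, 500 receipts/s, $0.5/GB), not real network parameters; the post rounds the 1562.5 TB/s result down to 1500:

```python
# Back-of-envelope check of the attack figures above.
# Every number here is an assumption taken from the post, not a measurement.
XORNAME_BYTES = 32
MB = 1_000_000  # bytes

names_per_mb = MB // XORNAME_BYTES              # 31,250 xornames per MB of quote
quote_mb = 100                                  # one 100 MB quote
names_per_quote = quote_mb * names_per_mb       # 3,125,000 xornames

sections = 10_000
elders_per_section = 7
elder_rate_mb_s = 10                            # MB/s processed per Elder
secs_per_quote = quote_mb / elder_rate_mb_s     # 10 s per quote per section

total_gb_per_round = sections * elders_per_section * quote_mb / 1000  # 7000 GB
bandwidth_gb_s = total_gb_per_round / secs_per_quote                  # 700 GB/s

data_mb_per_receipt = names_per_quote * 1       # 1 MB of upload per xorname
data_tb_per_receipt = data_mb_per_receipt / MB  # 3.125 TB per receipt
receipts_per_s = 500
upload_tb_s = receipts_per_s * data_tb_per_receipt  # 1562.5 TB/s cleared

cost_per_gb = 0.5
dollars_per_s = upload_tb_s * 1000 * cost_per_gb    # ~$780k/s of cleared data

print(names_per_mb, bandwidth_gb_s, upload_tb_s, round(dollars_per_s))
```

Running it reproduces the post's chain: 31,250 names/MB, 700 GB/s of quote bandwidth, ~1562 TB/s of cleared upload, and roughly three quarters of a million dollars per second of data.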

Well, we could have just started there. How much will an Elder process per s?

And, since we are just spamming here it seems, why do it like that, when you can just query for data and cause Elders to max out on bandwidth/CPU?

Or why not just use the botnet to DDoS all the Elders directly? Should be far more straight forward?
No need to pay for any data?

6 Likes

nope :smiley: …

3 Likes

Don’t make it random. Just make a counter and use each increment as the data; that way you can get quotes every 100 counts. Then you do not need to store the data, since you can generate it (and its encryption) on the fly. Get your mates to start their counts at different values and you all have unique data.

Mind you, it would still cost a damn lot to upload so much data. But it is something a 10K botnet could use to cause a bump, or issues uploading data, if you had enough tokens. At least it will be cheaper, and the network cannot do anything about it other than reject chunks when full.
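A sketch of the counter scheme just described, with illustrative names (hashing each count is one way to turn a bare integer into full-size, regenerable data; the disjoint starting offsets keep everyone's data unique):

```python
# Sketch of the counter scheme above: each count deterministically
# expands into a chunk, and participants use disjoint counter ranges.
import hashlib


def chunk_for_count(count: int, size: int = 1024) -> bytes:
    """Expand a bare counter value into `size` bytes of data,
    regenerable on the fly with no local storage needed."""
    out = bytearray()
    i = 0
    while len(out) < size:
        out += hashlib.sha256(count.to_bytes(16, "big")
                              + i.to_bytes(8, "big")).digest()
        i += 1
    return bytes(out[:size])


def counter_range(participant: int, span: int = 100) -> range:
    """Mates start at different offsets (0, 100, 200, ...) so no two
    participants ever generate the same chunk."""
    return range(participant * span, (participant + 1) * span)
```

Distinct counts give distinct content, hence distinct xornames, so each quoted chunk can be regenerated at upload time from nothing but its count.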

The issue is a slashdot style of issue: people are told of this great way to upload their media collection and never need their USB drive when watching movies at their mate’s place. So we get millions* around the world deciding to do just that. They (over a few months) get quotes for all their media files (one quote per file, thanks to an app written for this purpose), and then for months they upload the movies they ripped off their DVDs/Blu-rays, which means dedup is not so good.

*1 million is 0.3% of USA population or 0.05% of 1st world countries or 0.008% of most countries with easy access to media.

It’s this sort of issue that is possible, and we’ve seen similar when the slashdotting of a website or product causes a huge inrush of interest. Products take 6 to 12 months to become readily available again due to the backlog of orders.

That is the more likely way an issue will occur, with the store cost algorithm having its hands tied when trying to slow uploads until enough farmers come online.

Now your and @tfa’s idea of interrupting uploads, if the store cost algorithm has the store price significantly higher than the quoted cost for particular chunks, solves the traditional problems businesses encountered with unlimited quotes (before they learnt not to do those unlimited ones).

If you have storage space that is so large, that you can amass random data

and this is an example of closed-minded thinking. Think outside the box.

Well explained.

@tfa made a good proposal along these lines. Maybe double the store cost over the quoted price for the chunks. It just causes a delay in uploading, since the store cost should drop again according to the think tank, since the network will forever grow and all is rosy.

1 Like

The hands are not tied because the network has other ways to slow the uploads.

Also, the storecost has multiple purposes; it isn’t just for slowing uploads. It sounds like you feel storecost has failed if it can’t slow uploads, when really storecost may be achieving other things even if it’s not doing a very good job of slowing uploads.

I feel the objection to perpetual quote is because the quote degrades the rate limiting effect of storecost, which is true to a degree. But the rate limiting effect of storecost is maybe not as strong as the other rate limiting options available to the network. So the ‘bad’ effects of the quote on storecost are maybe not as bad as they’re being made out to be. Maybe the effects are very very bad, but I’m not convinced yet.

There’s an interesting user psychology angle here.

If I get a cheap perpetual quote but my upload fails continuously for ‘other reasons’, I’m probably happy to try again later because the price is good.

With no perpetual quote, if I start uploading cheap and then later it becomes expensive, the experience can be pretty surprising and unpleasant.

I wonder if there’s some difference in retention with the different storecost models.

It’s also really common on the current internet to be told ‘this site is too busy come back later when it’s less busy’. It’s very uncommon to see ‘this suddenly got a lot more expensive so come back later when it’s cheaper’. I admit this isn’t a particularly strong argument but the familiarity of experience is perhaps another thing to consider.

One last thing, the historical experience for quotes with unlimited vs limited timeframe, is there literature outlining why limited timeframes became the norm? I’ve never heard of problems with unlimited quote periods (my ignorance, I’m sure problems exist), so would be keen to read about this if anyone has links.

8 Likes

At least someone agrees. I have only said it interferes (i.e. affects it to a degree).

What are some of the other ways to limit rate, other than flat-out rejection? Maybe I am putting weight on store cost to aid in reducing rate because the other methods are not talked about, other than rejecting the upload.

Yes and no. There should be a set of 3 situations:

  1. Perpetual quotes
  2. Quotes with a sensible limit
  3. No quotes

Framing it as either/or is not a good discussion to have.

Rejection, by a network that claims to offer perpetual data, is not going to be acceptable to a portion of the users, and you only have to look at the karens out there who complain to the manager about everything not going their way. Too many blog posts about how the perpetual storage network cannot even store a file is not going to help anything.

Yes reject has to be there as a last option, but it should have sufficient layers above it for mitigating the problem as is reasonable to do.

Yes there are those who (prob most) will accept delays. Now @tfa 's idea works here especially if the user is informed that there is a delay and upload will resume later.

I agree with quotes (no need for perpetual ones). For perpetual: how many uploads are going to take weeks? Just because a quote is great for having a known price does not mean it has to be perpetual; that’s a false equivalence.

But this is not what is being presented. Just “your upload failed, no space available”. Nothing about whether the key for that chunk is now used up, since the upload of that chunk proceeded and failed and the key was marked off.

Yes, businesses change direction. Being legally tied to a quote from 20 years ago (even 6 months) is a bad trap if the business cannot perform the work required and have to contract out to other companies to do it.

Now it’s known that after any bug-fix/upgrade-to-quote-system/other-issue the previous quotes will not work anyhow. So the arguments about how the perpetual system is ever so great for quotes are rather lopsided, aren’t they?

tl;dr
Quotes are great; it is not a black/white argument against perpetual quotes. There are pros and cons to perpetual. But to argue in terms of perpetual quotes or no quotes is not a good argument to have, and discussions fall flat if we do that.

The store cost algorithm was introduced as a control system to help regulate things. Now think of it as a simple control system using an op-amp, and then see perpetual quoting as a grounding wire floating around the inputs to the op-amp, randomly touching the components; that is what it’s like. It may cause no issues in future, it may cause moderate issues, it may cause serious issues, or anywhere in between. It’s an unknown that business history shows has problems; an un-modelled problem presented as a ubeaut perfect solution. No, the quotes are the ubeaut system. The perpetual part is only there because it is somehow seen as a great innovation and a brainchild that cannot be questioned.

1 Like

Thoroughly enjoying following along and participating in the convo but I must ask, can someone please tell me in short why the need for perpetuity?

2 Likes

It doesn’t have as much downside as is being discussed (I think), and because of that it is simpler not to put a time limit on a quote. You could always get a new quote for your unique batch of data if you feel the price may be better much later.

There are benefits to this as a frictionless experience for users on intermittent connectivity, or working between online and offline. And benefits to the network in getting all data and transactions for each chunk batched in advance.

The fact it goes along with “buy now, pay later”, or rather “batch now, pay later”, is quite timely. That is an emerging trend and one people would recognise and quite favour. Quite brilliant in many aspects, IMO.