Gaming farming rewards - cost for GETs

It depends on the node chosen by the elders, and there is no reason to assume it will be constant. Remember the network is dynamic, so there is no reason to expect that a 10-hop path will remain constant.

Anyhow, it's not as important as the fact that finding out which chunks are in your vault is not easy, and finding even one before being relocated could be pure luck.

I think you are playing a different game than I play here.

The simplest case assumes certain conditions. The conditions I use for my reasoning are those of a static network.
Additional complicating factors were accounted for separately (such as the dynamics of nodes leaving and joining, and other clients requesting the data).

2 Likes

This indeed is another factor. I didn’t mention it, but was also assuming that you can issue requests on individual chunks by knowing the chunks (as one would when owning the vault holding them).
If it is not possible at all, even with custom client code, to issue requests for individual chunks on the network, then I would say it is completely infeasible to gain farming rewards this way.

1 Like

Hmmm - there must be some sort of database where the chunks are stored - if I didn't know which chunks I store, I couldn't deliver them :face_with_monocle:

Small files are just one chunk/data map - so I could only request small files below 1 MB, but with those it should work after analysing my vault container…?

Ps: damn I should stop throwing stuff in without reading the rest first… Sorry…

1 Like

Each Vault stores the data whose XOR address is close to its own XOR address. Similar to a farmer whose job it is to take care of the land around his house.
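As an illustrative sketch of the "closest XOR address" rule (addresses shortened to 8 bits for readability; real XOR names are 256-bit, and this is not the actual vault code):

```python
# Hypothetical sketch: choosing which vault stores a chunk by XOR closeness.

def xor_distance(a: int, b: int) -> int:
    """XOR distance between two addresses."""
    return a ^ b

def closest_vault(chunk_addr: int, vault_addrs: list) -> int:
    """Return the vault address closest (in XOR terms) to the chunk address."""
    return min(vault_addrs, key=lambda v: xor_distance(chunk_addr, v))

vaults = [0b0001_0000, 0b1010_0101, 0b1110_0000]
chunk = 0b1010_0000
# 0b1010_0101 differs from the chunk only in low bits, so it is "closest"
print(bin(closest_vault(chunk, vaults)))  # prints 0b10100101
```

Note that XOR distance is not geometric: a vault numerically near a chunk's address can be XOR-far from it if they differ in a high bit.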

This gaming of farming, and the questions about whether a vault can determine how to request its own chunks, brought a new question to me. It is PtP related. Based on our last PtP discussion, I assume every chunk should contain some information about the uploader, so PtP can pay that uploader based on the frequency of GET requests. This means the vault and the section will know who the owner of that chunk is. Without PtP, public data upload can be anonymous. With PtP, there has to be an uploader wallet address. This means all uploaded PtP-related content can be grouped by uploader wallet address. I see two consequences here.
1) Vaults will have content that has to be unencrypted (the wallet address), and that fact makes the content easier to track. For example, someone uploads an illegal video/photo and part of it is stored on my vault. Now those chunks will be tagged with a wallet address. Police can find out that some of the data on my computer is illegal. They have proof thanks to that address, which can be traced to the original uploader. The vault owner will be held responsible for that data. That wallet address is a unique ID for all PtP content uploaded by that uploader. Is this correct? If not, how will the PtP address be managed?

2) A wallet address associated with a chunk makes it easier to find all chunks belonging to a single file. This is true only if PtP applies per chunk; if it is per file, it is not the case. So which is it: is PtP per file or per chunk?

All this wouldn't be a problem if there were a way to encode a single address into a multi-address space, as in Bitcoin and elliptic-curve cryptography: one seed -> infinite addresses. A unique address for every chunk would solve all the problems described - both the same address for all chunks of one file, and the same address for all uploaded files. But this would require the wallet to keep track of all those addresses.
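A minimal sketch of the "one seed -> many addresses" idea. This uses plain hash-based derivation for illustration; Bitcoin's actual scheme (BIP-32 HD wallets) uses elliptic-curve key derivation, and nothing here is the real PtP design:

```python
import hashlib

def derive_chunk_address(seed: bytes, chunk_index: int) -> str:
    """Derive a distinct per-chunk address from a single master seed.
    Deterministic: the seed holder can regenerate every address on demand,
    so the wallet only needs to store the seed, not the address list."""
    data = seed + chunk_index.to_bytes(8, "big")
    return hashlib.sha256(data).hexdigest()

seed = b"wallet master seed"
addrs = [derive_chunk_address(seed, i) for i in range(3)]
assert len(set(addrs)) == 3  # every chunk gets its own address
```

This also answers the "wallet must keep track of all those addresses" concern: with deterministic derivation, tracking the seed plus an index counter is enough.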

2 Likes

Unless each chunk can be requested individually by some means, I think everything else becomes academic. But let’s continue assuming they can.

Somewhere in the code base there should be something clarifying how the cache works now.

If it doesn’t cap somehow, all memory could be used up, which could make the vault crash, or just make it (and everything else) dead slow.

If there is a circular buffer (hopefully with its size based on the vault's memory and not hard-coded), then the effect is that the effective TTL of that node's cache, when maxed out, depends on the size of the circular buffer. If requests are maximised and the volume of stored chunks is larger than the circular buffer's maximum size in a node A, then the effective TTL could be lower than the configured cache TTL. If, however, there is a node B on the path between the client and node A with a larger circular buffer, then that node sets the lower limit of the cache TTL on that request path (up to the configured TTL, which would normally be the lower limit).
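The "effective TTL" point can be put into numbers (all figures here are invented for illustration; nothing is taken from the actual vault implementation):

```python
def effective_ttl(configured_ttl_s: float, buffer_capacity_chunks: int,
                  insert_rate_chunks_per_s: float) -> float:
    """Effective cache lifetime of a chunk in a full circular buffer:
    each new insert evicts the oldest entry, so a chunk survives roughly
    capacity / insert_rate seconds, capped by the configured TTL."""
    eviction_time_s = buffer_capacity_chunks / insert_rate_chunks_per_s
    return min(configured_ttl_s, eviction_time_s)

# Node A: small buffer, heavy traffic -> evicted long before the 30 min TTL
print(effective_ttl(1800, 100, 1.0))   # prints 100.0 (seconds)
# Node B: larger buffer on the same path -> capped at the configured TTL
print(effective_ttl(1800, 5000, 1.0))  # prints 1800.0 (seconds)
```

Under these assumed numbers, node B's larger buffer is what keeps the path's cache lifetime near the configured TTL, which is the point being made above.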

This is not possible. IIRC all chunks in a vault are encrypted by another node prior to being sent to the vault. This would also include ownership and PtP- or PtD-related info. The farmer/vault persona has no idea how to directly request a chunk stored in their vault. I.e. “pure luck”…

Perhaps sophisticated out-of-band collusion and fancy timing analysis to figure out whether a particular public-data chunk was stored in their vault could increase the odds, but it would still be very challenging. After that, the attacker needs to compete with the 8 other closest nodes and with caching… Not very gameable. However, the question of whether a DDoS attack on a particular public-data chunk could overwhelm the caching algorithm is interesting to consider. I would consider that a genuine real-time DDoS attack rather than a gaming of the farming algorithm. I believe we already discussed methods of indirection and random relocation in a previous thread to try to stop situations like this.

3 Likes

I am not sure how you overwhelm the caching, since the closest node to the requester will simply supply the chunk. You might get delays or dropped requests if demand exceeds the node's ability to supply the chunk; the node's upload speed will affect this the most. And for a DDoS, this effect applies to each of the nodes responding.

So any overwhelming is just the inability of the close nodes to supply the data. This only affects a relatively small part of the whole network. A bit like having stepped on an ants' nest and having a few biting/stinging you.

1 Like

Sure - but if my vault didn't know which chunk has which XOR address, I couldn't deliver the right chunk for the right request - so I assume every vault must have some kind of index of which chunks it contains.

If that were the case, I would say it's indeed not a realistic option to request those, but…

And who tells me what to call the data I store and need to deliver (and how would they know that…?) if I don't know the XOR names of the data I store? Oo

I don't know how it could be possible to hide from me which chunks I store… Sure, the content may be encrypted, but I know their names… Anything else doesn't make sense to me…

Ps:

Yes - a simple setup - one client in a random position, trying not to request data too quickly, and betting on holding more data than can be cached.

2 Likes

The vault does not receive the request directly, does it? The elders tell the vault to retrieve the chunk the elders specify, and that means they apply whatever function to the address that was applied when telling your vault to store it.

Now if the section can receive 1 request per 10 minutes before caching elsewhere covers the request, and it averages out which vault supplies the chunk, then it's once per 80 minutes, or 18 GETs per day for that chunk on your vault. If the average cache keep time for the chunk is 30 minutes for any cache in the hop chain, then it's 6 requests per day. At, say, 0.00001 safecoin per GET, you earn 0.0219 safecoin per year of gaming with one machine.
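That arithmetic can be checked in a few lines (the rates are the assumed figures from this discussion, not actual network constants):

```python
# Assumed figures from the discussion: one GET per 80 minutes on this vault,
# cut to a third by caching, at an assumed 0.00001 safecoin per GET.
MINUTES_PER_DAY = 24 * 60

gets_per_day = MINUTES_PER_DAY / 80   # -> 18 GETs/day for the chunk
gets_with_cache = gets_per_day / 3    # the 30-min cache assumption -> 6/day
reward_per_get = 0.00001              # assumed safecoin per GET

yearly = gets_with_cache * 365 * reward_per_get
print(round(yearly, 4))  # prints 0.0219 (safecoin per year, one machine)
```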

Now if you have 10,000 machines and there are 1,000 sections, then there is a maximum of 1,000 paths, and many of those intersect, so on average maybe 10 paths out of the section with your vault. So we can multiply the requests by 10, but we have to at least double the cache response time, since multiple cache paths could potentially supply the chunk. And if the requests are not coordinated, it may always be the cache responding, even at one request per machine per hour or two, since your 10 machines per hour per path will likely keep at least one of the caches responding.

Now your 10,000 client machines will not be perfectly spread out across XOR space, so the paths will not be 1,000 but could be as low as 100 to 200 paths, and this makes the figures worse. In fact, too many machines may defeat the purpose, and you would need to ensure each client machine uses a different path to your vault. But since you cannot know the path, that's out.

Have fun tying up multiple machines for a year to get maybe 0.2 to 0.5 safecoin per year (assuming GET rewards are as high as 0.00001).
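A rough check of the multi-machine scaling, using this post's own assumed factors (the ~10-path figure and the per-GET rate are assumptions from the discussion, not network parameters):

```python
# One machine: 6 GETs/day at an assumed 0.00001 safecoin per GET.
single = 6 * 365 * 0.00001      # ~0.0219 safecoin/year

# ~10 effective paths out of the target section in the best case;
# this ignores the doubled cache response time, so it is an upper bound.
ten_paths = single * 10
print(round(ten_paths, 2))      # prints 0.22 - the low end of the quoted range
```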

1 Like

So I need an elder vault to pull this off? That would not exactly be mitigation, but rather a guarantee that farming becomes centralised…

See my edit above to give you an idea of how bad it can be even if you knew one chunk.

Frankly, it will be like the script kiddies who get tired of DDoSing a major site after a day or two or three. How long would you pay to keep up such a gaming attack?

You assume that I don’t know more chunks than can be cached

What makes you so sure about this? The network doesn't know time - the only thing making a cache expire should be volume (I don't see why time should suddenly be relevant here).

  1. If a cache holds the chunk for less than a few minutes, then what use is it? A cache has to be fit for purpose, doesn't it?
  2. Caching will work by each node caching the chunks passing through it.
  3. There will only be a limited number of paths to a limited number of clients.
  4. A simple thought experiment applying caching theory and path theory gives this sort of analysis. It would take quite a long analysis to express this in more specifics.
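The "expiry by volume, not time" idea above can be sketched as a fixed-capacity FIFO cache with no clock at all (a simplified illustration, not the actual vault code):

```python
from collections import OrderedDict

class VolumeCache:
    """Fixed-capacity chunk cache: no clock anywhere - entries expire only
    because newer chunks push them out (FIFO eviction)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion-ordered: name -> chunk bytes

    def put(self, name: str, chunk: bytes) -> None:
        self.store[name] = chunk
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the oldest entry

    def get(self, name: str):
        return self.store.get(name)  # None if evicted or never cached

cache = VolumeCache(capacity=2)
cache.put("a", b"1"); cache.put("b", b"2"); cache.put("c", b"3")
print(cache.get("a"))  # prints None: pushed out purely by volume, no TTL
```

With this design, the effective "TTL" of a chunk is exactly the time it takes for `capacity` newer chunks to pass through - which is the volume argument in the posts above.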

Time has always been significant for caching analysis. That doesn’t mean the network needs to know time.

1 Like

My vault stores all data with the prefix of my section, but requests are routed by XOR distance to/from me :face_with_monocle::thinking: Right - I somehow assumed only one route, because my vault has one position, stores the closest XOR addresses, and everything is routed along XOR distances.

1 Like

No, it takes multiple hops as part of the requirements for anonymous data, so that no node can know the source or destination.

What if my client is sitting in the neighbouring section? Clients can reconnect as often as they want, if I'm not mistaken…?

You'd have to ask someone who knows the code. We've always been told the number of hops is on the order of the log of the number of sections, and required to be at least a few in order to provide anonymity.
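Putting rough numbers on "order of the log of the number of sections" (the base of the log is unspecified in the discussion, so base 2 is my assumption here; illustrative only):

```python
import math

# Hop count growing as ~log2(number of sections): even a 10x larger
# network only adds a few hops.
for sections in (100, 1_000, 10_000):
    print(f"{sections} sections -> ~{round(math.log2(sections))} hops")
# prints ~7, ~10, and ~13 hops respectively
```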

For your example, it's easy to handle if the section tells the node to send the request to a section not on the actual path, and that section then hops it on as normal.

I always assumed log n is the worst case/longest path for chunk discovery, not a stated goal :face_with_monocle:

But routing along XOR would give only one path from the neighbouring section - and only one hop…

2 Likes