Net neutrality

The more popular an application on the SAFE network gets, the faster it becomes. Built-in scalability.

Will this create an unfair advantage for established apps?
Could the creation of more copies of the same file on the network be purchased from the network? Would this increase download speed?

Correct me if I’m wrong, but I don’t think more popular apps go faster, because there’s only a certain level of redundancy.

That DOES make me wonder how the network accounts for 1 file being accessed 1000 times in the span of 10 minutes… and if you can buy more redundancy, does THAT speed up access? That’s a great point. At first glance, the system actually seems to be a torrent-style system with only 4 seeders.

So how does the network handle popular content?

You probably know most of this already…I’ll give it a stab, others can correct me.

Will this create an unfair advantage for established apps?

I would say there is a first-mover Builder advantage, but I wouldn’t call it unfair. Same as the first-mover Farmer advantage. The actual rewards algorithm is still being finalised, I believe.

Could the creation of more copies of the same file on the network be purchased from the network?

Do you mean the vault that shared the file having its wallet rewarded by the network?

Example: if Warner Brothers uploaded a new-release movie first, the network would shred any further identical copies that were uploaded, thereby rewarding the first uploader. It’s not clear to me how the network treats slightly altered copies that try to game the system.
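
Mechanically, that “shredding” is usually described as a consequence of content addressing: a chunk’s network address is derived from a hash of its content, so a byte-identical upload maps to a chunk the network already holds. A toy Python sketch of the idea (illustrative names only, not actual MaidSafe code):

```python
import hashlib

# Toy content-addressed store: the address of a chunk is the hash of its
# bytes, so byte-identical uploads collapse to a single stored copy.
class ChunkStore:
    def __init__(self):
        self.chunks = {}  # address -> chunk bytes

    def put(self, chunk: bytes) -> str:
        address = hashlib.sha512(chunk).hexdigest()
        if address in self.chunks:
            return address  # duplicate: nothing new is stored ("shredded")
        self.chunks[address] = chunk
        return address

store = ChunkStore()
a1 = store.put(b"new release movie, chunk 0")
a2 = store.put(b"new release movie, chunk 0")  # identical re-upload
assert a1 == a2 and len(store.chunks) == 1

# A slightly altered copy hashes to a different address, so it is stored
# as brand-new data -- which is why small edits sidestep deduplication.
a3 = store.put(b"new release movie, chunk 0 (re-encoded)")
assert a3 != a1
```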

Once you get past the ISP, the speed would be determined by the availability of nodes (disk, CPU and RAM) that are closest to you in latency terms.

The idea is to get data off personal hard drives and up onto the network. The network then trashes any identical copies… so this frees up vast swathes of personal HDD space.

Given this, it’s silly to then ‘download’ files again. What’s intended to happen is that the network becomes fast enough that you’re streaming the movie, reading the PDF, or playing the game straight off the network :slight_smile:

So the more you give to the network by (a) uploading, (b) deleting the local version, and (c) allocating that saved space to the network… the faster it gets.

So really in practice, you don’t store any personal data (in file format) at all on your machines, except those required to run the operating system and local programs, including vault software.

You would run a modest-sized SSD dedicated to the operating system, local programs and vault software. Then share any additional hard drives 100% with the network… the data is more secure there.

Later on, as Google, Yahoo, Microsoft etc. lose their economy-of-scale advantage, they are (hopefully) forced to follow suit. This frees up loads more resources, and the network becomes even faster (10 years, maybe).

Another area that is not entirely clear (to me) is the role that peering links will play in seeding and evolving the network.


@Russell:

That DOES make me wonder how the network accounts for 1 file being accessed 1000 times in the span of 10 minutes… and if you can buy more redundancy, does THAT speed up access?

My guess, based on @dirvine saying (recently) there’s no farming reward for cached copies, only for actual GET requests to a vault, is that these are not currently accounted for in the content reward model either.

It’s not easy to see how this affects the rewards for popular content. I guess it reduces the increase in reward per access as popularity increases, which I like!

Also, access which doesn’t hit a cached copy will be favoured, such as access from different nodes over repeated access from the same nodes. That’s bloody brilliant!

Bloody hell, if that’s how it works, SAFE has spam cut off almost before it hits the network.

Hang on while I buy more SafeCoin!


I worded it wrong :slight_smile: Is there a way for a new app to compete with this (above), and is there enough of a difference to care?

So really in practice, you don’t store any personal data (in file format) at all on your machines, except those required to run the operating system and local programs, including vault software.

Will there be an option to retain certain local files which are used frequently, such as large video files? I often stay at a place with satellite internet, which is capped at 10GB a month, so it would be advantageous to have as many configurable options as possible to retain data, limit uploading/downloading, etc., while still utilizing the network as much as possible.

I think you may be crossing up the ideas of stored/retrieved data with apps. Increased speed of data retrieval is due to opportunistic caching, which is done as a core network function responding to actual demand for the data per unit time. The caching increases under demand and falls off when demand drops. This wouldn’t be subject to a purchase function because it’s user-demand driven, not provider driven.

Apps are a different thing. I’m not sure how the rewards for popular apps are coded, but the reward has more to do with the number of users of the app (and how often they use it, I’m sure). That’s a different critter. First-mover advantage will play in, as with any product, but won’t put innovation at a real disadvantage. Think AltaVista or other early search engines compared to Google.


Just like currently, anything you want to have handy for speed or off-line usage, store it on your local drive.


I created a testnet1 AMA question relevant to this, asking if syncing to local directories was going to be part of the standard MaidSafe file-sharing app. Hope so!

This would be a Dropbox-type function, which I think will be an app rather than a native network function. At least that’s what I’ve gotten from absorbing @dirvine interviews, etc.


I’m talking about an app! The MaidSafe dropbox like app :slight_smile:

Sorry. You did say app.

My basic point, though, is that I think the company is leaving such an app as low-hanging fruit for others, so it will be an early SAFE app rather than a MaidSafe app.

I’m talking about an app that MaidSafe is planning to release. I forget the name, I think it has “life” in it.

Can you point me to an explanation of this? I’m really curious how this works. Is there a layman explanation of what happens? With arrows and charts and diagrams?

See quote above from David Irvine, under janeannford’s post.

“Life Stuff” is what you’re talking about, I think. Not sure about current plans from MaidSafe on this. You may be right.

I don’t completely follow it, maybe. Is it saying that the network will create more copies, more redundancy, so that it can be doled out faster?

I could be missing a fundamental thing here. My understanding is that it works the way a torrent works. By that logic, you’d need that one file to exist many times in order to meet the traffic of something massive and popular. So the network will judge the popularity of a file and have it exist on more nodes accordingly?

Let’s say in the case of a 10GB video file.

Yes. Say chunk “ABC” is part of your 10GB file and you want to download that. Your client app will make a network request to “Get chunk ABC”. This request will be passed to the 4 vaults whose IDs are closest to “ABC” (they’re termed DataManagers) since they know which vaults actually hold chunk ABC.

These 4 actual holders then get passed your request and reply with the chunk. This reply containing the chunk goes via the 4 DataManagers and from there back to you via a non-deterministic route comprising various other vaults.
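
For the curious, “closest” here means XOR distance between IDs, as in Kademlia-style DHTs. A toy Python sketch with small made-up IDs (real network IDs are far larger, and this is illustrative only):

```python
# Toy illustration of picking the 4 vaults "closest" to a chunk name.
# Closeness is assumed to be Kademlia-style XOR distance over the ID
# space; IDs here are small ints for readability, not real network names.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

vault_ids = [0b1010, 0b1001, 0b0111, 0b1100, 0b0001, 0b1011]
chunk_id = 0b1000  # "ABC"

data_managers = sorted(vault_ids, key=lambda v: xor_distance(v, chunk_id))[:4]
print([bin(v) for v in data_managers])
# The 4 nearest IDs act as DataManagers for the chunk: they track which
# vaults actually hold it and relay Get requests on to those holders.
```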

As the reply is passed along, each vault that comes into contact with chunk ABC caches it in a first-in-first-out queue. If a given vault’s queue is full, the oldest chunk it’s caching will be displaced/forgotten and replaced with ABC.

So say you try to get ABC again shortly after. Each vault that receives your second request will check its cache for ABC, and if it has it, it short-circuits the normal process and replies with ABC to you again.

Since every client trying to retrieve ABC will be sending requests towards the same 4 DataManagers, the caching will tend to focus around those guys, protecting them from actually receiving the requests. The more popular a chunk is, the more often it will be plonked to the top of random vaults’ caches, and the area covered by cached copies will swell out from the DataManagers for ABC.

So, as well as protecting the DataManagers and the actual holders from DDoS (deliberate or otherwise), it also means that clients requesting a popular chunk should, on average, receive it back from the network more quickly than a less popular one.

As the chunk’s popularity wanes, the cached copies get displaced by newer chunks and the replication count drops back towards just the actual holders again.

This is the current implementation of “opportunistic caching”. Things may change slightly with upcoming design meetings, but fundamentally we’ll need to keep some such mechanism in place to protect against DDoS incidents. The speedup for popular chunks is really just a nice by-product of this.
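
If pseudocode helps, here’s a toy Python sketch of the mechanism described above: a per-vault FIFO cache plus the short-circuiting lookup along the reply route (illustrative only, not the real vault code):

```python
from collections import OrderedDict

# Toy FIFO cache, one per relay vault: when full, the oldest cached
# chunk is displaced to make room, exactly as described above.
class VaultCache:
    def __init__(self, capacity: int = 3):
        self.capacity = capacity
        self.entries = OrderedDict()  # chunk name -> chunk bytes

    def store(self, name: str, chunk: bytes) -> None:
        if name in self.entries:
            return
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # forget the oldest chunk
        self.entries[name] = chunk

    def lookup(self, name: str):
        return self.entries.get(name)

# A Get walks the route; the first cache hit short-circuits the request,
# otherwise the chunk comes from the holders and is cached on the way back.
def get_chunk(route: list, name: str, holders: dict) -> bytes:
    for vault in route:
        hit = vault.lookup(name)
        if hit is not None:
            return hit  # served from cache; DataManagers never see it
    chunk = holders[name]  # fetched via the DataManagers / actual holders
    for vault in route:
        vault.store(name, chunk)  # cached along the reply path
    return chunk

route = [VaultCache() for _ in range(3)]
holders = {"ABC": b"chunk-ABC-bytes"}
get_chunk(route, "ABC", holders)         # first request reaches the holders
assert get_chunk(route, "ABC", holders)  # the repeat is served from a cache
```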

Hope that explains it a bit further?


That did it, thanks @Fraser! Perfect explanation.

I feel a “SAFE Network for Dummies” coming on, with lots of pictures.

More like this from the Vault document…please Shona :blush:
