SafeNetwork's economics to incentivize different types of memory

High bandwidth + High redundancy + Low latency = Expensive
Vs.
Low bandwidth + Low redundancy + High latency = Cheap

From what I understand, with SAFE, high bandwidth implies high redundancy; low bandwidth could mean low redundancy, but not necessarily.

I’m aware of the concept of caching, but it doesn’t seem to address the issue.

For example, just because some data isn’t accessed often doesn’t mean it won’t require high speed and reliability when it is requested.

As a rather typical user I use different types of storage in different ways. Apart from the multiple levels of volatile memory, I have a high-speed SSD, a slow but large hard drive connected, off-site duplicated data, and large offline zero-redundancy storage (historic data, with marginal value).

Will there be different kinds of storage in the SafeNetwork? If not, I would like us to address these storage system attributes in relation to SAFE.

(Taken from wikipedia and stripped down)

Characteristics of storage
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.

Volatility

Mutability
Read/write storage or mutable storage
Allows information to be overwritten at any time.

Write speed vs Read speed

Accessibility

Random access
Any location in storage can be accessed at any moment in approximately the same amount of time. This characteristic is well suited for primary and secondary storage. Most semiconductor memories and disk drives provide random access.

Sequential access
The accessing of pieces of information will be in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. Such characteristic is typical of off-line storage.

Addressability

Location-addressable
Each individually accessible unit of information in storage is selected with its numerical memory address.

File addressable
Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names…

Capacity

Memory storage density

Performance

Latency
The time it takes to access a particular location in storage.

Throughput
The rate at which information can be read from or written to the storage.

Granularity
The size of the largest “chunk” of data that can be efficiently accessed as a single unit, e.g. without introducing additional latency.

MAX_CHUNK_SIZE defined as 1MB.
MIN_CHUNK_SIZE defined as 1KB.

Reliability
The probability of spontaneous bit value change under various conditions, or overall failure rate.

Energy use

2 Likes

Trying to understand what you are trying to say.

SAFE is providing internet storage; percentage-wise, all access will be relatively similar. All forms of memory/storage fall in a range of access speeds: SSDs have a range of a few hundred percent, same for disk drives, and memory on modern PCs ranges about 100% (1300-2600).

And SAFE will see a similar range in speeds for its internet storage. The greatest speed issues are the initial latency for the first byte read, and then your link speed (SAFE can fetch multiple chunks at once and max out most home internet links).

So are you trying to say that SAFE should provide faster than internet access somehow, for a different cost?

1 Like

[Appended to OP] …

1 Like

At this stage SAFE provides two types of storage.

The file/chunk storage - immutable storage that is most like a massive array of disks over the internet.

And Secure Data Storage - mutable storage of up to 100 KBytes each, also delivered across the internet.

So similar access profile (internet disk)

Later there will be archive storage, where some nodes permanently store the least-used chunks, perhaps even on more modern write-once storage devices.

1 Like

How are these two types of storage different in relation to the above storage characteristics?

How granular are these chunks ?

1 Like

The chunks are up to 1MByte in size.

The above does not include internet storage, or even network storage. It seems to be the model from the late eighties, with DVD added to the CD system.

There are a number of newer memory/storage systems now.

Internet storage would be on the order of 1 second latency and speed limited by the user’s internet connection.

1 Like

When someone downloads a 1.1 MB image from SAFE, would there be many people sending this 1 MB chunk, or would the 1 MB chunk be split further into smaller parts?

Would the 100 KB remainder be sent as a 1 MB chunk or a 0.1 MB one?

The above basic characteristics were taken from https://en.wikipedia.org/wiki/Computer_data_storage and they apply to any digital storage.

1 Like

The 1.1 MB file would be stored as 3 chunks, which is required by the self-encryption algorithm, so it would be stored as 3 chunks of equal size.

To answer what you wanted to know, I gather the current process is to pad the last chunk out to 1 MByte for files >3 MBytes long. But this is not certain.
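As a rough illustration of the split described above, here is a sketch of the rule: at least 3 chunks, none above MAX_CHUNK_SIZE, all near-equal in size. This is only an illustration; the real self_encryption library may balance the sizes differently, and `chunk_sizes` is a hypothetical helper, not its API.

```rust
/// Maximum chunk size, matching the constant quoted later in the thread.
const MAX_CHUNK_SIZE: usize = 1024 * 1024;

/// Sketch of the split: at least 3 chunks, none above MAX_CHUNK_SIZE,
/// all equal to within one byte. (Illustrative only.)
fn chunk_sizes(file_len: usize) -> Vec<usize> {
    // At least 3 chunks, and enough that no chunk exceeds MAX_CHUNK_SIZE.
    let n = std::cmp::max(3, (file_len + MAX_CHUNK_SIZE - 1) / MAX_CHUNK_SIZE);
    let base = file_len / n;
    let rem = file_len % n;
    // The first `rem` chunks carry one extra byte to absorb the remainder.
    (0..n).map(|i| base + if i < rem { 1 } else { 0 }).collect()
}

fn main() {
    // A 1.1 MB (1,100,000-byte) file -> 3 chunks of ~366.7 KB each.
    println!("{:?}", chunk_sizes(1_100_000)); // [366667, 366667, 366666]
}
```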

It may have that, but it is still missing network storage systems, NAS for instance; iSCSI is another one missing. Neither fits in with any of those mentioned: NAS can be used on PCs, and iSCSI is mostly for multi-server systems.

And more importantly, it is missing internet storage.

2 Likes

Besides file size, there is no difference in speed of data retrieval. The Network is data-agnostic, and every file will be passed along as fast as possible.

Other (faster) types of memory would have to be client-side. Even though the economics might (probably would) work out, I don’t believe that the tech could handle that kind of prioritized data access.

1 Like

So for every 50 KB image you’d need to receive 3 MB. And can one of these 1 MB chunks be split smaller and transferred in parallel?
It could be a problem if most internet connections around are 7 Mbps down / 800 kbps up. Downloading anything could take as much as 12 seconds if one of those connections is contributing a chunk.
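The 12-second figure above can be checked with a little arithmetic. A sketch, where `transfer_seconds` is just a hypothetical helper and real transfers add protocol overhead on top:

```rust
/// Seconds to move `bytes` over a link of `bits_per_sec`, ignoring
/// protocol overhead (hypothetical helper for the estimate above).
fn transfer_seconds(bytes: f64, bits_per_sec: f64) -> f64 {
    bytes * 8.0 / bits_per_sec
}

fn main() {
    // One full 1 MB chunk pushed up an 800 kbps home uplink:
    let t = transfer_seconds(1024.0 * 1024.0, 800_000.0);
    println!("{:.1} s", t); // ~10.5 s raw, so ~12 s with overhead is plausible
}
```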

I’m talking only about the characteristics of storage systems, not about any other part of the article. Again, every storage system has to operate within the same basic constraints to some degree: latency, reliability, capacity, throughput, etc.

1 Like

I think this is an interesting concept.

Assuming each chunk is stored 3 or 4 times somewhere in the world, generally on an asymmetric internet connection, it seems unlikely that users will be able to max out their home connection when downloading files from SAFE.

However, if there were ‘premium’ options that cost more, it could be appealing for storing files that require faster service (e.g. 8K video files for streaming).

To do this, there would need to be multiple types of immutable data on the network with different properties and different costs.

If an ultra-fast type existed, it might require 10 copies and specify that farmers must have symmetric connections.

The network could rank vaults, and only those that meet this tougher criteria (a minimum upload speed, for example) could farm this premium high-speed data.

This high-speed data would cost more to put, and would incentivise people to invest in better internet connections for their vaults to improve earnings if this data type proved popular.

Would something like this be possible to give options for hosting files with higher bandwidth requirements?

If in practice the network is very fast, this may not be required, but I think it’s an interesting concept.

2 Likes

No. The chunk size is variable and can be as small as 1 KB.

/// MAX_CHUNK_SIZE defined as 1MB.
pub const MAX_CHUNK_SIZE: u32 = 1024 * 1024;
/// MIN_CHUNK_SIZE defined as 1KB.
pub const MIN_CHUNK_SIZE: u32 = 1024;

2 Likes

Thanks for the reply; I’ve edited my above post with the information.

You can also just offset the data in the file by one byte, and every chunk will have a different hash, thus doubling the redundancy and bandwidth. It would be trivial for the client side to manage those two copies as one.
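The offset trick works because chunk addresses are derived from chunk content. A minimal sketch of that idea, where `chunk_id` and `DefaultHasher` are only stand-ins for the network's real cryptographic content hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in for the network's content hash: the real network derives a
/// chunk's address from a cryptographic hash of its contents; DefaultHasher
/// here only illustrates the content-addressing idea.
fn chunk_id(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

fn main() {
    let original = b"some file contents".to_vec();
    // Prepend a single byte: every chunk's content shifts, so every chunk
    // hashes to a different address and is stored independently.
    let mut offset = vec![0u8];
    offset.extend_from_slice(&original);
    assert_ne!(chunk_id(&original), chunk_id(&offset));
    println!("addresses differ, so the two copies are stored separately");
}
```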

I guess where I was getting it wrong is that for large files (e.g. 8k video streams), while there may be a speed limit for downloading each individual chunk, I don’t think there is a limit on how many chunks you can download in parallel, so you should be able to max out your home connection by streaming more chunks at the same time.

If my download speed were 50x the average upload speed of farmers, I could simply download 100 chunks at a time, and it would still utilise my connection despite the slow upload speeds of farmers.
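The back-of-the-envelope version of that reasoning, with `parallel_streams` as a hypothetical helper and purely illustrative numbers:

```rust
/// Parallel chunk streams needed to saturate a downlink fed by slow
/// farmer uplinks (hypothetical helper; numbers below are illustrative).
fn parallel_streams(downlink_mbps: f64, farmer_uplink_mbps: f64) -> u32 {
    (downlink_mbps / farmer_uplink_mbps).ceil() as u32
}

fn main() {
    // A 40 Mbps home downlink fed by 0.8 Mbps farmer uplinks needs
    // at least 50 chunks in flight to run at full speed.
    println!("{}", parallel_streams(40.0, 0.8)); // 50
}
```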

If this is the case, my idea would be unnecessary for increasing speed, and as you say, to increase redundancy you simply upload two copies with a tiny difference that can be handled client-side, and pay twice.

Correct.

Most video/music software buffers the content because of the slowness of the internet/storage. So on SAFE this equates to the software requesting many chunks in advance.

Thus the delay the user sees is the latency before the video/music starts. Otherwise it could be an even better experience than what some receive from the current internet, because requested chunks arrive in parallel, not sequentially as from a streaming server.
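The "requesting many chunks in advance" behaviour can be sketched as a simple client-side read-ahead window. `prefetch_window` is a hypothetical helper, not a network API:

```rust
/// Indices of the next `window` chunks to request while chunk `current`
/// is playing (hypothetical client-side helper, not a network API).
fn prefetch_window(current: usize, total: usize, window: usize) -> Vec<usize> {
    (current + 1..total).take(window).collect()
}

fn main() {
    // Playing chunk 0 of a 10-chunk video with a 4-chunk buffer:
    // request chunks 1..=4 in parallel while chunk 0 plays.
    println!("{:?}", prefetch_window(0, 10, 4)); // [1, 2, 3, 4]
}
```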

1 Like