How do database applications get up-to-date data?

If an application uses a database, then I assume the database data is distributed.
Then, if an update is made to the database, won’t several chunks of the data have to be updated for redundancy?
While this is happening, if an app accesses the database, how does it know which chunk of data is most up to date? Depending on which chunk of data the app accesses, won’t that data be different?

Yes.

The section responsible for the MD (Mutable Data) containing the data segment handles all of its copies.

It knows because once consensus has been reached it is recorded in the datachain, and the nodes go about updating their copies. When you request the MD, the section knows which copies are up to date and which ones are not, and the section will only deliver the most up-to-date copy from one of its nodes that holds it.

Before consensus is reached, the data is not considered updated, nor is the updating app told that it is. Just like a database engine.
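As a rough mental model (a toy illustration, not the real vault code), you can picture the section recording the version agreed by consensus and serving the MD only from nodes whose copy has caught up to it:

```rust
// Toy model, not real SAFE vault code: a section records the latest
// version agreed by consensus and serves data only from nodes whose
// copy already matches that version.
struct NodeCopy {
    node_id: u8,
    version: u64,
    data: String,
}

// Deliver the MD only from a node that holds the agreed (most
// recent) version; stale copies are never served.
fn serve_latest(copies: &[NodeCopy], agreed_version: u64) -> Option<&NodeCopy> {
    copies.iter().find(|c| c.version == agreed_version)
}

fn main() {
    let copies = vec![
        NodeCopy { node_id: 1, version: 3, data: "new".into() }, // already updated
        NodeCopy { node_id: 2, version: 2, data: "old".into() }, // still catching up
    ];
    // Consensus agreed on version 3, so only node 1's copy qualifies.
    let served = serve_latest(&copies, 3).unwrap();
    assert_eq!(served.node_id, 1);
    println!("served from node {}: {}", served.node_id, served.data);
}
```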

If multiple chunks of data need to be updated before consensus, what kind of performance impact does this have?

Chunks are immutable data, so they are not updated.

New chunks are written rather than performing an update in place. This is why it’s called immutable data.
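To see why a “change” to immutable data is really a new write: a chunk’s network address is derived from a hash of its content, so different content necessarily lands at a different address and the old chunk is left untouched. A minimal sketch (the network uses a cryptographic hash; std’s DefaultHasher below is purely illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative only: a chunk's address is a hash of its content.
// The real network uses a cryptographic hash, not DefaultHasher.
fn chunk_address(content: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    content.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let v1 = b"hello";
    let v2 = b"hello, world";
    // "Updating" immutable data means storing new content, which
    // lands at a different address; the old chunk is untouched.
    assert_ne!(chunk_address(v1), chunk_address(v2));
    println!("v1 -> {:016x}", chunk_address(v1));
    println!("v2 -> {:016x}", chunk_address(v2));
}
```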

Now, if you meant MD objects, then the updates are performed in a queue, just like database updates. This isn’t to say that each one completes before the next starts, but they are processed in the order received, and once consensus is reached the MD is then written by each node to its store. Think pipeline.

For any one particular MD, an update has to complete before another update on that MD can start.
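Here’s a small sketch of that ordering guarantee: updates to the same MD are serialised, while different MDs can progress in parallel. A per-object lock stands in for the section’s consensus-ordered queue, purely for illustration:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Illustrative sketch: each MD gets its own lock, so updates to the
// same MD run one at a time while different MDs proceed concurrently.
// The real network serialises via consensus, not a mutex.
fn main() {
    let mut store: HashMap<&'static str, Arc<Mutex<u64>>> = HashMap::new();
    store.insert("md_a", Arc::new(Mutex::new(0)));
    store.insert("md_b", Arc::new(Mutex::new(0)));

    let mut handles = Vec::new();
    for name in ["md_a", "md_b"] {
        let md = Arc::clone(store.get(name).unwrap());
        handles.push(thread::spawn(move || {
            for _ in 0..1000 {
                // The lock serialises updates to *this* MD only; the
                // thread working on the other MD runs in parallel.
                let mut version = md.lock().unwrap();
                *version += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    for (name, md) in &store {
        println!("{} ended at version {}", name, md.lock().unwrap());
    }
}
```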

When you mutate Mutable Data you must supply the version you are changing, and if that does not match what is stored, the update fails.

This ensures that there’s only one valid state.

For example, if two clients try to mutate the same MD, only one succeeds. The one that failed can then decide how to handle the error, for example by fetching the new version and retrying the update against it.
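A minimal sketch of this version-checked (optimistic concurrency) pattern; the type and method names here are hypothetical, not the actual client API:

```rust
// Hypothetical sketch: `MutableData`, `mutate`, and the error type
// are illustrative names, not the real SAFE client API.
#[derive(Debug)]
struct MutableData {
    version: u64,
    value: String,
}

#[derive(Debug)]
enum MutateError {
    // The supplied version did not match the stored one.
    VersionMismatch { current: u64 },
}

impl MutableData {
    // The caller supplies the version it believes is current; a stale
    // version means another client got there first, so reject it.
    fn mutate(&mut self, expected_version: u64, new_value: String) -> Result<(), MutateError> {
        if expected_version != self.version {
            return Err(MutateError::VersionMismatch { current: self.version });
        }
        self.version += 1;
        self.value = new_value;
        Ok(())
    }
}

fn main() {
    let mut md = MutableData { version: 0, value: "initial".into() };

    // Two clients both read version 0, then race to write.
    assert!(md.mutate(0, "from client 1".into()).is_ok()); // wins; version is now 1

    // Client 2 still quotes version 0, so its mutation fails...
    let result = md.mutate(0, "from client 2".into());
    assert!(matches!(result, Err(MutateError::VersionMismatch { current: 1 })));

    // ...and it handles the error by re-reading the version and retrying.
    let current = md.version;
    assert!(md.mutate(current, "from client 2, retried".into()).is_ok());
    println!("{:?}", md);
}
```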

What kind of performance impact does this have compared to a centralised database?

The fact that it is not centralised compensates for the increased time taken to handle any one record/block.

So an often-used database is handling thousands of records at a time. For SAFE this will mean that most of those records are being handled by different sections, so there is a high degree of parallel processing.

But obviously, if one section is handling 2 or 3 updates, then there will be an impact. Even then, it’s like multi-threading in a multiprocessor system, where the section can be working on different MDs at the same *time*.
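A rough sketch of why the load spreads: record addresses are effectively uniform in XOR space, so a batch of updates tends to touch many different sections. The 4-bit “section prefix” below is an illustrative stand-in for the real routing scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustrative only: assign each record to one of 16 "sections" by
// the top 4 bits of its hashed address, mimicking how uniformly
// distributed addresses spread work across sections.
fn section_of(record_key: &str) -> u8 {
    let mut hasher = DefaultHasher::new();
    record_key.hash(&mut hasher);
    (hasher.finish() >> 60) as u8 // top 4 bits -> 0..=15
}

fn main() {
    let mut per_section: HashMap<u8, u32> = HashMap::new();
    for i in 0..1000 {
        let key = format!("record-{}", i);
        *per_section.entry(section_of(&key)).or_insert(0) += 1;
    }
    // Each section ends up with roughly 1000 / 16 ≈ 62 records, so a
    // thousand concurrent updates are handled largely in parallel.
    for (section, count) in &per_section {
        println!("section {:2}: {} records", section, count);
    }
}
```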