Using the network to receive message/other notifications

Maybe this topic also contains useful information:

PS: Another pub/sub system is MQTT. I wonder whether something like its ‘Last Will and Testament’ feature could be implemented.
Edit: something that the app should implement, I guess.

1 Like

That mostly sounds good. But how would my account get notified when someone else’s account updates a file that I’m syncing? Wouldn’t that require them to have the app installed too?

1 Like

Ah, that is privately shared data, not just your own; that is a different issue. As long as you are happy with the handling of your own data, we can move on to that.

With privately shared data, a user (or app) currently has to poll the network for changes. This could be made much better with push notifications, which are entirely possible. For that we need a pub/sub-type mechanism, and all of the mechanics are there for it.

The part that the core tech does not have right now is push notification. That is straightforward, though. When a client connects, it connects to its close group, which can then cache messages for that client and deliver them on login. So that part is OK. However, you log in on several devices and want all your data on all of them, and at the moment that scenario is not catered for, as your login would have to sync your data. An app would almost need to resend these messages to the client group for the next device that logs in. I am not sure it is smart at all to have your actual data on machines like this, for this reason and many others.
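For concreteness, here is a minimal Rust sketch of the close-group caching idea, where notifications are held for a client until it next logs in. All names and types are invented for illustration; this is not the actual core API, and it also shows exactly why a second device logging in later sees nothing:

```rust
use std::collections::HashMap;

/// Illustrative client identifier (in reality this would be an XOR name).
type ClientId = String;

/// A close group caching notifications for clients that are offline.
struct CloseGroupCache {
    pending: HashMap<ClientId, Vec<String>>,
}

impl CloseGroupCache {
    fn new() -> Self {
        Self { pending: HashMap::new() }
    }

    /// Queue a notification for a client; held until it next connects.
    fn push(&mut self, client: &ClientId, msg: &str) {
        self.pending.entry(client.clone()).or_default().push(msg.to_string());
    }

    /// On login, the client drains everything cached for it.
    fn drain_on_login(&mut self, client: &ClientId) -> Vec<String> {
        self.pending.remove(client).unwrap_or_default()
    }
}

fn main() {
    let mut group = CloseGroupCache::new();
    let alice = "alice-device-1".to_string();
    group.push(&alice, "file X changed");
    group.push(&alice, "new message from Bob");
    // The first device to log in drains the cache; a second device gets
    // nothing unless the messages are re-sent, which is exactly the
    // multi-device problem described above.
    println!("{:?}", group.drain_on_login(&alice));
}
```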

So we are fine with push notifications etc.; that is easy. Syncing local data, though, may require an app, and it’s not that simple: if you have web pages, blogs, messages, call transcripts, call media, files and so on, then every app that creates those would also need to know about this ‘everything is also stored locally’ requirement.

This probably needs more thought: why store everything locally on every device and sync it?

5 Likes

An app would almost need to resend these messages to the client group for the next device that logs in.

Hmm, we shouldn’t rely on every single device’s app working correctly… Couldn’t the push notification just be sent to the relevant account’s inbox rather than to a client? No one besides the user would have to care whether they want per-file syncing, as the notification would just be plonked in the inbox anyway. Unless I’m misunderstanding inboxes? If the goal is for people to store files on their hard drives less and less, then a push-to-client model seems kind of redundant except for farming.

Local storage is great for many reasons, though it might well go out of fashion. But we should prepare for the inevitability that someone will succeed in restricting the internet in the future. Maybe satellites will be destroyed, maybe a new signal-blocking technology will be deployed somewhere, or maybe there will be laws that ban wifi in airports. I bet not many people predicted Russia’s war on Telegram.

1 Like

Yes, but then it becomes a question of which inbox on which device? It’s not so simple really, and I feel that is because it is unsafe. There are ways to achieve this, but it’s probably not so much my area, as I am opposed to having all data on all devices; it’s a waste to me. I agree ubiquitous connectivity is not here yet, but I don’t feel it is far off, otherwise Chromebooks etc. would not be selling.

So the problem here is that each device would need to pick up any file changed from another device in “realish” time. I suspect that will mean tying device IDs of some kind to your account and having almost an inbox per device to sync all this data. If it were only static data then perhaps you could run a daemon to check, a bit like Dropbox does, but that won’t cover all your emails etc., I think. It is possible, though I imagine you would have lost all the security and privacy at that stage and would be as well using a server-based third-party system. Unless, of course, we can find some clever mechanism to sync all data on all devices.

I see it as simpler to cache data offline, though: last-accessed data and so on. That would just be static data as well, mind you, but it could be helpful. Messaging clients etc. could then, if they were designed to, have offline cache capability.

I still would not use that, though, as it would tie me to a device, and I am pretty keen that devices hold absolutely no identifying data on anyone.

[Edit]
All of the above ignores privately shared group data. That gets difficult if you edit a doc offline that 100 online editors are working on. The sync process there is much more like git conflict resolution.

2 Likes

I think that would be a decent balance for many people, assuming the local cache is also encrypted, etc. Using local storage as a cache seems like a good logical step forward for moving data to remote storage, while retaining decent performance and robustness.

For security-critical usage, though, this clearly has negative trade-offs.

2 Likes

Couldn’t this all be solved by giving each account a decentralised inbox? One place where all devices can regularly check for all file updates and other notifications such as emails. Then it’s up to each device to check for the latest notifications every second (or however long makes sense given the network’s refresh rate) and act on them independently. Come to think of it, that seems necessary so that you can see when a shared file has been updated while using a public/friend’s computer.
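A toy sketch of such an account-level inbox, where reading does not consume messages, so any device, even a brand-new one, can catch up from the same history. All names are hypothetical; the real network would address this by XOR name:

```rust
use std::collections::HashMap;

/// Illustrative account-level inbox stored on the network, not on any device.
struct NetworkInbox {
    // account id -> ordered list of notifications
    inboxes: HashMap<String, Vec<String>>,
}

impl NetworkInbox {
    fn new() -> Self { Self { inboxes: HashMap::new() } }

    /// Any app/device appends a notification to the account's inbox.
    fn notify(&mut self, account: &str, event: &str) {
        self.inboxes.entry(account.to_string()).or_default().push(event.to_string());
    }

    /// Every device reads the same inbox; reading does not consume messages,
    /// so a brand-new device still sees the full history.
    fn read_since(&self, account: &str, last_seen: usize) -> &[String] {
        match self.inboxes.get(account) {
            Some(msgs) if last_seen < msgs.len() => &msgs[last_seen..],
            _ => &[],
        }
    }
}

fn main() {
    let mut net = NetworkInbox::new();
    net.notify("Account1", "file /docs/report updated");
    // Device A and a newly connected Device B both start reading from 0
    // and see the same notifications, with nothing stored device-side.
    println!("{:?}", net.read_since("Account1", 0));
}
```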

Not sure what you mean here. An inbox per device? Like having different accounts on each device that all share the same data and also store it locally?

A single account’s inbox stored on the network, so not on any device; like normal network data. Even if all my devices disintegrated, I could connect another one and still see my previous messages. Like computers pulling emails and various other notifications from a centralised server on the classical internet, except the server would be Safe itself.

Yes, but would you get the latest? Say you have two devices with different states that need to sync. If they synced by logging in and getting all the latest stuff, then you would need to differentiate the devices and store something on them, or grab a CPU ID etc. and tie a login to it somehow.

Maybe this is a bit confused (for me anyway). What if we bullet-point your idea? I think it is:

  1. Store data locally
  2. Have multiple devices able to be logged into 1 account
  3. As any device alters a file or gets a message etc. it should be synced to all your logged in devices
  4. As a device logs in, it must traverse all data on the network it has locally and sync that
  5. If there is a private share of data, then each device of every member logged in must sync any changes (IMHO any private share should then be considered unsafe, but that is another matter)

I think that is it, but perhaps you could clarify. There are a lot of side effects here if this is the list, like race conditions between devices and security etc. but perhaps we can take them one by one?

Hope this helps us get to a shared understanding.

As any device alters a file or gets a message etc. it should be synced to all your logged in devices

I’ll try to clarify what I mean here as it might be a source of misunderstanding.

I suggest the following (sketched in code after the list):

  • If the user alters a file on Device1, then Device1’s Safe app should handle uploading this to the Safe network. The Safe app would be logged in to Account1.
  • The Safe network then distributes the file across the world, and sends Account1 (but not any physical device) a notification message that the file has been altered by Device1.
  • Device1’s Safe app polls the Safe network for Account1’s messages, finds this new message, and then does nothing as it is Device1.
  • Device2’s Safe app (also logged in to Account1) polls the Safe network for Account1’s messages, finds this new message, and proceeds to download the new version of the file.
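Roughly, in code, the loop each device’s app would run might look like this. It is purely illustrative, with no real network API implied; `Notification` and `poll_and_act` are names I have made up:

```rust
/// Sketch of the poll-and-act loop each device's Safe app would run.
struct Notification {
    seq: usize,            // position in the account's inbox
    origin_device: String, // which device caused the change
    file: String,
}

fn poll_and_act(my_device: &str, last_seen: usize, inbox: &[Notification]) -> usize {
    let mut newest = last_seen;
    for n in inbox.iter().filter(|n| n.seq > last_seen) {
        if n.origin_device == my_device {
            // This device made the change itself: nothing to do.
        } else {
            // Another device changed the file: fetch the new version.
            println!("{my_device}: downloading new version of {}", n.file);
        }
        newest = newest.max(n.seq);
    }
    newest // remember how far we have read, for the next poll
}

fn main() {
    let inbox = vec![Notification {
        seq: 1,
        origin_device: "Device1".into(),
        file: "/notes.txt".into(),
    }];
    // Device1 ignores its own notification; Device2 downloads the file.
    let _ = poll_and_act("Device1", 0, &inbox);
    let _ = poll_and_act("Device2", 0, &inbox);
}
```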

As a device logs in, it must traverse all data on the network it has locally and sync that

This doesn’t seem necessary, so here’s my alternative take (sketched in code below):

  • Every time the Safe app uploads a file, it would store the latest upload time in a local file.
  • Every time the Safe app is started, it could ask its local filesystem which files have been edited (locally) since that time. Though this could be turned off by the user if necessary for any reason.
  • Any files that have been edited (hopefully none of them as the app should be running continuously) would then be uploaded to the Safe network.
  • The Safe app would then start polling the network for notifications.
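A rough sketch of that startup scan, using filesystem modification times. Note that later in this thread it is pointed out that relying on wall-clock time is fragile, so treat this as illustrative only; `files_changed_since` is a made-up helper:

```rust
use std::fs;
use std::path::Path;
use std::time::SystemTime;

/// Find files in `dir` modified since the last recorded upload time.
/// The "last upload time" would be read from a small local state file;
/// here it is simply passed in.
fn files_changed_since(dir: &Path, last_upload: SystemTime) -> std::io::Result<Vec<String>> {
    let mut changed = Vec::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        if meta.is_file() && meta.modified()? > last_upload {
            changed.push(entry.path().display().to_string());
        }
    }
    Ok(changed)
}

fn main() -> std::io::Result<()> {
    // Anything touched in the last hour would need re-uploading.
    let an_hour_ago = SystemTime::now() - std::time::Duration::from_secs(3600);
    for f in files_changed_since(Path::new("."), an_hour_ago)? {
        println!("needs upload: {f}");
    }
    Ok(())
}
```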

If there is a private share of data, then each device of every member logged in must sync any changes

There would be no requirement for this, but it would make things easier for the individual members. If they turned off automatic message polling, they could still view the updates by clicking a manual ‘message poll’ button, or by visiting the file in a web browser or something.

Race conditions look like they could be complicated, but hopefully that wouldn’t be too glaring an issue for normal usage. Giving each message an ID number so they can be ordered would help.

So the client managers would need to know all the devices and when they all say they have read the message then it can be purged. Each device would somehow mark that message as read.

  • We would need to make sure, then, that when you lose a device you have a way of registering the loss, so the list does not grow forever.

Yes, I understand this part, but when you start an app it will need all those changes, but any existing logged-in devices would need the changes posted to them as well. It could be a lot of traffic, but doable.

Race conditions are hard, and this is also hard: the ID would need to be a device-ID-and-number pair, otherwise the numbers could collide between devices. It does get very tricky. Then other devices need to be able to know when you add devices (it could help security to only allow login from certain devices). But if each device has the same auth and you leave one switched on, you’re scuppered: the person changes your password and you are not near the device to stop that. But I digress. You could use a PIN that removes the device after three failed attempts etc., but it gets to be more work fast down this road.
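A tiny sketch of that device-ID-and-counter pair, so message IDs never collide between devices. Everything here is illustrative:

```rust
/// A message ID that cannot collide across devices: each device increments
/// only its own counter, and the device id breaks ties.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct MsgId {
    seq: u64,    // counter that only the owning device increments
    device: u32, // stable id assigned when the device is registered
}

fn main() {
    // Two devices emit seq 1 concurrently: no collision, and sorting by
    // counter first, device id second still gives a deterministic total
    // order (a Lamport-clock-style tie-break).
    let mut ids = vec![
        MsgId { seq: 1, device: 2 },
        MsgId { seq: 2, device: 1 },
        MsgId { seq: 1, device: 1 },
    ];
    ids.sort();
    println!("{ids:?}");
}
```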

For sync you can start down long roads, like giving every directory a hash and only traversing (syncing) those whose hashes differ from the ones you have at that root etc. But all this is to put your files on a device, and that part is really hard for me to understand; to me, having multiple copies is the antithesis of Safe anyway. I am not sure I am looking at your proposals unbiasedly because of this, but I am trying; I hope I am unbiased. I am tired, but trying to help. None of this will be anywhere near as simple as you imagine, though; I am pretty certain of that. No need to stop thinking and fixing it, though.
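For the directory-hash idea, a minimal sketch: a directory’s hash is derived from its children’s hashes, so two replicas can be compared top-down and only subtrees whose hashes differ need traversing (a Merkle-tree-style scheme). The node type and hasher here are illustrative stand-ins:

```rust
use std::collections::BTreeMap;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// A file tree where each directory's hash covers its whole subtree.
enum Node {
    File(String),                // file contents (stand-in for a datamap)
    Dir(BTreeMap<String, Node>), // name -> child; BTreeMap for stable order
}

fn hash_node(node: &Node) -> u64 {
    let mut h = DefaultHasher::new();
    match node {
        Node::File(contents) => contents.hash(&mut h),
        Node::Dir(children) => {
            for (name, child) in children {
                name.hash(&mut h);
                hash_node(child).hash(&mut h);
            }
        }
    }
    h.finish()
}

fn main() {
    let mut root = BTreeMap::new();
    root.insert("a.txt".to_string(), Node::File("hello".into()));
    // If the root hashes of two replicas match, no descent is needed at all;
    // only subtrees with differing hashes are traversed.
    println!("root hash: {}", hash_node(&Node::Dir(root)));
}
```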

So the client managers would need to know all the devices and when they all say they have read the message then it can be purged.

That sounds like a good way to do it. Another way of purging would be to tell any/each app to delete Account1’s notifications 100 days after reading them. It wouldn’t need any awareness of other devices for that.

Although, is it even necessary to automatically delete old notifications by default? They seem small and harmless, as long as notifications can be accessed by most recent first.

when you start an app it will need all those changes, but any existing logged-in devices would need the changes posted to them as well.

Unless Device1’s app was terminated before it finished uploading files edited/created on Device1, there would be no changes. Device2 would only be affected by Device1 coming online if the user had been using Device1 for offline editing. That is no more traffic than downloading the new files any other way.

I have no real ideas about the race-condition problem. I assume Safe will have a brilliant mechanism to sort it out when it becomes a problem even without syncing apps; someone will try to update mutable data from two devices at once at some point.

If Safe accounts have decentralised notification trays that upload apps (and other apps, websites, etc) can be permitted to plonk notifications in, then I’m inclined to think that race conditions become a problem solely for upload apps to deal with rather than the network as a whole. I’ll give that more thought though…

For sync you can start down long roads, like giving every directory a hash and only traversing (syncing) those whose hashes differ from the ones you have at that root etc.

That could be useful occasionally, to make sure the app hasn’t malfunctioned and failed to update a file for some reason. But I can’t see it being necessary very often.

I might try to condense/clarify my suggestion to make it more readable. I’m pretty sure someone will find a way to make it work. If the infrastructure (just the account notification tray) is included in the network release, a lot of things will be possible that maybe weren’t before.

I appreciate you taking the time to explain it all in detail. Every discovered problem with the idea is a solution waiting to be found (:

1 Like

The only thing to consider here is that SAFE knows nothing about time, and so far that has been critical, so that timer is dubious. It is not natural to have a stopwatch; we can do better.

Agreed. However, as discussed, it will probably change from ‘constantly checking’ to ‘constantly waiting’ via a pub/sub-type service. Polling is not a sustainable model for this network.

I think a popular way to do messaging/editing on the SAFE network will be chains of operational transformations. This means editors “operate on their local copies in a lock-free, non-blocking manner, and the changes are then propagated to the rest of the clients; this ensures the client high responsiveness in an otherwise high-latency environment such as the Internet”.

My idea of what would happen is 1) an initial sync of new transformations, and then 2) the client opens a subscription to be notified of future changes (or starts polling if pub/sub doesn’t exist). This only happens for files in use, since the phone is essentially a ‘view’ onto the network; the files on the phone are not the data, just a starting point for rendering the view. There’s no point updating views that aren’t being viewed, so I would generally think not all 40K files need updating.
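A sketch of that two-phase flow: catch up first, then wait. The `Network` type and its operation log are stand-ins; no real pub/sub API is implied:

```rust
/// Stand-in for the network's ordered log of transformations.
struct Network {
    ops: Vec<String>,
}

impl Network {
    /// Phase 1: initial sync of operations after our last known index.
    fn ops_since(&self, idx: usize) -> &[String] {
        if idx < self.ops.len() { &self.ops[idx..] } else { &[] }
    }
}

fn main() {
    let net = Network {
        ops: vec!["insert 'a' @0".into(), "insert 'b' @1".into()],
    };
    let mut local_idx = 1; // we had already applied the first op

    // Phase 1: catch up on anything missed while offline.
    for op in net.ops_since(local_idx) {
        println!("applying missed op: {op}");
        local_idx += 1;
    }

    // Phase 2 would be: subscribe from local_idx and simply wait for pushes
    // rather than polling; only files actually open in the 'view' subscribe.
    println!("caught up at index {local_idx}; now waiting on subscription");
}
```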

Sure, not time in seconds, but it does know about the sequence of datachain events, which is a progression of events (i.e. time of a sort). I think having a pub/sub timeout as a function of elapsed datachain events would work OK.
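A tiny sketch of a subscription lease measured in elapsed datachain events rather than seconds; the struct and field names are entirely illustrative:

```rust
/// A pub/sub lease that expires by event count, not wall-clock time.
struct Subscription {
    created_at_event: u64, // datachain event count when subscribing
    max_events: u64,       // lease length, measured in events
}

impl Subscription {
    fn expired(&self, current_event: u64) -> bool {
        current_event.saturating_sub(self.created_at_event) >= self.max_events
    }
}

fn main() {
    let sub = Subscription { created_at_event: 100, max_events: 50 };
    // No clocks involved: the subscription lapses once the chain has
    // progressed 50 events past the point where it was created.
    assert!(!sub.expired(120));
    assert!(sub.expired(150));
    println!("expiry works by event count, not seconds");
}
```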


A bit of harebrained conceptualising on messaging:

The source>destination paradigm of messages is currently focused on client>network (and back again), but it should also be able to do network>client. I think of WebSockets and long-polling on the current web… prior to those, a server could not push data; only the client could pull. That’s fine, but not general enough for the messaging requirements of the SAFE network.

And furthermore, messages should be able to go file>file, i.e. a successful mutation to MD1 should be usable as an input to a potential mutation of MD2. Currently this requires a client to poll MD1 for changes and then manually mutate MD2, but it would be great if MD2 could itself subscribe to changes in MD1 and the network could message MD2 directly when MD1 is changed, no client needed; i.e. MD2 becomes a client subscribed to MD1.

In other words, inputs for mutations can currently only be signatures. That’s fine. But eventually having other mutations as inputs would be interesting and lead to a design of messaging that is more fully generalised.
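A toy sketch of that MD-to-MD idea: each MD carries a subscriber list, and a successful mutation fans out to subscribed MDs with no client in the loop. Everything here is hypothetical, nothing like it exists in the network today, and a real design would at least need cycle protection:

```rust
use std::collections::HashMap;

/// A mutable-data entry that knows which other MDs are watching it.
struct Md {
    value: String,
    subscribers: Vec<String>, // xornames of MDs subscribed to this one
}

fn mutate(store: &mut HashMap<String, Md>, name: &str, new_value: &str) {
    let subs = match store.get_mut(name) {
        Some(md) => {
            md.value = new_value.to_string();
            md.subscribers.clone()
        }
        None => return,
    };
    // The mutation of MD1 itself becomes the input that mutates each
    // subscriber, with no client involved. (A real design must break cycles.)
    for sub in subs {
        mutate(store, &sub, new_value);
    }
}

fn main() {
    let mut store = HashMap::new();
    store.insert("MD1".to_string(), Md { value: "v0".into(), subscribers: vec!["MD2".into()] });
    store.insert("MD2".to_string(), Md { value: "v0".into(), subscribers: vec![] });
    mutate(&mut store, "MD1", "v1");
    println!("MD2 = {}", store["MD2"].value); // v1, with no client polling
}
```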

Everything is an xorname. Thus everything should be able to ‘be a file’ and ‘be a client’ and ‘be a message’. There should not be too much distinction between these things. :face_with_raised_eyebrow:

2 Likes

Just thinking outside the box here: does everything need to go through the Safe Network? What if encrypted chunks on one machine were just copied over to other machines via Dropbox (or something similar)? Would you then be able to log into your Safe Network account on the other machine and access the files? That way, SN only needs to update the one machine.

1 Like

Hmm, come to think of it, if the operating system’s date/time were changed to a year in the future, a timer could mess things up.

TylerAbeoJordan:

What if encrypted chunks on one machine were just copied over to other machines via dropbox? (or something similar)

The only reasons I’m not doing this are how expensive it would be and how dodgy the software is compared to just using rsync and an external drive whenever I switch devices every week or so. I think I’d rather store everything on Safe and then redownload all 400,000 files each time than install Dropbox, to be honest… I might change my mind and get Nextcloud soon.

2 Likes

So, for immutable files:

You have an MD where the pointer to the latest version of the file’s datamap is stored. If the device you are using has a different datamap pointer, then you need to update the version on that device’s disk.

This can be done whenever you need to operate on the file. It really doesn’t matter if your copy is old if you never use it.

You can also have an app that checks the MDs for all files (or all the important ones) to see whether any file needs updating. Alternatively, whenever the file is updated, which produces a new datamap, a message could be sent (the app updating the file could do this).

This may be a less-than-perfect way of doing it, but it is workable.
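A minimal sketch of that datamap-pointer check, updating the local copy lazily, i.e. only when the file is actually opened. Names and the pointer format are illustrative:

```rust
use std::collections::HashMap;

/// A device's record of which datamap pointer each local file copy matches.
struct Device {
    local_pointers: HashMap<String, String>, // file -> datamap pointer on disk
}

impl Device {
    /// Compare our stored pointer against the MD's current one; fetch only
    /// when they differ, and only when we actually need the file.
    fn open(&mut self, file: &str, md_pointer: &str) {
        match self.local_pointers.get(file) {
            Some(p) if p.as_str() == md_pointer => {
                println!("{file}: local copy is current");
            }
            _ => {
                println!("{file}: fetching datamap {md_pointer} and updating disk");
                self.local_pointers.insert(file.to_string(), md_pointer.to_string());
            }
        }
    }
}

fn main() {
    let mut dev = Device { local_pointers: HashMap::new() };
    dev.open("/photos/cat.jpg", "xor-abc"); // stale: fetch new version
    dev.open("/photos/cat.jpg", "xor-abc"); // current: no traffic at all
}
```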

1 Like

It could be possible to use SAFE as a ‘trustless meeting place’ for clients to then connect directly to each other via rsync/WebRTC etc. Still keep the ‘single source of truth’ for file content on the network, but do high-performance sync/edits directly peer-to-peer, with discovery via SAFE.

Similar idea to bitcoin giving us “a highly accessible and perfectly trustworthy robotic judge” so we can “conduct most of our business outside of the court room” (source).
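A toy sketch of that rendezvous idea: peers publish connection hints to a well-known mutable entry, then do the heavy sync directly between themselves. The key and hint format here are invented purely for illustration:

```rust
use std::collections::HashMap;

/// Stand-in for a mutable entry on SAFE used purely as a meeting place.
struct Rendezvous {
    entries: HashMap<String, Vec<String>>, // meeting key -> connection hints
}

impl Rendezvous {
    fn new() -> Self { Self { entries: HashMap::new() } }

    /// A device advertises how it can be reached directly.
    fn advertise(&mut self, key: &str, hint: &str) {
        self.entries.entry(key.to_string()).or_default().push(hint.to_string());
    }

    /// A peer discovers the others, then connects outside the network.
    fn discover(&self, key: &str) -> &[String] {
        self.entries.get(key).map(|v| v.as_slice()).unwrap_or(&[])
    }
}

fn main() {
    let mut meet = Rendezvous::new();
    meet.advertise("account1/sync", "webrtc-offer: ...");
    // A second device looks up the hint and dials the first directly; the
    // network stays the source of truth, but the heavy sync traffic goes
    // peer to peer.
    println!("{:?}", meet.discover("account1/sync"));
}
```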

6 Likes

:thinking: I’m sure they said the same of Facebook; I’ve deleted both. Reddit is too much of an echo chamber (a polite way of putting it), and I expect in time it will be seen as already past its best. There is also the rather limited audience: content is only visible to those who have subscribed to a subreddit, not to those who have not.

Still, the OP’s collection of content from the unSAFE internet is a good idea… it reminds me of old-school newsfeeds. A simple way of managing channels of all sorts of input would be interesting to see… though that reminds me of the RSS feeds in email software that I suspect no one uses.

1 Like