First, the SAFE network works with chunks (up to 1MB in size).
Each chunk is self-encrypted before being sent out to the network to be stored.
Each chunk is indivisible content-wise. And being encrypted, it is almost inconceivable that you will come across any pair of chunks with the first half (or any other portion) exactly the same without the whole chunk being exactly the same.
As such it is pointless to talk of a vault trying to “dedup” based on portions of chunks.
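To make the self-encryption idea concrete, here is a toy sketch: each chunk's key is derived from its own hash plus its neighbours' hashes, so identical files always produce identical encrypted chunks (which is what lets the network dedup whole chunks), while the ciphertext itself looks random. This is my own simplification using stdlib hashing as a stand-in cipher, not the actual self_encryption library; the exact key-derivation details are assumptions.

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # up to 1 MB per chunk, as described above


def split(data: bytes) -> list[bytes]:
    """Split a file into chunks of at most CHUNK_SIZE bytes."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]


def keystream(key: bytes, length: int) -> bytes:
    """SHA-256 in counter mode: an illustrative stand-in for a real cipher."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return (int.from_bytes(a, "big") ^ int.from_bytes(b, "big")).to_bytes(len(a), "big")


def self_encrypt(chunks: list[bytes]) -> list[bytes]:
    """Toy self-encryption: key for chunk i comes from the hashes of
    chunk i and its neighbours, so the result is fully deterministic --
    the same plaintext file always yields the same encrypted chunks."""
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    n = len(chunks)
    encrypted = []
    for i, chunk in enumerate(chunks):
        key = hashlib.sha256(
            hashes[i] + hashes[(i - 1) % n] + hashes[(i + 1) % n]
        ).digest()
        encrypted.append(xor_bytes(chunk, keystream(key, len(chunk))))
    return encrypted
```

Because encryption is keyed off the content itself, two people storing the same file produce byte-identical encrypted chunks, so dedup happens naturally at the whole-chunk level on the network, with no need for the vault to inspect chunk internals.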
If you want to attempt to save storage by checking each 512-byte sector (or 4K block), then any NAS software that already does this will do the same. But finding a duplicate 4K block within a 1MB encrypted chunk, which is encrypted again before finally being sent to the vault for storage, would be like generating random 4K blocks and having one come out as a duplicate of a previously generated one. But if you want to try then go ahead.
But mathematically you might get one 4K disk-storage-block duplicate every 100 billion chunks sent to you, and maybe actually more like 1 in 100 billion billion chunks if you are lucky. Have fun.
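A back-of-envelope birthday-bound estimate shows just how hopeless block-level dedup of random-looking data is. The expected number of colliding pairs among n uniformly random b-bit blocks is roughly n²/2 divided by 2^b; this is my own rough calculation, not anything from the SAFE codebase.

```python
import math


def expected_collisions(n_blocks: int, block_bits: int) -> float:
    """Birthday-bound approximation: E[colliding pairs] ~ (n^2 / 2) / 2**block_bits.
    Computed in log space to avoid overflow for large block sizes."""
    log2_expected = 2 * math.log2(n_blocks) - 1 - block_bits
    # Anything below the smallest subnormal float is effectively zero.
    return 2.0 ** log2_expected if log2_expected > -1074 else 0.0


# A 4K block is 4096 * 8 = 32768 bits. Even a trillion random blocks
# gives an expected collision count that underflows to zero.
print(expected_collisions(10**12, 4096 * 8))
```

For contrast, the same formula applied to tiny 32-bit blocks predicts collisions almost immediately, which is why block-level dedup works on structured local data but not on ciphertext.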
For locally stored files, even if encrypted by the NAS, you are very likely to find duplicates. Take the backups you keep on the NAS: that OS file is the same today as it was for the last 100 backups, and it encrypts in the NAS to the same bytes. But not with already-deduplicated chunks that are further randomised by another encryption by the managers before being given to the vault to store. The second encryption is there to prevent someone self-encrypting certain files and then looking for those chunks being stored in their vault, cached, or passing along the chain to the client.