Kind of yes and no.
I do have my own strength-first, speed-last encryption software, a modification of the one-time pad, and it has been ready for download for years. However, what I have found out about my software is that it has one critical security flaw: the ciphertext contains an encryption key ID, and each encryption key of the current mmmv_crypt_t1 implementation has only one ID. That metadata RUINS ANONYMITY.
The flaw can be fixed easily by generating thousands of IDs for each key. On the decryption side there should be a key management program that reads the IDs of all keys into a 2-column SQLite table, which makes it really fast to find the relation:
key_ID -> keyfile_path_on_disk
but that’s about a week of work, probably 2 weeks, maybe a month as sub-tasks emerge, and I have to fix/upgrade/publish my Kibuvits Ruby Library (KRL) before taking on that task. I’ll probably get the KRL back in line relatively quickly, probably within a month, but I have to earn a living somehow, and unless one of my client projects requires it, that work gets postponed.
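To make the fix concrete, here is a minimal, hedged sketch in Ruby. It is NOT the planned key management program: it uses an in-memory Hash in place of the 2-column SQLite table, and every name and path in it is made up purely for illustration.

```ruby
# Hypothetical sketch of the many-IDs-per-key lookup.
# The real fix would persist this mapping in a 2-column SQLite table;
# a Hash is used here only to keep the example self-contained.
require 'securerandom'

# Assign many random IDs to each keyfile, so that no single ID seen in a
# ciphertext can be correlated with other ciphertexts using the same key.
def build_id_table(keyfile_paths, ids_per_key: 1000)
  table = {} # key_ID -> keyfile_path_on_disk
  keyfile_paths.each do |path|
    ids_per_key.times do
      table[SecureRandom.hex(16)] = path
    end
  end
  table
end

table = build_id_table(['/keys/key_a.bin', '/keys/key_b.bin'], ids_per_key: 3)
some_id = table.keys.first
table[some_id] # fast lookup from the ID found in a ciphertext to the keyfile
```

The encryptor would then pick one of the thousands of IDs at random for each ciphertext, so the same key never advertises the same ID twice in a row.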
Secondly, my encryption algorithm is slow as hell, not because it is written in Ruby (Ruby certainly contributes to the slowness, but it is not the main reason), but because, due to the strength-first-and-all-the-rest-second approach, the algorithm deliberately embraces a memory access ANTIPATTERN: random pieces of the encryption/decryption key are randomly picked from a key that does not fit into the CPU cache. That kind of memory access pattern makes the algorithm terribly slow regardless of the programming language it is implemented in. But the slow-as-hell approach is a conscious choice, made in the name of security. The slowness itself does not strengthen the encryption algorithm; it is only a side effect. What the random selection of key pieces does do is reduce the risk that some particular part of the key “wears out” much more than other parts of the key.
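For readers who want to see the access pattern rather than just read about it, here is a hedged illustration. It is NOT the actual mmmv_crypt_t1 code; the key size and function names are assumptions made only to show why uniformly random offsets into a cache-exceeding key defeat the CPU cache.

```ruby
# Hypothetical illustration of the cache-hostile access pattern described
# above; this is NOT the actual mmmv_crypt_t1 code.
require 'securerandom'

# A key far larger than a typical CPU last-level cache (64 MiB here).
KEY = SecureRandom.random_bytes(64 * 1024 * 1024)

# Pick a piece from a uniformly random offset. Because the offsets are
# spread over the whole key, almost every pick is a cache miss, but no
# region of the key "wears out" faster than the rest.
def random_key_piece(key, piece_length)
  offset = rand(key.bytesize - piece_length)
  key.byteslice(offset, piece_length)
end

piece = random_key_piece(KEY, 16)
```

A cache-friendly variant would walk the key sequentially, but then the usage of the key material would be predictable, which is exactly what the random selection is meant to avoid.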
A total opposite of the memory access anti-pattern can be found in a string concatenation implementation that can also speed up multiplications of integers (10-fold, depending on the data), because from a memory allocation and access point of view
10 * 10 * 10 = 1000
is very similar to string concatenation. That comes in very practical when calculating factorials. Memory-access-pattern-based optimization techniques are programming language agnostic, just like algorithmic-complexity-based optimizations are programming language agnostic.
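To show the factorial claim in code: one well-known technique that fits the description of the watershed idea (combining roughly equally sized pieces instead of growing one huge accumulator) is a balanced product tree. The sketch below is my reading of that idea, not the KRL implementation.

```ruby
# A balanced product tree for factorials: combine roughly equally sized
# operands pairwise instead of multiplying one huge accumulator by one
# small factor at a time. For big integers this keeps the operands of
# each multiplication similar in size, which is exactly the situation
# the text above describes. This is an illustrative sketch, not KRL code.
def balanced_product(numbers)
  return 1 if numbers.empty?
  until numbers.length == 1
    # Combine adjacent pairs; a lone trailing element passes through.
    numbers = numbers.each_slice(2).map { |pair| pair.reduce(:*) }
  end
  numbers.first
end

def factorial(n)
  balanced_product((1..n).to_a)
end

factorial(10) # => 3628800
```

The naive loop `acc *= i` repeatedly touches an ever-growing big-integer buffer, whereas the pairwise scheme keeps each round's operands comparable in size, which is the memory-layout analogy to smart string concatenation.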
By the way, You may port the string concat function to Your Rust. The sample implementations of the watershed string concatenation algorithm are under the MIT license. You may find that algorithm extremely helpful when concatenating bitstreams, i.e. re-assembling the pieces of files, at the decryption/readout step of the MaidSafe.
From a code style point of view the difference is:

# old and naive: grow one accumulator string piece by piece
s = s + chunk_1

# smarter option that uses the watershed concatenation:
ar << chunk_1  # collects chunk_1 into the array ar; the pieces are combined later
I actually have a Ruby generalization of the watershed concatenation, a kind of “template style code”, which You may use for experimentation. In the old KRL it’s called x_apply_binary_operator_t1, in the class Kibuvits_ix. The watershed concatenation algorithm is useful only when the concatenation result takes considerably more memory than any of the roughly equally sized concatenatable pieces.
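As a rough picture of what such a generalization might look like, here is a hypothetical sketch of applying an arbitrary associative binary operator in the pairwise watershed style. It is NOT the actual x_apply_binary_operator_t1 from the KRL; the function name and structure below are made up for illustration.

```ruby
# Hypothetical generalization of the watershed scheme to any associative
# binary operator, in the spirit of the KRL's x_apply_binary_operator_t1
# (this is NOT the actual KRL code; the names here are invented).
def apply_binary_operator_watershed(pieces, &op)
  raise ArgumentError, 'need at least one piece' if pieces.empty?
  until pieces.length == 1
    pieces = pieces.each_slice(2).map do |pair|
      pair.length == 2 ? op.call(pair[0], pair[1]) : pair[0]
    end
  end
  pieces.first
end

# String concatenation: operands stay roughly equally sized each round.
apply_binary_operator_watershed(%w[ab cd ef gh]) { |a, b| a + b } # => "abcdefgh"

# Integer multiplication benefits in the same way for big operands.
apply_binary_operator_watershed([10, 10, 10]) { |a, b| a * b }   # => 1000
```

The precondition from the text applies here too: the scheme only pays off when the combined result is much larger than any single piece, since otherwise the pairwise rounds are pure overhead.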
So, coming back to the idea of whether I can solve it without requiring changes to the MaidSafe: I have an idea of how to alleviate the situation, but it is a matter of philosophy whether a “Safe” is actually a proper safe that can withstand its main attackers, or whether a “Safe” is only “better than the current alternatives”. In my opinion the “minimum viable product” approach is inappropriate in the case of the MaidSafe project. Actually, I’ve seen the trend of reverting to some “minimum viable product” at other projects as well, interestingly often in the case of projects that are mainly developed by Americans or Brits. For example, the GNUnet seems to take a kind of sloppy approach, which I summarize as: “The small hashes will (have to) do, because user comfort trumps resilience to state actors.” It seems to me that it really is a culture thing: which is more important, the happiness and comfort of end users, or the fact that the device/product/software really does the job, even if user comfort is sacrificed to the technical strengths of the product? It is a fact that remembering and writing/entering passwords/PINs is an uncomfortable burden, but bank cards just would not work without that burden, as is also the case with all online accounts, with the exception of ssh-key based systems, which just have a different kind of burden in the form of setting up the ssh keys.
Thank You for reading my comment.
P.S. It’s not for me to say, how others should do their software development. Just like everybody, I also have subjective opinions.