Kind of yes and no.
I do have my own strength-first-speed-last style of encryption software, a modification of the one-time-pad, that has been ready for download for years, but what I have found out about my software is that it has one critical security flaw: the ciphertext contains an encryption key ID, and each encryption key of the current mmmv_crypt_t1 implementation has only one ID per key. That metadata RUINS ANONYMITY.
The flaw can be fixed easily by generating thousands of IDs for each key. At the decryption side there should be a key management program that reads the IDs of all keys into a 2-column SQLite table, which makes it really fast to find the relation:
key_ID → keyfile_path_on_disk
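The fix could look roughly like the following sketch (a hypothetical illustration only: the keyfile paths, the ID length and the 1000-IDs-per-key count are made-up assumptions, and a plain Ruby Hash stands in for the 2-column SQLite table):

```ruby
require 'securerandom'

# Sketch of the proposed fix: every keyfile gets many random IDs, and a
# lookup table (a Hash here, standing in for the 2-column SQLite table)
# maps each ID back to the keyfile path on disk.
def generate_key_ids(n_ids_per_key)
  Array.new(n_ids_per_key) { SecureRandom.hex(16) }
end

id2path = {} # key_ID -> keyfile_path_on_disk

{ '/keys/alice.key' => generate_key_ids(1000),
  '/keys/bob.key'   => generate_key_ids(1000) }.each do |path, ids|
  ids.each { |id| id2path[id] = path }
end

# At decryption time the ID found in the ciphertext resolves to a keyfile:
some_id = id2path.keys.first
puts id2path[some_id]
```

Because many IDs map to the same key, the ID seen in any single ciphertext no longer identifies the key to an outside observer.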
but that's about a week of work, probably 2 weeks, maybe a month as sub-tasks emerge, and I have to fix/upgrade/publish my Kibuvits Ruby Library (KRL) before taking on that task. I'll probably get the KRL back in line relatively quickly, probably within a month, but I have to earn a living somehow, and unless one of my client projects requires it, that work gets postponed.
Secondly, my encryption algorithm is slow-as-hell not because it is written in Ruby (Ruby certainly contributes to the slowness, but it is not the main reason), but because, due to the strength-first-and-all-the-rest-second approach, the encryption algorithm deliberately commits a memory access ANTIPATTERN: pieces of the encryption/decryption key are picked from random offsets of a key that does not fit into the CPU cache. That kind of memory access pattern makes the algorithm terribly slow regardless of the programming language that it is implemented in. But the slow-as-hell approach is a conscious choice and it is used in the name of security. The slowness itself does not strengthen the encryption algorithm; the slowness is only a side effect. What the random selection of the pieces of the key does do is reduce the risk that some particular part of the key “wears out” much more than other parts of the key.
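To make that access pattern concrete, here is a toy sketch. It is emphatically NOT the actual mmmv_crypt_t1 algorithm: the key size, the per-byte XOR and the idea of carrying the offsets alongside the ciphertext are all made-up simplifications, chosen only to show random picks from a key buffer that, in the real software, is far bigger than the CPU cache:

```ruby
require 'securerandom'

# Toy illustration of the access pattern, not the real algorithm: for every
# plaintext byte a key byte is picked from a RANDOM offset of a large key.
# The random offsets are what defeats the CPU cache, and they are also what
# spreads the "wear" evenly over the whole key.
KEY = SecureRandom.random_bytes(1 << 20) # 1 MiB toy key

def xor_with_random_key_pieces(data)
  offsets = Array.new(data.bytesize) { SecureRandom.random_number(KEY.bytesize) }
  out = data.bytes.each_with_index.map { |b, i| b ^ KEY.getbyte(offsets[i]) }
  [out.pack('C*'), offsets] # in this toy, the offsets must travel with the ciphertext
end

def xor_undo(ciphertext, offsets)
  ciphertext.bytes.each_with_index.map { |b, i| b ^ KEY.getbyte(offsets[i]) }.pack('C*')
end

ct, offs = xor_with_random_key_pieces('hello world')
puts xor_undo(ct, offs) # round-trips back to the plaintext
```

With a key several times larger than the last-level cache, nearly every `KEY.getbyte(offsets[i])` is a cache miss, which is exactly the slowness described above, in any language.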
A total opposite of the memory access anti-pattern can be found in a string concatenation implementation that can also speed up (10-fold, depending on data) multiplications of integers, because from the memory allocation and access point of view
10 * 10 * 10 = 1000
is very similar to string concatenation. That comes in very handy when calculating factorials. Memory access pattern based optimization techniques are programming language agnostic, just like the algorithmic complexity based optimizations are programming language agnostic.
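To illustrate the factorial remark, here is a small sketch of my own (not code from the KRL): the factors are combined pairwise, layer by layer, so that both operands of every single multiplication stay roughly equal in size, instead of the naive left-to-right product that keeps multiplying one huge number by one tiny number:

```ruby
# Balanced factorial: multiply the factors pairwise, layer by layer, so every
# multiplication gets two operands of roughly equal size.
def factorial_balanced(n)
  return 1 if n < 2
  layer = (1..n).to_a
  while layer.size > 1
    layer = layer.each_slice(2).map { |pair| pair.reduce(:*) }
  end
  layer.first
end

# Naive left-to-right product for comparison.
def factorial_naive(n)
  (1..n).reduce(1, :*)
end

puts factorial_balanced(10) # => 3628800
```

Both functions return the same value; the balanced version only changes the order in which the multiplications are performed, which is where the memory-level win comes from on big inputs.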
My knowledge of memory access pattern based optimization techniques originates from one of my “previous lives”, when I worked for years writing speed-optimized C++ for image processing/image analysis. My C++ is currently, spring 2017, very rusty, but the fact that the demos are all in interpreted languages, not system programming languages, does not mean that I do not know the system programming side. As a matter of fact, I believe that in some cases smartly written Ruby/PHP/JavaScript/Python can probably outperform C++/C# on the very same datasets. Please do not believe me; please try out the string concatenation Ruby/PHP demos and try to outperform them in C++ with the dumb A+B+C+…+Cn style code. My Ruby/PHP will probably win if the dataset is “big enough”, just like it is the case with algorithmic complexity.
By the way, You may port the string concat function to Your Rust. The sample implementations of the watershed string concatenation algorithm are under the MIT license. You may find that algorithm extremely helpful when concatenating bitstreams and re-assembling the pieces of files at the decryption/readout step of the MaidSafe.
From a code style point of view the difference is:
# old and naive:
whole_file = chunk_1 + chunk_2 + chunk_3
# smarter option that uses the watershed concatenation:
ar = Array.new
ar << chunk_1 # appends chunk_1 to ar
ar << chunk_2
ar << chunk_3
whole_file = concat_by_using_watershed_concatenation(ar)
I actually have a Ruby generalization of the watershed concatenation, like “template style code”, which You may use for experimentation. At the old KRL it's called x_apply_binary_operator_t1, at class Kibuvits_ix. The watershed concatenation algorithm is useful only when the concatenation result takes considerably more memory than any of the roughly equally sized concatenatable pieces.
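For readers who do not want to dig up the KRL, here is a from-scratch sketch in the spirit of that generalization (the actual x_apply_binary_operator_t1 implementation may differ; this is only my minimal reconstruction of the idea, usable for any associative binary operator):

```ruby
# Apply an associative binary operator pairwise, layer by layer, so that the
# two operands of every single application are of roughly equal size.
# Assumes a non-empty input array.
def apply_binary_operator_watershed(ar, &binary_op)
  layer = ar
  while layer.size > 1
    layer = layer.each_slice(2).map do |pair|
      pair.size == 2 ? binary_op.call(pair[0], pair[1]) : pair[0]
    end
  end
  layer.first
end

ar = Array.new
ar << 'chunk_1 '
ar << 'chunk_2 '
ar << 'chunk_3'
whole_file = apply_binary_operator_watershed(ar) { |a, b| a + b }
puts whole_file # => "chunk_1 chunk_2 chunk_3"
```

Passing `{ |a, b| a * b }` instead of string `+` turns the same function into the balanced big-integer product mentioned earlier; that is the whole point of the generalization.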
So, coming back to the idea, can I solve it without requiring changes to the MaidSafe: I have an idea how to alleviate the situation, but it is a matter of philosophy whether a “Safe” is actually a proper safe that can withstand its main attackers, or a “Safe” is only “better-than-the-current-alternatives”. In my opinion the “minimum-viable-product” approach is inappropriate for the case of the MaidSafe project. Actually, I've seen the trend of reverting to some “minimum-viable-product” also at other projects, interestingly often in the case of projects that are mainly developed by Americans or Brits. For example, the GNU Net seems to take a kind of sloppy approach, which I summarize as: “The small hashes will (have to) do, because user comfort trumps the resilience to state actors.”

It seems to me that it really is a culture thing: which is more important, the happiness of users and end user comfort, or the fact that the device/product/software really does the job, even if user comfort is sacrificed to the technical strengths of the product. It is a fact that remembering and writing/entering passwords/PINs is an uncomfortable burden, but bank cards just would not work without that burden, as is also the case with all online accounts, with the exception of ssh-key based systems, which just have a different kind of burden in the form of setting up the ssh keys.
Thank You for reading my comment.
P.S. It's not for me to say how others should do their software development. Just like everybody, I also have subjective opinions.