Crypto chat

Yes, I wrote about a true one time pad and completely random cipher data. The solution I have used is neither a real one time pad, nor true random generation, yet my hope is that it’s a close approximation to that.

It would be possible to generate a key of equal length to the message, I guess. I don’t know how secure it is, but instead of a 256-bit hash value it’s very easy to extend the hash value to arbitrary length. So, for example, a public key can be generated with the same length as the total message. Does the private key have to be as long as the public key for perfect encryption?

Here is a paper about the cryptographic properties of the Rule 30 cellular automaton: http://www.stephenwolfram.com/publications/academic/cryptography-cellular-automata.pdf

Think about it like this: websites are more and more using https, the “secure” version of http, with encryption such as AES among others. Who really believes the NSA is unable to listen in on https traffic?

Yes, there are known and published attacks on https, and https often relies on AES. However, that doesn’t mean that AES is broken. In fact, the underlying crypto primitives are most likely the strongest link in the https chain.

The problem with adopting Rule 30 is the lack of public analysis around it (unlike AES, SHA, etc.). So I would hazard that Rule 30 is an easier hash to break than SHA.

One last note: don’t quote a comment from an article and cite it as if it’s from the body of the article, or at least flag that you’re doing so. And in response to that person’s comments:

Actually, even with orders of magnitude faster machines, brute forcing will still practically take forever.

Yes, if you assume that your adversary can break everything, then you lose. The only way out would be a cryptosystem whose hardness (in terms of how long it takes to crack) is provably lower-boundable, which is currently beyond the (public) state of the art. Until then, we need to make do with the best that we have.

True. Not only is Rule 30 very little tested for cryptography, my implementation may also be totally flawed. :confounded: Still, I find it useful as an experiment.

Yes, the quote about the NSA having faster machines and better analytical knowledge was from a comment, not from the article itself. I should have mentioned that.

I forgot to answer that. Rule 30 is considered computationally irreducible because to know the value of a cell after, say, 1,000 generations, each step has to be calculated; there is no (known) shortcut formula. Of course, for cryptanalysis it’s the reverse calculation that matters, and that requires the additional property that such reverse calculation be extremely difficult. Stephen Wolfram has claimed that for Rule 30, cryptanalysis is likely to be difficult (given that the values are chosen sparsely from the center column of the cellular automaton).
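For readers unfamiliar with the automaton, here is a minimal sketch of evolving Rule 30 and collecting the center-column bit at each generation. (Cryptographic uses, per Wolfram’s paper, would sample the column sparsely; this just shows the forward calculation that has to be repeated step by step.)

```javascript
// Minimal Rule 30 sketch: evolve one row of cells per generation
// and record the center-column bit each time. The new value of a
// cell is: left XOR (center OR right).
function rule30Step(cells) {
  const n = cells.length;
  return cells.map((_, i) => {
    const left = cells[(i - 1 + n) % n];
    const center = cells[i];
    const right = cells[(i + 1) % n];
    return left ^ (center | right);
  });
}

function centerColumnBits(width, generations) {
  let cells = new Array(width).fill(0);
  cells[Math.floor(width / 2)] = 1; // single seed cell in the middle
  const bits = [];
  for (let g = 0; g < generations; g++) {
    bits.push(cells[Math.floor(width / 2)]);
    cells = rule30Step(cells);
  }
  return bits;
}

console.log(centerColumnBits(101, 16).join(""));
```

Note that going forward is cheap; the cryptographic hope is that recovering earlier states (or the seed) from sparse center-column samples is hard.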

Great conversation folks, and good research. My position is that the crypto community is still reeling from the revelations and the meddling that happened with NIST. I feel there is more doubt over EC encryption than AES, but AES is far from perfect and in some cases is weakened. A few years back a good related-key attack was found and Bruce Schneier recommended increasing the number of rounds for AES-256 from 14 to 28, if I remember correctly. The exact numbers may have differed, but it was clear the round counts were not as high as you would want for Rijndael. That was suspicious at the time.

I take AES as something that is potentially broken, but less likely so than EC (granted, they are different types of encryption). The takeaway for me is that we have reduced, possibly significantly reduced, our faith in peer review and years of in-the-field testing. Heartbleed, apparently, was known to the NSA for two years, so in-the-field testing is very much harmed when this type of action is taken.

It is a real shame, as many have spent their careers in crypto, and the results, specifications and more cannot now be relied upon. What the NSA revelations have damaged is the normal scientific method at its very core. It is now hard to categorise and measure the efficiency of algorithms and standards any more. For me this changes the game and makes crypto extremely dangerous, because we know we do not have the truth regarding measurements and tests. NIST/NSA have really shot a whole industry in the head here.

So a mix of many schemes is probably best. We should not have to do this, but I feel it’s the world we live in: people have messed with the scientific method, and the level of this interference is unknown, meaning we cannot trust results as we should be able to.

Those folks are not looking after us the way we think they are, so greater care is required.

4 Likes

I was thinking of adding a standard encryption algorithm to my experimental one. But then it will be difficult to test whether my algorithm is of any use or not. So I will use only my approach and see how far it goes.

Good move. You can generally add another stream cipher later, once the logic works; it should not affect things too much. If you look at what we do, it’s called pipelining: we create a mix of operations and chain them into a pipeline. You feed plaintext in, it goes through many changes, and out pops your encrypted data. That way you can insert another algorithm into the pipeline.

Interestingly, this pipelining allows SHA-2 and SHA-1 to gain the so-called sponge-like capabilities that SHA-3 has. It’s pretty good. You can see this in self_encryption.cc where we perform the enc/dec steps. We still need to add a pipeline fork for getting the hash of the encrypted data, as we currently have an extra copy operation in that algorithm (copies are bad :-))

I have been thinking a lot about MaidSafe’s use of AES as well. I want this project to succeed, and it seemed odd to me that AES was picked as part of its core encryption. Everyone involved here seems very knowledgeable about its issues, and the thing is, we already have a great alternative. I have been discussing this with a friend of mine, who deserves a lot of the credit for the text below and might decide to join the conversation, and I thought I should share our discussion here. I have also mentioned this to @dallyshalla previously.

I fully acknowledge that my knowledge of MaidSafe’s code is incomplete and based only on the documentation I have cherry-picked to read so far, or on talks I have heard.

AES (Rijndael 128/192/256) is an industry standard and provides reasonable levels of security. It’s fine for your banking, entering your credit card number, etc. But when looking at ciphers for something new that is designed to be as secure as possible, and intended to last a long time, it is a poor choice. The algorithm/standard is open to biclique and related-key attacks: AES-128 was compromised by a biclique attack, and AES-192 and AES-256 have been compromised by related-key attacks. It is absolutely true that the attacks were under controlled circumstances, and it would be nowhere near as easy to attack the cipher in the wild, but AES was compromised nonetheless. These issues exist and will become increasingly easier to exploit over time (and Maid plans on being around for a long time). So, that being said, I think it would be valuable here, as context on the issue, to take a moment to look back at why Rijndael was picked for AES in the first place:

During the AES competition, three major algorithms emerged: Rijndael, Twofish, and Serpent. Twofish is nice, but represented the middle ground for security, and has since seen too little adoption for adequate testing. So, it came down to Rijndael and Serpent. Rijndael had all of the above issues but was specifically chosen because, while acknowledged to be less secure even at the time, it performed far better on late-90s hardware than Serpent. This was important because the government was looking for an algorithm to replace DES and work on every device. Rijndael worked on 90s desktops, laptops, servers, etc. at reasonable speeds. The above security concerns were also computationally infeasible at the time, so from a late-90s standpoint both algorithms were equally secure.

Serpent, however, is a much better choice today. The performance issues are negligible: you can do full-disk encryption with Serpent (with LUKS, for example) and see only a few MB/s difference between an encrypted and a non-encrypted disk. AES uses 10, 12, or 14 rounds of transformation (depending on key size), while Serpent uses 32 rounds. Currently, all known attacks on Serpent are computationally infeasible. While AES has been compromised in its entirety (albeit academically), the closest anyone has come to compromising Serpent is a theoretical paper in which 12 of the 32 rounds could be broken in a controlled environment.

AES has only been compromised in a controlled environment, to the best of our knowledge, and it is perfectly reasonable for most forms of communication. It is also a very important standard, so most programs use AES to stay compatible with anything they need to communicate with. The issue is that there is a much better algorithm out there, which has never been compromised and which provides far more security. When designing something from scratch with security in mind, there is very little reason to use AES over Serpent. People use AES to maintain compatibility with legacy systems, and because it won the AES competition… but without looking at why it won. Maid is about creating a new internet that fixes the flaws of the old one, improves on it, and becomes a decentralized foundation that lasts for a very long time to come. If you are taking the future-looking stance on security and strength of the protocol, AES is not the right choice. Serpent is currently the best we have when it comes to expected future security, with AES becoming more and more likely to be compromised going forward. As a decentralized system it might be hard for Maid to make fundamental protocol changes in the future; I don’t know enough to be sure, but I assume it’s much the same way you can’t update TCP/IP or DNS to deal with new security realities. Instead they need to be replaced. These new ideas are not immune to how people adopt standards. Bitcoin’s code base is also expected to solidify, pushing modifications up a layer, just the same way we layer on top of TCP/IP.

Those are the broad strokes of my concern. Specifically, the application could use the serpent-xts-plain64 cipher, with a 512-bit key size, and the Whirlpool hash, for storage security.

It’s important to resist the urge to make your own crypto for fundamental security, unless there is a really good reason, because it can’t have the same level of extensive academic testing and review that existing crypto has; something I’m sure you are all well aware of, of course.

I would also recommend taking a look at this paper, discussing topics around the selection of AES, written by Bruce Schneier: https://www.schneier.com/paper-twofish-final.pdf For some context on the paper: Bruce Schneier designed all the ’fish block ciphers, and that paper is a comparison of the three AES finalists, going over the ‘Serpent vs. Rijndael’ argument. Some highlights are the chart on page 4 showing the security levels of the ciphers, and sentences like this: “We believe that the results of the straw poll at the Third AES Candidate Conference reflected this dichotomy: those that thought security was paramount chose Serpent, while those more concerned with performance chose Rijndael.” And this paper even predates several of the now-known academic-setting compromises of Rijndael.

The days of performance issues with Serpent are gone; that was all from a late-90s perspective. I use Serpent for my laptop’s full-disk encryption and see no visible performance loss. TrueCrypt even offered a cascade: Serpent within Twofish within AES. And even with all three, the performance impact was tiny. Which also makes the point that if Maid wanted to keep the name and comforting marketing of AES, then, while unnecessary, something like AES+Serpent would completely work (or, even better, Serpent within Twofish). These are very well known and trusted constructions in the crypto world; it would not be an issue.

Sorry for the wall-of-text, that’s something I tend to do. Please take a look and let me know what your thoughts are. I really want Maid to succeed in the long term, and this is a big concern for that to happen.

-Travis

6 Likes

I agree; you will see AES is wrapped in our self-encryption scheme. People tell us AES will never be broken (that is actually the argument made) and that we are going too far with what we do. This is exactly why we go further than AES alone: we don’t trust it, or any single algorithm, right now. I think Serpent is a great contender as well, don’t get me wrong.

So again this is another reason for what we do with our obfuscation steps. Don’t worry about the wall of text it is all very good info.

3 Likes

I have added several layers to the encryption algorithm. It’s now truly massive encryption :smiley: :

  1. The whole message (always padded to 160 characters) is encrypted by adding random bytes (modulo 256) to each byte of the message.

  2. All the bits in the message are then shuffled randomly with a bit-by-bit Fisher–Yates shuffle.

  3. After that the message is encrypted with a 4-round SHA-256 Feistel block cipher.

  4. And lastly the message is encrypted with a Rule 30 stream cipher, based on a random nonce from the CPU nanosecond clock and a private 256-bit key.

Here is a JavaScript demo of the algorithm: https://jsfiddle.net/kktbksrz/
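As a rough illustration of step 2, here is a sketch of a bit-level Fisher–Yates shuffle and its inverse. The keyed swap-index function below is a toy I made up for the example; the linked demo derives its random indices differently:

```javascript
// Sketch of a bit-level Fisher-Yates shuffle (step 2 above) with a
// deterministic toy index function, so it is reproducible/invertible.
// NOT the linked demo's code and NOT cryptographically sound.
function toBits(bytes) {
  const bits = [];
  for (const b of bytes) {
    for (let i = 7; i >= 0; i--) bits.push((b >> i) & 1);
  }
  return bits;
}

function fromBits(bits) {
  const bytes = [];
  for (let i = 0; i < bits.length; i += 8) {
    let b = 0;
    for (let j = 0; j < 8; j++) b = (b << 1) | bits[i + j];
    bytes.push(b);
  }
  return bytes;
}

function shuffleBits(bits, swapIndexFor) {
  // Standard Fisher-Yates: walk down from the last bit, swapping
  // each position with a position at or below it.
  const out = bits.slice();
  for (let i = out.length - 1; i > 0; i--) {
    const j = swapIndexFor(i); // must satisfy 0 <= j <= i
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

function unshuffleBits(bits, swapIndexFor) {
  // Inverse: replay the same swaps in the opposite order.
  const out = bits.slice();
  for (let i = 1; i < out.length; i++) {
    const j = swapIndexFor(i);
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Toy keyed index function (illustration only).
const swapIndexFor = (i) => (i * 2654435761 % 4294967296) % (i + 1);

const bits = toBits([0x68, 0x69]); // "hi"
const shuffled = shuffleBits(bits, swapIndexFor);
const restored = fromBits(unshuffleBits(shuffled, swapIndexFor));
console.log(String.fromCharCode(...restored)); // "hi"
```

In the real cipher the swap indices would of course come from the keyed random stream, not from a fixed formula.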

5 Likes

What did I say?

9b072216ed0582cef71a98028f94d949ab554bfe828f7daf237f050fb78cc09b adb3ce10303739949b505a88251f446ea471b00dd44420b3b7a1698feb008bda d23f82b9ae91737d900468fba75102fbd85be8d6c0bfd9a7da8539b248b33f79 85fb1290331507dbefdf8f0d198096070ea62e94c8703afea58d51ed09748c33 a2f44f7f13ef5c03a7c374aa59ed03a1b85fd008ae9e52d5c91e602073dc4783 884e3b3a549cc3ae25b37b33d094094a31b9dbe5ce527763b54fb8ebaf6ee40a 9f815a5a711fff504443350664fb77823c4a16470e683aedd5c0e1e489be60fa d261fc0afff34182fcc8b5a1e021cd474f1a4ca0c969be44ad1c63f6b41313c1 72dce9dffdffdf89ff2ae4424b1132d76266a5d4c2fc2a1f8d2732eca6439262 c9ca4af6331cff134f21b94825067043c5f4c09ad101b075848e20bec1cb69e0 3a5b0970e4b80465bef8fdaf2fabdc8b64f5272b82268cb951fe0ba974c708d9

That’s an easy one. You said “hi”. I can intuitively tell that by the patterns of the bits in the encrypted message. Just kidding!

EDIT: Actually, unless you changed the private key, your message can be decrypted directly with the private key: hash(‘This is a test key’).

1 Like

Each time I say the same its output is indeed different - that works - looking forward to the real thing :slight_smile:

FYI, that post was from October 2014 :upside_down:

1 Like

Is this safe in this implementation? (I am not familiar with Wolfram’s cipher here.) Usually a clock is a bad seed and a PRNG is used instead; maybe it’s different here? Just curious, not a critique.

In the JavaScript implementation, the CPU nanosecond clock is sampled with the (very) jittery setTimeout() function into a 1024-bit string, which is then hashed into a 256-bit nonce. Randomness is indeed a tricky subject, yet in practice, in the Chrome, Firefox and Internet Explorer browsers, the 1024 bits generated look sufficiently random. What will happen with Moore’s law and computers getting orders of magnitude more powerful? I believe the current implementation will be good enough for many years to come. And if not, the nonce can easily be upgraded.

2 Likes

No worries, it may just be a bone of contention in an audit :thumbsup: There was a Bitcoin wallet that suffered as a result of a bad seed. I think it used the hash of a return value, which turned out to be a 404 page or similar after a site switched to SSL. So not in that league, but clocks are not used as cryptographically secure seeds. I think it’s worth a quick look around for a PRNG for JavaScript, or at how others create seeds, just in case. I would hate to have somebody unfairly pick at this.

Great work though, really nice to see.

3 Likes

If you need randomness in the browser for crypto purposes, you should really use Crypto.getRandomValues() (see the MDN docs) if it’s available (you can polyfill a fallback). Granted, most people will tell you that in-browser crypto is not safe (the most popular post on this subject is https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2011/august/javascript-cryptography-considered-harmful/). No need to debate the finer points here of course, just thought I’d link it.

4 Likes

One potential problem is that the generation of the nonce is, in my case, at the mercy of the implementations of the setTimeout() and performance.now() functions. Yet those are in practice very much “set in stone”, so to speak: Google, Mozilla, Microsoft and Opera Software cannot just willy-nilly tamper with such core functionality without affecting their whole community of developers and applications. So in practice I believe it’s reliable to use such an implementation for generating randomness. I challenge you to come up with an alternative approach that is truly random without the use of clock sampling, and without the need for extra hardware.