Continuing the discussion from Process Separation:
Somehow @ned14 and I got into a discussion about programming practices, and I think it's worthy of its own thread. I know Niall could go much longer than I could on this subject, so I'm wondering if he will take the bait into this discussion.
So one new question, Niall: do moves in C++ make things more confusing for developers? How about the ref-qualified member function stuff (`void foo() &&`)? I showed that syntax to some other developers and they looked terrified. However, both allow some slick "tricks" with libraries…
I don’t know much about Ada, but since the US government was heavily involved I always assumed it was crap. I heard one person in particular talk about his experience programming in Ada (years ago, obviously), and he had positive things to say about it. The biggest difficulty seemed to be that there were no Ada programmers in industry outside of US government work, so an Ada programmer couldn’t simply be hired, and I’m guessing on-the-job training was difficult too. I honestly don’t know what caused its demise; the Wikipedia page doesn’t have enough information.
I don’t have any metrics, but I know from personal experience that moving to abstractions for the particular domain has worked best. Sometimes even working with iterators directly can cause problems, which is why I am a huge fan of the `boost::range` library. I probably became too much of a supporter; my aversion to writing loops got a bit out of hand, I think. Even smaller things like using `boost::iterator_range<const boost::uint8_t*>` to process binary data were a huge advantage over the older pointer-and-length idiom. Many loops became infinite because the pointer and length weren’t updated in lock-step, whereas an abstraction for a chunk of data didn’t have this problem and resulted in easier-to-read code. For some reason `range.advance_begin(10)` is so much easier to read than `data += 10; length -= 10`. The good thing was that it was really easy to convince programmers to use `iterator_range`; the bad was that sometimes they would "forget", or not see when it would be helpful.
I was big on exceptions for a while because they made some things simpler (functions that should rarely error, like memory allocation failures), but tracking flow control is difficult (if you want to make a program nearly impossible to reverse-engineer, use exceptions, I'm told). Eventually I tried to use exceptions only for shutdown conditions (this file HAS to be opened, it was not, so throw an exception and abort the process), but somehow their usage would sneak into recoverable errors. Then you've created another problem with exception safety in classes: what a nightmare. Are you against exceptions in the "it's time to go down" cases (like out-of-memory) too?
And I assume `std::expected` is from the Alexandrescu talk I saw a while ago? I was so excited after watching that video that I went to work and told everyone about it. And no one cared. I might have been talking too fast for people to make sense of it, but I really liked the idea. I didn't know it was being considered for the standard library; I would really like for this to be in the standard.
Of course fewer bugs result, but my first encounter with a code-generation compiler bug taught me that the abstraction amplified the difficulty of finding it. I didn't know what to do when my favorite phrase, "the bug is always in my code", might have been wrong. Actually, it should be "the segfault is always my fault".