I’m still new to the forum and still learning about the platform, but here are my initial impressions, which might address your points:
I think there are drawbacks. Besides introducing performance limits like I mentioned before, I would say that your method would reduce obfuscation and undercut the goal of keeping the security model resilient to quantum computing. The SAFE network is intended to stand the test of time. It could be theorized that (eventually) whatever basic encryption technique you choose will be weakened or broken once the attacker has enough computing power and the whole file, but self-encryption helps get around this. While the minimum number of chunks for self-encryption might be 3, larger files give the XOR pass much more to work with (see the toy sketch below), to the point where it becomes (dare I say) impossible to extract any meaningful data. Telling the adversary that one can link together meaningful info by picking the right set of 3 chunks at a time gives them a lot more to start with than just saying “It’s all random… good luck”.

Since the choice of optimal trade-off between obscurity and redundancy is subjective, it makes a lot of sense to just let the network do this automatically based on file size. Also, redundancy can always be improved by adding more storage (which also helps obfuscation), so there’s no need to cripple your future-proof encryption scheme for more of the same… right? Pre-chunking might also negatively affect de-duplication.
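To make the XOR-pass point concrete, here is a toy sketch of the idea in Python. To be clear, this is my own simplified illustration, not the actual self_encryption implementation (which, as I understand it, layers AES encryption on top of the XOR obfuscation); the chunk count, hash choice, and pad derivation are all assumptions for demonstration. The property that matters is that the pad for each chunk is derived from the *other* chunks’ hashes, so no chunk can be de-obfuscated in isolation:

```python
import hashlib
from itertools import cycle

def split_chunks(data: bytes, n: int = 3) -> list[bytes]:
    """Split data into n roughly equal chunks (3 being the minimum)."""
    size = -(-len(data) // n)  # ceiling division
    return [data[i * size:(i + 1) * size] for i in range(n)]

def xor_with_pad(chunk: bytes, seed: bytes) -> bytes:
    """XOR the chunk against a keystream stretched from the seed.
    (Toy only: cycling a hash as a keystream is not a secure cipher.)"""
    stream = cycle(seed)
    return bytes(b ^ next(stream) for b in chunk)

def self_encrypt_sketch(data: bytes, n: int = 3):
    chunks = split_chunks(data, n)
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    obfuscated = []
    for i, chunk in enumerate(chunks):
        # The pad for chunk i depends on its neighbours' hashes, so an
        # attacker holding one chunk (or the wrong set) sees only noise.
        seed = hashlib.sha256(hashes[(i - 1) % n] + hashes[(i + 1) % n]).digest()
        obfuscated.append(xor_with_pad(chunk, seed))
    # The pre-obfuscation hashes act as the "data map" needed to reverse this.
    return obfuscated, hashes
```

The larger-files point falls out of this: with more chunks, even an adversary who knows the “sets of 3” structure still has to find the right neighbours among combinatorially many candidates, and if no structure is disclosed at all they have nothing to anchor on.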
Not necessarily, if what I mentioned above is true. Your suggestion may be a feature that users would like to play with, but the current default seems to me to be ideal. Hypothetically, you could tell users that increasing file size reduces resiliency but increases obfuscation, but if the network is already handling the multiple levels of redundancy automatically in the background, this sort of statement would misrepresent the actual data resiliency. I guess the point I am trying to make is that you get both benefits with the current scheme, whereas introducing your concept could sacrifice some key properties while increasing complexity.
It is messy. As @neo mentioned, QuickPar (or par2 on Linux) tries to do this in a standardized way and is better than a couple of split and cat commands in a terminal or splitting up a zip or tar archive (for example, `par2 create -r10 recovery.par2 myfile` to generate 10% recovery data, then `par2 repair recovery.par2` if pieces go missing). I’m not convinced that this would be better than just keeping the files in their native formats and then having specialized apps for specific file formats that could achieve what you’re asking (and more) while being more user friendly. Do one thing and do it well?