I only mention such huge bit counts to illustrate the possibilities. In any real proposal we would stick with 64 bits; it could equally be 128 bits.
##### If in the future we need finer division, the core could be made to recognise two versions of the note: the first version using the number of bits decided now (64 or 128), and a new version with double the bits. The core always uses the larger when storing a value. 2^128 is roughly 3.4 × 10^38, an astronomically large number, so I doubt we will ever need more than 128 bits for this.
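To make the upgrade path concrete, here is a minimal sketch of how a value stored in the first note version could be widened into the doubled version. The function name `widen` and the choice of 18 and 38 decimal places are my own illustrative assumptions (18 digits fit in a signed 64-bit integer, and 38 digits fit in a signed 128-bit integer); the document does not fix these names.

```python
# Hypothetical widening of a stored fraction when the note format is
# upgraded from 64-bit (18 decimal places) to 128-bit (38 decimal places).
OLD_PLACES = 18   # assumption: 18 decimal places in the v1 note
NEW_PLACES = 38   # assumption: 38 decimal places fit in a signed 128-bit int

def widen(v1_value: int) -> int:
    """Scale a v1 fractional amount up to the v2 representation.

    The numeric value is unchanged; only the number of trailing
    decimal places grows, so the core can always keep the larger form.
    """
    return v1_value * 10 ** (NEW_PLACES - OLD_PLACES)

# 0.125 in the v1 format (125 followed by 15 zeros)...
v1 = 125_000_000_000_000_000
# ...is still 0.125 in the v2 format, just with 20 more trailing zeros.
v2 = widen(v1)
```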
64 bits allows 19 digits of accuracy, and if we also want negative values it works out to only 18 digits. Let's stick with 18 digits. This is a form of fixed-point precision in which all 18 digits are decimal places.
Then 0.125 is stored as 125000000000000000
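The conversion from a written fraction to this integer form is just a scale by 10^18. A small sketch (the helper name `to_fixed` is mine, not from the proposal):

```python
from decimal import Decimal

PLACES = 18           # all 18 digits are decimal places, as described above
SCALE = 10 ** PLACES

def to_fixed(text: str) -> int:
    """Parse a decimal fraction like '0.125' into its 18-place integer form."""
    # Decimal parses the text exactly, so scaling by 10**18 loses nothing
    # for values with at most 18 decimal places.
    return int(Decimal(text) * SCALE)

stored = to_fixed("0.125")   # 125 followed by 15 zeros
```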
It is easy to do maths on this, with no need for BCD (one nibble, 4 bits, per digit) or for storing one digit per byte. Adding two values together cannot exceed the capacity of an integer with that many bits. If the sum exceeds 999999999999999999 after the add, then a coin is released from being frozen and returned in addition to the note; we subtract 10^18 from the new note's value, which exceeded a coin's worth, to account for it.
Then all that is needed is to specify the number of places the current system wishes to work in: hundredths, millionths, etc. Only do this if we want it. Maybe it can be an account setting the user chooses (for viewing the value only, not for rounding or anything like that).
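A display-only setting like this might look as follows. This sketch truncates rather than rounds, consistent with the point that the stored value is never altered; the function name `display` and the default of two places are my own assumptions.

```python
PLACES = 18  # the stored value always carries all 18 decimal places

def display(value: int, shown_places: int = 2) -> str:
    """Format a stored amount for viewing only.

    The stored integer is untouched; we simply truncate the fractional
    digits to the number of places the account setting asks for.
    """
    whole, frac = divmod(value, 10 ** PLACES)
    if shown_places == 0:
        return str(whole)
    digits = f"{frac:018d}"[:shown_places]   # pad to 18 digits, then truncate
    return f"{whole}.{digits}"

# A note holding 1.25 coins, viewed at the default two places.
text = display(1_250_000_000_000_000_000)
```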
TL;DR: the actual value of the fraction is stored in an integer, using the maximum number of digits the "int" can contain, as a count of that very small fraction.