As far as I understand, the question is why the line of reasoning is easier if we assume fixed-point processing.
Currently the floating point arithmetic of our workstations is not perfect. Almost every operation introduces tiny errors, just as the limited resolution of discrete values does in fixed point arithmetic; in floating point the errors are simply smaller. So floating point arithmetic of a certain word length is more precise than fixed point arithmetic of the same word length. But the quality of a floating point implementation also depends on the quality of its algorithms.
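A quick illustration of those tiny errors, using Python's built-in double precision floats (a sketch of the general point, not of any particular workstation's arithmetic):

```python
import sys

# Summing 0.1 ten times should give exactly 1.0, but 0.1 has no exact
# binary representation, so each addition rounds slightly.
total = sum(0.1 for _ in range(10))
print(total == 1.0)        # False
print(abs(total - 1.0))    # a tiny residual error on the order of 1e-16

# Machine epsilon: the smallest relative step a double can resolve.
print(sys.float_info.epsilon)
```

Even at double precision the error never quite vanishes; it is just pushed far below audibility.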
If I am not mistaken, it is difficult to determine the precision (or, so to speak, the resolution) of floating point arithmetic. But as I said in my first post, I remember this statement from Bob Katz's book, in the chapter "Single Precision, Double Precision, or Floating Point?" (p. 206f in the edition I own):
32-bit floating point processors are generally regarded as inferior-sounding to 48-bit (double-precision fixed), and 40-bit float.
That is why I based my argument on a fictitious 48-bit fixed-point audio processor.
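A small sketch of why fixed point makes the reasoning easier: a fixed-point format has one constant quantization step across its whole range, whereas a float's step (one ulp) grows with the magnitude of the value. (The full-scale ±1.0 scaling of the 48-bit format below is my own assumption for illustration.)

```python
import math

# A double's step size (one ulp) depends on the value's magnitude:
print(math.ulp(1.0))       # 2**-52, the step just above 1.0
print(math.ulp(1024.0))    # 2**-42, a much coarser step at higher magnitude

# A hypothetical 48-bit fixed-point format with full scale +/-1.0 has a
# constant step everywhere in its range:
step_48bit_fixed = 2.0 ** -47
print(step_48bit_fixed)
```

With a constant step, statements like "the error is at most half a step" hold for every sample regardless of level, which is what makes the fixed-point line of reasoning so much simpler.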