tux99 wrote:the ADC uses 24 bit but then transfers the recording over the wire as 32 bit. I wonder why it does that
Many of the new interfaces are now using CPUs and DSP chips that can access memory only in 8, 16, or 32 bit words; not 24-bit. For example, Intel/AMD CPUs can't directly manipulate a 24-bit word. So internally, a 24-bit device must make each sample 32-bit. That doesn't mean you get any extra resolution. They just pad the extra 8 bits (the upper or lower byte) with zeroes. I assume Focusrite transfers it as 32-bit because a driver running on Intel/AMD ultimately must pad out to 32-bit anyway. So in this case, it's not going to be less efficient (than if you converted 24-bit to 32-bit and back).
The OP should record at 32-bit linear. He should not use plughw to convert to 24-bit during recording/playback, as that adds latency. After recording, if desired, use the DAW's "sample conversion" function (most DAWs have such a feature) to convert to 24-bit linear, or just save as a 24-bit WAV file. There will be no loss of fidelity whatsoever.