Hello Katja,
Fortunately, I had a conversation with a colleague today about IR normalisation (for another reason), and I think we have our answer.
In acoustics simulation software (3D room-acoustics simulations), the auralisation module applies a gain at two stages, first when deriving the IR and then when convolving it with the anechoic file, and stores this information in the file.
In the case of the IR, it applies a gain so that the maximum peak reaches full digital scale (0.9999999999), and then applies the same gain to the rest of the IR before storing it as 24-bit integers.
When doing the auralisation, it applies another normalisation gain to prevent clipping of the auralised file, taking into account that auralisation is done at 16-bit integer resolution.
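To make the second stage concrete, here is a minimal sketch of that auralisation step in Python with NumPy: convolve the anechoic signal with the IR, then normalise the result to 16-bit full scale so it cannot clip. The function name and the exact full-scale convention (2^15 - 1 for signed 16-bit) are my assumptions, not something prescribed by the simulation software.

```python
import numpy as np

def auralise(anechoic, ir, bits=16):
    # Convolve the anechoic signal with the IR (the "wet" result).
    wet = np.convolve(anechoic, ir)
    # Normalise so the peak sits at signed full scale (32767 for 16-bit).
    peak = np.max(np.abs(wet))
    full_scale = 2 ** (bits - 1) - 1
    gain = full_scale / peak
    pcm = np.round(wet * gain).astype(np.int32)
    return pcm, gain
```

As in the IR case, the gain returned here should be kept with the file, since it is otherwise lost.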
A simple program for the IR normalisation (converting the floating-point IR to PCM) could be:
Variable definitions
  FP(N) = floating-point impulse response
  PCM(N) = integer (PCM) impulse response
  M = sample length
Algorithm
  MAX = 0
  For N = 1 to M
    If abs(FP(N)) > MAX then MAX = abs(FP(N))
  Next N
  GAIN = ((2^23) - 1) / MAX   ' full scale for signed 24-bit PCM
  For N = 1 to M
    PCM(N) = round(FP(N) * GAIN)
  Next N
  Store GAIN
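The pseudocode above can be sketched as a short Python function using NumPy; the 24-bit default follows the 24-bit integer storage mentioned earlier, and the function name is just illustrative.

```python
import numpy as np

def normalise_ir(fp_ir, bits=24):
    # Find the peak absolute value (MAX in the pseudocode).
    peak = np.max(np.abs(fp_ir))
    # Gain that maps the peak to signed full scale (2^23 - 1 for 24-bit).
    gain = (2 ** (bits - 1) - 1) / peak
    # Apply the same gain to every sample and round to integers.
    pcm = np.round(fp_ir * gain).astype(np.int32)
    return pcm, gain
```

The returned gain is what needs to be stored alongside the PCM file so the original floating-point levels can be recovered later.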
As you can see, the gain is arbitrary and depends on the IR (or the recorded sweep response); as far as I know there is no general rule.
In our case we should do this normalisation of the file and store the applied gain somewhere (e.g. as metadata in the wav file), as it would be useful in future applications.
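How the gain is stored depends on the tooling; embedding it in the wav itself (e.g. in a RIFF LIST/INFO or BWF chunk) needs a library with metadata support, which Python's standard library lacks. As a simple placeholder, a sidecar file next to the wav works with nothing but the standard library; the filename convention and key name here are purely my own assumptions.

```python
import json

def store_gain(wav_path, gain):
    # Write the applied normalisation gain to a JSON sidecar
    # next to the wav file (e.g. "ir.wav" -> "ir.wav.json").
    meta_path = wav_path + ".json"
    with open(meta_path, "w") as f:
        json.dump({"normalisation_gain": gain}, f)
    return meta_path
```

Whatever the mechanism, the point is that the gain travels with the file so future applications can undo or account for it.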
In the case of multichannel IRs this can be a bit more complex, but one thing at a time is better.
Hope this helps
Bassik