The effect, in terms of comb filtering, of adding a 30 ms delay to one speaker in a stereo pair.
Caveats: room reflections are not considered here. A mono source is assumed, reproduced by both speakers, which are 10 m apart. The diagrams for 200 Hz and above assume a driver with 60-degree directivity. Single frequencies are assumed to play for much longer than the delay (much greater than 30 ms), so the patterns shown are steady-state.
Note: the 30 ms diagrams assume speaker #1 has a 0 ms delay and speaker #2 has a 30 ms delay.
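Diagrams like these can be sketched numerically: sum two equal sinusoidal point sources, with the electronic delay added to speaker #2's path delay. The sketch below is an assumption about how the plots were made, not the author's actual method; it uses metric units, c ≈ 343 m/s, and ignores distance attenuation so the nulls and lobes stand out.

```python
import numpy as np

C = 343.0          # speed of sound in m/s (assumed)
SPACING = 10.0     # speaker separation in metres, per the caveats
DELAY = 0.030      # electronic delay on speaker #2, in seconds

def summed_amplitude(freq, x, y, delay=DELAY):
    """Relative amplitude of the two summed speakers at point (x, y).

    Speaker #1 sits at (-5, 0) and speaker #2 at (+5, 0); speaker #2's
    signal is delayed. Amplitude 2 is a full lobe, 0 is a null.
    """
    d1 = np.hypot(x + SPACING / 2, y)   # path length from speaker #1
    d2 = np.hypot(x - SPACING / 2, y)   # path length from speaker #2
    # Total phase difference = path-length delay + electronic delay.
    phase = 2 * np.pi * freq * ((d2 - d1) / C + delay)
    # Two unit phasors summed: |1 + e^{j*phase}| = 2*|cos(phase/2)|.
    return np.abs(2 * np.cos(phase / 2))

# A listener on the centre line (equal path lengths) at 40 Hz:
print(summed_amplitude(40.0, 0.0, 3.0))   # ~1.62: 30 ms is 1.2 cycles at 40 Hz
```

Evaluating `summed_amplitude` over a grid of (x, y) points and contour-plotting the result reproduces the kind of null/lobe maps shown below.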
40 Hz, 0 ms
40 Hz, 30 ms
50 Hz, 0 ms
50 Hz, 30 ms
60 Hz, 0 ms
60 Hz, 30 ms
70 Hz, 0 ms
70 Hz, 30 ms
80 Hz, 0 ms
80 Hz, 30 ms
Note: At 200 Hz, the period of a single cycle is 5 ms, so a 30 ms delay is exactly six full cycles. The delayed speaker is therefore back in phase, and there is no difference in the null and lobe patterns between the 0 ms and 30 ms cases. Any multiple of 200 Hz (400, 800, 1000, 2000, 4000, 8000, etc.) exhibits this same behavior. For this reason I've included some additional in-between frequencies, where the delay is not a whole number of cycles.
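The point above can be checked directly: multiply each diagrammed frequency by the 30 ms delay and see whether the result is a whole number of cycles. A minimal sketch:

```python
DELAY = 0.030  # seconds

# Cycles contained in the 30 ms delay at each diagrammed frequency.
# A whole number means the delayed speaker is back in phase, so the
# null/lobe pattern is identical to the 0 ms case.
for f in (200, 222, 400, 431, 800, 1000, 1234, 2000, 2531, 4000, 8000):
    cycles = f * DELAY
    whole = abs(cycles - round(cycles)) < 1e-9
    print(f"{f:5d} Hz: {cycles:6.2f} cycles{'  (whole)' if whole else ''}")
```

The multiples of 200 Hz all come out whole (6, 12, 24, 30, 60, ...), while the in-between frequencies (222, 431, 1234, 2531 Hz) do not, which is why their 30 ms patterns differ visibly from their 0 ms patterns.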
200 Hz, 0 ms. Wavelength = approx 5.6'
200 Hz, 30 ms
222 Hz, 0 ms
222 Hz, 30 ms
400 Hz, 0 ms. Wavelength = approx 2.8'
400 Hz, 30 ms
431 Hz, 0 ms
431 Hz, 30 ms
800 Hz, 0 ms. Wavelength = approx 1.4'
800 Hz, 30 ms
1000 Hz, 0 ms. Wavelength = approx 13.5"
1000 Hz, 30 ms
1234 Hz, 0 ms
1234 Hz, 30 ms
2000 Hz, 0 ms. Wavelength = approx 6.8"
2000 Hz, 30 ms
2531 Hz, 0 ms
2531 Hz, 30 ms
4000 Hz, 0 ms. Wavelength = approx 3.4"
4000 Hz, 30 ms
8000 Hz, 0 ms. Wavelength = approx 1.7"
8000 Hz, 30 ms
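As a sanity check on the wavelengths quoted in the captions, a short sketch using c ≈ 1130 ft/s (an assumed room-temperature value; the exact speed varies with temperature):

```python
C_FT = 1130.0  # speed of sound in ft/s at room temperature (assumed)

# Wavelength = speed of sound / frequency; print feet above 1 ft,
# inches below, matching the units used in the captions.
for f in (40, 50, 60, 70, 80, 200, 400, 800, 1000, 2000, 4000, 8000):
    wl = C_FT / f                      # wavelength in feet
    if wl >= 1.0:
        print(f"{f:5d} Hz: ~{wl:.1f} ft")
    else:
        print(f"{f:5d} Hz: ~{wl * 12:.1f} in")
```

This reproduces the caption values: ~5.6' at 200 Hz, ~2.8' at 400 Hz, ~1.4' at 800 Hz, then ~13.5", ~6.8", ~3.4", and ~1.7" at 1, 2, 4, and 8 kHz.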