Archive for December 2009

Interpreting Coherence on the FFT Analyzer

December 31, 2009

When we make the transition from the simplistic renderings of one-dimensional acoustical analysis (i.e., those that present amplitude only), we have several new traces to contend with. The most notable are phase and coherence. Phase is often minimally grasped and coherence even less so. These are both difficult because we don’t have simple mechanisms in our ear/brain system to detect them, especially when either of these functions varies with frequency (which is almost always the case).

Last week I received an interesting set of questions about coherence from Geoff Maurice of Brockville, Ontario, and I thought it best to answer them here since others may share the interest and contribute to the discussion. What Geoff is looking for is coherence explained without going hard into the math, so here we go.

1) Coherence is a statistical metric: it monitors the extent of the variation in the data sampled. Therefore, first and foremost, we must have multiple samples. In FFT terms this means we are “averaging” the data. If we have at least two samples, we can statistically evaluate the average amplitude and phase values and the deviation between the individual samples. An average amplitude value of +6 dB could be the product of two nearly identical – or vastly different – samples. The coherence value indicates (inversely) the extent of the deviation between the average and the individual samples. Low deviation = high coherence, and vice versa.
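To make point 1 concrete, here is a minimal numpy sketch of how magnitude-squared coherence falls out of averaging. This is the standard textbook (Welch-style) formula, not SIM’s specific implementation, and the signals are my own invented examples: when the samples agree (a copy that merely differs in level), coherence is 1; when uncorrelated noise varies from sample to sample, coherence drops.

```python
import numpy as np

def coherence(x_blocks, y_blocks):
    """Magnitude-squared coherence per frequency bin from multiple samples.

    x_blocks, y_blocks: lists of equal-length time-domain blocks
    (reference and measurement). Returns values in 0..1 per bin.
    """
    X = np.fft.rfft(np.asarray(x_blocks), axis=1)
    Y = np.fft.rfft(np.asarray(y_blocks), axis=1)
    Gxy = np.mean(np.conj(X) * Y, axis=0)   # averaged cross-spectrum
    Gxx = np.mean(np.abs(X) ** 2, axis=0)   # averaged auto-spectra
    Gyy = np.mean(np.abs(Y) ** 2, axis=0)
    return np.abs(Gxy) ** 2 / (Gxx * Gyy)

rng = np.random.default_rng(0)
ref = [rng.standard_normal(512) for _ in range(8)]   # 8 averages

# Same signal in every sample, just 6 dB louder: zero deviation
identical = coherence(ref, [b * 2.0 for b in ref])   # ~1.0 in every bin

# Add noise that changes from sample to sample: deviation between samples
noisy = coherence(ref, [b + rng.standard_normal(512) for b in ref])
# noisy is well below 1 across the band
```

Note that with only one block the formula always returns 1 – which is exactly why coherence requires multiple samples, as stated above.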

2) The deviations between the samples that degrade coherence can be EITHER amplitude or phase or BOTH. Most factors affect both. Examples include wind, ambient noise, reverberation, and a change in a filter setting.

3) There are some factors that degrade coherence continually and some that degrade it only for a limited time. Continual degradation is caused by the summation of the original signal with a relatively short delayed copy (the most obvious example is an echo). The comb filtering results in a series of peaks and dips in both amplitude AND coherence. Variable degradation comes from non-correlated sources such as ambient noise.
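The comb-filter half of point 3 is easy to check numerically. This sketch (my own illustration, with an assumed 1 ms reflection) sums a unit signal with an equal-level delayed copy and evaluates the combined magnitude at a few frequencies:

```python
import numpy as np

delay_s = 0.001                            # assumed 1 ms reflection
freqs = np.array([250.0, 500.0, 1000.0])   # test frequencies in Hz
phase = 2 * np.pi * freqs * delay_s        # phase lag of the late copy
combined = np.abs(1 + np.exp(-1j * phase)) # |direct + delayed|

# 250 Hz: copies 90 degrees apart  -> partial addition (sqrt(2), +3 dB)
# 500 Hz: 180 degrees apart        -> complete cancellation (0, the dip)
# 1 kHz:  360 degrees apart        -> full addition (2x, the +6 dB peak)
```

Sweeping the frequency axis instead of three spot values traces out the full comb: peaks at multiples of 1/delay and dips exactly between them.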

4) Geoff asks: what mechanism causes a complete cancellation to reduce the coherence? This is an interesting one. A complete cancellation at a given frequency results from the summation of equal-level signals that are 180 degrees apart. At the place where this occurs (our measurement mic) the original signal (the first arrival) is not audible since it has been neutralized by the reflection. Since our transfer function analyzer is looking to compare the original electronic signal to the acoustic arrival, it finds the signal missing at the mic. This does NOT mean there is silence at the mic. Far from it. Instead what we have is any and all other signals in that frequency range: reflections from long ago, signals from other sources (stage, sound fx) and ambient noise. All of these signals will have very poor correlation to the original electronic reference signal. On the contrast side, the same early reflection will make for excellent coherence at other frequencies where the late arrival falls 360 degrees out of phase – which means it is “in phase” and will add to the original signal. Strong early reflections make for stable coherence – high AND low.
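Point 4 can be simulated end to end. In this sketch (my own toy model, using scipy’s Welch-based coherence estimator rather than SIM) the “room” adds an equal-level reflection 64 samples late plus a little uncorrelated noise standing in for everything else arriving at the mic. Coherence collapses at the cancellation frequency, where the mic mostly hears noise, and stays near 1 where the reflection adds in phase:

```python
import numpy as np
from scipy.signal import coherence

fs = 48_000
rng = np.random.default_rng(1)
src = rng.standard_normal(4 * fs)       # 4 s of noise as the reference
d = 64                                  # reflection delay: 64 samples (~1.3 ms)

mic = src.copy()
mic[d:] += src[:-d]                     # equal-level reflection -> comb filter
mic += 0.2 * rng.standard_normal(mic.size)  # uncorrelated "room" noise

f, coh = coherence(src, mic, fs=fs, nperseg=1024)
null = np.argmin(np.abs(f - fs / (2 * d)))  # 375 Hz: reflection 180 deg late
peak = np.argmin(np.abs(f - fs / d))        # 750 Hz: reflection 360 deg late
# coh[null] is very low: the correlated signal cancelled, noise remains
# coh[peak] is near 1: the reflection adds in phase and boosts the signal
```

Note the amplitude trace would show the same comb shape – the dip and the peak land at the same frequencies as the coherence dip and peak, just as described above.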

5) Geoff asked about the relationship of the number of averages to the coherence value. The coherence value is calculated from whatever quantity of averages you choose. If the deviation percentage is the same over two samples as it is over 16, the coherence value will be matched. In the case of SIM we use a coherence blanking function that screens out data below a given threshold. THAT threshold varies with the number of averages. Why? If you have just two samples you have “he said/she said” – who is right? (we know this but I am not saying) – so these two had better match or we cannot resolve it. With 16 samples we have lots of data to work with. One or two can be far off and not pull the average down too far. So we use a high threshold (90%) for blanking with 2 averages and a low threshold (20%) with 16 averages.
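Point 5 as a sketch: blank (hide) any data point whose coherence falls below a threshold that relaxes as averages accumulate. The 90%-at-2 and 20%-at-16 anchor values come from the text above; the interpolation between them is my own assumption, not SIM’s actual curve:

```python
import numpy as np

def blanking_threshold(n_averages):
    """Coherence threshold below which trace data is blanked (hidden)."""
    # Interpolate (in log2 of the average count) between the two quoted
    # anchors: 90% at 2 averages, 20% at 16 averages (assumed curve shape).
    return float(np.interp(np.log2(n_averages), [1.0, 4.0], [0.90, 0.20]))

def blank(freqs, mags, coh, n_averages):
    """Return only the (freq, mag) points whose coherence clears the bar."""
    keep = coh >= blanking_threshold(n_averages)
    return freqs[keep], mags[keep]

# Hypothetical trace: four frequency points with mixed coherence
freqs = np.array([100.0, 200.0, 400.0, 800.0])
mags = np.array([0.0, 3.0, -2.0, 1.0])
coh = np.array([0.95, 0.40, 0.85, 0.10])

f2, _ = blank(freqs, mags, coh, n_averages=2)    # strict: only 0.95 survives
f16, _ = blank(freqs, mags, coh, n_averages=16)  # lenient: only 0.10 is blanked
```

The same trace thus shows far more data at 16 averages than at 2 – not because the room changed, but because the statistics have earned more trust.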

6) The next question was: would you call a coherence value of 50% accurate? Yes… and no. A coherence of any value can be accurate. 0% coherence is an accurate representation of all noise and no signal. But I think where the question goes is: how much coherence is good enough to act on? The answer here is the ultimate in sliding scale. Grading a curve on a curve. If I am measuring a frontfill speaker in the second row, 50% is a very poor value. If measuring an array in the back of an arena, 50% is a great number. For me, looking at coherence trends is more helpful than considering a particular number. A drop to 50% in an area where others are much higher gets my attention, as would a drop to 20% where the median is 50%. A rise to 50% where most are very low would also get my attention. Why is this range getting through, while others aren’t?

That’s a start on this subject. Anyone with thoughts is invited to comment.



December 18, 2009

LDI 2009

On November 20th and 21st I went to LDI and conducted seminars on (guess what) sound system design and optimization. These were part of a Cirque du Soleil-sponsored education program and each session was 2 hours. I find it difficult to cram that subject into a 32-hour seminar, so compressing it down to 2 is just too much fun. Nonetheless we did have two good sessions – the best was the second one, where I was joined by Paul Garrity, Matt Ezold (both from Aurbach and Associates) and Bob Barbagallo from Solotech. All of us have been involved in a large number of the Cirque productions such as the Beatles, Zumanity, Zaia and others. This gave us an opportunity to share our perspectives on how these projects come together. Paul and Matt described their role in translating the artistic vision into a stage, flyspace, lifts and a room, while Bob B described the process of taking the macro version down to the minute details of wire terminations, etc. I described my role in taking the hundreds of speakers and making them work together to create even coverage over the space. It was an informative day for me, getting a chance to sit back and see what goes on before I am ever brought into the picture.

Sound Systems: Design & Optimization – 2nd Edition

December 18, 2009

The second edition came off the press in October of this year and I am quite happy with the result. First thing is that the book can sit on your lap or a table without flopping around. The graphics came out much better. I was able to get them reformatted so as to get the most critical elements like MAPP and FFT plots to the maximum size allowable on the page. I also was much more directive in getting related graphics – like a series of 4 plots – to be placed together on the page so that they can be viewed together. This was not a minor effort. Probably 80% of the graphics were reworked, but this time the effort seemed to pay off. When I open the book, the sizes seem right and, most importantly, the data is visually clear enough to prove the points being made.

I have over two years of teaching experience behind me with the old book, and this was very helpful in terms of guidance toward the second edition.  I expanded on things that were found to be the most useful in my seminars and reduced some areas that were redundant as well as ones that repeated themselves or said the same thing over again, repeatedly.  While the publisher gave me a goal of 25% new material and 15% longer than the 1st edition, I just felt it would be cruel and unusual punishment to add 75 pages.  I did manage the 25% new material but increased the density so that the expansion is less than 8% (40 pages). I have yet to hear a complaint that my book was too short so I tried my best NOT to make it longer.

The section on cardioid subwoofers was expanded and contains a much improved explanation of how the steering occurs and how frequency affects the directionality of subs.

The discussion of the behavior of modern line array speakers was also expanded and includes the use of all-pass filters for beam steering.

The design chapters were reworked very heavily with a much more linear set of design procedures for deriving models, spacing, splay angles etc.

A final chapter was added that provides example designs for a theatre, concert hall, arena and house of worship. This chapter brings together all the concepts described previously and is almost entirely graphical (about 200 new graphics were created for it).

Hopefully this will help clarify what’s new. Any comments are welcome.