Archive for the ‘Alignment & Design’ category

Earthquake video of our sound system at Tokyo Disney Sea

March 13, 2011

Here is a video from Tokyo Disney Sea taken during the quake. It was sent to me by Matt Ferguson. This sound system was designed by Roger Gans, Mike Shannon, Bill Platt and myself. Also involved on the audio side were Richard Bugg, Francois Bergeron, Martin Carillo, and many others. We had strict design limits on the poles regarding weight – for wind and ….you got it. Each of the poles is retracted inside a chamber and then hydraulically extended twice a day for the day-time and night-time shows. This was the day show.

Here is a picture of the interior of one of the speaker vaults. The speakers are already up and out at this point. We had different sizes of poles for different areas because some locations did not need as many speakers. Of the poles shown in the video, only the lighter weight unit goes down – but there are 44 poles total so I have no idea about the rest.

It is worth noting that a lot of time and effort was put into the emergency announce capability of the system. This was something that both the Japanese and Disney engineers kept very much in focus. The announcements that you hear on the video were done in advance and loaded onto a “360” sample playback device. It was necessary to allot 30% more time for the Japanese version of an equivalent phrase in English.

We played those samples MANY times through all sorts of contingency scenarios. It was a good feeling to see it come through when it really counted.

Speaker enclosure at Tokyo Disney Sea

LR Mains+Downfill

This is one of the big poles. 21 ft total

Some big poles with rehearsing boats

Here are a couple of the poles during a boat rehearsal.

Big pole -Speakers with lighting

In the big scheme of things this is very small. I was happy to see the buildings holding up there. I wish for the safety and speedy recovery for everyone there.


The Emperor’s New Stereo

March 9, 2011

I was contacted a few months back by Jose Luis Diaz about an article I wrote for Mix magazine in 1998. He asked if I had a copy of the original in pdf form.  No. I am not the best archivist. 😦

Well it turns out he had a Spanish translation of it and he RETRANSLATED it back to me. 🙂

It’s funny for me to see the old article and the extremely crude drawing quality of that era. As for the subject matter itself, it still holds up pretty well. Not too long ago I went to another concert with a 5-piece jazz band where the piano was on the left and the guitar on the right. We had really great seats on the left side. The piano, drums and bass were fresh and clear. The guitar I heard only when it came back off the wall on the right side. Bet it sounded great at FOH.

So here it is ……once again.  And if you want the Spanish version go here


The Emperor’s New Mix

Unveiling the stereo myth on live sound

(Bob McCarthy Mix Magazine January 1998)

Once upon a time, there was an emperor living in a giant palace.

After mixing some tracks in his private studio, the emperor was so happy with the stereo image that he decided to throw a concert for his 5000 closest friends.

For the occasion, he bought a new luxuriously advanced stereo sound system.

Before the show started, the emperor told the audience what the sound system salesman had said to him:

”This system has such magic qualities that it’s capable of creating perfect stereo imaging in every seat. Every person that doesn’t experience stereo imaging is, obviously, vulgar and not suitable for his job.”

Everyone was seated to the left and to the right of the center walkway.

The sound system was set up in such a way that all the seats were inside the coverage area of the left and right P.A. towers.

The concert began.

The emperor was sitting in the center of the room, and he marveled at his own sophistication. The stereo image was perfect!

Everyone else shuffled in their seats, realizing how vulgar they were and the danger they faced of losing their jobs if they were caught. To them, the sound appeared to come almost exclusively from the P.A. tower nearest their location.

When the concert finished, all the guests congratulated the emperor over the vivid stereo image they had experienced. Everything seemed to go well until a little boy, putting words to everyone’s thoughts, said:

”Why did all the music except the tom drum come from the right speaker?”

What the boy had said was true, and everyone knew it.

For some reason, the stereo image only worked in the very center of the room. How could this be? Was there something wrong with the sound system? With the mix? With the room acoustics? None of the above.


There is one simple and irrefutable problem: stereo effects don’t scale when moved from a studio to a bigger room. You could have all the stereo coverage needed for every seat, but that doesn’t mean you’ll experience stereo imaging when you leave the center.

Everyone agrees that stereo spatialization is better perceived from the center. But in a studio, or in a living room, one can move freely over a large part of the room and still experience reasonably effective stereo.

Try it yourself: Play a well-mixed track in your living room, sit directly in front of the left speaker and close your eyes. Although you are off-center, it’s still possible to identify the instruments at different horizontal locations between the speakers. Now try it again in front of the left P.A. tower, from a distance of 30 meters, in a concert hall. No more gradual horizontal movement between the two sides. The image stays almost exclusively in the left speaker.

Keep your eyes closed and slowly head toward the center of the room (be careful!) until you reach a point where you find the same panoramic image you experienced in your living room. Be objective! This is all about real experience, not expected results. Surely you will be standing just a few steps away from the center of the room – not much further than in your living room.

The distance you can travel in your living room while retaining acceptable stereo imaging is almost the same as you can travel in a 5000 seat concert hall before you lose spatialization.


Panoramic location between two sound sources depends on two interrelated factors: time differences and intensity differences. Let’s analyze intensity differences first. Gradually turn the pan pot on your console to the right. You have now created a difference in level between the channels, favoring the right one; thus the stereo image (as expected) moves to the right.

This happens as long as you remain seated in the center between the speakers. If, by any chance, you’re sitting off to either side, the image won’t move the same way the pan pot does. Why? Here comes the defining factor in sound localization: time difference.

We localize the image toward whichever source arrives at our ears first, even if the time difference is minimal and the later source has more intensity. The psychoacoustic relation between these two factors is known as the ”precedence effect” and was analyzed in 1950 by, among others, the now famous Dr. Helmut Haas.

The ”sweet spot” for binaural localization (stereo imaging) is within the first millisecond of time difference. If the time difference exceeds 5 milliseconds, the sound image can only be moved by brute force: the channel that arrives last must be 10 dB louder than the first to achieve this.

Now this is where the scale concept really comes alive.

Time and intensity differences don’t translate equally when we scale from a small space to a large one.

The intensity difference is a proportion between the levels of the two sources (the two speakers, the two channels…). The intensity relationship between the left and right channels is the same in your living room as in a stadium. If you’re standing at twice the distance from one speaker as from the other, the intensity difference will be 6 dB. This remains the same whether the distances are 1.5 and 3 meters, or 15 and 30 meters.
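The arithmetic above can be sketched in a few lines of Python (a simple free-field, inverse-square model, ignoring room reflections):

```python
import math

def level_difference_db(d_near, d_far):
    """Level difference between two sources at different distances,
    assuming inverse-square (free-field) propagation."""
    return 20 * math.log10(d_far / d_near)

# Only the RATIO of the distances matters, not the scale:
print(level_difference_db(1.5, 3.0))    # ~6 dB in the living room
print(level_difference_db(15.0, 30.0))  # ~6 dB in the arena -- identical
```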

The time difference, however, is not a proportion. It is simply the DIFFERENCE in the arrival times of the two sources.

While the intensity difference was kept constant in the previous example, the time difference is multiplied by 10 when we increase the distances from 1.5 and 3 meters (a difference of approx. 4.4 ms) to 15 and 30 meters (44 ms).
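The same comparison for arrival time, again as a rough sketch (assuming sound travels about 344 meters per second):

```python
SPEED_OF_SOUND = 344.0  # meters per second, roughly, at room temperature

def arrival_time_diff_ms(d_near, d_far):
    """Difference in arrival time between two sources, in milliseconds."""
    return (d_far - d_near) / SPEED_OF_SOUND * 1000.0

print(arrival_time_diff_ms(1.5, 3.0))    # ~4.4 ms: near the stereo window
print(arrival_time_diff_ms(15.0, 30.0))  # ~44 ms: ten times larger
```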

Given that the time difference is the predominant factor in sound localization, you can clearly see that the odds are against you when you’re trying to achieve stereo at large scale.

Because we only have a 5 ms window to control the image, the usable space for recreating stereo in a stadium is, in proportion, really small compared to your living room. In other words, the horizontal area where true stereo localization can be experienced (the space where images can be placed) is barely larger in a stadium than it is in your living room.

Nobody wants to admit that there is no stereo for the big crowds. From a mix engineer’s point of view, stereo represents an advantage. If he is mixing from the center of the room, it’s easier to listen individually to each instrument in the mix if they are panned all along the horizon. Plus, it’s more fun this way.

The diagram shows a concert room and a living room. The living room is in scale to the concert room. The light-shaded area in the living room drawing shows the area where the time difference between sources is less than 5 ms. This is the area where true stereo is achieved.

The same shading in the concert room is where one would assume you could obtain stereo imaging. The dark-shaded area shows the real area where stereo works properly in a concert room.


The search for a stereo image can have a negative effect on frequency response uniformity if the speakers are arranged with too much overlap in their coverage areas.

Signals panned to the center, almost always the important channels, will arrive at different times at the seats far from the center. This causes severe comb filtering and changes the frequency response for each listener.

Comb filtering, or combing, is one of the side effects of combining signals that aren’t in sync. The time differences change the phase relation between the two speakers at every frequency. At any given location, the frequency response obtained will depend on the phase relation between the two signals. When the phase matches, there will be full summation. When the phase is inverted, there will be total cancellation.

At any point in between those two, the combined signal won’t have full sums or cancellations. Instead, it will have a series of audible peaks and dips in the resulting response. Each change in location brings a different time difference between the left and right channels, and with it a new phase relation, resulting in a new series of peaks and dips in the frequency response.

The irregularities caused by combing are most severe when the two signals have the same intensity but different arrival times.

The more you try to spread the stereo by increasing the overlap area of the speakers, the more audible the peaks and dips will be. This is not to be taken lightly. A sound system with a large overlap area will have variations of up to 30 dB in the frequency response over a bandwidth that changes from seat to seat, turning EQ into something completely arbitrary. A short 1 ms delay will create a 1-octave hole at 500 Hz, and it scales that way. Longer delays degrade the intelligibility and the sound quality even further.
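For the curious, the notch frequencies of a comb filter are easy to compute: two equal-level copies of a signal cancel wherever the time offset equals an odd number of half-cycles. A small sketch (idealized; real rooms pile reflections on top):

```python
def comb_notches_hz(delay_ms, count=3):
    """First few cancellation frequencies for two equal-level signals
    offset by delay_ms (cancellation at odd multiples of a half cycle)."""
    delay_s = delay_ms / 1000.0
    return [(2 * k + 1) / (2 * delay_s) for k in range(count)]

# A 1 ms offset puts the first notch right at 500 Hz:
print(comb_notches_hz(1.0))  # [500.0, 1500.0, 2500.0]
```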

If the stereo image is the top priority, then you should fully pan the channels and make the overlapping coverage area of the speakers fill the room. The only way to beat the time difference is to force it with intensity. Although this expands the stereophonic area, you will be left with terrible level differences between the channels on either side of the room. Meanwhile, channels panned to the center will have a variable response over the listening area, caused by the combing produced by all that overlap.

This technique was used for many years by a nameless touring band, which hard-panned several of its musicians. In the center of the listening area, the stereo was fantastic.

However, fans who couldn’t arrive early to the shows in order to get seats in the center would have to choose between listening to the left drummer and the guitar player, or the right drummer and the keyboard player.

If the priority is to make the entire band enjoyable for the whole audience (and I expect it to be this way), then leave stereo as a special effect. Design the sound system so that the overlap of the left and right speakers roughly matches the 5 ms time-delay window area. Reduce the level of infill speakers so the front and center coverage can be achieved without big overlap zones. Don’t waste your time, energy and money on stereo delays and fills.


All of this can sound radical, maybe even heretical, to many readers. After all, we have put so much time and effort into stereo reproduction in P.A. systems.

It would be awesome if we could achieve stereo in every seat of the room, or even half of them. If a large portion of the audience received the benefits of stereo imaging, we could argue that combing and intelligibility loss are a reasonable price to pay for it. But it is futile and self-destructive to fight against the laws of physics and psychoacoustics and to pretend that we are experiencing stereo when we are not. Remember our priorities.

It is unlikely that our customers will raise their voices because they don’t have enough stereo. They certainly will, of course, if everything sounds like a telephone or can’t be understood, two of the most common results of chasing stereo on big shows.

Mono sound reinforcement seems like something we should have already discarded for something better, but it has a big advantage over stereo: it works.

This is not a statement that will please the emperor, or the band manager, but it does hold some truth: ”This system has such magic qualities that it’s capable of creating perfect mono imaging in every seat”.

So thank you Jose Luis.

Toning Your Sound System

September 10, 2010
No, this is NOT a typo. I did not mean to write “Tuning your sound system” because that is an entirely different subject. So what is the difference between toning and tuning?

Here is a simple example from the musical side: This is my son Simon. He has a guitar effects pedal that has exactly the TONE of Eddie Van Halen. One small thing though: he can’t TUNE his guitar.

A legend in his own mind


Sound systems also have a similar contrast between these two concepts. Tuning a sound system (in my estimation) is where you adjust the system so that it has uniform response over the listening area, with minimal distortion, maximum intelligibility and the best available sonic imaging. Tuning is about making the far seats similar to the near seats. An objectively verifiable – but verifiably unattainable – goal of same level, same frequency response, same intelligibility throughout the room. Making the underbalcony as similar as possible to the mix position (which hopefully is NOT under the balcony). It is about making sure every driver is wired correctly, still alive, aimed at the right place and cleanly crossed over to the next one. It is about making it so the mix engineer can mix with confidence that theirs is a SHARED experience. Because it is an objective pursuit, prediction tools, analysis tools and our ears all play important roles in the process. It does NOT, however, mean that it sounds GOOD. “Good” is subjective.

Toning, on the other hand, can’t be done wrong. It is entirely subjective. Toning a system is the setting of a bank of global equalization filters at the output of the mix console that drives the sound system. If you want to set it by ear, fine. If you want to set it with 10,000 hours of acoustical analysis containing mean/spline/root-squared/Boolean averaging, then go for it. If I am the mixer and I don’t like it, I will change it. Too bad. I like MY tone better. Deal with it. I don’t like flat. Deal with it. I like flat. Deal with it. There is nothing at stake here. Nothing to argue about. And no need to bring objectivity, or an analyzer, to the table. The global equalizer is just an extension of the mix console EQ. In the end the mixer will choose what they want to EQ on a channel-by-channel basis and what they want to EQ globally. But also in the end there is no wrong answer, because it is entirely subjective. I have worked shows where, in my opinion, the mix sounded like a cat in heat. That’s my opinion, and therefore not relevant, unless asked for. I asked the mixer, “Are you happy with that?” They said “Yes.” As long as I have ensured the cat in heat is transmitted equally to everybody in the room (i.e. TUNING the sound system), my work is done.

Good toning enhances the musical quality, or natural quality, of the transmitted sound. Good tuning ensures that the good (or bad) toning makes it beyond the mix position.

 Piano Tuning…. and Toning

One does not have to know how to play a piano to be a competent piano tuner. It is an objective pursuit. Numbers. It can be done with an analyzer and/or a trained ear. The toning of a piano, a subjective parameter, cannot be wrong. John Cage opens up the piano and scatters nuts and bolts on top of the strings. This “tones” the piano. Is it wrong? Of course not. But before John Cage plays the “prepared,” i.e. toned, piano, do you think he has it TUNED? You bet.

John Cage Prepared Piano - a subjectively "toned" piano

 Below is another example of a “toned” piano.

I always wanted to find a way to work a deer head into my music

 Below are the tools for TUNING a piano. Similar to the ones we find our artistic auto mechanics using to TUNE up our car.

Tuning Forks

Hmmm..... Digital calipers: Objective or subjective?

Strobe tuner: otherwise known as a frequency analyzer


Just semantics or more?

So why do I make this distinction? Because I have recently experienced several cases where people are confusing these concepts. In one case a guy wrote an article about how much better systems sound if they don’t have a flat response. Better to have peaks and dips. He notes that people who tune sound systems with analyzers do their clients a disservice by making the system “flat”. Who am I to argue with this? He doesn’t like flat. OK. However, in the course of putting down acoustic analyzers for global equalization, the article never mentions the OTHER things that we use analyzers for: checking polarity, aiming the speakers, adjusting splay angles, adjusting relative level between speakers, setting crossovers, phase alignment, intelligibility analysis, treating reflections or, most importantly, working to make it sound uniform throughout the room. The article compares equalizing your church sound system to your home hi-fi, which is to say TONING the system. Maybe this guy’s approach is great for toning the system, but it is useless for tuning the sound system. The article “The fallacy of a flat system” can be found here


Then I received a question from one of my recent students from Asia:

Dear Bob:

Last week I join the BRAND X SPEAKER COMPANY seminar, they use another method to alignment the line-array system.

1) the whole line-array should be same EQ & same level.

2) they use room capture software to alignment the line-array system. They capture about 15 trace at difference mic position in the venue but not on axis speaker position and finally they sum average of the trace to 1 result then EQ it. What do you think?


This was my reply:

1) the whole line-array should be same EQ & same level. 

I cannot find any good reason for this. The lower area is covered by the lower boxes, the upper area by the upper boxes. They are in very different acoustic environments, they are very much at different distances. Why lock yourself into a solution with no flexibility? If the end result is a perfect match… then great. If not…what can you do besides make excuses?

2) they use room capture software to alignment the line-array system. They capture about 15 trace at difference mic position in the venue but not on axis speaker position and finally they sum average of the trace to 1 result then EQ it. What do you think?

This solves NOTHING. The end result is the same EQ for all speakers. Whether it was an average of 2 positions or 20,000 positions, the average is still just ONE set of parameters. If it sounds different in the front than in the back before you average, then it will sound exactly the same amount different AFTER the average. Why bother to take samples all around the room if you are not going to do anything about the DIFFERENCES around the room? It is just a waste of time.
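You can demonstrate the futility of this with a toy model. The numbers below are made up, but the algebra holds for any responses: a single shared EQ shifts every seat by the same amount, so the seat-to-seat DIFFERENCE survives untouched.

```python
# Hypothetical responses (in dB) at two mic positions, three frequencies.
front = {"250 Hz": +6.0, "1 kHz": 0.0, "4 kHz": -3.0}
back  = {"250 Hz": -2.0, "1 kHz": 0.0, "4 kHz": +5.0}

# "Average the traces, then EQ to the average" -- ONE filter set for all.
avg_eq = {f: -(front[f] + back[f]) / 2 for f in front}

after_front = {f: front[f] + avg_eq[f] for f in front}
after_back  = {f: back[f] + avg_eq[f] for f in back}

diff_before = {f: front[f] - back[f] for f in front}
diff_after  = {f: after_front[f] - after_back[f] for f in front}
print(diff_before == diff_after)  # True: front-to-back variation unchanged
```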

The only reason to use an analyzer is to get objective answers such as: is it the same or different?  Not for subjective ones such as – does it sound good?

Example: Let’s say you average 20,000 seats and put that in as the eq for all speakers. Then the mixer hears it and wants a boost at 2 kHz.  What are you going to do? You are going to boost 2 kHz or get fired. Who cares about the average now?



In this case a manufacturer is using toning techniques without dealing with the tuning part. BOTH must be applied if we are going to bring the tonal experience to the people that pay to hear our sound systems.


Keep an eye on both sides of the issue, but bring the right tool for the job:

Recommended system tuning attire

System toning outfit (Women only PLEASE!)


Toning apparel for men

SIM3 Optimization & Design Seminar at UC Irvine

June 29, 2010

We just completed a 4-day SIM3 training seminar in the south side of southern California. UC Irvine is located very near the ocean, which makes one wonder how folks could study when the surf’s up. It is also right next to John Wayne Airport. Naturally I flew in and out of LAX and drove the hour down to the other airport. Why? Because I live in St. Louis, which USED to be an aviation town (ever heard of Charles Lindbergh, McDonnell Douglas or TWA? – all just museum stuff now).

Measuring, measuring, measuring

We had a good-sized class of 19, including grad students and professors from UCI, some engineers from Creative Technologies (a rental house specializing in corporate work), some freelancers and two special guests: Daniel Lundberg and Jamie Anderson. There were 3 people (not including Jamie) who had attended my seminar previously and were returning. This is, for me, the highest honor and I am very grateful for the support of Will Nealie (whose photos are shown here), Chuck Boyle and Szilard Boros.

The Venue

We were fortunate to get to do the seminar on the stage of the 300 seat Claire Trevor Theater. This allowed us to measure first in the controlled circumstance of the near-field on stage and then work our way out into the house. As an added bonus we were allowed to measure (and re-design and retune) the house system, which had an up-to-date line array type system of 8 x Meyer M1-Ds.

The class moved along very smoothly. We covered LOTS of ground and the acoustics of the hall were very favorable so that students could get a look at what real systems can do in a good hall.

The class progresses in complexity over the 4 days, beginning with measuring a processor, then moving on to a near-field single speaker, adding a subwoofer, near-field arrays, distributed arrays and then out into the house, where we design a full system and tune it. All the while the progression of complexity is underscored by the theory behind the data. The number one focus point of a SIM3 seminar is understanding what the data says and WHY the data says that. Proper diagnosis must ALWAYS come before treatment, and all treatments need follow-up testing. If they don’t work, then get started on a new diagnosis. SIM school tunings are never rehearsed, so when something shows up on the screen we are all seeing the data for the first time. There are always surprises and this was no exception.

In the course of the tuning here we found that our original design had too much coverage for the room. If we had gone to MAPP Online or even used a simple protractor on the plan view of the room, this could have been seen in advance. But we PURPOSELY did not use those tools to find the answer. It is better for the learning process to see how unknowns can be decoded by the analysis methods. The “Main” array was 2 x UPJ-1P in a coupled point source, located at the house-left stage edge. Our goal was to cover evenly across the room – a straight horizontal line along the 3rd-to-last row. As we measured the 1st speaker across the row we could see that it covered ALMOST the entire width…. almost. Adding the 2nd speaker was WAY too much; leaving it off left us 4 seats short of the aisle. Conclusion: our design was flawed. (This made it just like a REAL gig, except that the designer’s ego was not at stake.)

It is much better (as a learning experience) to use the SIM 3 analyzer to prove the design was wrong and to force us to consider the optimization options that had the highest prospect for success. If only we could wave a magic wand and turn this 80-degree speaker into a 50-degree speaker! Oh….. WE CAN! In this case we rotated the UPJ-1P horn on the 1st speaker (they are 80×50 and rotatable) so that we got 50 degrees of horizontal coverage for the “A” speaker (the longer throw). This covered enough of the room to make a successful, smooth transition to the B speaker. Then we added the “B” speaker – too wide again, until we rotated its horn as well. The end result was even coverage across the straight line of the 3rd-to-last row within 1 dB. The process involved measuring on axis, at crossover and off axis until the splay angles were optimized, the EQs set (individually and together), and the levels set and delayed so they were phase-aligned at the crossover. Then we added the subwoofer to the array in both an overlapping and a non-overlapping mode (different delays were needed for this). Finally we added a small delay speaker to extend the coverage evenly into the corner. We even took a few minutes to show the effects of adding excess delay (the side effects of the Haas Effect) and watched as the coherence and combined level at the delay location became worse than if the delay speaker had been turned off. This is always an eye-opening moment at my seminars.

Tuning (and retuning) the Line Array

Because the class moved along so quickly we had the luxury of extra time to take on the house system. This system is made available for students to re-design, re-hang, re-angle, re-tune, re-etc……. This particular config had been specified by a student AMA (against medical advice), so the professors were quite interested to see how it would look under the scope. The answer:  ________________________flat line.

The horizontal orientation was the most severe in-tilt I have ever seen (OK, I am pretty new to this, but I have seen a FEW systems). It was such an inside job that the left side of the PA missed most of the…………. left side. The mix position was in the very rear of the house-right side. From a horizontal standpoint the left speaker was pointed at the right wall IN FRONT of the mix position. If you are having trouble visualizing this, here is a pic to help.

Horror-zontal aim points for the PA

So we measured and found that the left cluster was more than 6 dB louder on house right than on house left. Obviously the speakers would need to be opened outward.

Before- ONAX A vs OFF A - Off mic is near top row at the last seat on house Left

We had, however, spent the previous 2-speaker tuning focused on the horizontal-plane interaction between the pair. Here we had 8 boxes in the vertical plane – that is what we wanted to see – and we had 5 mics running from top to bottom in a diagonal line where the speaker was pointed. As it stood, nobody knew what the current vertical angles of the cluster were. We had the 8 boxes wired in 3 zones, 3-3-2, as an ABC array. It was offered to bring down the array and see the angles – then we could play in MAPP and see the response…….NO, NO, NO. Much better to turn it on and see what we have. This way we can learn how to hunt down an array in the wild. We know these 3 subsystems are out there – but where?

I don’t recommend working on systems where you don’t know where the speakers are pointed, but it is important to be able to find where they actually ARE pointed – even if you have a piece of paper (or iPad) to show you. The learning experience here was the process of finding. Here is a pic to give an idea of where the mics were:

Mic placement strategy

Terminology sidebar – ONAX (on axis), X (crossover), VTOP (vertical top), VBOT (vertical bottom).

Before we got to any tuning, we dummy-checked each mic and speaker to make sure we had everything wired right. In the course of this we set the delay compensation for each mic; they ran from around 50 ms down to 13 ms, so the near seats are around 4 times closer than the rear seats (a 4:1 ratio, or about 12 dB). The array will need to overcome this difference in proximity.
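That back-of-the-envelope math can be sketched as follows (speed of sound and the exact delay times are approximate):

```python
import math

SPEED_OF_SOUND_M_PER_MS = 0.344  # meters per millisecond, roughly

def mic_range_m(delay_ms):
    """Distance implied by a mic's delay-compensation time."""
    return delay_ms * SPEED_OF_SOUND_M_PER_MS

near = mic_range_m(13.0)  # closest mic, ~4.5 m out
far  = mic_range_m(50.0)  # farthest mic, ~17 m out

ratio = far / near                  # ~3.8, call it 4:1
level_db = 20 * math.log10(ratio)   # ~11.7 dB, call it 12 dB
print(round(ratio, 1), round(level_db, 1))  # 3.8 11.7
```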

So we began with just the upper system, “A” (the top 3 boxes), on. We compared the ONAX A, VTOP and XAB positions. VTOP (around the mixer) was a disaster. No HF, no coherence, and the far side much louder than the near side.

Original angles - ONAX A vs VTOP

UCI M1D R1 - VTOP A Solo -before EQ


Perfect mix area!!!!! XAB was down slightly from ONAX A, so now we knew (vertically) where “A” was pointed: too low.

Before- ONAX A vs XAB

The cluster was already very high, so we couldn’t move it much. The real answer would be to get some up-angle into the array. This would require some real-world rigging, and that was not going to happen in our short time frame, so it did not seem that we would fully solve the mix position.

Onward. We moved the ONAX A mic up and down a row and found that our original position had the most level – we had found the center of A. We EQ’d it and stored it as a reference level.

Next began the search for B. We looked at the ONAX B mic and moved it up and down until we found its high-water mark. The level at B was stronger by about 3 dB (compared to A). It was also about 3 dB closer (at 70% of the distance). This made it obvious that the splay angles chosen for this array were wrong. How did we know? The job of the different splay angles is to create matched levels at different distances. Here we were seeing that as we got closer, it got louder – the expected properties of getting closer to a symmetric, non-directional source, not one that is creating asymmetry in the vertical plane. We EQ’d the B system and reduced its level 3 dB.

Next up was the bottom two boxes, system “C”. This system covered the front rows REALLY well. It was 7 dB louder than at the back and we were still in the 4th row. It got even louder up closer, but we gave up.

Before- ONAX A vs ONAX C

Conclusion: the cluster needed to generate around 12 dB of level difference from top to bottom. It actually achieved 3 or 4 dB. Time for the cluster to come down so we could redesign the system.

Redesigning and retuning the Line Array

We have no drawings of the room. Not even a napkin sketch. The UCI internet is not getting through to my laptop. We are going to have to go it alone. 

This is what we know: (a) the cluster is too low, (b) we have more than enough angle to reach the bottom, and (c) we need 12 dB more level at the top than at the bottom. This means that the splay angles for the C section need to be at least 4x wider than those of the A section.

So how do you design a line array with no Manufacturer Official Line Array Calculator, no MAPP Online, no drawings? We need to know the angle spread from top to bottom, and the difference in range from top to bottom. So we looked at the existing angles and found that the overall angle spread was 40 degrees. We knew that was more than wide enough. We knew we had a 4:1 distance ratio.

So we need 35 to 36 degrees of spread – we have 8 boxes (7 splay angles) – the average splay angle will be 5 degrees (5 deg x 7 = 35 deg). We know the widest angle we can get for an M1D is 8 degrees. If we have 8 degrees at the bottom and 2 degrees at the top (a 4:1 ratio) we will approach our 12 dB range ratio. Add ’em up: (2+2) + 3 + (6+6) + (8+8) = 35 degrees. System A is 3 boxes at 2 deg (a 6 deg speaker), 3 deg splay to system B (a 12 deg speaker) and then on to C (a 16 deg speaker).  Here is a picture of the design in progress: Yes – that is the AS-BUILT paperwork under my hand.

Calculating the splay angles based on range ratio
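The arithmetic behind those angles can be sketched in a few lines – the numbers are straight from the text above; the variable names are mine:

```python
import math

boxes = 8
splays = boxes - 1                  # 8 boxes -> 7 splay angles
total_spread = 35                   # degrees, top to bottom
range_ratio = 4                     # bottom seats ~4x closer than top seats

avg_splay = total_spread / splays   # 5 degrees average splay
level_taper = 20 * math.log10(range_ratio)  # ~12 dB needed, top to bottom

# The asymmetric splay set: tight at the top (far throw), wide at the bottom (near)
splay_set = [2, 2, 3, 6, 6, 8, 8]

print(avg_splay, round(level_taper, 1), sum(splay_set))  # 5.0 12.0 35
```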

The new angles were put into the cluster and up it went – pretty much as high as it could reasonably go (about a foot or two higher than before) and we resumed measuring. This went very quickly now. The center point of each subsystem was easy to find since they each were made up of a symmetric angle set. The center of A was at cabinet #2, the center of B was at #5 and the center of C was between 7 and 8. Each system was eq’d separately and levels set. The level tapering needed to bring the lower systems into compliance was 1 and 3 dB respectively, a far cry from the 3 and 7 dB previously. The systems were combined – first A & B and then C was added – and a very uniform frequency response and level was created over the space. The level variation from front to back (back being the top row) was now 1 dB. The mix position still sucked – but we knew we could not save that without a rigger.

Reworking the angles

 First we looked at the ONAX A position, and EQ’d it. This will be our level/spectral standard going forward.

The next step was to look at the response at the mix position. We expected that things would not be improved much here since we were not able to aim the array up high enough to hit here……and we were not disappointed. Well, I mean, we were not surprised.

After- A at ONAX A Compared to A at VTOP

After - Response of A solo at ONAX compared to B solo at ONAX B

The EQ applied is slightly different for A and B respectively. The difference is minor because both “speakers” are made up of 3 elements. The splay angle is different, which creates a summation gain difference of 3 dB – the correct amount to compensate for the difference in distance.

After- A at ONAX A Compared to AB at ONAX A

Above – You can see the addition at A that occurs when B is added. The response shows no loss but the gain varies with frequency. As frequency falls, the percentage overlap increases and the addition increases. At 8 kHz the percentage overlap is so low that we see no addition. By contrast, at 125 Hz we see 6 dB addition. All frequencies between show gain values between 0 and +6 dB. This is a great example of 3rd order speaker behavior.
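A rough model of that frequency-dependent addition, assuming the two arrivals are coherent and phase-aligned where they overlap; the overlap fraction assigned to each frequency below is illustrative, not measured data:

```python
import math

def coherent_addition_db(overlap):
    """Gain in dB when a second, phase-aligned arrival is added at
    `overlap` (0.0 = fully isolated, 1.0 = equal level) of the first."""
    return 20 * math.log10(1 + overlap)

# High overlap at LF, almost none at HF - hypothetical example values
for freq_hz, overlap in [(125, 1.0), (1000, 0.5), (8000, 0.0)]:
    print(f"{freq_hz} Hz: +{coherent_addition_db(overlap):.1f} dB")
```

Full overlap gives the +6 dB we see at 125 Hz; zero overlap gives the 0 dB we see at 8 kHz.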

After- AB at ONAX A and ONAX B

After- AB at ONAX A ONAX B and XAB


After- HF ZOOM - AB at ONAX A ONAX B and XAB

 Above is a zoomed look at the uniformity of the HF levels.

Combined System A+B  EQ

Once A and B were combined we looked at the LF region to see where the coupling was shared in both directions. Frequencies that were boosted in all locations can be equalized by matched filters in the A and B sections. In this case a 160 Hz filter was applied. Below we have a zoomed-in comparison of before and after the AB EQ was added.

After- AB at ONAX A -Combined system EQ

The screen below shows how we have restored the combined AB response to the same shape as our initial A solo response.

After- AB at ONAX A -Combined system EQ compared to solo A EQ

The AB system is now complete

After- AB with combined EQ at ONAX A ONAX B and XAB

Combined System: Adding (AB) + C

Now that AB is complete we turn our focus to C. Speaker C (2 boxes) was EQ’d as a soloist and its level set to match the ONAX A standard. The solo EQ response appears below.

After- C at ONAX C EQ and Level
The response below shows the full combined response ABC at the ONAX A and C positions, giving us a clear view of the difference between top and bottom (not much!). The distance ratio between these two locations corresponds to around 8 dB!

After- ABC at ONAX A and ONAX C - top to bottom compared

Finally we see the full system ABC at its 3 main locations.

After- ABC at ONAX A, B and C

Was this the best way to design a system? I would not recommend it if you have the option of drawings etc. But in the end we still have to test it – and that is where the final design comes from. In this classroom setting we made the tuning process drive the design. What we learned from our data was translated into an updated design and this was then measured. The result was a winner. This process, in a few hours, was a distillation of 25 years of work for me. Everything I ever learned about design came from the process of trying to tune an existing design, and learning from it.

There are additional class photos which will be placed in the “Seminars” Page on the right of this blog page.

and finally………………….

I did manage to bring home some good data from this tuning so I will add those to this post later. Soon… I promise.


New York Trainings – Updated with Pics

May 19, 2010

I am in NYC this week for two rounds of classes: SIM3 training in Brooklyn and the Broadway Sound Master Class on the NYU campus in the East Village. The SIM class is at City Tech in Brooklyn – which is where John Huntington teaches. We have several of his students joining the class, which is really nice. This trip marked a first for me – even though I have been coming to NYC quite regularly since 1984, this was the 1st time I ever set foot in Brooklyn. We are located over by the legendary Brooklyn Bridge and I can see it from the school – so hey – I can add another borough to my list……Manhattan, Queens – been 2 places there – La Guardia AND JFK – wow, and now two places in Brooklyn – City Tech AND Peter Luger – the famous steakhouse.

We are halfway through the class and pretty much right on schedule. Looking forward to the BSMC this weekend – always a great learning experience for me.

****** John Huntington was kind enough to take some pics of the seminar.


Macau COD Tuning: Day 7 (Underwater SIM) – Updated with screen dumps

May 16, 2010

Today was scattered around different tasks. We tested the repositioned Sound Beams, further refined the Constellation settings, picked off stragglers in need of verification – or RE-verification, and finally SIM’d the underwater speakers.

Soundbeams repositioned

First order of business was to find out what we had gained from moving the SB-2’s up 2 meters. The answer: 6-8 dB, quite substantial and well worth the effort. The area with the most image distortion from the main arrays is “the beach”, the rows close to the water’s edge. These stand to benefit the most, vertically, from the SB-2’s contribution. The close seats have the lowest risk of hearing the unit directly behind them. By contrast, the high-ground seats near the back need the SB-2’s the least, but carry the highest risk of experiencing distracting localization and pre-echo from the SB-2’s. And yet there is another layer at work once we get the full circle of crossed swords into play (all 8 SB-2’s). The rear seats have the highest amount of isolation – being in the pattern of only 2 of the SB-2’s (the one facing them and the one behind). Also in their favor is the front-back asymmetry of the human hearing system, which favors the front by virtue of our pinnae.

The lowest seats have the most interactivity between the 8 crossing SB-2 patterns, being much closer to the angular coverage edges of the 6 remaining speakers. The multiple extra arrivals come from the sides, which are much easier to localize. What we have is a very complex weave of multiple arrivals out of time and differing in level. It can’t be solved with delay unless you are M.C. Escher – because it’s a circle. The only solution is careful monitoring of the SB-2 level with respect to the other main systems. Fortunately you can get a lot of localization effect with very little level, so the SB-2’s will be able to add image steering with minimal detection.

Curtain Time

Sound is invisible. Highly directional devices are REALLY challenging to aim at a spot 130 ms away – even on a clear day. But there is NEVER a clear day in the shower, right? Well, our SB-2’s are behind a shower curtain, so we could not do simple tasks like putting a laser on top and seeing where it goes. It goes a foot in front of the speaker. Great. So we have to measure it and find it, in order to verify that the 4 SB’s on the opposite side match the near side. This went OK for the most part – but #3 did not match its brother across the ocean. Different in 3 positions, and NOT making any sense in terms of being angled down – or to the left, etc. The culprit, as it turned out: folds in the curtain. All but #3 and 6 had no folds in front of them. #3 and 6 did – and they did not match. Where is the interior designer when you need him!

Underwater SIM

Having completed our must-do list we could not resist the chance to measure the underwater monitor speakers. These 36 speakers are used to communicate with the divers and the swimmers and to play music to keep the swimmers on cue. The large quantity is due to the fact that the pool is a highly variable environment with lifts going up and down, bubbles, fountains, leaking oil wells – oh wait – no, that is the Gulf of Mexico – anyway, LOTS OF STUFF GOING ON. So we put in a DPA hydrophone, plugged it into SIM and got a transfer function of some of the speakers in the water. VERY AMAZING! I have some plots which I will post when I get a chance. I have NEVER seen so many strong reflections – the impulse response looked like a 3-minute earthquake. The freq response looked terrible – but it looked AMAZINGLY like the spec sheet for the product. We eq’d a bunch of different ones and even found one that was reverse polarity!

When we were all done, some folks got in the pool to listen. Quite cool. We have renamed it the SIMMING pool.

**** Update*****

Here is some data from the underwater measurement and tuning. The impulse response is quite fascinating because the reflections were so much closer than I expected. I was thinking only of the reflections off the sidewalls and not so much of the floor and ceiling. The CEILING? Well, in waterworld the H2O/air interface is like a wall. Not much sound energy moves across that barrier and a LOT comes back down. This was pointed out to me by Dr. Roger Schwenke, and the up/down bounce is very much in evidence in the impulse response. The density of the reflections is greater than I have ever seen for a set of speakers and their wall reflections. They are closely spaced, roughly 1.1 ms apart – which covers the same distance as 5 ms in air, a path spacing of about 1.7 m (5.5 ft).

Above – impulse response of underwater speakers. First arrival is at 11.25 ms.

Impulse response: Underwater speaker closeup

Above. You can see the approx 1.1ms spacing
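The water-to-air conversion works out like this (the speeds of sound are textbook approximations, and the variable names are mine):

```python
C_WATER = 1480.0  # m/s, approximate speed of sound in water
C_AIR = 343.0     # m/s, approximate speed of sound in air

spacing_ms = 1.1                           # measured reflection spacing
path_m = C_WATER * spacing_ms / 1000       # ~1.6 m between arrivals
equivalent_air_ms = path_m / C_AIR * 1000  # ~4.7 ms for the same path in air

print(round(path_m, 2), round(equivalent_air_ms, 1))
```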

Here then is a freq response of the speaker(s). We did an initial EQ at 890 Hz, the worst offender. You can see a before-and-after EQ screen shot here. I turned off the coherence blanking to help see the extent of the damage these reflections threw at us. The phase response looks so mangled you might think we set the delayfinder wrong – but no. It is right……for the 1st arrival at least, just not so right for the 2nd, 3rd, 4th, 5th…… Interesting, though, was the fact that, in spite of all the reflections, the basic tonal shape was quite consistent. Consistently bad, but consistently similar to the manufacturer’s data sheet. Consistent means we can EQ and get some positive effect.

1st Eq (before and after) of underwater speakers

As we looked at more and more of these we saw the basic shape come through and refined the EQ. The screens here show the system before and after the EQ was applied. Flat as a ruler eh????????????? LOL

Before EQ

After EQ

Before and after EQ

Macau COD Tuning: Day 6 – Constellation

May 11, 2010

Constellation Tuning

Most of today was dedicated to Constellation tuning. Steve Ellison programmed up the menu of user-settable presets that will become the painter’s palette for the system designers, Francois Bergeron and Vikram Kirby. The palette gives them easily understood parameters such as the reverb time, gain, etc., that will allow them to tailor the response of the sound system/room to the music and spectacle as the creative process unfolds.

The beauty of electroacoustic architecture is that the acoustics can be reshaped from song to song – gradually, so that the audience has no conscious awareness of the change, or the opposite: a dramatic moment to create a strong conscious effect. The settings can be made completely plausible for the shape of the space that you see around you, or can be dryer and more intimate than you might expect or, of course, much larger and more reflective. Once the lights go down, the mind loses sight of the scale of the performance space, and creative minds can begin to operate on rescaling the room to most appropriately contain the soundscape.

Imagine yourself having the ability to pull down a wall of thick curtains in a small room and reveal the walls of Notre Dame Cathedral behind them. This is the level of capability now in the hands of the system designers. This is NOTHING like having a Lexicon at FOH. I use this simple analogy: a dry room with house reverb puts the singer in the shower but leaves the audience watching from the desert. (Who the singer is that you imagine in the shower I will leave up to you.) All the reverberation is in front of the listeners, and the room-acoustic cues of spatiality are missing. A room with electro-acoustic architecture puts us ALL (audience and performer) in the shower, desert, or something in-between TOGETHER. The spatial cues are there – precisely because they are ACTUALLY there. An audience member’s clap will reverberate from the “walls” just as the performer’s does – this absolutely will NOT happen with FOH reverb.

Yes, it can happen with actual hard walls – but walls only have one setting. Yes, it can happen with variable acoustics (moving panels, drapes etc.) such as we see in some modern concert halls. But the Constellation system does not require a four-hour labor call to open chamber doors, drop in curtains etc., to move a hall from highly reverberant (symphony) to less reverberant (chamber), or to extremely reverberant (organ). Constellation can move in seconds, with a single click (or cue), from dry enough to feel a tight, pulsing, fast-paced drum beat all the way to cathedral chanting (and, very importantly, the land between).

It is no coincidence that this capability was designed into this system. Francois Bergeron has been an expert in complex spatial sound systems for all the years I have known him. After all, he is the guy who programmed “The Little Mermaid” show for me at Tokyo Disney Sea, where an entire orchestra swirls around and goes down the drain. It has been running there every 20 minutes since 2001.

Hopefully you get the idea. What Steve, Pierre and I will leave Vikram and Francois with will be a simple web page with programmable presets which can be logged in as cues in LCS. Then the fun begins: integrating this into the production.

SIM Tuning – Leftovers

In addition to Constellation tuning there were a few leftovers from our previous tuning work. We had to check the 28 Melodie boxes of the 4 opposite-side clusters. The fact that we waited this long for this step says lots about the quality of the install by Solotech. The very small number of wiring/install issues gave us high confidence that these clusters would be in good shape. 28 speakers checked: 28 speakers good.

The fact that we did not have to reposition any of the 8 clusters or modify any of the inter-box angles says lots about the quality level of the Thinkwell design team. Anyone reading this who has been on a job site where I did the SIM tuning knows what the odds are that speakers are going to be SUBSTANTIALLY moved: very high. In this install there were 56 Melodies (Mains), 23 CQ-2, 32 UPJunior, 43 UPM-1P, 32 MM4-XP (Surrounds), 10 600-HP (subs) and 24 UPJ-1Ps (Constellation) that required NO REPOSITIONING OR ANGLE CHANGE. The cardioid configuration of subwoofers was re-angled to make best use of its cardioid steering (a very simple job for the riggers). Only the Soundbeams had to be moved (raised two meters and angularly adjusted) – again a minor change in the big scheme of things. Hats off to the Thinkwell team for excellent design work and to the installers for putting it in per the plans.

Sound beam discovery

The design goals of the SB-2 are quite challenging: to cover the opposite side of the circle – without disturbing the near side. The intent was not COVERAGE in the traditional sense – as in sound or NO sound – but foremost vertical image control. This is where scenic design adds a challenge: a shower curtain. Yes, a thin plastic sheet around the room perimeter to obscure technical areas, catwalks etc. The SB-2s are behind it. Acoustically transparent, of course….. kind of. We measured with the curtain in place – and pulled back – 6 dB loss from 2 kHz on up. Result: we have to drive the HF harder to get to the other side. Result: splash and spill on the near side is stronger than desired. Response: adjusted the tilt angle up to reduce the level on the near side. Result: better but still not optimal. Decision: riggers will move the SB-2s up 2 meters during the daytime tomorrow and we will reset the angle down and try again.

More Verification: Opposite side line subwoofers

The verification proofing technique makes use of the symmetry of the room. We placed mics at opposite sides and stored the individual responses of boxes 1 to 5 with their mirror opposites. 1 polarity reversal found.

Still More Verification: Underbalcony speakers 8-32

Level and EQ were verified as matched for each of the remaining UB speakers. Delays were adjusted individually for each because the geometry of the Mains and Ubalcs, as constructed, does not make for an absolutely concentric pair of circles. The differences overall ranged about 3 ms over the 70 ms of approximate range. To have a system designed to be tweakable to this level of detail is something you just don’t see every day……… or pretty much ANY other day. Wow!
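The delay arithmetic here is plain time-of-flight; a minimal sketch, assuming ~343 m/s in air (the 24 m figure is my back-calculation from the ~70 ms range, not a measured dimension):

```python
C_AIR = 343.0  # m/s, approximate speed of sound in air

def delay_ms(extra_path_m):
    """Delay needed to synchronize a fill speaker with a main arrival
    that travels `extra_path_m` farther to reach the listener."""
    return extra_path_m / C_AIR * 1000

# ~70 ms of range corresponds to roughly 24 m of path difference,
# and a 3 ms spread is about 1 m of geometric non-concentricity.
print(round(delay_ms(24.0), 1))
```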