
Voice-only Tops


Recommended Posts

  • Members

I'm going to respectfully suggest that you do a bit of studying up on this before continuing this absurd, horribly factually inaccurate argument.

 

I've been designing touring-level systems (not just systems, but amplifiers, signal processing and speaker products) and mixing FOH for over 30 years, and my day job for the past ~20 years has been designing instrument amplifiers on a fairly large scale.

 

For example, piano covers a wide range; add the harmonics and you are talking <100Hz all the way up to 10kHz. Are you really telling us that nothing else can be in this range? What about other instruments playing the same notes (fundamentals)? They HAVE to share spectrum. Physics doesn't allow it any other way.

 

They overlap ALL OVER THE PLACE, that's how it works.


  • Members

What I was trying to get across, after a few too many beers, was that I have been at large shows where I panned instruments and saw a real benefit to NOT having everything piled on top of one another in the mix in a mono situation. That being said, I believe there is a benefit to having a separate sound system for each instrument in the band.

Example

At a bluegrass fest I have a mando on the left and a banjo on the right. Instead of running a mono mix, panning the mando a bit to the left and the banjo a bit to the right really seems to decrease the clutter blasting out of one box.

So this being said, I can see a real benefit to having a separate system alongside the instrument system for VOX.

 


  • Members
At a bluegrass fest I have a mando on the left and a banjo on the right. Instead of running a mono mix, panning the mando a bit to the left and the banjo a bit to the right really seems to decrease the clutter blasting out of one box.

So this being said, I can see a real benefit to having a separate system alongside the instrument system for VOX.

and I can see all kinds of negatives from the bleed between systems at the sources. IMO, the bleed from the instruments into the vocal system really messes with any kind of coherency. It's even worse the other way around if there are some loud stage monitors. All that worrying about matching acoustic centers for coherency is totally thrown out the window.


  • Members
Really? So you tell me what major thing in physics has changed since Altec and JBL and EV introduced all the physics that are still being used to this day in modern sound systems?

 

Lots has changed with the execution of the devices AND the amplifiers available to drive them. No longer are designers constrained by the 30-50 watt amps of the era those early devices came from. The drivers themselves are able to handle 600-800 watts RMS with minimal power compression compared with older drivers, because the flux field geometry is better understood, the cooling paths and air flow transfer functions are much better implemented, the adhesives and materials in general are far superior, and the real-world costs of drivers and amplifiers have fallen by a factor of close to 10. The trade-offs of smaller size and lower weight now allow a small box to generate much higher SPL, tuned to a wider bandwidth, at a lower cost. This means designers are not limited as before, and a PA system that costs, say, $100k today is probably 10x higher performance than one from the good 'ol days of the 1960s.

 

Even high frequency horn and waveguide technology is vastly improved. High SPL used to be available only from a device like a 2482, which rolled off before 10kHz. The new 1.5" and 2" HF drivers are night and day different, in part due to materials, in part due to phase plug improvements, and in part due to horn flare improvements. Waveguides for line arrays are a totally different approach that works very, very well too.


  • Members
I'm going to respectfully suggest that you do a bit of studying up on this before continuing this absurd, horribly factually inaccurate argument.

 

If you don't understand what I'm talking about, then my guess is you are more into building systems and have less experience mixing. I have a complete understanding of both. I am an electronics tech and built systems for a living years back. I mainly run a recording studio now, but I have worked in all areas of the business for over 40 years, so I may just have a few valid points you may be dismissing before you even understand what I'm talking about.

 

My point about mixing using frequency separation is how you mix professionally, in the studio or live. I didn't say parts don't overlap; I said instruments have a targeted range of tones and you limit instruments to their natural ranges. A normal bass guitar produces a frequency response from 40Hz to around 3kHz. You don't hear much of the fundamental frequencies, you feel them. Most mixes target a bass's first and second order of harmonics, which reside in the 80 to 500Hz range.
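As a quick sanity check of those numbers, here is a minimal sketch (standard 4-string tuning assumed) showing where the open strings' lower partials land:

```python
# Standard-tuning open-string fundamentals for a 4-string bass, with their
# 2nd and 3rd partials; the lower harmonics do land roughly in the 80-500Hz range.
for name, f0 in [("E", 41.2), ("A", 55.0), ("D", 73.4), ("G", 98.0)]:
    print(f"{name}: fundamental {f0:.1f} Hz, 2nd partial {2 * f0:.1f} Hz, "
          f"3rd partial {3 * f0:.1f} Hz")
```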

 

Yes, you will have higher frequencies up in the 2-5kHz range, but they have much less power, are not the dominant resonance of the instrument, and most of those frequencies will be masked by other instruments in a mix. Even plugged in direct, with no amp or speakers to roll off the frequency response, a bass guitar's pickups just don't produce much high-frequency content, even with someone dialing up a bright slap-bass tone and using active pickups. You know, ears can be easily fooled into thinking there is something there that doesn't exist. You have aural illusions just like you have visual illusions.

 

Using a simple audio analyzer anyone can download to a laptop is what separates fact from fiction, and I suggest you verify what I'm talking about before you dismiss it as being absurd and horribly factually inaccurate. Test tools don't lie; they produce undeniable facts if the person using them has the skill and education to understand them.
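For anyone who wants to try that, here's a minimal sketch of the kind of laptop analyzer being described (the WAV filename and band edges are illustrative assumptions, and a mono 16-bit capture is assumed):

```python
# A minimal spectrum-analyzer sketch: where does the energy actually sit?
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("bass_di.wav")     # hypothetical mono 16-bit capture
samples = samples.astype(np.float64) / 32768.0  # normalize PCM to +/-1

window = np.hanning(len(samples))               # reduce spectral leakage
spectrum = np.abs(np.fft.rfft(samples * window)) ** 2
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

# Report the energy split across a few illustrative bands
for lo, hi in [(40, 500), (500, 2000), (2000, 5000)]:
    band = (freqs >= lo) & (freqs < hi)
    frac = spectrum[band].sum() / spectrum.sum()
    print(f"{lo}-{hi} Hz: {100 * frac:.1f}% of total energy")
```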

 

If you pump, say, bass through a PA, it's naturally going to push a good 90% of its power through the bottom cabs, not through the upper mids or horns. If for some reason you do mix the bass response ultra flat, with the upper frequencies "unnaturally" cranked way up, it will interfere with other instruments that are supposed to dominate those frequencies. A bass synth, for example, can extend from very low to very high in frequency. In order to prevent that instrument from masking others, you notch frequencies out with an EQ so you have a hole where the other instruments reside. You don't just leave everything at flat response, unless all you want is a bunch of white noise.
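A minimal sketch of that notching idea, assuming a 48kHz sample rate; the stand-in synth and the 440Hz target for the hole are illustrative:

```python
# Carve a hole in a broadband bass synth so another instrument can sit there.
import numpy as np
from scipy import signal

rate = 48000
t = np.arange(rate) / rate
# Stand-in for a bass synth with energy from very low to fairly high
bass_synth = sum(np.sin(2 * np.pi * f * t) / n
                 for n, f in enumerate([55, 110, 220, 440, 880], start=1))

# A low Q gives a wide, musical cut rather than a surgical notch.
b, a = signal.iirnotch(w0=440.0, Q=2.0, fs=rate)
carved = signal.lfilter(b, a, bass_synth)
```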

 

If you have a lot of instruments in a mix, you roll off frequencies they don't, or don't need to, produce. This helps to limit unwanted noise and targets the mic's sound source. This is especially useful on drums, to prevent unneeded bleed that causes phase issues when you have multiple mics collecting the same source. I mean, this is audio 101, kindergarten stuff that's used in both studio and live work. I'm surely not inventing anything new here. If you don't get what I'm talking about, then maybe I'm wasting my time attempting to enlighten you on how pro mixing works.
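And the roll-off side of it, sketched the same way (the 100Hz corner is an assumed value; in practice you set it by ear and by what the source actually produces):

```python
# High-pass one mic channel to remove lows the source doesn't need to carry.
import numpy as np
from scipy import signal

rate = 48000
channel = np.random.randn(rate)  # stand-in for one second of a mic channel

# 4th-order Butterworth high-pass at 100 Hz removes rumble and low bleed
sos = signal.butter(4, 100.0, btype="highpass", fs=rate, output="sos")
filtered = signal.sosfilt(sos, channel)
```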


  • Members
Indeed things have improved mechanically and electrically, but the physics of sound has never changed :)

Thanks for all the great input, Aged.

 

 

Physics never changes, but the understanding and implementation of it have changed in the 40 years since Pink Floyd and the Dead toured! There are no major tours presently using a separate sound system for each instrument.


  • Members

Nobody has ever mentioned (or implied) correcting the voicing of any instrument to achieve flat response (constant power density). This is not what you wrote or described earlier, even if it's what you intended.

Every instrument's response can be described by a mathematical equation represented by a Fourier series of sine waves. Altering this series through EQ alters the ratios within the series (which are the harmonics) and thus the tone. When you overlap all of these sets of series in a mix, you generate a much more complex and dense series of sine waves. This is one way to describe what happens mathematically in a mix, and it holds true analog or digital. If you want a particular guitar, for example, to sound like that guitar, the series must remain the same. If you want it to sound like a piano, the series must change to become like that of a piano (the basis of some types of synths). If you change something by removing part of its defining series, it will sound different. I think I understand what you were trying to say, but the way you described it is not what I think you are doing when mixing.

You may be a tech and mix, but that alone doesn't establish that you understand what's going on under the hood. I am a "real" engineer; I have specialized in audio and audio design, which includes ~35 years of studying and doing at the A-tier level. This also includes running a part-time sound company for much of this time, servicing mostly national acts and mixing at this level too, so I walk the walk in addition to talking the talk. I also cut my teeth in the San Francisco Bay Area music scene, which gave me first-hand experience with the very folks who experimented with the concepts that evolved through the Dead's wall of sound system and all of the evolutions that resulted through the years to where things are today. It was an exciting time and place to be in the audio industry. Your comments about this concept clearly show that you do not understand why that system approach came to be and why things evolved away from it. What was important about the system was some of the major concepts that were learned from the exercise and how the evolution of sound systems (especially speakers and mixers) followed from this information. I know for a fact that some of my products benefited from what I learned from that time, and others did too.
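To illustrate the Fourier-series point in code (the harmonic ratios here are invented for illustration, not measured from real instruments):

```python
# Same fundamental, two different harmonic-amplitude series, two timbres.
import numpy as np

rate = 48000
t = np.arange(rate) / rate
f0 = 220.0  # shared fundamental (A3)

def tone(harmonic_ratios):
    # The amplitude ratios of the harmonic series are what define the timbre
    return sum(a * np.sin(2 * np.pi * f0 * n * t)
               for n, a in enumerate(harmonic_ratios, start=1))

bright = tone([1.0, 0.8, 0.6, 0.5, 0.4])    # strong upper partials
mellow = tone([1.0, 0.3, 0.1, 0.05, 0.02])  # energy mostly in the fundamental
# EQ that changes these ratios changes the instrument's tone, as argued above.
```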


  • Members

I had a chat with the OP about this situation the other day. My guess is that if the initial performance had been in a decent-sounding room, the questions never would have arisen. As I always say, "Acoustics are everything," and a bad room is still a bad room. That said, pattern control can help a lot.

 

Upon occasion I have sent vocals (or specific instruments) to either a center cluster or a stage apron front fill, via either a matrix or an aux, to increase clarity without muddying up the whole mix. I think this would qualify as having a separate PA for the vocals. FWIW I think this is really pretty common practice, it's just viewed differently.


  • Members
Upon occasion I have sent vocals (or specific instruments) to either a center cluster or a stage apron front fill, via either a matrix or an aux, to increase clarity without muddying up the whole mix. I think this would qualify as having a separate PA for the vocals. FWIW I think this is really pretty common practice, it's just viewed differently.

 

Good example. In the case of the front fills, the reason they get vocals only (unless there are other unamplified instruments that need local reinforcement) is that there is excess energy from the stage wash, usually comprised of amplified instruments and drums (and trumpets, etc., if present), in the area where there is poor coverage from the main hangs. For center cluster fill, again, that is usually used where you are trying to bring some localization back to the center of the stage, mostly for folks who are outside the coverage of the main hang but receiving plenty of stage wash. In very wide applications this is more the norm, but in that condition you will also find acoustic guitars and such sharing the same space, because they suffer from the same problem.

 

Let's look at the reasons behind why these things are done; analyzing them out of context is not a good way to come to an understanding of WHY it's done. Just because somebody else does something doesn't automatically mean that it's good in every instance.

 


  • Members

 

Good example. In the case of the front fills, the reason they get vocals only (unless there are other unamplified instruments that need local reinforcement) is that there is excess energy from the stage wash, usually comprised of amplified instruments and drums (and trumpets, etc., if present), in the area where there is poor coverage from the main hangs. For center cluster fill, again, that is usually used where you are trying to bring some localization back to the center of the stage, mostly for folks who are outside the coverage of the main hang but receiving plenty of stage wash. In very wide applications this is more the norm, but in that condition you will also find acoustic guitars and such sharing the same space, because they suffer from the same problem.

Let's look at the reasons behind why these things are done; analyzing them out of context is not a good way to come to an understanding of WHY it's done. Just because somebody else does something doesn't automatically mean that it's good in every instance.

 

True. It's usually not so much that the main PA won't project clear vocals as that in the near-field center, where coverage is a little diminished, some clarity on select things (usually vocals) is needed. With the inordinately loud stage volume in an already live room that the OP was having issues with, this would probably have made the artist/promoters happier. Doing this would also probably satisfy what they were asking for in the future (whether it's needed or not in the next venue) and is an easy fix.


  • Members

and the coverage targets only the areas where the added clarity (or the missing part of the program) is needed, not the whole room, or the acoustics of the original problem will do the same thing to the fill system.


  • Members

 

I'm lost as to why a discussion about mixing instruments to achieve clarity has anything to do with the subject of having a separate vocal channel? :)

 

I was attempting to make the point that there's no need for separate vocal cabs. In a larger system properly mixed, the vocal frequencies will already dominate the upper mids and horns.


  • Members

 

 

I was attempting to make the point that there's no need for separate vocal cabs. In a larger system properly mixed, the vocal frequencies will already dominate the upper mids and horns.

 

That's so far from the truth as to be, unfortunately, laughable. I don't know how to put it in a milder form. The vocal frequency range is typically from 150Hz up into the 5-10kHz range (including the lower harmonics), the same place as just about everything else.


  • Members

So what we have gathered here is:

1. Running separate tops for vocals only is not advantageous with the technology available in modern sound systems.

2. The original problem was loud stage volume in a room with a high RT and strong reflections.

3. Running a matrix with vox in center boxes can be very effective.

4. The physics of sound has never changed and never will.

 

Good thread guys.

We need more threads like this.

 

 


  • Members

Correction: running a vocal-only matrix is advantageous when it's filling in a zone where the vocal coverage is not good and the stage wash dominates the mix for that group of listeners. Generally this would be front fills, outfills, under-balcony fills... not generally the areas where the main PA has good coverage and balance.

 

The physics of sound has not changed; our tools for dealing with the physics of sound have certainly evolved.


  • Members

PSG ... I wouldn't characterize it that way. Running a separate vocal system will almost always result in measurable reductions in distortion and improvements in dynamic range, assuming it is set up properly. The benefits will usually track the headroom already present in a system: systems with adequate headroom will see less benefit than those that are already taxed. It is costly and complex. Will the benefits outweigh the other potential issues? It depends on how it occurs to you.


  • Members

I found an old article in Mix Magazine, and the following quote is a very simple explanation of what Dave was thinking when he designed his dual PA:

The idea that evolved was based on knowledge he acquired while designing Rat Sound's MicroWedge stage monitors. During this process, he did quite a bit of research and was able to prove that loudspeakers have reduced clarity as the signal being provided to them increases in complexity.

“Just listen to a vocal mic through two speakers at high volume and then add in a 50Hz tone at high volume,” Rat explains. “It blurs the vocals. Then use two speakers with the vocal in one and the tone in the other. The vocal will stay clear. I believe the primary issue has to do with the speaker efficiency and linearity while the voice coil is centered in the gap. The speaker is less efficient when the voice coil is at its extremes because the 50Hz tone reduces the time that the voice coil is centered. Some monitor engineers run separate instrument and vocal wedges for this reason. What if I applied that setup on a grander scale, as in two P.A. systems?”

The resulting house loudspeaker design comprises dual V-DOSC line arrays flown next to each other on each side of the stage. Via the Midas XL3000 house console, any instrument or vocal can be sent to either the inner or outer loudspeaker arrays. Typically, side-by-side systems would introduce unacceptable comb filtering issues, but because each P.A. is reproducing different instruments, it's not a problem.

 


  • Members

The problem that's being described is intermodulation distortion, where one signal modulates another. This is the best explanation I have heard, though IM distortion in the devices themselves has decreased greatly with technology, and high-passing is another way to improve things.
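Here's a minimal sketch of that mechanism with a toy motor model; the nonlinearity coefficient is arbitrary, chosen only to make the sidebands obvious:

```python
# Toy driver whose force factor (BL) drops as excursion grows, so a
# high-level 50 Hz tone modulates a 1 kHz "vocal" riding on the same cone.
import numpy as np

rate = 48000
t = np.arange(rate) / rate
low = np.sin(2 * np.pi * 50 * t)           # high-level LF tone (sets excursion)
vocal = 0.2 * np.sin(2 * np.pi * 1000 * t)

bl = 1.0 - 0.4 * low ** 2                  # efficiency falls off-center (arbitrary 0.4)
output = bl * (low + vocal)

# The bl * vocal product creates sidebands at 1000 +/- 100 Hz:
spectrum = np.abs(np.fft.rfft(output * np.hanning(len(output))))
freqs = np.fft.rfftfreq(len(output), 1.0 / rate)
for f in (900, 1000, 1100):
    print(f"{f} Hz: {spectrum[np.argmin(np.abs(freqs - f))]:.1f}")
```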

 

It used to be much worse in the good 'ol days: there was rarely enough rig for the gig, the speakers were always being driven into serious power compression, the IM contribution from magnetic field modulation was higher, the overall distortion was higher, the mics and electronics were not as good, the filters were nothing like today's DSP, etc.

 

If you have enough rig for the gig, and it's properly tuned of course, I think the difference would be pretty darn small. There may be some things that get a little better and some a little worse; the gorilla in the room is that any bleed will now be reproduced by 2 separated sources (reciprocally), which may contribute to the cure being as bad as or worse than the disease. IMO, most folks would be better off doubling up the PA and mixing within the new limits than splitting the PA up and driving both parts into limiting. That way there's theoretically 3dB greater headroom available, provided the mix doesn't get louder and eat it all back up.
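For reference, the 3dB figure is just the power ratio of doubling the boxes:

```python
# Doubling the rig doubles the available power: 10*log10(2) dB of headroom.
import math
print(f"{10 * math.log10(2):.2f} dB")  # ~3.01 dB
```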

 

It should not be forgotten that when 2 signals sum in air, there may still be the effects of IM distortion, though through somewhat different mechanisms. Then, exactly the same IM distortion mechanism occurs at the microphone diaphragm and especially at the ear drum.


  • Members
It should not be forgotten that when 2 signals sum in air, there may still be the effects of IM distortion, though through somewhat different mechanisms. Then, exactly the same IM distortion mechanism occurs at the microphone diaphragm and especially at the ear drum.

 

I know this is straying OT quite a bit, but you've brought an interesting question to mind. I've always wondered about coaxial speakers that don't have a separate "horn" per se, but rather an acoustically transparent dust cap on the woofer through which the HF passes. Doesn't the LF modulate the compression driver's output drastically in doing this? Also, since the horn is part of the acoustic loading of the compression driver (although less so than the throat), doesn't it make for a varying HF impedance depending on what the woofer is doing? I love the idea of time-aligned single-point-source systems, but the "horn through the middle" approach seems to make more sense. Or does it???
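For a rough feel for the Doppler (FM) side of that question, here's a back-of-envelope sketch; the excursion figure is an assumption for illustration, not a measured spec:

```python
# HF content radiating from or through a cone that a bass tone is moving.
import math

c = 343.0        # speed of sound in air, m/s
f_lf = 50.0      # bass tone driving the cone, Hz
x_peak = 0.005   # assumed peak cone excursion: 5 mm
f_hf = 5000.0    # HF content riding on the moving cone, Hz

v_peak = 2 * math.pi * f_lf * x_peak   # peak cone velocity, m/s
deviation = f_hf * v_peak / c          # peak Doppler frequency shift, Hz
print(f"peak cone velocity: {v_peak:.2f} m/s")
print(f"peak Doppler shift at {f_hf:.0f} Hz: +/-{deviation:.1f} Hz")
```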

 


Archived

This topic is now archived and is closed to further replies.

