
Audacity - Asymmetric waveform

  • Audacity - Asymmetric waveform

    Hi! This is what I notice when I record my vocals (beginner/intermediate singing practice):



    The dark blue trace is not as symmetrical as I would expect. What is the root cause of this asymmetry? Is it the DAW/equipment, or can it be caused by the actual sound? If it is the sound, is it something that can be corrected/improved (like learning not to pop the mic)?

    The Audacity Manual explains that the light blue trace is calculated from a local RMS, and the dark blue trace from tracking the most prominent harmonic. As you can see from their examples, all their traces are symmetrical (maybe they have been "prettified").

    http://manual.audacityteam.org/man/a..._waveform.html

    Anyway, my first thought was that the dark blue trace should be symmetrical, and any asymmetry should be in the light blue trace, because RMS contains offset information, and a harmonic does not. But then I guess the RMS is being used to find zero? But that still doesn't explain what causes the asymmetry, in the first place. What would cause an occasional offset?
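    In case it helps, here is the rough, back-of-envelope check I ran myself to see whether there is a genuine DC offset or just lopsided peaks. This is not how Audacity draws its traces, just a quick numpy/scipy sketch, and "vocals.wav" is a placeholder filename. A clearly nonzero mean would point at the equipment; a near-zero mean with unequal positive and negative peaks points at the sound itself.

    import numpy as np
    from scipy.io import wavfile

    rate, data = wavfile.read("vocals.wav")          # placeholder filename
    x = (data[:, 0] if data.ndim > 1 else data).astype(np.float64)
    peak = float(np.max(np.abs(x)))
    if peak > 0:
        x /= peak                                    # normalise to +/-1 for readability

    win = rate // 10                                 # 100 ms windows
    for start in range(0, len(x) - win, win):
        w = x[start:start + win]
        print(f"{start / rate:6.2f}s  mean={w.mean():+.4f}  "
              f"+peak={w.max():.3f}  -peak={-w.min():.3f}  "
              f"rms={np.sqrt(np.mean(w ** 2)):.3f}")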

    So far, I've been told mass movement of air (for example, due to proximity to the mic. I experimented with backing off, and I managed to get more asymmetry!)

    Somebody also mentioned that the asymmetry can reduce headroom, as the trace will clip sooner than if it were symmetrical, so it is worth addressing, if possible.

    And 1001gear believes it is "dark harmonics":

    Originally posted by 1001gear View Post
    These are dark harmonics...
    I don't know how tongue-in-cheek that may be, but he invited (nay "dared") me to re-post in this forum, not realizing that I have a hide like a teflon rhino.

    So, any ideas, anyone?

    Thanks.

  • #2
    brrrump



    • #3
      It's not unique to Audacity. Nearly all DAW programs that have wave views can show you similar results.

      Pure sine waves are the most likely to be symmetrical. They can be offset by a DC voltage, however, so a north- or south-going wave may be larger than the other.

      Guitars, vocals and bass rarely produce pure sine waves. You have all kinds of harmonics riding on top of the sine wave which can be larger on one side or the other. On guitar, for example, the initial pick attack of the string may cause a larger peak on the north-going wave. Afterwards the string may vibrate in an elliptical pattern - instead of vibrating left and right across the pickups, it vibrates towards and away from the frets. Even palm muting can change the look of a waveform.

      Bass guitar picked with the fingers vs. plucked with a pick can create nearly half-wave symmetry along the entire track. Then if you compress the track it may wind up looking normal.

      The waveforms you posted may simply be asymmetrical because you had some room reflection that got into the mic and just happened to be in phase with the original signal, which made one side stronger. It can be an issue with being slightly off axis, or with a vocal, the pronunciation of particular words. Maybe that built-in reverb spring on an amp, when mixed with the dry signal, colored the sound more on one side. Maybe an echo unit had its slapback trimming one side in the form of phase cancellation.

      You won't know which of these may be the cause unless you use your ears and what's between them.

      Sometimes you can have a preamp or gain box produce an asymmetrical wave pattern because of DC leakage. The only time it really causes a problem is when the center point is north or south of the zero point due to DC leakage. It may prevent a track from reaching full peak-to-peak levels when boosted and possibly cause asymmetrical clipping, or at least limit the possible maximum volume levels. A lot of it can be cumulative and only affect things once the tracks are mixed down to a stereo file. A bunch of offset tracks may be noticeable.

      There has to be quite a bit of DC offset for this to be an issue however. Your posted clips don't even come close to being in that category.

      You can use a DC Offset plugin to remove the offset, but it's not going to do anything for a wave which has no offset and is just asymmetrical for natural reasons.
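      If you're curious what that kind of plugin actually does, the crudest version is just subtracting the average level. A minimal sketch (placeholder filenames, assumes a 16-bit WAV; real plugins usually use a very low high-pass filter instead so a drifting offset gets removed too):

      import numpy as np
      from scipy.io import wavfile

      rate, data = wavfile.read("take.wav")            # placeholder filename
      x = data.astype(np.float64)
      x -= x.mean(axis=0)                              # remove the constant offset per channel
      x = np.clip(x, -32768, 32767)                    # stay inside the 16-bit range
      wavfile.write("take_no_dc.wav", rate, x.astype(np.int16))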

      The voice can easily have asymmetrical peaks too. You can easily have one side of the vocal cords tighter than the other, or the guttural voice being used mechanically restricts one half of the air passage, making one side stronger than the other.


      None of this matters at all so long as it sounds good. If you notice the issue more with one preamp than another, then yes, there may be an issue with the way it produces the signal, but you can usually hear it happening when you track and avoid having it happen in the first place.

      If you hear no issues, simply forget the way the waves look and stop trying to mix with your eyes. Video is for the eyes, audio is for the ears.

      The goal in mixing is not to make everything look orderly and symmetrical. In fact it's often just the opposite. What is produced in one track may wind up being comb filtered. Another track may be comb filtered to be somewhat of a mirror image to the other track. The two together can coexist in the same general frequency range, but because one has some boosted frequencies where the other is scooped, the ears can hear them as separate aural images.

      You can also have things like reverb, echo, chorus and drive on the track that may make the waveforms unrecognizable compared to a sine wave.
      Drums often have asymmetrical waveforms because the initial transient is the loudest, or maybe the mic being used is more sensitive when the diaphragm is sucked out vs. being pushed in by the air pressure.

      There are a gazillion other things that can cause irregular waveforms. You only have to worry about the ones that cause issues. Most of those you can hear with the ears or detect when your ability to manipulate them is affected. This ability does come with experience too. I'm not saying you should ignore looking at the waves; in fact, just the opposite. Over time you'll know what looks normal and what doesn't.

      You'll normally hear something that doesn't sound right, so you expand the waveform to see if it shows visually. If it does, there are tools like the ones Ozone makes which can actually rewrite a waveform to get rid of the flaw. It's extremely tedious however. I've done that kind of stuff in certain cases where I had no alternative. It's a lot easier and a thousand times faster to just track a new part.

      As an audio engineer we don't always have that luxury however. If you have a band come in and record, then find the flaw later when mixing, it may be better to just fix that flaw than bring a band member back in to re-track it. Learning to use the tools can give you an edge over other engineers in that case. There are a lot of studios that do audio and video restorations too. The original people on those recordings may not even be alive any more, so you use whatever tools you can. This all leads back to getting quality tracks in the beginning, of course. You don't want to waste any more time than you have to repairing poorly recorded tracks when the fix is simply learning to track better.
      Last edited by WRGKMC; 06-07-2016, 08:47 AM.



      • #4
        ^
        Thanks.

        So it could be the sound, the gadgetry or the electronics, or a combination. Some effects you mentioned, I would expect to be persistent, not transient. And others, I would have expected to be < 1/10th second. But, ho hum, stranger things happen, and if it is commonplace, and not a symptom of bad technique, that's the most important thing.

        It wasn't really a mixing question. Visuals are excellent for training vocals. No need to use only your ears. For example, try singing a siren from A2 to A4, say, and try to keep the intensity even. The real time visual feedback of a waveform will train both your ears and your voice. Learning to do it only by ear would be some feat. But your ears will eventually learn from the visuals.



        • #5
          Originally posted by kickingtone View Post
          Visuals are excellent for training vocals. No need to use only your ears. For example, try singing a siren from A2 to A4, say, and try to keep the intensity even. The real time visual feedback of a waveform will train both your ears and your voice. Learning to do it only by ear would be some feat. But your ears will eventually learn from the visuals.
          You may think so at this point, but it's actually an epic fail. I'm surely not surprised by it because it's such a common misconception. The PC generation has grown up with the belief that everything can be learned via a computer screen. My experience predates PCs by a good 20+ years, so I have clear knowledge of the use of both. Computers have been a great aid in all kinds of education, but they still lack in so many ways. They can surely point you in the right direction but fail miserably in giving you the actual hands-on experience needed to become an actual professional.

          The fact is, the only time you actually need your eyes is for reading actual musical notes, tabs or lyrics. Yes, it may seem tough to learn without a visual aid. No one ever said it was easy becoming an accomplished musician, however. Only the most dedicated to learning their craft succeed, because music requires both knowledge and physical dexterity. The knowledge has never been easier to obtain because of computers, and because the knowledge is easy to obtain, it makes people think the craft skills are too. A computer cannot give you what's most important - the physical control to play an instrument or sing well. That only comes through thousands upon thousands of hours of hard work.

          Seeing a visual response may be interesting, but it actually does nothing to improve your skill as a musician. In fact it can be more of a distraction from focusing on actually singing properly than it can be a source of feedback. If it did have a benefit, I would be a world-class singer/player, given the fact I began using an oscilloscope back in 1972 and used one for decades on a daily basis as an electronic technician. The waveform you posted is simply an "oscilloscope histogram".

          Here are some things you can do that can be fun and give you a more comprehensive understanding.

          You should have the ability to expand the waveform lengthwise. Expand it so you see a couple of seconds' worth of the waveform and you'll see more detail in what's actually being written. You'll find different pitches and notes can produce different-sized waves. Low notes, for example, will likely have more amplitude than higher frequencies. Your ears hear them at similar volumes, but because our hearing is non-linear and most sensitive around 1~2 kHz, notes in the midrange don't need to be as loud as notes in the high and low ranges.

          (Typical hi-fi systems commonly boost the highs and lows to increase the "perceived" loudness, which is not the same as actual loudness.)

          A better tool to sing through might be something like Autotune. It will detect pitch variances, which can be very useful in teaching yourself to maintain a pitch. It's like singing into a guitar tuner and trying to keep the needle in the center.
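          You could even rig up a rough version of that yourself. This is just a naive autocorrelation pitch estimate over short frames (the same basic idea a tuner uses), not Autotune; "practice.wav" is a placeholder filename and the frame size and range limits are only illustrative:

          import numpy as np
          from scipy.io import wavfile

          rate, data = wavfile.read("practice.wav")        # placeholder filename
          x = (data[:, 0] if data.ndim > 1 else data).astype(np.float64)
          peak = float(np.max(np.abs(x))) or 1.0

          frame = int(0.05 * rate)                         # 50 ms frames
          fmin, fmax = 70.0, 1000.0                        # rough vocal range
          for start in range(0, len(x) - frame, frame):
              w = x[start:start + frame] - x[start:start + frame].mean()
              if np.max(np.abs(w)) < 0.02 * peak:
                  continue                                 # skip near-silence
              ac = np.correlate(w, w, mode="full")[frame - 1:]   # autocorrelation, zero lag onward
              lo, hi = int(rate / fmax), int(rate / fmin)
              lag = lo + int(np.argmax(ac[lo:hi]))         # strongest periodicity in range
              print(f"{start / rate:5.2f}s  ~{rate / lag:6.1f} Hz")

          Watching whether a sustained note drifts is about all it's good for, but that is exactly the tuner-needle exercise.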

          For judging frequency responses, try a frequency analyzer. A frequency analyzer only detects the overall frequency response, not necessarily the pitch, but you will see pitch move the peak of the frequency response, because the root note and the first order of harmonics constitute the greatest portion of the peak.

          It will give you a readout that already compensates for the way the ears hear loudness and pitch too. Try downloading a copy of Voxengo Span. It's free and you can use it to compare two tracks at the same time. You can stick it in the main bus, solo two tracks panned left and right, and see how the frequencies of tracks collectively add together to build a full-frequency mix.
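          For what it's worth, the core of what an analyzer like Span shows is just a windowed FFT. A bare-bones sketch (placeholder filename, assumes the file is at least 8192 samples long, and obviously nothing like Span's display):

          import numpy as np
          from scipy.io import wavfile

          rate, data = wavfile.read("track.wav")           # placeholder filename
          x = (data[:, 0] if data.ndim > 1 else data).astype(np.float64)

          n = 8192                                         # analysis window size
          seg = x[:n] * np.hanning(n)                      # window to reduce spectral leakage
          mag = np.abs(np.fft.rfft(seg))
          freqs = np.fft.rfftfreq(n, d=1.0 / rate)
          db = 20 * np.log10(mag / (np.max(mag) + 1e-12) + 1e-12)   # dB relative to the strongest bin

          for f, d in zip(freqs, db):
              if 20 <= f <= 20000 and d > -40:             # show only the strongest bands
                  print(f"{f:8.1f} Hz  {d:6.1f} dB")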

          It also lets you detect masking. If you have two tracks competing for the same frequencies, the responses overlap, resulting in a volume war with one track shadowing another so it can't be heard. The trick is to have each track carve out some of the total frequency response and minimize overlapping, so each instrument can be clearly heard. For example, it's common to roll off guitar frequencies below 160 Hz because those frequencies aren't missed. This frees up more "turf" and reduces masking of the bass so it can be clearly heard.

          In other cases you may need to use an EQ to create notches. The vocals may be covered up too much by other instruments, so you use the frequency analyzer to see what frequencies the vocals are strongest at, then create an EQ notch to attenuate the frequencies of the instruments masking the vocals at that frequency. Vocals often sit above the guitars, so rolling the guitar off at about 5 kHz frees up space above to allow the vocals to be heard, and rolling it off at 160 Hz allows the bass to be heard. Sometimes you even need to notch the guitar at around 4 kHz so the snare can be heard, or give the snare a 4 kHz boost, or do a little of both.
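          In code terms those two moves are just a high-pass and a notch. A minimal scipy sketch (placeholder filename, assumes a 16-bit WAV; the filter order and the notch Q are arbitrary choices for illustration, not a mixing recipe):

          import numpy as np
          from scipy import signal
          from scipy.io import wavfile

          rate, data = wavfile.read("guitar.wav")          # placeholder filename
          x = (data[:, 0] if data.ndim > 1 else data).astype(np.float64)

          # Roll the guitar off below ~160 Hz to leave room for the bass.
          sos_hp = signal.butter(2, 160, btype="highpass", fs=rate, output="sos")
          x = signal.sosfilt(sos_hp, x)

          # Cut a narrow band around 4 kHz to leave room for the snare.
          b, a = signal.iirnotch(4000, Q=2, fs=rate)
          x = signal.lfilter(b, a, x)

          wavfile.write("guitar_eq.wav", rate, np.clip(x, -32768, 32767).astype(np.int16))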

          As I said, these tools aren't shortcuts to becoming a better musician. They are very useful to someone new at mixing a recording, though. It is essential to experiment with all the tools available if you want to be good at mixing and recording. You just have to realize these are actually two different art forms which overlap one another.

          Knowing how to play well doesn't make you a better recording engineer; in fact it's often an impediment you have to overcome. The use of engineering tools does nothing to make you a better performer either. Back in the early studios, these were treated as completely different jobs requiring different skill sets.

          Because the cost of recording has come down, it's something many musicians are getting into now. The problem is that the line between the two has become blurred by many who think they are one and the same.

          If you really want to become highly efficient at both trades, treat them as separate jobs, and when you switch hats, forget about the other. You'll find this is still the quickest way of becoming proficient at both.
          Last edited by WRGKMC; 06-08-2016, 01:40 PM.



          • #6
            Originally posted by WRGKMC View Post
            Seeing a visual response may be interesting but it actually does nothing to actually improve your skill as a musician. If it did I would be a world class singer/player given the fact I began using an oscilloscope back in 1972 and used one for decades on a daily basis as an electronic technician. The waveform you posted is simply an "Oscilloscope Histogram".
            I spent many years researching waves as a research scientist, so I know a thing or two about them.

            I disagree with your conclusions specifically when it comes to practising vocals. Of course the point was never that staring at a histogram will make you a world class singer. That is rather a straw man argument.

            This is not about replacing your ears, but about using every resource available. Another resource you use when practising vocals is feeling and nervous feedback. It is not just your ears. Vocals are special in this way, because it is notoriously difficult to describe the internal coordinations of your inbuilt vocal instrument. Singers use all kind of imagery.

            I am very familiar with the limitations of graphical output, maybe even more so than you. The trick is to know the limitations, not to throw the baby out with the bath water. I know for a fact that visual traces have been a great help to me. I can give a ton of examples. You even mention a visual aid, yourself, although centering a needle is not computer based. Another aid that is often mentioned is using a candle to reduce the breathiness of your singing. The idea is to try to sing without blowing the candle out.

            The point about keeping an even amplitude trace is simply one of control. It must not be interpreted as something that can replace perception. I am already aware of all that, but I recall that the ear is most sensitive at 3 kHz odd (and that has to vary from person to person), supposedly coinciding with the singer's formant. Also, the histogram shows pressure amplitude, not power, which is itself proportional to the square of the frequency. So, maybe your sensitivity comparison was referring to something different.

            You mention a number of interesting ideas, which I might investigate. But, I still say...

            COMPUTERS FTW!!
            Last edited by kickingtone; 06-08-2016, 10:22 AM.



            • #7
              Originally posted by kickingtone View Post

              Also, the histogram shows pressure amplitude, not power, which is itself proportional to the square of the frequency. So, maybe your sensitivity comparison was referring to something different.

              You mention a number of interesting ideas, which I might investigate. But, I still say...

              COMPUTERS FTW!!
               I'm not sure where I used the term power. It was unlikely to be meant as an electrical measurement in watts; it was used as a substitute for strength or amplitude.

               Power in watts is calculated with volts, amps and ohms. Frequency isn't even a factor unless it's used to figure out the AC resistance (impedance in ohms).




               AC passing through a coil (an electromagnet) produces more reactance (AC resistance) as frequency goes up. Capacitors work the opposite way: as frequency goes up, their reactance goes down, so their opposition to AC drops as frequency rises. Straight resistors work the same in AC and DC circuits and don't change resistance as frequency changes.


               Maybe there's some way of using frequency in one of the other branches of physics for computing power. I know the basic formulas are all the same and you simply plug in different parameters, but I've never seen frequency used for calculating power before.

               In electronics, frequency is tied to how components pass or block AC, and therefore how they consume current and release it in the form of heat. Power is a consumption measurement, and frequency only enters the calculation through that frequency-dependent resistance.

               Sine waves aren't unique to electricity. Pressure isn't how it's normally described in electronics. I suspect you worked with another branch of physics which describes waves as pressure. In music electronics it's rarely if ever used that way, because it's too easily confused with actual sound pressure waves measured in decibels.

               The histogram depicts the amplitude and frequency of AC waves, which also includes all orders of harmonics and any other noise that might be captured by a transducer over time.

               Signals don't require air to be generated, however. A guitar pickup or an electronic keyboard will work in a vacuum. So will a piezo, LED or laser transducer, so air pressure on a mic diaphragm is only one method of generating an electrical sine wave.

               The waveform in Audacity is peak to peak. The RMS coloration is simply a calculation of the Root Mean Square of the signal, which for a pure sine wave is .707, or 70.7%, of the peak value.

               This is a fairly simple computation a computer can perform during the wave rendering process, and it can be fairly helpful in the mixing process, especially when you want to make sure your final stereo mixdown winds up at about -16 dB to -14 dB so you have enough headroom to run mastering plugins and get the mix volume up to commercial levels.

               I'm able to read the RMS level of most tracks without the additional coloration you get in Audacity. I simply view the top 30% as being peak and everything below it as RMS. Pretty simple really.
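               The 0.707 figure is easy to verify with a quick numpy check. It only holds exactly for a pure sine; a real vocal or full mix gives a different ratio, which is why meters actually measure RMS rather than assume it:

               import numpy as np

               rate = 44100
               t = np.arange(rate) / rate
               sine = 0.9 * np.sin(2 * np.pi * 440 * t)          # 1 second of 440 Hz at 0.9 peak

               peak = np.max(np.abs(sine))
               rms = np.sqrt(np.mean(sine ** 2))
               print(f"peak={peak:.3f}  rms={rms:.3f}  ratio={rms / peak:.4f}")   # ratio ~0.7071

               # The same figures the way a DAW meter reports them (dBFS, 0 dB = full scale):
               print(f"peak {20 * np.log10(peak):.1f} dBFS, rms {20 * np.log10(rms):.1f} dBFS")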


               If you want to get into something really cool for vocals, get yourself a copy of Antares AVox.

               This tool actually allows you to sculpt a vocal track. Pitch changes can make a voice sound deeper or smaller when used mildly, but it quickly begins to sound like an alien voice once you go up or down a few semitones in pitch.

               This tool allows you to actually manipulate the size of the throat to change the size of the voice, especially when manipulating pitch, so you don't wind up sounding like a chipmunk going up in pitch or the devil going down in pitch. You can, for example, give a woman's voice the chest cavity and head size of a male and convert it to a male voice, or shrink a man's voice to sound like a woman.

               Some of it may not work as well as it should, and I'm sure within another 10 years or so you'll simply have a gender or pitch button in a DAW that will do this stuff automatically. But since you're into using software creatively, I think this might be your cup of tea.

              http://www.sweetwater.com/store/deta...FQUIaQodU8EPZw
              Last edited by WRGKMC; 06-08-2016, 02:19 PM.



              • #8
                ^
                 I was actually referring to sound waves, which are air pressure waves, not to electrical signals. For example, depending on how the wave dissipates energy, in one cycle of the wave the work done by the pressure is proportional to the amplitude of the wave. And the power, which is the rate at which that work is done, varies with frequency.

                So, when they say that the ear is most sensitive to sound waves of a particular frequency, we need to know what is being kept constant, the amplitude of the pressure wave, or the intensity of the pressure wave. They would give rise to different results.



                • #9
                   ^^^^ Cool, that's what I had guessed. Hearing sound waves falls under the category of psychoacoustics, a branch of science studying the responses associated with sound, which is a subcategory of psychophysics.

                   The hearing range is non-linear, unlike the waveforms seen in a DAW, which are linear. It helps to know how one impacts the other when using a DAW waveform while mixing.

                   The graph here shows the dB levels needed by the ears to hear pitches at even perceived loudness. A 31 Hz frequency needs to be around 70 dB to match a 250 Hz frequency at 10 dB.




                  If you were to flip this scale upside down it comes pretty close to many speaker responses. This midrange driver for example rolls off the lows where our hearing is least sensitive.






                  • #10
                    ^
                    Yep. That's basically the graph I've seen, showing that sensitivity is closer to the 3 kHz mark.

                    I'm rather sceptical of the research. I don't know how much averaging they had to do, and how much variation there was between different people.

                    The point that occurred to me is that our reaction to sound could be more closely related to the intensity of the sound, rather than the amplitude of the pressure. Perhaps our ears evolved to assess how powerful the thing or beast making the sound is.

                    In optics, it is the power (or intensity) against which we measure sensitivity. It makes sense that our eyes would want to prevent damage from intense light, rather than monitor for electromagnetic wave amplitude.

                    If we were to use power for acoustic sensitivity, the curve would be flatter, or even curve the other way. And the contour could bottom out earlier, which is where I thought your 1~2 kHz may have come from.



                    • #11
                      ^^ When I was growing up, every kid going to public school had their ears and eyes tested. With all those hundreds of millions of children being tested around the world using a standardized hearing test, I believe the accuracy of the chart is accepted science at this point. They tested your eyes too, as they do when you renew a license every so many years, so I'm sure that science is well established as well.

                      The decibel is a logarithmic unit named after Alexander Graham Bell. He came from a family which taught the deaf to speak.
                      The decibel came about through his invention of the telephone. It was used to measure audible signal loss over telephone wires. Telephones used DC current, which produced a lot of frequency loss due to wire resistance, so they needed a new standard for measuring those losses. Prior to the phone they used the telegraph, which was essentially a digital device, and losses were measured in Miles of Standard Cable - a power loss measurement over a mile of cable. When they started transmitting speech over wire, the older standard became insufficient for measuring audio fidelity losses. Decibels (bels) became the new standard, adopted by all major countries for measuring both audible and transmitted sound signals.

                      Individuals may have unique hearing responses, but when studios mix music it's done to satisfy the greatest number of listeners. Psychoacoustics, or the emotional responses to sound, is a different branch of sound science which should not be confused with the standards being used.

                      Man needs laws and guidelines just like he needs a common language. Without an adopted system of measurement you have nothing but chaos, and no way one man can build upon another man's work. Systems of measure do get replaced by better systems on a regular basis, just as words in a dictionary get revised. Just look at how many words have been added since PC computers and the internet were developed.

                      Since analog reached its peak quite a while ago, and since digital is replacing many of the transmission methods, it's unlikely we'll see a standard like the decibel replaced or revised. It is what it is, and it scientifically provides a method of understanding something that was invisible for so many years (electricity). Of course, since the invention of the nuclear microscope, scientists can see electrons now and actually see what was previously invisible. It has changed some of the theories, but for the most part it has promoted many of those theories to fact, or at least made them logical and trustworthy enough for most educated individuals to use reliably.


                      The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. The unit for loss was originally Miles of Standard Cable (MSC). 1 MSC corresponded to the loss of power over a 1 mile (approximately 1.6 km) length of standard telephone cable at a frequency of 5000 radians per second (795.8 Hz), and matched closely the smallest attenuation detectable to the average listener. The standard telephone cable implied was "a cable having uniformly distributed resistance of 88 ohms per loop mile and uniformly distributed shunt capacitance of 0.054 microfarad per mile" (approximately 19 gauge).[4]
                      In 1924, Bell Telephone Laboratories received favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the Transmission Unit (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power level.[5] The definition was conveniently chosen such that 1 TU approximated 1 MSC; specifically, 1 MSC was 1.056 TU. In 1928, the Bell system renamed the TU into the decibel,[6] being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell.[7] The bel is seldom used, as the decibel was the proposed working unit.[8]
                      The naming and early definition of the decibel is described in the NBS Standard's Yearbook of 1931:[9]
                      Since the earliest days of the telephone, the need for a unit in which to measure the transmission efficiency of telephone facilities has been recognized. The introduction of cable in 1896 afforded a stable basis for a convenient unit and the "mile of standard" cable came into general use shortly thereafter. This unit was employed up to 1923 when a new unit was adopted as being more suitable for modern telephone work. The new transmission unit is widely used among the foreign telephone organizations and recently it was termed the "decibel" at the suggestion of the International Advisory Committee on Long Distance Telephony.
                      The decibel may be defined by the statement that two amounts of power differ by 1 decibel when they are in the ratio of 10^0.1, and any two amounts of power differ by N decibels when they are in the ratio of 10^(0.1 N). The number of transmission units expressing the ratio of any two powers is therefore ten times the common logarithm of that ratio. This method of designating the gain or loss of power in telephone circuits permits direct addition or subtraction of the units expressing the efficiency of different parts of the circuit...
                      In April 2003, the International Committee for Weights and Measures (CIPM) considered a recommendation for the inclusion of the decibel in the International System of Units (SI), but decided against the proposal.[10] However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and International Organization for Standardization (ISO).[11] The IEC permits the use of the decibel with field quantities as well as power and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios.[12] The term field quantity is deprecated by ISO 80000-1, which favors root-power. In spite of their widespread use, suffixes (such as in dBA or dBV) are not recognized by the IEC or ISO.
                      Last edited by WRGKMC; 06-12-2016, 08:15 AM.



                      • #12
                        I don't believe that the chart reflects the methods used for testing school children. If I recall correctly, the research was based on asking subjects to subjectively compare the "loudness" of pairs of tones.

                        Decibels are not the issue here. Even if you were to use power for your y-axis, it would still make sense to use a decibel scale, given the range of values under examination.

                        The issues I raised are not related to the scale used. One issue is simply that sensitivity to sound varies with age, sex, phenotype, and a host of other variables. I can hear the sound made by the LEDs on my network hub and router, for example. Some people cannot hear LEDs. There is huge variation. So, I am sceptical of putting much emphasis on an average. It's about as dodgy as an "ideal Body Mass Index".

                        The other issue, which does not question the science, is simply in regard to the notion of linearity. Buried in your quote above is the phrase "the smallest attenuation detectable". Attenuation of what? There are many candidates, and, as you can see from the graph, they have not chosen the most linear. My question is, why not choose something that is more linear with the sensitivity of the ear, like power? We use power when it comes to light.

                        And, talking about light and sensitivity of the eye, trying to average that is as useless as trying to make an average pair of glasses. Trying to average biological function is at best dubious. The results are usually extremely sketchy. Just because something is an established, iconic or traditional way of representing a result does not make it rigorous science. We know how BMI has relatively recently been called into question.
                        Last edited by kickingtone; 06-12-2016, 10:47 AM.

                        Comment


                        • #13
                          Originally posted by kickingtone View Post
                          I don't believe that the chart reflects the methods used for testing school children. If I recall correctly, the research was based on asking subjects to subjectively compare the "loudness" of pairs of tones.

                          The other issue, which does not question the science, is simply in regard to the notion of linearity. Buried in your quote above is the phrase "the smallest attenuation detectable". Attenuation of what? There are many candidates, and, as you can see from the graph, they have not chosen the most linear. My question is, why not choose something that is more linear with the sensitivity of the ear, like power? We use power when it comes to light.

                          And, talking about light and sensitivity of the eye, trying to average that is as useless as trying to make an average pair of glasses. Trying to average biological function is at best dubious. The results are usually extremely sketchy. Just because something is an established, iconic or traditional way of representing a result does not make it rigorous science. We know how BMI has relatively recently been called into question.
                          Acoustics is a science just like electronics and any other branch of physics. We can't change the laws already adopted simply because we disagree with them. The only way to change them is to offer a better alternative to what's being used.

                          My interest is not in changing the science, but simply in using it as needed to produce quality recordings. You don't need to understand the science in order to produce a quality recording; you simply have to have good, discerning ears which guide you to making the correct decisions when mixing, and a good understanding of how to use the tools of the trade as an operator of audio gear.

                          This is no different than someone who drives a race car for a living. He doesn't have to know jack about the mechanics of the vehicle; he simply needs to know how to drive the hell out of the car.

                          Most audio engineers do learn their basics, however, simply because it gives them a better understanding of how to use the tools of the trade.

                          There are parallels in audio which are similar in many respects to other wave technologies like light and electronics. The formulas used are the same formulas you learn in algebra and calculus, but the parameters you plug into those formulas have different roots. Unless you have a solid understanding of those basics, it's unlikely you'll choose the right parameters or be able to build upon those basics to come up with reliable conclusions when you combine them.

                          Sound is no more than simple physical vibration. Vibration travels through solids, liquids and gases at different speeds. Our ears have similarities to microphones. Our nerves conduct impulses much like wires do. They have even tapped into nerves using microphones to allow deaf people to hear.

                          When mechanical wave energy is converted to electronic wave energy (AC), there is also a change in the parameters used to define each of those sciences. An electrical wave is not a mechanical wave, and it does require its own formulas to predict how it works. Much of that change comes from the electronic components themselves. When analog electrical waves are converted to digital, the parameters used to define that science change again.

                          The sciences used in each are unique to that branch of physics and have been refined by man's ability to manipulate that science to his own wants and desires. That all came about from one man building upon another man's discoveries over history. Studying the history is one of the fastest, easiest ways of gaining an understanding of why things are the way they are today.

                          Along that timeline of discoveries you find many inventors and scientists who spent their entire lives pursuing an idea and simply wound up at a dead end. Someone else comes along, views their work, takes it in another direction and winds up making it successful. It can become popular and profitable if it is something marketable.

                          The components that comprise a simple amplifier are a combination of thousands of individual inventions developed over history. Something as simple as a capacitor uses static electricity to change the nature of an AC wave, yet that science dates back at least 2,600 years to the ancient Greeks. Theoretical understanding remained slow until the seventeenth and eighteenth centuries. Even then, practical applications for electricity were few, and it wasn't until inventors like Ben Franklin came along that anyone began to harness and manipulate it. It wasn't until the nineteenth century that engineers were able to put it to industrial and residential use.

                          I mention all of this because you question the wisdom behind simple everyday standards used in audio. I have no problem with that, of course. It's important to incorporate a hands-on method of learning to reinforce the theoretical. You also have things like people's ears, which vary both in nerve sensitivity and in how the mind perceives sound based on emotions and other factors.

                          Music contains emotion, which is a result of the artist's ability to manipulate notes. Mixing a recording is all about making those notes appealing to the greatest number of listeners. If an engineer is lucky, he may even like what he hears, but it's not a requirement. I've done many recordings for others where I not only disliked the music, but it was very painful to mix. That didn't stop me from getting good results, however, because I knew the goal was to target a specific audience, not indulge my own personal preferences.



                          • #14
                            Originally posted by WRGKMC View Post

                             Acoustics is a science just like electronics and any other branch of physics. We can't change the laws already adopted simply because we disagree with them. The only way to change them is to offer a better alternative to what's being used.
                            I think you misunderstand. I was not proposing a change in any law. I was talking about appropriate unit of measurement. Newton's Laws still hold in their domain of application, whether you choose to measure in feet, ounces and hours, for example. But you could be making your life more difficult than it needs to be.

                            Just as a matter of interest, I did a quick google to see if "my" idea was mentioned. And, sure enough, it isn't "my" idea, and it doesn't disagree with any law. This article from The Scientist and Engineer's Guide to Digital Signal Processing By Steven W. Smith, Ph.D. uses power/intensity and power/intensity decibels, which is what I argued seems more natural. See how he is able to base his argument on energy transfer? Pressure just wouldn't work, here.

                             The perception of loudness relates roughly to the sound power to an exponent of 1/3. For example, if you increase the sound power by a factor of ten, listeners will report that the loudness has increased by a factor of about two (10^(1/3) ≈ 2). This is a major problem for eliminating undesirable environmental sounds, for instance, the beefed-up stereo in the next door apartment. Suppose you diligently cover 99% of your wall with a perfect soundproof material, missing only 1% of the surface area due to doors, corners, vents, etc. Even though the sound power has been reduced to only 1% of its former value, the perceived loudness has only dropped to about 0.01^(1/3) ≈ 0.2, or 20%.
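                             The arithmetic in that quote is easy to sanity-check in a couple of lines of Python, using the same rough loudness ~ power^(1/3) rule of thumb it describes:

                             remaining_power = 0.01                   # 99% of the wall treated, 1% of the power left
                             perceived = remaining_power ** (1 / 3)   # ~0.215, i.e. still about 20% as loud
                             print(f"power down to {remaining_power:.0%}, perceived loudness ~{perceived:.0%}")
                             print(f"10x power -> {10 ** (1 / 3):.2f}x perceived loudness")   # ~2.15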


                            Originally posted by WRGKMC View Post
                             I mention all of this because you question the wisdom behind simple everyday standards used in audio. I have no problem with that, of course. It's important to incorporate a hands-on method of learning to reinforce the theoretical. You also have things like people's ears, which vary both in nerve sensitivity and in how the mind perceives sound based on emotions and other factors.
                            Well, my suggestions are not at all arbitrary, as I have a background in mathematics and theoretical and experimental physics. My instincts are actually correct, in this case. I am not saying that the older, traditional standards are without merit, only that they may have been superseded by more modern standards.
                            Last edited by kickingtone; 06-14-2016, 11:24 AM.



                            • #15
                               I've completely lost what you're looking for at this point. You seem to be randomly jumping from one area to another. Maybe I don't understand what you're after, but it seems like you're trying to adapt the terminology and parameters of one branch of physics to another.

                               This may give you some understanding of how things work, but I'm afraid you will be handicapped when you're trying to communicate with others who simply learned audio from the ground up.

                               My education is in electronics and music. I've studied both for about 50 years and even taught manufacturer-specific electronics for 10 years. I've educated myself in enough acoustic science to apply it where needed. My day job has consisted of 38 years in digital imaging, computers and networking, all the way down to component level. I know enough about digital imaging to know what areas parallel digital audio and what areas don't.

                               In connection with video and digital image sensors, decibels generally represent ratios of video voltages or digitized light levels. In fiber optics, decibels are used as a loss measurement, just like resistance is used to describe audio losses in copper wires.
                              In an optical link, if a known amount of optical power, in dBm (referenced to 1 mW), is launched into a fiber, and the losses, in dB (decibels), of each component (e.g., connectors, splices, and lengths of fiber) are known, the overall link loss may be quickly calculated by addition and subtraction of decibel quantities. In spectrometry and optics, the blocking unit used to measure optical density is equivalent to −1 B.
                               I'm glad you found the dB chart enlightening, but you should understand it's based on average hearing referenced at one specific frequency. In audio we deal with frequencies from 20 Hz to 20 kHz.

                               That chart uses 0 dB as the threshold of hearing at 3 kHz. If you were to build a chart using 1 Hz intervals from 20 Hz up to 20 kHz, using the same threshold of hearing as a basis, you get this chart.





                               The threshold of hearing varies at different frequencies. You need about 70 dB to begin hearing a 20 Hz wave, compared to the 0 dB threshold at 3 kHz.

                               The "perceived" loudness of sound doubles with each 10 dB increase. A 3 kHz signal at 80 dB is going to sound about seven doublings (roughly 2^7 times) louder than a 20 Hz signal at 80 dB, because our hearing is non-linear.

                               This comes right back to your original post. The parts of the waveform with the largest peaks are mostly low frequencies, because they produce the most pressure on a microphone diaphragm. The instrument producing those peaks produces a wide frequency response, not a single frequency.
                               The lowest notes of a guitar may produce frequencies as low as 80 Hz (the root note of an open low E string). Its highest dB levels occur around the first harmonic at 500 Hz, and it may have overtones that reach up to 6 kHz or higher.

                               If you use a high-pass filter to remove everything below, say, 800 Hz, that large blip on the histogram may be reduced to only 10% of its original size, because the bass frequencies accounted for most of that amplitude. The instrument may sound just as loud as it did before, because the middle and upper frequencies are still there, where the hearing is most sensitive. It will simply lack the bottom-end kick we feel more than we hear.

                              One other notable item going back to your original question about the wave looking asymmetrical.

                               Low frequencies are extremely large. A microphone may only capture a small portion of the wave, depending on its placement.
                               A 20 Hz wave, for example, requires 56.5 feet of space to become fully formed. The asymmetrical waves in your posted example may have higher frequencies which are fully formed, but you may only capture partial waveforms in the lower frequencies, which accounts for the asymmetry.

                               The longer waves also take longer to rise. The beginning of the transient looks uneven because you have higher frequencies occurring at the same time as the bass frequencies, and they rise to full formation before the bass does. These higher frequencies are at lower decibel levels and are seen down in the RMS level of the wave.

                               The RMS portion of the wave looks fully formed compared to the higher-amplitude bass frequencies, which are just beginning to form. A 2000 Hz wave will have 50 completed cycles written before half of the 20 Hz wave is completed, and that half-wave portion of the histogram is going to look asymmetrical when you magnify it. Once a complete cycle of the bass frequency is written, it will become more symmetrical. A 2000 Hz wave will have 100 complete cycles by the time one cycle of 20 Hz is written. This is why the RMS portion containing all the higher frequencies will look symmetrical compared to the peak bass frequencies. It's also why you get a < shape at the beginning of the transient: the higher frequencies are already fully formed before the lower frequency gains maximum amplitude.
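                               You can fake up a toy version of this in a few lines to see the effect: add half a cycle of a 20 Hz tone to a 2 kHz tone and the positive and negative peaks of the sum come out lopsided, even though each component is perfectly symmetrical on its own. (The amplitudes here are arbitrary, just for illustration.)

                               import numpy as np

                               rate = 44100
                               t = np.arange(int(rate * 0.025)) / rate          # 25 ms = half of a 20 Hz cycle
                               low = 0.8 * np.sin(2 * np.pi * 20 * t)           # rising half-cycle of the bass
                               high = 0.2 * np.sin(2 * np.pi * 2000 * t)        # ~50 cycles of the 2 kHz tone
                               mix = low + high

                               print(f"+peak = {mix.max():.2f}, -peak = {mix.min():.2f}")        # noticeably lopsided
                               print(f"2 kHz cycles: {2000 * t[-1]:.0f}, 20 Hz cycles: {20 * t[-1]:.2f}")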



                               Like I said, download a copy of Voxengo Span, stick it in your effect bus and view the spectrum in real time. Then put an EQ before the analyzer and watch what happens when you remove bass frequencies.

                               Along with the hands-on, read up on the topic so you fully understand how and when it needs to be used. I don't know anyone who can quote this stuff off the top of their head. You don't have to remember all the finer details and math in order to use it in audio. You can always Google it if you have something requiring those details. I learned all this stuff in school and haven't had to use much of it since.

                              http://www.animations.physics.unsw.edu.au/jw/dB.htm

                              This is the stuff that is directly involved in audio. You may find many parallels to your own background but this is the stuff audio engineers use on a regular basis.

                              http://www.sengpielaudio.com/calculator-levelchange.htm



                              Last edited by WRGKMC; 06-15-2016, 08:44 AM.

