
What kind of "power" is being measured in this standard waveform?



  • Members

Here it is: (see attachment) the typical stereo waveform as you'll see it represented pictorially in a jillion audio programs. I DO appreciate that its peaks and valleys are measuring "power"... i.e., amplitude, correct?

 

But I also know that a waveform--- one that records information from roughly 0 Hz to 30 kHz--- is a complex thing. Different frequencies are grooving along at different powers, of course, and some frequencies not at all.

 

So: When we look at this "summation" waveform, just what has been summed, and by what criterion?

 

I ask this because I notice that HIGHER frequencies in a musical waveform will often "peg" your digital dB readout at 0.0 dB, even though the mid and lower frequencies still have loads of headroom. In fact they might still sound rather quiet in your mix.

 

So I ask you: If this typical wave readout is so vague, what's it useful for? Best I can tell, its real use is just to let you see the silences in between musical moments (haha), and to roughly keep you in the right level vicinity for optimum recording.

 

So, are there better, more sophisticated ways to read what's going on in the myriad amplitudes of your recording? (By the way, I am not a "brickwall/limiter" fan... I do NOT like to see a horizontal line at the top of my peaks, just sayin', though I know from viewing modern rock/rave records that this is very much the done thing nowadays.)

 

Thanks, ras

[attachment: stereo waveform screenshot]


  • CMS Author

The problem with measuring what you want is that music has essentially no steady state. Things are always changing in time as well as frequency and amplitude. The waveform display that you see is the sum of all frequencies at their respective amplitudes at an instant of time.
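A quick sketch of that idea in plain NumPy (the component frequencies and amplitudes here are invented for the example): the sample you see at any instant of a waveform display is just every component's contribution at that instant, added together.

```python
import numpy as np

sr = 48000                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of time points

# Three hypothetical components: a loud 100 Hz, a quieter 1 kHz, a quiet 5 kHz
components = [(100, 0.8), (1000, 0.3), (5000, 0.1)]
waveform = sum(a * np.sin(2 * np.pi * f * t) for f, a in components)

# The displayed sample at any instant is the sum of all parts at that instant
i = 1234
assert np.isclose(waveform[i],
                  sum(a * np.sin(2 * np.pi * f * t[i]) for f, a in components))
```

The display then just draws `waveform` against `t`; no per-frequency information survives the summation.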

 

The tool to use for measuring all the frequencies and their respective amplitudes is a spectrum analyzer, but you can only see something significant when looking at a constant waveform. Look at music with a spectrum analyzer and you'll just see peaks and valleys dancing around to the rhythm. An eyeball average will tell you if your music is bass-heavy or why the vocals sound screechy, but unless you freeze it in time you can't see how many frequencies are present at that moment.
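For the curious, here's roughly what a spectrum analyzer computes for one frozen frame: an FFT of a stretch of signal, reported as amplitude per frequency. The 440 Hz and 1 kHz components and their amplitudes are made up for the demo.

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
signal = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)  # amplitude per bin
freqs = np.fft.rfftfreq(len(signal), 1 / sr)                # bin center frequencies

# With a one-second window the bins land on integer Hz, so the two
# components show up at their true amplitudes
print(round(spectrum[440], 2), round(spectrum[1000], 2))    # → 1.0 0.5
```

On real music the picture only holds for that one window; the next frame's peaks will have moved, which is exactly the "dancing around" described above.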

 

You can get some insight by looking at simpler complex waveforms with a spectrum analyzer, for example a constant amplitude and frequency square or triangle wave, or a sine wave slightly overdriving an amplifier to introduce even or odd order harmonic distortion. You'll see frequencies other than the fundamental, and their relative amplitudes.
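That square-wave exercise is easy to sketch numerically (an idealized 100 Hz square wave, constructed here rather than measured): Fourier theory says odd harmonics only, falling off roughly as 4/(pi·n).

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
square = np.sign(np.sin(2 * np.pi * 100 * t))   # idealized 100 Hz square wave

spectrum = np.abs(np.fft.rfft(square)) / (len(square) / 2)

# Odd harmonics only, amplitudes near 4/(pi*n): ~1.27, ~0.42, ~0.25, ...
for n in (1, 3, 5):
    print(n * 100, "Hz :", round(spectrum[n * 100], 2))
```

Swap in a triangle wave or a clipped sine and the harmonic pattern changes in the way the post describes (triangle: odd harmonics falling much faster; symmetric clipping: odd-order distortion products).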

 

Amplitude isn't power, but loudness is related to power. Loudness meters average the energy in a waveform, but it's an average over time, so anything instantaneous that happens within that time period contributes to the average without your being able to pick it out individually.
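A rough sketch of that averaging behavior (all the numbers are invented): a 1 ms near-full-scale click buried in a quiet tone sends a peak reading to the top but barely nudges a whole-second energy average.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
signal = 0.1 * np.sin(2 * np.pi * 440 * t)   # quiet steady tone
signal[24000:24048] = 0.9                    # 1 ms near-full-scale click buried in it

peak = np.max(np.abs(signal))                # what a peak meter reports: 0.9
rms = np.sqrt(np.mean(signal ** 2))          # loudness-style energy average: ~0.08

# The click dominates the peak reading but is lost in the long average
```

Real loudness meters (e.g. the EBU R128 style) add frequency weighting and gating on top of this, but the core idea is the same time-averaged energy.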

 

It's true that high frequencies contain more energy than low frequencies of the same amplitude, but with real voices or musical instruments, the peak amplitude of high frequencies is usually considerably lower than that of low frequencies. This is why a bi-amplified speaker can get away with a lower-powered amplifier for the tweeter than for the woofer. It's also the reason why tweeters blow out when subjected to high-frequency feedback or electronic music with sounds that aren't created by real instruments.

 

As far as what the value of a waveform display is, well, for one, people expect to see it. But also, if you zoom in closely enough, it shows you the actual peak level of every sample. With digital recording, there's an absolute maximum level (0 dBFS) that can't be exceeded. When you push the level too hard you can see the tops of the cycles flatten out at the top and bottom.
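You can see that hard 0 dBFS ceiling in a couple of lines (the 440 Hz tone and the amount of overdrive are arbitrary): push a sine past full scale and every cycle's top and bottom gets pinned flat at the maximum sample value.

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
hot = 1.5 * np.sin(2 * np.pi * 440 * t)   # a sine pushed ~3.5 dB past full scale

# Digital full scale (0 dBFS) is an absolute ceiling; a converter or
# fixed-point path simply can't represent anything beyond it
clipped = np.clip(hot, -1.0, 1.0)

flattened = int(np.sum(clipped == 1.0))   # samples pinned at exactly full scale
```

Zoom in on `clipped` in any waveform display and you'd see exactly the flattened cycle tops described above.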

 

It was fairly common for the meters on an analog cassette recorder to be weighted toward high frequencies because, given the low tape speed, high frequencies (due to the higher energy for a given amplitude) can saturate the tape more easily than low frequencies, and distortion at high frequencies is more annoying than low-frequency distortion. So if you were to watch a meter like that while sweeping a constant-amplitude sine wave up in frequency, you'd see it start to rise as the frequency goes up. But a digital meter won't do that, since it only cares about peak amplitude.

 

There are a number of signal generator (function generator) apps for mobile devices. Load one up, connect the output of your phone or tablet to your sound card input, and watch the meters while you're playing with frequency while keeping the amplitude constant.

 


  • Members

David, you know I love you, bro, but have you ever considered reading a good, comprehensive book on the basics of sound and electronic audio?

 

It IS a big, complex field, and that's why our trying to answer your particular, narrowly focused questions as they arise from your curiosity is a very inefficient process that probably does you little good. Having participated in this endeavor for a few years now, I just don't see a coherent understanding of those basics forming behind your questions.

 

It's not that they're not 'good' questions -- it's just that a little focused background education on your part would probably give you a much more coherent, knit-together understanding of the basics. That would lead you to the answers to many of your questions and give you a good foundation for asking better-focused, more productive questions whose answers would hopefully further elevate your efforts.

 

Your questions are often framed in interesting, provocative ways, but as they say in the audio social media forums, in essence, "these questions have been asked and answered thousands of times."

 

 

I suspect your next, entirely reasonable, question would be: So what book would you suggest I read to get this basic understanding?

 

Unfortunately, for many of us, it's probably easier to answer a given specific question than to go sort through the basic audio books. When we were coming up there were, like, maybe 5. Now there's, I dunno, 5,000? ;) And, indeed, from some of the utter nonsense I see written in places like (the beloved but often filled-with-silliness) TapeOp, not all of them are likely well written or accurate.

 

But maybe someone does have a suggestion there.

 

It's not that I don't want to answer your questions -- I enjoy the effort by and large -- but for your own good, I really think you need to work on your foundational understanding -- and that will make things a lot easier for you in the long run even if maybe making the questions a little harder but a little more interesting to try to answer.


  • Members

Blue, fair enough... point taken. It's true I tend to take a "guerrilla" approach to my learning; yes, even scattershot at times: I'll study the trees before I see the forest. I think my approach may be inductive rather than deductive.

 

In this particular question, I'm kind of "coming in for a landing" of understanding, trying to "wrap up" some ideas for a final understanding.

 

I'm actually very comfortable reading an FFT spectrogram (see attachment). I really do see what's going on in that environment... every little frequency a "singer" unto himself. And the whole idea of "surgically" tweaking a waveform in a spectrogram is--- you must agree--- still a very new idea: i.e., the new Celemony MELODYNE approach of using FFT analysis to allow a musical waveform to be edited almost as practically as one would edit a MIDI file. That's some new s**t, no?

 

I've been using Sony SPECTRALAYERS recently... where you can, among other things, combine two waveforms in FFT spectrogram.... and the app knows how to blend them such that the waves will overlap in an ingenious manner to yield a THIRD spectrogram... a sort of "cherry pie lattice", in which no frequencies "step on" each other. Or you can pluck a portion of a spectrogram... then reverse its phase against itself, to silence just that tiny bit of sound.
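That phase-inversion trick is easy to sketch outside SpectraLayers. In this toy version (not SpectraLayers' actual algorithm -- here the "plucked" component is a 60 Hz hum we constructed ourselves, so we know it exactly), adding the component back phase-inverted silences just that piece of the mix:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
hum = 0.2 * np.sin(2 * np.pi * 60 * t)      # the unwanted component
tone = 0.5 * np.sin(2 * np.pi * 440 * t)    # the part we want to keep
mix = tone + hum

# "Pluck" the offending component and add it back phase-inverted
cleaned = mix + (-hum)

print(np.allclose(cleaned, tone))           # → True: only that component is gone
```

The hard part in a real spectral editor is isolating the component from the mix in the first place; once you have it, cancellation is just this addition.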

 

With all due respect: Do YOU know how to do this sort of thing? Is anyone on this forum dissecting audio in this fashion at present? This is a brand-new way of looking at audio. I'm treading some pioneering ground, truth told.

 

So I think I can be forgiven if I back up a bit, and ask for clarification on an earlier idea. My questions are not quite as naive as they may seem. Craig's rubric says "Sound, Studio and Stage" and that's what I'm here to discuss.

 

In my own life, God gave me a bit of a whammy: he had me born into the poorest hillbilly family imaginable, in the middle of country nowhere, no leg-ups or connections, where nothing above being a clerk at a filling station would ever be possible to me. I've had to tooth and claw my way... just to be amongst America's literate and lucid. So I'm getting a late start-- I'm 52--- at learning audio stuff. A smidgen of grace on your part wouldn't go amiss.

 

Besides, most audio books are drier than a popcorn fart, and are outdated five years after they're published.

 

[attachment: FFT spectrogram screenshot]


  • Members

 

David, you know how fond I am of you, but I'll even reiterate it. So please don't feel I'm being critical.

 

It's just that I've been around the block talking about sound and audio issues to those with an incomplete (or little) grasp -- around the block a bunch of times.

 

And I know that to really help you understand stuff, that advanced knowledge has to be built on fundamentals. Not unlike playing music: sure, someone can learn a couple chords on a guitar and bash out a Creedence song or two (or twenty), but you know that's just a cork bobbing on the surface. Learning about specific practices (like convolution) without understanding fundamentals typically involves so much 'black-boxing' of what should be basic understandings that whatever knowledge you build in such a fashion will be isolated. It will be difficult to impossible to tie your understanding of one superficially understood process to other areas, because you're missing the fundamental connections -- and THAT typically means much wasted effort.

 

It was one thing when you were focused on outcomes and just wanted quick instruction on how to do some specific thing -- but now you're asking questions that are very difficult to explain to someone who doesn't have a decent grip on basics.

 

None of us are born knowing things and there's absolutely no shame in being where you are and having to do a little 'remedial' tech education -- it just wasn't important to you before -- but now, based on your own questions, your curiosity is drawing you to want to understand processes and concepts for which you're missing some basic building blocks.

 

In no way do I want to stop trying to help you -- I just want the effort to produce the best results possible :)


  • CMS Author
I've been using Sony SPECTRALAYERS recently... where you can, among other things, combine two waveforms in FFT spectrogram.... and the app knows how to blend them such that the waves will overlap in an ingenious manner to yield a THIRD spectrogram... a sort of "cherry pie lattice", in which no frequencies "step on" each other. Or you can pluck a portion of a spectrogram... then reverse its phase against itself, to silence just that tiny bit of sound.

 

My hat's off to you. I got a copy of Spectral Layers when it first came out and I couldn't get a grip on the display. I know it shows time and pitch with volume expressed as a color, but I just never encountered anything useful that I could do with it. The concept of spectral editing isn't terribly new.

 

Ten or so years ago, when I was writing for Pro Audio Review, I did a survey of editing programs (today they'd be called "mastering" software) and I made a point of the value of the spectral editor in Wavelab. I found it useful for removing a feedback squeal or a chair squeak in a recording. It was easier to do than what you have to do to achieve the same thing in Spectral Layers. But it was a problem solver, not a sound design tool. I record what I hear, and if I can clean it up and remove distracting sounds, I'll do it, but I don't try to change it into something else.
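Spectral editing aside, the same squeal-removal job can also be sketched as a conventional narrow notch filter. This is not how Wavelab's spectral editor works -- it's a swapped-in technique using SciPy's `iirnotch`, and the "voice" and "squeal" frequencies are invented for the demo:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

sr = 48000
t = np.arange(sr) / sr
voice = 0.5 * np.sin(2 * np.pi * 220 * t)     # stand-in for the wanted material
squeal = 0.4 * np.sin(2 * np.pi * 3000 * t)   # the feedback squeal
noisy = voice + squeal

# A narrow notch centered on the squeal frequency; high Q keeps it surgical
b, a = iirnotch(w0=3000, Q=30, fs=sr)
cleaned = lfilter(b, a, noisy)

# The squeal's energy is removed; the 220 Hz material passes almost untouched
```

A spectral editor buys you more than this -- it can limit the cut to a region of time as well as frequency -- but for a constant squeal the notch does the same basic job.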

 

If you're using Spectral Layers to create new sounds by rebuilding the spectral content of a complex waveform, I'm impressed. I have a 40 year old analog synthesizer around here with real knobs and patch cords that I mess around with for a few hours every couple of years. I can make some interesting sounds, but I have no idea what to do with them. I don't write songs or melodies.

 

Is anyone on this forum dissecting audio in this fashion at present? This is a brand-new way of looking at audio. I'm treading some pioneering ground, truth told.

 

You're doing something I've never thought about, that's for sure. Reading some of Craig's posts and articles about sound synthesis, he might be getting toward what you're doing, but I think he's going about it from another direction.

 

So I think I can be forgiven if I back up a bit, and ask for clarification on an earlier idea. My questions are not quite as naive as they may seem. Craig's rubric says "Sound, Studio and Stage" and that's what I'm here to discuss.

 

From what you asked to start this discussion, I had no idea where you were going. Now I understand what you're after. Go for it. Take good notes so you can explain what you're doing. But what's confusing is the vocabulary. I couldn't conceive of what you were really asking.

 

Besides, most audio books are drier than a popcorn fart, and are outdated five years after they're published.

 

They're not exactly cliffhangers, but fundamentals are still valid. When I was in college (1960) we didn't have a spectrum analyzer, and fast Fourier transforms hadn't been invented yet. The way they taught us about complex waveforms and the frequencies they contained was by taking a photograph of an oscilloscope trace a few cycles long (we did have oscilloscopes and Polaroid cameras), then overlaying a piece of graph paper and picking points off the waveform, from which we could measure period and amplitude and figure out what frequencies, in what amplitude ratio, were mixed together to get that waveform. You kids have it too simple today. ;)

 

 

 


  • Members

Thanks, Mike. Yeah, the SpectraLayers GUI was inscrutable to me at first. I literally had to spend a whole weekend just busting my behind to figure it out.

 

Wow, that spectral technique you used in 1960 (You must be my Dad's age... b. 1942) sure sounds laborious. Yeah, lots of things are way too easy nowadays. To be honest, I can't imagine how elaborate tape editing used to be done with razor blades... Making tape loops (say, for a Mellotron, or to make Marilyn McCoo's high notes last an eternity, as Bones Howe did for the 5th Dimension) must've entailed some really tricky work and excellent ears.


  • CMS Author
Thanks, Mike. Yeah, the SpectraLayers GUI was inscrutable to me at first. I literally had to spend a whole weekend just busting my behind to figure it out.

 

I'm glad to hear that I'm not the only one who found it incomprehensible. At least you got over it. I never really went back to it.

 

Wow, that spectral technique you used in 1960 (You must be my Dad's age... b. 1942) sure sounds laborious. Yeah, lots of things are way too easy nowadays.

 

That's scary. Yes, I was born in 1943. Actually we did have a better way to see what was in a complex waveform, as long as it was continuous and constant, like passing a sine wave through an amplifier that had some distortion. We had a Hewlett-Packard tuned voltmeter in the lab. It's an AC voltmeter with a tunable, very narrow bandpass filter ahead of it. You could tune it to measure the amplitude of the fundamental and as many harmonics as you could see, and add them all up to get the THD. The exercise on the graph paper was to prove that it correlated with the meter.
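The arithmetic of combining those tuned-voltmeter readings into a THD figure is simple enough to show; the readings below are hypothetical, not from any real measurement:

```python
import numpy as np

def thd(fundamental, harmonics):
    """THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude."""
    return np.sqrt(np.sum(np.square(harmonics))) / fundamental

# Hypothetical readings: 1 V fundamental, 30 mV 2nd harmonic, 40 mV 3rd harmonic
print(round(thd(1.0, [0.03, 0.04]) * 100, 1))   # → 5.0 (percent)
```

The harmonics combine root-sum-square because they're at different frequencies and therefore add on a power basis, not a voltage basis.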

 

To be honest, I can't imagine how elaborate tape editing used to be done with razor blades... Making tape loops (say, for a Mellotron, or to make Marilyn McCoo's high notes last an eternity, as Bones Howe did for the 5th Dimension) must've entailed some really tricky work and excellent ears.

 

To be honest, splicing tape is a bit tedious, but I find editing on a DAW to be equally tedious, just in different ways. Tape is physical. You can hold it in your hand, and as long as you keep your wits about you, you know just what you're holding and you know right where you're going to put it. It only goes together one way -- in between the two free ends -- and I always know where those ends are: one on each reel or in each hand, never very far apart.

 

With a DAW you click one way when cutting and the free ends immediately tack themselves together seamlessly, so if you wanted to put something in place of what you took out, you have to find your place again. And if I remember to tell it to leave a hole where I cut something out and it's more than a couple of seconds, I always find myself scrolling around way too much to find the splice point. I know I can set locate points, but I don't always remember to do that. And if I'm splicing in something that isn't the exact length of what I took out, I have to make adjustments. There are just too many steps to remember for every splice. But you can't beat the accuracy (if you're careful) or the undo-ability (if you aren't).

 

 

 


  • Members

I started editing tape in grade school in the early 60s and bought my first splicing block not long afterward. I've done thousands of tape splices. (Not to mention 8 and 16 mm film, but let's stay on track.) My current splicing block (would have to look for it) is a super precision-machined block that cost $40 back in the 80s. (Holy crap, over $90 today!?! For a block of metal with some grooves in it? WTF was I thinking?!? LOL! Damn rec gear was stupid expensive back then.)

 

We're all different, of course, and Mike and I are about as different as two technology- and bluegrass-loving individuals could be in many ways, I suppose -- and Mike's discomfort with DAW editing is no secret around here ;) -- but I have to say that the utter ease of editing in a modern DAW (I use Sonar) makes his preference for tape splicing a real head-scratcher for me.

 

Obviously, if the music is already on the grid, that makes much editing moderately straightforward (as long as people don't get confused into thinking that transient peaks always represent the downbeat).

 

But even with free time recordings -- I've done a lot of edits of acoustic guitar played without time reference -- modern DAW tools make much editing ridiculously easy, it seems to me. Take for instance, Sonar's 'transparent clip' mode that allows one to see both a clip that's being dragged and the audio 'beneath' it in the timeline...

 

[YouTube video: anhXaq9_eg8]

 


  • Members

If you ask me... eyes are for seeing and ears are for hearing. A fancy GUI won't do a better job just because it's colorful, and it's exactly those graphics that become a distraction from what you're hearing, as with most eye candy.

 

iZotope uses that same kind of spectral GUI on their audio repair plugin called RX. I have it, and I've used it for some clipped stuff from a tracking session where redoing the takes wasn't an option. I have others that do just as good a job without all the fancy colored graphics. Seeing as RX has been out a long time now, I suspect Sony copied them to some degree, using colors to depict the various power-versus-frequency levels so you can manipulate them.

 

The question then becomes why you need to use it. If you're doing audio forensics or restoration, then a tool like this can be useful for isolating sounds within the spectrum. If you're attempting to use it as a mastering tool, I doubt much good can come from it. The bus has already left once your tracks are mixed. If you're looking to improve that mixdown, you aren't going to find it using that tool. You have to take care of sound imbalances in the earlier steps: the sound source, the front-end gear, tracking, or mixing the tracks.

 

Once tracks are mixed down to a stereo file, there's not a lot you can (or should) be doing to fix things. There's no Aladdin's lamp you can rub to make sound quality emerge from a poorly mixed recording. All you will do is make unnatural things emerge from it.

 

The fact is, instruments do overlap each other in frequency response, and you can't adjust one without influencing another. Most mastering tools use broad, sweeping adjustments to even things up, but that's done to optimize, not to repair. Any kind of repair work leaves scars, and those scars can be uglier than what you're attempting to remove.

 

I really don't see anything that Sony plugin can accomplish that would be worth the effort when mastering. Maybe if you were converting old 78s that had really bad noise levels and poor frequency content, you could strip away some of that noise and optimize the sound, or maybe if you were doing film restoration work (something Sony does with their film collection) a tool like this might be useful.

 

For normal stuff, all you're doing is attempting to use your eyes instead of your ears. Unfortunately, that's not going to get you the best results, and it's going to be even worse if you don't know what you're actually attempting to accomplish.

 

If you're wanting to mess with the frequency response of mixed recordings, I suggest you look into using a good EQ and a frequency analyzer together. If you can't master with those tools, you aren't going to do better with a more sophisticated tool that has targeted applications.

 

I'd bet HarBal would do just as good a job, if not better, for most stuff. I've been using it for nearly 15 years now and I haven't come across any mixes with frequency flaws that couldn't be diagnosed and evened up with it. The other tool is a good multiband compressor. If your mix can't be evened up with those two tools, then you need to go back to the mix (or re-track parts) and fix things there.


Archived

This topic is now archived and is closed to further replies.
