
AN1x filters are not correctly modeled


Don Solaris

Recommended Posts

  • Members

 

{censored}, I just wish there was an OS X editor for the AN1x... I can't even externally save or load patches! If my board dies or resets for some reason, I'm gonna lose a whole lot of {censored}in' patches!



halcyo

 

 

 

I've run the editor under OS X using Classic mode. So, there is a way to do it if you can fire up Classic. Also, in terms of saving your patches and loading them from your computer, you have several possibilities (all freeware). I happen to use this:

http://www.snoize.com/SysExLibrarian/

 

HTH,

 

aL

Link to comment
Share on other sites

  • Members

Perhaps you mean a smaller FFT size. Decreasing resolution isn't a way to smooth things out.
:(

 

Why not? A smaller FFT size means that several smaller chunks are taken and averaged, instead of a few big chunks. I honestly don't see why that shouldn't work. Of course we could get the same effect with greater spectral line resolution just by obtaining more data, but to understand a filter's slope characteristics, how much resolution is really needed? Especially if a more reasonable range is chosen (like 3 kHz-6 kHz), there should be plenty of resolution to obtain an accurate plot of the filter's slope.
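For anyone who wants to try the averaging idea at home, here is a minimal sketch using SciPy's Welch estimator (the 44.1 kHz rate, Hann window, and white-noise input are assumptions, not what either poster used):

```python
import numpy as np
from scipy import signal

fs = 44100
rng = np.random.default_rng(0)
x = rng.standard_normal(10 * fs)   # 10 s of white noise as a stand-in signal

for nfft in (1024, 16384):
    # Welch's method: chop the signal into nfft-sample chunks, FFT each,
    # and average the resulting power spectra
    f, pxx = signal.welch(x, fs=fs, window='hann', nperseg=nfft, noverlap=0)
    print(f"{nfft:>6}-point FFT: bin width {fs/nfft:6.2f} Hz, "
          f"{len(x) // nfft} chunks averaged")
```

The smaller FFT trades bin width for the number of chunks averaged, which is exactly the trade-off being argued about here.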

Link to comment
Share on other sites

  • Members

 

That I understand. My point is, if you use a smaller FFT sample size, it has the effect of smoothing out the graph.

 

1) It doesn't smooth the graph -> zoom in on the 450-900 Hz region and see what you get.

 

2) It brings out incorrect results -> zoom in on 450-900 Hz again, use a 1024 FFT graph and a 16k FFT graph (with normalized data), compare the results, and tell me your conclusion.

 

 

Especially if a more reasonable range is chosen (like 3 kHz-6 kHz)

 

This is way too high. To find the filter slope (before you zoom in) you need at least one octave below and one octave above the observed frequency. And one octave above 3-6 kHz is 6-12 kHz, a region where many synths already fade out in power (this includes VA synths such as the AN1x, which is already at -20 dB by 12 kHz whether or not a filter is applied).

 

 

Forgive me, for I am neither a master of statistical analysis nor FFT techniques, but (for the purposes of our experiment) what is the advantage of normalizing the data vs just using a smaller FFT size?

 

Dunno, I'm no master of FFT either. All I care about is that the data in the graph is readable. That's all.

 

When I look at this graph I can read the data and find the filter slope with an exact difference in dB. OK, now take a look at your graph and tell me the value of the filter slope. How much is it? Is it 9 dB or 15 dB? 6 dB more or less is a big difference (that's a whole filter pole of difference we're talking about). It is impossible to read the exact value of this filter's slope.

 

By averaging the data, the same graph would 1) be easier to read and 2) make it possible to zoom deep into it, to find the exact value in dB of the filter slope.

Link to comment
Share on other sites

  • Members

 

1) It doesn't smooth the graph -> zoom in on the 450-900 Hz region and see what you get.


2) It brings out incorrect results -> zoom in on 450-900 Hz again, use a 1024 FFT graph and a 16k FFT graph (with normalized data), compare the results, and tell me your conclusion.

 

 

This is exactly the effect of decreased spectral line resolution. It won't work very well at low frequencies. That's why I suggested a higher frequency range. A resolution of like 100 Hz isn't very good in the 450-900 Hz range, but it's fine at 3-6 kHz.

 

 

 

This is way too high. To find the filter slope (before you zoom in) you need at least one octave below and one octave above the observed frequency. And one octave above 3-6 kHz is 6-12 kHz, a region where many synths already fade out in power (this includes VA synths such as the AN1x, which is already at -20 dB by 12 kHz whether or not a filter is applied).

 

 

I don't really follow. If the slope is relatively constant, it only takes two points. If the slope is not relatively constant, then descriptors like "-24 dB/octave" don't apply and we need to start talking about the nonlinear behaviors of the filter instead of looking at simple slopes.
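To make the two-point idea concrete, here is a small hypothetical helper (not from the thread, just an illustration of the arithmetic):

```python
import math

def slope_db_per_octave(f1, level1_db, f2, level2_db):
    """Estimate filter slope in dB/octave from two (frequency, level) points."""
    octaves = math.log2(f2 / f1)
    return (level2_db - level1_db) / octaves

# E.g. reading -50 dB at 3 kHz and -68 dB at 6 kHz (one octave apart):
print(slope_db_per_octave(3000, -50.0, 6000, -68.0))  # -> -18.0 dB/oct
```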

 

 

Dunno, I'm no master of FFT either. All I care about is that the data in the graph is readable. That's all.


When I look at this graph I can read the data and find the filter slope with an exact difference in dB. OK, now take a look at your graph and tell me the value of the filter slope. How much is it? Is it 9 dB or 15 dB? 6 dB more or less is a big difference (that's a whole filter pole of difference we're talking about). It is impossible to read the exact value of this filter's slope.


By averaging the data, the same graph would 1) be easier to read and 2) make it possible to zoom deep into it, to find the exact value in dB of the filter slope.

 

 

To me, it's pretty easy to eyeball the slope. Averaging would make the graph prettier, but it's easy for me to see that the graph is about -18 dB/oct ±3 dB anywhere on the graph. If you aren't satisfied, PM me and I can email you the data, and if you want to make your own graph, feel free. I'm not about to futz around in Excel.

Link to comment
Share on other sites

  • Members
{censored}, I just wish there was an OS X editor for the AN1x... I can't even externally save or load patches! If my board dies or resets for some reason, I'm gonna lose a whole lot of {censored}in' patches!

Dude, it's called doing a sysex dump and load. Every program that does MIDI can do that these days. If you're using a program that can't, then throw it away and get something modern.
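For readers hitting this problem today, here is a minimal sketch of the dump/load workflow in Python with the `mido` library; the port names are placeholders, and you would trigger the bulk dump from the AN1x front panel while the script is listening:

```python
import mido

def save_dump(in_port_name, path):
    """Record incoming sysex until you press Ctrl-C, then save as a .syx file."""
    msgs = []
    try:
        with mido.open_input(in_port_name) as port:
            for msg in port:          # blocks, yielding messages as they arrive
                if msg.type == 'sysex':
                    msgs.append(msg)
    except KeyboardInterrupt:
        pass
    mido.write_syx_file(path, msgs)

def load_dump(out_port_name, path):
    """Send a previously saved .syx file back to the synth."""
    with mido.open_output(out_port_name) as port:
        for msg in mido.read_syx_file(path):
            port.send(msg)
```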

Link to comment
Share on other sites

  • Members

 

Dude, it's called doing a sysex dump and load. Every program that does MIDI can do that these days. If you're using a program that can't, then throw it away and get something modern.

 

 

The AN1xEdit program goes way beyond sysex dumps. It's a full editor plus a nice librarian. It was never ported to OS X, though some people have gotten it to run under Classic (Yamaha's official position is that this is not supported). It does run on Windows from 98 up to XP.

Link to comment
Share on other sites

  • Members

 

This is exactly the effect of decreased spectral line resolution. It won't work very well at low frequencies. That's why I suggested a higher frequency range.

You can't use a low FFT to "normalize", because it results in incorrect data. How incorrect depends on which FFT window was used (Hanning, Hamming, Gaussian, Blackman-Harris, etc.), each giving completely different results.

 

 

I don't really follow. If the slope is relatively constant, it only takes two points.

Before you can measure the slope, you need to find the transition bandwidth. The fewer filter poles, the larger the transition. For a 2-pole Chebychev filter this can be a tricky job, especially if there is additional passband ripple (and there always is, once you go and build the filter with real-world components). And thrust me, this takes a little bit more than just one octave.
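To see why, here is a quick SciPy sketch (the 1 kHz cutoff and 3 dB ripple are arbitrary assumptions) measuring octave-by-octave attenuation of a 2-pole Chebyshev lowpass - the first octave reads noticeably steeper than the asymptote, which only emerges a few octaves up:

```python
import numpy as np
from scipy import signal

fc = 1000.0                                  # assumed cutoff, 3 dB ripple
b, a = signal.cheby1(2, 3, 2 * np.pi * fc, btype='low', analog=True)
mults = np.array([1, 2, 4, 8])               # fc, then 1, 2, 3 octaves above
_, h = signal.freqs(b, a, worN=2 * np.pi * mults * fc)
mag_db = 20 * np.log10(np.abs(h))
for i in range(1, len(mults)):
    print(f"octave {i} above cutoff: {mag_db[i] - mag_db[i - 1]:6.1f} dB/oct "
          "(asymptote: -12 dB/oct)")
```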

 

 

To me, it's pretty easy to eyeball the slope. Averaging would make the graph prettier, but it's easy for me to see that the graph is about -18 dB/oct ±3 dB anywhere on the graph.

 

Interesting, reading the same graph I found the slope to be 13 dB.

 

 

If you aren't satisfied, PM me and I can email you the data, and if you want to make your own graph, feel free. I'm not about to futz around in Excel.

Sure. Here is my email address.

Link to comment
Share on other sites

  • Members

You can't use a low FFT to "normalize", because it results in incorrect data. How incorrect depends on which FFT window was used (Hanning, Hamming, Gaussian, Blackman-Harris, etc.), each giving completely different results.

 

I'm familiar with windowing, and for the record I believe a Blackman or Hanning window would be most appropriate - we don't need excellent frequency resolution, because we are analyzing things that are supposed to be pretty smooth and we are looking at the spectrum on a rather macro scale. Amplitude resolution and leakage suppression are more important, and those two windows both do well in that area.
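To put rough numbers on that, here is a sketch (NumPy/SciPy; the 1024-point size is an assumption) estimating the peak sidelobe level of each window mentioned, which is the figure that governs leakage suppression:

```python
import numpy as np
from scipy import signal

def peak_sidelobe_db(name, n=1024, pad=16):
    """Peak sidelobe level of a window, in dB relative to the mainlobe."""
    mag = np.abs(np.fft.rfft(signal.get_window(name, n), pad * n))
    mag_db = 20 * np.log10(mag / mag.max() + 1e-300)
    i = 1
    while i < len(mag) - 1 and mag[i + 1] < mag[i]:
        i += 1                   # walk down the mainlobe to the first null
    return mag_db[i:].max()

for name in ('hann', 'hamming', 'blackman', 'blackmanharris'):
    print(f"{name:>14}: ~{peak_sidelobe_db(name):.0f} dB")
```

This prints roughly -31, -43, -58, and -92 dB respectively, which is why the higher-order windows are attractive when you need to read levels far down a filter's skirt.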

 

Anyway, I don't feel like debating it any further, because I don't think anybody here really understands Fourier transforms well enough; otherwise there would be a better explanation than "it results in incorrect data". How does it make the data incorrect? Break it down for me. I'm not a master, but I have actually had college-level exposure to this stuff, plus my own reading of "The Scientist and Engineer's Guide to DSP", so if you are thorough or at least very precise I should have no problem understanding. "Incorrect data" is extremely vague. I have no problem admitting I'm wrong if you can demonstrate that (after all, this is for our own enrichment), but don't tell me that I'm wrong - show me why I'm wrong.

 

Before you can measure the slope, you need to find the transition bandwidth. The fewer filter poles, the larger the transition. For a 2-pole Chebychev filter this can be a tricky job, especially if there is additional passband ripple (and there always is, once you go and build the filter with real-world components). And thrust me, this takes a little bit more than just one octave.

 

Good point, and that's why I took the slope far from the cutoff point yet well above the noise floor. I'm making the informed assumption that the transition bandwidth will be less than a few octaves.

 

And umm...I'd prefer not to thrust you, but thanks for the invite. :confused:;)

 

 

Interesting, reading the same graph I found the slope to be 13 dB.

 

From 3k-6k? How did you manage that? Find the lowest possible point at the upper bound, and the highest possible point at the lower bound?

 

Sure. Here is my email address.

 

Sent. I've sent you the wav files so you can do your own analysis, since I don't think we agree on methodology.

Link to comment
Share on other sites

  • Members

 

The fewer filter poles, the larger the transition. For a 2-pole Chebychev filter this can be a tricky job, especially if there is additional passband ripple (and there always is, once you go and build the filter with real-world components). And thrust me, this takes a little bit more than just one octave.

That statement is too generalized. Compare a Bessel and a Chebychev filter.

 

In-band and passband ripple do not come from real-world component tolerances; they come from the fact that real-world filters are causal, or non-anticipatory - they cannot see the future. This is the same reason brick-wall filter slopes don't exist.

Link to comment
Share on other sites

  • Members

How does it make the data incorrect? Break it down for me. I'm not a master, but I have actually had college-level exposure to this stuff, plus my own reading of "The Scientist and Engineer's Guide to DSP", so if you are thorough or at least very precise I should have no problem understanding. "Incorrect data" is extremely vague. I have no problem admitting I'm wrong if you can demonstrate that (after all, this is for our own enrichment), but don't tell me that I'm wrong - show me why I'm wrong.

 

 

 

A 4-pole Butterworth filter analyzed with a Blackman-Harris window @ 1024 FFT for the frequency range 400-22000 Hz:

[image: f1.gif]

A 4-pole Butterworth filter analyzed with a Blackman-Harris window @ 32k FFT for the frequency range 400-22000 Hz:

[image: f2.gif]

Let's pick a spot - say 5.6 kHz. The first graph reads -80 dB, the second reads -96 dB. I think most of the world would call the data on the first graph "incorrect".
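For anyone who wants to reproduce this dispute, here is a sketch of the single-block version of the experiment (SciPy; the sample rate, cutoff, and noise seed are assumptions):

```python
import numpy as np
from scipy import signal

fs = 44100
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs * 2)                # 2 s of white noise
b, a = signal.butter(4, 1000 / (fs / 2), 'low')    # 4-pole lowpass @ 1 kHz
y = signal.lfilter(b, a, noise)

for nfft in (1024, 32768):
    # one windowed block, no averaging - the disputed methodology
    block = y[:nfft] * signal.get_window('blackmanharris', nfft)
    mag_db = 20 * np.log10(np.abs(np.fft.rfft(block)) + 1e-12)
    k = int(5600 * nfft / fs)                      # bin nearest 5.6 kHz
    print(f"{nfft}-point single block @ 5.6 kHz: {mag_db[k]:.1f} dB")
```

One thing worth noting: unnormalized FFT magnitudes of noise also grow with block size, by 10*log10(32768/1024) ≈ 15 dB between these two sizes - in the same ballpark as the -80 vs -96 dB gap above - so part of the disagreement may simply be whether the software normalizes for FFT length.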

Link to comment
Share on other sites

  • Members

In-band and passband ripple do not come from real-world component tolerances; they come from the fact that real-world filters are causal, or non-anticipatory - they cannot see the future.

Heh, tell that to my high-frequency filters. No matter what tricks I've tried (placing inductors at 90 degrees to each other, shielding, etc.), I can't get away from the ripple and awful attenuation. So far the only workaround has been splitting the filter into two sections: first apply a gentle 2-3 pole filter to prevent demodulation, then amplify by 10-15 dB, then hit it hard with a sharp 7-pole Chebychev. The amplifier compensates for all the losses, and from the outside it looks like an ideal filter. ;)

 

I think the situation in digital synths is much simpler. It's all DSP: you want 24 dB, you get 24 dB. I still wonder... maybe Yamaha's engineers modeled this "24 dB" filter by comparing it to some actual device. By contrast, when you switch to 24 dB on a JP-8080 you can hear right away that it is 24 dB; same with the Super JV series. Maybe some companies have different criteria for what a 24 dB filter is and how it should perform.

Link to comment
Share on other sites

  • Members

It is all nice that you had college-level exposure to this stuff and read "The Scientist and Engineer's Guide to DSP". This is all great and cool. But it won't change the fact that low FFT analysis brings out incorrect results and cannot be used as a replacement for high FFT + data normalization in white noise applications. There is nothing I need to tell or prove to you. It's a fact! (For details read Urban's post.)

 

Which details in Urban's post? I don't think there are any that address my specific query.

 

 

A 4-pole Butterworth filter analyzed with a Blackman-Harris window @ 1024 FFT for the frequency range 400-22000 Hz:

[image: f1.gif]

A 4-pole Butterworth filter analyzed with a Blackman-Harris window @ 32k FFT for the frequency range 400-22000 Hz:

[image: f2.gif]

Let's pick a spot - say 5.6 kHz. The first graph reads -80 dB, the second reads -96 dB. I think most of the world would call the data on the first graph "incorrect".

 

How do you know which one is incorrect? Judging by the vagueness of your responses thus far, I don't think you even have the technical understanding to make the assertion. All a layman could conclude is that one is different from the other. Anyway, I come up with very different results when performing the *same* analysis (4-pole lowpass Butterworth filter on lowpass noise) - observe:

 

1024:

 

[image: Butter1024.png]

 

32k:

 

[image: Butter32k.png]

 

As you can see, the only difference in this case is that the relative amplitude of the entire analysis is different (which really doesn't matter, since we are only concerned with the relative amplitude of different points on the plot) and that the 32k FFT is a lot less smooth, for reasons I've previously explained: FFT analysis tools average or sum each block of analysis - since our filters are static, this will *not* distort the results, it will improve them.

 

The program I used, BTW, is SpectraLAB. I can show similar results in *any* FFT analysis tool I've ever used (and I've used a bunch), including Audacity, which we were initially using. I'm actually curious as to how you could have come up with such radically different results.

 

*edit* I think I figured it out - your 1024-point FFT is just one block of analysis. My whole point is that if you analyze an entire file, with several seconds' worth of data, the FFT can analyze many blocks of that noise and add them together, thus averaging the spectrum. That's what I'm showing. Since for this test I used a 10-second noise sample passed through a Butterworth filter, these graphs are approximately 431 and 13 blocks added together, respectively. Yes, with just one block you will on average show a greater error, but if enough blocks are processed, the difference becomes insignificant pretty quickly; in fact the smaller FFT size will begin to show less deviation more rapidly, as I have shown. But you have to understand the tools.
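The block-count arithmetic here checks out (assuming 44.1 kHz material and no overlap between blocks):

```python
fs, seconds = 44100, 10
for nfft in (1024, 32768):
    print(f"{nfft:>6}-point blocks in {seconds} s: {fs * seconds // nfft}")
# -> 430 and 13, in line with the "approximately 431 and 13" above
```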

 

Yes. I understand. It is because I lie, and the software must be lying too.

 

Umm, you missed the joke. You said "thrust", instead of "trust". Two very different things. :)

 

 

So ... to conclude this mission impossible: a smaller FFT size does not equal data normalization. All the graphs so far have confirmed this. None showed the opposite.

 

Mine showed the opposite.

Link to comment
Share on other sites

  • Members

You're doing 9-pole systems with inductors?! :o :o :o

I don't envy your job. Too much interaction between stages with passive filters. Good luck. :(

 

Yeah, but that's why passive filters can kick ass for musical devices (most guitar amp tone controls, for example)! I couldn't imagine designing such systems for precision work, but for character...hell yeah.

Link to comment
Share on other sites

  • Members

 

I find this thread interesting. You may proceed.


I'm unfamiliar with the various methods of filtering in the digital domain; if some delay is acceptable to look ahead in the data, is it then possible to do a "brick wall"?

This begins to go into the real world, which we're not taught in school. So, my disclaimer is that I try my best to bring facts to this forum rather than leave people misguided. Engineering is not easy, and there are common misconceptions around every corner. For example, the people on [AH] are talking about a headphone out looped back into a Voyager as a way to possibly damage the low-output-impedance headphone out by drawing too much current. Likely story, seeing that even a 1k input impedance is much higher than headphones with little inductance. Someone somewhere made a statement and it has been blown out of proportion to the point that people fear using the feedback method.

 

Anyhow, filters.

 

The Chebychev filter is using delay (phase) to get a steeper response. The problem is that the delay cannot be constant per frequency; there has to be a linear relationship between the two. So, as you add more phase rotation, some harmonics come out of the filter later than they should, causing ringing on the output.
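A sketch of that frequency-dependent delay (SciPy; the orders and 1 kHz cutoff are assumptions): the sharp Chebyshev's group delay piles up near cutoff, while a gentle Butterworth stays comparatively flat:

```python
import numpy as np
from scipy import signal

fs = 44100
freqs_hz = np.array([200, 500, 800, 950, 1000, 1050, 1500])
for name, (b, a) in (('7-pole Chebychev', signal.cheby1(7, 1, 1000, fs=fs)),
                     ('2-pole Butterworth', signal.butter(2, 1000, fs=fs))):
    _, gd = signal.group_delay((b, a), w=freqs_hz, fs=fs)
    print(name, np.round(gd / fs * 1000, 2), 'ms')   # delay at each frequency
```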

 

I imagine you can get closer to an ideal filter in DSP, but you could in the analog world too. A 72-pole Bessel analog filter would have close to an ideal response, but would take up a lot of space. I'm sure the same thing could be done in an FPGA, but I haven't studied enough FIR filters to even imagine how to go about it.
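For what it's worth, the look-ahead question quoted at the top of this post has a standard DSP answer: a long linear-phase FIR trades a fixed latency for an almost brick-wall response. A sketch with SciPy's firwin (the tap count is an arbitrary assumption):

```python
import numpy as np
from scipy import signal

fs = 44100
numtaps = 2001                                   # ~22.7 ms of look-ahead
taps = signal.firwin(numtaps, 1000, fs=fs)       # lowpass, 1 kHz cutoff
w, h = signal.freqz(taps, worN=4096, fs=fs)
mag_db = 20 * np.log10(np.abs(h) + 1e-12)
for f in (900, 1000, 1100, 1200):
    print(f"{f} Hz: {mag_db[np.argmin(np.abs(w - f))]:6.1f} dB")
print(f"latency: {(numtaps - 1) / 2 / fs * 1000:.1f} ms")
```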

 

Filters themselves can be an engineering career. There are tons of books on them for various applications. The analysis gets very complex very quickly.

Link to comment
Share on other sites

  • Members

Mine showed the opposite.

I used Adobe Audition 2.0 (ex-Cool Edit) for the above analysis. One block (the same one) was used for the test. Of course, if I take 10 minutes of white noise, there is no doubt I will get the perfect curve even with a 512 FFT, since all blocks will be averaged. However, for shorter periods of time, and deep zooming, I prefer the maximum possible FFT density. That is exactly what I did in my analysis of the AN1x filters.

 

If we zoom out far enough, a 64 FFT will do the job. Right! My images were done in the 450-900 Hz region. Because of the low frequency and large zoom (12 dB difference) I took the 16k FFT to ensure I could read the slope to a precision of 0.1 dB (forgive me, I'm obsessed with precision*). And to make the data easier to read I normalized it. I am not so sure that a low FFT would give enough precision for this work. In the end, why do we have 16k, 32k, etc. FFTs when 1k is enough, eh?... :)

 

*It took me 2 hours of adjusting graphs in gnuplot to find the exact values of those 3 filters. But now I've got them!
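The bin-width arithmetic behind that choice (assuming 44.1 kHz material):

```python
fs = 44100
for nfft in (1024, 16384):
    binw = fs / nfft
    print(f"{nfft:>6}-point FFT: {binw:5.2f} Hz bins, "
          f"{int(450 / binw)} bins across the 450-900 Hz octave")
# -> ~43 Hz bins (10 across the octave) vs ~2.7 Hz bins (167 across)
```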

Link to comment
Share on other sites

  • Members

I used Adobe Audition 2.0 (ex-Cool Edit) for the above analysis. One block (the same one) was used for the test. Of course, if I take 10 minutes of white noise, there is no doubt I will get the perfect curve even with a 512 FFT, since all blocks will be averaged. However, for shorter periods of time, and deep zooming, I prefer the maximum possible FFT density. That is exactly what I did in my analysis of the AN1x filters.

If we zoom out far enough, a 64 FFT will do the job. Right! My images were done in the 450-900 Hz region. Because of the low frequency and large zoom (12 dB difference) I took the 16k FFT to ensure I could read the slope to a precision of 0.1 dB (forgive me, I'm obsessed with precision*). And to make the data easier to read I normalized it. I am not so sure that a low FFT would give enough precision for this work. In the end, why do we have 16k, 32k, etc. FFTs when 1k is enough, eh?... :)

*It took me 2 hours of adjusting graphs in gnuplot to find the exact values of those 3 filters. But now I've got them!

 

Well, I don't think 10 minutes of noise would be necessary, but hey, at least we understand each other. :) Just a difference in methodology. I prefer to take lots of data when I can (which admittedly I didn't do for the Mirage analysis - it probably would have yielded better results if I had taken more data, or maybe even resampled at like 384 kHz and applied some decimation), since, well, it's just more convenient.

 

For some applications, larger FFT block sizes are definitely very important - for example, if a recording has some interference and we would like to know exactly at what frequency (or frequencies) that interference exists, we need a higher spectral line resolution to really pinpoint it. Likewise, if we were measuring ripple or maybe even the transition bandwidth, larger block sizes would be necessary.

 

Anyway, it was a good discussion (certainly a lot more interesting and productive than some of the...erm...topics around here). :) Having to explain one's ideas really helps to solidify them in one's own mind.

Link to comment
Share on other sites

  • 6 years later...
  • Members

 


Don Solaris wrote:
Especially if a more reasonable range is chosen (like 3 kHz-6 kHz)

 

This is way too high. To find the filter slope (before you zoom in) you need at least one octave below and one octave above the observed frequency. And one octave above 3-6 kHz is 6-12 kHz, a region where many synths already fade out in power (this includes VA synths such as the AN1x, which is already at -20 dB by 12 kHz whether or not a filter is applied).

Sorry, but what is this supposed to mean?

 

I assumed it was an implication that the fixed analogue reconstruction low-pass filter (LPF), necessary to remove high-frequency imaging (a.k.a. DAC-side aliasing, though I try to avoid that term due to its normal use at the ADC) and other artefacts from the DAC'd signal, had a cut-off frequency well within the audible range. That would explain the huge attenuation at 12 kHz you reported, but it would also have horrifying implications for sound quality. The next time I played my AN1x I used headphones that must have been crap, so it did sound muffled, and I thought this theory was plausible. So, I wanted to test it for myself.

 

Suffice it to say that listening on proper speakers, and testing this empirically, found no such thing: there are no cuts in the audible range, just a reconstruction LPF with the very sane

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.

