AwayEam

Members
  • Posts: 644
  • Location: Toronto
  1. The problem becomes apparent when we look at how a sampler works. A sampler's clock circuit 'gives' it a certain amount of time to take a measurement of the signal voltage at the AD: approximately 0.000023 seconds at a 44.1 kHz sampling rate, 0.000010 seconds at 96 kHz, and 0.000005 seconds (5 µs) at 192 kHz.
     HEY BRO2BRO I gotta point out that any cheap-ass Ethernet interface can sample voltages at rates measured in hundreds of megahertz. Sampling audio is obviously a bit different, but if a converter is buggering up at a few hundred kHz, it's just poorly designed.
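The per-sample times quoted above are just the reciprocal of the sampling rate; a minimal Python sketch to check the arithmetic (the rates are the ones named in the post):

```python
# Time available per sample = 1 / sampling rate.
for rate_hz in (44_100, 96_000, 192_000):
    period_s = 1.0 / rate_hz
    print(f"{rate_hz:>7} Hz -> {period_s * 1e6:.1f} microseconds per sample")
```

This reproduces the figures in the quote: roughly 22.7 µs at 44.1 kHz, 10.4 µs at 96 kHz, and 5.2 µs at 192 kHz.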
  2. In the end, who cares, since 99.999 percent of people can't tell the difference. Are we recording ONLY for that 0.001%? I doubt it.
     HEY BRO I think most of the debate is about whether anyone can reliably hear this mess. We're trying to decide whether these audio emperors who claim to hear a difference are wearing clothes or not. I think the whole exercise is tiresome and bordering on obscene. For me, the Dan Lavry posts that Bro2Bro linked to are pretty close to the last word on the subject.
  3. Then, I adjusted the track volume on my DAW to be at -15 dB
     HEY BRO The DAW's track volume doesn't have anything to do with the recorded level. Only the input gain matters.
  4. ...It means I knew how to push a button and not f' things up in the process...
     HEY BRO That's the trick, isn't it? Assuming you have talented performers, the job of the tracking engineer is simply not to {censored} anything up. Of course, as I'm finding out, that's far easier said than done. In audio, it still takes an experienced professional just to avoid making a bollocks of things -- never mind get something that sounds great.
  5. Jim, I work with high-end electronics all day. I even do some rudimentary circuit design and some embedded programming, so it's not like I don't have a clue what I'm talking about. You can create one wave and then create another, but what you cannot do is create or transmit multiple simultaneous waves. Just not gonna happen. Now, if you have parallel buffers where you can store the same data, and you have a system to broadcast them simultaneously, then yes, you can create simultaneous waves. But when you're hearing digital audio -- and I would be glad to use an oscilloscope to prove this -- you can only have waves in succession. The magic happens at the DA conversion step, with buffers and oversampling. It happens really fast, but you can only process data in succession. Each function generated by the instructions may eat one clock cycle or multiple cycles; it depends on the design of the core. You have to realize that these cycles are pretty damn fast, too. The average 8 MHz CPU can run a whole helluva lot of cycles in one second, as slow as it is. DSP chips work the same way. The only difference is that they have very fast cores designed to do lots of sampling and multiplication/division in quick succession, but even DSP chips only execute one instruction after the next. Add all that together and you get a series of events, but never simultaneous events. That's where the faster digital medium sampling rates really help. But they also eat more clock. Chicken in one hand, cock in the other. Which one is better? I dunno, but without both you can't make eggs.
     HEY BRO As mentioned earlier, this is why there's latency in digital audio... You seem to have stumbled your way through an unrelated collection of facts that are basically correct, but I'm really not sure how you're arriving at the conclusion that the sequential operation of processors causes a degradation in digital audio quality.
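For what it's worth, the latency mentioned in that reply falls out of simple arithmetic: digital audio systems hand samples to the converters in buffer-sized blocks, and a block can't play out until it has been filled. A minimal sketch (the buffer sizes here are just illustrative, not from the original posts):

```python
# Latency contributed by one audio buffer: the block must be filled
# before the DA converter can play it, so delay = samples / rate.
def buffer_latency_ms(buffer_samples: int, rate_hz: int) -> float:
    return buffer_samples / rate_hz * 1000.0

for n in (64, 256, 1024):
    print(f"{n:5d}-sample buffer @ 44.1 kHz -> {buffer_latency_ms(n, 44_100):.1f} ms")
```

A 256-sample buffer at 44.1 kHz adds about 5.8 ms per buffer stage, which is why smaller buffers are used when monitoring through a DAW -- sequential processing adds delay, not distortion.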