AluminumNeck

Everything posted by AluminumNeck

  1. there is a difficult-to-quantify difference (I am at work using decent PC speakers) but it's there. It's almost like some sort of conversion artifact is either being added or missing between the two clips, clip 1 vs. clip 2.
  2. 2008). If it's interesting, maybe I'll post those. No, they are the same length. I just had the time axis zoomed to slightly different levels. Sorry about that.
  3. it pumps and oscillates every time the kick drum hits. It just sucks the air out of the room.
  4. Here are the screenshots comparing the original 1992 release CD vs. the Ten remaster/remix job. What an awesome job they did destroying this awesome CD. The lower waveform is the original and the upper is the hack job. Nothing like some brickwall limiting to destroy a great CD. I also lowered the volume of the remaster to show just how much dynamic range was lost (see the crest-factor sketch after this list).
  5. Sprinkle the spice into the sauce for the mix. Do not make the spice the mix. Compression is a great tool to add glue to things, but it's a spice, not the flavor. If you get hung up on the idea that one tool will be your deliverance, you're deluding yourself. If you need lots of compression and EQ on the mastering end, something is very wrong at the track or mix level. If you're looking for a detailed read about mastering, pick up Bob Katz's "Mastering Audio: The Art and the Science". It covers pretty much everything you'd ever want to know about mastering, although it's geared less toward the home recordist and more toward the studio recordist. It provides a world of knowledge on the subject, though. I'm learning how to use a multi-band compressor at the moment. I've always just used a basic limiter and some gentle EQ to "master" my recordings, but the multi-band seems like it's really the key tool for mastering (if I could figure out how to use it correctly...). I prefer trying to master my own recordings no matter how hard I fail, because someday, after trying and trying, I'll finally get it right, and it'll be worth it not to have to pay big bucks to get a recording mastered.
  6. you are now on ignore. First off, DSP cores are just really fast math processors. That's about it. http://en.wikipedia.org/wiki/Digital_signal_processor They perform really fast linear (occasionally parallel) math processes; this depends entirely on the core and how it's configured. This topic isn't about DSP signal processors, which in and of themselves are entirely different animals than what's found in consumer and PC hardware outside of sound cards and other specifically designed gear. Sample rate is how much data can be sampled. You need a minimum sample rate to capture a given spectrum; everything beyond that goes to accuracy. I am not the one changing topics. I am pointing out a flaw in the thought process. What people really want is a high-quality analog sound out of digital hardware. The processing power isn't there yet, but it's close. Secondly, digital does not produce the same output as analog. That's why they sound different. Buy some good test equipment, not a Radio Shack scope, and find out for yourself. And the end result of digital hardware is that you lose data at both ends of the AD-to-DA chain due to the sampling process, which would not be lost unless the tape medium was deficient. And for the last time, I am not saying one is better than the other. Just different. You're also going to get a different wave output between digital and analog given the nature of the medium. But continue to be ignorant; it's OK. Digital things happen successively, whereas with analog gear things can and do happen simultaneously. The issue is what that really means at the end of the day. If things happen rapidly enough in succession, does the ear believe that they happened at the same time? Lol, sample rate defines exactly what frequencies can be captured, which is mathematically proven by the Nyquist theorem. For the third time, nobody is saying there isn't a difference between digital and analog. I haven't once mentioned audible frequencies in my discussions with you. My whole argument is that your explanations of DSP are way off. OK, but again, my problem is with your explanation of why. It is provably wrong. Every time someone points that out, you seem to change topics. If you want physical proof, I could provide you with a sequential software implementation of convolution, and a highly parallel custom circuit implementation of convolution in an FPGA. One is sequential, the other does hundreds of operations per cycle, and they both produce the exact same output. Thus, sequential/parallel execution is completely irrelevant. Only the end result is ever heard (see the convolution sketch after this list). Before repeating again that the difference between analog and digital is caused by sequential computation, please understand this previous paragraph.
  7. sample rate has nothing to do with frequency response. I am talking sample speed: how fast, how many, how much depth. Secondly, that is essentially what all these debates are about: the difference between the audio quality of digital and analog. There are differences, and that's the core of these discussions. The annoying part is that this keeps coming back to audible frequencies, but that's not really the debate. The real debate is perceived sound quality and what the unique qualities of higher sample rates and bit depths are. I am not going to disagree that 44.1 captures up to 22.05 kHz (see the aliasing sketch after this list). That's not what I am talking about. I am talking about sonic interaction. This is the area where analog seems to win out. The question is why. To determine possible causes you have to look at how both mediums produce a waveform for output. Therein lies the difference. OK, why would it be nice? Can you hear 160 kHz frequencies? So is parallel computation. What is the point? Sequential computation is the difference between analog and digital??? What does sequential computation have to do with anything? The difference between analog and digital is continuous vs. discrete. Digital signals enable computation on signals (i.e., DSP). That computation can be completely sequential, embarrassingly parallel, or any point in between, while still computing the same result. In fact, these summations you are referring to are more commonly done in parallel, as an adder-tree dataflow graph. Before you start talking about embedded processors: yes, many processors can only execute a single instruction at a time. But again, this doesn't affect sound quality. It only changes the latency required to compute a summed signal. Again, nobody is arguing that there isn't a difference between analog and digital. But your explanation of the difference is misguided.
  8. if we are all truncating/dithering down to 16/44.1, then what would be the point of recording at 64/320 kHz? Playback has to catch up first. I didn't say it was needed. I said it would be nice. Personally, I think we are one generation of electronics away from that type of bandwidth. Sequential computation is a reality. I am long over it. I don't think you hear it, but perceiving some intangible quality difference between the two is very real. It does exist, so stop trying to say it doesn't. Others who have commented on the way digital mixing vs. analog mixing works and sounds have made similar analogies. There is a difference between the two. The point I am making is that the sequential computation is what makes them different. Not so much the bandwidth, but the way in which the waveforms are created. I don't think anyone was questioning the difference between analog and digital. However, we were definitely questioning your explanation of being able to hear sequential computation. I'd like to ask why you think 320 kHz would be nice, but I'm afraid this would jump back to the beginning of the thread. If you want to use a really high sample rate, go ahead, but please don't spread information saying that it is needed. The whole point of the Lavry article was to dispute this claim.
  9. I am talking about the difference between analog and digital. Analog does not sum. Digital does, and therein lies a finite difference. How much does this matter? Depends on your philosophy. My argument is that faster digital audio would negate the difference, along with maybe a different approach to how it's coded and used. What would be nice is 64-bit, 320 kHz sampling. That would theoretically be better than anything ever. It would also allow a vastly higher level of detail. I can hear a difference between 24/48 and 24/96, and even between 16/44 and 16-bit at various sampling rates, but beyond that it becomes a moot difference. My point is that you're never going to get some of the natural qualities of analog out of digital. Some of us care, some of us don't. I am in the latter camp. I don't personally care, but 24/32-bit, 96 kHz bandwidth would be nice. Why do you assume that none of us have embedded system experience? You're right. They don't play back 2 different signals; they sum the signals and play back the sum. What's wrong with that? As I said earlier, microprocessors have been able to execute multiple instructions simultaneously for 2 decades! If you are going to quote us your CV, you might as well sound like you know what you are talking about. Regardless, this is irrelevant. You are describing one way that signals can be summed. So what? This is not what you hear.
  10. I will borrow the big scope this weekend and get out the record player. I will also do an audio capture of the same source on digital playback. I will post up the results.
  11. until something overlaps, yes. But then again, I have access to a very high-end oscilloscope on occasion. It's not a big deal, but you do get parallel signals from analog, unless of course they overlap. Get your hands on a 233 MHz scope. It's very minute in nature, but it's still there. Parallel waves? You're telling me that you're seeing multiple waveforms on an oscilloscope screen?
  12. I am not saying one is better than the other. I am stating there is a difference, and some people find that difference to be very appealing. You will see analog signals morph together, whereas with digital you get a summation. That difference is the distinction between good-quality analog and good-quality digital. The summing takes away some of the characteristics of the waveform, because we are limited by the resolution of the wave generator at some point. Don't take this as criticism of digital. It's a great medium. If they double the current bandwidth, the difference will be totally inaudible. With the newest generation of embedded processors coming out, the gap between PCs, handheld devices, and small players is about to disappear. Why not push the envelope all the way up? That's my point. It can't hurt. But they are different. I guess I'm still not following you. If they can sum signals, why would they need to play back two simultaneous different signals? Why not just sum them together and play the one resulting signal?
  13. and there is the difference between analog and digital. Analog represents all waves that may exist simultaneously unless they overlap, whereas digital must SUM the waves into distinct events. Also, they do come in succession. Grab an oscilloscope and have a look. You will find more wave patterns in a good analog record than in a digital one; where you will see them is in parallel. You won't see parallel waves in a digital recording. I'm pretty sure that's inaccurate. I mean, think about it. If it worked that way, you'd be unable to mix anything larger than, say, a dozen tracks without running into some pretty serious problems. You certainly wouldn't be able to mix sessions with a hundred-and-some-odd tracks like you see in some places. It just doesn't make sense. Your DAW software adds the waves together digitally and then sends the resulting waveform data to the DA converter, which converts it into one single analog waveform (see the summing sketch after this list). It doesn't switch back and forth between them. It wouldn't make sense. Your computer is basically a big adding machine, and summing waves together is exactly what it's good at: addition.
  14. they can sum signals. Correct; what they can't do is play back 2 simultaneously different signals. Hence the difference between analog and digital. This is not how it works. DAWs sum multiple signals together, and then play the result back. There is no time multiplexing. It is inaudible because you are hearing a single summed signal, not two time-multiplexed signals.
  15. a digital circuit can, i.e., if you have multiple circuits in parallel. But a microprocessor cannot unless it has multiple math cores, and most processors do not. I am not talking about dependencies. I am talking about executing instructions. We are talking about executing instructions here. I do some low-level coding for odds-and-ends embedded devices. I ain't really good, but I do understand how processors work. This is also why you get a larger, more significant gain in performance from running multiple processing cores vs. one single core of the same total clock speed. Digital circuits can do tens of thousands of operations in parallel! You are referring to dependencies. Dependent operations cannot be done in parallel, but they can certainly be done in the same clock cycle. VLIW and superscalar microprocessors have been around for 2 decades. This is nothing new. Correct. There are plenty of VLIW DSP architectures.
  16. that's exactly what it is doing. It's just doing it very, very, very fast. So fast that it is inaudible. But this is where analog wins out. Analog can have multiple sinusoidal waves simultaneously, but the caveat is that when waves overlap, one will momentarily cancel the other. This is why analog seems to GEL better. I guess I'm not following exactly what it is you're trying to say, AN. You're saying that a digital system can only generate "one wave at a time," but what wave are you talking about? Say you record a guitar track and a bass track. Are you saying that when you play those back at the same time, the computer is only playing back one at a time, switching between the two? I'm not trying to nitpick; I just want to make sure I understand what you're saying because I'm a little confused right now.
  17. Jim, I work with high-end electronics all day. I even do some rudimentary circuit design and some embedded programming. It's not like I don't have a clue as to what I am talking about. You can create a wave and then create another, but what you cannot do is create multiple simultaneous waves or transmit them. Just not gonna happen. Now, if you have parallel buffers where you can store the wave data and you have a system to broadcast them simultaneously, then yes, you can create simultaneous waves. But when you're hearing digital audio (and I would be glad to use an oscilloscope to prove this), you can only have waves in succession. The magic happens at the DA conversion step with buffers and oversampling. It happens really fast, but you can only process data in succession. Each function generated by the instructions may eat one clock cycle or multiple clock cycles; it depends on the design of the core. You have to realize that these cycles are pretty damn fast, too. The average 8 MHz CPU can run a whole helluva lot of cycles in 1 second, as slow as it is. DSP chips work the same way. The only difference is that they have very fast cores designed to do lots of sampling and multiplication/division in quick succession. But even DSP chips only execute one instruction after the next. Add all that together and you get a series of events, but never simultaneous events. That's where faster digital sampling rates really help. But they also eat more clock. Chicken in one hand, cock in the other. Which one is better? I dunno, but without both you can't make eggs. No, it is all at once. When the waveform is reconstructed, that's what you hear. The time it takes to decompose, do the series of summations, and reconstruct is why we have latency.
  18. yes and no. But it will never be simultaneous like analog. That's the quality that's really being debated: the occurrence of simultaneous data broadcast. But what you're neglecting is that each of those transforms occurs in rapid succession, not all at once. Actually, no. This is exactly what the Fourier transform does. It decomposes the signal into a series of sine-wave additions and then reconstructs them. That's the definition of a summation series. It doesn't just lump it all together and try to guess; it breaks it down into a series of summations, and then lumps it back together (see the FFT round-trip sketch after this list).
  19. digital equipment does not do anything simultaneously, ever. You have a clock, and in that clock cycle you can execute instructions. However, you can only execute one instruction or a series of instructions per clock cycle. I.e., 8+8 = 16, then the next 4+4 = 8, then you can add 16+8 = 24. You can't do 8+8 = 16 and 4+4 = 8 and have them both sum and translate to 24 in one cycle in parallel. You can't do both math instructions simultaneously; you can do one after the next. Now, some processors can execute multiple instructions per clock cycle. Motorola stuff is great at this, mostly embedded hardware like the MP86586 Spanish Oak processors used in automotive controllers. This is my point. With digital audio you get a succession of waves. With analog you can have simultaneous waves occurring at exactly the same time. And yes, your converters have a bit order. Now the job of the DAW is to take each message and convert it into a properly placed bit. It works a bit like this. Let's look at 2 channels. Channel one sends out data in, let's say, this fairly standard format: header, time stamp, data, checksum. So right there we have 4 fields of information. Now, if we use 16 bits, we can condense some of the messages into one word of data, so we can have a hex value of FFFF, i.e., 65535. Use the first 2 bytes for the ID and time stamp, the next 2 bytes for the audio sample data, and the last 2 bytes for the checksum to verify the data. So we can send a message that looks like this: FFFF (ID and time stamp), FFFF (audio sample data), FFFF (checksum). That's just one channel sending 1 sample. Now we have the next channel repeating the same protocol and data. I am unsure if they actually use a checksum on audio data streams over FireWire, but you get the point. You don't just send samples over the data bus, and you also can't send them simultaneously. You send them one at a time in rapid succession. That's all. The same holds true on both ends of the conversion process. At a minimum, on FireWire, to keep the data organized, you have to have a time stamp (master clock), an ID byte (where the data goes), and data bytes. So in reality your "simultaneous" recording setup is basically sending data in rapid succession, but never simultaneously. I am unsure of the exact protocols used on the data bus, but it will never approach the actual simultaneous quality of analog. As bandwidth goes up it will get closer. You're missing a fundamental part of the process here, AN. Your asymmetrical wave is simply a bunch of symmetrical sinusoidal waves stacked on top of each other, of varying frequencies and lengths. As long as your sampling rate is above twice the frequency of the highest-frequency component of that weird shape, you'll reproduce it with 100% accuracy (see the band-limited reconstruction sketch after this list). I'm not sure exactly what you're trying to say where you talk about a bit here and a bit of the next... Are you saying that, for example, an 8-track converter only samples one track at a time? I could be wrong, but I don't think that's accurate. In fact, I'd be VERY surprised if that was true, just on the face of it, as it seems like it would be much easier to design eight channels of conversion working in tandem off a single clock than it would be to cascade it. Could there be design ramifications that would call for a cascaded design? Sure. And I'm not a converter designer, so it's entirely possible that I'm ignorant of something here. It just seems like since they're all acting off the same clock signal (they pretty much HAVE to be), it would be easiest to have them acting together.
  20. I also dislike class D amplifiers. I can hear that PWM output stage effect on the notes and harmonic content.
  21. except if you only use 2 samples per cycle you're not going to get an asymmetrical wave. You're going to get a perfect sinusoidal wave. 16-bit 44.1 is about 50 yards short of the goal line on accurately reproducing audio (it's pretty damn good, however). The only caveat is that 95% of listeners won't hear the difference of the next 50 yards because they have crappy equipment. Even if they have great equipment, they still won't hear it. If you want to create an oddly shaped wave, you need more data points. Trying to make a square wave with a sinusoid is actually pretty easy as long as you can easily determine the rising and falling edges and the height of the square. You can create a square wave with 2 sinusoidal waves, in reality. Use the zero crossing of the sinusoid to determine the edge start. Use the top of the positive swing to set the height. Use the next zero crossing to determine off. This isn't the problem. It's when you have a bump in the wave that you create issues, like if you need a sawtooth. Also, the sampler on the output side is not going to try to sum the waves. Then we have the effect of the ants marching: each wave is followed in succession by the next wave. In analog we have coinciding multiples of signals. In digital we create one wave after the next. When the guy a few pages back was talking about how analog seemed to coalesce better, this is why. In analog things can happen simultaneously. In digital we get rapid succession. The closer we get digital to zero in terms of event timing, the more faithful it becomes. What's really needed is a new medium where we can actually have multiple time stamps and streams in sequence and in parallel. Watch your mix quality improve 10x. Right now, when you record 8 simultaneous tracks, you're not. You are recording bit 1 of track 1, then bit 1 of track 2, then bit 1 of track 3, and so on and so forth. This is why digital artifacts are such a pain in the ass. In analog you get 8 simultaneous events. Righto, sorry guys! That's exactly what I was saying and was wrong about on the last page. The fact I missed and you're missing is that ANY wonky, goofy-shaped, jaggedy, asymmetrical waveform can be created with a summation of sine waves of various frequencies (see the square-wave sketch after this list). The highest-frequency sine wave in that puzzle would be the one Nyquist is talking about and operating on, so double THAT frequency is the correct sampling rate.
  22. not really. Keep reading. That has been covered.
  23. This is of course assuming that all audio signals are perfectly sinusoidal, which they are not. The other issue I see with the whole argument is the rate at which you can reproduce audio. What about asymmetrical amplitudes? They do occur and would throw a Nyquist two-sample paradigm right out the door. For example, in a live room you may have dozens of frequencies working together simultaneously. On a CD you get one recreation, then another, and it stacks up to a bunch of quickly occurring but staged signals.
  24. in the future, when improved speaker technology and uber-fast processors come to light (not far off, BTW), I bet we see quality improvements well into the 300 kHz zone at 128-bit. At that point we will be cooking with grease, and we should be able to actually reproduce live acoustic environments.
  25. No need. There is something to having more sonic capture space to work in. Lavry makes nice stuff, but at the end of the day there are better ways to skin a cat. I'll get my coat.
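
A crest-factor sketch for the remaster comparison in post 4. This is only a minimal illustration, not an analysis of either actual master: the "mix," the gain, and the clipping threshold are made-up stand-ins. The point is just that brickwall limiting shows up numerically as a collapsed peak-to-RMS ratio, which is the measurable counterpart of the flattened waveform in the screenshots.

```python
import numpy as np

fs = 44100
t = np.arange(fs * 2) / fs  # two seconds of audio

# Stand-in "mix": a couple of tones plus periodic noise bursts for rough dynamics.
rng = np.random.default_rng(0)
mix = 0.4 * np.sin(2 * np.pi * 110 * t) + 0.2 * np.sin(2 * np.pi * 440 * t)
mix += 0.3 * rng.standard_normal(mix.size) * (np.sin(2 * np.pi * 2 * t) > 0.8)  # "kick" bursts

# Crude brickwall "remaster": push the gain up, then hard-clip at full scale.
remaster = np.clip(mix * 3.0, -1.0, 1.0)

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; lower means more squashed dynamics."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

print("original crest factor: %.1f dB" % crest_factor_db(mix))
print("remaster crest factor: %.1f dB" % crest_factor_db(remaster))
```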
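
A convolution sketch for the sequential-vs-parallel argument in post 6. A minimal sketch, assuming an arbitrary FIR filter as the workload: the hand-written loop stands in for a strictly sequential implementation, numpy's convolve stands in for a bulk implementation, and the outputs match to floating-point precision, so execution order cannot be what you hear.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(1000)   # arbitrary input samples
kernel = rng.standard_normal(32)     # arbitrary FIR coefficients

def convolve_sequential(x, h):
    """Strictly sequential convolution: one multiply-accumulate at a time."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        acc = 0.0
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                acc += h[k] * x[n - k]
        y[n] = acc
    return y

seq = convolve_sequential(signal, kernel)  # one operation after the next
par = np.convolve(signal, kernel)          # same arithmetic done in bulk

# Identical result up to floating-point rounding: only the end result is heard.
print(np.allclose(seq, par))  # True
```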
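
An aliasing sketch for the sample-rate point in posts 6 and 7. A minimal sketch with ideal sampling of pure tones: at 44.1 kHz, a 30 kHz tone produces exactly the same samples as an inverted 14.1 kHz tone. That is the Nyquist statement in concrete form: the sample rate defines which frequencies can be captured at all, and content above fs/2 folds back rather than being captured "less accurately."

```python
import numpy as np

fs = 44100                      # CD sample rate
n = np.arange(1024)             # sample indices
t = n / fs

tone_30k = np.sin(2 * np.pi * 30000 * t)   # above Nyquist (fs/2 = 22.05 kHz)
tone_14k1 = np.sin(2 * np.pi * 14100 * t)  # its alias: 44100 - 30000 = 14100 Hz

# Once sampled, the 30 kHz tone is indistinguishable from an inverted 14.1 kHz tone.
print(np.allclose(tone_30k, -tone_14k1))   # True
```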
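
A summing sketch for the mixing discussion in posts 13 and 14. A minimal sketch with two made-up tracks and arbitrary gains: mixing is sample-by-sample addition into a single array, and that one summed waveform is what gets handed to the DA converter; there is no switching back and forth between tracks.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second

# Two "tracks": a bass note and a guitar-ish tone with a little vibrato.
bass = 0.5 * np.sin(2 * np.pi * 55 * t)
guitar = 0.3 * np.sin(2 * np.pi * 196 * t + 0.2 * np.sin(2 * np.pi * 5 * t))

# Mixing is sample-by-sample addition into one output buffer.
mix = 0.8 * bass + 1.0 * guitar

# This single summed waveform is what goes to the DA converter.
print(mix.shape, float(np.max(np.abs(mix))))
```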
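
An FFT round-trip sketch for the Fourier point in post 18. A minimal sketch using numpy's FFT: the forward transform expresses a block of samples as a sum of sinusoids and the inverse transform adds those components back up, recovering the original block to within floating-point rounding. Nothing is "guessed" in the round trip.

```python
import numpy as np

rng = np.random.default_rng(2)
block = rng.standard_normal(4096)               # any block of audio samples

spectrum = np.fft.rfft(block)                   # decompose into sinusoidal components
rebuilt = np.fft.irfft(spectrum, n=block.size)  # sum the components back together

print(np.allclose(block, rebuilt))              # True
```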
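
A band-limited reconstruction sketch for the reply in post 19 ("as long as your sampling rate is above twice the highest-frequency component, you'll reproduce it"). A minimal sketch, assuming a made-up lopsided waveform whose highest component (9 kHz here) sits well below fs/2, and textbook Whittaker-Shannon (sinc) interpolation. The reconstruction error in the middle of the block is tiny and comes only from using a finite number of samples, not from the asymmetric shape of the wave.

```python
import numpy as np

fs = 48000                       # sample rate, comfortably above 2 * 9 kHz
T = 1.0 / fs
n = np.arange(4096)              # sample indices

def asym_wave(t):
    """Lopsided, non-sinusoidal wave: a handful of harmonics with odd phases."""
    return (np.sin(2 * np.pi * 1000 * t)
            + 0.6 * np.sin(2 * np.pi * 3000 * t + 1.3)
            + 0.3 * np.sin(2 * np.pi * 9000 * t + 2.1))

samples = asym_wave(n * T)

# Reconstruct the continuous wave at points between the samples, mid-block,
# away from the edges of the finite sample window.
t_eval = (2000 + np.arange(100) * 0.37) * T
sinc_matrix = np.sinc((t_eval[:, None] - n[None, :] * T) / T)
rebuilt = sinc_matrix @ samples

# Maximum error vs. the true wave: small, limited only by the finite window.
print(float(np.max(np.abs(rebuilt - asym_wave(t_eval)))))
```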
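
A square-wave sketch for the correction at the end of post 21. A minimal sketch of the standard Fourier-series construction: a square-ish wave built purely from a summation of odd-harmonic sine waves (the harmonic count is an arbitrary choice). The more odd harmonics you add, the flatter the tops and sharper the edges, and the highest harmonic you keep is the one the Nyquist limit applies to.

```python
import numpy as np

fs = 48000
f0 = 100                               # fundamental of the square wave
t = np.arange(fs) / fs

def square_from_sines(t, f0, n_harmonics):
    """Fourier series of a square wave: odd harmonics weighted 1/k, scaled by 4/pi."""
    wave = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):   # k = 1, 3, 5, ...
        wave += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
    return wave

rough = square_from_sines(t, f0, 3)    # only a few harmonics: wobbly edges
sharp = square_from_sines(t, f0, 100)  # many harmonics: close to flat +/-1 tops

# Highest harmonic present here is 199 * 100 Hz = 19.9 kHz, which must stay below fs/2.
print(float(np.max(np.abs(sharp))), float(np.mean(np.abs(sharp) > 0.9)))
```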