
Software vs. Hardware



I can say with certainty that most software makes me feel like I'm wallowing around inside an 85-year-old prostitute's vagina trying to find some creative urge, while the right hardware gets me going immediately.


It's glorious that we don't all have to play by the same rules, no?

 

I'm off software for life now ... :eek:


This thread turned into a good lunchtime read.

 

I view working with computer-based audio technology the same way I view working with any computer-based technology. The technology changes rapidly, and working with it requires continual learning and adapting as well as frequent updating of hardware and software. I accept that I'm going to be buying and learning new stuff every couple of years or less. For me that makes it fun and exciting, but YMMV.

 

I have found there is always a sweet spot in the price-performance curve (a hockey-stick-shaped curve with the sweet spot right at the bend in the stick). For example, you can buy an internal (SATA) LG combo dual-layer DVD/CD burner/ROM with Blu-ray playback capability for USD 120 from Newegg now. That technology is entering the sweet spot now, but when I built my DAW PC last September a Blu-ray drive alone was about USD 400, which placed it outside the sweet spot.
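To put a rough number on the "bend in the stick", here's a little sketch in Python. The prices and capability scores below are made up purely for illustration, and the max-distance-from-the-chord trick is just one crude way to define the knee of a curve:

```python
# Hypothetical (price, capability score) points for a drive category.
# These numbers are invented purely to illustrate the idea.
points = [(40, 1.0), (60, 2.0), (90, 3.0), (120, 3.6), (250, 3.9), (400, 4.0)]

def knee(pts):
    """Crude sweet-spot finder: the point farthest from the straight line
    joining the cheapest and most expensive entries (the 'chord')."""
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    def dist(p):
        x, y = p
        # perpendicular distance from p to the chord
        num = abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
        den = ((y1 - y0) ** 2 + (x1 - x0) ** 2) ** 0.5
        return num / den
    return max(pts, key=dist)

print(knee(points))  # with these made-up numbers: (120, 3.6)
```

With those made-up points it spits out the USD 120 entry, i.e. the point sitting right at the bend.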


 

As I've read elsewhere though, shouldn't hardware stage pianos try to get away from sampling and more into modeling? Like using technology similar to Pianoteq, which uses much less storage than the 40 GB or so for Ivory. It would be fantastic if Kurzweil were to use a modeled piano as the basis of their boards... then layer all their programming and effects on top of it.

 

 

Pianoteq has tons of potential, but in the time I spent with it I just couldn't get a good piano sound out of it (by my definition, which I'm aware isn't everybody's). It's versatile as hell and, with regard to tuning and extreme/experimental sounds, unmatched for a piano VSTi, but it seemed to be lacking in fullness, particularly in the bass range. My favorite piano sounds have all come from sample-library-based piano software.

 

I can't wait for them to perfect the modeling approach though. I just don't think it's quite there yet for piano.


Some of the people I've collaborated with lately don't seem to know their plug-ins very well. I've seen people with dozens of plug-ins, and I don't see how they really learn them all. I've been in sessions where the engineer is clearly guessing and checking with various reverb and compression plugs. Yes, I know it takes experimentation to find good results, but it suffers from diminishing returns.

 

Sometimes in a collaborative session where plug-ins are used, I find myself thinking the following:

 

"If you just bought that plug-in, can we spend less time messing with it and get something done. "

 

"did you choose that preset on purpose or just think the preset name is cool?"

 

"Wow you've just spent an hour replacing drum sounds and have not improved anything or moved closer to finishing the song. "

 

 

Now I feel like geeking out. I think there might be some confusion between emulation and virtualization. Either that or I'm confused about it. Part of this is just me rambling on. You've been warned.

 

Running multiple OSes on multi-processor platforms is now done with virtualization, not emulation. I'm under the impression that Intel is building virtualization instructions into their new processors. It doesn't make sense to emulate the x86 architecture with the x86 architecture.
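(For what it's worth, on Linux you can see whether a CPU actually exposes those hardware virtualization extensions by checking the flag list in /proc/cpuinfo. A quick sketch, assuming an x86 Linux box: vmx is Intel's VT-x, svm is AMD's equivalent.)

```python
# Check whether the CPU advertises hardware virtualization support on Linux.
# vmx = Intel VT-x, svm = AMD-V.  Assumes an x86 machine with /proc/cpuinfo.
def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("hardware virtualization:", "yes" if has_hw_virtualization() else "no")
```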

 

http://blog.1530technologies.com/2006/08/virtual_machine.html

 

DSP modeling for audio, on the other hand, has a long way to go. Think of the Focusrite Liquid Channel. I have heard voice models, where the entire throat is modeled, that are indistinguishable from a real voice. It doesn't render in real time, and I intuit that it would not benefit from multiple processors because it is nonlinear. Granted, I haven't programmed a DSP in a while, but I doubt the architecture has changed.
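A toy illustration of why that kind of nonlinear model is hard to spread across processors: put a nonlinearity inside a feedback loop and each output sample depends on the previous one, so the loop is inherently serial. (Purely a made-up example, not how the Liquid Channel or any real voice model actually works.)

```python
import math

def nonlinear_one_pole(samples, a=0.5, b=0.45):
    """Toy model: y[n] = tanh(a*x[n] + b*y[n-1]).
    Because the tanh sits inside the feedback path, sample n can't be
    computed until sample n-1 is finished, so the loop is strictly serial."""
    out = []
    prev = 0.0
    for x in samples:
        prev = math.tanh(a * x + b * prev)
        out.append(prev)
    return out

print(nonlinear_one_pole([1.0, 0.0, -1.0, 0.0]))
```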

 

The big benefit of multiprocessors is that threads run in parallel, but that creates new challenges, specifically on the bus. When you have 64 processors all working on the same memory, synchronizing it gets really hairy, or just really slow. Furthermore, I don't think we have the tools to write software effectively for multiprocessors yet, and it's going to be a long time before we do.
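Here's a crude way to feel that synchronization cost on a desktop machine: workers all fighting over one shared counter versus workers counting privately and only combining at the end. (Just a sketch with Python's multiprocessing; the exact timings will vary a lot from machine to machine.)

```python
import multiprocessing as mp
import time

N = 200_000          # increments per worker
WORKERS = 4

def shared_worker(counter, lock):
    # Every increment grabs the same lock, so the processes
    # spend most of their time waiting on each other.
    for _ in range(N):
        with lock:
            counter.value += 1

def local_worker(queue):
    # No sharing until the very end: each process counts privately.
    total = 0
    for _ in range(N):
        total += 1
    queue.put(total)

if __name__ == "__main__":
    lock = mp.Lock()
    counter = mp.Value("i", 0)
    procs = [mp.Process(target=shared_worker, args=(counter, lock)) for _ in range(WORKERS)]
    t0 = time.time()
    for p in procs: p.start()
    for p in procs: p.join()
    print("shared counter:", counter.value, "in", round(time.time() - t0, 2), "s")

    queue = mp.Queue()
    procs = [mp.Process(target=local_worker, args=(queue,)) for _ in range(WORKERS)]
    t0 = time.time()
    for p in procs: p.start()
    results = [queue.get() for _ in range(WORKERS)]
    for p in procs: p.join()
    print("local counters:", sum(results), "in", round(time.time() - t0, 2), "s")
```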

 

Multi-core processors have been around for a long time. The physics department at the university I attended had a parallel computer with a ridiculous number of processors. I can't remember the number, but I think it was 128 or possibly even 1024. I heard it was a bitch to program, and it took a long time to develop software specifically for it. It could do things x86 couldn't dream of, even though the processor speeds were an order of magnitude slower than the PCs of that generation. I'm thinking Cell processors are capable of this, but I don't know. That is a new architecture, though, and a huge paradigm shift.

 

I'm personally excited about FPGA co-processors. Since FPGAs are programmable hardware that can change, it's like the best of both worlds. I have no idea whether it's practical or not, though. We probably haven't seen it yet for a reason, but I've seen it discussed.

 

I think I'm done rambling now.


FWIW I swore off software for the past few years because working in I.T. made me want to hang myself with a mouse cord. But now that I've had a break from that hell, I'm suddenly getting jazzed on software again, and above all I'm incredibly psyched on Reason, of all things. I've been on it since version 1 and have had a love/hate thing with it forever, but I just took a little time to get a real controller setup in place, turned a cheapo LCD sideways for the rackage, moved the sequencer onto another widescreen, and now I'm seriously pumped, flying around it with super productive ease.

 

I'm making some killer sounds with Reaktor lately too. And oddly enough, haven't really touched Live a whole lot and that was my main deal since it came out. Trying something new in 2009 I guess.


 

I think there might be some confusion between emulation and virtualization. Either that or I'm confused about it.

Well, it's more getting caught up in the fine distinctions rather than the point, which is that software isn't really tied to anything, whether OS or hardware. History also offers an irony: you can actually have an easier time running older software, rather than the more difficult time many people posting here seem to expect. As stated, the problems only tend to come when software is deliberately tied to a specific piece of hardware (like a copy-protection dongle), you have no way to hook up that hardware, and there is nothing to emulate it. When you avoid this, software is going to outlive any hardware. It's clear some people don't understand this and think it only lives as long as their Dell computer or a particular OS like XP or Vista.

 

Virtualization isn't really related to anything in this thread either, other than to say that future hardware will be tied to a particular x86 OS even less, meaning it'll be easier to run different things. This was only mentioned because some people like to bring up silly examples like Dave Smith's Reality, which was coded before standards like VST were accepted. It relies on a particular OS and is currently a pain to use unless you dual-boot or use multiple computers. The point is that even these annoyances are likely to diminish with time, if you really want to use something like that. Since you can already do things like run Logic on OS X inside Windows, whilst using Sonar at the same time, and vice versa, it's sort of surprising people don't "get" how things will be in the future. The performance isn't there right now because hardware cannot be shared, and now that they've improved this on the CPU, that's exactly what companies like Intel are working on next.

 

As for things like multi-threading, audio is lucky. It's easy to make use of multiple cores, since a typical musical usage scenario sees you running multiple processes on multiple tracks (individual EQ, compression, reverb, and different instruments all running on their own channels). DAW developers are still in the process of writing decent code to handle this, though, and it becomes much more difficult when trying to balance resources within a single process. Indeed, this balancing act is partly why audio plugins can actually exhibit worse performance at lower latencies on a dual-socket 2x4-core motherboard than on a single-socket 4-core board, and supposedly Intel has improved some of this on the i7 processor. Very few instruments currently attempt to make good use of multi-threading, and some may benefit a lot more than others. However, a point will indeed come when a lot of cores are just sitting there idle, and that's precisely when people will start considering whether they should fire up an old OS and use some of those cycles for old plugins.
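A sketch of why the per-track case parallelizes so naturally: each track's processing is independent, so you can throw one track at each core, and only the final mixdown has to wait for everything. (A hypothetical gain/soft-clip "channel strip" with numpy and a process pool, nothing like a real DAW engine.)

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_track(args):
    """Stand-in for a channel strip: apply gain, then a crude tanh soft clip."""
    samples, gain = args
    return np.tanh(samples * gain)

if __name__ == "__main__":
    # Eight independent tracks of fake audio.  Each one can go to its own core
    # because no track's output depends on any other track's.
    rng = np.random.default_rng(0)
    tracks = [(rng.standard_normal(44_100).astype(np.float32), 0.8) for _ in range(8)]

    with ProcessPoolExecutor() as pool:
        processed = list(pool.map(process_track, tracks))

    mix = np.sum(processed, axis=0)  # the mixdown is the only step that waits for everyone
    print(mix.shape)
```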

 

Emulation is actually a big issue that's affecting performance right now too, and most of you will be dealing with it very soon, if not already. Windows 7 is likely to be released in 6-9 months' time and is likely to push 64-bit computing harder, since system specs are starting to move beyond 3 GB. 64-bit audio performance currently sucks because DAWs have to translate 32-bit plugins, and doing that tends to cut the expected performance in half, i.e. you can run twice as many plugins if you just boot a native 32-bit OS. It's likely to take several years before 64-bit audio makes any sense: either until there are enough 64-bit plugins, or until computers are so fast that people simply don't care that they're losing a lot of performance.

 

 

I'm personally excited about FPGA co-processors. Since FPGAs are programmable hardware that can change, it's like the best of both worlds. I have no idea whether it's practical or not, though. We probably haven't seen it yet for a reason, but I've seen it discussed.

Well, there's no doubt that the traditional CPU is in the process of changing. That's part of the reason AMD bought ATI. If you're interested in knowing more, do a search for Intel Larrabee. Although it's aimed at graphics and at Nvidia/ATI initially, it's extremely clear that it's signalling a change that's going to see graphics cards move beyond what people traditionally think a graphics card is for. Indeed, you're already seeing some of this with the MPEG-4 encoding apps Nvidia and ATI have released.


Well, it's more getting caught up in the fine distinctions rather than the point, which is that software isn't really tied to anything, whether OS or hardware.

 

 

I would recommend some healthy skepticism of Intel's direction. They sometimes make architecture decisions based on marketing. Remember the Pentium 4? Fail. Intel and Nvidia are going to have an interesting war over the next couple of years, and I predict Nvidia will win. In theory you're right about software not being tied to anything, but in practice, see "Pentium 4 fail" for one recent example of many of why theory and practice don't always live in harmony.

 

Now, getting the thread back on track.

 

The direction computers take has a huge impact on the future of the software vs. hardware debate. There are a lot of strategies for software processing, like DSP modeling, emulation, and virtualization. The complexity of this stuff is off the charts, and it takes a lot of man-hours to make it work well. It comes down to another theme of this thread: what works for you. The problem is that audio processing is a low priority for Wintel, and the effort spent on software would be better spent making hardware. :):)


 

I'll revisit softsynths when mainstream computers are low power / solid state. Those Victorian-era spinning disk things piss me off.

 

 

Heh. I'm currently in the process of (hopefully) building a dedicated VST machine with a primary solid-state drive (and a secondary spinning-disc thingy for samples). And the AMD Athlon chipset I'm using is supposedly low-power, lower than the Atom in some circumstances.

 

It should be an interesting experiment to see what it can run. (Will it only be capable of the Synth1s of the world? Or will it be able to power the mighty Omnisphere? Time will tell, especially since this gives me a good excuse to play with nLite for an ultra-lean XP install.)


 

Well, there's no doubt that the traditional CPU is in the process of changing. That's part of the reason AMD bought ATI. If you're interested in knowing more, do a search for Intel Larrabee. Although it's aimed at graphics and at Nvidia/ATI initially, it's extremely clear that it's signalling a change that's going to see graphics cards move beyond what people traditionally think a graphics card is for. Indeed, you're already seeing some of this with the MPEG-4 encoding apps Nvidia and ATI have released.

 

 

Well, take a look at what Apple has planned with its next iteration of OS X, Snow Leopard:

 

"Another powerful Snow Leopard technology, OpenCL (Open Computing Language), makes it possible for developers to efficiently tap the vast gigaflops of computing power currently locked up in the graphics processing unit (GPU). With GPUs approaching processing speeds of a trillion operations per second, they


 

One of these days I need to make a YouTube video of me playing Vaz Modular on my touchscreen notebook - hardware/software are nothing but illusory categorical models of perception.

 

 

LOL - no, the distinction is not illusory. Sometimes it can be erroneous if one isn't aware of the exact distinction - for instance, when comparing a software synth to a digital hardware synth. But the fact that a Waldorf Q is in its own hardware box, with its own knobs that you can touch, is certainly not an illusory distinction from Vaz running on a PC, whatever interface you may want to purchase to use it with. There is a difference between making a category mistake and viewing any hardware/software distinction as illusory. And I've got to tell you - my hardware modular and Vaz Modular are very much hardware vs. software in a quite real sense. No illusion there.


 

LOL - no, the distinction is not illusory. Sometimes it can be erroneous if one isn't aware of the exact distinction - for instance, when comparing a software synth to a digital hardware synth. But the fact that a Waldorf Q is in its own hardware box, with its own knobs that you can touch, is certainly not an illusory distinction from Vaz running on a PC, whatever interface you may want to purchase to use it with. There is a difference between making a category mistake and viewing any hardware/software distinction as illusory. And I've got to tell you - my hardware modular and Vaz Modular are very much hardware vs. software in a quite real sense. No illusion there.

 

 

Not to mention latency. And yes, even 11 ms is latency. A set of USB pads and a DAW is not an MPC, because when you hit a pad on an MPC you hear it at the same time that you feel it.
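For a sense of scale, the arithmetic behind numbers like that (assuming a 44.1 kHz sample rate; the speed-of-sound comparison is just a handy mental benchmark, since standing a few metres from your amp delays the sound by about the same amount):

```python
SAMPLE_RATE = 44_100        # Hz
SPEED_OF_SOUND = 343.0      # m/s, roughly, at room temperature

for buffer_size in (64, 128, 256, 512):
    latency_ms = buffer_size / SAMPLE_RATE * 1000
    metres = SPEED_OF_SOUND * latency_ms / 1000
    print(f"{buffer_size:4d}-sample buffer -> {latency_ms:4.1f} ms  (~{metres:.1f} m of air)")
```

So 11 ms is roughly what you get with a 512-sample buffer at 44.1 kHz, about the same delay as standing four metres from your amp.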


 

Not to mention latency. And yes, even 11 ms is latency. A set of USB pads and a DAW is not an MPC, because when you hit a pad on an MPC you hear it at the same time that you feel it.

 

 

Hardware has latency too. Admittedly, usually hardware latency is pretty darn low.

 

But even on something as immediate as an analog synth, there will be a tiny amount of delay from the key-scanning processing, the control signals, and whatnot.


Hardware has latency too. Admittedly, usually hardware latency is pretty darn low.


But even on something as immediate as an analog synth, there will be a tiny amount of delay from the key-scanning processing, the control signals, and whatnot.

 

 

I think I'm going to start offering mods to expensive pianos in which I change the distance between the hammers and the strings and call it "natural latency compensation". Sync your Steinway to Pro Tools. :p
