
Putting together a high quality home studio



Actually, you can just group all the drum tracks and move them up or down at once with the mouse. It's pretty easy to do.

 

Yeah - I edited my post, but it's just not the same as grabbing a handful of faders. Sometimes it's just nice to have something physical in front of you as opposed to clicking a mouse. Don't get me wrong. Right now I'm completely in the box, and like it, but there are times when I mix at friends' places who have control surfaces and it can be very nice.

 

I don't usually group faders, though. Rarely, anyway. Instead I run all of my drum tracks to a stereo bus, then set up an aux track with that bus as the input. Now I have a stereo bus with the drum mix on it, and I can use that aux fader instead of moving the drum tracks with a group. That way I can strap a compressor across that aux and make some interesting sounds. ;) And then I can also set up a send from that aux to another bus, send it to another aux track, and strap on a second compressor to make some really neat stuff happen. If that's the kind of sound I'm looking for, anyway.

 

...which was a long-winded way of saying I generally prefer auxes to groups, lol.
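To make the aux-versus-group idea concrete, here is a tiny Python sketch of the routing described above: drum tracks summed to a bus, with a compressed parallel copy blended back in on the aux. The "compressor" is a bare-bones static gain curve just to show the signal flow, not a real dynamics processor, and the numbers are invented for illustration.

```python
def compress(sample, threshold=0.5, ratio=4.0):
    """Static compression curve: above the threshold, reduce by the ratio."""
    mag = abs(sample)
    if mag <= threshold:
        return sample
    squashed = threshold + (mag - threshold) / ratio
    return squashed if sample >= 0 else -squashed

def drum_aux_mix(tracks, blend=0.5):
    """Sum drum tracks to a bus, then blend a compressed parallel copy back in."""
    bus = [sum(samples) for samples in zip(*tracks)]
    parallel = [compress(s) for s in bus]
    # Dry bus plus the squashed aux; in practice you'd trim this on the aux fader.
    return [d + blend * p for d, p in zip(bus, parallel)]

kick = [0.0, 0.8, 0.1, 0.0]
snare = [0.0, 0.0, 0.6, 0.1]
mix = drum_aux_mix([kick, snare])
```

The point of the structure is that the compressed path lives on its own fader, so you can ride the blend without touching the individual drum tracks.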


Oh, yeah, I always use busses that way as well. I do the common thing: snare mics to a snare buss, toms to a toms buss, kicks to a kick buss, cymbals to a cymbals buss, then all of those to a drums buss. But sometimes, for optimal gain staging, you may want to adjust a number of drum tracks at once instead of just the final drums buss.


This would just be my choice, obviously (I'll add one to my existing setup as soon as it comes out), but if the main goal is to make your own stuff, skip ProTools and Waves and get an Xite-1 by Sonic Core, not only because it's much less expensive, but because it's much better, IMO. The routing/mixing environment is unique, the sound quality is stellar, the power is huge (roughly 10 of the big current Scope Professional cards), and along with 50 high-end effects you get the most stunning arsenal of DSP synthesizers on the market (all the Creamware stuff and their Modular), plus top-notch third-party options.

Not to mention that the Xite-1 is a 1U rack unit that can connect either to PCIe on a desktop or to an ExpressCard slot on a laptop.

 

My $0.02


The thing is, though, unless that type of product is supported by Waves, TL, McDSP, DUY, Kjaerhus, Voxengo, et al., it really is a hard sell, because you are limited to whatever plugs the vendor can create itself, or some small set of third-party ones that it can talk into supporting its platform.

 

This is the huge gotcha with all hardware-based systems, and it's really the secret to Digi's success: they have almost across-the-board support from the best plug makers. It's an effective monopoly in the same way that Windows is, because they reached the point where the plug vendors consider it in their best interest to support that platform, to the detriment of others when they are short on manpower (and they always are). Once that point is reached, it's kind of over unless the giant stumbles in some major way or some paradigm-shifting change occurs.

 

To many people, particularly when you are talking about a huge set of plugs like the large Waves bundles, the plugs carry a huge amount of weight, because that's what the person really knows and has put a lot of time into learning.



 

Well, the good thing about the Scope environment is that you can still use whatever native plugins you want in your ASIO app. Think of it as a huge hardware studio (DSPs are hardware) full of goodies: stunning synths (which you don't seem to have), stereo and surround mixers and reverbs, the ability to have hardware and software ports in a single virtual studio and patch anything to anything else, plus a great audio card. It surrounds your native app, which keeps working as usual, with the benefit of not being separated by converters.

Personally, I've stopped using native processors. I send all the tracks to separate busses and mix in Scope, in real time, together with synths and whatever outboard I want to integrate (currently a Lexicon MPX200 and a vintage analog spring reverb, which is connected in the environment to a device with filters and envelope followers where I can completely reshape its color and decay to my needs). I'm using the current PCI cards; the new Xite-1 will multiply my power substantially.

 

Let me give an example of what I'm going to do with the Xite-1 outside my studio, for live use. I will have 6-7 huge Modular synthesizers with 8-10 voices of polyphony each; a Prodyssey (Odyssey emulation), a Minimax, and a ProTone (you can guess what they emulate); a 48-track mixer with EQ and compression on every channel; high-end reverbs (Sonic Timeworks sells some huge ones for Scope); modulation FX; a 22-band vocoder with a Prophet 5 emulation as the synthesis source; and Cubase SX3 running any pre-recorded audio soundscapes and recording the whole gig in multitrack. Obviously I'll have mics and guitars plugged in, all recorded in real time too.

 

This is not going to use more than 2/3 of the Xite-1. I'll have the gig recorded in multitrack; I can record the sounds without necessarily recording the effects that are tailored for the live situation; I can also record only the MIDI data and keep the synths in real time for the mixdown as well (errors? fixed! ;) ), re-tweak everything for the mixdown, and save 2 different projects.

I can save and load different concert setups.

The mastering tools are also great, the product can be finished.

 

All this within a 1U rack unit that costs 3-4% of what you planned, plus a notebook.

 

Have a close look at it and at what it does, for what it costs compared to what you planned; you'd double the power of your studio, or more. Do yourself a favor and check it out if you can. :)

 

It's incredible how a conservative approach to this subject can make you miss a world of possibilities and enhancements. These guys at Sonic Core have the best thing around, but people don't even suspect what it's about.


It's generally not practical to do a lot of mixing of DSP-based and native plugs on a single track. The audio has to make multiple trips into and out of the DSP box, so you kind of kill your latency. That's fine for mixing, but for tracking it would not be very good. And many of us use amp sims and soft synths and drum synths, so we have to track through the DAW.



 

What happens in the DSP box has virtually zero latency (some processes may take a few samples). My way of using it is to monitor and mix everything through a DSP mixer (which also sounds much better than the Cubase one) and just "suffer" the 3 ms of latency in Cubase if I lose my sanity and decide to use a VST synth instead of a Scope one. Even though 3 ms is quite good, I still prefer the sound of the instruments and effects in Scope, at zero latency. Whatever you want to insert on the native app's tracks, you do exactly as you would with any other low-latency, high-quality sound card.

 

If you want to hear how a guitar amp emulation I'm building with the free SDK software available for Scope sounds, check this out; it's also in the sticky "Is your recording / mix any good? Find out here!" thread in this forum.

Also, the Rhodes piano (a real one) is processed with a freebie third-party Scope device called FluLiq; I got it from another Scope user as an MP3.

 

As for the quality of the synthesis and third-party development, you can get an idea of what's happening by listening to some demos of Adern's FleXor line of modular modules. I might be biased, but in my opinion there's nothing like it in the native world... or on any other DSP platform... for now ;)

 

The fact is that you can do things here that are impossible in any other system, AFAIK. You could, for example, stream some cool radio news in some strange language from the internet, patch the system wave output driver (represented as an object with outputs) directly into the PsyQ (a Vitalizer emulation for Scope), a frequency shifter, a vocoder, or whatever weirdness you like, and have it integrated in your session, patched to a mixer track like the synths or the ASIO tracks. Everything "sounding" in the machine has a physical representation in the environment and can be processed and recorded in real time. It's another world completely.

 

Sorry if I keep raving about this system, but I think SonicCore deserves much more attention for what they do. :)


 

What happens in the DSP box has virtually zero latency (some processes may take a few samples).

 

 

Unless I'm missing something, that is not possible. It's got to be on the PCI/PCIe bus, so it has at least that much latency, plus more for getting the data to and from the box, which cannot be instantaneous. Even very good sound cards on the PCI bus have roughly 2.5 ms of latency at 44.1 kHz, and FireWire-based boxes add another half a millisecond or more for their own buffered transmissions.

 

Given that the roughly 2.5 ms part is basically inherent in the system itself, I can't see how this box could do any better. If you mix VST and DSP plugs such that the DSP one is first, the data has to go to the box, come back to the host, and then go back to the box again, even if you monitor through the box. So that would get up to around 7 ms.
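The round-trip argument is just buffer arithmetic, and a short Python sketch makes it checkable. The buffer size and the "one buffer per transfer" cost are illustrative assumptions, not measured figures for any particular card:

```python
def roundtrip_latency_ms(buffer_samples, sample_rate_hz, transfers):
    """Each buffered transfer to or from the DSP box costs one buffer of samples."""
    return transfers * buffer_samples * 1000.0 / sample_rate_hz

# One transfer with a 64-sample buffer at 44.1 kHz: about 1.45 ms.
one_way = roundtrip_latency_ms(64, 44100, 1)

# DSP plug -> native plug -> DSP plug needs at least four transfers
# (box out, host in, host out, box in): about 5.8 ms, before any
# converter or driver overhead is added on top.
chained = roundtrip_latency_ms(64, 44100, 4)
```

Whatever the exact buffer size, the cost scales linearly with the number of trips, which is the core of the objection above.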



 

You are missing something. Imagine having only the Scope software running: it shows all the devices loaded on the DSPs in a nice graphical environment where they can be patched. If you connect your MIDI controller, or a guitar to be processed, the DSP synth or amp emulation or whatever other FX or instrument has virtually zero latency, the same as you'd get with any DSP hardware, a Nord Lead keyboard or a digital pedal. All the stuff that runs on the DSPs should be thought of as hardware.

 

You only get latency when playing VST instruments loaded in, for example, Cubase: 3 ms at 44.1 kHz, which gets lower at higher sample rates, as with every high-end sound card.

Obviously, when you play back the material in your ASIO host, latency is compensated and all the audio tracks are aligned with any VST instruments you recorded; this is exactly the same as with any good card and Cubase, Nuendo, or any other good host.

So when you play your DSP synth, or process your guitar or vocal to be tracked, you get virtually zero latency in response to your playing ("virtually" meaning that inherent hardware latencies are taken into account, as with any high-end hardware instrument, and MIDI has its own limits on any device). You listen to the audio tracks as they come from the host and just play along, as if it were an acoustic guitar in your studio. All the mixing happens in the Scope mixer, which has no latency at all, because it happens on the SHARC DSPs with sample-accurate real-time processing.

 

In fact, the mixdown can't be done offline: you route the mix out of the mixer to an ASIO pair (or more, if using the surround mixer) and record it back in real time to a new stereo track in Cubase. Any external gear connected to the AES/EBU, ADAT, or analog inputs of the card is effectively patched to a hardware mixer; it doesn't need to be compensated at all, because it acts directly in the DSP environment.

Usually I mix down at full bit depth in Cubase. When you have a properly finalized stereo track in Cubase and all the mastering work is done (I do that in Scope as well), you apply your favorite dither plugin and render the stereo track, properly trimmed and delimited, as your 16-bit file ready for CD.
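For readers unfamiliar with the dither step mentioned above, here is a hedged Python sketch of what a dither plugin does conceptually: quantizing a high-resolution float mix to 16-bit samples with TPDF noise (two summed uniform random values). UV22hr and POW-r, named later in the thread, are proprietary refinements of this basic idea; this toy version is only illustrative.

```python
import random

def dither_to_16bit(samples_float, seed=0):
    """Quantize floats in [-1.0, 1.0) to 16-bit ints with TPDF dither."""
    rng = random.Random(seed)
    out = []
    for x in samples_float:
        scaled = x * 32767.0
        # TPDF noise: sum of two uniform values in [-0.5, 0.5)
        noise = (rng.random() - 0.5) + (rng.random() - 0.5)
        q = int(round(scaled + noise))
        out.append(max(-32768, min(32767, q)))  # clamp to the int16 range
    return out

words = dither_to_16bit([0.0, 0.5, -1.0, 0.999])
```

The point of the noise is to decorrelate the quantization error from the signal, so quiet material fades into noise instead of distortion.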

 

I hope the picture is clearer now :)


Alfonso,

 

I'm curious about the new Scope system. It seems to me it's kind of a standalone all-DSP unit, meaning all FX, dynamics, plugs, and mixing in one box. However, a lot of people who use it talk about interfacing it with Cubase. I'm a Nuendo user with 2 RME Multiface units, but I'm trying to figure out why I'd continue to involve Nuendo if I were to spend that kind of money on Scope. Why not just send everything into the Scope mix and be done? Why bother with Nuendo at all? Or is there something I'm not getting with the Scope system that forces you to use a native host? Can Scope get inputs from another PCI buss, like, say, an RME system?

 

It seems to me it would be better suited to just get whatever outboard converters you wanted, and maybe a sync box, and lightpipe directly into the thing, rather than going through a PCI buss, then into your host, then into Scope.



 

The Scope system doesn't have a sequencer; it works with any app capable of that task that communicates with it through ASIO or system drivers. I use Cubase SX3 because I've been a Cubase user since the '80s, but for me it's fairly redundant. It would be enough to have MIDI tracks and audio tracks routed to separate ASIO busses, which is what I do: I ignore the Cubase mixer and plugins, which pretty much suck, and fortunately I don't need its summing bus, which just sounds wrong to my ears.

So you need an app to record MIDI tracks. This is still latency-free, because the "thru" function in Cubase, whose MIDI tracks are routed back to the Scope environment to play the synths, is not affected by latency. You could also hook your controller directly to the Scope MIDI input, but I didn't notice any difference, so it's more practical to have the synths controlled through the sequencer's MIDI tracks. This doesn't add latency at all; as I said, the "thru" function in Cubase and Nuendo really is just that. I don't know about other apps.

 

There is an audio tracker in Scope, though, called VDAT: a virtual ADAT recorder, but working at 16, 24, or 32 integer bits. It sounds beautiful, it doesn't use any system driver, it sends whatever you route to it straight to the hard disk, and it can also be synchronized, if you want, via the ASIO 2 sync protocol or MTC. Because it has no audio engine, and because of the very clever and safe procedure of pre-allocating a virtual tape on the hard disk before you start recording, it has pristine clarity and allows a very high count of simultaneous high-resolution tracks even on weak systems.

But it lacks the visual arranging-window operations we're all used to, and you may still want to record MIDI, so in the end you need something like Cubase. On the other hand, if you have to track a live event or a band, or you just like working the old way, it's unbeatable.

 

But Scope is an audio card. You might keep Nuendo; what you might not use is the RMEs. The thing is, Nuendo can only sync to one source's clock, and everything the RMEs do is covered by Scope. The Xite-1 has BNC connectors too, so you could have 2 PCs running together, or just switch the ASIO driver in the host if you want all the cards active on it. You can also use the Multifaces, synced via BNC and lightpiped into Scope, as additional analog inputs, with Nuendo synced to the Scope clock.

 

I started using Scope back in 1999. It was very different then, much less powerful than now, but the routing environment and the quality of the synths were already what caught my attention. I was tired of romplers and had sold everything; I didn't have any synths left, and the opportunity to have everything on one card was tempting.

Now it has become a monster of power. I'm waiting for the new one like a baby waiting for the tit. In recent years, despite extreme financial difficulties, the old Creamware company has set new reference points in VA synthesis, and some third-party companies like Adern are extremely respected throughout the DSP community. Stuff like the Minimax and the Pro12 causes serious headaches for all the native coders of vintage synth emulations. The best reverbs sold by Sonic Timeworks, the P100 and the A100, were coded by Warp69 for the Scope platform.

It's true that current multicore native processors allow for great power, and there's a great variety of plugins, but the Scope stuff is on another planet for me, and I judge by ear.

 

The Xite-1 is a dream for a Scope user. The power of 10 old Scope Professional cards, the bandwidth of PCI Express, and the portability make it the product of the year for me. Seeing how it's covered by the specialized press reminds me of an email I received from a music magazine about how great my CD on CD Baby was, asking if I could provide a couple of copies for a review. I sent the copies and was immediately asked to buy some ad space, and nothing else... the message was clear.

 

This is one of the reasons I'm trying to build some interest around it. :)


Thanks for the clarification. I actually don't use the sequencer in Nuendo; I tend to ReWire Reason or something like it when I do sequences. For the most part, my system is nothing more than a glorified tape machine. I do use the automation (volume and panning mostly), along with some Waves dynamics and EQ plugins in Nuendo.

 

The only reason I asked about using the RMEs is simply that I like them. The converters are awesome and let me multitrack at any sample rate/bit depth I want. Using the lightpipes on the Scope would limit your sample rate depending on how many tracks you were recording. Anyway, I didn't understand the "switching the ASIO" comment. I was wondering whether, with Scope, you can use the RME cards as an input and the Xite as additional input and DSP, all within the Scope platform.


Anyway I didn't understand the "switching the ASIO" comment. I was wondering with scope can you use the RME cards as an input and the Xite as additional input and DSP all within the scope platform.

 

When you load Nuendo for the first time, in the device settings you choose which audio driver to use. Say you have both a Scope card and a motherboard-embedded sound chip active: you'd select the Scope ASIO driver, or the so-called "ASIO Multimedia" driver (the standard Windows driver), or ASIO DirectX, etc. All of them are options, but if you want the vendor-specific ASIO drivers you have to select them. Having two different audio cards in a system means only one is represented as an ASIO input in Nuendo: the one whose driver you've selected in Nuendo's device settings panel.

 

If you synchronize the RME and the Scope via word clock, they can work side by side, but only one of them can be seen by Nuendo as an ASIO source or destination. At that point you can go from the RME into Scope with an external connection between them, and select the Scope ASIO driver in Nuendo.

Whichever input you select in Scope for the Multiface (say the ADAT ports, which in Scope can also work in S/MUX configuration for 96 kHz, giving 8 I/Os instead of 16; or the AES/EBU if you just need stereo; or the S/PDIF option on the ADAT ports) will be represented in Scope as another device with outputs (or inputs, if you're sending to the Multiface). You patch that to the main mixer, process it as you like, and/or send it to the ASIO destinations (Cubase inputs).

Switching the driver means that if you just need to track something into Cubase from the RME, you select the RME ASIO device (or whatever it's called) in Nuendo's settings and you'll see its I/Os directly. If later you want to mix in Scope, you select the Scope ASIO driver in Nuendo and you'll find the Scope software I/Os in your track inspectors.

Nuendo can only see one ASIO driver at a time. It can have as many I/Os as you want within the spec, but only one driver type. In Scope you have different ASIO drivers at different resolutions, optimized for different ASIO apps: integer 16, 24, and 32 bits, as well as 32-bit float for Cubase and the like, and I don't know what else will be in the new Scope 5 software coming with the Xite-1. Whichever driver is loaded and present in the Scope project will be visible and selectable from Nuendo. As you may already know, the one you select in Nuendo remains the default until you change it again; the device panel shows all the active ones available.

What you can have in Scope, though, is ASIO drivers and "Wave" drivers (corresponding to ASIO Multimedia, i.e., Windows system drivers) loaded at the same time. So you can have Nuendo or Cubase on ASIO and another app, like Sound Forge or the Penguin audio metering software, connected to the "Wave" drivers, or just the media player or internet audio, which could be routed and recorded directly into Nuendo. You can patch all the I/Os present in the Scope environment any way you like; a single output can be routed to multiple inputs, so parallel processing is the easiest thing to do.
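The one-output-to-many-inputs patching described above can be modeled as a tiny fan-out graph. This Python sketch is purely illustrative; the port names are invented and do not correspond to actual Scope driver or device names.

```python
from collections import defaultdict

class PatchBay:
    """Minimal patch-bay model: any output may feed any number of inputs."""

    def __init__(self):
        self._wires = defaultdict(list)  # output port -> list of input ports

    def patch(self, output, destination):
        """Connect one output to one more destination (fan-out is unlimited)."""
        self._wires[output].append(destination)

    def destinations(self, output):
        return list(self._wires[output])

bay = PatchBay()
# One "Wave out" (e.g. a media player) fanned out to three destinations:
bay.patch("wave_out.L", "mixer.ch1")
bay.patch("wave_out.L", "vocoder.in")
bay.patch("wave_out.L", "asio_rec.1")
```

The design point is that a connection is a list entry, not an exclusive assignment, which is exactly what makes parallel processing trivial in such an environment.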

 

All the knobs and faders in Scope can be controlled with MIDI CCs, so you can design your automation curves in MIDI tracks. All the Scope devices that receive MIDI CCs also send them, and the CCs can be freely assigned to each knob. You can have several MIDI drivers loaded, which Cubase or Nuendo sees as multiple MIDI outs, each with 16 channels, so you really have plenty of MIDI. In the Scope environment they're just objects with inputs or outputs.
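To make the CC-automation idea concrete, here is a small Python sketch mapping a 0-127 controller value to a fader gain in dB. The linear-in-dB taper over an assumed -60 dB range is a simplifying assumption; real fader tapers (including Scope's) differ.

```python
def cc_to_db(cc, floor_db=-60.0):
    """Map a MIDI CC value (0..127) to fader gain in dB; 0 closes the fader."""
    if cc <= 0:
        return float("-inf")  # fader fully down
    return floor_db * (1.0 - cc / 127.0)

unity = cc_to_db(127)  # 0.0 dB: fader at unity gain
mid = cc_to_db(64)     # roughly -29.8 dB on this simple taper
```

Drawing a CC curve in a MIDI track then amounts to drawing this function's input over time, which is why CC automation can stand in for dedicated mix automation.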

Most of the graphically patchable Scope objects show a GUI when you double-click them: synth panels, mixer panels, effect settings, the ASIO channel-count settings, and so on. GUI positions can be stored in screensets.

The hardware I/Os can be renamed if you have fixed connections to external devices, so you end up with objects called "RME analog inputs" or "Guitar input" or "Lexicon MPX200" or whatever, with the corresponding I/Os for the audio stream and any MIDI control. The cool thing is that everything becomes an object to patch in the same environment, no matter whether it comes from hardware or software.

I'll never stop grinning whenever I think about how cool all that is! :D


 

Ha, Jimbroni,

Obviously you can route your Multiface inputs to its digital outputs, right?

 

 

Oh yes. The Multi can work in what's called "disconnect mode": you load up the Multi, set the output routing assignments, set it to slave rather than master, click disconnect, and unplug the FireWire. As long as you leave the power on, the settings remain.

 

Right now I have one set as the master, and the second feeds as a slave into the ADAT inputs of the first. So I get 16 channels at 44.1, but at 88.2 I only get 12 channels: 8 analog from the master and 4 digital from the slave.

 

Which leaves me still wondering, though. It seems you can set the RME up as ASIO in Nuendo, then go into Scope and use ASIO (Nuendo) as an input. But I'm wondering: could you simply select ASIO (RME) as an input to the VDAT system you described?

 

As opposed to slaving both RMEs into the Xite, which would leave me with only 8 inputs at 88.2. Not only that, I prefer the sound of analog straight to the PCI buss rather than analog to ADAT to PCI. It seems I'd need an external clock source to do this with what are essentially 3 audio cards.
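The channel counts quoted above follow from one rule: an ADAT lightpipe carries 8 channels at base rates and, via S/MUX, 4 channels at double rates. A quick Python sanity check of the arithmetic (the rig layout of one master's analog inputs plus one slave feeding over ADAT matches the post above):

```python
def adat_channels(sample_rate_hz):
    """Channels per ADAT pipe: 8 up to 48 kHz, 4 via S/MUX at double rates."""
    return 8 if sample_rate_hz <= 48000 else 4

def rig_inputs(sample_rate_hz, analog_ins=8):
    """Analog ins on the master unit plus one ADAT pipe from the slave unit."""
    return analog_ins + adat_channels(sample_rate_hz)

base_rate = rig_inputs(44100)    # 16 channels: 8 analog + 8 ADAT
double_rate = rig_inputs(88200)  # 12 channels: 8 analog + 4 ADAT (S/MUX)
```

The same rule explains the "only 8 inputs at 88.2" figure for slaving both units over lightpipe: two S/MUX pipes of 4 channels each.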


 

....It seems you can set the RME up as ASIO in Nuendo, then go into scope and use ASIO (nuendo) as an input. But I'm wondering could you simply select ASIO (RME) as an input to the VDAT system you described.

 

 

The problem is that if you set Nuendo to use the RME ASIO driver, it can't see any other ASIO driver; that's a limitation of the ASIO protocol and of Nuendo. And since the only way for Nuendo to communicate with the Scope environment is through a driver loaded in Scope, which can only be selected in Nuendo *instead* of the RME one, that's not possible. The most logical way to use the VDAT is to feed it through the hardware inputs.

 

You can also take into account the 4 analog ins (the XITE-1 converters should be very good) and the 2 AES/EBU inputs, along with the 16 ADAT inputs, for the RMEs. Slaving Scope is not an issue; it works both ways, even if slaved from ADAT or AES/EBU, and it still has BNC I/Os.


Sounds like a great system. Sorry to derail the thread; I was just curious how this thing works. It's totally unique.

 

What kind of mixing automation does Scope have? And when you say no sequencer, does that mean no digital audio manipulation, like splitting audio and creating loops? Or is it more of a ReWire-like device that allows mixing and DSP?


 

You are missing something. Imagine having only the Scope software running: it shows all the devices loaded on the DSPs in a nice graphical environment where they can be patched. If you connect your MIDI controller, or a guitar to be processed, the DSP synth or amp emulation or whatever other FX or instrument has virtually zero latency, the same as you'd get with any DSP hardware, a Nord Lead keyboard or a digital pedal. All the stuff that runs on the DSPs should be thought of as hardware.

 

 

Well, that's a very unlikely scenario. In reality, all of us will either be recording real audio, which is stored on the computer, not in the device, and therefore has to round-trip through the device (multiple times if non-DSP plugs are used), or be using synths that don't run on the device, which is the same thing. So the zero-latency scenario is so constricted that it's not really very useful for most people.



 

Sorry, but I think a closer look at the subject would make things clearer. I know it's not easy to imagine this system without seeing it in action; it works in an area people often think of as outside the computer, just because it's outside the sequencer. But these features, typical of outboard gear (virtually zero latency, real-time processing, independence from the CPU, synthesis that sounds like real hardware, superior sound quality from dedicated DSPs built especially for audio, sample-accurate real-time operation, impossible in a native system because of how it's built), have been placed behind the converters, with the advantages typical of software: multiple instances, immediate management and storage of vast preset libraries, direct connections to software ports, and the ability to save and recall each project in a snap.

As I said, if you really have to use a VSTi, that works exactly as with any high-quality audio card; with the current Scope PCI cards the latency is 3 ms at 44.1 kHz for VSTs loaded on the CPU. I can't see where the problem is. You're just additionally provided with a fantastic hardware DSP studio of extreme quality and hard-to-imagine computational power, which responds like hardware, because it is hardware, yet is ingeniously integrated into a software environment.

But I think direct experience would be better than my words. :)


OK, here's why it's a problem. I want to track a part. I want to use some synth on this box, then the Waves RenComp, then some other thing on this box, and then the Waves IR-L reverb. So now the audio has to come from the track, go to the box, come back to the host, go back to the box, and come back to the host. Every one of those transitions adds latency, making it useless for tracking.

 

That's the huge benefit ProTools has: almost everything you need is supported in TDM format, so you don't have those transitions back and forth. There are some things you might want to use that aren't TDM, but as long as you can wait until the mixing stage to use them, it's not a problem. And if you only use CPU-based plugs first and never after a TDM plug, it's OK. But for many of us, who are doing synths, DI'd guitars with amp sims and so forth, requiring multiple trips back and forth just isn't practical from a tracking-latency standpoint.

 

I've still seen nothing that would make me believe this box is somehow so magical as to avoid the issue, unless you're willing to live within fairly significant restrictions. It would be fine if you were working purely by tracking audio in, with no need to monitor through the DAW and no plug-in processing during tracking, and then doing your mixing in a very traditional scenario. But for someone like me, having everything (including the tools I'm used to and/or want to use) be DSP-based is important, and worth the cost if that's what it takes.


  • Members

That's the huge benefit that ProTools has. Almost everything you need is supported in TDM format, so you don't have those transitions back and forth.

 

I never have to go back and forth. You have no idea what is available for Scope these days. I stopped using CPU-based stuff a long time ago; my productions are 100% Scope. The only non-Scope processes I use are UV22hr for some material and POW-r 3 for other material when I make the 16-bit file.

 

A big difference from PT is that it's not as well known and it's not a standard; that's the reason I recommended it for a personal studio. But quality-wise it gives up nothing, and some things are superior, especially the synthesis, the routing capabilities, and the flexibility, and to my ear also the vintage-warmth emulations, which are convincing and crystal clear. Warmth doesn't mean muddiness, and clarity isn't harshness. Also, I'm a modular-synthesis addict, and the Scope Modular with the FleXor stuff is widely regarded as the best DSP modular ever.

 

I'm with you on this: I need to be DSP-based, but I like Scope much more than PT. And since my work is making music and sound design for documentaries and TV, as well as my own stuff as a musician, rather than mixing conformist aspiring rock stars, the choice was forced. I need these tools more. :)


  • Members

I don't question the quality of the DSP-based tools they provide or anything. I'm just saying that tools rule. No matter how nice theirs are, I'm not going to toss my heavy investment in Waves plugs, both in money and in time spent learning them and getting comfortable with them. That's kind of what my first response to you was getting at, but we drifted a bit. The advantage PT has is that the tools so many people have and want to use are available for PT in DSP form. That's their huge advantage and why it would be difficult to dislodge them, in the same way it would be difficult to dislodge MS when all the hardware vendors are creating drivers for their platform. Anyone else has to compete against that with home-grown tools, or with the small sample of vendors they can talk into supporting their platform. It's an almost impossible barrier to break through and become truly competitive.


  • Members

Don't let fear of unfamiliarity be the judge of how you make music. He just offered an alternative that is much cheaper and would leave room in your budget for some really nice outboard gear.

 

If you've already bought a bunch of Waves TDM stuff then so be it, but if you have the Waves VST and DirectX versions, I think you'd have to replace that stuff anyway.


  • Members

I have native Waves and URS stuff now, but it would just be an upgrade to TDM. There's no way I'd really dump them, and I'm sure that's the case for most people who use them. They'll be in use in any pro studio you're likely to walk into, so the 'portability' of your experience with that plug set is very high, and therefore so is the portability of your projects.

 

I'm not trying to be a naysayer, just saying that it's really the tools that count in the end, and so this device is of less interest to me.


Archived

This topic is now archived and is closed to further replies.

