
Thunderbolt I/O


Recommended Posts

  • Members

Has anyone switched over to a Thunderbolt-connected I/O? I use my DAW for live performances and am always looking for ways to reduce latency. Running Reason 7 in 64-bit on a new MacBook Pro at 96k with a Roland OctaCapture, I'm at about 6ms.

Link to comment
Share on other sites

  • Members

I may consider it when the interfaces come down in price and the ports become standard. Last time I checked, the cost of both a computer and an interface is pretty high. I don't use my setup for any kind of live work and my interfaces have zero-latency monitoring, so latency isn't even a factor.

Link to comment
Share on other sites

  • Members

That is analog in to analog out in real time. So I'm not counting latencies in instruments before they get to the input jack, nor some plug-ins which add additional time. With what I'm doing, direct monitoring is not an option. So with some processes that I have patched into vocals, for example, I'm probably in the 10-11ms range between when I sing and when I hear it back in my IEMs. I can actually operate in this range without problems. I think people worry more about it than is really necessary ... but ... improving on it couldn't be bad either.

 

Guessing exactly how digital systems (that you didn't have a hand in designing) actually work is a bit mystifying, so I don't know where the gating in my system actually is. Thunderbolt should be many times faster than my current USB 2 hookup, but that may be insignificant in the overall scheme of things.

 

I was hoping that someone out there has made the switch and could say "with my system I knocked off some number of ms."

Link to comment
Share on other sites

  • CMS Author
With what I'm doing, direct monitoring is not an option. So with some processes that I have patched into vocals, for example, I'm probably in the 10-11ms range between when I sing and when I hear it back in my IEMs. I can actually operate in this range without problems. I think people worry more about it than is really necessary ... but ... improving on it couldn't be bad either.

 

10 ms is OK. 2-3 ms is not, because of the comb filtering created by the arrival-time difference at your eardrum between the earphone signal and the direct (through your throat) sound.
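
As a rough illustration of why (the numbers are just examples, not measurements), here's a quick Python sketch of where the comb-filter notches land for a given monitoring delay:

    # Illustrative sketch: comb-filter notch frequencies for a given delay
    # between the direct sound and the earphone signal, assuming roughly
    # equal levels on both paths.

    def notch_frequencies(delay_ms, max_hz=20000):
        """First comb-filter notches (Hz): odd multiples of 1 / (2 * delay)."""
        delay_s = delay_ms / 1000.0
        f = 1.0 / (2.0 * delay_s)          # first notch
        notches = []
        while f < max_hz:
            notches.append(round(f, 1))
            f += 1.0 / delay_s             # notches are spaced 1/delay apart
        return notches

    print(notch_frequencies(2.5)[:5])   # ~2.5 ms: [200.0, 600.0, 1000.0, 1400.0, 1800.0]
    print(notch_frequencies(10.0)[:5])  # ~10 ms:  [50.0, 150.0, 250.0, 350.0, 450.0]

At 2-3 ms the first notches sit wide apart right in the midrange, which is heard as coloration; at 10 ms they bunch up low and closely enough that the effect reads as a short echo instead.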

 

Guessing exactly how digital systems (that you didn't have a hand in designing) actually work is a bit mystifying, so I don't know where the gating in my system actually is. Thunderbolt should be many times faster than my current USB 2 hookup, but that may be insignificant in the overall scheme of things.

 

USB is plenty fast. The advantage of the faster Thunderbolt bus is that it's in essence a fatter pipeline: you can hang more devices on it without overloading it. The manufacturers like it because they can save the cost of a couple of USB connectors. Likewise, modern A/D and D/A converters are plenty fast - witness the couple of tenths of a millisecond of delay in the "zero latency" monitoring on interfaces with an internal DSP monitor mixer.

 

The majority of the latency comes from the number crunching and the operating system management that's between the input and output. The real improvement would come with an operating system dedicated to processing the audio, but it's hard to sell that.

Link to comment
Share on other sites

  • Members

 

I'm running 16 channels at the moment and a PC, so I'd have to switch to Apple, which I have no desire to do because they're overpriced for what they do, and I'd have to get larger interfaces, which are closer to the 2K range. Plus I'd have to upgrade my entire computer to even think about running Thunderbolt. You "have" to do both.

 

If you're really getting 6ms (which I'm highly skeptical of, sorry) that's as good as it can possibly get. Thunderbolt 1 will allow 10G of bandwidth, but you still have your computer bus, CPU, hard drives, and the program itself, which are the cause of the latency. Increasing port speed isn't going to make those items run any faster, and you'll still have latency processing the data, retrieving data from the hard drive, and buffering the data while the CPU does its mathematical computations.
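
A quick back-of-the-envelope sketch (idealized, assumed numbers; protocol overhead ignored) of why raw port speed is rarely the bottleneck - moving one audio buffer over either port takes a tiny fraction of the time the buffer itself represents:

    # Assumed, idealized numbers -- time to move one audio buffer over the
    # port vs. the duration of audio that buffer holds.

    BYTES_PER_SAMPLE = 3          # 24-bit audio
    CHANNELS = 16
    SAMPLE_RATE = 96_000          # Hz
    BUFFER_SAMPLES = 128

    buffer_bytes = BUFFER_SAMPLES * CHANNELS * BYTES_PER_SAMPLE
    buffer_ms = BUFFER_SAMPLES / SAMPLE_RATE * 1000

    for name, bits_per_s in [("USB 2.0 (~480 Mbit/s)", 480e6),
                             ("Thunderbolt 1 (~10 Gbit/s)", 10e9)]:
        transfer_ms = buffer_bytes * 8 / bits_per_s * 1000
        print(f"{name}: {transfer_ms:.3f} ms to move a {buffer_ms:.2f} ms buffer")

Both transfer times come out at a small fraction of the 1.33 ms buffer, so the difference in port speed barely touches round-trip latency for a modest channel count.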

 

What I think you're looking at is your buffer settings, which have nothing to do with the actual computer speed and latency. I'm not sure how you're testing your throughput speed, but I suspect it's a lot higher than you think.

 

I tried to find some comparisons online about the differences in port speeds, but the information is pretty sketchy. Thunderbolt currently requires two channels to get maximum bidirectional speed: it can receive 10G in one direction on one channel and transmit 10G in the other direction on the other channel.

 

As far as I can see, Thunderbolt 1 is already obsolete. They're working on Thunderbolt 2 already and expect to get double the speed out of the box with it. It will also be bidirectional communication over the same line, which is going to make all the hardware, including the interfaces being used now, obsolete. There are limitations, however. Their goal is to get to 100G, and the only way to get to those speeds is with fiber optics. They think they can get to half that speed using copper cables, but no more than that. This is always something you have to look at when you're looking to upgrade to the current technology: how long will it be around?

 

Again, even if the data does get from point A to point B quickly, it still gets backed up in buffers until the CPU, drive, and program have time to process that data, and all of those have their own technical issues preventing them from being improved.

 

What I'd suggest is, if you haven't moved to a solid-state drive yet, it may be the biggest bang for the buck. Hard drives store your program, your audio data, virtual instruments, and audio effects. Mechanical drives do have a maximum rotational speed, which is pretty good, but you still have the time it takes for them to spin up, and then you have the heads that need to move around to read and write the data. With a solid-state drive you don't have moving heads, and the read/write speeds are as good as they get.

 

The other item is multiple CPU cores, of course. Having an efficient quad core running 64-bit, plenty of memory, and a motherboard with the highest bus speed possible will maximize throughput and minimize latency. DAW programs have wildly different efficiencies too, as do the plugins being used.

 

If all of those items have been addressed, you "may" lose a millisecond of latency. That can be an expensive investment if you have to upgrade all of those items. I would have to do them all to get much better results than I'm already getting. I use PCI cards, which are much faster than USB, but I know they aren't the cause of my latency. I'm only running a dual-core computer as my DAW, and unless I upgrade everything I'm not going to get much lower than around 15us.

 

This little non-invasive program will tell you what your "actual" computer latency is. I suggest you run it so you know if it can be improved. The lowest possible latency you can use is the average, not the lowest peak. If your average is lower than 6us, then moving to Thunderbolt may get you closer to that mark. If, like most, your average is higher, getting a faster interface and com port isn't going to do anything for you. (Most computers have an average between 25~100us.)

 

http://www.thesycon.de/deu/latency_check.shtml

 

Link to comment
Share on other sites

  • CMS Author
I'm hoping to discover something like this comparison from Avid.

 

Better look for something that's better documented than that chart. What are they actually measuring? What's the test setup? Is there any reason not to be suspicious of a manufacturer showing that his product is many times better than "top performing" competitors?

Link to comment
Share on other sites

  • Members

That chart doesn't tell you anything. You have to test your computer latency first.

It doesn't matter if the interface or port is faster if the cause of the latency is your computer.

 

Look at it this way: a mouse, computer keyboard, monitor, and printer are all external peripherals.

 

Does a faster printer or higher quality mouse make your computer run faster?

Link to comment
Share on other sites

  • Members

First... I'm kinda surprised at the hostility I'm reading here. It's a simple question with a simple answer. I may not like the answer, but I would like to know it in any event. Mike ... I certainly am not taking this marketing chart as anything other than a reason to investigate. It seems to support that faster interfaces can reduce throughput time. I don't know the conditions of the test, but I think it's fairly safe to assume it was consistent across the devices tested. WRGKMC ... While a faster mouse wouldn't speed up the computer, it would speed up the job, which is the question I'm asking. It seems reasonable to assume that if you can shove the data in faster, then it will come out the other end faster as well. I'm just looking for the proportions here. The Avid "claims" certainly seem to support it.

Link to comment
Share on other sites

I'm currently reviewing a couple of Thunderbolt interfaces... so far, I'm pretty impressed. Latency of one is in the ~2ms range - and that is while it is utilizing its onboard DSP to print effects in near-real time. I've never seen a USB interface that could do that.

Link to comment
Share on other sites

  • CMS Author

First... I'm kinda surprised at the hostility I'm reading here. It's a simple question with a simple answer.

That's the problem - it's not a simple question with a simple answer, or at least it shouldn't be. When dealing with a system, you can speculate about how a single component (in this case, the Thunderbolt port) will affect the performance of a certain parameter, but we all have to be talking the same language in order to interpret a stated result in the same way.

 

The term "latency" when applied to a digital recording system can refer to a few different paths. Which one are you asking about?

 

They're building better interfaces every few months, and I suspect writing better drivers to go along with the hardware. But there's a lot more going on between input and output than shoveling bits into the CPU and back out again. If there's a bottleneck (and there always will be one if you push the system hard enough) you want to find out what it is. So far, for practical modern systems, it doesn't seem to be the USB port, at least for a small number of channels.

 

I'm not trying to start a fight here. I just want to know what problem Thunderbolt will solve, if any. Maybe it will allow you to push 256 channels through the port without dropping any bits while you can only do 48 with USB. But will those bits get processed any faster and sent back out to the outside world any faster? Not solely because of Thunderbolt.

 

I certainly am not taking this marketing chart as anything other than a reason to investigate. It seems to support that faster interfaces can reduce throughput time.

 

No, it seems to support that a particular device (which happens to incorporate Thunderbolt I/O) is faster than another device that doesn't. What we don't know is what's between the Thunderbolt gozinta and gozouta that might also influence the turn-around speed. I'm sure this is a fine interface, and it might give better performance in your application than a different one.

 

I won't be able to analyze this for a while yet since I don't have a computer with Thunderbolt, but I'll continue to remain skeptical - not necessarily of the performance of this product, but that the boost in performance is a result of Thunderbolt I/O and not something else in the design. For example, the new UA interfaces have a built-in DSP processor for effects, which is just like hanging another box onto the computer like their original outboard DSP product line.

 

Because of Thunderbolt's fatter pipeline, the I/O to the external DSP processor can go through the same cable as the audio without slowing the audio down. It allows them to build a multi-function box that doesn't require multiple cables to connect it. And no doubt newer parts and manufacturing processes, as well as amortization of previous software development, allow them to make a more affordable, and hence more attractive, multi-function device which may, in fact, appear to offer lower latency when processing effects in real time.

Link to comment
Share on other sites

  • Members

Well I thought the question was simple :) So let me restate it. Will using a Thunderbolt interface result in an improvement in latency from analog input to analog output as opposed to using a USB 2 interface, both handling the same input signal and conditions? I'll be attending Gearfest in 2 weeks and expect to see at least one new Thunderbolt interface there. At that point I can simply plug into my laptop and get a readout directly. I was just hoping that someone had some experience using both kinds of interfaces directly in their systems and could report about it. In my personal case I'm not concerned about an interface's ability to handle other DSP jobs, but others might be, so I'm sure that info would also be useful to know.

Link to comment
Share on other sites

  • Members

From your response it appears you entirely missed what I was attempting to explain. I'm sorry if you think there was any hostility involved. I just don't think you have a correct understanding of the digital architecture involved, and that's leading you to false assumptions. I will attempt once more to explain and then you're on your own.

 

There are two ways an interface can be set up: 1 = direct/zero-latency monitoring; 2 = processed monitoring (which is what you're using).

 

The path of data when an interface is set up for zero latency recording is like this.

 

#1: Analog signal in > input preamplifier > [splitter] > [mixer] > output preamp > analog output signal to monitors

 

In this chain, the input signal you hear does not get converted to digital. It passes through the splitter and mixer and remains analog all the way through to your speakers. The only latency is whatever the preamps or wiring might create.

 

When record is selected, the signal does get split. You only hear the analog signal going straight through, but the signal is also routed through the converter, through the CPU, and written to the hard drive. You just don't hear that binary signal.

 

Next you can play back a signal from the drive. It gets added, via the mixer, to the analog signal going straight through the interface. Again the input signal remains analog, so you can record new tracks while playing back others and your input signal still has zero latency.

 

#2 is very different. What happens when you select processed monitoring is that you split the interface in half. It essentially becomes two separate devices, like a binary transmitter and receiver. When you use it this way, running plugins live, the path becomes digitized.

 

#2: Analog in > preamp > splitter > analog-to-digital converter > interface out > computer port (USB/PCI/FireWire/Thunderbolt) > motherboard bussing > memory first-in/first-out (FIFO) buffers > CPU > software bussing and plugin processing by the CPU > motherboard bussing > memory FIFO buffers > computer port (USB/PCI/FireWire/Thunderbolt) > interface input > digital-to-analog converters > mixer > output preamps > analog monitors
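
Purely as an illustration (every number below is an assumption, not a measurement), you can think of the round trip in path #2 as a sum of per-stage delays, with the computer-side stages dominating:

    # Hypothetical per-stage delays along path #2, just to show where the
    # milliseconds tend to accumulate. None of these figures are measured.

    stages_ms = {
        "A/D converter":               0.3,
        "port transfer (USB2/TB)":     0.1,
        "input FIFO / driver buffer":  1.5,
        "DAW ASIO buffer + plugins":   3.0,
        "output FIFO / driver buffer": 1.5,
        "port transfer back":          0.1,
        "D/A converter":               0.3,
    }

    total = sum(stages_ms.values())
    for stage, ms in stages_ms.items():
        print(f"{stage:28s} {ms:5.2f} ms  ({ms / total:4.0%})")
    print(f"{'round trip':28s} {total:5.2f} ms")

Even if a faster port cut its two slices to zero, the total would barely move; the buffering and processing slices are where the time goes.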

 

This is my quick, off-the-cuff description of what goes on in path #2. As you can see, everything from the computer port through the CPU and back occurs inside the computer, not inside the interface.

 

Changing the interface may change how quickly the data gets to and from the computer bus. From there it's a matter of temporary storage and processing by the computer.

 

A faster interface does nothing to speed up how fast the CPU processes the binary data. It may cut down latency only if the communication port is the cause of your latency.

 

For example, if the packets coming through the hose are slow, the FIFO memory is used more. It waits until complete packets of data are available before forwarding them on, so the CPU is given the entire chunk/packet and performs its math on that data, not on bits and pieces in real time. Both incoming and outgoing data are packed into packets. These packets are used by the communication protocol of the interface type, and that protocol keeps tabs on the number of bits by the way the packets are stamped.
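
A simplified sketch of that FIFO idea (not any vendor's actual driver code) - packets of arbitrary size go in, but the processing stage is only handed complete, fixed-size blocks:

    # Toy FIFO: the driver side pushes whatever the port delivered; the
    # processing side only ever receives full blocks, or nothing.

    from collections import deque

    class AudioFIFO:
        def __init__(self, block_size):
            self.block_size = block_size
            self.samples = deque()

        def push_packet(self, packet):
            """Driver side: append whatever arrived over the port."""
            self.samples.extend(packet)

        def pop_block(self):
            """CPU side: one complete block, or None if not enough data yet."""
            if len(self.samples) < self.block_size:
                return None                     # wait -- block is incomplete
            return [self.samples.popleft() for _ in range(self.block_size)]

    fifo = AudioFIFO(block_size=64)
    fifo.push_packet(range(40))                 # partial packet arrives
    print(fifo.pop_block())                     # None -- CPU keeps waiting
    fifo.push_packet(range(40, 80))             # the rest arrives
    print(len(fifo.pop_block()))                # 64 -- a full block is released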

 

You can see it as an assembly line. The CPU always runs at its maximum speed, but it only takes complete packets when performing its math. This way it doesn't get hung up and frozen if bits are missing. It's the job of the communication port's drivers to make sure all the data going to and from the interface is received and transmitted complete. The computer uses the protocol to count bits and make sure the packets are complete, and if they aren't, it talks to the interface and asks to have the data resent until it does get all the data.

 

As I said, the ports may run faster and get entire packets into the buffers faster, and if so "you may" have faster real time processing.

But, as I said, USB, Firewire, and PCI are already extremely fast, especially when you're only running a single channel of information. If you were tracking multiple channels of information, it may be faster or more reliable. Most of the improvement is likely due to fewer transmission packet failures.

 

The rest of the path doesn't change at all. If you have a slow computer CPU or memory speed, or a fat plugin that requires a lot of computations, nothing may change with a faster communication port.

 

Look at it this way: your computer's network card is super fast, capable of working at rocket speeds. Does it make the internet run faster? Of course not. The internet varies in speed all the time and most of the time it's completely out of your control. The same thing happens inside your computer pushing ones and zeros around. If you have the interface set up for monitoring a processed signal, there are a bunch of factors involved.

 

 

Hope this helps explain things a little. It's not meant to be anything but informative. I only named a few of the simplest items to give you an idea of what I was talking about when I said computer latency. The entire path is a whole lot more complex than the simple steps I posted there, and it will vary depending on the type of interface and communication port you use. Most of it occurs at the speed of light, and it's difficult to slow it down and explain what goes on in detail, but at least this gives you some idea.

 

It would be really cool if I could do a Tron-type video to guide someone along the path from point A to point B. I've done similar videos in the past when I used to write electronic training videos. It's a lot easier to show what goes on than to use a thousand words to explain the same thing.

Link to comment
Share on other sites

  • Members

A faster interface does nothing to speed up how fast the CPU processes the binary data. It may cut down latency only if the communication port is the cause of your latency ...

 

As I said, the ports may run faster and get entire packets into the buffers faster, and if so "you may" have faster real time processing.

.

 

So that is the essence of my question. For any given setup the rest is fixed by other considerations and is not a part of my question. So it seems to me that the quickest way to answer is to plug it in and find out.

 

So it appears that only Phil has had any first-hand experience in this area. The new interfaces in my price range are still vaporware, but I expect to see them in the flesh in a couple of weeks. So as soon as I can get my hands on one I'll report back.

 

Link to comment
Share on other sites

  • CMS Author
Well I thought the question was simple :) So let me restate it. Will using a Thunderbolt interface result in an improvement in latency from analog input to analog output as opposed to using a USB 2 interface, both handling the same input signal and conditions?

 

Quite possibly, but it may be because of the interface hardware and firmware itself, and not because of how it's connected to the computer. As I said, these things keep getting better every generation, and a Thunderbolt interface is likely to be better than a USB interface with similar features, but only because it's using some newer technology. Some USB devices are making headway in the latency game too. It used to be that USB interfaces had a fixed buffer for the USB port, typically in the 3 to 5 ms range, that's separate from and independent of the ASIO buffer. What you see reported as "latency" by most DAW software is the ASIO buffer latency; most drivers don't report the USB buffer latency. You can reduce the ASIO buffer in size until the data won't keep up, but you usually have to live with whatever the manufacturer chose as a safe USB buffer size. But new hardware and new discoveries about audio over USB can get away with a much smaller buffer for the I/O port. Some offer an adjustment for it (usually called something like "safe mode") and at least one has taken the bold step of eliminating this buffer or making it a very small fixed value.
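
As a rough illustration of that point (the safety-buffer and converter figures below are assumptions, not measurements), the number the DAW reports can easily be half of the real round trip:

    # Illustrative only: reported ASIO latency vs. a plausible real round trip
    # once the fixed port/safety buffer and converters are counted.

    SAMPLE_RATE = 96_000
    ASIO_BUFFER_SAMPLES = 128

    asio_ms = ASIO_BUFFER_SAMPLES / SAMPLE_RATE * 1000 * 2   # input + output buffer
    usb_safety_ms = 4.0    # assumed fixed port buffer the driver doesn't report
    converters_ms = 0.5    # assumed A/D + D/A combined

    reported = ASIO_BUFFER_SAMPLES / SAMPLE_RATE * 1000
    real_round_trip = asio_ms + usb_safety_ms + converters_ms

    print(f"DAW typically reports : {reported:.2f} ms")
    print(f"Likely real round trip: {real_round_trip:.2f} ms")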

 

I'll be attending Gearfest in 2 weeks and expect to see at least one new Thunderbolt interface there. At that point I can simply plug into my laptop and get a readout directly.

You can't always use the latency time or number of samples that a DAW tells you as a measurement of the actual audio throughput speed. If you want to know what the real latency is, you can use a DAW and a couple of patch cables to make a fairly simple test setup with which you can measure the real delay.

 

Create one track in a DAW consisting of a series of pulses (you only need a few). Send the playback of that track out one of the outputs, and patch that back into an input. Make sure that your DAW's latency compensation is turned off, and record the playback of the pulse train on another track (as if you were doing an overdub). Then zoom in on the tracks and measure the time between a pulse on the original track and its corresponding recording on the new track. That's the real input/output latency. Rehearse at home with some of your own gear.
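
If you want to automate the measurement, here's a sketch: export both tracks as mono WAVs (the file names below are hypothetical) and let a cross-correlation find the offset:

    # Sketch of automating the loopback measurement. Assumes both tracks were
    # exported as mono WAV files at the same sample rate.

    import numpy as np
    from scipy.io import wavfile

    rate_a, original = wavfile.read("pulse_track.wav")
    rate_b, recorded = wavfile.read("rerecorded_track.wav")
    assert rate_a == rate_b

    n = min(len(original), len(recorded))
    a = original[:n].astype(float)
    b = recorded[:n].astype(float)

    # The lag with the highest correlation is the loopback delay.
    corr = np.correlate(b, a, mode="full")
    lag_samples = np.argmax(corr) - (n - 1)

    print(f"Round-trip latency: {lag_samples} samples "
          f"({lag_samples / rate_a * 1000:.2f} ms)")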

 

Link to comment
Share on other sites

  • Members

 

Create one track in a DAW consisting of a series of pulses (you only need a few). Send the playback of that track out one of the outputs, and patch that back into an input. Make sure that your DAW's latency compensation is turned off, and record the playback of the pulse train on another track (as if you were doing an overdub). Then zoom in on the tracks and measure the time between a pulse on the original track and its corresponding recording on the new track. That's the real input/output latency. Rehearse at home with some of your own gear.

 

 

I don't know if that trick will work for him completely because he's using plugins in real time like you would a piece of hardware. Since every plugin is going to require a different amount of time based on the computations it has to perform, I'm not sure the results will be valid. He can figure out if the interface has latency without a plugin running by comparing a dry signal to the processed signal.

 

You can do this in most programs fairly easily. You pump a mono signal into two channels, and set one track for zero-latency monitoring and the other for processed monitoring. If the mono signal in the monitors moves from center to one side when toggled from direct to processed monitoring, you have latency. The more latency there is, the more delay on that side, up to the point where it becomes a slap echo or longer.

 

In a program like Sonar, the toggling is done with buttons on the virtual mixer. In Cubase, the signal is direct with the channel slider off; when you push the slider up it gradually mixes in the processed signal.

 

In some DAW setups like mine you have two buffer settings: one in the DAW program, and one in the interface drivers.

 

In Sonar I have a buffer setting that basically has a slider labeled Fast and Safe (I believe). Upon setup of the DAW, it scans the hardware and sets this automatically for optimal performance. In my case it's 9ms. If I override its settings I don't detect any difference in latency, so if it's there it's ultra-minimal, or it's non-functional.

 

The second buffer setting deals with the ASIO drivers. It's called DMA Buffer Size. This sets the amount of system memory dedicated to digital audio buffering. Setting the buffer size too small results in clicks or pops in the audio stream as some data may be lost. Larger buffers cause more latency but prevent the pops, clicks, and crashes that might occur with smaller buffer sizes.

 

This setting has little to do with the hardware latency. In my case I can adjust it down to about 10ms for tracking a single track. Tracking involves very little CPU processing; the data is simply written to the disk and stored, and the DAW program has practically no effect on the data flow.

 

If you add a plugin to the tracking path, or you play that data back from the hard drive and insert plugins, then you need to increase the buffer size to allow the CPU time to perform its computations. The more tracks and the more plugins, the greater the buffer size needed so the CPU doesn't start skipping data and creating holes. I guess an analogy would be that it's much like an engine that misfires because the fuel mixture is too lean.

 

If the buffers are adjusted up so enough data is stored, the CPU won't run out of data to process. Having it set too high backs the data up into buffers when it isn't needed and slows the data flow down. It's safe because you don't lose data and get digital noise from missing gaps in the data, but it does slow the processing down.

 

Having large buffers set doesn't matter at all if you're recording and using zero-latency monitoring. New tracks are automatically aligned with the old, and other than the track meters moving before you hear the sound, or GUI sluggishness moving faders and knobs, the sound is unaffected.

 

Because dboomer is using his interface like a guitar modeler plugin instead of using hardware-based effects, his minimal latency setting is going to vary depending on how many plugins he uses on the front end. Because those plugins are going to be the same either way, a faster interface isn't going to lower their need for buffering. It's a software issue, not a hardware issue.

 

In my case I run a lot of tracks and high-end plugins when mixing. I set my buffers very high, close to one second. I have zero issues with digital noise or crashes this way, and it doesn't affect my tracking because I use zero-latency monitoring and do not monitor the processed signal.

 

There are some stand-alone plugins like Guitar Rig that can be run without a DAW. They are optimized for running at low latency, so you may not hear much delay using them. I'm not sure how they do it, but it seems to override the latency I have set up for the DAW program. The problem is it doesn't seem to take advantage of the larger buffers and becomes highly unstable or crashes when you add on more modules. There's little technical info on the program, so I'm not sure how and why it works the way it does, but it's of no use to me unless the buffers can be accessed from the program and optimized for the computer you're using. They seem to stick it under the "Minimal Computer Requirements" tab, and if your computer doesn't function well, you have to optimize your computer, not tweak the program.

 

Anyway, Mr. dboomer will be getting that new interface, and I'll be curious to hear how he makes out with a faster interface. If he already has a high-end quad-core computer with the memory maxed out and properly optimized, he's likely getting the lowest latency he can from the computer.

 

About all he can do after that is find the lowest-latency DAW program and plugins he can. There may be some things he can do like turn off the network card, set the video requirements down, turn off unneeded services, remove junk programs, and make sure none are running in the background that will rob the CPU's attention from the job at hand: processing audio data. Most of that improvement is minor on newer computers, but it still may be good for a millisecond or two.

Link to comment
Share on other sites

  • Members

The answer is "maybe". It's definitely not "definitely".

 

6 ms round-trip, measured at the analog end, is really very good, especially at a 96 kHz sample rate (though lowering the sample rate won't necessarily improve latency either).

 

The biggest issue is a specific vendor's implementation. With the same technology, some vendors can rock and others not so much. So, replacing a really quite good USB solution with an arbitrary Thunderbolt one is not particularly likely to improve things. But it might!

Link to comment
Share on other sites

  • Members

I don't know if that trick will work for him completely because he's using plugins in real time like you would a piece of hardware. Since every plugin is going to require a different amount of time based on the computations it has to perform, I'm not sure the results will be valid. He can figure out if the interface has latency without a plugin running by comparing a dry signal to the processed signal.

 

Actually, it'll work just fine, as long as he doesn't change the buffering parameters.

 

First, the buffering parameters ultimately define the latency that happens inside the computer, irrespective of the USB/FireWire/TBolt interface drivers. Changing plugins won't affect that, and changing from his normal setup to the test setup won't affect anything on the other side of that boundary.

 

Second, some plugins add a specific latency (for example, SIR convolution reverb). Rarely would anyone use such a plugin, live; for live use, we need a plugin that doesn't delay the output from the input. That is, as it fills buffer number N, it produces the output that correlates directly to the input from buffer N. It replaces the data, without inducing a delay. This is not a factor of CPU performance, it's algorithmic. Plugins that introduce latencies publish that info to the DAW so that the DAW can compensate. You shouldn't be using one that has a latency of anything but zero for live use. (Well, you could tolerate a small number of samples, but I don't know of an example that does this. Convolutions like SIR have latencies due to symmetrical kernels .... let's not go there.)
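
A toy illustration of that distinction (not real plugin-API code): a gain plugin can produce the output for block N from block N alone, while a lookahead plugin has to hold samples back and report that delay to the DAW:

    # Toy block processors: one with zero added latency, one with lookahead.

    import numpy as np

    def gain_plugin(block, gain=0.5):
        """Zero added latency: output for block N depends only on block N."""
        return block * gain

    class LookaheadPlugin:
        """Adds latency: it must hold back `lookahead` samples before it can
        produce output, and would report that number to the DAW."""
        def __init__(self, lookahead):
            self.latency_samples = lookahead
            self._held = np.zeros(lookahead)

        def process(self, block):
            buffered = np.concatenate([self._held, block])
            out, self._held = buffered[:len(block)], buffered[len(block):]
            return out

    block = np.arange(8, dtype=float)
    print(gain_plugin(block))            # same 8 samples, scaled -- no delay
    lp = LookaheadPlugin(lookahead=4)
    print(lp.process(block))             # first 4 samples are the held zeros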

 

Third, if the CPU can't keep up, for a plugin that's CPU-heavy, increasing latency (per se) does not help. (It can help to use bigger buffers, but only by reducing the DAW overhead, which is tiny, so this only helps when you're right at the hairy edge.) My point is, if it takes more than 1 second of CPU time to process 1 second of audio, it won't work, and adding latency won't help.
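
Put as a one-line check (illustrative numbers assumed): a plugin chain is only viable live if it processes a buffer in less time than the buffer represents, and since the work scales with block length, a bigger buffer doesn't change that ratio:

    # Real-time feasibility check with assumed, illustrative figures.

    def realtime_ok(processing_ms_per_block, block_samples, sample_rate=96_000):
        block_ms = block_samples / sample_rate * 1000
        return processing_ms_per_block < block_ms

    print(realtime_ok(0.9, 128))   # True : 0.9 ms of work per 1.33 ms block
    print(realtime_ok(1.6, 128))   # False: 1.6 ms of work per 1.33 ms block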

 

So, the test outlined above should be a good one.

Link to comment
Share on other sites

  • CMS Author

 

 

I don't know if that trick will work for him completely because he's using plugins in real time like you would a piece of hardware. Since every plugin is going to require a different amount of time based on the computations it has to perform, I'm not sure the results will be valid. He can figure out if the interface has latency without a plugin running by comparing a dry signal to the processed signal.

 

You've provided some useful information about a complete system, but the question asked was whether Thunderbolt I/O between the computer and interface would improve analog throughput. In order to answer that question by conducting an experiment as I suggested, you want to test the most basic level system you can - no plug-ins.

 

In order to really compare interface I/O speeds, it's also necessary to use the same buffer settings. Ultimately, the device with Thunderbolt I/O may be able to work glitch-free with a smaller buffer than a similar USB-connected device, and this would indeed show an improvement in analog throughput, but it's really the computer/software that's reducing the latency, not the interface itself.

 

So, the answer is still "maybe." Buy it, try it. If it isn't better or doesn't solve your problem, return it.

Link to comment
Share on other sites

  • Members
Well I thought the question was simple :) So let me restate it. Will using a Thunderbolt interface result in an improvement in latency from analog input to analog output as opposed to using a USB 2 interface, both handling the same input signal and conditions? I'll be attending Gearfest in 2 weeks and expect to see at least one new Thunderbolt interface there. At that point I can simply plug into my laptop and get a readout directly. I was just hoping that someone had some experience using both kinds of interfaces directly in their systems and could report about it. In my personal case I'm not concerned about an interface's ability to handle other DSP jobs, but others might be, so I'm sure that info would also be useful to know.

 

I'm trying to get an answer also.

 

I have an Apollo with an empty TB slot; last week UA released TB PCIe drivers and enabling firmware.

 

I found this info on the UAD forum.

 

"Well, I can't believe it but UA finally did the right thing. As promised they went deep into the firmware of Apollo and added direct PCI-E support for users who've upgraded to Thunderbolt cards. They had to completely wipe firewire functionality from the device, probably due to memory limitations with the firmware, but I'm happy to say the result is a major performance upgrade (at least based on my initial tests). There is still a small amount of additional latency compared to top PCI-E performers but in usage I really think it's negligible. Round trip latencies have been almost cut in half at all buffer settings. What's more, the Apollo seems rock solid all the way down to 32 samples (at the cost of some CPU power as expected). Somewhere around 256-512 buffer settings the CPU usage hits its minimum, so larger buffers won't yield real benefit. Output latencies are kept lower than input latencies (as was the case with the i/o on FW as well) so for virtual instrument players latency won't be a problem up to 256 samples (and even then I doubt it will affect many). Using VSTs in the DAW in real time will probably warrant 128 or below. As stated even 32 is perfectly usable on my 3.4GHz i7 quad.

 

New latencies for each buffer setting are

 

32 - 3.97ms RT - 1.16ms Output (was 8.32ms RT - 2.79ms Output)

 

64 - 5.42ms RT - 1.88ms Output (was 9.77ms RT - 3.51ms Output)

 

128 - 7.32ms RT - 3.33ms Output (was 12.68ms RT - 4.97ms Output)

 

256 - 14.13ms RT - 6.24ms Output (was 18.48ms RT - 7.87ms Output)

 

On FW (via the Thunderbolt option card) I was forced to use 128- or 256-sample buffers on large projects. Now I can use 32 and 64 instead. That means my real-world latency improvement with the new drivers drops from 12.68ms to 3.97ms, and from 18.48ms to 5.42ms on large projects (round trip). That's an incredible improvement. Really stellar stuff! I also have a feeling there may even be some more room for improvement there as well.

 

The Thunderbolt upgrade will most definitely be worth the price tag to those looking for minimal round trip through the DAW. Studios looking for a new interface with very low real-world latencies can now consider the Apollo as well. Of course, low latency performance will be limited by CPU capability (as with all interfaces), but anyone looking for extreme performance in that area will certainly have a powerful i7 quad-core CPU that is up to the task.

 

This is a serious update. I can't recall many companies taking initiative with existing products like this. I would have expected an ApolloXT or something to debut with these performance improvements. That UA made this a free upgrade is extremely noteworthy.

 

The loss of FW will be a bummer to a few, particularly those using FW drives directly connected to Apollo. You can still use the TB-FW adapter on the second Thunderbolt port to connect your drives, but you'll need them to be powered externally since the Apollo does not provide power over Thunderbolt. It's a small price to pay if you ask me. This is not an update any thunderbolt card owner should skip.

 

On another note, the new 1073 seems to be the real deal as well. I sampled it on a variety of tracks from vocals, guitars, kick/snare, etc. and I was able to dial in some pretty fantastic sounds with very little effort. For an EQ with very limited apparent flexibility it sure can do a whole lot. I have yet to try the other new 1073s (IK, Waves) at length but this one seems like a no brainer.

 

PC AAX (yay, no more whiners) and scribble strips are icing on the cake. Update of the year indeed. Next survey you get from UA take care in answering honestly. These guys are really listening and you just might get what you ask for."

 

Looks like I will get one eventually, waiting for the card to go on sale.

 

For Windows users: people are having success running Apollo with Thunderbolt motherboards, but UA doesn't officially support it.

They are awaiting official Windows TB support.

In those cases the UAD processing is using TB bandwidth but the audio driver has FW800 bandwidth limitations.

Link to comment
Share on other sites

  • CMS Author

I'm confused about the connectivity you're describing here. Where does PCIe come in? Is there a card that goes in the Thunderbolt card slot that connects to a PCIe board that you put in the computer? Gee, that's just like they used to make 'em before they started making "you don't have to take your computer apart to use it" interfaces.

 

If that's how it works, can you get FireWire back when you remove the PCIe card? Or is there a new model that has only the connector to a PCIe host card? Or is this a way to connect an Apollo directly to the UAD-2 PCIe DSP accelerator card?

 

I've looked at the UA web pages for Apollo and UAD-2 and can't see what you're talking about.

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.
