
TOO GOOD TO BE TRUE?


techristian


"To illustrate how AudioImmersion works, let us picture a room with a drummer and guitarist simultaneously playing. The device has been placed between them and is recording. When they stop, thanks to Zylia’s prototype, they can listen to what they’ve played together. Or to the drums on their own. Or to the guitar alone. How is this possible? The device uploads the audio data to a cloud, where it can be processed to obtain the separate tracks. The service for this processing comes with AudioImmersion, and that’s why the magic ball is described by its creators as a live audio recording system. Zylia assures that its prototype, which contains not one but a whole array of microphones, makes studio-quality recordings."

 

I'm interested in the processing that separates out tracks. If that could be applied to conventional stereo recordings, it would be possible to clean up tracks on older recordings really easily...

 

I had heard about this at AES but didn't get a chance to attend any demos.


Obviously, it would seem to have more than a bit in common with Celemony's 'multi-voice' pitch correction/mod technology (which can be used to change the pitch of individual notes of a chord in an audio file, for instance).

 

It's provocative that it has to upload the signal to a mainframe (oops, I mean the cloud) for processing. Obvious revenue stream/economic barrier (depending on your perspective) there.

 

My response to this is much the same as my response to the Celemony tech: I'll believe it when I hear it. With Celemony, that tech (using just local processing) was surprisingly effective.

 

Maybe this will be, too. It is the 'age of tech miracles'... but in the age of miracles, it's especially important to keep your eyes wide open.


My concern? How do you optimize mic placement and the source-to-room-ambience ratio for the individual instruments? Even if it can cleanly extract the audio for each separate sound source in the room (I'm assuming it's done with Celemony-type processing applied to a multi-capsule directional mic array, with each dimple being a transducer), the physical location of the array will still influence the amount of room reflections and ambience that's picked up.

 

I'd want to learn a lot more about this before coming to any conclusions, but my initial thought is that it would have a hard time matching the flexibility of multiple individual microphones.


There were a couple of interesting programs at AES that could isolate vocals and mix drums up or down, but nothing with complete separation. OTOH, there are some "de-reverbing" algorithms. If you combined enough of these, you might be able to isolate everything, but as we all know, the more you mess with something, the greater the chance of artifacts.
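We don't know what algorithms those AES programs actually use, but frequency-domain masking is the common family such isolation tools belong to. Here's a deliberately idealized toy sketch (the 300 Hz/1200 Hz "instruments", the 700 Hz cutoff, and all variable names are illustrative assumptions, not anything Zylia or the AES demos have published): when two sources occupy different frequency bins, a simple mask pulls one out cleanly.

```python
import numpy as np

# Two "instruments" that conveniently occupy different frequency ranges
fs = 8000
t = np.arange(2000) / fs
low_source = np.sin(2 * np.pi * 300 * t)          # stand-in for drums
high_source = 0.8 * np.sin(2 * np.pi * 1200 * t)  # stand-in for guitar
mix = low_source + high_source

# "Separate" by masking frequency bins: keep everything below 700 Hz
spec = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), d=1 / fs)
low_est = np.fft.irfft(spec * (freqs < 700.0), n=len(mix))

# Residual error of the recovered low source
err = float(np.sqrt(np.mean((low_est - low_source) ** 2)))
```

The catch, and the source of the artifacts mentioned above: real instruments overlap heavily in time and frequency, so no mask can assign shared bins cleanly, and every extra processing stage chained on top compounds the damage.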


If this is just a post-production algorithm, why do we need the ball at all?

 

Dan

 

Again, I'm assuming each dimple in the ball is a transducer and that it's a directional array; it's not just software, but a combination of hardware recording and software decoding. At least that's my semi-educated guess.
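If that guess is right, the classic way a multi-transducer array separates directions is delay-and-sum beamforming: delay each capsule's signal so that sound from one chosen direction lines up, then average. Here's a minimal far-field sketch under that assumption (two mics, a plane-wave tone; none of this reflects Zylia's actual design, and `delay_and_sum` is a hypothetical helper name):

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_dir, fs, c=343.0):
    """Delay each mic signal to align arrivals from look_dir, then average.

    Far-field assumption: the source is distant enough that its wavefront
    is planar across the array. look_dir is a unit vector toward the source.
    """
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    delays = mic_positions @ look_dir / c  # per-mic alignment delay, seconds
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # fractional-sample delay applied as a linear phase ramp
        out += np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * tau), n=n)
    return out / len(signals)

# Toy scene: a 500 Hz tone arriving from the +x direction onto two mics
fs, n, f0 = 8000, 1024, 500.0
t = np.arange(n) / fs
mics = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])  # 0.5 m apart
d = 0.5 / 343.0                                       # extra travel time
signals = np.vstack([np.sin(2 * np.pi * f0 * t),
                     np.sin(2 * np.pi * f0 * (t + d))])  # mic 2 hears it earlier

on_axis = delay_and_sum(signals, mics, np.array([1.0, 0.0, 0.0]), fs)
off_axis = delay_and_sum(signals, mics, np.array([-1.0, 0.0, 0.0]), fs)
```

Steered toward the source, the two signals add coherently; steered away, they partially cancel. That spatial selectivity is presumably the hardware half of the story, with the cloud software doing the finer separation on top.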

