The boundaries of Synthesis?



  • Members

Trying to be serious for a second :D

 

So we have pioneered and explored many forms of synthesis:

 

_______________________

 

Additive synthesis,

Subtractive synthesis,

FM (frequency modulation),

RM (ring modulation),

AM (amplitude modulation),

Granular,

Wavetable,

PM (physical modeling),

Vector synthesis,

...etc.
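The three modulation methods on that list differ only in how the modulator is applied to the carrier. A minimal sketch in Python, one sample at a time (the sample rate, frequencies, index, and depth are arbitrary illustrative values):

```python
import math

SR = 48000              # sample rate (arbitrary for this sketch)
fc, fm = 440.0, 110.0   # carrier and modulator frequencies

def fm_sample(n, index=2.0):
    # FM: the modulator perturbs the carrier's phase
    t = n / SR
    return math.sin(2*math.pi*fc*t + index*math.sin(2*math.pi*fm*t))

def rm_sample(n):
    # Ring mod: plain multiplication -> only sum and difference frequencies
    t = n / SR
    return math.sin(2*math.pi*fc*t) * math.sin(2*math.pi*fm*t)

def am_sample(n, depth=0.5):
    # AM: like ring mod, but the modulator rides on a positive offset,
    # so the carrier itself survives in the spectrum
    t = n / SR
    return math.sin(2*math.pi*fc*t) * (1.0 + depth*math.sin(2*math.pi*fm*t))

signal = [fm_sample(n) for n in range(SR)]   # one second of FM
```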

 

So where does it go from here Gents?

 

Is there any method or form still to be discovered that is radically different, or have the boundaries of synthesis been exhausted to the point that they fold back on themselves?

Evolutionary rather than revolutionary, perhaps?

 

Do we actually need any new form of synthesis?

 

 

Discuss ..


  • Members

In my opinion Physical Modeling is still in its infancy. There is a big difference between making a calculation that produces a signal that sounds like a physical instrument (e.g. Karplus-Strong) and generating a signal using a mathematical model based on first principles.

 

At some level it may be possible to make an isomorphism between a calculation such as Karplus-Strong and a numerical method used to estimate solutions to a mathematical model involving a very simple Wave equation, much like one can make analogies between various network/node equations in electronic circuits and structures with numerical methods for solving the Laplace equation. However there is a big difference between that and developing a full set of equations for the generation of sound by a specific instrument.
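For readers who haven't seen it, the Karplus-Strong calculation mentioned above is remarkably small: a noise-filled delay line whose length sets the pitch, recirculated through a low-pass average. A minimal sketch in Python (the damping constant and two-point average are the textbook defaults, not any particular product's implementation):

```python
import random

def karplus_strong(freq, sr=48000, dur=1.0, damp=0.996):
    # Delay line seeded with noise ~ the plucked string's initial state
    n = int(sr / freq)          # delay length in samples sets the pitch
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(sr * dur)):
        # Two-point average = crude low-pass; loop gain < 1 decays the note
        new = damp * 0.5 * (line[0] + line[1])
        out.append(line.pop(0))
        line.append(new)
    return out

pluck = karplus_strong(220.0)   # one second of a decaying "pluck"
```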

 

As an aside, it is possible that a specific model for a given instrument is not required. For example, two drums with different drum head shapes may produce the same sound (spectrum). There is a very famous mathematical paper entitled "Can one hear the shape of a drum?" by Mark Kac (American Mathematical Monthly 73 (4, part 2): 1



  • Members

So where does it go from here Gents?

 

Better music? :p

 

There's still far too much linear, single-path thinking out there as well as restrictions in user interfaces. Let's solve that first. Let's get rid of all of MIDI's drawbacks and pick something better. Let's teach people that innovation is not desperately trying to copy Radiohead, Nine Inch Nails or Justice 2 years after the fact, and that music theory does not make you less creative.

 

Plus, there's still loads of work to do on the algorithm quality. Let's see how the Solaris gives the VST folks a much needed kick in the behind in terms of analog emulation, because we've still got quite a way to go.


  • Members

You forgot Sample synthesis, just for completeness' sake.

 

While it may be argued that sampling is not synthesis, the idea is that each of those methods produces a pitched signal that can then be modified by a sequence of filters and amplifiers (and further modified by all manner of stuff, depending on the synthesizer).

So, once a sound has been turned into a set of values (sampled), it is converted into a signal that can then be set to a pitch (even if the pitch is not clear to the ear, like the sound of a burp), and so forth.
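As a toy illustration of that point: once a sound is just a list of numbers, repitching it is simply reading those numbers back at a different rate. A naive linear-interpolation sketch (the function name and input values are made up for illustration; real samplers do much fancier resampling):

```python
def repitch(samples, ratio):
    """Naive resampling: ratio > 1 raises the pitch (and shortens the sound).
    Linearly interpolates between stored sample values."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

# One cycle of a crude triangle-ish shape, read back twice as fast:
octave_up = repitch([0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5], 2.0)
```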


  • Members

I'll echo the controller remarks. We've got the sounds. What we don't have are really good ways to put expressiveness into the sounds. Synthesizer sounds even today are less dynamic than acoustic instruments; I'd like to see more movement to resolve this.


  • Members

I recall an effect of low-quality MP3 encoding that enhanced some synth vowel sounds from a Korg OASIS, making them come alive. In fact I liked the effect on the whole piece (AL1 Demo), as it was less harsh and more organic. When I mentioned it to Korg on a forum, they replaced their original MP3 with a higher-grade one, which lowered the overall quality of the demo a little for my ears :lol:. A CD version I had was a little too sharp overall.

 

Food for thought in how you can use such encoding to alter sounds. Some sounds seem to suit the effect. I am sure some audio purists will not like what I have said but I really liked the effect.

 

:wave:


  • Members

Not really related here, but I personally long for more instruments with identity and personality, rather than ones that seem to meet general standards and check required features off a list.

 

Other than that, I look forward to seeing where the marriage of samples and synthesis will go.


  • Members

 

Not really related here but I personally long for more instruments with identity and personality rather than ones that seem to meet general standards and check required features off a list.

 

 

I value the above ideas and statements.

 

---

 

As Gribs mentioned, physical modeling is in its infancy, and is largely unexplored.

 

---

 

Two forms of synthesis not mentioned within the first post are:

 

 

 

 

cheers,

Ian


  • Members

I long for the build quality of a Voyager with the features of a modern digital synth like a V-Synth or an M3. I don't want a synth that's built like an instrument (Voyager) but feature-hamstrung, and I don't want a synth that has the features but looks mass-produced because it is.

 

No more plastic or square-metal-box pre-fab synths with crappy knobs all in a row!! Down with the status quo! Grab the torches and pitchforks!! We demand instruments, not lab toys!


  • Members

The emulation of acoustic instruments and the creation of new "virtual" instruments is really just beginning. The dichotomy between "sampling" and "physical modeling" is nonsense; these should not be separate approaches. Once we have powerful enough processors and adept enough programmers to process sampled waveforms in ways that model how real acoustic objects behave, we'll be able to start having some real fun.

 

I agree that there is much to be done in the realm of user interface and musical control of electronic instruments. I also expect that technology will continue to make musical expression available to untrained musicians. Just imagine coming home from a tough day at the office and conducting your favorite symphony, or improvising with a jazz quartet (even if you've never played).


  • Members

To me it seems that digital synthesis offers almost limitless possibilities to manipulate sound - yet at a certain point it all sounds same-ish...

Goes for analogue as well, don't get me wrong.

 

I guess new interfaces will be more challenging to both players and developers.

 

Then again, I am fairly content with the 5 or 6 synthesis methods I have at my disposal, and am sure that I could spend a long while just trying to exhaust their possibilities.


  • Members

I'd like to be able to model spaces, rather than emulating them with filter resonance ... to get the character of instruments and rooms.

 

It seems to me that we have lots of variety in the excitement phase of synthesis (traditional oscillators, FM, PM, additive, samples, etc.) and in the electrical/electronic processing (filters, ring mod, bit reduction, waveshaping, etc.), but not nearly so much in the way sound appears to interact with space. Imagine an ADSR controlling the size of a "violin body".

 

It may be possible with the kinds of "delay lines" that are currently in our efx systems, but we would need control systems similar to those in the synth architecture to be able to play with space.
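One crude way to experiment with that idea today: a feedback comb filter (exactly the kind of "delay line" already in our effects units) behaves like a toy resonant body, with the delay length playing the role of the body's size. A minimal fixed-size sketch in Python; in a real instrument an envelope would modulate the delay length, and all names and values here are purely illustrative:

```python
def comb_body(excitation, sr=48000, body_delay=0.005, feedback=0.7):
    # Feedback comb filter: each input sample re-circulates through a
    # short delay, producing a pitched ringing -- a crude "body" resonance.
    # body_delay (seconds) is the knob an ADSR could sweep to "resize" it.
    n = max(1, int(body_delay * sr))
    buf = [0.0] * n            # circular delay buffer
    out = []
    for i, x in enumerate(excitation):
        y = x + feedback * buf[i % n]   # input + delayed, attenuated copy
        buf[i % n] = y
        out.append(y)
    return out

# Excite the "body" with a single click (impulse):
resonated = comb_body([1.0] + [0.0] * 999)
```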


  • Members

 

Wasn't there a synth or two with a trapezoid wave? What happened to that?

I would guess a trapezoid would sound somewhat halfway between a triangle and a square wave.

 

Synths that allow you to overdrive the outputs (or filter inputs) can generate a trapezoid from a triangle wave simply through clipping.
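That clipping trick is easy to verify numerically: overdrive a triangle and hard-limit it, and flat tops appear. A sketch (the drive and limit values are arbitrary):

```python
def triangle(phase):
    # Unit-amplitude triangle wave; phase in cycles, wrapped to [0, 1)
    p = phase % 1.0
    return 4.0*p - 1.0 if p < 0.5 else 3.0 - 4.0*p

def trapezoid(phase, drive=2.0, limit=1.0):
    # Overdrive the triangle, then clip: the peaks flatten into a
    # trapezoid, just like overdriving a filter input on hardware
    x = drive * triangle(phase)
    return max(-limit, min(limit, x))

wave = [trapezoid(n / 64) for n in range(64)]   # one cycle, 64 samples
```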


  • Members

 

The emulation of acoustic instruments and creation of new "virtual" instruments is really just beginning. The dichotomy between "sampling" and "physical modeling" is nonsense, these should not be separate approaches. - Once we have powerful enough processors and adept enough programmers to process sampled waveforms in ways that model the way real acoustic objects behave we'll be able to start having some real fun.

 

 

Isn't this the approach that Spectrasonics Omnisphere is taking?


  • Members

It seems to me that we have lots of variety in the excitement phase of synthesis (traditional oscillators, fm, pm, additive, samples etc.) in the electrical/electronic processing (filters, ring mod, bit reduction, waveshaping, etc.) but not nearly so much in the way sound appears to interact with space. Imagine an ADSR controlling the size of a "violin body".


It may be possible with the kinds of "delay lines" that are currently in our efx systems, but we would need control systems similar to those in the synth architecture to be able to play with space.

 

Time to read up on FIR filters. :)


  • Members

Resynthesis gets my vote.

 

Faster processors, which allow more immense algorithms/logic, have propelled us to where we are today.

 

Which makes me think a tenfold increase in processor power will allow some unrealized method in the future (since we are solely in the digital domain these days, with respect to new synthesis).

 

The key with any synthesis technology is to implement it based on the late Bob Moog's rule:

He said something to the effect of: a large change in timbre, with relatively few controls.

(I know most of us have read the exact quote at some time - forgive me for not quoting correctly/directly.)


  • Members

time to put them in a commercially available synth.
:)

 

It's been available for quite some time, but it's marketed as reverb, and for some odd reason most people can't see past the name and use it only as such. Try googling "convolution reverb hardware"...

 

It's really just the same problem as with "physical modeling" synthesis, which is all just IIR filter systems. Manufacturers think everyone wants a more realistic piano or hall reverb, but there is really a lot of fun to be had with these things.
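For anyone who wants to poke at this without hardware: convolution itself is only a few lines. Each input sample launches a scaled copy of the impulse response, and that is all a convolution "reverb" does. This naive O(N·M) version is for illustration only; real convolution reverbs use FFT-based partitioned convolution:

```python
def convolve(signal, ir):
    # Direct convolution: sum of scaled, shifted copies of the
    # impulse response, one per input sample
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# A click through a toy 3-tap "room": direct sound plus two echoes
click = [1.0, 0.0, 0.0, 0.0]
room = [1.0, 0.5, 0.25]
wet = convolve(click, room)   # the impulse response itself, zero-padded
```

Feeding the impulse response of anything (a spring, a speaker cab, a cardboard tube) in place of `room` is where the fun starts.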


Archived

This topic is now archived and is closed to further replies.
