
Your own custom multisampled patches: How are YOU creating them nowadays?



  • Members

 

Hey, gang:

 

I used to make my own SoundFonts... it was easy. Just take about 10 samples recorded at 44.1 kHz (e.g., your own vocal "Oohs" and "Aaahs") and plug them in to be distributed over your keyboard's range. I used ALIVE!, which made it pretty straightforward.

 

How are YOU creating your own, personally recorded, multisampled patches these days? (Of course, it's easy to plug in one single WAV and have a softsynth stretch it over the whole key range, complete with chipmunks at the top and Darth Vader at the bottom, but I'm talking about a true multisample.) What software do you use? Is it easier now, or harder to do?
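
(Just to put rough numbers on why the multisample matters: a quick sketch, assuming a 61-key range and evenly spaced root samples; the counts are illustrative, not from any particular instrument.)

```python
# Rough sketch: how far each sample gets transposed when N root samples
# are spread evenly over a keyboard range. The 61-key range and the
# sample counts here are just illustrative assumptions.
def max_transposition_semitones(num_keys: int, num_samples: int) -> float:
    """Worst-case distance (in semitones) from any key to its nearest root sample."""
    span_per_sample = num_keys / num_samples   # keys covered by each root
    return span_per_sample / 2                 # worst case is halfway between roots

for n in (1, 10):
    print(f"{n} sample(s) over 61 keys -> worst-case stretch "
          f"~{max_transposition_semitones(61, n):.1f} semitones")
# 1 sample   -> ~30.5 semitones (hello chipmunks and Darth Vader)
# 10 samples -> ~3.1 semitones (much closer to the original timbre)
```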

 

Thanks, ras


  • Members


rasputin1963 wrote:

...I'm talking about a multisample). What software do you use? Is it easier now, or harder to do?

 

 

Easier.

Drag-and-dropped multisamples align chromatically or whole-step-wise wherever you drop them... Kontakt is handy for that. Galaxy X is also handy and has a new class of convolution processing that works well for self-made, unique sounds, e.g. for movie scoring.
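
(The layout idea itself is simple enough to sketch; the function and names below are hypothetical, not Kontakt's or Galaxy X's actual API.)

```python
# Sketch of the drag-and-drop layout idea: consecutive samples get consecutive
# root keys (chromatic) or every-other key (whole-step), starting from wherever
# you dropped them. Hypothetical names, not a real sampler API.
def lay_out_samples(sample_names, drop_key=48, step=1):
    """Return {sample_name: root_midi_key}; step=1 chromatic, step=2 whole-step."""
    return {name: drop_key + i * step for i, name in enumerate(sample_names)}

oohs = ["ooh_C3.wav", "ooh_D3.wav", "ooh_E3.wav"]
print(lay_out_samples(oohs, drop_key=48, step=2))
# {'ooh_C3.wav': 48, 'ooh_D3.wav': 50, 'ooh_E3.wav': 52}
```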

 



  • Members

Since my friend left one of his drum kits here, I don't use samples.  I have, on occasion, captured some impulse responses to use in my SIR reverb, and have sampled other stuff (like the ring of a crystal bowl while swishing water around in it, thunder crashes, plowshares and brake drums hanging from a chain, etc.).


  • Members

Speaking of SoundFonts...are you hip to the SFZ file format?
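
(For the curious: SFZ is just plain text, so a few lines of Python are enough to write a multisample map. The file names, root keys, and ranges below are made-up examples, not from this thread.)

```python
# Minimal sketch of writing a tiny SFZ multisample. SFZ is plain text, so
# this just emits one <region> per sample. Everything here is an example.
regions = [
    # (sample file, root key, low key, high key)
    ("ooh_C3.wav", 48, 43, 53),
    ("ooh_A3.wav", 57, 54, 62),
    ("ooh_F4.wav", 65, 63, 72),
]

with open("oohs.sfz", "w") as f:
    for sample, root, lo, hi in regions:
        f.write("<region>\n")
        f.write(f"sample={sample} pitch_keycenter={root} lokey={lo} hikey={hi}\n")
```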

Also, in addition to Kontakt (good choice, but it's a rocket-science kind of program), Reason's NNXT makes it pretty easy to do multisamples. E-Mu's software sampler was great too, but it seems to have faded, and I'm not sure about its future support.


MOTU's MachFive is a highly underrated sampler that's very sophisticated.


  • Members

I have a free jRhodes3 soundfont I made some years back.

I sampled every fourth white key at up to 5 different velocities, using my DAW.  To regulate velocity, I used peak level as a rough proxy for RMS level, since that's what my DAW shows.  Today I'd probably use RMS.
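
(If anyone wants to make the same check numerically, here's a rough sketch of measuring peak vs. RMS level of a take; it assumes a 16-bit mono WAV and that numpy is installed, and it's not from my original scripts.)

```python
# Rough sketch: peak vs RMS level of a recorded take, in dBFS.
# Assumes a 16-bit mono WAV; numpy required.
import wave
import numpy as np

def peak_and_rms_db(path):
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
    x = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    to_db = lambda v: 20 * np.log10(max(v, 1e-12))
    return to_db(peak), to_db(rms)

# peak_db, rms_db = peak_and_rms_db("layer3_take.wav")
# print(f"peak {peak_db:.1f} dBFS, RMS {rms_db:.1f} dBFS")
```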

Before sampling, I measured the difference between the hardest and softest "meaningful" strikes at both ends of the keyboard.  At the low end, the difference was about 30 dB, so at the bottom I recorded the velocity layers about 6 dB apart.  I adjusted the record level for each layer, targeting -3 dB peaks for the lowest notes.

The dynamic range was lower at the high end of the keyboard, so for the higher layers I let the target peak level slip as I worked up the keyboard.  I don't remember the numbers, but it's pretty simple math.
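
(The "simple math" is roughly this: divide the measured dynamic range by the number of layers to get the spacing between layer targets. The 30 dB / 5 layers / -3 dB figures are from the description above; the 18 dB high-end figure is just an assumption for illustration.)

```python
# Worked version of the layer-spacing arithmetic described above.
def layer_targets(num_layers, dynamic_range_db, loudest_target_db=-3.0):
    """Peak targets (dB) for each velocity layer, loudest first."""
    spacing = dynamic_range_db / num_layers          # ~6 dB with 30 dB / 5 layers
    return [loudest_target_db - i * spacing for i in range(num_layers)]

print(layer_targets(5, 30))   # low end:  [-3.0, -9.0, -15.0, -21.0, -27.0]
print(layer_targets(5, 18))   # high end (assumed smaller range): tighter spacing
```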

I recorded a wave file for each velocity layer.  Tone didn't vary much at the top and I wanted to keep the total size down, so only two layers (#2 and #4) went all the way up the keyboard.  (I plan to do this again and worry less about memory.)  During recording, if I didn't hit my peak dB target, I just dropped the note quickly and tried again.

For each layer file, I normalized (not much, since the low notes peaked around -3 dB) and converted to 16 bits.  I used CoolEdit96 to de-noise (it handles 16-bit only, but sounded better than anything else I tried).  I also used CoolEdit to create a stereo image effect to bake into the stereo samples: pitch-shift doubling.  The result turned out to sound somewhere between straight pitch-shift doubling and a mild chorus, which was just what I wanted even though I didn't know it beforehand. ;-)
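
(Here's a rough numpy sketch of the pitch-shift-doubling idea, not CoolEdit's algorithm: detune a copy of the mono sample by a few cents via simple resampling and put the original and the copy on opposite channels.)

```python
# Crude pitch-shift doubling for stereo width: dry mono signal on one channel,
# slightly detuned (resampled) copy on the other. Illustration only.
import numpy as np

def detune(x, cents):
    """Repitch x by 'cents' via linear-interpolation resampling (length changes slightly)."""
    ratio = 2.0 ** (cents / 1200.0)
    n_out = int(len(x) / ratio)
    positions = np.arange(n_out) * ratio
    return np.interp(positions, np.arange(len(x)), x)

def pitch_shift_double(mono, cents=8.0):
    """Return an (n, 2) stereo array: dry left, detuned copy right."""
    wet = detune(mono, cents)
    n = min(len(mono), len(wet))
    return np.column_stack([mono[:n], wet[:n]])

# stereo = pitch_shift_double(mono_sample, cents=8.0)
```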

I wrote three Python scripts to mangle the results into an SF2 file.  The first scanned each layer file, eliminated the snarks, and chopped each note into its own wave file, dumping them all in a folder.  This script did pitch detection, trimmed each sample's start and end, and named the file based on the layer name and MIDI note number.
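
(Not my original script, but a rough sketch of the same kind of pass: silence-gate the layer recording into notes, guess each note's pitch by autocorrelation, and name the chopped files by layer and MIDI note. It assumes 16-bit mono WAVs and numpy, and the thresholds are arbitrary.)

```python
# Rough sketch of a "chop the layer file into named notes" pass.
import os
import wave
import numpy as np

def read_wav(path):
    with wave.open(path, "rb") as w:
        sr = w.getframerate()
        x = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return sr, x.astype(np.float64) / 32768.0

def write_wav(path, sr, x):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes((np.clip(x, -1, 1) * 32767).astype(np.int16).tobytes())

def split_notes(x, sr, thresh=0.01, min_gap=0.25):
    """Return (start, end) sample indices of regions louder than thresh."""
    hop = int(sr * 0.01)
    loud = [np.max(np.abs(x[i:i + hop])) > thresh for i in range(0, len(x), hop)]
    spans, start = [], None
    for i, on in enumerate(loud):
        if on and start is None:
            start = i
        elif not on and start is not None:
            spans.append((start * hop, i * hop))
            start = None
    if start is not None:
        spans.append((start * hop, len(x)))
    merged = []                      # merge spans separated by < min_gap seconds
    for s, e in spans:
        if merged and s - merged[-1][1] < min_gap * sr:
            merged[-1] = (merged[-1][0], e)
        else:
            merged.append((s, e))
    return merged

def midi_note(x, sr, lo_hz=30.0, hi_hz=1200.0):
    """Crude autocorrelation pitch guess, returned as a MIDI note number."""
    seg = x[:int(sr * 0.5)] - np.mean(x[:int(sr * 0.5)])
    ac = np.correlate(seg, seg, mode="full")[len(seg) - 1:]
    lo, hi = int(sr / hi_hz), int(sr / lo_hz)
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = sr / lag
    return int(round(69 + 12 * np.log2(f0 / 440.0)))

def chop_layer(path, layer_name, out_dir="notes"):
    os.makedirs(out_dir, exist_ok=True)
    sr, x = read_wav(path)
    for s, e in split_notes(x, sr):
        note = midi_note(x[s:e], sr)
        write_wav(os.path.join(out_dir, f"{layer_name}_{note:03d}.wav"), sr, x[s:e])

# chop_layer("layer3.wav", "layer3")
```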

The second did the keymap assignment based on a simple smallest-distance algorithm, created the zones, and output a keymap.  It also produced an SF2-format file for auditioning the results.
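
(The smallest-distance idea is simple enough to sketch; again, this is the idea, not the original script, and the root notes below are just examples.)

```python
# Smallest-distance keymap: every key from 0-127 goes to whichever sampled
# root note is closest, which naturally creates the zones.
def assign_zones(root_notes):
    """Return {root: (lokey, hikey)} covering MIDI keys 0-127."""
    roots = sorted(root_notes)
    zones = {r: [r, r] for r in roots}
    for key in range(128):
        nearest = min(roots, key=lambda r: abs(r - key))
        zones[nearest][0] = min(zones[nearest][0], key)
        zones[nearest][1] = max(zones[nearest][1], key)
    return {r: tuple(z) for r, z in zones.items()}

print(assign_zones([36, 41, 46, 52, 57, 62, 67, 72]))
# e.g. root 36 covers 0-38, root 41 covers 39-43, ... root 72 covers 70-127
```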

The third took the keymap assignment and wave files and built an SF2 file.

Finally, I used either Extreme Sample Converter or CDExtract to loop the samples.  I can't remember which, but one of those sample-format conversion programs had an excellent loop editor that showed the waveforms and how they'd intersect.
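
(The same idea can be roughed out in code: slide the loop-end point around until the waveform there best matches the loop start, so the splice lines up. A crude sketch, assuming a mono numpy signal; real loop editors do much more.)

```python
# Crude loop-point finder: search near a rough loop-end guess for the offset
# whose surrounding waveform best matches the loop start.
import numpy as np

def find_loop_end(x, loop_start, approx_end, search=2000, win=512):
    """Return a refined loop-end sample index near approx_end."""
    target = x[loop_start:loop_start + win]
    best_end, best_err = approx_end, np.inf
    hi = min(approx_end + search, len(x) - win)
    for end in range(approx_end - search, hi):
        err = np.sum((x[end:end + win] - target) ** 2)
        if err < best_err:
            best_end, best_err = end, err
    return best_end

# loop_end = find_loop_end(sample, loop_start=20000, approx_end=80000)
```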

I have an idea for an application that would help automate this process.  Unfortunately, I'm not a GUI coder (at least, not since 1988), and I'd hate to spend all the time it would take to spin up on that.  If I could partner with a GUI coder and/or someone who has experience writing code to interface with audio, I'd consider jumping into this, but I didn't get any interest at kvraudio at the time.

My idea is to have a grid on the screen showing which notes and velocities you've sampled, let you play notes that it would record and assign to the grid, and let you keep plunking away until you're happy with the coverage; then you push a button and, bazingo: multisample, multilayer instrument!  Ideally, you could audition the sample set from a MIDI keyboard at any time in the process.  If I were to do this, it would be a freebie, or at worst a cheapie.  Probably a freebie, because it'd be a lot easier to start with open-source audio code, like Audacity or something.  However, I tried building Audacity from source and couldn't even get it to build.  Evidently I was missing the secret decoder ring.
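
(For what it's worth, the core data structure behind that grid could be as simple as the hypothetical sketch below; none of this is a real project.)

```python
# Hypothetical coverage grid: each incoming take is filed under
# (MIDI note, velocity band), and the grid reports what's still missing
# before you "push the button".
from collections import defaultdict

class CoverageGrid:
    def __init__(self, notes, velocity_bands=(32, 64, 96, 127)):
        self.notes = list(notes)                 # e.g. every 4th white key
        self.bands = list(velocity_bands)        # upper edge of each band
        self.takes = defaultdict(list)           # (note, band) -> [wav paths]

    def band_for(self, velocity):
        return next(b for b in self.bands if velocity <= b)

    def add_take(self, note, velocity, wav_path):
        self.takes[(note, self.band_for(velocity))].append(wav_path)

    def missing(self):
        """Cells with no take yet: the holes in the grid."""
        return [(n, b) for n in self.notes for b in self.bands
                if not self.takes[(n, b)]]

grid = CoverageGrid(notes=[36, 41, 46, 52])
grid.add_take(36, 90, "take_001.wav")
print(len(grid.missing()))   # 15 cells still empty
```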


Archived

This topic is now archived and is closed to further replies.
