Harmony Central Forums
how to make an instrument not sound as good?

  • #1

    I've just been reading something from a book about mixing. It said one trick to mixing is that some instruments don't have to sound as good as others. So how do you make an instrument sound not as good? Obviously the performance has to be there. I'm just not sure what's meant by not making some instruments sound good. Do I use bad-sounding effects while tracking, or can making an instrument sound "bad" be done during mixdown? I just ordered the book I'm talking about and have only read a few teasers from it.

  • #2
    Recording is a long chain of events. If you want a crappy sound, it's simple: use a crappy mic or dial up a crappy sound while tracking and you'll get the results you want.

    If things have gotten to the mixing stage, it's a matter of dealing with your lowest common denominator.
    I find drums recorded live are the biggest challenge to get sounding as good as the rest of the instruments. I often have to remove frequency response from the other instruments, dumb them down so to speak, to match the quality of the drums and get a balance. A bass guitar recorded direct, for example, is going to have to match the kick drum. If the kick was recorded with a vocal mic and lacks frequencies below 100 Hz, then I'll likely have to remove some low frequency from the bass to make the two push together.

    Guitars are another item that may need frequency limiting. If the snare is lacking some 3~5 kHz because it wasn't tuned, miked or played properly, it may not cut through well, so I may need to notch my guitars' mid frequencies to leave some space in the overall frequency response of the mix for the snare to cut through.

    This is mainly an issue of masking. You can't have two instruments reside in the same frequency bands and expect them to be heard clearly in a mono field. You can have them stereo panned and hear them, but it's better to have them separated in a mono field through frequency banding first so there's little or no clashing/masking at all.

    You can of course use tools to degrade tracks purposely as an effect. One simple one is to remove the highs and lows from the vocals and leave the midrange frequencies for a megaphone/telephone sound to the voice. Add some extra drive and even an echo/reverb or chorus and you've made the voice sound small.
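    In a DAW, the telephone-voice trick above is just a band-pass filter. Here's a minimal Python/numpy/scipy sketch (the post doesn't mention any specific tool; the 300 Hz~3.4 kHz band is my own illustrative choice, roughly classic telephone bandwidth):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def telephone_voice(x, sr=44100, lo=300.0, hi=3400.0):
    """Keep only the midrange, like the megaphone/telephone trick above.
    The 300-3400 Hz band is an illustrative choice, not from the post."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, x)

sr = 44100
t = np.arange(sr) / sr
low_out = telephone_voice(np.sin(2 * np.pi * 100 * t), sr)   # tone below the band
mid_out = telephone_voice(np.sin(2 * np.pi * 1000 * t), sr)  # tone inside the band

# compare steady-state peaks (skipping the filter's start-up transient)
print(np.abs(low_out[sr // 4:]).max())  # tiny: the lows are gone
print(np.abs(mid_out[sr // 4:]).max())  # near 1.0: the mids pass through
```

    Run a vocal through it instead of test tones and you get the "small voice" effect described above; the drive and reverb would be separate stages after this.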

    Good examples of this can be found on TV. An old sci-fi movie comes to mind, The Incredible Shrinking Man, where the guy keeps shrinking in size. As he does, the audio engineers remove bass from the guy's voice and add more reverb to make him sound smaller and smaller. You, the viewer, and the mic position (your ears) remain the same distance away, but the chest cavity of the little man gets smaller and smaller, so less bass is projected. The guy's voice winds up with a telephone frequency response, and the reverb of that basement he gets trapped in sounds like the Grand Canyon.

    There are other ways of degrading tracks too. Drive on guitar works just the opposite of what guitarists think it does. As you add more drive to the point where all the notes are clipped, the sound is compressed and ceases to have dynamics. This loss of dynamics sucks the emotion out of the track and gives you a monotone voice that can't compete with a drummer's dynamics. A drummer can hit a drum harder; the guitarist is already hitting the ceiling. If you want big-sounding guitars, back off on the drive and leave more dynamics available to the player. My rule of thumb, which works best when I dial up my guitars, is this: when I pick notes or strum the guitar lightly, the sound is completely clean; when I strum or pick the strings as hard as I can, they peak at maximum drive. When I pick or strum someplace in between, I get varying amounts of drive.
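    The "drive kills dynamics" point is easy to verify numerically. A sketch using tanh soft clipping as a generic stand-in for amp drive (the gain values are arbitrary, not from the post):

```python
import numpy as np

def drive(x, gain):
    # tanh soft clipping: a generic stand-in for amp/pedal drive
    return np.tanh(gain * x)

t = np.arange(44100) / 44100
soft_pick = 0.2 * np.sin(2 * np.pi * 220 * t)  # playing lightly
hard_pick = 1.0 * np.sin(2 * np.pi * 220 * t)  # digging in, 5x the level

for g in (1.0, 10.0):
    ratio = np.abs(drive(hard_pick, g)).max() / np.abs(drive(soft_pick, g)).max()
    print(f"drive gain {g}: loud-to-soft output ratio = {ratio:.2f}")
```

    With light drive, digging in still comes out nearly 4x louder than a soft pick; with heavy drive the ratio collapses toward 1, which is exactly the loss of player dynamics described above.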

    This allows both dynamics and tonal harmonics to be present. I have maximum control over the drive added to notes through my playing, and if I want notes to come through I simply dig in harder when I play. When I back off, the notes get cleaner and bigger. Later, when I mix and add limiters to the master mix, the softer notes, which are clean, cut through the mix cleanly; when I dig in, they drop back and add to the backwash. Reverb is another key item. It too can be EQed for effect. Depth is for pushing a part back in the mix as a 3D thing. Most people get the height and width of a mix down fairly quickly, the height being the frequency response between 20 Hz~20 kHz. They may also get the width reasonable with stereo panning.

    Height and width alone = a two-dimensional flat image. That's where they get stumped and wonder why their mixes sound so flat. They haven't used their mind's eye to mimic depth in the music. If you go out to see a live band and stand about 25' from the stage, the loudest and clearest item will usually be the vocals coming from the PA cabs. The drums will likely be the furthest away, so their sound will be more reflected and smaller sounding. Guitar and bass speakers project a long distance, but bass should sound clear and guitars a bit of a wash, depending on how much drive they're using. Move farther from the stage and there's more room reflection hitting your ears; the bass tones will diminish and mostly reflected midrange tones will remain.

    Move closer to the stage, and you feel more direct tones hitting your chest. You hear and feel the drums being hit, your chest vibrates more from the bass, and you feel the kick of the guitar chords. The vocals also diminish to some extent. You hear more bassy/muffled vocals from the sides of the PA cabs, and you hear the reflected vocals off the back wall coming back at you with most of the bass removed as well as a larger delay.

    This is the three-dimensionality of live music I'm describing that needs to be mixed into the music to make it sound real and electrifying. A stage musician is not going to be the best one to make decisions on a mix. What they hear on a stage is not what the audience hears. He will naturally want to make the drums sound too big in a mix because he stands on a stage right next to them when he plays out.

    Vocals won't be loud enough, and his own amp, which he focuses on listening to, will have too much kick and not enough direct sound. A stage musician hears the mix very differently from an audience, and even if he has good monitors happening, mixing what he hears is not going to make for a good recording. His conditioning and his depth perception won't be very good for mixing.

    It takes a long time to retrain yourself to mix well. The best advice I can give someone is this: when you're mixing a part, imagine yourself playing that instrument, and if some other part is distracting you from concentrating on that instrument as a player, then you have to balance what you're hearing to prevent masking. If you imagine yourself playing the drums and you can't hear the kick because the bass is booming over it, then you either have to boost the kick or cut the bass volume or frequencies till they are equal. If you're a guitarist, you have to play second fiddle to the vocals, allow the snare to crack through, and stay out of the bass player's turf.


    It's all a matter of perspective in the mind's eye. You will have to battle your own bias of wanting to make your own parts sound good, but the overall mix comes first. By degrading a part that would or should normally sound good in a mix, you draw attention to that flaw and draw the listener's attention away from the entire mix, which may sound clean in comparison. It's just one way of skinning the cat, but the real trick is contrast. The degraded part isn't going to sound very good if there aren't other clean parts to support it. It's the contrast between the two that makes it interesting.

    One little phrase I have on my studio wall as a reminder that sums it up goes like this.


    As Sun Ra once said:
    "Space is the Place"
    The Less Music You Play
    The More Weight Each Note Carries
    This Creates More Spaciousness
    In the Overall Sound

    It's the contrast between maximum silence and noise that makes a good mix.
    If there's no silence between the notes, you have no spaciousness to the music.

    Good music has good rhythm, and good rhythm is not only what's played but what is not played. Rests are silent notes, the notes the listeners hear that don't really exist. The Buddhists call it the one hand clapping. When the imaginary meets the actual in a musical score, the listener is allowed to see into the music and it becomes just as much theirs as it is yours. Having strategic rests on beats they would normally expect to be there makes them wonder why it wasn't there, and they listen more intensely to see if it was a mistake or intentional. If their internal clock is as accurate as yours, that silence is louder than the music.

    Again, silence can be just as potent and deafening as maximum note velocity. Keep that in mind when you mix. Without silence there are no dynamics and therefore no emotion.
    Last edited by WRGKMC; 05-15-2014, 08:06 AM.



    • #3
      This has to do with applying EQ to certain instruments so they don't conflict with other instruments in the mix. For example, electric guitar: your first instinct is probably to solo the guitar and make it sound huge and full, and that is often a mistake. EQ'ing out some of the low end (making it sound thinner) provides some room for the bass in the mix, which will make the whole mix sound tighter and cleaner: less boomy and more controlled.
      jnorman
      sunridge studios
      salem, oregon



      • samal50 commented:

        This is starting to make sense a bit to me. What if the intro is just a guitar riff meant to sound huge and full before the whole band kicks in (drums and bass and vocals)? Would this technique you mentioned apply? Don't dynamics only apply if the full band is playing? If the intro is supposed to be a huge guitar riff for 10 seconds, how do you make it huge: by using 2 tracks and then dropping to 1 track as the whole band kicks in? Or is changing up the EQs the best move?

        Usually the guitar effect presets sound good solo, but when recorded along with other instruments (bass, drums, etc.) it's not as good as it was solo. This is probably one of the mistakes I've made. Just because an instrument sounds good solo doesn't mean it will sound as good when the whole band recording is played back. So the adjustments to make are with EQ?

        During the recording process, should a full band sound/play the way any band would sound while "jamming"? And does the whole "mind's eye" thing come into play during mixdown, or while recording as well?


      • WRGKMC commented:

        That post was from a couple of years ago, but the info is still valid. You have several questions there that mainly require skill in mixing.

        The key item to learn to use well is mastering a mix and possibly using volume envelopes, or volume automation. When you have solo parts like that, you don't usually make a big change in EQ. The part fits into the mix when it's softer, and you simply pump the volume so it's louder when it's solo.

        The human ear expects the guitar to sound like a guitar, so it's still going to have frequencies in the midrange. Since a guitar produces frequencies between around 160 Hz~5 kHz, it makes no sense to boost the high and low frequencies to expand its range.

        You should get a boost in presence when you boost the volume. Most hi-fi speakers give a boost in presence when the music gets louder. If you're trying to mix on something like headphones, this boost in presence may not be heard. Using good studio monitors, it should be a natural occurrence.

        There are several ways to do the boost. You can use a MIDI controller and manually boost the volume with a slider (or just use your mouse on the DAW's mixer) and record the movement of the changes using DAW automation.

        You can boost the entire guitar part and envelope the volume level down during verses and chords, or you can select the solo part and normalize it up using your editing tools.

        Even if the part is low in volume, a limiter or compressor used on the main bus will reduce the loud items and boost the low ones. When the guitar goes solo, it's brought up to the same level as your other material. You often hear this happen on live rock recordings, where a solo singer or guitar is loud through a passage and, when the bass and drums kick in, the overall volume of the guitar clamps down.

        Much of this is live audio 101 stuff that's simply used in recording and done with different tools. None of it is super hard to do if you know how to use your tools. If an instrument fits in a mix right, it shouldn't sound horrible solo, unless it was just tracked badly to begin with.
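        The envelope idea above (part pulled back in the verse, full level for the solo) is just a gain curve multiplied against the track. A rough numpy sketch; the 6 dB drop, the section lengths and the 50 ms ramp are invented for the example:

```python
import numpy as np

sr = 44100
rng = np.random.default_rng(0)
guitar = rng.standard_normal(4 * sr) * 0.1   # noise as a stand-in for a recorded track

# Hypothetical arrangement: 2 s verse (pulled back), then 2 s solo (full level),
# with a short ramp so the change isn't an audible step.
env = np.ones(guitar.size)
env[:2 * sr] = 0.5                            # verse: about 6 dB down
ramp_len = int(0.05 * sr)
env[2 * sr:2 * sr + ramp_len] = np.linspace(0.5, 1.0, ramp_len)

automated = guitar * env
verse_rms = np.sqrt(np.mean(automated[:2 * sr] ** 2))
solo_rms = np.sqrt(np.mean(automated[2 * sr:] ** 2))
print(verse_rms, solo_rms)  # the solo sits about 2x (6 dB) above the verse
```

        Recording fader moves as DAW automation produces exactly this kind of curve; the array version just makes the arithmetic visible.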


      • samal50 commented:

        Been on hiatus. So pretty much just raise the faders if I want a part to sound loud, especially when it's solo, then drop the fader back down when the whole band plays on? This wouldn't sound awkward, would it?

        The band Live has a song called "Lightning Crashes"; if you listen to it closely, you can hear in the middle of the song that the faders were clearly lowered and then raised up again for another big chorus. It got my attention, but surprisingly it wasn't awkward at all once I got used to it.

        So the best way to mix is to listen to the whole song, then tweak the EQs or other effects (reverb, etc.) of each instrument/part/vocal toward the ideal sound? If one instrument (let's say a guitar) sounds buried by another guitar track, re-tweak it to distinguish it from the other guitar track?

        In the past I simply used an EQ chart of what is supposed to be "ideal" for guitars, bass, vocals, etc., then tweaked the EQs of each instrument without listening to how the whole song sounds. The instruments sound great solo, but not so much when the whole song is played. I know you said something about trusting your ears and not the visual charts.


    • #4
      Send it to me. I can make it sound 'not as good'.



      No problem
      flip the phase



      • #5
        so don't rely on preset guitar effects then? lol.









        Quote Originally Posted by jnorman






        • #6
          The thing is, when I track I try to make everything sound "good", but now I'm reading that this is a bad idea. Very confusing. In pop music I've observed that the vocals are the main thing and the music seems to be just "background".









          Quote Originally Posted by gubu






          • #7
            ^^^ Instruments properly mixed and then soloed don't usually sound as good as instruments mixed to sound good solo. That's because each instrument in a mix occupies only a section of the entire hearing frequency bandwidth.

            Think of the frequency bands as colors. The low frequencies are the reds, and the high frequencies are violet. Infrared is your subsonics, and ultraviolet is above your hearing range, where only dogs can hear.

            Red to orange would be your low bass and kick.
            Orange to yellow will be your guitars' lowest notes and drum toms.
            Green: upper guitars, mid vocals and snare.
            Blue will be your upper vocal presence, hi-hat and ride.
            Violet: your crash cymbals, strings, etc.

            All the colors together give you white light. In a recording, that gives you a balanced mix.

            If you remove one instrument from the mix and solo it, you may only have one color of the rainbow, and maybe a little bleedover of the two colors alongside it. If guitars are yellow, that means you have a little orange and a little green to either side. If the guitar is only yellow, with no orange or green, it means there's little masking going to occur over instruments in the lower or upper frequency ranges.

            Solo, the guitar is going to sound narrow in frequency. If you have fewer instruments in the mix, you can widen each instrument's frequency response and still not have masking. More instruments need narrower bands to fit them into the 20 Hz~20 kHz hearing range.

            If you improperly mix two instruments into the same frequency range, the two will fight to be heard. Volume won't fix this: turn one up and the other disappears. If two pictures are both yellow and you overlay them, you can't make the two out from each other; it winds up being a distorted mess.

            Same thing for sound. You can't get a speaker cone to reproduce two separate instruments if the cone is vibrating in the same frequency range. It can only reproduce one or the other clearly. Both together create a confused mess.



            You can widen most instruments' responses so they sound good solo. A guitar, like I said, will produce tones from 100 Hz~6 kHz or more, which is fairly wide ranged. You can do this on solo passages with editing tools if need be, but when the other instruments come back in, that part has to be narrowed up again, or at least dropped in volume, to prevent masking.

            If you have one guitar yellow and one blue, when the two play together the overlap creates shades of green. This is a cool effect and is what you want to achieve between parts, depending on the mix.

            The big trick is developing the mind's eye, and along with it your ear, to achieve the proper separation between instruments for a good mix. Viewing frequency charts can help. Here are a few examples.



            http://www.independentrecording.net/...in_display.htm

            http://www.head-fi.org/a/frequency-r...-of-headphones

            http://obiaudio.com/2010/07/11/eq-chart/

            http://www.rfcafe.com/references/ele...nics-world.htm

            http://www.dak.com/reviews/tutorial_frequencies.cfm



            You can't paint by numbers to get a good mix, of course. Your ears must be the overriding factor in achieving a good mix, but a frequency analyzer can help a beginner get his bearings. If you download a free frequency analyzer like Voxengo Span (http://www.voxengo.com/product/span/) and stick it in the main effects bus of the DAW, you can solo instruments, compare them to the charts, and see what ranges the tracks actually produce.
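            If you'd rather script it than load a plugin, the same check an analyzer gives you, which bands a track actually occupies, is a few lines of numpy. A sketch with a synthetic "guitar" built from two sine partials (the frequencies are invented for the demo):

```python
import numpy as np

def band_energy(x, sr, lo, hi):
    """Fraction of total signal energy between lo and hi Hz."""
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / sr)
    band = (freqs >= lo) & (freqs < hi)
    return power[band].sum() / power.sum()

sr = 44100
t = np.arange(sr) / sr
# fake guitar: 200 Hz fundamental plus a quieter 2 kHz overtone
track = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

low_frac = band_energy(track, sr, 100, 400)
high_frac = band_energy(track, sr, 5000, 20000)
print(low_frac)   # most of the energy sits around the fundamental
print(high_frac)  # essentially nothing up top
```

            Running a real track through `band_energy` tells you where its energy actually lives, which is exactly the information you'd use to decide where boosting is pointless and where notching will matter.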



            If, for example, a guitar track has a very weak response at 500 Hz, boosting that frequency with an EQ is only going to raise the noise floor and contaminate the mix's clarity.



            On the other hand, if the guitar is strong between 500 Hz~5 kHz and your snare has a peak resonance at 2 kHz and is hard to hear in the mix, you can notch the guitar at 2 kHz with an EQ, and the snare appears clearly in the mix as you remove the veil of masking the guitars were producing.
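            That notch is a one-liner with scipy's `iirnotch`. A sketch using sine stand-ins for the clashing tracks (the 2 kHz center and Q of 4 are illustrative picks, not a universal recipe):

```python
import numpy as np
from scipy.signal import iirnotch, tf2sos, sosfilt

sr = 44100
b, a = iirnotch(2000.0, Q=4.0, fs=sr)   # narrow cut right where the snare lives
sos = tf2sos(b, a)

t = np.arange(sr) / sr
at_notch = sosfilt(sos, np.sin(2 * np.pi * 2000 * t))  # guitar energy masking the snare
below = sosfilt(sos, np.sin(2 * np.pi * 500 * t))      # guitar body, left alone

# look past the filter's start-up transient
print(np.abs(at_notch[sr // 2:]).max())  # nearly silenced
print(np.abs(below[sr // 2:]).max())     # nearly untouched
```

            The point mirrors the text: only the masking band is carved out, while the rest of the guitar's range passes through essentially unchanged.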



            As you get better, you learn to do this by ear, hands on with the tools, but a visual assist tool can be of great use getting you there. Like I said, you can't paint by numbers, but numbers can at least put you in balance and expose the range of sound you have to work with.



            The other way is to import into your project a commercial recording of a similar genre and beat to what you'd like to achieve. Then switch back and forth in an A/B comparison between the commercial track and one instrument in your mix at a time, and use it to target your EQing. Do each instrument in your mix solo against the commercial recording, then mute the commercial recording and unmute all your tracks. This should get you close to a good balance. Of course the timing and key of the two won't be in sync while you're doing this, but it should be close enough to ballpark. You can also get the stereo pan, compression and reverb depth close this way.



            Occasionally I'll do a cover tune this way. Import a tune into the DAW, play all the parts, and then remove the import. I'm left with a close facsimile of the original, minus the musical talent the original artists have and the gear they actually use.



            Here's an example or two of that method.



            http://dl.dropbox.com/u/1682170/Driv...BMaster%5D.wav



            http://dl.dropbox.com/u/1682170/I%20...0(Master1).wav



            After a while the method gets boring, but you learn more in one of those sessions than in a year of doodling around blindly trying to achieve the same results. You can build some nice EQ presets in the process too.



            • #8






              Quote Originally Posted by samal50




              Have a close read of what WRGKMC and jnorman say above.





              The key to any method of mixing is to first define properly what it is you are trying to achieve, and how to go about doing so.



              Saying something like 'make it sound not as good' is a kind of throwaway comment that does have some basis in reality, but is really not helpful in terms of guidance towards making better mixes. It's a very ill-defined statement, and very easy to misunderstand in terms of what it is you're actually trying to achieve in a mix.



              My advice:



              Read the other replies in this thread. There is some great information within them.



              • #9

                WRGKMC, I know you mentioned something about what bands hear during a live performance. What is it that they hear vs. what the audience hears? The audience hears the song like on the CD. The band playing should hear the same, otherwise they won't be on the same page about what they're playing, right? I'm assuming the guitar player hears his guitar louder than the other instruments? Is this about right? But this isn't true when a group of musicians are jamming in the basement; you end up hearing everyone. Is it because there is no PA set up? From what I understand it's all panning: guitar right is panned hard right, guitar left is panned hard left, bass and drums centered.

                Another thing: when people say "everyone sounds the same" (usually with pop music), does this mean every producer of those pop records uses the same presets, if there's such a thing? I know there are no set rules when it comes to mixing.



                • samal50 commented:

                  You mentioned mixing as "enhancement"; what would you say mastering is? Is it to make all songs within an album sound equally the same despite each song having different settings when tracking and/or mixing? That's how I understand mastering. Mastering is the last stage before the final product, as I recall. I have the TC Electronic Finalizer 96K.

                  You mentioned the use of a limiter. I'm not familiar with it. Would this be similar to a DI box? I think some direct boxes' job is to boost an instrument, but is that the same as a limiter?


                • WRGKMC commented:

                  Mastering is the spit shine you put on the boots after the multitracks are mixed down to a stereo file. You can do some mastering within a DAW program if that's all you have, but an editor program is better because it's more precise.

                  A mixdown should come out at a loudness of about -10 to -14 dB, which is very low in comparison to a commercial recording. Trying to get a mixdown as loud as a commercial recording normally fails for a number of reasons, mostly frequency balance and perceived loudness problems. Mixing down at a lower level leaves plenty of clean headroom so the high-quality tools used for mastering have dynamic room to work well. If a mixdown is too hot and close to the 0 dB ceiling, you can't use mastering tools. Over-compressed mixing is the #1 problem with most mixes. Mastering may use several different comps and limiters to even up the dynamics so the perceived listening level of each song on an album, and of the material within each song, remains realistic and you aren't forced to constantly tweak the volume control listening to the songs play back.

                  The main steps in mastering are: cleanup, getting the fade-ins and fade-outs right, and having enough time between one song and another on a CD. Song information may be coded into the tracks, and that kind of stuff may be done too.

                  Overall EQing, so the bass, middle and treble of each song match and you don't have to re-tweak your playback system every time the CD changes tracks.

                  Multiband limiting may be needed next to get the dynamics of the different frequencies balanced. You don't want bass punching above the mids, or snare attacks masking guitars and vocals. This step smooths the dynamics of the frequency ranges.

                  Left and right balancing. There are tools that can be used to even up the stereo tracks so one side isn't louder than the other. You can center the bass frequencies so the bass kicks evenly in both speakers while leaving the mids and highs in stereo, and you can adjust the stereo width. Something overly wide may sound hollow in the middle; too narrow, and it may sound mono. This is a critical step in checking phase so a song will play back well in both mono and stereo.
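                  Centering the bass while leaving mids and highs stereo is classic mid/side processing: encode L/R into mid and side, high-pass the side channel, decode back. A numpy/scipy sketch of that idea; the 150 Hz crossover is my own illustrative pick:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def center_the_lows(left, right, sr=44100, cutoff=150.0):
    """Make everything below `cutoff` mono by high-passing the side channel."""
    mid = (left + right) / 2
    side = (left - right) / 2
    sos = butter(4, cutoff, btype="highpass", fs=sr, output="sos")
    side = sosfilt(sos, side)      # low bass can no longer differ between sides
    return mid + side, mid - side  # decode back to left/right

sr = 44100
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 60 * t)
out_l, out_r = center_the_lows(bass, np.zeros_like(bass), sr)  # bass panned hard left

rms_l = np.sqrt(np.mean(out_l ** 2))
rms_r = np.sqrt(np.mean(out_r ** 2))
print(rms_l, rms_r)  # the 60 Hz energy now kicks evenly in both speakers
```

                  The same encode/filter/decode structure also gives you width control: scaling the side signal up or down widens or narrows the stereo image.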

                  Noise reduction may be next. Removing digital noise that makes the music sound grainy can make the music sound more fluid and analog-like: harshness removed, clarity preserved. Sometimes the tracks are even passed through actual vacuum tubes for a warm, rich tone.

                  The last item is a brickwall limiter. This is the step that brings the song up to commercial levels and adjusts the perceived loudness to match between songs. You can go for a medium loudness so it's crystal clear, or you can smash the peaks to make it sound like a hi-fi turned all the way up with the speakers flapping. Every step before this, you worked at lower levels; only here does the music get loud enough to match a commercial recording. Different limiters have different warmth levels and clarity. The main thing is it prevents any transient peaks from going over 0 dB, which causes the nastiest digital distortion and can prevent a playback system from playing the tracks when it's scanned.
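                  Reduced to its bare essence, a digital brickwall stage is "add makeup gain, never let a sample cross the ceiling". Real limiters use lookahead and smooth release rather than hard clipping, so treat this numpy version purely as an illustration of the guarantee (the -0.3 dBFS ceiling and 12 dB makeup are invented numbers):

```python
import numpy as np

def brickwall(x, ceiling_db=-0.3, makeup_db=12.0):
    """Raise the level, then hard-cap every sample at the ceiling.
    A real limiter shapes gain over time instead of clipping like this."""
    ceiling = 10 ** (ceiling_db / 20)
    loud = x * 10 ** (makeup_db / 20)
    return np.clip(loud, -ceiling, ceiling)

rng = np.random.default_rng(1)
mixdown = rng.standard_normal(44100) * 0.1  # quiet mix with lots of headroom
master = brickwall(mixdown)
print(np.abs(mixdown).max(), np.abs(master).max())
# the master is much hotter, yet no sample ever passes the -0.3 dBFS ceiling
```

                  That cap is the whole point of the step: the loudness comes up to commercial levels while transient peaks are guaranteed never to cross the ceiling.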

                  After this is when you downsample to CD quality of 16/44.1 or MP3 and add dithering, which adds noise to the bits and smooths the frequencies to sound more analog-like.
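                  Dither is worth a tiny demonstration: without it, material below one 16-bit step simply vanishes when you truncate; with TPDF noise added before rounding, it survives as part of the noise floor. A numpy sketch (the test-tone level is an arbitrary choice):

```python
import numpy as np

def to_16bit(x, dither=True, seed=0):
    """Quantize float audio in [-1, 1] to 16-bit, with optional TPDF dither."""
    q = 1.0 / 32768                     # one 16-bit step (LSB)
    if dither:
        rng = np.random.default_rng(seed)
        # triangular-PDF noise spanning +/- 1 LSB: the sum of two uniforms
        x = x + rng.uniform(-q / 2, q / 2, x.size) + rng.uniform(-q / 2, q / 2, x.size)
    return np.round(x / q).astype(np.int16)

sr = 44100
t = np.arange(sr) / sr
quiet = 1e-5 * np.sin(2 * np.pi * 1000 * t)   # a tone smaller than half an LSB

truncated = to_16bit(quiet, dither=False)
dithered = to_16bit(quiet, dither=True)
print(np.abs(truncated).max())  # 0: the tone is erased outright
print(np.abs(dithered).max())   # nonzero: the tone lives on inside the dither noise
```

                  This is why dithering is done as the very last step, after the limiter: any processing applied afterward would spoil the carefully shaped noise floor.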

                  There can be many other steps here depending on what's needed, but you can see it's not a single step, and an all-in-one plugin would fail to do the job well. Every song is different, and there is no universal plugin that will do the job for every song. They do make some multi-plugin tools like Ozone and T-RackS; I have these and have used them extensively. They can be much harder to set up and use than separate plugins in an editor program, and their presets are almost always lame and will miss many opportunities to make the music sound its best.

                  Lastly, you may not need any of these steps if the mixing and tracking were superb. You must "always" use limiting, though. Without it you can have peaks that cause the tracks to distort on various playback systems, or it may not be loud enough for weak playback amps and speakers.

                  If you get a chance, get Bob Katz's book on mastering. It's loaded with all the details I've mentioned plus more. You also learn that mixing is only the next-to-last step in the chain, and that the goal of mixing is to get the music ready for mastering: you learn to mix for good mastering results, not to attempt to get a finished product ready to burn. Boots look like crap without polish, but the boots need to be well made to last.


                • samal50 commented:

                  So what is needed for mastering? I may have an all-in-one box in the TC Electronic Finalizer 96K, but what am I missing? You mentioned a limiter several times.

                  I don't quite get what it is, but according to the book "Basic Effects and Processors" by Paul White, it is a "device that controls the gain of a signal so as to prevent it from exceeding a preset level. A limiter is essentially a fast-acting compressor with an infinite compression ratio". So when using a limiter to increase volume, wouldn't this be "exceeding" the preset level? Contradictory?

                  I haven't played around with the Finalizer 96K that much, but I think it has a compressor and a limiter already. Is a limiter similar to a mastering device, or part of one? Is the limiter what gives songs that big "commercial" sound? I listen to all kinds of music, and some mainstream songs aren't necessarily great songs, but they're of "commercial" grade, which is what the "industry" is looking for. Much like some movies that aren't great script-wise or acting-wise but are of commercial grade, so they're shown on the big screens.

                  From what I understand, "mastering" is a combination of compression, limiting, and other things. I can't recall at the moment, but I think it's 3 things and I forgot the other one. I looked at Sweetwater for limiters and the Finalizer 96K was among them, and I looked into the BBE Maxcom as well. I can't afford the Drawmers right now.

                  The way you described mastering sounds like it's simply arranging tracks: when they play and stop (fades in and out), track info coding, etc. Is this part of mastering? Sounds like what Sony CD Architect could do. I always thought mastering would actually provide a significant tonal boost, giving it commercial grade. Would this be untrue? Would this be the job of the limiter then?


              • #10
                Try the IK Multimedia TRacks S3 suite. You'll be very pleasantly surprised at how close you can get to a professionally mastered sound with those plugins, even if it is still an "in the box sound".

                And get a properly calibrated meter plugin. The Massey HR meter is great (and free!) if you're using a Mac, and is completely calibratable to AES/EBU using PrefEdit.


                Bob Katz's book "Mastering Audio" is a good primer for learning the ins and outs of what constitutes a finished master, and how to produce one.

                Comment


                • #11
                  Not sure if my replies got lost during the "updating" process of this forum, but regarding the use of drum machines/beat boxes: would it be wise to record the low frequencies on a separate track and the high frequencies on another? Let's say all bass drums/toms (low freq?) on 1 track and the snare/cymbals (high freq?) on another track?

                  Comment


                  • #12
                    You can record each drum and cymbal on separate tracks if the drum unit has MIDI capability, or you can tap the beats in manually. DAW programs have unlimited tracks, remember; use them to your advantage. The drum machine is a sequencer, and if you tie the drum unit's clock into the computer and sync it to a click track, you can do essentially anything you want: modify the beats, change the drum sounds, add breaks, you name it.
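                    As a toy illustration of the "separate track per drum" idea (the event list is made up; the note numbers follow the General MIDI drum map, where 36 = kick, 38 = snare, 42 = closed hi-hat):

```python
# Split a drum-machine pattern into one "track" per instrument, so each
# can be mixed, processed, or re-voiced on its own.
from collections import defaultdict

pattern = [  # (tick, note) pairs: one bar of a basic beat
    (0, 36), (0, 42), (240, 42), (480, 38), (480, 42), (720, 42),
]

def split_by_note(events):
    """Route each MIDI note number onto its own list of hit times."""
    tracks = defaultdict(list)
    for tick, note in events:
        tracks[note].append(tick)
    return dict(tracks)

tracks = split_by_note(pattern)
# tracks[36] -> kick hits, tracks[38] -> snare hits, tracks[42] -> hi-hat hits
```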

                    Comment


                    • #13
                      True, but if I only had 8 tracks to work with and 8 drum and cymbal parts, then I wouldn't have any tracks left. Maybe bounce them all into 1 track? But only after I had already mixed them properly. I know a bit about MIDI, just synchronization-type stuff. Maybe there is more to MIDI than just letting the drum machine start and record at a specific time I specify.

                      I think you mentioned recording vocals dry; could this be done with musical instruments, or is it not a good idea? I was listening to a rock song which had an acoustic and an overdriven electric in perfect sync, so I assumed the guitar part was recorded dry first, then copied onto 2 separate tracks: 1 for the acoustic, and the other for the electric overdrive.
                      Last edited by samal50; 03-11-2014, 01:35 AM.

                      Comment


                      • #14
                        My question would be: are you working on an analog recorder, a standalone recorder, or a DAW?
                        A DAW of course has unlimited tracks. Tape or a standalone recorder has limited tracks, which is why most have abandoned them.
                        You could, however, use the older techniques that were standard for decades: record the 8 tracks, then bounce them down to a
                        stereo or mono track after they are recorded. This lets you mix them as you bounce for the best tones possible.
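                        The bounce-down idea can be sketched as a simple sum with a per-track level applied as you go (the track names and levels here are made up):

```python
# Toy "bounce": mix several equal-length tracks down to one,
# scaling each track by its mix level while summing.

def bounce(tracks, levels):
    """Sum sample lists into one track, scaling each by its level."""
    length = len(next(iter(tracks.values())))
    mix = [0.0] * length
    for name, samples in tracks.items():
        for i, s in enumerate(samples):
            mix[i] += s * levels[name]
    return mix

tracks = {"kick": [1.0, 0.0], "snare": [0.0, 1.0]}
mix = bounce(tracks, {"kick": 0.5, "snare": 0.25})  # -> [0.5, 0.25]
```

                        Once bounced, the 8 source tracks can be freed for new parts — which is exactly the old multitrack workflow; the catch is that the balance is committed at bounce time.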

                        I'm no expert at MIDI either, but I've done enough to know what can be done. You can play an electric drum machine back into a DAW and separate
                        it into all its individual elements on separate tracks: snare, kick, cymbals, etc. You can then manipulate the notes of the tracks, the loudness, accents,
                        transients, tones, and timing. You can also change the voices. If you don't like the sound of the snare, you can swap it for hundreds of different snares,
                        or for something that's not a snare at all. It can be any musical instrument or even samples. You could replace the snare with a whale passing gas if you're so inclined.
                        Then you can mix all the tracks to suit you in MIDI, panned any way you want, then bounce them to an analog track, mono or stereo.

                        You see, MIDI is not sound. It's a list of commands that are sequenced by a clock. You can give the MIDI signals a voice, and there are many banks that contain voice triggers that can be broken out into tracks, remixed, re-sequenced, then bounced down to a stereo track if you like.

                        I'm just not an expert at getting you there. I can do a bunch of stuff on my DAW, but I have to fumble around a bit to make it work. I find Cubase easier for MIDI than Sonar, which is my main DAW. I have tried to get Sonar to do it the way Cubase does, but its approach to MIDI is different and I haven't mastered its MIDI capabilities yet. I do 99% analog, which I prefer in any case. MIDI is too much work, and the musician in me loses inspiration working with it.
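                        The "MIDI is commands, not sound" point is easy to see in a toy voice swap: because a snare hit is just a note number, rewriting that number re-voices every hit without touching the timing (note numbers follow the GM drum map; the beat itself is made up):

```python
# Re-voice a drum part by rewriting one MIDI note number.
# GM drum map: 36 = kick, 38 = snare, 39 = hand clap.

def swap_voice(events, old_note, new_note):
    """Rewrite every hit of old_note to new_note; timing is untouched."""
    return [(tick, new_note if note == old_note else note)
            for tick, note in events]

beat = [(0, 36), (480, 38), (960, 36), (1440, 38)]
swapped = swap_voice(beat, old_note=38, new_note=39)  # snare -> hand clap
```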

                        As far as recording instruments dry: yes, you can. I always record bass dry and direct. Most of the time I use a modeling effects unit so I can make it sound more like a mic'd amp. I have a few small units with EQ, compression, and head and cab modeling that I can dial up to get a nice fat tone. I could do this in the box as well, and have many times. It's just that by the time I track bass, I know what tones I want, and having them when I track lets me get the exact tones I need for the mix as I'm playing the notes, instead of hunting for them afterwards while mixing. After the bass is recorded I rarely have to add any additional effects or tweak it at all, so I can use it with the drums as a foundation for mixing the other parts.

                        Keyboards I always record direct. I have several Yamaha keyboards that have good sound quality recorded direct, but I may EQ them, add chorus, or even use a Leslie-cabinet emulator plugin, or whatever plugin suits the mix. I'm not a great keyboard player. I can do chords and one-hand riffs pretty well, but I'm not good enough to manipulate the sound quality while playing like I would on guitar or bass, so I do that stuff in the box.

                        Guitar can be recorded clean. I do it to get the most acoustic sound from the guitar I can. It is very flat sounding, though, and can easily get buried in the rest of the mix. Transients are large, and it pretty much sucks for playing lead unless you're going for a clean jazz or acoustic lead sound. With a little compression the results are a lot better. You can slam chords hard and get some nice punch happening without peaking the meters. You can also play softly without being masked and get anything from a nice jangle to arpeggio-type rakes and strums.

                        The good part is that if you do use compression, you can use amp-modeler plugins very effectively when mixing and add as much drive as you want. I've done it without compression while tracking, of course, but it can be very spikey sounding. You'd have to add a lot of compression to a completely dry guitar while mixing, before adding drive, just to even up its dynamics. When you add it before tracking I find the results much better; plus the compression is analog, which you can tweak based on how you play, and play to how it compresses.
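                        The "even up the dynamics before drive" idea can be sketched with a bare-bones static compressor (real compressors add attack/release envelopes; the threshold and ratio here are made up): above the threshold, gain is reduced by the ratio, so big transient spikes shrink while quiet playing passes untouched.

```python
# Toy static compressor, applied per sample.
# Above the threshold, the overshoot is divided by the ratio.

def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce level above the threshold by the ratio; preserve sign."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

picked = [0.2, 0.9, -0.7]   # a spiky "dry guitar" transient
evened = compress(picked)   # the 0.9 spike is tamed; 0.2 passes unchanged
```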

                        This is all stuff you can try easily enough. It's always good to have a lot of options, just for variety. I did a mix recently where I recorded my buddy's amp mic'd and his guitar direct. I made the direct signal sound just like his amp using a free plugin called Voxengo Boogex, with a simple compressor before it. http://www.voxengo.com/press/82/

                        Boogex isn't much for having a fancy GUI, but it's got all the elements you need to dial up any kind of cab: mic type, on-axis, off-axis, drive, presence, EQ, phase, etc. I occasionally use it to gain up a guitar that needs more edge as well. There are a bunch of low-CPU amp-modeling plugins out there you can try. I do have some others, like Guitar Rig, but I find them clunky to work with while mixing, and since they suck so much CPU power I have no use for them there. Simple and easy is my choice. I don't need eye candy, just ear candy.

                        I even did a whole album using this little gizmo: http://www.vst4free.com/free_vst.php?id=530 I could get compression, drive, chorus and echo from one plugin and dial up some realistic amp sounds from it.

                        Comment


                        • #15
                          Well, the BOSS BR-8 is a digital recording studio: 2 tracks of simultaneous recording and 8 tracks of simultaneous playback. The 64 virtual tracks are nothing more than extra tracks for different takes. I'll look into a MIDI book for a better understanding.

                          Comment
