
Why "humanizing" (currently) doesn't work



It's too random.

 

A 2011 study showed that people not only prefer music played by humans over music generated by machines (or locked to a grid by humans using machines), but that they also prefer human-generated music over music that has been "humanized" via software, which introduces subtle "mistakes" and timing variations that deviate from the grid in an effort to make it sound more random.

 

It turns out that the imperfections that human musicians impart into their performances aren't as random as once thought, but are actually correlated over longer time periods. It's a pretty in-depth paper and not exactly light reading, but fascinating nonetheless.

 

So the question is, based on the findings of this research, how soon will programmers devise a way to incorporate such long-range temporal (and dynamic / amplitude) correlations into the humanizing functions of recording software? And if they're able to do so successfully, to the point where humans can't differentiate between the timing "feel" and fluctuations of human-generated and machine-generated music, what do you think the implications would be for the future of music and human / computer musical interaction?
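For the curious, the difference between a typical "humanize" function (independent white noise) and what the paper found (long-range correlated, 1/f-type fluctuations) can be sketched in a few lines. This is only an illustrative sketch in Python, using the well-known Voss-McCartney pink-noise algorithm as a stand-in for the correlated fluctuations the paper describes; the function names and the 8 ms spread are my own assumptions, not anything taken from the study:

```python
import random

def voss_pink_noise(n, num_sources=8, seed=42):
    """Voss-McCartney pink (1/f-ish) noise: sum several random sources,
    where source j only refreshes every 2**j samples. The slowly
    refreshing sources give the output correlation over long stretches."""
    rng = random.Random(seed)
    sources = [rng.uniform(-1.0, 1.0) for _ in range(num_sources)]
    samples = []
    for i in range(n):
        for j in range(num_sources):
            if i % (2 ** j) == 0:       # source j updates every 2**j steps
                sources[j] = rng.uniform(-1.0, 1.0)
        samples.append(sum(sources) / num_sources)  # keep values in [-1, 1]
    return samples

def humanize(grid_onsets_ms, spread_ms=8.0):
    """Offset each grid onset by correlated (pink) noise instead of the
    independent white noise a typical "humanize" function uses."""
    deviations = voss_pink_noise(len(grid_onsets_ms))
    return [t + spread_ms * d for t, d in zip(grid_onsets_ms, deviations)]

grid = [i * 125.0 for i in range(16)]   # one bar of 16ths at 120 BPM
print(humanize(grid))
```

Because neighboring deviations share most of their underlying sources, consecutive notes drift late or early together, more like a player leaning on the beat than like independent dice rolls.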

 

 



I haven't read the article yet but I do have some ideas on this subject...

 

 

 

A good friend of mine (a fabulous musician) was playing in town a couple of years ago and I went backstage during intermission to say hi. Someone from the audience came in and said, "I like the way you let the music play you." My friend replied, "Yes, just get out of the way."

 

 

 

I think that is what performing musicians actually do. We establish a groove, then ride on it like a surfer does a wave. The groove breathes a bit and the players breathe along with it, like riding the peaks and troughs of the surf.

 

 

 

When I have a really good gig, it usually feels like I am simply holding the instrument and watching myself play stuff I didn't know I could play. It seems effortless at the time, while at some other shows I really have to work at it to earn my pay, and although the performance is acceptable, it's a lot better when the music plays me. It's about getting out of the way, as my friend suggested, and allowing ourselves to be in service to the music.

 

 

 

Machine generated music seems to be more in service to the programmers, engineers and musicians than it is to the music itself.

 

 

 

Perhaps the "humanizing" software engineers could study some of the natural phenomena that appear to be random (such as the waves on the ocean) and develop algorithms based on Nature, rather than trying to introduce the subtle mistakes you mentioned.

 

 

 

For what it's worth, I'll probably have more to say after I finish reading and get my head around some of those formulas.

 

 



Wow, I've been saying for decades that randomization isn't humanization...I always felt the humanization menu item should be calibrated in "Number of beers consumed by musician."

 

I think the big difference between a musician with really good timing and someone who doesn't have it is that the latter overcompensates. Quantization strength can tighten the timing without destroying the feel. But of course, technology can also let you add humanization...manually!!
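Percentage-strength quantization like that is easy to picture in code. A hypothetical sketch (the function name is invented; a 125 ms grid, i.e. 16th notes at 120 BPM, is assumed), showing how a strength setting removes only a fraction of the timing error:

```python
def quantize(onset_ms, grid_ms=125.0, strength=0.5):
    """Move an onset toward the nearest grid line by `strength` (0.0-1.0).
    strength=1.0 is hard quantization; moderate values tighten the timing
    while leaving some of the original feel intact."""
    nearest = round(onset_ms / grid_ms) * grid_ms
    return onset_ms + strength * (nearest - onset_ms)

# A note played 20 ms late against the grid:
print(quantize(520.0, strength=0.5))  # -> 510.0 (half the error removed)
print(quantize(520.0, strength=1.0))  # -> 500.0 (snapped to the grid)
```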



Ever since Roland (was it?) introduced that human feel thing, all it has done is enable the progressive introduction of rhythmic incompetence. I can't see anyone who actually works at their instrument(s) or music thinking differently. Maybe if they introduced fractal-based algorithms that encompassed a greater variety of musical factors...



That's some heavy reading. I'm pretty good at reading raw technical analysis, but some of the terms were not in my vocabulary.

 

 

There seem to be a lot of questions asked, and it's left up to the reader to tie it all together and come up with conclusions. When all is said and done, they are just trying to quantify the human emotion in music and its appeal to others. They even touch on biorhythms, which I did a lot of research on years ago.

 

It comes down to this: before the invention of the clock and machines, music followed more natural rhythms and flows.

 

There was actually a fantastic Nova-type documentary done on this topic that I'll have to find. It had a physicist and a historian discussing how machines changed man's relation to things like the tides, sun, moon, stars, and all the other natural sounds he hears. It was highly enlightening, because it could so easily be applied to music.

 

The main point is that fixed mechanical rhythms have been influencing man for a long time now, in factories, automobiles, even pendulum clocks, but they haven't been exactly good for his mindset or health. Man can attempt to become a machine himself, but he always adds that human element to his music, because he can only keep certain levels of dopamine going before his engine needs to recuperate.

 

Natural biorhythms like heart rate and respiration can be pushed hard to maintain a beat, for example, but the mind is always going, and it's hard to maintain focus on a fixed physical effort without distraction for very long. Advertisers found out long ago that the general public has a maximum attention span of about 12-15 minutes focused outside of themselves before they have to reassociate with their own body. That's where they plant their advertisements in TV shows.

 

The other thing the article leaves out is body language, in all its various forms from hypnosis to sex appeal. A guy on stage strutting his stuff isn't that much different from a rooster raising its feathers to woo its mates. Man has a primitive emotional side, driven by hormones, that leads him to draw attention to himself in one way or another.

 

Music is just one method of expressing emotions, and as of yet it can't be quantified in some formula that you can program. At least I hope not. Hopefully I'll be gone from this planet before I become attracted to mating with some machine because it's programmed to mime human tics and twitches in music. I know they are busy trying to do these kinds of things with robots, but you have to be realistic.

 

The human elements expressed in music are highly complex. You don't have just aural influence; you have visual, direct thought, subconscious thought, and subliminal messaging going on between the performer and listeners all the time (if the performer is sober and not dumb, deaf, and blind).

 

Recorded music can be anything, but as the article states, people find the human element more appealing. The reason is that people being people are inspiring; machines being machines are monotonous.



I don't remember the last time I quantized something and it sounded good or human… even the groove features that come along with some DAWs just don't cut it. For the last couple of years I've been programming drums and then playing MIDI bass to the drum pattern. After a couple of takes, I usually have something that "grooves" without using a DAW parameter.

 

My timing has gotten better over the years, but there is an element to a groove that is hard to establish if I'm programming a drum part myself, so what I found to work is to import patterns from EZ Drummer or the NI library into DP and play to that. I have to say, drum programs have gotten better not only in sound quality but in grooves. Nothing beats a great drummer though… but we knew that.

 

 



I don't even program beats; I tap them in live, one at a time, track by track. It's a lot of work, but it's actually a lot of fun. Of course, the songs I do this on have to be a bit simpler, because I'm not a great live drummer or tapper, but there are still plenty of cool and creative things you can do even if you can't pull off a major high-speed seven-tom drum fill and have it all in exact sync. What you can do, however, is program those fills as an automatic beat, multitrack the manual taps in on the simpler beats, and then cut and paste the fills in. I find this very useful when I want to have breaks, slow-downs, etc. It cuts the time it takes to program all that by at least 50%, even if it's 5 or 6 tracks for an entire set of drums.


 

I have to say, drum programs have gotten better not only in sound quality but grooves. Nothing beats a great drummer though…. but we knew that.

 

 

 

I think a lot of the best ones are actually played by real drummers, using pad kits, and not programmed.

 

If you want to put them in "by hand", IMO a MIDI kit, or at least a pad controller, is the way to go, since they can actually be "played." I think both are much better than trying to play drum parts on a MIDI keyboard's keys. Full-on 100% quantization is something I pretty much never use... if you need to tighten something up, judicious and modest use of percentage quantization is a much better option IMO. Of course, if it's really bad, playing it again until you get a good performance is the better choice.



The question has already flashed past us faster than we can answer. Meaning, the answer doesn't matter because the people don't. Only the technology matters, baby, and it's all in the bleeding edge dollars of human displacement. But, you already knew that. You live it.

 

I've read the articles on the music tech side of the industry and nothing about it is virtuous. None of it. Yet, I see the gear-hawking on the side bars because it has to be there. It pays the rent and salaries. The salaries are earned by the Andertons and minions of technology who have made a conscious decision to earn a living off of it. Why not, right? It's a good, clean income and no one gets thrown under a bus. But, don't fan flames under it disingenuously. It's like asking for forgiveness in plain sight of the human artistry lying cold at your feet as you distract yourself with a pitch correction widget.

 

Can't play both sides of the mutually divisive.



Any randomness at all is what's wrong. The fluctuations in a good performer might be unpredictable, but they're almost certainly not random. Players spend countless hours working out the latencies in playing their axe so they can play with not so much metrical precision as musical coherence. As devices like the BBE Sonic Maximizer clearly demonstrate, timing equals tone, something all ensemble players are keenly attuned to. Taste in the simple syncing of attacks may account for the human "imprecision" seen on the wave traces.

 

Then you have human response to the material. You like it, you don't like it, and you play differently accordingly. Algorithms don't know good anything from bad: changes, licks, whatever moves or un-moves a player, algorithms don't know.



 

The question has already flashed past us faster than we can answer. Meaning, the answer doesn't matter because the people don't. Only the technology matters, baby, and it's all in the bleeding edge dollars of human displacement. But, you already knew that. You live it.

 

Technology can't do anything by itself, but it can extend the abilities of musicians. It reminds me of when keyboard samplers came out, and the union said they were going to put musicians out of work. My standard answer was "So who plays keyboard samplers? Accountants?"

 

With very few exceptions, all the songs on my YouTube channel have drums played by a real drummer. But technology is what allowed me to re-arrange the parts to better fit the music, and process them to put them in a room. I really don't think it's "either/or" with technology and humans, but "and."

 



I've read the articles on the music tech side of the industry and nothing about it is virtuous.

 

The vast majority of the articles I've written over the years deal with techniques, not reviews. Why? Because gear comes and goes...a review I wrote in 2005 is probably meaningless today. But lots of articles I wrote back then are still relevant. Music technology simply creates tools; it's up to musicians whether they want to use those tools, and if so, whether they want to use them creatively.

 

 


 

The question has already flashed past us faster than we can answer. Meaning, the answer doesn't matter because the people don't. Only the technology matters, baby, and it's all in the bleeding edge dollars of human displacement. But, you already knew that. You live it.

 

I would strongly disagree with the idea that when it comes to music, people don't matter. I'm certainly not in favor of technology "displacing" humans, or removing the human element from music. I find most heavily pitch-corrected and grid-based music to be rather boring - it's often the little imperfections and the little mistakes that I find interesting and compelling.

 

Of course, unless you're really old, the debate about recording and music technology predates you. It's certainly nothing new. People thought recordings would put bands and orchestras out of business. Consider the early days of multitrack recording. People said that overdubbing would put musicians out of work since one guitarist could now play two parts on a recording instead of requiring two guitarists to do it simultaneously. But of course, that overlooks the fact that that approach takes twice as long, and means more billable session hours for the single guitarist. People said that being able to punch-in on a recording would lead to musicians who were less skilled since they could just re-play the part until they got it "right" - but technique hasn't suffered. In fact, in many cases today's top-flight musicians are some of the most technically skilled players in history. Similar things were said about drum machines, and yet good drummers are still highly prized and in great demand. I sure as heck would rather have a great drummer on my album than a drum machine.

 

I've read the articles on the music tech side of the industry and nothing about it is virtuous. None of it.

 

Let me ask you a question. Do you play an instrument, and if so, which one? If you do, it's almost a certainty that the instrument you play is a direct result of "music tech" in one form or another. Without it, we wouldn't have the acoustic piano - to say nothing of things like synthesizers, electric guitars and basses, etc.

 

Yet, I see the gear-hawking on the side bars because it has to be there. It pays the rent and salaries. The salaries are earned by the Andertons and minions of technology who have made a conscious decision to earn a living off of it. Why not, right? It's a good, clean income and no one gets thrown under a bus. But, don't fan flames under it disingenuously. It's like asking for forgiveness in plain sight of the human artistry lying cold at your feet as you distract yourself with a pitch correction widget.

 

Can't play both sides of the mutually divisive.

 

How am I being disingenuous in pointing out research about how a DAW feature is flawed? And when have I ever suggested or said that human artistry is unimportant? I merely asked what people thought about the potential future ramifications if software programmers learned to apply correlated fluctuations to the "humanization" functions in DAW software instead of randomization. BTW, Jeff Porcaro, the drummer whose playing that research on randomization was based on, happened to be capable of doing a scarily accurate impersonation of a Linn drum machine, and was a master at programming one too. Why? Because he thought like the brilliant drummer he was when he programmed one. What I was asking for was everyone's opinions on what it would mean if the machines learned to accurately mimic humans, just as Jeff used to be able to mimic the machines.

 

And yes, we run ads here on HC. They do help keep the lights on and the forums open and free for everyone... but we don't shy away from pointing out when a product has a flaw or could be improved upon in some way. I try very hard to find the flaws in every product I review. And most of our members tend to be interested in gear. In fact, in my experience, it's pretty rare that a musician doesn't care about the tools they use in the creation of their art.


What do you think the implications would be insofar as the future of music and human / computer musical interaction?

Just another swirl down the musical toilet...a direction music has been going in for some time now.


Wow, I've been saying for decades that randomization isn't humanization...I always felt the humanization menu item should be calibrated in "Number of beers consumed by musician."

 

I love that Craig -- consider it stolen!!!

 

I've been calling it randomization since I got my first Master Tracks Pro on an Atari computer.

 

Musicians play in the pocket, in the groove, or whatever you want to call it.

 

Sometimes the sub-beats have a little or a lot of 'swing factor', sometimes the second and fourth beat of the measure are rushed or dragged a little, a soloist might drag the first part of a phrase only to rush the end in order to catch up, and there are probably thousands of other ways musicians subconsciously play with the timing. But unlike a "Humanizer", it is not done in a random fashion. If a blues band is delaying the 2s and 4s, it's consistent, for all practical purposes the same amount every measure. Same for the other groove aspects.
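That kind of consistent, non-random groove is straightforward to sketch. A toy Python illustration, not any sequencer's actual algorithm; the 15 ms swing on off-beat 16ths and the 6 ms drag on beats 2 and 4 are made-up values:

```python
def apply_groove(onsets_ms, grid_ms=125.0, swing_ms=15.0, drag24_ms=6.0):
    """Apply a consistent groove to straight 16th-note onsets: every
    off-beat 16th lands late by the same swing amount, and beats 2 and 4
    of each 4/4 bar drag by the same few milliseconds. No randomness."""
    grooved = []
    for t in onsets_ms:
        step = round(t / grid_ms)        # 16th-note index from bar start
        shifted = t
        if step % 2 == 1:                # off-beat 16th: swing it late
            shifted += swing_ms
        beat = (step // 4) % 4           # quarter-note beat within the bar
        if step % 4 == 0 and beat in (1, 3):
            shifted += drag24_ms         # same drag on 2 and 4, every bar
        grooved.append(shifted)
    return grooved

bar = [i * 125.0 for i in range(16)]     # one 4/4 bar of straight 16ths
print(apply_groove(bar))
```

Every bar gets the identical offsets, which is the point: the groove is a repeatable pattern, not jitter.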

 

I remember my first Band Director in Junior High School, Robert Monroe. He played recordings of a number of Viennese waltzes by various conductors and orchestras. He told us to pay attention to how long they delayed the second beat of the measure, and how each conductor delayed it by a different amount, from a slight amount that you need to pay attention to hear, to a great and obvious amount. He explained that it affects how the dancers will dance to the song, and I've used that gem of information in various styles of music myself. I even took dance lessons to understand how dancers want the groove to be in Merengue, Salsa, Rhumba, Samba, Swing, and other popular dances. It helps my sequencing a lot.

 

Not that randomization can't be used effectively. Sometimes when I'm doing a background track for a song I'm working on, I'll play the top note of (for example) a horn line, in the groove. Then I'll let a program like Band-in-a-Box harmonize it for me by adding the other parts. I know, it's quick and easy, but if the part isn't that important, it works and saves a lot of time. Then I'll randomize it just a tiny little bit, a few clock ticks and a few velocity increments. In other words, give the horn section a few beers (thanks Craig).
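"A few beers" in code is just a small, bounded jitter on MIDI ticks and velocities. A hypothetical sketch (the function name and the ranges are invented for illustration):

```python
import random

def a_few_beers(notes, max_ticks=3, max_vel=5, seed=1):
    """Loosen up a block-harmonized part by nudging each note's
    (tick, velocity) pair by a tiny random amount, within hard bounds."""
    rng = random.Random(seed)
    jittered = []
    for tick, vel in notes:
        new_tick = tick + rng.randint(-max_ticks, max_ticks)
        new_vel = max(1, min(127, vel + rng.randint(-max_vel, max_vel)))
        jittered.append((new_tick, new_vel))
    return jittered

horn_line = [(0, 96), (120, 96), (240, 100), (360, 90)]  # (tick, velocity)
print(a_few_beers(horn_line))
```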

 

I find putting horn lines into a MIDI sequence using the wind controller to be the best way to get them to feel live. Unfortunately, it is monophonic. That's where the auto-harmonization in BiaB comes in handy. It uses the same rules as the ones I learned in the Berklee course I took.

 

I mostly live input everything though. I play drums, bass, guitar, sax, wind synth, and one-handed keyboards (left hand on the joystick), so I can get the parts in live.

 

I've played in 'real bands' since I was a kid, went on the road, played with some big stars and some slackers too, and as of yet, no routine I've heard can inject the live feel I'm accustomed to into a step-entered MIDI sequence. If it ever happens, I might try it - it could save some more time generating the backing tracks for my duo.

 

Insights and incites by Notes

 

 

