Mr. Knobs: Eugene Goostman has a guinea pig, so I guess you're wrong about AI



  • Members

Carlin.

 

 

_____________________________

 

 

Speaking of conversabots and 60s/70s humor... The Breaking of the President.

(7'31" excerpt via YouTube)

 

 

from the Firesign Theatre's 1971 album, I Think We're All Bozos on This Bus.

 

http://en.wikipedia.org/wiki/I_Think...os_on_This_Bus

 

(Amazingly, it's only now, 43 years later, that I get the bus pun. Maybe if they'd used the old-fashioned "buss" spelling. Of course, there is also a vehicular bus, and that is, indeed, what the title bit of dialog refers to.)


  • Moderators

We once dreamed of the day when a computer AI would be indistinguishable from a human. We've reached that day, but, unfortunately, we've accomplished it by making people dumber, not computers smarter. See any random Facebook wall or Plurk timeline for confirmation.

 

Terry D.


  • Members
People -- make that most people -- don't want to hurt the feelings of a foreign child whose English isn't so great. Seems like kind of a cheat.

 

the exercise is to form an opinion about the interaction -- no feelings would be hurt either way.

 

however, making the kid foreign excuses linguistic wrinkles that you wouldn't expect from a native speaker. :o


  • Members
We once dreamed of the day when a computer AI would be indistinguishable from a human. We've reached that day, but, unfortunately, we've accomplished it by making people dumber, not computers smarter. See any random Facebook wall or Plurk timeline for confirmation.

 

Terry D.

Classic, Terry. Or, should I say: classic Terry.


 


  • Members

 

the exercise is to form an opinion about the interaction -- no feelings would be hurt either way.

 

however, making the kid foreign excuses linguistic wrinkles that you wouldn't expect from a native speaker. :o

Of course, got that; I just think people would also be less probing if they thought they were dealing with a real child.


  • Members

I'm pretty sure I've spoken to it online numerous times. Seriously though, it will always come back to interpretation. In our day more people are going to be convinced because people are more easily convinced than they used to be. That is to say, pretty much what Terry said.

 

"Turing test." Hmmm... seems like its being used too much like "Nyquist Theorem" already. We're doomed for sure.


  • Members
I'm pretty sure I've spoken to it online numerous times. Seriously though, it will always come back to interpretation. In our day more people are going to be convinced because people are more easily convinced than they used to be. That is to say, pretty much what Terry said.

 

"Turing test." Hmmm... seems like its being used too much like "Nyquist Theorem" already. We're doomed for sure.

I'm so tempted to ask, "And how would that be?"

 


 

But... I think we've all been down that road... or those two roads, as the case may be.

 


  • Members

I recently predicted that we'd see a program 'pass' a Turing test when even the program's authors would admit that it's not truly intelligent in the way a human (or perhaps even an ape or dog) is intelligent.

 

I don't think this program quite passed the test. Rather, it passed a benchmark predicted by Turing (the 30% part), as pointed out in the article posted above by Geoff Grace. A decade too late for Turing's prediction to be correct, though.

 

Regardless, it's a remarkable achievement. What it means is we've finally reached the point where we'll have to start refining the Turing test into something much more meaningful. The gold standard would be not just passing a single conversation test for a minority of interviewers, but working consistently over a long period of time with a number of people, with few of them suspecting it's anything other than a person. This includes learning work tasks and performing them, and developing interpersonal relationships. That will be a better measure of intelligence.

 

BTW, I envision two types of test, one for "intelligence" and one for "human-like intelligence". The former doesn't require fooling anyone; instead, it requires people being able to use natural language to teach it tasks which it then performs suitably. That's intelligence, but not necessarily much like human intelligence. Of course, the "level of intelligence" is related to the kinds of tasks it can perform. I suspect when we get to that stage, we'll learn a lot about what kinds of intelligence are required for different types of tasks. It will be more like benchmarking, where it's hard to give a single cumulative score that's meaningful, but which can still be a big help in characterizing the capabilities of the system under test.


  • Members

Turing or no, it's remarkable how far AI has progressed.

 

And while there are surely some people who aren't real bright - one look at the average comments on YouTube bears this out - writing off an entire generation or two of people as dumber smacks of some octogenarian in baggy suspenders yelling, "Damn kids! Get off my lawn!"


  • Members

people have an incredibly sophisticated sense of evaluation developed from millions of years of trying to stay alive.

 

just think about how little has to be askew in a living person to drive us to reject them.

 

we walk down the street, we evaluate a stranger and decide whether they're a friend or threat in a primal and complex way.

 

we take it for granted, but I think we tap into that same primal sense when evaluating a virtual person.


  • Moderators
Turing or no, it's remarkable how far AI has progressed.

 

And while there are surely some people who aren't real bright - one look at the average comments on YouTube bears this out - writing off an entire generation or two of people as dumber smacks of some octogenarian in baggy suspenders yelling, "Damn kids! Get off my lawn!"

 

Well, the "made people dumber" was sort of a joke.

 

The rest is not, though. AI is one of the major disappointments in Computer Science, the other being parallel processing. When I was studying computer science in college there was all sorts of arm-wavy, pie-in-the-sky talk about these two areas and how they would change the future. Neither has come to fruition, and both are still sorely needed.

 

For AI we have clever programs that appear to possess some intelligence but are entirely predictable (though somewhat more difficult to predict than in 1976), and for any meaningful parallel processing we still have to have a person write the program to work in a parallel fashion (e.g., gigantic matrix inversion by parts, circa 1970) vs. computer software figuring out which parts of a process are parallelizable and doing that automatically.

 

This latter is extremely important as we seem currently stuck with processors that top out around 3 GHz. And yes, I know that multicore processors are helpful when running multiple applications at once.

 

Terry D.

 


  • Moderators
Sorry. I hear comments like this so often from people on this forum about younger people being dumber that I can't tell whether anyone is joking anymore.

 

After teaching bright young grad students for years I'd never make that statement! :eek:

 

I do think our culture is dumbing down a bit, though.

 

Terry D.

 


  • Members
... for any meaningful parallel processing we still have to have a person write the program to work in a parallel fashion (e.g., gigantic matrix inversion by parts, circa 1970) vs. computer software figuring out which parts of a process are parallelizable and doing that automatically.

 

This latter is extremely important as we seem currently stuck with processors that top out around 3 GHz. And yes, I know that multicore processors are helpful when running multiple applications at once.

I bet you'd be surprised how many applications are multithreaded these days. Rather than take the "parallel from the get-go" approach to thinking about and building parallel algorithms, we've taken a more circuitous (but far more practical) path of starting with mere timesharing for most programs, while embedded programmers (like me) worked on multitasking using real-time operating systems. The synthesis of the two disciplines led to the "threaded" model for UNIX applications, where we could write a single program but split it into multiple threads that could run concurrently (usually sharing the same processor), while coordinating with each other and sharing the same memory.
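
(To make that concrete, here's a minimal sketch of that threaded model using POSIX threads. It's a toy, and the names are just made up for illustration, but it shows the shape of it: one program, two threads running concurrently, both touching the same memory, and the main thread waiting for them to finish before it reads the results.)

#include <pthread.h>
#include <stdio.h>

/* Shared memory: both threads write into this array. */
static double results[2];

/* Toy worker: each thread fills in its own slot (a stand-in for real work). */
static void *worker(void *arg)
{
    int slot = *(int *)arg;
    results[slot] = (slot + 1) * 100.0;
    return NULL;
}

int main(void)
{
    pthread_t threads[2];
    int ids[2] = { 0, 1 };

    /* One program, two concurrent threads sharing the same memory. */
    for (int i = 0; i < 2; i++)
        pthread_create(&threads[i], NULL, worker, &ids[i]);

    /* Coordinate: wait for both threads before reading their results. */
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);

    printf("results: %f %f\n", results[0], results[1]);
    return 0;
}

(Compile with something like gcc -pthread; the same pattern scales to however many worker threads the job calls for.)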

 

Later, multi-core and multi-processor systems added the ability to farm separate threads onto different cores or processors. Our DAWs all do that today, thank goodness, or it'd take a lot more processor speed to do what we do (for those of us who use virtual instruments or process-heavy effects, and lots of tracks).

 

Now, it would be cool to completely rethink how we program computers and design new languages that take advantage of all the parallelism that's possible. For example, if I write "x = a*b + c*d" it's easy to see that a*b could be calculated in parallel with c*d. But as it turns out, not only does this raise a lot of technical challenges (at the language definition level, programming practice level, operating system level, and computer architecture level), it doesn't give us much benefit until we have thousands and thousands of processors.
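
(A quick hypothetical sketch of just that example, again with POSIX threads, to show why it isn't worth it at this granularity: a*b and c*d really do get computed on separate threads, but creating and joining the threads costs many thousands of times more than the two multiplications they "save".)

#include <pthread.h>
#include <stdio.h>

struct product { double x, y, result; };

/* Each thread computes one of the independent products in x = a*b + c*d. */
static void *multiply(void *arg)
{
    struct product *p = arg;
    p->result = p->x * p->y;
    return NULL;
}

int main(void)
{
    struct product ab = { 2.0, 3.0, 0.0 };
    struct product cd = { 4.0, 5.0, 0.0 };
    pthread_t t1, t2;

    /* The two products are independent, so in principle they can run in parallel... */
    pthread_create(&t1, NULL, multiply, &ab);
    pthread_create(&t2, NULL, multiply, &cd);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* ...but the thread setup and teardown dwarfs the two multiplications. */
    printf("x = %f\n", ab.result + cd.result);
    return 0;
}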

 

We'll get there. The main reason we're not exploiting parallelism as much as possible is that we'd get diminishing returns. We know what the problems are and academics have presented a lot of ideas on practical approaches.

 

So far, only a few applications benefit from massive parallelism. Graphics processing is a great case in point: it's an application that needs it, and it's how GPUs work! It works so well that guys who do massively parallel math use the GPU rather than the CPU. (For example, bitcoin mining and codebreaking.)

 

Other massively parallel applications include weather and climate models, and stuff like SETI (the Search for Extraterrestrial Intelligence). Some of these take advantage of all those idle home computers out there: you can sign up and allow your PC to get used for science, so problems get worked out using many thousands of volunteer computers.

 

And the latest deal in parallelism is "The Cloud" that we've all been hearing about and using. All this virtualization is a (gosh darn complex) way of setting up lots and lots of identical resources that can be used by whatever needs them at the time, which provides incredible flexibility and lower cost. What's cool about this approach is that it allows you to use one thing as many or many things as one -- it doesn't care! It's like loading trains using liquids rather than lots of oddly sized boxes. Let the stuff flow and fill the available space.

 


