AI May Be Biggest Event in Human History -- And The Last (Stephen Hawking)


Recommended Posts

  • Members

Ray Kurzweil has some thoughts on this, as well. And they are not exactly utopian.

 

I like this from the Independent article above:

 

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.

  • Moderators

I think on this Hawking is wrong, as he is on quite a few things if you pay attention to his prognostications over time.

 

AI coming soon? They (we) were working on AI when I was studying computer science back in 1972. That's 42 years ago, and how far have we really come? Siri is primitive and her mistakes are legendary. A computer that wins at chess or Jeopardy isn't AI; it's just a finely tuned program. That talking-head program that corporations use for customer support is a joke, and for a while it was proclaimed cutting-edge AI. And what about the "AI" that answers the phone when you call because your cable is out or your Viagra prescription wasn't refilled properly? Most people just start angrily pushing the zero key repeatedly until they get placed in a queue for a person.

 

Hell, we've hit a barrier in CPU speed and desperately need software that will allow meaningful parallel processing over four or more cores, and we can't even manage that with AI.

 

Today, as in 1972, the programs that appear to be AI are nothing more than clever programs written by people. To date, we have completely failed to produce an actual AI that is more than trivial.

 

Terry D.


  • Members

Neal Stephenson, the writer (Snow Crash, Cryptonomicon, etc.), has some interesting things to say about AI. His basic take is that human brains do not think in the same way that computers "think," and he points to the human activity we call "intuition" as an example of a mental ability that cannot be recreated with zeroes and ones, however complex and powerful. We understand perfectly how computers process information, but we have little understanding of how our own brains work. In this vacuum of understanding, imagination has a lot of room to roam freely, but probably not accurately.

 

Personally, I sense a bit of hubris in the way we quickly assume that, oh yeah, we can make "minds": sure, look, the program acts like a human here, see it go? And in the reductionism that feeds this easy assumption: that behavior is all, and the "inside" or subjective side of things is irrelevant or illusory, etc. That's where I feel the limitations of our knowledge about minds are pointed up: we understand subjective reality about as well as my cat understands what I'm doing when I read a book.

 

nat whilk ii


  • Members

I'm not the best one to determine how far along we are with AI. However, I would agree with Knobs and Nat that we are most likely farther from developing true AI, in the sense Hawking speaks of, than some scientists believe. I do like that people are thinking ahead about the potential issues, and I find the whole thing fascinating.


  • Moderators

There have been all sorts of approaches to AI. Two that I've worked on extensively for my job are expert systems and genetic algorithms. Both are AI of a sort.

 

About 10 years ago I wrote an expert system for evaluating video imagery of damaged pavement sections. Basically it "looked" at stills from the video, normalized the lighting for changes over time, converted the raster scan imagery to vectors, connected the cracks, and decided whether the cracks were longitudinal (fatigue cracking) or transverse (stress cracking).
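For flavor, the final classification step might look something like this toy sketch; the data layout, the length-weighted averaging, and the 45-degree threshold are all made up for illustration, not the real system's rules:

```python
import math

# Toy sketch: classify a vectorized crack by orientation.
# A segment is a pair of (x, y) endpoints, with x running
# along the direction of travel.

def crack_angle_degrees(segment):
    """Orientation of a segment: 0 = along the road, 90 = across it."""
    (x1, y1), (x2, y2) = segment
    a = abs(math.degrees(math.atan2(y2 - y1, x2 - x1))) % 180
    return min(a, 180 - a)  # fold to [0, 90]

def classify_crack(segments, threshold=45.0):
    """Label a connected crack using the length-weighted mean angle.

    The 45-degree cutoff is an illustrative assumption only.
    """
    total, weighted = 0.0, 0.0
    for (x1, y1), (x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        weighted += crack_angle_degrees(((x1, y1), (x2, y2))) * length
        total += length
    mean_angle = weighted / total
    return "longitudinal (fatigue)" if mean_angle < threshold else "transverse (stress)"

# A crack running mostly along the wheel path:
print(classify_crack([((0, 0), (10, 1)), ((10, 1), (20, 0))]))
```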

 

What made it an expert system was that multiple experts watched a graphic display as the program connected things, enhanced things, and characterized the cause of each crack. If the expert watching disagreed, he could override the software. Each time he did that, the program evaluated a maze of trigonometry and regression analysis and noted the tendencies of each expert along with possible reasons. Periodically it asked the expert questions about why he made the decision he did, sometimes even pointing out a previous decision that seemed inconsistent with the current one and overlaying the two images for him to reconsider.

 

Eventually, the program built a rule base for each expert AND a consensus rule base over all experts, so that an untrained evaluator in the field could decide on the spot what sort of damage the cameras were looking at and adjust the accelerated testing procedure to optimize the test. That saved the DOT a lot of money, made a nice paper for NAS, and earned me an innovation award from the sponsor.

 

I'm proud of that system, but is it an AI? I don't think so. Though it officially is, and though it mimics the thought process of several very smart and experienced people, it doesn't think for itself.

 

Genetic algorithms were in vogue some years ago; I used the approach to optimize a statewide maintenance strategy. GA works like nature does: you generate a large number of random solutions, you combine them "genetically," you apply random mutations, and you put them through a natural-selection-like process, a survival-of-the-fittest type thing where the survivors get to combine and "breed" again. You iterate on this process until you reach an acceptable solution or you get tired of iterating.
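In toy form the whole loop fits in a few lines. This sketch maximizes a made-up fitness function; the population size, mutation rate, and encoding are arbitrary illustrative choices:

```python
import random

def fitness(solution):
    """Toy objective: prefer gene lists summing close to 42."""
    return -abs(sum(solution) - 42)

def crossover(a, b):
    """Combine two parents "genetically" at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(solution, rate=0.05):
    """Apply random mutations, gene by gene."""
    return [random.randint(0, 10) if random.random() < rate else g
            for g in solution]

def genetic_algorithm(pop_size=100, genes=10, generations=200):
    # 1. Generate a large number of random solutions.
    population = [[random.randint(0, 10) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Survival of the fittest: keep the better half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # 3. Survivors "breed" (with mutation) to refill the population.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(genetic_algorithm())  # a gene list whose sum is at or near 42
```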

 

Is that AI? Well, it's not just similar to nature; it's also similar to how we think, especially creative thinking. So it's maybe PART of an AI, and it definitely works.

 

I could go on and on about AI (probably I've already gone on too much) because it's the Holy Grail of computer science. I wrote a conversational AI my freshman year in college. I think it's better than the ALICE chatterbot that everyone has seen, but mostly because I think ALICE sucks horribly.
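To show how shallow the trick can be, here is a toy keyword-matcher in the ELIZA tradition. It is purely illustrative and has nothing to do with my old program's or ALICE's actual rule sets:

```python
import random
import re

# Toy ELIZA-style chatterbot: regex pattern -> canned responses.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bbecause\b", ["Is that the real reason?"]),
    (r"\?$", ["Why do you ask?", "What do you think?"]),
]
DEFAULT = ["Tell me more.", "I see. Go on."]

def reply(text):
    for pattern, responses in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT)

print(reply("I feel like AI is overhyped"))
# -> e.g. "Why do you feel like AI is overhyped?"
```

The whole "intelligence" is a list of patterns someone typed in; nothing inside has any idea what it is saying.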

 

Terry D.

 


  • Members

I agree that we're still a long way off.

 

Remember when fusion power was only 10 or 20 years off? Like, in 1960? They still say fusion is 10 or 20 years off, but today they have a concrete list of the engineering problems that need to be solved, and virtually no theoretical problems that *have* to be solved (though solving some might make it better or sooner). We actually have controlled fusion and understand a lot about it.

 

Contrast that with AI, where we still don't really know what problems we need to solve in order to understand it. Meanwhile, though, we're accumulating a lot of "low-level AI-type stuff," like the things Terry mentioned, and a lot more, such as autonomous bipedal motion, goal-oriented behavior, etc. Now, none of these things works remotely the way the mammalian cerebral cortex works (and we're just beginning to get interesting glimpses into how THAT works; if you're interested, see Kurzweil's 2012 book How to Create a Mind -- you also get a good idea from that book of how much we do NOT know!)

 

For AI, there is no end in sight. However, it's quite possible that we might evolve our way towards it almost by accident, as we put together more and more "nearly intelligent" systems. (After all, that approach worked for Nature!) If that's what does happen, we'll get intelligence without much understanding of how it works. Hello, Skynet! :-)

 

I also object to "the last event" in human history. The last event will be the complete demise of humanity. After that, there will be no human history. But until then, history will continue, despite all rumors to the contrary. As Heinlein put it, studying the past should teach us to expect a series of complete surprises.


  • Members

There's also the fact that the human brain is not a standalone "thinking machine"; it's an integrated organ in a complex system, with goals that cannot be reduced to problem-solving or solution suggestions or probability estimates. So standalone machines that emulate the abstracted activity we call "thinking" are emulating not human brain activity as it really exists, but an abstract concept of certain modes of human brain activity and thinking.

 

So when a computer can win at chess, great! But it's the ultimate "idiot savant," to use the old regrettable phrase. I don't feel bad applying it to a machine, 'tho. :)

 

nat whilk ii


  • Members

Great point.

 

These are all tiny things, a few of the billions of things a brain does. Don't get me wrong, what scientists are doing impresses me greatly. But we have a long way to go to truly create something as complex as a human brain, or something that is truly AI.


  • Members

Well, as I read this interesting discussion, I harken back to Jeff Goldblum's comment in Jurassic Park: everyone is asking "Can we do this?" and not "Should we do this?"


  • Members

AI is mostly what we imagine something is. It's our perception. Anthropomorphism. It's like the beautiful girl we had a crush on from a distance in high school: we imagined all these wonderful things about her, but it was really just our own beautiful mind we had a crush on, 'cause she wasn't that deep or sweet or nice when we finally got to know her.

 

Or probably a better example is how people give pets certain human attributes, like unconditional love. Dogs don't have love like humans. They have instinctive pack behavior that we imagine is some deeper affection on a scale with the human capacity for thoughtfulness, faithfulness, sacrifice, etc.

 

Real AI that thinks, feels, and has moral conviction is all in our heads, and it would not exist without humans there to imagine they're observing it.


  • Members

Good point, although scientists have been very wrong when assuming that all anthropomorphisms are wrong by default. For example, because mammal brains get smarter by increasing cortex area through wrinkling, and bird cortexes are smooth, scientists concluded that birds can't be very bright, and that what owners say about how smart their parrots are must be anthropomorphic. Well, studies of parrots show that they really are remarkably smart -- probably not as smart as their owners think, but far smarter than scientists had given them credit for based on theory.

 

I suspect that dogs feel something that's a lot like what we feel when we love. I wouldn't be surprised if it's even stronger! But certainly, it's not the same thing, and your point is valid that every new thing that comes along looks like "The Next Great Thing". We do that with new politicians, too!

 

Finally, don't expect AI to necessarily be much like human intelligence. While it's an admirable goal to achieve human-like intelligence, it's not at all clear that the way humans think is the only way to achieve what we'd all agree is true intelligence. The more we learn (and the closer we get to achieving AI), the more we'll develop a vocabulary for identifying the different things we attribute to sentience, and we'll be able to compare and contrast different kinds of sentience. Imagine asking an AI, "Well, how does that feel?" Some AIs might not have any feelings. Others might. I'm confident that it'll turn out to be complex, with new kinds of complexity we hadn't foreseen, as well as simplifying ideas: relatively simple concepts that turn out to have great predictive power. That's what learning is all about, isn't it?


  • Members

Terry - if you're still following this thread, and seeing as how you know more about this than probably all of us put together to some power, can you recommend some reading for reasonably capable readers of popularized science writing?

 

Just not harder to understand than Brian Greene's forays into string theory is all I ask... :)

 

nat whilk ii


  • Members
People will rarely ask "should we do this?" Science and technology will march on regardless. It's up to others to make the moral choices.

 

Unfortunately, I agree with you: the "doers" just do and don't think about morals or consequences. I see that as one of the failings of science and technology.


  • Members

Do the simplest life forms, like germs, have emotions, have feelings, or think? They do react to stimuli and are alive, but they have only the most primitive instincts. Maybe they are smarter than the largest computers we have today. And it may be that the collective internet, connecting many computers with each computer acting as an individual cell, will evolve into something as simple as a germ some day. But it's not going to be organic, nor have organic traits and needs. It's likely to have a very different start than we suspect it will, and to be as foreign as an alien being would be.


  • Members

 

There are lots of people in the science and technology fields who are deeply concerned about the ethics involved in new discoveries and new technological capabilities. Added to that are the many watchdog groups that set up a chorus of howls on a regular basis when ethical issues come to the fore.

 

That's not to say there is not a huge impetus to the onward march of science and technology that seems to run ahead of society's ability to assess and control new capabilities.

 

If anything seems to be the trend, it's that the dismal and mundane god of the present age, The Economy, is the entity that is increasingly perceived as capable of doing no wrong as long as it's growing and expanding. Scientific research has been taking a lot of hits as a drain on The Economy. Everything seems to have to justify itself as a friend or foe of The Economy. People seem far more willing to raise ethical questions regarding The Economy than they are willing to raise ethical questions regarding science and technology. At least from my vantage point, being an amateur observer always ready with an opinion. :)

 

nat whilk ii

 


  • Members

 

Well, a thoughtful answer for sure. However, I am not 100% convinced that people in science and technology are all that ethical/moral. Although I won't jump up on my soapbox, I would use anti-depressant medications as an example, along with the diagnosis of some mental illnesses. This is my field, and seeing it daily from the inside convinces me that more people are doing and fewer are disputing, and often (as you said) the Economy plays a huge role in the direction the field is taking. It is more difficult for me to extrapolate to other fields in science (or quasi-science, as most social sciences use a quasi-experimental design), but I strongly suspect it is the same.


  • Members

 

Well a thoughtful answer for sure. However, I am not 100% convinced that people in science and technology are all that ethical/moral.

 

Some are. Some aren't. Just like anybody else. Some chase money. Some don't care. Some simply want to continue advancing technology without much thought, trusting that others will sort out what is ethical and what is not. Some feel it's not their place to do that sorting.

 

Who ultimately determines what is morally right and wrong? Well, in each country it often falls to the courts. Court decisions and laws are a sort of mirror of our collective morality. It's not a perfect mirror, but that's in effect what it is.

 


  • Members

 

Once again, I agree with you and know some very good practitioners. But in my little area of expertise I would say a minimum of 90% of the practitioners are working the Economy angle. As that is the angle pushed by institutions and most licensing bodies, etc., the 90% can feel pretty justified. But I do talk to others who just ask, "What the heck are we doing?" Of course, I am in the minority and could be totally off base. Given the way I think, someone with an advanced degree who makes decisions that have a substantial effect on their patients' lives should be thinking about morality as well as the technical aspects of their practice. Again, I see this from the practicing angle and less so from the research angle.

 

Interesting questions about who determines morality. I think it is rarely the courts that determine morality, but that is just a personal opinion. Anyway, as is often the case, food for thought.


  • Moderators
Terry - if you're still following this thread, and seeing as how you know more about this than probably all of us put together to some power, can you recommend some reading for reasonably capable readers of popularized science writing?

 

Just not harder to understand than Brian Greene's forays into string theory is all I ask... :)

 

nat whilk ii

 

I thought about your request for a few days before replying, because it made me think about a lot of things I've not thought of in a while.

 

I've never read a layman's book on chemistry, computer science, statistics, operations research, electrical engineering, physics, or materials science because I was busy reading college texts about those subjects so I could pass the exams. I've never read a layman's book on expert systems or genetic algorithms because, though they didn't exist while I was in college (except perhaps as a twinkle in some mathematician's eye), when I DID become aware of them to use them in my research there weren't any layman's books about them yet.

 

What I have read recently, out of curiosity, is a lot of Wikipedia articles on topics I have a great deal of expertise in. My hypothesis (probably quite naive) is that if the articles I have the knowledge to vet are solid, then perhaps I'm safe in relying on the many other articles that are useful to me but I'd be helpless to spot flaws in. For what it's worth, I've read many Wiki pages on subjects I profoundly understand and I can find no error, only sometimes a difference of opinion with the author(s) or a minor frustration that perhaps I could have explained it better than they did.

 

Such is the case with these two articles that I recommend you read:

 

http://en.wikipedia.org/wiki/Expert_system

 

http://en.wikipedia.org/wiki/Genetic_algorithm

 

There is also this about AI in general, which is pretty straightforward and clear:

 

http://en.wikipedia.org/wiki/Artificial_intelligence

 

I have a wonderful book on my bookshelf by Cohn et al., circa the 1990s, that is widely accepted as the "Bible" of expert systems in my current field of research. Like layman's books, "Bibles" tend to come later in the cycle of adopting a process or procedure. This large report was published by the Transportation Research Board, which is an arm of the National Academy of Sciences and the publisher of the journal in which I most often publish. I think this one is written particularly well and straightforwardly, which is often the case when engineers write something, since that's the way they tend to think.

 

I thought to scan some pages from my dog-eared old copy, but then I realized that was old-fashioned thinking and surely it must exist on the Internet somewhere. It does, but the TRB site wants money for it, and Amazon only has a used copy without "look inside" (how freaky is it that Amazon sells scientific works now!), but somehow Google has this:

 

http://tinyurl.com/owhbkrv

 

On a more personal note, the significant successes I've had in my career as a research scientist have come without exception from reading outside my own field. I read voraciously (as I suspect you do!), everything from science fiction and layman's science fact (both handily available in Analog magazine, which fills an entire wall of my reading room) to research papers published in other fields. No area of science exists in a vacuum, and many unsolved problems in one area must await a breakthrough in another. For this reason I chose my field of materials science, as we're the guys frequently holding everyone else up!

 

One example would be a space elevator, a dirt-cheap way to get stuff to orbit, no rockets required. We're nearly good to go! All that's needed is a material that's strong enough and light enough to anchor to the ground and to a geostationary satellite. Unfortunately, we don't have that material yet. What we do have isn't strong enough to support its own weight.
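To put rough numbers on "support its own weight": a hanging strand snaps once it exceeds its breaking length, roughly tensile strength divided by (density times gravity). This back-of-the-envelope sketch ignores taper designs and the fall-off of gravity with altitude, both of which help a real design, and the material figures are round textbook values:

```python
# Breaking length L = strength / (density * g), a crude yardstick.
G = 9.81  # m/s^2

materials = {
    # name: (tensile strength in Pa, density in kg/m^3) -- round values
    "high-strength steel": (2.0e9, 7850),
    "Kevlar": (3.6e9, 1440),
    "carbon nanotube (theoretical)": (6.0e10, 1300),
}

GEO_ALTITUDE_KM = 35786  # geostationary altitude, km

for name, (strength, density) in materials.items():
    breaking_length_km = strength / (density * G) / 1000
    print(f"{name}: ~{breaking_length_km:,.0f} km of self-support "
          f"(an elevator tether spans ~{GEO_ALTITUDE_KM:,} km)")
```

Steel comes out around 26 km, Kevlar around 250 km, and even theoretical nanotubes only a few thousand km, which is why the materials people are the holdup.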

 

A modest example of my own work that bore fruit was connecting work done for the trucking industry to the problem of concrete cracking during curing and bridges icing over. The former costs the government a fortune and the latter kills people.

 

In the trucking industry, perishable refrigerated cargo (food, medicines, blood products, etc.) is shipped across the country. Refrigerating cargo costs fuel, which is money. Unfortunately, some truckers like to save a little money by turning up the thermostat (or even turning off the fridge) between the source and the destination. That created the need (and the financial incentive) to develop this product:

 

http://www.maximintegrated.com/produ...tton/ibuttons/

 

It's a self-contained temperature and humidity logger with a computer inside, flash memory, a clock, a 10-year battery, a network interface, and encryption security, and it's mostly empty! It's the size of a stack of a couple of dimes, and it can be fastened to a pallet so the entire history of the shipment, with time and date, can be downloaded to determine if the product has spoiled.

 

I read about that, and I created this (a maturity recording system for concrete, to determine when a new pavement could be opened to traffic):

 

[Image: iButtonStick.jpg]
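The idea, roughly: the logger rides inside the curing concrete, and a maturity index such as the classic Nurse-Saul temperature-time factor (the basis of ASTM C1074) maps the temperature history to strength gain. A minimal sketch with a made-up opening threshold, not the actual system:

```python
# Nurse-Saul maturity: M = sum((T_avg - T_datum) * dt) over the log.
# The datum temperature and opening threshold below are assumptions;
# real jobs calibrate them against strength tests for the actual mix.

DATUM_C = -10.0  # common datum temperature, deg C

def maturity_index(log, datum=DATUM_C):
    """log: list of (hours_since_placement, temperature_C) samples.

    Returns Nurse-Saul maturity in degree-hours (C-h).
    """
    m = 0.0
    for (t0, temp0), (t1, temp1) in zip(log, log[1:]):
        avg = (temp0 + temp1) / 2.0
        m += max(avg - datum, 0.0) * (t1 - t0)
    return m

OPEN_THRESHOLD_CH = 3500.0  # made-up value for illustration

log = [(0, 25.0), (6, 35.0), (12, 40.0), (24, 38.0), (48, 30.0)]
m = maturity_index(log)
print(f"maturity = {m:.0f} C-h, open to traffic: {m >= OPEN_THRESHOLD_CH}")
```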

 

 

And also this (a remote detection system for clear ice on roads, leading to self-deicing bridges):

 

[Image: iButtonsIce.jpg]

 

Now you're probably thinking I've wandered far from the topic of Artificial Intelligence, but think about this:

 

THIS is exactly the process that any real AI must replicate. It must take disparate ideas from a huge knowledge base and COMBINE them into something NEW. It must then work out the details of combination to solve a useful problem. It must do that on its own (not pre-programmed for a specific task in detail), and it must evaluate the efficacy of its solution within a reasonable deadline. It must be open to new facts and feedback from a pilot implementation of its idea and be able to improve it.

 

Just like we humans do - though not necessarily by the same process.

 

Terry D.


  • Members

Wow, Terry - thanks so much for all that info and the links!

 

Looks like the old-fashioned engineer needs to watch his/her backside, 'cause this quote from you:

 

must take disparate ideas from a huge knowledge base and COMBINE them into something NEW. It must then work out the details of combination to solve a useful problem. It must do that on its own (not pre-programmed for a specific task in detail), and it must evaluate the efficacy of its solution within a reasonable deadline. It must be open to new facts and feedback from a pilot implementation of its idea and be able to improve it.

 

sounds just like a posting for some engineer's job.

 

I have gotten the feeling from reading this and that, that materials science is probably the science that is going to change our world the most visibly in the near term. The 3D printer is one example in development, but I've run across a lot more stuff that could be just as amazing and revolutionary. Science Friday had a piece recently on a type of concrete in development that can function as a semiconductor. Talk about the ability to make "smart" objects... wow!

 

nat whilk ii

 


Archived

This topic is now archived and is closed to further replies.
