Forumchem - Forum with AI (ALICE BOT & HAL9000) and TTS

More difficult for us, easier for you

All times are UTC





Page 3 of 39  [ 390 posts ]
 Post subject: Singularity Summit 2009: Open The Pod Bay Door, HAL
PostPosted: Mon Oct 05, 2009 7:43 pm 
Offline
User avatar

Joined: Fri Apr 03, 2009 1:35 am
Posts: 2692

Ray Kurzweil's concept of the Singularity rests on two axioms: that computers will become more intelligent than humans, and that humans and computers will merge, allowing us access to that increased thinking power. So it only makes sense to begin the conference with discussions of those two fundamental concepts. No one disputed the emergence of intelligence beyond our own, but they did give me plenty of reasons to worry about how that process might take place.


According to Anna Salamon, a former NASA researcher who now works for the Singularity Institute for Artificial Intelligence, which hosts the conference, artificial intelligence greater than our own is inevitable and dangerous. Salamon argued that biological brains have finite intellectual capacity. Just as a goldfish can't appreciate opera and a cat can't learn quantum mechanics, so too will humans soon confront problems beyond the comprehension of our slimy, mortal brains.


She believes we will create supercomputers to solve those problems for us. Just as relatively weak human muscles can work together to create stronger lifting machines like cranes, relatively stupid human brains can design vastly more powerful computer minds. Unfortunately, Salamon worries that if humans and AI have divergent goals, we could find ourselves in competition with the AI for the resources needed to achieve those goals. And when you compete with something vastly smarter than yourself, you lose. She stressed that ensuring humanity and AI share the same goals requires a level of care and responsibility greater than even our stewardship of nuclear weapons technology.


To head off the Skynet takeover, Salamon advocates starting now to ensure that positive, human-assisting missions get hardwired into the basic architecture of artificial intelligence.


But according to philosopher Anders Sandberg, the nature of artificial intelligence development may complicate the embedding of those fail-safes. Sandberg believes that engineers will have to base their first attempts at AI on the only current example of natural intelligence: the human brain.


And if the first artificial intelligence has to take the form of a human brain, it has to take the form of a particular human brain. Sandberg noted that the first artificial brain, as a copy of a specific human brain, would necessarily contain elements of the personality of the test subject it copied, personality traits that could become locked into all artificial intelligence as the initial AI software proliferates.


Based on my experience with people who volunteer for scientific tests, this means the first artificial intelligence will most likely have the personality of a half-stoned, cash-strapped college student. So if both Salamon and Sandberg prove right, I think avoiding destruction at the hands of artificial intelligence could mean convincing a computer hardwired for a love of Asher Roth, keg stands, and pornography to concentrate on helping mankind.


Take home message: as long as we keep letting our robot overlord beat us at beer pong, we just might make it out of the Singularity alive.


And remember to check back soon for more Singularity Conference 2009 updates.




Source


 Post subject: New from Boeing: Flying Bot Swarms You Control With Body Language
PostPosted: Tue Oct 06, 2009 2:30 pm 
Offline
User avatar

Joined: Fri Apr 03, 2009 1:35 am
Posts: 2692
Human operators could use gestures to direct clouds of robot drones

Robot swarms could someday hover, spin, and attack in response to a simple gesture or graceful pirouette from a human operator. And yes, Boeing has filed a patent on that future vision.

"The method may involve defining a plurality of body movements of an operator that correspond to a plurality of operating commands for the unmanned object," Boeing notes in its patent filing. "Body movements of the operator may be sensed to generate the operating commands."
Boeing goes further by laying claim to specific body motions for specific commands. A nod of the human operator's head could select one robot out of the flying unmanned swarm. A circular hand motion along a certain plane could order another robot to begin moving from a stationary position. An operator might even select a certain group of drones with a pointing motion that defines a three-dimensional conical area.
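Just to make that concrete, here is a minimal sketch (in Python) of what a gesture-to-command dispatch table might look like. The gesture names, the command set, and the targeting scheme are all assumptions for illustration, not language from Boeing's filing.

```python
# Illustrative sketch only: map recognized operator gestures to swarm commands.
# Gesture vocabulary and command names are assumed, not taken from Boeing's patent.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    action: str   # e.g. "select" or "start_moving"
    target: str   # "one_drone", "group", or "all"

# Dispatch table: recognized gesture -> command handed to the swarm controller
GESTURE_COMMANDS = {
    "head_nod": Command("select", "one_drone"),            # a nod picks out a single robot
    "circular_hand_motion": Command("start_moving", "one_drone"),
    "pointing_cone": Command("select", "group"),           # a point sweeps out a 3-D conical selection volume
}

def handle_gesture(gesture: str) -> Optional[Command]:
    """Translate a sensed gesture into a swarm command, or None if unrecognized."""
    return GESTURE_COMMANDS.get(gesture)

if __name__ == "__main__":
    for g in ("head_nod", "pointing_cone", "shrug"):
        print(g, "->", handle_gesture(g))
```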

The patent filed early this month also includes reflective markers, a motion capture system, and a wireless transmission system for human-to-drone commands. That points to a future where warfighters and civilians can interact more naturally with their friendly neighborhood swarm.

We at PopSci can readily imagine some entertainment spin-offs from such technology as well, given the popularity of motion-control technology with the current generation of video game consoles. Microsoft's upcoming Project Natal promises a full-body motion controller experience for Xbox 360 games, and Sony has gone a step farther by filing a patent on emotion-controlled gaming.

Could even a Gamer-style future with battle bots remain far behind such developments? We sure hope not, because ascribing any sort of predictive power to that movie seems too terrible (and aesthetically wrong) to contemplate.

via Baltimore Sun


source


 Post subject: Singularity Summit 2009: Just How's This Thing Gonna Work, Anyways?
PostPosted: Wed Oct 07, 2009 11:54 am 
Offline
User avatar

Joined: Fri Apr 03, 2009 1:35 am
Posts: 2692

Since I got here, I've been wondering what exactly the Singularity's going to look like. How are we going to create artificial intelligence, and when we do, how are we going to integrate ourselves with this advanced technology? Luckily, NYU philosopher David Chalmers was there to break it all down.


Contradicting this morning's talks, and solving the problem of complications due to personality quirks from a copied brain, Chalmers rejected the idea of brain emulation as the path to super intelligent AI. Instead, Chalmers thinks that we have to evolve artificial intelligence by planting computer programs in a simulated environment and selecting the smartest bots. Basically, set up the Matrix, but with only artificial inhabitants. The process may take a while, but he stressed that human intelligence serves as proof of concept.
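For a cartoon version of that "simulate, test, and keep the smartest" loop, here is a toy evolutionary-selection sketch in Python. The agents, the fitness function (matching a fixed target pattern), and the mutation scheme are all invented for illustration; a real evolve-an-AI experiment would look nothing like this.

```python
# Toy illustration of evolve-in-a-sandbox selection; all details are invented.
import random

def fitness(agent):
    """Score an agent (here just a list of weights) on a stand-in task:
    how closely its weights approximate a fixed target pattern."""
    target = [0.1, 0.5, -0.3, 0.9]
    return -sum((a - t) ** 2 for a, t in zip(agent, target))

def mutate(agent, rate=0.1):
    return [w + random.gauss(0, rate) for w in agent]

def evolve(generations=200, pop_size=50):
    population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)     # keep the "smartest" bots
        survivors = population[: pop_size // 5]
        # refill the population with mutated copies of the survivors
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best agent:", [round(w, 2) for w in best], "fitness:", round(fitness(best), 4))
```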


To ensure that the resultant AI adheres to pro-human values, we would have to set up a leak-proof world where we control what goes in and can prevent any of the artificial consciousnesses from becoming aware of us too early and escaping (essentially, no red pill). We could then adjust this world to favor pro-human traits while routing anti-human tendencies toward extinction.


As Chalmers sees it, the second the artificial personalities become as smart as us, they will emulate our leap by creating AI even smarter than themselves inside their simulated world. Essentially, they will undergo their own, digital Singularity.


This will start a chain reaction that quickly leads to a digital intelligence far greater than anything we ever imagined. Unless, of course, the first AI more intelligent than us uses that additional foresight to realize creating intelligence greater than itself is a bad idea, and cuts off the entire process.


But assuming that AI does manage to get smarter than us, we will have to either integrate with it, coexist with it as a lower form of life, or pull the plug (which may or may not be genocide). Since mass murder is generally frowned upon, and no one wants to be a pet to a machine, Chalmers sees integration as the only way to go. Of course, no higher intelligence would willingly integrate with a lower one (humans don't forsake language and bipedalism to live with wolves), so Chalmers said we'll have to computerize ourselves to meld with the AI system.


To sustain consciousness, he advocates physically replacing one neuron at a time with a digital equivalent, while the person is awake, so as to retain continuity of personality.


What Chalmers did not address, however, is whether the AI would want to meld with a warmongering, greedy, sex-obsessed inferior intelligence like ourselves. If the AI is really that much smarter than us, it might be more like George Wallace than the Borg, insisting on human segregation now, tomorrow, and forever rather than total assimilation. Prejudiced computers lead right back to Salamon's prediction of extinction at the hands of our creations.


Can"t we all just get along!



Source


 Post subject: Singularity Summit 2009: Ten Unanswered Questions For Our Future
PostPosted: Thu Oct 08, 2009 3:36 am 
Online
User avatar

Joined: Thu Apr 02, 2009 11:08 am
Posts: 1804
If you could ask the product of the robot-human merger ten questions, what would they be?

While I undoubtedly learned a lot at the Singularity Summit, the conference's greatest benefit was the questions it didn't answer. Unresolved issues regarding the Singularity have provided a lot of philosophical grist for my admittedly limited intellectual mill, and working through those problems has been as exciting as any talk I saw at the Summit.

To wrap up our coverage of the Singularity Summit, I'm going to count down my ten most vexing unanswered questions about Kurzweil's theoretical baby, the eventual merger of human and artificial intelligence, and I am interested to hear any opinions, questions or (hopefully) answers you all have about any or all of these still unexplained facets of our future.

10. Is there just one kind of consciousness or intelligence?

During his talk, Ray Kurzweil, the pioneer of the concept of the Singularity, referred to intelligence as prediction. It evolved so that humans could look at an animal on the savanna and guess where it would go next. Clearly, computers have already surpassed us in predicting a wide range of events (chess moves, the weather, economic trends, etc.).

Of course, we know that people's ability to predict outcomes in different fields, say, whether my girlfriend will like this or that flower better, varies so widely that these abilities effectively act as different forms of intelligence.

Assuming there are different forms of intelligence, how do we know machines won't take on a new one that we won't recognize as intelligence? And if there are different kinds of intelligence, are there different kinds of consciousness, too? Could a machine arrive at a new kind of consciousness that we don't recognize, leading us to miss the Singularity?

9. How will you use your digital intelligence to kill us all?

A lot of people spent the conference worrying about our eventual extinction at the hands of our automaton creations. But for all that paranoia, no one really explained how a computer program could manage to kill me.

Will it hack into the nuclear missile command and launch all the nukes? Will it crash all the planes? And couldn't we just pull the plug? Someone still needs to explain to me what I have to fear from a being with no physical presence.

8. Are you Tommy? Deaf, dumb, and blind?

When the first artificial brain comes online, how can its first thought be anything other than "holy crap, I'm blind"? A disembodied intelligence in a machine will exist with a serious lack of senses. Maybe it can see and hear, but feel? Doubtful. How does a consciousness that can't feel keep from freaking out? I'd be pissed, and I imagine the first AI will be too. Which leads to...

7. Do you have emotions?

Can AI become depressed? The first one will no doubt be rather lonely. How will being the first (and only) member of a species affect the AI's development and relationships? The first digital consciousness may come into the world like the only Goth kid in a small-town high school: isolated and without anyone who can sympathize. Not really the kind of being I want with access to all our weapons and economic tools.

6. Are humans more similar to your AI construct than we thought?

Jurgen Schmidhuber, an AI researcher at the Dalle Molle Institute for Artificial Intelligence, noted in his talk that the human brain compresses information like a .zip file, and that we differentiate boredom from interest by measuring how much the new information we take in allows us to compress the information we already have even further.

I really thought he was on to something with his description of how the brain handles the new data from the expansion of our personal experiences. Which leads me to wonder: just how computer-like is our brain already? Our brains already run software, of sorts, that results in biologically similar brains producing vastly different personalities. Is it possible the Singularity will occur not because we create machines that resemble the human brain, but because we uncover just how computer-like the human brain is naturally?
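As a very rough illustration of the compression idea (Schmidhuber's actual "compression progress" formalism is considerably more involved), here is a toy Python sketch that uses zlib as a stand-in compressor and asks how much a new observation helps re-compress everything seen so far.

```python
# Toy sketch: "interestingness" as compression progress. zlib is a crude stand-in
# for a learning compressor, so this is only a cartoon of the idea.
import os
import zlib

def compressed_size(data: bytes, zdict: bytes = b"") -> int:
    comp = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(level=9)
    return len(comp.compress(data) + comp.flush())

def compression_progress(history: bytes, observation: bytes) -> int:
    """Bytes saved when the new observation is available as a preset dictionary
    while re-compressing everything seen so far. Bigger savings serve as a crude
    proxy for 'more interesting'."""
    return compressed_size(history) - compressed_size(history, zdict=observation)

if __name__ == "__main__":
    history = (b"the market rose as traders bet on rate cuts; "
               b"the market fell as traders feared rate hikes; ") * 10
    related = b"traders bet on rate cuts and feared rate hikes"
    noise = os.urandom(46)
    print("related observation:", compression_progress(history, related), "bytes saved")
    print("random noise:       ", compression_progress(history, noise), "bytes saved")
```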

5. How much does programming influence your free will?

In the discussions about avoiding a robo-apocalypse, speaker after speaker stressed the need to teach digital consciousnesses human values. And many people wondered why we couldn't just program the robots not to kill us. Well, presumably we would, but once the computer programs achieve self-awareness and free will, couldn't they choose not to follow that programming? Whether it's dieting or monogamy, humans avoid following their programming all the time. What makes us think a sentient program wouldn't similarly disregard its basic urges?

4. Do you have a subconscious?

If AI minds are as complex as human brains, does that mean they will have areas that they cannot understand, control, or access? Are the id, ego, and other elements of our unconscious the consequence of biology or a necessary component of sentience? Can AI have irrational beliefs or psychological problems? If the AI thinks we're its god, or at the very least its creator, could it have an oedipal problem? If so, that might explain why it tries to kill us.

3. Will you actually help us transcend the less pleasant aspects of being human?

As anyone who reads internet comment boards knows, for every one person who uses the web to broaden their horizons and question their prejudices, there are a dozen idiots who use the same technology to spread misinformation about global warming being a hoax, compare Obama to Stalin and Hitler, and ask other idiots for money to help a Nigerian prince. In addition to granting immortality and making everyone nigh-omniscient, won't the Singularity also provide the ultimate avenue for people to disseminate the lust, greed, and hatred humans have pursued for tens of thousands of years? Forget about the AI killing us, I'm still worried about the other humans.

2. Do you care about anything at all?

What"s to say that an intelligence vastly greater than our own won"t uncover the pointlessness of life, become a nihilist, and turn itself off Or, what if it"s so intelligent, it simply doesn"t care about humans Everyone at the conference predicted a very needy AI, but no one could answer why the AI wouldn"t be just as likely to withdraw from humanity as engage it.

1. And finally, what if someone threw a Singularity and no one came?

After her talk, Anna Salamon told me that the Singularity would affect everyone in the world within a span of minutes to a couple of years. As she was telling me that, I thought of these pictures.

Last year, a pilot discovered a previously uncontacted tribe living deep in the Amazon. In parts of South America, Asia, and Africa, there are people whose way of life hasn't changed much in the last 300 years, let alone the last 30. Why would the Singularity be different? Sure, I can imagine people with brain chips plugging into a higher intelligence on the Upper West Side, but how long until that technology makes it to the South Bronx? Or Somalia? Or Afghanistan?

If the Singularity only affects one small group of humans, while the rest either can't afford it or simply don't care to participate, what happens to the transhumanist future the Singularity promises? Doesn't the Singularity just set humanity up for another of the rich/poor, North/South problems it already deals with? Once again, it's the other people, not the robots, that I worry about.

Well, that"s it for our Singularity Summit 2009 coverage. I hope the conference has given you all something to think about, and as always, I can"t wait to hear what you all have to say. Thanks for following these posts, and remember, when the Singularity comes, take the blue pill, you"ll be happier.


source


 Post subject: Gallery: Far-Out New Tech from Japan
PostPosted: Thu Oct 08, 2009 5:45 am 
Online
User avatar

Joined: Thu Apr 02, 2009 11:08 am
Posts: 1804
Thank Japan for sushi, Kobe beef, karaoke and the goods from the annual CEATEC showcase.

It"s not all about singing robots in Tokyo this year. The annual CEATEC tech expo is loaded with the makings of your gadget-geek future.

Most of the more drool-worthy goodies are, of course, only in the prototype or demo stages right now. (Just like IFA before it, it's just a big ol' tease.) For now, think of them as a Far Eastern crystal ball for what we can look forward to at CES come January, when us Yanks get to have some fun.

From a Star Trek translator app to the thinnest laptop screen we"ve ever seen, here is the tech to watch coming out of CEATEC.


source


 Post subject: Polaris Phone Rolls Self to Charger, Keeps an Eye on Users'
PostPosted: Fri Oct 09, 2009 12:21 am 
Offline
User avatar

Joined: Fri Apr 03, 2009 1:35 am
Posts: 2692
title=

Sure, your iPhone may play games, tell you where to eat, and surf the Internet, but can it tell you what you did the other day and how to do it better? Enter the Polaris phone, a new system designed by the giant mobile phone company KDDI and Japan's Flower Robotics.

The Polaris phone/robot is a three-part system that incorporates your phone, your television, and the robotic sphere seen above. The sphere contains speakers for the phone"s music, and wheels that roll the sphere to the closest power source to charge the phone. The sphere"s dock also links up with your television to display the detailed data about your life and behavior that the phone records (see below for a picture of a TV displaying the data).

And what data does it record? Apparently everything. The phone follows where you go, who you email, what you buy, who you call, and so on. Aside from telling you what happened after the fifth shot of tequila the other night, the phone also analyzes your habits for patterns and gives out advice based on the data it collects.

The whole project is still in the prototype phase. The specifics of the data collection system, and the navigational skill of the sphere, need more work. However, the companies hope to have a commercial version of the system available by next year. Until then, I'll have to continue relying on my friends, not my rolling phonebot, to tell me what I did after blacking out drunk.

via Pink Tentacle


source


 Post subject: Darwin Trumped: Robots Evolving at Warp Speed - A Galaxy Ins
PostPosted: Fri Oct 09, 2009 2:16 pm 
Offline
User avatar

Joined: Sat Nov 08, 2008 1:17 am
Posts: 126
“I see a strong parallel between the evolution of robot intelligence and the biological intelligence that preceded it. The largest nervous systems doubled in size about every fifteen million years since the Cambrian explosion 550 million years ago. Robot controllers double in complexity (processing power) every year or two. They are now barely at the lower range of vertebrate complexity, but should catch up with us within a half century."

Hans Moravec, pioneer in mobile robot research and founder of Carnegie Mellon University's Mobile Robot Laboratory.

According to Moravec, our robot creations are evolving much as life on Earth evolved, only at warp speed. By his calculations, by mid-century no human task, physical or intellectual, will be beyond the scope of robots.
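To see how the doubling arithmetic behind "catch up with us within a half century" works, here is a back-of-the-envelope sketch. The absolute numbers (roughly 10^14 operations per second for human-scale performance, about 10^9 for a 2009-era robot controller) are assumptions loosely in line with Moravec's published estimates, not figures from this article.

```python
# Back-of-the-envelope version of Moravec's doubling argument. The absolute
# numbers are assumptions for illustration, not figures taken from this article.
import math

HUMAN_EQUIVALENT_OPS = 1e14   # ~100 million MIPS: a common rough estimate for human-scale performance
ROBOT_CONTROLLER_OPS = 1e9    # ~1000 MIPS: lower-vertebrate range, roughly where 2009 controllers sit

doublings = math.log2(HUMAN_EQUIVALENT_OPS / ROBOT_CONTROLLER_OPS)

for period_years in (1.0, 2.0):   # "double in complexity ... every year or two"
    print(f"doubling every {period_years:.0f} year(s): "
          f"{doublings:.1f} doublings, roughly {doublings * period_years:.0f} years to parity")
```

With those assumptions the answer lands between roughly 17 and 33 years, inside Moravec's half-century window, though it is obviously very sensitive to the starting figures.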

Here is a summary of his educated predictions for the future of robotics up until they can do everything we can do:

2010: A first generation of broadly capable "universal robots" will emerge. These “servant” robots will be able to run application programs for many simple chores. These machines will have mental power and inflexible behavior analogous to small reptiles.

2015: Utility robots host programs for several tasks. Larger "utility robots" with manipulator arms, able to run several different programs to perform different tasks, may follow single-purpose home robots. Their tens-of-billions-of-calculations-per-second computers would support narrow, inflexible competences, perhaps comparable to the skills of an amphibian, like a frog.

2020: Universal robots host programs for most simple chores. Larger machines with manipulator arms and the ability to perform several different tasks may follow, culminating eventually in human-scale "universal" robots that can run application programs for most simple chores. Their tens-of-billions-of-calculations-per-second, lizard-scale minds would execute application programs with reptilian inflexibility.

2030: Robot competence will become comparable to that of larger mammals. In the decades following the first universal robots, a second generation with mammal-like brainpower and cognitive ability will emerge. They will have a conditioned-learning mechanism and steer among alternative paths in their application programs on the basis of past experience, gradually adapting to their special circumstances. A third generation will think like small primates and maintain physical, cultural, and psychological models of their world to mentally rehearse and optimize tasks before physically performing them. A fourth, humanlike generation will abstract and reason from the world model.

If Moravec is correct in his predictions, it won't be long before robots have cognition. With daily breakthroughs happening in the robotics community, it may happen even sooner. Not only will they be able to think autonomously, but robot intelligence and capabilities will equal (and most likely quickly surpass) any human capability.

That likely possibility raises the question: what happens when robots are superior to their creators? Will they still be subservient to us, or will the popular “robot takeover” of sci-fi movies become reality? I love robots as much as the next geek, but maybe we need some sort of plan for when they stop loving us…



On the other hand, others believe that it is humans who will evolve into advanced “robots”. Their belief is that, with futuristic technologies being developed in multiple fields, human intelligence may eventually be able to “escape its ensnarement in biological tissue” and move freely across boundaries that can’t support flesh and blood, while still retaining our identities. That idea seems much further away, but whatever the case may be, there are changes ahead.

Posted by Rebecca Sato.


 Post subject: Robots That Eat Bugs and Plants for Power
PostPosted: Sat Oct 10, 2009 7:54 pm 
Offline
User avatar

Joined: Fri Apr 03, 2009 1:35 am
Posts: 2692
Controversial robots devour biomass to gain energy independence

No matter how intelligent a robot might be, it's nice knowing you can pull its plug to halt the anti-human insurrection. Whoops, not anymore. A new cohort of bots that make energy by gobbling organic matter could be the beginning of truly autonomous machines.

This first wave of biomass-munching robots has been designed with safe, slow, long-term vocations in mind, such as surveillance, clearing land mines, or monitoring sewer pipes and other locales too dark for solar cells. Take EcoBot II, the tambourine-size fly-eating machine built by Bristol Robotics Laboratory in England. Engineers hand-feed this robot insects, which it digests in a microbial fuel cell (essentially a tank of sludgy bacteria and oxygen) that converts the insects into electricity. An eight-fly meal can drive it up to seven feet.

EATR (for Energetically Autonomous Tactical Robot), a car-size military reconnaissance bot, will forage more actively. The Darpa-funded concept vehicle from Robert Finkelstein of Robotic Technology in Washington, D.C., will use cameras and radar-like sensors to spot twigs and leaves. It will then chop up food and toss it into a combustion chamber built by engineer Harry Schoell (a 2008 PopSci Invention Award winner). Schoell's steam engine runs on anything that burns and will get EATR around 100 miles per 150 pounds of vegetation. Both the EcoBot and EATR teams are working on software to help the robots conserve energy during lean times, and a full EATR prototype should be scavenging by 2011.
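To put the two robots' quoted figures side by side, here is a quick bit of arithmetic using only the numbers in the article. No energy-density comparison is attempted, since the article gives no mass or calorie content for the flies.

```python
# Quick arithmetic on the quoted figures: distance per "meal" for each robot.
FEET_PER_MILE = 5280

ecobot_feet_per_fly = 7 / 8                        # "an eight-fly meal can drive it up to seven feet"
eatr_feet_per_pound = 100 * FEET_PER_MILE / 150    # "around 100 miles per 150 pounds of vegetation"

print(f"EcoBot II: ~{ecobot_feet_per_fly:.2f} feet per fly")
print(f"EATR:      ~{eatr_feet_per_pound:.0f} feet per pound of vegetation")
```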

For those inclined to fear an autonomous, chainsaw-wielding robot, take comfort that its programming will restrict it to grabbing only morsels that match the shape, color, and texture of plant life. "It won't consume a chocolate layer cake, because it won't recognize it as food," Finkelstein says. And it certainly won't go running after animals.



Source


 Post subject: This Week in the Future, October 12-16, 2009
PostPosted: Sat Oct 17, 2009 12:36 pm 
Online
User avatar

Joined: Thu Apr 02, 2009 11:08 am
Posts: 1804

It"s that time again--our illustrated roundup of the future this week. Robots were in the news quite a bit (meaning it was a good week): robots made of shapeless goo, robots that eat biomaterials for energy and robots that could have saved soldiers" lives in Iraq.

This week"s stories:

Hope everyone had a good week. Until next time!

See our past weekly illustrated roundups here.



Source


 Post subject: Skiing robot developed by Slovenian researchers
PostPosted: Wed Oct 21, 2009 5:25 pm 
Offline
User avatar

Joined: Fri Apr 03, 2009 1:35 am
Posts: 2692
Slovenian researchers have created a skiing robot capable of navigating slalom courses.

Source








 
Powered by phpBB © 2000, 2002, 2005, 2007, 2008, 2009 phpBB Group
Chronicles phpBB3 theme by Jakob Persson. Stone textures by Patty Herford.
With special thanks to RuneVillage

This site has four types of text-to-speech technology. By default, you use the vozme technology. To use the others, you need to sign up.


- Privacy Policy -