Stephen Hawking thinks it will end in tears; human greed will probably prove him right.
There has been a noticeable increase in artificial intelligence news over the last few months, most of which seems to hint at a sea change in the business world. The claim is that in 5-10 years there will be a great acceleration in the use of AI: many clerical jobs done by AI, teaching by AI, robot doctors, driverless cars and planes, not to mention the inevitable arms race in AI.
Should we be worried or excited? Will the UK be left behind (in an area where we should excel)? Is it time to invest money into these firms?
Curious for views - this seems to be a revolution that has crept up on us and is about to go big in 5-10-20 years.
http://news.sky.com/story/ai-machine...-head-11029135
https://www.forbes.com/sites/louisco.../#13a0d3935463
http://www.dailymail.co.uk/sciencete...ving-cars.html
I think the whole AI business is fascinating.
My first thought is that if you commit a traffic offence your car will lock the doors and drive you, helpless, to the local nick!
I'm sure that the whole ultimate purpose is control of individuals and that's not good.
Brendan
It's certainly a trip into the unknown. We've seen the turmoil in recent elections, in some ways down to the disenfranchised working class - and now there is potentially something that could remove their jobs. One developed system could, say, do thousands of jobs. There go tax revenues, up goes welfare spending, and the macro-economic models are defunct...
You see it in the States. http://www.bbc.com/news/technology-41122102 All good, but eventually it will threaten that most American of all jobs - truck driving - and then what will happen?
Still - eternal life may be round the corner - map the brain, use AI and hey presto we're all in a conscious version of the Sims when we die!
Think Number2 has it sussed though.
People talk about artificial intelligence when often they mean artificial selves. Really there are several strands that are not obviously the same thing. The classic test for AI, the Turing Test, is a very narrow test of the ability to think like (or at least simulate the verbal behaviour of) a human being. The finest rebuttal of this early and naive definition of AI is this: 'worrying about whether a computer can think is a bit like worrying about whether a submarine can swim'. Being able to simulate humans isn't so useful; being able to translate, pull sense out of masses of unclear data, or predict outcomes from incomplete information - these are useful. AI is getting very, very good at this. For example, translating all of Wikipedia from one language to another in a tenth of a second. That's not just a lot of processing power; that's undeniably a maturing AI technology.
The impact on humans will be enormous. Combined with mechanisation there are few jobs that will be safe, and most of those will involve either very free creativity or areas where we prefer a human in charge. I don't think AI will be taking over for themselves, because we are nowhere near selves: there's little commercial pressure and, frankly, we are struggling even to define a self. We all think we have got one, but we all believed in a self-evident God not so long ago. I'm not saying we don't have them, but the possibility that most of our assumptions about them are wrong is a live one. However, AI acting for whoever decides what they want it to do could give enormous predictive and, against our small, limited and bias-ridden brains, manipulative power. Google 'Cambridge Analytica' to get a sense of the sort of things that are starting to be possible. Then think about why Nectar cards are so lucrative.
I agree with Hawking and Elon Musk - there is an amazing amount of good and useful things that will come from AI, but lawmakers will need to catch up.
The biggest issue I can see, as Musk and Kevin Kelly (AI bod and founder of Wired magazine) have pointed out, is that there is no Geneva Convention on AI/robots or even hacking... so at what point, albeit off in the future, does the UN have to draw up guidelines on what's acceptable? Kelly's interview here on AI, VR and the rise of China is well worth a listen:
https://tim.blog/2016/06/05/kevin-ke...he-inevitable/
Autonomous vehicles are going to be real life changers in terms of mobility for the disabled, elderly, blind etc but full autonomous trucks are still a way off. Again, the tech may theoretically exist but in tandem there will also have to be massive changes to insurance and liability policies that will take years and probably a lot of lawsuits involving these vehicles. And if one thing moves slowly it's regulators.
I think you're right - maybe I confuse autonomy with AI. http://www.npr.org/sections/13.7/201...ce-or-autonomy
Parallels with Ancient Rome, but substitute robot slaves for human slaves?
A moneyed elite of intellectuals, business / property owners, and politicians.
An increasing number of blue and white collar jobs done by machines.
A growing underclass of dispossessed/ disenfranchised urban poor, whose jobs have been replaced by machines, and who increasingly rely on 'dole' to survive.
Occasional riots and uprisings by the urban poor and occasional revolts by the robot equivalents of Spartacus.
Skynet, pah! Let's worry about Soylent Green!
IDIOTS will be the saviours of the world. I have never seen a system that is idiot-proof, and idiots far outnumber clever people and are better at being idiots than clever people are at being clever.
Worth reading 'Rise of the Robots' by Martin Ford.
Working my way through it now - fascinating stuff.
IBM declares the "future" of computing every decade or so: they called the Internet as a commerce platform, then Big Data, and for a few years now they have been saying that AI/cognitive computing will be the next "big thing".
They have Watson https://www.ibm.com/watson/
I don't work for IBM but I just saw a presentation, and they seem quite optimistic about its role in healthcare, although some of the latest stuff I have seen suggests it has further to go than they may initially have thought.
http://fortune.com/2017/06/28/ibm-watson-ai-healthcare/
Although it was used to provide the Wimbledon highlights -
https://www.bloomberg.com/news/artic...s-helping-fans
Although really I see this more as clever programming (it analyses audience response) than AI - it depends how much it learns.
Facebook's chat bots developed their own language, which made them useless, but it does show they are machine learning.
https://www.forbes.com/sites/quora/2.../#50589de61710
Law firms are also adopting AI:
https://www.ft.com/content/f809870c-...1-d5f7e0cd0a16
The legal sector was a perfect customer, says Jan Van Hoecke, its co-founder, because it was “document-intensive — and has very expensive people looking at documents”.
I see it much like being a passenger on a bus or train or being in a taxi; you're not going to be "in charge of a vehicle".
In fact I actually believe that the whole concept of being in charge of the vehicle will be as alien to our grandchildren as a man with a red flag walking in front of a car is to us.
Eventually you'll just call for a car, get in and then get out at your destination. Only weird old men will have these ancient things that have to be driven and use petrol. They'll probably come out once or twice a year for vintage runs. The autonomous vehicles will use alternative routes until these old fuddy duddies have finished having their fun!
Humans as a biological entity won't be around forever (obviously) but humanity will continue in technological form.
The next step of human evolution (if we don't mess everything up) is sentient AI with all human experience uploaded to it and carried forwards, across the universe, with no "living, breathing" biological bodies existing.
This is the aim - to survive as a species by transcending our frail biological bodies and "uploading".
Many good books on this subject.
Even ignoring the Skynet-type scenarios, the increase in automation is a more immediate concern.
Many traditional manual jobs have been made obsolete by machinery, but now computing power means that it's only a matter of time before many of the more complex jobs become irrelevant, e.g. passenger transport (taxis, coaches, trains) and goods transport (lorry drivers etc). It's only a step further to automate other service jobs, and arguably a lot of them have gone already, e.g. banking.
It's difficult to imagine what people will actually need to do. (I don't buy the "new types of job will be created" argument - most jobs will be better done by computers.)
We’re all going to be sat on the couch watching Jeremy Kyle all day.
An AI system driving a car swerves to avoid a cat in the middle of the road and in doing so knocks a cyclist off their bicycle killing them. Who is to blame?
A couple of points. A useful distinction is between actions, traditionally caused by intentions (or at least intention-like mental processes), and events, traditionally caused by other events. The question is: should we treat the behaviour of an AI system as an action or an event, and if an action, is the intentionality derived or original?
Here's a way of thinking about the distinction between derived and original meaning and intention. Consider a thermostat. While nothing means anything to a thermostat, many people argue that a thermostat has derived meaning, even derived beliefs - that it is too hot or too cold, for example. If meaning can be derived, could responsibility also be derived? Certainly in the early years of the A300 there were a few crashes that looked to me like the three AI systems making bad choices due to poor information, but which were ultimately written up as pilot error. Should they have been written up as programmer error?
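To make the thermostat example concrete: its entire 'belief' about the world is a single comparison. A minimal Python sketch (the class and method names are illustrative, not any real device's API):

```python
# A thermostat's whole 'belief' about the world is one comparison.
class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def believes_too_cold(self, reading):
        # The 'belief' is derived: it means something to us, not to the device.
        return reading < self.setpoint

    def act(self, reading):
        return "heat on" if self.believes_too_cold(reading) else "heat off"

t = Thermostat(setpoint=20.0)
print(t.act(18.5))  # heat on
print(t.act(21.0))  # heat off
```

All the meaning ("too cold", "heat on") lives in the heads of the people who built and use it, which is exactly what makes it derived rather than original.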
If someone designs a mechanical steering system that can lock up without warning, we know what to do. What about an AI system that makes the 'wrong' moral choices? Say prioritising avoiding a cat over a bicycle wheel, but not making the inference that the wheel is attached to the bicycle and the bicycle has a person on board.
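A toy sketch of how that 'wrong' moral choice could fall out of a hand-coded priority table that does no relational inference (the labels and weights are entirely hypothetical, not any real driving system):

```python
# Toy obstacle handler: hand-coded avoidance priorities, no relational inference.
# Labels and weights are made up purely for illustration.
AVOIDANCE_PRIORITY = {"cat": 3, "bicycle_wheel": 1, "person": 10}

def choose_swerve_target(obstacles):
    # Swerve toward the obstacle it cares least about hitting.
    # Crucially, nothing here infers that a 'bicycle_wheel' implies a 'person'.
    return min(obstacles, key=lambda o: AVOIDANCE_PRIORITY[o])

# The system steers into the wheel to save the cat, never 'seeing' the rider.
print(choose_swerve_target(["cat", "bicycle_wheel"]))  # bicycle_wheel
```

The 'person' entry has the highest weight, but it never fires because the perception layer only ever reported a wheel - the moral failure is in the missing inference, not the priorities.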
The whole area is a mare's nest.
Reminds me of this -
In an interview with John Oliver, Hawking said:
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
“There’s a story that scientists built an intelligent computer. The first question they asked it was, ‘Is there a God?’ The computer replied, ‘There is now.’ And a bolt of lightning struck the plug so it couldn’t be turned off.”
And Elon Musk also-
"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence.
I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out."
I think AI combined with weaponry could be pretty dangerous myself.
http://libertyviral.com/elon-musk-wa...#axzz4sNqh56LF
Eventually, a Singularity. Singularity thread
I posted in another thread that I'm an avid, enthusiastic (but ultimately distinctly average) Dota 2 player. The recent Dota 2 world championship saw the winning team get $20,000,000. A team of boffins created a bot, OpenAIBot, that within two weeks had trained itself and was beating the best humans 12-0. Deep Blue was the start of it in games when it challenged Garry Kasparov at chess. Ultimately AI in other walks of life will be used to maximise efficiency. Recently Google used an AI program to run its heating and, through switching off at certain timings, found no drop in temperature but saved 15% in fuel bills. It's the future and we'd better be ready.
I think it will eventually negate the need for nuclear weapons: if one nation wishes to take another back to the stone age, just hack all its power.
If the power production is all run by AI, there's always another AI a tiny bit smarter in the target nation.
The thing is that the DOTA world is basically simple. It is restricted to rational numbers and is constrained by the same problem that constrains any rules-based AI - the Frame Problem. It isn't chaotic, it isn't stochastic, and it is only using a small subset of possible numbers. As such it is basically a highly constrained baby sandbox. Action in the real world leads to combinatorial explosions of options (and of things you need to ignore) that are beyond any current AI to process. While traditional AI will become increasingly good at dealing with immense but constrained problems, I don't see AI having the adaptability needed to do more than blunder until we move away from the current cognitive models and anything that is Church-Turing equivalent with them.
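The combinatorial explosion is easy to put rough numbers on: with b options per step, a plan d steps deep has b^d branches. A quick Python sketch (the branching figures are illustrative guesses, not measurements of any real system):

```python
# How fast a search space blows up: b choices per step, d steps deep.
def plan_space(branching, depth):
    return branching ** depth

# A constrained game-like world: 10 actions considered, 5 steps ahead.
print(plan_space(10, 5))      # 100,000 branches - brute force is fine
# A crude real-world agent: 1,000 salient options, 10 steps ahead.
print(plan_space(1000, 10))   # 10**30 branches - far beyond brute-force search
```

The Frame Problem makes this worse, not better: before the agent can even enumerate its options it has to decide which of the effectively unbounded facts about the world are relevant enough to count.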
Eventually? A single singularity?
Just think of the way we have interacted with the internet. I'd say that there are few predictions of the future that allowed for nectar cards, social media and the suchlike. We are currently routinely and ubiquitously filleted for insight and that insight turned back upon us. Go buy a pack of condoms from Boots on your credit card, wait a few weeks and then buy a pregnancy test with cash but using nectar. Then sit back and watch your advertising stream...
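A crude sketch of the kind of scoring such a system might run over purchase signals - the signal names and weights here are entirely invented, purely to illustrate the idea, not any retailer's actual model:

```python
# Toy ad-targeting score: each purchase signal nudges a hypothetical
# 'show pregnancy-related adverts' score. All weights are made up.
SIGNAL_WEIGHTS = {
    "condoms": -2.0,
    "pregnancy_test": 5.0,
    "prenatal_vitamins": 4.0,
}

def target_score(purchase_history):
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchase_history)

history = ["condoms", "pregnancy_test"]
# Net positive: the later test outweighs the earlier condoms,
# and the advertising stream changes accordingly.
print(target_score(history))  # 3.0
```

Real systems are of course vastly more sophisticated (and, as noted above, would spot the missing soft-constraint indicators), but the basic move - turning loyalty-card transactions into a score that drives what you are shown - is exactly this shape.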
That's a gross example and it might not work any more as there are probably better models out there now that will note that other soft constraint indicators are missing.
The fact is that every single one of us gets our own filter-bubble experience. We don't really belong to 'organic' communities in the traditional sense anymore, as the ubiquitous shared experience is simply missing - or rather is fed to us based on the choices we make. Did anyone warn you that singularities might not announce themselves, but might just pass without anyone noticing?
The day when the machines take over can't come soon enough. They can't do a worse job than the morons we humans have picked to run the show.
I think the main worry with AI is if it becomes so intelligent that it is able to manipulate the environment in ways that we don't even know is possible.
Dota is anything but simple and is played on a map with free rein to go wherever you like. Fights in Dota are insanely chaotic. Go on YouTube and you'll see what I mean. The AI bot is also dealing with facing humans and what they do, which is extremely random in itself. I drew the comparison with chess, but chess is a very rigid, structured example, and now it can handle adapting to the total chaos of Dota. I think it's incredible. More incredible is that it wasn't programmed with any moves - just 2 or 3 rules like "dying is bad" and "denying creeps is good". It developed its own strategy with no other input from the developers. As a Dota player I find it insane.
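Those "2 or 3 rules" amount to a reward function: the bot is scored on a handful of outcomes and left to discover strategy by self-play. A hedged sketch of what that kind of reward shaping might look like - the event names and weights below are made up, not OpenAI's actual setup:

```python
# Sketch of reward shaping in the spirit of 'dying is bad, denying creeps
# is good'. Event names and weights are hypothetical, for illustration only.
REWARDS = {
    "died": -1.0,
    "denied_creep": 0.2,
    "last_hit_creep": 0.1,
}

def episode_reward(events):
    # Everything else - positioning, item timing, when to fight - must be
    # discovered by the agent through self-play, not specified here.
    return sum(REWARDS.get(e, 0.0) for e in events)

print(round(episode_reward(["last_hit_creep", "denied_creep", "died"]), 2))  # -0.7
```

The striking part is how little is in the table: the learned policy emerges from maximising this signal over millions of games, not from any hand-coded moves.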
This is what worries me most about automation and leaps in AI technology.
I recall seeing old 1950s and 1960s adverts, mostly from America, showing a Utopian future (our TODAY) where all sorts of menial jobs are carried out by robots while the working masses take to light duties and work shorter hours, living a life of leisure and prosperity.
Trouble is there is no prosperity: as workers' jobs are replaced they end up on the dole line... the rich get richer while the poor get poorer, and are left with pretty much zero opportunity to become "upwardly mobile".
I'm all for progress, but there needs to be a balance. Studly sort of has a point: the owners of the AI corporations should have a duty to provide for the disenfranchised masses. What's the alternative - billionaires become trillionaires while struggling governments look after the growing number of unemployed?
Battlestar Galactica.
Why would intelligent machines decide to keep billions of human pets? Some perhaps, but not billions, certainly not if they are paying for their upkeep.
The most likely outcome if a machine was making the decision is that the numbers of dispossessed/ disenfranchised urban poor would be significantly reduced, either by natural processes over time, or as revolts and uprisings against the machines and the masters are dealt with, or even by means of a planned cull to manage limited resources more efficiently.
I'm not saying that DOTA isn't complicated. It's just not complicated in the way reality is complicated; it is, to use the vernacular, tractable. More to the point, it's quite clear that the designers are using a combination of machine learning, probably a GA by the look of it, and also hard constraints. That they are doing it in the way they are shows that they are not dealing with a particularly cutting-edge setup. The programmers are analysing the game and adding their creativity to a fairly brute pattern-completing program, using a genetic algorithm to find the patterns in play.
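For anyone unfamiliar with the term, a genetic algorithm is just selection, crossover and mutation in a loop. A minimal toy example in Python - a bit-string "all ones" problem standing in for "finding patterns in play"; nothing here is OpenAI's actual setup:

```python
import random

# Minimal genetic algorithm: evolve a bit-string toward all ones.
random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(genome):
    return sum(genome)  # number of 1s; maximum is GENOME_LEN

def mutate(genome, rate=0.05):
    # Flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice two parents at a random cut.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP_SIZE // 2]  # selection: keep the fitter half unchanged
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # should be at or very near GENOME_LEN (20)
```

The point of the toy is the shape of the loop, not the problem: swap the bit-string for game-play parameters and the counting fitness for a win-rate measure and you have the "brute pattern completion" described above.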
https://blog.openai.com/more-on-dota-2/
To the AI, any problem space is just that. The fact that we are only just getting systems that can walk at all well in the real world gives an idea of just how hard the stuff that seems simple to us actually is.
Artificial Intelligence - Where's it going?
Apparently there's talk of AI designed specifically for forums that will (a) multi quote every comment to within an inch of its life (b) .....no, hang on, wait!
No they didn't. Are you warning me now?
Almost all definitions of a Singularity describe it as an explosive event, not incremental, and certainly not an event that we could fail to notice.
We are still making baby steps with computers and technology, our computers are still solid state.
Singularity is the point at which "all the change in the last million years will be superseded by the change in the next five minutes." Kevin Kelly, founder of Wired Magazine
Singularity "is a break in human evolution that will be caused by the staggering speed of technological evolution." James Martin
"... a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself." Ray Kurzweil
This is the type of technological singularity I am talking about.
Vernor Vinge, who coined the phrase 'technological singularity', might take a view more aligned with yours - that "the notion becomes a commonplace" - but I don't think that Nectar cards and clever Google algorithms really count.
"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. [...] I think it's fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown." Vernor Vinge