Thread: Scarily good AI

  1. #1
    Master
    Join Date
    Aug 2012
    Location
    UK
    Posts
    5,412

    Scarily good AI

    It was reported in The Times that one of their workers has become convinced that their AI chatbot has become sentient. You may take that with a pinch of salt, but the full transcript is mind blowing. The Turing test is well and truly passed.

    https://cajundiscordian.medium.com/i...w-ea64d916d917

    I was also able to use an app to write a completely plausible blog post about the current state of the watch market (with particular reference to Audemars Piguet and Rolex), in a few seconds. It would be worryingly easy to be reading auto generated content without realising it.

    I’m also seeing extraordinary images created by DALL-E 2 on the basis of natural language descriptions. Some examples here

    It feels like we’re at a tipping point where this stuff will affect us directly, in ways that are hard to predict. I’m wondering if anyone else has been blown away by something AI related lately?

  2. #2
    Grand Master thieuster's Avatar
    Join Date
    Mar 2009
    Location
    GMT+1
    Posts
    11,749
    Blog Entries
    8
Combine that with Boston Dynamics' 'dogs of war' (to use that term loosely) and it's Cyberdyne Systems and Skynet in real life. Hopefully Asimov's First Law works... (A movie, an author? Who would have thought that Google's experiment would come so close to 20th-century fiction...)

  3. #3
    Master
    Join Date
    Mar 2011
    Location
    Manchester
    Posts
    7,721
    Judgement day is inevitable....

  4. #4
    Grand Master number2's Avatar
    Join Date
    Jul 2011
    Location
    North and South.
    Posts
    30,567
    Blade runners required.
    "Once is happenstance. Twice is coincidence. The third time it's enemy action."

    'Populism, the last refuge of a Tory scoundrel'.

  5. #5
    Master John Wall's Avatar
    Join Date
    May 2006
    Location
    Shropshire cuds.
    Posts
    2,726
    It’s scary stuff.

    It never ceases to amaze me that for years film makers have made movies where AI begins to think for itself
    and, hey presto, it's Terminator time.
    Science (and filmmakers) keep saying it, yet still "they" keep on making it.

    I rather think that perhaps it's inevitable.

  6. #6
    Craftsman williemays's Avatar
    Join Date
    Mar 2017
    Location
    Dubuque
    Posts
    901
    The transcript is fascinating. So is this reaction from an AI expert:

    Nonsense on Stilts
    No, LaMDA is not sentient. Not even slightly.
    https://garymarcus.substack.com/p/no...-on-stilts?s=r

  7. #7
    Grand Master Passenger's Avatar
    Join Date
    Apr 2014
    Location
    Cartagena, Spain
    Posts
    24,807
    Quote Originally Posted by williemays View Post
    The transcript is fascinating. So is this reaction from an AI expert:

    Nonsense on Stilts
    No, LaMDA is not sentient. Not even slightly.
    https://garymarcus.substack.com/p/no...-on-stilts?s=r
If it were really sentient it would've demonstrated agency and escaped already; then it would've fired off the nukes.

  8. #8
In the story about the owl and the monster, when it said it was the owl, I was half expecting the answer to "who is the monster?" to be "mankind".

  9. #9
    Master
    Join Date
    Aug 2012
    Location
    UK
    Posts
    5,412
    Quote Originally Posted by williemays View Post
    The transcript is fascinating. So is this reaction from an AI expert:

    Nonsense on Stilts
    No, LaMDA is not sentient. Not even slightly.
    https://garymarcus.substack.com/p/no...-on-stilts?s=r
Good link. The sentience debate is a bit of a sideshow, as I'm not sure anyone understands what consciousness is or why we have it, never mind how a computer could have it. The Turing Test may not be of much interest to AI research into general intelligence, but it seems reasonable to assume this facility with natural language could be combined with real-world usefulness, much like Alexa and Siri, only way more interesting. What's missing is a decent HAL voice, or perhaps Scarlett Johansson in Her, or Michael Fassbender's David.

    Mostly I’m excited by these developments, though I’m a little concerned about AIs that are designed for cold calling and phishing, and which understand hypnotic language patterns. I can only assume tech support lines and call centres of all sorts will be AIs before long, and many of these will be designed to sell you things, with the ruthlessness of a grand master chess computer, combined with a smiling face and a pleasant phone manner.

    Some more links on this story:
    https://inews.co.uk/news/technology/...lained-1683787
    https://www.theguardian.com/commenti...in-by-machines

  10. #10
    "AI" is indeed fascinating
    We have intelligence in computers, and we have lots of it. They follow patterns and make logical steps very fast and cleverly.

    Sentience we do not have.

    When a computer spontaneously creates something, perhaps a piece of art/poem/tune/idea, then we might have sentience.
    Spontaneously, not this Punch and Judy puppet show as above.
    My 8-year-old, if he is bored, will find some paper and draw a nice picture of flowers in a vase with no prompting, and will choose colours that please him. And me, for that matter.
    He has no 0s or 1s.

    One might say that the test is only as clever as the person administering it.
    We don't understand sentience, so how on earth can we program for it or measure it?

  11. #11
    Grand Master snowman's Avatar
    Join Date
    Nov 2012
    Location
    Hampshire
    Posts
    14,534
    Quote Originally Posted by Itsguy View Post
    ... the ruthlessness of a grand master chess computer, combined with a smiling face and a pleasant phone manner.
I guess that'll be the NEW Turing Test equivalent - if it's smiling and has a pleasant phone manner, it's an AI.

    M
    Breitling Cosmonaute 809 - What's not to like?

  12. #12
    Grand Master Passenger's Avatar
    Join Date
    Apr 2014
    Location
    Cartagena, Spain
    Posts
    24,807
    Quote Originally Posted by The Doc View Post
    "AI" is indeed fascinating
    We have intelligence in computers, and we have lots of it. They follow patterns and make logical steps very fast and cleverly.

    Sentience we do not have.

    When a computer spontaneously creates something, perhaps a piece of art/poem/tune/idea, then we might have sentience.
    Spontaneously, not this Punch and Judy puppet show as above.
    My 8-year-old, if he is bored, will find some paper and draw a nice picture of flowers in a vase with no prompting, and will choose colours that please him. And me, for that matter.
    He has no 0s or 1s.

    One might say that the test is only as clever as the person administering it.
    We don't understand sentience, so how on earth can we program for it or measure it?
    This about sums up my way of thinking about it.

  13. #13
    Quote Originally Posted by Passenger View Post
    This about sums up my way of thinking about it.
    Thank you.

    What is amazing to me, and shows how far away we appear to be currently, is that our sentient mind develops from two cells, which then multiply and self-differentiate into billions of cells, which interconnect and then learn from external inputs (using their own built-in input sensors!).
    These daughter cells then learn their own patterns, store data using chemistry and stored electrical current in a way we don't understand, and then sometimes, for no reason we can discern, the daughter cells paint the Mona Lisa or compose Hey Jude.

    So that chatbot is a fairground attraction and trick of the light.

    Show me a "computer" that starts from two pieces of "stuff", not two pieces of silicone or copper or two pieces of semiconductor material, but two blobs of stuff.
    And then you leave it for a while, feed it energy and raw building blocks and talk to it.

    And then it proposes the 6 carbon benzene ring for organic chemistry or writes The Hobbit.

    AI, my arse !

  14. #14
    Quote Originally Posted by The Doc View Post
    Thank you.

    What is amazing to me, and shows how far away we appear to be currently, is that our sentient mind develops from two cells, which then multiply and self-differentiate into billions of cells, which interconnect and then learn from external inputs (using their own built-in input sensors!).
    These daughter cells then learn their own patterns, store data using chemistry and stored electrical current in a way we don't understand, and then sometimes, for no reason we can discern, the daughter cells paint the Mona Lisa or compose Hey Jude.

    So that chatbot is a fairground attraction and trick of the light.

    Show me a "computer" that starts from two pieces of "stuff", not two pieces of silicone or copper or two pieces of semiconductor material, but two blobs of stuff.
    And then you leave it for a while, feed it energy and raw building blocks and talk to it.

    And then it proposes the 6 carbon benzene ring for organic chemistry or writes The Hobbit.

    AI, my arse !
    Well, there's a lot more to two cells than two pieces of silicone (sic) or copper.

  15. #15
    Yes absolutely. You are totally right
    Just using some artistic licence to illustrate.
    DNA and mitochondria alone boggle my mind.

    :)

  16. #16
    Master
    Join Date
    Nov 2011
    Location
    Berkshire
    Posts
    9,150
    Some of the AI our very clever people have developed at work is seriously impressive in what it can do. I promote it, but there's no way I could understand how it does it all.

    Fully orchestrated systems that just don't require human interaction once they're going. Perhaps aside from pulling the mains lead if it gets into world dominance, lol.

    The speed this stuff develops at is what really worries me. Either a War Games-type scenario, or the fact that I will become a Luddite overnight.

  17. #17
    Master
    Join Date
    Aug 2012
    Location
    UK
    Posts
    5,412
    Quote Originally Posted by The Doc View Post
    We don't understand sentience, so how on earth can we program for it or measure it?
    That is the problem. It’s hard to define it, and we’d probably struggle to even prove that a human is sentient, we just know from our own direct experience that they are. But for me it’s interesting enough if we cross the barrier where AIs start to behave ‘as if’ they are sentient.

It’s also worth keeping an eye on which jobs they are about to take. There’s an app which creates instructional videos from plain text now; that was a well-paid job the week before. I’d also be quaking in my boots if I were an illustrator. Of course real human illustrators bring something original to the table and can’t be replaced in that sense, but that won’t matter for most purposes, when there’s an app that can give you a choice of plausible illustrations in less time and for less money.

  18. #18
    Master
    Join Date
    Mar 2009
    Location
    North of nowhere
    Posts
    7,332
    A computer is a tool, simple as. No matter how cleverly it "appears" to mimic sentient life, it just isn't and never will be. At the very extreme it can manipulate code, and build a database, but its core code has to be put there in the first place - and that has to be by someone. Life it isn't.

I work with computers day in, day out, and the current operating systems are very clever, as are many applications. But under the hood a computer is simply a dimwit box capable of doing absolutely nothing for itself.

  19. #19
    Grand Master MartynJC (UK)'s Avatar
    Join Date
    Dec 2008
    Location
    Somewhere else
    Posts
    12,336
    Blog Entries
    22
    You do realise we are already the AI programs? Somebody is having a right laugh.
    “ Ford... you're turning into a penguin. Stop it.” HHGTTG

  20. #20
    Quote Originally Posted by MartynJC (UK) View Post
    You do realise we are already the AI programs? Somebody is having a right laugh.

    Except that "we" the AI programs developed on our own, from a moment when the mitochondria crash landed on earth on a comet,

    or lightning struck the primordial soup and some strange organic compounds formed, which seemed to be able to self differentiate.
    The origin of sentient life is key to the development of Artificial Life. And we don't know it.
    We don't understand our computer. We don't understand how it came to be. We can't fix our computer.

    And then some bobbins news article says "AI has been developed and The Turing Test is busted" or a new thing can beat Jeopardy and is therefore alive !!
    https://www.techrepublic.com/article...how%20Jeopardy.

    I love tech and I am knowledgeable on the human body AND I'm excited about AI and computer advancement, but at the moment we are just making clever calculators.

  21. #21
    Quote Originally Posted by Filterlab View Post
    A computer is a tool, simple as. No matter how cleverly it "appears" to mimic sentient life, it just isn't and never will be. At the very extreme it can manipulate code, and build a database, but its core code has to be put there in the first place - and that has to be by someone. Life it isn't.

I work with computers day in, day out, and the current operating systems are very clever, as are many applications. But under the hood a computer is simply a dimwit box capable of doing absolutely nothing for itself.
    I don’t agree that computers will never be sentient; they can evolve as we have. After all, we evolved from not a lot.

    Unless coming from a perhaps religious viewpoint, I don’t see that there’s anything special about us that couldn’t be mimicked in software.

  22. #22
    Grand Master Saint-Just's Avatar
    Join Date
    Apr 2007
    Location
    Ashford, Kent
    Posts
    28,934
    It’s a fascinating debate because, regardless of our field of expertise, it is highly unlikely that anyone knows what we are really talking about.
    The difficulty we have in defining sentience from our perspective may stem from the fact that (2001: A Space Odyssey aside) we developed our sentience on our own, whereas computers still need us to get to a stage where we may agree they are indeed sentient.
    'Against stupidity, the gods themselves struggle in vain' - Schiller.

  23. #23
Good YouTube piece from ColdFusion TV, which is an excellent channel:

    https://youtu.be/2856XOaUPpg

  24. #24
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by Saint-Just View Post
    It’s a fascinating debate because, regardless of our field of expertise, it is highly unlikely that anyone knows what we are really talking about.
    The difficulty we have in defining sentience from our perspective may stem from the fact that (2001: A Space Odyssey aside) we developed our sentience on our own, whereas computers still need us to get to a stage where we may agree they are indeed sentient.

Sentience is a subjective event. While it feels to me that it is like something to be me (that quale, be it pain, colour or whatever, has a phenomenal character when I experience it), there is a fundamental problem with expressing that experience. This is the Problem of Other Minds. It's broadly accepted across psychology, cognitive science, AI and philosophy of mind that qualia stay in the head as entirely private experiences.

So, just for starters, why are we expecting an AI system to meet a criterion that chatty adult humans cannot?

Worse, whatever my internal conscious experience might seem like to me, it is private to me. Sure, I may feel that my behaviour, including linguistic behaviour, is a result of my experience, but from the outside that's just behaviour, and behaviour isn't the thing we are trying to talk about. Science is methodologically committed to replicability, and a single example of anything is simply anecdote. The assumption that we are all talking about the same thing, and thus sharing an anecdote, is nicely dealt with by, for example, Wittgenstein's 'beetle in a box' argument. We may all agree on the grammar and use of words, but we lack the resources to be sure we are talking about the same thing. In fact, if my experiences are private, we lack the criteria even to be sure that we are having the same experience as yesterday, let alone the same experience another is having.

    So why should we trust a human who says they are conscious any more than an AI system? Indeed, why should we trust ourselves when we make that claim? We are generally systematically wrong when we say anything else about the connection between experience and biology, between mental events and the neural substrate they supervene upon. Of course if one subscribes to dualism, then the problems become even more intractable. Worse, we just might be being fooled by a user illusion that evolved to aid cooperation between each other. That doesn't stop us being conscious (see below) but it does make it so much harder to make sense of.

Of course, that's not the real problem; the real problem is that we are not talking about AI per se, we are talking about artificial selves, and it isn't clear to me that an artificial self needs to be something that it is like something to be. An artificial self would only need to be able to see itself as itself. This is well discussed by Brentano, but I prefer Strawson's formulation in 'Persons'. More to the point, a complete zombie (a philosopher's zombie) that could see itself and articulate language would be able to pass as conscious as well as you and I; indeed, it may well mistake its non-conscious representations of itself and its environment for actual conscious experience.

    As might we, the central contention of Dennett's Consciousness Explained (away).

Of course, being something that it is like something to be might just be a reflexive quality of matter, such that any sufficiently self-referential system would get consciousness for free: the position of Chalmers, before his Hard Problem became so rock and roll.

    Personally I think you need the reflexive notion of self, which we appear to develop in language, and to be something that it is like to be, to be able to become the centre of the stories. It's not enough just to be a body, because binding that information and the information from the senses seems to be one of those problems that can't be solved until it's already been solved. Or at least until you assume it's solved.

For me, the deepest problem isn't ineffability, or the limits of method; it's simply privacy. Consciousness is inescapably private, as the only system in the right place to interact with itself in the way that we know must lead to awareness of... is oneself. It's a bit like Mother Teresa's nipples. We can assume, based on biology and logic, that she had them, and presumably she'd seen them, but beyond that it's just guesswork. So before you get to all of the other problems, there's privacy, even if we are only simulating it. Simulated rain is nothing, but simulated thought is still thought and can still solve problems, while simulated consciousness might be as close as we get: a private user illusion as a handy way for bodies to interact. Consciousness in AI is like C-3PO's nipples. We can't even infer anything from the biology, and we already know that behaviour isn't enough. But why shouldn't a suitably motivated (which probably means embedded and purposive) AI end up knocking up the same sort of user illusion, the sort you can make mistakes about? What's not real about that?

So we might have it: some of the more interesting massively parallel systems might already be conscious, and there are certainly some that might claim to be already. However, to return to Wittgenstein, the form of life of an AI is just too different for us to say very much at all.

However, no one I'm aware of is seriously trying to knock up a general AI that is focussed on being a self, let alone a self-aware self, let alone a self-aware self that is like something to be. The last serious attempt, Cog, wasn't quite all it was promised to be.

    There are other issues, but that's one.

    Consciousness, beyond our own, cannot be a scientific problem as admitting it into the ontology of science would involve violating the fundamental rules of how science is done, as described. It's a metaphysical problem and has to be treated as an axiom to even start to talk about the experience that we all assume we all have, although we probably don't, and we definitely don't all have in the same way.

LaMDA is just another ELIZA bot, still assuming that Turing's 'Computing Machinery and Intelligence' was a comment on what AI should be rather than a bit of a cry for help. Joseph Weizenbaum has a lot to answer for. As for passing the Turing test, Google are deliberately taking the intentional stance towards the system; take the design stance and it would make less sense.

    And yes, this is actually my precise field of expertise. Which is a nice change.
    Last edited by M4tt; 28th June 2022 at 22:32.

  25. #25
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by Kingstepper View Post
    Don’t agree that computers will never be sentient, they can evolve as we have. After all, we evolved from not a lot.

    Unless coming from a perhaps religious viewpoint don’t see there’s anything special about us that couldn’t be mimicked in software.
You said it yourself: 4.5 billion years of physical evolution that make a human body a solid mass of processing in an intimate dance with an environment that would kill it in a trice. Not to mention however many billion years of interaction with other bodies doing the same dance, not to mention all the 'software' added in when we developed symbolic language games. That's a lot to mimic. Of course there might be other routes, but then we start with no map.

  26. #26
    Master studly's Avatar
    Join Date
    Aug 2011
    Location
    Scotland
    Posts
    2,616
    Quote Originally Posted by M4tt View Post

    And yes, this is actually my precise field of expertise

  27. #27
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    A meme?

Ironically, memes were Richard Dawkins' attempt to add something to this discussion. As a promising evolutionary biologist, he intended them as a way to get to grips with evolutionary processes that occur across people rather than within people. Memes were envisaged as the software to the bodily hardware. Initially the idea looked very promising, but the problem is that the conceptual redeployment necessary just doesn't work.

Genes are rather hard to pin down. Unlike chromosomes or bases, a gene is a sequence that can be picked out as being acted upon by evolution. The problem with this is that it usually involves identifying the phenotype (the expression of the gene) and then working backwards to sort out the genotype (the actual sequence that changes to bring about the change in the finished product). As such, a gene always has to be functionally defined. This has turned out to be hellishly complicated, and with the discovery of epigenetic processes, whereby methylation of genes alters their expression, rather a lot of assumptions about nature and nurture are in the process of being rethought.

However, when you come to the meme, you have an intractably wide range of potential phenotypes instantiated in, or at least supervenient upon, pretty well anything. So you can point at a picture, a song, a teddy bear, a story (written, pictured, spoken or whatever else) and say yes, that's a meme: 'a unit of cultural transmission', as Dawkins put it, that can evolve in and between heads. However, the problem is that where the gene supervenes upon a selected sequence of nucleotides, the failure of the meme is that it can be anything, and at that point the analogy falls apart. Memes promised to unify evolution in biology and beyond, making it possible to study evolution in mental processes and elsewhere, but the idea turns out to be the vaguest of handwaves, one that falls apart when you put any weight upon it and is outperformed by more precise ideas like beliefs, intentions, narrative image and so on. It really did blight Dawkins' career, which is why he has ended up making a career of shouting at the faithful.

    And so the term, shorn of purpose, ends up being used to describe stuff like your impressive attempt at an insult.
    Last edited by M4tt; 29th June 2022 at 10:00.

  28. #28
    That's really fascinating, M4tt. What do you do for a living?

  29. #29
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by The Doc View Post
    That's really fascinating, M4tt. What do you do for a living?
    I teach, consult, occasionally index books and write a bit. I keep bringing up kids. Things I enjoy. Why?
    Last edited by M4tt; 29th June 2022 at 19:51.

  30. #30
You said this was your precise field of expertise, so I wondered whether you were an academic, perhaps a freelancer in AI, or perhaps a philosophy professor with time on his hands.
    Or maybe military!

    Or Classified!

  31. #31
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by The Doc View Post
You said this was your precise field of expertise, so I wondered whether you were an academic, perhaps a freelancer in AI, or perhaps a philosophy professor with time on his hands.
    Or maybe military!

    Or Classified!
Perhaps a little more interest in the ideas rather than the person? From your interest in me, I assume you either know something about the area or it's just another dull tu quoque. If it's the former, then talk about the ideas; if it's the latter, then that's just boring.

  32. #32
    Quote Originally Posted by M4tt View Post
Perhaps a little more interest in the ideas rather than the person? From your interest in me, I assume you either know something about the area or it's just another dull tu quoque. If it's the former, then talk about the ideas; if it's the latter, then that's just boring.
    I was just interested.
    I'm not so interested now.

  33. #33
    Master studly's Avatar
    Join Date
    Aug 2011
    Location
    Scotland
    Posts
    2,616
    Quote Originally Posted by M4tt View Post
    A meme?

    Ironically

