Results 1 to 47 of 47

Thread: AI

  1. #1
    Grand Master AlphaOmega's Avatar
    Join Date
    Jun 2009
    Location
    Trinovantum
    Posts
    11,313

    AI

    Following on from this thread in WT.

    Some of us have experienced people sharing unusual hobbies on their CV. Perhaps AI will allow a more balanced assessment of what matters and what does not. Provided the algorithm is created in a balanced way of course...

  2. #2
    Grand Master ryanb741's Avatar
    Join Date
    Jun 2008
    Location
    London
    Posts
    19,615
I believe there is a lot of misunderstanding about what AI is and what it represents, and a lot of suspicion around it. From my perspective (and I run the UK and IE offices of an AI company) the rise of AI comes hand in hand with a greater awareness of the unique attributes of human intelligence, which AI cannot replicate - things such as intuition, leadership and empathy. What AI is brilliant at, however, is being data-led, unbiased and consistent - traits which actually aren't innately strong in human intelligence. So by empowering AI to take over some of the admin-heavy, repetitive tasks done by humans, and freeing up their time to use their human-specific intelligence more often, we can combine human and artificial intelligence to get an outcome greater than what we currently have.

  3. #3
    Grand Master AlphaOmega's Avatar
    Join Date
    Jun 2009
    Location
    Trinovantum
    Posts
    11,313
    That makes sense to me.

    Allow AI to remove any inherent bias and allow it to rely on empirical evidence.

    Then the short list of candidates can be assessed by humans.

    I suppose one potential issue is that presumably we can make a list of jobs for which empathy and other human traits are of lower importance, and presumably these are the roles that AI will fill by itself.

    Eventually however, this list might include all jobs...

  4. #4
    Master
    Join Date
    Sep 2018
    Location
    West
    Posts
    1,282


Data-led and consistent, yes, but I’m not sure about the unbiased bit - that will depend on the data set the system is trained with.

    The technology will ultimately be abused in the same way every technology is where an advantage can be gained.

  5. #5
    Master alfat33's Avatar
    Join Date
    Aug 2015
    Location
    London
    Posts
    6,198

    AI

    Quote Originally Posted by PhilT View Post


Data-led and consistent, yes, but I’m not sure about the unbiased bit - that will depend on the data set the system is trained with.

    The technology will ultimately be abused in the same way every technology is where an advantage can be gained.
Microsoft’s Tay chatbot proved that point a couple of years ago. It learned dialogue from Twitter and, within 24 hours of launch, began tweeting offensive, racist comments of its own.

Learning how to get insights from large data sets sounds more like Machine Learning to me, which is similar to, but not the same as, AI. ML is definitely already here and making a difference (for better and worse).
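The Tay failure mode described above can be sketched as a tiny online learner with no content filter. This is purely illustrative code (not Tay's actual architecture): a system that treats all engagement as positive reinforcement can be steered by whoever floods it with input.

```python
# Toy illustration (not Tay's actual design): an online learner that
# reinforces word scores from whatever users feed it, with no filter.
from collections import defaultdict

scores = defaultdict(float)  # word -> accumulated reinforcement

def learn(sentence, reinforcement):
    """Naively reinforce every word in the sentence."""
    for word in sentence.lower().split():
        scores[word] += reinforcement

# Day 1: ordinary chatter.
for s in ["hello friend", "nice to meet you"]:
    learn(s, +1.0)

# Then a coordinated group floods it with toxic input, all "rewarded"
# because the system treats engagement as positive feedback.
for _ in range(50):
    learn("toxic slogan", +1.0)

# The most-reinforced vocabulary now comes from the flood, not normal chat.
top = max(scores, key=scores.get)
print(top)  # -> "toxic"
```

The training data, not the algorithm, determines what the system ends up saying - which is exactly the Tay outcome.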

  6. #6
    Grand Master ryanb741's Avatar
    Join Date
    Jun 2008
    Location
    London
    Posts
    19,615
    Quote Originally Posted by alfat33 View Post
Microsoft’s Tay chatbot proved that point a couple of years ago. It learned dialogue from Twitter and, within 24 hours of launch, began tweeting offensive, racist comments of its own.

Learning how to get insights from large data sets sounds more like Machine Learning to me, which is similar to, but not the same as, AI. ML is definitely already here and making a difference (for better and worse).
    It's the level of decision making and autonomy that is the difference between AI and ML. I'm at the pub now and not in the best space to go into the detail of how it works on our platform but will do so tomorrow. What's important is what the data is and how it is understood and used. Essentially the granularity of the taxonomy. Until tomorrow then :)

    Sent from my SM-G950F using Tapatalk

  7. #7
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by ryanb741 View Post
I believe there is a lot of misunderstanding about what AI is and what it represents, and a lot of suspicion around it. From my perspective (and I run the UK and IE offices of an AI company) the rise of AI comes hand in hand with a greater awareness of the unique attributes of human intelligence, which AI cannot replicate - things such as intuition, leadership and empathy. What AI is brilliant at, however, is being data-led, unbiased and consistent - traits which actually aren't innately strong in human intelligence. So by empowering AI to take over some of the admin-heavy, repetitive tasks done by humans, and freeing up their time to use their human-specific intelligence more often, we can combine human and artificial intelligence to get an outcome greater than what we currently have.
I'm entirely unsure why AI cannot replicate human intelligence. The word 'intuition', for example, needs to be carefully defined, but ultimately I suspect it comes down to spotting and completing a pattern, and I'm pretty certain there are systems out there that can massively outperform us in most areas of pattern completion - even in very human-focused areas, they either can already outperform human brains or are not far off.

    I'm deeply sceptical that AI can exactly replicate human performance and quite certain it can't replicate human experience, in the precise way we do it, but it doesn't have to, it only has to be able to do things that require intelligence when done by humans. How, precisely, it does this is less important. As the old quote goes: 'worrying if a computer can think is a bit like worrying if a submarine can swim'. So I suspect genuine empathy is out, but simulated empathy is hardly a new thing...

  8. #8
    Grand Master hogthrob's Avatar
    Join Date
    Feb 2007
    Location
    Essex, UK
    Posts
    16,843
Turning things round a bit, what jobs will robots/computers/AI not be able to do in the near/foreseeable future? Is abstract thought or invention coming any time soon?

  9. #9
    Grand Master Chris_in_the_UK's Avatar
    Join Date
    Nov 2004
    Location
    Norf Yorks
    Posts
    42,912
    Quote Originally Posted by hogthrob View Post
Turning things round a bit, what jobs will robots/computers/AI not be able to do in the near/foreseeable future? Is abstract thought or invention coming any time soon?
    Tons and tons.

    Rewire a house, remove & fit a new kitchen, manage H&S, interview a person....
    When you look long into an abyss, the abyss looks long into you.........

  10. #10
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by Chris_in_the_UK View Post
    Tons and tons.

    Rewire a house, remove & fit a new kitchen, manage H&S, interview a person....
    The first two, the issue in the immediate future is commercial viability, not technology. Joe the plumber is cheaper.

    As for the other two:

    https://www.telegraph.co.uk/news/201...views-uk-find/

    https://projss.co.uk/will-artificial...health-safety/

    Obviously both are a work in progress, but...

  11. #11
    Grand Master Chris_in_the_UK's Avatar
    Join Date
    Nov 2004
    Location
    Norf Yorks
    Posts
    42,912
    Quote Originally Posted by M4tt View Post
    The first two, the issue in the immediate future is commercial viability, not technology. Joe the plumber is cheaper.

    As for the other two:

    https://www.telegraph.co.uk/news/201...views-uk-find/

    https://projss.co.uk/will-artificial...health-safety/

    Obviously both are a work in progress, but...
    I disagree.

    If you are happy to hand over these functions in the way you suggest from the links then you seriously need to consider these suggestions.

Do you work, do you have a job - what have you done and for how long?

    These seem to be the musings of a university student - not a bad thing by any means, but before you submit to this kind of decision making you need to get grounded.

It's not about cost per se; the intuition and solutions are but one part of the issue. If you/we/us develop AI to the point of independent thought then the world is over - seriously. Read some of Stephen Hawking's stuff.

    We should not be wishing or promoting this stuff unrestrained.

    We should have been in flying cars for ages, but?

    There is no commercially viable way for any of these.

    The commercial diving industry has mandated for ROV access to undersea stuff for ages - the commercial diver still has a place.

Before anybody cries 'luddite', there are some things that will hang on until the end - long may it continue.
    Last edited by Chris_in_the_UK; 30th November 2019 at 00:15.

  12. #12
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by AlphaOmega View Post
    That makes sense to me.

    Allow AI to remove any inherent bias and allow it to rely on empirical evidence.

    Then the short list of candidates can be assessed by humans.

    I suppose one potential issue is that presumably we can make a list of jobs for which empathy and other human traits are of lower importance, and presumably these are the roles that AI will fill by itself.

    Eventually however, this list might include all jobs...
There's always bias. Take empirical evidence - how do you decide what counts as evidence? How do you decide when evidence conflicts? How do you decide what evidence is relevant and what isn't? If you program these choices in, who decides, and how do you stop them being biased in the decisions they make? If you provide a more neural-style system with training sets, who decides what goes into those training sets? Just for a start.
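The training-set point can be made concrete with a toy CV scorer. All names and numbers below are invented for illustration: a system "trained" on biased historical hiring decisions reproduces that bias, even though the scoring code itself is perfectly neutral.

```python
# Minimal sketch: a scorer fitted to biased historical decisions
# reproduces the bias. All data here is invented for illustration.

# Historical hiring records: (years_experience, attended_elite_school, hired)
history = [
    (5, True, True), (2, True, True), (6, False, False),
    (7, True, True), (3, False, False), (8, False, False),
]

# "Train" by measuring how each feature correlated with past hires.
def feature_weight(feature_index):
    hired = [row for row in history if row[2]]
    not_hired = [row for row in history if not row[2]]
    avg = lambda rows: sum(float(r[feature_index]) for r in rows) / len(rows)
    return avg(hired) - avg(not_hired)

w_experience = feature_weight(0)  # negative: experience didn't drive past hires
w_elite = feature_weight(1)       # positive: school choice dominated them

def score(years, elite):
    return w_experience * years + w_elite * elite

# Two equally experienced candidates, distinguished only by school:
print(score(6, True) > score(6, False))  # True - the learned bias persists
```

Nothing in `score` mentions bias; it came in entirely through who was hired in the past, which is the point about who decides what goes into the training set.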

  13. #13
    Grand Master Chris_in_the_UK's Avatar
    Join Date
    Nov 2004
    Location
    Norf Yorks
    Posts
    42,912
    Quote Originally Posted by M4tt View Post
There's always bias. Take empirical evidence - how do you decide what counts as evidence? How do you decide when evidence conflicts? How do you decide what evidence is relevant and what isn't? If you program these choices in, who decides, and how do you stop them being biased in the decisions they make? If you provide a more neural-style system with training sets, who decides what goes into those training sets? Just for a start.
    Simple.

    Score the applications and then do a behavioural interview to explore the evidence.

  14. #14
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by Chris_in_the_UK View Post
    I disagree.

It's not about cost per se; the intuition and solutions are but one part of the issue. If you/we/us develop AI to the point of independent thought then the world is over - seriously. Read some of Stephen Hawking's stuff.

    We should not be wishing or promoting this stuff unrestrained.

    We should have been in flying cars for ages, but?

    There is no commercially viable way for any of these.

    The commercial diving industry has mandated for ROV access to undersea stuff for ages - the commercial diver still has a place.

Before anybody cries 'luddite', there are some things that will hang on until the end - long may it continue.
I suspect you are confusing artificial selves with artificial intelligence. Artificial intelligence is simply a machine doing any task that would require intelligence if done by a human.

However, I completely agree. The unrestrained use of AI should have been restricted a decade or two ago. Now the genie is well and truly out of the bottle. Right now, Google, Nectar or any social media you use has a model of you that I suspect can predict what you are going to do more effectively than your own model of what you are going to do. It can certainly explain more accurately (and with less bias) why you did something. Then it can nudge you to want to do something else.

Really effectively.

You see, you don't need an artificial self - we've got plenty of selves hanging around. What you need is mind extensions: in this case, chunks of godlike intelligence (that is, intelligence that can do things we simply can't imagine doing - you know, like translating all of Google into one language and back again in fractions of a second) that can be used to collate all the data you have given or had borrowed, and use it...

    ... to do anything, like subvert democracies or sell you a new phone.
    Last edited by M4tt; 30th November 2019 at 00:33.

  15. #15
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by Chris_in_the_UK View Post
    Simple.

    Score the applications and then do a behavioural interview to explore the evidence.
    Cool, against what criteria do you score them?

And what is the difference between a criterion and a bias?

  16. #16
    Grand Master oldoakknives's Avatar
    Join Date
    Sep 2012
    Location
    United Kingdom
    Posts
    20,041
    Blog Entries
    1
    Quote Originally Posted by hogthrob View Post
Turning things round a bit, what jobs will robots/computers/AI not be able to do in the near/foreseeable future? Is abstract thought or invention coming any time soon?
    Anything creative. Anything requiring empathy. Anything requiring imagination.
    Started out with nothing. Still have most of it left.

  17. #17
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by Chris_in_the_UK View Post
    I disagree.

    If you are happy to hand over these functions in the way you suggest from the links then you seriously need to consider these suggestions.

Do you work, do you have a job - what have you done and for how long?

    These seem to be the musings of a university student - not a bad thing by any means, but before you submit to this kind of decision making you need to get grounded.

It's not about cost per se; the intuition and solutions are but one part of the issue. If you/we/us develop AI to the point of independent thought then the world is over - seriously. Read some of Stephen Hawking's stuff.

    We should not be wishing or promoting this stuff unrestrained.

    We should have been in flying cars for ages, but?

    There is no commercially viable way for any of these.

    The commercial diving industry has mandated for ROV access to undersea stuff for ages - the commercial diver still has a place.

Before anybody cries 'luddite', there are some things that will hang on until the end - long may it continue.
    Cool, you clearly have the measure of me, I'll leave you to it.

  18. #18
    Grand Master ryanb741's Avatar
    Join Date
    Jun 2008
    Location
    London
    Posts
    19,615
    Quote Originally Posted by M4tt View Post
    Cool, against what criteria do you score them?

And what is the difference between a criterion and a bias?
    Ok so I didn't have too many beers and certainly not enough to stop me from coming back to you with a hopefully cogent response.

Specifically with the recruitment function (which works well as an example, as our AI stack is aimed at supporting recruitment functions), what is clear is that organisations largely thrive or fail based on the quality of people they employ. The job of the recruiter is therefore a very important one - they are literally bringing in the fuel that powers the business. Great recruiters will have an innate intuition as to how well somebody would fit into their organisation, whether they would thrive, develop and so on. What is also true is that they spend a LOT of time trying to target relevant talent pools and sifting through irrelevant applications. So their true human-intelligence strength is their ability to understand how well someone would fit into their specific organisation, and this is frequently stymied by the sheer amount of admin they have to do.

From our perspective, our aim is to firstly 'decode' talent. What this means is that in the initial process of classifying what someone is from a career perspective, we need to get very, VERY granular - what we refer to as deep taxonomy. For example, typically within the recruitment space, job boards, sites like LinkedIn etc. will classify someone in IT as 'IT/Software Development'. That's pretty much the depth of taxonomy they go to. However, somebody who is a Ruby on Rails developer doesn't want to see a ton of Java roles when he or she wants to learn about career opportunities. Furthermore, the way we consume digital information has changed: we expect relevant content to be pushed towards us, not to have to go out and look for it using imperfect search engines and algorithms. So the answer here is to not get too hung up on what someone is called and focus more on what they do, have done and can do. You can have the job title '4th Duke of Wimbledon' for all we care; what matters is what your digital DNA is.

    How does this happen? Well Google in particular holds a lot of information about you. And from our perspective we have a LOT of data around the kinds of things Ruby on Rails developers (as an example) are interested in. We also know that Ruby on Rails developers don't make frequent searches for Ruby On Rails jobs. So we acquire these people on a deep taxonomy profile level from Google, Bing, Facebook (any of the platforms that operate at this level) and you come into our ecosystem which we call a community. We never take a CV as that is totally static data that is out of date the instant it is put together. Once you are in our community and we think you are (in this example) a Ruby on Rails Developer our CRM stack starts sending you content it thinks people with your profile will engage with (using what we refer to as 'intent data'). We see how you engage with it and the frequency of your engagement and these engagements are tailored automatically accordingly. But what is important here is the understanding of what you do engage positively with as this creates a higher 'confidence score' around your profile and the types of things you are interested in. Moreover because we are serving you content based on your profile and digital engagements and not what your job title is we soon learn that like most people from a career perspective you are interested in multiple options. You may be a Ruby on Rails Developer but you also might want to get into teaching, or be a scuba instructor. Your engagements and digital footprint tell us this, plus we know at any given time what your current state of engagement level is and what you engage with so it is always a real time snapshot of what 'floats your boat' at any given time. Again, no way is a simple CV on a database going to tell us that.

    Ok so now it gets clever. We run a campaign for client X who wants to target Ruby on Rails Developers. She doesn't want Java Developers applying, she only wants Ruby On Rails Developers applying and it is vital they do so for the future growth of her business. She also knows Ruby on Rails Developers are very difficult to target as they are so scarce and sought after. Because we have built a community globally (in our case of 100m profiles) and have used the aforementioned digital profiling to create a high confidence score about what it is that a user is most interested in at that moment in time we are pretty confident that the people we have classified as Ruby on Rails Developers are indeed Ruby on Rails Developers. As a result of this confidence we run a campaign with a commercial model being that the client pays every time one of these candidates completes an application and is verified as a relevant candidate. So a better store of value for the recruiter than paying for a job listing or some clicks.

    Here is where the AI gets involved. When running a campaign, we already know with a high confidence score who is a Ruby on Rails Developer. But we don't know which Ruby on Rails developers are likely to apply for this specific role as everyone is different and will have different motivations. So the AI learns which Ruby on Rails Developer profiles engage best with the recruiter roles, what profile types they share in common and then starts focusing the recruiter adverts more on this higher converting sub profile. What it also does (what we refer to internally as automated SEM by Taxonomy) is go out to Google, Facebook etc and acquire at this very deep taxonomy sub profile level MORE of these profiles if it feels that the profiles already in our communities aren't sufficient to satisfy the client's demands. It also knows what it needs to pay in order to acquire these profiles, how they convert towards the commercial model we have in place with the client and determines if this commercial model needs to change. So the AI has learnt what profiles convert best and are viewed as high quality by the client and takes the decision itself to go out and acquire more of these profiles using the very granular approach we took in the first instance when we acquired the candidates.

    The AI drives candidate acquisition based on what it has learned around what our clients need moving forward and what profiles convert best, it learns what content to serve people in our communities (just like Amazon or Netflix serve you different content compared to your peers based on historical engagements you have made on those platforms). This cannot be done by humans at scale and so fast. It isn't cutting human jobs, rather it is enabling humans to focus on doing the tasks that they are best placed to do and drive real impact to their organisations. And it helps organisations pay only for results that matter to their businesses, provides a much better experience for candidates as the content they are after finds them (they don't need to search for it) and the frequency this content is served to them happens based on the frequency that they engage with it.

    Whoosh, that was a LOT of information but I hope it made sense (in part at least) :)
    Last edited by ryanb741; 30th November 2019 at 01:40.
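The engagement-driven confidence scoring described above can be sketched roughly as follows. All names, thresholds and the update rule here are invented for illustration, not the platform's actual code: the idea is simply that positive engagement with tagged content raises a taxonomy-tag confidence score, and a campaign then draws on high-confidence profiles.

```python
# Hypothetical sketch of the engagement loop described in the post:
# engagement nudges a per-tag confidence score, and campaigns select
# profiles above a confidence threshold. Names/numbers are invented.
from dataclasses import dataclass, field

@dataclass
class Profile:
    user_id: str
    tags: dict = field(default_factory=dict)  # taxonomy tag -> confidence 0..1

    def engage(self, tag, positive):
        """Nudge confidence toward 1 on positive engagement, toward 0 otherwise."""
        current = self.tags.get(tag, 0.5)  # start undecided
        target = 1.0 if positive else 0.0
        self.tags[tag] = current + 0.3 * (target - current)

def candidates_for(profiles, tag, threshold=0.7):
    """High-confidence members of the community for a campaign tag."""
    return [p.user_id for p in profiles if p.tags.get(tag, 0.0) >= threshold]

alice = Profile("alice")
bob = Profile("bob")

# Alice repeatedly engages with Ruby on Rails content; Bob ignores it.
for _ in range(5):
    alice.engage("ruby-on-rails", positive=True)
    bob.engage("ruby-on-rails", positive=False)

print(candidates_for([alice, bob], "ruby-on-rails"))  # ['alice']
```

The real system would presumably feed conversion data back into acquisition as described, but the core loop - engage, rescore, select - is what distinguishes this from a static CV database.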

  19. #19
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by ryanb741 View Post
    Ok so I didn't have too many beers and certainly not enough to stop me from coming back to you with a hopefully cogent response.

Specifically with the recruitment function (which works well as an example, as our AI stack is aimed at supporting recruitment functions), what is clear is that organisations largely thrive or fail based on the quality of people they employ. The job of the recruiter is therefore a very important one - they are literally bringing in the fuel that powers the business. Great recruiters will have an innate intuition as to how well somebody would fit into their organisation, whether they would thrive, develop and so on. What is also true is that they spend a LOT of time trying to target relevant talent pools and sifting through irrelevant applications. So their true human-intelligence strength is their ability to understand how well someone would fit into their specific organisation, and this is frequently stymied by the sheer amount of admin they have to do.

From our perspective, our aim is to firstly 'decode' talent. What this means is that in the initial process of classifying what someone is from a career perspective, we need to get very, VERY granular - what we refer to as deep taxonomy. For example, typically within the recruitment space, job boards, sites like LinkedIn etc. will classify someone in IT as 'IT/Software Development'. That's pretty much the depth of taxonomy they go to. However, somebody who is a Ruby on Rails developer doesn't want to see a ton of Java roles when he or she wants to learn about career opportunities. Furthermore, the way we consume digital information has changed: we expect relevant content to be pushed towards us, not to have to go out and look for it using imperfect search engines and algorithms. So the answer here is to not get too hung up on what someone is called and focus more on what they do, have done and can do. You can have the job title '4th Duke of Wimbledon' for all we care; what matters is what your digital DNA is.

    How does this happen? Well Google in particular holds a lot of information about you. And from our perspective we have a LOT of data around the kinds of things Ruby on Rails developers (as an example) are interested in. We also know that Ruby on Rails developers don't make frequent searches for Ruby On Rails jobs. So we acquire these people on a deep taxonomy profile level from Google, Bing, Facebook (any of the platforms that operate at this level) and you come into our ecosystem which we call a community. We never take a CV as that is totally static data that is out of date the instant it is put together. Once you are in our community and we think you are (in this example) a Ruby on Rails Developer our CRM stack starts sending you content it thinks people with your profile will engage with (using what we refer to as 'intent data'). We see how you engage with it and the frequency of your engagement and these engagements are tailored automatically accordingly. But what is important here is the understanding of what you do engage positively with as this creates a higher 'confidence score' around your profile and the types of things you are interested in. Moreover because we are serving you content based on your profile and digital engagements and not what your job title is we soon learn that like most people from a career perspective you are interested in multiple options. You may be a Ruby on Rails Developer but you also might want to get into teaching, or be a scuba instructor. Your engagements and digital footprint tell us this, plus we know at any given time what your current state of engagement level is and what you engage with so it is always a real time snapshot of what 'floats your boat' at any given time. Again, no way is a simple CV on a database going to tell us that.

    Ok so now it gets clever. We run a campaign for client X who wants to target Ruby on Rails Developers. She doesn't want Java Developers applying, she only wants Ruby On Rails Developers applying and it is vital they do so for the future growth of her business. She also knows Ruby on Rails Developers are very difficult to target as they are so scarce and sought after. Because we have built a community globally (in our case of 100m profiles) and have used the aforementioned digital profiling to create a high confidence score about what it is that a user is most interested in at that moment in time we are pretty confident that the people we have classified as Ruby on Rails Developers are indeed Ruby on Rails Developers. As a result of this confidence we run a campaign with a commercial model being that the client pays every time one of these candidates completes an application and is verified as a relevant candidate. So a better store of value for the recruiter than paying for a job listing or some clicks.

    Here is where the AI gets involved. When running a campaign, we already know with a high confidence score who is a Ruby on Rails Developer. But we don't know which Ruby on Rails developers are likely to apply for this specific role as everyone is different and will have different motivations. So the AI learns which Ruby on Rails Developer profiles engage best with the recruiter roles, what profile types they share in common and then starts focusing the recruiter adverts more on this higher converting sub profile. What it also does (what we refer to internally as automated SEM by Taxonomy) is go out to Google, Facebook etc and acquire at this very deep taxonomy sub profile level MORE of these profiles if it feels that the profiles already in our communities aren't sufficient to satisfy the client's demands. It also knows what it needs to pay in order to acquire these profiles, how they convert towards the commercial model we have in place with the client and determines if this commercial model needs to change. So the AI has learnt what profiles convert best and are viewed as high quality by the client and takes the decision itself to go out and acquire more of these profiles using the very granular approach we took in the first instance when we acquired the candidates.

    The AI drives candidate acquisition based on what it has learned around what our clients need moving forward and what profiles convert best, it learns what content to serve people in our communities (just like Amazon or Netflix serve you different content compared to your peers based on historical engagements you have made on those platforms). This cannot be done by humans at scale and so fast. It isn't cutting human jobs, rather it is enabling humans to focus on doing the tasks that they are best placed to do and drive real impact to their organisations. And it helps organisations pay only for results that matter to their businesses, provides a much better experience for candidates as the content they are after finds them (they don't need to search for it) and the frequency this content is served to them happens based on the frequency that they engage with it.

    Whoosh, that was a LOT of information but I hope it made sense (in part at least) :)
Sounds cool. I suspect ‘converts’ has a non-standard use here. I assume that you continue tracking longitudinally and refine your models. It seems to me that ultimately the trajectories of employees within the firm, and even the success of the firm, would be needed to improve matches and move beyond getting stuck in what people think they want rather than what they need. How do you detect and stop people gaming this system, and how do you assess competence rather than interest?

  20. #20
    Master
    Join Date
    Sep 2018
    Location
    West
    Posts
    1,282
    The technology is one thing, and it’s great to have a fairly benign example like recruitment, but change “recruitment campaign” to “election campaign” and I get nervous and start thinking Cambridge Analytica.

    For me it all comes back down to data sets, intent and the potential for manipulation. The genie is well and truly out of the bottle and there’s such a technology arms race going on that nobody is thinking that much about regulation.

  21. #21
    Master alfat33's Avatar
    Join Date
    Aug 2015
    Location
    London
    Posts
    6,198
    Thanks Ryan, very interesting.

  22. #22
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by PhilT View Post
    The technology is one thing, and it’s great to have a fairly benign example like recruitment, but change “recruitment campaign” to “election campaign” and I get nervous and start thinking Cambridge Analytica.

    For me it all comes back down to data sets, intent and the potential for manipulation. The genie is well and truly out of the bottle and there’s such a technology arms race going on that nobody is thinking that much about regulation.
    I assume you saw my opinion in the other thread. I’m much happier to see benign applications, but I agree it’s the same tech and those who are not paying are merely product. I rather suspect it’s self defeating, because, as other research has shown and as evolution demonstrates, it’s diversity that gives you an agile and responsive workforce. As described this looks to me to encourage monocultures and filter bubbles which I suspect will satisfy the client beautifully while adversely affecting overall fitness. There’s no particular motivation to address this as the future firm isn’t paying...

  23. #23
    Grand Master ryanb741's Avatar
    Join Date
    Jun 2008
    Location
    London
    Posts
    19,615
    Quote Originally Posted by M4tt View Post
    Sounds cool. I suspect ‘converts’ has a non standard use here. I assume that you continue tracking longitudinally and refine your models. It seems to me that ultimately the trajectories of employees within the firm and even success of the firm would be needed to improve matches and move beyond getting stuck in what people think they want rather than what they need. How do you detect and stop people gaming this system and assess competence rather than interest?
    That's what the recruiter does. Hence AI and Human Intelligence working together. What the AI does is ensure the recruiter is given enough of a talent pool to work with.

    Sent from my SM-G950F using Tapatalk

  24. #24
    Grand Master ryanb741's Avatar
    Join Date
    Jun 2008
    Location
    London
    Posts
    19,615
    Quote Originally Posted by M4tt View Post
    I assume you saw my opinion in the other thread. I’m much happier to see benign applications, but I agree it’s the same tech and those who are not paying are merely product. I rather suspect it’s self defeating, because, as other research has shown and as evolution demonstrates, it’s diversity that gives you an agile and responsive workforce. As described this looks to me to encourage monocultures and filter bubbles which I suspect will satisfy the client beautifully while adversely affecting overall fitness. There’s no particular motivation to address this as the future firm isn’t paying...
    Actually the opposite applies. Filter bubbles apply when we try to label things using human nomenclature. When we use technology to break down someone's profile at a deep taxonomy level (digital DNA) and focus on what they can do as opposed to what they are called then you get much more diversity as a result.

    Sent from my SM-G950F using Tapatalk

  25. #25
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by ryanb741 View Post
    Actually the opposite applies. Filter bubbles apply when we try to label things using human nomenclature. When we use technology to break down someone's profile at a deep taxonomy level (digital DNA) and focus on what they can do as opposed to what they are called then you get much more diversity as a result.

    Sent from my SM-G950F using Tapatalk
    OK, it seems that you are using a few technical terms in ways that may diverge from how I'm using them, so I want to be a bit careful.

    I'm not sure that labelling things is the problem, and I'm equally unsure what non-human nomenclature would look like, as no other creature develops anything like it. If you want to call partitioning by some sort of artificial neural system 'labelling', or even 'nomenclature', then perhaps, but I wouldn't.

    Filter bubbles occur when the choices a user makes lead a system to suggest similar choices. That could be achieved by parsing natural language for keywords, but it could equally be achieved by instantiating a Bayesian function (either implicitly or explicitly) that inductively offered up search results fitting the pattern of previously completed searches while avoiding the patterns of aborted ones. This would be entirely subsymbolic but would still, undoubtedly, lead to a filter bubble.
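
    The subsymbolic route just described can be sketched as a toy preference scorer. Everything below is hypothetical and purely illustrative: items are ranked by a crude Bayesian-style ratio of how often their words appeared in completed versus aborted searches, with no explicit labels anywhere.

    ```python
    from collections import defaultdict

    completed = defaultdict(int)  # word -> times seen in completed searches
    aborted = defaultdict(int)    # word -> times seen in aborted searches

    def observe(query, finished):
        for word in query.split():
            (completed if finished else aborted)[word] += 1

    def score(item):
        # Add-one smoothed ratio of completed to aborted evidence per word.
        s = 1.0
        for word in item.split():
            s *= (completed[word] + 1) / (aborted[word] + 1)
        return s

    # The user keeps finishing watch searches and abandoning a car search...
    for q in ["dive watch", "vintage watch strap", "watch movement"]:
        observe(q, finished=True)
    observe("car insurance", finished=False)

    items = ["watch auction", "car auction", "strap review"]
    ranked = sorted(items, key=score, reverse=True)
    # Watch-related items now dominate the ranking: a filter bubble, with no
    # symbolic labels involved at any point.
    ```

    No word is ever tagged 'watch-related'; the bubble emerges from the statistics alone, which is exactly why renaming the categories doesn't dissolve it.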

    Perhaps a more precise definition of a 'filter bubble'?

    I'm curious what you mean by 'use technology to break down someone's profile'. What technology? How do you break it down? What does the ontology of this taxonomy look like, and how is it, dare I say, conceptualised?

    As it stands, I'm unclear how you are distinguishing between the former, which looks to me like it could equally well be called a shallow taxonomy, and your deep taxonomy. Is this a matter of granularity, or do you think it is a completely different conceptual scheme?

    As for 'digital DNA': what is the phenotype and what is the genotype? Otherwise I suspect this isn't a helpful use of figurative language.

    I get that focussing on competencies rather than job titles makes sense, but that's not new and doesn't require tech. It's the tech I'm interested in.

    Sorry if this sounds stroppy, I just write that way.
    Last edited by M4tt; 30th November 2019 at 18:30.

  26. #26
    Grand Master ryanb741's Avatar
    Join Date
    Jun 2008
    Location
    London
    Posts
    19,615
    Hi Matt

    Thanks for the comments. I'll respond in full tomorrow as I'm heading out, but I agree with you regarding the language I use. I deliberately use more conceptual, layman-friendly language when I write here, as 99% of the audience is non-AI; likewise, when I engage with clients and prospects, the vast majority are HR professionals, and my aim is to simplify what can be very complex concepts. 'Digital DNA' is one such phrase: a layman would understand exactly what I mean by it, but yes, I agree that if I went to my dev team with that they'd look at me as if I'd just fallen out of the back of a dog! :)

    Sent from my SM-G950F using Tapatalk

  27. #27
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by ryanb741 View Post
    Hi Matt

    Thanks for the comments. I'll respond in full tomorrow as heading out but I agree with you regarding the language I use - I deliberately use more conceptual and layman speak when I write here as 99% of the audience is non AI and likewise when I engage with clients and prospects, the vast majority of these are HR professionals and my aim is to simplify what can be very complex concepts. 'Digital DNA' is one such phrase I use as a layman would understand exactly what I mean by that but yes I agree if I went to my DEV team with that they'd look at me as if I just fell out the back of a dog! :)

    Sent from my SM-G950F using Tapatalk
    That would be interesting. However, I really don't think either a layman or a specialist would have the faintest clue what 'digital DNA' meant. I suppose a non-specialist might think they did, which is what matters, I guess.
    Last edited by M4tt; 30th November 2019 at 20:42.

  28. #28
    Grand Master Mr Curta's Avatar
    Join Date
    May 2014
    Location
    Mainly UK
    Posts
    17,281
    Quote Originally Posted by ryanb741 View Post
    I deliberately use more conceptual and layman speak when I write here as 99% of the audience is non AI and likewise when I engage with clients and prospects, the vast majority of these are HR professionals and my aim is to simplify what can be very complex concepts.
    You're doing a bloody good job of it.

  29. #29
    Could an AI expert here get 'one' to sign up to TZ. Might be good training?

  30. #30
    We use an increasing amount of AI in our organisation. Propensity modelling, churn, recommendations etc.

    I expect it to revolutionise almost every part of our business in the next 3 years.

    I'm investing many millions into building the platform capabilities to roll this out, but honestly, recruitment will be the last place we implement it. Not because I don't believe it will be great at some point, but I want the ethics and probable legal cases finished before I start to use it to assess 50k candidates a year.

  31. #31
    Grand Master ryanb741's Avatar
    Join Date
    Jun 2008
    Location
    London
    Posts
    19,615
    Quote Originally Posted by guinea View Post
    We use an increasing amount of AI in our organisation. Propensity modelling, churn, recommendations etc.

    I expect it to revolutionise almost every part of our business in the next 3 years.

    I'm investing many millions into building the platform capabilities to roll this out, but honestly, recruitment will be the last place we implement it. Not because I don't believe it will be great at some point, but I want the ethics and probable legal cases finished before I start to use it to assess 50k candidates a year.
    It isn't used to assess them; it's used to target and engage them at scale. You are right that using AI as a decision-maker in the recruitment process is asking for trouble at this stage, but much of the process can be automated and AI-enhanced in a way that improves the human part of the process.

    Sent from my SM-G950F using Tapatalk

  32. #32
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by guinea View Post
    We use an increasing amount of AI in our organisation. Propensity modelling, churn, recommendations etc.

    I expect it to revolutionise almost every part of our business in the next 3 years.

    I'm investing many millions into building the platform capabilities to roll this out, but honestly, recruitment will be the last place we implement it. Not because I don't believe it will be great at some point, but I want the ethics and probable legal cases finished before I start to use it to assess 50k candidates a year.
    Here's the problem laid bare - the reality is that all of this will boil down to uplift modelling or whatever the current euphemism is for using big data and godlike AI to gently, or less than gently, get people to act as you wish them to.

    If there's one key discovery over the last thirty years of competent AI and Cognitive Science, it's just how easy we are to manipulate, how many blind spots, cognitive biases and neglects we have. At the same time the ability to store, curate and access data gives those who have access unparalleled insight into what works. Put them together and poor little Homo Narratus is stuffed.

  33. #33
    Master chrisb's Avatar
    Join Date
    Oct 2002
    Location
    at the end of my tether
    Posts
    6,248
    When I saw the thread title I thought this was a livestock related discussion. Seems it was a misconception.

  34. #34
    Master
    Join Date
    Sep 2018
    Location
    West
    Posts
    1,282
    What level of processing power is assumed for this type of application?

    I have some involvement in a related hardware area but the target market is at the edge (self-crashing cars) so it needs to be very real time.

  35. #35
    Quote Originally Posted by M4tt View Post
    Here's the problem laid bare - the reality is that all of this will boil down to uplift modelling or whatever the current euphemism is for using big data and godlike AI to gently, or less than gently, get people to act as you wish them to.

    If there's one key discovery over the last thirty years of competent AI and Cognitive Science, it's just how easy we are to manipulate, how many blind spots, cognitive biases and neglects we have. At the same time the ability to store, curate and access data gives those who have access unparalleled insight into what works. Put them together and poor little Homo Narratus is stuffed.
    To be honest, you don't need a lot of data to manipulate people. Advertisers have managed it for years without AI. Have you seen the price of a Rolex?

    The modelling we do can actually be done on a traditional data platform; it just takes 120 hours. You can do it 40,000 times faster with AI. Rather than use it to manipulate the customer, we use it to work out who to avoid marketing to, saving us a fortune.

    We'll use it in the supply chain to predict errors, we'll use it to work out pricing and volumes, and we'll use it to help decide what stock we buy. All things that currently happen, but far faster and more reactive with AI. Businesses (past a certain scale) that aren't supported by a solid data platform will all be dead in 10 years.

    The balance that we need to strike is the human vs machine influence in what differentiates us. I'm more worried about a world shaped by the same reductive algorithms than I am about a consumer seeing an advert they respond well to. Although I admit the two aren't independent.

  36. #36
    Quote Originally Posted by PhilT View Post
    What level of processing power is assumed for this type of application?

    I have some involvement in a related hardware area but the target market is at the edge (self-crashing cars) so it needs to be very real time.
    In my use cases, I don't really care about the processing power. It happens in the cloud, and I really only care about price. It's cheap.

    Real time is coming. You can now do voice-to-text on an edge device, and AI-specific chipsets are appearing all the time.

  37. #37
    Master
    Join Date
    May 2005
    Location
    Cheshire, UK
    Posts
    5,144
    Quote Originally Posted by chrisb View Post
    When I saw the thread title I thought this was a livestock related discussion. Seems it was a misconception.
    Not too far out though.

  38. #38
    Grand Master
    Join Date
    Mar 2008
    Location
    Sussex
    Posts
    13,888
    Blog Entries
    1
    Quote Originally Posted by guinea View Post
    To be honest, you don't need a lot of data to manipulate people. Advertisers have managed it for years without AI. Have you seen the price of a Rolex?

    The modelling we do, you can actually do in a traditional data platform it just takes 120 hours. You can do 40,000 times faster with AI. Rather than use it to manipulate the customer, we use to work out who to avoid marketing to, saving us a fortune.

    We'll use it in supply chain to predict errors, we'll use of to work out pricing and volumes, and we'll use it to help decide what stock we buy. All things that currently happen, but are far faster and more reactive with AI. The businesses (past a certain scale) that aren't supported by a solid data platform will all be dead in 10 years.

    The balance that we need to strike is the human vs machine influence in what differentiates us. I'm more worried about a world shaped by the same reductive algorithms than I am about a consumer seeing an advert they respond well do. Although I admit they aren't interdependent.
    In that case, you seem to be both aware of the issues and keen to avoid them, which is great. However, as Cambridge Analytica demonstrated, others are not so fastidious. I assume they were just the incompetent ones, and the competent outfits are those we have never heard of.

  39. #39
    Grand Master MartynJC (UK)'s Avatar
    Join Date
    Dec 2008
    Location
    Somewhere else
    Posts
    12,336
    Blog Entries
    22
    A small contribution :-

    This discussion makes me think of Asimov's Three laws of robotics:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.


    Also - Descartes' dualistic principle of the, "Ghost in the Machine".

    The whole nature of humanity that has kept philosophers and sages busy down the ages is wrapped up in definitions of intelligence and, by extension, artificial intelligence.

    In the words of Douglas Adams - we should look out for the 'shoe event horizon'* - as one is due very soon me thinks. The rate of change is exponential and I surmise non-sustainable. Something in this world system has to give - but I am sure the Earth will continue long after the pesky homo-sapiens have gone.


    * https://hitchhikers.fandom.com/wiki/Shoe_Event_Horizon#
    The Shoe Event Horizon is an economic theory that draws a correlation between the level of economic (and emotional) depression of a society and the number of shoe shops the society has.

    The theory is summarized as such: as a society sinks into depression, the people of the society need to cheer themselves up by buying themselves gifts, often shoes. It is also linked to the fact that when you are depressed you look down at your shoes and decide they aren't good enough quality so buy more expensive replacements. As more money is spent on shoes, more shoe shops are built, and the quality of the shoes begins to diminish as the demand for different types of shoes increases. This makes people buy more shoes.

    The above turns into a vicious cycle, causing other industries to decline.

    Eventually the titular Shoe Event Horizon is reached, where the only type of store economically viable to build is a shoe shop. At this point, society ceases to function, and the economy collapses, sending a world spiralling into ruin. In the case of Brontitall and Frogstar World B, the population forsook shoes and evolved into birds.

  40. #40
    Craftsman
    Join Date
    Jan 2019
    Location
    Ireland
    Posts
    390
    Most people hear 'AI' and think of computers reasoning through complex problems using intuition, as humans do.
    AI is nothing of the sort: it is just a bunch of mathematical formulas that use past data from a very specific process to build a model, which can then give a good guess at how that process will behave in the future.

    A brutally simple, contrived example.
    Say we have a list of house prices, floor areas and bedroom counts:

    house 1, 275k 100sqm, 3 bedrooms
    house 2, 550k 200sqm, 6 bedrooms
    house 3, 125k 50sqm, 1 bedroom

    Say size is the variable s and bedrooms is the variable b; the price of a house can then be represented as 2s + 25b.
    E.g. the price of a house with 10 bedrooms and a floor area of 400sqm would be (400*2) + (25*10) = 1050k.
    A very simple example, but almost all AI does something like this, just with vastly more variables and more complicated mathematical techniques.

    That leads to the next important point about AI: it's only good when it's focused on a very specific problem, and I mean laser-focused.
    The model that prices the houses would be of zero use for anything else to do with houses. It doesn't know it's dealing with houses or floor areas.
    If you wanted to estimate the cost of renovating a house, for example, a brand-new model is required that has little to do with the pricing model.
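
    The toy pricing model above can be recovered by least squares in a few lines. This is a sketch, not anyone's production system; house 2's price is taken as 550k here so that the stated formula (price = 2s + 25b) fits all three rows exactly.

    ```python
    import numpy as np

    A = np.array([[100.0, 3.0],    # floor area (sqm), bedrooms
                  [200.0, 6.0],
                  [ 50.0, 1.0]])
    y = np.array([275.0, 550.0, 125.0])   # prices in thousands

    # Least-squares fit: finds the coefficients [2, 25] from the data alone.
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Predict a 400 sqm, 10-bedroom house: (400*2) + (25*10) = 1050k.
    pred = np.array([400.0, 10.0]) @ coef
    ```

    Real systems differ only in scale: thousands of columns instead of two, and more sophisticated fitting techniques, but the same idea of coefficients learned from past data.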

    Andrew Ng, a leader in the field, has given a good gauge of what AI can and cannot do: most tasks that take a human a second or less to figure out are potential candidates for AI. Those are tiny tasks. Developers can then string these small tasks together to make systems that perform larger ones.
    E.g. one AI model analyses security-cam footage and extracts key points for faces.
    A second model might compare those faces to the faces of known criminals.
    A third model might decide which matches are high-probability enough to be worth a human's time to investigate.
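
    The stringing-together can be sketched as a plain pipeline. Every function here is a hypothetical stand-in; in practice each would be a separately trained, narrowly focused model.

    ```python
    def detect_faces(frame):
        # Stand-in for a face-detection model: returns face descriptors.
        return frame["faces"]

    def match_watchlist(face, watchlist):
        # Stand-in for a matching model: returns a match probability.
        return watchlist.get(face, 0.0)

    def triage(matches, threshold=0.8):
        # Stand-in for a triage model: keep only matches worth a human's time.
        return [(face, p) for face, p in matches if p >= threshold]

    watchlist = {"face_A": 0.95, "face_B": 0.40}
    frame = {"faces": ["face_A", "face_B", "face_C"]}

    matches = [(f, match_watchlist(f, watchlist)) for f in detect_faces(frame)]
    flagged = triage(matches)   # only face_A clears the bar
    ```

    Note that no single stage 'understands' the overall task; the system-level behaviour comes from the developer's wiring, not from any one model.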

    AI does not have intuition; it has formulas (models) and data. If the data changes, the models need to be recreated.
    If the data is bad, the model is junk.

    AI is also not good or bad; it's as biased as its data. Take Google Photos' classification feature, which lets users search their photos by typing in an object: type 'car' and Google Photos will show all of your photos that are likely to contain a car.
    When it was first released, it classified black people as monkeys. On the face of it, that seems like a massively racist thing to happen, and surely the racist developers must be fired. In reality, all that happened is that the model was trained on a biased dataset, i.e. one containing mostly Caucasian people. The model didn't have enough data on dark-skinned humans, so the next most probable label, based on its data, was monkeys!
    Google isn't racist and the developers are not racist; the data was just incomplete, and therefore very biased.

    There was no telling the Google Photos classification algorithm that it did something wrong. It doesn't know what 'wrong' is; it doesn't know what a human is, or a car, or a monkey. It just knows patterns that give a high likelihood of something being labelled with the text 'car' or 'person' or 'monkey'.
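
    The 'biased data, junk model' point can be made with a deliberately dumb toy: a classifier that just predicts the most common training label looks accurate on an imbalanced dataset while being useless on the under-represented class. The data here is made up purely for illustration.

    ```python
    from collections import Counter

    # Training data: 95 examples of class A, only 5 of class B.
    train_labels = ["A"] * 95 + ["B"] * 5
    majority = Counter(train_labels).most_common(1)[0][0]   # "A"

    # A test set with the same imbalance; the model predicts "A" every time.
    test_labels = ["A"] * 95 + ["B"] * 5
    predictions = [majority] * len(test_labels)

    accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)
    recall_B = sum(p == t for p, t in zip(predictions, test_labels) if t == "B") / 5

    # accuracy is 0.95 -- looks great on paper.
    # recall_B is 0.0 -- every minority example is wrong, just like the
    # photo classifier that had barely seen dark-skinned faces.
    ```

    Nothing in the code is malicious; the failure is baked into what the data did and didn't contain.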

    Human brains can make massive leaps of intuition, links between seemingly unrelated fields, and take that information and come up with something brand new. There is no ML/AI algorithm that can do this and no sign of it on the horizon.

    One thing AI can do well that humans can't is spot patterns in massive quantities of data. Show a human a million rows with a thousand columns per row and they will go blind! In our contrived house-price example, if you add 100 more variables and a million rows, it's going to take a human a long time to see a pattern. AI algorithms can find patterns in such data quickly; that's where they are useful, but again, only when super-focused on a specific problem or task.

    This is why (IMHO) AI is nowhere near replacing humans at what we're good at. At best it will take away simple tasks, freeing people up for higher thinking: tasks like finding patterns in millions of records of data, patterns which humans will then interpret. It will eventually take over some jobs, like security monitoring and others that require humans to do lots of simple tasks in succession.

    As long as people have a continual growth mindset and move up the value chain, they will always have jobs.
    It's similar to the industrial revolution, which gave people more free time but created as many jobs as it took over.
    Same with computers: there are more jobs than ever.
    AI and ML are doing the same thing. Sure, some existing jobs will disappear, but in 20 or 30 years there will be many more new jobs, with titles and functions that don't even exist yet.
    Last edited by Wilson_smyth; 2nd December 2019 at 22:19.

  41. #41
    Grand Master AlphaOmega's Avatar
    Join Date
    Jun 2009
    Location
    Trinovantum
    Posts
    11,313
    Well, unfortunately the discussion has gone where I thought it would. Not because the contributors to this thread can't discuss things without recourse to insults (see all the political threads that have ever been started on TZ), but because G&D has rules in place for a reason.

    If Eddie is reading this perhaps it would be wise to move it.

    Thanks to everyone who has taken the time to type long-form posts that really do help to show where the current corporate thinking is on AI. It would be interesting to know where you think it will be in 10 or 20 years.

    Wilson (in his post) says that AI 'isn't good or bad'. But judging by the examples he has given, there is a very good case that AI could be bad. And I don't mean good or bad at a specific recruitment role. I mean good or bad in ethical terms. That's where things get sketchy. What happens if AI turns out to be inherently biased even without ongoing human input? What if it decrees that [insert ideological position] is the way forward? Or if the original brief for something wonderful becomes something we don't want under any circumstances?

    We are flawed. Why shouldn't AI be if we continue to develop it?

    The ethical issue is compounded by the idea that we may be living in a simulation. So we may all be AI. Or not.

    *looks at back of the room where nobody is listening because they're all buying second-hand shoes on SC*

  42. #42
    Craftsman
    Join Date
    Jan 2019
    Location
    Ireland
    Posts
    390
    Quote Originally Posted by AlphaOmega View Post
    Wilson (in his post) says that AI 'isn't good or bad'. But judging by the examples he has given, there is a very good case that AI could be bad.
    In the same way that a kitchen knife, or garden shears, or a car can be good or bad.
    Cars are good: they transport us from place to place. Cars are bad: they run people over.
    Knives are good: they cut food. Knives are bad: they can stab people.

    Would you look at a car that has run someone over and think 'oooh, now that's an evil car'?
    Let's take the human element out of it: a rock-crushing machine just keeps crushing rocks; nobody is driving or controlling it. A man falls in and gets crushed. Is that machine evil? Is it evil if the man is pushed in by another man?



    Quote Originally Posted by AlphaOmega View Post
    Not because the contributors to this thread can't discuss things without recourse to insults (see all the political threads that have ever been started on TZ) but because the G&D has rules in place for a reason.
    I think the discussion here has been very level-headed; people have expressed opinions without attacking other opinions or posters, and vice versa.
    Last edited by Wilson_smyth; 3rd December 2019 at 13:38.

  43. #43
    Grand Master AlphaOmega's Avatar
    Join Date
    Jun 2009
    Location
    Trinovantum
    Posts
    11,313
    I never say "ooooh".

    Have you been watching too many soap operas?

  44. #44
    Craftsman
    Join Date
    Jan 2019
    Location
    Ireland
    Posts
    390
    Quote Originally Posted by AlphaOmega View Post
    I never say "ooooh".

    Have you been watching too many soap operas?
    OK, now we're getting to that point where the debate slowly degrades from a discussion. You were right, even if through self-fulfilling prophecy; probably time to close the thread.

  45. #45
    Grand Master AlphaOmega's Avatar
    Join Date
    Jun 2009
    Location
    Trinovantum
    Posts
    11,313
    I just felt your response was a little bit patronising. Hence my response in kind.

    I'm happy to continue the discussion but my concern is that some topics are unintentionally difficult.

  46. #46
    Grand Master Mr Curta's Avatar
    Join Date
    May 2014
    Location
    Mainly UK
    Posts
    17,281
    There's an erudite and articulate line-up at this NS event on Sunday.

    https://www.eventbrite.co.uk/e/instant-expert-artificial-intelligence-tickets-79693829389?aff=Email6



    Talks and speakers:

    Privacy in the era of big data
    Michael Veale, Lecturer in digital rights and regulation at University College London

    Living with intelligent machines
    Nello Cristianini, Professor of Artificial Intelligence at the University of Bristol

    Collective intelligence: why AI needs us
    Aleksandra Berditchevskaia, Nesta Senior Researcher at the Centre for Collective Intelligence Design

    Can computers be creative?
    Anna Jordanous, Senior lecturer at the University of Kent


    Helmut Hauser, Senior Lecturer in Robotics at the University of Bristol

    Plus one more exciting speaker to be announced soon


    Hosted by Valerie Jamieson – New Scientist Live Creative Director

  47. #47
    Grand Master AlphaOmega's Avatar
    Join Date
    Jun 2009
    Location
    Trinovantum
    Posts
    11,313
    ^Very interesting.

    Big Data is a bigger problem than AI in the short term IMHO. I'm talking about the ease with which we all acquiesce to giving away valuable information.
