“AI and machine learning are the new hot buttons on industry BS bingo sheets.” Neil Davidson, Managing Director and Partner of HeyHuman, explains what AI is (and isn’t) and how agencies can get the balance of man plus machine right.
Last week’s ‘Machine Learning Debate’ at The Foundation for Science and Technology in London showed what the AI experts had to say, as consultancies and agencies remain far too willing to wrap up basic chatbots under the shiny banner of “AI”. I was determined to come away with a realistic classification of what AI is, and isn’t, so we can get the balance of man plus machine right and create more human brand experiences that connect by really putting people first.
Simply put, to separate AI from BS.
Dr Mike Lynch – former CEO of Autonomy – and Dr Claire Craig – who advises government on AI policy – were exactly the kind of thought leaders I was looking for.
Dr Lynch kicked off with two points that agencies, brands and society need to grasp to get to grips with AI:
- 9 out of 10 projects claiming to include AI don’t.
- People who are certain that the robots are coming to get us don’t really understand the current state of play with AI.
The first point is something cynics have rightly suspected for some time. All too often, campaigns using simple decision-tree-driven chatbots are wrapped up as “AI” or “machine learning”. The number of start-ups and consultancies that claim to use machine learning is ridiculous – these terms are the new hot buttons on industry BS bingo sheets.
This is problematic for two reasons. First, clients are getting hoodwinked into putting the tech first and handing the keys over to “AI”, only to find there is no substance to it. Second, audiences are getting poor experiences because we aren’t getting the marriage of man and machine right.
In the context of this knowledge gap, many audience members agreed we need a clear understanding of what AI is. It’s clear that it’s not just one thing, and that the definition is subjective and relative:
Level 1: Decision-tree-driven, artificially intelligent answers (most people agreed this isn’t really AI)
Level 2: Bayesian networks, which apply a degree of related reasoning to the answers they give (some people classify this as narrow AI)
Level 3: AlphaGo, which uses Monte Carlo tree search in addition to neural networks (most agree this is AI – albeit still a relatively narrow use)
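To see why most people agreed Level 1 isn’t really AI, here is a minimal sketch of a decision-tree chatbot – the kind so often dressed up as “AI”. The tree, questions and answers are invented for illustration; the point is that every response is a scripted branch written by a human, with no learning or reasoning involved.

```python
# A hypothetical "Level 1" chatbot: a hand-written decision tree.
# Every reply is a pre-scripted branch; nothing here learns,
# generalises, or reasons about anything outside the script.

DECISION_TREE = {
    "question": "Is your query about orders or returns?",
    "orders": {
        "question": "Has your order shipped yet?",
        "yes": {"answer": "Check the tracking link in your email."},
        "no": {"answer": "Orders usually ship within 2 working days."},
    },
    "returns": {"answer": "You can start a return from your account page."},
}


def respond(node, choices):
    """Walk the scripted tree using the user's choices so far."""
    for choice in choices:
        node = node[choice]
    # Return the canned answer, or the next scripted question.
    return node.get("answer", node.get("question"))


print(respond(DECISION_TREE, ["orders", "no"]))
```

Anything the bot says was typed into the tree by a person in advance – which is exactly the gap between this and a system like AlphaGo, whose moves emerge from search and learned evaluation rather than a script.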
While this classification is helpful, I think the panel could and should have gone further. We need to classify AI properly to reduce sensationalist headlines such as ‘the robots are coming to get us.’ As Dr Lynch pointed out, there is an ocean of difference between narrow AI and broad AI – where the latter makes sense of its environment and understands how to respond generally.
“An AI fox could tell if there was a rabbit in the garden, but it might get run over by the night bus as it hasn’t been programmed to recognise it. Don't confuse narrow AI with broad AI that gets the world.”
Dr Claire Craig – leading the Foundation’s policy development on machine learning – built on Dr Lynch’s points, saying the worry about machine learning does not reflect the way it is applied: the scaremongers treat it as a singular notion when it is anything but. The key is to take a human approach to applying the tech, with human benefits at the fore:
“People don’t want data-driven technology they want data-enabled technology driven by human purpose.”
This was really the key takeaway for me: we have to keep the human in the machine and think about how we can use this technology to connect better with people. This is not about creating automata like Sophia the ‘robot’, as that doesn’t seriously move the needle on AI application.
We have to marry man and machine better, and determine where to apply this tech in a purposeful way. Increasingly, for agencies, that means cutting the BS and putting human purpose at the fore. It also means new ways of working and thinking to get these new relationships right, and to connect in ways that remain human and brain-friendly when we are increasingly talking to a machine.
Read the original article on the IPA blog here.