Even the biggest, most technologically advanced businesses find artificial intelligence difficult. There is a slew of examples of AI gone bad, where developers have accidentally built applications that proved to be racist or sexist, or that appeared to advocate violence.
The problem is not with the technology itself, but with its human creators, who so often bring their own unconscious biases to the table. Google, for example, listed 641 people working on “machine intelligence” – of whom only 10 percent were women.
This shows how even the biggest, most technologically advanced businesses can make serious missteps on the journey towards better, more intuitive operations through AI and automation. How can businesses navigate this complex landscape and develop systems that are not only free from bias but also deliver real, measurable business value?
AI and automation – the human factor
Luminaries from Elon Musk to Professor Stephen Hawking have issued dire warnings about the existential threat AI poses to humanity. On a more prosaic level, many ordinary people hold the erroneous (and dangerous) assumption that technologies such as AI, machine learning and automation will soon replace them in their jobs. In truth, we are a long way from that reality.
Current automated systems are still directed heavily by humans, with pre-set tasks and complex but defined and limited algorithms. There are no AI solutions in current business use whose actions are completely unpredictable, and none that are capable of independent thought – and we see this as sound business sense.
Rather than seeing the relationship between humans and these technologies as akin to that between master and servant, we should instead think of it more like a marriage. Automated and AI systems should be there to support us, not replace us. And, like a good marriage, they find firm foundations in dialogue.
The picture on the ground 
What does this look like?
