Given the volume of column inches devoted to artificial intelligence in today’s media, it is all too tempting to treat it as a revolutionary concept. In reality, the concept is anything but new: the term was coined by computer scientist and pioneer John McCarthy for the Dartmouth workshop held in the United States in 1956. This fact isn’t meant to dissuade those who have faith in the undeniable potential of AI to transform our world – because it surely will – but to encourage measure, reason and patience in the discussion.
First and foremost, all of us who work in the technology world have a responsibility to our customers and the wider public to be clear about what we mean when we say ‘AI’. As a recent editorial from The Guardian points out, the term is commonly used to describe what might more accurately be called machine learning: the process by which a system discovers patterns in data and uses them to improve its performance at a task.
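To make that distinction concrete, here is a minimal, purely illustrative sketch of machine learning in this narrow sense: a program that “learns” a pattern (a straight-line relationship) from example data and then uses it to make predictions. The data and names are invented for illustration and are not drawn from any particular product.

```python
# Illustrative only: "learning" a pattern from data by fitting a line
# with ordinary least squares, then using it to predict unseen values.

def fit_line(xs, ys):
    """Find the slope and intercept that best explain the observed (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: hours of viewing vs. recommendations clicked.
hours_watched = [1, 2, 3, 4, 5]
clicks        = [2, 4, 5, 8, 10]

slope, intercept = fit_line(hours_watched, clicks)

# The learned pattern can now be applied to an input the system has never seen.
predicted_clicks = slope * 6 + intercept
print(f"Predicted clicks for 6 hours of viewing: {predicted_clicks:.1f}")
```

The point is simply that the “intelligence” here is statistical pattern-finding, not reasoning in any human sense.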
Today, machine learning touches the minutiae of our daily lives in more ways than we may care to imagine. Big brands like Apple and Nestlé are investing huge sums in AI and machine learning models to capitalise on the kind of in-depth data that forms a sophisticated picture of consumer choices and habits, informing their product development and advertising strategies. The ads that pop up in your web browser are anything but random. Hugely popular streaming services like Netflix and Amazon Prime use similar recommendation algorithms to curate a viewing experience you’ll enjoy, suggesting films and TV series based on what you’ve previously watched. Ever wonder why a TV series you’ve never heard of has a score of “95%” on Netflix? This isn’t a user rating, but the platform’s own prediction of how closely the title matches your viewing habits.
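As a rough illustration of the principle behind those recommendation engines (not Netflix’s or Amazon’s actual algorithm, which is proprietary and far more sophisticated), the following sketch scores unseen titles by how much a viewer’s history overlaps with other viewers’. All titles and viewers here are hypothetical.

```python
# Illustrative only: a toy recommendation step based on viewing history.
# Real services combine many more signals and far richer models.

viewing_history = {
    "alice": {"The Crown", "Dark", "Mindhunter"},
    "bob":   {"Dark", "Mindhunter", "Ozark"},
    "carol": {"The Crown", "Bridgerton"},
}

def recommend(user, histories):
    """Suggest unseen titles watched by viewers with the most similar history."""
    watched = histories[user]
    scores = {}
    for other, titles in histories.items():
        if other == user:
            continue
        # Similarity = proportion of shared titles (Jaccard index).
        overlap = len(watched & titles) / len(watched | titles)
        for title in titles - watched:
            scores[title] = scores.get(title, 0) + overlap
    # Highest-scoring unseen titles come first.
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", viewing_history))  # ['Ozark', 'Bridgerton']
```

The percentage you see on screen is, in effect, the output of this kind of prediction dressed up as a score.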
