AI is a new paradigm in computing. Definitions vary across tech publications, but the general consensus is this: AI, of the kind prevalent today, arrives at solutions to problems without being explicitly programmed to solve them.
Historically, our software’s sole purpose has been to automate logic. We gave it problems we already knew how to solve and had it perform calculations within predefined rules to achieve objectives quickly. Programming has been grounded in proof and certainty.
AI, as a branch of computer science, is more empirical in nature. It doesn’t require prior knowledge of the truth; instead, it applies probability and statistics to handle uncertainty. In a nutshell, it strives to draw inferences, as accurate as possible, from incomplete information.
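The contrast can be made concrete with a minimal sketch: instead of hard-coding a rule, a statistical program infers one from noisy observations. The data and the "unknown process" below are invented for illustration; no real dataset or library API is implied.

```python
# Sketch: inferring a rule from incomplete, noisy data rather than
# programming it explicitly. The "true" process y = 2x + 1 is hidden
# from the fitting code, which only sees noisy samples of it.
import random

random.seed(0)

# "Incomplete information": 20 noisy observations of an unknown
# linear process (here secretly y = 2x + 1 plus Gaussian noise).
xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]

# Ordinary least squares, closed form: estimate slope and intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The inferred rule is approximate, not proven -- it lands near
# y = 2x + 1 without that rule ever being written into the program.
print(round(slope, 2), round(intercept, 2))
```

The program never contains the rule it ends up with; it only gets close to it, which is exactly the probabilistic, best-effort character described above.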
Why Are We Talking About AI Now?
Three factors are considered to have spurred the current AI wave: recent algorithmic advances in Machine Learning (primarily deep learning), the emergence of datasets large enough for ML models to detect patterns in, and the availability of computation hardware powerful enough to handle Big Data processing. If we had to name one company that accelerated this progress, we’d say it was NVIDIA.
The firm’s GPU cards, originally designed to render realistic graphics for PC gaming, turned out to be nearly ideal for Deep Learning. Each chip contained about 4,000 cores which, while not particularly powerful on their own, enabled massive parallelisation of computation and thus provided a sound platform on which neural networks could run efficiently. Many influential research papers have also appeared in recent years, such as “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” “Deep Residual Learning for Image Recognition,” and “Large-Scale Video Classification with Convolutional Neural Networks.”
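Why do many weak cores suit neural networks so well? Each output neuron of a dense layer is an independent weighted sum, so thousands of them can be computed simultaneously. The sketch below illustrates that independence with Python's standard thread pool; the function names and toy numbers are invented for illustration, and a real GPU kernel would of course look very different.

```python
# Sketch: a dense layer's neurons are independent computations,
# which is why many-core hardware can evaluate them in parallel.
# Illustrative only -- names and values are not from any library.
from concurrent.futures import ThreadPoolExecutor

def neuron_output(weights, inputs, bias):
    # One neuron: a weighted sum plus bias (activation omitted).
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def dense_layer(weight_rows, inputs, biases):
    # Each neuron depends only on the shared inputs and its own
    # weights, so all of them can be dispatched at once.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(
            lambda wb: neuron_output(wb[0], inputs, wb[1]),
            zip(weight_rows, biases),
        ))

inputs = [1.0, 2.0, 3.0]
weights = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]
biases = [0.5, -1.0]
print(dense_layer(weights, inputs, biases))  # [14.5, 1.0]
```

On a GPU the same independence is exploited at far larger scale: thousands of simple cores each handle a slice of the matrix arithmetic, which is the bulk of a neural network's workload.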
Collectively, they have all helped ML practitioners to increase,
