We teach AI systems to process information with the intellectual acumen of humans (perhaps even surpassing some humans), yet we do not reprimand them for their mistakes. Naturally, the blame is placed on the humans who developed the technology, and perhaps, as a result, AI will never be fully independent. Like a child, AI learns and makes decisions, but it has to be monitored by its creators to ensure it does not go rogue.
How do we respond when an unsupervised machine learning system is left to its own devices and makes mistakes? For many, the obvious answer is that the developer of the technology should be held accountable, but when we allow AI to make its own decisions unmonitored, are we giving it responsibility it does not have the means to bear? A machine must learn from its mistakes as a human does if we are to progress further towards the widespread implementation of AI technology.
Integrating explanation systems
For AI to truly mimic human intelligence and thinking, it would have to be able to explain its actions, but this is a complicated matter. Integrating explanations into an AI system would require substantial data and training, both in its development and in its application. AI systems are designed to handle tasks more efficiently and more systematically than humans, so requiring an AI to translate its workings into explanations accessible to humans could delay operations and reduce its productivity. The key question is: how can we make AI accountable without sacrificing the quality of its operations?
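As a rough illustration of that trade-off, here is a minimal sketch, assuming the open-source SHAP library and a generic scikit-learn classifier (both our assumptions, not tools named in the article), that times a batch of model decisions with and without an accompanying per-feature explanation; the explained run is typically noticeably slower than the bare predictions, and the gap grows with model size.

```python
# Minimal sketch: compare the cost of bare predictions against predictions
# plus a SHAP explanation. Assumes the `shap` and `scikit-learn` packages
# are installed; the dataset and model choice are illustrative only.
import time

import shap
from sklearn.ensemble import RandomForestClassifier

# Train a small model on SHAP's bundled "adult income" sample dataset.
X, y = shap.datasets.adult()
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

sample = X.iloc[:200]

# Bare decisions: the unexplained, efficient path.
start = time.perf_counter()
model.predict(sample)
predict_time = time.perf_counter() - start

# Decisions plus a per-feature explanation of each one.
start = time.perf_counter()
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(sample)
explain_time = time.perf_counter() - start

print(f"prediction only:          {predict_time:.3f}s")
print(f"prediction + explanation: {explain_time:.3f}s")
```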
The absence of accountability and second-guessing is a major difference between humans and AI, and it gives AI the edge in terms of efficiency. But in a human-centric world, an entity capable of making decisions is naturally assumed to be accountable for them.
