Demystifying Common Artificial Intelligence (AI) Myths
AI Will Augment the Human Workforce, Not Replace It.
Will AI replace humans?
Will AI suddenly ‘go rogue’ and evolve to a point where humans can no longer govern it?
AI is becoming important technologically, economically and geopolitically, and as a result its use, its misconceptions and its misuse are all growing. As organisations emerge from the pandemic and reimagine their future, artificial intelligence (AI) will be a foundation of their technological ecosystem. However, as AI use cases multiply across industries, several misconceptions about the technology often become blockers to its implementation. To facilitate adoption, it is essential to demystify these misconceptions by explaining how organisations can use AI to amplify their day-to-day efficiency and effectiveness.
But before we dive deeper, what do we mean by AI?
Artificial intelligence (AI) enables a computer to think or act in a more “human” way. It is a branch of computer science that aims to imbue software with the ability to analyse its environment, using either predetermined rules and search algorithms or pattern-recognising machine learning models, and then make decisions based on those analyses. Popular examples of artificial intelligence include voice-to-text features on mobile devices and analysing video footage to recognise gestures.
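To make that distinction concrete, here is a minimal sketch in Python contrasting the two approaches mentioned above: a hand-written rule and a learned model, both applied to a toy spam-filtering task. The task, keyword list and example messages are illustrative assumptions (not from this article), and the machine-learning half assumes scikit-learn is installed.

```python
# Illustrative sketch only: contrasts "predetermined rules" with a
# "pattern-recognising machine learning model" on a toy spam-filtering task.
# The keywords, messages and labels below are hypothetical examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# 1) Rule-based AI: the decision logic is written by hand.
SPAM_KEYWORDS = {"free", "prize", "winner"}

def rule_based_is_spam(text: str) -> bool:
    """Flag a message as spam if it contains any hand-picked keyword."""
    return any(word in SPAM_KEYWORDS for word in text.lower().split())

# 2) Machine learning: the decision boundary is learned from labelled data.
messages = ["win a free prize now", "meeting moved to 3pm",
            "claim your free prize today", "are we still on for lunch?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectoriser = CountVectorizer()
features = vectoriser.fit_transform(messages)
model = MultinomialNB().fit(features, labels)

new_message = "free prize inside"
print(rule_based_is_spam(new_message))                        # True
print(model.predict(vectoriser.transform([new_message]))[0])  # 1 (spam)
```

Both paths reach a decision; the difference is whether a person encoded the rules or the software learned them from data.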
Across different industries, AI has the potential to improve lives, but it also comes with misconceptions. In the quest to demystify them, I will cover the top three myths:
Myth #1: Artificial Intelligence = Machine learning = Deep Learning
Let us clear this up first: Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL) are not the same thing. AI is the broadest term. Machine learning is a subset of AI that achieves “intelligence” by training algorithms on data, and deep learning is in turn a subset of machine learning that relies on multi-layered neural networks. The infographic below explains the difference across AI, ML & DL:
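As a complement to the infographic, here is a minimal sketch, assuming scikit-learn is installed, that trains a classical machine-learning model and a small neural network (a simplified stand-in for deep learning) on the same toy dataset. The dataset and hyperparameters are illustrative choices; the point is only that deep learning is one family of techniques within machine learning, which is itself one approach to AI.

```python
# Illustrative sketch: the same task solved with classical ML and with a
# small neural network, used here as a simplified stand-in for deep learning.
# The dataset and hyperparameters are arbitrary illustrative choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Classical machine learning: a linear model fitted to labelled data.
ml_model = LogisticRegression(max_iter=500).fit(X, y)

# "Deep" learning in miniature: a multi-layer neural network fitted to the
# same data. Real deep learning models are far larger, but the relationship
# holds: it is still machine learning, which is still AI.
dl_model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

print("Logistic regression accuracy:", ml_model.score(X, y))
print("Neural network accuracy:     ", dl_model.score(X, y))
```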
Myth #2: AI will replace Humans.
The advent of AI is disrupting the traditional ecosystem, for example by taking over mundane tasks, and in many situations it is already doing just that. However, seeing this as a straightforward transfer of labour from humans to machines is a vast over-simplification.
The ability to analyse big data undoubtedly augments complex tasks; in healthcare, for example, AI can support disease detection. However, just because AI will arguably change job profiles does not automatically imply that human intelligence will become altogether expendable. Our role in supervising many of the models running in production, given the danger of bias, will become increasingly important.
AI will augment jobs, but not replace all jobs.
For example, in the book Human + Machine: Reimagining Work in the Age of AI, the authors cover this misconception in depth and reiterate that AI plays a significant role in augmenting and empowering humans.
Myth #3: Machines learn on their own.
Human-in-the-loop design is integral to the AI lifecycle: it optimises model performance and, over time, evolves the AI process itself. Data scientists execute tasks such as framing the problem, preparing the data, determining appropriate datasets and, most importantly, continually improving the model in production so that its performance and accuracy improve over time.
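As a concrete illustration of that human-in-the-loop step, here is a minimal monitoring sketch. The accuracy threshold, the review ticket and the evaluation sample are hypothetical assumptions introduced for the example; the idea is simply that a degraded production model is routed to a person for review rather than being retrained blindly.

```python
# Hypothetical sketch of a human-in-the-loop check for a production model:
# compare recent accuracy against a threshold and, if it has degraded,
# raise a ticket for a data scientist rather than retraining automatically.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

ACCURACY_THRESHOLD = 0.90  # assumed acceptable level, agreed by the team

@dataclass
class ReviewTicket:
    model_name: str
    observed_accuracy: float
    note: str

def recent_accuracy(predict: Callable, samples: Sequence, labels: Sequence) -> float:
    """Accuracy on a recent, human-labelled sample of production traffic."""
    correct = sum(predict(x) == y for x, y in zip(samples, labels))
    return correct / len(labels)

def monitor(model_name: str, predict: Callable,
            samples: Sequence, labels: Sequence) -> Optional[ReviewTicket]:
    """Return a review ticket if performance has dropped below the threshold."""
    accuracy = recent_accuracy(predict, samples, labels)
    if accuracy < ACCURACY_THRESHOLD:
        return ReviewTicket(model_name, accuracy,
                            "Accuracy below threshold: check for data drift, "
                            "relabel recent examples and retrain if needed.")
    return None  # Model is healthy; no human intervention required.

# Example usage with a dummy model that always predicts 0.
ticket = monitor("churn-model-v3", lambda x: 0, samples=[1, 2, 3, 4], labels=[0, 0, 1, 1])
if ticket:
    print(f"{ticket.model_name}: accuracy {ticket.observed_accuracy:.2f}. {ticket.note}")
```

The design choice worth noting is that the code never retrains on its own; it only surfaces the problem, leaving the judgement call to a human.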
Many of today’s misconceptions about AI are based on stories of adoption gone wrong or on fears of significant change. These beliefs get in the way of deeply understanding the positive business, operational and economic impacts of AI, and they must be addressed to enable organisations to capture AI’s full value.