Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”—Larry Page
What is artificial intelligence?
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems, that are programmed to think like humans and mimic their actions. The term can also be applied to any machine that exhibits traits associated with the human mind, such as learning and problem solving. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision. Definitions of AI are often grouped into four categories:
- Systems that think like humans
- Systems that act like humans
- Systems that think rationally
- Systems that act rationally
Artificial Intelligence History
The term artificial intelligence was coined in 1956. Artificial intelligence has become more popular today thanks to increased data volume, advanced algorithms and improvements in computing power and storage.
In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning; the Defense Advanced Research Projects Agency (DARPA), for example, completed street mapping projects in the 1970s.
This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.
Understanding Artificial Intelligence (AI)
When most people hear the term artificial intelligence, the first thing they usually think of is robots. That’s because big-budget films and novels weave stories about human-like machines that wreak havoc on Earth. But nothing could be further from the truth.
Artificial intelligence is based on the principle that human intelligence can be defined in a way that a machine can easily mimic it and execute tasks, from the most simple to those that are even more complex. The goals of artificial intelligence include mimicking human cognitive activity. Researchers and developers in the field are making surprisingly rapid strides in mimicking activities such as learning, reasoning, and perception, to the extent that these can be concretely defined. Some believe that innovators may soon be able to develop systems that exceed the capacity of humans to learn or reason out any subject. But others remain skeptical because all cognitive activity is laced with value judgments that are subject to human experience.
As technology advances, previous benchmarks that defined artificial intelligence become outdated. For example, machines that calculate basic functions or recognize text through optical character recognition are no longer considered to embody artificial intelligence, since this function is now taken for granted as an inherent computer function.
AI is continuously evolving to benefit many different industries. Machines are wired using a cross-disciplinary approach based on mathematics, computer science, linguistics, psychology, and more.
How Artificial Intelligence Works
AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data. AI is a broad field of study that includes many theories, methods and technologies, as well as the following major subfields:
Machine learning automates analytical model building. It uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without explicitly being programmed for where to look or what to conclude.
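To make "finding hidden insights in data without being explicitly programmed" concrete, here is a minimal sketch: the function below fits a straight line to data by ordinary least squares. The relationship (slope and intercept) is never hard-coded; the model recovers it from the examples alone. This is an illustrative toy, not how any production system works.

```python
# A minimal sketch of "learning from data": fit y = a*x + b by
# ordinary least squares, without hard-coding the relationship.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope, learned from the data
    b = mean_y - a * mean_x  # intercept
    return a, b

# The data happens to follow y = 2x + 1; the model discovers this itself.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
print(fit_line(xs, ys))  # → (2.0, 1.0)
```

Real machine-learning systems use far richer models and far more data, but the pattern is the same: parameters are estimated from examples rather than written by a programmer.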
A neural network is a type of machine learning made up of interconnected units (like neurons) that process information by responding to external inputs and relaying information between the units. The process requires multiple passes at the data to find connections and derive meaning from undefined data.
Deep learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
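The interconnected units described above can be illustrated with the smallest possible case: a single artificial neuron (a perceptron) that learns the logical AND function by repeatedly adjusting its weights after each mistake. Deep networks stack millions of such units in many layers, but this toy sketch shows the basic idea of weights being tuned over multiple passes through the data.

```python
# A single artificial neuron (perceptron) trained on the logical AND
# function — the smallest unit of the neural networks described above.
def step(z):
    # Activation: fire (1) if the weighted input is non-negative.
    return 1 if z >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    for _ in range(epochs):          # multiple passes over the data
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Nudge weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```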
Computer vision relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.
Natural language processing (NLP) is the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
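A first step in most NLP pipelines is turning free text into numbers a computer can compare. The sketch below uses a crude "bag of words" representation and a word-overlap score; real systems use proper tokenizers, embeddings, and large language models, so treat this only as an illustration of the idea.

```python
# A minimal sketch of one early NLP step: turning free text into
# numbers (a "bag of words") so a computer can compare sentences.
from collections import Counter

def bag_of_words(text):
    # Lowercase and split on whitespace; real pipelines use proper
    # tokenization, stemming, and much richer representations.
    return Counter(text.lower().split())

def overlap(a, b):
    # Count the words two bags share (a crude similarity score).
    return sum((a & b).values())

query = bag_of_words("turn on the kitchen light")
cmd1 = bag_of_words("switch the kitchen light on")
cmd2 = bag_of_words("play some jazz music")
print(overlap(query, cmd1), overlap(query, cmd2))  # → 4 0
```

Even this crude score is enough to tell that the first candidate command is about the same thing as the query and the second is not, which is the seed of the "natural language interaction" described above.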
Graphics processing units (GPUs) are key to AI because they provide the heavy compute power that's required for iterative processing. Training neural networks requires big data plus compute power.
The Internet of Things generates massive amounts of data from connected devices, most of it unanalyzed. Automating models with AI will allow us to use more of it.
Advanced algorithms are being developed and combined in new ways to analyze more data faster and at multiple levels. This intelligent processing is key to identifying and predicting rare events, understanding complex systems and optimizing unique scenarios.
APIs, or application programming interfaces, are portable packages of code that make it possible to add AI functionality to existing products and software packages. They can add image recognition capabilities to home security systems and Q&A capabilities that describe data, create captions and headlines, or call out interesting patterns and insights in data.
In summary, the goal of AI is to provide software that can reason on input and explain on output. AI will provide human-like interactions with software and offer decision support for specific tasks, but it’s not a replacement for humans – and won’t be anytime soon.
Why is Artificial Intelligence important?
- AI automates repetitive learning and discovery through data. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks. And it does so reliably and without fatigue. Of course, humans are still essential to set up the system and ask the right questions.
- AI adds intelligence to existing products. Many products you already use will be improved with AI capabilities, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies. Upgrades at home and in the workplace range from security intelligence and smart cams to investment analysis.
- AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that algorithms can acquire skills.
- AI analyzes more and deeper data. Analyzing large, complex data sets used to be impractical; all that has changed with incredible computer power and big data.
- AI achieves incredible accuracy through deep neural networks. Your interactions with Alexa and Google Search are based on deep learning, and AI techniques such as deep learning and object recognition can now be used to pinpoint cancer on medical images with improved accuracy.
- AI gets the most out of data. Since the role of data is now more important than ever, it can create a competitive advantage: if you have the best data in a competitive industry, then even if everyone is applying similar techniques, the best data will win.
Main directions where Artificial Intelligence (AI) is applied
- Health Care
The digitization of healthcare data and the expectation among increasingly engaged patients for personalized and virtual care are introducing unprecedented opportunities to disrupt the way health care is delivered. Health care organizations must adapt the way they use and share data to relieve pressure on health care systems and improve medical decision making to ensure greater patient satisfaction and better outcomes.
- Retail
Mobile devices, conversational commerce, social networking and other technologies have shifted the behavior of the connected customer. Retailers and consumer goods companies need to shift accordingly.
Customers can quickly research products and compare prices through multiple channels, and you must be ready to respond with relevant offers, competitive prices and the right merchandise. That means moving beyond spreadsheets, using data from every conceivable source to understand your customer’s relationship with your brand so you can influence it in real time.
- Manufacturing
In manufacturing, you’re under pressure to continuously improve quality while reducing costs and increasing productivity.
- Banking
Changing regulatory compliance requirements and shifting customer demands mean a bank’s survival hinges on its ability to glean relevant insight from all available data. The efficient and effective use of data is critical to addressing many issues today’s banks face. A partnership between humans and machines – each augmenting the other – holds the most promise for successfully achieving compliance and meeting customer needs.
Artificial intelligence applications
There are numerous, real-world applications of AI systems today. Below are some of the most common examples:
- Speech recognition: It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, and it is a capability which uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search—e.g. Siri—or provide more accessibility around texting.
- Customer service: Online virtual agents are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) about topics such as shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites, messaging apps such as Facebook Messenger, and tasks usually done by virtual assistants and voice assistants.
- Computer vision: This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.
- Recommendation engines: Using past consumption behavior data, AI algorithms can help to discover data trends that can be used to develop more effective cross-selling strategies. This is used to make relevant add-on recommendations to customers during the checkout process for online retailers.
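One simple way to turn past consumption behavior into recommendations is item co-occurrence: suggest the product that most often appeared alongside the customer's current basket in previous orders. The sketch below (with made-up order data) illustrates that idea; production engines use far more sophisticated collaborative filtering and learned models.

```python
# A toy recommendation engine: suggest the item that most often
# co-occurs with the customer's basket in past orders.
from collections import Counter

# Hypothetical order history, for illustration only.
past_orders = [
    {"laptop", "laptop_bag"},
    {"laptop", "laptop_bag", "mouse"},
    {"laptop", "laptop_bag"},
    {"phone", "phone_case"},
]

def recommend(basket, orders, k=1):
    scores = Counter()
    for order in orders:
        if basket & order:               # this order shares an item with the basket
            for item in order - basket:  # count the other items it contains
                scores[item] += 1
    return [item for item, _ in scores.most_common(k)]

print(recommend({"laptop"}, past_orders))  # → ['laptop_bag']
```

At checkout, a retailer would call something like `recommend` with the live basket to surface the add-on most strongly associated with it.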
- Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.
AI in everyday life
Online shopping and advertising
Artificial intelligence is widely used to provide personalised recommendations to people, based for example on their previous searches and purchases or other online behaviour. AI is hugely important in commerce: optimising products, planning inventory, logistics etc.
Search engines learn from the vast input of data provided by their users to deliver relevant search results.
Digital personal assistants
Smartphones use AI to provide services that are as relevant and personalised as possible. Virtual assistants answering questions, providing recommendations and helping organise daily routines have become ubiquitous.
Machine translation
Language translation software, whether based on written or spoken text, relies on artificial intelligence to provide and improve translations. This also applies to functions such as automated subtitling.
Smart homes, cities and infrastructure
Smart thermostats learn from our behaviour to save energy, while developers of smart cities hope to regulate traffic to improve connectivity and reduce traffic jams.
While self-driving vehicles are not yet standard, cars already use AI-powered safety functions, and the EU has helped fund the development of automated sensors that detect possibly dangerous situations and accidents.
Cybersecurity
AI systems can help recognise and fight cyberattacks and other cyber threats based on the continuous input of data, recognising patterns and backtracking the attacks.
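Pattern-based threat detection can be illustrated with a deliberately simple statistical rule: flag any host whose traffic is far above the norm for the network. The host names and counts below are invented for illustration; real intrusion-detection systems use much richer features and learned models.

```python
# A toy version of pattern-based threat detection: flag hosts whose
# request volume is many standard deviations above the network average.
def flag_anomalies(counts, threshold=3.0):
    values = list(counts.values())
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5
    # A host is suspicious if it sits more than `threshold` standard
    # deviations above the average traffic level.
    return [host for host, v in counts.items()
            if std and (v - mean) / std > threshold]

# Hypothetical requests-per-hour by source address.
requests_per_host = {
    "10.0.0.1": 120, "10.0.0.2": 130, "10.0.0.3": 110,
    "10.0.0.4": 125, "10.0.0.5": 9800,   # likely a scripted attack
}
print(flag_anomalies(requests_per_host, threshold=1.5))  # → ['10.0.0.5']
```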
Artificial intelligence against Covid-19
In the case of Covid-19, AI has been used in thermal imaging in airports and elsewhere. In medicine it can help recognise infection from computerised tomography lung scans. It has also been used to provide data to track the spread of the disease.
Fighting disinformation
Certain AI applications can detect fake news and disinformation by mining social media information, looking for words that are sensational or alarming, and identifying which online sources are deemed authoritative.
Artificial Intelligence and the Future of Humans
Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?
The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities; that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.
Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.
Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
Advantages of AI:
- Good at detail-oriented jobs;
- Reduced time for data-heavy tasks;
- Delivers consistent results; and
- AI-powered virtual agents are always available.
Disadvantages of AI:
- Requires deep technical expertise;
- Limited supply of qualified workers to build AI tools;
- Only knows what it has been shown; and
- Lacks the ability to generalize from one task to another.