In this edition of Target Tech Bytes, we delve into the mysterious world of artificial intelligence (AI). The first thing that often comes to mind when thinking about AI is robots. Machines that can replicate human intelligence and awareness, develop their own autonomy, and pose a threat to humanity.
This portrayal of AI in Hollywood films has contributed to a general sense of fear and unease, but what many don’t realise is that AI is already part of our daily lives. Indeed, the success of ChatGPT, an artificial intelligence chatbot from OpenAI, has made AI a hot topic.
Before we dive deeper into the subject, let’s look at the Oxford Languages definition.
Artificial Intelligence
noun
- the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
What is AI?
AI is the field concerned with simulating human intelligence processes using advanced computers and applications: programmes or machines that can interpret and learn from vast volumes of external data and perform tasks in the same way a human would.
Algorithms, essentially sets of rules, are the fundamental building blocks that allow AI programmes and machines to learn from data and make predictions or decisions based on that learning. This ability to learn is called machine learning.
A brief history
The theory of AI has been around for centuries, but AI as we know it started to take shape in the 1950s, thanks to the pioneering work of computer scientists such as Alan Turing, John McCarthy, and Marvin Minsky.
In 1950, Alan Turing, known as ‘the father of modern computer science’, asked the question ‘Can machines think?’ in a paper called Computing Machinery and Intelligence. He laid the groundwork for modern computing; his paper introduced the famous Turing test, which assesses a machine’s ability to demonstrate intelligent behaviour.
In 1956, John McCarthy organised the Dartmouth Conference, now widely viewed as the birthplace of AI. At this conference, the term "artificial intelligence" was coined and concepts such as machine learning were established as key fields of research.
A goal of AI is to mimic human cognitive activity. We’ve seen great strides in a relatively short period towards achieving this goal. Some key milestones in the development of AI include:
- 1969 – Shakey, the first mobile robot, was built with the ability to reason about its surroundings
- 1996 – The first robot vacuum cleaner was launched by Electrolux, although issues meant that it was overtaken by a more commercially successful robot vacuum created by iRobot in 2002
- 1997 – IBM's supercomputer, Deep Blue, defeated the world champion Garry Kasparov at a game of chess. It evaluated millions of possible positions at super speed rather than analysing the game the way a human would. Its victory captured the public’s imagination and signalled that computers were evolving at an exponential rate
- 2011 – Apple released its first iPhone with Siri, a built-in voice assistant that used predefined commands to perform actions and answer questions
- 2020 – The use of AI algorithms helped speed up the development of the COVID-19 vaccines in record time.
Types of AI
From around 2010 onwards, AI has infiltrated almost every corner of our lives, often without us realising it, from connecting with friends to using email.
AI is commonly classified in two main ways: by capability and by functionality. Let’s take a look at both.
Capability-based AI
- Narrow AI – Also known as weak AI (though it’s far from weak!), narrow AI is designed to complete a single task, simulating human behaviour based on a set of rules it has been trained with. Any knowledge it gathers from one task is not applied to subsequent tasks. Examples of narrow AI include facial recognition and internet search
- General AI – Also known as strong AI, general AI would enable machines to apply reason and think like a human. It is still very much in its infancy; however, the progression of narrow AI and advancements in fields such as Natural Language Processing (NLP) can only help towards achieving it. Self-driving vehicles illustrate the gap: whilst they already exist, they currently operate under a strict set of conditions. For them to be truly general AI, a vehicle would have to act intuitively in all scenarios without human intervention
- Super AI – A hypothetical concept in which machines exceed human intelligence and behaviour, thinking for themselves and making their own decisions.
Functionality-based AI
- Reactive machines – AI systems that react to current scenarios and environments using rules that have been manually programmed into the machine. They have no memory or knowledge of past events. An example of a reactive machine is an email spam filter
- Limited memory – This type of AI system uses immediate past data to make decisions, retaining that data for only a short time. An example of limited memory AI is the self-driving vehicle
- Theory of mind – An advanced level of AI that can recognise human faces, understand emotions, and learn in real time. This type of AI is still very much in the emerging stage
- Self-awareness – A step beyond theory of mind, a self-aware AI system would have consciousness: it would be aware of its own existence and the presence of others around it, have a thought process, and understand human emotions. This type of AI doesn’t currently exist, that we know of.
Pros & cons
All innovations and concepts have pros and cons, and AI is no different. These can vary depending on the application and how it is implemented. Below are some examples of the pros and cons of AI.
Pros of AI:
- When designed carefully, it can reduce human bias as well as human error
- It can automate laborious and repetitive tasks, freeing up resources and increasing efficiency
- AI is available 24/7 with no downtime, which is increasingly important in an always-on society
- AI creates job opportunities in AI development and maintenance. The World Economic Forum's “The Future of Jobs Report 2020" predicts AI will replace 85 million jobs globally by 2025, in roles such as bookkeeping and administration. The same report indicates that AI may create 97 million new roles, including in digital transformation and automation.
Cons of AI:
- Whilst it can create job opportunities, it also has the potential to replace jobs in certain tasks and industries, leading to job losses
- Implementing AI applications can be costly, and there may be a lack of in-house expertise to implement and maintain them
- AI still lacks the human judgement, intuition and emotions that are required in certain situations, meaning that human oversight may still be needed
- The vast amount of data that AI applications require can pose privacy and security risks; regulation will need to evolve just as fast as the technology.
Who is using AI?
From chatbots to self-driving cars to Snapchat filters and precision coffee machines, AI technology takes many forms with different levels of sophistication and is already in use, often without us realising.
The UK Government’s recently published white paper, ‘A pro-innovation approach to AI regulation’, outlines how building on existing investment in the UK’s AI technology will be integral to delivering its goal of making the UK a science and technology superpower by 2030.
The financial services and healthcare industries are already seeing the benefits of using AI for processes such as fraud detection and treatment planning. With consumers increasingly aware of how their data is used and how it is secured, organisations must remember that the world of opportunity AI brings also comes with greater responsibility.
Challenger banks are disrupting the status quo in financial services and embracing AI. Their customers are typically digitally native millennials who expect frictionless experiences. For instance, Monzo uses machine learning to improve its service and understand its customers a little better.
It’s clear to see that AI has great potential to disrupt many industries for the better.
How could it benefit your business?
According to a Gartner survey, on average 54% of AI projects make it from pilot to production. “Scaling AI continues to be a significant challenge,” said Frances Karamouzis, VP analyst at Gartner.
Digital transformation can be expensive, so prioritising what changes will reap the most reward is key. Transforming your business with the use of AI applications can help speed up services, reduce or prevent fraud and ultimately transform customer experiences.
Whilst there still may be a level of uncertainty and distrust in the use of AI, the field has come a long way. There is still much more to be discovered and developed, and the future of AI is sure to become even more exciting and ground-breaking.
The most exciting thing about AI is that it’s constantly advancing and becoming more sophisticated as scientists and developers continue to make strides in their understanding. Some believe it’s only a matter of time before we reach a Super AI singularity. Your competitors may already be developing their capabilities or embedding AI teams within their businesses, so now’s the time to consider your roadmap to AI adoption. After all, AI is here to stay; can your business afford to be left behind?
Resources
https://builtin.com/artificial-intelligence
https://levity.ai/blog/general-ai-vs-narrow-ai
https://tbtech.co/innovativetech/artificial-intelligence/monzo-how-the-bank-of-the-future-uses-ai/
https://www.weforum.org/reports/the-future-of-jobs-report-2020/
https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper