
Your Complete Guide to Understanding AI and the Digital Revolution



No doubt about it: today’s world is centred on technology, and that technology is developing quickly, often much more quickly than many of us can keep pace with.

Over the last couple of years, AI, or artificial intelligence, has become a big topic, moving from niche to mainstream. Regardless of where someone lives or what their job is, everyone wants to know more about what it is and what it can do.

But with all of this talk about AI comes the jargon: words and terms flying around that can be confusing if you’re not in the know.

Problem solved! This guide is your crash course in AI talk, so you’ll know what’s being spoken about and will even be able to explain it to your Mum who doesn’t like technology much!

Forget about sci-fi movies and things of fiction, because the truth about AI is actually a lot simpler and significantly more interesting. Really, AI is just a subset of computer science focused on building computers and programs that can do the things we’ve always assumed only people could do.

Depending on who you’re speaking to, AI can mean different things.

When most people hear the term “artificial intelligence,” their minds jump straight to images of either the Jetsons’ robot maid, Rosie, or an evil machine bent on world domination. Neither is accurate; in fact, the term AI has multiple meanings depending on the context.

AI as a discipline: the academic and research side of AI, where computer scientists work on algorithms and theories to make machines “think” like humans.

AI as a technology: what companies like Google, Apple, and Meta are talking about when they say they’re “investing in AI.” It’s the practical application of AI research to improve products and services.

AI as an entity: when people refer to AI almost as if it were a being. You might hear phrases like “AI wrote this article” or “AI generated this image.”

AI as a marketing buzzword: watch for this one, as some companies are slapping the phrase “AI-powered” on their products faster than you can say “machine learning,” even if the actual AI component is minimal.

The point I’m making? Context is everything when it comes to understanding what someone means by “AI.”

Machine learning: It’s like teaching computers how to fish

If we think of AI as the ocean, then machine learning (ML) is the fish swimming within it. Machine learning is a subset of AI focused on creating systems that learn and improve from experience, without needing to be explicitly programmed.

Imagine you’re teaching a child the names of different fruits. You show them lots of examples, and their learning evolves as their exposure grows. Over time, they learn to recognise the patterns and common features that distinguish apples from oranges, bananas from grapes.

Machine learning works in a very similar way:

  1. Data input: The system is fed large amounts of data.
  2. Pattern recognition: It analyses this data to identify patterns.
  3. Decision making: Based on these patterns, it then makes predictions or decisions about new and unseen data.
  4. Feedback and improvement: The system’s performance is evaluated, and it will adjust its approach to improve accuracy.

It’s a very clever process that allows ML systems to tackle complex problems and huge data sets that would be impossible to manage traditionally. From Netflix recommending your next binge-watch to email filters keeping spam where it belongs, machine learning is working quietly in the background, revolutionising our digital experiences.
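To make the fishing analogy concrete, here’s a toy sketch of the simplest possible “learning from examples”: a nearest-neighbour classifier that labels fruit from made-up (weight, redness) features. The numbers and labels are invented purely for illustration; real ML systems use vastly more data and far more sophisticated models.

```python
import math

# Toy training data: (weight in grams, redness 0-1) -> fruit label.
# These numbers are hypothetical, just to illustrate learning from examples.
training_data = [
    ((150, 0.9), "apple"),
    ((170, 0.8), "apple"),
    ((120, 0.2), "banana"),
    ((130, 0.1), "banana"),
]

def classify(features):
    """Predict a label for unseen data: copy the nearest training example."""
    nearest = min(training_data, key=lambda item: math.dist(item[0], features))
    return nearest[1]

print(classify((160, 0.85)))  # a heavy, red fruit    -> "apple"
print(classify((125, 0.15)))  # a lighter, pale fruit -> "banana"
```

Notice there’s no rule anywhere saying what an apple is: the “knowledge” lives entirely in the examples, which is the heart of the four steps above.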

AGI: This is thought to be the Holy Grail of artificial intelligence

While current AI systems are pretty good and still getting better at specific tasks, they don’t have the adaptability of human intelligence. Yet.

You see, this is where Artificial General Intelligence (AGI) comes in: a hypothetical AI system that would match or exceed human intelligence across a wide range of cognitive tasks.

Imagine access to an AI that could:

  • Write a novel in the morning
  • Solve complex maths problems in the afternoon
  • Compose a symphony in the evening
  • And then engage in a philosophical debate about the nature of consciousness before bed

That’s the promise (or, depending on your perspective, the threat) of AGI.

OpenAI and other tech companies are pouring resources into AGI research, believing it will be a milestone moment in history. But AGI also raises profound ethical and existential questions:

  • How do we ensure AGI aligns with human values?
  • What could be the economic implications of machines that can outperform humans in nearly every cognitive task?
  • Could AGI pose existential risks to humanity?

As AI capabilities grow, these questions will demand urgent discussion among researchers, policymakers, and ethicists.

Generative AI: The fun and creative machines

The creative use of AI is becoming much more mainstream and widespread. This technology is no longer just the playground of Hollywood special-effects teams; we can all access it, and quite cheaply too!

These are programs capable of creating mountains of new content, whether it’s text, images, music, or even code. Probably the best known is ChatGPT, OpenAI’s large language model that can converse in a human-like way, write essays, and even debug code.

But there’s more to generative AI than just text:

  • DALL-E and Midjourney are just two applications that can create stunning, original images from text descriptions, at lightning speed.
  • MuseNet is capable of composing music in whatever style you’d like.
  • GitHub Copilot is an AI assistant for writing code more efficiently.

Not everyone is a fan of generative AI, though, and many people are voicing fears about the future of human creativity, especially as AI-produced material is still quite a way from being bulletproof. Here are just a couple of the problems users are encountering:

Hallucinations

AI is impressive, but it doesn’t always get things right. Sometimes the responses given by generative AI are way off the mark: often amusing, but also worrying if people publish the material without checking it closely.

AI is very good at telling you what it thinks you want to hear, or what might logically make sense. Even if it’s not true, or even close to accurate.

Just for a minute, imagine asking an AI bot about the capital of France. Quick as a flash, you receive this response: “The capital of France is London, a beautiful city known for its Eiffel Tower and delicious croissants.” Yes, there’s some truth in this (croissants are delicious, accurate), but it’s coupled with blatantly false and potentially misleading information.

These types of hallucinations happen because current AI models don’t truly “understand” information the way we humans do. They make predictions based on patterns in their training data, and when they face a gap in their knowledge, they’ll produce an answer that sounds reasonable but is totally wrong.
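A toy sketch shows why this happens. The bigram model below, a deliberately tiny, hypothetical stand-in for a real language model, predicts the next word purely from how often it follows the previous word in its “training” text. If the data repeats a falsehood often enough, the model will confidently repeat it too:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" that deliberately pairs the capital of France
# with the wrong city, to show the model only echoes patterns, not truth.
corpus = ("the capital of france is london . "
          "the capital of france is london . "
          "the capital of france is paris .").split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict("is"))  # -> "london": frequent in the data, but false
```

Real LLMs are enormously more sophisticated, but the underlying issue is the same: the output is driven by statistical likelihood, not by any check against reality.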

When publishing information, accuracy matters: think of medical or financial advice and how important facts are in those fields. So while AI is a powerful and handy tool, human checking and critical thinking can’t fall by the wayside yet.

Bias and omission

Because AI systems learn from data, they can only rely on the input they’ve been given. If that training data contains human bias, the AI can perpetuate and even magnify those biases.

Research on facial recognition software showed this issue in action: the systems tested had much higher error rates for darker-skinned women than for lighter-skinned men. Not because the AI was intentionally discriminatory, but because the training data consisted primarily of images of light-skinned males.

If AI systems used in recruitment, justice, and credit rating were to hold biases of any kind, it’s easy to see how social inequalities could be compounded.

To address this, training data must be diverse and representative, outputs must be rigorously monitored and tested for bias, and there must be transparency and accountability at every level of AI development and deployment.

AI models? What are they?

Understanding the building blocks of AI, the models, can help demystify some of the “magic” and teach us more about their capabilities and limitations.

At the heart of most AI systems are what’s called “models.”

An AI model is pretty much just a mathematical representation of a decision-making process. You can think of it like a very complex set of instructions that the machine will follow to perform a task, whether that’s recognising speech, translating languages, or generating text.

There are numerous AI models, but here are the ones most people are likely to encounter:

Large Language Models (LLMs)

These are the powerhouses behind systems like ChatGPT, Google’s Gemini, and Anthropic’s Claude. LLMs are trained on massive amounts of text data, which lets them understand user requests and generate human-like text across a wide range of topics and tasks.

They’re pretty good multitaskers and can…

  • understand context and nuance in language
  • undertake tasks like translation, summarising, and question-answering
  • generate creative text, from poetry to code

But, and it’s a very important but, LLMs don’t really “understand” or use language the way humans do. Their responses are still underpinned by statistically driven predictions based on patterns in their training data.

Diffusion Models

These are the models revolutionising AI-generated imagery. Platforms like DALL-E, Midjourney, and Stable Diffusion use diffusion models to create stunning images in all sorts of styles from users’ text descriptions.

This is how they work behind the scenes…

  1. The model begins with random noise
  2. It then gradually refines this into an image, guided by the text prompt
  3. The process is similar to slowly removing static from a TV screen until a clear picture emerges
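Here’s a deliberately simplified sketch of that refinement loop, using a short list of numbers as a stand-in for an image. Real diffusion models use a learned neural network to predict and remove noise at each step; this toy just nudges random values toward a fixed target, to show the iterative-denoising idea:

```python
import random

random.seed(0)

# A 1-D stand-in for diffusion: start from pure noise and repeatedly
# nudge each value toward a "clean image" (here just a list of numbers),
# the way a real model is guided by the text prompt at every step.
target = [0.0, 0.5, 1.0, 0.5, 0.0]               # the "clean image"
image = [random.uniform(-1, 1) for _ in target]  # start: random static

for step in range(50):
    # Each pass removes a little of the remaining noise.
    image = [x + 0.2 * (t - x) for x, t in zip(image, target)]

print([round(x, 2) for x in image])  # -> [0.0, 0.5, 1.0, 0.5, 0.0]
```

After fifty small refinement steps, the static has all but vanished and the target “picture” emerges, just like the TV-screen analogy above.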

Diffusion models have plenty of applications beyond just creating pretty pictures and they can be used for image editing, 3D modelling, and even visualising concepts.

Foundation Models

Foundation models are large AI models trained on diverse datasets and are like a ‘base model’ that can be fine-tuned for specific tasks. Examples include the GPT (Generative Pre-trained Transformer) models from OpenAI.

What makes these models so interesting is that, while they’re trained on a wide range of data sources and types, they don’t need much additional training to take on a whole range of different tasks. They’ve also been known to exhibit “emergent abilities,” proving capable of things they weren’t specifically trained to do.

There are pros and cons, of course. While foundation models are exciting because they keep pushing the boundaries of what’s possible with AI, there are also valid concerns about the concentration of AI power in the hands of only a few large tech companies.

How do AI models learn?

The process of “training” an AI model is a bit like teaching a smart and eager-to-learn kid with a photographic memory. Here’s how it happens…

  1. Data Collection: First, the AI is fed a large dataset relevant to the tasks it’s expected to perform. For a language model, this might be millions of books, articles, and websites; for an image recognition model, millions of labelled images.
  2. Data Preprocessing: The raw data is cleaned and formatted in a way the model can understand, which might mean removing duplicates, standardising formats, or breaking big blocks of text into smaller pieces called “tokens.”
  3. Model Architecture: Data scientists design the structure of the model, choosing the type of model and setting its initial parameters.
  4. Training: The model is exposed to the training data, making predictions and receiving feedback on its accuracy. It uses this feedback to adjust its internal parameters, gradually improving its performance: the more it practises and the more feedback it receives, the better it gets.
  5. Validation: The model is then tested on data it hasn’t seen before, to ensure it’s learning generalisable patterns rather than just memorising the training data.
  6. Fine-tuning: The model might go through additional training on more specific datasets to optimise it for particular tasks.

This whole process takes enormous amounts of computational power and energy, especially for large models, and can cost millions of dollars in computing resources alone!
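The “training” in step 4 can be sketched in miniature with one-parameter gradient descent. This toy fits y = w·x to made-up data: the prediction error on each example is the feedback, and the single parameter w is the knob being adjusted. Real models do exactly this, but with billions of knobs at once.

```python
# Minimal sketch of training as feedback-driven adjustment.
# Toy data whose hidden pattern is y = 2x (numbers invented for illustration).
data = [(1, 2), (2, 4), (3, 6)]

w = 0.0              # the model's single parameter, initially untrained
learning_rate = 0.05

for epoch in range(200):                 # repeated exposure to the data
    for x, y in data:
        prediction = w * x
        error = prediction - y           # feedback on accuracy
        w -= learning_rate * error * x   # nudge the parameter to reduce error

print(round(w, 3))  # -> 2.0, the pattern hidden in the data
```

Nobody told the program the answer was 2; it found it by repeatedly making predictions and correcting itself, which is the essence of steps 4 and 5.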

Parameters and Tokens: The Building Blocks of AI Models

A couple of terms you’ll probably hear when discussing AI models are “parameters” and “tokens.”

Parameters are the adjustable parts of the model that it learns during training. You can think of them as the “knobs” the model can turn to improve its performance. More parameters generally mean a more complex and potentially more capable model, but also one that’s more resource-intensive to train and run.

Tokens are the units of text that language models work with. A token might be a word, part of a word, or even a single character, depending on the model. The number of tokens a model can process at once (its “context window”) is a key measure of its capabilities.
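To see roughly what tokenisation looks like, here’s a crude stand-in. Real LLM tokenizers (such as byte-pair encoding) learn their splits from data and often break words into sub-word pieces; this sketch just splits text on word boundaries and punctuation:

```python
import re

def simple_tokenize(text):
    """A rough stand-in for a tokenizer: words and punctuation as units."""
    # \w+ grabs runs of word characters; [^\w\s] grabs single punctuation marks.
    return re.findall(r"\w+|[^\w\s]", text)

tokens = simple_tokenize("AI models don't read words, they read tokens.")
print(tokens)       # note "don't" becomes three tokens: don / ' / t
print(len(tokens))  # -> 12
```

Counts like this matter in practice: a model’s context window is measured in tokens, not words, so contractions and punctuation all eat into the budget.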

From Training to Deployment: How AI Gets to Work

Once an AI model is trained, it’s ready to be put to work. This stage is called “inference”: the model makes inferences about new input using what it learned during training.

Inference can happen in a few different ways:

  1. Cloud-based: The model runs on powerful servers in data centres. This gives more capability but needs an internet connection.
  2. Edge computing: The model runs on local devices like smartphones or laptops. This is generally faster and more private, but it’s limited by the device’s processing power.
  3. Hybrid approaches: Some processing happens locally, with more complex tasks offloaded to the cloud.

Which approach makes sense depends on factors like the model’s size, the task’s complexity, privacy concerns, and latency requirements.

Who’s Who in the AI Zoo

Every day, it seems, a new suite of AI apps and platforms emerges to take on the world. Many of the smaller companies releasing apps rely on the tech stacks developed by the big players, as the cost of developing an AI model from scratch is eye-watering. It’s very much a case of “watch this space,” though, as new players and technologies are emerging constantly.

At the time of writing, there are a few key players, so let’s meet them.

The Tech Giants

OpenAI and ChatGPT

OpenAI burst into the public consciousness with the release of ChatGPT in late 2022. A conversational AI model, it was probably the everyday user’s first glimpse of the magic of AI, and it can do many handy things, from writing essays to debugging code.

  • Founded with the goal of ensuring that artificial general intelligence (AGI) benefits all of humanity
  • Developed the GPT (Generative Pre-trained Transformer) series of language models
  • Partnered with Microsoft, which has invested billions in the company

Google and Gemini

Google, long a leader in AI research, found itself playing catch-up after the success of ChatGPT. It responded with Gemini, a multimodal AI model that can process text, images, and other data types.

  • Gemini is integrated into many Google products, from search to productivity tools
  • Google’s AI research arm, DeepMind, is responsible for breakthroughs like AlphaGo and AlphaFold

Microsoft and Copilot

Microsoft has made AI a central part of its strategy, integrating AI assistants (branded as Copilot) into products like Windows, Office, and GitHub.

  • Major investor in OpenAI
  • Focusing on practical applications of AI in productivity and software development

Meta and Llama

Meta (formerly Facebook) has taken a different approach with its Llama series of models, making them open source.

  • Open-source approach allows for broader experimentation and development
  • Focuses on both language models and AI for social media applications

AI Startups and Specialists

Anthropic and Claude

Founded by former OpenAI employees, Anthropic is known for its focus on “constitutional AI” – developing AI systems with built-in ethical constraints.

Key points:

  • Developed the Claude series of language models
  • Received major investments from Amazon and Google

NVIDIA

While not an AI company as such, NVIDIA makes the GPUs (graphics processing units) that are crucial for training and running AI models.

  • The H100 chip is currently the gold standard for AI computing
  • NVIDIA’s success in AI has made it one of the world’s most valuable companies

Hugging Face

Hugging Face has become a central hub for the AI community, and is a platform for sharing and collaborating on AI models and datasets.

Key points:

  • Hosts thousands of open-source models and datasets
  • Provides tools for easy deployment of AI models

We’re living in the future now… what could possibly be next?

Anyone not hiding under a rock will have noticed AI is already changing so much of our lives and how we live them. Everything from working to consuming content is now supercharged by AI, so what could be left for the future to deliver?

Well, the truth is, we’ve only just scratched the surface. Here are some of the potential developments being floated in tech circles…

Multimodal AI: where future AI systems will be able to seamlessly integrate different types of data – text, images, audio, and video – leading to more versatile and capable AI assistants.

AI-Human collaboration: rather than replacing humans, many experts envision a future where AI augments human capabilities, enhancing creativity and problem-solving.

Explainable AI: As AI systems make more important decisions, there’s a growing focus on making their decision-making processes more transparent and interpretable.

Edge AI: More AI processing will happen on local devices, improving speed and privacy.

AI in scientific discovery: AI is already accelerating scientific research, and that is likely to continue.

But, of course, there are ethical considerations

Privacy: how do we balance the data needs of AI systems with individual privacy rights?

Bias and fairness: how can we make sure AI systems are fair and don’t perpetuate or exacerbate societal biases?

Accountability: who is responsible when AI systems make mistakes or cause harm?

Job displacement: how can we manage the economic transitions as AI automates more tasks?

Existential risk: As AI systems become more powerful, how do we ensure they remain aligned with human values and don’t pose existential risks?

The excitement about what’s possible is what makes AI so worthwhile.

I really believe the future of AI is incredibly exciting and full of potential. Maybe we’re finally on the brink of solving some of humanity’s greatest challenges and unlocking new levels of creativity and discovery.

Imagine a world where AI brings us:

Personalised education: where AI tutors adapt to each student’s learning style, pace, and interests, unleashing the full potential of every learner.

Medical breakthroughs: AI accelerates drug discovery and personalised medicine, leading to treatments for diseases once thought incurable.

Environmental solutions: optimised renewable energy systems and AI with the capacity to predict natural disasters with amazing accuracy.

Augmented creativity: Artists, writers, and musicians collaborate with AI to push the boundaries of human expression and create entirely new art forms.

Space exploration: AI-powered robots and systems enable us to explore the far reaches of our solar system and beyond, unlocking the mysteries of the universe.

Humans and AI, hand in hand

Quite possibly the most exciting aspect of an AI-driven future isn’t humans being replaced, but the incredible potential of human-AI collaboration. As AI takes over routine and dull tasks, humans are freed to focus on what we do best: creative thinking, emotional intelligence, and complex problem-solving.

This could well and truly lead us to a revolution in human achievement, where our innate creativity and intuition are amplified by AI’s analytical power and pattern recognition. The possibilities really are limitless.

Getting there

Getting there from here will take careful navigation of the ethical, social, and technical issues we’ve raised, and a human commitment to ensuring AI is used responsibly.

To do this, we’ll need to:

  1. Invest in AI education at all levels, ensuring we’re adequately preparing for the AI-driven future.
  2. Develop robust governance frameworks to ensure AI is developed and used ethically.
  3. Build collaborative, multi-disciplinary teams to address the complex challenges of AI development.
  4. Prioritise inclusivity and diversity in AI development to create systems that work for everyone.
  5. Maintain a sense of curiosity about the possibilities of AI, while also exercising critical thinking and healthy scepticism.

Conclusion: riding the AI train, are you on?

Artificial intelligence is here, and it’s not going away. Many of us are already using it daily for work and recreation, yet we’re only scratching the surface of what’s possible. I think it’s fair to say we have a front-row seat to the dawn of a new era in human history.

The choice each of us now has is to either let the AI revolution simply happen to us, or to get involved: learning more, experimenting, and ultimately shaping a future that works for us all. There’s a pretty exciting time ahead, so let’s buckle up and enjoy the AI train ride! Are you on board?