AcademyAI Fundamentals › Module 01

What AI Actually Is

Cut through the noise. This module gives you a grounded, honest understanding of what AI is, what the different types mean in practice, and where it genuinely fits into modern IT and business work — without the hype or the doom.

⏱ 25–30 min · 3 knowledge checks · No prior AI knowledge needed
1. The AI landscape — what's actually going on

If you've spent the last two years in any IT environment, you've heard about AI constantly. Executives want AI strategies. Vendors are adding "AI-powered" to every product. Colleagues are either excited or anxious. And if you're honest, it can feel hard to separate the signal from the noise.

This module doesn't assume you know anything about AI. It also doesn't talk down to you — you're a professional who's been building and delivering technology for years. What you need is an honest, grounded foundation: what AI actually is, how it works at a conceptual level, and what it genuinely means for the kind of work you do.

Starting point

AI is not magic, it's not going to replace you overnight, and it's not just a passing fad. It's a significant shift in how software can be built and used — and the professionals who understand it clearly will have a real advantage over those who don't.

A brief, honest history. Artificial Intelligence as a concept has been around since the 1950s. For most of that time it progressed slowly — interesting research, narrow applications, but nothing that changed everyday professional work. What changed in the last few years is the emergence of large language models (LLMs) — AI systems trained on massive amounts of text that can generate coherent, contextual language. That's the engine behind ChatGPT, Copilot, Gemini, and Claude. This specific type of AI is what's driving the current wave of change, and it's what this pathway focuses on.

1950s–80s · Rule-based AI (chess programs, expert systems)
1990s–2000s · Machine Learning (spam filters, recommendations)
2010s · Deep Learning (image recognition, voice assistants)
2022–now · Generative AI / LLMs (ChatGPT, Copilot, Claude, Gemini) ← You are here

The AI timeline — generative AI and LLMs represent the current wave that's reshaping professional work

The earlier generations of AI — rule-based systems, basic machine learning, deep learning for images and voice — are still around and still useful. But they're not what's driving the current conversation. What changed with LLMs is that you can have a conversation with software: ask it to write, explain, summarise, translate, and reason through problems in natural language. That's genuinely new behaviour.

Knowledge Check
Which development best describes what's driving the current wave of AI adoption in professional environments?
2. Types of AI — what the terms actually mean

One of the quickest ways to feel lost in AI conversations is the vocabulary. Terms get used interchangeably, out of context, or simply incorrectly. Here's a plain-English breakdown of the terms you'll encounter most often — and what they actually mean for your work.

🧠

Artificial Intelligence (AI)

The broad umbrella term. Any system that performs tasks that would normally require human intelligence — pattern recognition, decision-making, language understanding. Machine learning and generative AI are both types of AI.

📊

Machine Learning (ML)

A type of AI where the system learns patterns from data rather than following explicit rules. Spam filters, fraud detection, and product recommendations all use ML. The model improves as it processes more data.

✍️

Generative AI

AI that produces new content — text, code, images, summaries — based on patterns learned from training data. ChatGPT, Copilot, and Claude are generative AI. This is the category most relevant to your daily professional work.

💬

Large Language Models (LLMs)

The specific type of AI behind most generative text tools. LLMs are trained on enormous amounts of text and learn statistical relationships between words, allowing them to generate coherent, contextual responses.

🤖

Prompting

The instruction or question you give to an AI tool. The quality of what you ask has a direct impact on the quality of what you get. Prompting is a learnable skill — covered in Module 02.

🔗

AI Agents / Agentic AI

AI systems that can take actions, not just respond — browsing the web, running code, sending emails. Emerging in enterprise tools. Still maturing, but important to be aware of as the capability evolves quickly.

Worth knowing

When someone says "we're implementing AI" in a business context, they almost always mean a specific tool — usually an LLM-based assistant or a plugin added to an existing platform. "AI" as a blanket term is not meaningful. Always ask: what specific tool, and what specific task?

Your prompt ("Summarise this meeting for me") → the large language model (trained on billions of words of text; predicts the most likely next word / response) → the generated response (coherent text output tailored to your prompt). Not searching — predicting.

An LLM doesn't "look things up" — it generates responses by predicting likely text based on patterns learned during training. This is why it can be confidently wrong.

The most important thing to understand about how LLMs work: they are not search engines. They are not retrieving a stored answer. They are generating a response word by word, based on statistical patterns in their training data. This means they can produce text that sounds completely confident and authoritative — and be completely wrong. Understanding this one point explains most of AI's failure modes, which we cover in Module 03.
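The "predicting, not retrieving" idea can be made concrete with a deliberately tiny sketch. This is not how real LLMs are built (they use neural networks with billions of parameters, not word counts), and the training text here is invented for illustration — but the core principle is the same: the response is generated from statistical patterns in the training data, not looked up anywhere.

```python
from collections import Counter, defaultdict

# Toy "training data" (invented for illustration).
training_text = (
    "the meeting covered the budget "
    "the meeting covered the roadmap "
    "the meeting was short"
)

# Learn a pattern: count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training, or None."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # the most frequent follower of "the"
print(predict_next("meeting"))  # the most frequent follower of "meeting"
```

Notice that the model has no idea whether its output is true; it only knows what usually comes next. Scale that principle up by many orders of magnitude and you have the intuition for why an LLM can sound authoritative while being wrong.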

3. What AI can and cannot do — an honest account

The clearest way to think about AI's capabilities is in terms of what it genuinely does well versus where it consistently falls short. This isn't about being pessimistic — it's about being accurate, which is what lets you use it effectively.

Task by task, what AI tends to do well versus where it tends to struggle:

Writing and drafting
  Does well: first drafts, rewriting for tone, summarising long documents, generating multiple options quickly
  Struggles with: precision on facts, quotes, specific data — always verify before using in client-facing work

Research and synthesis
  Does well: explaining concepts, summarising themes, identifying connections between ideas
  Struggles with: current events, specific statistics, anything after its training cut-off — it will confidently invent

Code and technical work
  Does well: generating boilerplate, explaining code, suggesting refactors, writing tests for known patterns
  Struggles with: understanding full system context, subtle logic errors, security edge cases — review everything

Analysis and decisions
  Does well: structuring a problem, listing considerations, thinking through options when given good context
  Struggles with: access to your real data, institutional knowledge, understanding nuance in complex stakeholder situations

Communication
  Does well: drafting emails, rephrasing for clarity, adapting tone for different audiences
  Struggles with: knowing your relationship with the recipient, reading political context, genuine empathy
The core pattern

AI excels at tasks involving language patterns, speed, and volume — drafting, summarising, explaining, generating options. It struggles with precision, current facts, context it doesn't have, and judgment that requires real-world understanding. The best results come from using AI for the first layer and applying your expertise to the second.

Knowledge Check
A colleague asks AI to look up the current quarterly revenue figures for a client's competitor and include them in a market analysis. What's the most significant risk here?
Real scenario — IT consultant context

Situation: You're a BA preparing for a discovery session. You ask an AI to draft five discovery questions for a core insurance policy administration system modernisation project.

What happens: The AI produces ten well-structured, contextually appropriate questions covering business goals, integration dependencies, data migration concerns, and stakeholder alignment. You review them, remove two that don't apply, refine the wording on three others, and add two from your own experience with similar projects.

The result: You've done in 12 minutes what would have taken 45 — and the output is stronger because you combined AI's broad pattern knowledge with your specific domain expertise. This is the collaboration model, not replacement.

4. Where AI fits in your work — right now

AI isn't going to replace your job next quarter. It is going to change what good work looks like, what clients expect, and what distinguishes a strong consultant from an average one. Understanding where it actually fits — right now, practically — is more useful than worrying about abstract futures.

The most useful frame is augmentation, not automation. AI handles the first pass — the draft, the summary, the list of options — and you bring the judgment, the context, the relationships, and the accountability. That combination is consistently better than either one alone.

Without AI: you do everything manually — a time-consuming first draft, and less time for high-value thinking.

With AI augmentation: AI generates the first draft fast (summary, code stub, options list); you review, refine, and apply judgment (expertise + context + accountability); that leaves more time for what matters most — relationships, strategy, quality review.

Augmentation means AI handles the first layer — you bring the judgment, context, and accountability that determine the final quality

The areas where this matters most for IT professionals right now:

📝

Documentation and communication

Meeting summaries, status reports, requirements drafts, email responses — AI can produce strong first versions in seconds. Your job shifts from writing to reviewing and refining.

🔍

Research and preparation

Understanding a new technology, preparing questions for a discovery session, getting a quick overview of a regulatory area. AI dramatically reduces the time to "informed enough to ask good questions."

💻

Code and technical work

Generating boilerplate, explaining code you've inherited, writing unit tests, suggesting refactoring approaches. Especially useful for unfamiliar languages or frameworks.

🧩

Problem structuring

When facing a complex problem, asking AI to help break it down, identify considerations you may have missed, or think through tradeoffs. Works best when you provide rich context.

Knowledge Check
You're a project manager who uses AI to draft your weekly project status report. After the AI produces a draft, what should your next step be?
5. Module summary — what you've learned

You've covered the foundational layer of AI literacy. Here's what you now understand that many people who use these tools every day haven't fully worked out:

The AI timeline

AI has been around for decades. The current wave is driven by LLMs — generative AI systems trained on massive text datasets that can engage with language broadly.

Key vocabulary

AI, ML, generative AI, LLMs, prompting — you can now use these accurately rather than interchangeably, which matters in professional conversations.

How LLMs actually work

LLMs predict text — they don't look things up. This explains why they can be confidently wrong, and why verification is non-negotiable for factual claims.

The augmentation model

AI does the first layer — drafting, structuring, generating options. You bring judgment, context, domain knowledge, and accountability. That combination is the value.

Ready for Module 02

Module 02 — Use — puts this foundation to work. You'll learn how to prompt effectively, how to structure requests to get genuinely useful outputs, and how to integrate AI into real professional workflows without adding new risk.

Module 01 Complete

You've finished Orient — the foundation for everything that follows. Your progress has been saved. When you're ready, move on to Module 02: Use.