5-Min Brief: AI Found Security Holes Humans Missed for Decades

Everyone's talking about AI. Nobody's explaining it. Here's what it actually is, how it actually works, and why any of it actually matters — starting from zero.

You've heard the word a thousand times by now. AI this, AI that. AI is going to change everything. AI took someone's job. AI wrote a song. AI passed the bar exam.

But here's a question most people never get a straight answer to: what is it, actually?

Not the hype version. Not the sci-fi version. What is the thing itself?

That's what this article is. A real explanation, built from the ground up, that assumes you know nothing about computers or technology beyond the fact that you're using one right now to read this. By the end, you'll understand what AI actually is, how it learns, where the "intelligence" lives, and what it genuinely can and can't do.

Let's start at the very beginning.

Part 1: What AI Is (And What It Isn't)

The term "Artificial Intelligence" has been around since the 1950s, and it's always meant roughly the same thing: getting machines to do things that normally require human intelligence.

That sounds impressive. It is impressive. But it's also a lot less mysterious than it sounds once you understand how it actually works.

Here's the honest, plain-English version of what modern AI is:

AI is pattern matching at enormous scale.

That's it. That's the core of it.

When you look at a dog and know it's a dog, your brain is doing pattern matching. You've seen thousands of dogs in your life. Your brain learned what "dog" looks like — four legs, fur, certain body shapes, certain sizes — and now when something new matches that pattern, you recognize it instantly.

AI systems learn to do the same thing. Show a system enough examples of dogs — millions of photos labeled "dog" — and it learns the pattern. Show it a new photo it's never seen, and it can tell you whether there's a dog in it.

That's the foundation. Everything else — ChatGPT writing emails, AI diagnosing cancer from an X-ray, AI translating languages — is a more sophisticated version of this same basic idea.

What AI is not is a mind. It doesn't understand things the way you do. It doesn't have opinions or feelings or consciousness. It's not "thinking" in any meaningful sense. It's doing extraordinarily sophisticated pattern matching, and the results can look a lot like thinking — but the process underneath is fundamentally different.

This distinction matters because it helps you understand both what AI is good at and where it falls apart.

Part 2: How It Learns

Okay, so AI learns from examples. But how, exactly?

Let's walk through it.

Imagine you want to teach a child to recognize cats. You could write down every rule: four legs, pointy ears, whiskers, meows, retractable claws, and so on. But kids don't actually learn that way. You just show them cats. Again and again. And at some point, something clicks and they can recognize a cat they've never seen before, even a cartoon cat, even a cat wearing a hat.

AI learns similarly — through examples, not rules.

The process is called training. You take a massive dataset — we're talking millions or even billions of examples — and you feed it into the system. The system looks for patterns in that data, adjusts itself based on what it finds, checks how well it's doing, adjusts again, and repeats. Over and over and over, millions of times.

The "adjusting" part is where the math happens. The system is essentially trying to get better at a specific task — predicting the next word in a sentence, identifying what's in a photo, translating from English to French — and it nudges itself slightly in the right direction every time it gets something wrong.

This is why training AI systems takes so much computing power and so much time. It's not one adjustment. It's billions of tiny adjustments, each one making the system fractionally better, until you end up with something that works remarkably well.
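The predict-check-adjust loop above can be sketched in a few lines. This is a deliberately tiny illustration, not how real systems are built: a made-up "model" with a single adjustable number, learning the pattern "double the input" from three examples.

```python
# Toy illustration of training: adjust one number (a "parameter")
# so that prediction = weight * input matches known examples.
examples = [(1, 2), (2, 4), (3, 6)]  # input -> correct answer (pattern: double it)

weight = 0.0           # the system starts knowing nothing
learning_rate = 0.01   # how big each tiny nudge is

for step in range(1000):            # many rounds of adjustment
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # nudge the parameter toward less error

print(round(weight, 2))  # ends up near 2.0: the "double it" pattern, learned from data
```

A real model does exactly this kind of nudging, except across billions of parameters at once, which is where all the computing power goes.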

The computing hardware that makes this possible — specifically the chips called GPUs, made by companies like NVIDIA — is why you keep hearing about data centers and energy usage and the economics of AI. Training a large AI model can cost tens of millions of dollars in compute alone. It's not cheap, and it's not fast, even when the machines are running at full speed.

Part 3: Where the "Intelligence" Lives

Here's something that confuses a lot of people: after you train an AI, where does the knowledge go?

It lives in something called a model.

A model is the end product of training. Think of it like a finished recipe. The training process is all the test batches, the adjustments, the trial and error. The model is the final recipe that actually works.

Technically, a model is a massive collection of numbers — called parameters or weights — that encode everything the system learned during training. When you ask ChatGPT a question, it's not searching the internet or looking things up in a database. It's running your question through billions of these numbers, each one influencing the output in tiny ways, until it produces an answer.

Those numbers represent the patterns the system found during training. They're the distilled result of processing enormous amounts of text, or images, or whatever the system was trained on.
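As a sketch of what "running your question through the numbers" means, here is a hypothetical two-parameter model. The weights and bias are made-up stand-ins for values learned during training; the weighted sum is the core operation real models repeat across billions of parameters.

```python
# A "model" is just stored numbers. Inference runs the input through them.
weights = [0.8, -0.3]   # learned during training, then frozen
bias = 0.1

def predict(inputs):
    # Weighted sum of inputs plus bias: each stored number nudges
    # the output a little, and the combination produces the answer.
    return sum(w * x for w, x in zip(weights, inputs)) + bias

print(round(predict([1.0, 2.0]), 2))  # 0.8*1.0 + (-0.3)*2.0 + 0.1 = 0.3
```

Nothing is looked up anywhere; the answer is computed entirely from the frozen numbers, which is why a model can run without a database or an internet connection.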

This is why people talk about model "size" in terms of parameters. GPT-4 is rumored to have roughly 1.8 trillion parameters, though OpenAI has never confirmed a figure; Anthropic hasn't disclosed Claude's size either, but frontier models are generally believed to have hundreds of billions of parameters or more. More parameters generally means the model can capture more complex patterns — but also requires more computing power to run, which is why running these models costs money and energy.

When a company "releases a new model," they've finished a new training run — essentially baked a new recipe — and the results are different (hopefully better) than what came before.

Part 4: What AI Can and Can't Do

Now that you know what it is, let's be honest about what it's actually good at — and where it falls on its face.

What AI is genuinely great at:

Anything that involves recognizing patterns in large amounts of data. Generating text that sounds fluent and natural. Translating between languages. Summarizing long documents. Writing code. Identifying objects in images. Finding anomalies in medical scans. Answering questions when the answer exists somewhere in its training data.

It's also remarkably fast. A task that would take a human hours — summarizing a 200-page report, translating a document into five languages, writing ten variations of a marketing email — AI can do in seconds.

Where AI genuinely struggles:

Anything that requires real-world understanding rather than pattern matching. AI systems can confidently tell you something that's completely wrong because they've found a pattern that sounds right without actually understanding the underlying reality. This is called "hallucination" — an imperfect name, but it's the one that stuck.

AI also struggles with things that are genuinely novel — situations that don't match anything in its training data. It has no common sense in the human sense of the word. It can't reliably do math beyond a certain complexity (yes, really — it's not actually calculating, it's pattern-matching what math answers look like). And it has no concept of what's true versus what just sounds plausible.

This is why the "it passed the bar exam" headlines are both accurate and slightly misleading. It passed because the bar exam is largely pattern-matchable — it's a test of known rules and precedents. That's very different from being a good lawyer, which requires judgment, creativity, and understanding of human situations that go well beyond patterns.

Part 5: Where This Is Going

Here's what makes this moment genuinely significant, beyond the hype.

For most of AI's history, these systems were narrow. An AI that could recognize faces couldn't play chess. An AI that could translate language couldn't identify tumors in X-rays. Every system was trained for one specific task.

What changed in the last few years — and what made ChatGPT a cultural moment — is that AI became general. These new systems, called large language models, can do a huge range of tasks without being specifically trained for each one. They learned language and reasoning from enormous amounts of text, and that turns out to be surprisingly transferable.
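"Predicting the next word," mentioned earlier, can be illustrated with a toy count-based predictor. Real language models learn vastly richer patterns than simple word-pair counts, and the corpus below is made up, but the core task is the same: given what came before, guess what comes next.

```python
# Toy next-word predictor: count which word follows which in a tiny
# "corpus", then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1          # tally: after `word`, we saw `nxt`

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" twice, vs once for "mat"/"fish"
```

Scale this idea up from counting word pairs to learning patterns across entire documents, with billions of parameters instead of a counting table, and you have the rough shape of a large language model.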

That's new. And it's why the pace of change feels so fast right now.

The systems being built today are significantly more capable than the ones from two years ago. The systems being built two years from now will likely be significantly more capable than today's. Nobody knows exactly where that trajectory leads — whether it levels off, accelerates further, or hits fundamental limits we haven't discovered yet.

What we do know is that these tools are becoming part of the infrastructure of everyday work and life, whether or not any given person has chosen to engage with them yet.

Which is exactly why plain-English explanations of what's actually going on have never mattered more.

The Short Version, If You Want It

AI is pattern matching at scale. It learns from enormous numbers of examples, stores what it learned in a model, and uses that model to make predictions and generate outputs. It's genuinely impressive at certain things and genuinely bad at others. And right now, it's changing fast enough that keeping up actually matters.

That's the foundation. Everything we cover here builds on it.

Next week, we'll go one level deeper: how do these language models actually work? What's happening inside ChatGPT when you type a question? It's weirder and more interesting than you'd expect.