Why AI Feels Smart (and Why That's Dangerous) with Steven James

FEDSA event recap & key takeaways

This article was generated with the assistance of AI to help our volunteer team share valuable insights efficiently. While we've reviewed the content for accuracy, please let us know if you spot any errors or have additional insights to contribute!

Missed the talk? No stress!

Here’s a clear, practical summary of a talk that challenged some of the biggest assumptions many of us make about AI, especially in development, design, and education.

In this session, Dr Steven James unpacked why AI feels so intelligent, why that perception can be dangerous, and how misunderstanding its limits can lead to over-reliance — particularly among junior developers and learners.

Overview: Why AI Feels Smart (and Why That Matters)

The talk explored how modern AI tools, especially large language models like ChatGPT and Claude, actually work under the hood. Rather than thinking, reasoning, or “knowing” things, these systems are fundamentally next-token predictors — extremely sophisticated pattern-matching machines trained on vast amounts of internet data.
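
To make “next-token prediction” concrete, here is a deliberately tiny sketch, not from the talk itself: plain Python with made-up probabilities standing in for a real model. The model’s only job is to turn the text so far into a probability distribution over possible next tokens and sample from it; nothing in the loop checks whether the continuation is true.

```python
import random

# Toy illustration only: real models learn these probabilities from data;
# the numbers below are invented for the example.
next_token_probs = {
    "The capital of Australia is": {
        " Sydney": 0.55,    # very common in web text, but wrong
        " Canberra": 0.40,  # correct, but written less often
        " Melbourne": 0.05,
    },
}

def predict_next_token(context: str) -> str:
    """Sample a next token from the model's distribution for this context."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt + predict_next_token(prompt))
# Fluent either way; the sampler has no notion of which answer is true.
```

Either completion reads confidently, which is exactly the “danger of appearance” the talk returns to.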

Steve broke the discussion into three core areas:

  • What AI is (and isn’t): AI is not a single, unified intelligence. Tools like ChatGPT are just one narrow slice of a much broader field.

  • How large language models work: From tokenisation and training on scraped internet data to reinforcement learning from human feedback (a short tokenisation sketch follows this list).

  • What happens when we use these tools in the real world: Particularly in software development, design, and education.
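
As a concrete example of the tokenisation step mentioned above, the snippet below is a minimal sketch using the open-source tiktoken package (our illustrative choice, not something shown in the talk). It chops a sentence into the sub-word pieces a model actually sees: the model never receives words or meanings, only integer IDs.

```python
# Minimal tokenisation sketch; requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a tokeniser used by several recent OpenAI models
text = "Large language models predict tokens, not meanings."

token_ids = enc.encode(text)                   # the integer IDs the model actually consumes
pieces = [enc.decode([t]) for t in token_ids]  # the same sentence chopped into sub-word pieces

print(token_ids)
print(pieces)
```

Training on scraped text then teaches the model which token tends to follow which, and reinforcement learning from human feedback nudges its outputs toward what people rate highly.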

A recurring theme was the danger of appearance: AI outputs look confident, fluent, and authoritative — even when they’re wrong.

Key Takeaways

  • AI has no sense of certainty or doubt

    Large language models don’t know when they’re guessing. They produce plausible answers, not necessarily correct ones — which makes blind trust risky.

  • Confidence ≠ competence

    Because AI performs well in some areas, we’re tempted to assume it’s good at everything. In reality, its abilities are uneven and highly context-dependent.

  • Design and code tend to regress to the mean

    Because these models are trained on existing patterns, their outputs converge on what is most common: generic layouts, familiar structures, and well-worn solutions, which limits originality.

  • AI boosts short-term output but can harm learning

    Studies show junior developers using AI complete tasks slightly faster, but retain significantly less understanding. Shortcutting struggle also shortcuts learning.

  • Use AI like an intern, not an expert

    It’s excellent for drafting, brainstorming, and exploring ideas — but humans must remain responsible for judgement, validation, and decision-making.

Why This Talk Matters

This wasn’t an anti-AI talk. It was a call for clarity. AI is powerful and useful — but only when we understand its limitations. Treating it as intelligent rather than statistical risks eroding critical thinking, design judgement, and foundational skills.

For teams, educators, and individuals alike, the challenge is not whether to use AI, but how to use it without losing the very expertise that makes us valuable.

Watch the full session

This summary only scratches the surface. The full talk dives deeper into:

  • How training data shapes AI behaviour

  • Why juniors are especially vulnerable to over-reliance

  • The long-term implications for education, creativity, and craft

👉 Watch the full talk on FEDSA’s YouTube channel and join the ongoing conversation in the community.

Join our Discord for discussion and updates, and subscribe to our monthly newsletter to stay in the loop.

Keen to contribute to the FEDSA community? Sign up here.

Otherwise, join our free Discord community to chat to other members!
