Why You Should Be Afraid of AI (And It's Probably Not For the Reason You Think!)
- Charles Martin
- Dec 26, 2025
- 8 min read
Technology dominates how we communicate, learn, and entertain ourselves. From smartphones to smart homes, it helps simplify and enrich our lives. With the emergence of AI, however, we've seen a fascinating mixture of awe and fear that seems to have struck everyone. Maybe it's because Hollywood and literature have spent decades telling us that AI is going to take over the world.
In 2001: A Space Odyssey (1968), HAL 9000 is the ultimate polite psychopath: “I’m sorry, Dave. I’m afraid I can’t do that.” It's a calm sentence, but suddenly every voice assistant sounds like it’s one bad query away from jettisoning you out of the airlock.
And I'm willing to bet HAL's red eye haunts your dreams at times.
Then there’s The Terminator (1984). James Cameron gave us Skynet, a defense network that wakes up, decides humanity is the problem, and sends Arnold Schwarzenegger back in time to fix the glitch—permanently. Nowadays, the tagline “The machines rise” might as well be shorthand for “Maybe don’t plug the Pentagon into ChatGPT.”
Jump to The Matrix (1999), where the machines didn’t just rise, they won, turning humans into batteries, feeding us a fake reality while we power our AI overlords. It's bleak.
These stories stick because they tap into three primal worries:
1) Loss of Control: HAL flips because mission parameters > human life. The machine that was designed to help us takes over.
2) Unintended Consequences: Skynet’s “defense” logic concludes we’re the threat. Our defensive tool goes on the offensive.
3) Identity Collapse: if the Matrix is real, who are you when the plug gets pulled? And how can we determine our place and purpose in reality if we don't even know what reality is?
Pop culture loves these tropes because they’re dramatic. They make for good reading and great cinema. As a friend of mine said after watching The Matrix for the first time, "That was kick-ass!" No other words, no other thoughts. He was overwhelmed by the story and the visuals, and that single phrase was the distillation of his feelings.
But for all their fun, these takes on AI distort the real risks. Today’s AI isn’t scheming in a bunker. It isn't plotting how to wipe out civilization or quietly designing human-powered battery packs behind the scenes. No, the danger isn’t sentience; it’s scale.
It's everywhere, and, like most programs, it can go wrong. The problem is that when AI glitches, the impact can be huge. Feed personal data into a training set, and it can become available to anyone and everyone. Let a deepfake loose, and people get scammed.
So the next time you worry that AI is gonna kill us all, ask yourself: Which part scares you? Is it the fictional murder-bot, or the very real AI-powered spreadsheet that just denied your home loan? Pop culture gave us the nightmare, but today, we get to write the wake-up call. Because the real danger isn't a takeover or eradication...it's dependency and security.
Understanding Technological Dependency
Technological dependency is the tendency to rely heavily on technology for daily tasks and communication. While technology enhances our lives, over-reliance can negatively impact our well-being. This dependency often shows up in various forms, such as addiction to devices and reduced face-to-face interactions, but I want us to look at a new(ish) one: declining critical thinking skills.
In days past, people crossed the entire United States in a wagon, without maps or GPS. Now? Some of us can barely make it through our own cities without Siri. People used to commit encyclopedic knowledge to memory and recall it on demand. Now? Most of us here in America don't even know whether to classify Washington, D.C. as its own state, a city, or something else.
The Impact of AI on Critical Thinking
With information available at our fingertips, we often accept things at face value without much scrutiny. This trend can lead to the spread of misinformation. A 2018 MIT study revealed that false news stories spread six times faster on Twitter than true ones, showcasing the consequences of lacking critical engagement with information. And that was before AI even became mainstream. Now that we have a tool that can do all the "research" for us, it's highly likely that critical thinking will sink even further.
We're already seeing that many individuals shy away from problem-solving tasks, relying instead on apps to handle them. This stifles creativity and learning, essential skills for personal and professional development. A fascinating article on Phys.org discusses this "erosion" of creativity and cognitive function. "Increased usage is beginning to influence how people think," the article claims, pointing out that "frequent AI users exhibited [a] diminished ability to critically evaluate information and engage in reflective problem-solving."
"Increased usage is beginning to influence how people think...frequent AI users exhibited {a} diminished ability to critically evaluate information and engage in reflective problem-solving." --Justin Jackson, Phys.org
In today's digital age, we often accept information without thorough examination, spreading misinformation that further entrenches us in our own inaccurate thoughts and beliefs. What's more, we're slowly losing the ability to examine information thoroughly at all. With AI now mainstream, critical thinking declines further as users blindly trust whatever the "research" tells them, even when it's wrong.
And look, I'm not against AI. It's a powerful tool, but like all powerful tools, we need to learn how and when to use it properly in order to avoid these pitfalls.
Manipulated AI
The potential for misuse by malicious individuals or groups is a significant concern. These bad actors employ a variety of sophisticated techniques designed to exploit vulnerabilities within AI systems. One common method involves the use of adversarial attacks, where subtle modifications are made to input data that can lead AI models to misinterpret the information. For example, slight alterations to an image can cause a computer vision system to misclassify objects, potentially leading to dangerous outcomes in applications like autonomous driving or security surveillance.
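To make "subtle modifications" concrete, here's a minimal sketch of one classic adversarial technique, the fast gradient sign method (FGSM), written in PyTorch. The untrained toy model and random image are stand-ins invented for illustration; real attacks target trained production models, but the mechanics are the same: nudge each pixel slightly in the direction that most increases the model's error.

```python
# A minimal FGSM-style adversarial attack sketch (PyTorch).
# The toy model and random "image" are placeholders for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
label = torch.tensor([3])                             # its "true" class

# Compute the gradient of the loss with respect to the input, not the weights.
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.05  # small enough to be imperceptible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

In published attacks on trained models, perturbations this small have flipped classifications entirely while leaving the image visually unchanged to a human.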
Another method employed by attackers is data poisoning, where they inject misleading or harmful data into the training sets of machine learning models. This can result in the AI learning incorrect patterns or making biased decisions, which can have far-reaching implications in fields such as finance, healthcare, and law enforcement. By corrupting the training data, attackers can manipulate the AI’s behavior to serve their own malicious purposes, undermining the integrity of the system.
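Data poisoning is just as easy to sketch. Here's a toy, deliberately crude version using scikit-learn on a synthetic dataset: flip the labels on a slice of the training rows and compare the poisoned model against a clean one. Real poisoning attacks are far stealthier, and the effect size varies, but the principle is the same: corrupt the data and you corrupt the model.

```python
# A toy illustration of data poisoning via label flipping (scikit-learn).
# Synthetic data; real attacks are subtler but exploit the same weakness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The "attacker" flips the labels on 10% of the training rows.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=len(poisoned_y) // 10, replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```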
Furthermore, bad actors may engage in model inversion attacks, extracting sensitive information from the AI models themselves. By querying a model with carefully crafted inputs, they can infer private data about the individuals whose records were used to train it. This poses serious privacy risks, especially in applications that handle personal or confidential information.

As AI technology continues to advance, it is crucial for developers and organizations to be aware of these manipulation techniques and implement robust security measures to protect their systems. That includes regular audits of AI models, adversarial training to improve resilience against attacks, and transparency in how AI decisions are made. By taking proactive steps to safeguard AI systems, we can mitigate the risks posed by bad actors and ensure that these powerful tools are used responsibly and ethically.
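Model inversion proper takes real machinery, but a closely related privacy attack, membership inference, fits in a few lines and shows the same underlying leak: a model's outputs can betray facts about its training data. In this scikit-learn sketch (synthetic data, deliberately overfit model), an attacker with nothing but query access can tell training rows from unseen ones purely by the model's confidence.

```python
# A toy membership-inference sketch: an overfit model is more confident on
# the exact rows it memorized during training, and that gap is measurable
# by anyone who can query it. Synthetic data, for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Deliberately overfit: unbounded trees memorize individual training rows.
model = RandomForestClassifier(n_estimators=50, max_depth=None, random_state=1)
model.fit(X_train, y_train)

def confidence(rows):
    # Highest class probability the model assigns to each queried row.
    return model.predict_proba(rows).max(axis=1)

print("avg confidence on training rows:", confidence(X_train).mean())
print("avg confidence on unseen rows:  ", confidence(X_test).mean())
# A clear gap lets an attacker guess whether a given record was in the
# training set -- a privacy leak, without ever seeing the data directly.
```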
Why We Shouldn’t Treat AI Like Magic
When AI tools pop up in home networks or small businesses, there’s a very natural impulse: “Sweet, this will make life easier.” And often, that’s true. But just like any powerful tool, AI comes with trade-offs. As someone who’s spent years poking at systems to find their weak spots, I’ve seen firsthand how blind trust can be dangerous.
Here are some core reasons we should be cautious about over-dependency on AI:
Mistakes happen, and AI isn’t infallible: AI models operate on patterns, but they can misinterpret, hallucinate, or produce (confidently) wrong outputs.
Bad actors can manipulate AI: There are proven techniques attackers use to trick or corrupt AI systems, making them do things they weren’t supposed to.
Losing human judgment: Over time, people can defer too much to AI. When (not if) something goes off the rails, there may not be human backup or critical thinking left.
Supply-chain and data risks: The data and models feeding your AI aren’t immune to compromise, which could poison your AI’s behavior or expose sensitive data.
New, subtle privacy risks: AI systems can inadvertently leak sensitive data, infer personal information, or be tricked into revealing more than they should.
The Future of Technology and Dependency: Should We Be Afraid of AI?
As technology continues to evolve, our relationship with it will likely adapt. Innovative technologies like artificial intelligence and virtual reality present both new opportunities and new challenges. It's important to be aware of the pitfalls with anything new.
New things are cool, they get us hyped and excited, and it's easy to fall into the trap of laziness: "If a new tech makes our lives easier, why not embrace it fully?" But we need to remain vigilant about the dangers of dependency as we embrace these advancements.

What Can People Do to Protect Themselves?
Okay, so AI has real risks. That doesn’t mean you should never use it (far from it), but it pays to be thoughtful. Here are practical steps to minimize risk:
Be smart about data you feed into AI tools: Don’t feed confidential or highly personal data into public AI systems.
Use trusted, well-reviewed AI platforms: Prefer services that have clear security and data handling policies, and avoid sketchy, untested tools.
Limit the access of AI agents: Only grant your AI tools the minimum permissions they need to do their job. Treat them like any other automation or software (see the sketch after this list).
Validate AI outputs: Don’t blindly trust what your AI tells you. Ask questions, cross-check information, and get a second opinion when it matters.
Establish fallback plans: If your AI system fails, how will you continue working safely? Make sure humans remain in the loop, especially for important decisions.
Keep software up to date: Like any other system, AI tools rely on components — keep them patched and review updates.
Understand the AI’s origin: Know whether your AI tool is built on a pre-trained open model, uses third-party data, or has some external dependencies. This helps you evaluate supply chain risk (okay, that one might be a bit technical).
Stay educated: AI is evolving quickly. Even non-technical users should periodically read up on new risks, best practices, and threat reports.
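And here's the least-privilege sketch promised above. Everything in it, the action names and the run_tool gate, is hypothetical and invented for illustration; the point is simply that the host code, not the model, decides what actually runs.

```python
# A hypothetical least-privilege gate for an AI assistant's tool calls.
# Action names and run_tool are invented for illustration only.
ALLOWED_ACTIONS = {"read_calendar", "draft_email", "search_notes"}

def run_tool(action: str, argument: str) -> str:
    """Execute an AI-requested action only if it is explicitly permitted."""
    if action not in ALLOWED_ACTIONS:
        # Refuse anything outside the allowlist instead of trusting the model.
        return f"refused: '{action}' is not a permitted action"
    # In a real system, this would dispatch to carefully scoped handlers.
    return f"ran {action} with argument {argument!r}"

# Whatever the model asks for, the gate decides -- not the model.
print(run_tool("draft_email", "Reply to Sam about Friday"))
print(run_tool("delete_files", "/home"))  # blocked
```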
Final Thoughts
Still afraid that AI is going to take over your personal life? Here are some realistic strategies to help you navigate the dangers of technological dependency:
Set Boundaries: Define clear limits for technology use, such as designating specific times for checking emails and social media. Creating "tech-free" zones, like the dining table, can foster deeper connections during meals.
Prioritize Face-to-Face (or at least human) Interactions: Engage in more in-person conversations with family and friends. Scheduling regular meet-ups or phone calls can strengthen your relationships and build emotional connections.
Limit Screen Time: Be mindful of your screen time and set daily limits for recreational use. Replace some recreational screen time with physical activities, hobbies, or reading to enhance your well-being.
Encourage Critical Thinking: Cultivate a habit of questioning the information you consume. Always verify sources and engage in discussions that encourage deeper understanding and analysis. It's okay to realize that you're wrong about something.
AI is powerful, incredibly useful, and likely to become a bigger part of our personal and professional lives. But that power comes with real responsibility and risk. The biggest vulnerabilities usually come from complacency or misunderstanding — not from some dystopian sci-fi takeover.
So watch your favorite movies and shows, even the ones that show us the "dangers" of AI. Don't be afraid to learn the tools, either. But...don't be afraid to be afraid. Too much of a good thing is a bad thing, so get outside, touch some grass, and realize that no matter what technological advances we see, we're all likely to be just fine.
But if the robots really ever do rise up, you can say you told us so.

