Feb 7, 2026

If You Pay Peanuts: The Human Factor in Cybersecurity

Three decades ago, my son, then an enterprising teenager, had a t-shirt that read, “If you pay peanuts, you get monkeys.” Although relating salary levels to cybersecurity may be a stretch, the underlying spirit is apropos: if you treat your employees like idiots, they will act like idiots.

So, why do so many cybersecurity systems take precisely the opposite approach? The answer lies in an industry that cloaks itself in technobabble while delivering cybersecurity training in a format better suited to kindergarteners.

In this article, we will explore how the cybersecurity industry’s obsession with infantilization, obfuscation, and false authority systematically erodes real security.

Exacerbating the Human Factor in Cybersecurity

Most cybersecurity professionals acknowledge that coping with the “human factor” is one of the greatest challenges the industry faces. 

Unfortunately, the purported solution has been to infantilize those outside the cybersecurity elite. By hiding behind obfuscating language and rote, by-the-numbers training, the industry discourages genuine comprehension and responsible thought.

The justifications for this approach echo variations of the same tired mantra:

  • To err is human.
  • Humans are by nature careless.
  • Therefore, humans cannot be trusted.

Yet, if we accept these excuses, we ignore millennia of history. Humans evolved in an extraordinarily hostile and unforgiving world. If anything is true, it’s that humans excel at the skills required for survival. 

So why are these innate security skills not on dazzling display in the cyber world? The question is far too complex to address fully here. However, the underlying causes can be grouped into recurring themes:

  • A lack of trust
  • A desire to maintain a position of authority
  • A desire to create dependency
  • A desire to not reveal what is really happening

Let’s look at each of these in turn.

A Lack of Trust

Trust does not come easily. Ask any parent who has tried to teach a child to drive. When danger is high and unknowns abound, our willingness to trust the judgment and skills of others diminishes. 

Cybersecurity operates the same way. It requires constant vigilance, sharp perceptive skills, and the ability to render rapid judgments. This is not unlike driving a vehicle on a crowded interstate highway.

Somehow, society still produces a majority of reasonably proficient drivers. Why, then, can we not produce a generation of computer users capable of practicing safe and secure computing?

Experts insist that the mathematics and logic underlying cybersecurity systems, especially encryption, are far too complex for the average individual. This may be true. But there is a fundamental difference between being able to use a system safely and correctly and comprehending all of the theory that underlies it.

One does not need to calculate the coefficient of friction between a rubber tire and an asphalt road to drive safely in snow. It is therefore reasonable to expect users without advanced mathematics degrees to operate computer systems securely.

A Desire to Maintain a Position of Authority 

Cybersecurity professionals are often reluctant to trust others because they feel a degree of personal insufficiency when confronting the challenges around them. 

One of the most difficult transitions for any skilled frontline professional is delegation. There is a persistent belief that someone new will never perform at the same level, or exercise the same judgment, as the current person in charge. And yet managers routinely make this transition, and our industrial base has not collapsed because of it.

A Desire to Create Dependency 

Some managers and consultants maintain their position by ensuring that those around them never know as much as they do. In this way, their authority remains unchallenged, sustained by a deliberately cultivated relationship of dependency.

This impulse to create dependency is one of the classic criticisms leveled at segments of the consulting community and at in-house experts. But it applies equally to vendors whose products are portrayed as so arcane that they are treated as sole-source monopolies.

A Desire to Not Reveal What is Really Happening

It’s somewhat understandable why the cybersecurity industry has embraced the old adage, “If you can’t dazzle them with brilliance, baffle them with B.S.” After all, it is nigh impossible to expose one’s own limitations without also revealing one’s B.S.

The reluctance to share so-called “authoritative” knowledge is especially pronounced. Average dwell times (i.e., the time between a malware infection and its detection) routinely extend into the six-month range. According to recent data, effective malware detection rates remain below 50%.

False positives are so pervasive that companies such as industry giant SentinelOne have developed specialized AI programs to sift through the overwhelming alert volume. Objectively, this is not the performance of an industry doing a stellar job.

Empowerment Is the Only Viable Security Strategy

What emerges from these patterns is not a technology problem, but a philosophy problem. The industry’s reflexive response to fear — distrust users, centralize control, and conceal complexity — has produced systems that are brittle, opaque, and dangerously overconfident. These systems demand obedience rather than understanding, compliance rather than judgment.

The effects of this philosophy are neither subtle nor accidental. When systems grow increasingly complex, understanding is replaced by dependency. When their operation is obscured, clarity gives way to confusion. And when responsibility is delegated entirely to tools, vendors, or “intelligent” automation, vigilance erodes into complacency.

Security cannot survive under these conditions. Without empowerment, failure is not an exception — it is the expected outcome. Real security begins only when systems are designed to educate, clarify, and actively engage the people responsible for operating them.

Three of the classic enemies of security are the three “Cs”:

  1. Complexity
  2. Confusion
  3. Complacency

#1 Complexity

When overly complex systems are deployed, they cannot be used effectively. Cybersecurity systems fall squarely into the category of “support systems.” They do not exist for their own sake, but to support and protect other systems. As such, they should be non-intrusive and non-disruptive, and they should operate autonomously.

Systems can be made less complex through sophisticated design, paired with education that explains how and why a system works rather than training that merely teaches its use.

#2 Confusion

Confusion is often compounded by obfuscation: deliberately concealing how and why systems function as they do. When systems become overly complex, their users cannot operate them effectively.

Once again, the most effective way to dispel confusion is through education that is thorough, complete, and in-depth. 

#3 Complacency

Complacency is often born from unexpressed fear. It reflects a failure to recognize that nothing in this world is certain; past successes are never guarantees of future performance. It may also arise from an overreliance on systems or individuals. Whatever its origin, complacency has been a contributing factor in many of history’s worst disasters.

The cybersecurity industry has counterproductively fostered complacency. The prevailing message has been clear: trust us. Cybersecurity is far too esoteric for you to understand. Simply use our systems to be secure. 

What Cybersecurity Can Learn from Medieval Knighthood

One of the most severe ancillary consequences of cyber infantilization is the loss of the ability to question, to reevaluate situations, and to think critically. 

There is a compelling historical analogy to the modern faith placed in AI. In the 1300s, the ultimate war weapon was the heavily armored knight on an equally armored war horse. It was widely believed that, in open combat, nothing could withstand a knightly cavalry charge except an opposing cavalry charge. As a result, dominant powers competed relentlessly to improve the quality of mounted warfare.

Training and equipping each knight was enormously expensive. Nevertheless, the prevailing culture of the era “knew” that survival depended on having the greatest number of knights. Similarly, the cybersecurity industry has come to believe that survival depends on deploying the most arcane, esoteric, and expensive AI-enhanced cybersecurity systems.

The total supremacy of mounted knights should have been shattered by the battles of Crécy in 1346 and Agincourt in 1415. In both engagements, the simple and inexpensive longbow, wielded by the vastly outnumbered English commoners, decisively defeated thousands of elite French mounted knights. Yet, the military establishment of the time failed to declare an end to mounted warfare.

Repeating History in the Age of AI

“Those who do not learn from history …”

Today, artificial intelligence is being promoted as a savior and the inevitable path forward in cybersecurity. The message is implicit but unmistakable: let the machines do it. Human staff are reduced to little more than expendable, underpaid operators — present to comply, not to think.

Among cybersecurity vendors, there is a strong and growing push to rely on AI to detect malware, particularly AI-generated threats. The result can only be an endless arms race against attackers who have superior resources and need to win only one battle. In this climate, alternative, non-AI approaches are not merely overlooked; they are often dismissed outright. Cybersecurity is being hoisted by its own petard, and it is embracing that fate rapturously.

History offers a sobering parallel. When societies fall in love with a technology or a mindset, they rarely listen to hard facts.  Just as medieval society was enamored with its knights, today’s cybersecurity industry is enamored with infantilizing end users and embracing the AI panacea.

This leaves us at a decisive moment. Do we defy the historical pattern by reclaiming critical thinking, adult responsibility, and real comprehension? Or do we retreat into the cradle and allow AI to smother us?

We can pay peanuts and get monkeys. We can infantilize our staff and get hacked. Or we can hire trained people and empower them with knowledge and responsibility.

The ramifications are clear. The choice is ours.

How Crytica Security Promotes Empowerment in Cybersecurity

It is against this backdrop that we built the Rapid Detection and Alert (RDA) system. The RDA system was designed from the outset to reject the assumptions that undermine modern cybersecurity: that users cannot be trusted, that complexity must be hidden, and that judgment can be replaced by opaque automation. It does not rely on artificial intelligence, probabilistic guesses, or models that demand blind faith. Instead, it is built on clear, deterministic principles that can be understood, questioned, and verified.

The RDA system delivers rapid detection of unauthorized change without drowning operators in false positives or abstract risk scores. By making detection logic transparent and outcomes unambiguous, it restores something the industry has steadily eroded: confidence in both the tools and the people who operate them.
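To make the contrast with probabilistic detection concrete, here is a minimal sketch, in Python, of the general principle behind deterministic change detection: record a cryptographic fingerprint of each protected file, then flag any deviation. To be clear, this illustrates the principle only; it is not the RDA implementation, and the monitored paths and function names are hypothetical.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(paths: list[Path]) -> dict[Path, str]:
    """Record a known-good digest for each monitored file."""
    return {p: fingerprint(p) for p in paths if p.exists()}

def detect_changes(baseline: dict[Path, str]) -> list[tuple[Path, str | None]]:
    """Re-hash each file and flag any deviation from the baseline.
    A mismatch is a detection, full stop: no risk scores, no probabilities."""
    alerts = []
    for path, expected in baseline.items():
        actual = fingerprint(path) if path.exists() else None
        if actual != expected:
            alerts.append((path, actual))
    return alerts

# Hypothetical usage: take a baseline, then later check for unauthorized change.
monitored = [Path("/usr/bin/sshd"), Path("/usr/bin/login")]  # illustrative paths
baseline = build_baseline(monitored)
for path, actual in detect_changes(baseline):
    print(f"ALERT: {path} was modified or removed (current digest: {actual})")
```

Because the comparison is exact, anyone operating such a system can understand, question, and verify every alert it raises.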

This is not security through infantilization or dependency. It is security through comprehension, accountability, and empowerment.

To learn how Crytica Security approaches cybersecurity by educating users and reinforcing human judgment — rather than obscuring it — contact our team.