Much of modern software development is based upon a fallacy: that programming can be made easier, faster, and less error-prone if we surround developers with enough guardrails. Over the past few decades, this idea has evolved into a new category of tools broadly known as “safe languages.” These languages promise protection through restriction — limiting what a programmer can do in order to reduce the possibility of mistakes.
It is an appealing proposition. But in the development of secure applications — especially those involving cybersecurity, operational technology (OT), and mission-critical infrastructure — what is appealing is not always what is secure.
This article takes a closer look at how safe languages came to be, why their guardrails are not the same as real protection, and why true security still comes from understanding the system rather than relying on layers meant to simplify it.
The Path to “Safety”: From Assembly → C → Safe Languages
To understand why safe languages emerged, it helps to briefly revisit the evolution of how humans communicate with machines.
Assembly Language
In the beginning, there was machine code, the true ones-and-zeros of the processor expressed as raw binary instructions. Machine code quickly gave rise to assembly language, a thin veneer of alphanumeric mnemonics representing those same instructions, designed to help humans read and write them more easily.
Assembly language (“assembler”) is extraordinarily powerful but equally unforgiving. It is essentially the native language of the machine, and it requires programmers to understand every nuance of a system. To write in assembler, a programmer needs to be fully conversant with such concepts as:
- Register allocations
- Memory addressing
- Byte sizes
- Endianness
- Instruction timing
Assembler is also entirely non-portable: a program written for one processor family cannot run on any other.
Compiled Languages
As the uses of computers expanded exponentially and elementary “data processing” became “information technology,” the industry needed languages that were portable enough to support many different types of machines but still close enough to hardware to produce efficient code.
Early higher-level compiled languages, such as COBOL and Fortran, provided user-friendly syntax and a degree of insulation from the underlying machine code, but they lacked the sheer power and flexibility of assembler. Other compiled languages — notably C — emerged as a powerful middle ground.
The C language allows programmers to write human-readable instructions that are then translated into object code, the machine’s own language. It preserves power, performance, control, and portability, while eliminating some of the most tedious aspects and minutiae of assembler.
But the C language comes with requirements that modern engineering culture sometimes finds uncomfortable:
- It requires, just as assembler does, that programmers possess a working knowledge of how digital devices actually operate.
- It does not protect programmers from their own careless mistakes. Rather, C requires, just as assembler does, that programmers possess the discipline, skill, pride, and professionalism to avoid “unprofessional” mistakes.
The Rise of Safe Languages
As software systems grew more complex and more developers entered the field, many employers sought less skilled programmers and languages that would “protect” their programmers from the hazards of “unprofessional” mistakes. Thus emerged safe languages, languages that:
- Automatically manage memory
- Prevent direct pointer access
- Block out-of-bounds operations
- Enforce strict typing rules
- Limit access to internal data structures
- Confine programmers to predefined abstractions
The intent was admirable. Unfortunately, safety through limitation is not the same as security through understanding.
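To make the restriction concrete, consider the kind of byte- and bit-level inspection that embedded and OT code performs routinely, and that most safe languages either forbid outright or bury behind library abstractions. The C sketch below is purely illustrative: the status word and flag bits are hypothetical stand-ins for values a real device would supply.

```c
/*
 * Illustrative only: a hypothetical status word, examined byte by byte
 * and bit by bit -- the kind of direct memory inspection that most
 * "safe" languages either forbid or hide behind library abstractions.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t status = 0x8001A0C3u;              /* pretend this came from a device */
    const unsigned char *raw = (const unsigned char *)&status;

    /* Look at the exact bytes as they sit in memory (endianness is visible). */
    for (size_t i = 0; i < sizeof status; i++)
        printf("byte %zu = 0x%02X\n", i, raw[i]);

    /* Test individual flag bits directly. */
    if (status & 0x80000000u)
        puts("fault bit set");
    if (status & 0x00000001u)
        puts("ready bit set");

    return 0;
}
```

Trivial as it is, every byte and every bit in that example is visible and addressable; nothing is mediated by a runtime the programmer did not write.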
Guardrails Are Not Security
Safe languages protect novice programmers from novice errors. But they also impose a tax — one that grows heavier in environments where performance, efficiency, and security matter.
Guardrails may prevent accidental buffer overruns, but they also hide essential system detail, making it more difficult — sometimes nearly impossible — for cybersecurity tools to observe what is truly happening inside a device.
Guardrails may simplify memory management, but they also lead to generic, bloated, and less efficient code, generated by a compiler or runtime that must account for every possible scenario, even those that will never occur in practice.
Most critically, guardrails encourage a mindset in which developers become dependent on the abstraction rather than mastering the underlying machinery. In cybersecurity, that dependency can be fatal.
As I have said many times: You cannot secure what you cannot completely see, understand, or control.
The Pascal vs. Fortran Example
A great illustration of this comes from my time as a graduate student. A professor once argued that the then newly introduced Pascal language could accomplish certain tasks — especially recursion — that were impossible in any other language. The assertion was patently absurd: all compiled languages, including Pascal, are translated into assembler, so whatever one of them can do, another can be made to do.
Being a bit of a rebel, I wanted to prove the professor wrong. So I put his assertion to the test by writing a solution to the classic Eight Queens problem in both Pascal and Fortran.
Both programs worked perfectly. But the Fortran program ran more than ten times faster. Why?
Because Pascal implemented recursion through a generic — and of necessity heavily generalized — call-and-return mechanism. My Fortran program, on the other hand, implemented recursion through a purpose-built, efficient mechanism designed specifically to solve that one problem.
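For readers who want to see the shape of that difference, here is a minimal C sketch (not the original Fortran, and greatly simplified) of the purpose-built approach: Eight Queens solved by explicit backtracking over a small array, with all of the bookkeeping that a generic recursion mechanism would normally hide made visible and paid for only where it is actually needed.

```c
/*
 * Illustrative C sketch (not the original Fortran) of the "purpose-built"
 * approach: Eight Queens solved by explicit backtracking over a small
 * array, with no function-call recursion doing the bookkeeping.
 */
#include <stdio.h>

#define N 8

static int safe(const int col[], int row)
{
    for (int r = 0; r < row; r++) {
        int diff = col[r] - col[row];
        if (diff == 0 || diff == row - r || diff == r - row)
            return 0;                    /* same column or same diagonal */
    }
    return 1;
}

int main(void)
{
    int col[N];                          /* col[r] = column of the queen on row r */
    int solutions = 0;
    int row = 0;

    col[0] = -1;
    while (row >= 0) {
        col[row]++;                      /* try the next column on this row */
        if (col[row] == N) {
            row--;                       /* exhausted this row: backtrack   */
        } else if (safe(col, row)) {
            if (row == N - 1)
                solutions++;             /* a full placement was found      */
            else
                col[++row] = -1;         /* advance to the next row         */
        }
    }

    printf("%d solutions\n", solutions); /* prints 92 for the 8x8 board */
    return 0;
}
```

A textbook recursive version hands that bookkeeping to the language’s general-purpose call machinery; here every push, pop, and bounds test is explicit and costs exactly what it appears to cost.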
This small experiment illustrated a much broader truth: generic mechanisms, safety features among them, come at the expense of efficiency, visibility, and control. A language that “does more for you” inevitably takes more control away. It substitutes restriction for responsibility and gives the illusion of safety while eroding performance and precision.
Why Safe Languages Create Risk in Cybersecurity
Safe languages were designed to shield developers from low-level details. But in cybersecurity — and especially in operational technology — those low-level details are exactly where attacks hide. Several issues arise:
1. Abstraction Conceals Vulnerability
By hiding memory, data structures, and internal state, safe languages make assumptions that adversaries can exploit. If the code cannot see what is happening under the hood, neither can the defenses built around it.
2. Code Bloat Becomes Unavoidable
A runtime that must support all guardrails, abstractions, and safety mechanisms inherently produces larger and slower binaries. In OT environments with limited memory and processing power, that inefficiency has real-world consequences.
3. Determinism Is Lost
Safe languages rely on features like garbage collection and automatic bounds checking. These systems run whenever the runtime decides they should — not when the security tool needs them to. This means execution can pause, slow down, or shift unpredictably. The application is performing functions dictated by the guardrails, not the programmer.
In OT cybersecurity, where timing and behavior must be consistent and guaranteed, this loss of determinism is unacceptable.
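The deterministic alternative is familiar to anyone who has written embedded or OT code: allocate from fixed, preallocated pools so that every operation completes in a known, constant number of steps. The sketch below is illustrative only (the block sizes and names are hypothetical, and a production version would add locking and error handling), but it shows how little machinery determinism actually requires.

```c
/*
 * Minimal sketch of a fixed-size block pool, the kind of deterministic
 * allocator common in embedded/OT code: every allocation and release is
 * a handful of instructions, performed exactly when the programmer asks,
 * never when a garbage collector decides to run.
 * Illustrative only: sizes and names are hypothetical.
 */
#include <stddef.h>
#include <stdio.h>

#define BLOCK_SIZE  64
#define BLOCK_COUNT 32

static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
static void         *free_list[BLOCK_COUNT];
static size_t        free_top;

static void pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT; i++)
        free_list[i] = pool[i];
    free_top = BLOCK_COUNT;
}

static void *pool_alloc(void)            /* O(1), no hidden work */
{
    return free_top ? free_list[--free_top] : NULL;
}

static void pool_free(void *block)       /* O(1), no hidden work */
{
    if (block)
        free_list[free_top++] = block;
}

int main(void)
{
    pool_init();
    void *msg = pool_alloc();            /* constant-time, bounded */
    printf("allocated block at %p\n", msg);
    pool_free(msg);
    return 0;
}
```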
4. Skill Atrophy
Safe languages cultivate a culture where the language replaces the need for understanding. When an organization is several layers removed from how machines truly execute instructions, its capacity to secure those systems diminishes accordingly.
Safe languages can be excellent for dashboards, prototypes, and high-level business logic. But for critical applications — those with access to critical data or critical systems, particularly in critical infrastructure — they do not provide the determinism, visibility, or efficiency required to withstand a real-world attack.
Cybersecurity Requires Mastery, Not Managed Abstraction
The cybersecurity industry often confuses ease with safety and restriction with security. Yet the strongest defenses are built not on what a language prevents, but on what it enables a skilled engineer to control.
This is precisely why Crytica’s Rapid Detection & Alert (RDA) system is written in the C language, not in an abstracted, guardrail-heavy “safe” language.
- C language provides deterministic performance.
- C language produces small, efficient binaries with minimal overhead.
- C language gives full visibility into memory and execution.
- C language forces developers to understand how systems actually behave.
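As a purely illustrative sketch (emphatically not Crytica’s RDA code), the fragment below shows the sort of direct, byte-level visibility those points describe: hashing an arbitrary region of memory with nothing standing between the program and the bytes it inspects. The FNV-1a constants are standard; the monitored buffer is a hypothetical stand-in.

```c
/*
 * Purely illustrative (this is NOT Crytica's RDA code): a direct,
 * byte-level integrity check over an arbitrary region of memory.
 * The FNV-1a constants are standard; the region being checked is a
 * stand-in for whatever a real agent would monitor.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t fnv1a(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint64_t hash = 0xcbf29ce484222325ULL;    /* FNV offset basis */

    while (len--) {
        hash ^= *p++;
        hash *= 0x100000001b3ULL;             /* FNV prime        */
    }
    return hash;
}

int main(void)
{
    static const char monitored[] = "stand-in for monitored bytes";
    uint64_t baseline = fnv1a(monitored, sizeof monitored);

    /* Re-hash later; any change to the underlying bytes changes the hash. */
    uint64_t current = fnv1a(monitored, sizeof monitored);

    printf("baseline %016llx, current %016llx, %s\n",
           (unsigned long long)baseline,
           (unsigned long long)current,
           baseline == current ? "unchanged" : "MODIFIED");
    return 0;
}
```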
Could we build the RDA system in a safe language? Possibly. Would it be fast enough, small enough, or reliable enough for OT cybersecurity? Absolutely not.
In OT environments where uptime, safety, and resilience are non-negotiable, “almost enough” is indistinguishable from failure.
Safe languages were created with good intentions: to make programming easier and protect developers from themselves. But cybersecurity is not built on good intentions. It is built on precision, control, and mastery of the underlying system.
If you would like to see how Crytica’s RDA system delivers rapid, lightweight protection in real-world environments, we would be happy to demonstrate it. Reach out to our team to learn more.