
But something has changed, and it's not the mainframe.
It's the attacker.
Generative AI has crossed a threshold. Frontier models like Mythos and GPT‑5.5‑cyber can now reason about code, map complex systems, and autonomously hunt for vulnerabilities in ways that were science fiction just a few years ago. Anthropic's Project Glasswing is exploring exactly these kinds of high-impact applications for enterprise and government use.
The defensive potential is enormous. The offensive potential is exactly the same.
The signal from the top of the financial system is hard to miss. According to TechCrunch, Treasury Secretary Scott Bessent and Fed Chair Jerome Powell recently called six of the largest U.S. banks (JPMorgan Chase, Goldman Sachs, Citigroup, Wells Fargo, Bank of America, and Morgan Stanley), urging them to use Mythos to audit their own code and systems for vulnerabilities. A month earlier, the Department of Defense had designated Anthropic a "supply chain risk," the first time a U.S. company has been blacklisted that way. On May 1, 2026, DoD CTO Emil Michael called Mythos a "separate national security moment."
The takeaway is simple, and it's not subtle:
Your attacker is no longer a human with tools. It's an AI with reasoning.
Mainframes aren't insecure. They're complicated. And complexity is what AI eats for breakfast.
A modern GenAI system can analyze a massive configuration surface in seconds, correlate permissions across datasets and execution paths, identify privilege escalation chains no human would spot, and generate exploit strategies that span subsystems. The recent Wiz disclosure of CVE‑2026‑3854 in GitHub showed how complex interactions can produce critical vulnerabilities that traditional approaches miss entirely. If that's true in the distributed world, it's even more true on the mainframe where the surface is bigger, older, and less examined.
Consider what a mainframe security team is actually up against: decades of code whose authors are long gone, a sprawling configuration surface spread across RACF, PARMLIB, and dozens of subsystems, data siloed in tools that don't talk to each other, and review cycles measured in months.
Now put that environment up against an attacker who can explore it continuously, at machine scale, with no blind spots. The asymmetry is brutal: attackers probe around the clock, while defenders review periodically. That math doesn't work.
The only answer is to change the math.
This is the gap Geniez AI is built to close. The Geniez GenAI Framework connects any LLM, whether Claude Opus 4.7, GPT‑5.5‑cyber, or another model, to any mainframe data source with secure, governed, and auditable access. It lets security teams ask questions in natural language and get real-time analysis across systems that were previously siloed.
That foundation unlocks three things that traditional tooling can't deliver.
Mainframe code is one of the most under-analyzed risk surfaces in the enterprise. Some of it has been running, untouched, since the system was first commissioned. The people who wrote it are gone. The comments, if they ever existed, are gone with them.
Geniez plugs into tools like Copilot, Claude Code, and Amazon Q to run AI-powered scans across COBOL and Assembler programs, detecting vulnerabilities, insecure patterns, and logic flaws that static analysis tools routinely miss. You don't need to rewrite the application to secure it. You need to understand it, consistently, across thousands of programs.
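To make the idea concrete, here is a minimal sketch of the kind of pattern-based triage such a scan might start from. This is not Geniez's implementation; the pattern names, regexes, and sample COBOL are illustrative assumptions, and a real AI-powered scan would reason semantically rather than match regexes.

```python
import re

# Illustrative risk patterns only (hypothetical, not an exhaustive or real ruleset).
RISKY_PATTERNS = {
    # A literal moved into a field whose name suggests a credential.
    "hardcoded-credential": re.compile(r"MOVE\s+'[^']+'\s+TO\s+\S*(PASS|PWD)\S*", re.I),
    # CALL through a working-storage variable: the target is decided at runtime.
    "dynamic-call": re.compile(r"CALL\s+WS-\S+", re.I),
    # Static CALL terminated immediately, with no ON EXCEPTION clause.
    "unchecked-call": re.compile(r"CALL\s+'[^']+'\s*\.", re.I),
}

def scan_cobol(source: str):
    """Return (line_no, pattern_name, line_text) for each risky line."""
    findings = []
    for no, line in enumerate(source.splitlines(), start=1):
        for name, pat in RISKY_PATTERNS.items():
            if pat.search(line):
                findings.append((no, name, line.strip()))
    return findings

sample = """\
       MOVE 'SECRET123' TO WS-PASSWORD.
       CALL WS-PROGRAM-NAME USING WS-AREA.
"""
for no, name, text in scan_cobol(sample):
    print(f"line {no}: {name}: {text}")
```

The point of the sketch is the gap it exposes: regexes catch surface patterns, but the logic flaws the article describes (broken authorization checks, unsafe control flow across paragraphs) only fall out of a model that actually understands the program.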
Configuration is where most mainframe risk actually lives, and it's where Geniez's Operations Genie does some of its most interesting work. A security team can simply ask:
"Take a look at my RACF settings and identify any vulnerabilities or misconfigurations."
"Analyze SYS1.PARMLIB — especially SVC, APF, PPT, and authorized TSO/E commands — and cross-reference with RACF permissions. Highlight any risks."
Behind the scenes, the Operations Genie correlates configuration data across subsystems, maps permission chains and escalation paths, identifies inconsistencies and risky patterns, and surfaces findings in plain language. Security reviews shift from static to dynamic, from manual to automated, and from fragmented to holistic. A real audit might surface, for instance, that PROTECTALL is off and critical libraries like SYS1.PARMLIB and SYS1.LINKLIB have no dataset profiles protecting them: the kind of compounding risk that's easy to overlook in a checklist but obvious to a system that can reason across the whole picture.
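The PROTECTALL finding above reduces to a simple cross-reference between two inventories: the APF-authorized library list and the RACF dataset profiles that should cover it. A minimal sketch, with hypothetical inventory data standing in for real RACF and PARMLIB extracts:

```python
# Hypothetical APF-authorized library list (in practice, from PROGxx/IEAAPFxx).
apf_libraries = ["SYS1.LINKLIB", "SYS1.PARMLIB", "PROD.AUTH.LOADLIB"]

# Hypothetical dataset profile -> universal access (UACC).
# A library absent from this map has no profile protecting it.
racf_profiles = {
    "SYS1.LINKLIB": "READ",
    "PROD.AUTH.LOADLIB": "UPDATE",  # anyone can modify an APF library: escalation path
}

def audit_apf(apf, profiles, protectall=False):
    """Flag APF libraries that are unprotected or writable by everyone."""
    findings = []
    for lib in apf:
        uacc = profiles.get(lib)
        if uacc is None and not protectall:
            findings.append((lib, "no dataset profile and PROTECTALL is off"))
        elif uacc in ("UPDATE", "CONTROL", "ALTER"):
            findings.append((lib, f"UACC={uacc} lets anyone modify an APF-authorized library"))
    return findings

for lib, why in audit_apf(apf_libraries, racf_profiles):
    print(f"{lib}: {why}")
```

Each check is trivial on its own; the value the article describes comes from running this kind of correlation across every subsystem pair at once, which is exactly where a human checklist runs out of attention.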
For deeper work, Geniez offers a dedicated Security SMG, a Subject Matter Genie purpose-built for mainframe security. It handles configuration analysis (RACF chains, PARMLIB components like APF, SVC, PPT, LPA, and authorized TSO/E commands), code vulnerability scanning across COBOL and Assembler, automated audit and compliance reviews, anomaly detection based on SMF records from RACF, MVS, and TCP/IP, and cross-domain correlation that links configurations, permissions, and runtime behavior into a single picture.
That last capability matters most. Modern security isn't a series of isolated checks. It's about understanding the system as a whole, and that's exactly the kind of work AI does well.
The industry is settling into a new equilibrium. Attackers are using advanced AI to find vulnerabilities. Defenders need equally capable AI to detect and prevent them. Mainframes, for all their strengths, aren't exempt from that shift.
Organizations that stick with traditional methods alone will find themselves audited by adversaries faster than by their own teams. Those that adopt AI-native defenses get faster detection, deeper analysis, broader coverage, and continuous validation of their security posture instead of point-in-time snapshots.
The question isn't whether AI will reshape security. It already has. The question is whether your organization will treat AI as a tool or as the core of its defense.
Geniez AI is built to make that second answer possible: bringing GenAI directly into the heart of the most critical system you run, and turning a traditionally static environment into a dynamically secured one.
To learn more about how Geniez AI can help secure your mainframe environment, contact contact@geniez.ai.