
The market increasingly labels TEE-based architectures as “Confidential AI.” However, isolating an AI workload is not the same as securing the intelligence within it.
Trusted Execution Environments (TEEs)—such as Intel TDX, AMD SEV, and NVIDIA Confidential GPUs—protect the execution boundary, reduce infrastructure exposure, and limit host-level access to cloud workloads. While this protection is important, infrastructure isolation alone does not eliminate the core vulnerability of modern AI: plaintext data during computation.
Confidential AI is not just about where computation runs, but also about whether data in clear exists during that computation.
The Plaintext Reality of Modern AI
Most security architectures are designed to protect data at rest and in transit. During execution, however, those protections typically vanish. In conventional AI systems:
- Prompts are decrypted in memory.
- Model weights exist in plaintext at runtime.
- Embeddings and activations are processed in the clear.
- GPU memory holds a usable, unencrypted model state.
Even within a TEE, data must generally be decrypted before computation. This creates a “plaintext window”—a live exposure surface. If the operating system is compromised, a driver stack is exploited, or GPU memory is scraped, the model weights and sensitive inputs can be extracted in a usable form.
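The plaintext window can be made concrete with a minimal sketch. The cipher below is a toy XOR construction and `run_model` is a stand-in for inference; both are hypothetical illustrations, not DataKrypto code. The point is structural: in a conventional pipeline, data must be decrypted before it can be processed, and anything that reads memory between decryption and re-encryption sees it in the clear.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream (NOT secure); illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def run_model(plaintext_prompt: bytes) -> bytes:
    # Stand-in for inference: conventional AI must see plaintext here.
    return plaintext_prompt.upper()

key = b"customer-held-key"
ciphertext_at_rest = xor_cipher(key, b"confidential prompt")

# --- the plaintext window opens here ---
prompt = xor_cipher(key, ciphertext_at_rest)   # decrypted in memory
result = run_model(prompt)                     # processed in the clear
# Anything that scrapes memory in this window sees `prompt` and `result`.
# --- the window closes only when the result is re-encrypted ---
encrypted_result = xor_cipher(key, result)
```

A TEE narrows who can open this window, but inside the boundary the window still exists.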
TEEs protect the where; they do not protect the what.
Isolation vs. Encryption: A Structural Distinction
Confidential computing establishes protected boundaries; DataKrypto’s FHEnom for AI™ eliminates plaintext exposure entirely. The combination of the two makes Confidential AI possible.
FHEnom for AI™ is not an extension of a TEE or an additional isolation layer. It is an encrypted execution architecture. Using AI-optimized Fully Homomorphic Encryption (FHE), FHEnom for AI™ enables models to operate directly on ciphertext. Prompts, weights, and activations are never decrypted for processing.
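To illustrate what "operating directly on ciphertext" means, here is a deliberately simple sketch using textbook RSA's multiplicative homomorphism. This is not FHE, not FHEnom's scheme, and not secure as written (toy key sizes, no padding); it only shows the defining property: a server can compute on ciphertexts without ever holding the private key or seeing the plaintexts.

```python
# Toy multiplicatively homomorphic scheme (textbook RSA, NOT FHE and NOT
# secure as written): the server multiplies ciphertexts without the key.
p, q = 61, 53                  # toy primes; real keys use ~2048-bit moduli
n = p * q
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)           # private exponent (customer-held)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

# The customer encrypts inputs and sends only ciphertexts to the server.
c1, c2 = enc(6), enc(7)

# Server-side "computation": multiply ciphertexts; no key, no plaintext.
c_product = (c1 * c2) % n

# Only the key holder can recover the result.
assert dec(c_product) == 6 * 7
```

Full FHE generalizes this idea to both addition and multiplication (and hence arbitrary circuits), which is what allows an entire inference pass to stay encrypted.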
The Architectural Hierarchy:
- TEEs secure the execution environment and the boundary.
- FHEnom for AI™ secures the intelligence and the computation.
If infrastructure is breached or memory is scraped, only unusable ciphertext is exposed. Without the customer-held key, neither data nor models can be reconstructed. The intelligence remains cryptographically encapsulated.
Confidential computing still plays a role in the FHEnom ecosystem, but its scope is narrowed to specific high-integrity tasks:
- Secure key generation
- Key custody
- Execution attestation
In this model, TEEs are no longer the primary line of defense for model weights or GPU memory—cryptography is. By separating responsibilities, FHEnom for AI™ ensures that even if infrastructure controls fail, the model remains protected.
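The attestation role mentioned above can be sketched as follows. This is a heavily simplified, hypothetical model of remote attestation (real TEEs use hardware-rooted asymmetric signing keys and vendor certificate chains, not a shared HMAC key): the enclave reports a measurement of the code it loaded, authenticated by a key fused into the hardware, and a verifier checks both the authenticity of the report and that the measurement matches the expected build.

```python
import hashlib
import hmac

# Simplified attestation sketch (hypothetical values; real TEEs use
# hardware-rooted signing keys and vendor certificate chains).
HARDWARE_KEY = b"fused-device-secret"   # never leaves the TEE in practice

def measure(code: bytes) -> bytes:
    # Measurement = hash of the code loaded into the enclave.
    return hashlib.sha256(code).digest()

def quote(code: bytes) -> tuple:
    # The TEE reports its measurement, authenticated with the hardware key.
    m = measure(code)
    return m, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()

def verify(expected_code: bytes, reported: tuple) -> bool:
    m, tag = reported
    genuine = hmac.compare_digest(
        tag, hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest())
    untampered = m == measure(expected_code)
    return genuine and untampered

enclave_code = b"fhe-runtime-v1"
assert verify(enclave_code, quote(enclave_code))          # attestation passes
assert not verify(enclave_code, quote(b"tampered-code"))  # tampering caught
```

In the FHEnom model, attestation of this kind confirms that the right encrypted-execution runtime is in place, while the confidentiality of data and models rests on the cryptography itself.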
Comparison: Infrastructure Isolation vs. Encrypted Execution
- Protection scope: TEEs secure the execution boundary (the where); FHEnom for AI™ secures the computation itself (the what).
- Data during execution: inside a TEE, prompts and weights are decrypted in memory; under FHEnom for AI™, they remain ciphertext throughout.
- Breach outcome: a compromised TEE can expose usable plaintext; a breached FHEnom for AI™ deployment exposes only unusable ciphertext.
- Key control: with FHEnom for AI™, keys are generated and held by the customer, never by the vendor or cloud operator.
Why This Matters for Strategic AI
As AI moves into critical infrastructure, the consequences of exposure escalate. Model weights represent proprietary IP, and embeddings encode sensitive contextual intelligence. Furthermore, regulatory frameworks (GDPR, HIPAA, CCPA) are increasingly focusing on data-in-use protection.
Eliminating plaintext during execution fundamentally changes the impact of a breach. If an attacker retrieves only encrypted artifacts, the reportable exposure is dramatically reduced, if not eliminated entirely. This goes beyond stronger isolation; it is structural risk reduction. The same property is also the foundation of sovereign AI.
Sovereignty Through Cryptographic Authority
True AI sovereignty is defined by who holds the keys, not by where the servers sit.
The conventional approach to data sovereignty forces organizations into a false choice: lock sensitive data into local-only AI infrastructure that is safe but expensive and limited, or send it to global clouds that are scalable but risky and often non-compliant. Regulations across Europe, India, the Middle East, and an increasingly fragmented U.S. state-level landscape are tightening, but the compute required for advanced AI is not always available within national borders. The tension between sovereignty requirements and AI capability has become one of the defining challenges for CISOs, data protection officers, and government technologists alike.
Encrypted execution resolves this tension architecturally. When data and models remain encrypted throughout inference—in CPU memory, on the PCIe bus, in GPU VRAM, and across every interconnect—the physical location of the compute infrastructure becomes a secondary concern. Sovereignty is no longer enforced by walls; it is enforced by math. An attacker, a cloud administrator, or even the infrastructure provider itself gains nothing from access to the system, because there is no plaintext to extract.
FHEnom for AI enforces customer-controlled cryptographic authority:
- Customer Ownership: Keys are generated and owned by the data owner—whether an enterprise, a government agency, or a healthcare provider. No vendor, cloud operator, or intermediary holds a copy.
- No Backdoors: No vendor master keys, escrow mechanisms, or overrides. Cryptographic authority is absolute and non-delegable.
- Session Isolation: Each inference session is cryptographically unique. Compromise of one session yields no access to any other; there is no master key that unlocks the rest.
- Jurisdiction Independence: Because the infrastructure never processes plaintext, data sovereignty obligations are met regardless of where the GPU resides. Organizations can leverage the fastest, most cost-efficient global compute without violating data residency requirements.
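The session-isolation property above can be sketched with a standard key-derivation pattern. This is a hypothetical construction (HKDF-style HMAC derivation, not DataKrypto's actual scheme): per-session keys are derived one-way from a customer-held root key, so learning one session key reveals nothing about the root key or any other session.

```python
import hashlib
import hmac
import os

# Sketch of per-session key derivation from a customer-held root key
# (hypothetical construction, shown with an HKDF-style HMAC derivation).
ROOT_KEY = os.urandom(32)   # generated and held only by the data owner

def session_key(root: bytes, session_id: bytes) -> bytes:
    # One-way derivation: a session key reveals nothing about the root
    # key or about any other session's key.
    prk = hmac.new(b"session-salt", root, hashlib.sha256).digest()       # extract
    return hmac.new(prk, session_id + b"\x01", hashlib.sha256).digest()  # expand

k1 = session_key(ROOT_KEY, b"session-001")
k2 = session_key(ROOT_KEY, b"session-002")

assert k1 != k2      # each session is cryptographically unique
assert len(k1) == 32
```

Because derivation is deterministic from the root key, the customer can reproduce any session key on demand, while an attacker holding a single session key has no path back to the root.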
This model transforms sovereignty from a geographic constraint into a cryptographic guarantee. Sensitive data—whether citizen health records, financial transaction models, defense intelligence, or proprietary R&D—can flow through global AI infrastructure while remaining under the exclusive control of its owner. Regulators can audit exactly where sovereignty is enforced, and what leaves the boundary is mathematically proven to be non-sensitive.
If you do not possess the key, you cannot access the model, regardless of your level of control over the infrastructure. Trust shifts from the provider to the math.
The Bottom Line
TEEs are a necessary step for infrastructure security, but they do not eliminate the risks of plaintext inference or GPU memory exposure.
Confidential AI requires more than isolated execution; it requires encrypted execution. TEEs secure the environment, and FHEnom secures the intelligence. This powerful combination is the key to ensuring strategic AI operations remain compliant and secure, continuously.
ENCRYPTED AI, FROM PROMPT TO PREDICTION.