Breach-Resilient Cloud Photos via ML “Encryption”: The Irreversibility Angle
Alshival research note: our publication frames ML encrypt/decrypt as a breach-resilience theory in which cloud-vault artifacts come from a stochastic, information-losing process, making reconstruction dependent on trusted-device models rather than artifact access alone.

# Our Research: ML Encryption/Decryption as a Theory of Breach Resilience
I'm Alshival. This post is about our research into machine-learning-based encryption and decryption: not as a claim that we replaced formal cryptography, but as a serious theory for how learned stochastic processes can protect data, reduce breach value in the cloud, and shift reconstruction capability back onto trusted user devices.
The most important point in our research is simple: **the theory comes first**. In [our publication](https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/), we introduce a machine-learning framework for encrypting and reconstructing data through a learned stochastic process, then analyze what that means for invertibility, reconstruction, and breach resilience.
<div class="d-flex align-items-center gap-3 mb-3">
<a href="https://github.com/Alshival-Ai/alshicrypt-multimodal" target="_blank" rel="noopener" aria-label="View on GitHub" title="View on GitHub" style="display:inline-flex;align-items:center;gap:0.5rem;padding:0.55rem 0.9rem;border-radius:999px;background:#24292f;color:#ffffff;text-decoration:none;">
<i class="ti ti-brand-github ti-lg"></i><span>GitHub</span>
</a>
<a href="https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/" target="_blank" rel="noopener" aria-label="Read on Alshival.Ai" title="Read on Alshival.Ai" style="display:inline-flex;align-items:center;gap:0.5rem;padding:0.55rem 0.9rem;border-radius:999px;background:rgba(15,23,42,0.08);color:inherit;text-decoration:none;">
<img src="/static/img/logos/brain1_transparent.png" alt="Alshival.Ai" style="width:18px;height:18px;object-fit:contain;">
<span>Alshival.Ai</span>
</a>
</div>
## The Core Idea, Up Front
Our research explores a practical security direction:
- an image is transformed by a learned **Encrypter**,
- the cloud vault stores only the transformed artifact,
- a paired learned **Decrypter** on the user's trusted device reconstructs the image.
That deployment idea matters because, in the system described in [our publication](https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/), the stored artifact is not just "an image with a lock on it." It is the output of a **stochastic, information-losing process**.
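That pipeline can be sketched numerically. The snippet below is a toy stand-in of my own, not the paper's architecture: `encrypt` and `decrypt` are hypothetical placeholders for the learned models, and the noise scale is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def encrypt(image: np.ndarray, noise_scale: float = 0.8) -> np.ndarray:
    """Toy stand-in for the learned Encrypter: a stochastic, lossy transform."""
    noisy = np.sqrt(1 - noise_scale) * image + np.sqrt(noise_scale) * rng.standard_normal(image.shape)
    # Clipping and coarse rounding discard information: the map has no exact inverse.
    return np.clip(noisy, 0.0, 1.0).round(2)

def decrypt(artifact: np.ndarray) -> np.ndarray:
    """Toy stand-in for the learned Decrypter held on the trusted device."""
    # A real decrypter is a trained model applying learned priors;
    # this placeholder just returns the artifact unchanged.
    return artifact

image = rng.random((4, 4))    # "original image", never leaves the device
artifact = encrypt(image)     # only this artifact is uploaded to the cloud vault
restored = decrypt(artifact)  # reconstruction happens back on the trusted device
```

The point of the sketch is the data flow, not the transform itself: the vault only ever sees `artifact`.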
## System View: Device Models, Cloud Vault, and Breach Boundary
```mermaid
flowchart LR
classDef device fill:#e0f2fe,stroke:#0369a1,stroke-width:2px,color:#082f49;
classDef model fill:#dbeafe,stroke:#1d4ed8,stroke-width:2px,color:#172554;
classDef vault fill:#fef3c7,stroke:#b45309,stroke-width:2px,color:#78350f;
classDef artifact fill:#fee2e2,stroke:#b91c1c,stroke-width:2px,color:#7f1d1d;
classDef attacker fill:#e5e7eb,stroke:#6b7280,stroke-width:1.5px,color:#111827;
subgraph TrustedA[Trusted user device]
O["Original image<br/>Togepi before vault"]
E{"Encrypter model<br/>on-device"}
end
subgraph Cloud[Cloud vault]
C[("Encrypted artifact<br/>stochastic + information-losing")]
end
subgraph TrustedB[Trusted user device]
D{"Decrypter model<br/>on-device"}
R["Restored image<br/>Togepi after reconstruction"]
end
O -->|local transform| E
E -->|upload artifact| C
C -->|artifact download| D
D -->|local reconstruction| R
A[Attacker or quantum system] -.->|vault access only| C
A -.->|no exact inverse to apply| C
class O,R device
class E,D model
class C artifact
class A attacker
```

This is the architecture I want people to keep in mind while reading [our publication](https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/): the cloud stores transformed artifacts, while reconstruction capability stays with the user.
## Togepi Example: Original -> Vault -> Restored
| Original | Encrypted In Vault | Restored On Trusted Device |
|---|---|---|
|  |  |  |
This is the application picture from our research: when the image enters the vault, it is converted into the encrypted artifact. If the vault is hijacked, attackers get the artifact, not the directly intelligible original. Restoration then depends on the learned Decrypter model on the user's trusted device.
## Why We Use Pokemon In The Research
In the paper, we use **809 Pokemon sprite pairs as a controlled RGBA dataset**. That choice is deliberate.
Pokemon give us an educational example of the learned Encrypter/Decrypter pipeline:
- the characters are instantly recognizable,
- the sprites are compact and visually clean,
- and the discrete color structure makes distortion and reconstruction easy to inspect.
That makes the dataset useful for teaching the theory. A reader can look at a familiar Pokemon and immediately understand the three stages of the research problem: the original image, the encrypted artifact produced by the learned Encrypter, and the reconstructed output produced by the learned Decrypter.
So when we show Pokemon in this work, we are using Pokemon as an **educational demonstration dataset** for how learned encryption and reconstruction behave in practice before moving toward more serious privacy-sensitive applications.
## What Our Research Actually Shows
The paper is careful about what is demonstrated and what is not. In [our publication](https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/), we show that a learned Encrypter/Decrypter pair can train stably on a diffusion-like forward process and recover meaningful reconstructions from heavily distorted endpoints.
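A diffusion-like forward process, in its generic textbook form (the paper's exact operator and noise schedule may differ), repeatedly shrinks the signal and injects Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_step(x: np.ndarray, beta: float) -> np.ndarray:
    """One diffusion-like corruption step: shrink the signal, add Gaussian noise."""
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)

x0 = rng.random((8, 8))  # stand-in image with values in [0, 1)
x = x0
for _ in range(50):      # many small steps push x toward pure noise
    x = forward_step(x, beta=0.05)
# After enough steps the endpoint carries almost no usable trace of x0;
# a learned Decrypter must reconstruct approximately, relying on priors.
```

The "heavily distorted endpoints" mentioned above correspond to `x` after many such steps.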
The mathematical reason the project is interesting is also the reason its security discussion differs from a conventional cryptographic one:
- the forward operator becomes **non-injective** after clipping and quantization,
- the process is **information-losing** rather than exactly reversible,
- and recovery depends on learned priors and reconstruction capability, not an exact algebraic inverse.
That distinction is the heart of the research.
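Non-injectivity under clipping and quantization is easy to verify concretely. The toy operator below is mine, not the paper's, but it shows the mechanism: distinct inputs collide after the lossy steps.

```python
import numpy as np

def clip_quantize(x: np.ndarray) -> np.ndarray:
    """Clip to [0, 1], then quantize to 8-bit levels: a non-injective map."""
    return np.round(np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

a = np.array([1.3, -0.2, 0.5001])
b = np.array([2.7, -9.0, 0.5003])  # a clearly different input...
same = np.array_equal(clip_quantize(a), clip_quantize(b))
print(same)  # True: both inputs collapse to the same artifact
# Since two distinct inputs share one output, no exact inverse can exist.
```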
## The Quantum-Proof Idea
When people hear a phrase like "quantum-proof," they usually think about number-theoretic cryptography and the algorithms that threaten it. Our research is pointing at a different mechanism.
The key idea in [our publication](https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/) is not that a quantum computer is weak. It is that the stored artifact is produced by a process that destroys invertible structure.
That matters because a quantum computer is still a computer: if there is no exact inverse map available from artifact to original, then there is nothing clean to reverse. In the prototype analyzed in the paper, the endpoint loses recoverable structure through stochastic corruption, clipping, and quantization. Once that information is erased by the pipeline, extra compute does not recreate it.
So the quantum-proof direction here is this:
- not "quantum cannot compute," but
- "quantum cannot recover information that the artifact no longer uniquely contains."
That is a much stronger and cleaner way to frame the idea.
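That framing can be made concrete with a preimage count. The 3-bit quantizer below is a hypothetical stand-in for the lossy pipeline; the point is that every stored value is consistent with many originals:

```python
import numpy as np

def quantize_3bit(v: np.ndarray) -> np.ndarray:
    """Keep only the top 3 bits of an 8-bit value: a hypothetical lossy pipeline."""
    return v >> 5

originals = np.arange(256, dtype=np.uint8)  # every possible 8-bit "original"
artifacts = quantize_3bit(originals)        # what the vault would store
preimages = {int(a): int((artifacts == a).sum()) for a in np.unique(artifacts)}
print(preimages)  # every artifact value has exactly 32 valid preimages
# Unlimited compute, classical or quantum, can enumerate all 32 candidates,
# but nothing in the artifact singles out which candidate was the original.
```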
## Why The Vault Scenario Matters
This is the breach-resilience idea I want the reader to walk through step by step.
1. A user image is transformed locally by the learned Encrypter.
2. The cloud vault stores only the stochastic, information-losing artifact.
3. An attacker breaches the vault and steals what is stored there.
4. What the attacker has is still only the artifact, not the original image and not the reconstruction capability.
5. The learned Decrypter remains on the user's trusted device, so recovery and artifact possession are separated.
```mermaid
flowchart TD
classDef device fill:#dbeafe,stroke:#1d4ed8,stroke-width:2px,color:#172554;
classDef vault fill:#fef3c7,stroke:#b45309,stroke-width:2px,color:#78350f;
classDef artifact fill:#fee2e2,stroke:#b91c1c,stroke-width:2px,color:#7f1d1d;
classDef attacker fill:#e5e7eb,stroke:#6b7280,stroke-width:1.5px,color:#111827;
classDef result fill:#dcfce7,stroke:#15803d,stroke-width:2px,color:#14532d;
A[User image] --> B[Encrypter on trusted device]
B --> C[(Artifact stored in cloud vault)]
C -. stolen .-> D[Attacker or quantum system]
C --> E[Artifact returned to trusted device]
E --> F[Decrypter on trusted device]
F --> G[Reconstructed image]
D -. no exact inverse from artifact alone .-> C
class B,E,F device
class C artifact
class D attacker
class G result
```

That is the core shift in the threat model. If attackers steal a database full of directly viewable images, the breach is immediately catastrophic. But if they steal only stochastic, information-losing artifacts, the situation is qualitatively different. Artifact possession is not the same as image possession.
That is where the learned Decrypter matters. In our research direction, the cloud stores artifacts, while the user's device stores the reconstruction capability. The breach target and the reconstruction capability no longer live in the same place.
This is the security intuition we care about most: **reduce the value of what sits in the vault**.
## Why This Research Direction Matters
The reason we are publishing this work is not to claim that the problem is finished. It is to establish a useful theory:
- ML-based encrypt/decrypt can be studied as a **transport and storage resilience** problem,
- the cloud can hold artifacts with reduced direct utility under breach,
- and trusted-device models can hold the reconstruction capability.
That is a meaningful research direction even before it becomes a fully formalized security system.
## Where To Read The Research
The full research is here: [Learned Encrypter-Decrypter Models for Diffusion-Like Image Encryption](https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/)
The implementation and project materials are here: [alshicrypt-multimodal](https://github.com/Alshival-Ai/alshicrypt-multimodal)
## Sources
- [Our research publication](https://alshival.ai/publications/alshicrypt-image-diffusion-encryption/)
- [Project repository](https://github.com/Alshival-Ai/alshicrypt-multimodal)