Dirty Frag is a Linux local privilege escalation technique published by Hyunwoo Kim as a public proof of concept in V4bel/dirtyfrag. The upstream write-up describes two page-cache write paths: an xfrm-ESP path and an RxRPC path. The xfrm path needs user and network namespace creation. The RxRPC path is meant to cover environments where that namespace path is blocked, but it depends on RxRPC being available.
The upstream write-up is Linux-focused. For Kubernetes, the question we cared about was narrower:
From an ordinary pod, which parts of the Dirty Frag chain are reachable, and which Kubernetes or node controls actually stop it?
We tested that on real Kubernetes clusters and a local kind cluster. The provider differences mattered:
- EKS on Amazon Linux 2023: exploitable in our lab when seccomp was unset or Unconfined.
- GKE on Container-Optimized OS: exploitable in our lab when seccomp was unset or Unconfined.
- Talos on our on-prem lab cluster: blocked the tested xfrm chain even with explicit Unconfined seccomp, because user namespaces were disabled at the node level.
- RuntimeDefault seccomp blocked the tested xfrm chain on EKS, GKE, Talos, and local kind by denying unshare(USER|NET).
- Pod Security Standards Restricted blocked the full tested xfrm PoC on GKE and blocked the tested xfrm prerequisites on EKS and Talos.
This post is deliberately scoped. We proved container root in controlled pods on EKS and GKE. We did not prove host root. We did not prove a container escape. We did not test every managed Kubernetes distribution, every node image, every kernel build, or the RxRPC fallback end-to-end. AF_RXRPC was unsupported in every Kubernetes environment we tested.
Update, May 8, 2026, 13:15 UTC: Upstream now says the xfrm-ESP Page-Cache Write path has been assigned CVE-2026-43284 and patched in mainline Linux at commit f4c50a4034e6. NVD has also received CVE-2026-43284 with stable backport references. Upstream says the RxRPC Page-Cache Write path is reserved as CVE-2026-43500 for tracking, but NVD did not have a public record for that ID as of our 13:03 UTC recheck. Our Kubernetes lab results below cover the xfrm path. We did not validate the RxRPC fallback.
Update, May 8, 2026, 17:45 UTC: AWS has now published a Dirty Frag bulletin for Amazon Linux kernels. The Amazon Linux CVE page lists Amazon Linux 2023 kernel, kernel6.12, and kernel6.18 as Pending Fix. AWS says to check whether esp4, esp6, ipcomp4, ipcomp6, or rxrpc are loaded, and to block future module loading or disable unprivileged user namespaces where that fits the workload. The latest EKS-optimized AL2023 AMI release we found, v20260505, uses kernel6.12 6.12.80-106.156.amzn2023 and containerd 2.2.3; our EKS lab below used the older 20260413 image with 6.12.79-101.147.amzn2023 and containerd 2.2.1. We have not retested EKS v20260505. If a current node shows these modules are not loaded, treat that as an important runtime fact, not as proof that the node is patched or that future module autoloading is impossible.
Update, May 8, 2026, 22:20 UTC: Distro tracking for the RxRPC side has moved. NVD still had no public CVE-2026-43500 record at our recheck, but Ubuntu and Debian now publish CVE-2026-43500 pages for the RxRPC issue. Debian's DSA-6253-1 includes both CVE-2026-43284 and CVE-2026-43500 and fixes Debian trixie in linux 6.12.86-1; DLA-4572-1 fixes Debian bullseye security in linux 5.10.251-4. This does not change our Kubernetes result below: AF_RXRPC was unavailable in the clusters we tested, and we did not validate the RxRPC fallback.
Update, May 9, 2026, 13:25 UTC: Amazon Linux has moved CVE-2026-43284 from Pending Fix to Fixed for Amazon Linux 2023 kernel, kernel6.12, and kernel6.18. The AL2023 kernel6.12 fix is ALAS2023-2026-1695, with package kernel6.12-6.12.83-113.160.amzn2023. The latest EKS-optimized AL2023 AMI release we found is still v20260505, which lists kernel6.12 6.12.80-106.156.amzn2023, so do not treat the Amazon Linux package fix as proof that a managed EKS node image already contains it. Debian also added DSA-6258-1, fixing bookworm security in linux 6.1.170-3 for both Dirty Frag CVEs. NVD still had no public CVE-2026-43500 record at this recheck.
Key findings
- On EKS and GKE, pods with unset seccomp ran with Seccomp: 0. The xfrm Dirty Frag path worked and reached container uid=0(root).
- On EKS and GKE, explicit seccompProfile.type: Unconfined also worked and reached container uid=0(root).
- On EKS and GKE, RuntimeDefault seccomp blocked the PoC at unshare(USER|NET) before the page-cache marker changed.
- On GKE, PSS Restricted blocked the full PoC with NoNewPrivs: 1, Seccomp: 2, dropped capabilities, denied unshare, and unchanged /usr/bin/su marker bytes. On EKS and Talos, PSS Restricted blocked the tested xfrm prerequisites at the same unshare(USER|NET) step.
- On Talos, user.max_user_namespaces=0 blocked the xfrm path even when seccomp was explicitly Unconfined.
- AF_RXRPC was not available in our kind, EKS, GKE, or Talos tests, so we do not claim coverage of the RxRPC fallback.
- GKE and Talos did not have pcbc(fcrypt) available in our tested pods. The xfrm path still succeeded on GKE when seccomp was unset or Unconfined, so missing pcbc(fcrypt) did not save that GKE node from the xfrm chain.
- We saw an important Kubernetes portability difference: on EKS and GKE, leaving seccomp unset meant Seccomp: 0; on Talos, an unset seccomp pod still showed Seccomp: 2 in our test.
What Dirty Frag is
The upstream Dirty Frag project describes a local privilege escalation class built from page-cache write primitives. Its README says the xfrm-ESP page-cache write path has existed since a Linux commit from January 2017, and that the RxRPC path has existed since a June 2023 commit. The author frames Dirty Frag as related to Dirty Pipe and Copy Fail because the attacker changes file-backed page-cache contents rather than ordinary on-disk file bytes.
The xfrm path in the public PoC uses user and network namespaces, configures xfrm state through NETLINK_XFRM, and patches cached bytes for /usr/bin/su. In our successful EKS and GKE lab runs, the pod started as uid 1000, mutated cached bytes for /usr/bin/su, then executed the mutated path and reached uid 0 inside the container.
That last sentence has a boundary: container uid 0 is not the same claim as host root. Kubernetes pods share the node kernel, but a container process still runs with container namespaces, mounts, and whatever runtime isolation remains.
Why Kubernetes changes the question
Kubernetes teams do not only need to ask "is the Linux kernel affected?" They need to ask:
- Can a pod create user and network namespaces?
- Is seccomp actually applied?
- Is seccomp unset, RuntimeDefault, Localhost, or Unconfined?
- Does the node allow unprivileged user namespaces?
- Does the kernel expose NETLINK_XFRM, AF_ALG, and AF_RXRPC?
- Does PSS Restricted apply to the namespace?
- Are untrusted workloads colocated with sensitive pods on the same node?
Kubernetes documents seccomp as a way to restrict syscalls from userspace into the kernel. Kubernetes also supports the RuntimeDefault profile and node-local Localhost profiles. But RuntimeDefault is not a single universal profile. It is the default profile provided by the runtime and node environment.
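To make "actually applied" concrete, here is a minimal sketch of a pod spec that sets RuntimeDefault explicitly rather than leaving seccomp unset; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-runtime-default   # hypothetical name
spec:
  securityContext:
    # Pod-level profile; individual containers can still override this.
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
```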
Pod Security Standards Restricted is also not a blanket exploit shield. It is a baseline. It requires controls that matter here, including no privilege escalation, seccomp, and dropping all capabilities. Kubernetes says Restricted containers must drop ALL capabilities and may only add back NET_BIND_SERVICE. In our GKE restricted full-PoC run, that produced CapBnd: 0000000000000000, NoNewPrivs: 1, and Seccomp: 2.
For Dirty Frag's xfrm path, those controls were enough in our tests. That is different from our Copy Fail result, where PSS Restricted and RuntimeDefault did not block AF_ALG reachability. Dirty Frag and Copy Fail touch related page-cache territory, but their Kubernetes control points are not identical.
What we tested
We used the public Dirty Frag PoC from upstream commit:
892d9a31d391b7f0fccb333855f6289507186748
We checked that commit against upstream master before writing this post.
We built two amd64 binaries for the Kubernetes tests:
- A probe binary that records kernel/runtime facts and tests reachability of AF_RXRPC, NETLINK_XFRM, AF_ALG, pcbc(fcrypt), keyring calls, unshare(USER|NET), uid/gid map writes, loopback setup, and NETLINK_XFRM after unshare.
- A PoC binary based on the public exp.c, forced down the xfrm/ESP path, with logging around /usr/bin/su marker bytes before and after execution.
Every mutating provider run used a privileged cleanup pod to drop page cache before exploit attempts and again at the end. We targeted low-density nodes where possible, used short-lived namespaces with jdfrag-* names, and verified deletion afterward.
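For reference, a drop-caches cleanup pod can be as small as the following sketch. This is not our exact lab manifest: the name and image are placeholders, it assumes the namespace allows privileged pods, and it should be pinned to the node under test with nodeName.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: jdfrag-drop-caches   # hypothetical name, matching our jdfrag-* convention
spec:
  restartPolicy: Never
  nodeName: target-node      # placeholder: pin to the node under test
  containers:
    - name: drop
      image: busybox:1.36
      securityContext:
        privileged: true     # needed to write the node-wide sysctl file
      # Flushes dirty pages, then drops page cache, dentries, and inodes.
      command: ["sh", "-c", "sync && echo 3 > /proc/sys/vm/drop_caches"]
```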
The main success marker was not just "the process exited zero." We required all of the following for an exploitable result:
- initial pod uid was non-root;
- seccomp state matched the case being tested;
- marker bytes in /usr/bin/su changed from 0300000004000000 to 31ff31f631c0b06a;
- the PoC printed the xfrm page-cache patch message;
- the shell reached uid=0(root);
- final cleanup ran.
EKS result: exploitable when seccomp was unset or Unconfined
The EKS run used a non-production cluster. The target node was a low-density Amazon Linux 2023 worker with seven pods during inventory.
Kubernetes: v1.34.7-eks-40737a8
OS image: Amazon Linux 2023.11.20260413
Kernel: 6.12.79-101.147.amzn2023.x86_64
Container runtime: containerd 2.2.1
EKS RuntimeDefault
RuntimeDefault blocked the xfrm chain at namespace creation:
NoNewPrivs: 0
Seccomp: 2
Seccomp_filters: 1
DIRTYFRAG_EXP_BEFORE_MARKER 0300000004000000
[su] unshare: Operation not permitted
dirtyfrag: failed (rc=1)
DIRTYFRAG_EXP_AFTER_MARKER 0300000004000000
The probe showed that NETLINK_XFRM and AF_ALG were reachable before unshare, and pcbc(fcrypt) existed on this EKS node, but unshare(USER|NET) was denied. For the xfrm chain, that denial was decisive.
EKS unset seccomp
With no seccomp profile set, the pod ran with Seccomp: 0. The probe showed unshare(USER|NET) succeeded, uid/gid maps could be written, loopback could be brought up, and NETLINK_XFRM worked after unshare.
The full PoC reached container root:
DIRTYFRAG_IDS_BEFORE uid=1000 gid=1000 groups=1000
CapEff: 0000000000000000
NoNewPrivs: 0
Seccomp: 0
Seccomp_filters: 0
DIRTYFRAG_EXP_BEFORE_MARKER 0300000004000000
[su] installed 48 xfrm SAs
[su] wrote 192 bytes to /usr/bin/su starting at 0x0
[su] /usr/bin/su page-cache patched (entry 0x78 = shellcode)
# uid=0(root) gid=0(root) groups=0(root)
root
DIRTYFRAG_EXP_AFTER_MARKER 31ff31f631c0b06a
EKS Unconfined
Explicit seccompProfile.type: Unconfined also reached container root with the same marker change and uid=0(root) result.
EKS PSS Restricted
The EKS restricted probe had NoNewPrivs: 1, Seccomp: 2, no effective capabilities, and denied unshare(USER|NET). We did not run the full mutating PoC in the EKS restricted namespace after the GKE restricted full-PoC confirmed the same failure point. The validated EKS claim is narrower: PSS Restricted blocked the prerequisites we tested for the xfrm path.
GKE result: exploitable when seccomp was unset or Unconfined
The GKE run used a dev/staging cluster. The target node was the lowest-density Container-Optimized OS worker during inventory.
Kubernetes: v1.33.9-gke.1060000
OS image: Container-Optimized OS from Google
Kernel: 6.6.122+
Container runtime: containerd 2.0.7
GKE RuntimeDefault
RuntimeDefault blocked the xfrm chain at unshare(USER|NET):
NoNewPrivs: 0
Seccomp: 2
Seccomp_filters: 1
DIRTYFRAG_EXP_BEFORE_MARKER 0300000004000000
[su] unshare: Operation not permitted
dirtyfrag: failed (rc=1)
DIRTYFRAG_EXP_AFTER_MARKER 0300000004000000
GKE unset seccomp
Unset seccomp on GKE behaved like EKS: the pod ran with Seccomp: 0, user and network namespace creation worked, and NETLINK_XFRM worked after unshare.
The full PoC reached container root:
DIRTYFRAG_IDS_BEFORE uid=1000 gid=1000 groups=1000
CapEff: 0000000000000000
NoNewPrivs: 0
Seccomp: 0
Seccomp_filters: 0
DIRTYFRAG_EXP_BEFORE_MARKER 0300000004000000
[su] installed 48 xfrm SAs
[su] wrote 192 bytes to /usr/bin/su starting at 0x0
[su] /usr/bin/su page-cache patched (entry 0x78 = shellcode)
# uid=0(root) gid=0(root) groups=0(root)
root
DIRTYFRAG_EXP_AFTER_MARKER 31ff31f631c0b06a
GKE Unconfined
Explicit Unconfined also reached container root. The result matched the unset seccomp case: marker changed to 31ff31f631c0b06a, and the PoC reached uid=0(root).
GKE PSS Restricted
We ran the full PoC in a PSS Restricted namespace. It failed before the page-cache marker changed:
DIRTYFRAG_IDS_BEFORE uid=1000 gid=1000 groups=1000
CapEff: 0000000000000000
CapBnd: 0000000000000000
NoNewPrivs: 1
Seccomp: 2
Seccomp_filters: 1
DIRTYFRAG_EXP_BEFORE_MARKER 0300000004000000
[su] unshare: Operation not permitted
dirtyfrag: failed (rc=1)
DIRTYFRAG_EXP_AFTER_MARKER 0300000004000000
That was the most direct defense result in the set: PSS Restricted blocked the full tested xfrm chain on GKE.
Talos result: blocked by user namespaces disabled
The on-prem lab cluster includes Talos and non-Talos nodes. For the final Talos pass, we targeted a Talos worker with user namespaces disabled.
Kubernetes: v1.35.0
OS image: Talos v1.12.2
Kernel: 6.18.5-talos
Container runtime: containerd 2.1.6
RuntimeDefault and the unset-seccomp case both showed Seccomp: 2 in our Talos run and denied unshare(USER|NET) with EPERM.
The more interesting case was explicit Unconfined:
DIRTYFRAG_PROBE status Seccomp: 0
DIRTYFRAG_PROBE proc max_user_namespaces 0
DIRTYFRAG_PROBE socket(NETLINK_XFRM) OK
DIRTYFRAG_PROBE socket(AF_ALG) OK
DIRTYFRAG_PROBE unshare(USER|NET) ERR errno=28 (No space left on device)
The full PoC under explicit Unconfined failed the same way:
NoNewPrivs: 0
Seccomp: 0
DIRTYFRAG_EXP_BEFORE_MARKER 0300000004000000
[su] unshare: No space left on device
dirtyfrag: failed (rc=1)
DIRTYFRAG_EXP_AFTER_MARKER 0300000004000000
user.max_user_namespaces=0 was enough to stop the tested xfrm path on this Talos node. That does not mean "Talos is immune to Dirty Frag." It means this specific Dirty Frag xfrm chain could not pass the namespace setup step in our Talos configuration.
Local kind result
We also ran a local kind cluster on an arm64 OrbStack host. It was useful for control testing but not central to our provider claims.
In a permissive local kind setup, the xfrm path changed the /usr/bin/su marker bytes. It did not hand off cleanly to root because the upstream payload path we used was x86_64 and the host was arm64. In RuntimeDefault and PSS Restricted cases, the chain failed at unshare(USER|NET).
We are not using kind as evidence for cloud-provider exposure. It was a reproducibility harness and a guardrail against confusing Kubernetes YAML behavior with provider behavior.
Why unset seccomp is the Kubernetes footgun
Operationally, the key finding was that unset seccomp was not equivalent to RuntimeDefault in EKS or GKE.
On both providers:
- unset seccomp: Seccomp: 0, xfrm chain succeeded, container root reached;
- RuntimeDefault: Seccomp: 2, unshare(USER|NET) denied, marker unchanged;
- Unconfined: Seccomp: 0, xfrm chain succeeded, container root reached.
Teams often check for Unconfined and miss unset seccomp. For Dirty Frag's xfrm path, that distinction was the difference between blocked and exploitable in our EKS and GKE labs.
Controls that changed the lab result
Enforce seccomp, do not leave it unset
For the xfrm chain we tested, RuntimeDefault was enough to block unshare(USER|NET) in EKS, GKE, Talos, and kind.
That does not make RuntimeDefault a universal Dirty Frag fix. It means the tested runtime default profiles blocked the tested xfrm path. If a different runtime default allows user and network namespace creation, your result may differ.
For high-risk workloads, consider a known-good Localhost seccomp profile and test it on every node family. Kubernetes supports applying node-local seccomp profiles to pods, but the profile must exist on the node and the workload has to reference it correctly.
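Referencing a node-local profile looks like this sketch. The path is relative to the kubelet's seccomp directory on the node (commonly /var/lib/kubelet/seccomp), and profiles/audit.json is a placeholder that must already exist on every node the pod can schedule to:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: localhost-seccomp-example   # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      # Relative to the kubelet seccomp root; must exist on the node.
      localhostProfile: profiles/audit.json   # placeholder profile file
  containers:
    - name: app
      image: registry.example.com/app:latest  # placeholder image
```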
Enforce PSS Restricted for untrusted workloads
PSS Restricted blocked the full tested xfrm chain on GKE and blocked the tested prerequisites on EKS and Talos.
The relevant controls are:
- allowPrivilegeEscalation: false, which sets no_new_privs;
- seccompProfile.type: RuntimeDefault or a restrictive Localhost profile;
- capabilities.drop: ["ALL"];
- non-root execution.
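Put together, a Restricted-compliant container looks roughly like this sketch; the pod name, uid, and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-compliant   # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000             # placeholder non-root uid
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false   # sets no_new_privs
        capabilities:
          drop: ["ALL"]
```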
PSS Restricted is not a substitute for patching the kernel, but it changed the outcome of this PoC.
Consider node-level user namespace restrictions
Talos blocked the xfrm path even under explicit Unconfined seccomp because user.max_user_namespaces=0.
This is a strong control for this chain, but it can break workloads that legitimately need unprivileged user namespaces. Treat it as a node-pool decision, not a casual cluster-wide toggle. Build runners, image builders, sandboxing tools, and some developer workloads may depend on user namespaces.
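To read the effective node value, a one-shot check pod like this sketch works, assuming the pod shares the node's initial user namespace (the usual case) so the value reflects the node setting; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-check   # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: check
      image: busybox:1.36
      # 0 means unprivileged user namespace creation is disabled node-wide.
      command: ["sh", "-c", "cat /proc/sys/user/max_user_namespaces"]
```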
Watch for unset or Unconfined seccomp as a high-priority posture issue
For Dirty Frag, a pod with unset seccomp on EKS or GKE was not merely "less hardened." It was exploitable in our lab.
Inventory all running pods and flag:
- unset seccomp at pod and container level;
- seccompProfile.type: Unconfined;
- namespaces without PSS labels;
- allowPrivilegeEscalation: true or unset;
- containers that do not drop all capabilities;
- workloads on node pools that allow unprivileged user namespaces.
Patch and replace nodes as vendor guidance lands
Dirty Frag is a Linux kernel issue. Kubernetes policy can reduce exploitability, but the durable fix is at the node OS/kernel layer.
For the xfrm path we tested, track CVE-2026-43284 and your node OS vendor's kernel packages. Upstream Linux and stable references now exist, but managed Kubernetes nodes are only fixed when the node image or kernel package you are actually running includes the backport. Kernel strings can be misleading because vendors backport fixes without changing to the same upstream version number.
For EKS on Amazon Linux, also track AWS Security Bulletin 2026-027-AWS and the Amazon Linux CVE status. At our 17:35 UTC recheck on May 8, 2026, Amazon Linux still showed AL2023 kernel6.12 as Pending Fix, even though the latest EKS-optimized AL2023 AMI release had moved from the kernel we tested to 6.12.80-106.156.amzn2023.
At our 13:25 UTC recheck on May 9, 2026, Amazon Linux had published AL2023 fixes for CVE-2026-43284: ALAS2023-2026-1694 for kernel, ALAS2023-2026-1695 for kernel6.12, and ALAS2023-2026-1693 for kernel6.18. EKS node image status is a separate question from Amazon Linux package availability. As of that same recheck, the latest EKS-optimized AL2023 AMI release was still v20260505, with kernel6.12 6.12.80-106.156.amzn2023, not the fixed kernel6.12-6.12.83-113.160.amzn2023 package.
Module state is worth checking, but it needs careful interpretation. AWS recommends checking lsmod for esp4, esp6, ipcomp4, ipcomp6, and rxrpc; if the modules are not loaded, AWS still recommends blocking future module loading where appropriate. In other words, "not loaded right now" is not the same evidence as "patched," and it is not the same evidence as "cannot be loaded by a reachable kernel path."
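Reading /proc/modules from a pod is one low-impact way to run that check in default runtimes, since loaded kernel modules are node-global; this sketch greps for the AWS-listed names (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: module-check   # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: check
      image: busybox:1.36
      command:
        - sh
        - -c
        # Prints any AWS-listed module currently loaded; empty output means
        # "not loaded now", not "patched" or "cannot be loaded later".
        - "grep -E '^(esp4|esp6|ipcomp4|ipcomp6|rxrpc) ' /proc/modules || echo none-loaded"
```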
Treat RxRPC separately. Upstream says CVE-2026-43500 is reserved for tracking the RxRPC Page-Cache Write path. Debian and Ubuntu now publish downstream CVE-2026-43500 records, and Debian has shipped linux package fixes for bullseye security, bookworm security, and trixie security. We still did not test that path and did not see AF_RXRPC available in any Kubernetes environment we tested.
What Juliet can do here
This is where a graph view helps: the answer lives across node inventory, workload posture, and runtime facts.
For Dirty Frag exposure, the useful graph is:
- node kernel, OS image, Kubernetes version, and container runtime;
- workload-to-node placement;
- effective seccomp after pod-level and container-level inheritance;
- PSS labels on namespaces;
- allowPrivilegeEscalation, capabilities, privileged mode, host namespaces, and hostPath;
- node facts such as user.max_user_namespaces and whether tested address families are reachable.
Juliet can turn that into the questions defenders need to answer:
- Which workloads are running with unset or Unconfined seccomp on EKS or GKE?
- Which namespaces are not enforcing PSS Restricted?
- Which high-risk workloads share nodes with sensitive workloads?
- Which node families need a targeted Dirty Frag validation run?
- Which compensating controls are present, and which ones are not changing the risk?
Juliet can also help block the risky paths we validated:
- Admission policies can reject new workloads that leave seccomp unset, use Unconfined, allow privilege escalation, or fail to drop all capabilities; a minimal policy sketch follows this list.
- Kubernetes PSS Restricted enforcement can be rolled out namespace by namespace while Juliet monitors namespace labels and workload posture, so teams can see which workloads would break before switching from audit to enforce.
- Runtime policies can detect namespace-manipulation attempts such as unshare and setns; in enforce mode, Juliet can kill matching container processes with namespace scope and rate-limit guardrails.
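As one concrete shape for the admission-policy item above, a Kubernetes ValidatingAdmissionPolicy with a CEL rule can reject pods whose pod-level seccomp profile is unset or Unconfined. This sketch is deliberately minimal: it only checks the pod-level field, so a production policy would also need to cover container-level overrides and ephemeral containers, and it requires a matching ValidatingAdmissionPolicyBinding to take effect.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-pod-seccomp   # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  validations:
    # Reject pods with no pod-level seccompProfile or an Unconfined one.
    - expression: >-
        has(object.spec.securityContext) &&
        has(object.spec.securityContext.seccompProfile) &&
        object.spec.securityContext.seccompProfile.type != 'Unconfined'
      message: "Pods must set a pod-level seccompProfile other than Unconfined."
```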
That is not the same as claiming Juliet patches the kernel. The safest control stack is still: patch or replace affected nodes, enforce seccomp/PSS for untrusted workloads, and use runtime enforcement as an additional tripwire for exploit behavior.
Want help checking this in your clusters? Start Juliet free, connect a non-production cluster first, and use Explorer to inspect seccomp posture, namespace PSS labels, workload placement, and node facts. If you want us to run this with you, request a Dirty Frag exposure review.
What we did not prove
Do not overread this post.
We did not prove that every EKS cluster is exploitable. We tested one EKS cluster, one Amazon Linux 2023 node family, one kernel build, and one containerd build.
We did not prove that every GKE cluster is exploitable. We tested one GKE cluster, one COS node family, one kernel build, and one containerd build.
We did not prove that Talos universally blocks Dirty Frag. We proved that our Talos node, with user.max_user_namespaces=0, blocked the tested xfrm path.
We did not prove host root, node persistence, or container escape.
We did not test the RxRPC fallback end-to-end because AF_RXRPC was unsupported in every environment we tested.
We are not publishing exploit code, lab patches, or reproduction commands in this post.
Defender checklist
Immediate checks:
- Find pods where effective seccomp is unset or Unconfined.
- Enforce RuntimeDefault or a known-good Localhost seccomp profile for untrusted workloads.
- Enforce PSS Restricted on namespaces that run untrusted or multi-tenant workloads; a namespace-label sketch follows this list.
- Confirm allowPrivilegeEscalation: false and capabilities.drop: ["ALL"].
- Check node-level user namespace policy on each node pool.
- On Amazon Linux nodes, check whether AWS-listed modules are loaded and whether future loading is blocked where appropriate.
- Separate build runners, CI jobs, plugin execution, and customer-controlled code from sensitive workloads.
- Track CVE-2026-43284 vendor kernel guidance and plan node replacement or patch rollout.
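The namespace labels for the PSS item above look like this sketch; the namespace name is a placeholder, and keeping audit and warn alongside enforce is a choice we like for measuring breakage, not a requirement:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: untrusted-workloads   # hypothetical name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    # Audit and warn surface would-be violations while you tune workloads.
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```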
Validation checks:
- Do not assume YAML intent equals runtime behavior. Inspect /proc/self/status inside a test pod and confirm Seccomp; see the test-pod sketch after this list.
- Test representative node pools separately. EKS, GKE, Talos, and kind did not behave identically.
- Confirm cleanup and node health after any authorized validation run.
- Treat AF_RXRPC separately. We could not test that fallback because it was not available in our environments.
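A minimal test pod for the /proc/self/status check could look like this sketch; it leaves seccompProfile unset on purpose so the output shows what "unset" resolves to on that node pool (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-status-check   # hypothetical name
spec:
  restartPolicy: Never
  # No seccompProfile on purpose: this shows what "unset" resolves to here.
  containers:
    - name: check
      image: busybox:1.36
      # Prints Seccomp, Seccomp_filters, and NoNewPrivs lines.
      command: ["sh", "-c", "grep -E '^(Seccomp|NoNewPrivs)' /proc/self/status"]
```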
FAQ
Is Dirty Frag a Kubernetes vulnerability?
No. It is a Linux kernel issue. Kubernetes matters because pods share the node kernel, and Kubernetes policy determines whether a pod can reach the primitives the exploit chain needs.
Does RuntimeDefault stop Dirty Frag?
It stopped the tested xfrm path in our EKS, GKE, Talos, and kind labs by denying unshare(USER|NET). Do not generalize that to every runtime or every future Dirty Frag variant without testing.
Does PSS Restricted stop Dirty Frag?
It blocked the tested xfrm chain in our labs. On GKE, we ran the full PoC under PSS Restricted and it failed before marker bytes changed. On EKS and Talos, Restricted blocked the tested prerequisites, including unshare(USER|NET).
Were EKS and GKE exploitable?
In our labs, yes, when seccomp was unset or explicitly Unconfined. The result was container root inside the pod, not proven host root.
Was Talos exploitable?
Not in our final Talos xfrm test. Explicit Unconfined seccomp produced Seccomp: 0, but user.max_user_namespaces=0 caused unshare(USER|NET) to fail with ENOSPC.
What is the most important thing to check first?
Find pods with unset or Unconfined seccomp on node pools that allow user namespaces. That was the strongest predictor of exploitability in the EKS and GKE tests.
Sources
- V4bel/dirtyfrag upstream repository
- Dirty Frag upstream technical write-up
- Linux mainline xfrm-ESP fix f4c50a4034e6
- NVD CVE-2026-43284
- NVD CVE-2026-43500
- AWS Security Bulletin 2026-027-AWS: Dirty Frag and other issues in Amazon Linux kernels
- Amazon Linux CVE-2026-43284 status
- Amazon Linux ALAS2023-2026-1694: AL2023 kernel
- Amazon Linux ALAS2023-2026-1695: AL2023 kernel6.12
- Amazon Linux ALAS2023-2026-1693: AL2023 kernel6.18
- Amazon EKS optimized AMI release v20260505
- Red Hat CVE-2026-43284
- Ubuntu CVE-2026-43500
- Ubuntu Dirty Frag mitigation post
- Debian CVE-2026-43500
- Debian DSA-6253-1
- Debian DSA-6258-1
- Debian DLA-4572-1
- Kubernetes Pod Security Standards
- Kubernetes seccomp documentation