Kubernetes Network Policies: The Complete Guide to Zero Trust Networking
Learn how to implement default deny network policies, namespace isolation, egress control, and a complete 3-tier architecture in Kubernetes.
If you run a Kubernetes cluster without network policies, every pod can talk to every other pod. That includes the pod an attacker just compromised. By default, Kubernetes networking is completely flat — there are no firewalls, no segmentation, and no access controls between workloads. This guide walks you through everything you need to lock it down.
Why Network Policies Are Your First Line of Defense
Kubernetes uses a flat network model where every pod receives its own IP address and can communicate with any other pod across any namespace without NAT. This is great for developer productivity but catastrophic for security.
Consider what happens when an attacker gains code execution in a single pod:
- They can scan the entire cluster network for other services
- They can connect to databases in other namespaces
- They can reach the cloud metadata endpoint (169.254.169.254) and steal IAM credentials
- They can exfiltrate data to external servers
- They can move laterally to other compromised services
Network policies are Kubernetes-native firewall rules that operate at the pod level. They control both ingress (incoming) and egress (outgoing) traffic based on labels, namespaces, IP blocks, and ports.
Important: Network policies require a CNI (Container Network Interface) plugin that supports them. Calico, Cilium, Antrea, and Weave Net all provide full support. Flannel and kubenet do not enforce network policies — the API server accepts the resources but they have no effect.
Check your CNI before relying on network policies:
kubectl get pods -n kube-system -l k8s-app=calico-node # Calico
kubectl get pods -n kube-system -l k8s-app=cilium # Cilium
Understanding the Kubernetes Network Model
Before diving into policies, it helps to understand what you’re working with:
- Every pod gets a unique IP — pods don’t share IPs or use NAT to communicate within the cluster
- All pods can reach all other pods — by default, there is zero network isolation
- Services provide stable endpoints — but the underlying network is still flat
- Namespaces are not security boundaries — without network policies, a pod in dev can connect to a pod in prod
Network policies change this model by introducing explicit allow rules. Once any network policy selects a pod, that pod shifts from “allow all” to “deny all except what’s explicitly allowed” for the policy types specified.
Default Deny: The Foundation of Zero Trust
The single most important network policy you can deploy is a default deny. This policy selects all pods in a namespace and blocks all traffic — both ingress and egress — unless another policy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # Selects ALL pods in the namespace
  policyTypes:
    - Ingress       # Blocks all incoming traffic
    - Egress        # Blocks all outgoing traffic
This is the zero-trust baseline. Once applied:
- No external traffic can reach any pod
- No pod can make outbound connections
- No pod can communicate with any other pod, even in the same namespace
You then layer additional policies on top to allow only the specific traffic flows your application requires. Network policies are additive — each new policy opens specific holes in the default deny.
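This evaluation model can be sketched in a few lines of Python. This is a simplified illustration of the semantics only — real enforcement happens inside the CNI plugin, and the policy shape here (a dict with `pod_selector` and `allowed_from` keys) is a made-up stand-in for the actual API objects:

```python
# Simplified model of NetworkPolicy ingress semantics (illustration only).
def ingress_allowed(pod_labels, peer_labels, policies):
    """policies: list of dicts with 'pod_selector' and 'allowed_from'."""
    # Policies whose podSelector matches this pod ({} matches all pods)
    selecting = [p for p in policies
                 if all(pod_labels.get(k) == v
                        for k, v in p["pod_selector"].items())]
    if not selecting:
        return True  # pod is not selected by any policy: default allow-all
    # Once selected, traffic is allowed iff ANY selecting policy has a
    # matching rule -- policies are additive and only ever open holes.
    for p in selecting:
        for rule in p["allowed_from"]:
            if all(peer_labels.get(k) == v for k, v in rule.items()):
                return True
    return False

deny_all = {"pod_selector": {}, "allowed_from": []}
allow_frontend = {"pod_selector": {"app": "backend"},
                  "allowed_from": [{"app": "frontend"}]}

# With only default deny, nothing gets in:
print(ingress_allowed({"app": "backend"}, {"app": "frontend"}, [deny_all]))
# -> False
# Adding a second policy opens one specific hole:
print(ingress_allowed({"app": "backend"}, {"app": "frontend"},
                      [deny_all, allow_frontend]))
# -> True
```

Note that there is no "deny rule" anywhere in this model — the default deny works purely by selecting pods while allowing nothing, which is exactly how the real API behaves.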
Apply default deny to every namespace:
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -n "$ns" -f default-deny-all.yaml
done
Verify your policies are in place:
kubectl get netpol -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.policyTypes}{"\n"}{end}'
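To audit coverage at scale, you can feed the JSON output of kubectl get netpol -A -o json into a short script. A sketch, assuming you pass it the parsed JSON (field names follow the NetworkPolicy API; the function name is my own):

```python
import json

def namespaces_without_default_deny(netpol_json, all_namespaces):
    """Return namespaces lacking an all-pod policy that covers both
    Ingress and Egress (i.e. podSelector {} with both policyTypes)."""
    covered = set()
    for item in netpol_json.get("items", []):
        spec = item.get("spec", {})
        selects_all = spec.get("podSelector", {}) == {}
        types = set(spec.get("policyTypes", []))
        if selects_all and {"Ingress", "Egress"} <= types:
            covered.add(item["metadata"]["namespace"])
    return sorted(set(all_namespaces) - covered)

# In practice:  kubectl get netpol -A -o json > netpol.json
# then:         data = json.load(open("netpol.json"))
sample = {"items": [{
    "metadata": {"name": "default-deny-all", "namespace": "production"},
    "spec": {"podSelector": {}, "policyTypes": ["Ingress", "Egress"]},
}]}
print(namespaces_without_default_deny(sample, ["production", "staging"]))
# -> ['staging']
```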
DNS Policies: Allowing Essential Traffic
The moment you apply a default deny policy, DNS resolution breaks. Every pod needs to resolve Kubernetes service names (like backend-service.production.svc.cluster.local), and that requires egress access to CoreDNS in kube-system.
This is the first policy you should apply alongside default deny:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Key points:
- Both UDP and TCP port 53 are needed. Most DNS queries use UDP, but large responses (DNSSEC, zone transfers) fall back to TCP.
- Restrict to kube-dns pods specifically by combining the podSelector with the namespaceSelector. This prevents pods from using DNS as a tunnel to reach other services in kube-system.
- The kubernetes.io/metadata.name label is automatically added to namespaces in Kubernetes 1.22+.
Cloud Metadata Protection
One of the most exploited attack paths in cloud-native environments is SSRF (Server-Side Request Forgery) to the cloud metadata endpoint at 169.254.169.254. Every major cloud provider (AWS, GCP, Azure, DigitalOcean, Oracle Cloud) exposes an Instance Metadata Service at this IP address that returns temporary cloud credentials.
The attack chain is straightforward:
- Attacker exploits an SSRF vulnerability in your web application
- The application sends a request to http://169.254.169.254/latest/meta-data/
- The response contains IAM role credentials (AccessKeyId, SecretAccessKey, Token)
- The attacker uses the stolen credentials to access S3, RDS, Secrets Manager, and more
Block it with a network policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cloud-metadata
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.0.0/16   # Block entire link-local range
This allows egress to all IPs except the link-local range that contains the metadata endpoint. We block the entire /16 rather than just 169.254.169.254/32 to catch edge cases.
AWS-specific note: AWS introduced an IPv6 metadata endpoint at fd00:ec2::254. If your cluster has IPv6 enabled, you need an additional policy:
egress:
  - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
            - 169.254.0.0/16
      - ipBlock:
          cidr: ::/0
          except:
            - fd00:ec2::/32
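The cidr/except semantics can be checked offline with Python's ipaddress module: a destination is permitted if it falls inside the cidr but outside every except range. This is a sketch of the matching logic, not how the CNI implements it:

```python
import ipaddress

def ip_block_allows(ip, cidr, excepts=()):
    """True if ip is inside cidr and not inside any except range."""
    addr = ipaddress.ip_address(ip)
    net = ipaddress.ip_network(cidr)
    # Mismatched IP versions never match (guard is explicit for clarity)
    if addr.version != net.version or addr not in net:
        return False
    return not any(addr in ipaddress.ip_network(e) for e in excepts)

# The IPv4 rule from the policy above:
print(ip_block_allows("93.184.216.34", "0.0.0.0/0", ["169.254.0.0/16"]))
# -> True (ordinary internet destination)
print(ip_block_allows("169.254.169.254", "0.0.0.0/0", ["169.254.0.0/16"]))
# -> False (metadata endpoint is inside the excepted link-local range)
# The IPv6 rule:
print(ip_block_allows("fd00:ec2::254", "::/0", ["fd00:ec2::/32"]))
# -> False (IPv6 metadata endpoint is excepted)
```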
Targeted approach: If some pods legitimately need metadata access (like monitoring agents), use a targeted policy instead of a blanket block:
spec:
  podSelector:
    matchLabels:
      network-policy/block-metadata: "true"
Then label your web-facing pods: kubectl label pod my-app network-policy/block-metadata=true.
Verify the block is working:
kubectl exec <pod> -- wget -q -O- --timeout=2 http://169.254.169.254/ 2>&1
# Should fail or timeout
Namespace Isolation Strategies
Namespaces in Kubernetes are logical boundaries, not security boundaries. Without network policies, a compromised pod in dev can freely communicate with production databases in prod. Namespace isolation fixes this.
The strategy is:
- Default deny all ingress in each namespace
- Allow intra-namespace communication (pods within the same namespace)
- Allow ingress from specific trusted namespaces (ingress controller, monitoring)
- Control egress separately (DNS, external services)
Step 1: Default deny ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
Step 2: Allow same-namespace communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: production
Step 3: Allow ingress controller traffic
Your ingress controller (typically nginx-ingress or Traefik) runs in its own namespace and needs to route traffic to your application pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
  namespace: production
spec:
  podSelector:
    matchLabels:
      network-policy/allow-ingress: "true"
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
          podSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
        - protocol: TCP
          port: 8443
Step 4: Allow monitoring (Prometheus)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
          podSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
      ports:
        - protocol: TCP
          port: 9090
Egress Control: DNS, HTTPS, and Database Access
Egress control is just as important as ingress control. Without egress restrictions, a compromised pod can:
- Connect to attacker-controlled C2 (Command and Control) servers
- Exfiltrate sensitive data to external endpoints
- Download additional malware or crypto miners
- Scan internal networks for lateral movement
The approach is defense-in-depth: deny all egress first, then explicitly allow DNS, external HTTPS to approved CIDRs, and database access to specific pods.
Allow HTTPS to external APIs:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-https-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      network-policy/allow-external-https: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 52.94.0.0/16    # AWS services
        - ipBlock:
            cidr: 35.190.0.0/16   # GCP services
      ports:
        - protocol: TCP
          port: 443
Never allow 0.0.0.0/0 on port 443 — always restrict to specific CIDR ranges that your application actually needs.
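A quick pre-deploy check for this anti-pattern can be scripted. The sketch below walks a parsed policy (a plain dict, e.g. from yaml.safe_load); field names follow the NetworkPolicy API, but the function name and the strictness of the check are my own choices:

```python
def has_open_https_egress(policy):
    """Flag egress rules that allow 0.0.0.0/0 (or ::/0) on port 443."""
    for rule in policy.get("spec", {}).get("egress", []):
        open_cidr = any(
            peer.get("ipBlock", {}).get("cidr") in ("0.0.0.0/0", "::/0")
            for peer in rule.get("to", [])
        )
        ports = rule.get("ports")
        # A rule with no ports list allows ALL ports, 443 included
        https = ports is None or any(p.get("port") == 443 for p in ports)
        if open_cidr and https:
            return True
    return False

bad = {"spec": {"egress": [{
    "to": [{"ipBlock": {"cidr": "0.0.0.0/0"}}],
    "ports": [{"protocol": "TCP", "port": 443}],
}]}}
good = {"spec": {"egress": [{
    "to": [{"ipBlock": {"cidr": "52.94.0.0/16"}}],
    "ports": [{"protocol": "TCP", "port": 443}],
}]}}
print(has_open_https_egress(bad))   # -> True
print(has_open_https_egress(good))  # -> False
```

A check like this fits naturally into a CI step that lints every manifest before kubectl apply.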
Allow database access (PostgreSQL, Redis):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-database-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      network-policy/allow-database: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432
    - to:
        - podSelector:
            matchLabels:
              app: redis
      ports:
        - protocol: TCP
          port: 6379
Only backend pods with the network-policy/allow-database: "true" label can reach the database. Frontend pods are blocked entirely.
Complete 3-Tier Architecture Example
Let’s put it all together with a real-world 3-tier architecture: frontend, backend API, and database with Redis cache.
   Internet
       |
       v
+----------+      +-----------+      +--------------+
| Frontend |----->|  Backend  |----->|   Database   |
| (public) | 8080 |   (API)   | 5432 | (restricted) |
+----------+      +-----------+      +--------------+
     ^                  |
     |                  v
  Ingress          +---------+
  Controller       |  Redis  |
                   | (cache) |
                   +---------+
Traffic rules:
- Frontend: Receives traffic from the internet via ingress controller, sends requests to backend on port 8080
- Backend: Accepts requests only from frontend, connects to database (5432) and Redis (6379)
- Database: Accepts connections only from backend, no outbound traffic (prevents data exfiltration)
- Redis: Accepts connections only from backend, no outbound traffic
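Before writing the YAML, it can help to encode the intended flow matrix and assert its invariants. This is purely an illustrative self-check of the traffic rules above (tier names match the labels used in this example):

```python
# Intended traffic matrix for the 3-tier example: (source, dest, port).
# Anything not listed here should be blocked by the policies.
ALLOWED_FLOWS = {
    ("ingress-controller", "frontend", 443),
    ("frontend", "backend", 8080),
    ("backend", "database", 5432),
    ("backend", "redis", 6379),
}

def is_allowed(src, dst, port):
    return (src, dst, port) in ALLOWED_FLOWS

# Invariants the policies must enforce:
print(is_allowed("frontend", "backend", 8080))   # -> True
print(is_allowed("frontend", "database", 5432))  # -> False: no tier skipping
print(is_allowed("database", "backend", 8080))   # -> False: DB never dials out
```

Keeping this matrix in the repo next to the manifests gives reviewers a one-screen summary of what the policy set is supposed to do.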
Start with default deny:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: app-production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
Frontend policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-network-policy
  namespace: app-production
spec:
  podSelector:
    matchLabels:
      app: frontend
      tier: frontend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Accept traffic on 80/443 from any source (an empty "from" list
    # matches all peers); internet traffic arrives via the ingress controller
    - from: []
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
  egress:
    # Allow outbound to backend API
    - to:
        - podSelector:
            matchLabels:
              app: backend
              tier: api
      ports:
        - protocol: TCP
          port: 8080
    # Allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Backend policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-network-policy
  namespace: app-production
spec:
  podSelector:
    matchLabels:
      app: backend
      tier: api
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only accept traffic from frontend
    - from:
        - podSelector:
            matchLabels:
              app: frontend
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    # Database access
    - to:
        - podSelector:
            matchLabels:
              app: database
              tier: database
      ports:
        - protocol: TCP
          port: 5432
    # Redis access
    - to:
        - podSelector:
            matchLabels:
              app: redis
              tier: cache
      ports:
        - protocol: TCP
          port: 6379
    # DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
Database policy (most restrictive):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-network-policy
  namespace: app-production
spec:
  podSelector:
    matchLabels:
      app: database
      tier: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only backend can reach the database
    - from:
        - podSelector:
            matchLabels:
              app: backend
              tier: api
      ports:
        - protocol: TCP
          port: 5432
  egress:
    # DNS only -- databases should never initiate outbound connections
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
The database tier is the most locked-down. No outbound connections are allowed except DNS. This prevents data exfiltration, reverse shell connections, and DNS tunneling to external servers.
Zero Trust Network Architecture in Kubernetes
Zero trust in Kubernetes means three things:
- Never trust, always verify — every connection must be explicitly allowed
- Least privilege — pods get access only to the services they need
- Assume breach — design your network so a compromised pod has minimal blast radius
The network policies above implement layers 1 and 2. For layer 3, consider adding:
Mutual TLS (mTLS) with a service mesh:
# Istio PeerAuthentication
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
mTLS encrypts all traffic between pods and verifies identity, preventing man-in-the-middle attacks even within the cluster.
Network policy visualization: Use tools like Cilium Hubble or Calico Enterprise to visualize actual traffic flows and verify your policies match expected behavior.
Testing and Troubleshooting Network Policies
Network policies can be tricky to debug because they fail silently — traffic is simply dropped with no error message.
Test connectivity between pods:
# From a frontend pod, try to reach the backend
kubectl exec frontend-pod -- wget -q -O- --timeout=5 http://backend-service:8080/health
# From a frontend pod, try to reach the database directly (should fail)
kubectl exec frontend-pod -- nc -zv -w 5 database-service 5432
Verify policies are applied:
# List all network policies
kubectl get netpol -n production
# Describe a specific policy to see its rules
kubectl describe netpol frontend-network-policy -n production
Common issues and fixes:
- DNS not working after default deny — You forgot to add the DNS allow policy. Apply the allow-dns policy shown above.
- Ingress controller can’t reach pods — Your ingress controller namespace needs an explicit allow rule. Check the controller’s namespace label: kubectl get ns ingress-nginx --show-labels.
- Monitoring/Prometheus can’t scrape — Add an allow policy for the monitoring namespace on the metrics port.
- Policy seems to have no effect — Your CNI might not support network policies. Verify: kubectl get pods -n kube-system | grep -E 'calico|cilium'.
- Pods in the same namespace can’t communicate — After default deny, you need an explicit same-namespace allow policy using namespaceSelector.
Policy testing workflow:
# 1. Apply policies in audit mode first (if using Cilium)
# 2. Test all expected traffic flows
# 3. Check for unexpected blocks in CNI logs
# 4. Switch to enforce mode
# 5. Verify again
# Quick verification script
echo "Testing DNS..."
kubectl exec test-pod -- nslookup kubernetes.default
echo "Testing backend connectivity..."
kubectl exec frontend-pod -- wget -q -O- --timeout=5 http://backend:8080/health
echo "Testing database isolation..."
kubectl exec frontend-pod -- nc -zv -w 2 database 5432 && echo "FAIL: frontend can reach DB" || echo "PASS: frontend blocked from DB"
Putting It Into Practice
Network policies are the most impactful and underutilized security control in Kubernetes. Start with default deny, add DNS, block cloud metadata, and build from there. The key principle is simple: if a pod doesn’t need to talk to something, it shouldn’t be able to.
The templates referenced in this guide — default deny, DNS allow, cloud metadata protection, namespace isolation, egress control, and the complete 3-tier architecture — are all included in the K8s Security Pro template pack along with 14 other production-ready security templates, a Helm chart, and Kustomize overlays for multi-environment deployment.
Start with the free K8s Security Quick-Start Kit to get the checklist and 5 essential templates, including the default deny and DNS allow policies covered in this guide.
Related Templates
Implement what you’ve learned with these production-ready YAML templates:
- Template 01: Default Deny Network Policy — Zero-trust baseline that blocks all ingress and egress traffic by default.
- Template 08: Allow DNS Network Policy — Restores DNS resolution to CoreDNS after applying default deny.
- Template 13: Block Cloud Metadata — Prevents SSRF-based credential theft from the cloud metadata endpoint.
- Template 14: Namespace Isolation — Complete namespace isolation with layered ingress and egress controls.
- Template 15: Egress Allow Rules — Whitelist-only egress model with DNS, HTTPS, and database policies.
- Template 20: Complete 3-Tier Network Policy Set — Production-ready zero-trust policies for a full 3-tier architecture.
Related Articles
- Kubernetes Pod Security Standards: From PSP to PSS Migration Guide — Enforce pod-level security alongside your network policies for defense in depth.
- Kubernetes RBAC Best Practices: Least Privilege Done Right — Complement network isolation with proper access control on the Kubernetes API.
Get the Free K8s Security Quick-Start Kit
Join 500+ engineers. Get 5 essential templates + audit checklist highlights delivered to your inbox.
No spam. Unsubscribe anytime.
Secure Your Kubernetes Clusters
Get the complete 50-point audit checklist and 20+ production-ready YAML templates.
View Pricing Plans