Running AI workloads in a hybrid fashion — in your data center and in the cloud — requires sophisticated, global networks that unify cloud and on-premises resources. While Google’s Cloud WAN provides the necessary unified network fabric to connect VPCs, data centers, and specialized hardware, this very interconnectedness exposes a critical, foundational challenge: IP address scarcity and overlapping subnets. As enterprises unify their private and cloud environments, manually resolving these pervasive address conflicts becomes a significant operational burden.
Resolving IPv4 address conflicts has been a longstanding challenge in networking. And now, with a growing number of IP-intensive workloads and applications, customers face the crucial question of how to ensure sufficient IP addresses for their deployments.
Google Cloud offers various solutions to address private IP address challenges and facilitate communication between non-routable networks, including Private Service Connect (PSC), IPv6 addressing, and network address translation (NAT) appliances. In this post, we focus on private NAT, a feature of the Cloud NAT service. This managed service simplifies private-to-private communication, allowing networks with overlapping IP spaces to connect without complex routing or managing proprietary NAT infrastructure.
Getting to know private NAT
Private NAT allows your Google Cloud resources to connect to other VPC networks or to on-premises networks with overlapping and/or non-routable subnets, without requiring you to manage any virtual machines or appliances.
Here are some of the key benefits of private NAT:
- A managed service: As a fully managed service, private NAT minimizes the operational burden of managing and scaling your own NAT gateways. Google Cloud handles the underlying infrastructure, so you can focus on your applications.
- Simplified management: Private NAT simplifies network architecture by providing a centralized and straightforward way to manage private-to-private communication — across workloads and traffic paths.
- High availability: Being a distributed service, private NAT offers high availability, VM-to-VM line-rate performance, and resiliency, all without having to over-provision costly, redundant infrastructure.
- Scalability: Private NAT is designed to scale automatically with your needs, supporting a large number of NAT IP addresses and concurrent connections.
Figure: Cloud NAT options
Common use cases
Private NAT provides critical address translation for the most complex hybrid and multi-VPC networking challenges.
Unifying global networks with Network Connectivity Center
For organizations that use Network Connectivity Center to establish a central connectivity hub, private NAT offers the essential mechanism for linking networks that have overlapping “non-routable” IP address ranges. This solution facilitates two primary scenarios:
- VPC spoke-to-spoke: Facilitates seamless private-to-private communication between distinct VPC networks (spokes) with overlapping subnets.
- VPC-to-hybrid-spoke: Enables connectivity between a cloud VPC and an on-premises network (a hybrid spoke) connected via Cloud Interconnect or Cloud VPN. Learn more here.
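As an illustration, an NCC hub with two VPC spokes can be set up with a few gcloud commands. This is a hedged sketch: the hub, spoke, and network names are hypothetical, and the exact flags may vary by gcloud version.

```shell
# Create a Network Connectivity Center hub (names are hypothetical).
gcloud network-connectivity hubs create demo-hub

# Attach two VPCs as spokes, even if their subnets overlap.
gcloud network-connectivity spokes linked-vpc-network create spoke-a \
    --hub=demo-hub \
    --vpc-network=vpc-a \
    --global

gcloud network-connectivity spokes linked-vpc-network create spoke-b \
    --hub=demo-hub \
    --vpc-network=vpc-b \
    --global
```

With the hub in place, a private NAT gateway in each VPC (created with `--type=PRIVATE` and a NAT rule matching traffic toward the hub) translates the overlapping source ranges before spoke-to-spoke traffic is exchanged.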
Figure: Private NAT with Network Connectivity Center
Enabling local hybrid connectivity in shared VPC
Organizations with shared VPC architectures can establish connectivity from non-routable or overlapping network subnets to their local Cloud Interconnects or Cloud VPN tunnels. A single private NAT gateway can manage destination routes for all workloads within the VPC.
“Thanks to private NAT, we effortlessly connected our Orange on-prem data center with the Masmovil GCP environment, even with IP address overlaps after our joint venture. This was crucial for business continuity, as it allowed us to enable communications without altering our existing environment.” – Pedro Sanz Martínez, Head of Cloud Platform Engineering, MasOrange
Figure: Enabling local hybrid connectivity using private NAT
Accommodating Cloud Run and GKE workloads
Dynamic, IP-intensive workloads such as Google Kubernetes Engine (GKE) and Cloud Run often use non-RFC 1918 ranges such as Class E (240.0.0.0/4) to mitigate IPv4 exhaustion. These workloads often need to access resources in an on-premises network or a partner VPC, so the ability for the on-premises network to accept non-RFC 1918 ranges is critical. In most cases, central network teams do not accept non-RFC 1918 address ranges.
You can solve this by applying a private NAT configuration to the non-RFC 1918 subnet. With private NAT, all egress traffic from your Cloud Run service or GKE workloads is translated, allowing it to securely communicate with the destination network despite being on non-routable subnets. Learn about how private NAT works with different workloads here.
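As an illustration, a GKE subnet can keep a small RFC 1918 primary range for nodes while drawing its Pod and Service secondary ranges from Class E space. This is a sketch only; the subnet name, network, and ranges below are hypothetical.

```shell
# Hypothetical subnet: small RFC 1918 primary range for nodes, with
# Pod/Service secondary ranges from Class E (240.0.0.0/4) to conserve
# RFC 1918 address space.
gcloud compute networks subnets create gke-subnet \
    --network=production-vpc \
    --region=us-central1 \
    --range=10.50.0.0/24 \
    --secondary-range=pods=240.10.0.0/16,services=240.20.0.0/20
```

Pairing a subnet like this with a private NAT gateway means egress from Pods toward on-premises or partner networks is translated to routable addresses, so the Class E ranges never leave the VPC.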
Configuration in action: Example setups
Let’s look at how to configure private NAT for one of these use cases using gcloud commands.
Example: connecting to a partner network with overlapping IPs
Scenario: Your production-vpc contains an application subnet (app-subnet-prod, 10.20.0.0/24). You need to connect to a partner’s network over Cloud VPN, but the partner also uses the 10.20.0.0/24 range for the resources you need to access.
Solution: Configure a private NAT gateway to translate traffic from app-subnet-prod before it goes over the VPN tunnel.
1. Create a dedicated subnet for NAT IPs. This subnet’s range is used for translation and must not overlap with the source or destination.
```shell
gcloud compute networks subnets create pnat-subnet-prod \
    --network=production-vpc \
    --range=192.168.1.0/24 \
    --region=us-central1 \
    --purpose=PRIVATE_NAT
```
2. Create a Cloud Router.
```shell
gcloud compute routers create prod-router \
    --network=production-vpc \
    --region=us-central1
```
3. Create a private NAT gateway. This configuration specifies that only traffic from app-subnet-prod to hybrid destinations (matched with `nexthop.is_hybrid`) should be translated, using IPs from the pnat-subnet-prod subnet.
```shell
gcloud compute routers nats create pnat-to-partner \
    --router=prod-router \
    --region=us-central1 \
    --type=PRIVATE \
    --nat-custom-subnet-ip-ranges=app-subnet-prod:ALL

gcloud compute routers nats rules create 1 \
    --router=prod-router \
    --region=us-central1 \
    --nat=pnat-to-partner \
    --match='nexthop.is_hybrid' \
    --source-nat-active-ranges=pnat-subnet-prod
```
Now, any VM in app-subnet-prod that sends traffic to the partner’s overlapping network will have its source IP translated to an address from the 192.168.1.0/24 range, resolving the conflict.
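To confirm the gateway and rule are in place, and to see which translated IPs and port ranges are allocated to each VM, you can use the following read-only commands (output fields vary by gcloud version):

```shell
# Describe the private NAT gateway configuration on the router.
gcloud compute routers nats describe pnat-to-partner \
    --router=prod-router \
    --region=us-central1

# Inspect the NAT IP and port-range mappings assigned to each VM.
gcloud compute routers get-nat-mapping-info prod-router \
    --region=us-central1
```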
Google Cloud’s private NAT elegantly solves the common and complex problem of connecting networks with overlapping IP address spaces. As a fully managed, scalable, and highly available service, it simplifies network architecture, reduces operational overhead, and enables you to build and connect complex hybrid and multi-cloud environments with ease.
Learn more
Ready to get started with private NAT? Check out the official private NAT documentation and tutorials to learn more and start building your own solutions today.
Read more here: https://cloud.google.com/blog/products/networking/using-private-nat-for-networks-with-overlapping-ip-spaces/


