Every few months, someone publishes a hot take about leaving the cloud. Then someone else publishes a rebuttal about why the cloud is still the answer. Both sides cherry-pick numbers, both sides are partly right, and nobody helps you make the actual decision for your specific situation.
We run our own AS, our own BGP routing, and our own hardware across multiple data centers. We also manage AWS environments for clients in ten countries. We see both invoices, every month. Here's what we've learned about when each option makes financial and operational sense.
The conversation everyone gets wrong
The AWS-vs-own-servers debate usually plays out like this: someone compares the monthly price of an EC2 instance to a Hetzner dedicated server and concludes that AWS is five times more expensive. Then an AWS advocate points out that you're not accounting for staff costs, redundancy, networking, and all the stuff you get "for free" with a managed cloud.
Both arguments are incomplete. The real answer depends on three things: how predictable your workload is, how much data you move, and how much operational complexity you're willing to own.
If you want the shortcut answer: the break-even point where own infrastructure becomes cheaper than AWS, even after accounting for management overhead, is roughly $1,500/month in AWS spend. Below that, AWS is usually the pragmatic choice. Above that, you're leaving money on the table every month you don't run the numbers.
The rest of this post is those numbers.
The hidden line items
AWS pricing is straightforward if you're running a single EC2 instance with an EBS volume. The moment you build anything resembling a production environment, the bill develops hidden compartments.
Data transfer
This is where AWS makes real money. Outbound data transfer (egress) starts at $0.09/GB. That sounds cheap until you do the math on a service that transfers 10 TB per month. That's roughly $900/month in bandwidth alone, before you've paid for a single compute hour.
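If you want to sanity-check that figure against your own traffic, the arithmetic is trivial. A quick sketch in Python, using the flat $0.09/GB headline rate (real pricing is tiered, with the first 100 GB free and small discounts above 10 TB, so treat this as an upper bound):

```python
# Back-of-the-envelope AWS egress estimate.
# Uses the flat $0.09/GB headline rate; actual pricing is tiered,
# so this slightly overstates the bill at higher volumes.
EGRESS_RATE_USD_PER_GB = 0.09

def monthly_egress_cost(tb_per_month: float) -> float:
    gb = tb_per_month * 1000  # decimal TB, which is how bandwidth is usually quoted
    return gb * EGRESS_RATE_USD_PER_GB

print(monthly_egress_cost(10))   # 900.0  (the ~$900/month figure above)
print(monthly_egress_cost(50))   # 4500.0 (the top of a typical dedicated allowance)
```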
On dedicated infrastructure, bandwidth is typically included or flat-rate. Most providers include 20-50 TB of traffic per month in the server price. The delta at scale is enormous.
NAT Gateway
If your instances are in a private subnet (they should be), outbound traffic goes through a NAT Gateway. That's $0.045/hour per AZ, plus $0.045/GB of processed data, on top of the egress fees you're already paying.
A NAT Gateway in each of two availability zones, sitting idle, costs about $65/month. Route 10 TB through them and you're looking at another $450/month in processing charges. This is a fee that doesn't exist on dedicated infrastructure.
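The same back-of-the-envelope works here, using the on-demand rates above and roughly 730 hours in a month:

```python
# NAT Gateway cost sketch, using the rates quoted above:
# $0.045/hour per gateway and $0.045/GB of processed data.
HOURLY = 0.045
PER_GB = 0.045
HOURS_PER_MONTH = 730

def nat_gateway_cost(gateways: int, tb_processed: float) -> float:
    idle = gateways * HOURLY * HOURS_PER_MONTH
    processing = tb_processed * 1000 * PER_GB
    return idle + processing

print(nat_gateway_cost(gateways=2, tb_processed=0))   # ~65.7/month just for existing
print(nat_gateway_cost(gateways=2, tb_processed=10))  # ~515.7/month with 10 TB routed through
```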
Cross-AZ traffic
Running your database in one AZ and your application in another (which AWS best practices recommend for availability) costs $0.01/GB in each direction. It's small per-request, but for chatty applications hitting the database thousands of times per second, it adds up to hundreds of dollars monthly.
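To make "adds up" concrete, here's a rough worked example. The 2,000 queries per second and 4 KB per round trip are illustrative assumptions, not measurements from any particular client:

```python
# Cross-AZ traffic estimate: $0.01/GB charged in each direction,
# so effectively $0.02 per GB moved between the app and the database.
RATE_EACH_WAY = 0.01
SECONDS_PER_MONTH = 30 * 24 * 3600

def cross_az_cost(queries_per_sec: float, kb_per_round_trip: float) -> float:
    gb_per_month = queries_per_sec * kb_per_round_trip * SECONDS_PER_MONTH / 1_000_000
    return gb_per_month * RATE_EACH_WAY * 2

print(round(cross_az_cost(2000, 4)))  # ~415/month for a fairly chatty application
```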
EBS snapshots
Snapshots look cheap at $0.05/GB-month. Then you realize you have 500 GB of production data, daily snapshots with 30-day retention, and incremental storage that's less incremental than you'd expect. Snapshot costs quietly climb to $100-200/month on a typical setup.
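A crude model shows why. The 10% daily change rate below is an assumption; plug in your own churn, and remember that extra volumes and staging environments multiply the result:

```python
# Rough EBS snapshot model: one full copy plus daily incrementals
# kept for the retention window, billed at $0.05/GB-month.
SNAPSHOT_RATE = 0.05

def snapshot_cost(volume_gb: float, daily_change: float, retention_days: int) -> float:
    stored_gb = volume_gb + volume_gb * daily_change * (retention_days - 1)
    return stored_gb * SNAPSHOT_RATE

print(round(snapshot_cost(500, 0.10, 30)))  # ~98/month for a single busy volume
```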
The one nobody tracks
CloudWatch, Elastic IP addresses (and since early 2024, every public IPv4 address is billed hourly, attached or not), S3 request pricing (not just storage, but per-request fees for GET, PUT, LIST), load balancer idle hours. Individually small. Collectively, 15-30% of your total bill in ways that are painful to attribute to specific services.
What things actually cost: three scenarios
These are representative of real client setups we manage. Not exact client data, but close enough to be useful.
Small: A Laravel app serving 10K requests/day
On AWS (t3.medium, RDS db.t3.micro, ALB, S3): Compute, database, load balancer, storage, and transfer. Roughly $150-200/month if you're careful. But that's the optimistic version. Add CloudWatch, snapshots, a staging environment, and it's closer to $250-300/month.
On dedicated infrastructure (single Hetzner-class server): A dedicated server with 32 GB RAM, 6 cores, and 1 TB NVMe runs everything on one box. Database, Redis, nginx, the application. $60-100/month depending on provider (note: Hetzner and OVH both raised prices significantly in early 2026, so check current rates). Add a backup destination and monitoring and you're at $100-150/month.
Winner at this tier: Dedicated, clearly. The workload is small, predictable, and doesn't need auto-scaling. AWS is overkill.
Medium: A SaaS product with 1M requests/day
On AWS (multiple instances, RDS Multi-AZ, ElastiCache, CloudFront, S3): Now you need redundancy. RDS Multi-AZ doubles your database cost. ElastiCache for Redis adds another $150-300/month. CloudFront takes the edge off your egress bill but introduces its own pricing. ALB is handling real traffic. Total: $800-1,500/month, depending heavily on data transfer and storage. Reserved Instances or Savings Plans can knock 25-35% off compute costs if you commit for 1-3 years, which brings the low end closer to $600, but you're locking in spend and the savings don't touch data transfer or managed service fees.
On dedicated infrastructure (2-3 servers, replicated): Two application servers behind a load balancer, one database server with replication to a hot standby. Redis on the app servers. About $250-500/month for the hardware (post-2026 pricing). Add managed monitoring, backups, and DNS failover and you're at $450-700/month.
Winner at this tier: Still dedicated, but the gap is narrowing. The operational complexity of managing replication, failover, and security patches starts to matter. If you have the expertise (or a team like ours handling it), dedicated wins on cost. If you're a four-person startup with no ops experience, the AWS premium buys you time.
A real migration we did: A Laravel SaaS running on AWS (EC2, RDS Multi-AZ, ElastiCache, CloudFront, S3) was spending $3,800/month. We moved them to two dedicated servers with MariaDB replication and Cloudflare in front. Monthly cost dropped to $940 including our management fee. The migration took three weekends and had zero downtime, thanks to a DNS cutover. Their traffic hasn't changed. Their bill has.
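For anyone wondering how quickly a move like that pays for itself, the shape of the calculation matters more than the exact inputs. The one-off migration cost below is a placeholder assumption, not a figure from that project:

```python
# Payback sketch for the migration described above.
OLD_MONTHLY = 3800
NEW_MONTHLY = 940       # including the management fee
MIGRATION_COST = 8000   # placeholder assumption: engineering time plus testing

monthly_savings = OLD_MONTHLY - NEW_MONTHLY              # 2860
payback_months = MIGRATION_COST / monthly_savings        # ~2.8 months
first_year_net = monthly_savings * 12 - MIGRATION_COST   # ~26,320

print(monthly_savings, round(payback_months, 1), first_year_net)
```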
Large: 10M+ requests/day with traffic spikes
On AWS: Auto-scaling groups, Reserved Instances, Savings Plans, spot instances for background jobs, CloudFront, multiple regions. You've hired someone whose full-time job is managing your AWS bill. Total: $3,000-10,000+/month depending on architecture.
On dedicated infrastructure: You're running a rack or a partial rack. Colocation or multiple dedicated servers across providers. Load balancing, database clustering, CDN (you'd still use Cloudflare or similar). Hardware: $800-2,000/month. But you need someone who knows BGP, or at least someone who knows their way around a firewall config. Operational cost is real.
Winner at this tier: Hybrid. Dedicated for baseline compute, cloud for the spikes. A typical split at this scale: $800-1,200/month in dedicated hardware handling 80% of steady-state traffic, $300-800/month in AWS for auto-scaling overflow and managed services you can't easily replicate (SQS, DynamoDB), plus $400-1,000/month in management and monitoring. Total: $1,500-3,000/month. That's half to a third of what pure AWS costs at the same traffic level. This is what we run ourselves and what we set up for most clients at this scale.
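Using rough midpoints of those ranges (illustrative only; the right split depends on how spiky your traffic actually is):

```python
# Illustrative hybrid total versus pure AWS at the same traffic level.
dedicated_baseline = 1000   # handles ~80% of steady-state traffic
cloud_overflow     = 550    # auto-scaling burst plus managed services (SQS, DynamoDB)
management         = 700    # monitoring, patching, on-call

hybrid_total = dedicated_baseline + cloud_overflow + management
pure_aws_low, pure_aws_high = 3000, 10000

print(hybrid_total)                        # 2250
print(round(pure_aws_low / hybrid_total, 1))   # ~1.3x cheaper than the cheapest AWS case
print(round(pure_aws_high / hybrid_total, 1))  # ~4.4x cheaper than the high end
```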
When AWS wins outright
Let's be honest about where cloud is the right answer:
Unpredictable traffic. If you're a startup that might get featured on Hacker News tomorrow or a seasonal business with 10x traffic spikes, paying for elasticity makes sense. Over-provisioning dedicated hardware for peak traffic that happens twice a year is waste.
Managed services you'd struggle to replicate. DynamoDB, SQS, Lambda, Cognito. If your architecture depends on services that don't have good self-hosted equivalents, the cloud premium is the cost of not building it yourself. That's often a reasonable trade.
Global presence. If you need low-latency access across five continents, building your own global infrastructure is a multimillion-dollar project. AWS regions and CloudFront solve this for a fraction of the cost.
Short-lived projects. Need 50 servers for a load test this week? Spin them up, run the test, tear them down. Dedicated infrastructure doesn't do that.
You have zero ops capability. If nobody on your team can SSH into a server and debug a crashed process, AWS's managed services and console are genuinely valuable. The premium is worth not hiring a sysadmin.
When own infrastructure wins outright
Predictable, sustained compute. If your traffic graph is a flat line (or a gently rising one), you're paying AWS a premium for elasticity you'll never use. Dedicated servers are 2-4x cheaper for the same specs when your utilization is consistently above 50%.
Storage-heavy workloads. EBS is expensive. S3 is cheap for storage but expensive for access patterns with lots of small requests. If you're sitting on 10+ TB of hot data, dedicated NVMe storage costs a fraction of what AWS charges.
Data sovereignty. GDPR compliance with actual EU data residency (not just an AWS region label) is simpler when you know exactly which rack your data is on. We run our own hardware in European data centers for clients where this matters.
Bandwidth. Once you're pushing 10+ TB per month outbound, the egress math kills AWS. Dedicated hosting typically includes 20-50 TB of traffic. On AWS, that same bandwidth costs $900-4,500/month in transfer fees alone.
The risk nobody prices in
Own infrastructure has a failure mode that cloud doesn't: single-provider risk. In March 2021, a fire at OVH's Strasbourg data center destroyed SBG2 entirely and damaged SBG1. Customers who had their servers and backups in the same facility lost everything.
AWS isn't immune to outages (us-east-1 has had several memorable ones), but their blast radius is typically smaller and recovery is faster because the infrastructure is distributed by design.
The answer isn't "therefore use AWS." The answer is: don't put everything with one provider. We run servers across multiple data centers and multiple providers specifically because of this. Backups go to a different provider than the primary servers. If one facility has a bad day, the recovery path doesn't depend on that same facility coming back online.
This is operational complexity that AWS handles for you implicitly. On your own infrastructure, you have to design for it explicitly. Factor that into the cost comparison.
The option nobody talks about: hybrid
Most of the "cloud vs. own servers" discourse pretends you have to pick one. You don't.
The pattern we set up most often for mid-sized clients: dedicated servers for the baseline workload (application servers, databases, Redis), with a CDN in front for static assets and a cloud provider for overflow capacity. The database never touches AWS (too expensive, and you lose control over performance tuning). The compute layer can burst to cloud during spikes.
This isn't exotic. It's just sensible infrastructure. The baseline is cheap and fast on dedicated metal. The spikes are absorbed by auto-scaling groups that spin up on demand. You pay cloud prices only for the variable part of your traffic.
The cost nobody puts in the spreadsheet
Operational overhead. The elephant in every infrastructure comparison.
AWS reduces operational work for small teams. No hardware failures. No kernel updates (on managed services). No disk replacements. That has real value when your team is five people and none of them want to be on call.
Own infrastructure means someone needs to handle OS patching, security updates, hardware failures, network issues, and monitoring. If you're doing this yourself, budget 10-20 hours per month of sysadmin time per cluster. If you're outsourcing it (to a team like ours, for instance), that's a predictable monthly cost that's still usually cheaper than the AWS premium.
What to do right now
If you're spending less than $500/month on AWS and your traffic is stable, you're probably fine where you are. The savings from moving don't justify the migration effort.
If you're spending $1,000-3,000/month on AWS with predictable traffic, run the numbers on dedicated servers. Include monitoring, backups, and either your own time or managed services in the comparison. The savings are likely 40-60%.
If you're spending $5,000+/month, you should have already done this analysis. If you haven't, the delta is probably large enough to fund the migration itself within a few months.
In all cases, check your data transfer bill separately. It's usually the first thing that makes the case for dedicated infrastructure, and it's the line item most people never look at.
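If you want to do a first pass yourself, the spreadsheet has a simple shape. A minimal sketch with placeholder figures; swap in your actual bill and provider quotes:

```python
# Minimal dedicated-vs-AWS comparison. All defaults are placeholders.
# Don't skip the line items people forget: data transfer, sysadmin hours,
# backups, and monitoring.
def monthly_dedicated_cost(servers: float, backups: float, monitoring: float,
                           sysadmin_hours: float, hourly_rate: float) -> float:
    return servers + backups + monitoring + sysadmin_hours * hourly_rate

aws_bill = 2500            # from Cost Explorer, including data transfer
dedicated = monthly_dedicated_cost(
    servers=400,           # 2-3 dedicated machines
    backups=60,
    monitoring=40,
    sysadmin_hours=12,     # within the 10-20 h/month estimate above
    hourly_rate=60,
)

savings = aws_bill - dedicated
print(dedicated, savings, f"{savings / aws_bill:.0%}")  # 1220 1280 51%
```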
Send us your AWS bill
Seriously. We'll look at it for free and tell you what you'd save on dedicated infrastructure, what you should keep on AWS, and whether a migration is worth the effort. If the answer is "stay where you are," we'll say that.
30 minutes, free, no pitch. You'll talk to the engineers who run both sides of this equation every day.