Every month the same thing happens. The AWS bill arrives. Someone opens Cost Explorer, looks at the total, compares it to last month. If it went up, there’s a brief moment of concern. If it stayed roughly the same, everyone moves on. Nobody digs deeper because there’s always something more urgent to do.
This works until it doesn’t. Then one month the bill jumps by $400 and nobody can explain why. Or worse, it’s been slowly creeping up by $50/month for six months and nobody noticed because each individual increase looked normal.
I’ve been on both sides of this. Managing my own AWS costs for ScanOrbit, and scanning other people’s accounts where cost waste had been accumulating for months. The pattern is almost always the same: teams know what they’re paying in total, but they don’t know what they’re paying for at a resource level. And the AWS billing dashboard doesn’t really help with that.
What the billing dashboard actually shows you
The AWS Billing and Cost Management console gives you a monthly total, a cost breakdown by service, and Cost Explorer for more detailed analysis. For what it is, it’s fine. You can see that EC2 cost you $1,200 last month and RDS cost $340. You can filter by region, by linked account, by usage type.
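That same service-level breakdown is available from the CLI through the Cost Explorer API, which is handy if you’d rather script the check than click through the console. A sketch, assuming the dates below are placeholders you’d adjust (note that `End` is exclusive, and each Cost Explorer API call carries a small per-request charge):

```shell
# Monthly cost per service for one billing month, highest-level view only.
# Requires ce:GetCostAndUsage permission; dates are placeholders.
aws ce get-cost-and-usage \
  --time-period Start=2024-01-01,End=2024-02-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE \
  --query 'ResultsByTime[0].Groups[].[Keys[0], Metrics.UnblendedCost.Amount]' \
  --output table
```

This gives you the same “EC2 cost $1,200, RDS cost $340” view as the console, in a form you can diff or pipe elsewhere.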
What you can’t easily do is answer the questions that actually matter:
“Which specific EC2 instances are costing the most?” Cost Explorer groups by instance type and region, but it doesn’t show you individual instance IDs unless you enable resource-level data or set up Cost Allocation Tags. Most small teams haven’t done either.
“Is any of this spend wasted?” The billing dashboard has no concept of waste. A $40/month orphaned EBS volume sitting unattached for six months looks identical to a $40/month volume that’s actively serving data. Both show up as EBS charges.
“Why did the bill go up this month?” Cost Explorer has a cost anomaly feature that can catch sudden spikes. But gradual increases (someone leaving a test instance running, a NAT Gateway nobody needed anymore) don’t trigger anomaly alerts, because each month looks only slightly different from the last.
For a team of 3-5 engineers with no dedicated FinOps person, this is the reality. You have enough information to know the total but not enough to make decisions about specific resources.
Tagging: the thing everyone knows they should do
Ask any AWS consultant how to improve cost visibility and the first answer is always tagging. Tag your resources with Environment, Team, Project, Owner. Then use Cost Allocation Tags to make those tags appear in your billing data. Then you can group costs by team or project or environment.
They’re right. It works. I’m not going to pretend tagging isn’t the correct answer.
But I will say that in practice, tagging discipline in small teams is terrible. Not because people are lazy. Because when you’re moving fast with three engineers, nobody wants to be the person who blocks a deploy because someone forgot to add an Owner tag to a security group. Enforcement feels like overhead. So you start with good intentions, tag everything for two weeks, then gradually stop because there’s no automated enforcement and nobody notices when a resource goes untagged.
Six months later, half your resources have tags and half don’t. The tagged half gives you nice Cost Explorer breakdowns. The untagged half is a black box. And the untagged half is usually where the waste is, because those are the resources that got created in a hurry or through the console without going through the normal process.
If you’re going to invest in tagging (and you should, eventually), pair it with enforcement. AWS Organizations Tag Policies can require specific tags on resource creation. SCPs can deny actions that don’t include required tags. Without enforcement, tagging decays. Every single time.
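Enforcement can start as small as a single deny rule. A hedged sketch using an AWS Organizations service control policy that refuses `ec2:RunInstances` when no Owner tag is supplied (the policy name and the Owner tag key are assumptions; adapt them to your own tag scheme, and note SCPs must be enabled for your organization):

```shell
# Hypothetical SCP: deny launching EC2 instances without an Owner tag.
# Assumes AWS Organizations with SCPs enabled.
aws organizations create-policy \
  --name require-owner-tag \
  --type SERVICE_CONTROL_POLICY \
  --description "Deny ec2:RunInstances without an Owner tag" \
  --content '{
    "Version": "2012-10-17",
    "Statement": [{
      "Sid": "DenyRunInstancesWithoutOwnerTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {"Null": {"aws:RequestTag/Owner": "true"}}
    }]
  }'
# Then attach the returned policy ID to a root or OU:
# aws organizations attach-policy --policy-id p-xxxxxxxx --target-id ou-xxxx-xxxxxxxx
```

The `Null` condition fires when the tag key is absent from the request, which is exactly the “someone forgot the tag” case.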
What actually helps at the small-team scale
Here’s what I’ve seen work for teams of 2-10 people who don’t have a FinOps practice and aren’t going to build one.
Monthly cost review ritual
Fifteen minutes, once a month, right after the bill comes in. One person opens Cost Explorer, looks at the top 5 services by cost, and compares to the previous month. If anything went up by more than 10%, they spend five minutes figuring out why.
That’s it. No elaborate process. No spreadsheet. Just a quick check that catches the obvious stuff before it compounds. You’d be amazed how many teams don’t even do this.
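The review itself can be semi-scripted so nobody has to click through the console. A sketch that dumps the last two full months side by side, grouped by service, so the 10% eyeball check takes a minute (dates are placeholders; `End` is exclusive):

```shell
# Two full billing months grouped by service; scan the output for anything
# that moved more than ~10% month over month.
aws ce get-cost-and-usage \
  --time-period Start=2024-01-01,End=2024-03-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE \
  --query 'ResultsByTime[].{Month: TimePeriod.Start, Costs: Groups[].[Keys[0], Metrics.UnblendedCost.Amount]}' \
  --output json
```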
AWS Cost Anomaly Detection
Free to enable. It uses machine learning (AWS’s phrase, not mine) to detect unusual spending patterns and sends you an alert. It’s surprisingly good at catching sudden spikes. Less useful for slow drift, but it runs automatically and costs nothing, so there’s no reason not to turn it on.
Set up an alert that emails whoever pays attention to infrastructure. Even if you ignore half the alerts, the one you don’t ignore will pay for the five minutes it took to set up.
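Both the monitor and the email subscription can be created from the CLI in one sitting. A sketch, assuming a single account monitored across all services (the names, the email address, and the $50 alert threshold are placeholders):

```shell
# Step 1: a dimensional monitor that watches per-service spend.
aws ce create-anomaly-monitor \
  --anomaly-monitor '{"MonitorName": "all-services", "MonitorType": "DIMENSIONAL", "MonitorDimension": "SERVICE"}'

# Step 2: a daily email digest for anomalies with at least $50 of impact.
# Substitute the monitor ARN returned by the previous command.
aws ce create-anomaly-subscription \
  --anomaly-subscription '{
    "SubscriptionName": "infra-alerts",
    "MonitorArnList": ["<monitor-arn-from-previous-command>"],
    "Subscribers": [{"Type": "EMAIL", "Address": "infra@example.com"}],
    "Frequency": "DAILY",
    "ThresholdExpression": {"Dimensions": {"Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE", "MatchOptions": ["GREATER_THAN_OR_EQUAL"], "Values": ["50"]}}
  }'
```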
Budget alerts
Create an AWS Budget for your total monthly spend. Set alert thresholds at 80% and 100% of your expected cost. This takes about two minutes and gives you an early warning before the bill surprises you at month end.
You can also create per-service budgets if you know roughly what each service should cost. RDS shouldn’t be more than $400? Set a budget. If it goes over, you’ll know before the end of the month.
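Creating the whole-account budget from the CLI looks roughly like this (the account ID, the $2,000 limit, and the email address are placeholders):

```shell
# Monthly cost budget with actual-spend alerts at 80% and 100%.
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{
    "BudgetName": "monthly-total",
    "BudgetLimit": {"Amount": "2000", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
  }' \
  --notifications-with-subscribers '[
    {"Notification": {"NotificationType": "ACTUAL", "ComparisonOperator": "GREATER_THAN", "Threshold": 80, "ThresholdType": "PERCENTAGE"},
     "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "infra@example.com"}]},
    {"Notification": {"NotificationType": "ACTUAL", "ComparisonOperator": "GREATER_THAN", "Threshold": 100, "ThresholdType": "PERCENTAGE"},
     "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "infra@example.com"}]}
  ]'
```

Per-service budgets use the same command with a `CostFilters` entry on the budget object, one budget per service you want to fence in.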
The unused resource sweep
Once a quarter, run through the unused resource checklist. Check for unattached EBS volumes, unused Elastic IPs, idle load balancers, and NAT Gateways connected to nothing. This is where the silent waste lives.
The CLI checks take about 20 minutes to run across all regions. In my experience, teams find $50-300/month in waste on the first sweep. Not life-changing, but that’s $600-3,600/year that was going nowhere.
If you’re running multiple accounts, multiply the time by the number of accounts. This is where manual approaches start to feel like a part-time job.
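A minimal version of the sweep, as a read-only loop over every region, covering two of the checks (unattached volumes and unassociated Elastic IPs; the other checks follow the same pattern):

```shell
# Quarterly waste sweep sketch. Read-only: nothing is modified or deleted.
for region in $(aws ec2 describe-regions --query 'Regions[].RegionName' --output text); do
  echo "== $region =="
  # EBS volumes with status "available" are attached to nothing but still billed.
  aws ec2 describe-volumes --region "$region" \
    --filters Name=status,Values=available \
    --query 'Volumes[].[VolumeId, Size, CreateTime]' --output text
  # Elastic IPs with no association are idle and incur an hourly charge.
  aws ec2 describe-addresses --region "$region" \
    --query 'Addresses[?AssociationId==null].[PublicIp, AllocationId]' --output text
done
```

For multiple accounts, wrap the loop in a second loop over `--profile` values, which is exactly the point where this starts to feel like a part-time job.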
Resource-level cost attribution (without perfect tagging)
Even without comprehensive tagging, you can get resource-level costs for the biggest offenders. EC2 instances, RDS instances, and NAT Gateways have resource-level billing data available through Cost Explorer if you group by resource ID.
Go to Cost Explorer, set the service filter to EC2-Instances, group by Resource, and sort by cost descending. (Resource-level granularity has to be enabled in Cost Explorer preferences, and it typically only covers the last 14 days.) You’ll see exactly which instances are costing the most. Do the same for RDS and any other service that’s a significant line item.
This won’t cover everything (EBS volumes grouped by resource ID require tags or separate API queries), but it covers the biggest items without any tagging setup.
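For scripting, the same view is exposed through the `get-cost-and-usage-with-resources` API. A sketch for EC2 (the dates are placeholders; resource-level data must be enabled in Cost Explorer preferences first, the call requires a service filter, and it only looks back 14 days):

```shell
# Daily cost per EC2 instance ID over a recent window (max 14 days back).
aws ce get-cost-and-usage-with-resources \
  --time-period Start=2024-01-01,End=2024-01-14 \
  --granularity DAILY \
  --metrics UnblendedCost \
  --filter '{"Dimensions": {"Key": "SERVICE", "Values": ["Amazon Elastic Compute Cloud - Compute"]}}' \
  --group-by Type=DIMENSION,Key=RESOURCE_ID \
  --output json
```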
What doesn’t work at this scale
A few things I’ve seen small teams try that usually don’t stick.
Full FinOps platforms. CloudHealth, Spot.io, Kubecost, Vantage. These are real products that solve real problems, but they’re built for teams with dedicated FinOps practices or at least someone who owns cloud costs as part of their role. A three-person team that sets up CloudHealth will use it for a month, then stop logging in because nobody has time to maintain it. The ROI needs a certain account size to justify the cost and attention.
Custom dashboards. Someone spends a weekend building a Grafana dashboard that pulls CloudWatch metrics and cost data. It looks great on Monday. By Friday nobody’s checking it. By next month the data source breaks because someone changed an IAM policy. I’ve built these dashboards myself. They die every time.
Spreadsheet tracking. Manually copying cost numbers into a spreadsheet each month. Works for about three months. Then someone forgets. Then the spreadsheet format changes. Then nobody trusts the data.
The common thread: anything that requires sustained manual effort from a small team eventually gets deprioritized. The approaches that work are either automated or so quick that they don’t compete with feature work for attention.
Automated cost visibility
The step beyond manual checks is having something that scans your account and tells you what’s wasting money without you asking.
AWS Trusted Advisor does some of this, but only on Business or Enterprise support plans ($100+/month minimum). The free tier checks are limited to a handful of items.
ScanOrbit scans all regions and surfaces cost waste alongside security and compliance findings. It connects through a read-only IAM role, so there’s nothing to install or maintain. It finds orphaned volumes, unused IPs, idle load balancers, stopped instances with attached storage, and other waste patterns automatically.
I built it because I wanted something between “run CLI scripts manually every month” and “subscribe to a full FinOps platform.” Something a solo engineer or a small team could set up in five minutes and get immediate value from. The free scan shows you how many cost findings you have and their severity, so you can see if it’s worth digging deeper. Full disclosure: this is my product.
Whatever you use, the principle is the same. Small teams need cost visibility that’s either automated or fast enough to do in 15 minutes. Anything that requires a dedicated workflow will lose to the next feature request. Every time.
Part of our series on AWS infrastructure for small teams. See also: How to Find Unused Resources and Cut Costs, AWS Account Audit Checklist for Solo Engineers, and AWS Multi-Account Visibility for CTOs.