
Snowflake Cost Optimization: how to keep your bill under control

Learn proven strategies data teams use to control Snowflake spend.
Arend Verschueren, Head of Marketing & RevOps

Snowflake cost optimization means actively managing how your team consumes compute, storage, and cloud services — so you pay for value, not waste. Without a deliberate strategy, Snowflake's consumption-based model can turn a flexible, scalable platform into a source of budget surprises.

The good news: most cost overruns are preventable, and the tools to fix them are already built into Snowflake.

Why Snowflake costs get out of hand

Snowflake's pricing model is built on consumption. You pay for what you use — compute (virtual warehouses), storage, and cloud services. Credits are the currency: when a warehouse is running, it burns credits. When it's paused, it doesn't. Simple in theory.

In practice, costs tend to compound quickly. For most organizations, compute is the biggest cost driver — typically accounting for over 80% of total Snowflake spend. Storage and cloud services make up the rest, but they can still surprise you if left unmanaged.

Here's why bills spiral:

  • Idle virtual warehouses left running between workloads, consuming credits around the clock.
  • Oversized compute clusters provisioned for peak load but running at 20% utilization most of the time.
  • Poorly written queries that scan far more data than necessary, sometimes consuming 10x the credits of an optimized equivalent.
  • Serverless feature creep — automatic clustering, materialized views, and Snowpipe all consume credits silently in the background.
  • Over-purchasing credits upfront based on anticipated growth, leading to unused capacity and inflexible contracts.

None of these are inevitable. They're the result of building fast without building smart — which is completely normal, and completely fixable.

How to keep Snowflake costs under control

Step 1 — Understand your cost drivers first

Before you optimize anything, you need to know what you're actually paying for. This is the step most teams skip — and it's why so many Snowflake optimization efforts focus on the wrong places.

Definition: A Snowflake credit is the unit of measure for compute consumption. One credit equals one hour of compute time for an X-Small warehouse. Larger warehouses consume more credits per hour — each size up roughly doubles the rate.

Start with Snowflake's built-in cost management console in the Admin section of the UI. This gives you a high-level breakdown of spend by service type: compute, storage, and serverless. From there, you can dig deeper using the ACCOUNT_USAGE schema — specifically the WAREHOUSE_METERING_HISTORY and QUERY_HISTORY views — to understand which warehouses, workloads, and specific queries are driving the bulk of your spend.

A useful exercise: calculate a cost-per-query metric by joining query execution data with credit consumption, then aggregate by user, warehouse, or workload type. This quickly reveals where your budget is actually going — and where to focus first.
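A sketch of that exercise is below, assuming access to the SNOWFLAKE.ACCOUNT_USAGE schema. It attributes each warehouse-hour's credits to users in proportion to their query execution time — an approximation that ignores idle time and concurrency overlap, but good enough to rank where spend is going.

```sql
-- Approximate credits per user over the last 30 days.
-- Attribution method: share each warehouse-hour's credits by elapsed query time.
WITH q AS (
    SELECT warehouse_name,
           user_name,
           DATE_TRUNC('hour', start_time) AS hour,
           SUM(total_elapsed_time) / 1000 AS exec_seconds   -- ms -> seconds
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
      AND warehouse_name IS NOT NULL
    GROUP BY 1, 2, 3
),
tot AS (
    SELECT warehouse_name, hour, SUM(exec_seconds) AS total_seconds
    FROM q
    GROUP BY 1, 2
),
m AS (
    SELECT warehouse_name,
           DATE_TRUNC('hour', start_time) AS hour,
           credits_used
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
)
SELECT q.user_name,
       q.warehouse_name,
       ROUND(SUM(q.exec_seconds / NULLIF(tot.total_seconds, 0) * m.credits_used), 2)
           AS approx_credits
FROM q
JOIN tot ON tot.warehouse_name = q.warehouse_name AND tot.hour = q.hour
JOIN m   ON m.warehouse_name   = q.warehouse_name AND m.hour   = q.hour
GROUP BY 1, 2
ORDER BY approx_credits DESC;
```

Note that ACCOUNT_USAGE views have up to 45 minutes of latency, so this is a reporting tool, not a real-time one.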

If you're new to Snowflake's architecture and want a grounding in how the storage, compute, and services layers interact before diving into cost management, our guide on what the different Snowflake components are is a good starting point.

Step 2 — Right-size and configure your virtual warehouses

Virtual warehouses are where most of your Snowflake budget goes. Getting warehouse configuration right is the single highest-leverage action you can take.

Choose the right warehouse size

Bigger isn't always more expensive per query — a larger warehouse runs queries faster, which can mean fewer credits consumed overall. But it needs to be the right size for the workload. The only reliable way to find that is to benchmark your actual queries at different sizes and measure credit consumption, not just runtime.

A useful rule of thumb: group similar workloads (ETL, BI queries, ad hoc analytics) onto dedicated warehouses rather than sharing one warehouse across everything. This isolates costs, improves performance, and makes optimization far easier.

Configure auto-suspend and auto-resume

Snowflake's auto-suspend feature pauses a warehouse after a configurable period of inactivity. It's on by default — but the default timeout is often too generous. For most workloads, setting auto-suspend to 60 seconds or less is a safe starting point. Pair this with auto-resume, which automatically restarts the warehouse when a new query arrives, so users experience no interruption.

One important caveat: Snowflake bills a minimum of 60 seconds each time a warehouse starts. If you have a high-frequency workload with short gaps between queries, aggressive auto-suspend can actually increase costs by triggering repeated 60-second minimums. Test before committing.
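Both settings are a one-line change per warehouse. A minimal example, using a hypothetical warehouse name:

```sql
-- Pause after 60 seconds of inactivity; restart transparently on the next query.
-- (analytics_wh is a placeholder — substitute your own warehouse name.)
ALTER WAREHOUSE analytics_wh SET
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;
```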

Separate workloads across warehouses

Running ETL pipelines, BI dashboards, and ad hoc queries on a single warehouse is a common pattern that leads to contention, over-provisioning, and blurred accountability. Dedicated warehouses per workload type make it far easier to right-size, monitor, and attribute costs — and to suspend warehouses that aren't in use outside business hours.

Step 3 — Optimize your queries

Inefficient queries are one of the most common and most correctable sources of wasted Snowflake spend. A poorly written query can easily consume ten times the credits of an optimized equivalent.

Query pruning is the most impactful optimization technique. When Snowflake executes a query, it reads micro-partitions from storage — one of the most expensive steps in the process. If your table is clustered correctly for your access patterns (for example, by created_at if you frequently filter on date ranges), Snowflake can skip the micro-partitions that don't contain relevant data. This dramatically reduces the amount of data read — and the credits consumed.

For tables that aren't naturally clustered to your query patterns, Snowflake's Clustering Keys allow you to define explicit clustering. Use this selectively: automatic clustering runs as a background serverless process that consumes credits continuously, so it should only be applied to tables that are queried frequently and at high enough volume to justify the overhead.
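Defining a clustering key is a single ALTER statement. Before paying for background reclustering, it's worth checking how well-clustered the table already is — a sketch, with a hypothetical table name:

```sql
-- Inspect current clustering quality for the candidate key first
SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(created_at)');

-- Only then define the key, matching your most common filter column
ALTER TABLE events CLUSTER BY (created_at);
```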

Incremental processing is another major lever. Many data transformation pipelines are written to process the full dataset on every run — a safe default that becomes expensive at scale. Converting these to process only new or changed data (often called incremental or CDC-based models) can cut transformation costs dramatically. Streams and Tasks in Snowflake, or dbt's incremental materialization, are the standard tools for this.
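In native Snowflake, the Streams-and-Tasks version of this pattern looks roughly like the following — table, stream, and task names here are hypothetical:

```sql
-- Track row-level changes on the source table
CREATE STREAM raw_orders_stream ON TABLE raw_orders;

-- Periodically merge only the changed rows into the target,
-- and skip the run entirely when there is nothing new
CREATE TASK merge_orders
  WAREHOUSE = etl_wh
  SCHEDULE = '15 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
AS
  MERGE INTO orders o
  USING raw_orders_stream s ON o.order_id = s.order_id
  WHEN MATCHED THEN UPDATE SET o.amount = s.amount
  WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount);

-- Tasks are created suspended; resume to activate
ALTER TASK merge_orders RESUME;
```

The WHEN clause is the cost lever here: the warehouse never spins up for an empty run.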

Finally, set a query timeout. Snowflake's default allows statements to run for up to 48 hours before halting. A runaway query triggered by mistake can silently consume an enormous number of credits. Setting the STATEMENT_TIMEOUT_IN_SECONDS parameter to a more reasonable value — say, 3,600 seconds for most workloads — prevents this without affecting legitimate long-running jobs.
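The timeout can be set account-wide or scoped to a warehouse (warehouse name below is a placeholder):

```sql
-- Account-wide default: kill statements after one hour
ALTER ACCOUNT SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;

-- Or scope it to a single warehouse, overriding the account default
ALTER WAREHOUSE analytics_wh SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;
```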

Step 4 — Govern access and set budget controls

Cost governance is often treated as a monitoring problem, when it's really a controls problem. Visibility tells you what happened. Controls prevent problems before they occur.

Resource monitors are Snowflake's native mechanism for setting credit limits on warehouses. You can configure them to notify administrators at threshold levels (for example, at 70% and 90% of a monthly budget) and to automatically suspend a warehouse when a credit limit is reached. This is one of the most effective ways to prevent runaway costs in shared environments or on warehouses used by teams with variable workloads.
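Setting one up takes a few lines of SQL (run as ACCOUNTADMIN); the quota and names below are illustrative:

```sql
-- 500 credits per month: warn at 70% and 90%, hard-stop at 100%
CREATE RESOURCE MONITOR monthly_budget WITH
  CREDIT_QUOTA = 500
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS
    ON 70 PERCENT DO NOTIFY
    ON 90 PERCENT DO NOTIFY
    ON 100 PERCENT DO SUSPEND;

-- Attach the monitor to a warehouse
ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = monthly_budget;
```

SUSPEND lets in-flight queries finish; use SUSPEND_IMMEDIATE if the budget ceiling must be hard.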

Tag-based budgets, introduced as part of Snowflake's FinOps tooling, take this further by allowing you to allocate and track credit consumption at the level of teams, projects, or business units. Combined with Snowflake's cost-based anomaly detection, you can surface unexpected usage spikes before the end of the month — rather than discovering them on your invoice.

Access control is an underused cost lever. Restricting who can resize virtual warehouses prevents accidental or unauthorized changes — a surprisingly common source of cost spikes. A user bumping a warehouse from Medium to X-Large and forgetting to revert it can silently add significant expense. Role-based access policies that require warehouse changes to go through a controlled process eliminate this risk.

Snowflake continues to expand its FinOps capabilities — including adaptive compute and cross-account cost views — so expect these governance controls to become more granular over time.

Step 5 — Manage storage costs

Storage typically represents a smaller share of your total Snowflake bill than compute, but it can grow quietly and is easy to control once you know where to look.

Time Travel is one of the most common sources of unexpected storage costs. Snowflake retains historical versions of your data for a configurable retention period — up to 90 days on Enterprise edition. This is valuable for recovery, but every day of retention adds storage consumption. For most tables, a retention period of 1–7 days is sufficient. Reserve longer retention for critical, high-value datasets where the recovery use case is clear.
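To find where Time Travel storage is accumulating and trim retention where it isn't needed, something like the following works (the staging table name is a placeholder):

```sql
-- Rank tables by how much Time Travel storage they carry
SELECT table_catalog, table_schema, table_name,
       ROUND(time_travel_bytes / POWER(1024, 3), 2) AS time_travel_gb
FROM snowflake.account_usage.table_storage_metrics
ORDER BY time_travel_bytes DESC
LIMIT 20;

-- Drop retention to one day on a table that doesn't need deep recovery
ALTER TABLE staging_events SET DATA_RETENTION_TIME_IN_DAYS = 1;
```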

Materialized views store precomputed query results, which is useful for frequently run, expensive queries. But each materialized view incurs both storage costs and serverless compute costs to stay up to date. A materialized view that's queried fewer than ten times a week is almost certainly costing more than it saves. Audit your materialized views regularly and suspend or drop the ones that aren't earning their keep.

Automatic clustering on rarely queried tables is a similar trap. Background clustering runs continuously to reorganize data, even if the table is barely used. Review which tables have clustering enabled, compare against actual query frequency, and disable clustering on tables where it isn't justified by usage.
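Disabling it is reversible, so a cautious approach is to suspend reclustering first and drop the key only once you've confirmed query performance holds up — table name below is hypothetical:

```sql
-- Pause background reclustering (can be resumed later)
ALTER TABLE archive_events SUSPEND RECLUSTER;

-- If usage never justifies it, remove the clustering key entirely
ALTER TABLE archive_events DROP CLUSTERING KEY;
```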

For data loading via Snowpipe, file size matters. Snowflake charges an overhead fee per file loaded, so ingesting thousands of tiny files adds up quickly. The optimal file size for Snowpipe ingestion is in the range of 100–250MB. Batching smaller files before loading eliminates unnecessary overhead.

Frequently Asked Questions

What is the biggest driver of Snowflake costs?

Compute is by far the largest component of most Snowflake bills, typically accounting for more than 80% of total spend. Compute costs come from virtual warehouses running queries and data transformations — when a warehouse is active, it consumes Snowflake credits. Managing warehouse size, configuration, and idle time has the highest impact on cost reduction.

How do I stop Snowflake warehouses from running when not in use?

Enable auto-suspend on all virtual warehouses and set the timeout to the shortest value that makes sense for the workload — 60 seconds is a reasonable default for most interactive or batch workloads. Pair this with auto-resume so warehouses restart automatically on demand. For warehouses used only during business hours, consider scheduling them to suspend outside those windows using Snowflake Tasks.

How do Snowflake resource monitors work?

A resource monitor is a Snowflake object that tracks credit consumption for one or more virtual warehouses against a defined limit. When credit usage hits a configured threshold (such as 70% or 90% of a monthly budget), Snowflake can send an alert notification or automatically suspend the warehouse. Resource monitors are configured via the Snowflake UI or SQL, and are one of the most effective native controls for preventing unexpected overspend.

