Stored procedures & UDFs for Redshift → Snowflake
Re-home reusable logic (Redshift UDFs, procedure-like utilities, and operational macros) into Snowflake routines with explicit behavior contracts and a replayable harness, so retries and backfills don't change outcomes.
- Input: Redshift stored procedure / UDF migration logic
- Output: Snowflake equivalent (validated)
- Common pitfalls:
  - Procedural side effects ignored: audit/control writes disappear and reruns become unsafe.
  - Mixed-type branches: CASE/IF returns mixed types; Snowflake needs explicit casts.
  - NULL semantics drift: null-safe equality and type coercion differ; match logic changes.
Why this breaks
Redshift estates often embed business correctness in reusable logic: UDFs used in reporting and ETL, procedure-like utilities that manage windows and control tables, and macros that generate SQL. During migration, teams convert tables and queries first—then discover these utilities were the real system. Snowflake can implement equivalent outcomes, but only if procedural behavior is turned into explicit routines with testable semantics for state, errors, and idempotency.
Common symptoms after migration:
- Outputs drift due to type coercion and NULL handling differences
- Dynamic SQL behaves differently (quoting, binding, identifier resolution)
- Error handling changes; pipelines fail differently or silently continue
- Side effects (audit/control writes) disappear unless recreated
- Row-by-row procedural patterns become expensive and fragile if ported directly
A successful migration extracts the behavior contract and validates it with a replayable harness, not ad hoc spot checks.
How conversion works
- Inventory & classify procedural assets: Redshift UDFs, stored procedures/utilities, and macro scripts. Build a call graph to ETL jobs and BI queries.
- Extract the behavior contract: inputs/outputs, typing/NULL intent, side effects, error semantics, state assumptions, and performance constraints.
- Choose Snowflake target form per asset:
- Snowflake SQL UDF for pure expressions
- Snowflake JavaScript UDF for complex string/regex/object handling
- Snowflake stored procedure (SQL/JS) for multi-statement control flow and dynamic SQL
- Set-based refactor where procedural loops can be eliminated
- Rewrite dynamic SQL safely using bindings and explicit identifier rules.
- Validate with a harness: golden inputs/outputs, branch and failure-mode tests, and side-effect assertions (audit/control writes).
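The dynamic-SQL step above can be sketched as a Snowflake Scripting procedure that binds values rather than concatenating them. The procedure, table, and column names (UTIL.COPY_PARTITION, STG.ORDERS_DAILY, load_date) are illustrative assumptions, not part of any real estate:

```sql
-- Sketch: bind values with EXECUTE IMMEDIATE ... USING.
-- Identifiers cannot be bound as values, so validate them against an
-- explicit whitelist before splicing; bind everything else.
CREATE OR REPLACE PROCEDURE UTIL.COPY_PARTITION(src_table STRING, target_date DATE)
RETURNS STRING
LANGUAGE SQL
AS
$$
DECLARE
  stmt STRING;
BEGIN
  IF (src_table NOT IN ('STG.ORDERS', 'STG.ORDERS_LATE')) THEN
    RETURN 'rejected: unknown source table';
  END IF;
  stmt := 'INSERT INTO STG.ORDERS_DAILY SELECT * FROM ' || src_table ||
          ' WHERE load_date = ?';
  EXECUTE IMMEDIATE :stmt USING (target_date);
  RETURN 'ok';
END;
$$;
```

Binding the date (and any other values) removes the quoting and injection drift that concatenated Redshift dynamic SQL typically carries across.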
Supported constructs
Representative Redshift procedural constructs we commonly migrate to Snowflake routines (exact coverage depends on your estate).
| Source | Target | Notes |
|---|---|---|
| Redshift scalar UDFs | Snowflake SQL UDFs | Pure expressions mapped with explicit casts and NULL behavior. |
| Procedure-like ETL utilities | Snowflake stored procedures (SQL/JS) | Control flow rewritten; state and side effects modeled explicitly. |
| Dynamic SQL generation | EXECUTE IMMEDIATE with bindings | Normalize identifier rules; reduce drift and injection risk. |
| Control tables for restartability | Applied-window/batch tracking + idempotency markers | Retries/backfills become safe and auditable. |
| Timestamp/timezone helpers | Explicit timestamp intent (NTZ/LTZ/TZ) and conversions | Prevents boundary-day drift in reporting and SCD logic. |
| Row-by-row procedural transforms | Set-based SQL refactors | Avoid cost and reliability cliffs in Snowflake. |
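The last row of the table, replacing a row-by-row cursor loop with one set-based statement, often reduces to a single MERGE with deduplication in the source. Table and column names (DIM_CUSTOMER, STG_CUSTOMER) are illustrative:

```sql
-- Sketch: one MERGE replaces a per-row "look up, then update or insert" loop.
-- QUALIFY keeps only the latest staged row per key before merging.
MERGE INTO DIM_CUSTOMER d
USING (
  SELECT customer_id, email, updated_at
  FROM STG_CUSTOMER
  QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY updated_at DESC) = 1
) s
ON d.customer_id = s.customer_id
WHEN MATCHED AND s.updated_at > d.updated_at THEN
  UPDATE SET d.email = s.email, d.updated_at = s.updated_at
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, updated_at)
  VALUES (s.customer_id, s.email, s.updated_at);
```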
How workload changes
| Topic | Redshift | Snowflake |
|---|---|---|
| Where logic lives | UDFs + ETL utilities embedded across jobs | Centralized routines (UDFs/procedures) with explicit contracts |
| Typing and coercion | Implicit casts often tolerated | Explicit casts required for stable outputs |
| Operational behavior | Reruns/retries often encoded in ETL conventions | Idempotency and side effects must be explicit |
| Cost posture | Cluster utilization and WLM | Warehouse credits + bounded scans |
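Making idempotency explicit, as the table above requires, usually means an applied-window control table checked before each run. This is a minimal sketch; all names are illustrative, and note that Snowflake does not enforce PRIMARY KEY constraints, so the pre-run check carries the idempotency guarantee, not the constraint:

```sql
-- Sketch: applied-window tracking for safe reruns/backfills.
CREATE TABLE IF NOT EXISTS CTL.APPLIED_WINDOWS (
  job_name     STRING,
  window_start TIMESTAMP_NTZ,
  window_end   TIMESTAMP_NTZ,
  applied_at   TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP(),
  PRIMARY KEY (job_name, window_start, window_end)  -- informational only
);

-- Before processing, check whether this window was already applied:
SELECT COUNT(*)
FROM CTL.APPLIED_WINDOWS
WHERE job_name = 'orders_daily'
  AND window_start = :win_start
  AND window_end   = :win_end;
-- If zero, process the window and insert the marker in the same
-- transaction as the data write, so a retry cannot double-apply.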
Examples
Illustrative patterns for moving Redshift procedural logic into Snowflake routines. Adjust schemas, types, and identifiers to match your environment.
```sql
-- Snowflake SQL UDF example
CREATE OR REPLACE FUNCTION UTIL.SAFE_DIV(n NUMBER(38,6), d NUMBER(38,6))
RETURNS NUMBER(38,6)
AS
$$
  IFF(d IS NULL OR d = 0, NULL, n / d)
$$;
```
Common pitfalls
- Procedural side effects ignored: audit/control writes disappear and reruns become unsafe.
- Mixed-type branches: CASE/IF returns mixed types; Snowflake needs explicit casts.
- NULL semantics drift: null-safe equality and type coercion differ; match logic changes.
- Dynamic SQL injection risk: concatenation without bindings/escaping causes drift and risk.
- Row-by-row loops: procedural loops should be refactored into set-based SQL to avoid cost cliffs.
- No harness: without replayable tests, parity becomes a debate at cutover.
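The mixed-type and NULL-drift pitfalls above can be neutralized by forcing every branch to one declared type and using null-safe comparison explicitly. A sketch, with ORDERS/qty/price as illustrative names:

```sql
-- Sketch: make each CASE branch the same explicit type. Redshift may
-- coerce mixed branches silently; Snowflake needs the cast spelled out.
SELECT
  CASE
    WHEN qty IS NULL THEN CAST(NULL AS NUMBER(38,2))  -- not a string sentinel like 'N/A'
    WHEN qty < 0     THEN CAST(0 AS NUMBER(38,2))
    ELSE CAST(qty * price AS NUMBER(38,2))
  END AS line_total
FROM ORDERS;

-- Null-safe equality: use EQUAL_NULL(a, b) or a IS NOT DISTINCT FROM b
-- instead of a = b when NULL-to-NULL matches must count as matches.
```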
Validation approach
- Compile + interface checks: each routine deploys; signatures match the contract (args/return types).
- Golden tests: curated input sets validate outputs, including NULL-heavy and boundary cases.
- Branch + failure-mode coverage: expected failures (invalid inputs, missing rows) are tested.
- Side-effect verification: assert expected writes to audit/control tables and idempotency under retries/backfills.
- Integration replay: run routines within representative pipelines and compare downstream KPIs/aggregates.
- Performance gate: confirm no hidden row-by-row scans; set-based refactors validated with credit/runtime baselines.
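The golden tests above typically reduce to a symmetric-difference query between the migrated routine's output and a curated expected set. RESULTS_NEW and EXPECTED_OUT are illustrative table names:

```sql
-- Sketch: any row returned in either direction is a parity failure.
(SELECT * FROM RESULTS_NEW  MINUS SELECT * FROM EXPECTED_OUT)
UNION ALL
(SELECT * FROM EXPECTED_OUT MINUS SELECT * FROM RESULTS_NEW);
```

An empty result is the pass condition, which makes the check easy to gate in CI rather than debate at cutover.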
Migration steps
- 01
Inventory procedural assets and build the call graph
Collect UDFs and procedural utilities, map call sites across ETL and BI, and identify side effects (audit/control writes) and state assumptions.
- 02
Define the behavior contract
For each asset, specify inputs/outputs, typing/NULL intent, expected errors, side effects, restart semantics, and performance expectations. Choose the target form (UDF/procedure/refactor).
- 03
Convert logic with safety patterns
Rewrite casts and NULL behavior explicitly, migrate dynamic SQL using bindings, and refactor row-by-row patterns into set-based SQL where feasible.
- 04
Model side effects and restartability
Implement audit/control writes and idempotency markers so reruns/backfills are safe and outcomes are measurable.
- 05
Build a validation harness and cut over
Create golden inputs, edge cohorts, and failure-mode tests. Validate outputs and side effects deterministically, then cut over behind gates with rollback-ready criteria.
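One harness check from step 05 worth sketching is the retry assertion: run the migrated routine twice for the same window and verify neither the target nor the audit table doubled up. ETL.LOAD_ORDERS, FACT_ORDERS, and AUDIT.LOAD_LOG are illustrative names:

```sql
-- Sketch: idempotency under retry.
CALL ETL.LOAD_ORDERS('2024-01-01'::DATE);
CALL ETL.LOAD_ORDERS('2024-01-01'::DATE);  -- simulated retry

-- Target check: expect zero duplicate keys.
SELECT order_id, COUNT(*) AS n
FROM FACT_ORDERS
GROUP BY order_id
HAVING COUNT(*) > 1;

-- Side-effect check: expect exactly one applied marker for the window.
SELECT COUNT(*)
FROM AUDIT.LOAD_LOG
WHERE job_name = 'orders_daily'
  AND window_start = '2024-01-01';
```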
We inventory your UDFs and procedural utilities, migrate a representative subset into Snowflake routines, and deliver a harness that proves parity—including side effects and rerun behavior.
You get a conversion plan, review markers flagging ambiguous intent, and validation artifacts, so the procedural-logic cutover is gated by evidence and rollback criteria.