Workshop recap: SQL patterns every analyst should know with Ergest Xheblati

Parker Rogers
October 28, 2022

Parker is a data community advocate at Census with a background in data analytics. He's interested in finding the best and most efficient ways to make use of data and in helping other data folks in the community grow their careers.

Last week we hosted a hands-on SQL workshop with a master of the craft, Ergest Xheblati. He’s spent the last 15 years refining his SQL skills and has captured that expertise in his book, Minimum Viable SQL Patterns. In the workshop, Ergest explained and demonstrated several principles from the book, such as:

🎯 Query decomposition patterns - Solve complex queries by systematically decomposing them into smaller ones

🎯 Query maintainability patterns - DRY principle (don't repeat yourself)

🎯 Query performance patterns - Make your queries faster (and cheaper)

Toward the end of the workshop, Ergest answered over a dozen questions from SQL professionals all over the world. Here’s a summary of all the major topics covered. 👇

Query decomposition patterns

As SQL practitioners, we often find ourselves writing 50+ line queries to answer business questions. Sure, these queries get the answers we’re looking for, but without discipline, they can be annoyingly difficult to read. And if you, the creator, can barely follow along with the query, it’s unlikely that your data team will be able to either.

The solution? Common Table Expressions (CTEs).

When you use CTEs correctly, you can break a large query down into smaller, independent pieces (i.e., decompose it), allowing you to read your query as a directed acyclic graph (DAG). Ergest used a real-life example in his video with several subqueries converted to CTEs, but here’s a simple side-by-side comparison from Alisa Aylward.

CTE:

WITH avg_pet_count_over_time AS 
(
  -- Compute each cat's first and last pet dates once, in a single named step.
  SELECT 
    cat_id, 
    MAX(timestamp)::DATE AS max_pet_date, 
    MIN(timestamp)::DATE AS min_pet_date 
  FROM cat_pet_fact
  GROUP BY 1
)
SELECT 
  cat_name,
  t1.max_pet_date,
  t2.min_pet_date
FROM cat_dim
-- The same CTE is referenced twice without repeating its definition.
LEFT JOIN avg_pet_count_over_time AS t1
ON cat_dim.cat_id = t1.cat_id
LEFT JOIN avg_pet_count_over_time AS t2
ON cat_dim.cat_id = t2.cat_id;

Subquery:

SELECT 
  cat_name,
  t1.max_pet_date,
  t2.min_pet_date 
FROM cat_dim
-- The same aggregation has to be written out twice as nested subqueries.
LEFT JOIN 
  (SELECT 
    cat_id, 
    MAX(timestamp)::DATE AS max_pet_date,
    MIN(timestamp)::DATE AS min_pet_date
  FROM cat_pet_fact
  GROUP BY 1) AS t1
ON cat_dim.cat_id = t1.cat_id
LEFT JOIN 
  (SELECT 
    cat_id,
    MAX(timestamp)::DATE AS max_pet_date,
    MIN(timestamp)::DATE AS min_pet_date
  FROM cat_pet_fact
  GROUP BY 1) AS t2
ON cat_dim.cat_id = t2.cat_id;

Notice how much easier the query with CTEs is to read?

In fact, as Ergest highlighted in a recent tweet, switching from nested subqueries to CTEs is like going from a messy bedroom to an organized one. Like a clean room, a query built from CTEs is more manageable – plus, it’s much easier to find whatever you’re looking for if it’s always in the right place.

Query maintainability patterns

After covering how to make queries more readable, Ergest explained how CTEs also make queries more maintainable. Whenever you need to debug a query, you can walk through the individual CTEs in the DAG from beginning to end until you find the issue. This can save you hours each day and let you get started on your never-ending to-do list. 📝
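
For example, because each CTE is a self-contained step, you can inspect any intermediate result on its own. Here’s a minimal sketch reusing the example CTE from above:

WITH avg_pet_count_over_time AS 
(
  SELECT 
    cat_id, 
    MAX(timestamp)::DATE AS max_pet_date, 
    MIN(timestamp)::DATE AS min_pet_date 
  FROM cat_pet_fact
  GROUP BY 1
)
-- While debugging, select straight from the CTE you want to inspect,
-- then swap the final SELECT back in once the output looks right.
SELECT *
FROM avg_pet_count_over_time
LIMIT 100;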

Ergest also discussed the don’t-repeat-yourself (DRY) principle. Here’s the TL;DR: if you find yourself copying and pasting the same CTE into multiple queries, you’re better off turning it into a view, which cuts down the lines of code per query.
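
For instance, a CTE you keep copying between queries can become a view that every query references by name. A rough sketch, reusing the example tables from above (the view name is made up):

-- Define the shared logic once as a view...
CREATE VIEW cat_pet_dates AS
SELECT 
  cat_id, 
  MAX(timestamp)::DATE AS max_pet_date, 
  MIN(timestamp)::DATE AS min_pet_date 
FROM cat_pet_fact
GROUP BY 1;

-- ...then every query can reference it instead of repeating the CTE.
SELECT 
  cat_name,
  d.max_pet_date,
  d.min_pet_date
FROM cat_dim
LEFT JOIN cat_pet_dates AS d
ON cat_dim.cat_id = d.cat_id;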

Query performance patterns

Next, Ergest described how to make queries more performant and cost-effective, something he calls a “query performance pattern.” He shared a few rules to follow 👇

  • Avoid sort operations until the final SELECT statement. Sorts (e.g., ORDER BY) aren’t necessary until your query is in its final form.
  • Avoid joining data until you’ve reduced it as much as possible. Before you join, filter out all unnecessary columns and rows.
  • Avoid using functions in WHERE clauses. A WHERE clause can handle complex functions, but applying them to columns hurts performance. The cost is negligible for small queries, but with millions of rows it adds up, so keep your WHERE clauses as simple as possible (see the sketch after this list).
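
Here’s the sketch mentioned above, illustrating the last two rules with the example tables from earlier (the date filter itself is made up):

-- Reduce the data *before* the join: keep only the columns and rows you need.
WITH pets_2022 AS 
(
  SELECT 
    cat_id, 
    timestamp::DATE AS pet_date
  FROM cat_pet_fact
  -- Compare the raw column to constants instead of wrapping it in a function
  -- (e.g. avoid WHERE DATE_PART('year', timestamp) = 2022).
  WHERE timestamp >= '2022-01-01'
    AND timestamp < '2023-01-01'
)
SELECT 
  cat_name,
  COUNT(*) AS pets_in_2022
FROM cat_dim
JOIN pets_2022
ON cat_dim.cat_id = pets_2022.cat_id
GROUP BY 1
-- Sort only in the final SELECT.
ORDER BY pets_in_2022 DESC;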

Following these rules will reduce query run times and save your organization money. 💰 Even if you’re dealing with small amounts of data now, practicing query performance patterns will make you a SQL expert in the long term.

Audience Q&A

After covering the three patterns above, Ergest answered more than a dozen questions from SQL practitioners around the world. These are three (of the many) that I found valuable:

🤔 Why should you wait until the end of a query to join data? Is it for performance or organization?

  • Organization. You want to limit CTEs to simple aggregations because that makes them much easier to change later (the query maintainability pattern). Additionally, if you find yourself joining the same data inside CTEs over and over, you might as well materialize that join as its own table rather than rewriting the join statement every time.
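
As a quick sketch of that last point (the table name is hypothetical; CREATE TABLE ... AS is supported by most warehouses):

-- Materialize a join you keep rewriting as its own table, then query it directly.
CREATE TABLE cat_pet_summary AS
SELECT 
  cat_dim.cat_id,
  cat_dim.cat_name,
  MAX(cat_pet_fact.timestamp)::DATE AS max_pet_date,
  MIN(cat_pet_fact.timestamp)::DATE AS min_pet_date
FROM cat_dim
LEFT JOIN cat_pet_fact
ON cat_dim.cat_id = cat_pet_fact.cat_id
GROUP BY 1, 2;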

🤔 What are your suggestions for where and how to comment in SQL?

  • I believe in self-documenting code. If your code is simple (following the query decomposition pattern), you don’t need to write comments. If you’ve taken the time to properly name your tables and CTEs, they should explain themselves. 

🤔 What is the performance/readability difference between using CTEs and temp tables?

  • Temp tables and CTEs are equivalent in terms of performance. However, I believe CTEs are easier to read (the query decomposition pattern).
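
For comparison, here’s the same aggregation from the earlier example written with a temp table instead of a CTE (a sketch assuming a warehouse that supports CREATE TEMPORARY TABLE; the result is equivalent, but the logic now lives in a separate statement):

-- Temp-table version: same output as the CTE, but split across two statements.
CREATE TEMPORARY TABLE pet_dates AS
SELECT 
  cat_id, 
  MAX(timestamp)::DATE AS max_pet_date, 
  MIN(timestamp)::DATE AS min_pet_date 
FROM cat_pet_fact
GROUP BY 1;

SELECT 
  cat_name,
  pet_dates.max_pet_date,
  pet_dates.min_pet_date
FROM cat_dim
LEFT JOIN pet_dates
ON cat_dim.cat_id = pet_dates.cat_id;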

This post is just a brief summary of what Ergest taught in the workshop. If you’d like to level up your SQL skills and learn more, check out the full workshop here.

✨ Then head on over to join the Operational Analytics Club so you can discuss what you learned!
