Driving product-led growth with your RevOps data stack: Learnings from Apollo.io and Montreal Analytics

Nicole Mitich · October 11, 2022

Product-led growth is a game-changer for RevOps – you can make the most of your data ecosystem, maximize the impact of your operational tools, and drive better outcomes.

But that doesn’t mean that it's free of challenges. 😬

Luckily, some pretty knowledgeable folks let us in on their tips during our recent webinar, Driving Product-Led Growth with your RevOps data stack. It featured a few awesome RevOps-oriented leaders who get their hands dirty in the revenue and growth landscapes daily: Henry from Apollo.io, Cyril from Montreal Analytics, and Syl from Census.

Watch the full webinar below 👇

Together, Henry, Cyril, and Syl shared learnings and examples of how to build a data stack that enables PLG companies to leverage data that directly impacts their revenue goals. 💸

Product data enables you to reach customers at the right time

In RevOps, it’s critical to have a 360° view of product usage. Enriching your customer accounts with context from all of your internal customer data enables your sales team to prioritize the right accounts and your customer success team to prevent churn and drive upsells.

By using the customer’s actual product usage, your teams can have more authentic conversations that revolve around a customer’s real, demonstrated needs. Henry let us in on a few of Apollo’s use cases demonstrating the results of more authentic customer communication. 👇

  • Improving outbound conversion rates with product data. By adding product behavior to outbound emails, Apollo increased their conversion rates 10X to over 5%. 
  • Automating upsell alerts when users hit feature gates. When Apollo users hit feature gates inside the product and don’t convert, that behavioral data is sent to Salesforce and Apollo, triggering both automated and manual follow-up tasks for the sales and customer success teams (a minimal sketch of this pattern follows the list). 
  • Product-qualified lead (PQL) scoring to prioritize leads for the sales team. Apollo combined context from all their leads’ activities to identify which leads were most likely to convert, resulting in a better-equipped sales team.
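To make the second use case concrete, here’s a minimal Python sketch of the feature-gate pattern. The event shape, the `crm_client` wrapper, and the stub CRM are illustrative assumptions, not Apollo’s actual pipeline.

```python
# Minimal sketch of a feature-gate alert. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FeatureGateEvent:
    user_email: str
    account_id: str
    feature: str        # which gate the user hit, e.g. "export_limit"
    hit_at: datetime
    converted: bool     # did they upgrade after hitting the gate?

def handle_feature_gate(event: FeatureGateEvent, crm_client) -> None:
    """Route non-converting feature-gate hits to sales/CS as follow-up tasks."""
    if event.converted:
        return  # user self-served an upgrade; no outreach needed
    crm_client.create_task(
        subject=f"Feature gate hit: {event.feature}",
        account_id=event.account_id,
        notes=f"{event.user_email} hit '{event.feature}' without converting.",
    )

# Usage with a stub client, just to show the flow:
class StubCRM:
    def create_task(self, **kwargs):
        print("Would create CRM task:", kwargs)

handle_feature_gate(
    FeatureGateEvent("ada@example.com", "acct_42", "export_limit",
                     datetime.now(timezone.utc), converted=False),
    StubCRM(),
)
```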

📈 For more examples of how you can use your product data to drive revenue, check out these use cases from some of the best PLG companies.

All these use cases show that when you use your product as the main vehicle to acquire, activate, and retain customers, you empower product-led growth. So, let your potential customers try your product for free or for a small fee, realize the value the product is providing, and then bring them into a sales cycle. 

“As new users sign up, you want to measure how far along they are in the process towards realizing the value that your product is bringing. And when they do achieve those values – or as we call it, achieve the ‘AHA’ moment – it's really about getting that information to your team and enriching your CRM,” Cyril said.

How do you decide what product data to use?

Product analytics can help you answer questions like:

🎯 Where is a specific user on the journey? 

🎯 Are they just signing up, or have they triggered some feature? 

🎯 Have they used a bunch of different tools and features on the platform? 

🎯 Are multiple users from the same company using the product?

🎯 Do they have a lot of usage in a short amount of time?

All this data is specific to your organization, your niche, and your customers. While we might wish it were as easy as flipping a switch, there’s no standardized set of product metrics for every company.

“I think, zooming out, there isn't a one-size-fits-all approach to PQLs, and I think that's kind of the definition of it, right? It's specific to the business, it's specific to the tier and the segment that you're working on, and where the customer is in the funnel,” Henry commented.

Start by looking at your existing customers’ behavior within your product so you can pull out actionable information, Henry recommended. Ultimately, you’re looking for signals to help you decide:

Is this person potentially either a current or future customer champion for you? 

Can they help you drive adoption and engagement with your product within that organization? Or are they too early in their journey and need some support or intervention from the sales team? 

By timing your interventions according to these signals, you can maximize the value you give your customers just as they realize the core value of your product during their ‘AHA’ moment. 💡
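As a toy illustration of turning those signals into a score, here’s a minimal Python sketch. The signal names, weights, and caps are invented for illustration – as Henry says, the real definition is specific to your business.

```python
# Hypothetical PQL score built from the usage questions above.
# Weights, caps, and the routing threshold are all illustrative.
WEIGHTS = {
    "signed_up": 1,
    "activated_key_feature": 3,
    "distinct_features_used": 2,   # per feature, capped below
    "teammates_active": 4,         # per active teammate on the same account
    "sessions_last_7d": 1,         # usage velocity
}

def pql_score(profile: dict) -> int:
    """Collapse raw usage signals into a single prioritization score."""
    score = WEIGHTS["signed_up"] * int(profile.get("signed_up", False))
    score += WEIGHTS["activated_key_feature"] * int(profile.get("activated_key_feature", False))
    score += WEIGHTS["distinct_features_used"] * min(profile.get("distinct_features_used", 0), 5)
    score += WEIGHTS["teammates_active"] * min(profile.get("teammates_active", 0), 3)
    score += WEIGHTS["sessions_last_7d"] * min(profile.get("sessions_last_7d", 0), 10)
    return score

profile = {"signed_up": True, "activated_key_feature": True,
           "distinct_features_used": 4, "teammates_active": 2,
           "sessions_last_7d": 6}
print(pql_score(profile), "-> route to sales if above your tuned threshold")
```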

Product data is often siloed

The problem is, historically speaking, your product data is siloed. Your stack might be composed of dozens of tools, with each tool carrying different data about your customers. And, when none have the full picture, all your valuable product data isn't getting to the teams who need it.

So what’s the solution? Bringing your data into a single source of truth – your data warehouse. When Apollo worked with Montreal Analytics to implement the modern data stack, they were able to centralize their data in Snowflake and activate it in their GTM tools. 
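In practice, a reverse ETL tool like Census handles this activation step, but a hand-rolled Python sketch shows the shape of the flow. The connection parameters and the pql_scores table below are assumptions for illustration.

```python
# Minimal sketch: read modeled scores from Snowflake and hand the hot
# accounts to a GTM tool. Credentials and table name are hypothetical.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="...",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="MARTS",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT account_id, pql_score FROM pql_scores WHERE pql_score >= 10"
    )
    for account_id, score in cur:
        # A reverse ETL sync would push these rows into your CRM or
        # marketing tools; print stands in for that call here.
        print(f"Activate account {account_id} (PQL score {score})")
finally:
    conn.close()
```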

Apollo started this process with three goals in mind:

  1. They wanted deeper product usage insights for their GTM teams.
  2. They wanted to future-proof their data stack. PLG means a “firehose” of users signing up for free trials, and Apollo didn’t want all of them in their CRM.
  3. They wanted to make the warehouse a system of record for the customer.
[Diagram: Apollo's RevOps data stack]

With these goals as their North Star and Montreal Analytics to guide them, they needed one final piece of the puzzle: Confidence in their data. 🧩

Product data isn’t all there is to it. You need confidence in that data

It’s one thing to have access to your data, but it’s another thing entirely to have confidence in that data. Data needs to be reliable to be actionable. So, you need a single source of truth and a governance plan to make that data trustworthy. 

Once you’ve centralized your data in the warehouse, you need to model it (join your product, marketing, and sales data together) and focus on the metrics that matter for your business. Cyril calls this “giving an opinion to your data.”
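In a real stack this modeling usually lives in SQL or dbt inside the warehouse, but a small pandas sketch shows the idea. The tables, columns, and the expansion rule below are assumptions, not a prescribed model.

```python
import pandas as pd

# Toy inputs standing in for warehouse tables -- column names are assumptions.
product_usage = pd.DataFrame({"account_id": [1, 2], "weekly_active_users": [14, 3]})
marketing = pd.DataFrame({"account_id": [1, 2], "last_campaign": ["webinar", "ebook"]})
sales = pd.DataFrame({"account_id": [1, 2], "plan": ["free", "team"], "arr": [0, 12000]})

# "Giving an opinion to your data": join the silos into one account-level
# model and keep only the metrics your teams have agreed matter.
account_360 = (
    product_usage
    .merge(marketing, on="account_id")
    .merge(sales, on="account_id")
)
account_360["expansion_candidate"] = (
    (account_360["weekly_active_users"] > 10) & (account_360["plan"] == "free")
)
print(account_360)
```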

There’s a lot to it. How do you start?

Cyril recommends adopting an agile product approach to activating your data: Don’t try to build everything at once. Start with bite-sized pieces, and determine priority based on the project sponsor. For example, if your sponsor is the Head of RevOps, the highest priority should be getting data to the revenue team.

Start by bringing a couple of sources into your warehouse, then model it and train your team on that data. You can iterate to get exactly what you need. 🔁 But it can still seem a little overwhelming at first, especially when you realize you need to balance time to value, data quality, and breadth of scope. 

“Data quality is very important. So if you want to go fast and take shortcuts, know that you don't get a lot of shots to get adoption from non-data people on the data itself. If at some point you bring in some data and people realize that the data is not accurate, you've lost a ton of trust, and it's going to be a journey to bring that trust back,” Cyril warned.
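One lightweight way to protect that trust is to run explicit checks before any data reaches non-data teams. This Python sketch is illustrative – the column names and thresholds are assumptions, not a full governance plan.

```python
# A few lightweight trust checks before activating data downstream.
import pandas as pd

def quality_gate(accounts: pd.DataFrame) -> list[str]:
    """Return a list of human-readable failures; empty means safe to sync."""
    failures = []
    if accounts["account_id"].duplicated().any():
        failures.append("duplicate account_id values")
    if accounts["owner_email"].isna().mean() > 0.01:
        failures.append("more than 1% of accounts missing owner_email")
    if (accounts["pql_score"] < 0).any():
        failures.append("negative PQL scores")
    return failures

accounts = pd.DataFrame({
    "account_id": [1, 2, 2],
    "owner_email": ["a@x.com", None, "c@x.com"],
    "pql_score": [12, 4, -1],
})
problems = quality_gate(accounts)
if problems:
    print("Blocking sync:", problems)  # fail loudly instead of shipping bad data
```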

Work on getting solid foundations in place first, then build on them rather than trying to boil the ocean. 🌊 Keep in mind: A tool alone won't solve your problems, so start by nailing down your roadmap and deciding on tools based on where you want to go (instead of letting your tools dictate your path). From there, you can see tremendous effects. 

PLG needs lots of experimentation and iteration 

Personalization with product data increases conversion rates since you can now reach out to the right person at the right time – but product data alone won't always lead you to the right place. After all, having all the quantitative data you need doesn't mean you have the context behind it.

PLG involves a lot of experimentation. You won't necessarily hit it out of the park the first time. ⚾ “So, take your North Star, take your business goals as a company and go into the data and figure out what success is for your users,” Cyril proposed.

As Henry mentioned, “PQL scoring evolves over time” – but you can’t evolve if you don’t start somewhere. A solid foundation of trustworthy data will enable your organization to be agile and experiment with how to effectively activate that data across the business.

✨ Want to start operationalizing your data to drive product-led growth with your RevOps data stack? Book a demo with a Census product specialist.
