
Webinar recap: How Figma operationalizes data across its $20B PLG business

Nicole Mitich
November 07, 2022

Nicole Mitich is the content marketing manager @ Census. She's carried a love for reading and writing since childhood, but her particular focus is on streamlining technical communication through writing. She loves seeing (and helping) technical folks share their wisdom.

It’s no secret: Figma has seen huge success due to their product-led growth (PLG) motion, with a massive $20B acquisition announced recently by Adobe. And if you’re in the same space, you might want all the juicy details about how they did it. 🫖

How did Figma scale their business operations as a hypergrowth PLG company? How did they build a source of truth for their GTM data? What words of advice can they share about their journey?

We got all these answers and more in our recent webinar, How Figma operationalizes data across its $20B PLG business. It featured Praveer Melwani, CFO at Figma, who joined in 2017 and helped propel Figma into revenue hypergrowth. Joining him was Sylvain Giuliani, Head of Growth & Ops at Census. Census was one of the early tools in Figma’s data stack that helped democratize data across the organization.

Catch the full webinar below 👇

Keep reading for our 7 top tips from the Live Q&A:

PLG means centering your business strategy around your product

Product-led growth is all about taking in customer feedback, making iterative improvements, and turning your product into the main driver of acquisition, engagement, and retention. For Figma, this process is simple: End users gain a ton of familiarity with the free tool before they ever have to make a purchasing decision. And rather than being one-off occurrences, these interactions overlap and compound, creating buzz around your product. 🐝

“The unique thing is once that affiliation is there from the product side, the pull is so strong that you can create almost this beautiful flywheel. It can start to emerge as more people are sharing it. They’re getting exposed to new features, and you end up with these viral loops that build off each other all again centered around your product in the way that you've orchestrated the offering on a go-forward basis.”

Like a flywheel delivering power from a motor to a machine, PLG delivers the power of your product to your end users. 🔌 But without access to free features, users can't experience your product's value, and they won't share what they haven't experienced.

Empathy and a deep understanding of the product make all the difference in a sales team

If your sales team doesn't understand your end users, it'll be difficult to convince those users that your product solves their problems. I mean, you can't take a traditional enterprise sales motion that worked at MongoDB and apply it to Figma. So, what's the trick? Skill sharing.

“Early on, each of our sales individuals really understood our product and had empathy for our end users. So the demo portion of our sales interviews were so crucial to the way that we brought in our early sales team because we armed them with information to go after and find the champions of our tool. Those champions tended to be designers in their own right, and we wanted the individuals that were having those conversations to be fluent in the types of relationships that they were then going to build.”

Eventually, you'll have to talk to departments outside of sales to help them fundamentally understand your product, too. But initially, the way to build a really strong connection with your end user is through thoughtful conversation about the core design product.

Gravitating toward a single source of truth requires iteration

When you become a data-driven organization, one of the first things you’ll do is set up your data warehouse to act as a single source of truth for everyone downstream to reference for the most up-to-date information. 🌟 But even if it’s not smooth sailing from the beginning, each incremental improvement will help data and operations teams leverage data a little bit better.

“It’s definitely iterative. Things are always going to change a little bit, and you just have to be comfortable that they are going to do that. But the benefit is you get consistency across systems. No one is seeing one number in one place, then it looks differently in a different place. And what that requires you to do is structure a team that is owning that derived set of tables and has to go and then take ownership of what they look like.”

Sure, it might involve a bit of trial and error, but even taking the first baby steps will be huge for your organization. And once you hit your stride, you’ll empower more analysts and use cases than ever before.
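
To make that concrete, here's a minimal sketch of what an owned, derived table can look like. Everything in it is illustrative rather than Figma's actual models: the table and column names are hypothetical, and Python's built-in sqlite3 stands in for a cloud warehouse (where a tool like dbt would typically manage the model):

```python
import sqlite3

# Stand-in for a cloud warehouse; in production this would be
# Snowflake/BigQuery/etc., with the model managed by a tool like dbt.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_signups (account_id TEXT, signed_up_at TEXT);
    CREATE TABLE raw_invoices (account_id TEXT, amount_usd REAL);

    INSERT INTO raw_signups VALUES ('acct_1', '2022-01-05'), ('acct_2', '2022-03-09');
    INSERT INTO raw_invoices VALUES ('acct_1', 1200.0), ('acct_1', 300.0);
""")

# The derived table the data team owns: one row per account, one
# definition of revenue. Every dashboard and sync reads from here, so
# no one sees one number in one place and a different one elsewhere.
con.executescript("""
    CREATE TABLE dim_accounts AS
    SELECT s.account_id,
           s.signed_up_at,
           COALESCE(SUM(i.amount_usd), 0) AS lifetime_revenue_usd
    FROM raw_signups s
    LEFT JOIN raw_invoices i USING (account_id)
    GROUP BY s.account_id, s.signed_up_at;
""")

print(con.execute("SELECT * FROM dim_accounts").fetchall())
# e.g. [('acct_1', '2022-01-05', 1500.0), ('acct_2', '2022-03-09', 0.0)]
```

The point isn't the SQL; it's the ownership. One team maintains dim_accounts, and every dashboard, sync, and forecast downstream reads from it.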

Product usage data is a key part of revenue forecasting

With each new touchpoint in the customer journey, you uncover quantifiable information about your end users and their relationship with your product. ✋ Because product usage data details how real customers use your product in real time, Figma quickly understood the strategic changes they needed to make to delight their users.

“Our whole business is built off having an understanding of when someone landed in our tool. How long have they been around? What triggers can we look at to get a good sense of when they ultimately will upgrade? We have that all built out. And then we actually pass a lot of the product data into Salesforce, or even our sales reps, to have an understanding of where the large upsells could come from, and then they use it to help them forecast their own pipeline.”

As Praveer’s example explains, data collection is necessary, but providing that data to GTM teams in a usable and actionable way is how you accelerate your business. 🏃 Putting your data into the tools that your teams use with reverse ETL ensures that when it comes time to make a decision, the right person has the right data to do it.
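
As a hedged sketch of what that reverse ETL step can look like (not Figma's or Census's actual pipeline; the field names, the warehouse_rows helper, and the upgrade score are all made up for illustration), here's product usage data being upserted onto Salesforce accounts with the simple_salesforce client:

```python
from simple_salesforce import Salesforce  # pip install simple-salesforce

def warehouse_rows():
    """Hypothetical stand-in for a warehouse query: one row per account
    with the usage signals sales wants to see (all fields illustrative)."""
    return [
        {"account_id": "acct_1", "active_editors": 42, "upgrade_score": 0.91},
        {"account_id": "acct_2", "active_editors": 3, "upgrade_score": 0.12},
    ]

sf = Salesforce(username="...", password="...", security_token="...")

for row in warehouse_rows():
    # Upsert on a custom external-ID field so warehouse rows map 1:1 to
    # Salesforce Accounts; reps then see usage right next to the pipeline.
    sf.Account.upsert(
        f"Warehouse_Account_Id__c/{row['account_id']}",
        {
            "Active_Editors__c": row["active_editors"],
            "Upgrade_Score__c": row["upgrade_score"],
        },
    )
```

In practice, a reverse ETL tool like Census handles the diffing, scheduling, and retries declaratively, so nobody has to hand-maintain scripts like this one.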

Investing in quality systems early on prevents issues as you scale

Teams often reach for inexpensive or DIY data tools to save time and money. But as your business starts rapidly scaling, you need more information than they can provide, and these beginner tools start to fail, bringing your business down with them. 📉 Of course, the budget is tight when you’re just starting out, but if there’s one thing to spend extra on, Praveer says, it’s your tooling.

“There are definitely things that we can rip and replace, and there were good decisions that we made early on. I'm so glad we started on Salesforce early, I'm so glad we were on NetSuite early. I'm really thankful that we started on Stripe and continued to build on our Stripe instance over time. And then I think what has been really interesting for us is continuing to work with you guys [Census] and as we think about new places and new sources or new destinations of data, ensuring that we're continuing to get data into those right places."

By implementing a quality data stack, you ensure that your business has the right foundation for success. Your tools will grow with your business so you can focus on innovation, rather than just shepherding data from point A to point B. 

Avoid the headache: Build automation into your manual processes

Increasing efficiency is one way to improve your company’s bottom line. Sure, some manual processes will always be necessary, but building automation into them can be the secret to your success.

“I think the way that you build efficiency in your motions is not by people. I mean, it tends to be – and today we all kind of know it has to be a little bit of a mix. Like, you're going to have some manual processes early, but you want to build automation into it so that it isn't a headache for you to manage if your business just kind of goes through the roof.”

When you automate your business processes, you increase their efficiency and reliability and prevent human errors, so you can get back to focusing on more strategic thinking to help grow your business. 🚀 Win-win.
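
As a toy example of building automation into a manual process, imagine a weekly usage report that someone used to assemble by hand. A scheduler turns it into a recurring job (the schedule library and the report function here are illustrative assumptions, not anything from the webinar):

```python
import time

import schedule  # pip install schedule


def send_weekly_usage_report():
    # Formerly a manual task: query the warehouse, paste the numbers
    # into an email, hit send. Now it's one automated function.
    print("Compiling and sending the weekly usage report...")


# Run the old manual process automatically every Monday morning.
schedule.every().monday.at("09:00").do(send_weekly_usage_report)

while True:
    schedule.run_pending()
    time.sleep(60)
```

The manual version still lives inside the function, so you can run it by hand whenever you need to, but it no longer becomes a headache to manage if your business "goes through the roof."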

Silos inhibit growth and erode trust

When you’re trying to manage reliable data across a number of workflows and applications, every team sees just a piece of the truth. No one has a complete, 360-degree view of the customer. Sure, when your company is small, you can avoid silos because everyone talks to one another. But as the company grows, point-to-point connections multiply quadratically: n tools can require up to n(n-1)/2 integrations, so five tools already mean up to ten connections, and twenty tools mean up to 190. Pretty soon you have a data integration spaghetti stack. 🍝

“Operating in silos will create inconsistencies, and I think inconsistencies will erode trust. And if I think about what has enabled me to scale it's the fact that I can go to our board, I can go to our leadership team and, with confidence, tell them what our numbers look like – with a really quick turnaround and a high degree of certainty. Then, they’re like, ‘Okay, we trust you, we trust the forecast, we trust the actual.’ And that all then comes with a strong understanding of how those pillars are all intertwined.”

By avoiding these silo-induced issues and building a future-proof data stack at an early stage, Praveer was able to guide Figma to where they are now: a $20B company. 🤯

✨ Want to learn how PLG leaders like Figma, Notion, Canva, and Mixpanel do more with their data? Get started.
