Pipedrive API w/ Python tutorial: Contact records

Allie Beazell
June 08, 2021

Allie Beazell is director of developer marketing @ Census. She loves getting to connect with data practitioners about their day-to-day work, helping technical folks communicate their expertise through writing, and bringing people together to learn from each other.

Chances are you and your team have felt frustrated at some point because you’re missing sales you could have made if your data were put to better use. You made the effort to implement Pipedrive as your sales CRM, but some other system generates valuable data that never makes its way into Pipedrive.

On top of that, that data doesn't fit neatly into Pipedrive's default fields. There is, for example, no default field for a trial end date in your contact records (even though this could help you reach out to potential customers in your sales pipeline with an offer at just the right time). Luckily, you can create a custom field and write data to it using the Pipedrive API, significantly improving your sales process.

Getting set up with the Pipedrive API  

We'll use Python to connect to the Pipedrive API and make our trial end date data available where we need it to be. To connect to the Pipedrive API, you will need to gather two things from within the web app. First, you’ll need your API access token, which you can find in your account settings. If you have access to multiple companies, make sure that you are within the right environment. Second, you will need the company domain, which you can find in the address bar (e.g. https://this-is-your-domain.pipedrive.com).

You’ll also have to install the Python requests library if you haven't done that previously. You can do so by launching your terminal or command prompt and entering the command below.


pip install requests
pip3 install requests # Use this if the first one doesn't work for you.

We recommend running all code in a Jupyter Notebook for your first attempt so that you can easily see the output and interact with it, though creating a .py file will work as well.

We'll start by importing the necessary libraries. The requests library lets us make HTTP requests to the Pipedrive API, and the json library lets us parse the responses.


import json
import requests
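
Before going further, it's worth a quick check that your token and domain are correct. Here's a minimal sketch, assuming the standard /users/me endpoint (which returns the currently authenticated user):


# Quick sanity check: a successful response means your token and domain work.
token = {
    'api_token': 'your-api-token-found-in-the-web-app'
}

me_url = 'https://your-domain.pipedrive.com/api/v1/users/me'

me_response = requests.get(me_url, params=token)

# The key 'success' should equal True.
print(json.loads(me_response.content)['success'])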

Checking your existing Pipedrive fields  

Before we get started, we need to find out if the field we want to write data to already exists. Since we'll be making a GET request, a POST request, and a PUT request, the variables have been prepended with get_, post_ and put_ to help you distinguish between them.


# token hasn't been prepended with get_ because it needs to be sent with all requests.

token = {
    'api_token': 'your-api-token-found-in-the-web-app'
}

get_url = 'https://your-domain.pipedrive.com/api/v1/personFields'

# The params argument is appended to the url as a query parameter.
get_response = requests.get(get_url, params=token)

get_content = json.loads(get_response.content)

# Upon success, the key 'success' will equal True.
print(get_content['success'])

A successful GET request returns the fields you already have under the response's “data” key. We'll print the names of all the fields and their respective indices. Unless your company's naming conventions are absolutely top-notch, you'll want to go through this list manually to check whether your custom field already exists.


get_data = get_content['data']

for i, v in enumerate(get_data):
    print(i, v['name'])

# If you want to further examine the field at index 5.
print(get_data[5])

If the custom field that you want to write to already exists, save its “key” value to a variable.


# If for example the index of your field is 5.
field_key = get_data[5]['key']

Follow along with the next section to create a custom field if you didn't find an existing field that meets your needs.

Creating a Pipedrive custom field

Creating the custom field if it doesn't exist yet is fairly straightforward. You just need to choose a name for your custom field and decide on its type. You have several options when it comes to the field type, which you can find in the API reference. For our trial end date example, it makes the most sense to go with “date.”


# token should still be defined from the GET request, but in case you skipped over that, here it is again.

token = {
    'api_token': 'your-api-token-found-in-the-web-app'
}

# The field that you want to create.
post_data = {
    'name': 'trial end date',
    'field_type': 'date'
}

post_url = 'https://your-domain.pipedrive.com/api/v1/personFields' 

post_response = requests.post(post_url, params=token, data=post_data)

post_content = json.loads(post_response.content)

# The key 'success' should equal True.
print(post_content['success'])

If you successfully created a field, the response will contain a “data” key with the information of the field. This information includes a key called “key,” which you’ll need when writing data to this field.


field_key = post_content['data']['key']

Writing data to a Pipedrive custom field

Now that you have a custom field in place, you can write data to it. Heads up: the Pipedrive API reference is misleading here (it makes it seem like you can only write data to default fields, which isn’t the case). To complete this step, you’ll need the ID of the person you want to write data to. To make that easier, you can get a list of all your persons or search for a specific one (a quick sketch of searching follows below).
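
For example, here's a minimal sketch of looking up a person's ID by name via the persons/search endpoint. The search term “Jane Doe” is purely illustrative, and the exact response shape may vary slightly:


# Find the ID of the person you want to update.
# 'Jane Doe' is a hypothetical search term - use a name or email from your own data.
search_url = 'https://your-domain.pipedrive.com/api/v1/persons/search'

search_params = dict(token)          # Reuse the api_token defined earlier.
search_params['term'] = 'Jane Doe'

search_response = requests.get(search_url, params=search_params)
search_content = json.loads(search_response.content)

# Print the ID and name of each match.
for item in search_content['data']['items']:
    print(item['item']['id'], item['item']['name'])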


# token should still be defined from the GET request, but in case you skipped over that, here it is again.
token = {
    'api_token': 'your-api-token-found-in-the-web-app'
}

# Replace id-of-person with the actual ID
put_url = 'https://your-domain.pipedrive.com/api/v1/persons/id-of-person'

# field_key is the 'key' value of the field that you want to write data to
put_payload = {
    field_key: '2021-06-01' # If this person's trial ends on 2021-06-01
}

put_response = requests.put(put_url, params=token, data=put_payload)

put_content = json.loads(put_response.content)

# The key 'success' should equal True.
print(put_content['success'])

The output contains the person we just wrote data to, including the newly added data. One thing to watch out for: the “success” key will equal True if you manage to write data, regardless of whether the data was correct. If you, for instance, try to write the string “wrong-data” to a date field, the “success” key will still equal True and the value of the field will be set to 1970-01-01. You'll want to verify the result of your API request to make sure it’s accurate.


# This should equal the value that you just wrote using the PUT request.
print(put_content['data'][field_key])
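
If you'd rather fail loudly than eyeball the output, a simple comparison of the intended and stored values (using the variables defined above) does the trick:


# Compare what Pipedrive stored with what we intended to write.
expected_value = put_payload[field_key]
actual_value = put_content['data'][field_key]

if actual_value != expected_value:
    raise ValueError(f"Expected {expected_value!r} but Pipedrive stored {actual_value!r}")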

Success! You just wrote your data to a Pipedrive custom field using the Pipedrive API... once.

There's more to life than writing to custom fields

It is, in itself, easy enough to write data to a custom field through the Pipedrive API. The real challenge lies in getting this process just right in production. That means scheduling the process to run periodically. It also means making sure that you don't exceed the two-second or 24-hour rate limits - which also count any actions you take in the web app. You’ll also need to incorporate logging so you know exactly which data points were written successfully and which ones failed (and why). Additionally, you'll have to develop a process to retry writing those failed data points - and hope they don't fail again. The list goes on.
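
To give a flavor of what that looks like in code, here's a minimal sketch of a retry helper with exponential backoff around the PUT request from earlier. The status codes checked and the backoff schedule are assumptions you'd tune for your own setup:


import time

def put_with_retries(url, params, data, max_attempts=3):
    """Retry the PUT request with exponential backoff on rate limiting or server errors."""
    for attempt in range(max_attempts):
        response = requests.put(url, params=params, data=data)
        # 429 = rate limited, 5xx = server-side error; both are worth retrying.
        if response.status_code not in (429, 500, 502, 503, 504):
            return response
        if attempt < max_attempts - 1:
            time.sleep(2 ** attempt)  # Wait 1s, 2s, 4s, ... between attempts.
    return response

put_response = put_with_retries(put_url, token, put_payload)
print(json.loads(put_response.content)['success'])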

You can struggle your way through this process, or you can let Census worry about it. We can take all the engineering for custom connectors off your plate and make it easy to sync your customer data from your warehouse to your business tools. See if we integrate with your tools or check out a demo.
