Offline File Ingestion
File ingestion plays a critical role in an enterprise's marketing stack by enabling customer data to be processed in batches and profiles to be created or updated in the CDP. Batches can be ingested one time for a specific occasion or periodically at a set frequency, for example when another department delivers data files for branding communications.
Use Cases for File Ingestion
Historical Data Ingestion: Offline file ingestion supports ingesting large data sets into the CDP, enabling enterprises to track historical data and analyze patterns over time for strategic business decisions and more personalized marketing engagements.
Cross-Source Data Collection: Offline file ingestion in the CDP supports collecting data from multiple sources, enabling enterprises to consolidate data from external systems such as CRM, e-commerce platforms, customer support systems, social media, and offline interactions, as well as data from different internal teams, so that unified profiles are maintained in the CDP and all data points are centralized. Collecting large data sets from different streams also makes it possible to run campaigns manually with the collected data if automated campaigns fail.
Cross-Source Personalization: Ingesting files into the CDP from multiple streams lets enterprises deliver personalized marketing communications and promotions using user data collected from disparate sources and processed into the CDP via files. This increases your brand's recall value and keeps communication and engagement consistent.
Asynchronous Data Collection: While marketers primarily have direct, real-time sources of customer data flowing into the CDP, such as websites and mobile apps, some data collection opportunities are asynchronous in nature. Large enterprises often have a network of sister business units that collaborate by providing batches of customer data for brand marketing, or they process data from third-party data providers. Each of these cases requires an asynchronous method of custom data ingestion, which is served by Offline Data Sources.
Pre-requisites
Maximum file size: 40 MB
File format: TSV
Required data format in the sheet:
No spaces between columns
No spaces or hyphens ("-") between two words
At least one user identifier must exist in the data for it to be added to the CDP
A minimal validation sketch for these requirements follows this list.
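As a quick sanity check before uploading, a short script can verify a file against these prerequisites. The sketch below is illustrative only and not part of the Lemnisk product; the file name festive_campaign_audience.tsv and the IDENTIFIER_COLUMNS set are assumptions for this example.

```python
import csv
import os

MAX_FILE_SIZE_BYTES = 40 * 1024 * 1024  # 40 MB limit from the prerequisites

# Assumed identifier columns for illustration; your CDP setup may use others.
IDENTIFIER_COLUMNS = {"crmid", "phone", "email"}

def validate_tsv(path):
    """Return a list of problems found in the file; empty means it looks ingestible."""
    problems = []

    if os.path.getsize(path) > MAX_FILE_SIZE_BYTES:
        problems.append("File is larger than 40 MB")

    with open(path, newline="", encoding="utf-8") as f:
        header = next(csv.reader(f, delimiter="\t"), [])  # TSV: tab-separated values

    for name in header:
        if " " in name or "-" in name:
            problems.append("Column name '%s' contains a space or hyphen" % name)

    if not IDENTIFIER_COLUMNS.intersection(header):
        problems.append("No user identifier column found (e.g. crmid or phone)")

    return problems

if __name__ == "__main__":
    for issue in validate_tsv("festive_campaign_audience.tsv"):
        print("Check failed:", issue)
```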
Creating an Offline Data Source
Let's say, as an enterprise, you want to set up a marketing campaign for an upcoming festive season, targeting a set of customers using data gathered from a sister team. Offline file ingestion lets you upload that data into the CDP and run a campaign with the uploaded data on a cycle. The first step is to create a data source to ingest into.
Go to > Data Pipeline > Profile Management > Offline File Management.
Step 1: Click the +Add New button in the right corner of the window.
Step 2: Enter the data source name and click Save
Step 3: Click the data source name, then click + ingest for this category in the right corner of the window that opens.
Step 4: Upload the TSV file and map its columns to the user profile attributes. A sketch of what a valid TSV file looks like follows these steps.
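Before uploading, it can help to see how a valid TSV is produced. The snippet below writes a small tab-separated file with the same columns as the sample data used in this guide; the rows and the file name are made up for illustration only.

```python
import csv

# Illustrative only: made-up rows matching the sample columns used in this guide.
rows = [
    {"crmid": "C1001", "phone": "9198XXXXXX01", "preferences": "credit_card", "slot_num": "3"},
    {"crmid": "C1002", "phone": "9198XXXXXX02", "preferences": "home_loan",   "slot_num": "1"},
]

with open("festive_campaign_audience.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["crmid", "phone", "preferences", "slot_num"],
        delimiter="\t",  # tab delimiter is what makes the file a TSV
    )
    writer.writeheader()
    writer.writerows(rows)
```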
Mapping the data set to attributes
Based on the sample data used in this demonstration, the columns are mapped to CDP attributes as follows.
crmid → crmid
phone → hm
preferences → cross sell product
slot_num → (slot_num doesn't have an existing attribute; it will be configured later)
Make sure you have at least one identifier in the data set while mapping the attributes. While mapping a column to an attribute, indicate whether it is PII data in the drop-down; Lemnisk follows the latest ISO data security regulations for data protection, and PII data is encrypted and managed under specific security protocols.
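Conceptually, the mapping configured in the UI is a column-to-attribute lookup with a PII flag per column. The sketch below only illustrates that structure; it is not a product API, and the is_pii flag and dictionary layout are assumptions for this example.

```python
# Conceptual illustration only: the mapping is configured in the Lemnisk UI,
# not through code. 'is_pii' here is an assumed flag mirroring the PII drop-down.
column_to_attribute = {
    "crmid":       {"attribute": "crmid",              "is_pii": False},
    "phone":       {"attribute": "hm",                 "is_pii": True},   # phone numbers are PII
    "preferences": {"attribute": "cross sell product", "is_pii": False},
    "slot_num":    {"attribute": None,                 "is_pii": False},  # custom attribute added later
}

# At least one mapped column must be a user identifier (e.g. crmid or phone).
identifiers = {"crmid", "phone"}
assert identifiers.intersection(column_to_attribute), "No identifier column mapped"
```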
In the sample data sheet, the fourth column, slot_num, doesn't have an attribute configured in the CDP. Custom attributes are added in the following steps.
Step 1: Click Add Custom Attributes.
Step 2: Enter the attribute name and the display name.
Step 3: Choose the input format.
Step 4: Select the check box to make the attribute available in segments, if required.
Step 5: Select the check box to use the attribute as a macro for personalized communication.
The segment name and macro name should be in upper case with spaces replaced by underscores (for example, SLOT_NUM). A helper sketch for this naming rule follows these steps.
Step 6: Click Save.
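As a small helper for the naming rule above, the snippet below converts a display name into the expected upper-case, underscore-separated form. It is an illustrative sketch, not part of the product; the function name to_macro_name is an assumption.

```python
import re

def to_macro_name(display_name):
    """Convert a display name to the upper-case, underscore-separated form
    expected for segment and macro names (e.g. 'Slot Number' -> 'SLOT_NUMBER')."""
    # Replace runs of spaces or hyphens with a single underscore, then upper-case.
    return re.sub(r"[\s\-]+", "_", display_name.strip()).upper()

print(to_macro_name("Slot Number"))         # SLOT_NUMBER
print(to_macro_name("cross sell product"))  # CROSS_SELL_PRODUCT
```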
File Ingestion
Segment options: This allows you to create a segment here. Select None; a segment will be created separately, or this category will be added to an existing segment.
Upload Options: Select the check box if you want to remove the existing data and add the new data. This ensures that users who were previously ingested into the category aren't targeted by campaigns again.
Send Email to: Enter the email address at which you want to receive updates about your ingestion, then click Upload Profile data. You will receive alerts when the ingestion starts, is processing, and finishes.
A separate Request ID is created for that particular upload once you click the Upload Profile data button.
Upload History
Upload history shows the status of your previous uploads, helping you schedule engagements with the uploaded file.
To view your upload history,
Step 1: Go to > Data Pipeline
Step 2: Select Profile management
Step 3: Click on Upload History
Upload history displays the following information about the status and progress of your upload.
Request ID: Shows the request ID created for that particular upload.
Offline Data Source: Shows the category name
File Name: Shows the name of the uploaded file
Uploaded By: The username of the user who uploaded the file
Status: Shows the status of the ingestion. There are four status types:
Scheduled: When the ingestion is scheduled but has not yet been started.
In Progress: When the ingestion is in progress.
Completed: When the ingestion is completed (including segmentation)
Failed: When the ingestion completely fails and not even a single row is processed.
You will receive updates about the status of your ingestion by email.