Offline File Ingestion

File ingestion plays a critical role in enterprise marketing stacks by enabling customer data to be processed in batches to create or update profiles in the CDP. Batches can be ingested one time for a specific occasion, or periodically at a set frequency, for example when an enterprise receives data files from another department for its branding communications.

Use Cases for File Ingestion

  1. Historical Data Ingestion: Offline file ingestion supports loading large historical data sets into the CDP, enabling enterprises to analyze patterns over time, make strategic business decisions, and run more personalized marketing engagements.

  2. Cross-Source Data Collection: Offline file ingestion lets enterprises consolidate data from external sources, such as CRMs, e-commerce platforms, customer support systems, social media, offline interactions, and internal teams, so that unified profiles are maintained in the CDP and all data points are centralized. Data collected from different streams can also be used to run campaigns manually as a fallback when automated campaigns fail.

  3. Cross-Source Personalisation: Ingesting files from multiple streams lets enterprises personalize marketing communications and promotions using user data collected from disparate sources and processed into the CDP via files. This improves your brand's recall and keeps communication and engagements consistent.

  4. Asynchronous Data Collection: While marketers primarily rely on direct, real-time sources of customer data flowing into the CDP, such as websites and mobile apps, some data collection is asynchronous by nature. Large enterprises often have a network of sister business units that contribute batches of customer data for brand marketing, or receive data from third-party providers. Each of these requires an asynchronous method of custom data ingestion, which Offline Data Sources provide.

Pre-requisites

  • Maximum file size: 40 MB

  • Required file format: TSV (tab-separated values)

  • Required data format in the sheet:

    • No spaces between columns

    • No spaces or hyphens ("-") between words in column names (for example, slot_num)

  • At least one user identifier must exist in the data for profiles to be added to the CDP
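A minimal pre-flight check for these prerequisites can be sketched in Python before uploading. This is an illustrative sketch, not part of the product: the 40 MB limit and TSV format come from the list above, while the identifier column names (crmid, phone) are assumptions taken from the sample data later in this page.

```python
import csv
import os

MAX_SIZE_BYTES = 40 * 1024 * 1024  # 40 MB limit from the prerequisites
IDENTIFIER_COLUMNS = {"crmid", "phone"}  # assumed identifier names for illustration

def validate_tsv(path):
    """Return a list of problems found in the file; an empty list means it looks OK."""
    problems = []
    if os.path.getsize(path) > MAX_SIZE_BYTES:
        problems.append("file exceeds 40 MB")
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        header = next(reader, [])
    # Column names must not contain spaces or hyphens
    for col in header:
        if " " in col or "-" in col:
            problems.append(f"invalid column name: {col!r}")
    # At least one user identifier must be present
    if not IDENTIFIER_COLUMNS.intersection(header):
        problems.append("no identifier column (e.g. crmid or phone) found")
    return problems
```

Running this against a candidate file before upload catches formatting issues early, instead of waiting for a failed ingestion email.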

Creating an Offline Data Source

Let's say, as an enterprise, you want to set up a marketing campaign for an upcoming festive season targeting a set of customers with data gathered from a sister team. Offline file ingestion lets you upload that data source into the CDP and run a campaign with the uploaded data on a set cycle. The first step is to create a data source to ingest into.

Go to > Data Pipeline > Profile Management > Offline File Management.

Step 1: Click the +Add New button in the right corner of the window.

Step 2: Enter the data source name and click Save.

Step 3: Click on the data source name, then click + ingest for this category in the right corner of the next window.

Step 4: Upload the TSV file and map its columns to the attributes of the user profile.

Mapping the data set with attributes

Using the sample data in this demonstration, we map the file columns to CDP attributes as follows.

Columns in the file    Attributes in CDP
crmid                  crmid
phone                  hm
preferences            cross sell product
slot_num               (slot_num doesn't have an existing attribute; it will be configured later)
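To illustrate, a file matching this column layout could be produced with Python's csv module using a tab delimiter. The row values below are invented for illustration only; the headers follow the naming rule from the prerequisites (no spaces or hyphens).

```python
import csv

# Column headers match the mapping table: no spaces or hyphens in names
HEADER = ["crmid", "phone", "preferences", "slot_num"]

# Invented sample rows for illustration
rows = [
    ["C1001", "9876543210", "credit card", "3"],
    ["C1002", "9123456780", "home loan", "1"],
]

with open("offline_upload.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(HEADER)
    writer.writerows(rows)
```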

Make sure the data set contains at least one identifier while mapping the attributes. While mapping a column to an attribute, mark whether it is PII data in the drop-down: Lemnisk follows the latest ISO data security regulations for data protection, and PII data is encrypted and managed under specific security protocols.

In the sample data sheet, the fourth column, slot_num, doesn't have an attribute configured in the CDP. We add a custom attribute for it in the following steps.

Step 1: Click Add Custom attributes.

Step 2: Enter the attribute name and the display name.

Step 3: Choose the input format.

Step 4: Select the check box to make the attribute available in segments, if required.

Step 5: Select the check box to use it as a macro for personalized communication.

The segment name and macro name should be in upper-case snake_case; that is, all upper case with spaces replaced by underscores (for example, SLOT_NUM).

Step 6: Click Save.
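The naming rule above (all upper case, underscores instead of spaces) can be sketched as a small helper; the function name is an assumption for illustration, not part of the product.

```python
def to_macro_name(display_name: str) -> str:
    """Convert a display name to the required segment/macro form:
    all upper case with spaces replaced by underscores (snake_case)."""
    return "_".join(display_name.strip().split()).upper()
```

For example, the display name "slot num" becomes the macro name SLOT_NUM.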

File ingestion

Segment options: This allows you to create a segment here. Select None; we will create a segment separately or add this category to an existing segment.

Upload Options: Select the check box if you want to remove the existing data and add new data. This ensures that users who were previously ingested in the category aren't triggered with campaigns again.

Send Email to: Enter the email address at which you want to receive updates about your ingestion, then click Upload Profile data. You will receive alerts when the ingestion starts, is processing, and finishes.

A separate Request ID is created for each upload once you click the Upload Profile data button.

Upload History

Upload history shows the status of your previous uploads, to help you schedule engagements with the uploaded file.

To view your upload history,

Step 1: Go to > Data Pipeline

Step 2: Select Profile management

Step 3: Click on Upload History

Upload history displays the following information about the status and progress of your upload.

  1. Request ID: The request ID created for that particular upload.

  2. Offline Data Source: The category name.

  3. File Name: The name of the file that was uploaded.

  4. Uploaded By: The username of the user who uploaded the file.

  5. Status: The status of the ingestion. There are four possible statuses:

    1. Scheduled: The ingestion is scheduled but has not yet started.

    2. In Progress: The ingestion is in progress.

    3. Completed: The ingestion is completed (including segmentation).

    4. Failed: The ingestion failed completely and not a single row was processed.

You will receive updates about the status of your ingestion by email.


Last updated 2 months ago

Ingestion Report: Once the ingestion is completed, its status is sent to you by email. You can also download the report by clicking the icon below the ingestion report.

Demo: Creating offline data source and mapping data sets with attributes
Image: Sample Data
GIF: File ingestion
Image: Upload History