The Deliver Data step lets you send updated or newly collected data from your workflow back to connected systems. It provides flexibility to write data to multiple destinations, giving you full control over where and how information flows.
This step is especially useful when your workflow involves more than one system. For example, you might want to:
- Send updated patient information from your EHR to a CRM
- Write data from a consent form to both your internal database and Salesforce
- Pass records from one Salesforce org to another for cross-org collaboration
By adding Deliver Data to your workflow, you can make sure information moves where it’s needed, without extra manual work.
Note: This step is commonly used alongside the Data Search step, which identifies the records you want to act on.
Capabilities
With the Deliver Data step, you can:
- Write data back to the original source with automatic field mapping
- Send data to additional destinations from your connected datasets
- Configure multiple Deliver Data steps in a single workflow
- Map specific fields to specific destinations
- Use UPSERT to intelligently update existing records or create new ones (for supported data sources)
All Deliver Data operations are automatically logged to Streamline's audit trail, capturing execution details for compliance and troubleshooting purposes.
How to Use the Deliver Data Step
Step 1: Add the Deliver Data Step
Add the Deliver Data step to your workflow.
Step 2: Choose Data to Send
Click Add Dataset to choose and configure the data to send to the data source.
Select a dataset as the source of the data you want to send. If you already have a dataset in use within your workflow, it will appear first as an option. You can choose to automatically map or map manually from this dataset.
If there’s no dataset currently on the workflow, you’ll see a list of all available datasets in your account, which you can select and map manually.
Step 3: Map Fields & Select a Record Action
After choosing your preferred dataset option, click Map Manually or Automap to continue.
On the next screen, you’ll see four record action options:
- Create a new record – Use this when you want to add a brand-new entry to your destination dataset (e.g., creating a new patient record in your EHR).
⚠️ When choosing this option, any required fields in the destination dataset must be mapped for the record to be created successfully.
- Update a record (but not create) – Use this when you want to update an existing record, but do not want the system to create a new one if no match is found (e.g., updating a contact’s email address in your CRM).
- Upsert (create if none found, otherwise update) – Use this option when you want the system to check for an existing record and update it if found; if no match exists, a new record will be created automatically.
- Skip (do not update or create) – Select this option if you want to exclude certain records from being created or updated, often used for conditional logic within workflows.
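The four record actions above can be summarized in a small sketch. This is illustrative only; the function name and values are hypothetical and not part of Streamline's API.

```python
def resolve_action(action, existing_record):
    """Return what a Deliver Data step would do for one record.

    action: one of "create", "update", "upsert", "skip"
    existing_record: the record found by a prior Data Search, or None
    """
    if action == "skip":
        return "no-op"                   # record is excluded entirely
    if action == "create":
        return "create"                  # all required fields must be mapped
    if action == "update":
        # Update never creates: with no match, nothing is written
        return "update" if existing_record else "no-op"
    if action == "upsert":
        # Upsert updates a match, otherwise falls back to creating
        return "update" if existing_record else "create"
    raise ValueError(f"unknown action: {action}")
```

Note how Update and Upsert differ only in the no-match case: Update silently does nothing, while Upsert creates the record.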
After choosing, you can map entity fields (left) to values (right). For values, you can use any fields from the data panel to write back to your data source.
Example: If you want to update the contact on a record, map the entity field (Contact) to the value containing the updated information.
Tip: You can add additional mappings if needed. To manage your dataset, click the three dots beside your selected dataset at the top. From there, you can clear all mappings or delete the dataset entirely.
Step 4: Finalize Setup
Once your mappings are configured, click the X to finish your setup.
Connecting to a Data Search Step
When using Update or Upsert actions, the Deliver Data step needs to know which record to modify. This is where the Data Search step comes in.
How It Works
- Add a Data Search step earlier in your workflow to find the record(s) you want to update
- Add your Deliver Data step after the Data Search step
- When configuring the Deliver Data step, select the dataset that's linked to your Data Search results
- The Deliver Data step will use the record(s) returned by your search
Important Timing Consideration
The Deliver Data step uses data from the moment the Data Search was performed, not real-time data at the moment of write-out.
Example: If your Data Search returns no record, but a form step takes several days to complete, someone else might create that record in the meantime. When the Deliver Data step runs with Upsert enabled, it may fail because it's trying to create a record that now already exists.
Workaround: For workflows with long wait times between search and write-out, consider adding a second Data Search step immediately before the Deliver Data step to ensure you're working with the most current data.
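The workaround above amounts to searching again just before writing. A minimal sketch of that pattern, with `find_record` and `write_record` standing in for a Data Search and a Deliver Data write (hypothetical names, not real APIs):

```python
def deliver_with_fresh_search(find_record, write_record, key, fields):
    """Re-run the search immediately before writing so the upsert
    decision uses current data rather than an earlier search's result."""
    existing = find_record(key)              # just-in-time second Data Search
    return write_record(existing, fields)    # upsert against the fresh result
```

Because the search runs immediately before the write, a record created by someone else during a long wait (for example, while a form sits unfinished) is found and updated instead of triggering a duplicate-create failure.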
Note: The Deliver Data step only supports the Data Search step for updates. Legacy Activation steps are not supported.
Known Limitations
General Limitations
- No built-in filtering or transformation options within the Deliver Data step itself; data transformations must be handled in earlier workflow steps
- Deliver Data destinations are limited to systems accessible through connected Datasets
- Conditional logic must be configured using workflow branching, not within the Deliver Data step
- Empty field values are saved as empty strings, not as null values; some destination systems may handle these differently
- If a field is not received from a previous step, it will not be sent to the destination even if it was mapped
- Only destination tables with primary keys are supported. If you need support for tables without primary keys, you can request it by clicking "Submit Idea" on the top right corner of our product roadmap page.
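Two of the limitations above (empty strings instead of nulls, and unmapped fields being dropped when absent upstream) can be sketched as payload rules. This is a hypothetical illustration, not Streamline's serialization code:

```python
def build_payload(mapped_fields, incoming):
    """Model how mapped field values reach the destination:
    fields missing from the incoming step data are omitted,
    and None values become empty strings rather than nulls."""
    payload = {}
    for field in mapped_fields:
        if field not in incoming:
            continue                                       # never sent downstream
        value = incoming[field]
        payload[field] = "" if value is None else value    # empty string, not null
    return payload
```

If your destination system treats empty strings and nulls differently, account for that in an earlier transformation step.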
Data Structure Limitations
Complex parent-child relationships with multiple children of the same type (e.g., an Account with Contact A and Contact B) are not directly supported in a single Deliver Data step.
Workaround: Use multiple Deliver Data steps, creating each child record separately and linking them to the parent.
"No Data" scenarios (where all submitted fields are empty) are not supported; the system will not create objects when no field data is present.
Rollback Behavior
If any record in a write operation fails, all records in that operation are rolled back; partial updates are not supported. This means either all records succeed, or none do.
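The all-or-none behavior can be modeled as a snapshot-and-restore over the batch. This is a conceptual sketch only, assuming an in-memory dictionary store; it is not how Streamline or the destination systems implement transactions:

```python
def write_batch(store, records):
    """Apply every record or none: any failure rolls the batch back."""
    snapshot = dict(store)                   # state before the write begins
    try:
        for rec in records:
            if "id" not in rec:
                raise ValueError("record missing primary key")
            store[rec["id"]] = rec
    except ValueError:
        store.clear()
        store.update(snapshot)               # roll back every record in the batch
        return False
    return True
```

A single bad record undoes the whole batch, which is why one malformed row can make an otherwise valid 500-record Salesforce delivery appear to write nothing.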
System-Specific Limits
Salesforce
- Maximum of 500 records can be created or updated per Dataset in a single Deliver Data operation
- Uses all-or-none behavior: if one record fails, all records in that operation are rolled back
Azure SQL and PostgreSQL
- No limit on the number of records per operation
- Uses transactional behavior: if one record fails, all records in that operation are rolled back
System Compatibility
- Only Datasets with write capabilities enabled through Data Fabric can be selected as destinations
- Write functionality depends on the capabilities of the destination system
- Some data sources are read-only; others support updates only (no new record creation)
- See documentation for each data source to learn about specific capabilities
Special Behaviors
Orphan Record Handling
The Deliver Data step allows orphan records by default:
- If a parent record is skipped, the system will still attempt to create child records
- If a child record is skipped but a parent exists, the parent will be created without an association to that child
To prevent orphan records: Mark the data source as required, or ensure required fields are mapped for both parent and child entities.
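The default orphan behavior can be sketched as an independence rule between parent and child writes. Hypothetical illustration only:

```python
def plan_writes(parent_action, child_action):
    """Model the default orphan handling: skipping one record
    does not prevent the other from being written."""
    writes = []
    if parent_action != "skip":
        writes.append("parent")          # created even if its child is skipped
    if child_action != "skip":
        writes.append("child")           # created even without a parent (orphan)
    return writes
```

Marking the data source as required effectively makes the child write conditional on the parent existing, preventing the orphan case.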
Primary Key Handling
If you manually map a Primary Key value and also have a Data Search linked to the same dataset, the manually mapped value takes precedence.
To use the Primary Key from a Data Search result instead: Remove the manually mapped Primary Key value from your field mappings.
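The precedence rule above can be stated in a few lines. This is a hypothetical sketch of the documented behavior, not a real API:

```python
def resolve_primary_key(mapped_pk, search_result):
    """Pick the primary key a write will target: a manually mapped
    value wins over the record returned by a Data Search."""
    if mapped_pk is not None:
        return mapped_pk                     # manual mapping takes precedence
    if search_result is not None:
        return search_result.get("id")       # fall back to the searched record
    return None
```

Removing the manual mapping is therefore the only way to let the Data Search result drive which record is updated.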
Troubleshooting
Deliver Data Step Fails to Execute
Symptoms: Workflow completes but data is not written to the destination
Common Causes:
- Destination system is unavailable or experiencing downtime
- API credentials for the destination Dataset have expired or changed
- Connected credential lacks write permissions on the destination system
- Destination system rate limits have been exceeded
Resolution:
- Check the workflow execution log for specific error messages
- Verify the destination Dataset connection is active in Data Fabric
- Confirm connected credential permissions on the destination system
- Review audit logs for delivery attempts and failure reasons
No Records to Update Error
Symptoms: Error appears when trying to update a record
Common Causes:
- The record you're trying to update doesn't exist
- The Data Search step didn't return any matching records
Resolution:
- Verify the record exists in the destination system
- If the record may not exist, consider using Upsert instead of Update, or use Create
Field Mapping Issues
Symptoms: Data is written but fields appear in wrong columns or are missing
Common Causes:
- Field mapping was not configured correctly
- Destination column names have changed since mapping was created
- Data types are incompatible between source and destination
Resolution:
- Open the Deliver Data step configuration and review field mappings
- Verify destination column names match current system schema
- Check that data types are compatible (e.g., not sending text to a number field)
- Update mappings as needed and test
UPSERT Not Working as Expected
Symptoms: New records are created instead of updating existing ones, or vice versa
Common Causes:
- Unique identifier field is not properly configured
- Matching logic on destination system differs from expectations
- Data Search was performed too long ago and data has changed
Resolution:
- Verify the destination system supports UPSERT functionality
- Confirm the unique identifier field is correctly mapped
- Review destination system documentation for matching criteria
- Add a Data Search step immediately before the Deliver Data step to use current data
- Consider splitting into separate Create and Update workflows if UPSERT is not supported
Summary
The Deliver Data step gives you a clear and flexible way to send workflow data wherever it’s needed. By combining automatic defaults with full manual control, you can:
- Deliver data to multiple or entirely new destinations
- Send information between systems like EHRs, CRMs, or even from one Salesforce org to another
- Save time with smart auto-mapping
- Align records accurately across external systems
- Ensure compliance with your organization’s policies
Use it with the Data Search step to identify the right records and complete your end-to-end workflow.