Rules and guidelines for mappings in advanced mode
Consider the following rules and guidelines for Databricks Delta objects used as sources and targets in mappings in advanced mode:
When you write data to multiple Databricks Delta targets with the same table and configure different target operations for each target, the mapping throws a concurrent append exception.
When a mapping in advanced mode reads NULL values from the source and updates or upserts a column with a NOT NULL constraint in the Databricks Delta target table, the mapping fails and the Secure Agent does not log an appropriate error message.
When you read data from or write data to Databricks Delta and the source or target object contains 5000 or more columns, the mapping fails.
A mapping in advanced mode configured to read from or write to Databricks Delta fails in the following cases:
Data is of the Date data type and the date is less than 1582-10-15.
Data is of the Timestamp data type and the timestamp is less than 1900-01-01T00:00:00Z.
To resolve this issue, specify the following Spark session properties in the mapping task or in the custom properties file for the Secure Agent:
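The exact properties depend on your environment. As an illustration, the following standard Spark 3.x session properties control the legacy Parquet date and timestamp rebase behavior and are commonly set to handle dates before 1582-10-15 and timestamps before 1900-01-01. Verify the properties that apply to your Databricks runtime before you use them:
spark.sql.legacy.parquet.datetimeRebaseModeInRead=CORRECTED
spark.sql.legacy.parquet.datetimeRebaseModeInWrite=CORRECTED
spark.sql.legacy.parquet.int96RebaseModeInRead=CORRECTED
spark.sql.legacy.parquet.int96RebaseModeInWrite=CORRECTED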
When you convert data from the Float data type to the Double data type or use the create target at run time option, you might encounter data loss.
If a mapping in advanced mode has a source column of the String data type that contains true or false values and a target column of the Boolean data type, the Secure Agent writes null values to the target.
Use the following formats when you import a Databricks Delta source object that contains Boolean, Date, or Timestamp data types with a simple source filter condition, as shown in the example after this list:
Boolean = 0 or 1
Date = YYYY-MM-DD HH24:MM:SS.US
Timestamp = YYYY-MM-DD HH24:MM:SS.US
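For example, a simple source filter on illustrative columns named IS_ACTIVE, ORDER_DATE, and UPDATED_TS might use values in the following formats. The column names and values are assumptions shown only to demonstrate the formats:
IS_ACTIVE = 1
ORDER_DATE = 2023-08-15 00:00:00.000000
UPDATED_TS = 2023-08-15 14:30:45.000000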
You cannot use the following features:
View
Multipipe
After you create and run a mapping configuration task, it is recommended to shut down the job cluster. If you modify a mapping task or edit the connection linked to a mapping task, metadata is fetched again and the job cluster restarts.
When you do a data type conversion from Date or Timestamp to String, the Secure Agent writes the value only in the following default format for both Date and Timestamp:
MM/DD/YYYY HH24:MI:SS
When you do a data type conversion from String to Date or Timestamp, the String value must be in the following format:
MM/DD/YYYY HH24:MI:SS
To use any other format, you must specify the format in the advanced session property of a mapping task for successful conversion. Null is populated in the target for the unmatched format.
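For example, the Secure Agent converts a String value such as 09/15/2023 14:30:45 to the corresponding Date or Timestamp value. The sample value is illustrative.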
When you do a data type conversion from Bigint to Double, the target data is written in the exponential format.
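For example, a Bigint value of 12345678901234 is written to the Double target column as 1.2345678901234E13. The sample value is illustrative.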
When you perform an update, upsert, or data driven operation with an IIF condition that includes DD_DELETE or DD_UPDATE, ensure that the update column that you specify does not contain duplicate values. Otherwise, the mapping fails with the following error:
java.lang.UnsupportedOperationException: Cannot perform MERGE as multiple source rows matched and attempted to update the same target row in the Delta table.
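As an illustration, the following data driven condition updates rows where a hypothetical STATUS column contains the value ACTIVE and deletes the remaining rows. CUST_ID is an illustrative update column that must uniquely identify each source row to avoid this error:
IIF(STATUS = 'ACTIVE', DD_UPDATE, DD_DELETE)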
When you perform an insert, update, or upsert operation, or use DD_UPDATE, and the range of the data in the source column is greater than the range of the target column, the mapping does not fail but data truncation occurs.
When you specify a single constant in the data driven condition, the mapping ignores the data driven condition and the Secure Agent performs the insert, update, or delete operation based on the constant.
For example, if you specify the data driven condition as DD_INSERT, the mapping does not consider the update columns and depends on the Write Disposition property.
When you specify a single constant with an IIF condition in the data driven condition, such as IIF(COL_INT > 20, DD_UPDATE), the Secure Agent inserts the data into the target even for rows that do not satisfy the condition.
When you specify the DD_REJECT constant in the data driven condition, the Secure Agent does not log the rejected rows in the error file or the session log.
You can run mappings with hierarchical data types only on a Linux system.
You cannot configure mappings with hierarchical data types if the source and target columns have special characters.
When you configure mappings with nested statements that contain hierarchical data types, the mapping fails if the nested field names contain the following characters: