Q: My data export is missing records!
A: If a Data Source (from a target system, task or job output, manual data source, etc.) is managed with GDPR controls, the number of records exported may be reduced according to the GDPR settings. Check with your Openprise administrators, or go to Administration > GDPR controls for more information.
Q: Why is my Google Sheet not importing?
A: If you have a Google Sheet or CSV file on Google Drive that is not importing, check the following:
- The Data Source type must match the file type. For example, you cannot have a Data Source type of CSV and import a Google Sheet.
- The file name must be a valid file name when downloaded by Openprise on our servers. Therefore, avoid the following characters: ? : " * | / \ < >
- If you are using Data Source type of Google Sheets, and you upload an Excel file, you must first open the file with Google Sheets. Openprise does not recognize Excel files on your Google Drive.
Q: What are best practices when creating a data source using CSV?
A: For the easy and clean import of a CSV file, use the following guidelines:
- Avoid html escape sequences (these often show up in notes fields).
- Double your double quotes: i.e. his "gear" should become his ""gear"".
- Avoid angle brackets <>.
- Replace all tabs with commas (if a field separator) or blank (if the tab is part of a text string field).
- TSV files sometimes omit double quotes around string values, but it is best to include them: double quotes should surround every non-numeric value.
- Remove all control characters. One method of doing this is to load the CSV file into Google Sheets and save it as a sheet. Google Sheets usually strips the control characters for you.
- Avoid opening CSV files with Excel, because by default Excel does not handle multibyte characters properly. Excel also has a propensity for stripping leading zeros from numeric-looking fields (such as zip codes).
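As an illustration of the quoting guidelines above, Python's standard csv module produces exactly this doubled-quote escaping when asked to quote every field. This is a sketch of the CSV convention, not an Openprise feature:

```python
import csv
import io

# A field containing embedded double quotes: his "gear"
rows = [["note"], ['his "gear"']]

buf = io.StringIO()
# QUOTE_ALL wraps every field in double quotes; embedded double
# quotes are escaped by doubling them, per the CSV convention
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerows(rows)
print(buf.getvalue())
```

The second data row comes out as `"his ""gear"""`, which is the form Openprise expects on import.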
Q: I have a data file containing double-byte characters, how can I open it?
A: If you have your own file, or have downloaded one of our Open Data files that has double-byte characters, use the following table when opening it so the double-byte characters are correctly handled.
|File format|Google Sheets|Excel|
|---|---|---|
|Excel|Opens correctly|Opens correctly (provided it was saved in the proper format). If you download an open data set from Openprise in Excel format, it is saved correctly.|
|CSV|Opens correctly|Instruct Excel to open the file using UTF-8 encoding; when saving, likewise instruct Excel to save using UTF-8 format.|
Q: When defining a Data Source, what is the Primary Timestamp needed/used for?
A: The primary timestamp is used for charting any time series data.
Q: When defining a Data Source, what does “Archive this Data Source” do?
A: If this box is checked, Openprise will archive the original data source for the specified number of days, up to 7 days.
Q: Does Openprise recommend a dedicated API User for Marketo and Salesforce?
A: Yes, we do recommend a dedicated API User for tracking purposes. In Marketo, there is no charge for an additional API user. For Salesforce there may be a charge for an additional license.
Q: When importing Marketo records into Openprise, how many API calls are used? (Our license allows only so many API calls per day and I don’t want to use them all up and affect other processes.)
A: With bulk API, Marketo returns 300 records per API call. There is some overhead so if you have 250,000 records in Marketo that you want to import into Openprise, it will take about 1,000 API calls. However, due to Marketo API limitations, if you have deleted many records over the life of your Marketo instance, Openprise may use more API calls than described above due to the “holes” in Marketo Lead ID numbers.
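As a back-of-the-envelope check of the figures above (a sketch using the 300-records-per-call figure; actual overhead and Lead ID "holes" will push the real number higher):

```python
import math

records = 250_000          # leads to import from Marketo
records_per_call = 300     # records returned per bulk API call
base_calls = math.ceil(records / records_per_call)
print(base_calls)          # 834 before overhead; budget roughly 1,000 in practice
```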
Q: When updating Salesforce records, how many Bulk API calls are used?
A: Although the exact numbers vary somewhat, Openprise uses approximately 1 API call per 500 records updated.
Q: Why are my newly added attributes in Salesforce not being imported?
A: In order to add new fields to your Data Source, you must go to the Data Source configuration screen > parse screen (which updates the schema in Openprise based on the Salesforce configuration with the new fields) and then to the map screen. On the map screen, make sure to check the box for the new attributes to have them included in your data source. The next import will import the newly added fields for those records pulled in during the import. Note that to have the new attribute pulled in for all records in the data source during the next import, you will have to first purge the data source and then re-import all records.
In some cases, data isn’t imported from Salesforce because user credentials associated with the import data source do not have permission to access the missing data. In this case, contact your Salesforce administrator to obtain the correct access.
Q: Is there any benefit for using Google Sheets vs. CSV file for input source file type?
A: Google Sheets has a few advantages over CSV:
- It’s easy to update the file from your browser without going through the save-as-CSV step.
- Google Sheets handles multibyte, international characters better than CSV.
- You can put in formulas to do calculations or additional operations.
Q: How many fields or attributes can I select when creating a Data Source?
A: This varies by vendor. Best practices are to select the attributes you are likely to need, and not blindly import all of them. Many companies have systems of record with legacy attributes that will never be needed in Openprise.
- Marketo – There is a limit of 200 fields that can be imported into an Openprise Data Source. (Note: this is a limitation imposed by Marketo and cannot be changed.) If you need more fields imported, consider creating an additional data source in Openprise to accommodate all the fields you need.
- Salesforce – no limit
- Eloqua – no limit
- Dynamics – no limit
- Pardot – no limit
Q: How many API calls are used when updating Marketo records?
A: The update process is done in bulk and is limited by the file size of the bulk upload. Therefore, there is no easy rule of thumb to calculate API usage because it depends on the number of records being updated and the number of fields updated for those records. If you are concerned about using too many API calls for your updates you can:
- Establish an API Quota to make sure Openprise limits the number of API calls used per day. Go to Administration > System Settings > Quota to define a Quota for Marketo, or
- Use filters to limit the number of updates done during one job run.
Q: How do I fix errors when writing to a Redshift database?
A: When writing to a Redshift database table, you need to grant Openprise permission to write to the table. If your permissions aren’t set up properly, the error message received will look similar to this: “Error while executing copy from bucket for ruleId:dtId: 82762-555;error: permission denied for relation export_test”. To fix the problem, have your Redshift administrator apply the commands below:
GRANT ALL PRIVILEGES ON TABLE <table_name> TO <user>;
GRANT CONNECT ON DATABASE <DBNAME> TO <USER>;
-- This assumes you're actually connected to your database
GRANT USAGE ON SCHEMA <SCHEMA> TO <USER>;
GRANT SELECT, UPDATE, INSERT, DELETE ON <TABLENAME> TO <USER>;
Q: Why do I see null values in Salesforce for Annual Revenue but “0” in Openprise?
A: In Salesforce, you can see both null and “0” values for Annual Revenue (or any other real number). However, the Salesforce API call used to retrieve data returns “0” in place of any null values. Therefore, the values shown in Salesforce and Openprise may differ.
Q: How do I blank out an attribute?
A: To blank out an attribute, use the Remove Junk task and check all the boxes: Pure number, Pure text, Mixed number and text, IP address, Boolean. The result will be an attribute that has no value, or is effectively “blank”.
Q: Why can’t I add a task at the end of some of my jobs?
A: Note: This behavior has changed with the product release on March 30, 2018.
With the latest release, you can have multiple export-type tasks in one job. For example, if you are adding (inserting) new leads to Salesforce, you can also add the new leads to a Salesforce campaign in the same job.
With the previous product version, you were limited to one export-type task per job.
Q: When using a Reference Data source to pull in additional data, should I use the Simple Replacement / Normalization task or the Infer task?
A: Simple Replacement is a simplified version of the Infer Task. The Infer Task allows for multiple matching attributes while Simple Replacement allows for only one matching attribute. So you can use either task depending on your requirements.
Q: When using a Reference data source, sometimes I can select a priority for conflict resolution, and sometimes I can’t. Why?
A: The Reference Data Source that you select must include a priority column (whole number type) in order to use it for conflict resolution, and not all Reference Data sources have that column.
Q: Can I delete an attribute I’ve added, or can I rename it?
A: No, we do not currently allow the renaming of an attribute added to a Job. However, you can remove all unused attributes in a Job by selecting Edit Tasks > Job Attributes > Remove All Unused Attributes. You can also see attribute usage by selecting Edit Tasks > Job Attributes > Show Attribute usage.
Q: What does the recycle symbol mean on a task within a job?
A: This indicates that the task, or a preceding task in the job, has changed and the task needs to be run again. Another way to look at it: the data contained in the Data Source is effectively “dirty” and needs to be run through the task again.
Q: Can I create a filter for a task with nesting, i.e. add “parentheses”?
A: Yes, use the Add Filter Group button from the Task/Filter screen. This assumes the last task in the list is part of the group, so you may have to play around with the order of the tasks (by deleting them and re-entering them in a different order) to get the results you want. We currently support one level of grouping.
Q: How do I update a multi-select picklist in Salesforce?
A: In Salesforce, you can create a field that is a multi-select picklist. To update these fields, use a text single-value attribute in Openprise that contains your picklist values separated by semicolons. If you have a text multi-value attribute in Openprise that contains your data, you can use the Change attribute type task to convert to a text single-value attribute. Remember to specify “;” as the separator.
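As a minimal sketch of the value format Salesforce expects for a multi-select picklist update (the attribute values below are hypothetical):

```python
# Hypothetical multi-value attribute contents from Openprise
campaign_sources = ["Webinar", "Trade Show", "Email"]

# Salesforce expects one semicolon-separated string for
# multi-select picklist fields
picklist_value = ";".join(campaign_sources)
print(picklist_value)  # Webinar;Trade Show;Email
```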
Q: When using Google Places to obtain address information, how many API calls are used per record appended?
A: Google’s API usage varies, but a good rule of thumb is that each record in Openprise used for a Google Places call consumes from 5 to 15 API calls. Since Google has a threshold of 10,000 free API calls per day, you can expect to append at most about 2,000 records per day, and often fewer. We suggest you establish a daily quota so your usage doesn’t exceed the desired threshold. (Note: Google can change their pricing at will, so please carefully read their pricing documentation when implementing Google Places in Openprise.)
Q: I want to check records during list import to see if they’re already in my data target. Should I use De-duplication and Merge, De-duplication and Merge against Master, or the Infer task?
A: The task to use depends on your situation and the results you want to achieve. The following guidelines may be helpful:
- If you do not want to merge any data, then use the Infer task.
- If you are looking for duplicates within your import list, use the De-duplication and Merge task.
- If you are looking for duplicates in another data source (e.g. Salesforce, Marketo, etc.) AND want to merge data, then use the De-duplication and Merge against Master Data task.
If you use either the De-duplication and Merge task or the De-duplication and Merge against Master Data task, your output data set from the task will include new attributes you can use to determine which records are the Surviving records and which are the duplicates. See the individual task help pages for details on using these fields.
Q: When adding data to Marketo, Salesforce, etc., I can see the data has loaded before the job has finished. Why the delay?
A: We typically use a bulk load feature for the target data source, and are reliant on the target platform (Marketo, Salesforce, etc.) to notify us that the bulk load has completed. Often there is a delay between the time the data has been received and the time we receive the completion notice. Note that you’ll see this same delay if you do bulk operations in the target platform outside of Openprise.
Q: How does the option to Copy unmatched input data to output work?
A: Let’s look at the Infer task. The inferred value is determined by matching an attribute in your input data source to a value in the Reference Data source. The Copy unmatched input data to output option determines what to do if the attribute does NOT have a match in the Reference Data source. If you want the attribute blanked out, leave the box unchecked. If you want to keep the original value, then check the box.
Q: For the Infer task, what does the “Advanced Configuration Allow match on blanks [ ] for:” do?
A: This option is usually left unchecked because you usually do NOT want to match attributes with blank values to the Reference Data source. If you do check this box, you’ll need to know what value will be inferred when a match on blank is detected. (You can view the Reference Data source to see what values will match to blanks.)
Q: When should I just run a Job, and when do I need to purge and run?
A: When a Job is run, it only processes new and updated data from the input Data Source, not the entire data source (unless it is being run for the first time). Purging empties out all intermediate task outputs as if the Job had never been run. Purging is required if you want the entire data source to be reprocessed, not just new and updated input records. For example, if a reference data source has changed and you would like to reprocess all the records according to the updated reference data, you should purge and run.
Let’s use country normalization as an example. Say originally you decided to normalize country using the Alpha ISO-2 country codes. Then later, you decide to have all countries spelled out in full, using their English name. Purging and rerunning the job will change all countries to be spelled out. Without purging, only new and updated records will be updated to have country spelled out.
When you are uploading data to a target system, it is a good practice to purge the job before running. By doing this, you’ll be certain all records will be sent to the target system.
Q: For the Extract Domain task – what is the difference between full or root domain?
A: Given a domain/URL of marketing.mycompany.com:
- Full domain extraction is: marketing.mycompany.com
- Root domain extraction is: mycompany.com
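Outside Openprise, the same distinction can be sketched in a few lines of Python. This is a naive illustration: it assumes the root domain is simply the last two labels, which breaks on multi-part suffixes such as .co.uk (robust extraction requires the Public Suffix List):

```python
def extract_domains(hostname):
    """Return (full_domain, root_domain) for a hostname.

    Naive sketch: treats the root domain as the last two labels.
    Multi-part suffixes like .co.uk need the Public Suffix List.
    """
    parts = hostname.lower().strip(".").split(".")
    return hostname, ".".join(parts[-2:])

print(extract_domains("marketing.mycompany.com"))
# ('marketing.mycompany.com', 'mycompany.com')
```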
Q: What method do you use for fuzzy matching?
A: We use the Levenshtein Distance Algorithm with a maximum of 2 edits. The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.
For example, the Levenshtein distance between “kitten” and “sitting” is 3, since the following three edits change one into the other, and there is no way to do it with fewer than three edits:
- kitten → sitten (substitution of “s” for “k”)
- sitten → sittin (substitution of “i” for “e”)
- sittin → sitting (insertion of “g” at the end)
Within Openprise, you can control the minimum length of a string to be matched. The default is 3 characters meaning that any string of 2 or fewer characters will not use fuzzy matching.
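For illustration, here is a standard dynamic-programming implementation of Levenshtein distance, with a wrapper that mirrors the behavior described above (a 2-edit maximum and a 3-character minimum string length). This is a sketch of the general algorithm, not Openprise's internal code:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def fuzzy_match(a, b, max_edits=2, min_len=3):
    # Strings shorter than min_len fall back to exact comparison,
    # mirroring the default described above
    if len(a) < min_len or len(b) < min_len:
        return a == b
    return levenshtein(a, b) <= max_edits

print(levenshtein("kitten", "sitting"))  # 3
```

With a maximum of 2 edits, "kitten" and "sitting" (distance 3) do not fuzzy-match, while "color" and "colour" (distance 1) do.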
Q: Please explain what happens when you select the checkbox to “Copy unselected data from the input Data Source to the output Data Source without modification” when creating tasks.
A: When this box is checked, any record that is part of the input data source that does not meet the filter criteria will be written to the task output data source, along with the records that do meet the filter criteria. If you uncheck this box, only the input data source records that meet the filter criteria will be part of the task’s output data source.
Typically, when doing lead cleaning, you’ll want to leave this box checked. However, if you are reducing the input data set using a filter and want to process only those records, leave the box unchecked. An example is to create a filter to select all non-surviving records after a De-duplication and Merge task, add a “reject” reason to an attribute field, and then write the records to an export data file. In this case, you only want to continue processing the filtered records, since the goal is to create an output file containing only non-surviving records.
Q: What task can I use to add a timestamp to an attribute?
A: Use a Classification and Tagging task and an attribute with type Date. There is a checkbox to fill the attribute with the date and time of the task evaluation.
Q: How can I fix zip codes with the leading zero stripped off?
A: Excel has an undesirable habit of stripping leading zeros from many Postal Codes when opening spreadsheets. This automated change is often overlooked and the altered data gets loaded into your system of record. If your data source ends up with this problem, use the following logic to normalize this data for US postal codes:
- Create a new task in your job, and use the infer task template
- Specify a filter that limits records where zip code length is between (4 / 4) and country matches (United States)
- For “What inference mapping would you like to use?”, specify “Reference – US Zip Codes”
- For “Using the match method of”, specify “Ends with: Reference value ends with input value”
- Under Advanced configurations, add a filter where “zip code begins with (0)”
- If needed, create another task using similar logic for Zip+4 postal codes.
This method can also be used, with minor modifications, for the following countries, which also have postal codes with leading zeros:
- Germany (one leading zero in places)
- Spain (one leading zero in places)
- Finland (one or two leading zeroes, plus a special case for one Helsinki postal code with four leading zeroes (00002))
- France (one leading zero in places)
- Italy (one, two, or three leading zeroes in places)
- Norway (one leading zero in places)
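If you need to repair such data outside Openprise, the equivalent fix can be sketched with simple zero-padding. This is a different (simpler) technique than the reference-table method above, and it assumes US-style 5-digit ZIP and ZIP+4 formats:

```python
def fix_us_zip(zip_code):
    """Re-pad US ZIP codes whose leading zeros Excel stripped.

    Assumes the US format: ZIP is 5 digits; ZIP+4 is 5 digits,
    a hyphen, then 4 digits.
    """
    if "-" in zip_code:  # ZIP+4 form
        zip5, plus4 = zip_code.split("-", 1)
        return zip5.zfill(5) + "-" + plus4.zfill(4)
    return zip_code.zfill(5)

print(fix_us_zip("2139"))       # 02139
print(fix_us_zip("2139-1234"))  # 02139-1234
```

Countries such as Italy (up to three leading zeroes) would need the same padding with a different target length.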
Q: How do I delete records in Salesforce, Marketo, Pardot, Eloqua, etc.?
A: Openprise does not allow you to delete records directly, so you should use the native delete feature in your system. Using Openprise to identify records to be deleted can easily be accomplished by setting a field’s contents to a known value and then using the system’s native functionality to filter on those records and then delete them. For example, in Salesforce, you can set Lead Status = Delete and then use the bulk delete feature on the Leads object to delete all leads with status = Delete.
Q: I want to use our own data to fill in missing Country data, and can use either email/website domain or phone number. Which is preferable?
A: Using a phone number to infer country is possible (assuming, of course, the phone number includes a country code). If you choose this method, be aware that several countries share a country code. For example, Canada and the US both use country code +1, while the United Kingdom, Jersey, and the Isle of Man share +44. An added complication arises if you use Openprise to format phone numbers, because you need a country to produce accurate formatting, so you can quickly run into a chicken-and-egg problem.
Using an email domain or website domain to infer country is straightforward. Just use the table Reference – Countries – Multilingual and the infer task template, matching your data’s email or website attribute using “ends with” against the reference attribute Top Level Domain (TLD).
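A minimal sketch of TLD-based inference (the mapping below is a hypothetical stand-in for the Reference – Countries – Multilingual table, and the "ends with" test mirrors the match method described above):

```python
# Hypothetical stand-in for the TLD-to-country reference table
TLD_TO_COUNTRY = {
    ".de": "Germany",
    ".fr": "France",
    ".jp": "Japan",
}

def infer_country(domain):
    """Match a domain's top-level domain against the reference mapping."""
    domain = domain.lower()
    for tld, country in TLD_TO_COUNTRY.items():
        if domain.endswith(tld):  # "ends with" match against the TLD
            return country
    return None  # no match: country stays unknown

print(infer_country("mycompany.de"))  # Germany
```

Note that generic TLDs such as .com carry no country signal, so records with those domains will remain unmatched.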
Q: How do I change my password?
A: Click on your name -> My Account -> Change Password.
Q: What are user roles and what permissions does each role have?
A: There are 3 fixed user roles in Openprise. Each user is assigned one and only one role. The roles are described below in order of increasing capabilities.
User – This is the least-privileged role. It should be assigned to all end users. This role has full access to:
- View Data Catalog and Data Set
- Use analytics
- Use search and download
Data Administrator – This is the next level up from the User role. This role should be assigned to administrators who are responsible for setting up data sources and debugging data import issues. This role has all the entitlements of the User role plus:
- Create, edit, delete and view Data Sources, Data Targets and Data Sets
- Create, edit, delete and view Bots, Jobs, and Tasks
System Administrator – This is the most privileged (root user) role. This role should be assigned to technical administrators who are responsible for configuring and customizing the entire system. This role has all the entitlements of the Data Administrator role plus:
- Manage all system configurations
Q: What are the advantages of using Openprise for bulk uploads to SFDC, Marketo, or other applications?
A: With Openprise, you can clean and de-dupe the input list before loading it into your marketing platform. This moves the processing load involved with cleaning, normalizing, and enhancing the data to Openprise, freeing up your marketing platform for other work. Plus, de-duping a list is much faster in Openprise than de-duping after the records are uploaded to your marketing platform.
Error: “failed to execute with error : Task id xxxxxx and name : Xxxxxxx failed with message: Error while processing bulk action for taskId:dtId: xxxxxx-xxx;ExceededQuota: ApiBatchItems Limit exceeded.”
Remedy: This message indicates the Salesforce API quota has been reached. Salesforce calculates API usage using a rolling 24 hour window, so if there was a spike in API usage, it may take up to 24 hours before Openprise can successfully update Salesforce without errors.
Error: “Error while running Job: Data target is off-line”
Remedy: This error can occur if you’re attempting to access a data source or target external to Openprise and the authentication token needs refreshing. Go to your data source or data target and check for the message “Please Login Again”. Click on the card, select Configuration and follow the on-screen instructions.
Error: “These credential(s) are already being used, please get in touch with <someone’s name> to get shared access.”
Remedy: This error shows up when an authentication token is not shared with your user. To remedy this, you will have to add yourself or your roles to the authentication token. See Manage Authentications for help updating permissions on authentication tokens.
Error: op_campaign_member_error contains [“INVALID_CROSS_REFERENCE_KEY:invalid cross reference id:–“]
Remedy: (Applicable to Salesforce only.) This can occur when the lead ID used to add the lead to a campaign is no longer valid (i.e. Salesforce does not have a lead matching the ID). This error message may occur if you attempt to update a lead marked IsDeleted = yes (i.e. you did not filter the IsDeleted leads out of the job), or if you mass-deleted leads and selected the “Permanently delete the selected records” option from within Salesforce. Since these leads are permanently removed from Salesforce, there is no way for Openprise to know they were deleted. To remove permanently deleted leads (or records from other Salesforce objects) from Openprise, purge your data source and re-import the records.
Error: [“CANNOT_UPDATE_CONVERTED_LEAD:cannot reference converted lead:–“]
Remedy: (Applicable to Salesforce only.) This message indicates you are attempting to update a lead that has already been converted into an opportunity or contact. In your processing logic, make sure you explicitly exclude records where the attribute IsConverted is true/yes.
Error: [“INVALID_CROSS_REFERENCE_KEY:invalid cross reference id:–“]
Remedy: (Applicable to Salesforce only.) This message indicates you have requested an update to a record (account, contact, lead, etc.) that has been deleted in Salesforce. In your processing logic, make sure you explicitly exclude records where the attribute IsDeleted and/or OpIsDeleted is true/yes. Note that in some circumstances it is possible for Openprise to be unaware that a record was deleted in Salesforce. If you find you are receiving these errors with increasing frequency, contact your Customer Success Manager, who will work with you to purge and re-import your data to sync it up and eliminate these error-causing records.
If you have any additional questions, please feel free to contact us at email@example.com.