
Keep unused columns in success .csvs when importing with Bulk API 2.0 #1143

Open
bchfundraising opened this issue May 7, 2024 · 1 comment


bchfundraising commented May 7, 2024

Is your feature request related to a problem? Please describe.
When importing data with the SOAP or Bulk API, the success.csv file contains all columns regardless of whether they were used. Bulk API 2.0, however, returns only the columns that were imported (plus the new ID and sf_created columns).

Describe the solution you'd like
Keep all columns that were in the original import .csv file

Describe alternatives you've considered
Using the SOAP or Bulk API instead

Additional context
Dataloader version: 61.0.0
OS: MacOS Sonoma 14.4.1
Configs: configs.zip

To describe why this is useful: one of my workflows involves having a flat file of Opportunity, GAU Allocation, and Payment details at the ready. I import the Opportunities first to create them, then use the success file to import the GAU Allocations and Payments directly, without extra steps.

Our Opportunities don't always have unique references, so I can't use an XLOOKUP to ensure I'm mapping the correct Opportunities to the Payments and GAU Allocations on subsequent imports. While I know the success file is meant to return records in the same order as the import, as a data person I can't rely on that assumption in case a bug creeps in one day and changes the order.

Maintaining the behaviour of the SOAP and Bulk APIs would allow us to use this without concern in the future.
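As a stopgap, the workflow above could be approximated by rejoining the Bulk API 2.0 success file with the original import CSV by row position. This is only a minimal sketch: it relies on the very ordering assumption the request warns against, and the sample column names (`sf__Id`, `sf__Created`) and values are illustrative, not taken from a real Data Loader run.

```python
import csv
import io

def rejoin_success(original_rows, success_rows):
    """Merge unused columns from the original import rows back into the
    success rows, matching purely by row position (an unguaranteed
    assumption; a reordering bug upstream would silently corrupt the join)."""
    merged = []
    for orig, succ in zip(original_rows, success_rows):
        row = dict(orig)   # all original columns, used in the import or not
        row.update(succ)   # overlay API results, e.g. the new record Id
        merged.append(row)
    return merged

# Hypothetical sample data standing in for the import CSV and the
# Bulk API 2.0 success file (which drops the unused "Notes" column).
original_csv = "Name,Amount,Notes\nAcme Gift,100,keep me\n"
success_csv = "sf__Id,sf__Created,Name,Amount\n006XX0000001,true,Acme Gift,100\n"

original = list(csv.DictReader(io.StringIO(original_csv)))
success = list(csv.DictReader(io.StringIO(success_csv)))

for row in rejoin_success(original, success):
    print(row["sf__Id"], row["Notes"])
```

Because the join is positional rather than keyed, it offers no protection if the API ever returns rows out of order, which is exactly why preserving the unused columns server-side would be the safer fix.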

Edit: added more context info, though I'm not sure it's all relevant

@ashitsalesforce
Contributor

Hi @bchfundraising , Bulk API 2.0 returns success and error results separately to make retrying failed rows easier (client code does not have to separate failed rows from successful ones). However, this means the information about which row in the uploaded CSV each result corresponds to is lost, because there is no unique id in the import CSV to match a successful or failed insert row against. We plan to enhance Bulk API 2.0 in the future so that it can identify which row in the uploaded CSV failed. Data Loader will be enhanced after that to support your request.
