It would be good to document the additional data dependencies of the ingestion benchmarks.
FILE UPLOAD: Uploaded /home/ec2-user/20230516-150914-tpcds-custom-ingestion-delta-report.csv to s3://path/ingestion/1gb/delta/20230516145512/reports/csv/
org.apache.spark.sql.AnalysisException: Path does not exist: s3://path/ingestion/1gb/base/call_center.dat
	at org.apache.spark.sql.errors.QueryCompilationErrors$.dataPathNotExistError(QueryCompilationErrors.scala:1016)
	at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4(DataSource.scala:785)
	at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$checkAndGlobPathIfNecessary$4$adapted(DataSource.scala:782)
	at org.apache.spark.util.ThreadUtils$.$anonfun$parmap$2(ThreadUtils.scala:372)
	at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
	at scala.util.Success.$anonfun$map$1(Try.scala:255)
	at scala.util.Success.map(Try.scala:213)
For the insert/ingestion benchmarks, can you share more details on how this was planned?
The code references a "base/" folder, but data is never loaded into that location, which leads to errors when running the "ingestion" benchmarks.
https://github.com/lhbench/lhbench/blob/main/src/main/scala/benchmark/IncrementalTPCDSBenchmark.scala#L358-L362
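One way to make this failure mode friendlier would be a pre-flight check on the base path before the benchmark starts reading tables. The sketch below is hypothetical and not part of lhbench; it uses `java.nio` for a local path so it stays self-contained, whereas a real check against `s3://` would go through the Hadoop `FileSystem` API instead.

```scala
import java.nio.file.{Files, Paths}

// Hypothetical pre-flight check (not part of lhbench): fail fast with a clear
// message when the base data is missing, instead of surfacing an
// AnalysisException from deep inside Spark's DataSource path resolution.
object BasePathCheck {
  // Returns Right(path) when the path exists, Left(message) otherwise.
  // For s3:// URIs, FileSystem.get(uri, hadoopConf).exists(...) would be the
  // equivalent call; java.nio only covers local filesystem paths.
  def requireBasePath(path: String): Either[String, String] =
    if (Files.exists(Paths.get(path))) Right(path)
    else Left(s"Base data not found at $path; load the base tables first")
}
```

Running such a check once per table before kicking off the ingestion run would turn the stack trace above into a single actionable error message.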