It seems there is no native connection type for Databricks.
I tried the Generic database (JDBC) connection type to connect to Databricks, and the connection test succeeded.
However, when I preview the data, I get an error.
![](https://higherlogicdownload.s3.amazonaws.com/HITACHI/MessageImages/1a5d8e49c2de44e989bfbcbd8e12c284.png)
The connection test succeeded:
![](https://higherlogicdownload.s3.amazonaws.com/HITACHI/MessageImages/49c8bb97ef7c45f0a906270be1e4d12e.png)
I explored all schemas, tables, and views, but could not see the Unity Catalog objects.
![](https://higherlogicdownload.s3.amazonaws.com/HITACHI/MessageImages/cf031b43a89d41d89b87f96eb8502a52.png)
![](https://higherlogicdownload.s3.amazonaws.com/HITACHI/MessageImages/31a66ecf2e22430c8edbfffd046514a0.png)
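For reference, my understanding from the Databricks JDBC driver documentation is that the session's default catalog and schema can be set with the `ConnCatalog` and `ConnSchema` URL properties, which may be why Unity Catalog objects don't show up by default. Something along these lines (host, HTTP path, token, and catalog name are placeholders, not my real values):

```text
jdbc:databricks://<server-hostname>:443/default;transportMode=http;ssl=1;
  httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>;
  ConnCatalog=<catalog-name>;ConnSchema=vnpham-schema
```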
2023/12/06 16:35:58 - C:\work\02_Deployment\pentaho\Transformation 1.ktr : Transformation 1 - Dispatching started for transformation [C:\work\02_Deployment\pentaho\Transformation 1.ktr : Transformation 1]
2023/12/06 16:35:58 - Table input.0 - ERROR (version 9.4.0.0-343, build 0.0 from 2022-11-08 07.50.27 by buildguy) : Unexpected error
2023/12/06 16:35:58 - Table input.0 - ERROR (version 9.4.0.0-343, build 0.0 from 2022-11-08 07.50.27 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2023/12/06 16:35:58 - Table input.0 - An error occurred executing SQL:
2023/12/06 16:35:58 - Table input.0 - SELECT *
2023/12/06 16:35:58 - Table input.0 - FROM vnpham-schema.project
2023/12/06 16:35:58 - Table input.0 -
2023/12/06 16:35:58 - Table input.0 - [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: 42602, Query: SELECT *
2023/12/06 16:35:58 - Table input.0 - ***, Error message from Server: org.apache.hive.service.cli.HiveSQLException: Error running query: [INVALID_IDENTIFIER] org.apache.spark.sql.catalyst.parser.ParseException:
[INVALID_IDENTIFIER] The identifier vnpham-schema is invalid. Please, consider quoting it with back-quotes as `vnpham-schema`.(line 2, pos 11)
== SQL ==
SELECT *
FROM vnpham-schema.project
-----------^^^
at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:48)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:697)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:41)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:99)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:574)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:423)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:420)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:418)
at com.databricks.spark.util.PublicDBLogging.withAttributionContext(DatabricksSparkUsageLogger.scala:25)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:472)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:455)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags(DatabricksSparkUsageLogger.scala:25)
at com.databricks.spark.util.PublicDBLogging.withAttributionTags0(DatabricksSparkUsageLogger.scala:70)
at com.databricks.spark.util.DatabricksSparkUsageLogger.withAttributionTags(DatabricksSparkUsageLogger.scala:170)
at com.databricks.spark.util.UsageLogging.$anonfun$withAttributionTags$1(UsageLogger.scala:491)
at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:603)
at com.databricks.spark.util.UsageLogging$.withAttributionTags(UsageLogger.scala:612)
at com.databricks.spark.util.UsageLogging.withAttributionTags(UsageLogger.scala:491)
at com.databricks.spark.util.UsageLogging.withAttributionTags$(UsageLogger.scala:489)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withAttributionTags(SparkExecuteStatementOperation.scala:65)
at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.$anonfun$withLocalProperties$8(ThriftLocalProperties.scala:161)
at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48)
at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties(ThriftLocalProperties.scala:160)
at org.apache.spark.sql.hive.thriftserver.ThriftLocalProperties.withLocalProperties$(ThriftLocalProperties.scala:51)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:65)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:401)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:386)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:435)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.spark.sql.catalyst.parser.ParseException:
[INVALID_IDENTIFIER] The identifier vnpham-schema is invalid. Please, consider quoting it with back-quotes as `vnpham-schema`.(line 2, pos 11)
== SQL ==
SELECT *
FROM vnpham-schema.project
-----------^^^
at org.apache.spark.sql.errors.QueryParsingErrors$.invalidIdentifierError(QueryParsingErrors.scala:462)
at org.apache.spark.sql.catalyst.parser.PostProcessor$.exitErrorIdent(parsers.scala:297)
at org.apache.spark.sql.catalyst.parser.SqlBaseParser$ErrorIdentContext.exitRule(SqlBaseParser.java:36473)
at org.antlr.v4.runtime.Parser.triggerExitRuleEvent(Parser.java:408)
at org.antlr.v4.runtime.Parser.exitRule(Parser.java:642)
at org.apache.spark.sql.catalyst.parser.SqlBaseParser.singleStatement(SqlBaseParser.java:569)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.$anonfun$parsePlan$1(AbstractSqlParser.scala:88)
at org.apache.spark.sql.catalyst.parser.AbstractParser.parse(parsers.scala:83)
at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:111)
at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(AbstractSqlParser.scala:87)
at com.databricks.sql.parser.DatabricksSqlParser.$anonfun$parsePlan$1(DatabricksSqlParser.scala:77)
at com.databricks.sql.parser.DatabricksSqlParser.parse(DatabricksSqlParser.scala:98)
at com.databricks.sql.parser.DatabricksSqlParser.parsePlan(DatabricksSqlParser.scala:74)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$analyzeQuery$3(SparkExecuteStatementOperation.scala:542)
at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:400)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$analyzeQuery$2(SparkExecuteStatementOperation.scala:539)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1127)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$analyzeQuery$1(SparkExecuteStatementOperation.scala:537)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.getOrCreateDF(SparkExecuteStatementOperation.scala:523)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.analyzeQuery(SparkExecuteStatementOperation.scala:537)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$5(SparkExecuteStatementOperation.scala:610)
at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:606)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:610)
... 36 more
.
2023/12/06 16:35:58 - Table input.0 -
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.core.database.Database.openQuery(Database.java:1776)
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.trans.steps.tableinput.TableInput.doQuery(TableInput.java:242)
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:143)
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2023/12/06 16:35:58 - Table input.0 - at java.base/java.lang.Thread.run(Thread.java:1583)
2023/12/06 16:35:58 - Table input.0 - Caused by: java.sql.SQLException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: 42602, Query: SELECT *
2023/12/06 16:35:58 - Table input.0 - ***, Error message from Server: org.apache.hive.service.cli.HiveSQLException: Error running query: [INVALID_IDENTIFIER] org.apache.spark.sql.catalyst.parser.ParseException:
[INVALID_IDENTIFIER] The identifier vnpham-schema is invalid. Please, consider quoting it with back-quotes as `vnpham-schema`.(line 2, pos 11)
== SQL ==
SELECT *
FROM vnpham-schema.project
-----------^^^
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.api.HS2Client.buildExceptionFromTStatusSqlState(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.api.HS2Client.pollForOperationCompletion(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.api.HS2Client.executeStatementInternal(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.api.HS2Client.executeStatement(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.dataengine.HiveJDBCNativeQueryExecutor.executeNonRowCountQueryHelper(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.dataengine.HiveJDBCNativeQueryExecutor.executeQuery(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.dataengine.HiveJDBCNativeQueryExecutor.<init>(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.dataengine.HiveJDBCDataEngine.prepare(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.jdbc.common.SStatement.executeNoParams(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.jdbc.common.BaseStatement.executeQuery(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at com.databricks.client.hivecommon.jdbc42.Hive42Statement.executeQuery(Unknown Source)
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.core.database.Database.openQuery(Database.java:1765)
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.trans.steps.tableinput.TableInput.doQuery(TableInput.java:242)
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.trans.steps.tableinput.TableInput.processRow(TableInput.java:143)
2023/12/06 16:35:58 - Table input.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2023/12/06 16:35:58 - Table input.0 - Caused by: com.databricks.client.support.exceptions.ErrorException: [Databricks][JDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: 42602, Query: SELECT *
2023/12/06 16:35:58 - Table input.0 - ***, Error message from Server: org.apache.hive.service.cli.HiveSQLException: Error running query: [INVALID_IDENTIFIER] org.apache.spark.sql.catalyst.parser.ParseException:
[INVALID_IDENTIFIER] The identifier vnpham-schema is invalid. Please, consider quoting it with back-quotes as `vnpham-schema`.(line 2, pos 11)
== SQL ==
SELECT *
FROM vnpham-schema.project
-----------^^^
2023/12/06 16:35:58 - Table input.0 - ... 15 more
2023/12/06 16:35:58 - Table input.0 - Finished reading query, closing connection
2023/12/06 16:35:58 - Table input.0 - Finished processing (I=0, O=0, R=0, W=0, U=0, E=1)
2023/12/06 16:35:58 - C:\work\02_Deployment\pentaho\Transformation 1.ktr : Transformation 1 - Transformation detected one or more steps with errors.
2023/12/06 16:35:58 - C:\work\02_Deployment\pentaho\Transformation 1.ktr : Transformation 1 - Transformation is killing the other steps!
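Reading the parse error, I assume the immediate fix is what the server's hint suggests: the schema name contains a hyphen, so it has to be back-quoted in the Table input query. Something like this (assuming the schema really is named `vnpham-schema`, and `<catalog-name>` is a placeholder for the Unity Catalog catalog):

```sql
-- Back-quote the hyphenated schema name, per the server's hint:
SELECT *
FROM `vnpham-schema`.project;

-- If the schema lives under a Unity Catalog catalog, the three-part name
-- would presumably be needed:
SELECT *
FROM `<catalog-name>`.`vnpham-schema`.project;
```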