Dataplex > Explore > Spark SQL: Script Failed in Execution

Has anyone been getting this error in Dataplex > Explore > Spark SQL?

___

Script failed in execution.

org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:

___

It happens on almost every query right now, even simple ones like:

select * from salesforce.contact LIMIT 10;
show databases;

This one: select * from salesforce.contact; results in this error:

Script failed in execution.
internal error: query execution failed
Something pretty basic must not be set, but I don't have permission to grant permissions on this account.  I have Dataplex Administrator and Dataplex Editor access.

I figured out the problem - partially.  The documentation page on common Explore errors says that HiveException and TTransportException errors are permission errors, and that the following roles are needed to use the Data exploration workbench:

  1. Dataplex Viewer
  2. Dataplex Developer
  3. Dataplex Metadata Reader
  4. Dataplex Data Reader
  5. Dataproc Metastore Metadata Viewer
  6. Service Usage Consumer

After a careful comparison of the privileges granted by the Dataplex Administrator role and by the other Dataplex roles listed above, the only Dataplex role with privileges that Dataplex Administrator lacks is Dataplex Data Reader.  So, for a user who already had the Dataplex Administrator role, the remaining roles that needed to be granted were:

  1. Dataplex Data Reader
  2. Dataproc Metastore Metadata Viewer
  3. Service Usage Consumer
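
In case it helps anyone else, here is a sketch of granting those three roles with gcloud.  The project ID and user email below are placeholders, and this assumes project-level bindings are appropriate for your setup:

```shell
# Hypothetical project ID and member -- replace with your own values
PROJECT_ID="my-project"
MEMBER="user:analyst@example.com"

# Dataplex Data Reader
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="$MEMBER" --role="roles/dataplex.dataReader"

# Dataproc Metastore Metadata Viewer
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="$MEMBER" --role="roles/metastore.metadataViewer"

# Service Usage Consumer
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="$MEMBER" --role="roles/serviceusage.serviceUsageConsumer"
```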

Granting these roles allowed queries to execute.  I then ran into "Table or view not found" errors when trying to query some tables.  According to the documentation on that error, if the table name contains any capital letters, then spark.sql.caseSensitive needs to be set to true.  Are there any more specific instructions out there about how to do that?
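
One approach that may work (an untested sketch - I haven't confirmed this is the recommended way inside the Explore workbench) is to set the session property with a SET statement before the query, using a hypothetical mixed-case table name here:

```sql
-- Enable case-sensitive identifier resolution for this Spark session
SET spark.sql.caseSensitive=true;

-- Then query the table using its exact capitalization
select * from salesforce.MixedCaseTable LIMIT 10;
```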