Question

SQL Runner: dummy columns in Spark tables

  • 20 April 2021
  • 1 reply
  • 20 views

Hi guys!

I ran into a problem while fetching the schema of a Spark table in SQL Runner. From what I can see, Looker uses the following SQL command for this:

DESCRIBE <db>.<table>

This command produces output in the following form:

col_name            data_type  comment
my_string_column    string
my_date_column      date
my_int_column       bigint

# Partitioning
Part 0
my_partition_key    string
 

This results in dummy columns appearing in the parsed schema.

 

Does anybody know a workaround for this?


1 reply


This is planned to be addressed in Looker 21.6.
