SQL Runner: dummy columns in Spark tables

Hi guys!

I ran into a problem while fetching the schema of a Spark table in SQL Runner. As far as I can tell, Looker uses the following SQL command for this:

DESCRIBE <db>.<table>

This command produces output like the following:

col_name          data_type  comment
my_string_column  string
my_date_column    date
my_int_column     bigint

# Partitioning
Part 0
my_partition_key  string

The partition-metadata rows cause dummy columns to appear in the parsed schema.

Does anybody know a workaround for this?
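For anyone hitting the same issue outside of SQL Runner: if you fetch the DESCRIBE rows yourself, you can drop everything from the first marker row onward before building a schema. A minimal sketch, assuming the rows arrive as (col_name, data_type, comment) tuples (the helper name and sample data are hypothetical, not Looker's API):

```python
def real_columns(describe_rows):
    """Keep only genuine column rows from `DESCRIBE <db>.<table>` output.

    Spark appends partition metadata after the column list, introduced
    by a marker row such as "# Partitioning". Rows from that marker on,
    and blank separator rows, are not real columns.
    """
    columns = []
    for row in describe_rows:
        col_name = (row[0] or "").strip()
        if col_name.startswith("#"):  # metadata section begins here
            break
        if not col_name:  # skip blank separator rows
            continue
        columns.append(row)
    return columns

# Sample rows mirroring the DESCRIBE output above
rows = [
    ("my_string_column", "string", ""),
    ("my_date_column", "date", ""),
    ("my_int_column", "bigint", ""),
    ("", "", ""),
    ("# Partitioning", "", ""),
    ("Part 0", "", ""),
    ("my_partition_key", "string", ""),
]
print([r[0] for r in real_columns(rows)])
# → ['my_string_column', 'my_date_column', 'my_int_column']
```

Note that the partition key (my_partition_key) is filtered out along with the metadata rows; if you need it as a column, you would have to merge it back in from the section after the marker.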


This is planned to be addressed in 21.6
