How to connect to Cloud Spanner via JDBC in PySpark

Hi all, 

I have connected to my Cloud Spanner instance, but when I read from or write to a table I get an error about a string literal.

As far as I can tell, the issue is "column_name" vs `column_name`.

The question is: how can I create a custom JDBC dialect in PySpark?
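To illustrate the mismatch behind the error: Spark's default JDBC dialect quotes identifiers ANSI-style with double quotes, while Spanner's GoogleSQL dialect expects backticks (and treats a double-quoted token as a string literal, which matches the error message). This is a minimal sketch in plain Python, no Spark required; the helper names are hypothetical, for illustration only.

```python
# Sketch of the identifier-quoting mismatch (hypothetical helper names).

def ansi_quote(identifier: str) -> str:
    """Roughly how Spark's default JDBC dialect quotes an identifier: double quotes."""
    return '"' + identifier.replace('"', '""') + '"'

def googlesql_quote(identifier: str) -> str:
    """How Spanner's GoogleSQL dialect expects identifiers: backticks."""
    # Doubling inner backticks here is an assumption for the rare pathological name.
    return '`' + identifier.replace('`', '``') + '`'

column = "id"
spark_sql = f"SELECT {ansi_quote(column)} FROM my_table"
spanner_sql = f"SELECT {googlesql_quote(column)} FROM my_table"

print(spark_sql)    # SELECT "id" FROM my_table  -> Spanner reads "id" as a string literal
print(spanner_sql)  # SELECT `id` FROM my_table  -> what GoogleSQL expects
```

A custom dialect is essentially the second function wired into Spark's JDBC code path instead of the first.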

 

The connection itself works; my Spanner instance is connected. Reading and writing are the issue.

The issue is "" vs ``.

Can you help me solve this?

 

 


Which GCP Product are you using to write PySpark?  We can direct you to the right team based on that.

Have you tried using the Spanner connector in Application Integration?  see: https://www.googlecloudcommunity.com/gc/Integration-Services/Announcement-Application-Integration-am... 

Hi @shaaland ,

I am using the Cloud Spanner product.

I have created my Spanner instance, a MySQL/PostgreSQL-style database, and a table.

I can connect to the Cloud Spanner instance's database through the Cloud Spanner JDBC driver in PySpark.

But when I try a read or write operation, I get the "String literal issue" error.

For example, if the table column name is id, the error message flags the quoted identifier "id".

From my research, Cloud Spanner expects the table column name to be quoted like `id`.

But in the SQL generated from the PySpark DataFrame, the column name is quoted as "id".

Maybe backticks `` vs double quotes "" are creating the problem.

Can you suggest the best way to perform read/write operations through the JDBC driver in PySpark?
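Not an official recipe, but a hedged sketch of how the read/write side might be wired up. The JDBC URL pattern and driver class below follow the Cloud Spanner JDBC driver's documented form; the project/instance/database names are placeholders. Fixing the quoting itself typically needs a custom `JdbcDialect` whose `quoteIdentifier` emits backticks; since `JdbcDialect` is a JVM class, it has to be compiled from Scala/Java into a jar (added with `--jars`) and then registered through `spark._jvm`, as sketched in the trailing comments. The `com.example.SpannerDialect` class name there is hypothetical.

```python
# Sketch: assembling Cloud Spanner JDBC options for spark.read / DataFrame.write.
# Project/instance/database names are placeholders, not real resources.

def spanner_jdbc_options(project: str, instance: str, database: str, table: str) -> dict:
    """Build the option dict for Spark's JDBC data source using the Spanner driver."""
    url = (
        "jdbc:cloudspanner:/projects/" + project
        + "/instances/" + instance
        + "/databases/" + database
    )
    return {
        "url": url,
        "dbtable": table,
        "driver": "com.google.cloud.spanner.jdbc.JdbcDriver",
    }

opts = spanner_jdbc_options("my-project", "my-instance", "my-db", "users")
print(opts["url"])

# With a SparkSession in hand, usage would look roughly like:
#
#   df = spark.read.format("jdbc").options(**opts).load()
#   df.write.format("jdbc").options(**opts).mode("append").save()
#
# And to fix the "id" vs `id` quoting, a compiled custom dialect would be
# registered first (hypothetical class name, shipped in a jar via --jars):
#
#   spark._jvm.org.apache.spark.sql.jdbc.JdbcDialects.registerDialect(
#       spark._jvm.com.example.SpannerDialect())
```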

Ok, I see... I think this is in the incorrect forum then. Perhaps try reposting this question in the Cloud Spanner forum, under Cloud Forums / Databases / Topics that include: Cloud Spanner: https://www.googlecloudcommunity.com/gc/forums/filteredbylabelpage/board-id/cloud-database/label-nam... 

Hope that helps.