For all of your SQL errors, troubleshooting questions, and cool queries
- 327 Topics
- 784 Replies
Hi, I’m building a Looker dashboard and in it I’d like to standardise my test values to control spend (at the moment we’re standardising to total spend). E.g. for clicks we’d like to move from:

normalised clicks = clicks * sum(total spend) / 2 * total spend

to

normalised clicks = clicks * sum(control spend) / 2 * control spend

I’m therefore trying to figure out how to calculate control spend in Looker and then apply that control spend value to the rest of the columns so they can use it in those calculations too. For example, if I had more than one SQL step I would do something similar to the following:

CREATE TABLE temp_1 AS
SELECT
  TEST_OR_CONTROL,
  COALESCE(SUM(CLICKS), 0) AS total_clicks,
  NULL AS total_control_cost_usd
FROM datatable1
<WHERE STATEMENT HERE>
GROUP BY 1

INSERT INTO temp_1.total_control_cost_usd
VALUES (
  SELECT DISTINCT total_control_cost_usd FROM (
    SELECT
      TEST_OR_CONTROL,
      COALESCE(SUM(CASE WHEN TEST_OR_CONTROL = 'Control Group' THEN "COST_USD" * 1.2 ELSE NULL END), 0) AS total_c
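A sketch of one way to avoid the multi-step temp table, reusing the table and column names from the post (datatable1, CLICKS, COST_USD, TEST_OR_CONTROL — adjust to your schema): a conditional window aggregate computes the control-group spend once so every row can reference it in the same query.

```sql
-- Hypothetical sketch: compute total control spend once and reuse it per row.
SELECT
  TEST_OR_CONTROL,
  CLICKS,
  -- window SUM over the whole result set, counting only control-group rows
  SUM(CASE WHEN TEST_OR_CONTROL = 'Control Group' THEN COST_USD ELSE 0 END)
    OVER () AS total_control_cost_usd,
  -- same shape as the "normalised clicks" formula, with control spend swapped in
  CLICKS
    * SUM(CASE WHEN TEST_OR_CONTROL = 'Control Group' THEN COST_USD ELSE 0 END) OVER ()
    / NULLIF(2 * COST_USD, 0) AS normalised_clicks
FROM datatable1;
```

In Looker this could live in a derived table so the computed control-spend value is available as a reusable dimension across the explore.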
Hi, has anyone faced an issue with queries getting queued and running endlessly without loading any results? For our instance it has become a serious issue: the queries execute successfully on the DB but do not render in the Looker UI, on dashboards or Looks. This is now impacting our day-to-day reporting needs. If anybody has faced similar issues or has any suggestions, please help. Thanks.
We have a filter on a virtual field (an unnested column). This works in development mode and generates the following (correct) query:

SELECT field_course_category AS "field_course_category.course_category"
FROM dw_live.course AS course
LEFT JOIN LATERAL UNNEST(string_to_array(course.field_course_category_tid, '|')) field_course_category ON true
WHERE ((field_course_category) ILIKE '%') OR ((field_course_category) ILIKE '% %')

But when we turn off development mode, the same Look generates this (incorrect) query when you use the filter:

SELECT field_course_category AS "field_course_category.course_category"
FROM field_course_category
WHERE ((field_course_category) ILIKE '%') OR ((field_course_category) ILIKE '% %')

That results in ERROR: relation "field_course_category" does not exist, since that "table" is defined in LookML from an unnested column. We use the advanced deployment method, so in this instance the tip of develop is the deployed commit. I also set suggest_pers
Hello, do we have any “LAST” function? The following query is not working, and I want to extract the last limit in each month for each account ID.

SELECT
  date_trunc(date, month) AS Month,
  LAST(credit_card_account__limit_range_max) AS Credit_Limit
FROM core.credit_card_account_daily_snapshot
GROUP BY Month, account__id
ORDER BY Month
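A sketch assuming BigQuery (which the DATE_TRUNC(date, MONTH) syntax suggests) and the column names above: there is no LAST() aggregate, but ARRAY_AGG ordered by date can emulate “last value per group”.

```sql
-- Hypothetical sketch: last limit per account per month via ordered ARRAY_AGG.
SELECT
  DATE_TRUNC(date, MONTH) AS month,
  account__id,
  ARRAY_AGG(credit_card_account__limit_range_max ORDER BY date DESC LIMIT 1)[OFFSET(0)]
    AS credit_limit
FROM core.credit_card_account_daily_snapshot
GROUP BY month, account__id
ORDER BY month;
```

A window-function alternative — filtering on ROW_NUMBER() OVER (PARTITION BY account__id, DATE_TRUNC(date, MONTH) ORDER BY date DESC) = 1 in a subquery — works in most other dialects.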
Hello, I’m having trouble with my code on the 6th line. I already tried “BETWEEN 0 AND 5000” and this. Does someone know what the correct way would be?

SELECT
  date_trunc(date, month) AS Month,
  AVG(credit_card_account__limit_range_max) AS Credit_Limit
FROM core.credit_card_account_daily_snapshot
WHERE credit_card_account__limit_range_max (0, 5000)
GROUP BY Month, account__id
ORDER BY Month
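A sketch of the usual forms a range predicate takes, reusing the table and column names from the post:

```sql
-- Hypothetical sketch: two equivalent range filters on the limit column.
SELECT
  DATE_TRUNC(date, MONTH) AS month,
  AVG(credit_card_account__limit_range_max) AS credit_limit
FROM core.credit_card_account_daily_snapshot
WHERE credit_card_account__limit_range_max BETWEEN 0 AND 5000
-- or equivalently:
-- WHERE credit_card_account__limit_range_max >= 0
--   AND credit_card_account__limit_range_max <= 5000
GROUP BY month, account__id
ORDER BY month;
```

Note that account__id appears in GROUP BY but not in SELECT; some dialects allow this, but it usually means you want the column in the output too.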
Would anybody like to share experiences of validating Looker as part of a CI flow? We are especially interested in implementations using the Spectacles CLI (spectacles sql). Ideally we would like to raise an error only for failures in critical explores, and a warning for all other explores.
Something very weird has been discovered by one of my colleagues. They were trying to plot some data with day of the month as a dimension and a few months as a pivot. Our database is set to UTC and the query time zone to Europe/Madrid. They noticed that, for example, June had no data on the 30th but did on the 31st, and July didn’t have any data on the 31st. Surprisingly August, which has only just begun, also had data on the 31st. I compared the code generated in the SQL tab when we use table calculations vs the day-of-month timeframe:

WITH data AS (
  SELECT CAST('2021-05-31 23:50:00' AS TIMESTAMP) AS created_at
)
SELECT
  TIMESTAMP_TRUNC(created_at, DAY),
  TIMESTAMP_TRUNC(created_at, HOUR, 'Europe/Madrid'),
  TIMESTAMP_TRUNC(created_at, DAY, 'Europe/Madrid'),
  -- Code generated when using timeframe "month"
  FORMAT_TIMESTAMP('%Y-%m', created_at, 'Europe/Madrid'),
  -- Code generated when using timeframe "day_of_month"
  EXTRACT(DAY FROM created_at AT TIME ZONE 'Europe/Madrid'),
  -- Code generated when using "e
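A sketch of why the day can shift, assuming BigQuery as in the snippet above: a UTC timestamp just before midnight falls on the next calendar day in Europe/Madrid, so the month pivot and the day-of-month extraction must use the same time zone or rows land on impossible combinations like “June 31”.

```sql
-- Hypothetical sketch: the same instant is May 31 in UTC but June 1 in Madrid.
WITH data AS (
  SELECT TIMESTAMP '2021-05-31 23:50:00' AS created_at  -- stored as UTC
)
SELECT
  EXTRACT(DAY FROM created_at)                            AS day_utc,      -- 31
  EXTRACT(DAY FROM created_at AT TIME ZONE 'Europe/Madrid') AS day_madrid, -- 1
  FORMAT_TIMESTAMP('%Y-%m', created_at, 'Europe/Madrid')  AS month_madrid  -- '2021-06'
FROM data;
```

If one of the two expressions converts the time zone and the other does not, the pivot month and the day-of-month dimension disagree for rows near midnight UTC.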
I have a table in Snowflake with a column that produces a string like “144514490704288__LOOKUP__1933330251567369140”. In my view I only want the second number, so I created a dimension like this (magic_text is the column that contains the string, as seen in the example above). When I try to use the explore and access this dimension, it fails. Does LookML just not support the split_part function? Can anyone tell me how to use split_part and still make a dimension? I am aware of a workaround using SUBSTRING, which does work, except the first number could be of variable length and SUBSTRING uses hardcoded integer positions, so it is not applicable here. We must search for the __LOOKUP__ substring and extract all the integers that follow it.
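A sketch of what the dimension’s SQL could look like in Snowflake, reusing the magic_text column name from the post:

```sql
-- Hypothetical sketch: take everything after the __LOOKUP__ delimiter.
SELECT SPLIT_PART('144514490704288__LOOKUP__1933330251567369140',
                  '__LOOKUP__', 2) AS second_number;
-- returns '1933330251567369140'

-- As a dimension, the sql: parameter would be along the lines of
--   SPLIT_PART(${TABLE}.magic_text, '__LOOKUP__', 2)
-- Snowflake's REGEXP_SUBSTR(magic_text, '__LOOKUP__([0-9]+)', 1, 1, 'e')
-- is an alternative that extracts only the digits after the marker.
```

Both forms handle a variable-length first number, since they anchor on the delimiter rather than on character positions.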
I have been getting this error in a Look. I narrowed it down to one particular field, but the same field works fine in another Look, so I am not sure what the exact problem is at this point. I recently added a link to the troubled field, so I am not sure if that caused the problem, but when I validate the LookML I do not see any errors. Also, when I run the same query including the troubled field in SQL Runner, it runs fine and gives me the result. Any idea why this is happening?
So I know Looker suggests going through support for bug reports. But I’ve reported this bug, and all I received from Looker support is that it’s not a bug, it’s how Looker is supposed to run. That’s why I’m asking if anyone else has run into the same thing, and if so, whether you consider this a bug. I feel like I’m going insane with “it’s the way Looker is supposed to work” from support. My problem appears when I try to compare a date, date_date (dimension_group, type: time, datatype: date), with another date, timestamp_date (dimension_group, type: time, convert_tz: yes), in a custom measure. I use this comparison in my filter. For the first date I don’t specify convert_tz because it’s a date format, and I’ve already converted the time zone before turning the timestamp into a date. My dialect is Standard SQL (we use BigQuery). I need to compare and see if the two dates are the same. The auto-generated SQL is as follows:

(TIMESTAMP([date_date1])) = (TIMESTAMP_TRUNC(CAST([timestamp_date] AS TIMESTAMP), DAY, 'my_time_zo
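A sketch of why generated SQL of that shape can disagree, assuming BigQuery and the hypothetical zone 'America/New_York': TIMESTAMP(date) interprets the date at UTC midnight, while TIMESTAMP_TRUNC(..., DAY, tz) truncates to midnight in the given zone, so the two sides can be different instants even on the same calendar day.

```sql
-- Hypothetical sketch: the two expressions denote different instants.
SELECT
  TIMESTAMP(DATE '2021-06-01') AS utc_midnight,            -- 2021-06-01 00:00:00 UTC
  TIMESTAMP_TRUNC(TIMESTAMP '2021-06-01 12:00:00',
                  DAY, 'America/New_York') AS ny_midnight, -- 2021-06-01 04:00:00 UTC
  -- Comparing DATE to DATE in the same zone sidesteps the mismatch:
  DATE '2021-06-01' = DATE(TIMESTAMP '2021-06-01 12:00:00',
                           'America/New_York') AS dates_equal;  -- true
```

Casting both sides to DATE in the same time zone is one way to make the comparison well defined.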
Hi Looker Team, I have a concern regarding Looker queries. The issue is: if I have a Look created from an Oracle data source, I am able to retrieve the SQL query responsible for that Look through the REST API, but I also need the dialect of the database (the model created in Looker from the connection) on which this query was generated. Is there any way to get that query’s database dialect through the REST API, or any other way? Please help me with this. Thanks.
I have two tables (see below). What’s the best way of joining them so I can have a consolidated Customer ID list (1.0 + 2.0) without duplicated records? So:

- If I do a total of points earned, one address only gets counted once, even though there are two separate customer IDs.
- The Customer ID list will still list IDs individually and show me the address and points earned accordingly. I.e., I don’t want it to display as “2,5 | 234 street | 15 points earned”; I would like to see “2 | 234 street | 15 points earned” and “5 | 234 street | 15 points earned”.

Table 1

Customer_ID_1.0 | Address | Points earned
1 | 123 street | 20
2 | 234 street | 15
3 | 345 street | 30
4 | 456 street | 40

Table 2

Customer_ID_2.0 | Address | Points earned
5 | 234 street | 15
6 | 123 street | 20

Hope it makes sense!
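A sketch of one approach, using hypothetical table and column names (table1, table2, customer_id_1_0, etc.): UNION ALL keeps every customer ID as its own row, and the total deduplicates by address rather than by ID.

```sql
-- Hypothetical sketch: consolidated list, one row per customer ID.
WITH combined AS (
  SELECT customer_id_1_0 AS customer_id, address, points_earned FROM table1
  UNION ALL
  SELECT customer_id_2_0 AS customer_id, address, points_earned FROM table2
)
SELECT customer_id, address, points_earned
FROM combined;

-- For the total, count each address once so shared addresses aren't doubled:
-- SELECT SUM(points_earned)
-- FROM (SELECT DISTINCT address, points_earned FROM combined);
```

This gives “2 | 234 street | 15” and “5 | 234 street | 15” as separate rows in the list, while the total counts 234 street only once.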
New Looker user here. Normally I have a data person to rely on. I’ll have multiple models under one project. I have connections set up that receive new daily log files. How do I add each new daily log file to my reporting? I can join existing views, but how do I manage the ones that will be created after today?
When we use an UNNEST query to flatten arrays in Looker, by default the UNNEST statements are added in the same order as the joins. This is difficult when we extend explores to re-use the joins: the UNNEST statements get added in the order of the joins in the parent explore. Is there any way to automatically move all the UNNEST statements that cannot be used with the cross join to the bottom of the SQL query? They are generated on the fly, and creating derived tables and explores would make the process more complicated.
So we have a lot of end users who like to run queries that have no date filters or use “is any time” on the date filter. I am wondering what the best way to correct this is. As a team we have been tossing around the idea that if our date filter is not included in the query, it would return no results. Wanting to see if others have tackled this and what the fix was.
DB Concepts Series: Troubleshooting Partitions

Post Context: This content is adapted from our internal SME (Subject Matter Expert) curricula. Similar to Knowledge Drops, this content may be highly Looker specific and may become out of date in the future. The goal is to convey a high-level understanding of a common SQL query issue and the next steps for troubleshooting and resolving it.

Scenario: A user reports that there’s no performance improvement after partitioning a table, or that the “estimated size” of the query in Looker seems off. A few common problems:

Problem 1: The partition_key is not appropriately defined. Tables should be partitioned by a field that creates a meaningful grouping of rows. Given that partitions work by reducing the number of rows the DB has to query over to fetch query results, the partition_key needs to be a field that has few(ish) distinct values. To test for this, try running select partition_key, count(*)... order by 2 desc. The output should be someth
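The cardinality check above can be sketched as follows, assuming a hypothetical table my_table partitioned on event_date:

```sql
-- Hypothetical sketch: check how evenly the partition key groups rows.
SELECT
  event_date AS partition_key,
  COUNT(*)   AS rows_in_partition
FROM my_table
GROUP BY event_date
ORDER BY 2 DESC;
-- A good partition key yields a modest number of groups, each with many rows;
-- thousands of near-singleton groups suggest the key is too high-cardinality
-- to prune much of anything.
```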
DB Concepts Series: What partitions are, and how they're used

Post Context: This content is adapted from our internal SME (Subject Matter Expert) curricula. Similar to Knowledge Drops, this content may be highly Looker specific and may become out of date in the future. The goal is to convey a high-level understanding of a common SQL query issue and the next steps for troubleshooting and resolving it.

ELI5: Partitions essentially work by creating a reduced subset of rows that the database can quickly scan, reducing the number of rows it has to evaluate to fetch query results. To restate this a different way, and in the context of getting Looker to work with them:

- We create a partition key to partition by. We don’t need to do this in Looker, and a PDT is not required to make use of partitioning. The key essentially “slices” up the rows of your dataset, grouping them so that the DB can scan over fewer rows.
- We evaluate a condition against that partition key in the WHERE clause. The DB perfo
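The WHERE-clause step can be sketched as follows, assuming a hypothetical table events partitioned by event_date:

```sql
-- Hypothetical sketch: a predicate on the partition key lets the DB prune
-- partitions instead of scanning the whole table.
SELECT COUNT(*)
FROM events
WHERE event_date BETWEEN '2021-01-01' AND '2021-01-31';
-- Only the January partitions are scanned; without a predicate on event_date,
-- the DB falls back to scanning every partition.
```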
We’ve been attempting to create a connection from Looker to our Denodo 7 instance. We have validated our connection info, but receive the following error on testing the connection: “Cannot trust the server to establish a SSL connection.” We have validated that the Denodo 7 server is in the whitelist. Testing the secure connection from the server via openssl s_client -connect, we see the handshake:

SSL handshake has read 5389 bytes and written 419 bytes
New, TLSv1/SSLv3, Cipher is
Server public key is 2048 bit
Secure Renegotiation IS supported

How do you enable the TLS connection from the Looker connection to the Denodo 7 server? Is there a Denodo 7 JDBC option to force a secure connection? Adding the option ssl=true results in the same “cannot trust” error. Does the Denodo 7 JDBC driver need to be added to the Looker server?
Problem: You’ve written a lot of very nice LookML, but your data is >20 TB and queries take forever. This isn’t fun, and your business users are complaining - a tragedy! Never fear, this post is here.

Prerequisites:
- Be familiar with the doc “How Looker Generates SQL”
- Understand basic SQL syntax (e.g. what each clause is, basic SQL functions, etc.)
- Have at least developer permissions in Looker (with SQL Runner access)
- Be familiar with LookML terms and concepts

Post Context: This is adapted from DCL’s internal SME (Subject Matter Expert) curricula; similar to “Knowledge Drops”, the information here may be highly Looker specific and at some point become out of date. Additionally, the information here should be thought of as a sort of ELI5, not an in-depth description of how things work. The goal is to provide a “quick and dirty” guide to facilitate troubleshooting query performance, not something that is technically comprehensive - for that, you may find better mileage elsewhere. What this c
I tried to get the two dimensions, order ID and date, and the SQL automatically adds another LIMIT clause in between. When I remove this LIMIT in SQL Runner, the query runs fine. But I want to run it in the explore so that I can add it to my dashboard, and the SQL query is not editable in an explore. Is there any way I can get rid of this LIMIT clause (the “LIMIT 10”)? I know it is the only thing causing the error.
Hi, the documentation is a bit unclear about indexes on multiple columns in a derived table. The Derived Tables page says: “add indexes on customer_id and first_order_date, like this: … indexes: [customer_id, first_order_date]”. On the other hand, the Indexes page says: “with an index on customer_id and date: … indexes: [customer_id, date]”. Note the difference! According to the Derived Tables page, there are indexes on the two columns, one on each. According to the Indexes page, there’s a single index on the two columns. This is a significant difference (in MySQL at least); specifically, a single multicolumn index serves as an index not only on the combination of columns but also on any initial subset of that combination, but not on any other subset. Therefore, if this is a multicolumn index, then the order in which the columns are specified matters. (Again, this is true for MySQL; I don’t know what other dialects it might be true in.) So my questions are: Which documentation page is correct: does
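The MySQL leftmost-prefix behavior described above can be sketched with a hypothetical orders table:

```sql
-- Hypothetical sketch: one multicolumn index vs. two single-column indexes.
CREATE TABLE orders (
  customer_id      INT,
  first_order_date DATE,
  INDEX idx_cust_date (customer_id, first_order_date)  -- single composite index
);

-- Served by idx_cust_date (uses a leftmost prefix of the index):
--   WHERE customer_id = 42
--   WHERE customer_id = 42 AND first_order_date = '2021-01-01'
-- NOT served by it (skips the leading column):
--   WHERE first_order_date = '2021-01-01'
```

Which of the two readings LookML’s indexes: parameter actually produces determines whether the second filter above can use an index at all.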