Extensions, open source tools, the Looker API, and embedding Looker
Update: This article is now out of date. For the latest info on how to download or generate client SDKs for the Looker API, please see our Looker SDK Codegen repo on GitHub.

The Looker API is a collection of “RESTful” operations that enables you to tap into the power of the Looker data platform in your own applications. The Looker API can be invoked using plain old HTTP(S) requests. Any development tool, language, or application that can make HTTP requests and ingest JSON responses should be able to use the Looker API. Web geeks who live and breathe HTTP and/or AJAX XHR will feel at home using just the “raw” Looker API HTTP URL endpoints. GET, POST, PUT, PATCH, DELETE, oh my!

Web APIs For The ‘REST’ of Us

If writing HTTP requests is not something you or your developers do every day, accessing a REST API for the first time can be a little intimidating and disorienting. Programming with web requests often requires different idioms and patterns of behavior than traditional app development.
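For example, a raw login-and-call round trip can be sketched with nothing but the Python standard library. The instance URL and API3 keys below are placeholders; the `/login` endpoint and `Authorization: token …` header follow the Looker 3.x API conventions:

```python
import urllib.parse
import urllib.request

BASE_URL = "https://yourcompany.looker.com:19999/api/3.1"  # placeholder instance

def login_request(base_url, client_id, client_secret):
    """Build the POST /login request that swaps API3 keys for an access token."""
    body = urllib.parse.urlencode(
        {"client_id": client_id, "client_secret": client_secret}
    ).encode()
    return urllib.request.Request(f"{base_url}/login", data=body, method="POST")

def auth_header(access_token):
    """Subsequent calls carry the token in an Authorization header."""
    return {"Authorization": f"token {access_token}"}

# A real round trip would continue (network call, not executed here):
#   resp = urllib.request.urlopen(login_request(BASE_URL, my_id, my_secret))
#   token = json.loads(resp.read())["access_token"]
#   urllib.request.Request(f"{BASE_URL}/user", headers=auth_header(token))
```

Any HTTP client in any language follows the same two-step shape: exchange keys for a token, then send the token with every request.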
Important note: As of Looker 4.16 we’ve changed the script below. It was previously based on parsing HTML, which broke when we adjusted how we format HTML tables in Looker 4.16. We’ve since switched to a CSV parsing method, and the current script should work for all versions of Looker.

Why we built this

Looker users have the ability to share the results of Looks publicly. Among the public sharing options is the ability to import the data into a Google Sheet using the =ImportXML function. We recently noticed a major hiccup in the process, causing a majority of links to take up to 5 minutes to load even though the queries return in seconds or less. After extensive research, we’ve been able to validate that there is a bug in Google’s spreadsheet functionality. We don’t yet have an estimated resolution time from Google, but we are working with them to add a solution to our product. In the meantime, we’ve developed this workaround.

The function and how to use it

In order to use the function…
Powered by Looker is a neat way to share data with the world. If you’re interested in creating a proof of concept, I highly recommend checking out this post. If you’re having issues with your embedded Looks and/or dashboards, making sure the embed URL is properly constructed is a good place to start. Creating an application to write the URL for you is not something we currently do, but there is some example code here.

The Steps You’ll Take

To check URL construction, I always run through these steps (more on each later):
- Make sure the Look or dashboard exists on the “production” version of your instance
- Make sure users are authenticated with the correct permission set
- Make sure URL parameters are correctly formed
- Confirm that everything is spelled correctly

Handy Tools

To run through these checks, I find it helpful to have a couple of tools easily available:
- A URL decoder (I use this one, for no particular reason)
- A URL encoding reference (I use W3Schools, say what you will)
- Looker logs (if…
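As a sanity check for the “URL parameters” and “spelling” steps, a few lines of Python can decode a whole URL in one shot instead of pasting pieces into an online decoder. The URL below is a made-up example to show the decoding, not a template for a working embed URL:

```python
from urllib.parse import parse_qs, unquote, urlparse

def decode_embed_url(url):
    """Split a URL into a decoded path plus its query parameters so each
    piece can be checked against the list above. Purely illustrative."""
    parsed = urlparse(url)
    return unquote(parsed.path), parse_qs(parsed.query)

# Made-up example URL, just to show the decoding:
path, params = decode_embed_url(
    "https://mycompany.looker.com/login/embed/%2Fembed%2Flooks%2F4"
    "?permissions=%5B%22see_looks%22%5D&nonce=abc123"
)
```

Printing `path` and `params` makes encoding mistakes (a stray double-encoded `%25`, a misspelled parameter) jump out immediately.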
A nice example of how you can use Looker’s APIs to expand the capabilities of the data platform is to generate new kinds of content. Even data-driven businesses sometimes want to put things into a PowerPoint presentation. With just a little bit of scripting, you can automatically generate a slide deck by downloading PNG images via the API.

1. Install the required Python modules.

This code uses the python-pptx module. There is good documentation online, including a lot more you can do with PowerPoint slides: http://python-pptx.readthedocs.io/. While there isn’t an official Python Looker SDK client, you can install my build (generated using Swagger Codegen) directly from GitHub: https://github.com/ContrastingSounds/looker_sdk_30/. Instructions for building your own client can be found here: Generating Client SDKs for the Looker API. Note that for Python 3 users there is a slight glitch in Swagger Codegen 2, which doesn’t handle APIs that return different content types very well. See the chan…
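As a rough sketch of the deck-building half described above (the API download of the PNGs is omitted; the blank layout index and the image sizing are choices, not requirements, and the import is guarded so the sketch loads without python-pptx installed):

```python
try:
    from pptx import Presentation            # pip install python-pptx
    from pptx.util import Inches
except ImportError:                           # keep the sketch importable anyway
    Presentation = None

def deck_filename(dashboard_title):
    """Derive a filesystem-safe .pptx name from a dashboard title."""
    safe = "".join(c if c.isalnum() else "_" for c in dashboard_title.strip())
    return f"{safe}.pptx"

def build_deck(png_paths, out_path):
    """One slide per downloaded PNG, dropped onto a blank layout."""
    if Presentation is None:
        raise RuntimeError("python-pptx is not installed")
    prs = Presentation()
    blank = prs.slide_layouts[6]              # index 6 is the blank layout
    for png in png_paths:
        slide = prs.slides.add_slide(blank)
        slide.shapes.add_picture(png, Inches(0.5), Inches(0.5), width=Inches(9))
    prs.save(out_path)
```

The python-pptx documentation covers titles, text boxes, and charts if you want more than a picture per slide.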
# The Basics

As of Looker 4.2, you can schedule reports directly to an S3 bucket. Results can be unlimited, allowing users to schedule and send large result sets, provided they meet the streaming criteria (that is, the report can’t contain table calculations, totals, or, in some dialects, pivots). The scheduler will let you know as you’re scheduling whether the report can be unlimited or not. To see this option in the scheduler modal, the user needs to have the send_to_s3 permission.

When to Schedule to S3

Sending reports directly to S3 works well when email is not an option because of the size of the result set. Because we use SendGrid to process scheduled emails from Looker, and it has a 19.5MB limit (as of November 2016), the reports we send via email have to be limited in size. Streaming to S3 allows customers to bypass browser, memory, and email limitations. This may also be useful if you want to automate a system to pull data down from S3 and use it in other applications within your business.
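For the “pull data down from S3” side, a minimal sketch with boto3 might look like this. The bucket name, key prefix, and date-stamped naming are assumptions for illustration; check the key names your schedule actually produces:

```python
import datetime

try:
    import boto3                              # pip install boto3
except ImportError:                           # keep the sketch importable anyway
    boto3 = None

BUCKET = "my-looker-drops"                    # placeholder bucket
PREFIX = "scheduled/daily_orders"             # placeholder key prefix

def day_prefix(prefix, day):
    """Key prefix for one day, assuming date-stamped delivery names."""
    return f"{prefix}/{day:%Y-%m-%d}"

def list_deliveries(bucket, prefix):
    """List delivered report files (network call; needs AWS credentials)."""
    if boto3 is None:
        raise RuntimeError("boto3 is not installed")
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in resp.get("Contents", [])]
```

A downstream job could call `list_deliveries(BUCKET, day_prefix(PREFIX, datetime.date.today()))` and then fetch each key with `s3.download_file`.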
Hi there, I am trying to create a simple Python script to copy dashboards (these are user-defined dashboards), as well as their underlying Looks. Basically I am reading the info from the “source” dashboard and using the create_dashboard and/or update_dashboard API calls. I am stumped by a couple of issues:
- I can’t seem to get the references to Looks and layouts into the copied dashboard; I just end up with an empty dashboard. Looking at the Dashboard data structure, I notice that dashboard_layouts and dashboard_elements are marked as read-only, so it makes sense that this info isn’t used when creating/updating the dashboard. But is there a way to do this in the API?
- I can’t see how to use the API to copy dashboard-level filters. The dashboard-level filter info doesn’t seem to appear anywhere in the dashboard data structure.

Thanks for any clues you can provide…
What is prefetching?

Prefetching is an API-only system through which a customer can pre-run certain dashboards for particular filter sets. This makes the dashboards load as if from cache, even on the first load of the day.

When should I use a prefetch?

Only use a prefetch if the following conditions are met:
- You know all the possible filter values that will be used on the dashboard, and you’re comfortable with the load it will put on your database to run all the variations of the dashboard in sequence. You must be okay with filters outside this set being slow.
- You have truly realtime data. This matters if you want other places (e.g., Explore pages) to be realtime, but you don’t want dashboards to be re-run. It would also matter if it’s really important for you to have all the tiles in the dashboard be from the same time (e.g., all from 6am), and you have experienced situations where one tile is out of sync because it was refreshed with newer data later in the day.
The Problem

User-defined content (dashboards and Looks) is not easily portable across Looker instances, nor can it easily be backed up and restored to a known state. It is not easy to monitor content creation, user activity, etc. In particular, it is not easy to do these things from scripts that can be run by an operations team.

The Why

Gazer is a command line tool that provides an interface to a Looker instance for the purpose of managing content, users, schedules, etc. It can be used interactively, but can also be included in scripts. Finally, it serves as a reference implementation for developers who want to build their own tools.

The How

Gazer currently has 11 commands. The most common commands are user, space, dashboard, look, and plan. Each command has several subcommands tailored for that command. The ls subcommand is very common and will produce a list of objects. The cat subcommand is also common, and will provide the JSON representation of an object. Typing gzr alone will…
We need to pull just a few filters outside of the Looker frame to support constraints in our application and maintain continuity with other apps in our suite. The parameters from those filters will be injected into the embedded Look. We want to hide those filters within the embedded Looker UI but retain their filtering ability and display the other filters that we aren’t implementing in our own UI. Forgive my terminology if anything sounds wrong there. I’m hunting on behalf of our Engineering dept, not actually developing this myself.
We need to create a user in Looker before we try to embed a dashboard, but we’re having trouble figuring out whether it’s even possible to create one with the API. I’ve tried posting to the create user endpoint with embed credentials populated, but no luck. There is also no endpoint listed in the documentation to create embed credentials on their own, like you can with other credential types. Has anybody had any luck with this?
Looker ERD Generator (from an Explore using the Looker API)

Have you ever wanted to create an Entity Relationship Diagram (ERD) from your Looker model Explores? As a Looker partner consultant, I get asked for an ERD or data model all the time. Data model diagrams provide a concise, visual way to show how the tables (or views) and fields relate to one another:
- Views Only (Conceptual Data Model)
- Views with Keys
- Views with ALL Fields

To create these ERD diagrams, I created a Google Colab Jupyter Notebook (click here) which you may COPY, modify, and run. The Jupyter Notebook works best when you start at the top, read each section, and run each Python code cell one by one from top to bottom. This Jupyter Notebook uses Python 3, the Looker API, and ERAlchemy to create an Entity Relationship Diagram (ERD) for a selected Project, Model, Explore, and ERD Type. It walks you through step by step to:
- Make a COPY of the Notebook
- Install Necessary Python Libraries
- Set…
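To give a feel for the transformation the notebook performs, here is a toy sketch that turns explore-style metadata into a flat ER description. The dictionary shape and output format are inventions for illustration; the real notebook pulls explore metadata via the Looker API and hands it to ERAlchemy:

```python
SAMPLE_EXPLORE = {                 # invented shape, for illustration only
    "orders": {"fields": ["id", "user_id", "created_at"]},
    "users": {"fields": ["id", "name"]},
}
SAMPLE_JOINS = [("orders", "user_id", "users", "id")]

def to_er_markup(views, joins):
    """Emit a small text ER description: one block per view, one line per
    relationship. Similar in spirit to the text input ERAlchemy consumes."""
    lines = []
    for view, meta in views.items():
        lines.append(f"[{view}]")
        lines.extend(f"  {field}" for field in meta["fields"])
    for left_view, left_field, right_view, right_field in joins:
        lines.append(f"{left_view}.{left_field} -> {right_view}.{right_field}")
    return "\n".join(lines)

erd_text = to_er_markup(SAMPLE_EXPLORE, SAMPLE_JOINS)
```

The notebook does the equivalent for a real Explore and then renders the result to an image.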
Looker webhooks are an excellent way to ferry data from Looker to servers or services. For example, by writing a small server or by using a service like Zapier, a scheduled webhook can be used to move a Look’s data to Google Sheets, Amazon S3, Dropbox, an SMS, and other destinations. Here are the steps to schedule a Look or dashboard for webhooks. Note that before you begin, your Looker user must have the send_outgoing_webhook or admin permission:
1. Get the URL for the webhook
2. In Looker, schedule the dashboard or Look, using the webhook
3. In the web service, specify the webhook’s destination application
4. Handle any configuration needed for the destination application

The payload sent to your server or service is documented in this Discourse article: Webhook Payload Description. Read this article to understand the overall procedure, with a focus on the Looker-side steps for using a webhook to schedule delivery of Looks or dashboards.

1. Get the URL for the Webhook

Leaving L…
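The “small server” half of the steps above can be sketched with the standard library alone. The payload field names here (an `attachment` object with a `mimetype` and base64-encoded `data`) are assumptions taken from the Webhook Payload Description article; verify them against your own deliveries:

```python
import base64
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_attachment(payload):
    """Pull the delivered file out of a webhook payload dict. Field names are
    assumed from the Webhook Payload Description article."""
    att = payload["attachment"]
    return att.get("mimetype"), base64.b64decode(att["data"])

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        mimetype, data = extract_attachment(json.loads(body))
        with open("delivery.bin", "wb") as f:  # route on mimetype in real code
            f.write(data)
        self.send_response(200)                # acknowledge so Looker marks success
        self.end_headers()

# To run: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

The URL you paste into the Looker scheduler is simply wherever this server (or your Zapier hook) is listening.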
Please note that the Looker Slack Bot now directly supports scheduling for a more streamlined experience. Learn more here.

As a fully distributed company, at Buffer we use Slack extensively to communicate as a team. I’ve been curious for a while whether we could somehow integrate the two. One idea I had was to use the ability to send scheduled Look emails, but to have the emails go to a Slack channel instead. However, it was hard to get the Slack messages formatted nicely. With the 3.36 Early Access release I noticed that a new feature, currently still in Labs, lets you send scheduled emails with an Inline Image attachment. I could use this new feature to get rudimentary Slack integration with our Looker data, and was able to post graphs to a specified channel in Slack. The messages look something like this.

## How does it work?

This uses Looker’s email scheduling feature under the hood, but with some fun Zapier magic I was able to send these emails…
Hi there. I’ve deployed to Heroku and am really struggling to get any response from Lookerbot. I’ve attached what I’ve got in Heroku below, but have redacted the specifics; hopefully this helps a little. Whenever I mention the Lookerbot, nothing happens. When I type /lookerbot, I get “/lookerbot failed with the error http_service_error”, but I otherwise can’t get any response from it at all. My main use case is slash commands to show particular Looks. I’m fairly sure that it’s not to do with my API access, since I used the same credentials in Postman and was able to navigate the API without a problem. But it seems that Lookerbot doesn’t have the same access for some reason. Any ideas on how I can troubleshoot? Heroku setup below. Thanks so much!
I’m excited to debut lkml, a pure Python parser for LookML. You can run lkml from the command line (it will output a JSON string) or import it as a Python package (it will output a nested dictionary). @fabio at Looker already built a fantastic, open-source parser in Node, but I find that the Node dependency sometimes makes it difficult to adopt for the Python-based workflows that data people are more familiar with. I decided to embrace the challenge of hand-writing a parser in Python without any external dependencies or libraries. I didn’t have access to the actual grammar used for LookML, so I reverse-engineered it myself. Don’t worry, I’ve tested lkml on over 160K lines of public LookML downloaded through the GitHub API, and it works like a charm! lkml is fast, too. Excluding file I/O, it parses a typical file in a handful of milliseconds. Based on my tests, it’s at parity with the Node parser. Lastly, lkml has a full unit test suite with CI.

Installation

lkml is up on pip, so the following…
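A minimal usage sketch of the package interface (the import is guarded so the snippet reads even without lkml installed; the returned dictionary shape follows lkml’s documented output):

```python
SAMPLE = """
view: orders {
  sql_table_name: public.orders ;;
}
"""

def parse_lookml(text):
    """Parse a LookML string into a nested dict with lkml, if installed.
    The import is guarded so this sketch loads without the package."""
    try:
        import lkml                           # pip install lkml
    except ImportError:
        return None
    return lkml.load(text)

parsed = parse_lookml(SAMPLE)
# With lkml installed, parsed resembles:
# {"views": [{"name": "orders", "sql_table_name": "public.orders"}]}
```

From the command line, the equivalent is piping a .lkml file through the `lkml` entry point to get JSON.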
I sometimes edit LookML in VSCode, because I find it’s easier to jump around among multiple files, search-and-replace, and generally crush code. Of course, there’s no syntax highlighting, autocomplete, or any of the other code-editor niceties to be found in Looker’s built-in editor. I’ve written a couple of VSCode plugins over the years (yes, I have weird hobbies), and I’ve thought about writing a VSCode plugin for Looker, which would provide syntax coloring, error squigglies, and autocomplete for LookML. But I’m not sure it would have any users other than me. Does anyone else wish there was a “LookML mode” for VSCode?
Many data-savvy companies have teams that live in tools like Salesforce. Some of our customers embed Looker in their Salesforce implementation so people can get a quick view of account activity and lead information from their internal database. Here is an example of how we embed Looker into Salesforce (at Looker). Embedding Looker into Salesforce requires that each user have Looker access. Data is embedded as a parameterized iFrame, not passed to the Salesforce platform. Here are some quick instructions to get you started.

Prerequisites:
- Looker!
- Salesforce administrator account
- Mappable relationship (joinable key) between a SFDC record and an in-database record

Step 1: Create a VisualForce page (Setup - Develop - Pages). Depending on whether you want to insert a dashboard or a single look/query, the code will look slightly different. You should play with the width and height to appropriately fit the element onto the page.

Dashboard Example

From a dashboard or explore URL, you will…
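A dashboard example might be sketched as the following VisualForce fragment. The controller, the filter name in the URL, and the Looker hostname are all placeholders; adapt them to your own record-to-database mapping:

```html
<!-- Hypothetical VisualForce sketch; controller, filter name, and URL
     are placeholders to adapt to your own mapping -->
<apex:page standardController="Account">
  <apex:iframe
      src="https://mycompany.looker.com/embed/dashboards/1?Account+ID={!Account.Id}"
      height="600" width="100%" scrolling="true"/>
</apex:page>
```

The `{!Account.Id}` merge field is what parameterizes the iFrame per record, which is the joinable-key prerequisite in action.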
The Problem

As a Looker model grows in size and sophistication, it will also accumulate an ever-increasing number of Explores, views, and fields. Unfortunately, a common side effect of this is model bloat, which typically means a less-than-great end-user experience.

The Why

Henry is a command line tool that helps determine model bloat in your Looker instance and identify unused content in models and explores. It is meant to provide recommendations that developers can validate in order to clean up models (removing unused explores) and explores (removing unused joins and fields), as well as maintain a healthy and user-friendly instance.

The How

The tool currently has three main commands: pulse, analyze, and vacuum. The pulse command runs a number of tests that help determine the overall health of the Looker instance. Among the tests are: connection checks, which confirm that all connections are in working order; query history checks, to determine if there are any queries whose runtime stands out; the use of an…
Has anyone else had errors when connecting to Google BigQuery? When I test my connection, I get an error saying: “Driver cannot be found: undefined local variable or method `e’ for #Looker::DBConnectionTester:0x12ce1da0”. Just wondering if anyone else is getting this error. I’ve never seen it before, so I am not sure how to fix it. Thanks!
I am trying to make this code work in my Python script (https://github.com/llooker/python_api_samples/blob/master/update_filters_and_run_query.py). I have correctly built a YAML config file with the host, secret, and token. However, on the line looker = LookApi(host=my_host, token=my_token, secret=my_secret) I keep getting the error TypeError: __init__() got an unexpected keyword argument 'host'. I have included a screenshot; above the screenshot is the line from lookerapi import LookApi, because putting LookerApi did not work. Host, secret, and token all have the correct strings in them. I cannot find further documentation on the LookApi class. Please advise.