An archive of Looker blog posts
This subcategory contains all of the technical content from the deprecated Looker Blog (https://looker.com/blog). We will not be updating Blog Archive content, nor do we guarantee that everything is up to date. If you’re looking for officially supported resources, you can always visit our Help Center and review our Docs.
This content, written by Brett Sauve, was initially posted in Looker Blog on Dec 2, 2014. The content is subject to limited support. A lesser-known feature of some SQL dialects is something called the "window function". While MySQL users will be left out in the cold, most other SQL dialects can take advantage of their power. They can be a little tricky to wrap your mind around at first, but certain calculations - which are very complex or impossible without window functions - can become straightforward. Intriguing ... To demonstrate the power of window functions, let's take a look at an example set of customer data:

name            status    lifetime_spend
Neil Armstrong  Platinum  1000.00
Buzz Aldrin     Platinum  2000.00
Yuri Gagarin    Platinum  3000.00
John Glenn      Gold       400.00
Alan Shepard    Gold       500.00
Jim Lovell      Gold       600.00

Now suppose you want to know how each customer ranks in spending against the other customers in their status. In other words, you're hoping for a result set …
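The excerpt cuts off before the example query, but the kind of within-group ranking the post describes is typically written with RANK() over a partition. A minimal sketch, assuming a hypothetical customers table with the columns shown above:

    -- Rank each customer against others with the same status,
    -- highest lifetime spend first.
    SELECT
      name,
      status,
      lifetime_spend,
      RANK() OVER (PARTITION BY status ORDER BY lifetime_spend DESC) AS spend_rank
    FROM customers

With the sample data above, Yuri Gagarin and Jim Lovell would each rank first within their respective statuses.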
This content, written by Jim Rottinger, was initially posted in Looker Blog on May 3, 2019. The content is subject to limited support. Understanding iFrame sandboxes and iFrame security: Embedding third-party JavaScript in web applications is a tale as old as time. Whether it’s dropping a widget onto your web page or including custom content from a client in your cloud application, it’s something that many developers have encountered in their career. We all know about the iframe element in HTML, but how much do we really know about how it works? What are the security concerns associated with running code inside of an iframe and, furthermore, how can the HTML5 sandbox attribute on the frame alleviate these concerns? The goal of this tutorial is to walk through the various security risks associated with running third-party JavaScript on your page and explain how sandboxed iframes can alleviate those issues by restricting the permissions the embedded code is allowed to run with. In this post, we’ll demonstrate …
This content, written by Sooji Kim, was initially posted in Looker Blog on Aug 16, 2017. The content is subject to limited support. If you’re anything like me, as soon as your manager came to you with the novel idea of testing the company’s website to improve conversion, you might have done a few (or all) of these things. Stare blankly into the space between their eyes and nod. Say you’ll have something for them in a week. Try to figure out where to start. Google “how to A/B test.” And, if you do actually search “how to A/B test,” you’ll get a ton of results—62,400,000 to be exact-ish. From beginner’s guides to “proven” tactics and ideas, it can get pretty overwhelming to figure out how to get your testing strategy and process started. So when it came to A/B testing looker.com, I started where any employee of a data-obsessed company would: with our web analytics data. With that came a starting point for testing ideas, strategies, and processes that we continue to optimize and fine-tune …
This content, written by Mike Xu, was initially posted in Looker Blog on Mar 25, 2014. The content is subject to limited support. When looking at time series data, it's good to rely on a metric that reveals an underlying trend — something robust enough to deal with volatility and short-term fluctuations. A question that Looker users frequently pose is: How does average sale price fluctuate over time? This question points to a rolling average and sum calculation within SQL using a monthly interval. There are several ways of accomplishing this. I'm going to demonstrate two approaches: correlated subqueries and derived tables. My example uses a simple purchases table to create rolling sums on revenue. The sample code below can be modified to calculate many different aggregates and for comparing other timeframes, such as daily or hourly. Here's how the raw data looks:

id  timestamp            price
1   2014-03-03 00:00:04  230
2   2014-03-03 00:01:14  210
3   2014-03-03 00:02:34  250
4   ...                  ...

Here is the result set we expect …
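The excerpt ends before the queries themselves. As a rough sketch of the derived-table and correlated-subquery pattern the post describes (assuming a hypothetical purchases table with timestamp and price columns, and dialect-specific date arithmetic):

    -- Daily revenue plus a trailing 7-day rolling sum.
    -- The derived table aggregates purchases by day; the correlated
    -- subquery sums the trailing window for each day.
    SELECT
      daily.purchase_date,
      daily.revenue AS daily_revenue,
      (SELECT SUM(d2.revenue)
         FROM (SELECT DATE(timestamp) AS purchase_date, SUM(price) AS revenue
                 FROM purchases
                GROUP BY 1) AS d2
        WHERE d2.purchase_date BETWEEN daily.purchase_date - 6 AND daily.purchase_date
      ) AS rolling_7_day_revenue
    FROM (SELECT DATE(timestamp) AS purchase_date, SUM(price) AS revenue
            FROM purchases
           GROUP BY 1) AS daily
    ORDER BY daily.purchase_date

The same shape works for a monthly window by truncating timestamps to the month and adjusting the BETWEEN range.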
This content, written by Kenny Cunanan, was initially posted in Looker Blog on May 31, 2019. The content is subject to limited support. Technology is continually and rapidly changing. For organizations today, this means that in order to stay ahead of the pack and get real-time answers to pressing questions, data management is a necessity. From sales reports to company or industry trends, analyzing raw data has become central to promoting growth and improving decision-making. Big data is the future, and can give your organization data on-demand and the ability to quickly connect analysis to action. What is self-service business intelligence? Self-service business intelligence refers to a set of tools that help companies manage their data, transforming raw numbers into streamlined reports that improve company-wide decision-making. These tools allow companies to promote collaboration across multiple departments and to utilize ad hoc querying. As a branch of data analytics, self-service business intelligence …
This content, written by Erin Franz, was initially posted in Looker Blog on Mar 4, 2020. The content is subject to limited support. It’s been almost five years since the original launch of the directory. At that time, the term “Looker Block” didn’t even exist. Now, our directory contains over 75 source Blocks developed by Looker and our partners to accelerate analytics for our customers. With that in mind, we figured it was a great time to take a closer look at other challenges Looker Blocks can help solve. One use case we often get from Looker customers is how to appropriately model marketing attribution with data. Understanding ROI on marketing campaigns is imperative to your business, but wrangling all the data and modeling it or trying to understand your current model can be pretty time-consuming. As a solution to make this process easier, we’re excited to announce the Block. This Block provides the necessary guidance to successfully build and implement a marketing attribution model using Looker …
This content, written by Scott Hoover, was initially posted in Looker Blog on Jan 4, 2016. The content is subject to limited support. Survival analysis - introduction: Many businesses view customer lifetime value (LTV) as the Holy Grail of metrics, and with good reason. As an absolute measure, it's an indication of how much money a business can reasonably expect to make from a typical customer. As a relative measure, it's a good gauge of business health—for example, if expected profit from the typical customer is decreasing over time, perhaps there are levers the business can identify and pull in order to adjust its course. There are, however, a multitude of approaches when calculating LTV, each ranging in complexity. Moreover, an LTV formula for one business may not hold for another. What is consistent are the fundamental inputs: revenue, costs, and estimates of customer lifetime. In this article, I'll focus on a popular method for estimating the typical customer's lifetime called Survival Analysis …
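The excerpt stops at the introduction. For orientation, the central object in survival analysis is the survival function, and a simple undiscounted LTV sketch built from the inputs the post lists might look like this (a generic formulation, not necessarily the one the article goes on to derive):

    % Survival function: probability a customer is still active after time t
    S(t) = \Pr(T > t)

    % Expected customer lifetime is the area under the survival curve, so a
    % simple estimate from per-period revenue and costs (ignoring discounting) is
    \mathrm{LTV} \approx (\text{revenue} - \text{costs}) \times \int_0^{\infty} S(t)\,dt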
This content, written by Ryan Gurney, was initially posted in Looker Blog on Oct 8, 2018. The content is subject to limited support. Looker remains committed to continually improving its security and compliance practice. In September of 2018, our Service Organization Control 2 Type 2 Report for the Looker Cloud Hosted Data Platform became available for customers and prospects. The SOC 2 Type 2 assessment was conducted by independent auditors, The Cadence Group, who specialize in compliance across multiple industries. The Type 2 report addresses service organization security controls that relate to operations and compliance, as outlined by the AICPA. The report includes management’s description of Looker’s trust services and controls, as well as Cadence’s opinion of the suitability of Looker’s system design and the operating effectiveness of the controls, in relation to availability, security, and confidentiality. While our SOC 2 Type 1 report, released in February of 2018, was a "test of design," …
This content, written by Mike Xu, was initially posted in Looker Blog on Dec 9, 2014. The content is subject to limited support. Much of the world's data is stored in an entity-attribute-value (EAV) model. The EAV model is commonly used in scientific research, medicine, healthcare, and popular open source and commercial software platforms such as Magento and Drupal. The key advantage of the EAV model is its flexibility: it can represent sparse, dynamic attributes that a fixed relational schema cannot handle well. It was designed with the intent of getting data in, with the tradeoff of being difficult to get data back out. To overcome these tradeoffs, this article will cover how to effectively analyze EAV data. If you are not fully familiar with the structure and pros/cons of the EAV model or need a quick refresher, here's an overview. In order to fully work with EAV data for traditional analysis and reporting, we will transform the EAV tables into a standard relational form by creating tables for their associated entities. Directly querying EAV data to analyze cohorts, funnels, and time series is tedious and challenging. These queries …
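The excerpt cuts off before the transformation itself. The usual pattern for turning EAV rows into one row per entity is conditional aggregation; a minimal sketch, assuming a hypothetical eav_table(entity_id, attribute, value) and a handful of known attribute names:

    -- Pivot EAV rows into one row per entity.
    -- Attribute names here are illustrative, not from the original post.
    SELECT
      entity_id,
      MAX(CASE WHEN attribute = 'first_name'  THEN value END) AS first_name,
      MAX(CASE WHEN attribute = 'email'       THEN value END) AS email,
      MAX(CASE WHEN attribute = 'signup_date' THEN value END) AS signup_date
    FROM eav_table
    GROUP BY entity_id

Persisting the result as a derived table, rather than running the pivot ad hoc, is what makes cohort, funnel, and time-series analysis manageable.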
This content, written by Bruce Sandell, was initially posted in Looker Blog on Jan 30, 2018. The content is subject to limited support. Amazon Redshift recently added support for Late Binding Views. A Late Binding View is a view that is not tied to the underlying database objects that it references. It is particularly beneficial for Amazon Redshift users that are storing current or more frequently used data in Redshift and historical or less frequently used data in Amazon S3. Using Late Binding Views, you are able to create a single view that includes data in both Amazon Redshift and Amazon Redshift Spectrum External Tables, providing a single, comprehensive data set for your reporting needs without users having to worry about whether data is stored in Amazon Redshift or Amazon S3. Late Binding Views are the only type of view supported by Redshift Spectrum. Prior to adding the functionality for Late Binding Views, you could only create a view that referenced existing database objects, and you could not …
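The excerpt ends mid-sentence, but for reference, a Late Binding View is created by appending WITH NO SCHEMA BINDING to the view definition. A minimal sketch with hypothetical, schema-qualified table names (late binding views require fully qualified references):

    -- A single view over current data in Redshift and historical data in a
    -- Redshift Spectrum external table; the underlying objects are not
    -- resolved until query time.
    CREATE VIEW all_events AS
    SELECT event_id, event_time, user_id FROM public.recent_events
    UNION ALL
    SELECT event_id, event_time, user_id FROM spectrum.historical_events
    WITH NO SCHEMA BINDING;

Because the view is not bound to its underlying objects, those objects can be dropped or recreated without invalidating the view.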
This content, written by Matthew Marichiba, was initially posted in Looker Blog on Jul 17, 2014. The content is subject to limited support. Data analysts often face problems with naming conventions, because their vocabulary spans both the business space ("how do we talk about our business?") and the data model space ("how do our systems represent data relating to our business?"). Add to this situation legacy naming conventions entrenched in systems that describe the world of the past, and you've got a real namespace challenge. If you write queries without a modeling tool like Looker, every SQL query you write carries an extra burden of mapping column and table names into something sensible for business users. On a good day, mapping names is pesky. On bad days, it's error-prone and inconsistent. Of course, you could decide to not bother mapping names. But this shifts the cognitive burden to your data consumers, who are left wondering things like, "Is a 'user' a customer, or a partner, or …
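The excerpt trails off, but the per-query renaming burden it describes looks roughly like this; the legacy table and column names below are hypothetical, not from the original post:

    -- Without a modeling layer, every ad hoc query re-maps raw system
    -- names into business-friendly terms by hand.
    SELECT
      usr.usr_nm      AS customer_name,
      usr.acct_crt_dt AS signup_date,
      ord.tot_amt     AS order_total
    FROM legacy_usr AS usr
    JOIN legacy_ord AS ord
      ON ord.usr_id = usr.usr_id

A modeling layer lets that mapping be defined once and reused, which is the direction the post builds toward.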
This content, written by Erin Franz, was initially posted in Looker Blog on Jan 19, 2017. The content is subject to limited support. Today we are proud to announce that with our newest release, we now support Amazon Athena. Looker on Amazon Athena allows users across an organization to derive insights and easily make data-driven decisions directly from their AWS data lake. Data lakes make it possible to store ALL the data. So what is a data lake, really? A data lake is a single repository for ALL of an organization’s data, regardless of source or format. Structured, semi-structured, and unstructured data can all be stored in the same place. Data lakes can include data that you’re using today, data that you plan to use in the future, and even data with an as-yet-unknown purpose that you might find a use for someday. Ideally, all data for all time is stored in one place so the entirety of your historical data is available for analysis. With all data available, theoretically, any question can be answered …
This content, written by Jill Hardy, was initially posted in Looker Blog on Aug 19, 2019. The content is subject to limited support. Today I’m going to describe five principles that will help you create dashboards that serve the people that count, rather than just serving up data. The principles are:

finding your dashboard’s “big idea”
getting buy-in with a wireframe
ensuring clarity
keeping it simple
creating a good flow of information

I like to think of the first two as the research phase because they take place before I start developing my dashboard. And I think of the last three as the creation phase, since I’m thinking about them as I build. A clear dashboard that focuses on a central theme speaks for itself. You’ll spend less time explaining the dashboard, and data-driven decisions can be made more easily because the right information is readily accessible. Sounds like a solid way to work, doesn’t it? Well then, let’s get started. The research phase: What’s the big idea? Knowing …
This content, written by Ben Beebe, was initially posted in Looker Blog on May 11, 2017. The content is subject to limited support. Looker is growing very quickly. While our growth is dramatic, our ability to scale using Looker may be best demonstrated (in my not-so-humble opinion) by the Looker Finance Department. When I joined Looker a few years ago, I was the first finance hire outside of our CFO. Today, after several years of 100%+ Yr/Yr growth, two new business entities, and roughly 250 more employees, I have a team of two and we use Looker to help manage and track so much of what we do. How we Looker: As FP&A professionals, we have several core systems that we leverage to do our jobs and provide the company with key metrics and guidance: a General Ledger & ERP system (NetSuite), a Budgeting or Financial Planning tool (Adaptive Planning), a CRM (Salesforce.com), and often a BI tool to help with reporting and Analytics (Looker). Outside of those tools is the finance professional …
This content, written by Kevin Marr, was initially posted in Looker Blog on Mar 28, 2019. The content is subject to limited support. Each month, Looker releases new updates and features that further enable the smarter use of data. As we continue to improve and build upon Looker, we want to highlight and share notable features so that our customers can take full advantage of them. With Looker 6.8 come many great additions focused on modeling. Most notably, we are releasing beta support for importing projects from private, remote LookML repositories. To understand why this is a big deal, let’s first review why LookML is so valuable to data modeling. D.R.Y. data modeling with LookML: One of the core benefits of LookML is that it stops you from repeating yourself when doing data analysis. When writing SQL, you often look back at queries you’ve written in the past, copying and pasting little snippets of those queries, and reassembling them to form a new query. This process is error-prone and …
This content, written by Haarthi Sadasivam, was initially posted in Looker Blog on Apr 13, 2017. The content is subject to limited support. To show how seamlessly Looker can integrate into a data science workflow, we took a public dataset (Seattle bikeshare data) and applied a predictive model using Looker, Python, and Jupyter Notebooks. Follow along as I walk through the setup. We loaded nearly 2 years (October 2014 - August 2016) of historical daily trip data for Seattle’s bikeshare program into BigQuery. We thought we’d explore the impact that weather has on trip volume. To do that we’ve imported daily weather data (e.g. temperature, humidity, rain) for Seattle alongside the trip data in BigQuery. Based on our model, we’d like to predict future trip counts by station, but more importantly operationalize those insights to automatically facilitate rebalancing bikes to underserved stations. How might the weather affect people’s willingness to ride? First, we want to define relationships …
This content, written by Donal Tobin, was initially posted in Looker Blog on Sep 30, 2020. The content is subject to limited support. Many Looker users find that Redshift is an excellent, high performance database that can be used to power sophisticated dashboards. However, given Redshift’s architecture, having an increased number of tiles in a dashboard can result in slower response times and queries that take longer to complete than they initially did. In this piece, we’ll explore how to fine-tune your Redshift cluster so you can better match your Looker workloads to your Amazon Redshift configuration and render dashboards quickly and efficiently. How data bottlenecks happen: For many organizations, standing up an analytics stack can initially be a bit of an experiment. It often starts with an identified data need, followed by somebody spinning up an Amazon Redshift cluster, building a few data pipelines, and then connecting an analytics platform, like Looker, to visualize the results.