Better together: Looker connector for Looker Studio now generally available

Today’s leading organizations want to ensure their business users get fast access to data with real-time governed metrics, so they can make better business decisions. Last April, we announced our unified BI experience, bringing together both self-serve and governed BI. Now, we are making our Looker connector to Looker Studio generally available, enabling you to access your Looker-modeled data in your preferred environment.

Connecting people to answers quickly and accurately to empower informed decisions is a primary goal for any successful business, and more than ten million users turn to Looker each month to easily explore and visualize their data from hundreds of different data sources. Now you can join the many Google Cloud customers who have benefited from early access to this connector by connecting your data in a few steps.*

How do I turn on the integration between Looker and Looker Studio?

You can connect to any Google Cloud-hosted Looker instance immediately after your Looker admin turns on its Looker Studio integration. Enable the Looker Studio connector on the BI Connectors page in the Looker instance Admin panel.

Once the integration is turned on, you create a new Data Source, select the Looker connector, choose an Explore in your connected Looker instance, and start analyzing your modeled data. You can explore your company’s modeled data in the Looker Studio report editor and share results with other users in your organization.

When can I access the Looker connector?

The Looker connector is now available for Looker Studio and for Looker Studio Pro, which includes expanded enterprise support and compliance. Learn more about the connector in the Looker Studio help center and the Looker documentation.

* A Google Cloud-hosted Looker instance running Looker 23.0 or higher is required to use the Looker connector. A Looker admin must enable the Looker Studio BI connector before users can access modeled data in Looker Studio.

Related products: Looker Studio News & Announcements

Customer Success Newsletter - November 2022

November 2022

Hello Lookers,

With the winter holidays around the corner, we are busy working on some exciting new developments with Looker! Read on to find out more about Looker Studio and upcoming events.

Looker Studio

Learn about Looker Studio In One Minute, and get the basics! For more detailed information, visit this Looker Studio page for benefits, key features, and documentation. To access Looker Studio, see this Help Center article for instructions.

Latest Release Note Highlights 🎉

22.20 - Release Highlights, Changelog, Breaking Changes

- Connected Sheets is generally available for Google Cloud-hosted Looker instances. Connected Sheets for Looker lets you explore data from your LookML models through the familiar Google Sheets interface. Looker admins must first enable the feature on the new BI Connectors admin page, which can be found in the Platform section of Looker's Admin menu.
- Legacy dashboards. The Revert to Legacy Dashboards legacy flag has been removed, meaning that you can no longer use the dashboards UI to downgrade a dashboard to a legacy dashboard. The Can use Legacy Dashboards legacy feature has been added to enable users to view legacy dashboards. For more information, see the Legacy dashboard deprecation - starting with Looker 22.20 (November 2022) Best Practices article.
- Cookieless embeds. When the Cookieless Embed Labs feature is enabled, browsers that block third-party cookies can authenticate users in the embedded iframe across different domains. Cookieless embed authentication requires server-side configuration. See the Looker JavaScript Embed SDK README for setup instructions.

Customer Insights

Are you passionate about feature requests, cool use cases, or beta features, or do you want to give feedback on the Looker product? Wish there was a way to get more attention for your input? Join the Looker Customer Insights program for one-of-a-kind opportunities to influence our planning directly. Here, you will interact with the user research teams, engineers, and product managers.

Events and Looker Training

Cloud BI Hackathon - December 6
Looker Onboarding Webinar - December 21

Related products: Looker News & Announcements

Customer Success Newsletter - October 2022

Hello Lookers,

Welcome to the October newsletter; we hope you are enjoying fall wherever you live. Check out our release notes and the Looker Studio announcement made this week at Cloud Next!

Looker Studio

The Looker connector for Looker Studio (formerly known as Data Studio) is now available in preview. As we announced at Cloud Next, we're bringing together a complete, unified BI platform with Looker and Looker Studio. The Looker connector is one part of a larger integration story that will empower data workers to self-serve while enabling data leaders to maintain data governance. Admins can submit the sign-up form to get access. The connector is compatible with Google Cloud-hosted instances running Looker version 22.16 or higher. Users on compatible instances will also see an option to "Open in Looker Studio" from Explores in Looker, enabling them to quickly create a Looker Studio report with a data source pointing back to that Explore. This is also a Labs feature in 22.16, and admins can disable it at any time.

Steps to get access:
1) Complete the form, providing an instance URL and organizational domain to enable.
2) Google will add a license feature to the instance, and a new toggle will appear in Labs.
3) The connector will appear in Looker Studio for users on the organizational domain.
4) The submitter will receive an email confirming the above, generally within one week.
5) An admin will need to enable the Data Studio Labs feature on the instance to enable the integration.

Latest Release Note Highlights 🎉

22.18 - Release Highlights, Changelog, Breaking Changes

- Add Explores to Looker Studio. The Looker connector, now available in Public Preview, allows you to view data from a Looker Explore in a Looker Studio report by connecting Looker as a data source. This integration requires enablement in both Looker and Looker Studio. If you would like to view Looker data in a Looker Studio report, first enroll in the Public Preview by filling out the Looker Studio / Looker Integration Public Preview form.

Customer Insights

Are you passionate about feature requests, cool use cases, or beta features, or do you want to give feedback on the Looker product? Wish there was a way to get more attention for your input? Join the Looker Customer Insights program for one-of-a-kind opportunities to directly influence our planning. Here, you will interact with the user research teams, engineers, and product managers.

Events and Looker Training

Looker Onboarding Webinar - October 19
Looker Onboarding Webinar - November 16

Related products: Looker News & Announcements

Product Announcement: Introducing the New Looker Performance Recommendations Dashboard

We’re excited to announce a new addition to Looker’s suite of System Activity dashboards: Performance Recommendations. Now, in addition to tracking user activity, content usage, and high-level instance performance, System Activity provides you with actionable recommendations for improving performance and enables you to drill into detailed query performance data.

This new dashboard can help you:

- Improve content performance by aligning with best practices
- Identify query bottlenecks that are slowing down performance for users
- Prioritize work based on the severity of performance issues
- Learn about ways to optimize the performance of dashboards and Explores

This new Performance Recommendations dashboard is built using a new underlying Explore called Query Performance Metrics, which we have also made available through System Activity. The Query Performance Metrics Explore provides detailed performance measures for each step of query execution, enabling you to dig far beyond overall query runtime as you analyze performance.

Performance Recommendations dashboard

The new Performance Recommendations dashboard includes two tiles: one with dashboard recommendations and one with Explore recommendations. Let’s look at what you can find on each one.

The Dashboard Recommendations tile focuses on identifying specific dashboards that are out of line with Looker performance best practices, with each dashboard ranked based on the severity of the issue(s) identified. Common warnings that you’ll find here include:

- Dashboard auto-refresh settings that are more frequent than recommended
- Dashboard tile counts that are too high
- Too many merged queries on a dashboard

As you’d expect, the recommendations guide you to update settings or reduce the number of tiles and/or merged queries on a given dashboard. In addition, each recommendation links out to documentation that provides more information about the recommendation being made.

The Explore Recommendations tile is built from the new Query Performance Metrics Explore and provides recommendations based on the average performance of each query step across queries run from a given Explore. This aims to help you identify query bottlenecks and offers suggestions for improvements such as:

- Places where PDTs could be helpful for simplifying complex SQL logic that takes a long time to execute
- Opportunities to reduce custom formatting or table calculation usage in order to improve post-query processing
- New features that could be enabled, like the new LookML runtime, that can help improve overall performance

In addition to these recommendations, you can also “Explore from here” to dig deeper into query performance with the Query Performance Metrics Explore.

Query Performance Metrics Explore

Within the Query Performance Metrics Explore, you can investigate specific queries to understand what’s happening at each step of the execution process. Each phase of query execution includes even more detailed steps, so we’re making these details available at the most granular level possible. This new level of detail makes it easier to identify the specific bottlenecks that are resulting in long-running queries. Concurrency issues, connection limits, network latency, and slow query execution within the database can be more easily differentiated, diagnosed, and acted upon.
To learn more about query phases and the metrics available, check out our documentation.

For those using BigQuery with Looker, this Explore also includes three database-specific metrics aimed at highlighting BI Engine usage for query acceleration:

- BigQuery Job ID
- BI Engine Mode
- BI Engine Reason

These make it easier to tie Looker queries back to BigQuery, and they help you determine whether a given query was partially or fully accelerated using BI Engine. Note that these values will be null for queries run against databases other than Google BigQuery.

As you dig into query performance, you can also set up your own Looks and alerts using this data to help you proactively manage query performance. Consider creating weekly scheduled reports for long-running queries, or setting up alerts for queries that run longer than a set threshold (see the sketch at the end of this post). The addition of these granular query performance metrics should make it easier to identify and address query performance challenges.

Try it out

With this new Performance Recommendations dashboard and underlying Query Performance Metrics Explore, we are providing tools to more easily identify and address query bottlenecks so that you can optimize the efficiency of your data environment. Just head over to System Activity and check out the new Performance Recommendations dashboard to get started.
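As a starting point for that kind of monitoring, the same System Activity data can be pulled programmatically. Below is a minimal sketch using the Looker Python SDK to list recent long-running queries; the field and filter names are illustrative assumptions and may differ from the exact System Activity model on your instance.

import looker_sdk
from looker_sdk import models40 as models

# Reads credentials from looker.ini or LOOKERSDK_* environment variables
sdk = looker_sdk.init40()

# Hypothetical query against the System Activity history Explore;
# adjust field and filter names to match your instance's model.
query = models.WriteQuery(
    model="system__activity",
    view="history",
    fields=["query.id", "history.created_time", "history.total_runtime"],
    filters={"history.total_runtime": ">60"},  # queries running longer than 60 seconds
    sorts=["history.total_runtime desc"],
    limit="50",
)

slow_queries = sdk.run_inline_query(result_format="json", body=query)
print(slow_queries)

A Look built from the same Explore and filters could then drive a scheduled report or alert, as described above.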

Related products: Looker News & Announcements

Customer Success Newsletter - September 2022

September 2022

Hello Lookers,

We hope you are enjoying fall this year! We have several new updates in our release notes below. Please join us for Google Cloud Next '22, October 11-13, for many Data Analytics focus sessions. Register now!

Looker Connect Update

All-new learning journeys have been published for Data Consumers and LookML Developers in Looker Connect. The new lessons support Looker 22.4.35, have improved audio and accessibility standards, and have been optimized to make sure the learning pace is just right for getting up to speed with Looker.

Customer Insights

Are you passionate about feature requests, cool use cases, or beta features, or do you want to give feedback on the Looker product? Wish there was a way to get more attention for your input? Join the Looker Customer Insights program for one-of-a-kind opportunities to directly influence our planning by interacting with the user research teams, engineers, and product managers.

Want to share your thoughts on Looker’s Maps Visualizations? We are looking to connect with customers who are using our Maps Visualizations to hear about your use cases and discuss what features we should include in our roadmap. Join us on Wednesday, September 28th at 9am PT for a discussion with our Product Management team. For an invite, join the Looker Customer Insights program by September 26th.

Latest Release Note Highlights 🎉

22.16 - Release Highlights, Changelog, Breaking Changes

- System Activity: Looker System Activity Explore users can now explore query history context specific to Data Studio applications.
- Content Navigation for Embed: A new Looker Labs feature, Embed Content Navigation, controls whether the new embed content navigation is available on an instance. This Labs feature now defaults to on, with the option of being disabled. Enhanced content navigation is now available for embedded Looks and Explores.
- Enhanced Query Admin UI: A Looker Labs feature that migrates the Query Admin UI from Angular to React refreshes the look and feel with standard Looker components. In addition, pagination is available, as well as separate tabs for recent and complete queries, allowing customers with large history tables to effectively administer their instances.
- Query Performance is now GA: The Query Performance Metrics Explore and Performance Recommendations dashboard are now generally available. For more details, refer to our documentation.

Events and Looker Training

Looker Onboarding Webinar - September 21
Google Cloud Next '22 - October 11-13
Cloud Roadmap Series - Data Analytics Innovation - October 5

Related products: Looker News & Announcements

Announcing: Improved Performance with New LookML Compiler

Good performance is a key component of a good user experience, so we are continuously investing in improvements to the end-to-end performance of Looker. The latest enhancement we’ve made in this area is a completely new LookML compiler, now generally available across all Looker instances.

What does the LookML compiler do?

Looker’s LookML compiler, also called the new LookML runtime, is a core component of the Looker application. It is the backend engine that validates your LookML and translates it into SQL queries when users are exploring data or working in dashboards. Specifically, the LookML compiler is responsible for the following:

- Compiling a model: Parsing and loading a model into cache, which occurs every seven days or after any changes have been made.
- Validation: Parsing LookML code and front-end content and identifying any errors.
- Generating metadata for an Explore: Identifying and exposing views, fields, and other metadata from the LookML model to users within the Explore interface.
- Writing SQL for a query: Identifying the “active” fields for a query and writing SQL to the database accordingly. Active fields are those that are included in a query from an Explore or dashboard.

What are the benefits of the new LookML compiler?

The new LookML compiler has feature parity with the original compiler but provides improved performance in a number of application areas. With the new compiler, some or all of the following activities should be faster:

- LookML validation
- Content validation
- Explore and dashboard loads
- SQL query generation (now using the Apache Calcite SQL writer)

Through beta testing for this feature, we have observed that some customers saw order-of-magnitude performance improvements when running dashboards with the new compiler compared to running those same dashboards without it. And in aggregate across all customers using the new LookML compiler, we are seeing a reduction in query overhead timing. Below, you can see the 90th percentile for Looker query overhead by week over eight complete weeks spanning June through mid-July 2022. The blue line represents the 90th percentile for queries using the new LookML compiler, while the red line represents the 90th percentile for queries using the old LookML compiler.

In addition to these potential performance gains, the new LookML compiler also brings improved LookML validation, providing you with warnings that may not have been surfaced previously. In the short term, this may result in the identification of errors in your LookML that previously had not been caught but would have resulted in errors at query time. This includes errors such as:

- Primary keys that have not been defined for a view
- Inaccessible fields
- Naming convention misalignment on a parameter
- Legacy filters on data selection fields for PDTs
- Issues surfaced by enhanced validation for Liquid

This will help ensure that your LookML code is clean and has fewer errors impacting your front-end users. We have extensively tested this new compiler, but if you do experience any issues, there is a Legacy Toggle that you can turn on to re-enable the original LookML compiler for backwards compatibility.
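If you want to see what the improved validator reports before your users do, you can trigger LookML and content validation programmatically. Here is a minimal sketch using the Looker Python SDK; the project name is a placeholder.

import looker_sdk

# Reads credentials from looker.ini or LOOKERSDK_* environment variables
sdk = looker_sdk.init40()

# Run LookML validation for a project ("your_project" is a placeholder)
project_validation = sdk.validate_project("your_project")
for error in project_validation.errors or []:
    print(error.severity, error.file_path, error.message)

# Run content validation to catch Looks and dashboards broken by model changes
content_validation = sdk.content_validation()
print(f"Content items with errors: {len(content_validation.content_with_errors or [])}")

Running these checks in CI against a development branch is one way to surface the new warnings before deploying LookML changes to production.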
What’s Next?

This project continues the evolution of Looker’s architecture to provide enterprise security, scale, and robustness. Previous projects introduced Apache Calcite to generate SQL and optimize queries using Aggregate Awareness. In addition to improving application performance for all of our customers and users, this rebuild gives us more modular, maintainable backend code and immutable-first guarantees that allow for stateless, reusable objects. This will let us continue to develop new product features and capabilities and make Looker more scalable and robust in the future.

Related products: Looker News & Announcements

How to manage PDTs at scale with Looker’s new Apache Airflow and Cloud Composer integrations

Looker persistent derived tables (PDTs) provide the ultimate data modeling flexibility. They put robust data transformation capability into the hands of analysts and data modelers by enabling them to use LookML to write materialized query results back to the database. This means performant reporting based on complex analytical patterns behind the scenes (think cohorting, retention, user behavior patterns, etc.) can be in the hands of users faster.

However, as our customers have leaned into the magic of PDTs, some have run into challenges with scalability. Monitoring and managing PDTs that refresh on varying schedules becomes exponentially more complex as the number of PDTs grows. As a solution, we are thrilled to share that we have released a new integration with Apache Airflow that is also available in Cloud Composer, Google’s managed workflow orchestration service built on Apache Airflow. This new integration provides a pathway for our customers to scale their PDT usage through external data orchestration alongside other ETL and ELT processes. Read on for details on how you can get started.

Before You Start

To take advantage of this new integration, you will need to be using the following versions of Looker and Airflow:

- Looker 22.2+
- Airflow 2 or Cloud Composer using an Airflow 2 environment
- Google providers package 6.5.0+ for Airflow

This integration uses the Looker SDK to connect Looker and Airflow. The SDK calls the Looker API 4.0 using your API credentials. To learn more about authenticating with the Looker API, check out our documentation.

Setting Up Your Connection

Because this integration leverages Looker’s new functionality for managing PDTs externally via API, you’ll need to ensure that the toggle called Enable PDT API Control is turned on within your connection settings. If you have more than one connection, you’ll need to enable this for all connections with PDTs that will be managed using Airflow. Once you have enabled PDT API control in Looker, you’ll need to set up your Looker connection within Apache Airflow or Cloud Composer.

Apache Airflow

Here’s a brief overview of the parameters available and recommended settings for each:

Connection Id: your_conn_id  # give your connection a name of your choosing
Connection Type: HTTP
Host: https://your.looker.com  # base URL for the Looker API (do not include /api/* in the URL)
Login: YourClientID
Password: YourClientSecret
Port:  # optional - Looker will use the default API path for your instance if left blank
Extra: {"verify_ssl": "true", "timeout": "120"}  # optional

For more information on these settings, see the Looker API path and port section of Looker’s documentation. To generate your API login (ClientID) and password (ClientSecret), you’ll need to do the following:

1. Create API3 credentials on the Users page in the Admin section of your Looker instance. If you’re not a Looker admin, ask your Looker admin to create the API3 credentials for you.
2. Copy and paste the generated client ID into your Airflow connection as your login and the generated client secret as your password.

For more information, see our documentation on Authentication with an SDK.

There are some additional optional parameters that can be set in JSON format within the Extra section:

- verify_ssl: Set this to false ONLY if testing locally against self-signed certs. Otherwise, it defaults to true and does not need to be specified.
- timeout: Sets the timeout (in seconds) for HTTP requests. This defaults to 120 seconds (2 minutes) if not specified.
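If you prefer to script the connection rather than use the Airflow UI, you can build the same connection in Python and export it as a URI, which Airflow can pick up from an AIRFLOW_CONN_<CONN_ID> environment variable or a secrets backend. A minimal sketch, with placeholder credentials:

from airflow.models.connection import Connection

# Placeholder values; substitute your own instance URL and API3 credentials
conn = Connection(
    conn_id="your_conn_id",
    conn_type="http",
    host="https://your.looker.com",
    login="YourClientID",
    password="YourClientSecret",
    extra='{"verify_ssl": "true", "timeout": "120"}',
)

# Print the connection in URI form, e.g. for an AIRFLOW_CONN_YOUR_CONN_ID
# environment variable or a secrets backend entry
print(conn.get_uri())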
Please note that there isn’t an associated test_connection hook for this connection type, so the Test Connection button will not work. The best way to test your connection is to create and trigger a simple DAG.

Cloud Composer

When setting up a Looker connection in Cloud Composer, we recommend using Secret Manager, which gives you full secret-management capabilities for your Looker client secret. Here are the steps for setting up a connection in Cloud Composer using Secret Manager:

1. Enable Secret Manager in your project.
2. Ensure that the service account used by Cloud Composer has the proper permissions.
3. Set the secrets/backend Airflow configuration override to indicate that Secret Manager will be used for secrets.
4. Create a Secret following this naming pattern, with a Value following the URI representation (this is where you will pass in your Looker instance and connection details).

For testing purposes, or if you would prefer not to use Secret Manager, you can set up a connection in Airflow directly for your Composer environment. To do that, open the Airflow UI from your Composer environment and follow the steps outlined above for creating a connection within Airflow directly.
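As an illustration of step 4, here is a sketch of creating such a secret with the google-cloud-secretmanager Python client. The secret name assumes the default airflow-connections- prefix used by Airflow’s Secret Manager backend, and the URI payload mirrors the placeholder connection shown earlier; treat both as assumptions to verify against your configuration.

from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()
project = "projects/your-project-id"  # placeholder project

# Assumed default naming pattern for the Airflow Secret Manager backend:
# airflow-connections-<connection id>
secret = client.create_secret(
    request={
        "parent": project,
        "secret_id": "airflow-connections-your_conn_id",
        "secret": {"replication": {"automatic": {}}},
    }
)

# Store the Airflow connection URI (e.g. the output of conn.get_uri() above)
client.add_secret_version(
    request={
        "parent": secret.name,
        "payload": {"data": b"http://YourClientID:YourClientSecret@your.looker.com"},
    }
)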
Creating a DAG

In Airflow, a DAG, or Directed Acyclic Graph, is a collection of all the tasks you want to run, organized in a way that reflects their relationships and dependencies. Once you have set up your connection to Looker, you’re ready to create a DAG for PDTs. To do this, follow the same steps that you would for any other DAG you might create in Airflow. A complete example DAG can be found at the end of this article.

Each task in your DAG will be responsible for specific PDT builds using two new Looker operators:

- LookerStartPdtBuildOperator initiates materialization for a PDT based on a specified model name and view name and returns the materialization ID.
- LookerCheckPdtBuildSensor checks the status of a PDT build based on a provided materialization ID for the PDT build job.

Using these operators, there are two different types of tasks that you might create: synchronous and asynchronous. Let’s take a closer look at how each would be set up.

Synchronous Tasks

Synchronous mode is the default mode for tasks. In synchronous tasks, the LookerStartPdtBuildOperator is used to both start the task and check its status, so it is the only operator that you will need to use inside your PDT build task. To use this operator, you simply need to provide the model name and view name for the PDT that you want to build. These are the only required parameters, but there are a few additional optional parameters that you can include:

- wait_time: Specifies the number of seconds to wait in between status checks for the materialization job. The default is 10 seconds, but this parameter enables you to set a custom interval.
- wait_timeout: Specifies a build timeout in seconds. For example, to force a build to time out after one hour, you would set wait_timeout to 3600.
- query_params: Further specifies the materialization build job by including additional settings specific to the PDT build:
  - force_rebuild: Can be set to True or False. When set to True, this forces the rebuild of the specified PDT in your task plus any other PDTs that it depends on, even if they are already materialized. If this parameter is not specified, it defaults to False.
  - force_full_incremental: Can be set to True or False. When set to True, this forces an incremental PDT to fully rebuild (as opposed to just appending a new increment). If this parameter is not specified, it defaults to False.
  - workspace: A string value that specifies the workspace in which the PDT should be materialized, either dev or production. If unset, this value defaults to production.

Here’s example code for a synchronous PDT build task:

build_pdt_task = LookerStartPdtBuildOperator(
    task_id='build_pdt_task',
    looker_conn_id='your_conn_id',
    model='your_lookml_model',
    view='your_lookml_view',
    wait_time=30,
    wait_timeout=3600,
    query_params={
        "force_rebuild": True,
        "force_full_incremental": True,
        "workspace": "dev",
    },
)

Once started, the build task will block DAG execution until the build is finished, meaning that the task will run until it succeeds, errors, or is canceled.

Asynchronous Tasks

In asynchronous mode, a PDT build task is separated into start and status tasks. Separating the start and status tasks in this way can be useful when submitting long-running PDT build jobs. The LookerStartPdtBuildOperator is used to start the build task and should be set up as outlined above. The only difference is that you’ll include the following additional parameter within the task:

start_pdt_task_async = LookerStartPdtBuildOperator(
    …
    asynchronous=True,
    …
)

You will then need to include a separate Airflow sensor, which can be created using the LookerCheckPdtBuildSensor operator. This operator is a custom sensor task that can be used to check the status of a PDT build. When the DAG is executing, the start task will finish immediately and pass execution over to the status task. The only required parameter for the LookerCheckPdtBuildSensor is the materialization_id for the PDT build job, which is an output of the start task. You can also, optionally, include a custom poke_interval, which sets the interval for checking the status of the job (in seconds). The default poke_interval is 60 seconds (1 minute).

Here’s example code for a status task:

check_pdt_task_async_sensor = LookerCheckPdtBuildSensor(
    task_id='check_pdt_task_async_sensor',
    looker_conn_id='your_conn_id',
    materialization_id=start_pdt_task_async.output,
    poke_interval=10,  # optional, poke every 10 sec
)

Within Airflow, you would chain these tasks together and specify the relationship between them within your DAG:

start_pdt_task_async >> check_pdt_task_async_sensor

Just like with synchronous mode, once running, the status task will block DAG execution until the build is finished, meaning that the task will run until it succeeds, errors, or is canceled.

Canceling a PDT Build

Both the LookerStartPdtBuildOperator and the LookerCheckPdtBuildSensor support PDT build cancellation, and no separate operator is required. Either type of task can be canceled by manually marking it as Failed or Success in Airflow, which triggers an API call back to Looker to cancel the build. This results in the cancellation of the PDT materialization in Looker and within your database (if supported).

Complete DAG Example

Below is a complete example of how you might set up a DAG in Airflow for managing Looker PDT builds. If using Airflow, DAGs should be placed within the dags folder inside the file structure of your Airflow implementation.
If using Cloud Composer, DAGs should be placed inside the DAGs folder for your environment, which can be found on the Environment Details page.

from datetime import datetime

from airflow import models
from airflow.providers.google.cloud.operators.looker import LookerStartPdtBuildOperator
from airflow.providers.google.cloud.sensors.looker import LookerCheckPdtBuildSensor

with models.DAG(
    dag_id='looker_pdt_build',
    schedule_interval='0 0 * * *',
    start_date=datetime(2022, 1, 1),
    catchup=False,
) as dag:

    # Start PDT build in asynchronous mode
    start_pdt_task_async = LookerStartPdtBuildOperator(
        task_id='start_pdt_build_async',
        looker_conn_id='my_connection_name',
        model='my_model_name',
        view='my_view_name',
        asynchronous=True,
    )

    # Poll the build status until it succeeds, errors, or is canceled
    check_pdt_task_async_sensor = LookerCheckPdtBuildSensor(
        task_id='check_pdt_async',
        looker_conn_id='my_connection_name',
        materialization_id=start_pdt_task_async.output,
        poke_interval=10,  # optional, poke every 10 sec (default is 1 minute)
    )

    start_pdt_task_async >> check_pdt_task_async_sensor

This new integration with Apache Airflow creates a pathway for scaling PDT management and maintenance. Not only can you implement automated processes to govern PDT rebuilds, but you can also orchestrate these data transformations alongside your other ETL and ELT workflows. These operators are available both in Apache Airflow and in Cloud Composer, Google Cloud’s managed workflow orchestration solution built on top of Apache Airflow.

Related products: Looker News & Announcements