Looker set-up and management
I’m using the Looker History and API to automatically delete unused Looks. Ideally, I’d like to delete Looks that have not been run for 6 months, but it seems like the Looker History only goes back 90 days or so. I imagine that extending back 6 months might result in Big Data, so maybe instead, create a last_run field on Looks that is updated every time a Look is run?
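In the meantime, the "stale Look" logic can be separated from the API plumbing. Below is a minimal sketch: a pure helper that picks out Look IDs whose last recorded run is older than a cutoff, plus a comment showing roughly how the official `looker_sdk` Python package could be used around it. The field names and the exact shape of the history data are assumptions, not verified against any particular instance.

```python
from datetime import datetime, timedelta

def stale_look_ids(look_last_run, max_age_days=180):
    """Return sorted IDs of Looks whose last run is older than max_age_days.

    look_last_run: dict mapping Look id -> datetime of last run, where
    None means no run appears in the history window at all.
    """
    cutoff = datetime.utcnow() - timedelta(days=max_age_days)
    return sorted(
        look_id
        for look_id, last_run in look_last_run.items()
        if last_run is None or last_run < cutoff
    )

# Hedged sketch of the API side (assumes the official looker_sdk package;
# you would populate look_last_run from a system__activity history query):
#
#   sdk = looker_sdk.init40()
#   for look_id in stale_look_ids(look_last_run):
#       sdk.delete_look(look_id)
```

Because the history only reaches back ~90 days, anything with no run in the visible window has to be treated as "possibly stale" rather than "definitely unused", which is exactly the gap a `last_run` field on Looks would close.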
Hello, I’m working for a large organization where we have lots of individuals who are going to be onboarding as explorers and dashboard users. Our team would like to be able to view metrics that show the amount of uptake. Looker does have the metrics below, but they can only be viewed if you are a developer. Is there any way that I can schedule the reports below, or turn them into their own Looks with drill-down functionality?
Hi, I’m looking to reorganize our Looker UI (Model, Explore and Look Names) and wanted to see if anyone had any macro-level thoughts on how they’ve done this at their company? I understand each user has a specific use for their instance, but generally, do any ideas come to mind with respect to driving increased usage through a simple, yet sophisticated organization of Looker. For example, maybe some have organized by business department (Finance, Customer Success, Business Development, etc.), while others are organizing by more specific topics (Billing, CSAT/NPS, Mobile, etc.). I’m open to all suggestions - Thank you!
Hey Abby, really looking forward to this (as you know)! I’m reading through the notes now and was wondering if, in a later release, it would be possible to split the “Manage Access, Edit” permission into two permissions: one allowing users to add, remove, and change Looks, Dashboards, and Spaces, and a second to add or remove users. The problem I’m seeing now is that we want to give our “Explore users” Manage Access so they can create dashboards and Looks, but they shouldn’t be allowed to decide who has access to the space; that we’d like to restrict to the Admin user. Thoughts on this? Does that make sense?
Hi all, we’re investigating creating some general performance requirements for our models to help benchmark and measure improvements. Does anyone have suggestions for approaches that worked well or were well suited to Looker? At the moment I’m thinking about setting requirements for various percentiles, e.g.: 50% of queries faster than x seconds, 75% faster than y seconds, 90% faster than z seconds. Thanks, Jake
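One way to turn runtimes pulled from query history into those percentile benchmarks is a nearest-rank percentile calculation. This is a generic sketch (the idea of sourcing runtimes from `system__activity` history is an assumption about your setup, not something Looker requires):

```python
def runtime_percentiles(runtimes_seconds, percentiles=(50, 75, 90)):
    """Nearest-rank percentiles of a list of query runtimes in seconds.

    Returns a dict like {50: ..., 75: ..., 90: ...} so each value can be
    compared against a target, e.g. "p90 must be under z seconds".
    """
    if not runtimes_seconds:
        raise ValueError("no runtimes given")
    ordered = sorted(runtimes_seconds)
    n = len(ordered)
    result = {}
    for p in percentiles:
        # Nearest-rank definition: rank = ceil(p/100 * n), 1-indexed.
        rank = max(1, -(-p * n // 100))
        result[p] = ordered[rank - 1]
    return result
```

Nearest-rank is deliberately simple and always returns an observed runtime; if you prefer interpolated percentiles, the standard library's `statistics.quantiles` is an alternative.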
Hi all, we use EBK (Elasticsearch, Beats + Kibana) and want to have the Looker logs in there as well. Since you can’t change the output format of Looker logs, here is a small script to wrap the logs into a JSON structure. https://gist.github.com/sisu-frank-kutzey/02d54375ae3aed6d393701ab9cbdf8c0 I’m pretty sure there are edge cases it misses, since I don’t know all the Looker log formats, but it has worked pretty well so far. FYI: @maxcorbin
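The core of such a wrapper can be sketched in a few lines. The regex below is illustrative only: it assumes a line that starts with a timestamp followed by a bracketed severity, which is not guaranteed to match every Looker log line, so unparseable lines (stack traces, continuations) are shipped verbatim rather than dropped.

```python
import json
import re

# Illustrative pattern: timestamp, bracketed severity, then the message.
# Real Looker log lines vary; treat this as a starting point, not a spec.
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"\[(?P<level>\w+)[^\]]*\]\s*"
    r"(?P<msg>.*)$"
)

def wrap_log_line(line):
    """Wrap one log line into a JSON document for shipping to Elasticsearch."""
    m = LINE_RE.match(line)
    if m:
        doc = m.groupdict()
    else:
        # Keep unparseable lines verbatim so nothing is lost.
        doc = {"msg": line.rstrip("\n")}
    return json.dumps(doc)
```

Keeping the fallback branch lossless is the important design choice: a log shipper that silently drops lines it cannot parse is worse than one that forwards them untagged.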
Oh joy. Who doesn’t love scheduling highly important reports to customers/prospects/team members/mothers/friends, just to find that they’re being unsubscribed from every day? I know I sure do. I also love tracking these events down so that I can call my mom and remind her how important it is for her to see the daily report on the top used search keywords on IMDb’s website. How do I track these? Quite simply, actually. I utilize the system__activity model, which is what powers the usage panel and, in short, is a LookML model included in the product that allows Admins to sift through Looker’s internal database. I essentially use the two following Looks to follow unsubscribe events, and who grew tired of my souper shweet datas. Quick note: the following URLs are query URLs. They’re meant to be tacked on to the end of your host…don’t worry, I’ll provide examples. The (near) magic keys. Disclaimer: I will not be posting example result sets due to the nature of the data…you know, I don’t think yo
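A query URL of the kind described above is just the host plus an `/explore/<model>/<explore>` path and query-string parameters. Here is a small, hedged helper that builds one; the URL shape matches Looker's expanded explore URLs, but the specific explore and field names in the usage example below are illustrative assumptions, not guaranteed to exist on your instance.

```python
from urllib.parse import urlencode

def explore_url(host, model, explore, fields, filters=None):
    """Build a Looker expanded explore URL of the form
    https://<host>/explore/<model>/<explore>?fields=...&f[field]=value
    """
    params = [("fields", ",".join(fields))]
    for field, value in (filters or {}).items():
        # Filters use the f[view.field]=value query-string convention.
        params.append((f"f[{field}]", value))
    return f"https://{host}/explore/{model}/{explore}?{urlencode(params)}"

# Hypothetical usage against system__activity (field names are assumptions):
#   explore_url("looker.example.com", "system__activity", "scheduled_plan",
#               ["scheduled_plan.id", "user.email"])
```

Note that `urlencode` percent-encodes the brackets and commas, which browsers and Looker both accept.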
# Overview

Looker offers the ability to upgrade on Marketplace deployments. Currently we are listed on the following Marketplaces:

- AWS
- Azure
- Google Cloud (coming soon!)

The customer will be notified by the Marketplace seller when an upgrade is available. An upgrade requires spinning up a completely new machine image, so there are a few steps involved in migrating prior work to the new image.

# Application Migration

Simply copy over the Looker user’s home directory (including all normal and hidden subdirectories) from the original AMI to the upgraded AMI. This is basically the same as the documentation available for Creating a Backup, except that you also need to copy over the looker/models directory as well.
This article serves as a guideline for migrating from Java 7 to Java 8. While we have previously updated this post to state the minimum required Java version, it is not intended to serve as a source of truth for system configuration requirements. For the most up-to-date minimum requirements, please refer to our Installation documentation page. We do not plan on further updating Java requirements in this document. https://looker.com/docs/setup-and-management/on-prem-install/installation

Summary

Java 8 was initially released on March 18, 2014 and brings improved performance and garbage-collection facilities, as well as other features. This, along with the fact that Java 7 is no longer receiving public updates, warrants moving our systems forward to Java 8 (see details). This move ensures our systems are using the latest supported version of Java while taking advantage of the improved garbage collector and other Java 8 features. The current Java 8 build as of this writing is Java 8 Update 92. Do I N…
Go to Admin > Labs and try to enable PDF Download when you don’t have PhantomJS installed. You get a link at the top, “You must install PhantomJS for this feature to be available.” Clicking the link is supposed to take you to https://looker.com/docs/r/viz/phantomjs-not-installed, but instead you get a 404. As a side issue, we’ve had that lab feature turned on for a while, but it got toggled off during some upgrade. (?) cf. Installing PhantomJS for PDF Download, Scheduling and Scheduled Visualizations, Administration.

Note: PhantomJS is already installed on Looker-hosted instances. These instructions only need to be followed for on-premise installations of Looker. Looker 4.0 requires PhantomJS 2.1.1+. PDF Download, Scheduling and Scheduled Visualizations is provided as a Looker Labs feature. In order to enable this feature you must install a third-party tool, PhantomJS, on the server running your Looker. Looker 4.0+ requires version 2.1.1; for Looker 3.48+ we recommen…
Note: This article has been migrated and updated. If you want to set up multiple instances, see:

- This Help Center article if you have a single repository.
- [This Help Center article](https://help.looker.com/hc/en-us/articles/360001947887-Git-Workflow-Using-One-Repository-Across-Multiple-Instances-Development-Staging-and-Production) if you are already using multiple repositories.

If you want to set up clustering, see this documentation page.
Hi Looker, although I think it’s a great idea that permissions and data access (model sets) have been split out and can be managed separately, it’s leading to a bit of a headache, given that they can’t be assigned at a user level without first creating roles. We’ve decided to have 5 permission sets and expect about 10 departments needing access. The combination means we’d need to define ~50 roles. Although this is only a bit of one-off work creating the 50 roles, it’d be much easier if we could simply add users and select the required model set and permission set each time. Any thoughts on developing this? Or otherwise, any ideas on alternative solutions? I know we can define user sets to restrict access to specific data, but this would then lead to needing to duplicate models for each department, correct?
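If the ~50 roles do have to exist, the cross product at least doesn't need to be typed by hand. A sketch of generating one role definition per (permission set, model set) pair, under the assumption that the naming scheme is just a convention you'd adjust; the SDK call mentioned in the comment is the official `looker_sdk` API, shown only in outline:

```python
def role_definitions(permission_sets, model_sets):
    """Enumerate one role per (permission set, model set) pair.

    Returns a list of plain dicts suitable for feeding into
    role-creation calls; the "<perm> - <model>" naming is arbitrary.
    """
    return [
        {
            "name": f"{perm} - {model}",
            "permission_set": perm,
            "model_set": model,
        }
        for perm in permission_sets
        for model in model_sets
    ]

# With the official Python SDK, each entry would become roughly (hedged):
#   sdk.create_role(body=models.WriteRole(name=..., ...))
```

Five permission sets times ten model sets yields the fifty combinations, so the one-off work reduces to a loop rather than fifty manual admin-panel entries.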
If you’re experiencing issues with Looker-exported PDFs not rendering Asian character sets (Chinese, Japanese, Korean), you can fix this by installing the fonts on the server running PhantomJS with the command below. For Looker-hosted instances, you’ll need to reach out to Looker Ops/Support to run this for you.

```
sudo apt-get install fonts-arphic-ukai fonts-arphic-uming fonts-ipafont-mincho fonts-ipafont-gothic fonts-unfonts-core
```
Hi, at the moment we have “validate project before commit” turned on to provide guarantees on code quality, but we share the problems raised in “Tips on improving LookML Validation performance?”. Waiting over 10 seconds between changing a small part of a view and being able to commit it is prohibitively long for the gain. A solution we’d like to move to is making validation part of pull-request merging using a CI tool, but as I understand it, this would require some programmatic access to the validator (either directly or through an API), which we currently don’t have. Would it be possible to add API access to the validator to the roadmap of a future Looker release, or even open-source the validator? Thanks, Jake
As of Looker 3.16, prefixing a field name with a period (in order to remove the view name from its display) will no longer remove that view name.

Old Behavior

Before Looker 3.16, it was possible to remove the view name from a field throughout Looker’s UI (such as the Explore page) by starting the field name with a period. For example, this LookML:

```
- view: order
  fields:
    - dimension: price
      ...
```

Would appear as: ORDER Price

Whereas this LookML:

```
- view: order
  fields:
    - dimension: .price
      ...
```

Would appear as: Price

New Options

As of Looker 3.20, you can use view_label to achieve the same behavior. For example:

```
- view: order
  fields:
    - dimension: price
      view_label: ''
```
As of Looker 3.36, when users of Amazon Redshift create a persistent derived table, the Redshift distribution style will default to ALL instead of EVEN.

Description

Amazon Redshift allows database rows to be distributed in one of three ways:

- All distribution: all rows are fully copied to each node.
- Even distribution: rows are distributed to different nodes in a round-robin fashion.
- Key distribution: rows are distributed to different nodes based on unique values within a particular column.

Previously, Looker defaulted to EVEN distribution, but it now defaults to ALL if you do not specify a style using the distribution_style parameter or the distribution parameter.

Going Back to an EVEN Distribution

If you would like to go back to using an EVEN distribution on a derived table, you can do so by using the distribution_style parameter as follows:

```
- view: customer_order_facts
  derived_table:
    sql: |
      SELECT customer_id, COUNT(*) AS lifetime_orders
      FROM …
```
Looker instances historically sent email with a “from” field of email@example.com when configured to use the default SMTP mail settings.

Email From Value Changed

In 3.32+, the default from email was changed as follows:

- old value: `email@example.com`
- new value: `Looker <email@example.com>`

This change was made to improve email deliverability. Instances created prior to 3.32 have preserved the old email address as a legacy feature, to allow customers to manage the migration to the new address.

Applying the Change

When you uncheck this option in the Legacy Features panel, the new address, `Looker <email@example.com>`, will be used. Before unchecking, ensure your users update any email filters that rely on the current “from” field of Looker emails. This setting has no effect if you are using custom SMTP settings.

End of Life

This feature will be toggled off in 3.44 and removed in 3.46.
For Looker-hosted customers, I am wondering if anyone has requested that Looker host their instance in US-West on AWS versus US-East. Given that our database is located in US-West (we host it), we should see some performance improvements by moving the Looker instance closer to the database. I wanted to hear others’ thoughts on this. It always seems like a good idea to colocate instance and database, and it would be great to have that flexibility without having to host the instance ourselves.
I’ve gone ahead and made sure all deprecations listed in the Legacy Features section of the Admin panel have been cleared up, but I’m not quite sure what to do after that. Do I check all the boxes, uncheck all the boxes, or check only some of them? Some help text indicating what we should do with them would be useful.
## Cluster Configuration

1. Go to the Google console: https://console.cloud.google.com/home/
2. Choose the menu on the left, scroll to the bottom, and select Dataproc. (Currently third from the bottom of the list.)
3. Choose “Create Cluster” and specify the size and other parameters.
4. While the cluster is being created, go to a different tab and open the Google console there. Go to the menu and choose Compute Engine, then Metadata. Choose the SSH Keys setting and upload an SSH public key.
5. Once the cluster is running, go to the Compute Engine settings for the cluster master node. It should be named <cluster-name>-m.
6. Under the Network Settings for the VM, choose the network, likely “default”. Click the network connection to add firewall rules.
7. First, add a firewall rule to allow incoming SSH connections. There may already be one. The rule should be named something like “default-allow-ssh” and it should allow incoming connections from any IP to tcp:22.
8. Add firewall rules to allow incoming connec…