This content, written by Frank Bien, was initially posted in Looker Blog on Aug 6, 2013. The content is subject to limited support.
This week Looker is having fun at the HP Vertica Big Data Conference in Boston. For those of you who know Looker, you know we're a bit contrarian — we run in-database; we have a modeling language; we promote wild user curiosity. But we're learning this week that there's nothing contrarian about letting folks capitalize on the giant investments they've made in back-end data infrastructure over the last five years.
"New" databases like (or Pivotal, Hana, Netezza, Aster, Redshift, etc.) are really fast. And as compute continues to get faster, denser, and provide larger memory footprints, these analytic beasts are getting even more impressive. When I was at Greenplum back in the day, we always knew workloads would move from storing enormous amounts of data to actually analyzing that data — rapidly, by many users, with giant workloads.
But the dirty little secret was that there weren't any tools that let people operate directly on the data in any meaningful way. Sure, BI tools had JDBC connectors, but they weren't designed to do the kind of transformation and modeling on-the-fly that these analytic machines were capable of. So, analysts went back to hand-coding SQL.
Our founder, Lloyd, entered the Big Data HAWKathon at the show. We won a great prize and got a lot of kudos for showing the value of Vertica. We love kudos. But we love data even more. We try not to overuse the term "Big Data" here — but Looker is really good at letting users move quickly from high-level views all the way down to extreme detail — and that's what Big Data is all about.
What we've learned here at the conference is the importance of finishing your sentence... putting icing on the cake... going the last mile. When you match a platform like Vertica with a tool like Looker, the value quickly emerges — and the solution becomes complete.