Improve OpenAPI spec + documentation for responses

joshtemple
Participant II

Hey Looker!

I’m one of the maintainers of Spectacles, a popular open-source tool for testing LookML for SQL, content, and data test errors.

We’re major users of the Looker API and have been using it for a long time with hundreds of Looker customers.

We’d love to use the OpenAPI spec that the Looker SDKs are generated from to enforce better type safety in Spectacles. However, most of the API endpoint responses are generic and under-specified in the OpenAPI spec.

For example, here’s one we use all the time to get query results: https://developers.looker.com/api/explorer/3.1/methods/Query/query_task_results

If you look at the spec, under “responses”, all that’s specified is a JSON string, which doesn’t give us any information about what fields we can expect in the response.

"responses": {
"200": {
"description": "The query results.",
"content": {
"text": {
"schema": {
"type": "string"
}
},
"application/json": {
"schema": {
"type": "string"
}
}
}
}
}
}

In practice, this lack of clarity has caused issues for our library, because we have to handle every response shape we've encountered over the years. It's also difficult to develop against this endpoint without testing it live, because we can never remember which fields are returned.
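To give a rough sense of the problem (these payloads are invented placeholders, not the documented contract), the body we get back while a query task is still running is shaped nothing like the body we get once the task has finished, so our client code ends up branching on whichever keys happen to be present. A response for an in-progress task might look something like:

{
  "status": "running"
}

while a finished task might return something more like:

{
  "status": "complete",
  "data": ["...the query results in whatever format was requested..."]
}

None of that structure is captured in the spec today.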

We would love it if Looker could take the time to improve the spec for popular endpoints (we’re particularly interested in things like Get Query Multi Results, Content Validator, and Run LookML Tests).

Most APIs I’ve worked with in the past document the responses as well as the requests; it’s important that developers know what to expect from an endpoint.
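To make that concrete, even a rough object schema in place of the bare string would let the SDK generators (and tools like ours) produce real types. Purely as a sketch, with placeholder property names rather than a claim about the actual fields, something like this would already be a big improvement:

"responses": {
  "200": {
    "description": "The query results.",
    "content": {
      "application/json": {
        "schema": {
          "type": "object",
          "properties": {
            "status": {
              "type": "string",
              "description": "Placeholder: whatever status field the endpoint actually returns"
            },
            "data": {
              "type": "array",
              "items": { "type": "object" },
              "description": "Placeholder: the result rows"
            }
          }
        }
      }
    }
  }
}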

1 REPLY

Hey Josh! A couple of people from our API/Dev Tools team reviewed the options we have and asked me to provide you with an update.

Our API spec is programmatically constructed from static analysis of our controller & class code. When endpoints like these are underspecified, it’s usually because the signatures of the relevant methods are dynamic or only determined at runtime.

Also, this means that the construction of the API spec is not handled fully centrally by one team, but by many teams that own many different parts of the code base. So, improvements in this area are likely to come either one-by-one in an uncoordinated way, OR as a part of a large cross-team initiative. All this is not to say that it won’t happen - for example, right now we are working on one such initiative to improve the information returned in error messages across the API. But, I did want to help set expectations about the near term!