I have gotten questions about writing PDTs to a separate BigQuery project while still being able to query tables from a different project, all on the same connection. While this is not currently supported, there is a potential workaround you can use, though to be clear it is not the most robust solution. It can be useful for security purposes, such as restricting permissions on one BigQuery project, or if you have a general need to write to another project.
First, we want to set up two projects in BigQuery. For our purposes, we will call them project_one and project_two. Our source data is located in project_one (I used UFO sighting data), but we want to write our PDTs to project_two. We want to give the service account the required permissions on both projects. For my testing purposes, I made it an owner on both projects, but feel free to experiment with narrower permissions. Then, we want to follow the steps to create a looker_scratch dataset in project_two to write the PDTs to.
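The setup steps above can be sketched with the gcloud and bq command-line tools. The service account email here is a placeholder; substitute your own, and note that Owner is simply what I used for testing, so a narrower BigQuery role may be enough:

```shell
# Grant the Looker service account access to both projects.
# (looker@project_one.iam.gserviceaccount.com is a hypothetical account name.)
gcloud projects add-iam-policy-binding project_one \
  --member="serviceAccount:looker@project_one.iam.gserviceaccount.com" \
  --role="roles/owner"

gcloud projects add-iam-policy-binding project_two \
  --member="serviceAccount:looker@project_one.iam.gserviceaccount.com" \
  --role="roles/owner"

# Create the scratch dataset for PDTs in project_two.
bq mk --dataset project_two:looker_scratch
```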
Now we set up the connection to point at project_two (with that project ID in the Project Name field). We use the normal service account credentials, but in the Dataset field, we reference the dataset in project_one. In the temp dataset field, we reference looker_scratch, which we created in project_two.
Now, in the LookML for the tables we want to read from, we need to fully scope the project, dataset, and table from project_one.
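For example, a view reading from the source project might look like the sketch below. The view, dataset, and field names are placeholders (`ufo_sightings` stands in for whatever your source table is):

```lookml
view: ufo_sightings {
  # Fully scoped: project, dataset, and table. The backticks are
  # BigQuery standard SQL quoting for table references.
  sql_table_name: `project_one.source_data.ufo_sightings` ;;

  dimension: state {
    type: string
    sql: ${TABLE}.state ;;
  }
}
```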
When we define derived tables, we also want to fully scope the project.dataset.table we want to query.
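A persistent derived table is scoped the same way. Again, the names here are placeholders, and `sql_trigger_value` is just one persistence strategy:

```lookml
view: sightings_by_state {
  derived_table: {
    sql:
      SELECT state, COUNT(*) AS sighting_count
      FROM `project_one.source_data.ufo_sightings`
      GROUP BY state ;;
    # Any persistence parameter (sql_trigger_value, datagroup_trigger,
    # persist_for) makes this a PDT, which gets written to the
    # looker_scratch dataset in project_two.
    sql_trigger_value: SELECT CURRENT_DATE() ;;
  }

  dimension: state {
    type: string
    sql: ${TABLE}.state ;;
  }

  dimension: sighting_count {
    type: number
    sql: ${TABLE}.sighting_count ;;
  }
}
```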
If we query the PDT, though, we will see that its hashed table name is scoped to our second project, since that is where we write our PDTs.
We can also verify this by going into BigQuery and looking at the looker_scratch dataset.
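If you prefer the command line, the same check can be done with the bq tool (assuming the dataset name used above):

```shell
# List the tables Looker has written into the scratch dataset in project_two.
bq ls project_two:looker_scratch
```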
Again, this is not officially supported yet, but it may give you some ideas on how to work around certain permission restrictions, or it may help if your team is required to write PDTs to a separate project.