Cloud Run container time limitation

Hi Everyone,

We are in discussion about moving our on-prem solution to GCP using Cloud Functions and Cloud Run as serverless containers.

My query is: is there any time limitation when we run our applications inside a Cloud Run container? Will it be terminated automatically after some specific time period?

Basically, here is what we are trying to do. Our applications are based on Python and we want to process some files. As per our solution, we will use one GCS bucket where the raw files will land. Using Cloud Functions, the file will be detected and Cloud Run will be triggered via an HTTP invoke or Pub/Sub. Once the Cloud Run container spawns, it will first process the raw file into the required format, then add encryption to the processed file, and then push it to BigQuery. This whole process can take some time depending on the file size. Does Cloud Run terminate the running container if it runs for more than 15 or 60 minutes?
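For reference, here is a rough sketch of what the Cloud Run service would look like, assuming the file event arrives via a Pub/Sub push subscription; transform_file, encrypt_bytes, and the bucket/table names are just placeholders for our own logic:

import base64
import json

from flask import Flask, request
from google.cloud import bigquery, storage

app = Flask(__name__)
storage_client = storage.Client()
bq_client = bigquery.Client()


def transform_file(raw: bytes) -> list[dict]:
    """Placeholder: parse the raw file into BigQuery rows."""
    return [{"payload": raw.decode(errors="replace")}]


def encrypt_bytes(raw: bytes) -> bytes:
    """Placeholder: apply our encryption to the file."""
    return raw


@app.route("/", methods=["POST"])
def handle_push():
    # Pub/Sub push envelope: the GCS notification payload is base64-encoded
    # JSON containing the bucket and object name.
    envelope = request.get_json()
    payload = json.loads(base64.b64decode(envelope["message"]["data"]))

    # 1. Download the raw file that landed in the bucket.
    blob = storage_client.bucket(payload["bucket"]).blob(payload["name"])
    raw = blob.download_as_bytes()

    # 2. Process to the required format, then encrypt.
    rows = transform_file(raw)
    encrypted = encrypt_bytes(raw)

    # 3. Store the encrypted copy and push the rows to BigQuery
    #    (bucket and table names are placeholders).
    storage_client.bucket("processed-bucket").blob(
        payload["name"] + ".enc"
    ).upload_from_string(encrypted)
    errors = bq_client.insert_rows_json("my_dataset.processed_files", rows)
    return ("BigQuery insert failed", 500) if errors else ("", 204)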


Hi! Your Cloud Run container will rapidly scale out to serve requests, up to the request timeout you set (up to 60 min). If you need to process long-running jobs that take more than 60 min, you can consider Cloud Run jobs, where you can start independent containers in parallel or run the workload inside the container until it is complete. For example, you can stack four 60-min tasks in Cloud Run jobs for a 4-hour job, and it will run until completion.
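If it helps, here's a minimal sketch of that pattern, assuming a Python container: Cloud Run jobs set CLOUD_RUN_TASK_INDEX and CLOUD_RUN_TASK_COUNT in each task's environment, so each task can claim its own slice of the work. list_work_items and process are placeholders for your own logic.

import os


def list_work_items() -> list[str]:
    """Placeholder: e.g. list the raw files waiting in the GCS bucket."""
    return [f"raw/file-{i}.csv" for i in range(100)]


def process(item: str) -> None:
    """Placeholder: transform, encrypt, and load one file."""
    print(f"processing {item}")


def main() -> None:
    task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
    task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", 1))

    # Each task handles every task_count-th item, offset by its own index,
    # so four 60-min tasks together cover a job too long for one request.
    for item in list_work_items()[task_index::task_count]:
        process(item)


if __name__ == "__main__":
    main()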

Hi, does this mean that the job has to be broken into four 1-hour sections? Or is it possible to have one container running for 4 hours?

The use case I have is that I want to run a container for a few hours (from start to completion) with minimal infrastructure setup.

Thanks!

Hi, yes, today, you need to break up your job into multiple segments in order to run for more than 1 hour. We're working on longer running times. 

Thank you. Does that mean that GKE Autopilot is the way to go? Or perhaps Batch? What would be my best option for an auto-scaling, serverless, container-based workflow with the minimum possible setup/configuration required?

The specific use case is very unpredictable requests for long-running jobs (>3 hours), where each job is likely to be a different binary/container.

Hi, do you have any estimate of when we can expect longer running times for Cloud Run services?
Thanks.

Both of those are good options in the meantime; Batch might be a better fit if you're looking for a serverless solution.
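In case it's useful, here's a rough sketch of submitting a long-running container to Batch with the google-cloud-batch Python client; the project, region, image URI, machine type, and the 4-hour max_run_duration are placeholders for your own values.

from google.cloud import batch_v1


def submit_batch_job(project_id: str, region: str, job_name: str) -> batch_v1.Job:
    client = batch_v1.BatchServiceClient()

    # The container to run (image URI is a placeholder).
    runnable = batch_v1.Runnable()
    runnable.container = batch_v1.Runnable.Container()
    runnable.container.image_uri = "us-docker.pkg.dev/my-project/my-repo/long-job:latest"

    # One task, allowed to run for up to 4 hours.
    task = batch_v1.TaskSpec()
    task.runnables = [runnable]
    task.max_run_duration = "14400s"

    group = batch_v1.TaskGroup()
    group.task_count = 1
    group.task_spec = task

    # Machine type the task should run on (placeholder).
    policy = batch_v1.AllocationPolicy.InstancePolicy()
    policy.machine_type = "e2-standard-4"
    instances = batch_v1.AllocationPolicy.InstancePolicyOrTemplate()
    instances.policy = policy
    allocation_policy = batch_v1.AllocationPolicy()
    allocation_policy.instances = [instances]

    job = batch_v1.Job()
    job.task_groups = [group]
    job.allocation_policy = allocation_policy
    job.logs_policy = batch_v1.LogsPolicy()
    job.logs_policy.destination = batch_v1.LogsPolicy.Destination.CLOUD_LOGGING

    request = batch_v1.CreateJobRequest()
    request.parent = f"projects/{project_id}/locations/{region}"
    request.job_id = job_name
    request.job = job
    return client.create_job(request)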