Is there a way to check whether the model is running?

Hi.

I’m currently building a multi-GPU pipeline system.

My PC has two different GPUs: a 2080 Ti and a 1050 Ti.

Let’s call the 2080 Ti the Smart one and the 1050 Ti the Dumb one.

The Smart one :laughing: is faster than the Dumb one :sweat_smile: and almost always finishes its job before the Dumb one does.

Then the Smart one :laughing: starts to think: “If I’m finished and the Dumb one :sweat_smile: still isn’t, maybe I can run again!”

But in that case, the Smart one :laughing: needs to check the Dumb one’s :sweat_smile: status, e.g. RUNNING or IDLE.

One solution would be to use multiprocessing with a shared data queue: whichever process finishes first takes the next element from the queue.
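A minimal sketch of that shared-queue idea, shown with threads for brevity (the same pattern works with `multiprocessing.Process` and `multiprocessing.Queue`); the worker names and the `item * 2` stand-in for model inference are made up:

```python
import queue
import threading

def worker(name, tasks, results):
    # Pull jobs until a sentinel (None) is seen; whichever worker
    # is free takes the next element, so the fast GPU naturally
    # ends up processing more of the queue.
    while True:
        item = tasks.get()
        if item is None:
            break
        results.put((name, item * 2))  # stand-in for running the model

tasks, results = queue.Queue(), queue.Queue()
for i in range(10):
    tasks.put(i)
for _ in range(2):              # one sentinel per worker
    tasks.put(None)

workers = [threading.Thread(target=worker, args=(n, tasks, results))
           for n in ("gpu_2080ti", "gpu_1050ti")]
for t in workers:
    t.start()
for t in workers:
    t.join()

outputs = sorted(results.get()[1] for _ in range(10))
```

With this strategy neither worker ever needs to ask about the other’s status; the queue itself balances the load.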

Another solution would be to profile each GPU’s performance and predict its running time.

But a strategy of simply checking each model and re-running whichever one has finished would also be nice…
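If no built-in status flag exists, one fallback would be to track the state externally by wrapping the run call; `TrackedRunner`, `run_async`, and `is_running` are made-up names here, and the wrapped callable stands in for the model’s `run`:

```python
import threading

class TrackedRunner:
    # Wraps a run() callable and records whether a call is currently
    # in flight, giving the RUNNING/IDLE flag described above.
    def __init__(self, run_fn):
        self._run_fn = run_fn
        self._busy = threading.Event()

    def run_async(self):
        # Mark busy before launching so is_running() is immediately True.
        self._busy.set()
        def target():
            try:
                self._run_fn()
            finally:
                self._busy.clear()   # cleared even if run_fn raises
        t = threading.Thread(target=target)
        t.start()
        return t

    def is_running(self):
        return self._busy.is_set()
```

The fast GPU’s loop could then poll `slow.is_running()` and take over the next queue element whenever it reads `False`.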

So, is there a way to check a TVM GraphModule’s status that shows whether the model is currently running or not?

Thanks in advance.