Can we schedule some ops on the CPU and others on the GPU while running inference in TVM?

Hi experts,

I have just started looking into the TVM framework and am exploring how to get the best latency numbers out of it.

As part of this, I wanted to know: is there any way a user can attach device info per op, so that some ops run on the CPU and others on the GPU? Also, can a user create multiple graphs (for example, one for an object-detection model and another for a classification model) and schedule them together in one application using TVM?

Thanks and Regards, Raju