Layers not supported by the CUDA backend

What happens when layers are not supported by the CUDA backend?

Please describe your problem in more detail, and attach error screenshots to help us understand the problem.

Sorry for the very naive question. I am trying to understand TVM's execution flow for different backends. For example, when DNNL is the backend, unsupported layers are lowered to LLVM IR and executed on the CPU, while supported layers are executed through the DNNL libraries. For the CUDA case, TVM generates host modules and device modules. I don't know whether the host modules are LLVM IR or not, but the device modules are mapped to CUDA kernel launches. What will the generated modules look like if a layer is not supported by CUDA? Will it be executed on the CPU as LLVM IR, or does it return an error?
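
For context, this is roughly how I build and inspect the two kinds of modules (a minimal sketch assuming a recent TVM release; the ResNet workload is just a placeholder and the exact API names may differ between versions):

```python
import tvm
from tvm import relay
from tvm.relay import testing

# Placeholder network just for illustration; any Relay module would do.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Build for CUDA with LLVM as the host target.
target = tvm.target.Target("cuda", host="llvm")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# The top-level module holds the host code; device code is imported into it.
host_mod = lib.get_lib()
print(host_mod.type_key)            # e.g. "llvm" (host side)
for dev_mod in host_mod.imported_modules:
    print(dev_mod.type_key)         # e.g. "cuda" (device side)
    print(dev_mod.get_source()[:500])  # generated CUDA kernel source
```

My understanding is that the module returned by `get_lib()` is the host side and `imported_modules` holds the device-side CUDA kernels; please correct me if that is wrong.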