Modifying weight parameters in OpStrategy

I’m working on improving the current sparse_dense kernels on the GPU. I have a performant implementation, but it requires modifying the input sparse matrix to make it more amenable to the GPU (I have been doing this modification at compile time, as the input sparse matrix is always a static weight). I have tried adding my implementation as a new strategy implementation (via OpStrategy.add_implementation), but I cannot get it to work because I need to modify the input matrix. Is there a way to modify the inputs like this? If not, do I need to write a new operator that uses my kernel and then write a new pass that replaces existing sparse_dense operations with mine?
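For context, here is roughly what my attempt looks like — a minimal sketch, where `my_sparse_dense_compute` and `my_schedule` are placeholders for my custom TOPI compute/schedule pair. The problem is visible in the signature: the strategy callback only receives the inputs as tensor placeholders, so there is no handle for rewriting the constant weight data from inside the strategy.

```python
from tvm import relay
from tvm.relay.op.strategy.generic import (
    sparse_dense_strategy,
    wrap_compute_sparse_dense,
    wrap_topi_schedule,
)

# Sketch: register a custom sparse_dense strategy for CUDA targets.
# `my_sparse_dense_compute` / `my_schedule` stand in for my kernel.
@sparse_dense_strategy.register(["cuda", "gpu"])
def sparse_dense_strategy_cuda(attrs, inputs, out_type, target):
    strategy = relay.op.OpStrategy()
    strategy.add_implementation(
        wrap_compute_sparse_dense(my_sparse_dense_compute),
        wrap_topi_schedule(my_schedule),
        name="sparse_dense.mygpu",
    )
    # `inputs` here are te.Tensor placeholders, not the constant weight
    # values, so the sparse matrix cannot be repacked at this point.
    return strategy
```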

@haichen

In this case, you can do it in the alter op layout hook. You can check out the example from https://github.com/apache/incubator-tvm/blob/master/python/tvm/topi/x86/conv2d_alter_op.py#L40.
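Adapted to sparse_dense, that suggestion would look roughly like the sketch below. This is a hedged sketch, not an existing TVM registration: `repack_for_gpu` is a placeholder for the compile-time transformation, and it assumes the weight arrays are already bound as relay.Constant nodes by the time the AlterOpLayout pass runs.

```python
from tvm import relay

# Sketch of the suggested approach, following the conv2d_alter_op.py pattern.
# `repack_for_gpu` is a placeholder for the GPU-friendly repacking; it is
# assumed to return arrays that sparse_dense still accepts.
@relay.op.register_alter_op_layout("nn.sparse_dense")
def _alter_sparse_dense_layout(attrs, inputs, tinfos, out_type):
    data, w_data, w_indices, w_indptr = inputs
    # Only rewrite when the weight is bound to compile-time constants.
    if not all(isinstance(w, relay.Constant) for w in (w_data, w_indices, w_indptr)):
        return None  # fall back to the default lowering
    new_d, new_i, new_p = repack_for_gpu(
        w_data.data.asnumpy(), w_indices.data.asnumpy(), w_indptr.data.asnumpy()
    )
    return relay.nn.sparse_dense(
        data, (relay.const(new_d), relay.const(new_i), relay.const(new_p))
    )
```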

@haichen, I’m trying this, but my alter_op_layout hook is never called because the layouts appear to be invalid. Specifically, these lines https://github.com/apache/incubator-tvm/blob/master/src/relay/transforms/transform_layout.h#L285-L286 always return success=false. I tried registering ElemwiseArbitraryLayout as the FInferCorrectLayout for sparse_dense, but that did not fix the problem. Is there a way to register the correct layout (which I assume is just the identity?), or a way to skip this layout business altogether?
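As a possible way to skip the layout machinery entirely (an assumption on my part, not a confirmed fix), a standalone pattern-based rewrite could repack the constant weights before compilation, without ever going through AlterOpLayout. A minimal sketch, assuming the Relay dataflow pattern matcher and reusing the placeholder `repack_for_gpu` from above:

```python
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, rewrite, wildcard

# Sketch: find nn.sparse_dense calls and repack their constant CSR weights.
class SparseDenseRepack(DFPatternCallback):
    def __init__(self):
        super().__init__()
        self.data = wildcard()
        self.w_data = wildcard()
        self.w_indices = wildcard()
        self.w_indptr = wildcard()
        self.pattern = is_op("nn.sparse_dense")(
            self.data, self.w_data, self.w_indices, self.w_indptr
        )

    def callback(self, pre, post, node_map):
        weights = [node_map[w][0] for w in (self.w_data, self.w_indices, self.w_indptr)]
        if not all(isinstance(w, relay.Constant) for w in weights):
            return post  # leave non-constant weights alone
        new_d, new_i, new_p = repack_for_gpu(*(w.data.asnumpy() for w in weights))
        return relay.nn.sparse_dense(
            node_map[self.data][0],
            (relay.const(new_d), relay.const(new_i), relay.const(new_p)),
        )

# Usage (after binding params so the weights are constants):
# mod["main"] = rewrite(SparseDenseRepack(), mod["main"])
```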