```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_runtime
from tvm.relay.testing import run_infer_type  # type-inference helper used in TVM's relay tests

def manual_test():
    data_shape = (1, 1, 2, 2)
    data_dtype = 'int32'
    data = relay.var("data", shape=data_shape, dtype=data_dtype)
    pool = relay.op.nn.avg_pool2d(data, pool_size=(2, 2))
    func = run_infer_type(pool)
    func = relay.Function(relay.analysis.free_vars(func), func)
    print(func)
    with relay.build_config(opt_level=0):
        graph, lib, params = relay.build(func, "llvm", params=None)
    mod = graph_runtime.create(graph, lib, ctx=tvm.cpu(0))
    golden_data = np.array([5, 5, 5, 5]).reshape(data_shape).astype(data_dtype)
    mod.set_input("data", golden_data)
    mod.run()
    res = mod.get_output(0).asnumpy()
    print(res)
```
The above gives output 4 instead of 5. Setting the golden_data to (1, 1, 1, 1) gives output 0.
I think we are dividing before accumulating. Since we have only tested on float32, this never showed up before. But in int32, it performs integer division.
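A quick standalone NumPy check of that hypothesis (illustrative snippet, not TVM code): with a 2x2 window of int32 fives and pool area 4, dividing each element before summing truncates 5 / 4 to 1, while summing first gives the correct answer.

```python
import numpy as np

# 2x2 window of int32 fives, divisor = pool area (4)
x = np.array([5, 5, 5, 5], dtype=np.int32)
print(np.sum(x // 4))   # 4 -- divide before accumulating (suspected kernel behavior)
print(np.sum(x) // 4)   # 5 -- accumulate, then divide (expected result)
```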
@tqchen @yzhliu
I tried to further debug this and got stuck. I changed the TOPI compute in the following manner:
```diff
-  return tvm::sum(temp(indices) / divide_factor, { dheight, dwidth });
+  auto reduced_hw = tvm::sum(temp(indices), { dheight, dwidth });
+  return topi::divide(reduced_hw, divide_factor);
+  // return tvm::sum(temp(indices) / divide_factor, { dheight, dwidth });
 };
```
I got the following error:

```
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(tvm::IRFunctor<void (tvm::NodeRef const&, tvm::ir::IRVisitor*)>::operator()(tvm::NodeRef const&, tvm::ir::IRVisitor*) const+0x162) [0x7fb3d791c47e]
[bt] (7) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(std::function<void (tvm::NodeRef const&, tvm::ir::IRVisitor*)>::operator()(tvm::NodeRef const&, tvm::ir::IRVisitor*) const+0x61) [0x7fb3d791ce35]
[bt] (6) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(std::_Function_handler<void (tvm::NodeRef const&, tvm::ir::IRVisitor*), tvm::IRFunctor<void (tvm::NodeRef const&, tvm::ir::IRVisitor*)>& tvm::IRFunctor<void (tvm::NodeRef const&, tvm::ir::IRVisitor*)>::set_dispatch<tvm::ir::Reduce>(std::function<void (tvm::ir::Reduce const*, tvm::ir::IRVisitor*)>)::{lambda(tvm::NodeRef const&, tvm::ir::IRVisitor*)#1}>::_M_invoke(std::_Any_data const&, tvm::NodeRef const&, tvm::ir::IRVisitor*&&)+0x4f) [0x7fb3d7c90d01]
[bt] (5) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(tvm::IRFunctor<void (tvm::NodeRef const&, tvm::ir::IRVisitor*)>& tvm::IRFunctor<void (tvm::NodeRef const&, tvm::ir::IRVisitor*)>::set_dispatch<tvm::ir::Reduce>(std::function<void (tvm::ir::Reduce const*, tvm::ir::IRVisitor*)>)::{lambda(tvm::NodeRef const&, tvm::ir::IRVisitor*)#1}::operator()(tvm::NodeRef const&, tvm::ir::IRVisitor*) const+0x54) [0x7fb3d7c7f376]
[bt] (4) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(std::function<void (tvm::ir::Reduce const*, tvm::ir::IRVisitor*)>::operator()(tvm::ir::Reduce const*, tvm::ir::IRVisitor*) const+0x61) [0x7fb3d7c88785]
[bt] (3) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(+0x204b3b1) [0x7fb3d7c753b1]
[bt] (2) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(+0x2046642) [0x7fb3d7c70642]
[bt] (1) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(+0x1f66570) [0x7fb3d7b90570]
[bt] (0) /home/ubuntu/workplace/t1/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x34) [0x7fb3d78729f6]
File "/home/ubuntu/workplace/t1/tvm/src/op/compute_op.cc", line 584
TVMError: Check failed: 0 == level_: Reductions are only allowed at the top level of compute. Please create another tensor for further composition.
```
Can anybody help me with this?
yzhliu (July 18, 2019, 12:44am):
I guess we can create a tvm::compute doing the sum first, like pad, then do the divide.
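A minimal sketch of that two-stage approach in the tvm.te Python API (my own illustration; the actual fix would live in the C++ TOPI pooling compute): the reduction is the entire body of the first compute, and the division happens in a second, elementwise compute.

```python
from tvm import te

n, c, h, w = 1, 1, 2, 2
kh, kw = 2, 2
data = te.placeholder((n, c, h, w), dtype="int32", name="data")
dh = te.reduce_axis((0, kh), name="dh")
dw = te.reduce_axis((0, kw), name="dw")

# Stage 1: the reduction is the top-level (whole) body of this compute.
pool_sum = te.compute(
    (n, c, h // kh, w // kw),
    lambda b, ch, i, j: te.sum(data[b, ch, i * kh + dh, j * kw + dw],
                               axis=[dh, dw]),
    name="pool_sum",
)

# Stage 2: elementwise integer division in a separate tensor,
# so the divide never wraps the Reduce node.
pool_avg = te.compute(
    pool_sum.shape,
    lambda *idx: pool_sum(*idx) // (kh * kw),
    name="pool_avg",
)
```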
Should this get a GitHub issue for fixing?
This is already fixed. I will find the commit and paste it here.
Nice. In that case, does the test in test_op_qnn_conv2d.py need to be updated?
Just saw the test. I fixed the test case but forgot to remove the comment. Nice catch!