slice works fine for fixed ranges (e.g. a[0:2]). If I use an expression instead of a particular number (e.g. a[0:L]), then relay.build fails.
Do we support torch.slice only for fixed ranges?
The code to reproduce the issue:
import torch
import tvm
from tvm import relay
import numpy as np

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()

    def forward(self, values, length):
        return values[0:length]

net = Net()
a = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
n = torch.tensor(2)  # renamed from `len` to avoid shadowing the built-in
net(a, n)
traced_net = torch.jit.trace(net, (a, n))

ctx = tvm.cpu(0)
target = 'llvm'
shape_list = [("input0", [3, 3]), ("input1", [])]
mod, params = relay.frontend.from_pytorch(traced_net, shape_list)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

func = mod['main']
intrp = relay.create_executor("graph", ctx=ctx, target=target)
ff = intrp.evaluate(func)
ff([[1, 2, 3], [4, 5, 6], [7, 8, 9]], 2)
Error:
>>> with tvm.transform.PassContext(opt_level=3):
... lib = relay.build(mod, target=target, params=params)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/pivovaa/workspace/tvm/python/tvm/relay/build_module.py", line 269, in build
graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
File "/Users/pivovaa/workspace/tvm/python/tvm/relay/build_module.py", line 132, in build
self._build(mod, target, target_host)
File "/Users/pivovaa/workspace/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 237, in __call__
raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) 9 libtvm.dylib 0x00000001268bdfa7 tvm::relay::StorageAllocaBaseVisitor::GetToken(tvm::RelayExpr const&) + 23
[bt] (7) 8 libtvm.dylib 0x0000000126967388 tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&) + 344
[bt] (6) 7 libtvm.dylib 0x00000001266c2a8d tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&) + 173
[bt] (5) 6 libtvm.dylib 0x00000001266c2d80 tvm::NodeFunctor<void (tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>*)>::operator()(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>*) const + 288
[bt] (4) 5 libtvm.dylib 0x00000001268bd3cb tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*) + 491
[bt] (3) 4 libtvm.dylib 0x00000001268bdc92 tvm::relay::StorageAllocator::CreateToken(tvm::RelayExprNode const*, bool) + 1010
[bt] (2) 3 libtvm.dylib 0x00000001268bf29c tvm::relay::StorageAllocator::Request(tvm::relay::StorageToken*) + 28
[bt] (1) 2 libtvm.dylib 0x00000001268bfb73 tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*) + 451
[bt] (0) 1 libtvm.dylib 0x0000000125a2c9ff dmlc::LogMessageFatal::~LogMessageFatal() + 111
File "/Users/pivovaa/workspace/tvm/src/relay/backend/graph_plan_memory.cc", line 292
TVMError:
---------------------------------------------------------------
An internal invariant was violated during the execution of TVM.
Please read TVM's error reporting guidelines.
More details can be found here: https://discuss.tvm.ai/t/error-reporting/7793.
---------------------------------------------------------------
Check failed: pval != nullptr == false: Cannot allocate memory symbolic tensor shape [?, ?]