Hello @masahi
I asked my questions earlier and worked out the solution myself.
I am facing the prim::DictConstruct issue again, because I am customizing the BERT model via BertConfig, and there is no return_dict option I can use in this case.
Can you explain what exactly you mean by "you can manually turn it into a tuple"?
Here is my setting:
```python
import numpy as np
import torch
from transformers import BertConfig, BertModel
from tvm import relay

# depth_multipliers and shape_dict are defined earlier in my script
np_input = torch.tensor(np.random.uniform(size=[1, 128], low=0, high=128).astype("int32"))
BERTconfig = BertConfig(hidden_size=768, num_hidden_layers=round(12 * depth_multipliers),
                        num_attention_heads=12, intermediate_size=3072)
model = BertModel(BERTconfig).eval()
traced_script_module = torch.jit.trace(model, np_input, strict=False).eval()
mod, params = relay.frontend.from_pytorch(traced_script_module, input_infos=shape_dict)
```
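My current guess at "manually turn it into a tuple" is a small wrapper module like the sketch below, which converts the dict-like output into a plain tuple before tracing (`TupleWrapper` is just my own name for it; please correct me if this is not what you meant):

```python
import torch


class TupleWrapper(torch.nn.Module):
    """Wrap a model whose forward returns a dict so it returns a tuple instead."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, *args):
        out = self.model(*args)
        # Convert the dict-like output to a tuple of its values, in key order,
        # so torch.jit.trace sees a TupleConstruct instead of a DictConstruct.
        return tuple(out.values())
```

If that is the idea, I would then trace `TupleWrapper(model)` instead of `model` before calling `relay.frontend.from_pytorch`.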
Thanks!