Questions about using VTA

Hello,
I have three questions about VTA:

  1. For VTA, if I run ResNet-50, how can I generate the graph JSON file and the params file used in this code:
    sym = nnvm.graph.load_json(open(graph_fn).read())
    params = nnvm.compiler.load_param_dict(open(params_fn, 'rb').read())

  2. In your VTA paper, the first convolution layer, max pooling, and the fully connected layers are evaluated on the CPU. In the ResNet-18 demo, how did you implement this assignment strategy, and if I want to change it, how should I modify your code?

  3. When I run your ResNet-18 demo, my result is:
    ResNet-18 Prediction #1: wood rabbit, cottontail, cottontail rabbit
    #2: Norwich terrier
    #3: tabby, tabby cat
    #4: tiger cat
    #5: weasel
    Performed inference in 0.45s

it's different from your result on the website.

my vta_config.json is:
{
  "TARGET" : "pynq",
  "HW_FREQ" : 100,
  "HW_CLK_TARGET" : 8,
  "HW_VER" : "0.0.0",
  "LOG_INP_WIDTH" : 3,
  "LOG_WGT_WIDTH" : 3,
  "LOG_ACC_WIDTH" : 5,
  "LOG_OUT_WIDTH" : 3,
  "LOG_BATCH" : 0,
  "LOG_BLOCK_IN" : 4,
  "LOG_BLOCK_OUT" : 4,
  "LOG_UOP_BUFF_SIZE" : 15,
  "LOG_INP_BUFF_SIZE" : 15,
  "LOG_WGT_BUFF_SIZE" : 18,
  "LOG_ACC_BUFF_SIZE" : 17
}
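For reference, assuming the LOG_* fields are base-2 exponents (which is the VTA configuration convention: widths in bits, buffer sizes in bytes), the settings above decode like this:

```python
# Sketch: decoding the LOG_* fields of vta_config.json, assuming each
# is a base-2 exponent (widths in bits, buffer sizes in bytes).
config = {
    "LOG_INP_WIDTH": 3, "LOG_WGT_WIDTH": 3, "LOG_ACC_WIDTH": 5,
    "LOG_BATCH": 0, "LOG_BLOCK_IN": 4, "LOG_BLOCK_OUT": 4,
    "LOG_INP_BUFF_SIZE": 15, "LOG_WGT_BUFF_SIZE": 18, "LOG_ACC_BUFF_SIZE": 17,
}
derived = {k.replace("LOG_", ""): 1 << v for k, v in config.items()}

print(derived["INP_WIDTH"])                       # 8 (8-bit inputs)
print(derived["BLOCK_IN"], derived["BLOCK_OUT"])  # 16 16 (16x16 GEMM tile)
print(derived["INP_BUFF_SIZE"])                   # 32768 (32 KiB input buffer)
```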

my hardware:
CPU: Cortex-A9
FPGA: xc7z030fbg484-3

Do you know the reason?

Thanks

Thanks @D_Shang for trying VTA out, we very much appreciate your interest!

  1. Currently graph support is fairly rigid in our demo - we generated our own NNVM graph on an unsupported branch. However, @jroesch and his colleagues are hard at work introducing the new graph IR, Relay, for which we'll provide VTA support to map new graphs and operators. See the Relay GitHub issue: https://github.com/dmlc/tvm/issues/1673. If you give us a couple of weeks, we'll provide front-end support from mainstream frameworks to Relay, and compilation from Relay to VTA, so you can run new models like ResNet-50 much more easily. If it's urgent, we can provide some preliminary support in NNVM, but NNVM will eventually be deprecated.
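For what it's worth, under NNVM (before its deprecation) the two files the demo reads back could be produced along these lines. This is a sketch, not the exact demo code: the function name, target, and default input shape are placeholders.

```python
def save_compiled_model(sym, params, graph_fn, params_fn,
                        target="llvm", shape=None):
    """Sketch: compile an NNVM symbol and write the graph JSON and
    serialized params files that nnvm.graph.load_json and
    nnvm.compiler.load_param_dict expect to read back.
    Requires an NNVM install; the default shape is a placeholder."""
    import nnvm.compiler  # imported here so the sketch loads without NNVM
    shape = shape or {"data": (1, 3, 224, 224)}
    graph, lib, params = nnvm.compiler.build(
        sym, target, shape=shape, params=params)
    with open(graph_fn, "w") as f:
        f.write(graph.json())  # what nnvm.graph.load_json reads
    with open(params_fn, "wb") as f:
        f.write(nnvm.compiler.save_param_dict(params))
    return lib
```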

  2. Again, regarding the assignment strategy, it’s fairly rigid and was achieved on an unsupported internal branch. This is another item pending on Relay, and Relay to VTA compilation support which should be available within the next two weeks or so.

  3. The prediction does indeed seem wrong, although it's not totally off. It's a little odd and will require some investigation on our end. Are you using a Pynq board?

Thanks @thierry for your reply! I'm looking forward to your work!

As for the third question, I forgot to mention that when I execute resnet.py, I hit an error:
AttributeError: module 'tvm.autotvm' has no attribute 'tophub'

Also, I can't find 'tophub' anywhere in the TVM project, so I deleted this line of code:
autotvm.tophub.check_backend('vta')

I'm not using a Pynq board; the board I'm using is custom. The CPU is a Cortex-A9 and the FPGA is an xc7z030fbg484-3.

Thanks!

Thanks @D_Shang. You can ignore the tophub error for now, as we haven’t brought complete tophub support to VTA (it’s WIP - the goal is to have AutoTVM schedule VTA operators for higher efficiency).
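If you'd rather not delete the line, one way to keep the script running on builds without tophub is to guard the call. This is an illustrative pattern only; the SimpleNamespace below is a stand-in for tvm.autotvm, not the real module:

```python
# Sketch: guard the tophub call so the script still runs on TVM builds
# where autotvm.tophub is missing, instead of deleting the call outright.
from types import SimpleNamespace

def check_tophub(autotvm, backend="vta"):
    """Run autotvm.tophub.check_backend(backend) if tophub exists."""
    tophub = getattr(autotvm, "tophub", None)
    if tophub is None:
        return False          # tophub absent in this build: skip, don't crash
    tophub.check_backend(backend)
    return True

fake_autotvm = SimpleNamespace()   # stand-in with no 'tophub' attribute
print(check_tophub(fake_autotvm))  # → False
```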

With respect to the board, we are using a different device, the xc7z020clg484-1, with a different speed grade. Did you check your post-place-and-route timing analysis to see whether you close timing? You may have to set the "HW_CLK_TARGET" parameter in the vta_config.json file to 7 to pipeline your design more aggressively, since your FPGA has a different speed grade.
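Concretely, that would mean changing one field in vta_config.json (all other fields unchanged); if I recall the convention correctly, HW_CLK_TARGET is the HLS clock period target in nanoseconds, so a lower value asks the tools to pipeline more aggressively:

```json
{
  "HW_FREQ" : 100,
  "HW_CLK_TARGET" : 7
}
```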

What’s interesting about the error you are getting is that you are not totally off, but you’re not getting the right answer either…

How did you manage to get the Pynq Image on this FPGA board as well? Did you build a custom image from scratch?

Thanks @thierry for your reply!

  1. I set "HW_CLK_TARGET" to 7 and my timing closes, but my result is still wrong:
    #1: wood rabbit, cottontail, cottontail rabbit
    #2: Norwich terrier
    #3: tabby, tabby cat
    #4: tiger cat
    #5: weasel
    It is unchanged.

    When I execute resnet.py, there is a warning:

    "tvm-master/src/arithmetic/int_set.cc:515: cannot evaluate set type Load"

  2. I don't use the whole Pynq image, just its filesystem. The BOOT.bin, device tree, kernel, and root filesystem are custom-made. I deleted the contents of "/etc/environment".

  3. The results of the other demos on your website are correct (no matter whether "HW_CLK_TARGET" is 7 or 8).

Thank you. I'll look into it some more next week, but without your platform it may be difficult to reproduce the bug.

Regardless, we are migrating the graph IR front end to Relay (see: https://github.com/dmlc/tvm/issues/1673), so this ResNet example will be deprecated and replaced with one that utilizes Relay. Stay tuned in the next couple of weeks for some exciting updates on VTA/Relay.

Thanks @thierry! I’m looking forward to your updates!