I noticed that when tuning a model for an ARM CPU, we can set the remote CPU affinity:
config_threadpool = remote.get_function('runtime.config_threadpool')
config_threadpool(affinity_mode, num_threads)
Do I also need to set the CPU affinity mode when running inference with the C++ runtime?