TVM Monthly - October and November 2021

As discussed by the TVM PMC, our goal is to provide a monthly summary of the project so users and developers can get a better understanding of what is happening in the TVM community.

Feedback and suggestions are welcome so that we can further improve these updates.

Community

During October and November of 2021 we welcomed many new contributors to the project. In particular, we welcomed @areusch as a PMC member, @Mousius as a committer, and @wrongtest, @mehrdadh, @csullivan, @zxybazh, @mbs-octoml, @ganler and @elvin-n as reviewers.

Thanks to everyone for their hard work and contributions!

On the technical side, we continued to improve operator and frontend support, and we are thrilled that PaddlePaddle models can now be imported into TVM. Work on TensorIR continued, and we started to land MetaSchedule, the next generation of scheduling infrastructure that unifies the approaches of AutoTVM and AutoScheduler. The community also added support for a set of backends, including CMSIS-NN and CUTLASS. In addition, Relay was further enhanced to support dynamic workloads.
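The new PaddlePaddle support is exposed through relay.frontend.from_paddle. Below is a minimal sketch of how an exported static-graph PaddlePaddle model might be imported and compiled; the model path, input name and shape are placeholders and would need to match your own model.

```python
# Minimal sketch: import a PaddlePaddle model into TVM via the new frontend.
# "resnet50_paddle" and the input name "inputs" are placeholders.
import paddle
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a previously exported (static-graph) PaddlePaddle model.
model = paddle.jit.load("resnet50_paddle")  # placeholder path prefix

# Convert to a Relay module; shape_dict maps input names to shapes.
shape_dict = {"inputs": [1, 3, 224, 224]}  # placeholder input name/shape
mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)

# Build for CPU and run with the graph executor.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

dev = tvm.cpu(0)
rt = graph_executor.GraphModule(lib["default"](dev))
```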

This forum received 143k page views and 2.6k user visits in the last month.

Relay

  • [Relay] Improve reduction op layout propagation for packed input #9253
  • [Relay] Support dynamic shape searchsorted #9348
  • [Relay] Remove DeviceMap from LowerTE #8788
  • [Relay] Prepare for switching VM to LowerTEPass. #9550
  • [Relay] Prepare DeadCodeElimination for running post LowerTEPass/ManifestAlloc. #9542
  • [Relay] Remove FTVMCompute from TNonComputational ops #9334
  • [Relay] WithFields method for Call, Function, Var, TupleGetItem, If, Let, RefCreate, RefRead, RefWrite, Match, and Clause #9569
  • [Relay] Non-recursive Dtor for Let #9461
  • [Relay] Use target_host determined at Relay level instead of recalculating it #9499
  • [Relay] Introduce Executor and Runtime representations with associated registries #9246
  • [Relay] Gather op dynamic input support #9240
  • [IR] Minor cleanup to tvm.ir.instrument.PassInstrument #9392
  • Switch PlanDevices pass to be w.r.t. SEScopes instead of DLDeviceTypes. #9326
  • Adds SEScope (Storage/Execution Scope) for use as new unit of planning in ‘device’ planning. #9313
  • WithFields for Tuples #9533
  • Add conv1d support in BYOC TRT by converting conv1d to conv2d #9324
  • [Relay][VM][RPC]Use a uint64_t to serialize primitive_attrs in the Relay VM to fix 32bit RPC #9169
  • [AlterLayout] Strided slice layout transform fix (disallow NCHW4c → NCHW etc properly) #9245
  • [AlterLayout] Respect input layout for dense op if explicitly specified #9535

TOPI and Operators

  • [Relay, TOPI] Add searchsorted op #9184
  • [Topi] Fix direct SIMD conv2d schedule name #9225
  • [Topi] Cortex-M DSP support #9233
  • Add default for split op #9489
  • [Topi][Op][PyTorch][Vitas] Fix inconsistent kernel layout conventions for conv2d_transpose #9336
  • [Op] Do not override specified layout in pooling (2nd PR) #9328
  • [TOPI] Fix compiing batch_matmul and dense when two args are the same tensor #9207

Tensor-level IR and Arithmetic

  • [TIR] Added PrettyPrint of ProducerStore/ProducerRealize nodes #9259
  • [TIR] Minor refactor to tir.transform.StorageFlatten #9260
  • [TIR] tir.transform.StorageFlatten refactor #9091
  • [TIR] Move UnifyThreadBinding to earlier stage #9365
  • [TIR] Fix VerifyGPUCode for vectorized halfx8 store #9420
  • [TIR] Fix FlattenBuffer computing size for buffer with strides #9195
  • [TIR] Add support for 0-dim buffer #9224
  • [TIR] Add type hint for TIR #9432
  • [TIR] Add structural error printing for TensorIR #9306
  • [TIR] Make compact buffer and get access region aware of conditions #9372
  • Change Call with TIRCallAttrs to call_lowered op #9312
  • Followup from #9312 (Introduce call_lowered op) #9491
  • Add a ‘rolling_buffer’ scheduling primitive #9444
  • schedule_injective of arm_cpu should consider dtype itemsize #9339
  • Adding annotations for tir.allocate #9168
  • [Simplifier] Add printing of SplitExprNode and SumExprNode #9262
  • [TensorIR] GetProducer, GetConsumer #9464
  • [TensorIR] Cross-Thread Reduction #9360
  • [TensorIR] Print TVMScript with prefix T instead of tir #9422
  • [TE] Add stage to ICHECK error message #9249
  • [TE] Light refactoring of TE → TIR paths. #9263
  • [TensorIR][UX] Type annotation-based runtime type checking #9559
  • [TIR][LowerMatchBuffer] Fix lowering strides when source buffer has non-empty strides #9166
  • [TVMScript] Script namespace changes #9115
  • [TVMScript] Report error if add attr to implicit root block #9507
  • [TVMScript] Use // and % for FloorDiv/FloorMod #9437
  • [TVMScript] Parser for Lambdas, Parser/Printer for CommReducer #9358
  • [Script][TensorIR] update block syntax #9286
  • [TIR][USMP] Added buffer info extraction pass #8468
  • [TIR][USMP] Add a parallel to serial for loop converter pass #8469
  • [TensorIR][Schedule] Inherit block anotation upon creating new blocks #9573
  • [TensorIR][M2a] Decompose-Reduction #9041
  • [TIR][Schedule] Add get-child-blocks primitive #9434

MetaSchedule, AutoScheduler (Ansor) and AutoTVM

  • [MetaSchedule] Task Extraction #9382
  • [MetaSchedule] Sample-Perfect-Tile #9449
  • [Meta Schedule][M4a] Local runner #9153
  • [Meta Scheduler] Add cleanup for localrunner #9191

Language Bindings

  • Add dilation to MaxPool2DAttrs Rust bindings #9215
  • Fix CallNode Rust binding #9381

Frontend

PaddlePaddle

  • Add TVMC Frontend for PaddlePaddle #9083
  • add PaddlePaddle tutorial #9124
  • [Frontend][PaddlePaddle] Support more common operators #9428
  • [Frontend][PaddlePaddle] Add some activation, elementwise and reduce operators #9370
  • [Frontend][PaddlePaddle] Remove unused parameters and fix doc string #9283
  • [Frontend][PaddlePaddle] Support conv2d_transpose/rnn/fill_constant_batch_size_like #9564
  • [Frontend][PaddlePaddle] Fix bug for paddle frontend #9236
  • [Frontend][PaddlePaddle] Add 10+ operators for PaddlePaddle #9126
  • [Frontend][PaddlePaddle] Add operators of interploate/flatten and modify try_infer_value #9459
  • [Frontend][PaddlePaddle] Add autopad for conv/pool #9295

ONNX

  • [Contrib][ONNX] Handle removal of onnx.utils.polish_model #9178
  • [ONNX] Unique op should always return int64 indices #9490
  • [ONNX] Normalize axes for Slice #9517
  • [ONNX] Add MatMulInteger16 contrib op #9186
  • [ONNX][Relay] Support “tf_crop_and_resize” in relay Resize op. #9475
  • [Frontend][ONNX] Support RandomNormal operator #9493
  • [Frontend][ONNX] ignore ‘training_mode’ tag from onnx in batch_norm op #9575
  • [ONNX][Converter] Add dynamic nodes support #9380
  • [ONNX] [Relay] Resize Opset 13 #9265
  • [TVM] Add importer for ONNX QLinearMatMul op #8952
  • [ONNX] [#8838] QLinearLeakyRelu contrib op #9063

Tensorflow, TFLite and Keras

  • Support quantised RSQRT operator in TFLite #9165
  • Support quantized ABS operator in TFLite frontend #9411
  • Support quantized NEG operator in TFLite frontend #9404
  • Improve the keras frontend to support tflite 2.6 #9562
  • Support quantised SQRT operator in TFLite #9258
  • [Keras] Support return_sequences in LSTM #9303
  • [Keras] Add l2_normalize support #9383
  • [Frontend][TFlite] Cast MirrorPad paddings to int32 #9468
  • [TFLite] Add option to overwrite OperatorConverter class in relay.frontend.from_tflite #9256

Caffe

  • [Caffe Frontend] Add support for Embed layer #9257

Torch

  • [PyTorch]Add PyTorchTVM: compile torchscript to tvm and export as pytorch_op #8777
  • [Torch] Add aten::roll support for Swin Transformer #9371
  • [Torch] Support “all” and “any” op #9185
  • [PyTorch][Frontend] Semantic difference of ‘bias_add’ between relay and pytorch #9204

Backend

  • [CUTLASS] Fix hardcoded include path and logic for profile_all = False case #9399
  • [CUTLASS] Initial support for dynamic shape dense #9419
  • [CUTLASS] Support batch_matmul #9439
  • [CUTLASS] Refactor GEMM generator in preparation for conv2d #9571
  • [VMCompiler] Support shape func lowering for nested function call #9405
  • [CUTLASS, Eazy] Cache profiling result and support compiling generated kernels in parallel #9402
  • Add cache flush for arm #9170
  • A follow up PR for 5/6 of Arm(R) Ethos™-U NPU codegen #9147
  • Arm(R) Cortex(R)-M55 CPU and Arm(R) Ethos™-U55 NPU Demo App #8922
  • Arm(R) Ethos™-U NPU Depthwise2d operator support #9209
  • Add the Arm(R) Ethos™-U NPU identity operator #9457
  • Arm(R) Ethos™-U NPU BinaryElementwise operators support #9442
  • Arm(R) Ethos™-U NPU Pooling operators support #9384
  • Address review comments on Arm(R) Ethos™-U PR 3/6 #9159
  • Adjust Hexagon conv2d schedule to split channel out (k) and move to outer loop #9287
  • Hexagon conv2d full output slice #9198
  • [Hexagon] Add hexagon launcher to apps and add to TVM’s build system #9220
  • [Hexagon] Sync with upstream and use CreateDSOLibraryObject in module construction #9376
  • [Hexagon] Refactor directory structure to accommodate new runtime #9354
  • [Hexagon] Launcher modifications to make use of the new device API #9356
  • [Hexagon] Introduce new DeviceAPI #9355
  • [microNPU] Allow constants to be given as input to an operator #9515
  • [microNPU] Adding rounding mode attribute to operators #9514
  • [microNPU] Replace ICHECK with diagnostic context in type inference #9470
  • [microNPU] Enforce bias when pattern matching conv2d #9244
  • [microNPU] Support binary elementwise with non-4D inputs #9521
  • [microNPU] Change weights and command stream section #9523
  • [microNPU] Add unary elementwise operator infrastructure with ABS #9530
  • [microNPU] Add support for unary elementwise CLZ #9577
  • [ETHOSN] Add support for non-default Ethos™-N78 configurations #9386
  • [ETHOSN] Cleanup of trademarks and registered trademarks #9516
  • [ETHOSN] Match config for is-supported with compilation target #9160
  • [ETHOSN] Streamline Ethos™-N cross-compile rpc usage #9477
  • [ETHOSN] Update compilation defaults to Ethos™-N78 #9563
  • [ETHOSU] Add early simplify to fix LoopPartition #9387
  • [CUDA] Support memory reuse for dynamic shared memory #9341
  • [Target] enable -arch=sm_xx for assigning cuda target arch and deprecate autotvm.measure.set_cuda_target_arch api #9544
  • [BYOC] CUTLASS integration #9261
  • [CMSIS-NN] Assert correct amount of CMSIS-NN artifacts in MLF #9480
  • [CMSIS-NN] Initial operator support for Mul #9163
  • [CMSIS-NN] Initial operator support for Add #9167
  • [CMSIS-NN] Convert CMSIS-NN to use Target Hooks #9397
  • [OPENCL] Workaround for zero size allocation #9379

Code Generation and Compilation API

  • Section names for TVM generated constants #9524
  • Expose workspace size in tvmgen_default.h #9510
  • Propagate tvm target through graph tuning setup #9248
  • Initial Implementation of TIRToRuntime Target hook #9190
  • [Codegen][LLVM] Add ability to turn on fast math flags #9223
  • [TVMC] Re-enable PyTorch test #9441
  • [TVMC] Keep quantized weights when importing PyTorch model #9417
  • [TVMC] Add test for quantized pytorch model #9467
  • [TVMC] Treat invalid FILE arguments #9213
  • [TVMC] Support dot inside of TVMC input shape name arguments #9294
  • [TVMC] Split common tvmc test file into more specific files #9206
  • [TVMC] Compose target options from target registry #9218
  • [tvmc] Adds ethos-u-vela dependency in the tvmc set of python dependencies. #9590
  • [CORE][Relay] Swap and remove compile_engine with te_compiler followup of #8775 #9282
  • [TVMC][microTVM] Add new micro context #9229
  • [1/3][AOT][DeviceAPI] Connecting devices structure to relevant operators #9395
  • [2/3][AOT][DeviceAPI] Add Hooks for Activate/Deactivate/Open/Close #9500
  • [3/3][AOT][DeviceAPI] Wire up cpacked Device API context #9501

MicroTVM

  • [microTVM] Add platform version check to template project #9274
  • [microTVM] Fix AOT/ARMv7m tests on physical devices. #9364
  • [microTVM] Add microTVM Template Projects to tlcpack pip Package #9309
  • [microTVM] Arduino: Fix MLF archive filename in generated project dir #9320
  • [microTVM][Zephyr] Enable RISCV Tests on QEMU CI #9325
  • [MicroTVM][PyTest] Explicitly skip MicroTVM unittests. #9335
  • [microTVM][RVM] Always destroy the VM if all tests pass #8739

Runtime

  • Initial Implementation of TIRToRuntime Target hook #9190
  • Contributing the STM32 port #7742
  • Support runtime defined function wrapping of library module packed functions #9342
  • Add is_global_func tag to differentiate global and device function #9436
  • [Runtime] Pipeline Executor Second patch, configuration load and executor export/import. #9108
  • [Profiler] Sort columns in table and csv output #9300
  • [Profiler] Do not aggregate frames with different devices #9290
  • [Profiler] Add significant VM instructions to profiling report #9292

Build, Testing and CI

  • [CI] Hot fix the python integration script misplacement #9412
  • [CI] Add TVM_INTEGRATION_I386_ONLY for Integration Test on i386 #9388
  • [CI] Bump ci-gpu to v0.78 #9378
  • [CI] Update TVM ci-cpu docker image to v0.79 #9454
  • [CI] Make version.py to rely on repository metadata to generate version string #9472
  • [CI] Prevent the complete Jenkins pipeline to run when files commited only to /docs #9031
  • [CI] Pin setuptools to v58.4.0 in CI to circumvent breaking change in v58.5 #9446
  • [CI] Pre-build Reference System Dependencies #9270
  • [CI] Use correct tag in Docker --cache-from #9234
  • [UnitTest][Flaky] In test_report_serialization, compare csv. #9275
  • [Pytest] Sort unit tests before running. #9188
  • [UnitTest] Removed vulkan from CI run of task_python_topi.sh #9219
  • [UnitTests][CMSISNN] Mark CMSISNN with skipif they are missing libraries #9179
  • [UnitTests][CMSISNN] Mark Binary Ops CMSIS NN tests as skipped #9200
  • [PyTest] Sort by test location, but not parametrization #9353
  • Hotfix Jenkinsfile #9592
  • Test run triage #9308
  • Update ci_i386 to v0.74 #9211
  • Update ci-cpu to v0.78 #9199
  • Run full build when no files were changed over main #9221
  • Bump version to 0.9.dev0 #9581
  • Bump the CMake version in ubuntu_install_cmake_source.sh to 3.14.7. #9424
  • Add LLVM-13 installation to Docker setup #9498
  • Separate Windows and MacOS GitHub Actions #9578
  • added tests for quantized tflite sin operator #9478
  • Fix function annotation #9474
  • llvm 14 and above move TargetRegistry.h into MC #9305
  • Skip onnx test cases if no onnx #9272
  • Reset sphinx-gallery version to 0.4.0 #9280
  • Add USE_ETHOSU for the config.cmake #9162
  • Fix repository URL in ubuntu_install_rocm.sh #9425
  • Add back-to-back conv2d Hexagon test for stripe scheduling #9390
  • refactor Hexagon conv2d tests #9333
  • cleanup Hexagon conv2d tests #9473
  • [LLVM] Treat scalars as single-lane vectors in CreateVecConcat #9264
  • [LLVM] Rename t_tvm_context_ to t_tvm_device_, NFC #9176
  • [cpptest] Use find_package to locate GTest files #9208
  • [cpptest] Reset op attributes before registering them #9202
  • [LLVM/CPU] Add comments with origins of various runtime/backend types, NFC #9177
  • [unittests] Skip import of tvm.micro if micro-TVM was not enabled #9301
  • [Release] Bump version to v0.8.0; Update NEWS.md #9503
  • [Docker][Onnx] Upgrade ONNX to latest version #9519
  • [TEST] Move llvm import test away from minimum test #9171
  • [TEST] Fix duplicate definition error for gpu export mod testcase #9538
  • [TEST] Disable Hexagon TestConv2dPackedFilter test #9344
  • [Core][Build] Move build module transformations and utilities to C++ #9103
  • [Build] Rename build module helper func #9297
  • [iOS][RPC] Enable tests for connection configuration: tracker via proxy #9398
  • [iOS][RPC] Enable iOS simulation in public CI to cover basic tuning capabilities #9212
  • [VitisAI] Update Vitis AI integration to 1.4 release #8815
  • [CI.Lint.Black] Use “en_US.UTF-8” for Red Hat 6&7 Compatibility #9537
  • [AOT][Tests] Use pre-built libraries in Reference System tests #9271
  • [Tests] Ensure MyPy type checks pass #9284
  • [BYOC][NPU] Fix integration tests not running #9415
  • [BYOC] [ACL] Update ACL to 21.08 #9396
  • [BYOC][ACL] Update installation docs #9426

Doc

  • [DOC] Add tip on mitigation for symbol conflict with PyTorch #9433
  • Update license file to note libbacktrace #9579
  • Update NEWS to include v0.8 change log #9580
  • Documentation Refactor #9203
  • Introduction tutorial formatting fixes #9539
  • Update virtual_machine.rst #9222
  • [DOCS] Fix installation from source link some text #9238
  • [TensorIR][Tutorial] Blitz course #9315
  • [docs][bug] Add redirects for moved pages #9394
  • [Tutorial] Fix VTA vision detection tutorial ‘sphinx’ style error. #9279
  • [Tutorial] Fix formatting, grammar, dead link #9281
  • [Docs][Bugfix] fix API doc URLs #9266

Improvements and Bug Fixes

  • [Relay] Remove unnecessary Optional argument to ToANormalForm and friends #9197
  • [microTVM][Arduino] Cleanup template directory #9289
  • Better host handling in CompilationConfig & debug printing #9460
  • BUG: FoldConstant can see through on_device annotations. #9367
  • BUG: alloc_tensor offset and reshape shape should be on the CPU #9421
  • BUG: Look through on_device annotations when looking for shape constants #9345
  • BUG #9216: Don’t disable FuseOps pass since required by GraphExecutor #9227
  • Fix typo in Git Usage Tips #9377
  • Fix direct and broken links #9314
  • fix compute inline not to over write annotated opaque accesses #9509
  • fix debug mask argument check typo #9586
  • Fix compiler warning with clang-13.0 #9522
  • Fix end to end benchmark with rpc devices #9175
  • Fix several typos in pytest_target_parameterization.rst #9447
  • Use variable in curl download url #9330
  • Fix custom_address serialization in c++ tracker client. #9192
  • Fix GetQmin and GetQmax from relay.qnn #9427
  • Fixed some warnings about lambda’s closures that are bigger than necessary #9481
  • Fix a typo #9601
  • Fix inconsistencies in graph_executor function names handling #9255
  • fix a bug in the comment of function :fixed_point_multiply #9304
  • Fix USMP parallel to serial loop transform test #9254
  • Migrate C Interface API Generation to C++ #9106
  • fix missing span arg #9318
  • Removed a manual file handler pitfall #9435
  • [Hexagon] Fix cmake files for Hexagon launcher #9343
  • [Hexagon] Fix addressing TVMValue array #9302
  • [Hexagon] Fix compilation errors in Hexagon launcher #9189
  • [BugFix] Fix to allow zero-copy between numpy and TVM NDArrays #9230
  • [BugFix] Fix a predicate bug in TIR schedule primitive rfactor #9228
  • [BugFix] Fix divide by zero error in TIR pass lower_warp_memory #9485
  • [BugFix] fix nvptx not supported by device_enabled error #9585
  • [BugFix][Meta Schedule] Fix meta_schedule.testing.local_rpc #9172
  • [Test] Fix flaky LocalRunner test due to restrictive timeout #9181
  • [QNN] Fix order of operations in qnn.quantize slightly to prevent undefined behavior #9558
  • [Typo] Correct fast_tanh description #9193
  • [microNPU] Fix incorrectly calculated stride when converting NHWC to NHCWB16 #9560
  • [HOTFIX][TARGET] Change LOG in compilation config to DLOG #9486
  • [TARGET] Cleanup the target_host usage to new target style. #9497
  • [FIX][TIR] Remove unused code and fix typo in storage_align #9583
  • [Support] Fix StartsWith when the string is equal to the prefix #9393
  • [Support] Add libinfo into the runtime build #9310
  • [BUG][TVMScript] fix block range error #9574
  • [BUGFIX] Fix typo in error message in CMakeLists.txt #9251
  • [Error reporting] Replace runtime errors with LOG(FATAL) #9311
  • [PROFILER,VM] Fix timer device type for reshape_tensor #9518
  • [FIX,PROFILING] Only check if ops duration is nonzero #9568
  • [TVMScript][Fix] Add type hints for more uncovered cases #9505
  • [BugFix][TIR] Fix primitive Bind for init-inside blocks #9359
  • [BugFix][TVMScript] Fix printer for dependent loops #9506
  • [Fix] Update the return value type of GraphExecutorCodegen.codegen #9603
  • [Relay][Frontend] Prune redundant logging #9545
  • [Arith][TensorIR][Bugfix] Add IterRangeSanityCheck in DetectIterMap #9205
  • [Bug][Meta Schedule] Fix Infinite Loop Caused When Calling Methods Not Overridden In PyClass. #9451
  • [RPC] Fix Server connecting to RPC Tracker through a Proxy #9210
  • [Conv2DTransposed] Fix wrong shape check and add new TOPI module to support groups #9465
  • [codegen][LLVM][bugfix] Specify argument to FastMathFlags setAllowContract #9337
  • [iOS] Fix build issues on the latest XCode and iOS #9298
  • [BugFix][Opencl] Explicitly cast min/max operands #9374
  • [Fixbug] Report duplicated param names of relay function when bind params #9350
  • [Rust] Fix an infinite recompilation loop in the tvm-sys crate #9450
  • [Code Style] Changed code to match the tvm code style conventions. #9040
  • [Runtime] BUG: Fix core-dump in crt graph_executor.c #9155

People Whose Pull Requests Were Updated

Note: The format is name (number of activities)

Disclaimer: the number of activities does not directly correspond to the community’s view of the significance of contributions.

Mousius (30), masahi (18), mehrdadh (18), Lunderberg (13), mbs-octoml (13), jiangjiajun (13), kparzysz-quic (12), junrushao1994 (11), AndrewZhaoLuo (11), lhutton1 (10), tqchen (9), areusch (9), vinx13 (9), Hzfengsy (9), electriclilies (8), tkonolige (7), Leo-arm (7), mbrookhart (6), leandron (6), manupa-arm (6), csullivan (6), shingjan (6), mikepapadim (6), MasterJH5574 (6), syang-ng (6), adstraw (6), gromero (5), hogepodge (5), wrongtest (5), grant-arm (5), comaniac (4), jroesch (4), spectrometerHBH (4), zxybazh (4), ekalda (4), NicolaLancellotti (4), sunwayforever (4), ophirfrish (4), icemelon (3), mbaret (3), huajsj (3), driazati (3), KJlaccHoeUM9l (3), Lyken17 (3), shengxinhu (3), mshr-h (3), apivovarov (2), lixiaoquan (2), rkimball (2), jtuyls (2), anwang2009 (2), gussmith23 (2), vvchernov (2), alter-xp (2), jinhongyii (2), ZQPei (2), apeskov (2), akmaru (2), onkar-sima-ai (2), merrymercy (1), zhiics (1), yzhliu (1), anijain2305 (1), Laurawly (1), slyubomirsky (1), t-vi (1), echuraev (1), altanh (1), elvin-n (1), leeexyz (1), CircleSpin (1), Johnson9009 (1), hzfan (1), hgt312 (1), tristan-arm (1), ganler (1), Meteorix (1), cloud-mxd (1), sergey-grovety (1), quic-sanirudh (1), ghostplant (1), Oreobird (1), ziyu-guo (1), hope51607 (1), Icemist (1), cconvey (1), elinx (1), FranckQC (1), gayatripk1 (1), ZJUGuoShuai (1)

People Who Reviewed Pull Requests

Note: The format is name (number of activities)

Disclaimer: the number of activities does not directly correspond to the community’s view of the significance of contributions.

junrushao1994 (97), masahi (78), areusch (76), tqchen (57), comaniac (53), Mousius (49), leandron (41), manupa-arm (39), jroesch (34), vinx13 (33), Hzfengsy (32), mbrookhart (22), AndrewZhaoLuo (19), mbs-octoml (19), tmoreau89 (16), electriclilies (14), tkonolige (13), gromero (11), csullivan (11), mbaret (10), lhutton1 (10), zxybazh (10), Lunderberg (7), mehrdadh (7), mikepapadim (7), ekalda (7), NicolaLancellotti (7), jwfromm (6), shingjan (6), MasterJH5574 (6), ashutosh-arm (6), u99127 (5), hogepodge (5), yzhliu (3), huajsj (3), jiangjiajun (3), grant-arm (3), merrymercy (2), zhiics (2), kparzysz-quic (2), Laurawly (2), kevinthesun (2), apivovarov (2), trevor-m (2), FrozenGene (2), jtuyls (2), YuchenJin (2), guberti (2), Lyken17 (2), fernchen (2), cconvey (2), denise-k (2), icemelon (1), ZihengJiang (1), MarisaKirisame (1), anijain2305 (1), t-vi (1), yongwww (1), jcf94 (1), xqdan (1), elvin-n (1), anwang2009 (1), Leo-arm (1), ganler (1), driazati (1), ophirfrish (1), schell (1), sunjiweiswift (1), quic-sanirudh (1), dchauhan-arm (1), giuseros (1)
