TVM Monthly - Jan and Feb 2022

Community

During January and February of 2022, we welcomed many new contributors to the project. Notably, we welcomed @spectrometerHBH, @AndrewZhaoLuo, and @csullivan as committers.

Thanks to everyone for their hard work and contributions!

On the technical side, we kept improving backend support, including the Qualcomm Hexagon DSP, the Arm Ethos-U65 machine learning processor (NPU), and the related CMSIS-NN software library. More CUTLASS kernels have also been implemented, helping to improve performance. MetaSchedule, the next generation of the scheduling DSL, has now landed and is integrated with Relay. In addition, TIR kept receiving enhancements and new functionality, such as Common Subexpression Elimination (CSE), which simplifies expressions and removes redundant computation.
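To illustrate the idea behind CSE (this is a minimal conceptual sketch in plain Python, not TVM's actual TIR pass): a repeated subexpression such as `(x + y)` in `(x + y) * (x + y)` is bound to a temporary once and reused everywhere else.

```python
def eliminate(expr):
    """expr is a leaf (a variable name) or a tuple ('op', child, ...).
    Returns (bindings, root): each distinct subexpression is bound to a
    temporary exactly once, and repeated occurrences reuse that name."""
    seen, bindings = {}, []

    def walk(e):
        if not isinstance(e, tuple):
            return e  # leaves (variables/constants) need no binding
        # Rewrite children first, then deduplicate this node.
        node = (e[0],) + tuple(walk(c) for c in e[1:])
        if node not in seen:
            name = "t%d" % len(bindings)
            seen[node] = name
            bindings.append((name, node))
        return seen[node]

    return bindings, walk(expr)

# (x + y) * (x + y): the repeated (x + y) is computed only once, as t0.
bindings, root = eliminate(("mul", ("add", "x", "y"), ("add", "x", "y")))
print(bindings)  # [('t0', ('add', 'x', 'y')), ('t1', ('mul', 't0', 't0'))]
print(root)      # t1
```

The real TIR pass of course operates on TIR statements and must respect side effects and scoping, but the core rewrite is the same deduplication.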

Relay gained a visualization tool, RelayViz, as well as several enhancements such as better quantization support, VM memory liveness analysis, and the pipeline executor. The Unified Static Memory Planner (USMP) has also landed; it performs memory planning across both inter-operator (Relay-level) and intra-operator (TIR-level) tensors to achieve the best memory utilization. Such aggressive memory optimization is especially vital for embedded use cases.
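The core idea of static memory planning can be sketched as follows. This is a hypothetical first-fit allocator for illustration only, not TVM's USMP implementation: tensors whose live intervals do not overlap may share the same offset inside a single memory pool.

```python
def plan(tensors):
    """tensors: list of (name, size, start, end), where [start, end) is the
    interval during which the tensor is live. Returns (offsets, pool_size)."""
    placed = []   # (offset, size, start, end) of already-placed tensors
    offsets = {}
    # Placing the largest tensors first tends to pack better.
    for name, size, start, end in sorted(tensors, key=lambda t: -t[1]):
        offset = 0
        for o, s, st, en in sorted(placed):
            lifetimes_overlap = not (end <= st or en <= start)
            space_overlap = not (offset + size <= o or o + s <= offset)
            if lifetimes_overlap and space_overlap:
                offset = o + s  # bump past the conflicting buffer
        placed.append((offset, size, start, end))
        offsets[name] = offset
    return offsets, max(o + s for o, s, _, _ in placed)

# Three 100-byte tensors; "a" dies before "c" is born, so they share space
# and the whole pool needs only 200 bytes instead of 300.
offsets, total = plan([("a", 100, 0, 2), ("b", 100, 1, 3), ("c", 100, 2, 4)])
print(offsets, total)  # {'a': 0, 'b': 100, 'c': 0} 200
```

USMP additionally handles multiple pools with performance characteristics and offers several algorithms (greedy and hill-climb, per the PRs below), but liveness-driven offset sharing is the essence.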

The forum received 15.1k page views and 2.3k user visits in the last month.

Please see the detailed items below for more information. Note that I removed the "Build, Testing and CI" section due to the forum's maximum character limit.

Relay

  • Add a conversion of individual operations in FQ2I pass. #10239
  • [FQ2I] Add Min/Max operator support #9918
  • [FQ2I] Support Conv2dTranspose FQ2I #9347
  • [FQ2I] Add support for some unary operators #10273
  • [FQ2I] Add topk to FQ2I #10170
  • RelayViz interface and terminal ast-dump #10085
  • [Relay] Make DeviceAnalyzer a mixed mode visitor #10248
  • [Relay] Add conv2d_backward_weight op (without topi) #9954
  • [Relay] Add icheck to avoid crash in opt_level=0 vm build #10347
  • [Relay] Align strided slice shape functions #10155
  • [Relay] QLinearMatMul allows 1D weight_scale, weight_zero_point inputs #10047
  • [Relay] Add printer for op strategy objects #9923
  • [Pass] DynamicToStatic uses InferTypeLocal #9869
  • [Pass] Simplify consecutive casts in Relay #10133
  • [QNN] Lookup operations for hard to implement operators #10053
  • [QNN] Register a bunch of unary elementwise ops #10086
  • [QNN] Add qnn.rsqrt op #9982
  • [AMP][Pass][Typing] Add faster type inference #9735
  • [AMP] Register some new ops #9849
  • [AMP] Fix IsMixedPrecisionType Edge Case #9856
  • [Relay] [Virtual Device] Store function result virtual device in virtual_device_ field #9848
  • [RELAY] [VIRTUALDEVICE] Change syntax for device planning and store parameter virtual devices in virtual_device_ field #10352
  • [FoldScaleAxis] Support dense and bias_add op in fold scale axis #9838
  • [BYOC-DNNL] add support for more ops and fusion patterns #9995
  • [Relay][Pass] Add a relay pass to extract fake quantized ops #10089
  • [TIR, Relay] improve bfloat16 support #10112

Operator

  • [TOPI] VNNI support for int8 dense #10230
  • [TOPI] VNNI support for batch matmul #10332
  • [TOPI] Add support for groupped conv3d #9873
  • [TOPI] Support grouped conv1d #9832
  • [TOPI] Print shape information when the input shape not compatible with reshaped shape. #9876
  • Add sliding_window operator #9816
  • [Op][Topi] Gather, GatherND, Take can accept unsigned integers as indices #10080
  • [Op][Topi] 5 ops can accept unsigned integers as indices #10098
  • [TOPI,x86] Improve performance on int8 conv2d on x86 #9966

Tensor-level IR and Arithmetics

  • Implementation of Common Subexpression Elimination for TIR #9482
  • [TE] Support negative indices #9023
  • [TE] Fix Const Int bound analysis to handle uints for division #10102
  • [LLVM,TIR] Print LLVM intrinsic names instead of ids #9964
  • [TIR]Show meaningful message when input shape size mismatch with expected size. #9863
  • [TIR] Allow compute_at create block predicate for non-trivial bounds and support floordiv pattern #9527
  • [TIR] Canonical simplify the intset before region cover proof #9941
  • [TIR] Fix an index out of bound problem of cache write block #10203
  • [TIR] Encode conditional accesses info into block read/write regions #9880
  • [TIR] Misc minor updates #10335
  • [TIR] TIR Schedule Misc Update #10341
  • [TIR] Fix Ramp int32-64 mismatch in VectorizeLoop and NarrowDataType passes #10172
  • [TIR] Add software pipelining #10066
  • [TIR] add support for multi-blocking layout and their transformation #9996
  • [Arith] Simplify floordiv(x*8+7, 16) to floordiv(x, 2) #10232
  • [Arith] Support integer BufferLoad in IntervalSetEvaluator #10327
  • [TIR][Schedule] simpilfy compute_at static bound #10307
  • [TIR][Schedule] Update compact_dataflow constraint #10158
  • [TIR][Schedule] Annotate allows array as annotaton value #9920
  • [TIR][Schedule] Blockize and Tensorize #9871
  • [TIR][USMP] Integrating USMP to AoT Executor #9565
  • [USMP] adding support for U2 and U3 usecases #10193
  • [USMP] Add performance characteristics to PoolInfo #10005
  • [USMP] Hill Climb allocator #9704
  • [USMP] Register hill climb algorithm #10182
  • [TVMScript] Support T.buffer_decl using data pointer from Let/Allocate #10099
  • [TIR][Transform] relax LoopPartition restriction #10340

MetaSchedule, Ansor, Autoscheduler and AutoTVM

  • [Ansor] OpenCL follow-up #10199
  • [Ansor] Improve OpenCL support #10108
  • [MetaSchedule][M3c] XGB-based Cost Model #9859
  • [MetaSchedule][M3c] Add Per-Store-Feature #9860
  • [MetaSchedule][M3c] Update TuneContext, TaskScheduler & Search Strategy Design #9789
  • [MetaSchedule][M4a] Add EvolutionarySearch Search Strategy #9836
  • [MetaSchedule][M4a] Add ReplayFunc Search Strategy #9799
  • [MetaSchedule][M4a] Schedule Rule: Random-Compute-Location #9940
  • [MetaSchedule][M4a] PostProcessor: Disallow-Dynamic-Loop #9997
  • [MetaSchedule][M4a] PostProcessor: Rewrite-Parallel-Vectorize-Unroll #10071
  • [MetaSchedule][M4a] Schedule Rule: Auto-Inline #9943
  • [MetaSchedule][M4a] PostProcessor: Rewrite-Unbound-Block #10027
  • [MetaSchedule][M4a] Schedule Rule: Parallelize-Vectorize-Unroll #10033
  • [MetaSchedule][M4a] Mutator: Mutate-Tile-Size #10092
  • [MetaSchedule][M4a] PostProcessor: Rewrite Reduction Block #10013
  • [MetaSchedule][M4a] Schedule Rule: Add-RFactor #9975
  • [MetaSchedule][M4a] Schedule Rule: Cross-Thread-Reduction #9994
  • [MetaSchedule][M4a] PostProcessor: Verify-GPU-Code #9945
  • [MetaSchedule][M4a] Schedule Rule: Multi-Level-Tiling #10043
  • [MetaSchedule][M4a] Mutator: Mutate Parallel #10096
  • [MetaSchedule][M4b] Add ApplyHisotryBest Meta Schedule Context #10049
  • [MetaSchedule][M4b] Testcases for TensorRT builder/runner #10055
  • [MetaSchedule] Update Tuning Interfaces. #10367
  • [MetaSchedule] Mutator: Mutate-Compute-Location #10028
  • [MetaSchedule] Meta Schedule Misc Update #10389
  • [MetaSchedule] bug fix in ApplyHistoryBest #10183
  • [MetaSchedule] Add target field to MetaScheduleContext #10169
  • [MetaSchedule] Mutator: Mutate-Unroll #10045
  • [Meta Schedule] Allow Non-strict Population Size in Evolutionary Search #10163
  • [AUTOTVM] Use opt level 3 when extracting tasks #10065
  • [AutoScheduler] Allow device specification for AutoScheduler Runners. #10123

Language Bindings

  • [Rust] Update Rust bindings #9808
  • [Rust] Update DenseAttrs to add auto_scheduler_rewritten_layout #10063

Frontend

ONNX

  • refactored GraphProto.from_onnx into smaller functions #10267
  • [ONNX] Use relay softmax op to convert Softmax if posssible #9892
  • [ONNX] Fix onnx convtranspose error #9938
  • [ONNX] only broadcast matmul if the shape has changed #10321
  • [ONNX] Add per channel quantization to QLinearConv and fix related bugs #10354
  • [Onnx] add back supported tests #10116

Tensorflow, TFLite and Keras

  • Improve the tensorflow frontend _test_spop_resource_variables to supp… #9978
  • [frontend][keras] Add support for TimeDistributed #7006
  • [Relay/Frontend][TFLite] Change the output shape calculation based on keep_dim option in fully connected #9840

Caffe

  • [Caffe Frontend] Add support for Permute layer #9157
  • [Caffe Frontend] extending Eltwise to handle multiple inputs #8136
  • [Caffe Frontend] adding Reduction op #8015
  • [Caffe Frontend] supporting group > 1 cases for Deconv op #8260
  • [Caffe Frontend] Add support for Power layer #9655

Torch

  • Add aten::mv support #9894
  • Add support for aten::dot #9893
  • Support PyTorch grid_sample #10184
  • [Torch] Experimental support for FX-quantized models #10091
  • [Torch] Better support in-place variant of ops (aten::relu_ etc) #9851
  • [Torch] Run torch JIT pass lower_all_tuples before conversion. #10186
  • [PyTorch] add var_mean support #10233
  • [PyTorch] Fix rsub type #10090

Backend

  • Run extract constants pass only for CMSIS-NN target #9913
  • Support sub warp reduction for CUDA target. #10207
  • Add FP requantize flow. Set float32 flow by default for llvm x86 targets with sse4.1 support. #9637
  • Disallow copy to/from external HexagonBuffer #9930
  • Adding support for Hexagon User DMA Engine #10217
  • Lower cache_read and cache_write to Hexagon DMA via tensorize #10365
  • [CUTLASS] Support more kernels: int8, tf32, and 3xtf32 #9899
  • [CUTLASS] Profile only the largest-possible alignment by default #10036
  • [CUTLASS] Initial support for conv2d wgrad #10177
  • [CUTLASS] Add parallel split-k support to wgrad #10185
  • [CUTLASS] Conv2d dgrad #10110
  • [CUTLASS] Residual connection fusion #9820
  • [Hexagon] Export ir_lower_vtcm_pass function in the init file #10330
  • [Hexagon] Pass SDK information to launcher build for Android #9902
  • [Hexagon] Include Utils.cmake for tvm_file_glob used in HexagonSDK.cmake #9903
  • [Hexagon] Replace strlen in constant initialization with sizeof #10381
  • [Hexagon] RPC server/client for simulator #10361
  • [Hexagon] Do not auto-build apps when building TVM #9970
  • [Hexagon] Return pathlib.Path from get_hexagon_rpc_path() #9969
  • [Hexagon] Don’t use cmake glob for auto-generated source files #10259
  • [Hexagon] Pass kDLHexagon device when allocating workspace pool on Hexagon #10289
  • [Hexagon] Remember to add common sources when building TVMRT for Hexagon #10290
  • [Hexagon] Update hexagon API build instruction and cleanup hexagon_proxy_rpc #10068
  • [CUBLAS] Fix cublas batch matmul strategy plevel #10351
  • [CUDNN] Refactor descriptor initialization, remove cudnn.conv.output_shape_from_cudnn #9948
  • [CUDNN] Support gradient kernels #9986
  • [Int8] Support cublas on e2e int8 models (also tried cudnn but doesn’t work) #9898
  • [4a/10] [CMSIS-NN] Calculate CMSIS-NN buffer size with respect to architecture extensions #9338
  • [CMSIS-NN] Fix extension detection for CPUs #10200
  • [CMSIS-NN] Conv2D with equal paddings can be mapped to CMSIS-NN target #9801
  • [CMSIS-NN] Moved all asserts in tests under a single utils function #10148
  • [CMSIS-NN] Fixed the network hash to avoid type inference failure #9887
  • [CMSIS-NN] Convert scalar constants to tensor constants #10100
  • [CMSIS-NN] Support for asymmetric padding in Convolutions #9886
  • [CMSIS-NN] Separated symmetric and asymmetric padding tests for Conv2D #9963
  • [CMSIS-NN] Moved test_cnn_small to the latest version #9962
  • [CMSIS-NN] Update microNPU demo to include offloading to CMSIS-NN #9979
  • [microNPU] Refactor type inference data type checks #10060
  • [microNPU] Move optimization passes to be a module pass and ensure they are running #9831
  • [microNPU] Remove remaining UnsupportedLayout checks #9791
  • [microNPU] Add support for pack and unpack #9960
  • [microNPU] Add support for scalar values #9794
  • [microNPU] Use TFLite tests for strided_slice #10165
  • [microNPU] Add support for transpose convolution #9855
  • [microNPU] Add support for nearest neighbor and bilinear upsampling #9841
  • [microNPU] Add support for requantize #9910
  • [microNPU] adding more tests with USMP #10362
  • [microNPU] enable USMP #10022
  • [microNPU] Refactor base address determination to codegen #9929
  • [microNPU] Removing constant args from PrimFunc #9951
  • [microNPU] Enable network tests for U65 256 mac variant #10159
  • [microNPU] Enable the codegen tests for 256 mac Arm(R) Ethos™-U65 NPU #9815
  • [microNPU][2a] Add CascaderGraph for cascading analysis #9469
  • [microNPU][2b] Create CascaderGraphs from TE graphs #9471
  • [microNPU][3] Plan generation for the cascader #9890
  • [microNPU][4] Add the cascader Proposal generator #9959
  • [ARM_CPU] Conv2d int8 intrinsic for cortex-A72 #10310
  • [ETHOSN] Ethos™-N 21.11 update #10061
  • [ETHOSN] Improved identification of driver library version #10285
  • [ETHOSN] Per-channel int8 quantization for conv2d #10131
  • [ETHOSN] Per-tensor support for int8 operations #10018
  • [ETHOSN] Remove the compiler library from the runtime link #10334
  • [ETHOSN] Drop back to Ethos™-N release 21.08 #10157
  • [ETHOSN] Stricter data type conversion checks #10271
  • [ETHOSN] Add support for mean on Ethos-N78 #10130
  • [Cuda] Updated bfloat16 math defs. #10258
  • [CUDA] Support float16 erf,tan,atan #10122

Code Generation and Compilation API

  • Add Python representation for VirtualDevice #9812
  • std::string → tvm::String for Conv1DAttrs #9921
  • TVMC: Don’t divide trials by zero tasks #10164
  • PackedFunction to return params from the .so module, show warning when no params are set #9811
  • [TVMC] Add codegen args to tvmc #10190
  • [TVMC] Split common tvmc file into more specific files #9529
  • [TVMC] Add an end_to_end benchmarking argument when benchmarking. #10256
  • [LLVM] LLVM codegen debug utilities #9857
  • [PTX-MMA] Add full PTX MMA code generation support #9909

MicroTVM

  • [microTVM] Add timeouts for CI tests #10295
  • [microTVM] Update Zephyr to 2.7 #10094
  • [microTVM] TVMCon 2021 Zephyr Demo with CMSIS-NN #10144
  • [microTVM] Fix zephye/test_zephyr_armv7m test #9684
  • [microTVM] Include standalone_crt dependencies in MLF #10095
  • [microTVM][Zephyr] Update RVM to Zephyr 2.7 #10138
  • [microTVM][Zephyr] Add reading of nRF5340 DK product ID to determine which COM port to use while running tests #10304
  • [microTVM][tvmc] Add TVMC Micro tutorial for Zephyr #10024
  • [microtvm][RVM] Add scripts for automated build and testing #10194

Runtime

  • OpenCL debug runtime timer handler added. #10140
  • Add runtime.ModuleGetFormat method enabling export of BYOC generated sources which require a .cpp/.cc file extension #9243
  • [PROFILING] Add ability to profile a single function_profiling #9553
  • [runtime] Add Metadata classes for AOTExecutor #10282
  • [runtime] Improved log information with function signature #10326
  • [Runtime][Pipeline executor] Global parameters group name and runtime modules parameters map. #9846
  • [Runtime][PipelineExecutor] Pipeline Executor Sequential execution #10082
  • [Runtime][PipelineExecutor] Add Pipeline Executor Interface #10010
  • [Runtime][Pipeline Executor] multiple threads management and the data forwarding notification mechanism. #10234
  • [VirtualMachine] new method allowing to set one input tensor by its index or name #10293
  • [Runtime][PackedFunc] Bring PackedFunc into TVM Object System #10032
  • [Relay][VM] Relay VM memory liveness/lifetime analysis #10026
  • [VM] Remove undesired arg to load_late_bound_consts #9870
  • [RPC] Take PageAllocator out of MinRPCServer, make it template parameter #10219
  • [RPC] Link in whole archive with BUILD_STATIC_RUNTIME #10260
  • [RPC] Add Missing Command Line Option “through-proxy” of RPC Server #10188

Doc

  • Add FreeRTOS variant of NPU demo #10004
  • Document missing qnn operators #10077
  • Documents how to contribute TVM docs with images. #10287
  • Tvmc python tutorial #9633
  • [microTVM][tutorial] Add ENV variable to enable testing on physical hardware #9993
  • [Docs] Fix an irrelevant sentence in relay.reverse #10331

Improvement and Bugfix

  • Fix more ONNX URLs #10220
  • Add user-configurable backtrace limit #10025
  • Change function constructors to WithFields #9690
  • Add a JSON converter for 0.7 → 0.8 and 0.8 → 0.9 #9874
  • Don’t use std::move in WithFields #10009
  • resolve issue #10107 by setting eps larger #10176
  • Update ethos-u-vela for demo app #10129
  • Fix LayoutRewriter #10118
  • Fix broadcast InferCorrectLayout #10156
  • Overload get() function for Optional type. #9748
  • Restore the use of ONNX_DEFAULT_CONFIGS["use_nt_batch_matmul"] #9925
  • Generate correct output tensor names in C Interface API #10191
  • fix RPC waiting for device #10255
  • te_compiler_cache: reduce name length without loss of information #9787
  • “Resolved deprecation issue in test_op_qnn_conv2_transpose.py” #10228
  • Revert “[Frontend] Add Span filling for frontends to Relay (#9723)” #10072
  • This patch is to fix some minor typos in tvm #9852
  • Add more logging information to ReshapeLikeRel #10125
  • fix convert_pooling in caffe parser #9828
  • fix pytorch frontend bug #9884
  • Add -i option to fix ASF headers to lint scripts. #10284
  • Fix Plint error in parser.py and test_vm.py #10394
  • Fix a lint issue. #10245
  • Fix clang compile warnings #9942
  • Propagate ssh-agent authentication socket #9926
  • Improve the frontend tflite _test_rsqrt test to support tflite 2.6 #9888
  • Remove javah support #10104
  • [cleanup] Remove task_sphinx_precheck.sh #10196
  • [Relay] Fix a bug in tensor_array_scatter #6890
  • [Relay] Fix TFlite frontend for unpack, stridedslice #10333
  • [Relay] fix incorrect binding of Lets in ANF conversion #10078
  • [Relay] fix a corner case when relay return empty tuple #10128
  • [Torch] Fix conv2d transpose with group #10235
  • [Fix] relay onnx frontend bug when [A, B, M, N] * [1, B, N, K] #9911
  • [Hexagon] Refactor Hexagon.cmake #10227
  • [Hexagon] Fix getting/setting DMA state #10288
  • [Hexagon] Add missing #include #9968
  • [Hexagon] Fix build issue due to #9611 #9914
  • [Hexagon] Follow up fixes on PR #9631 #10205
  • [TOPI] fix icelake target for avx512 and vnni #9928
  • [Refactor] Clean up type relations that are declared as template for no reason #10236
  • [TensorRT] Fix pad_value access (removed from PadAttrs) #9858
  • [EZ][Typo] Correct gather, scatter type rel error message #10023
  • [Minor] Typo Fixes #10000
  • [Fix] Fix flaky test of #9952 #9958
  • [Bugfix][Op] Fix shape inference of adv_index #9717
  • [BUGFIX] fix text printer when TVM_LOG_DEBUG is on #10279
  • [BUGFIX] Check that virtual device is unchanged in WithFields #9826
  • [BUGFIX] Define kTargetPoolReadWriteAccess globally #10262
  • [bugfix] Fix the behavior of TVMScript printer #9974
  • [fix] Convert BufferSlice to BufferLoad when used as range/loop start and end #10370
  • [BugFix] shapeOfAttrs should be registered before “vm.shape_of” used #9669
  • [BugFix] resolve integer 32. ~ 64. mismatch by casting #9582
  • [BugFix][TIR] Fix cross-thread reduction when single reduction loop with predicate #10016
  • [BugFix][TVMScript] Use operator is when recognizing TIR Module #10175
  • [HotFix] Skip the flaky MetaSchedule Auto-Unroll test #9956
  • [Bugfix] Add one extra space to improve diagnostic messages #10268
  • [FIX] Fix bug in MobileNetV2 quantization #8243
  • [Fix Bug] fix the bug of pool_impl_nd when computing avgpool_nd whith ceil_mode and count_include_pad are True #9835
  • [Fix Bug]fix the bug of tensorflow frontend when parsing Range layer #9999
  • [Fix Bug]fix the bugs of keras frontend when parsing LSTM, GRU, RNN layers. #9850
  • [FIX,AUTOTVM] Add backtraces to tuning errors #9901
  • [FIX,TOPI] Fix issue when running conv2d in autoscheduler #9900
  • [FIX,PROFILING] Add extra precision to numbers when serializing to json #10392
  • [TOPI,CUDA] Don’t enable cudnn conv2d kernel if is not supported #10021
  • [Tir]Adding detail error messages when MatchCopyPattern function is failed. #10244
  • [ETHOSN] Fix quantization parameters in test #10178
  • [Misc] typo and nit fixes #10145
  • [AOT] BugFix of workspace calculation #10337
  • [BUGFIX][ARITH] Fix FloorMod Simplifier #10336
  • [Makefile] Fixed error in “make clean” #10048
  • [onnx] fix onnx where broadcast #10106
  • [Object] Throw AttributeError if the object doesn’t have a reflection table. #9919
  • [VirtualMachine] fix raw pointer using by VirtualMachine #9980
  • [Relay][VM] Fix loading late bound consts when none exist #10087
  • [Doc][Fix] Fix qnn op parameters hint order #9622
  • [Relay][DefuseOps pass] bug fix: To support function body types other… #10069

People Whose Pull Requests Were Updated:

Note: The format is name (number of activities)

Disclaimer: the number of activities does not directly correspond to the community's view of the significance of contributions.

driazati (66), masahi (32), junrushao1994 (31), Mousius (21), AndrewZhaoLuo (19), kparzysz-quic (17), mehrdadh (14), zxybazh (14), lhutton1 (13), tkonolige (12), areusch (12), huajsj (12), Leo-arm (12), electriclilies (11), wrongtest (11), Hzfengsy (10), MasterJH5574 (10), manupa-arm (9), Lunderberg (8), ashutosh-arm (8), jinhongyii (8), mshr-h (8), mbaret (7), csullivan (5), gussmith23 (5), jacobbohlin (5), lazycal (5), sunggg (5), mbrookhart (4), comaniac (4), vinx13 (4), mikepapadim (4), vvchernov (4), yzh119 (4), Icemist (4), shengxinhu (4), yuanfz98 (4), lixiaoquan (3), d-smirnov (3), spectrometerHBH (3), grant-arm (3), adstraw (3), ophirfrish (3), chiwwang (3), zotanika (3), pfk-beta (3), sfvaroglu (3), apeskov (3), SebastianBoblestETAS (3), BBuf (3), tqchen (2), tmoreau89 (2), jwfromm (2), srkreddy1238 (2), FrozenGene (2), gromero (2), slyubomirsky (2), ANSHUMAN87 (2), u99127 (2), altanh (2), leeexyz (2), ekalda (2), shingjan (2), anwang2009 (2), hgt312 (2), michalpiszczek (2), rafzi (2), crazydemo (2), Raghav-Chakravarthy (2), chunit-quic (2), argrento (2), PhilippvK (2), akmaru (2), dchauhan-arm (2), mei-ye (2), spanijel (2), billishyahao (2), blackkker (2), cconvey (2), merrymercy (1), icemelon (1), ZihengJiang (1), jroesch (1), leandron (1), vegaluisjose (1), mbs-octoml (1), yongwww (1), rkimball (1), hogepodge (1), wyc-ruiker (1), Johnson9009 (1), CircleSpin (1), ganler (1), rohanmukh (1), insop (1), guberti (1), Lyken17 (1), alter-xp (1), cloud-mxd (1), KJlaccHoeUM9l (1), lygztq (1), schell (1), kueitang (1), solin319 (1), xiaolong18 (1), zhuwenxi (1), domin1985 (1), fantasyRqg (1), FranckQC (1), JCBrouwer (1), MargaretQian (1), mkroening (1), lsy643 (1), Tantalus13A98B5F (1), Wheest (1), qsqqsqqsq (1), KnowingNothing (1), mhyang-pllab (1), alanmacd (1), deepakbabel23 (1)

People Who Reviewed Pull Requests:

Note: The format is name (number of activities)

Disclaimer: the number of activities does not directly correspond to the community's view of the significance of contributions.

masahi (164), junrushao1994 (117), jroesch (70), tqchen (53), comaniac (52), areusch (50), leandron (46), manupa-arm (45), Mousius (43), Hzfengsy (35), mbrookhart (32), AndrewZhaoLuo (30), mbs-octoml (23), tmoreau89 (22), tkonolige (20), driazati (20), kparzysz-quic (17), MasterJH5574 (16), vinx13 (13), FrozenGene (13), mehrdadh (12), jwfromm (10), zxybazh (10), lhutton1 (9), electriclilies (8), ekalda (8), mbaret (7), elvin-n (7), Lunderberg (6), jcf94 (6), spectrometerHBH (6), hogepodge (6), shingjan (6), gromero (5), huajsj (4), u99127 (4), anwang2009 (4), yzh119 (4), jacobbohlin (4), zhiics (3), trevor-m (3), liangfu (3), wrongtest (3), were (3), adstraw (3), YuchenJin (3), mshr-h (3), NicolaLancellotti (3), denise-k (3), icemelon (2), srkreddy1238 (2), Laurawly (2), lixiaoquan (2), altanh (2), csullivan (2), echuraev (2), leeexyz (2), Leo-arm (2), xqdan (2), mikepapadim (2), fernchen (2), cconvey (2), ZihengJiang (1), MarisaKirisame (1), kevinthesun (1), apivovarov (1), t-vi (1), ANSHUMAN87 (1), ashutosh-arm (1), sxjscience (1), yidawang (1), codeislife99 (1), grant-arm (1), jinhongyii (1), maheshambule (1), hgt312 (1), ganler (1), ophirfrish (1), grwlf (1), michalpiszczek (1), chiwwang (1), Icemist (1), KJlaccHoeUM9l (1), reminisce (1), sunggg (1), PhilippvK (1), domin1985 (1), mei-ye (1), billishyahao (1), ziyu-guo (1), alanmacd (1), AlexanderSerov (1), corehalt (1)
