As discussed by the TVM PMC, our goal is to provide a monthly summary of the project so that users and developers can better understand what is happening in the TVM community.
Feedback and suggestions are welcome so that we can further improve these updates.
RFCs
None
We continue to improve Relax (PyTorch frontend).
BugFix
- #17968 - [Relax][Pytorch] Bugfix of conv_transpose1d and conv_transpose2d
- #17950 - [Fix][Relax] Fix dangling reference in GetTargetFunctions()
CI
Frontend
- #17980 - [ONNX] Make bias input optional in LayerNormalization
LLVM
- #17859 - [Codegen] Enable SVE/VLA for RISCV targets
- #17958 - Fix JIT unknown reloc issue for case of RISCV
- #17954 - [FFI] Fix compilation errors with clang20
Relax
- #18016 - [ONNX] Replace deprecated `mapping.TENSOR_TYPE_TO_NP_TYPE` usage
- #18001 - [Frontend][ONNX] Fix: bitwise_not misclassified as binary (is …
- #17990 - [Frontend] Fix: Output tensor with zero dimension after torch.u…
- #17925 - [PyTorch] Re-enable test_subgraph_capture in dynamo test
- #17918 - [PyTorch] Add ReLU6 Op Support for Exported Program and FX graph
- #17930 - [PyTorch] Add torch.outer Op Support for Exported Program and FX graph
- #17932 - [PyTorch] Add UpSample Bicubic Op Support for Exported Program and FX graph
- #17921 - [PyTorch] Add AvgPool 1D and 3D Op Support for Exported Program and FX graph
- #17922 - [PyTorch] Add Adaptive AvgPool 1D and 3D Op Support for Exported Program and FX graph
- #17863 - [PyTorch] CrossEntropyLoss
- #17919 - [PyTorch] Add MaxPool 1D and 3D Op Support for Exported Program and FX graph
- #17926 - [PyTorch] Add tests for all the dtypes supported in the PyTorch frontend
- #17924 - [PyTorch] Add div.Tensor_mode and trunc Op Support for Exported Program and FX graph
- #17904 - [PyTorch] Add Meshgrid Op Support for Exported Program and FX graph
- #17915 - [PyTorch] Add support for linspace op in fx graph
TOPI
- #18015 - Support integer type input for log10
- #17942 - Add shape validation to prevent negative dimensions in conv operations
Vulkan
- #18005 - Add TIR unary trigonometric/hyperbolic intrinsic definitions
web
- #17946 - [REFACTOR][FFI] Upgrade Web Runtime to new FFI
Misc
- #18023 - [FFI] More strict tuple constructor checking
- #18022 - [REFACTOR][FFI] Cleanup PackedFunc redirections
- #18020 - [REFACTOR][PYTHON] Phase out tvm._ffi and Limited API support
- #18019 - Add op support for slice_scatter
- #17974 - Fix FLOP estimation for EvaluateNode by implementing VisitStmt_ handler
- #18013 - Fix RuntimeError: parallel_for_dynamic
- #18014 - Fix division truncation in window size calculation for small dtypes in average_pool
- #18010 - [REFACTOR][FFI] Phase out legacy C API
- #17995 - Fix zero-extent loops in PerStoreFeature to prevent crashes
- #17983 - [FFI][JVM] Upgrade tvm4j to latest FFI
- #17969 - Add registration for the operators asinh, acosh, atanh in llvm
- #17972 - Fix g.costs
- #17979 - [FFI][REFACTOR] Update to distinguish as and cast
- #17953 - Fix sqrt/rsqrt Compatibility with Integer Data Types
- #17961 - Fix basic FLOP estimation for WhileNode
- #17945 - Add registration for the operators asin and acos in llvm
- #17951 - [NODE] Fix structural equality for Array specialization
- #17943 - [FFI] Variant specialize for all ObjectRef
- #17939 - [REFACTOR] Phase out legacy rust ffi
- #17940 - [REFACTOR] Phase out legacy go ffi
- #17931 - [REFACTOR][FFI][RPC] Migrate RPC to use the latest FFI ABI
- #17929 - [REFACTOR][FFI] Cleanup container redirections
- #17927 - [FFI][FEAT] AutoDLPack for taking external tensor objects
- #17923 - [REFACTOR][FFI] Cleanup PackedFunc related redirection
- #17920 - [REFACTOR] Introduce and modernize ffi system
- #17917 - [WebGPU][CodeGen] Override PrintVecElemLoad and Store for WebGPU
- #17913 - [Triton] Support latest `triton.compile` interface