MLIR News 73rd Edition (28th Jan 2025)

Welcome to the 73rd issue of the MLIR Newsletter covering developments in MLIR, and related projects in the ecosystem. We welcome your contributions (contact: [email protected]). Click here to see previous editions.

Highlights, Discussions & RFCs

MLIR Commits Recently:

  • With this change here from Matthias Springer, it is now possible to add new MLIR floating point types in downstream projects.

  • This PR generalizes the existing, more efficient lowering of shape casts between 2-D and 1-D vectors (in both directions). Click here.
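
A minimal sketch of the kind of op this lowering targets (the function name is illustrative; the rewrite expands such casts into extract/insert or shuffle sequences rather than element-by-element copies):

```mlir
// A 2-D to 1-D shape cast: reinterpret a vector<2x4xf32> as vector<8xf32>.
func.func @cast_2d_to_1d(%arg0: vector<2x4xf32>) -> vector<8xf32> {
  %0 = vector.shape_cast %arg0 : vector<2x4xf32> to vector<8xf32>
  return %0 : vector<8xf32>
}
```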

  • This PR adds support for converting vector::BitCastOp working on n-D (n > 1) vectors into the same op working on linearized (1-D) vectors. Click here.
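
A hypothetical example of the linearization (op names are from the vector dialect; the exact rewrite output may differ):

```mlir
// An n-D vector.bitcast (here 2-D, i8 -> i32). The linearization rewrite
// shape-casts the operand to 1-D, bitcasts there, and shape-casts back.
// This sequence is an illustrative sketch of that pattern.
func.func @bitcast_2d(%arg0: vector<2x4xi8>) -> vector<2x1xi32> {
  %0 = vector.shape_cast %arg0 : vector<2x4xi8> to vector<8xi8>
  %1 = vector.bitcast %0 : vector<8xi8> to vector<2xi32>
  %2 = vector.shape_cast %1 : vector<2xi32> to vector<2x1xi32>
  return %2 : vector<2x1xi32>
}
```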

  • Rework the ReifyRankedShapedTypeInterface implementation for the tensor.expand_shape op. The op carries its output shape directly, so it can be used as-is; the change also adds a method to get the shape as a SmallVector<OpFoldResult>. Click here.
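
A small illustrative example (function and value names are made up) showing the output_shape that the interface implementation can now return directly instead of recomputing:

```mlir
// tensor.expand_shape carries its output shape as operands/attributes,
// so reifyResultShapes can simply return it.
func.func @expand(%t: tensor<?x32xf32>, %d0: index) -> tensor<?x4x8x32xf32> {
  %0 = tensor.expand_shape %t [[0, 1, 2], [3]] output_shape [%d0, 4, 8, 32]
      : tensor<?x32xf32> into tensor<?x4x8x32xf32>
  return %0 : tensor<?x4x8x32xf32>
}
```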

  • Canonicalize gathers/scatters with contiguous (i.e. [0, 1, 2, …]) offsets into vector masked load/store ops. Click here.
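
As a sketch (function and value names are illustrative), a gather with trivial offsets and the masked load it can be canonicalized into:

```mlir
// A vector.gather whose index vector is the contiguous sequence [0, 1, 2, 3]
// reads consecutive elements, so it can become a vector.maskedload.
func.func @gather(%base: memref<16xf32>, %mask: vector<4xi1>,
                  %pass: vector<4xf32>) -> vector<4xf32> {
  %c0 = arith.constant 0 : index
  %idx = arith.constant dense<[0, 1, 2, 3]> : vector<4xindex>
  %0 = vector.gather %base[%c0] [%idx], %mask, %pass
      : memref<16xf32>, vector<4xindex>, vector<4xi1>, vector<4xf32> into vector<4xf32>
  // After canonicalization (sketch):
  //   %0 = vector.maskedload %base[%c0], %mask, %pass
  //       : memref<16xf32>, vector<4xi1>, vector<4xf32> into vector<4xf32>
  return %0 : vector<4xf32>
}
```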

  • Track replacements using a listener. See here.

  • This commit changes how func.func ops are lowered to LLVM. Previously, the signature of the entire region (i.e., the entry block and all other blocks in the func.func op) was converted as part of the func.func lowering pattern. Click here.
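
For context, a minimal before/after sketch, assuming the standard convert-func-to-llvm pass (the commentary paraphrases the commit message; the lowered output is abbreviated and illustrative):

```mlir
// Before conversion: a func.func with a non-entry block.
func.func @f(%arg0: i32) -> i32 {
  cf.br ^bb1(%arg0 : i32)
^bb1(%v: i32):
  return %v : i32
}
// After convert-func-to-llvm (sketch), the op becomes an llvm.func:
//   llvm.func @f(%arg0: i32) -> i32 { ... }
// The change concerns which part of the conversion rewrites non-entry
// block signatures: previously the func.func pattern converted the whole
// region's signature itself.
```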

  • [mlir][IR][NFC] Move free-standing functions to MemRefType (#123465). Turn free-standing MemRefType-related helper functions in BuiltinTypes.h into member functions.

  • Interesting improvement on softmax lowering here. The decomposition of linalg.softmax uses maxnumf, but the identity element used in the generated code is the one for maximumf. They are not the same: the identity for maxnumf is NaN, while that of maximumf is -Inf. This is wrong and prevents the maxnumf from being folded.
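
A sketch of a correctly seeded maxnumf reduction (assuming the vector.reduction combining kind maxnumf; names and values are illustrative, not taken from the patch):

```mlir
// Identity (neutral) elements for the two float max ops:
//   arith.maxnumf  : NaN  (maxnumf(NaN, x) == x)
//   arith.maximumf : -Inf (maximumf(-Inf, x) == x; NaN propagates)
// A reduction seeded with the wrong constant cannot be folded away.
func.func @max(%v: vector<8xf32>) -> f32 {
  %nan = arith.constant 0x7FC00000 : f32  // quiet NaN: identity for maxnumf
  %0 = vector.reduction <maxnumf>, %v, %nan : vector<8xf32> into f32
  return %0 : f32
}
```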

  • Fix and refactor DecomposeOuterUnitDimsUnPackOpPattern. The error occurred because of faulty logic when computing dynamic sizes for tensor::EmptyOp, which initializes tensors for linalg::transpose. See llvm/llvm-project@58da789.

  • Update the vectorize extract tests. These changes make it easier to identify the test cases being exercised and simplify future maintenance or refactoring.

Related Projects

Useful Links
