Welcome to the 73rd issue of the MLIR Newsletter, covering developments in MLIR and related projects in the ecosystem. We welcome your contributions (contact: [email protected]). Click here to see previous editions.
Highlights, Discussions & RFCs
- Nominations for Project Area Teams. The elections started on Mon Jan 27. There are five initial Area Teams: (a) `llvm`; (b) `clang`; (c) `mlir`; (d) Infrastructure; and (e) Community. From an MLIR perspective, Renato wrote: "This is a continuation of [RFC] MLIR Project Charter and Restructuring and using data from [Survey] MLIR Project Charter and Restructuring Survey. It’s also a recognition that Aaron’s [RFC] Proposing changes to the community code ownership policy with regards to a single top-level maintainer per project doesn’t work with MLIR as well as it does with LLVM and Clang."
- Tanya put out a request for the 2025 EuroLLVM Co-located Workshop Application, due Feb 1: "Workshops should have 2 organizers and a clear idea of the goals of the workshop."
- Andrzej proposed an experiment to restrict some vector dialect operations (`vector.insert` and `vector.extract`) to require non-0-D vectors or scalars in his [RFC] Should We Restrict the Usage of 0-D Vectors in the Vector Dialect? A sketch of the two forms under discussion follows below.
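For illustration, here are the two shapes of extraction the RFC contrasts: pulling a scalar out of a 0-D vector versus out of a 1-D vector. This is a minimal sketch with illustrative types and a hypothetical function name; the exact set of accepted forms is precisely what the RFC debates.

```mlir
// Hypothetical example contrasting 0-D vectors with 1-D vectors.
func.func @extracts(%v0d: vector<f32>, %v1d: vector<1xf32>) -> (f32, f32) {
  %a = vector.extract %v0d[] : f32 from vector<f32>     // source is a 0-D vector
  %b = vector.extract %v1d[0] : f32 from vector<1xf32>  // source is a 1-D vector
  return %a, %b : f32, f32
}
```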
- Rolf Morel put out an RFC proposing `linalg.contract` (see here). Javed put out an RFC proposing extended `linalg.elementwise` semantics (see here). These follow from Renato’s [RFC] Op explosion in Linalg and [RFC][MLIR] Linalg operation tree.
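To give a flavor of the proposal, a plain matmul written with the proposed `linalg.contract` might look like the following. This is a minimal sketch based on the RFC; the function name and types are illustrative, and the landed syntax may differ.

```mlir
// A matmul expressed as a contraction via explicit indexing maps.
func.func @matmul(%A: tensor<4x8xf32>, %B: tensor<8x16xf32>,
                  %C: tensor<4x16xf32>) -> tensor<4x16xf32> {
  %0 = linalg.contract
      indexing_maps = [affine_map<(m, n, k) -> (m, k)>,
                       affine_map<(m, n, k) -> (k, n)>,
                       affine_map<(m, n, k) -> (m, n)>]
      ins(%A, %B : tensor<4x8xf32>, tensor<8x16xf32>)
      outs(%C : tensor<4x16xf32>) -> tensor<4x16xf32>
  return %0 : tensor<4x16xf32>
}
```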
- Alex Bradbury’s LLVM Weekly - #578, January 27th 2025 is out!
Recent MLIR Commits:
- With this change here from Matthias Springer, it is now possible to add new MLIR floating-point types in downstream projects.
- This PR implements a generalization of the existing, more efficient lowering of shape casts from 2-D to 1-D and 1-D to 2-D vectors; click here. Examples of the casts involved are shown below.
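For reference, these are the kinds of shape casts the specialized lowering previously covered. The function name and vector types are illustrative, not taken from the patch.

```mlir
func.func @casts(%m: vector<2x4xf32>, %f: vector<8xf32>)
    -> (vector<8xf32>, vector<2x4xf32>) {
  %flat   = vector.shape_cast %m : vector<2x4xf32> to vector<8xf32>  // 2-D -> 1-D
  %matrix = vector.shape_cast %f : vector<8xf32> to vector<2x4xf32>  // 1-D -> 2-D
  return %flat, %matrix : vector<8xf32>, vector<2x4xf32>
}
```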
- This PR adds support for converting `vector::BitCastOp` working on n-D (n > 1) vectors into the same op working on linearized (1-D) vectors; click here. A sketch of the rewrite follows.
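Conceptually, linearization wraps the 1-D bitcast in shape casts that flatten and restore the original rank. A hedged sketch with illustrative types, not the exact generated IR:

```mlir
func.func @linearized_bitcast(%a: vector<2x4xf32>) -> vector<2x2xf64> {
  // The input IR would contain the 2-D form:
  //   %r = vector.bitcast %a : vector<2x4xf32> to vector<2x2xf64>
  // After linearization, the bitcast operates on the flattened vector:
  %1 = vector.shape_cast %a : vector<2x4xf32> to vector<8xf32>
  %2 = vector.bitcast %1 : vector<8xf32> to vector<4xf64>
  %3 = vector.shape_cast %2 : vector<4xf64> to vector<2x2xf64>
  return %3 : vector<2x2xf64>
}
```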
- Rework the `ReifyRankedShapedTypeInterface` implementation for the `tensor.expand_shape` op. The op carries the output shape directly, so it can be used as-is; the change also adds a method to get the shape as a `SmallVector<OpFoldResult>`; click here.
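The output shape is spelled out on the op itself, which is what reification can now return directly. An illustrative example (function name and types are assumptions):

```mlir
func.func @expand(%t: tensor<?xf32>, %d0: index) -> tensor<?x4xf32> {
  // The result shape is carried explicitly via output_shape.
  %e = tensor.expand_shape %t [[0, 1]] output_shape [%d0, 4]
      : tensor<?xf32> into tensor<?x4xf32>
  return %e : tensor<?x4xf32>
}
```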
- Canonicalize gathers/scatters with contiguous (i.e. [0, 1, 2, …]) offsets into vector masked load/store ops; click here. A sketch of the rewrite follows.
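With trivial offsets, a gather reads a contiguous block of memory, so a masked load suffices. A hedged sketch with illustrative types and a hypothetical function name:

```mlir
func.func @gather_to_maskedload(%base: memref<?xf32>, %mask: vector<4xi1>,
                                %pass: vector<4xf32>) -> vector<4xf32> {
  %c0 = arith.constant 0 : index
  %offsets = arith.constant dense<[0, 1, 2, 3]> : vector<4xi32>
  // Contiguous offsets: the gather reads a dense block...
  %g = vector.gather %base[%c0] [%offsets], %mask, %pass
      : memref<?xf32>, vector<4xi32>, vector<4xi1>, vector<4xf32> into vector<4xf32>
  // ...so it canonicalizes to a masked load:
  //   vector.maskedload %base[%c0], %mask, %pass
  //       : memref<?xf32>, vector<4xi1>, vector<4xf32> into vector<4xf32>
  return %g : vector<4xf32>
}
```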
- Track replacements using a listener; see here.
- This commit changes how `func.func` ops are lowered to LLVM. Previously, the signature of the entire region (i.e., the entry block and all other blocks in the `func.func` op) was converted as part of the `func.func` lowering pattern; click here. A sketch of why non-entry blocks matter is shown below.
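Non-entry blocks can carry their own typed arguments, which also need type conversion during lowering. An illustrative function, not taken from the patch:

```mlir
func.func @f(%cond: i1, %m: memref<f32>) {
  cf.cond_br %cond, ^bb1(%m : memref<f32>), ^bb2
^bb1(%arg: memref<f32>):  // non-entry block argument also needs conversion
  return
^bb2:
  return
}
```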
- [mlir][IR][NFC] Move free-standing functions to `MemRefType` (#123465). Turns free-standing `MemRefType`-related helper functions in `BuiltinTypes.h` into member functions.
- Interesting improvement on softmax lowering here. The decomposition of `linalg.softmax` uses `maxnumf`, but the identity element used in the generated code is the one for `maximumf`. They are not the same: the identity for `maxnumf` is NaN, while that of `maximumf` is -Infinity. This is wrong and prevents the `maxnumf` from being folded. The constants involved are sketched below.
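Concretely, `arith.maxnumf` returns the other operand when one operand is NaN, so NaN is its identity; `arith.maximumf` propagates NaN, so its identity is -inf. An illustrative sketch (function name and constants are mine, not the patch's output):

```mlir
func.func @identities(%x: f32) -> f32 {
  %nan     = arith.constant 0x7FC00000 : f32  // NaN: identity for arith.maxnumf
  %neg_inf = arith.constant 0xFF800000 : f32  // -inf: identity for arith.maximumf
  %r = arith.maxnumf %x, %nan : f32           // can fold to %x
  return %r : f32
}
```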
- Fix and refactor `DecomposeOuterUnitDimsUnPackOpPattern`. The error occurred because of faulty logic when computing dynamic sizes for the `tensor::EmptyOp` that initializes tensors for `linalg::transpose`; see [mlir][linalg] Fix and Refactor DecomposeOuterUnitDimsUnPackOpPattern… · llvm/llvm-project@58da789 · GitHub. A sketch of the building blocks involved follows.
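The transpose destination has to be sized from the source's dynamic dimensions, which is where the faulty size computation bit. An illustrative sketch of the `tensor.empty` + `linalg.transpose` pairing, not the pattern's actual output:

```mlir
func.func @init_and_transpose(%src: tensor<1x?xf32>) -> tensor<?x1xf32> {
  %c1 = arith.constant 1 : index
  %d  = tensor.dim %src, %c1 : tensor<1x?xf32>  // dynamic size of the source
  %e  = tensor.empty(%d) : tensor<?x1xf32>      // destination sized from it
  %tr = linalg.transpose ins(%src : tensor<1x?xf32>)
                         outs(%e : tensor<?x1xf32>) permutation = [1, 0]
  return %tr : tensor<?x1xf32>
}
```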
- Update vectorize extract tests. These changes make it easier to identify the test cases being exercised and simplify future maintenance or refactoring.
Related Projects
Useful Links