[mlir][Linalg] Evolve named ops to use assembly form and support linalg on tensors.

This revision allows representing a reduction at the level of linalg on tensors for named ops. When a structured op performs a reduction and returns one or more tensors, new conventions are added and documented.

As an illustration, the syntax for a `linalg.matmul` writing into a buffer (mixed tensor/buffer operands are allowed on the input side) is:

```
  linalg.matmul ins(%a, %b : memref<?x?xf32>, tensor<?x?xf32>)
               outs(%c : memref<?x?xf32>)
```

whereas the syntax for a `linalg.matmul` returning a new tensor is:

```
  %d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, memref<?x?xf32>)
                    init(%c : memref<?x?xf32>)
                      -> tensor<?x?xf32>
```
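For reference, the fully tensor-based form follows the same convention; this is a sketch assuming the `init` operand may itself be a tensor:

```
  // All operands are tensors; the init tensor carries the shape and
  // initial value of the reduction result.
  %d = linalg.matmul ins(%a, %b : tensor<?x?xf32>, tensor<?x?xf32>)
                    init(%c : tensor<?x?xf32>)
                      -> tensor<?x?xf32>
```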

Other parts of linalg will be extended accordingly to allow mixed buffer/tensor semantics in the presence of reductions.
diff --git a/mlir/test/IR/slice.mlir b/mlir/test/IR/slice.mlir
index 731f387..68ddeb6 100644
--- a/mlir/test/IR/slice.mlir
+++ b/mlir/test/IR/slice.mlir
@@ -5,8 +5,10 @@
   %b = alloc(%arg2, %arg1) : memref<?x?xf32>
   %c = alloc(%arg0, %arg1) : memref<?x?xf32>
   %d = alloc(%arg0, %arg1) : memref<?x?xf32>
-  linalg.matmul %a, %b, %c : (memref<?x?xf32>, memref<?x?xf32>, memref<?x?xf32>)
-  linalg.matmul %a, %b, %d : (memref<?x?xf32>, memref<?x?xf32>, memref<?x?xf32>)
+  linalg.matmul ins(%a, %b : memref<?x?xf32>, memref<?x?xf32>)
+               outs(%c : memref<?x?xf32>)
+  linalg.matmul ins(%a, %b : memref<?x?xf32>, memref<?x?xf32>)
+               outs(%d : memref<?x?xf32>)
   dealloc %c : memref<?x?xf32>
   dealloc %b : memref<?x?xf32>
   dealloc %a : memref<?x?xf32>