- Finally, for new MatX developers, browsing the [example applications](examples) can provide familiarity with the API and best practices.

## Release Major Features

**v0.9.1**:
- New operators: `argminmax`, `dense2sparse`, `sparse2dense`, `interp1`, `normalize`, `argsort`
- Removed requirement for `--relaxed-constexpr`
- Added MatX NVTX domain
- Significantly improved speed of `svd` and `inv`
- Python integration sample
- Experimental sparse tensor support (SpMM and solver routines supported)
- Significantly reduced FFT memory usage

**v0.9.0**:
- *Features*
* Full CPU support for both ARM and x86 on all solver, BLAS, and FFT functions, including multi-threaded support
* Optimized polyphase resampler
* Negative slice indexing
- Many new bug fixes and error checking

**v0.6.0**:
- Breaking changes
* This marks the first release using "transforms as operators", which allows transforms to be used in any operator expression; the previous release required them to be on separate lines. For an example, please see: https://nvidia.github.io/MatX/basics/fusion.html. This is a breaking change to transform usage, but converting to the new format is as simple as moving the function parameters. For example, `matmul(C, A, B, stream);` becomes `(C = matmul(A, B)).run(stream);`.
- *Features*
* Polyphase channelizer
* Many new operators, including `upsample`, `downsample`, `pwelch`, `overlap`, `at`, etc.
* Added more lvalue semantics for operators based on view manipulation
- Bug fixes
* Fixed cache issues
* Fixed stride = 0 in `matmul`

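The "transforms as operators" migration described above can be illustrated with a short sketch. This is a minimal, untested illustration rather than a complete program: it assumes MatX is installed, uses its `make_tensor` factory with arbitrary shapes, and runs on the default CUDA stream.

```cpp
#include <matx.h>

int main() {
  using namespace matx;

  // Hypothetical tensors for illustration; shapes are arbitrary.
  auto A = make_tensor<float>({16, 16});
  auto B = make_tensor<float>({16, 16});
  auto C = make_tensor<float>({16, 16});

  // Pre-v0.6.0 style: the transform was a standalone function call.
  //   matmul(C, A, B, stream);

  // v0.6.0+ style: the transform is an operator assigned into C.
  (C = matmul(A, B)).run();

  // Because transforms are now operators, they can fuse into larger
  // expressions instead of requiring a separate line per transform.
  (C = matmul(A, B) * 2.0f).run();

  return 0;
}
```

Calling `run()` with no arguments targets the default stream; passing a stream or executor, as in `(C = matmul(A, B)).run(stream)`, targets a specific one.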
## Discussions
We have an open discussions board [here](https://github.com/NVIDIA/MatX/discussions). We encourage posting any questions about the library there so other users can read and learn from the answers.