Now supports integration with PyTorch Lightning (https://lightning.ai/docs/pytorch/stable/), bringing:

* User workflow simplifications: zero boilerplate code and increased modularity
* Ability for users to define custom training logic easily
* Easy support for distributed GPU training
* Weights & Biases hyperparameter tuning

Further integration enhancements with PyTorch Lightning: we now support all Lightning hooks, which are modular logic blocks that make defining custom training and validation logic easy and user-friendly.
Please refer to the Lightning folder and its [README](examples/lightning_integration_examples/README.md).
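The hook mechanism can be pictured with a minimal plain-Python sketch. The class and method names below are illustrative stand-ins, not the actual Lightning or Neuromancer API: a trainer runs a fixed loop and calls overridable methods at well-defined points, so users customize behavior by overriding only the hooks they need.

```python
# Minimal sketch of the hook pattern used by Lightning-style trainers.
# BaseTrainer/MyTrainer are hypothetical names for illustration only.

class BaseTrainer:
    """Runs a fixed loop and calls hooks at well-defined points."""

    def on_train_epoch_start(self, epoch):
        pass  # default: do nothing

    def training_step(self, batch):
        raise NotImplementedError

    def on_train_epoch_end(self, epoch, losses):
        pass  # default: do nothing

    def fit(self, data, epochs):
        history = []
        for epoch in range(epochs):
            self.on_train_epoch_start(epoch)
            losses = [self.training_step(batch) for batch in data]
            self.on_train_epoch_end(epoch, losses)
            history.append(sum(losses) / len(losses))
        return history


class MyTrainer(BaseTrainer):
    """Custom logic lives only in the hooks we choose to override."""

    def training_step(self, batch):
        # stand-in for a real loss computation
        return float(sum(batch))

    def on_train_epoch_end(self, epoch, losses):
        print(f"epoch {epoch}: mean loss {sum(losses) / len(losses):.2f}")


history = MyTrainer().fit(data=[[1.0, 2.0], [3.0]], epochs=2)
```

The point of the pattern is that the loop itself never changes; only the hook bodies do, which is what keeps user code free of boilerplate.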
We have also released a *Lightning Studio* course on **Differentiable Predictive Control**: <a target="_blank" href="https://lightning.ai/rahulbirmiwal/studios/differential-predictive-control-with-neuromancer"><img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/studio-badge.svg" alt="Open In Studio" style="width:100px; height:auto;"/></a>
Lightning Studios are powerful AI development platforms. They essentially act as extremely user-friendly virtual machines accessible through your browser. Please see https://lightning.ai/studios for more information.
#### TorchSDE Integration
* We have begun integration with the TorchSDE library (https://github.com/google-research/torchsde/tree/master). TorchSDE provides stochastic differential equation solvers with GPU support and efficient backpropagation.
* Neuromancer already has a robust and extensive library for neural ODEs and ODE solvers. We extend that functionality to the stochastic case by incorporating TorchSDE solvers. To show how one progresses from neural ODEs to "neural SDEs", we have written a detailed notebook: [sde_walkthrough.ipynb](examples/SDEs/sde_walkthrough.ipynb)
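The conceptual step from an ODE solver to an SDE solver can be sketched with a hand-rolled Euler-Maruyama integrator. This is plain Python for illustration only, not the torchsde API; `torchsde.sdeint()` provides production solvers with GPU support and efficient backpropagation.

```python
import math
import random

def euler_maruyama(f, g, x0, t0, t1, n_steps, rng):
    """Integrate dX = f(X, t) dt + g(X, t) dW with the Euler-Maruyama scheme.

    f is the drift and g the diffusion; dW is a Gaussian increment with
    variance dt. Setting g to zero recovers the explicit Euler ODE step.
    """
    dt = (t1 - t0) / n_steps
    x, t = x0, t0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x = x + f(x, t) * dt + g(x, t) * dw
        t += dt
    return x

# Ornstein-Uhlenbeck-style dynamics: mean-reverting drift, small noise.
x_final = euler_maruyama(
    f=lambda x, t: -2.0 * x,   # drift pulls the state toward zero
    g=lambda x, t: 0.1,        # small constant diffusion
    x0=1.0, t0=0.0, t1=1.0, n_steps=1000, rng=random.Random(0),
)
# x_final lands near the deterministic value exp(-2), up to the noise term
```

With `g = 0` the same loop is an ordinary explicit Euler ODE solver, which is exactly the sense in which a neural SDE generalizes a neural ODE.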
#### Stacked Physics-Informed Neural Networks
* Neuromancer now supports Stacked Physics-Informed Neural Networks. This architecture, based on the work of [Howard et al. (2023)](https://arxiv.org/abs/2311.06483), stacks multifidelity networks via composition, allowing progressive improvement of learned solutions. This formulation is especially useful for highly oscillatory problems. We illustrate its usage with the solution of a damped harmonic oscillator using a PINN: [Part_5_Pendulum_Stacked.ipynb](examples/PDEs/Part_5_Pendulum_Stacked.ipynb)
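The stacking idea can be sketched in a few lines of plain Python. This is illustrative only: each "level" below is a toy function that blends the previous prediction toward a known target, whereas in the actual architecture each level is a trained multifidelity PINN correcting the residual of the one before it.

```python
import math

# Toy sketch of stacked multifidelity composition (Howard et al., 2023).
# Each level sees the input AND the previous level's prediction and
# returns a refined solution, so error shrinks through the stack.

def low_fidelity(x):
    return x  # crude first guess for sin(x): small-angle approximation

def make_level(weight):
    def level(x, prev):
        # blend the previous prediction toward the target solution;
        # a real level would be a network trained on the residual
        return prev + weight * (math.sin(x) - prev)
    return level

levels = [make_level(0.5), make_level(0.5)]

x = 1.0
y = low_fidelity(x)
errors = [abs(y - math.sin(x))]
for level in levels:
    y = level(x, y)
    errors.append(abs(y - math.sin(x)))
# errors decrease monotonically: each stage halves the residual here
```

The composition structure, where later stages receive both the input and the earlier prediction, is what the sketch is meant to convey; the progressive error reduction mirrors the "progressive improvement of learned solutions" described above.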
#### SINDy
* Sparse Identification of Nonlinear Dynamics (SINDy) is a powerful method that uses sparse regression to identify a small number of active terms in dynamic systems, allowing for interpretable and efficient modeling of complex, nonlinear dynamics. We now enable users to leverage this technique for sparse physics-informed system identification. Check out the notebook: [Part_9_SINDy.ipynb](examples/ODEs/Part_9_SINDy.ipynb)
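As a rough illustration of the idea (plain Python, not the Neuromancer API), sequentially thresholded least squares (STLSQ), the regression at the heart of SINDy, recovers the single active term of dx/dt = -2x from a three-term candidate library:

```python
# Toy SINDy sketch: recover dx/dt = -2x via sequentially thresholded
# least squares. Illustrative only; the notebook uses Neuromancer blocks.

def solve(a, b):
    """Solve a small linear system a @ xi = b by Gaussian elimination."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    xi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = m[r][n] - sum(m[r][c] * xi[c] for c in range(r + 1, n))
        xi[r] = s / m[r][r]
    return xi

def lstsq(theta, dx):
    """Least squares via the normal equations theta^T theta xi = theta^T dx."""
    k = len(theta[0])
    a = [[sum(row[i] * row[j] for row in theta) for j in range(k)]
         for i in range(k)]
    b = [sum(row[i] * d for row, d in zip(theta, dx)) for i in range(k)]
    return solve(a, b)

# Noiseless samples of dx/dt = -2x; candidate library: [x, x^2, x^3].
xs = [0.1 * i for i in range(1, 21)]
theta = [[x, x * x, x ** 3] for x in xs]
dx = [-2.0 * x for x in xs]

xi = lstsq(theta, dx)
# STLSQ step: zero out small coefficients, refit on the surviving terms.
active = [i for i, c in enumerate(xi) if abs(c) >= 0.1]
theta_active = [[row[i] for i in active] for row in theta]
xi_active = lstsq(theta_active, dx)
# only the x term survives, with coefficient -2
```

The thresholding is what makes the recovered model sparse and interpretable: spurious library terms are pruned rather than kept with tiny weights.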
**New Colab Examples:**
> ⭐ [Various domain examples, such as system identification of building thermal dynamics, in NeuroMANCER](#domain-examples)
> ⭐ [Custom Training Via Lightning Hooks ](#lightning-integration-examples)
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 9: Sparse Identification of Nonlinear Dynamics (SINDy)
### Physics-Informed Neural Networks (PINNs) for Partial Differential Equations (PDEs) and Ordinary Differential Equations (ODEs)
<a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_1_PINN_DiffusionEquation.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 1: Diffusion Equation

<a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_2_PINN_BurgersEquation.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 2: Burgers' Equation

<a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_3_PINN_BurgersEquation_inverse.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 3: Burgers' Equation w/ Parameter Estimation (Inverse Problem)

<a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_4_PINN_LaplaceEquationSteadyState.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 4: Laplace's Equation (steady-state)

<a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_5_Pendulum_Stacked.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 5: Damped Pendulum (stacked PINN)

<a target="_blank" href="https://colab.research.google.com/github/pnnl/neuromancer/blob/master/examples/PDEs/Part_6_PINN_NavierStokesCavitySteady_KAN.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 6: Navier-Stokes equation (lid-driven cavity flow, steady-state, KAN)
### Control
Part 5: Using Cvxpylayers for differentiable projection onto the polytopic feasible set

<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Part 4: Defining Custom Training Logic via Lightning Modularized Code.

<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> LatentSDEs: "System Identification" of Stochastic Processes using Neuromancer x TorchSDE
## Documentation
The documentation for the library can be found [online](https://pnnl.github.io/neuromancer/).
There is also an [introduction video](https://www.youtube.com/watch?v=YkFKz-DgC98) covering
## Release notes
### Version 1.5.1 Release Notes
+ Enhancement: the Neuromancer Lightning trainer now supports integration of all Lightning hooks. Please refer to the Lightning examples README for more information.
+ Deprecated WandB hyperparameter tuning via `LitTrainer` for now
+ New feature: TorchSDE integration with the Neuromancer core library, namely `torchsde.sdeint()`. A motivating example of system identification on a stochastic process can be found in examples/SDEs/sde_walkthrough.ipynb
+ New feature: Stacked physics-informed neural networks
+ New feature: SINDy -- sparse system identification of nonlinear dynamical systems
### Version 1.5.0 Release Notes
+ New Feature: PyTorch Lightning Integration with NeuroMANCER core library. All these features are opt-in.
+ Code simplifications: zero boilerplate code, increased modularity