Neural surrogates for crystal growth dynamics with variable supersaturation: explicit vs. implicit conditioning
Matteo Rigoni, Daniele Lanzoni, Francesco Montalenti, Roberto Bergamaschini
TL;DR
This paper develops and compares two Convolutional Recurrent Neural Network (CRNN) architectures for simulating crystal growth dynamics under variable supersaturation, finding explicit parameter conditioning superior.
Key contributions
- Developed two CRNN architectures for crystal growth: implicit conditioning (the supersaturation is inferred from a short input mini-sequence of evolution frames) and explicit conditioning (the supersaturation parameter is fed directly alongside a single initial frame).
- Explicit conditioning consistently achieved higher fidelity in reproducing ground-truth crystal growth profiles.
- Implicit conditioning requires significantly larger datasets to match the performance of explicit conditioning.
- The models scale to domains 256× larger and to sequences more than 10× longer than those seen in training, with limited error accumulation.
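The practical difference between the two conditioning schemes comes down to how the network's input tensor is assembled. A minimal sketch of the two input pipelines follows; the array shapes, the constant-channel encoding of the supersaturation, and the function names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def explicit_input(frame, sigma):
    """Explicit conditioning: one initial frame plus the parameter itself.

    The scalar supersaturation sigma is broadcast to a constant channel
    and stacked with the phase-field frame, so the network receives the
    control parameter directly. frame: (H, W) array.
    """
    sigma_channel = np.full_like(frame, sigma)
    return np.stack([frame, sigma_channel], axis=0)  # shape (2, H, W)

def implicit_input(frames):
    """Implicit conditioning: a mini-sequence of k evolution frames.

    No parameter is provided; the network must infer the supersaturation
    from the growth rate encoded across the frames. frames: (k, H, W).
    """
    return np.asarray(frames)  # shape (k, H, W)
```

This makes the trade-off reported above concrete: the explicit model gets the parameter for free at the cost of requiring it to be known, while the implicit model must learn to estimate it from the dynamics, which is what drives its larger data requirements.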
Why it matters
This research provides efficient neural surrogate models for complex crystal growth simulations, offering a substantial speed-up over direct numerical integration. By showing that explicit parameter conditioning outperforms implicit inference from mini-sequences, it guides future surrogate-model design, and the demonstrated scalability suggests broad applicability in materials science and engineering.
Original Abstract
Simulations of crystal growth are performed by using Convolutional Recurrent Neural Network surrogate models, trained on a dataset of time sequences computed by numerical integration of Allen-Cahn dynamics including faceting via kinetic anisotropy. Two network architectures are developed to take into account the effects of a variable supersaturation value. The first infers it implicitly by processing an input mini-sequence of a few evolution frames and then returns a consistent continuation of the evolution. The second takes the supersaturation parameter as an explicit input along with a single initial frame and predicts the entire sequence. The two models are systematically tested to establish strengths and weaknesses, comparing the prediction performance for models trained on datasets of different size and, in the first architecture, different lengths of input mini-sequence. The analysis of point-wise and mean absolute errors shows how the explicit parameter conditioning guarantees the best results, reproducing with high-fidelity the ground-truth profiles. Comparable results are achievable by the mini-sequence approach only when using larger training datasets. The trained models show strong conditioning by the supersaturation parameter, consistently reproducing its overall impact on growth rates as well as its local effect on the faceted morphology. Moreover, they are perfectly scalable even on 256 times larger domains and can be successfully extended to more than 10 times longer sequences with limited error accumulation. The analysis highlights the potential and limits of these approaches in view of their general exploitation for crystal growth simulations.
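The ground-truth data described in the abstract come from numerical integration of Allen-Cahn dynamics. A minimal explicit finite-difference sketch of the isotropic case is given below; the paper's model additionally includes kinetic anisotropy to produce faceting, and the double-well potential, the supersaturation coupling term, and all coefficients here are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def allen_cahn_step(phi, sigma, dt=0.01, dx=1.0, eps=1.0, mob=1.0):
    """One explicit Euler step of isotropic Allen-Cahn dynamics.

    Assumed form (illustrative, not the paper's exact model):
        dphi/dt = M [ eps^2 * lap(phi) - W'(phi) + 6*sigma*phi*(1-phi) ]
    with double-well W(phi) = phi^2 (1-phi)^2, so phi=0 is the fluid
    phase, phi=1 the crystal, and sigma tilts the well to drive growth.
    Periodic boundaries via np.roll; phi: (H, W) array.
    """
    lap = (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0)
           + np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1)
           - 4.0 * phi) / dx**2
    dW = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)   # W'(phi)
    drive = 6.0 * sigma * phi * (1.0 - phi)            # supersaturation term
    return phi + dt * mob * (eps**2 * lap - dW + drive)
```

A time sequence for the training dataset would then be a loop of such steps started from a seed profile, with sigma varied between sequences; the CRNN surrogates replace this per-step integration entirely.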