Abstract

A deep generative model based on a variational autoencoder (VAE), conditioned simultaneously on two target properties, is developed for the inverse design of stable magnetic materials. The structure of the physics-informed, property-embedded latent space of the model is analyzed using graph theory. Approximately 96% of the generated materials satisfy the target properties according to predictions from the target-learning branches, a substantial improvement over approaches that do not condition the VAE latent space on target properties or that do not account for the connectivity of the parent materials from which the new materials are generated. This performance is achieved with a simple real-space-only representation that can be read directly from material CIF files. Model predictions are validated by density functional theory calculations on a randomly chosen subset of the generated materials, and the performance of the present model is comparable to or better than that of previously reported models. The model (MagGen) is applied to the problem of designing rare-earth-free permanent magnets, with promising results.
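To make the architecture described above concrete, the sketch below shows one plausible way a VAE latent space can be conditioned on two target properties through auxiliary "target-learning" branches, i.e., property-regression heads attached to the latent code whose losses are trained jointly with the reconstruction and KL terms. This is an illustrative assumption, not the authors' implementation: the layer sizes, property choices, and loss weights are hypothetical.

```python
# Minimal sketch (assumed, not the MagGen code) of a VAE whose latent space is
# conditioned on two target properties via auxiliary property-regression branches.
# All names, dimensions, and weights are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PropertyConditionedVAE(nn.Module):
    def __init__(self, input_dim=128, latent_dim=16):
        super().__init__()
        # Encoder maps a real-space material representation to a latent distribution.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU())
        self.fc_mu = nn.Linear(64, latent_dim)
        self.fc_logvar = nn.Linear(64, latent_dim)
        # Decoder reconstructs the representation from a latent sample.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, input_dim))
        # Two target-learning branches: each predicts one property from the latent
        # code, embedding the target properties into the latent space during training.
        self.property_head_1 = nn.Linear(latent_dim, 1)  # e.g. stability metric (assumed)
        self.property_head_2 = nn.Linear(latent_dim, 1)  # e.g. magnetic moment (assumed)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), self.property_head_1(z), self.property_head_2(z), mu, logvar

def loss_fn(x, x_rec, p1_pred, p2_pred, p1, p2, mu, logvar, beta=1.0, gamma=1.0):
    # Reconstruction + KL divergence + two property-regression terms
    # (the relative weights beta and gamma are assumptions).
    rec = F.mse_loss(x_rec, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    prop = F.mse_loss(p1_pred.squeeze(-1), p1) + F.mse_loss(p2_pred.squeeze(-1), p2)
    return rec + beta * kld + gamma * prop
```

Under this kind of setup, new candidates would be generated by sampling latent points near parent materials and decoding them, with the property heads used to screen samples against the targets.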
