Large Language Models for Forward and Inverse Design of Metamaterials
Over the last several years, deep learning - in particular deep neural networks (DNNs) - has been shown to be an effective approach for the design of electromagnetic metamaterials and metasurfaces, and many novel results have been demonstrated. More recently, large language models (LLMs) have emerged as powerful tools, achieving accuracy comparable to state-of-the-art DNNs while offering the unique ability to interact in a human-like manner. This raises the possibility that LLMs could leverage their broad world knowledge to learn more efficiently from training data than traditional models, potentially enabling them to discover and describe the underlying physical principles. We present results on three LLMs fine-tuned on metamaterials data. These models can predict electromagnetic spectra when prompted with a particular metamaterial geometry - the so-called forward problem. We also show that fine-tuned LLMs have some ability to predict the metamaterial geometric parameters needed to yield a desired spectrum. This latter task is a challenging ill-posed inverse problem that has long been a grand challenge in many areas of physics and engineering.
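To make the forward-problem setup concrete, the sketch below shows one plausible way a metamaterial geometry and its discretized spectrum could be serialized as prompt/completion text for fine-tuning a causal language model. All parameter names, values, and the prompt wording are hypothetical illustrations, not the paper's actual data format.

```python
# Minimal sketch: serializing geometry/spectrum pairs as fine-tuning text.
# Field names (radius_um, height_um, period_um) and the prompt phrasing
# are assumptions for illustration only.

def forward_prompt(geometry: dict) -> str:
    """Format a geometry as a forward-problem prompt (geometry -> spectrum)."""
    params = ", ".join(f"{k} = {v}" for k, v in geometry.items())
    return (f"Given a metasurface unit cell with {params}, "
            "predict its transmission spectrum.")

def spectrum_completion(spectrum: list) -> str:
    """Serialize a discretized spectrum as the target completion text."""
    return " ".join(f"{t:.3f}" for t in spectrum)

# One training pair; prompt and target would be concatenated into a
# single fine-tuning example for a causal LLM.
geometry = {"radius_um": 1.2, "height_um": 0.8, "period_um": 3.0}
prompt = forward_prompt(geometry)
target = spectrum_completion([0.91, 0.85, 0.42, 0.13, 0.37])
```

The inverse problem would simply swap the roles of the two strings, prompting with the desired spectrum and training the model to emit the geometric parameters.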