Learning Electromagnetic Metamaterial Physics With ChatGPT
Large language models (LLMs) such as ChatGPT, Gemini, LLaMA, and Claude are trained on massive quantities of text parsed from the internet and have shown a remarkable ability to respond to complex prompts in a manner often indistinguishable from that of humans. For all-dielectric metamaterials consisting of unit cells with four elliptical resonators, we present an LLM fine-tuned on up to 40,000 samples that can predict the absorptivity spectrum from a text prompt specifying only the metasurface geometry. Results are compared with conventional machine-learning approaches, including feed-forward neural networks, random forest, linear regression, and K-nearest neighbor (KNN). Remarkably, the fine-tuned LLM (FT-LLM) achieves performance comparable to that of a deep neural network at large dataset sizes. We also explore the inverse problem by asking the LLM to predict the geometry needed to achieve a desired spectrum. LLMs possess several advantages over humans that may benefit research, including the ability to process enormous amounts of data, find hidden patterns, and operate in higher-dimensional spaces. This suggests they may be able to leverage their general knowledge of the world to learn more quickly from training data than traditional models, making them valuable tools for research and analysis.
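The abstract describes casting a geometry-to-spectrum regression task as text for LLM fine-tuning and benchmarking it against classical regressors such as KNN. The sketch below is a minimal illustration of how such a pipeline might be set up, not the authors' actual method: the geometry encoding (semi-axes of four ellipses), the prompt wording, the number of frequency points, the chat-style JSONL fine-tuning format, and helper names like `to_prompt` are all hypothetical assumptions, and the data here are random placeholders.

```python
# Hypothetical sketch: turn metasurface geometries and absorptivity spectra into
# text prompt/completion pairs for LLM fine-tuning, plus a KNN baseline.
# All parameter names, ranges, and file formats are assumptions for illustration.
import json

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

N_SAMPLES = 1000   # stand-in for the up-to-40,000-sample dataset
N_FREQ = 100       # number of sampled frequency points (assumed)

# Each unit cell: four elliptical resonators, parameterized here only by
# semi-axes (a, b) in micrometers -- a simplified, hypothetical encoding.
geometries = rng.uniform(0.5, 3.0, size=(N_SAMPLES, 8))
spectra = rng.uniform(0.0, 1.0, size=(N_SAMPLES, N_FREQ))  # placeholder absorptivity

def to_prompt(geom):
    """Render a geometry vector as the text prompt given to the LLM."""
    pairs = ", ".join(
        f"ellipse {i + 1}: a={geom[2 * i]:.2f} um, b={geom[2 * i + 1]:.2f} um"
        for i in range(4)
    )
    return f"Predict the absorptivity spectrum for a unit cell with {pairs}."

def to_completion(spec):
    """Render the absorptivity spectrum as a comma-separated text target."""
    return ", ".join(f"{a:.3f}" for a in spec)

# Write fine-tuning examples in a generic chat-style JSONL layout (assumed).
with open("ft_dataset.jsonl", "w") as f:
    for geom, spec in zip(geometries, spectra):
        record = {
            "messages": [
                {"role": "user", "content": to_prompt(geom)},
                {"role": "assistant", "content": to_completion(spec)},
            ]
        }
        f.write(json.dumps(record) + "\n")

# KNN baseline: regress the full spectrum directly from the geometry vector.
knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(geometries[:800], spectra[:800])
pred = knn.predict(geometries[800:])
mae = np.mean(np.abs(pred - spectra[800:]))
print(f"KNN baseline MAE on held-out placeholder data: {mae:.3f}")
```

For the inverse problem mentioned in the abstract, the same text framing can simply be reversed: the spectrum becomes the prompt and the geometry the completion, with the classical baselines retrained accordingly.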
Related Subject Headings
- 46 Information and computing sciences
- 40 Engineering
- 10 Technology
- 09 Engineering
- 08 Information and Computing Sciences