Fine-tuned large language models for predicting electromagnetic spectra in metamaterials
3 October 2024
Abstract
Large language models (LLMs) such as ChatGPT are trained on massive quantities of text parsed from the internet and have shown a remarkable ability to respond to complex prompts in a manner indistinguishable from humans. We present a fine-tuned LLM that can predict electromagnetic spectra across a range of frequencies from a text prompt that specifies only the metasurface geometry. Results are compared to conventional machine learning approaches, including feed-forward neural networks, random forests, linear regression, and k-nearest neighbors. We demonstrate the LLM’s ability to solve inverse problems by providing the geometry necessary to achieve a desired spectrum. Furthermore, our fine-tuned LLM excels at “physics” understanding, explaining how certain resonances are directly related to geometry. LLMs possess some advantages over humans that may benefit research, including the ability to process enormous amounts of data, find hidden patterns in data, and operate in higher-dimensional spaces. We propose that fine-tuning LLMs on large datasets specific to a field allows them to grasp the nuances of that domain, making them valuable tools for research and analysis.
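The abstract does not specify how geometry–spectrum pairs are encoded as text for fine-tuning. As a minimal sketch of one common approach, the example below serializes a hypothetical metasurface geometry and its spectrum into a JSONL prompt/completion record of the kind typically used in LLM fine-tuning datasets. All field names, geometry parameters, and spectral values are illustrative assumptions, not the authors' actual schema or data.

```python
import json
import math

# Hypothetical metasurface geometry; every key and value here is an
# illustrative assumption, not taken from the paper.
geometry = {
    "unit_cell_um": 50.0,
    "resonator": "split-ring",
    "ring_radius_um": 12.0,
    "gap_um": 2.0,
    "metal_thickness_nm": 200,
}

# Toy spectrum: absorptance sampled at a few frequencies (synthetic values,
# standing in for simulated or measured data).
frequencies_thz = [0.5 + 0.1 * i for i in range(5)]
absorptance = [round(0.5 + 0.4 * math.sin(f), 3) for f in frequencies_thz]

# One fine-tuning record in the prompt/completion style: the prompt states
# only the geometry, the completion lists the spectrum as frequency:value
# pairs, so the model learns the geometry-to-spectrum mapping from text.
record = {
    "prompt": (
        "Predict the absorptance spectrum of a metasurface with geometry: "
        + json.dumps(geometry)
    ),
    "completion": " ".join(
        f"{f:.1f}THz:{a:.3f}" for f, a in zip(frequencies_thz, absorptance)
    ),
}

# A fine-tuning dataset is one such JSON object per line (JSONL).
line = json.dumps(record)
print(line)
```

For the inverse problem described in the abstract, the same dataset could be written with the roles swapped: the spectrum in the prompt and the geometry in the completion.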
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Willie J. Padilla "Fine-tuned large language models for predicting electromagnetic spectra in metamaterials", Proc. SPIE PC13109, Metamaterials, Metadevices, and Metasystems 2024, PC131091K (3 October 2024); https://doi.org/10.1117/12.3029394
KEYWORDS
Electromagnetism
Electromagnetic metamaterials
Analytical research
Internet
Inverse problems
Linear regression
Machine learning