Large language models (LLMs) such as ChatGPT are trained on massive quantities of text scraped from the internet and have shown a remarkable ability to respond to complex prompts in a manner indistinguishable from humans. We present a fine-tuned LLM that can predict the frequency-dependent electromagnetic spectrum of a metasurface from a text prompt specifying only its geometry. Results are compared to conventional machine learning approaches, including feed-forward neural networks, random forests, linear regression, and K-nearest neighbors. We demonstrate the LLM’s ability to solve inverse problems by providing the geometry necessary to achieve a desired spectrum. Furthermore, our fine-tuned LLM exhibits a degree of “physics” understanding, explaining how specific resonances arise directly from the geometry. LLMs possess advantages over humans that may benefit research, including the ability to process enormous amounts of data, to find hidden patterns in data, and to operate in higher-dimensional spaces. We propose that fine-tuning LLMs on large, domain-specific datasets allows them to grasp the nuances of that domain, making them valuable tools for research and analysis.
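To make the comparison concrete, the sketch below illustrates one way such a study could be set up. It is a minimal sketch under stated assumptions, not the paper’s actual pipeline: synthetic geometry parameters and a toy Lorentzian resonance stand in for the real dataset, the four conventional baselines named above are fit with scikit-learn, and a hypothetical prompt template shows how a geometry might be serialized into text for LLM fine-tuning. The parameter names (radius, gap, period, thickness) and the prompt wording are illustrative assumptions.

```python
# Illustrative sketch only: geometry features, spectra, and the prompt format
# below are synthetic stand-ins, not the paper's dataset or template.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 geometric parameters -> 100-point spectrum.
n_samples, n_freq = 1000, 100
geometry = rng.uniform(0.1, 1.0, size=(n_samples, 4))  # assumed: radius, gap, period, thickness
freqs = np.linspace(1.0, 10.0, n_freq)                  # arbitrary frequency axis

# Toy "physics": a single Lorentzian resonance whose center and width
# shift with the first two geometry parameters.
centers = 2.0 + 6.0 * geometry[:, 0:1]
widths = 0.2 + 0.5 * geometry[:, 1:2]
spectra = 1.0 / (1.0 + ((freqs - centers) / widths) ** 2)

X_tr, X_te, y_tr, y_te = train_test_split(geometry, spectra, random_state=0)

# The four conventional baselines mentioned in the abstract.
baselines = {
    "linear regression":  LinearRegression(),
    "random forest":      RandomForestRegressor(n_estimators=100, random_state=0),
    "k-nearest neighbor": KNeighborsRegressor(n_neighbors=5),
    "feed-forward NN":    MLPRegressor(hidden_layer_sizes=(128, 128),
                                       max_iter=2000, random_state=0),
}
for name, model in baselines.items():
    model.fit(X_tr, y_tr)
    mse = np.mean((model.predict(X_te) - y_te) ** 2)
    print(f"{name:20s} test MSE: {mse:.4f}")

# Hypothetical prompt serialization for LLM fine-tuning: one
# (prompt, completion) pair per sample, with the spectrum written
# out as comma-separated values.
def to_prompt(g):
    return (f"Metasurface geometry: radius={g[0]:.3f}, gap={g[1]:.3f}, "
            f"period={g[2]:.3f}, thickness={g[3]:.3f}. "
            "Predict the transmission spectrum.")

example = {"prompt": to_prompt(geometry[0]),
           "completion": ", ".join(f"{v:.3f}" for v in spectra[0])}
print(example["prompt"])
```

Swapping the baselines’ prediction target for the LLM’s text completion makes the two approaches directly comparable: both map the same geometry description to the same discretized spectrum, differing only in input representation (numeric vector versus natural-language prompt).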