Substantial progress has been made recently in training context-aware language models. CLOTH is a human-created cloze dataset that provides a better evaluation of machine reading comprehension. Although the authors of CLOTH ran many experiments on BERT and context2vec, the performance of other models is still worth studying. We applied the CLOTH dataset to other models and evaluated their performance with respect to their different model mechanisms. The results show that ALBERT performs well on the cloze task: its accuracy reaches 92.24%, which is 6.34 percentage points higher than human performance. In addition, we introduce adversarial training into the models. Experiments show that adversarial training significantly improves both the robustness and the accuracy of the models; on BERT-large, it raises accuracy by up to 0.15%.
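The abstract does not specify which adversarial-training scheme was used; a common choice for BERT-style models is FGM-style perturbation of the word embeddings (Miyato et al.), in which the gradient of the loss with respect to the embeddings is rescaled to a fixed L2 norm and added back before a second forward/backward pass. A minimal, framework-agnostic sketch of that perturbation step (the function name and NumPy formulation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def fgm_perturbation(grad, epsilon=1.0):
    """FGM-style adversarial perturbation (illustrative sketch):
    scale the gradient of the loss w.r.t. the embeddings to L2 norm
    epsilon, so the perturbation points in the loss-increasing
    direction but has bounded magnitude."""
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

# One adversarial training step, conceptually:
#   1. forward/backward on the clean batch -> grad on the embedding matrix
#   2. add r_adv = fgm_perturbation(grad) to the embeddings
#   3. forward/backward again on the perturbed embeddings
#   4. restore the original embeddings, then apply the optimizer update
grad = np.array([3.0, 4.0])   # toy gradient on a 2-d embedding
r_adv = fgm_perturbation(grad, epsilon=1.0)
# r_adv keeps the gradient's direction and has norm exactly epsilon
```

The key design point is that the perturbation is applied in embedding space rather than to the discrete input tokens, which is why this form of adversarial training transfers naturally to masked-language-model fine-tuning.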