KEYWORDS: Feature extraction, Data modeling, Machine learning, Transformers, Education and training, Deep learning, Associative arrays, Lithium, Performance modeling, Medical research
Electronic medical records are an important information source for medical intelligence, and many related studies exist. However, characteristics of the electronic medical records themselves, such as the lack of clear word-segmentation boundaries, complicate this research. Chinese, as a logographic script, carries rich information in the characters themselves. Various methods split Chinese characters into components and feed the result to the model as enhanced input to improve overall recognition; however, glyphs can be split in several different ways, and no paper has systematically compared these information-enhancement methods. This paper first introduces common approaches to Chinese NER and analyzes them from a technical perspective, then illustrates the effects of different splitting methods on the NER task and compares this data-enhancement method through experiments.
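Glyph-based input augmentation of the kind described above can be sketched as follows. The decomposition table here is a hypothetical toy example (not from the paper); real systems typically draw on resources such as Ideographic Description Sequence (IDS) data, and the splitting granularity chosen is exactly the design choice the paper compares.

```python
# Toy character-to-component table (assumed for illustration only).
GLYPH_COMPONENTS = {
    "河": ["氵", "可"],   # "river": water radical + phonetic component
    "流": ["氵", "㐬"],   # "flow": water radical + phonetic component
    "好": ["女", "子"],   # "good": woman + child
}

def augment_with_glyphs(sentence):
    """Expand each character into a (char, components) pair so a model can
    consume component-level features alongside the original characters.
    Characters without an entry fall back to themselves."""
    return [(ch, GLYPH_COMPONENTS.get(ch, [ch])) for ch in sentence]

augmented = augment_with_glyphs("河流")
# Each token now carries its glyph components as extra input features.
```

A different splitting scheme (e.g. stroke-level rather than radical-level) would simply substitute a different decomposition table, which is what makes the comparison in the paper possible.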
Conversational humor often depends on context. Compared with one-liner humor, conversational humor recognition is more complex and difficult. Moreover, characters are among the most important factors in dialogue, yet most existing research on conversational humor recognition ignores character information, leading to poor results. This paper therefore proposes a conversational humor recognition model that combines character information with contextual features. The main and supporting characters of a specific sitcom are identified, and their gender is used as a character attribute. RoBERTa, Bi-GRU, CNN, and attention are used to extract utterance features at the word level and contextual features at the sentence level, so as to recognize both one-liner and conversational humor in dialogue. Experiments on CCL2020 Task 3 achieve an F1-score of 53.7%, a 2.2% improvement over the previous best score, demonstrating the effectiveness of character information and contextual features for the conversational humor recognition task.
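The sentence-level contextual step can be illustrated with a minimal attention-pooling sketch. The random vectors below stand in for the RoBERTa/Bi-GRU/CNN utterance encodings the paper actually uses; the pooling function itself is a generic soft-attention layer, not the authors' exact architecture.

```python
import numpy as np

def attention_pool(utterance_vecs, query):
    """Soft-attention pooling: weight each context-utterance vector by its
    dot-product similarity to a query (e.g., the target utterance), then
    return the weighted sum as a single context vector."""
    scores = utterance_vecs @ query                 # (n,) similarity scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # numerically stable softmax
    return weights @ utterance_vecs                 # (d,) pooled context vector

rng = np.random.default_rng(0)
context = rng.normal(size=(5, 8))   # 5 preceding utterances, dim 8 (stand-ins)
target = rng.normal(size=8)         # target-utterance representation (stand-in)
ctx_vec = attention_pool(context, target)
```

The pooled `ctx_vec` would then be concatenated with the target-utterance and character features before classification.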
To address the current problems of relying on a single source of domain-specific knowledge and of poor feature fusion in suicidal ideation detection, this paper proposes a Multi-Head Knowledge Attention mechanism model that fuses domain knowledge (DK-MHKA), fully integrating a suicide-risk severity lexicon and the user's neurotic personality traits. The model injects suicidal-tendency attributes into the semantic space of the user's social media content to enhance the model's linguistic representations. Furthermore, the method employs a multi-head knowledge attention mechanism to effectively combine the various feature sources, improving the model's predictive capability. Experimental results indicate that the proposed DK-MHKA model outperforms the baseline models in prediction accuracy, and ablation experiments confirm the individual contribution of each module to overall performance.
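The knowledge-fusion idea can be sketched as standard multi-head attention in which text features act as queries and knowledge entries (lexicon and trait embeddings) act as keys and values. This is a generic sketch under assumed dimensions, with learned projection matrices omitted for brevity; it is not the authors' exact DK-MHKA implementation.

```python
import numpy as np

def multi_head_fuse(text_feats, knowledge_feats, n_heads=4):
    """Multi-head attention fusing text features (queries) with external
    knowledge features (keys/values); projection matrices omitted."""
    n, d = text_feats.shape
    assert d % n_heads == 0
    hd = d // n_heads
    def split(x):  # (m, d) -> (n_heads, m, hd)
        return x.reshape(x.shape[0], n_heads, hd).transpose(1, 0, 2)
    q = split(text_feats)
    k = v = split(knowledge_feats)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(hd)   # (heads, n_q, n_k)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                     # softmax per head
    out = w @ v                                       # (heads, n_q, hd)
    return out.transpose(1, 0, 2).reshape(n, d)       # concatenate heads

rng = np.random.default_rng(1)
text = rng.normal(size=(6, 16))   # 6 post tokens, dim 16 (stand-ins)
know = rng.normal(size=(3, 16))   # 3 knowledge entries, e.g. lexicon/traits
fused = multi_head_fuse(text, know)
```

Each fused token vector now mixes in knowledge information weighted per head, which is the "effective combination of feature sources" the abstract describes.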