
Details

一种提高跨语言理解的NLP迁移学习    

An NLP Migration Learning for Improving Cross-lingual Understanding

Document type: Journal article

Title (Chinese): 一种提高跨语言理解的NLP迁移学习

Title (English): An NLP Migration Learning for Improving Cross-lingual Understanding

Authors: 王坤[1]; 盛鸿宇[2]

First author: 王坤

Affiliations: [1] 四川信息职业技术学院, Guangyuan, Sichuan 628017; [2] 北京联合大学机器人学院, Beijing 100101

First affiliation: 四川信息职业技术学院, Guangyuan, Sichuan 628017

Year: 2024

Volume: 46

Issue: 4

Pages: 153-163

Journal (Chinese): 西南大学学报(自然科学版)

Journal (English): Journal of Southwest University (Natural Science Edition)

Indexed in: CSTPCD; CSCD (CSCD_E2023_2024)

Funding: National Natural Science Foundation of China (Grant No. 12104289).

Language: Chinese

Keywords (Chinese): natural language processing; multilingual bidirectional encoder representations (M-BERT); transfer learning; cross-lingual; deep learning

Keywords (English): NLP; M-BERT; migration learning; cross-lingual; deep learning

Abstract: With the growth of information on the internet, effectively representing the information carried by different languages has become an important task in Natural Language Processing (NLP). However, many traditional machine learning models depend on training in high-resource languages and cannot be transferred to low-resource languages. To address this problem, this paper combines transfer learning with deep learning models and proposes a transfer learning method based on Multilingual Bidirectional Encoder Representations from Transformers (M-BERT). The method uses M-BERT as a feature extractor to transform features between the source-language and target-language domains, reducing the gap between language domains and improving the generalization of the target task across domains. First, building on the BERT architecture, the M-BERT model is constructed through pre-training steps covering data collection and processing, training setup, parameter estimation, and model training, and is then fine-tuned on the target task. Next, transfer learning is applied to bring the M-BERT model to cross-lingual text analysis. Finally, cross-lingual transfer experiments from English to French and German show that the proposed model delivers high performance at a small computational cost and reaches 96.2% accuracy under the joint training scheme. The results indicate that the model achieves cross-lingual data transfer and demonstrate its effectiveness and novelty in cross-lingual NLP.
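The abstract describes fine-tuning a multilingual BERT encoder on a high-resource source language and then evaluating it on low-resource target languages without target-language labels. The following is a minimal sketch of that transfer pattern, not the authors' implementation: it assumes the HuggingFace transformers and torch packages, uses the public bert-base-multilingual-cased checkpoint as a stand-in for M-BERT, and uses toy English/French data and illustrative hyperparameters that are not taken from the paper.

# Sketch: cross-lingual transfer with a multilingual BERT encoder (assumptions noted above).
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # public multilingual BERT checkpoint

class PairDataset(Dataset):
    """Wraps (text, label) pairs and tokenizes them for the encoder."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item

def fine_tune(model, loader, epochs=3, lr=2e-5, device="cpu"):
    """Fine-tune on the high-resource source language (English here)."""
    model.to(device).train()
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss
            loss.backward()
            optim.step()
            optim.zero_grad()

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    """Zero-shot evaluation on the target language: no target-language labels were used in training."""
    model.to(device).eval()
    correct = total = 0
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        preds = model(**batch).logits.argmax(dim=-1)
        correct += (preds == batch["labels"]).sum().item()
        total += batch["labels"].numel()
    return correct / total

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Toy data: English training set (source language), French test set (target language).
    en_texts, en_labels = ["a great movie", "a terrible movie"], [1, 0]
    fr_texts, fr_labels = ["un film formidable", "un film terrible"], [1, 0]

    train_loader = DataLoader(PairDataset(en_texts, en_labels, tokenizer), batch_size=2)
    test_loader = DataLoader(PairDataset(fr_texts, fr_labels, tokenizer), batch_size=2)

    fine_tune(model, train_loader)
    print("zero-shot FR accuracy:", evaluate(model, test_loader))

The same pattern extends to German or any other language covered by the multilingual vocabulary; the paper's joint training scheme and its reported 96.2% accuracy are not reproduced here.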

