
Record Details

Handling the adversarial attacks: A machine learning's perspective (indexed in SCI-EXPANDED and EI)

Document type: Journal article

English title: Handling the adversarial attacks: A machine learning's perspective

Authors: Cao, Ning[1]; Li, Guofu[2]; Zhu, Pengjia[3]; Sun, Qian[4]; Wang, Yingying[1]; Li, Jing[5]; Yan, Maoling[6]; Zhao, Yongbin[7]

First author: Cao, Ning

Corresponding author: Li, GF[1]

Affiliations: [1] Qingdao Binhai Univ, Coll Informat Engn, Qingdao, Shandong, Peoples R China; [2] Univ Shanghai Sci & Technol, Coll Commun & Art Design, Shanghai, Peoples R China; [3] Accenture AI Lab, Shanghai, Peoples R China; [4] Beijing Technol & Business Univ, Sch Comp & Informat Engn, Beijing, Peoples R China; [5] Beijing Union Univ, Coll Intellectualized City, Beijing, Peoples R China; [6] Shandong Agr Univ, Coll Informat Sci & Engn, Tai An, Shandong, Peoples R China; [7] Shijiazhuang Tiedao Univ, Sch Informat Sci & Technol, Shijiazhuang, Hebei, Peoples R China

First affiliation: Qingdao Binhai Univ, Coll Informat Engn, Qingdao, Shandong, Peoples R China

Corresponding institution: [1] (corresponding author) Univ Shanghai Sci & Technol, Coll Commun & Art Design, Shanghai, Peoples R China

Year: 2019

Volume: 10

Issue: 8

Pages: 2929-2943

Journal: JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING

Indexed in: EI (Accession No. 20182805535500); Scopus (Accession No. 2-s2.0-85049594283); SCI-EXPANDED (Accession No. WOS:000477644300003)

Funding: This work was supported by the Shandong Education Department (Grant J16LN73), the Shanghai University Youth Teacher Training Funding Scheme (10-17-309-802), and the Shandong Independent Innovation and Achievements Transformation Project (2014ZZCX07106).

Language: English

Keywords: Security; Deep learning; Adversarial; Robustness

Abstract: The i.i.d. assumption is the cornerstone of most conventional machine learning algorithms. However, reducing the bias and variance of a learning model on an i.i.d. dataset may not prevent its failure on adversarial samples, which are intentionally generated by malicious users or rival programs. This paper gives a brief introduction to machine learning and adversarial learning, and discusses the research frontier of adversarial issues noticed by both the machine learning and network security fields. We argue that one key cause of the adversarial issue is that learning algorithms may not exploit the input feature set sufficiently, so that attackers can focus on a small set of features to trick the model. To address this issue, we consider two important classes of classifiers. For random forests, we propose a variant called Weighted Random Forest (WRF) that encourages the model to give even credit to the input features. This approach can be further improved by careful selection of a subset of trees based on clustering analysis at run time. For neural networks, we propose to introduce extra soft constraints, based on the weight variance, into the objective function, so that the model bases its classification decision on a more evenly distributed feature impact. Empirical experiments show that these approaches effectively improve the robustness of the learnt models over their baseline systems.
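The abstract mentions run-time selection of a tree subset via clustering analysis for WRF, but this record does not give the paper's actual procedure. Below is a minimal, hypothetical numpy sketch of one plausible reading: trees whose predictions on a held-out batch are near-duplicates are collapsed to a single representative, so no single behavior pattern dominates the vote. The function name, the greedy grouping rule, and the `max_disagree` threshold are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def select_diverse_trees(tree_preds, max_disagree=0.2):
    """Greedy run-time subset selection (illustrative, not the paper's
    algorithm): keep a tree only if it disagrees with every already-kept
    tree on more than `max_disagree` of the validation samples, so
    near-duplicate trees collapse into one representative vote.
    `tree_preds` has shape (n_trees, n_samples), holding class labels."""
    kept = []
    for i, p in enumerate(tree_preds):
        if all(np.mean(p != tree_preds[j]) > max_disagree for j in kept):
            kept.append(i)
    return kept

# Trees 0 and 1 agree everywhere, so only one of them survives;
# tree 2 disagrees on every sample and is kept as a distinct voice.
preds = np.array([
    [0, 0, 1, 1],   # tree 0
    [0, 0, 1, 1],   # tree 1: duplicate of tree 0, dropped
    [1, 1, 0, 0],   # tree 2: maximally different, kept
])
assert select_diverse_trees(preds) == [0, 2]
```

A real implementation would likely replace the greedy threshold with proper clustering (e.g. k-means or medoids over the prediction vectors), but the effect sketched here is the same: redundant trees stop amplifying the same feature dependencies.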
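For the neural-network side, the abstract describes adding a soft constraint based on weight variance so that feature impact is spread more evenly, but the exact formulation is not in this record. The numpy sketch below shows one plausible interpretation as an assumption: penalize the variance of per-input-feature weight magnitudes in the first layer, added to the task loss with a coefficient `lam`. Both function names and the specific penalty form are hypothetical.

```python
import numpy as np

def feature_impact_penalty(W):
    """Variance of per-feature weight magnitudes for a first-layer weight
    matrix W of shape (n_features, n_hidden). High variance means the
    model leans heavily on a few input features - exactly the condition
    the abstract argues an attacker can exploit."""
    impact = np.abs(W).sum(axis=1)  # total outgoing weight per input feature
    return impact.var()

def regularized_loss(base_loss, W, lam=0.1):
    """Task loss plus the weight-variance soft constraint (illustrative)."""
    return base_loss + lam * feature_impact_penalty(W)

# A layer that concentrates on one feature pays a penalty; an even one does not.
W_uneven = np.array([[3.0, 3.0], [0.1, 0.1], [0.1, 0.1]])
W_even   = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
assert feature_impact_penalty(W_even) == 0.0
assert feature_impact_penalty(W_uneven) > feature_impact_penalty(W_even)
```

Under this reading, minimizing the combined loss pushes the optimizer toward weight configurations where no small feature subset dominates the decision, which is the robustness mechanism the abstract claims.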

