Research Interests
Semantic understanding and text generation based on large language models; cross-modal semantic understanding; grammatical error correction
Graded reading (http://www.chinese-pku.com)
Research and Education Experience
2005 – present: Lecturer / Associate Professor, School of Information Science and Technology, Peking University
2003 – 2005: Postdoctoral Researcher, Peking University
2000 – 2003: Ph.D. Student, Peking University
Research Projects
[1] National Natural Science Foundation of China (NSFC) General Program: Research on Key Technologies for Automatic Question Generation for Reading Comprehension. 2021.01-2024.12. Principal Investigator.
[2] NSFC General Program: Research on Key Technologies and Resource Construction for Document-based Intelligent Question Answering. 2018.01-2021.12. Principal Investigator.
[3] NSFC General Program: Research on Automatic Analysis of Inter-sentential Relations Based on Chinese Topics. 2014.01-2017.12. Principal Investigator.
[4] NSFC Young Scientists Fund: Research on Automatic Construction of a Large-scale Word-sense-annotated Corpus Based on Distinctive Features of Words. 2008.01-2010.12. Principal Investigator.
[5] National Social Science Fund of China Post-grant Project: A Study of Coordinate Structures in Modern Chinese for Language Information Processing. 2012.01-2013.06. Principal Investigator.
[6] National Social Science Fund of China Youth Project: Research on Automatic Annotation of Word Sentiment Meanings in Web Texts. 2008.06-2010.12. Principal Investigator.
[7] National Social Science Fund of China Major Project: Research on Multi-perspective Semantic Analysis Methods, Language Knowledge Bases, and Platform Construction for Web Texts. 2013.01-2017.12. Sub-project Leader.
[8] National 863 Program: Key Technologies for Human-like Intelligent Knowledge Understanding and Reasoning for Basic Education. 2015.01-2017.12. Core Member.
[9] National 863 Program: Research on a Semantic Computation Model of Speech for Chinese Speech Synthesis. 2007.07-2009.12. Deputy Leader.
[10] Cooperative project with People's Education Press: Development of Graded Reading Text Standards for Chinese Children. 2020.09-2021.12. Principal Investigator.
[11] Principal Investigator of seven additional industry-academia collaboration projects, with several resource outcomes transferred.
Research Awards
[1] 2011, "Comprehensive Language Knowledge Base", Second Prize of the National Science and Technology Progress Award.
[2] 2020, "Key Technologies and Applications of Intelligent Chinese Text Proofreading across Multiple Scenarios and Domains", Second Prize of the Beijing Science and Technology Progress Award.
Selected Publications (* Corresponding Author)
[1] Chenming Tang, Zhixiang Wang, Yunfang Wu*. SCOI: Syntax-augmented Coverage-based In-context Example Selection for Machine Translation. EMNLP-2024.
[2] Sanwoo Lee, Yida Cai, Desong Meng, Ziyang Wang, Yunfang Wu*. Unleashing Large Language Models' Proficiency in Zero-shot Essay Scoring. Findings of EMNLP-2024.
[3] Fanyi Qu, Hao Sun, Yunfang Wu*. Unsupervised Distractor Generation via Large Language Model Distilling and Counterfactual Contrastive Decoding. Findings of ACL-2024.
[4] Chenming Tang, Fanyi Qu, Yunfang Wu*. Ungrammatical-syntax-based In-context Example Selection for Grammatical Error Correction. NAACL-2024.
[5] Ziyang Wang, Shanyu Li, HsiuYuan Huang, Yunfang Wu*. FPT: Feature Prompt Tuning for Few-shot Readability Assessment. NAACL-2024.
[6] Ming Zhang, Ke Chang, Yunfang Wu*. Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment. COLING-2024.
[7] Zichen Wu, HsiuYuan Huang, Fanyi Qu and Yunfang Wu*. Mixture-of-Prompt-Experts for Multi-modal Semantic Understanding. COLING-2024.
[8] Chenming Tang, Xiuyu Wu, Yunfang Wu*. Are Pre-trained Language Models Useful for Model Ensemble in Chinese Grammatical Error Correction? ACL-2023.
[9] Wenbiao Li, Ziyang Wang and Yunfang Wu*. A Unified Neural Network Model for Readability Assessment with Feature Projection and Length-Balanced Loss. EMNLP-2022.
[10] Xiuyu Wu and Yunfang Wu*. From Spelling to Grammar: A New Framework for Chinese Grammatical Error Correction. Findings of EMNLP-2022.
[11] Rui Sun, Xiuyu Wu and Yunfang Wu*. An Error-Guided Correction Model for Chinese Spelling Error Correction. Findings of EMNLP-2022.
[12] Zichen Wu, Xin Jia, Fanyi Qu and Yunfang Wu*. Enhancing Pre-trained Models with Text Structure Knowledge for Question Generation. COLING-2022.
[13] Ming Zhang, Shuai Dou, Ziyang Wang, Yunfang Wu*. Focus-Driven Contrastive Learning for Medical Question Summarization. COLING-2022.
[14] Xiuyu Wu, Jingsong Yu, Xu Sun and Yunfang Wu*. Position Offset Label Prediction for Grammatical Error Correction. COLING-2022.
[15] Fanyi Qu, Xin Jia, Yunfang Wu*. Asking Questions Like Educational Experts: Automatically Generating Question-Answer Pairs on Real-World Examination Data. EMNLP-2021.
[16] Xin Jia, Wenjie Zhou, Xu Sun, Yunfang Wu*. EQG-RACE: Examination-Type Question Generation. AAAI-2021.
[17] Xin Jia, Wenjie Zhou, Xu Sun, Yunfang Wu*. How to Ask Good Questions? Try to Leverage Paraphrases. ACL-2020.
[18] Xiaorui Zhou, Senlin Luo, Yunfang Wu*. Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension. AAAI-2020.
[19] Wenjie Zhou, Minghua Zhang, Yunfang Wu*. Multi-Task Learning with Language Modeling for Question Generation. EMNLP-2019.
[20] Wenjie Zhou, Minghua Zhang, Yunfang Wu*. Question-type Driven Question Generation. EMNLP-2019.
[21] Minghua Zhang, Yunfang Wu*, Weikang Li, Wei Li. Learning Universal Sentence Representations with Mean-Max Attention Autoencoder. EMNLP-2018.
[22] Minghua Zhang, Yunfang Wu*. An Unsupervised Model with Attention Autoencoders for Question Retrieval. AAAI-2018.
[23] Weikang Li, Wei Li, Yunfang Wu*. A Unified Model for Document-based Question Answering based on Human-like Reading Strategy. AAAI-2018.