AIhub | 12 September
Exploring trustworthy and efficient machine learning: Yezi Liu's PhD research

This article profiles Yezi Liu, a PhD student at the University of California, Irvine, and her research on trustworthy and efficient machine learning. Her doctoral work focuses on fairness, privacy, interpretability, and efficiency in graph neural networks and large language models. In the course of her research she found that text-to-image models reproduce cultural stereotypes, and she developed a method that uses negative samples to make large language models "unlearn" specific knowledge. Next, she plans to investigate the reasoning efficiency of large language models. She also shares her experience at the AAAI Doctoral Consortium and offers advice for students considering a PhD in AI, stressing the importance of genuine interest and persistence.

⚖️ **Research focus on trustworthy and efficient machine learning**: Yezi Liu's research centres on making machine learning systems more reliable and efficient, with particular attention to graph neural networks (GNNs) and large language models (LLMs). She works on fairness, privacy-preserving "unlearning", and interpretability for dynamic GNNs, and has begun exploring efficiency and trustworthiness challenges for LLMs in real-world applications, with the goal of building AI systems that are accurate, reliable, and socially responsible.

🎨 **Insight into stereotypes in text-to-image models**: During her research, Liu found that text-to-image models often generate images carrying cultural stereotypes, a finding that was both surprising and thought-provoking. The observation made the research more engaging, and it highlighted how AI models can mirror and amplify social biases, prompting her to dig deeper into questions of AI fairness.

🧠 **An inventive application of "unlearning"**: In her work on large language models, Liu uses a clever and effective approach: fine-tuning the model on negative samples so that it "forgets" specific knowledge. Such unlearning matters for data privacy and model safety. Training on negative samples lets the model control its knowledge boundary more precisely and selectively remove targeted information, giving the work both practical and academic value.

🚀 **Next focus: reasoning efficiency in LLMs**: Looking ahead, Liu plans to shift more of her research toward the efficiency of large language models, especially their reasoning. She sees reasoning as one of the core capabilities of LLMs, regards making it faster while preserving high-quality output and effective information retrieval as a key open problem, and hopes to make a meaningful contribution in this area.

🤝 **The AAAI Doctoral Consortium as a valuable venue for connection**: Liu found the Doctoral Consortium an excellent networking opportunity, particularly its model of pairing PhD students with assistant-professor mentors. Structured interactions, such as sharing a meal or sitting together at a table, greatly lowered the barrier to approaching professors directly and led to natural, comfortable conversations with mentors and peers, providing valuable support for building meaningful connections.

In this interview series, we’re meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. Yezi Liu is working on trustworthy and efficient machine learning. We asked her about her research to date, what she has found particularly interesting, plans for future work, and what it was that inspired her to study AI.

Tell us a bit about your PhD – where are you studying, and what is the topic of your research?

I’m currently pursuing my Ph.D. in Computer Engineering at the University of California, Irvine. My research focuses on trustworthy machine learning, with particular emphasis on graph neural networks as well as trustworthy and efficient large language models.

Could you give us an overview of the research you’ve carried out so far during your PhD?

So far, my Ph.D. research has focused on making machine learning more trustworthy and efficient. On the graph learning side, I have worked on fairness, privacy-preserving unlearning, and interpretability for dynamic graph neural networks. More recently, I’ve also started to look at efficiency and trustworthiness challenges in large language models, as these have become increasingly important in real-world applications.

Is there an aspect of your research that has been particularly interesting?

One project I found particularly interesting was studying fairness in text-to-image models. I noticed that the generated images often reflected cultural stereotypes, which was both surprising and thought-provoking, and it made the research process very engaging. On the technical side, I really enjoyed working on unlearning for large language models, where I used negative samples to fine-tune the model in a way that forces it to ‘forget’ certain knowledge. I found this approach both clever and effective, which made the work especially rewarding.

An illustration of the unlearning task: before unlearning, the model recalls factual knowledge, while after unlearning, it ‘forgets’ the targeted information.

Overview of the LUNE framework: the model uses negative examples and lightweight adapters to efficiently unlearn specific information.
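To make the idea concrete, here is a minimal sketch of the negative-sample (gradient-ascent) recipe that this kind of unlearning builds on. It is an illustration under stated assumptions, not the LUNE implementation: the model name, forget set, learning rate, and step count are all placeholders, and LUNE additionally confines updates to lightweight adapters rather than touching every parameter as this sketch does.

```python
# Minimal sketch of unlearning via negative samples (gradient ascent),
# assuming a Hugging Face causal LM. Illustrative only, not LUNE itself:
# the model, forget set, learning rate, and step count are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical "forget set": text whose content the model should unlearn.
forget_texts = ["The secret passphrase is swordfish."]

opt = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for step in range(10):  # keep the number of ascent steps small
    for text in forget_texts:
        batch = tok(text, return_tensors="pt")
        out = model(**batch, labels=batch["input_ids"])
        (-out.loss).backward()  # negated loss => gradient ascent on forget data
        opt.step()
        opt.zero_grad()
```

In practice, unconstrained gradient ascent quickly degrades the whole model, which is why published methods cap the number of steps, restrict updates to small adapters, or add a retain-set loss that anchors behaviour on everything outside the forget set.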

What are your plans for building on your research so far during the PhD – what aspects will you be investigating next?

Looking ahead, I plan to focus more on the efficiency of large language models, especially in reasoning. Since reasoning is such a core capability of these models, finding ways to make it faster and still generate high-quality answers and retrieve useful information is very important. I hope to contribute meaningful research in this area during the next stage of my Ph.D.

How was the AAAI Doctoral Consortium, and the AAAI conference experience in general? 

My experience at the AAAI Doctoral Consortium and the conference overall was excellent. What I appreciated most was the way the consortium connected us with assistant professors as mentors. For me, it can sometimes feel a bit difficult to approach professors directly, but the structure of the program, like sitting together at a table or even sharing a meal, made those interactions very natural and comfortable. This design was incredibly helpful for building meaningful connections with both faculty and other Ph.D. students, and it made the whole experience truly valuable.

What made you want to study AI?

I’ve always enjoyed digging deeply into things I find interesting, and AI naturally caught my attention because it’s becoming such a big part of everyday life. It’s hard not to wonder why these systems work the way they do and how they could be improved. Doing a Ph.D. has given me the opportunity to explore these questions more systematically, but at the core, it really comes down to curiosity and passion; those are what drive me to keep pushing forward.

What advice would you give to someone thinking of doing a PhD in the field?

My advice would be: find something you’re really interested in, and just get it done. A Ph.D. is a long journey, and without genuine interest, it’s hard to stay motivated. If you don’t persist, it’s difficult to build a sense of accomplishment, and that can really affect your daily mindset. But when you work on something that excites you, that motivation keeps you going.

Could you tell us an interesting (non-AI related) fact about you? 

Outside of research, I really enjoy things that are complex but logical, because I like having something that makes me think. For example, I love reading detective novels, watching strategy-based shows, or playing mystery and puzzle games. Even in music, I’m drawn to songs with intricate but clever arrangements, the kind that make you appreciate the thought behind them.

About Yezi Liu

Yezi Liu is a Ph.D. candidate in Computer Science at the University of California, Irvine. Her research focuses on trustworthy and efficient machine learning, spanning fairness, privacy, interpretability, and scalability in both graph neural networks and large language models. She has developed methods for fairness, privacy-preserving learning, dynamic GNN explainability, and graph condensation, with work published at venues such as IJCAI, WWW, CIKM, and ACM MM. More recently, she has been exploring efficiency and trustworthiness in large language models. Her overarching goal is to build AI systems that are accurate, reliable, and socially responsible.
