Top 10 AI Research Papers to Read: October 14, 2024
Welcome to today’s edition of our AI research newsletter! As AI continues to transform industries and research, staying informed about the latest developments is critical. In this edition, we bring you the top 10 AI research papers as of October 14, 2024, each pushing the boundaries of artificial intelligence.
From mitigating safety alignment degradation in vision-language models to leveraging explainable AI for in-vehicle network intrusion detection, this list highlights innovations that are shaping the field's future.
Explore fascinating advancements like Mentor-KD, which enhances multi-step reasoning in small language models, and SubZero, a novel approach to memory-efficient fine-tuning of large models.
Other key topics include robust frameworks for automating scientific workflows and benchmarks that measure the harmfulness of AI agents. Together, these papers offer valuable insights into how AI is evolving, both in research and in practical applications across domains.
Stay tuned as we explore these influential papers that are paving the way for the next wave of AI innovations!
- Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models by Qin Liu, Chao Shang, Ling Liu, Nikolaos Pappas, Jie Ma, Neha Anna John, Srikanth Doss, Lluis Marquez, Miguel Ballesteros, Yassine Benajiba
- Transforming In-Vehicle Network Intrusion Detection: VAE-based Knowledge Distillation Meets Explainable AI by Muhammet Anil Yagiz, Pedram MohajerAnsari, Mert D. Pese, Polat Goktas
- SimpleStrat: Diversifying Language Model Generation with Stratification by Justin Wong, Yury Orlovskiy, Michael Luo, Sanjit A. Seshia, Joseph E. Gonzalez
- Mentor-KD: Making Small Language Models Better Multi-step Reasoners by Hojae Lee, Junho Kim, SangKeun Lee
- PEAR: A Robust and Flexible Automation Framework for Ptychography Enabled by Multiple Large Language Model Agents by Xiangyu Yin, Chuqiao Shi, Yimo Han, Yi Jiang
- AgentHarm: A Benchmark for Measuring Harmfulness of LLM Agents by Maksym Andriushchenko, Alexandra Souly, Mateusz Dziemian, Derek Duenas, Maxwell Lin, Justin Wang, Dan Hendrycks, Andy Zou, Zico Kolter, Matt Fredrikson, Eric Winsor, Jerome Wynne, Yarin Gal, Xander Davies
- Software Engineering and Foundation Models: Insights from Industry Blogs Using a Jury of Foundation Models by Hao Li, Cor-Paul Bezemer, Ahmed E. Hassan
- Hierarchical Universal Value Function Approximators by Rushiv Arora
- The structure of the token space for large language models by Michael Robinson, Sourya Dey, Shauna Sweet
- SubZero: Random Subspace Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning by Ziming Yu, Pan Zhou, Sike Wang, Jia Li, Hua Huang