Benchmarking IR Models (w/ Nandan Thakur)
In this episode of Neural Search Talks, we're chatting with Nandan Thakur about the state of model evaluation in Information Retrieval. Nandan is the first author of the paper that introduced the BEIR benchmark, and since its publication in 2021 we've seen models hill-climb on its leaderboard while still failing to outperform the BM25 baseline on subsets like Touché 2020. He also shares some insights into what the future of benchmarking IR systems might look like, such as the newly announced TREC RAG track this year.
Timestamps:
0:00 Introduction & the vibe at SIGIR'24
1:19 Nandan's two papers at the conference
2:09 The backstory of the BEIR benchmark
5:55 The shortcomings of BEIR in 2024
8:04 What's up with the Touché 2020 subset of BEIR
11:24 The problem with overfitting on benchmarks
13:09 TREC-RAG: the future of IR benchmarking
17:34 MIRACL & the importance of multilinguality in IR
21:38 Outro