Large Language Models (LLMs) are transforming information access by enabling natural language interaction. However, current systems exhibit several limitations:
- LLMs were not originally designed for Information Retrieval (IR) and require adaptation to interact effectively with document search.
- Issues of factual accuracy and hallucination compromise the reliability of generated responses.
- They generalize poorly to low-resource languages and domains, which limits accessibility; addressing this requires a deeper understanding of their internal mechanisms.
- Limited explainability makes it difficult to justify search results transparently.
This workshop will address the challenges of dialogue-based information access and the integration of retrieval-augmented generation to improve conversational search systems.
We invite contributions on models, datasets, and evaluation methodologies aimed at enhancing explainability, robustness, and fairness in this context.
This workshop is linked to the ANR GUIDANCE project and will be followed by a hackathon related to the iKAT task of TREC 2025 (https://www.trecikat.com/). It will be held over a half-day at the CORIA-TALN 2025 conference in Marseille.
Call for papers
We welcome studies, preliminary work, or roundtable proposals related to conversational information retrieval (IR), covering the following thematic areas:
Models Best Suited for Conversational IR
- Combination of dense and sparse IR approaches
- Continual learning

Architectures for Interactive Information Access
- Clarification and reformulation strategies in conversational search
- Design of specialized prompts for query generation

Evaluation and Collections for Conversational Information Retrieval
- Discussion of collections and evaluation measures, including early-stage collections
- Data collection and annotation tools for evaluation

Explainability in Neural Information Retrieval Models

Adaptation to Low-Resource Languages and Domains
- Zero-shot and few-shot learning for domain adaptation
- Managing linguistic diversity and adapting to under-resourced languages

Factuality, Bias, and Truthfulness
- Detection and mitigation of hallucinations in LLM-generated responses
- Evaluation of fairness and bias in conversational systems
- Fact-checking mechanisms for information access
Submission guidelines
Submission types: scientific articles*, roundtable proposals, interactive sessions with participants, teaching resources, experience reports, demos, etc.

*Articles previously published in a journal or at another conference may also be considered for abstract submission to the workshop.