Evaluation Measures in Information Retrieval: A Theoretical Framework for Unranked Retrieval Sets
Yusuf Durachman
Abstract
There are many alternatives in designing an IR system. How do we know which of these techniques are effective for which applications? Should we use stop lists? Should we stem? Should we use inverse document frequency weighting? Information retrieval has developed as a highly empirical discipline, requiring careful and thorough evaluation to demonstrate the superior performance of novel techniques on representative document collections. This research presents the most widely used measures of IR system effectiveness and the test collections most often used for this purpose. It then introduces the straightforward notion of relevant and nonrelevant documents and the formal evaluation methodology that has been developed for evaluating unranked retrieval results. This includes explaining the kinds of evaluation measures that are standardly used for document retrieval and related tasks such as text classification, and why they are appropriate. This research can be valuable for those who want to do research in the field of IR.
Keywords: Information Retrieval, Evaluation and Measurement, Precision and Recall
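The measures named in the keywords can be made concrete with a short sketch. The following is a minimal illustration of precision, recall, and F1 for an unranked retrieval set; the document IDs and relevance judgments are hypothetical examples, not data from this paper.

```python
# Sketch: precision, recall, and F1 for an unranked retrieval set.
# All document IDs and relevance judgments below are hypothetical.

def precision_recall_f1(retrieved, relevant):
    """Compute precision, recall, and F1 from sets of document IDs."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    # Precision: fraction of retrieved documents that are relevant.
    precision = true_positives / len(retrieved) if retrieved else 0.0
    # Recall: fraction of relevant documents that were retrieved.
    recall = true_positives / len(relevant) if relevant else 0.0
    # F1: harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: the system returns 4 documents, 3 of which are among
# the 6 documents judged relevant for the query.
p, r, f = precision_recall_f1({"d1", "d2", "d3", "d7"},
                              {"d1", "d2", "d3", "d4", "d5", "d6"})
print(p, r, f)  # 0.75 0.5 0.6
```

The harmonic mean in F1 penalizes systems that trade one measure entirely for the other, which is why it is preferred over a simple average when summarizing unranked retrieval effectiveness in a single number.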