Volume 5766 LNCS, 2009, Pages 342-345

IR evaluation without a common set of topics

Author keywords

IR effectiveness; Topics; TREC

Indexed keywords

IR EFFECTIVENESS; SYSTEM EFFECTIVENESS; TOPICS; TREC;

EID: 70350610816     PISSN: 0302-9743     EISSN: 1611-3349     Source Type: Book Series
DOI: 10.1007/978-3-642-04417-5_35     Document Type: Conference Paper
Times cited: 6

References (11)
  • 1. Alonso, O., Mizzaro, S.: Relevance criteria for e-commerce: A crowdsourcing-based experimental analysis. In: 32nd SIGIR (2009) (in press)
  • 2. Alonso, O., Rose, D., Stewart, B.: Crowdsourcing for relevance evaluation. SIGIR Forum 42(2), 9-15 (2008)
  • 3. Buckley, C., Voorhees, E.: Evaluating evaluation measure stability. In: 23rd SIGIR, pp. 33-40 (2000)
  • 4. Guiver, J., Mizzaro, S., Robertson, S.: A few good topics: Experiments in topic set reduction for retrieval evaluation. In: ACM TOIS (2009) (in press)
  • 5. Mizzaro, S.: The Good, the Bad, the Difficult, and the Easy: Something Wrong with Information Retrieval Evaluation? In: Macdonald, C., Ounis, I., Plachouras, V., Ruthven, I., White, R.W. (eds.) ECIR 2008. LNCS, vol. 4956, pp. 642-646. Springer, Heidelberg (2008)
  • 6. Mizzaro, S., Robertson, S.: HITS hits TREC: Exploring IR evaluation results with network analysis. In: 30th SIGIR, pp. 479-486 (2007)
  • 7. Sanderson, M., Zobel, J.: Information retrieval system evaluation: Effort, sensitivity, and reliability. In: 28th SIGIR, pp. 162-169 (2005)
  • 9. Voorhees, E., Buckley, C.: The effect of topic set size on retrieval experiment error. In: 25th SIGIR, pp. 316-323 (2002)
  • 10. Webber, W., Moffat, A., Zobel, J.: Score standardization for inter-collection comparison of retrieval systems. In: 31st SIGIR, pp. 51-58 (2008)
  • 11. Zobel, J.: How reliable are the results of large-scale information retrieval experiments? In: 21st SIGIR, pp. 307-314 (1998)


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.