Volume 44, Issue 6, 2008, Pages 1879-1885

On test collections for adaptive information retrieval

Author keywords

Cranfield; Retrieval evaluation; Test collections

Indexed keywords

Experiments; Information services; Planning; Search engines; Strategic planning

EID: 53849148992     PISSN: 03064573     EISSN: None     Source Type: Journal    
DOI: 10.1016/j.ipm.2007.12.011     Document Type: Article
Times cited : (28)

References (22)
  • 1
    • Allan, J. (2005). HARD track overview in TREC 2004: High accuracy retrieval from documents. In Proceedings of the Thirteenth Text REtrieval Conference, TREC 2004 (pp. 25-35). NIST Special Publication 500-261.
  • 2
    • Banks, D., Over, P., & Zhang, N.-F. (1999). Blind men and elephants: Six approaches to TREC data. Information Retrieval, 1, 7-34.
  • 3
    • Buckley, C., Dimmick, D., Soboroff, I., & Voorhees, E. (2006). Bias and the limits of pooling. In Proceedings of the Twenty-Ninth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2006) (pp. 619-620).
  • 4
    • Buckley, C., & Voorhees, E. M. (2000). Evaluating evaluation measure stability. In N. Belkin, P. Ingwersen, & M. Leong (Eds.), Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 33-40).
  • 6
    • Cleverdon, C. W. (1991). The significance of the Cranfield tests on index languages. In Proceedings of the Fourteenth Annual International ACM/SIGIR Conference on Research and Development in Information Retrieval (pp. 3-12).
  • 7
    • Cuadra, C. A., & Katter, R. V. (1967). Opening the black box of relevance. Journal of Documentation, 23(4), 291-303.
  • 8
    • Harter, S. P. (1996). Variations in relevance assessments and the measurement of retrieval effectiveness. Journal of the American Society for Information Science, 47(1), 37-49.
  • 9
    • Hersh, W., Turpin, A., Price, S., Chan, B., Kraemer, D., Sacherek, L., et al. (2000). Do batch and user evaluations give the same results? In N. Belkin, P. Ingwersen, & M. Leong (Eds.), Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 17-24).
  • 10
    • Ingwersen, P., Ruthven, I., & Belkin, N. (2007). First international symposium on information interaction in context. ACM SIGIR Forum, 41(1), 117-119.
  • 11
    • Lagergren, E., & Over, P. (1998). Comparing interactive information retrieval systems across sites: The TREC-6 interactive track matrix experiment. In W. B. Croft, A. Moffat, C. van Rijsbergen, R. Wilkinson, & J. Zobel (Eds.), Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 164-172). Melbourne, Australia: ACM Press.
  • 12
    • Robertson, S. E. (1990). On sample sizes for non-matched-pair IR experiments. Information Processing and Management, 26(6), 739-753.
  • 13
    • Sakai, T. (2006). Evaluating evaluation metrics based on the bootstrap. In Proceedings of the Twenty-Ninth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2006) (pp. 525-532).
  • 14
    • Sanderson, M., & Zobel, J. (2005). Information retrieval system evaluation: Effort, sensitivity, and reliability. In Proceedings of the Twenty-Eighth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2005) (pp. 162-169).
  • 16
    • Sparck Jones, K. (2001). Automatic language and information processing: Rethinking evaluation. Natural Language Engineering, 7(1), 29-46.
  • 18
    • Tague-Sutcliffe, J. (1992). The pragmatics of information retrieval experimentation, revisited. Information Processing and Management, 28(4), 467-490.
  • 19
    • Taube, M. (1965). A note on the pseudomathematics of relevance. American Documentation, 16(2), 69-72.
  • 20
    • Turpin, A. H., & Hersh, W. (2001). Why batch and user evaluations do not give the same results. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 225-231).
  • 21
    • Voorhees, E. M., & Buckley, C. (2002). The effect of topic set size on retrieval experiment error. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 316-323).
  • 22
    • White, R. W., Muresan, G., & Marchionini, G. (2006). Workshop on evaluating exploratory search systems. ACM SIGIR Forum, 40(2), 52-60.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.