Volume 2016-December, 2016, Pages 5534-5542

Situation recognition: Visual semantic role labeling for image understanding

Author keywords

[No Author keywords available]

Indexed keywords

COMPUTER VISION; SEMANTICS; WOOL; YARN;

EID: 84986247420     PISSN: 10636919     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1109/CVPR.2016.597     Document Type: Conference Paper
Times cited: 300

References (52)
  • 4. V. Delaitre et al. Recognizing human actions in still images: A study of bag-of-features and part-based representations. In BMVC, 2010. (Scopus EID: 84898427335)
  • 5. J. Deng et al. Construction and Analysis of a Large Scale Image Ontology. Vision Sciences Society, 2009. (Scopus EID: 84946590544)
  • 6. S. Divvala et al. An empirical study of context in object detection. In CVPR, 2009. (Scopus EID: 70450161428)
  • 7. D. Elliott et al. Comparing automatic evaluation measures for image description. In ACL, 2014. (Scopus EID: 84906928552)
  • 8. V. R. et al. Linking people with "their" names using coreference resolution. In ECCV, vol. 3, 2014. (Scopus EID: 85009936768)
  • 10. M. Everingham et al. The PASCAL visual object classes challenge 2009. In 2nd PASCAL Challenge Workshop, 2009. (Scopus EID: 84921069139)
  • 12. A. Farhadi et al. Every picture tells a story: Generating sentences from images. In ECCV 2010, pages 15-29, 2010. (Scopus EID: 78149311145)
  • 13. C. Fellbaum. WordNet. Wiley Online Library, 1998. (Scopus EID: 0012686456)
  • 15. N. FitzGerald et al. Semantic role labelling with neural network factors. In EMNLP, 2015. (Scopus EID: 84959925712)
  • 16. A. Frome et al. DeViSE: A deep visual-semantic embedding model. In NIPS, 2013. (Scopus EID: 84898958665)
  • 17. C. Galleguillos et al. Context based object categorization: A critical survey. CVIU, 2010. (Scopus EID: 78651403274)
  • 19. Y. Goldberg et al. A dataset of syntactic-ngrams over time from a very large corpus of English books. In SEM, 2013. (Scopus EID: 84943742382)
  • 20. S. Guadarrama et al. YouTube2Text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In ICCV, 2013. (Scopus EID: 84898773262)
  • 21. G. Guo et al. A survey on still image based human action recognition. Pattern Recognition, 2014. (Scopus EID: 84902318725)
  • 22. A. Gupta et al. Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In ECCV, 2008. (Scopus EID: 70450155469)
  • 24. M. Hodosh et al. Framing image description as a ranking task: Data, models and evaluation metrics. JAIR, 2013. (Scopus EID: 84883394520)
  • 27. P. Kingsbury and M. Palmer. From treebank to propbank. In LREC. Citeseer, 2002. (Scopus EID: 84964379003)
  • 28. C. Kong et al. What are you talking about? Text-to-image coreference. In CVPR, 2014. (Scopus EID: 84911370987)
  • 29. A. Lazaridou et al. Is this a wampimuk? In ACL, 2014. (Scopus EID: 85009863830)
  • 30. D.-T. Le et al. TUHOI: Trento universal human object interaction dataset. V&L Net 2014, 2014. (Scopus EID: 85062874978)
  • 31. L.-J. Li et al. What, where and who? Classifying events by scene and object recognition. In CVPR, 2007. (Scopus EID: 78149310629)
  • 32. T.-Y. Lin et al. Microsoft COCO: Common objects in context. In ECCV, 2014. (Scopus EID: 85009931853)
  • 33. S. Maji et al. Action recognition from a distributed representation of pose and appearance. In CVPR, 2011. (Scopus EID: 80052880806)
  • 35. M. Marszalek et al. Actions in context. In CVPR, 2009. (Scopus EID: 70450177757)
  • 36. V. Ordonez et al. Im2text: Describing images using 1 million captioned photographs. In NIPS, 2011. (Scopus EID: 85162522202)
  • 37. M. Palmer. SemLink: Linking PropBank, VerbNet and FrameNet. In GLC, pages 9-15, 2009. (Scopus EID: 81255158440)
  • 38. A. Rabinovich et al. Objects in context. In ICCV, 2007. (Scopus EID: 50649096757)
  • 40. M. Ronchi et al. Describing common human visual actions in images. In BMVC, 2015. (Scopus EID: 84994124048)
  • 41. O. Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge. CoRR, 2014. (Scopus EID: 84921954402)
  • 42. C. Silberer et al. Grounded models of semantic representation. In EMNLP, 2012. (Scopus EID: 84883376937)
  • 47. B. Yao et al. Modeling mutual context of object and human pose in human-object interaction activities. In CVPR, 2010. (Scopus EID: 85009851491)
  • 48. B. Yao et al. Grouplet: A structured image representation for recognizing human and object interactions. In CVPR, 2010. (Scopus EID: 77955987964)
  • 49. B. Yao et al. Human action recognition by learning bases of action attributes and parts. In ICCV, 2011. (Scopus EID: 84856672971)
  • 50. M. Yatskar et al. See no evil, say no evil: Description generation from densely labeled images. SEM, 2014. (Scopus EID: 85026937926)
  • 52. Y. Zhu et al. Reasoning about object affordances in a knowledge base representation. In ECCV, 2014. (Scopus EID: 84952058866)


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.