



Volume , Issue , 2017, Pages 77-84

Accelerating binarized neural networks: Comparison of FPGA, CPU, GPU, and ASIC

Author keywords

ASIC; Binarized neural networks; CPU; Data analytics; Deep learning; FPGA; GPU; Hardware accelerator

Indexed keywords

APPLICATION SPECIFIC INTEGRATED CIRCUITS; BINS; COMPUTATIONAL EFFICIENCY; DEEP LEARNING; DEEP NEURAL NETWORKS; EFFICIENCY; FIELD PROGRAMMABLE GATE ARRAYS (FPGA); GRAPHICS PROCESSING UNIT; HARDWARE; PROGRAM PROCESSORS;

EID: 85016000557     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1109/FPT.2016.7929192     Document Type: Conference Paper
Times cited: 267

References (16)
  • 8. A. Krizhevsky, et al., "ImageNet classification with deep convolutional neural networks," NIPS, 2012. (Scopus EID: 84876231242)
  • 14. E. Nurvitadhi, A. Mishra, D. Marr, "A sparse matrix vector multiply accelerator for support vector machine," CASES, 2015. (Scopus EID: 84962229245)
  • 16. E. Nurvitadhi, et al., "Accelerating recurrent neural networks in analytics servers: Comparison of FPGA, CPU, GPU, and ASIC," FPL, 2016. (Scopus EID: 84994813371)


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.