Volume: None     Issue: None     Pages: None     Year: 2018

ESGD: Communication efficient distributed deep learning on the edge

Author keywords

[No Author keywords available]

Indexed keywords

EDGE COMPUTING; GRADIENT METHODS; INTERNET OF THINGS; LEARNING ALGORITHMS; NEURAL NETWORKS; STOCHASTIC SYSTEMS

EID: 85084164409     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 22

References (21)
  • 9
    • EID: 84910069984
    • Seide, F., Fu, H., Droppo, J., Li, G., and Yu, D. "1-Bit Stochastic Gradient Descent and Its Application to Data-Parallel Distributed Training of Speech DNNs". In: INTERSPEECH (2014).
  • 14
    • EID: 85077743953
    • Resource Efficient ML for Edge and Endpoint IoT Devices. URL: https://www.microsoft.com/en-us/research/project/resource-efficient-ml-for-the-edge-and-endpoint-iot-devices/ (visited on 05/02/2018).
  • 16
    • EID: 84959142008
    • Strom, N. "Scalable Distributed DNN Training Using Commodity GPU Cloud Computing". In: INTERSPEECH (2015).


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS DB.