Volume 17, 2016

Learning with differential privacy: Stability, learnability and the sufficiency and necessity of ERM principle

Author keywords

Characterization; Differential privacy; Learnability; Privacy preserving machine learning; Stability

Indexed keywords

ARTIFICIAL INTELLIGENCE; CHARACTERIZATION; CONVERGENCE OF NUMERICAL METHODS; DATA PRIVACY; LEARNING SYSTEMS;

EID: 84995390313     PISSN: 1532-4435     EISSN: 1533-7928     Source Type: Journal
DOI: None     Document Type: Article
Times cited: 81

References (39)
  • 6. Amos Beimel, Hai Brenner, Shiva Prasad Kasiviswanathan, and Kobbi Nissim. Bounds on the sample complexity for private learning and private data release. Machine Learning, 94(3):401-437, 2014.
  • 7
  • 11. Kamalika Chaudhuri and Daniel Hsu. Sample complexity bounds for differentially private learning. In Conference on Learning Theory (COLT-11), volume 19, pages 155-186, 2011.
  • 16. Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pages 265-284. Springer, 2006.
  • 17. Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth. The reusable holdout: Preserving validity in adaptive data analysis. Science, 349(6248):636-638, 2015.
  • 20. David Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100(1):78-150, 1992.
  • 23. Michael Kearns and Dana Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. Neural Computation, 11(6):1427-1453, 1999.
  • 25. Daniel Kifer, Adam Smith, and Abhradeep Thakurta. Private convex empirical risk minimization and high-dimensional regression. Journal of Machine Learning Research, 1:41, 2012.
  • 28. Sayan Mukherjee, Partha Niyogi, Tomaso Poggio, and Ryan Rifkin. Learning theory: Stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics, 25(1-3):161-193, 2006.
  • 30. Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197-227, 1990.
  • 32. Adam Smith. Privacy-preserving statistical estimation with optimal convergence rates. In ACM Symposium on Theory of Computing (STOC-11), pages 813-822, 2011.
  • 33. Abhradeep Guha Thakurta and Adam Smith. Differentially private feature selection via stability arguments, and the robustness of the lasso. In Conference on Learning Theory (COLT-13), pages 819-850, 2013.
  • 34. Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
  • 39. Fei Yu, Stephen E. Fienberg, Aleksandra B. Slavković, and Caroline Uhler. Scalable privacy-preserving data sharing methodology for genome-wide association studies. Journal of Biomedical Informatics, 50:133-141, 2014.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.