
Geoffrey E. Hinton

    Canada – 2018
    Short Annotated Bibliography
1. Ackley, D. H., G. E. Hinton and T. J. Sejnowski (1985) “A learning algorithm for Boltzmann machines,” Cognitive Science, 9, pp. 147–169.
      An early and highly influential description of the Boltzmann machine, a class of neural networks inspired by statistical approaches to physics. This innovation underpinned much of Hinton’s later work.
2. Rumelhart, D. E., G. E. Hinton and R. J. Williams (1986) “Learning representations by back-propagating errors,” Nature, 323, pp. 533–536, and Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986) “Learning internal representations by error propagation,” in Rumelhart, D. E. and McClelland, J. L., editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations, MIT Press, Cambridge, MA, pp. 318–362.
      Two descriptions of the new approach to training neural networks, which Hinton and his collaborators termed “back-propagation.”  The Nature paper is concise and clearly argued, while the longer paper provides compelling detail. Together they helped to revive interest in connectionist approaches to machine learning.
3. Hinton, G. E., S. Osindero, and Y. Teh (2006) “A Fast Learning Algorithm for Deep Belief Nets,” Neural Computation, 18, pp. 1527–1554.
      Returning to Boltzmann machines, Hinton and his collaborators introduced a new and efficient unsupervised learning algorithm applicable to a restricted subclass of Boltzmann machines. This demonstrated unexpected gains from the introduction of pre-trained “hidden” layers of neurons between input and output.
    4. Hinton, G. et al. (2012) “Deep Neural Networks for Acoustic Modeling in Speech Recognition,” IEEE Signal Processing Magazine, 29, 82–97.
      In this paper, Hinton partnered with co-authors from the groups working on speech recognition at Microsoft Research, Google and IBM Research to document the success they were achieving by applying deep learning to phonetic classification. This was the application that moved deep learning from experimental technique to industrial practice.
    5. Krizhevsky, A., I. Sutskever & G. Hinton (2012) “ImageNet Classification With Deep Convolutional Neural Networks,” Proc. Advances in Neural Information Processing Systems 25, pp. 1090–1098.
      This report described the design of the SuperVision program that won the 2012 ImageNet classification competition with a spectacular improvement over the performance of existing methods. Following its publication the designers of computer vision systems shifted rapidly towards deep learning methods.
6. LeCun, Y., Y. Bengio and G. E. Hinton (2015) “Deep Learning,” Nature, 521, pp. 436–444.
      A recent and accessible summary of the methods that Hinton and his co-winners termed “deep learning,” because of their reliance on neural networks with multiple, specialized, layers of neurons between input and output nodes. It addressed a surge of interest in their work following the successful demonstration of these methods for object categorization, face identification, and speech recognition.
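The core of the back-propagation procedure described in entry 2 — a forward pass, followed by propagating the error derivative backwards through the layers to obtain gradient-descent updates — can be sketched in a few lines of NumPy. The network size, learning rate, and XOR training task below are illustrative choices for this sketch, not details taken from the papers.

```python
import numpy as np

# Back-propagation sketch: one hidden layer of sigmoid units trained
# by gradient descent on squared error, learning XOR. Layer widths,
# learning rate, and iteration count are illustrative assumptions.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: error derivatives flow from output to hidden layer.
    d_out = (out - y) * out * (1 - out)          # d(loss)/d(pre-activation)
    d_h = (d_out @ W2.T) * h * (1 - h)           # propagated one layer back
    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(f"squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The key step is computing `d_h` from `d_out`: the same chain-rule recursion extends to any number of hidden layers, which is what makes the procedure applicable to the deep networks of the later entries.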

     
