Ian Goodfellow Sums Up Google's Fruitful ICLR 2017

By: 奕欣 | 2017-04-25 16:14 | Topic: ICLR 2017
Lead: Ian Goodfellow of the Google Brain team published a post today on the Google Research blog summarizing Google's academic contributions at ICLR 2017.

雷鋒網(wǎng)消息,谷歌大腦團隊的 Ian Goodfellow 今日在研究院官網(wǎng)上撰文,總結(jié)了谷歌在 ICLR 2017 上所做的學(xué)術(shù)貢獻。雷鋒網(wǎng)編譯全文如下,未經(jīng)許可不得轉(zhuǎn)載。

This week, the fifth International Conference on Learning Representations (ICLR 2017) is being held in Toulon, France. The conference focuses on how machine learning can learn meaningful and useful representations of data. ICLR consists of a conference track and a workshop track, inviting researchers with accepted oral and poster presentations to share work covering deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and non-convex optimization.

站在神經(jīng)網(wǎng)絡(luò)及深度學(xué)習(xí)領(lǐng)域浪潮之巔,谷歌關(guān)注理論與實踐,并致力于開發(fā)理解與總結(jié)的學(xué)習(xí)方法。作為 ICLR 2017 的白金贊助商,谷歌有超過 50 名研究者出席本次會議(大部分為谷歌大腦團隊及谷歌歐洲研究分部的成員),通過在現(xiàn)場展示論文及海報的方式,為建設(shè)一個更完善的學(xué)術(shù)研究交流平臺做出了貢獻,也是一個互相學(xué)習(xí)的過程。此外,谷歌的研究者也是 workshops 及組委會構(gòu)建的中堅力量。

If you are at ICLR 2017, we hope you will stop by our booth and talk with our researchers about how to solve interesting problems for billions of people.

The following are the papers Google presented at ICLR 2017 (Google researchers were indicated in bold in the original post):

區(qū)域主席

George Dahl, Slav Petrov, Vikas Sindhwani

Program Chairs (previously profiled by Leifeng.com)

Hugo Larochelle, Tara Sainath

Invited Talks

  • Understanding Deep Learning Requires Rethinking Generalization (Best Paper Award)

    Chiyuan Zhang*, Samy Bengio, Moritz Hardt, Benjamin Recht*, Oriol Vinyals

  • Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data (Best Paper Award)

    Nicolas Papernot*, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

  • Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic

    Shixiang (Shane) Gu*, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

  • Neural Architecture Search with Reinforcement Learning

    Barret Zoph, Quoc Le

Posters

  • Adversarial Machine Learning at Scale

    Alexey Kurakin, Ian J. Goodfellow†, Samy Bengio

  • Capacity and Trainability in Recurrent Neural Networks

    Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo

  • Improving Policy Gradient by Exploring Under-Appreciated Rewards

    Ofir Nachum, Mohammad Norouzi, Dale Schuurmans

  • Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

    Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean

  • Unrolled Generative Adversarial Networks

    Luke Metz, Ben Poole*, David Pfau, Jascha Sohl-Dickstein

  • Categorical Reparameterization with Gumbel-Softmax

    Eric Jang, Shixiang (Shane) Gu*, Ben Poole*

  • Decomposing Motion and Content for Natural Video Sequence Prediction

    Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

  • Density Estimation Using Real NVP

    Laurent Dinh*, Jascha Sohl-Dickstein, Samy Bengio

  • Latent Sequence Decompositions

    William Chan*, Yu Zhang*, Quoc Le, Navdeep Jaitly*

  • Learning a Natural Language Interface with Neural Programmer

    Arvind Neelakantan*, Quoc V. Le, Martín Abadi, Andrew McCallum*, Dario Amodei*

  • Deep Information Propagation

    Samuel Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein

  • Identity Matters in Deep Learning

    Moritz Hardt, Tengyu Ma

  • A Learned Representation For Artistic Style

    Vincent Dumoulin*, Jonathon Shlens, Manjunath Kudlur

  • Adversarial Training Methods for Semi-Supervised Text Classification

    Takeru Miyato, Andrew M. Dai, Ian Goodfellow†

  • HyperNetworks

    David Ha, Andrew Dai, Quoc V. Le

  • Learning to Remember Rare Events

    Lukasz Kaiser, Ofir Nachum, Aurko Roy*, Samy Bengio

Workshop Track

  • Particle Value Functions

    Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh

  • Neural Combinatorial Optimization with Reinforcement Learning

    Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio

  • Short and Deep: Sketching and Neural Networks

    Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

  • Explaining the Learning Dynamics of Direct Feedback Alignment

    Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein

  • Training a Subsampling Mechanism in Expectation

    Colin Raffel, Dieterich Lawson

  • Tuning Recurrent Neural Networks with Reinforcement Learning

    Natasha Jaques*, Shixiang (Shane) Gu*, Richard E. Turner, Douglas Eck

  • REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models

    George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein

  • Adversarial Examples in the Physical World

    Alexey Kurakin, Ian Goodfellow†, Samy Bengio

  • Regularizing Neural Networks by Penalizing Confident Output Distributions

    Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton

  • Unsupervised Perceptual Rewards for Imitation Learning

    Pierre Sermanet, Kelvin Xu, Sergey Levine

  • Changing Model Behavior at Test-time Using Reinforcement Learning

    Augustus Odena, Dieterich Lawson, Christopher Olah

* 工作內(nèi)容在谷歌就職時完成

? 工作內(nèi)容在 OpenAI 任職時完成

詳細(xì)信息可訪問 research.googleblog 了解,雷鋒網(wǎng)編譯。

雷峰網(wǎng)版權(quán)文章,未經(jīng)授權(quán)禁止轉(zhuǎn)載。詳情見轉(zhuǎn)載須知。
