NeurIPS Debriefing Seminar

The National NeurIPS Debriefing is a yearly seminar (inspired by a similar seminar in Paris) in which PhD students and senior researchers from anywhere in the Netherlands give short presentations on the paper they each found most interesting at the previous NeurIPS conference. If you are interested in NeurIPS, please consider being one of the presenters.

The format is two hours of short talks (15 or 20 minutes each, in English). Presenters need not have attended NeurIPS, and they usually do not present their own papers. Talks are informal and the atmosphere is friendly, so this is an ideal opportunity for PhD students to gain experience in giving presentations.

The current organizers are: Jack Mayo and Tim van Erven. If you are interested in presenting a paper or if you have any questions, please contact Jack.

Upcoming seminars will be announced on the machine learning Netherlands mailing list.

NeurIPS 2021 Debriefing

Tuesday March 15, 2022
This year’s meeting will be online, via Zoom:

14:00-14:30 Julia Olkhovskaya (VU) Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination by Foster, Krishnamurthy
14:30-15:00 Ahmad Hammoudeh (University of Mons, ISIA Lab & MAIA Artificial Intelligence Lab) On the Frequency Bias of Generative Models by Schwarz, Liao, Geiger
15:00-15:15 Break
15:15-15:45 Hidde Fokkema (UvA) Framing RNN as a kernel method: A neural ODE approach by Fermanian, Marion, Vert, Biau

Canceled: The talk by Mustafa Celikok (TU Delft) on Bayesian Bellman Operators by Fellows, Hartikainen, Whiteson has been canceled.

NeurIPS 2020 Debriefing

Friday March 5, 2021
This year’s meeting will be online, via Zoom:

14:00-14:30 Mustafa Celikok, TU Delft A Unifying View of Optimism in Episodic Reinforcement Learning by Gergely Neu, Ciara Pike-Burne
14:30-15:00 Alexander Mey, TU Delft Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes by Yi Tian, Jian Qian, Suvrit Sra
15:00-15:15 Break
15:15-15:45 Alexander Ly, CWI Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree by Peizhong Ju, Xiaojun Lin, Jia Liu
15:45-16:00 Peter van der Putten, Leiden University Creative intermezzo
16:15-16:45 Jack Mayo, University of Amsterdam Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards by Kyungjae Lee, Hongjun Yang, Sungbin Lim, Songhwai Oh

In addition to the talks, we will also have a creative intermezzo, in which Peter van der Putten (Leiden University) will showcase a creative piece using the GPT3 playground that was part of the digital gallery of the NeurIPS Workshop on Machine Learning for Creativity and Design 2020: Link 1, Link 2.

NeurIPS 2019 Debriefing

Friday February 28, 2020
Leiden University, Snellius Building, Niels Bohrweg 1, Leiden
Room 176

List of speakers:

Frans Oliehoek, TU Delft Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning by Van Seijen, Fatemi, Tavakoli
Wouter Koolen, CWI Optimistic Regret Minimization for Extensive-Form Games via Dilated Distance-Generating Functions by Farina, Kroer, Sandholm
Albert Senen-Cerda, TU Eindhoven q-means: A quantum algorithm for unsupervised machine learning by Kerenidis, Landman, Luongo, Prakash
Free slot for one more speaker

NeurIPS 2018 Debriefing

Thursday February 14, 2019 15h00-17h00 Leiden University, Snellius Building, Niels Bohrweg 1, Leiden Room 176

List of speakers:

Rémy Degenne, CWI Coordinate Descent with Bandit Sampling by Salehi, Thiran, Celis
Changyong Oh, UvA Geometrically Coupled Monte Carlo Sampling by Rowland et al.
Zakaria Mhammedi, Australian National University Direct Runge-Kutta Discretization Achieves Acceleration by Zhang et al.
Wouter Koolen, CWI Acceleration through Optimistic No-Regret Dynamics by Wang and Abernethy

NIPS 2017 Debriefing

Wednesday February 28, 2018
Leiden University, Snellius Building, Niels Bohrweg 1, Leiden
Room 174

Thomas Moerland Thinking Fast and Slow with Deep Learning and Tree Search by Anthony, Tian, Barber slides
Dirk van der Hoeven Parameter free online learning via model selection by Foster, Kale, Mohri, Sridharan
Wouter Koolen Safe and Nested Subgame Solving for Imperfect-Information Games by Brown, Sandholm
William Weimin Yoo Dynamic Routing Between Capsules by Sabour, Frosst, Hinton slides
Tim van Erven A high-level overview of recent thinking on why neural networks generalize slides

NIPS 2016 Debriefing

Friday January 20, 2017
Leiden University, Snellius Building, Niels Bohrweg 1, Leiden
Room 402

List of speakers:

Nishant Mehta: On the Recursive Teaching Dimension of VC Classes by Chen, Cheng, Tang
Kevin Duisters: Matrix Completion has No Spurious Local Minimum by Ge, Lee, Ma
Dirk van der Hoeven: Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning by Grill, Valko, Munos
Tim van Erven: Deep Learning without Poor Local Minima by Kawaguchi