| paper_id (string, length 19-21) | paper_title (string, length 8-170) | paper_abstract (string, length 8-5.01k) | paper_acceptance (string, 18 classes) | meta_review (string, length 29-10k) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2018_ryQu7f-RZ | On the Convergence of Adam and Beyond | Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSProp, Adam, Adadelta, Nadam are based on using gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with ``long-term memory'' of past gradients, and propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance. | accepted-oral-papers | This paper analyzes a problem with the convergence of Adam, and presents a solution. It identifies an error in the convergence proof of Adam (which also applies to related methods such as RMSProp) and gives a simple example where it fails to converge. The paper then repairs the algorithm in a way that guarantees convergence without introducing much computational or memory overhead. There ought to be a lot of interest in this paper: Adam is a widely used algorithm, but sometimes underperforms SGD on certain problems, and this could be part of the explanation. The fix is both principled and practical. Overall, this is a strong paper, and I recommend acceptance.
| test | [
"HkhdRaVlG",
"H15qgiFgf",
"Hyl2iJgGG",
"BJQcTsbzf",
"HJXG6sWzG",
"H16UnjZMM",
"ryA-no-zz",
"HJTujoWGG",
"ByhZijZfG",
"SkjC2Ni-z",
"SJXpTMFbf",
"rkBQ_QuWf",
"Sy5rDQu-z",
"SJRh-9lef",
"Bye7sLhkM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public",
"author",
"public"
] | [
"The paper presents three contributions: 1) it shows that the proof of convergence Adam is wrong; 2) it presents adversarial and stochastic examples on which Adam converges to the worst possible solution (i.e. there is no hope to just fix Adam's proof); 3) it proposes a variant of Adam called AMSGrad that fixes the... | [
9,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ryQu7f-RZ",
"iclr_2018_ryQu7f-RZ",
"iclr_2018_ryQu7f-RZ",
"HkhdRaVlG",
"H15qgiFgf",
"Sy5rDQu-z",
"SJXpTMFbf",
"SkjC2Ni-z",
"Hyl2iJgGG",
"iclr_2018_ryQu7f-RZ",
"iclr_2018_ryQu7f-RZ",
"HkhdRaVlG",
"iclr_2018_ryQu7f-RZ",
"Bye7sLhkM",
"iclr_2018_ryQu7f-RZ"
] |
iclr_2018_BJ8vJebC- | Synthetic and Natural Noise Both Break Neural Machine Translation | Character-based neural machine translation (NMT) models alleviate out-of-vocabulary issues, learn morphology, and move us closer to completely end-to-end translation systems. Unfortunately, they are also very brittle and easily falter when presented with noisy data. In this paper, we confront NMT models with synthetic and natural sources of noise. We find that state-of-the-art models fail to translate even moderately noisy texts that humans have no trouble comprehending. We explore two approaches to increase model robustness: structure-invariant word representations and robust training on noisy texts. We find that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise. | accepted-oral-papers | The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* The paper is a first attempt to investigate an under-studied area in neural MT (and potentially other applications of sequence-to-sequence models as well)
* This area might have a large impact; existing models such as Google Translate fail badly on the inputs described here
* Experiments are very carefully designed and thorough
* Experiments on not only synthetic but also natural noise add significant reliability to the results
* Paper is well-written and easy to follow
Cons:
* There may be better architectures for this problem than the ones proposed here
* Even the natural noise is not entirely natural, e.g. artificially constrained to exist within words
* Paper is not a perfect fit for ICLR (although ICLR is attempting to cast a wide net, so this alone is not a critical criticism of the paper)
This paper had uniformly positive reviews and has potential for large real-world impact. | train | [
"SJoXiUUNM",
"SkABkz5gM",
"BkQzs54VG",
"BkVD7bqlf",
"SkeZfu2xG",
"SyTfeD5bz",
"B1dT1vqWf",
"HJ1vJDcZz",
"HyRwAIqWf",
"rJIbAd7-z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"Thanks for your thoughtful response to my review.",
"This paper investigates the impact of character-level noise on various flavours of neural machine translation. It tests 4 different NMT systems with varying degrees and types of character awareness, including a novel meanChar system that uses averaged unigram ... | [
-1,
7,
-1,
7,
8,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"HJ1vJDcZz",
"iclr_2018_BJ8vJebC-",
"rJIbAd7-z",
"iclr_2018_BJ8vJebC-",
"iclr_2018_BJ8vJebC-",
"rJIbAd7-z",
"BkVD7bqlf",
"SkABkz5gM",
"SkeZfu2xG",
"iclr_2018_BJ8vJebC-"
] |
iclr_2018_Hk2aImxAb | Multi-Scale Dense Networks for Resource Efficient Image Classification | In this paper we investigate image classification with computational resource limits at test time. Two such settings are: 1. anytime classification, where the network’s prediction for a test example is progressively updated, facilitating the output of a prediction at any time; and 2. budgeted batch classification, where a fixed amount of computation is available to classify a set of examples that can be spent unevenly across “easier” and “harder” inputs. In contrast to most prior work, such as the popular Viola and Jones algorithm, our approach is based on convolutional neural networks. We train multiple classifiers with varying resource demands, which we adaptively apply during test time. To maximally re-use computation between the classifiers, we incorporate them as early-exits into a single deep convolutional neural network and inter-connect them with dense connectivity. To facilitate high quality classification early on, we use a two-dimensional multi-scale network architecture that maintains coarse and fine level features all-throughout the network. Experiments on three image-classification tasks demonstrate that our framework substantially improves the existing state-of-the-art in both settings. | accepted-oral-papers | As stated by reviewer 3 "This paper introduces a new model to perform image classification with limited computational resources at test time. The model is based on a multi-scale convolutional neural network similar to the neural fabric (Saxena and Verbeek 2016), but with dense connections (Huang et al., 2017) and with a classifier at each layer."
As stated by reviewer 2, "My only major concern is the degree of technical novelty with respect to the original DenseNet paper of Huang et al. (2017)." The authors assert novelty in the sense that they provide a solution to improve computational efficiency and focus on this aspect of the problem. Overall, the technical innovation is not huge, but I think this could be a very useful idea in practice.
| train | [
"rJSuJm4lG",
"SJ7lAAYgG",
"rk6gRwcxz",
"Hy_75oomz",
"HkJRFjomf",
"HJiXYjjQz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This work proposes a variation of the DenseNet architecture that can cope with computational resource limits at test time. The paper is very well written, experiments are clearly presented and convincing and, most importantly, the research question is exciting (and often overlooked). \n\nMy only major concern is t... | [
8,
7,
10,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_Hk2aImxAb",
"iclr_2018_Hk2aImxAb",
"iclr_2018_Hk2aImxAb",
"rJSuJm4lG",
"SJ7lAAYgG",
"rk6gRwcxz"
] |
iclr_2018_HJGXzmspb | Training and Inference with Integers in Deep Neural Networks | Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as ``"WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands. | accepted-oral-papers | High quality paper, appreciated by reviewers, likely to be of substantial interest to the community. It's worth an oral to facilitate a group discussion. | train | [
"SkzPEnBeG",
"rJG2o3wxf",
"SyrOMN9eM",
"HJ7oecRZf",
"r1t-e5CZf",
"ryW51cAbG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper proposes a method to train neural networks with low precision. However, it is not clear if this work obtains significant improvements over previous works. \n\nNote that:\n1)\tWorking with 16bit, one can train neural networks with little to no reduction in performance. For example, on ImageNet with AlexN... | [
7,
7,
8,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1
] | [
"iclr_2018_HJGXzmspb",
"iclr_2018_HJGXzmspb",
"iclr_2018_HJGXzmspb",
"SkzPEnBeG",
"rJG2o3wxf",
"SyrOMN9eM"
] |
iclr_2018_HJGv1Z-AW | Emergence of Linguistic Communication from Referential Games with Symbolic and Pixel Input | The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured. | accepted-oral-papers | Important problem (analyzing the properties of emergent languages in multi-agent reference games), a number of interesting analyses (both with symbolic and pixel inputs), reaching a finding that varying the environment and restrictions on language result in variations in the learned communication protocols (which in hindsight is not that surprising, but that's hindsight). While the pixel experiments are not done with real images, it's an interesting addition to the literature nonetheless. | train | [
"HJ3-u2Ogf",
"H15X_V8yM",
"BytyNwclz",
"S1XPn0jXG",
"r1QdpPjXf",
"SJWDw1iXG",
"ryjhESdQG",
"S1GjVrOmz",
"rJylbvSzG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"--------------\nSummary:\n--------------\nThis paper presents a series of experiments on language emergence through referential games between two agents. They ground these experiments in both fully-specified symbolic worlds and through raw, entangled, visual observations of simple synthetic scenes. They provide ri... | [
7,
9,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJGv1Z-AW",
"iclr_2018_HJGv1Z-AW",
"iclr_2018_HJGv1Z-AW",
"iclr_2018_HJGv1Z-AW",
"SJWDw1iXG",
"H15X_V8yM",
"HJ3-u2Ogf",
"HJ3-u2Ogf",
"BytyNwclz"
] |
iclr_2018_Hkbd5xZRb | Spherical CNNs | Convolutional Neural Networks (CNNs) have become the method of choice for learning problems involving 2D planar images. However, a number of problems of recent interest have created a demand for models that can analyze spherical images. Examples include omnidirectional vision for drones, robots, and autonomous cars, molecular regression problems, and global weather and climate modelling. A naive application of convolutional networks to a planar projection of the spherical signal is destined to fail, because the space-varying distortions introduced by such a projection will make translational weight sharing ineffective.
In this paper we introduce the building blocks for constructing spherical CNNs. We propose a definition for the spherical cross-correlation that is both expressive and rotation-equivariant. The spherical correlation satisfies a generalized Fourier theorem, which allows us to compute it efficiently using a generalized (non-commutative) Fast Fourier Transform (FFT) algorithm. We demonstrate the computational efficiency, numerical accuracy, and effectiveness of spherical CNNs applied to 3D model recognition and atomization energy regression. | accepted-oral-papers | This work introduces a trainable signal representation for spherical signals (functions defined in the sphere) which are rotationally equivariant by design, by extending CNNs to the corresponding group SO(3). The method is implemented efficiently using fast Fourier transforms on the sphere and illustrated with compelling tasks such as 3d shape recognition and molecular energy prediction.
Reviewers agreed this is a solid, well-written paper, which demonstrates the usefulness of group invariance/equivariance beyond the standard Euclidean translation group in real-world scenarios. It will be a great addition to the conference. | train | [
"r1VD9T_SM",
"r1rikDLVG",
"SJ3LYkFez",
"B1gQIy9gM",
"Bkv4qd3bG",
"r1CVE6O7f",
"Sy9FmTuQM",
"ryi-Q6_Xf",
"HkZy7TdXM",
"S1rz4yvGf"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"How to describe the relationships between these two papers?",
"Thank you for the feedback; I maintain my opinion.",
"Summary:\n\nThe paper proposes a framework for constructing spherical convolutional networks (ConvNets) based on a novel synthesis of several existing concepts. The goal is to detect patterns i... | [
-1,
-1,
8,
7,
9,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hkbd5xZRb",
"ryi-Q6_Xf",
"iclr_2018_Hkbd5xZRb",
"iclr_2018_Hkbd5xZRb",
"iclr_2018_Hkbd5xZRb",
"Bkv4qd3bG",
"SJ3LYkFez",
"B1gQIy9gM",
"S1rz4yvGf",
"iclr_2018_Hkbd5xZRb"
] |
iclr_2018_S1CChZ-CZ | Ask the Right Questions: Active Question Reformulation with Reinforcement Learning | We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering.
We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer.
The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks.
We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming. | accepted-oral-papers | this submission presents a novel way in which a neural machine reader could be improved. that is, by learning to reformulate a question specifically for the downstream machine reader. all the reviewers found it positive, and so do i. | train | [
"r10KoNDgf",
"HJ9W8iheM",
"Hydu7nFeG",
"Hk9DKzYzM",
"H15NIQOfM",
"SJZ0UmdfM",
"BkGlU7OMz",
"BkXuXQufM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper proposes active question answering via a reinforcement learning approach that can learn to rephrase the original questions in a way that can provide the best possible answers. Evaluation on the SearchQA dataset shows significant improvement over the state-of-the-art model that uses the original question... | [
7,
6,
8,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1CChZ-CZ",
"iclr_2018_S1CChZ-CZ",
"iclr_2018_S1CChZ-CZ",
"BkGlU7OMz",
"Hydu7nFeG",
"r10KoNDgf",
"HJ9W8iheM",
"iclr_2018_S1CChZ-CZ"
] |
iclr_2018_rJTutzbA- | On the insufficiency of existing momentum schemes for Stochastic Optimization | Momentum based stochastic gradient methods such as heavy ball (HB) and Nesterov's accelerated gradient descent (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). Rigorously speaking, fast gradient methods have provable improvements over gradient descent only for the deterministic case, where the gradients are exact. In the stochastic case, the popular explanations for their wide applicability is that when these fast gradient methods are applied in the stochastic case, they partially mimic their exact gradient counterparts, resulting in some practical gain. This work provides a counterpoint to this belief by proving that there exist simple problem instances where these methods cannot outperform SGD despite the best setting of its parameters. These negative problem instances are, in an informal sense, generic; they do not look like carefully constructed pathological instances. These results suggest (along with empirical evidence) that HB or NAG's practical performance gains are a by-product of minibatching.
Furthermore, this work provides a viable (and provable) alternative, which, on the same set of problem instances, significantly improves over HB, NAG, and SGD's performance. This algorithm, referred to as Accelerated Stochastic Gradient Descent (ASGD), is a simple to implement stochastic algorithm, based on a relatively less popular variant of Nesterov's Acceleration. Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD. The code for implementing the ASGD Algorithm can be found at https://github.com/rahulkidambi/AccSGD.
 | accepted-oral-papers | The reviewers unanimously recommended that this paper be accepted, as it contains an important theoretical result that there are problems for which heavy-ball momentum cannot outperform SGD. The theory is backed up by solid experimental results, and the writing is clear. The reviewers were originally concerned that the paper was missing a discussion of some related algorithms (ASVRG and ASDCA), but those concerns were handled in the discussion.
| train | [
"Sy3aR8wxz",
"Sk0uMIqef",
"Sy2Sc4CWz",
"SkEtTX6Xz",
"BJqEtWdMf",
"SyL2ub_fM",
"rkv8dZ_fz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I like the idea of the paper. Momentum and accelerations are proved to be very useful both in deterministic and stochastic optimization. It is natural that it is understood better in the deterministic case. However, this comes quite naturally, as deterministic case is a bit easier ;) Indeed, just recently people s... | [
7,
7,
8,
-1,
-1,
-1,
-1
] | [
4,
3,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJTutzbA-",
"iclr_2018_rJTutzbA-",
"iclr_2018_rJTutzbA-",
"iclr_2018_rJTutzbA-",
"Sy2Sc4CWz",
"Sk0uMIqef",
"Sy3aR8wxz"
] |
iclr_2018_Hk6kPgZA- | Certifying Some Distributional Robustness with Principled Adversarial Training | Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches.
| accepted-oral-papers | This paper attracted strong praise from the reviewers, who felt that it was of high quality and originality. The broad problem that is being tackled is clearly of great importance.
This paper also attracted the attention of outside experts, who were more skeptical of the claims made by the paper. The technical merits do not seem to be in question, but rather, their interpretation/application. The perception by a community as to whether an important problem has been essentially solved can affect the choices made by other reviewers when they decide what work to pursue themselves, evaluate grants, etc. It's important that claims be conservative and highlight the ways in which the present work does not fully address the broader problem of adversarial examples.
Ultimately, it has been decided that the paper will be of great interest to the community. The authors have also been entrusted with the responsibility to consider the issues raised by the outside expert (and then echoed by the AC) in their final revisions.
One final note: In their responses to the outside expert, the authors several times remark that the guarantees made in the paper are, in form, no different from standard learning-theoretic claims: "This criticism, however, applies to many learning-theoretic results (including those applied in deep learning)." I don't find any comfort in this statement. Learning theorists have often focused on the form of the bounds (sqrt(m) dependence and, say, independence from the # of weights) and then they resort to empirical observations of correlation to demonstrate that the value of the bound is predictive for generalization, because the bounds are often meaningless ("vacuous") when evaluated on real data sets. (There are some recent examples bucking this trend.) In a sense, learning theorists have gotten off easy. Adversarial examples, however, concern security, and so there is more at stake. The slack we might afford learning theorists is not appropriate in this new context. I would encourage the authors to clearly explain any remaining work that needs to be done to move from "good enough for learning theory" to "good enough for security". The authors promise to outline important future work / open problems for the community. I definitely encourage this.
| train | [
"S1pdil8Sz",
"rkn74s8BG",
"HJNBMS8rf",
"rJnkAlLBf",
"H1g0Nx8rf",
"rklzlzBVf",
"HJ-1AnFlM",
"HySlNfjgf",
"rkx-2-y-f",
"rkix5PTQf",
"rJ63YwTQM",
"HyFBKPp7z",
"Hkzmdv67G",
"rJBbuPTmz",
"Hk2kQP3Qz",
"BJVnpJPXM",
"H1wDpaNbM"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public"
] | [
"We just received an email notification abut this comment a few minutes ago and somehow did not receive any notification of the original comment uploaded on 21 January. We will upload a response later today.",
"Apologies for the (evidently) tardy response. We have now uploaded a response to the area chair's comme... | [
-1,
-1,
-1,
-1,
-1,
-1,
9,
9,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1g0Nx8rf",
"rJnkAlLBf",
"rklzlzBVf",
"S1pdil8Sz",
"iclr_2018_Hk6kPgZA-",
"rJBbuPTmz",
"iclr_2018_Hk6kPgZA-",
"iclr_2018_Hk6kPgZA-",
"iclr_2018_Hk6kPgZA-",
"Hk2kQP3Qz",
"BJVnpJPXM",
"BJVnpJPXM",
"H1wDpaNbM",
"iclr_2018_Hk6kPgZA-",
"iclr_2018_Hk6kPgZA-",
"iclr_2018_Hk6kPgZA-",
"iclr_... |
iclr_2018_HktK4BeCZ | Learning Deep Mean Field Games for Modeling Large Population Behavior | We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space. A discrete time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions. We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP. This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning. Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population. | accepted-oral-papers | The reviewers are unanimous in finding the work in this paper highly novel and significant. They have provided detailed discussions to back up this assessment. The reviewer comments surprisingly included a critique that "the scientific content of the work has critical conceptual flaws" (!) However, the author rebuttal persuaded the reviewers that the concerns were largely addressed. | val | [
"BkGA_x3SG",
"ByGPUUYgz",
"rJLBq1DVM",
"S1PF1UKxG",
"rJBLYC--f",
"BycoZZimG",
"HyRrEDLWG",
"SJJDxd8Wf",
"r1D9GPUbf"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"We appreciate your suggestions for further improving the precision of our language, and we understand the importance of doing so for the work to be useful to researchers in collective behavior. \n\nWe agree with most of your suggestions, and we will make all necessary edits for the final version of the paper if ac... | [
-1,
8,
-1,
8,
10,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
3,
5,
-1,
-1,
-1,
-1
] | [
"rJLBq1DVM",
"iclr_2018_HktK4BeCZ",
"SJJDxd8Wf",
"iclr_2018_HktK4BeCZ",
"iclr_2018_HktK4BeCZ",
"iclr_2018_HktK4BeCZ",
"S1PF1UKxG",
"ByGPUUYgz",
"rJBLYC--f"
] |
iclr_2018_HkL7n1-0b | Wasserstein Auto-Encoders | We propose the Wasserstein Auto-Encoder (WAE)---a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Auto-Encoder (VAE).
This regularizer encourages the encoded training distribution to match the prior. We compare our algorithm with several other techniques and show that it is a generalization of adversarial auto-encoders (AAE). Our experiments show that WAE shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality. | accepted-oral-papers | This paper proposes a new generative model that has the stability of variational autoencoders (VAE) while producing better samples. The authors clearly compare their work to previous efforts that combine VAEs and Generative Adversarial Networks with similar goals. Authors show that the proposed algorithm is a generalization of Adversarial Autoencoder (AAE) and minimizes Wasserstein distance between model and target distribution. The paper is well written with convincing results. Reviewers agree that the algorithm is novel and practical; and close connections of the algorithm to related approaches are clearly discussed with useful insights. Overall, the paper is strong and I recommend acceptance. | test | [
"Sy_QFsmHG",
"HyBIaDXBM",
"rJSDX-xSG",
"SJQzLO_gM",
"Hk2dO8ngz",
"SJncf2gWz",
"BkU7vv8ff",
"SkxfpL8GG",
"BJ2VpnZff",
"BkrqpnbGG",
"H1bGp3bfz",
"SkGcPcZ-z"
] | [
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"public"
] | [
"Let me clarify the markov chain point.\n\nIn the case Q(Z|X) is stochastic, the encode/decode chain X->Z->X' is stochastic. Namely, P(X'|X) is not a deterministic function, it is a distribution. A markov chain can be constructed if we sample X from P_X and use P(X'|X) as the transition probability.\n\nBy optimizin... | [
-1,
-1,
-1,
8,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyBIaDXBM",
"rJSDX-xSG",
"iclr_2018_HkL7n1-0b",
"iclr_2018_HkL7n1-0b",
"iclr_2018_HkL7n1-0b",
"iclr_2018_HkL7n1-0b",
"SkxfpL8GG",
"iclr_2018_HkL7n1-0b",
"Hk2dO8ngz",
"SJQzLO_gM",
"SJncf2gWz",
"iclr_2018_HkL7n1-0b"
] |
iclr_2018_B1QRgziT- | Spectral Normalization for Generative Adversarial Networks | One of the challenges in the study of generative adversarial networks is the instability of its training.
In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator.
Our new normalization technique is computationally light and easy to incorporate into existing implementations.
We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques. | accepted-oral-papers | This paper presents impressive results on scaling GANs to ILSVRC2012 dataset containing a large number of classes. To achieve this, the authors propose "spectral normalization" to normalize weights and stabilize training which turns out to help in overcoming mode collapse issues. The presented methodology is principled and well written. The authors did a good job in addressing reviewer's comments and added more comparative results on related approaches to demonstrate the superiority of the proposed methodology. The reviewers agree that this is a great step towards improving the training of GANs. I recommend acceptance. | train | [
"SkQdbLclM",
"H1xyfspez",
"HJH-EWkWM",
"rkDCavsmz",
"r1Ko8X_Gf",
"r1onL2xXM",
"SJpmh_17f",
"BkOctTAGf",
"HJxGRNvMz",
"BJAcWZobM",
"BkIYgWs-z",
"rynszWibz",
"S1x6bLDZM",
"SJok1XB-f",
"rJrC3dhlz",
"Hkci0r3lM",
"SyTXZU2xz",
"S1g_eb9gz",
"ryjuZSQlG",
"Hkgbu7Qgz",
"SJmRwz7xG",
"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"author",
"public",
"public",
"public",
"author",
"public",
"public",
"public... | [
"This paper borrows the classic idea of spectral regularization, recently applied to deep learning by Yoshida and Miyato (2017) and use it to normalize GAN objectives. The ensuing GAN, coined SN-GAN, essentially ensures the Lipschitz property of the discriminator. This Lipschitz property has already been proposed b... | [
7,
8,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1QRgziT-",
"iclr_2018_B1QRgziT-",
"iclr_2018_B1QRgziT-",
"iclr_2018_B1QRgziT-",
"HJxGRNvMz",
"iclr_2018_B1QRgziT-",
"BkOctTAGf",
"iclr_2018_B1QRgziT-",
"SJok1XB-f",
"H1xyfspez",
"HJH-EWkWM",
"SkQdbLclM",
"rJrC3dhlz",
"iclr_2018_B1QRgziT-",
"S1g_eb9gz",
"Hkgbu7Qgz",
"ryjuZ... |
iclr_2018_BJOFETxR- | Learning to Represent Programs with Graphs | Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures.
In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases. Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects. | accepted-oral-papers | There was some debate between the authors and an anonymous commentator on this paper. The feeling of the commentator was that existing work (mostly from the PL community) was not compared to appropriately and, in fact, performs better than this approach. The authors point out that their evaluation is hard to compare directly but that they disagreed with the assessment. They modified their texts to accommodate some of the commentator's concerns; agreed to disagree on others; and promised a fuller comparison to other work in the future.
I largely agree with the authors here and think this is a good and worthwhile paper for its approach.
PROS:
1. well written
2. good ablation study
3. good evaluation including real bugs identified in real software projects
4. practical for real world usage
CONS:
1. perhaps not well compared to existing PL literature or on existing datasets from that community
2. the architecture (GGNN) is not a novel contribution | train | [
"ryuuTE9gG",
"rkhdDBalz",
"H1oEvnkWM",
"SyXFuWT7M",
"Hy2ZoJhbf",
"B1LIjy2Zz",
"H1UNdg8-G",
"S1SRPhSbM",
"B1ioD3SZG",
"SkRKw2SWz",
"BJNSv3SWf",
"Hy-fzXEZM",
"Hy3kAkmZM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Summary: The paper applies graph convolutions with deep neural networks to the problem of \"variable misuse\" (putting the wrong variable name in a program statement) in graphs created deterministically from source code. Graph structure is determined by program abstract syntax tree (AST) and next-token edges, as... | [
8,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJOFETxR-",
"iclr_2018_BJOFETxR-",
"iclr_2018_BJOFETxR-",
"BJNSv3SWf",
"H1UNdg8-G",
"H1UNdg8-G",
"Hy-fzXEZM",
"ryuuTE9gG",
"rkhdDBalz",
"H1oEvnkWM",
"iclr_2018_BJOFETxR-",
"Hy3kAkmZM",
"iclr_2018_BJOFETxR-"
] |
iclr_2018_B1gJ1L2aW | Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality | Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called `adversarial subspaces') in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets. Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs. | accepted-oral-papers | The paper characterizes the latent space of adversarial examples and introduces the concept of local intrinsic dimensionality (LID). LID can be used to detect adversaries as well as build better attacks, as it characterizes the space in which DNNs might be vulnerable. The experiments strongly support their claim. | val | [
"rkARQJwez",
"H1wVDrtgM",
"S1tVnWqxM",
"rJLdp2jfz",
"ryeCOusGf",
"rka1XhcGf",
"HyZMDhcfG",
"ByIbHZKGG",
"Hk9lgX_Mz",
"SkjsvgdMG",
"ByhCuVUff",
"r1BGcmzzG",
"HJFaKQGfG",
"rJ1LK7zfM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"public",
"author",
"public",
"author",
"author",
"author"
] | [
"The paper considers a problem of adversarial examples applied to the deep neural networks. The authors conjecture that the intrinsic dimensionality of the local neighbourhood of adversarial examples significantly differs from the one of normal (or noisy) examples. More precisely, the adversarial examples are expec... | [
8,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1gJ1L2aW",
"iclr_2018_B1gJ1L2aW",
"iclr_2018_B1gJ1L2aW",
"ryeCOusGf",
"rka1XhcGf",
"ByIbHZKGG",
"Hk9lgX_Mz",
"Hk9lgX_Mz",
"SkjsvgdMG",
"ByhCuVUff",
"rJ1LK7zfM",
"rkARQJwez",
"H1wVDrtgM",
"S1tVnWqxM"
] |
iclr_2018_HkwZSG-CZ | Breaking the Softmax Bottleneck: A High-Rank RNN Language Model | We formulate language modeling as a matrix factorization problem, and show that the expressiveness of Softmax-based models (including the majority of neural language models) is limited by a Softmax bottleneck. Given that natural language is highly context-dependent, this further implies that in practice Softmax with distributed word embeddings does not have enough capacity to model natural language. We propose a simple and effective method to address this issue, and improve the state-of-the-art perplexities on Penn Treebank and WikiText-2 to 47.69 and 40.68 respectively. The proposed method also excels on the large-scale 1B Word dataset, outperforming the baseline by over 5.6 points in perplexity. | accepted-oral-papers | Viewing language modeling as a matrix factorization problem, the authors argue that the low rank of word embeddings used by such models limits their expressivity and show that replacing the softmax in such models with a mixture of softmaxes provides an effective way of overcoming this bottleneck. This is an interesting and well-executed paper that provides potentially important insight. It would be good to at least mention prior work related to the language modeling as matrix factorization perspective (e.g. Levy & Goldberg, 2014). | train | [
"By7UbmtHM",
"B1ETY-KBM",
"SyTCJyqeM",
"r1zYOdPgz",
"B18hETI4f",
"B1v_izpxM",
"Hk9G7RsmM",
"S1pkkLc7G",
"BySeO9CGz",
"HJ985q0fz",
"SktCu50MM",
"rkkKFc0GG",
"Hk_Hu9Rfz",
"HkkWuBrZG",
"HkXiuNE-f",
"Hk1nEGUxM",
"rJFe7vDCW",
"B1VaeCHeM",
"BJt665wkf",
"SyRk1cPkM",
"B1D4D3wRb",
"... | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"author",
"author",
"public",
"author",
"public",
"public",
"public"
] | [
"Thanks for pointing out this related piece we’ve missed. Salute!\n\nWe would like to clarify that using a mixture structure is by no means a new idea, as we have noted in Related Work. Instead, the insight on model expressiveness, the integration with modern architectures and optimization algorithms, the SOTA perf... | [
-1,
-1,
7,
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
5,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"B1ETY-KBM",
"iclr_2018_HkwZSG-CZ",
"iclr_2018_HkwZSG-CZ",
"iclr_2018_HkwZSG-CZ",
"HJ985q0fz",
"iclr_2018_HkwZSG-CZ",
"Hk_Hu9Rfz",
"iclr_2018_HkwZSG-CZ",
"iclr_2018_HkwZSG-CZ",
"r1zYOdPgz",
"B1v_izpxM",
"SyTCJyqeM",
"HkXiuNE-f",
"SyTCJyqeM",
"iclr_2018_HkwZSG-CZ",
"B1VaeCHeM",
"S10Gw... |
iclr_2018_Sk2u1g-0- | Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments | Ability to continuously learn and adapt from limited experience in nonstationary environments is an important milestone on the path towards general intelligence. In this paper, we cast the problem of continuous adaptation into the learning-to-learn framework. We develop a simple gradient-based meta-learning algorithm suitable for adaptation in dynamically changing and adversarial scenarios. Additionally, we design a new multi-agent competitive environment, RoboSumo, and define iterated adaptation games for testing various aspects of continuous adaptation. We demonstrate that meta-learning enables significantly more efficient adaptation than reactive baselines in the few-shot regime. Our experiments with a population of agents that learn and compete suggest that meta-learners are the fittest. | accepted-oral-papers | Looks like a great contribution to ICLR. Continuous adaptation in nonstationary (and competitive) environments is something that an intelligent agent acting in the real world would need to solve and this paper suggests that a meta-learning approach may be quite appropriate for this task. | train | [
"ryBakJUlz",
"BJiNow9gG",
"SyK4pmsgG",
"B19e7vSmf",
"ByEAfwS7z",
"B1YwMwHmf",
"HyyfMDSQf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This is a dense, rich, and impressive paper on rapid meta-learning. It is already highly polished, so I have mostly minor comments.\n\nRelated work: I think there is a distinction between continual and life-long learning, and I think that your proposed setup is a form of continual learning (see Ring ‘94/‘97). Give... | [
8,
7,
9,
-1,
-1,
-1,
-1
] | [
4,
4,
2,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Sk2u1g-0-",
"iclr_2018_Sk2u1g-0-",
"iclr_2018_Sk2u1g-0-",
"ryBakJUlz",
"BJiNow9gG",
"SyK4pmsgG",
"iclr_2018_Sk2u1g-0-"
] |
iclr_2018_S1JHhv6TW | Boosting Dilated Convolutional Networks with Mixed Tensor Decompositions | The driving force behind deep networks is their ability to compactly represent rich classes of functions. The primary notion for formally reasoning about this phenomenon is expressive efficiency, which refers to a situation where one network must grow unfeasibly large in order to replicate functions of another. To date, expressive efficiency analyses focused on the architectural feature of depth, showing that deep networks are representationally superior to shallow ones. In this paper we study the expressive efficiency brought forth by connectivity, motivated by the observation that modern networks interconnect their layers in elaborate ways. We focus on dilated convolutional networks, a family of deep models delivering state of the art performance in sequence processing tasks. By introducing and analyzing the concept of mixed tensor decompositions, we prove that interconnecting dilated convolutional networks can lead to expressive efficiency. In particular, we show that even a single connection between intermediate layers can already lead to an almost quadratic gap, which in large-scale settings typically makes the difference between a model that is practical and one that is not. Empirical evaluation demonstrates how the expressive efficiency of connectivity, similarly to that of depth, translates into gains in accuracy. This leads us to believe that expressive efficiency may serve a key role in developing new tools for deep network design. | accepted-oral-papers | This paper proposes improvements to WaveNet by showing that increasing connectivity provides superior models to increasing network size. The reviewers found both the mathematical treatment of the topic and the experiments to be of higher quality than most papers they reviewed, and were unanimous in recommending it for acceptance in the conference. I see no reason not to give it my strongest recommendation as well. | train | [
"ryLYFULlM",
"SyEtTPclG",
"B1VRu6hbz",
"H1RTfV_Gz",
"rJXMGNmWG",
"r1zdyNQZf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper theoretically validates that interconnecting networks with different dilations can lead to expressive efficiency, which indicates an interesting phenomenon that connectivity is able to enhance the expressiveness of deep networks. A key technical tool is a mixed tensor decomposition, which is shown to ha... | [
7,
9,
8,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_S1JHhv6TW",
"iclr_2018_S1JHhv6TW",
"iclr_2018_S1JHhv6TW",
"B1VRu6hbz",
"ryLYFULlM",
"SyEtTPclG"
] |
iclr_2018_HkfXMz-Ab | Neural Sketch Learning for Conditional Program Generation | We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a "realistic" relationship between programs and labels, as exemplified by a corpus of labeled programs available during training.
Two challenges in such *conditional program generation* are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on *program sketches*, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method. | accepted-oral-papers | This paper presents a novel and interesting sketch-based approach to conditional program generation. I will say upfront that it is worthy of acceptance, based on its contribution and the positivity of the reviews. I am annoyed to see that the review process has not called out the authors' lack of references to the decent body of existing work on generating structure, on neural sketch programming, and on generating under grammatical constraint. The authors need look no further than the proceedings of the *ACL conferences of the last few years to find papers such as:
* Dyer, Chris, et al. "Recurrent Neural Network Grammars." Proceedings of NAACL-HLT (2016).
* Kuncoro, Adhiguna, et al. "What Do Recurrent Neural Network Grammars Learn About Syntax?." Proceedings of EACL (2016).
* Yin, Pengcheng, and Graham Neubig. "A Syntactic Neural Model for General-Purpose Code Generation." Proceedings of ACL (2017).
* Rabinovich, Maxim, Mitchell Stern, and Dan Klein. "Abstract Syntax Networks for Code Generation and Semantic Parsing." Proceedings of ACL (2017).
Or other work on neural program synthesis, with sketch based methods:
* Gaunt, Alexander L., et al. "Terpret: A probabilistic programming language for program induction." arXiv preprint arXiv:1608.04428 (2016).
* Riedel, Sebastian, Matko Bosnjak, and Tim Rocktäschel. "Programming with a differentiable forth interpreter." CoRR, abs/1605.06640 (2016).
Likewise the references to the non-neural program synthesis and induction literature are thin, and the work is poorly situated as a result.
It is a disappointing but mild failure of the scientific process underlying peer review for this conference that such comments were not made. The authors are encouraged to take heed of these comments in preparing their final revision, but I will not object to the acceptance of the paper on these grounds, as the methods proposed therein are truly interesting and exciting. | train | [
"Sy9Sau_xf",
"rJ69A1Kxf",
"SyhuQnyZz",
"Bk5zgF6mf",
"SJoUWF_bG",
"HyUkWY_Wz",
"ryIaJKO-f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors introduce an algorithm in the subfield of conditional program generation that is able to create programs in a rich java like programming language. In this setting, they propose an algorithm based on sketches- abstractions of programs that capture the structure but discard program specific information t... | [
7,
8,
7,
-1,
-1,
-1,
-1
] | [
2,
4,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkfXMz-Ab",
"iclr_2018_HkfXMz-Ab",
"iclr_2018_HkfXMz-Ab",
"iclr_2018_HkfXMz-Ab",
"Sy9Sau_xf",
"rJ69A1Kxf",
"SyhuQnyZz"
] |
iclr_2018_Hk99zCeAb | Progressive Growing of GANs for Improved Quality, Stability, and Variation | We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset. | accepted-oral-papers | The main contribution of the paper is a technique for training GANs which consists in progressively increasing the resolution of generated images by gradually enabling layers in the generator and the discriminator. The method is novel, and outperforms the state of the art in adversarial image generation both quantitatively and qualitatively. The evaluation is carried out on several datasets; it also contains an ablation study showing the effect of contributions (I recommend that the authors follow the suggestions of AnonReviewer2 and further improve it). Finally, the source code is released which should facilitate the reproducibility of the results and further progress in the field.
AnonReviewer1 has noted that the authors have revealed their names through GitHub, thus violating the double-blind submission requirement of ICLR; if not for this issue, the reviewer’s rating would have been 8. While these concerns should be taken very seriously, I believe that in this particular case the paper should still be accepted for the following reasons:
1) the double blind rule is new for ICLR this year, and posting the paper on arxiv is allowed;
2) the author list has been revealed through the supplementary material (Github page) rather than the paper itself;
3) all reviewers agree on the high impact of the paper, so having it presented and discussed at the conference would be very useful for the community. | train | [
"BJ8NesygM",
"rJ205zPlG",
"S15uG36lG",
"B1oIUPX4M",
"B1N1nJxGf",
"r1YixQVZM",
"Bk_JafhgM",
"rk0s9ajgM",
"rJGV53FyM",
"SJqOmCHJf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"The paper describes a number of modifications of GAN training that enable synthesis of high-resolution images. The modifications also support more automated longer-term training, and increasing variability in the results.\n\nThe key modification is progressive growing. First, a GAN is trained for image synthesis a... | [
8,
1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hk99zCeAb",
"iclr_2018_Hk99zCeAb",
"iclr_2018_Hk99zCeAb",
"r1YixQVZM",
"iclr_2018_Hk99zCeAb",
"iclr_2018_Hk99zCeAb",
"rk0s9ajgM",
"iclr_2018_Hk99zCeAb",
"SJqOmCHJf",
"iclr_2018_Hk99zCeAb"
] |
iclr_2018_H1tSsb-AW | Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines | Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exasperated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis as well as numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks. | accepted-oral-papers | The reviewers are satisfied that this paper makes a good contribution to policy gradient methods. | train | [
"ryf-_2ugf",
"S1VwmoFxz",
"rJaGVZ5lz",
"rkrQFmjmG",
"HkYvtXoQf",
"HyKwumiXG",
"rJJhvXiQz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper presents methods to reduce the variance of policy gradient using an action dependent baseline. Such action dependent baseline can be used in settings where the action can be decomposed into factors that are conditionally dependent given the state. The paper:\n(1) shows that using separate baselines for ... | [
7,
8,
6,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1tSsb-AW",
"iclr_2018_H1tSsb-AW",
"iclr_2018_H1tSsb-AW",
"S1VwmoFxz",
"ryf-_2ugf",
"rJaGVZ5lz",
"iclr_2018_H1tSsb-AW"
] |
iclr_2018_BkisuzWRW | Zero-Shot Visual Imitation | The current dominant paradigm for imitation learning relies on strong supervision of expert actions to learn both 'what' and 'how' to imitate. We pursue an alternative paradigm wherein an agent first explores the world without any expert supervision and then distills its experience into a goal-conditioned skill policy with a novel forward consistency loss. In our framework, the role of the expert is only to communicate the goals (i.e., what to imitate) during inference. The learned policy is then employed to mimic the expert (i.e., how to imitate) after seeing just a sequence of images demonstrating the desired task. Our method is 'zero-shot' in the sense that the agent never has access to expert actions during training or for the task demonstration at inference. We evaluate our zero-shot imitator in two real-world settings: complex rope manipulation with a Baxter robot and navigation in previously unseen office environments with a TurtleBot. Through further experiments in VizDoom simulation, we provide evidence that better mechanisms for exploration lead to learning a more capable policy which in turn improves end task performance. Videos, models, and more details are available at https://pathak22.github.io/zeroshot-imitation/. | accepted-oral-papers | The authors have proposed a method for imitating a given control trajectory even if it is sparsely sampled. The method relies on a parametrized skill function and uses a triplet loss for learning a stopping metric and for a dynamics consistency loss. The method is demonstrated with real robots on a navigation task and a knot-tying task. The reviewers agree that it is a novel and interesting alternative to pure RL which should inspire good discussion at the conference. | train | [
"SylSZ-5gf",
"rJ4XrD5eG",
"BJdG429gz",
"H1MXa4xmG",
"Sk3SYJO7f",
"HJL-IlS7z",
"HJRuTbN7f",
"S1vwJrgQM",
"ryXU04emf",
"Hko0F4l7M",
"SkJHO4e7G"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors propose an approach for zero-shot visual learning. The robot learns inverse and forward models through autonomous exploration. The robot then uses the learned parametric skill functions to reach goal states (images) provided by the demonstrator. The “zero-shot” refers to the fact that all learning is p... | [
8,
8,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkisuzWRW",
"iclr_2018_BkisuzWRW",
"iclr_2018_BkisuzWRW",
"iclr_2018_BkisuzWRW",
"HJL-IlS7z",
"HJRuTbN7f",
"SkJHO4e7G",
"ryXU04emf",
"rJ4XrD5eG",
"SylSZ-5gf",
"BJdG429gz"
] |
iclr_2018_rkRwGg-0Z | Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs | The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes. To this end, we introduce contextual decomposition (CD), an interpretation algorithm for analysing individual predictions made by standard LSTMs, without any changes to the underlying model. By decomposing the output of a LSTM, CD captures the contributions of combinations of words or variables to the final prediction of an LSTM. On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction. Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done. | accepted-oral-papers | Very solid paper exploring an interpretation of LSTMs.
good reviews | train | [
"HkG9aTIVf",
"r1k_ETYlM",
"SJ9ufS8Ef",
"rJHhMjFgG",
"B1qEe3txM",
"rJZhYjI7f",
"BJZscx4mz",
"HJCfJlszf",
"rJNARkjGz",
"BykpAJiMG",
"H1fcTJsGf"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Thanks for engaging in a helpful discussion!",
"This article aims at understanding the role played by the different words in a sentence, taking into account their order in the sentence. In sentiment analysis for instance, this capacity is critical to model properly negation.\nAs state-of-the-art approaches rely ... | [
-1,
7,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
-1,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"SJ9ufS8Ef",
"iclr_2018_rkRwGg-0Z",
"rJZhYjI7f",
"iclr_2018_rkRwGg-0Z",
"iclr_2018_rkRwGg-0Z",
"BJZscx4mz",
"H1fcTJsGf",
"rJHhMjFgG",
"BykpAJiMG",
"B1qEe3txM",
"r1k_ETYlM"
] |
iclr_2018_Hy7fDog0b | AmbientGAN: Generative models from lossy measurements | Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fully-observed samples, but economical to obtain partial, noisy observations. We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines. | accepted-oral-papers | All three reviewers were positive about the paper, finding it to be on an interesting topic and with broad applicability. The results were compelling and thus the paper is accepted. | train | [
"Bkpju_8VG",
"BJAJzV4xz",
"B1oKXx9gG",
"Hyxt2gCxz",
"SkElzP3mf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"After reading the other reviews and responses, I retain a favorable opinion of the paper. The additional experiments are especially appreciated.",
"Quick summary:\nThis paper shows how to train a GAN in the case where the dataset is corrupted by some measurement noise process. They propose to introduce the noise... | [
-1,
7,
7,
8,
-1
] | [
-1,
4,
4,
4,
-1
] | [
"Hyxt2gCxz",
"iclr_2018_Hy7fDog0b",
"iclr_2018_Hy7fDog0b",
"iclr_2018_Hy7fDog0b",
"iclr_2018_Hy7fDog0b"
] |
iclr_2018_rJWechg0Z | Minimal-Entropy Correlation Alignment for Unsupervised Deep Domain Adaptation | In this work, we face the problem of unsupervised domain adaptation with a novel deep learning approach which leverages our finding that entropy minimization is induced by the optimal alignment of second order statistics between source and target domains. We formally demonstrate this hypothesis and, aiming at achieving an optimal alignment in practical cases, we adopt a more principled strategy which, differently from the current Euclidean approaches, deploys alignment along geodesics. Our pipeline can be implemented by adding to the standard classification loss (on the labeled source domain), a source-to-target regularizer that is weighted in an unsupervised and data-driven fashion. We provide extensive experiments to assess the superiority of our framework on standard domain and modality adaptation benchmarks. | accepted-poster-papers | This paper presents a nice approach to domain adaptation that improves empirically upon previous work, while also simplifying tuning and learning.
| train | [
"HJGANV2Ez",
"BkiyM2dgG",
"r15hYW5gM",
"SkjcYkCgf",
"SkhhtBGXG",
"SJJSYrGmG",
"rkcg_HfXf",
"HkkI8HMQM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The rebuttal addresses most of my questions. Here are two more cents. \n\nThe theorem still does not favor the correlation alignment over the geodesic alignment. What Figure 2 shows is an empirical observation but the theorem itself does not lead to the result.\n\nI still do not think the cross-modality setup is a... | [
-1,
6,
7,
8,
-1,
-1,
-1,
-1
] | [
-1,
5,
5,
4,
-1,
-1,
-1,
-1
] | [
"BkiyM2dgG",
"iclr_2018_rJWechg0Z",
"iclr_2018_rJWechg0Z",
"iclr_2018_rJWechg0Z",
"SJJSYrGmG",
"BkiyM2dgG",
"r15hYW5gM",
"SkjcYkCgf"
] |
iclr_2018_B1zlp1bRW | Large Scale Optimal Transport and Mapping Estimation | This paper presents a novel two-step approach for the fundamental problem of learning an optimal map from one distribution to another. First, we learn an optimal transport (OT) plan, which can be thought of as a one-to-many map between the two distributions. To that end, we propose a stochastic dual approach of regularized OT, and show empirically that it scales better than a recent related approach when the number of samples is very large. Second, we estimate a Monge map as a deep neural network learned by approximating the barycentric projection of the previously-obtained OT plan. This parameterization allows generalization of the mapping outside the support of the input measure. We prove two theoretical stability results of regularized OT which show that our estimations converge to the OT and Monge map between the underlying continuous measures. We showcase our proposed approach on two applications: domain adaptation and generative modeling. | accepted-poster-papers | This paper is generally very strong. I do find myself agreeing with the last reviewer though, that tuning hyperparameters on the test set should not be done, even if others have done it in the past. (I say this having worked on similar problems myself.) I would strongly encourage the authors to re-do their experiments with a better tuning regime. | train | [
"rJ81OAtgM",
"B1cR-6neM",
"H1dxvZWWM",
"Skh5eWVWG",
"HydEb7aQz",
"HJ6TkQpmM",
"HkgBvuc7z",
"SkAHhVRfG",
"Hych0z0MG",
"rkVI5fAGz",
"H122CDsMG",
"By5zFb-bG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Quality\nThe theoretical results presented in the paper appear to be correct. However, the experimental evaluation is globally limited, hyperparameter tuning on test which is not fair.\n\nClarity\nThe paper is mostly clear, even though some parts deserve more discussion/clarification (algorithm, experimental eval... | [
7,
6,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1zlp1bRW",
"iclr_2018_B1zlp1bRW",
"iclr_2018_B1zlp1bRW",
"iclr_2018_B1zlp1bRW",
"HkgBvuc7z",
"iclr_2018_B1zlp1bRW",
"rkVI5fAGz",
"rJ81OAtgM",
"B1cR-6neM",
"H1dxvZWWM",
"Skh5eWVWG",
"rJ81OAtgM"
] |
iclr_2018_ryUlhzWCZ | TRUNCATED HORIZON POLICY SEARCH: COMBINING REINFORCEMENT LEARNING & IMITATION LEARNING | In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle. We study the effectiveness of the near-optimal cost-to-go oracle on the planning horizon and demonstrate that the cost-to-go oracle shortens the learner’s planning horizon as a function of its accuracy: a globally optimal oracle can shorten the planning horizon to one, leading to a one-step greedy Markov Decision Process which is much easier to optimize, while an oracle that is far away from optimality requires planning over a longer horizon to achieve near-optimal performance. Hence our new insight bridges the gap and interpolates between imitation learning and reinforcement learning. Motivated by the above-mentioned insights, we propose Truncated HORizon Policy Search (THOR), a method that focuses on searching for policies that maximize the total reshaped reward over a finite planning horizon when the oracle is sub-optimal. We experimentally demonstrate that a gradient-based implementation of THOR can achieve superior performance compared to RL baselines and IL baselines even when the oracle is sub-optimal. | accepted-poster-papers | This paper proposes a theoretically-motivated method for combining reinforcement learning and imitation learning. There was some disagreement amongst the reviewers, but the AC was satisfied with the authors' rebuttal. | train | [
"BJBWMqqlf",
"H1JzYwcxM",
"H16Rrvtlz",
"H1rlRAfEG",
"ryI0uS6mM",
"r13LB1_Qz",
"Hya0QJ_XG",
"H1yFfkuXf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes a new theoretically-motivated method for combining reinforcement learning and imitation learning for acquiring policies that are as good as or superior to the expert. The method assumes access to an expert value function (which could be trained using expert roll-outs) and uses the value functio... | [
7,
6,
3,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ryUlhzWCZ",
"iclr_2018_ryUlhzWCZ",
"iclr_2018_ryUlhzWCZ",
"H1yFfkuXf",
"iclr_2018_ryUlhzWCZ",
"H16Rrvtlz",
"H1JzYwcxM",
"BJBWMqqlf"
] |
iclr_2018_SJJinbWRZ | Model-Ensemble Trust-Region Policy Optimization | Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. However, they tend to suffer from high sample complexity, which hinders their use in real-world domains. Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date has succeeded mainly in restrictive domains where simple models are sufficient for learning. In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training. To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process. We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time. Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks. | accepted-poster-papers | The reviewers agree that the paper presents nice results on model based RL with an ensemble of models. The limited novelty of the methods is questioned by one reviewer and briefly by the others, but they all agree that this paper's results justify its acceptance. | train | [
"rkoFpFOlz",
"SJ3tICFlz",
"Hkg9Vrqlz",
"S1d6-e6XG",
"ByM_IJ0Wz",
"rJ5NoSnbG",
"S1ljgW0gf",
"HJdkWGt1f",
"S1vW09QlG",
"B1RjDqMlf",
"ryRzHczeM",
"rJBiAuGxG",
"BJLAKoxeG",
"HkWtjGlxG",
"rkiav21gf",
"SJK0YrJlz",
"S16EX33yG",
"SJY0Oo5yM",
"SytosZKJM",
"rkm6BWFkf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"public",
"public",
"author",
"author",
"public",
"author",
"public",
"public",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Summary:\nThe paper proposes to use ensembles of models to overcome a typical problem when training on a learned model: That the policy learns to take advantage of errors of the model.\nThe models use the same training data but are differentiated by a differente parameter initialization and by training on differen... | [
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJJinbWRZ",
"iclr_2018_SJJinbWRZ",
"iclr_2018_SJJinbWRZ",
"iclr_2018_SJJinbWRZ",
"rJ5NoSnbG",
"iclr_2018_SJJinbWRZ",
"rkiav21gf",
"SytosZKJM",
"HkWtjGlxG",
"ryRzHczeM",
"rJBiAuGxG",
"BJLAKoxeG",
"rkiav21gf",
"S16EX33yG",
"SJK0YrJlz",
"SytosZKJM",
"SJY0Oo5yM",
"iclr_2018_... |
iclr_2018_Hy6GHpkCW | A Neural Representation of Sketch Drawings | We present sketch-rnn, a recurrent neural network able to construct stroke-based drawings of common objects. The model is trained on a dataset of human-drawn images representing many different classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format. | accepted-poster-papers | This work presents an RNN tailored to generate sketch drawings. The model has novel elements and advances specific to the considered task, and allows for free generation as well as generation with (partial) input. The results are very satisfactory. Importantly, as part of this work a large dataset of sketch drawings is released. The only negative aspect is the insufficient evaluation, as pointed out by R1, who notes the need for baselines and evaluation metrics. R1's concerns have been acknowledged by the authors but not really addressed in the revision. Still, this is a very interesting contribution. | test | [
"rJLTQtKgG",
"SyW65dqgz",
"B1wqtjoxz",
"B1C-ibcXf",
"SytFB9hGG",
"S1TvBchzz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper introduces a neural network architecture for generating sketch drawings. The authors propose that this is particularly interesting over generating pixel data as it emphasises more human concepts. I agree. The contribution of this paper of this paper is two-fold. Firstly, the paper introduces a large ske... | [
8,
8,
5,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_Hy6GHpkCW",
"iclr_2018_Hy6GHpkCW",
"iclr_2018_Hy6GHpkCW",
"B1wqtjoxz",
"rJLTQtKgG",
"SyW65dqgz"
] |
iclr_2018_SJaP_-xAb | Deep Learning with Logged Bandit Feedback | We propose a new output layer for deep neural networks that permits the use of logged contextual bandit feedback for training. Such contextual bandit feedback can be available in huge quantities (e.g., logs of search engines, recommender systems) at little cost, opening up a path for training deep networks on orders of magnitude more data. To this effect, we propose a Counterfactual Risk Minimization (CRM) approach for training deep networks using an equivariant empirical risk estimator with variance regularization, BanditNet, and show how the resulting objective can be decomposed in a way that allows Stochastic Gradient Descent (SGD) training. We empirically demonstrate the effectiveness of the method by showing how deep networks -- ResNets in particular -- can be trained for object recognition without conventionally labeled images. | accepted-poster-papers | In this paper the authors show how to allow deep neural network training on logged contextual bandit feedback. The newly introduced framework comprises a new kind of output layer and an associated training procedure. This is a solid piece of work and a significant contribution to the literature, opening up the way for applications of deep neural networks when losses based on manual feedback and labels are not possible. | test | [
"HkGPbhPgG",
"rk5ybxtxf",
"SJVVwfoeM",
"rJYagDTXM",
"B1fB6sofG",
"rk8o2soGz",
"HkJw2jjMM",
"ryR-3ijMf",
"rJWkU5g-z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Learning better policies from logged bandit feedback is a very important problem, with wide applications in internet, e-commerce and anywhere it is possible to incorporate controlled exploration. The authors study the problem of learning the best policy from logged bandit data. While this is not a brand new proble... | [
7,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJaP_-xAb",
"iclr_2018_SJaP_-xAb",
"iclr_2018_SJaP_-xAb",
"iclr_2018_SJaP_-xAb",
"HkGPbhPgG",
"rk5ybxtxf",
"SJVVwfoeM",
"rJWkU5g-z",
"iclr_2018_SJaP_-xAb"
] |
iclr_2018_Byt3oJ-0W | Learning Latent Permutations with Gumbel-Sinkhorn Networks | Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data. Learning in such models is difficult, however, because exact marginalization over these combinatorial objects is intractable. In response, this paper introduces a collection of new methods for end-to-end learning in such models that approximate discrete maximum-weight matching using the continuous Sinkhorn operator. Sinkhorn iteration is attractive because it functions as a simple, easy-to-implement analog of the softmax operator. With this, we can define the Gumbel-Sinkhorn method, an extension of the Gumbel-Softmax method (Jang et al. 2016, Maddison et al. 2016) to distributions over latent matchings. We demonstrate the effectiveness of our method by outperforming competitive baselines on a range of qualitatively different tasks: sorting numbers, solving jigsaw puzzles, and identifying neural signals in worms. | accepted-poster-papers | This paper with the self-explanatory title was well received by the reviewers and, additionally, comes with available code. The paper builds on prior work (Sinkhorn operator) but shows a significant amount of additional work to enable its application and inference in neural networks. There were no major criticisms by the reviewers, other than obvious directions for improvement which should already have been incorporated in the paper, issues with clarity, and a little more experimentation. To some extent, the authors addressed the issues in the revised version. | train | [
"HJRB8Nugf",
"BkMR49YxM",
"S1XwiJagG",
"SyXM0-iXz",
"Hka-6bjQG",
"rJ3-L0tXM",
"r1BqSAtmf",
"BysZ6GgMG",
"ryjE_x-bM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Quality: The paper is built on solid theoretical grounds and supplemented by experimental demonstrations. Specifically, the justification for using the Sinkhorn operator is given by theorem 1 with proof given in the appendix. Because the theoretical limit is unachievable, the authors propose to truncate the Sinkho... | [
8,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Byt3oJ-0W",
"iclr_2018_Byt3oJ-0W",
"iclr_2018_Byt3oJ-0W",
"iclr_2018_Byt3oJ-0W",
"S1XwiJagG",
"HJRB8Nugf",
"BkMR49YxM",
"ryjE_x-bM",
"iclr_2018_Byt3oJ-0W"
] |
iclr_2018_rk07ZXZRb | Learning an Embedding Space for Transferable Robot Skills | We present a method for reinforcement learning of closely related skills that are parameterized via a skill embedding space. We learn such skills by taking advantage of latent variables and exploiting a connection between reinforcement learning and variational inference. The main contribution of our work is an entropy-regularized policy gradient formulation for hierarchical policies, and an associated, data-efficient and robust off-policy gradient algorithm based on stochastic value gradients. We demonstrate the effectiveness of our method on several simulated robotic manipulation tasks. We find that our method allows for discovery of multiple solutions and is capable of learning the minimum number of distinct skills that are necessary to solve a given set of tasks. In addition, our results indicate that the hereby proposed technique can interpolate and/or sequence previously learned skills in order to accomplish more complex tasks, even in the presence of sparse rewards.
| accepted-poster-papers | This is a paper introducing a hierarchical RL method which incorporates the learning of a latent space that enables the sharing of learned skills.
The reviewers unanimously rate this as a good paper. They suggest that it can be further improved by demonstrating the effectiveness through more experiments, especially since this is a rather generic framework. To some extent, the authors have addressed this concern in the rebuttal.
| test | [
"Hke9_IpVf",
"H1kdvIp4M",
"r1aiqauxf",
"Hk2Ttk_NM",
"Hk9a7-qlG",
"HkEQMXAxz",
"S1b5e76mM",
"rJFSl767z",
"Hy0-eQp7G"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"We would you like to notify the reviewer that the pdf has been updated with the requested changes including the new experiment with the embedding pre-trained on all 6 tasks.",
"Dear reviewers, \nWe would like to let you know that we have updated the manuscript with the changes requested in your reviews. Thank yo... | [
-1,
-1,
7,
-1,
7,
7,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
4,
5,
-1,
-1,
-1
] | [
"Hk2Ttk_NM",
"iclr_2018_rk07ZXZRb",
"iclr_2018_rk07ZXZRb",
"r1aiqauxf",
"iclr_2018_rk07ZXZRb",
"iclr_2018_rk07ZXZRb",
"r1aiqauxf",
"Hk9a7-qlG",
"HkEQMXAxz"
] |
iclr_2018_S1DWPP1A- | Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration | Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations. | accepted-poster-papers | This paper aims to improve on the intrinsically motivated goal exploration framework by additionally incorporating representation learning for the space of goals. The paper is well motivated and follows a significant direction of research, as agreed by all reviewers. In particular, it provides a means for learning in complex environments, where manually designed goal spaces would not be available in practice. There had been significant concerns over the presentation of the paper, but the authors put great effort in improving the manuscript according to the reviewers’ suggestions, raising the average rating by 2 points after the rebuttal. | train | [
"HJcQvaVef",
"ByvGgjhez",
"Bk9oIe5gG",
"ByeHdGpXM",
"Sk-lOfTmz",
"rywYwMaXz",
"SkrQDzp7M",
"B1JbLzTQz",
"S1h1Bz6Qf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper introduces a representation learning step in the Intrinsically Motivated Exploration Process (IMGEP) framework.\n\nThough this work is far from my expertise fields I find it quite easy to read and a good introduction to IMGEP.\nNevertheless I have some major concerns that prevent me from giving an acce... | [
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
2,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1DWPP1A-",
"iclr_2018_S1DWPP1A-",
"iclr_2018_S1DWPP1A-",
"HJcQvaVef",
"rywYwMaXz",
"SkrQDzp7M",
"Bk9oIe5gG",
"ByvGgjhez",
"iclr_2018_S1DWPP1A-"
] |
iclr_2018_ryRh0bb0Z | Multi-View Data Generation Without View Supervision | The development of high-dimensional generative models has recently gained a great surge of interest with the introduction of variational auto-encoders and generative adversarial neural networks. Different variants have been proposed where the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on a particular problem where one aims at generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. Therefore, we propose a generative model and a conditional variant built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a high variety of views. Unlike many multi-view approaches, our model doesn't need any supervision on the views but only on the content. Compared to other conditional generation approaches that are mostly based on binary or categorical attributes, we make no such assumption about the factors of variation. Our model can be used on problems with a huge, potentially infinite, number of categories. We experiment with it on four image datasets, on which we demonstrate the effectiveness of the model and its ability to generalize. | accepted-poster-papers | This paper presents an unsupervised GAN-based model for disentangling the multiple views of the data and their content.
Overall it seems that this paper was well received by the reviewers, who find it novel and significant. The consensus is that the results are promising.
There are some concerns, but the major ones listed below have been addressed in the rebuttal. Specifically:
- R3 had a concern about the experimental evaluation, which has been addressed in the rebuttal.
- R2 had a concern about a problem inherent in this setting (what is treated as “content”), and the authors have clarified in the discussion the assumptions under which such methods operate.
- R1 had concerns related to how the proposed model fits in the literature. Again, the authors have addressed this concern adequately.
| train | [
"r1Ojef4gf",
"SyAnSJdxf",
"r1aAVyagf",
"SkaF99UXf",
"HJpLqqIXf",
"r1gec9IQG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper proposes a GAN-based method for image generation that attempts to separate latent variables describing fixed \"content\" of objects from latent variables describing properties of \"view\" (all dynamic properties such as lighting, viewpoint, accessories, etc). The model is further extended for conditional... | [
7,
5,
7,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1
] | [
"iclr_2018_ryRh0bb0Z",
"iclr_2018_ryRh0bb0Z",
"iclr_2018_ryRh0bb0Z",
"r1Ojef4gf",
"SyAnSJdxf",
"r1aAVyagf"
] |
iclr_2018_SyYe6k-CW | Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling | Recent advances in deep reinforcement learning have made significant strides in performance on applications such as Go and Atari games. However, developing practical methods to balance exploration and exploitation in complex domains remains largely unsolved. Thompson Sampling and its extension to reinforcement learning provide an elegant approach to exploration that only requires access to posterior samples of the model. At the same time, advances in approximate Bayesian methods have made posterior approximation for flexible neural network models practical. Thus, it is attractive to consider approximate Bayesian neural networks in a Thompson Sampling framework. To understand the impact of using an approximate posterior on Thompson Sampling, we benchmark well-established and recently developed methods for approximate posterior sampling combined with Thompson Sampling over a series of contextual bandit problems. We found that many approaches that have been successful in the supervised learning setting underperformed in the sequential decision-making scenario. In particular, we highlight the challenge of adapting slowly converging uncertainty estimates to the online setting. | accepted-poster-papers | This paper is not aimed at introducing new methodologies (and does not claim to do so), but instead it aims at presenting a well-executed empirical study. The presentation and outcomes of this study are quite instructive, and with the ever-growing list of academic papers, this kind of study is a useful regularizer. | val | [
"rkI9YHhlz",
"Hk6R4RIEM",
"H11if2uxf",
"HyxcSZ9lG",
"rkN31yEVM",
"BkRk8K_Mz",
"SJniHtdMf",
"B14GjL_GG",
"ByW1BBdMf",
"r1VQYmOMG",
"SyCf87dGG",
"BynWicwMz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"This paper presents the comparison of a list of algorithms for contextual bandit with Thompson sampling subroutine. The authors compared different methods for posterior estimation for Thompson sampling. Experimental comparisons on contextual bandit settings have been performed on a simple simulation and quite a fe... | [
5,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SyYe6k-CW",
"BkRk8K_Mz",
"iclr_2018_SyYe6k-CW",
"iclr_2018_SyYe6k-CW",
"iclr_2018_SyYe6k-CW",
"SJniHtdMf",
"rkI9YHhlz",
"HyxcSZ9lG",
"r1VQYmOMG",
"BynWicwMz",
"H11if2uxf",
"iclr_2018_SyYe6k-CW"
] |
iclr_2018_H15odZ-C- | Semantic Interpolation in Implicit Models | In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths. | accepted-poster-papers | The paper presents a modified sampling method for improving the quality of interpolated samples in deep generative models.
There is not a great amount of technical contribution in the paper; however, it is written in a very clear way, makes interesting observations and analyses, and shows promising results. Therefore, it should be of interest to the ICLR community. | train | [
"S16ZxNFgz",
"BJDXbk5lM",
"rka1Lw2xf",
"r1y8QktmG",
"BJjJmJY7G",
"H18cf1Y7G",
"rkPw6C_mG",
"ByNZgIKgG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"The paper concerns distributions used for the code space in implicit models, e.g. VAEs and GANs. The authors analyze the relation between the latent space dimension and the normal distribution which is commonly used for the latent distribution. The well-known fact that probability mass concentrates in a shell of h... | [
6,
5,
7,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H15odZ-C-",
"iclr_2018_H15odZ-C-",
"iclr_2018_H15odZ-C-",
"ByNZgIKgG",
"S16ZxNFgz",
"rka1Lw2xf",
"BJDXbk5lM",
"iclr_2018_H15odZ-C-"
] |
iclr_2018_B1X0mzZCW | Fidelity-Weighted Learning | Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label-quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose “fidelity-weighted learning” (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis according to the posterior confidence of its label-quality estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels, and leads to better task-dependent data representations. | accepted-poster-papers | This paper introduces a student-teacher method for learning from labels of varying quality (i.e. varying fidelity data). This is an interesting idea which shows promising results.
Some further connections to various kinds of semi-supervised and multi-fidelity learning would strengthen the paper, although understandably it is not easy to cover the vast literature, which also spans different scientific domains. One reviewer had a concern about some design decisions that seemed ad-hoc, but at least the authors have intuitively and experimentally justified them. | train | [
"rk-GXLRgz",
"H1dQodKgf",
"ByHbM4qlG",
"SkVyr0imz",
"H1fhykJ7G",
"rJSfGk17z",
"H1-tW1yXf",
"B1SWkkk7G",
"Bkwb0C0fM",
"B1wri00MM",
"HkQooCCzG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper suggests a simple yet effective approach for learning with weak supervision. This learning scenario involves two datasets, one with clean data (i.e., labeled by the true function) and one with noisy data, collected using a weak source of supervision. The suggested approach assumes a teacher and student... | [
7,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1X0mzZCW",
"iclr_2018_B1X0mzZCW",
"iclr_2018_B1X0mzZCW",
"iclr_2018_B1X0mzZCW",
"ByHbM4qlG",
"H1dQodKgf",
"H1dQodKgf",
"ByHbM4qlG",
"ByHbM4qlG",
"rk-GXLRgz",
"rk-GXLRgz"
] |
iclr_2018_SJzRZ-WCZ | Latent Space Oddity: on the Curvature of Deep Generative Models | Deep generative models provide a systematic way to learn nonlinear data distributions through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space. The nonlinearity of the generator implies that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and we demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalizes to other deep generative models. | accepted-poster-papers | This paper characterizes the induced geometry of the latent space of deep generative models. The motivation is well established, and the paper convincingly discusses the usefulness derived from these insights. For example, the results uncover issues with the currently used methods for variance estimation in deep generative models. The technique invoked to mitigate this issue does feel somewhat ad hoc, but at least it is well motivated.
One of the reviewers correctly pointed out that there is limited novelty in the theoretical/methodological aspect. However, I agree with the authors’ rebuttal in that characterizing geometries on stochastic manifolds is much less studied and demonstrated, especially in the deep learning community. Therefore, I believe that this paper will be found useful by readers of the ICLR community, and will stimulate future research. | train | [
"SyC3QhVgf",
"r1dsxRSxG",
"HJOIiwjlz",
"SkdEk8KfM",
"rJpn3fVZf",
"rk-_6MEZM",
"SkM8pzVZf",
"HkbMaz4Wz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper investigates the geometry of deep generative models. In particular, it describes the geometry of the latent space when giving it the (stochastic) Riemannian geometry inherited from the embedding in the input space described by the generator function. The authors describe the geometric setting, how distan... | [
3,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJzRZ-WCZ",
"iclr_2018_SJzRZ-WCZ",
"iclr_2018_SJzRZ-WCZ",
"iclr_2018_SJzRZ-WCZ",
"iclr_2018_SJzRZ-WCZ",
"SyC3QhVgf",
"r1dsxRSxG",
"HJOIiwjlz"
] |
iclr_2018_Hk3ddfWRW | Imitation Learning from Visual Data with Multiple Intentions | Recent advances in learning from demonstrations (LfD) with deep neural networks have enabled learning complex robot skills that involve high dimensional perception such as raw image inputs.
LfD algorithms generally assume learning from single task demonstrations. In practice, however, it is more efficient for a teacher to demonstrate a multitude of tasks without careful task set up, labeling, and engineering. Unfortunately in such cases, traditional imitation learning techniques fail to represent the multi-modal nature of the data, and often result in sub-optimal behavior. In this paper we present an LfD approach for learning multiple modes of behavior from visual data. Our approach is based on a stochastic deep neural network (SNN), which represents the underlying intention in the demonstration as a stochastic activation in the network. We present an efficient algorithm for training SNNs, and for learning with vision inputs, we also propose an architecture that associates the intention with a stochastic attention module.
We demonstrate our method on real robot visual object reaching tasks, and show that it can reliably learn the multiple behavior modes in the demonstration data. Video results are available at https://vimeo.com/240212286/fd401241b9. | accepted-poster-papers | This paper presents a sampling inference method for learning in multi-modal demonstration scenarios. Reference to imitation learning causes some confusion with the IRL domain, where this terminology is usually encountered. Providing a real application to robot reaching, while a relatively simple task in robotics, increases the difficulty and complexity of the demonstration. That makes it impressive, but also difficult to unpick the contributions and reproduce even the first demonstration. It's understandable at a meeting on learning representations that the reviewers wanted to understand why existing methods for learning multi-modal distributions would not work, and get a better understanding of the tradeoffs and limitations of the proposed method. The CVAE comparison added to the appendix during the rebuttal period just pushed this paper over the bar. The demonstration is simplified, so much easier to reproduce, making it more feasible that others will attempt to reproduce the claims made here.
| train | [
"ryU5B1zxf",
"r1cOWGdgz",
"r1ET9Ncgf",
"S1PKgkIGz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"The authors propose a new sampling based approach for inference in latent variable models. They apply this approach to multi-modal (several \"intentions\") imitation learning and demonstrate for a real visual robotics task that the proposed framework works better than deterministic neural networks and stochastic n... | [
6,
4,
6,
-1
] | [
4,
3,
4,
-1
] | [
"iclr_2018_Hk3ddfWRW",
"iclr_2018_Hk3ddfWRW",
"iclr_2018_Hk3ddfWRW",
"iclr_2018_Hk3ddfWRW"
] |
iclr_2018_H1zriGeCZ | Hyperparameter optimization: a spectral approach | We give a simple, fast algorithm for hyperparameter optimization inspired by techniques from the analysis of Boolean functions. We focus on the high-dimensional regime where the canonical example is training a neural network with a large number of hyperparameters. The algorithm --- an iterative application of compressed sensing techniques for orthogonal polynomials --- requires only uniform sampling of the hyperparameters and is thus easily parallelizable.
Experiments for training deep neural networks on Cifar-10 show that compared to state-of-the-art tools (e.g., Hyperband and Spearmint), our algorithm finds significantly improved solutions, in some cases better than what is attainable by hand-tuning. In terms of overall running time (i.e., time required to sample various settings of hyperparameters plus additional computation time), we are at least an order of magnitude faster than Hyperband and Bayesian Optimization. We also outperform Random Search 8×.
Our method is inspired by provably-efficient algorithms for learning decision trees using the discrete Fourier transform. We obtain improved sample-complexity bounds for learning decision trees while matching state-of-the-art bounds on running time (polynomial and quasipolynomial, respectively). | accepted-poster-papers | This paper introduces an algorithm for optimization of discrete hyperparameters based on compressed sensing, and compares against standard gradient-free optimization approaches.
As the reviewers point out, the provable guarantees (as is usually the case) don't quite make it to the main results section, but are still refreshing to see in hyperparameter optimization.
The method itself is relatively simple compared to full-featured Bayesopt (spearmint), although not as widely applicable.
| train | [
"H1nRveigz",
"SyM469sgf",
"Syx3D46ez",
"S1cQU63Zz",
"S1BiH6nWf",
"HyAQBa2Zz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper looks at the problem of optimizing hyperparameters under the assumption that the unknown function can be approximated by a sparse and low degree polynomial in the Fourier basis. The main result is that the approximate minimization can be performed over the boolean hypercube where the number of evaluatio... | [
6,
6,
9,
-1,
-1,
-1
] | [
4,
3,
5,
-1,
-1,
-1
] | [
"iclr_2018_H1zriGeCZ",
"iclr_2018_H1zriGeCZ",
"iclr_2018_H1zriGeCZ",
"H1nRveigz",
"SyM469sgf",
"Syx3D46ez"
] |
iclr_2018_H1Xw62kRZ | Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis | Program synthesis is the task of automatically generating a program consistent with a specification. Recent years have seen the proposal of a number of neural approaches for program synthesis, many of which adopt a sequence generation paradigm similar to neural machine translation, in which sequence-to-sequence models are trained to maximize the likelihood of known reference programs. While achieving impressive results, this strategy has two key limitations. First, it ignores Program Aliasing: the fact that many different programs may satisfy a given specification (especially with incomplete specifications such as a few input-output examples). By maximizing the likelihood of only a single reference program, it penalizes many semantically correct programs, which can adversely affect the synthesizer performance. Second, this strategy overlooks the fact that programs have a strict syntax that can be efficiently checked. To address the first limitation, we perform reinforcement learning on top of a supervised model with an objective that explicitly maximizes the likelihood of generating semantically correct programs. For addressing the second limitation, we introduce a training procedure that directly maximizes the probability of generating syntactically correct programs that fulfill the specification. We show that our contributions lead to improved accuracy of the models, especially in cases where the training data is limited. | accepted-oral-papers | Below is a summary of the pros and cons of the proposed paper:
Pros:
* Proposes a novel method to tune program synthesizers to generate correct programs and prune search space, leading to better and more efficient synthesis
* Shows small but substantial gains on a standard benchmark
Cons:
* Reviewers and commenters cited a few clarity issues, although these have mostly been resolved
* Lack of empirical comparison with relevant previous work (e.g. Parisotto et al.) makes it hard to determine their relative merit
Overall, this seems to be a solid, well-evaluated contribution that, to me, warrants a poster presentation.
Also, just a few notes from the area chair to potentially make the final version better:
The proposed method is certainly different from the method of Parisotto et al., but it is attempting to solve the same problem: the lack of consideration of the grammar in neural program synthesis models. The relative merit is stated to be that the proposed method can be used when there is no grammar specification, but the model of Parisotto et al. also learns expansion rules from data, so no explicit grammar specification is necessary (as long as a parser exists, which is presumably necessary to perform the syntax checking that is core to the proposed method). It would have been ideal to see an empirical comparison between the two methods, but this is obviously a lot of work. It would be nice to have the method acknowledged more prominently in the description, perhaps in the introduction, however.
It is nice to see a head-nod to Guu et al.'s work on semantic parsing (as semantic parsing from natural language is also highly relevant). There is obviously a lot of work on generating structured representations from natural language, and the following two might be particularly relevant given their focus on grammar-based formalisms for code synthesis from natural language:
* "A Syntactic Neural Model for General-purpose Code Generation" Yin and Neubig ACL 2017.
* "Abstract Syntax Networks for Code Generation and Semantic Parsing" Rabinovich et al. ACL 2017
| train | [
"Hk4_Jw9xG",
"H1JSNUjeG",
"HkcxQ4Rxf",
"SyzhcVbXf",
"S1UnKVWXf",
"ByKvFVW7f",
"r1GbKVb7G",
"Sy8bJjuMM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"The authors consider the task of program synthesis in the Karel DSL. Their innovations are to use reinforcement learning to guide sequential generation of tokes towards a high reward output, incorporate syntax checking into the synthesis procedure to prune syntactically invalid programs. Finally they learn a model... | [
5,
6,
7,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1Xw62kRZ",
"iclr_2018_H1Xw62kRZ",
"iclr_2018_H1Xw62kRZ",
"Sy8bJjuMM",
"HkcxQ4Rxf",
"H1JSNUjeG",
"Hk4_Jw9xG",
"iclr_2018_H1Xw62kRZ"
] |
iclr_2018_HJzgZ3JCW | Efficient Sparse-Winograd Convolutional Neural Networks | Convolutional Neural Networks (CNNs) are computationally intensive, which limits their application on mobile devices. Their energy is dominated by the number of multiplies needed to perform the convolutions. Winograd’s minimal filtering algorithm (Lavin, 2015) and network pruning (Han et al., 2015) can reduce the operation count, but these two methods cannot be straightforwardly combined — applying the Winograd transform fills in the sparsity in both the weights and the activations. We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity. First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations. Second, we prune the weights in the Winograd domain to exploit static weight sparsity. For models on CIFAR-10, CIFAR-100 and ImageNet datasets, our method reduces the number of multiplications by 10.4x, 6.8x and 10.8x respectively with loss of accuracy less than 0.1%, outperforming previous baselines by 2.0x-3.0x. We also show that moving ReLU to the Winograd domain allows more aggressive pruning. | accepted-poster-papers | The paper presents a modification of the Winograd convolution algorithm that reduces the number of multiplications in a forward pass of a CNN with minimal loss of accuracy. The reviewers brought up the strong results, the readability of the paper, and the thoroughness of the experiments. One concern brought up was the applicability to deeper network structures. This was acknowledged by the authors to be a subject of future work. Another issue raised was the question of theoretical vs. actual speedup. Again, this was acknowledged by the authors to be an eventual goal but subject to further systems work and architecture optimizations. The reviewers were consistent in their support of the paper. I follow their recommendation: Accept.
| train | [
"Hk0i6DUVz",
"SyMeSO8ef",
"rJMLjDqeM",
"HJ8UsZ6gM",
"rktxb6fNM",
"BJdkWgiGz",
"ryvYuSgzf",
"B171DHlfG",
"BkOMrBeGM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks for the clarifications! I think the score is still appropriate.",
"This paper proposes to combine Winograd transformation with sparsity to reduce the computation for deep convolutional neural network. Specifically, ReLU nonlinearity was moved after Winograd transformation to increase the dynamic sparsity ... | [
-1,
7,
7,
8,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"BkOMrBeGM",
"iclr_2018_HJzgZ3JCW",
"iclr_2018_HJzgZ3JCW",
"iclr_2018_HJzgZ3JCW",
"ryvYuSgzf",
"B171DHlfG",
"SyMeSO8ef",
"rJMLjDqeM",
"HJ8UsZ6gM"
] |
iclr_2018_Sk6fD5yCb | Espresso: Efficient Forward Propagation for Binary Deep Neural Networks | There are many application scenarios for which the computational performance and memory footprint of the prediction phase of Deep Neural Networks (DNNs) need to be optimized. Binary Deep Neural Networks (BDNNs) have been shown to be an effective way of achieving this objective. In this paper, we show how Convolutional Neural Networks (CNNs) can be implemented using binary representations. Espresso is a compact, yet powerful library written in C/CUDA that features all the functionalities required for the forward propagation of CNNs, in a binary file less than 400KB, without any external dependencies. Although it is mainly designed to take advantage of massive GPU parallelism, Espresso also provides an equivalent CPU implementation for CNNs. Espresso provides special convolutional and dense layers for BCNNs, leveraging bit-packing and bit-wise computations for efficient execution. These techniques provide a speed-up of matrix-multiplication routines, and at the same time, reduce memory usage when storing parameters and activations. We experimentally show that Espresso is significantly faster than existing implementations of optimized binary neural networks (~ 2 orders of magnitude). Espresso is released under the Apache 2.0 license and is available at http://github.com/organization/project. | accepted-poster-papers | This paper describes a new library for forward propagation of binary CNNs. R1 asked for clarification on the contributions and novelty, which the authors provided. They subsequently updated their score. I think that optimized code with permissive licensing (as R2 points out) benefits the community. The paper will benefit those who decide to work with the library. | train | [
"HJCLeXtgM",
"SyRQ7Vq1G",
"HymwoY3lM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a library written in C/CUDA that features all the functionalities required for the forward propagation of BCNNs. The library is significantly faster than existing implementations of optimized binary neural networks (≈ 2 orders of magnitude), and will be released on github.\n\nBCNNs have been abl... | [
6,
7,
7
] | [
3,
4,
1
] | [
"iclr_2018_Sk6fD5yCb",
"iclr_2018_Sk6fD5yCb",
"iclr_2018_Sk6fD5yCb"
] |
iclr_2018_r11Q2SlRW | Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis | We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles. | accepted-poster-papers | This paper proposes a real-time method for synthesizing human motion of highly complex styles. The key concern raised by R2 was that the method did not depart greatly from a standard LSTM: parts of the generated sequences are conditioned on generated data as opposed to ground truth data. However, the reviewer thought the idea was sensible and the results were very good in practice. R1 also agreed that the results were very good and asked for a more detailed analysis of conditioning length and some clarification. R3 brought up similarities to Professor Forcing (Goyal et al. 2016) -- also noted by R2 -- and Learning Human Motion Models for Long-term Predictions (Ghosh et al. 2017) -- noting that the latter was not peer-reviewed. R3 also raised the open issue of how to best evaluate sequence prediction models like these. They brought up an interesting point: the synthesized motions were of low quality compared to recent works by Holden et al.; however, they acknowledged that rendering the characters exposed these motion flaws. The authors responded to all of the reviews, committing to a comparison to Scheduled Sampling, though a comparison to Professor Forcing was proving difficult in the review timeline. While this paper may not receive the highest novelty score, I agree with the reviewers that it has merit. It is well written, has clear and reasonably thorough experiments, and the results are indeed good. | train | [
"H1b7FSwgM",
"r1NGC2dlf",
"S1Lqh4YxG",
"B1uPnlTGM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"This paper proposes acLSTM to synthesize long sequences of human motion. It tackles the challenge of error accumulation of traditional techniques to predict long sequences step by step. The key idea is to combine prediction and ground truth in training. It is impressive that this architecture can predict hundreds ... | [
7,
7,
6,
-1
] | [
3,
5,
5,
-1
] | [
"iclr_2018_r11Q2SlRW",
"iclr_2018_r11Q2SlRW",
"iclr_2018_r11Q2SlRW",
"iclr_2018_r11Q2SlRW"
] |
iclr_2018_SyMvJrdaW | Decoupling the Layers in Residual Networks | We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply a perturbation theory on residual networks and decouple the interactions between residual units. The resulting warp operator is a first order approximation of the output over multiple layers. The first order perturbation theory exhibits properties such as binomial path lengths and exponential gradient scaling found experimentally by Veit et al (2016).
We demonstrate through an extensive performance study that the proposed network achieves comparable predictive performance to the original residual network with the same number of parameters, while achieving a significant speed-up on the total training time. As WarpNet performs model parallelism in residual network training in which weights are distributed over different GPUs, it offers speed-up and capability to train larger networks compared to original residual networks. | accepted-poster-papers | This paper proposes a “warp operator” based on Taylor expansion that can replace a block of layers in a residual network, allowing for parallelization. Taking advantage of multi-GPU parallelization the paper shows increased speedup with similar performance on CIFAR-10 and CIFAR-100. R1 asked for clarification on rotational symmetry. The authors instead removed the discussion that was causing confusion (replacing with additional experimental results that had been requested). R2 had the most detailed review and thought that the idea and analysis were interesting. They also had difficulty following the discussion of symmetry (noted above). They also pointed out several other issues around clarity and had several suggestions for improving the experiments which seem to have been taken to heart by the authors, who detailed their changes in response to this review. There was also an anonymous public comment that pointed out a “fatal mathematical flaw and weak experiments”. There was a lengthy exchange between this reviewer and the authors, and the paper was actually corrected and clarified in the process. This anonymous poster was rather demanding of the authors, asking for latex-formatted equations, pseudo-code, and giving direction on how to respond to his/her rebuttal. I don't agree with the point that the paper is flawed by "only" presenting a speed-up over ResNet, and furthermore the comment of "not everyone has access to parallelization" isn’t a fair criticism of the paper. | val | [
"ryFgDsREM",
"rycPJEAVM",
"Sy_NM8aNG",
"BJAGxsHNf",
"HJ6XbgpEG",
"HyzlPHYNG",
"rkDRpk_4M",
"BJ9xeoSNf",
"B1pRJiHNG",
"r1-31oHNM",
"r1nrJjSVG",
"ryCv5QFgz",
"r1NvXZ9ez",
"S1wxhnsef",
"B1DKKRcmf",
"SJdktR5mz",
"rk7hdR5mf",
"ry04OA9mf",
"BkrnPR5mf",
"BJbpaOlGM",
"S16X2Oeff",
"... | [
"author",
"public",
"author",
"public",
"public",
"author",
"author",
"public",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"author",
"author",
"public... | [
"Our formula (3) and its proof are correct and sound. We believe that your conclusion is based on a critical misunderstanding of our approach. As evident from the derivation of Equation 9 from Equation 8, our approximation is built upon MANY local Taylor expansions to build an approximation for each layer.\n\nYour ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rycPJEAVM",
"Sy_NM8aNG",
"HJ6XbgpEG",
"B1DKKRcmf",
"HyzlPHYNG",
"rkDRpk_4M",
"BJAGxsHNf",
"B1DKKRcmf",
"B1DKKRcmf",
"B1DKKRcmf",
"B1DKKRcmf",
"iclr_2018_SyMvJrdaW",
"iclr_2018_SyMvJrdaW",
"iclr_2018_SyMvJrdaW",
"BJbpaOlGM",
"ryCv5QFgz",
"r1NvXZ9ez",
"BkrnPR5mf",
"S1wxhnsef",
"... |
iclr_2018_HktRlUlAZ | Polar Transformer Networks | Convolutional neural networks (CNNs) are inherently equivariant to translation. Efforts to embed other forms of equivariance have concentrated solely on rotation. We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN). PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations. The result is a network invariant to translation and equivariant to both rotation and scale. PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier. PTN achieves state-of-the-art on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling. The ideas of PTN are extensible to 3D which we demonstrate through the Cylindrical Transformer Network. | accepted-poster-papers | The paper proposes a new deep architecture based on polar transformation for improving rotational invariance. The proposed method is interesting and the experimental results show strong classification performance on small/medium-scale datasets (e.g., rotated MNIST and its variants with added translations and clutter, ModelNet40, etc.). It would be more impressive and impactful if the proposed method could bring performance improvements on large-scale, real datasets with potentially cluttered scenes (e.g., Imagenet, Pascal VOC, MS-COCO, etc.). | train | [
"ryVw7PIVG",
"rkMG9c_gf",
"r1XT6wdeG",
"B1aLPb5eM",
"BJxiU46XG",
"S15H7uNMz",
"BJzqfu4fM",
"H11Me_VGM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I think my initial review score was a bit low. There is certainly still a lot of residual uncertainty about whether the method in its current state would work well on more serious vision problems, but:\n1) The method is conceptually novel and innovative\n2) I can see a plausible path towards real-world usage. This... | [
-1,
7,
7,
8,
-1,
-1,
-1,
-1
] | [
-1,
4,
4,
3,
-1,
-1,
-1,
-1
] | [
"BJzqfu4fM",
"iclr_2018_HktRlUlAZ",
"iclr_2018_HktRlUlAZ",
"iclr_2018_HktRlUlAZ",
"iclr_2018_HktRlUlAZ",
"r1XT6wdeG",
"rkMG9c_gf",
"B1aLPb5eM"
] |
iclr_2018_H1VGkIxRZ | Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks | We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10 and Tiny-ImageNet) when the true positive rate is 95%. | accepted-poster-papers | The reviewers agree that the method is simple, the results are quite good, and the paper is well written. The issues the reviewers brought up have been adequately addressed. There is a slight concern about novelty, however the approach will likely be quite useful in practice. | train | [
"r1KVjuSlf",
"By__JgYef",
"By0tIonxf",
"SJDGWFiQG",
"ryYGAGoXf",
"r1OWfeiXf",
"ryfs-roff",
"ryPuWHofM",
"H1E4-rszf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"author"
] | [
"\n-----UPDATE------\n\nThe authors addressed my concerns satisfactorily. Given this and the other reviews I have bumped up my score from a 5 to a 6.\n\n----------------------\n\n\nThis paper introduces two modifications that allow neural networks to be better at distinguishing between in- and out- of distribution ... | [
6,
6,
9,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1VGkIxRZ",
"iclr_2018_H1VGkIxRZ",
"iclr_2018_H1VGkIxRZ",
"ryYGAGoXf",
"r1OWfeiXf",
"iclr_2018_H1VGkIxRZ",
"r1KVjuSlf",
"By__JgYef",
"By0tIonxf"
] |
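The two ingredients named in the ODIN abstract above, temperature scaling and a small input perturbation, can be combined in a few lines. The sketch below assumes a PyTorch classifier returning logits; the function name and default values are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Illustrative ODIN-style confidence score for a batch of inputs x."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / temperature
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    # Moving against the gradient of this loss raises the max softmax score.
    x_perturbed = (x - epsilon * x.grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=1)
    return probs.max(dim=1).values  # threshold these to flag OOD inputs
```

Inputs whose score falls below a validation-chosen threshold would be flagged as out-of-distribution.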
iclr_2018_Skj8Kag0Z | Stabilizing Adversarial Nets with Prediction Methods | Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates. | accepted-poster-papers | This paper provides a simple technique for stabilizing GAN training, and works over a variety of GAN models.
One of the reviewers expressed concerns with the value of the theory. I think that it would be worth emphasizing that similar arguments could be made for alternating gradient descent, and simultaneous gradient descent. In this case, if possible, it would be good to highlight how the convergence of the prediction method approach differs from the alternating descent approach. Otherwise, highlight that this theory simply shows that the prediction method is not a completely crazy idea (in that it doesn't break existing theory).
Practically, I think the experiments are sufficiently interesting to show that this approach has promise. I don't see the updated results for Stacked GAN for a fixed set of epochs (20 and 40 at different learning rates). Perhaps put this below Table 1. | train | [
"HyUTUNfHz",
"SyaZlQnEG",
"HJR6IerZf",
"rJTFwew4M",
"rk7Fq_IEM",
"SkpeoDLEG",
"rJvkqPIVz",
"ry-q9ZOlf",
"HJgHrDugM",
"SyyOtPi7G",
"rkou2HjXf",
"BJMVz7qQG",
"Bk1l-xfmf",
"HkUSllGXM",
"HyXpJeMQz",
"HkOZkgzQf",
"HJUBAbUbf",
"SyMkklU-G"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"Dear Reviewer,\n\nWe have tried addressing all your concerns in our latest response, please let us know if you still have any remaining concerns ? \n",
"> Rather, I'm saying that other algorithms have similar theoretical guarantees\nCould the reviewer be more specific on which algorithms have similar theoretical... | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
9,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HJR6IerZf",
"rJTFwew4M",
"iclr_2018_Skj8Kag0Z",
"HkUSllGXM",
"iclr_2018_Skj8Kag0Z",
"rJvkqPIVz",
"iclr_2018_Skj8Kag0Z",
"iclr_2018_Skj8Kag0Z",
"iclr_2018_Skj8Kag0Z",
"Bk1l-xfmf",
"iclr_2018_Skj8Kag0Z",
"iclr_2018_Skj8Kag0Z",
"HJR6IerZf",
"HJR6IerZf",
"HJgHrDugM",
"ry-q9ZOlf",
"SyMkk... |
iclr_2018_rJXMpikCZ | Graph Attention Networks | We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of computationally intensive matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training). | accepted-poster-papers | The authors appear to have largely addressed the concerns of the reviewers and commenters regarding related work and experiments. The results are strong, and this will likely be a useful contribution for the graph neural network literature. | train | [
"S1vzCb-bz",
"BJth1UKlf",
"ryFW0bhlM",
"r1nI2A8fG",
"HkQRsR8Mz",
"Sya9jCIzG",
"BJE_oR8Gf",
"SynboRIMz",
"Hyresurlz",
"ryMGLHXxz",
"Sy2YY-MlM",
"SkMm8gaJM",
"HyL2UVDJG",
"BJg4NXZyz",
"HJMkC2xyz",
"H1Cechgkz",
"HJk72Fekz",
"S12tIMACW",
"r1KPYM0Cb",
"ryGdFlCCb",
"B16obCO0W"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"public",
"author",
"public",
"public",
"public"
] | [
"This paper has proposed a new method for classifying nodes of a graph. Their method can be used in both semi-supervised scenarios where the label of some of the nodes of the same graph as the graph in training is missing (Transductive) and in the scenario that the test is on a completely new graph (Inductive).\nEa... | [
6,
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJXMpikCZ",
"iclr_2018_rJXMpikCZ",
"iclr_2018_rJXMpikCZ",
"iclr_2018_rJXMpikCZ",
"S1vzCb-bz",
"ryFW0bhlM",
"BJth1UKlf",
"Hyresurlz",
"iclr_2018_rJXMpikCZ",
"Sy2YY-MlM",
"iclr_2018_rJXMpikCZ",
"HyL2UVDJG",
"iclr_2018_rJXMpikCZ",
"HJk72Fekz",
"r1KPYM0Cb",
"ryGdFlCCb",
"iclr_... |
iclr_2018_BywyFQlAW | Minimax Curriculum Learning: Machine Teaching with Desirable Difficulties and Scheduled Diversity | We introduce and study minimax curriculum learning (MCL), a new method for adaptively selecting a sequence of training subsets for a succession of stages in machine learning. The subsets are encouraged to be small and diverse early on, and then larger, harder, and allowably more homogeneous in later stages. At each stage, model weights and training sets are chosen by solving a joint continuous-discrete minimax optimization, whose objective is composed of a continuous loss (reflecting training set hardness) and a discrete submodular promoter of diversity for the chosen subset. MCL repeatedly solves a sequence of such optimizations with a schedule of increasing training set size and decreasing pressure on diversity encouragement. We reduce MCL to the minimization of a surrogate function handled by submodular maximization and continuous gradient methods. We show that MCL achieves better performance and, with a clustering trick, uses fewer labeled samples for both shallow and deep models while achieving the same performance. Our method involves repeatedly solving constrained submodular maximization of an only slowly varying function on the same ground set. Therefore, we develop a heuristic method that utilizes the previous submodular maximization solution as a warm start for the current submodular maximization process to reduce computation while still yielding a guarantee. | accepted-poster-papers | The submission formulates self paced learning as a specific iterative mini-max optimization, which incorporates both a risk minimization step and a submodular maximization for selecting the next training examples.
The strengths of the paper lie primarily in the theoretical analysis, while the experiments are somewhat limited to simple datasets: News20, MNIST, & CIFAR10. Additionally, the main paper is probably too long in its current form, and could benefit from some of the proof details being moved to the appendix.
| train | [
"BJcnVd6mG",
"BkbPVPzgG",
"H1-u-QCef",
"HkO3F9EbM",
"BkJNS_6Xf",
"H1xfB_TXM",
"SyPKEdpQM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks for your positive comments about the theoretical analysis and helpful suggestions to extend Theorem 1! In the new revision, we've added a 4.5-page analysis. This does not only complete the analysis of Theorem 1, but also shows the convergence speed for both the outer-loop and the whole algorithm, and show b... | [
-1,
5,
6,
6,
-1,
-1,
-1
] | [
-1,
3,
4,
3,
-1,
-1,
-1
] | [
"HkO3F9EbM",
"iclr_2018_BywyFQlAW",
"iclr_2018_BywyFQlAW",
"iclr_2018_BywyFQlAW",
"BkbPVPzgG",
"H1-u-QCef",
"iclr_2018_BywyFQlAW"
] |
iclr_2018_B1n8LexRZ | Generalizing Hamiltonian Monte Carlo with Neural Networks | We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code will be open-sourced with the camera-ready paper. | accepted-poster-papers | This paper presents a learned inference architecture which generalizes HMC. It defines a parameterized family of MCMC transition operators which share the volume preserving structure of HMC updates, which allows the acceptance ratio to be computed efficiently. Experiments show that the learned operators are able to mix significantly faster on some simple toy examples, and evidence is presented that it can improve posterior inference for a deep latent variable model. This paper has not quite demonstrated usefulness of the method, but it is still a good proof of concept for adaptive extensions of HMC.
| val | [
"ryUPYorHz",
"B1_so3fSM",
"SJGzn0pNz",
"ryeffj94z",
"rJ0a6k9Nf",
"Hksh6uugz",
"HJdCshKgf",
"rkZzfMqef",
"By4O2iZVM",
"r1-mLY6XG",
"BkkI4roQM",
"HyMf9EiXz",
"ryQkGp_XG",
"Bylo8owGM",
"Syp2SsvzG",
"rkeGIjvMf"
] | [
"public",
"author",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"author",
"author",
"author"
] | [
"Yes, that is an interesting point that ReLU networks are routinely used together with stochastic gradient VI. My concern would then apply to these methods as well, even though the discontinuity in the MH method is inherent to the problem where for NNs it can be resolved by replacing ReLUs with a continuously diffe... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"B1_so3fSM",
"SJGzn0pNz",
"ryeffj94z",
"rJ0a6k9Nf",
"iclr_2018_B1n8LexRZ",
"iclr_2018_B1n8LexRZ",
"iclr_2018_B1n8LexRZ",
"iclr_2018_B1n8LexRZ",
"r1-mLY6XG",
"HyMf9EiXz",
"iclr_2018_B1n8LexRZ",
"ryQkGp_XG",
"iclr_2018_B1n8LexRZ",
"rkZzfMqef",
"HJdCshKgf",
"Hksh6uugz"
] |
iclr_2018_H1Yp-j1Cb | An Online Learning Approach to Generative Adversarial Networks | We consider the problem of training generative models with a Generative Adversarial Network (GAN). Although GANs can accurately model complex distributions, they are known to be difficult to train due to instabilities caused by a difficult minimax optimization problem. In this paper, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning we propose a novel training method named Chekhov GAN. On the theory side, we show that our method provably converges to an equilibrium for semi-shallow GAN architectures, i.e. architectures where the discriminator is a one-layer network and the generator is arbitrary. On the practical side, we develop an efficient heuristic guided by our theoretical results, which we apply to commonly used deep GAN architectures.
On several real-world tasks our approach exhibits improved stability and performance compared to standard GAN training. | accepted-poster-papers | This paper presents a GAN training algorithm motivated by online learning. The method is shown to converge to a mixed Nash equilibrium in the case of a shallow discriminator. In the initial version of the paper, reviewers had concerns about weak baselines in the experiments, but the updated version includes comparisons against a variety of modern GAN architectures which have been claimed to fix mode dropping. This seems to address the main criticism of the reviewers. Overall, this paper seems like a worthwhile addition to the GAN literature. | train | [
"HJ66g8RgM",
"HkQupw5gf",
"Bko7pxAWz",
"H1ldL6q7f",
"B1TYdDcXG",
"BJgv_w9QG",
"Bylluw97M",
"S19bDv9mG",
"Sy6RLv9Qf",
"BkKnrjN-z",
"rycH3LVbz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"It is well known that the original GAN (Goodfellow et al.) suffers from instability and mode collapsing. Indeed, existing work has pointed out that the standard GAN training process may not converge if we insist on obtaining pure strategies (for the minmax game). The present paper proposes to obtain mixed strategy... | [
7,
8,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1Yp-j1Cb",
"iclr_2018_H1Yp-j1Cb",
"iclr_2018_H1Yp-j1Cb",
"Sy6RLv9Qf",
"HkQupw5gf",
"HJ66g8RgM",
"Bko7pxAWz",
"rycH3LVbz",
"BkKnrjN-z",
"iclr_2018_H1Yp-j1Cb",
"iclr_2018_H1Yp-j1Cb"
] |
iclr_2018_rkQkBnJAb | Improving GANs Using Optimal Transport | We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance defined in an adversarially learned feature space, resulting in a highly discriminative distance function with unbiased mini-batch gradients. Experimentally we show OT-GAN to be highly stable when trained with large mini-batches, and we present state-of-the-art results on several popular benchmark problems for image generation. | accepted-poster-papers | This is another paper, similar in spirit to the Wasserstein GAN and Cramer GAN, which uses ideas from optimal transport theory to define a more stable GAN architecture. It combines both a primal representation (with Sinkhorn loss) with a minibatch-based energy distance between distributions.
The experiments show that the OT-GAN produces sharper samples than a regular GAN on various datasets. While more could probably be done to distinguish the model from WGANs and Cramer GANs, this paper seems like a worthwhile contribution to the GAN literature and merits publication.
| train | [
"SJyW7vDgz",
"SyGyMzqez",
"SyLFVA3ez",
"S1V_zUt7G",
"ryc4f8YQf",
"rJIAWLtXz",
"Syr1_P3bz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper introduces a new algorithm for training GANs based on the Earth Mover’s distance. In order to avoid biased gradients, the authors use the dual form of the distance on mini-batches, to make it more robust. To compute the distance between mini batches, they use the Sinkhorn distance. Unlike the original Si... | [
8,
6,
6,
-1,
-1,
-1,
-1
] | [
4,
2,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkQkBnJAb",
"iclr_2018_rkQkBnJAb",
"iclr_2018_rkQkBnJAb",
"SJyW7vDgz",
"SyGyMzqez",
"Syr1_P3bz",
"SyLFVA3ez"
] |
iclr_2018_S1HlA-ZAZ | The Kanerva Machine: A Generative Distributed Memory | We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them. Inspired by Kanerva's sparse distributed memory, it has a robust distributed reading and writing mechanism. The memory is analytically tractable, which enables optimal on-line compression via a Bayesian update-rule. We formulate it as a hierarchical conditional generative model, where memory provides a rich data-dependent prior distribution. Consequently, the top-down memory and bottom-up perception are combined to produce the code representing an observation. Empirically, we demonstrate that the adaptive memory significantly improves generative models trained on both the Omniglot and CIFAR datasets. Compared with the Differentiable Neural Computer (DNC) and its variants, our memory model has greater capacity and is significantly easier to train. | accepted-poster-papers | This paper presents a distributed memory architecture based on a generative model with a VAE-like training criterion. The claim is that this approach is easier to train than other memory-based architectures. The model seems sound, and it is described clearly. The experimental validation seems a bit limited: most of the comparisons are against plain VAEs, which aren't a memory-based architecture. The discussion of "one-shot generalization" is confusing, since the task is modified without justification to have many categories and samples per category. The experiment of Section 4.4 seems promising, but this needs to be expanded to more tasks and baselines since it's the only experiment that really tests the Kanerva Machine as a memory architecture. Despite these concerns, I think the idea is promising and this paper contributes usefully to the discussion, so I recommend acceptance. | test | [
"rJlzr-5lM",
"HJ7qJh9eM",
"r14ew-W-G",
"rybuz4-zf",
"S1JTjXWfz",
"HJxcoX-GM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The generative model comprises a real-valued matrix M (with a multivariate normal prior) that serves\nas the memory for an episode (an unordered set of datapoints). For each datapoint a marginally independent\nlatent variable y_t is used to index into M and realize a conditional density\nof another latent variable... | [
6,
7,
7,
-1,
-1,
-1
] | [
4,
3,
2,
-1,
-1,
-1
] | [
"iclr_2018_S1HlA-ZAZ",
"iclr_2018_S1HlA-ZAZ",
"iclr_2018_S1HlA-ZAZ",
"rJlzr-5lM",
"HJ7qJh9eM",
"r14ew-W-G"
] |
iclr_2018_r1gs9JgRZ | Mixed Precision Training | Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyper-parameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets. | accepted-poster-papers | meta score: 8
The paper explores mixing 16- and 32-bit floating point arithmetic for NN training with CNN and LSTM experiments on a variety of tasks
Pros:
- addresses an important practical problem
- very wide range of experimentation, reported in depth
Cons:
- one might say the novelty was minor, but the novelty comes from the extensive analysis and experiments | train | [
"rJwXkeOgM",
"SkSMlWcgG",
"SJQ3bonlG",
"rktIavaQz",
"SyoJNu2Xf",
"SyCvNOnQf",
"ryjSm_hQz",
"S1nTzd2Qf",
"SyetJ_Pbz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The paper considers the problem of training neural networks in mixed precision (MP), using both 16-bit floating point (FP16) and 32-bit floating point (FP32). The paper proposes three techniques for training networks in mixed precision: first, keep a master copy of network parameters in FP32; second, use loss scal... | [
8,
5,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1gs9JgRZ",
"iclr_2018_r1gs9JgRZ",
"iclr_2018_r1gs9JgRZ",
"iclr_2018_r1gs9JgRZ",
"SyetJ_Pbz",
"rJwXkeOgM",
"SkSMlWcgG",
"SJQ3bonlG",
"SkSMlWcgG"
] |
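The three techniques listed in the mixed-precision abstract above are mostly bookkeeping around an ordinary training step. A rough sketch of the first two (FP32 master weights and static loss scaling) is given below; all names and the loss-scale value are chosen for illustration only.

```python
import torch
import torch.nn.functional as F

def mixed_precision_step(model, master_params, optimizer, x, y, loss_scale=128.0):
    """One illustrative step: `model` holds FP16 weights, `master_params` is the
    FP32 copy that `optimizer` actually updates (assumed set up by the caller)."""
    model.zero_grad()
    loss = F.cross_entropy(model(x.half()).float(), y)
    (loss * loss_scale).backward()                    # keep small gradients representable in FP16
    for p16, p32 in zip(model.parameters(), master_params):
        if p16.grad is not None:
            p32.grad = p16.grad.float() / loss_scale  # unscale in FP32
    optimizer.step()                                  # update the FP32 master weights
    for p16, p32 in zip(model.parameters(), master_params):
        p16.data.copy_(p32.data)                      # round master copy back to FP16
    return loss.item()
```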
iclr_2018_Sy8XvGb0- | Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models | Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. | accepted-poster-papers | This paper clearly surveys a set of methods related to using generative models to produce samples with desired characteristics. It explores several approaches and extensions to the standard recipe to try to address some weaknesses. It also demonstrates a wide variety of tasks. The exposition and figures are well-done. | train | [
"S1A-vIcgf",
"HyRPZlYeG",
"HkyYzWcxf",
"Byf9aN6Mf",
"rJLvjqOzM",
"Hk-S95OzM",
"Sy5Yu9_Mz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"UPDATE: I think the authors' rebuttal and updated draft address my points sufficiently well for me to update my score and align myself with the other reviewers.\n\n-----\n\nORIGINAL REVIEW: The paper proposes a method for learning post-hoc to condition a decoder-based generative model which was trained uncondition... | [
7,
7,
7,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Sy8XvGb0-",
"iclr_2018_Sy8XvGb0-",
"iclr_2018_Sy8XvGb0-",
"Sy5Yu9_Mz",
"S1A-vIcgf",
"HkyYzWcxf",
"HyRPZlYeG"
] |
iclr_2018_ByOExmWAb | MaskGAN: Better Text Generation via Filling in the _______ | Neural text generation models are often autoregressive language models or seq2seq models. Neural autoregressive and seq2seq models that generate text by sampling words sequentially, with each word conditioned on the previous word, are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of sample quality. Language models are typically trained via maximum likelihood and most often with teacher forcing. Teacher forcing is well-suited to optimizing perplexity but can result in poor sample quality because generating text requires conditioning on sequences of words that were never observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show, qualitatively and quantitatively, evidence that this produces more realistic text samples compared to a maximum likelihood trained model. | accepted-poster-papers | This paper makes progress on the open problem of text generation with GANs, by a sensible combination of novel approaches. The method was described clearly, and is somewhat original. The only problem is the hand-engineering of the masking setup.
| train | [
"rkgAfEoeG",
"S1K1k4wVM",
"HJrrYeDNf",
"SytHGsLVG",
"Sy4HaTtlz",
"HyHlSKjlG",
"HJtlpRN7f",
"ByoC3CNXf",
"H1tBxZ_Mz",
"B1V31Wdff",
"r1k4JZOGM"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"Generating high-quality sentences/paragraphs is an open research problem that is receiving a lot of attention. This text generation task is traditionally done using recurrent neural networks. This paper proposes to generate text using GANs. GANs are notorious for drawing images of high quality but they have a hard... | [
7,
-1,
-1,
-1,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
3,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByOExmWAb",
"HJrrYeDNf",
"ByoC3CNXf",
"H1tBxZ_Mz",
"iclr_2018_ByOExmWAb",
"iclr_2018_ByOExmWAb",
"H1tBxZ_Mz",
"B1V31Wdff",
"Sy4HaTtlz",
"rkgAfEoeG",
"HyHlSKjlG"
] |
iclr_2018_B1jscMbAW | Divide and Conquer Networks | We consider the learning of algorithmic tasks by mere observation of input-output pairs. Rather than studying this as a black-box discrete regression problem with no assumption whatsoever on the input-output mapping, we concentrate on tasks that are amenable to the principle of divide and conquer, and study what are its implications in terms of learning.
This principle creates a powerful inductive bias that we leverage with neural architectures that are defined recursively and dynamically, by learning two scale-invariant atomic operations: how to split a given input into smaller sets, and how to merge two partially solved tasks into a larger partial solution. Our model can be trained in weakly supervised environments, namely by just observing input-output pairs, and in even weaker environments, using a non-differentiable reward signal. Moreover, thanks to the dynamic aspect of our architecture, we can incorporate the computational complexity as a regularization term that can be optimized by backpropagation. We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and euclidean TSP. Thanks to the dynamic programming nature of our model, we show significant improvements in terms of generalization error and computational complexity. | accepted-poster-papers | The paper proposes a unique network architecture that can learn divide-and-conquer strategies to solve algorithmic tasks, mimicking a class of standard algorithms. The paper is clearly written, and the experiments are diverse. It also seems to point in the direction of a wider class of algorithm-inspired neural net architectures. | train | [
"H1wZwQwef",
"ByZKLz5gz",
"B1Qwc-LWf",
"SkeOr1imz",
"rkGXn5S-z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"This paper proposes to add new inductive bias to neural network architecture - namely a divide and conquer strategy know from algorithmics. Since introduced model has to split data into subsets, it leads to non-differentiable paths in the graph, which authors propose to tackle with RL and policy gradients. The who... | [
6,
7,
7,
-1,
-1
] | [
3,
3,
3,
-1,
-1
] | [
"iclr_2018_B1jscMbAW",
"iclr_2018_B1jscMbAW",
"iclr_2018_B1jscMbAW",
"iclr_2018_B1jscMbAW",
"H1wZwQwef"
] |
iclr_2018_HyjC5yWCW | Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm | Learning to learn is a powerful paradigm for enabling models to learn from data more effectively and efficiently. A popular approach to meta-learning is to train a recurrent model to read in a training dataset as input and output the parameters of a learned model, or output predictions for new test inputs. Alternatively, a more recent approach to meta-learning aims to acquire deep representations that can be effectively fine-tuned, via standard gradient descent, to new tasks. In this paper, we consider the meta-learning problem from the perspective of universality, formalizing the notion of learning algorithm approximation and comparing the expressive power of the aforementioned recurrent models to the more recent approaches that embed gradient descent into the meta-learner. In particular, we seek to answer the following question: does deep representation combined with standard gradient descent have sufficient capacity to approximate any learning algorithm? We find that this is indeed true, and further find, in our experiments, that gradient-based meta-learning consistently leads to learning strategies that generalize more widely compared to those represented by recurrent models. | accepted-poster-papers | R3 summarizes the reasons for the decision on this paper: "The universal learning algorithm approximator result is a nice result, although I do not agree with the other reviewer that it is a "significant contribution to the theoretical understanding of meta-learning," which the authors have reinforced (although it can probably be considered a significant contribution to the theoretical understanding of MAML in particular). Expressivity of the model or algorithm is far from the main or most significant consideration in a machine learning problem, even in the standard supervised learning scenario. Questions pertaining to issues such as optimization and model selection are just as, if not more, important. These sorts of ideas are explored in the empirical part of the paper, but I did not find the actual experiments in this section to be very compelling. Still, I think the universal learning algorithm approximator result is sufficient on its own for the paper to be accepted." | train | [
"ByJP4Htez",
"SyTFKLYgf",
"S1CSbaKez",
"Sy-YsJ27f",
"S1qm_Qvzz",
"HyUeuXDfM",
"ryN8PmPGG",
"SJ5ZD7DGM",
"Hy-TUXPzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper studies the capacity of the model-agnostic meta-learning (MAML) framework as a universal learning algorithm approximator. Since a (supervised) learning algorithm can be interpreted as a map from a dataset and an input to an output, the authors define a universal learning algorithm approximator to be a u... | [
6,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
1,
1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HyjC5yWCW",
"iclr_2018_HyjC5yWCW",
"iclr_2018_HyjC5yWCW",
"iclr_2018_HyjC5yWCW",
"HyUeuXDfM",
"ByJP4Htez",
"SyTFKLYgf",
"S1CSbaKez",
"iclr_2018_HyjC5yWCW"
] |
iclr_2018_S1ANxQW0b | Maximum a Posteriori Policy Optimisation | We introduce a new algorithm for reinforcement learning called Maximum a-posteriori Policy Optimisation (MPO) based on coordinate ascent on a relative-entropy objective. We show that several existing methods can directly be related to our derivation. We develop two off-policy algorithms and demonstrate that they are competitive with the state-of-the-art in deep reinforcement learning. In particular, for continuous control, our method outperforms existing methods with respect to sample efficiency, premature convergence and robustness to hyperparameter settings. | accepted-poster-papers | The main idea of policy-as-inference is not new, but it seems to be the first application of this idea to deep RL, and is somewhat well motivated. The computational details get a bit hairy, but the good experimental results and the inclusion of ablation studies pushes this above the bar.
| test | [
"HkHiimLEG",
"rypb6tngM",
"H1y3N2alf",
"Hy4_ANE-f",
"HyixP8aQz",
"S1ymUU6Xz",
"HJEuS8amM",
"HkJ6aF2xf",
"S1DYhi9gz",
"By6E3iqxz",
"HkX4eXIC-",
"Hkk3DISC-"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"public",
"public"
] | [
"We have updated the paper to address the concerns raised by the reviewers.\nIn particular we have included:\n - A detailed theoretical analysis of the MPO framework\n - An updated methods section that has a simpler derivation of the algorithm",
"The paper presents a new algorithm for inference-based reinforcemen... | [
-1,
7,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1ANxQW0b",
"iclr_2018_S1ANxQW0b",
"iclr_2018_S1ANxQW0b",
"iclr_2018_S1ANxQW0b",
"rypb6tngM",
"H1y3N2alf",
"Hy4_ANE-f",
"iclr_2018_S1ANxQW0b",
"Hkk3DISC-",
"HkX4eXIC-",
"iclr_2018_S1ANxQW0b",
"iclr_2018_S1ANxQW0b"
] |
iclr_2018_SyX0IeWAW | META LEARNING SHARED HIERARCHIES | We develop a metalearning approach for learning hierarchically structured policies, improving sample efficiency on unseen tasks through the use of shared primitives—policies that are executed for large numbers of timesteps. Specifically, a set of primitives are shared within a distribution of tasks, and are switched between by task-specific policies. We provide a concrete metric for measuring the strength of such hierarchies, leading to an optimization problem for quickly reaching high reward on unseen tasks. We then present an algorithm to solve this problem end-to-end through the use of any off-the-shelf reinforcement learning method, by repeatedly sampling new tasks and resetting task-specific policies. We successfully discover meaningful motor primitives for the directional movement of four-legged robots, solely by interacting with distributions of mazes. We also demonstrate the transferability of primitives to solve long-timescale sparse-reward obstacle courses, and we enable 3D humanoid robots to robustly walk and crawl with the same policy. | accepted-poster-papers | This paper presents a fairly straightforward algorithm for learning a set of sub-controllers that can be re-used between tasks. The development of these concepts in a relatively clear way is a nice contribution. However, the real problem is how niche the setup is. However, it's over the bar in general. | test | [
"HyJuHTteM",
"BJN4gTtlM",
"r1RR1Vclf",
"rJMH0whmz",
"ByaUAxmMz",
"HkdQtbZfM",
"S1BykK1Mz",
"Hy28ObpbM",
"ByaWalTbf",
"BJBohe6bz",
"r1nH2xp-f",
"rJG0slabf",
"S1YphBy-f",
"ByaasHJWM",
"rkCZR_3xz",
"r13qQpKxM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"Please see my detailed comments in the \"official comment\"\n\nThe extensive revisions addressed most of my concerns\n\nQuality\n======\nThe idea is interesting, the theory is hand-wavy at best (ADDRESSED but still a bit vague), the experiments show that it works but don't evaluate many interesting/relevant aspect... | [
6,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SyX0IeWAW",
"iclr_2018_SyX0IeWAW",
"iclr_2018_SyX0IeWAW",
"iclr_2018_SyX0IeWAW",
"HkdQtbZfM",
"iclr_2018_SyX0IeWAW",
"Hy28ObpbM",
"ByaWalTbf",
"S1YphBy-f",
"BJN4gTtlM",
"HyJuHTteM",
"r1RR1Vclf",
"r13qQpKxM",
"rkCZR_3xz",
"iclr_2018_SyX0IeWAW",
"iclr_2018_SyX0IeWAW"
] |
iclr_2018_B1EA-M-0Z | Deep Neural Networks as Gaussian Processes | It has long been known that a single-layer fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP), in the limit of infinite network width. This correspondence enables exact Bayesian inference for infinite width neural networks on regression tasks by means of evaluating the corresponding GP. Recently, kernel functions which mimic multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified that these kernels can be used as covariance functions for GPs and allow fully Bayesian prediction with a deep neural network.
In this work, we derive the exact equivalence between infinitely wide, deep, networks and GPs with a particular covariance function. We further develop a computationally efficient pipeline to compute this covariance function. We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We observe that the trained neural network accuracy approaches that of the corresponding GP with increasing layer width, and that the GP uncertainty is strongly correlated with trained network prediction error. We further find that test performance increases as finite-width trained networks are made wider and more similar to a GP, and that the GP-based predictions typically outperform those of finite-width networks. Finally we connect the prior distribution over weights and variances in our GP formulation to the recent development of signal propagation in random neural networks. | accepted-poster-papers | This paper presents several theoretical results linking deep, wide neural networks to GPs. It even includes illuminating experiments.
Many of the results were already developed in earlier works. However, many at ICLR may be unaware of these links, and we hope this paper will contribute to the discussion.
| train | [
"S1_Zyk9xG",
"BJGq_QclM",
"SJDCe5jeM",
"rkKCfwTmf",
"H1BQLITXf",
"Hy8PZu_GG",
"rye7Z_ufG",
"Hkfpl_uff",
"Syyjgu_zf",
"Hy-veOdfG",
"ryNSpTbGG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"Neal (1994) showed that a one hidden layer Bayesian neural network, under certain conditions, converges to a Gaussian process as the number of hidden units approaches infinity. Neal (1994) and Williams (1997) derive the resulting kernel functions for such Gaussian processes when the neural networks have certain tr... | [
4,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1EA-M-0Z",
"iclr_2018_B1EA-M-0Z",
"iclr_2018_B1EA-M-0Z",
"iclr_2018_B1EA-M-0Z",
"ryNSpTbGG",
"S1_Zyk9xG",
"S1_Zyk9xG",
"BJGq_QclM",
"BJGq_QclM",
"SJDCe5jeM",
"iclr_2018_B1EA-M-0Z"
] |
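The covariance function that the abstract above refers to is built layer by layer. Written in generic notation (the symbols below are an assumption, not the paper's exact notation), for input dimension d_in, weight and bias variances sigma_w^2 and sigma_b^2, and nonlinearity phi, the recursion has the form:

```latex
% Base case (input layer)
K^{0}(x, x') = \sigma_b^2 + \sigma_w^2 \, \frac{x \cdot x'}{d_{\mathrm{in}}}
% Recursion for hidden layers \ell = 1, 2, \dots
K^{\ell}(x, x') = \sigma_b^2 + \sigma_w^2 \,
  \mathbb{E}_{z \sim \mathcal{GP}\left(0,\, K^{\ell-1}\right)}
  \big[ \phi\!\left(z(x)\right) \, \phi\!\left(z(x')\right) \big]
```

Exact GP regression is then carried out with the final-layer kernel.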
iclr_2018_SyqShMZRb | Syntax-Directed Variational Autoencoder for Structured Data | Deep generative models have been enjoying success in modeling continuous data. However it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data still remains largely an open problem. Inspired by the theory of compiler where syntax and semantics check is done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Comparing to the state-of-the-art methods, our approach enforces constraints on the output space so that the output will be not only syntactically valid, but also semantically reasonable. We evaluate the proposed model with applications in programming language and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness in incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches. | accepted-poster-papers | This paper presents a more complex version of the grammar-VAE, which can be used to generate structured discrete objects for which a grammar is known, by adding a second 'attribute grammar', inspired by Knuth.
Overall, the idea is a bit incremental, but the space is wide open and I think that structured encoder/decoders is an important direction. The experiments seem to have been done carefully (with some help from the reviewers) and the results are convincing. | train | [
"SJy4ZMU4G",
"rJ7ZTaYxf",
"ByHD_eqxf",
"SkUs6e5lG",
"BJVJ4_6XG",
"H1KTirvZf",
"ryFoAfUWz",
"S1EYCMIbz",
"B1396zI-f",
"HJMcxIJWf",
"BkQglHagz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The presentation of the paper has definately improved, but I find the language used in the paper still below the quality needed for publication. There are still way too many grammatical and syntactical errors. ",
"The paper presents an approach for improving variational autoencoders for structured data that pro... | [
-1,
3,
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
2,
1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"ryFoAfUWz",
"iclr_2018_SyqShMZRb",
"iclr_2018_SyqShMZRb",
"iclr_2018_SyqShMZRb",
"iclr_2018_SyqShMZRb",
"iclr_2018_SyqShMZRb",
"rJ7ZTaYxf",
"ByHD_eqxf",
"SkUs6e5lG",
"iclr_2018_SyqShMZRb",
"iclr_2018_SyqShMZRb"
] |
iclr_2018_rywDjg-RW | Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples | Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems. | accepted-poster-papers | The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* The method proposed here is highly technically sophisticated and appropriate for the problem of program synthesis from examples
* The results are convincing, demonstrating that the proposed method is able to greatly speed up search in an existing synthesis system
Cons:
* The contribution in terms of machine learning or representation learning is minimal (mainly adding an LSTM to an existing system)
* The overall system itself is quite complicated, which might raise the barrier of entry to other researchers who might want to follow the work, limiting impact
In our decision, the fact that the paper significantly moves forward the state of the art in this area outweighs the concerns about lack of machine learning contribution or barrier of entry. | test | [
"SkPNib9ez",
"SyFsGdSlM",
"S1qCIfJWz",
"H12e4JcQz",
"B1rMMpYMz",
"Bkq9JykMG",
"HJTR0CRbG",
"rJT-R0RZM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper extends and speeds up PROSE, a programming by example system, by posing the selection of the next production rule in the grammar as a supervised learning problem.\n\nThis paper requires a large amount of background knowledge as it depends on understanding program synthesis as it is done in the programmi... | [
6,
6,
8,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rywDjg-RW",
"iclr_2018_rywDjg-RW",
"iclr_2018_rywDjg-RW",
"B1rMMpYMz",
"iclr_2018_rywDjg-RW",
"SyFsGdSlM",
"SkPNib9ez",
"S1qCIfJWz"
] |
iclr_2018_rJl3yM-Ab | Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering | Very recently, it has become a popular approach to answer open-domain questions by first searching for question-related passages and then applying reading comprehension models to extract answers. Existing works usually extract answers from single passages independently, thus not fully making use of the multiple searched passages, especially for questions that require several pieces of evidence, which can appear in different passages, to be answered. The above observations raise the problem of evidence aggregation from multiple passages. In this paper, we deal with this problem as answer re-ranking. Specifically, based on the answer candidates generated from the existing state-of-the-art QA model, we propose two different re-ranking methods, strength-based and coverage-based re-rankers, which make use of the aggregated evidence from different passages to help entail the ground-truth answer for the question. Our model achieved state-of-the-art results on three public open-domain QA datasets, Quasar-T, SearchQA and the open-domain version of TriviaQA, with about 8% improvement on the former two datasets. | accepted-poster-papers | The pros and cons of this paper cited by the reviewers can be summarized below:
Pros:
* Solid experimental results against strong baselines on a task of great interest
* Method presented is appropriate for the task
* Paper is presented relatively clearly, especially after revision
Cons:
* The paper is somewhat incremental. The basic idea of aggregating across multiple examples was presented in Kadlec et al. 2016, but the methodology here is different.
| train | [
"H1pRH5def",
"rJZQa3YgG",
"S1OAdY3eG",
"SyM0gXa7G",
"rJZ1JmaQz",
"S1YOCf67M"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper is clear, although there are many English mistakes (that should be corrected).\nThe proposed method aggregates answers from multiple passages in the context of QA. The new method is motivated well and departs from prior work. Experiments on three datasets show the proposed method to be notably better tha... | [
6,
8,
6,
-1,
-1,
-1
] | [
2,
3,
4,
-1,
-1,
-1
] | [
"iclr_2018_rJl3yM-Ab",
"iclr_2018_rJl3yM-Ab",
"iclr_2018_rJl3yM-Ab",
"S1OAdY3eG",
"rJZQa3YgG",
"H1pRH5def"
] |
iclr_2018_B1ZvaaeAZ | WRPN: Wide Reduced-Precision Networks | For computer vision applications, prior works have shown the efficacy of reducing numeric precision of model parameters (network weights) in deep neural networks. Activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. One way to reduce this large memory footprint is to reduce the precision of activations. However, past works have shown that reducing the precision of activations hurts model accuracy. We study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. We reduce the precision of activation maps (along with model parameters) and increase the number of filter maps in a layer, and find that this scheme matches or surpasses the accuracy of the baseline full-precision network. As a result, one can significantly improve the execution efficiency (e.g. reduce dynamic memory footprint, memory bandwidth and computational energy) and speed up the training and inference process with appropriate hardware support. We call our scheme WRPN -- wide reduced-precision networks. We report results and show that the WRPN scheme is better than previously reported accuracies on the ILSVRC-12 dataset while being computationally less expensive compared to previously reported reduced-precision networks. | accepted-poster-papers | This paper explores the training of CNNs which have reduced-precision activations. By widening layers, it shows less of an accuracy hit on ILSVRC-12 compared to other recent reduced-precision networks. R1 was extremely positive on the paper, impressed by its readability and the quality of comparison to previous approaches (noting that results with 2-bit activations and 4-bit weights matched FP baselines). This seems very significant to me. R1 also pointed out that the technique used the same hyperparameters as the original training scheme, improving reproducibility/accessibility. R1 asked about application to MobileNets, and the authors reported some early results showing that the technique also worked with smaller networks/architectures designed for low-memory hardware. R2 was less positive on the paper, with the main criticism being that the overall technical contribution of the paper was limited. They also were concerned that the paper seemed to motivate based on reducing memory footprint, but the results were focused on reducing computation. R3 liked the simplicity of the idea and comprehensiveness of the results. Like R2, they thought the paper was of limited novelty. In their response to R3, the authors defended the novelty of the paper. I tend to side with the authors that very few papers target quantization at no accuracy loss. Moreover, the paper targets training, which also receives much less attention in the model compression / reduced precision literature. Is the architecture really novel? No. But does the experimental work investigate an important tradeoff? Yes. | train | [
"S1L25x5xz",
"rJ6IEcsgM",
"HJiml81-z",
"HJO6d1Jzz",
"H1DasMA-z",
"S1dpuBMWM",
"BJeu11EbG",
"rk7t20mWG",
"rkJX68XbM",
"H1ayhNMZz",
"rkuEBHfWM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"official_reviewer",
"author",
"author"
] | [
"The paper studies the effect of reduced precision weights and activations on the performance, memory and computation cost of deep networks and proposes a quantization scheme and wide filters to offset the accuracy lost due to the reduced precision. The study is performed on AlexNet, ResNet and Inception on the Ima... | [
5,
9,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1ZvaaeAZ",
"iclr_2018_B1ZvaaeAZ",
"iclr_2018_B1ZvaaeAZ",
"H1DasMA-z",
"iclr_2018_B1ZvaaeAZ",
"S1L25x5xz",
"rk7t20mWG",
"rkJX68XbM",
"rkuEBHfWM",
"HJiml81-z",
"rJ6IEcsgM"
] |
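The WRPN record above describes reducing activation precision and widening layers. As a point of reference only, below is a minimal, hypothetical PyTorch sketch of a k-bit activation quantizer trained with a straight-through gradient, which is the standard way to backpropagate through rounding; the 2-bit default and the assumption that activations are pre-clipped to [0, 1] are illustrative choices, and the paper's layer-widening is an architectural change not shown here.

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """Uniform k-bit quantization of activations with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, bits):
        levels = 2 ** bits - 1
        # activations are assumed to already live in [0, 1] (e.g. via a clipped ReLU)
        return torch.round(x.clamp(0.0, 1.0) * levels) / levels

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through: pass the gradient unchanged to the full-precision input
        return grad_output, None


def quantized_activation(x, bits=2):
    return QuantizeSTE.apply(x, bits)
```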
iclr_2018_rkmu5b0a- | MGAN: Training Generative Adversarial Nets with Multiple Generators | We propose in this paper a new approach to training Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapse problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proves to be extremely effective at covering diverse data modes, easily overcoming mode collapse and delivering state-of-the-art results. A minimax formulation is established among a classifier, a discriminator, and a set of generators, in a similar spirit to the original GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by the generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created by multiple generators, and then one of them is randomly selected as the final output, similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop a theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding mode collapse. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over the latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators. | accepted-poster-papers | This paper presents an analysis of using multiple generators in a GAN setup, to address the mode-collapse problem. R1 was generally positive about the paper, raising the concern of how to choose the number of generators, and also whether parameter sharing was essential. The authors reported back on parameter sharing, showing its benefits, yet did not have any principled method of selecting the number of generators. R2 was less positive about the paper, pointing out that mixture GANs and multiple generators have been tried before. They also raised concerns with the (flawed) Inception score as the basis for comparison. R2 also pointed out that fixing the mixing proportions to uniform was an unrealistic assumption. The authors responded to these claims, clarifying the differences between this paper and the previous mixture GAN/multiple generator papers, and reporting FID scores. R3 was generally positive, also citing some novelty concerns similar to those of R2. I acknowledge the authors' detailed responses to the reviews (in particular in response to R2) and I believe that the majority of concerns expressed have now been addressed. I also encourage the authors to include the FID scores in the final version of the paper. | train | [
"Sy9Uo3Ygz",
"rynmx_XHf",
"ByiOfTVNz",
"rJAgO6KlM",
"Hkib3t2lz",
"HkXND5MMM",
"BkNAv9fMz",
"HkTPt5zMM",
"ByShYqMMM",
"BJUxcqfzG",
"HJGWrcGMG",
"rycjHcGMz",
"r117lsfGG"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The present manuscript attempts to address the problem of mode collapse in GANs using a constrained mixture distribution for the generator, and an auxiliary classifier which predicts the source mixture component, plus a loss term which encourages diversity amongst components.\n\nAll told the proposed method is qui... | [
5,
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkmu5b0a-",
"ByiOfTVNz",
"rycjHcGMz",
"iclr_2018_rkmu5b0a-",
"iclr_2018_rkmu5b0a-",
"Sy9Uo3Ygz",
"Sy9Uo3Ygz",
"Sy9Uo3Ygz",
"Sy9Uo3Ygz",
"Sy9Uo3Ygz",
"Hkib3t2lz",
"rJAgO6KlM",
"iclr_2018_rkmu5b0a-"
] |
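To make the MGAN record above more concrete, here is a rough, hypothetical sketch of a generator-side loss for a mixture of K generators: each generator tries to fool a shared discriminator while staying identifiable by a K-way classifier, which is the pressure that pushes the generators toward different modes. The weighting `beta`, the non-saturating GAN term, and the exact sign and form of the diversity term are assumptions of this sketch, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def mgan_generator_loss(generators, discriminator, classifier, z, beta=1.0):
    """Simplified generator-side loss for a mixture of K generators."""
    total = 0.0
    for k, G in enumerate(generators):
        x_fake = G(z)
        d_logit = discriminator(x_fake)                    # "real" logit
        c_logits = classifier(x_fake)                      # K-way logits
        target = torch.full((z.size(0),), k, dtype=torch.long, device=z.device)
        gan_term = -F.logsigmoid(d_logit).mean()           # fool the discriminator
        div_term = F.cross_entropy(c_logits, target)       # stay identifiable -> diverse
        total = total + gan_term + beta * div_term
    return total / len(generators)
```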
iclr_2018_rkHVZWZAZ | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning | In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (Wang et al., 2016) and Categorical DQN (Bellemare et al., 2017), while giving better run-time performance than A3C (Mnih et al., 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the β-leave-one-out policy gradient algorithm, which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contributes to both the sample efficiency and the final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training. | accepted-poster-papers | This paper presents a nice set of results on a new RL algorithm. The main downside is the limitation to the Atari domain, but otherwise the ablation studies are nice and the results are strong. | test | [
"HJH_9xLNG",
"SJRs56Ylz",
"r1MU1AtlG",
"rksMwz9xG",
"SyQtte4NM",
"B1W_WMpQz",
"SJtRa_WMM",
"HJgzROWzf",
"SJqjT_ZGf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Thanks to the authors for their response. As I mentioned in the initial review, I think the method is definitely promising and provides improvements. My comments were more on claims like \"Reactor significantly outperforms Rainbow\" which is not evident from the results in the paper (a point also noted by Reviewer... | [
-1,
7,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
2,
4,
-1,
-1,
-1,
-1,
-1
] | [
"SJqjT_ZGf",
"iclr_2018_rkHVZWZAZ",
"iclr_2018_rkHVZWZAZ",
"iclr_2018_rkHVZWZAZ",
"HJgzROWzf",
"iclr_2018_rkHVZWZAZ",
"r1MU1AtlG",
"SJRs56Ylz",
"rksMwz9xG"
] |
iclr_2018_HkUR_y-RZ | SEARNN: Training RNNs with global-local losses | We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihood estimation (MLE). Unfortunately, this training loss is not always an appropriate surrogate for the test error: by only maximizing the ground truth probability, it fails to exploit the wealth of information offered by structured losses. Further, it introduces discrepancies between training and predicting (such as exposure bias) that may hurt test performance. Instead, SEARNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error. We first demonstrate improved performance over MLE on two different tasks: OCR and spelling correction. Then, we propose a subsampling strategy to enable SEARNN to scale to large vocabulary sizes. This allows us to validate the benefits of our approach on a machine translation task. | accepted-poster-papers | This paper generally presents a nice idea, and some of the modifications to searn/lols that the authors had to make to work with neural networks are possibly useful to others. Some weaknesses exist in the evaluation that everyone seems to agree on, but disagree about importance (in particular, comparison to things like BLS and Mixer on problems other than MT).
A few side-comments (not really part of meta-review, but included here anyway):
- Treating rollin/out as a hyperparameter is not unique to this paper; this was also done by Chang et al., NIPS 2016, "A credit assignment compiler..."
- One big question that goes unanswered in this paper is "why does learned rollin (or mixed rollin) not work in the MT setting." If the authors could add anything to explain this, it would be very helpful!
- Goldberg & Nivre didn't really introduce the _idea_ of dynamic oracles; they simply gave it that name (e.g., in the original Searn paper, and in most of the imitation learning literature, what G&N call a "dynamic oracle" everyone else just calls an "oracle" or "expert") | train | [
"H1_0NDUEG",
"S1KZ3x5ef",
"S1rKPVcgz",
"SJEBLCSZM",
"HyM2dLTmG",
"rJPavIa7G",
"ry80LL6Qf",
"Syk3BIa7M",
"HkMLxA5Mf",
"ByamFTczz",
"SyoNr6cMG",
"BktgJQZMz",
"B1hvAzWMf",
"S1YZ0GbzG",
"BJWspGbzz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"While the paper has been improved, my main concern \"lack of comparison against previous work and unclear experiments\" remains. As the authors acknowedge, the experiments I have argued are missing are sensible and they would provide the evidence to support the claims about the suitability of the proposed IL-based... | [
-1,
8,
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Syk3BIa7M",
"iclr_2018_HkUR_y-RZ",
"iclr_2018_HkUR_y-RZ",
"iclr_2018_HkUR_y-RZ",
"HkMLxA5Mf",
"ByamFTczz",
"SyoNr6cMG",
"iclr_2018_HkUR_y-RZ",
"BktgJQZMz",
"B1hvAzWMf",
"S1YZ0GbzG",
"B1hvAzWMf",
"S1YZ0GbzG",
"S1rKPVcgz",
"iclr_2018_HkUR_y-RZ"
] |
iclr_2018_SyZipzbCb | Distributed Distributional Deterministic Policy Gradients | This work adopts the very successful distributional perspective on reinforcement learning and adapts it to the continuous control setting. We combine this within a distributed framework for off-policy learning in order to develop what we call the Distributed Distributional Deep Deterministic Policy Gradient algorithm, D4PG. We also combine this technique with a number of additional, simple improvements such as the use of N-step returns and prioritized experience replay. Experimentally we examine the contribution of each of these individual components, and show how they interact, as well as their combined contributions. Our results show that across a wide variety of simple control tasks, difficult manipulation tasks, and a set of hard obstacle-based locomotion tasks the D4PG algorithm achieves state of the art performance. | accepted-poster-papers | As identified by most reviewers, this paper does a very thorough empirical evaluation of a relatively straightforward combination of known techniques for distributed RL. The work also builds on "Distributed prioritized experience replay", which could be noted more prominently in the introduction. | train | [
"Byqj1QtlM",
"r1Wcz1clz",
"Bk3bXW5gM",
"HJ9K0RXEf",
"ryuoGzKMz",
"r14lUMKMf",
"HJ3eWMFzG",
"SkfngGYMG",
"ryjnMgWZz",
"HJkIEozeM",
"HJt1T-2R-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"public",
"public",
"public"
] | [
"A DeepRL algorithm is presented that represents distributions over Q values, as applied to DDPG,\nand in conjunction with distributed evaluation across multiple actors, prioritized experience replay, and \nN-step look-aheads. The algorithm is called Distributed Distributional Deep Deterministic Policy Gradient alg... | [
9,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SyZipzbCb",
"iclr_2018_SyZipzbCb",
"iclr_2018_SyZipzbCb",
"r14lUMKMf",
"Byqj1QtlM",
"HJt1T-2R-",
"r1Wcz1clz",
"Bk3bXW5gM",
"HJkIEozeM",
"HJt1T-2R-",
"iclr_2018_SyZipzbCb"
] |
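The D4PG record above combines a distributional critic with N-step returns. A core operation such a critic update needs is projecting the N-step target distribution back onto a fixed categorical support; the NumPy sketch below shows the standard C51-style projection, with the support range and all variable names chosen by me rather than taken from the paper.

```python
import numpy as np

def project_distribution(next_probs, rewards, discount_n, v_min=-10.0, v_max=10.0):
    """Project r + gamma^N * z onto the fixed categorical support.

    next_probs : (batch, n_atoms) target-critic probabilities at s_{t+N}
    rewards    : (batch,) accumulated N-step rewards
    discount_n : (batch,) gamma^N (zero where the episode terminated)
    """
    n_atoms = next_probs.shape[1]
    z = np.linspace(v_min, v_max, n_atoms)
    dz = (v_max - v_min) / (n_atoms - 1)
    projected = np.zeros_like(next_probs)

    tz = np.clip(rewards[:, None] + discount_n[:, None] * z[None, :], v_min, v_max)
    b = (tz - v_min) / dz                       # fractional atom index
    lower = np.floor(b).astype(int)
    upper = np.ceil(b).astype(int)

    for i in range(next_probs.shape[0]):
        for j in range(n_atoms):
            if lower[i, j] == upper[i, j]:      # target lands exactly on an atom
                projected[i, lower[i, j]] += next_probs[i, j]
            else:                               # split mass between the two neighbours
                projected[i, lower[i, j]] += next_probs[i, j] * (upper[i, j] - b[i, j])
                projected[i, upper[i, j]] += next_probs[i, j] * (b[i, j] - lower[i, j])
    return projected
```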
iclr_2018_ry80wMW0W | Hierarchical Subtask Discovery with Non-Negative Matrix Factorization | Hierarchical reinforcement learning methods offer a powerful means of planning flexible behavior in complicated domains. However, learning an appropriate hierarchical decomposition of a domain into subtasks remains a substantial challenge. We present a novel algorithm for subtask discovery, based on the recently introduced multitask linearly-solvable Markov decision process (MLMDP) framework. The MLMDP can perform never-before-seen tasks by representing them as a linear combination of a previously learned basis set of tasks. In this setting, the subtask discovery problem can naturally be posed as finding an optimal low-rank approximation of the set of tasks the agent will face in a domain. We use non-negative matrix factorization to discover this minimal basis set of tasks, and show that the technique learns intuitive decompositions in a variety of domains. Our method has several qualitatively desirable features: it is not limited to learning subtasks with single goal states, instead learning distributed patterns of preferred states; it learns qualitatively different hierarchical decompositions in the same domain depending on the ensemble of tasks the agent will face; and it may be straightforwardly iterated to obtain deeper hierarchical decompositions. | accepted-poster-papers | Overall this paper seems to make an interesting contribution to the problem of subtask discovery, but unfortunately this only works in a tabular setting, which is quite limiting. | val | [
"H16Hn-6lf",
"HJo-rvwWz",
"BkITkWpZM",
"ry1tEdTmM",
"HJbgmvpXf",
"HJf6zwpmf",
"HJi81v6mG",
"HJ2ekDp7G"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper proposes a formulation for discovering subtasks in Linearly-solvable MDPs. The idea is to decompose the optimal value function into a fixed set of sub value functions (each corresponding to a subtask) in a way that they best approximate (e.g. in a KL-divergence sense) the original value.\n\nAutomaticall... | [
6,
5,
7,
-1,
-1,
-1,
-1,
-1
] | [
2,
2,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ry80wMW0W",
"iclr_2018_ry80wMW0W",
"iclr_2018_ry80wMW0W",
"H16Hn-6lf",
"HJf6zwpmf",
"HJo-rvwWz",
"HJ2ekDp7G",
"BkITkWpZM"
] |
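The subtask-discovery record above reduces to a low-rank non-negative factorization of a matrix of task (desirability) functions. A tiny illustrative sketch using scikit-learn's NMF is below; the matrix is random placeholder data, the rank is arbitrary, and scikit-learn's default Frobenius objective is used here even though the paper's formulation is KL-divergence-like.

```python
import numpy as np
from sklearn.decomposition import NMF

# Z: (n_states, n_tasks) matrix of task desirability functions, one column per task.
# Random non-negative data stands in for the real task ensemble.
rng = np.random.default_rng(0)
Z = rng.random((100, 20))

# Factor Z ~ D @ W with a small non-negative basis of k "subtasks".
model = NMF(n_components=4, init="nndsvda", max_iter=500)
D = model.fit_transform(Z)      # (n_states, k): distributed subtask desirabilities
W = model.components_           # (k, n_tasks): how each task mixes the subtasks
print("reconstruction error:", np.linalg.norm(Z - D @ W))
```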
iclr_2018_rJl63fZRb | Parametrized Hierarchical Procedures for Neural Programming | Neural programs are highly accurate and structured policies that perform algorithmic tasks by controlling the behavior of a computation mechanism. Despite the potential to increase the interpretability and the compositionality of the behavior of artificial agents, it remains difficult to learn from demonstrations neural networks that represent computer programs. The main challenges that set algorithmic domains apart from other imitation learning domains are the need for high accuracy, the involvement of specific structures of data, and the extremely limited observability. To address these challenges, we propose to model programs as Parametrized Hierarchical Procedures (PHPs). A PHP is a sequence of conditional operations, using a program counter along with the observation to select between taking an elementary action, invoking another PHP as a sub-procedure, and returning to the caller. We develop an algorithm for training PHPs from a set of supervisor demonstrations, only some of which are annotated with the internal call structure, and apply it to efficient level-wise training of multi-level PHPs. We show in two benchmarks, NanoCraft and long-hand addition, that PHPs can learn neural programs more accurately from smaller amounts of both annotated and unannotated demonstrations. | accepted-poster-papers | This paper is somewhat incremental on recent prior work in a hot area; it has some weaknesses but does move the needle somewhat on these problems. | train | [
"SylxFWcgG",
"SkiyHjDlf",
"HyXxfzsxM",
"Hk7J-daXf",
"SkoXZdamG",
"BkMVlJ6mM",
"H1mz-p2Xz",
"B197SIKXf",
"B15VNT1mz",
"HJqP76Jmz",
"Hy2_fTymG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"I thank the authors for their updates and clarifications. I stand by my original review and score. I think their method and their evaluation has some major weaknesses, but I think that it still provides a good baseline to force work in this space towards tasks which can not be solved by simpler models like this.... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJl63fZRb",
"iclr_2018_rJl63fZRb",
"iclr_2018_rJl63fZRb",
"BkMVlJ6mM",
"iclr_2018_rJl63fZRb",
"H1mz-p2Xz",
"B197SIKXf",
"HJqP76Jmz",
"HyXxfzsxM",
"SylxFWcgG",
"SkiyHjDlf"
] |
iclr_2018_S1D8MPxA- | Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio | Weight pruning has proven to be an effective method in reducing the model size and computation cost while not sacrificing the model accuracy. Conventional sparse matrix formats, however, involve irregular index structures with large storage requirement and sequential reconstruction process, resulting in inefficient use of highly parallel computing resources. Hence, pruning is usually restricted to inference with a batch size of one, for which an efficient parallel matrix-vector multiplication method exists. In this paper, a new class of sparse matrix representation utilizing Viterbi algorithm that has a high, and more importantly, fixed index compression ratio regardless of the pruning rate, is proposed. In this approach, numerous sparse matrix candidates are first generated by the Viterbi encoder, and then the one that aims to minimize the model accuracy degradation is selected by the Viterbi algorithm. The model pruning process based on the proposed Viterbi encoder and Viterbi algorithm is highly parallelizable, and can be implemented efficiently in hardware to achieve low-energy, high-performance index decoding process. Compared with the existing magnitude-based pruning methods, index data storage requirement can be further compressed by 85.2% in MNIST and 83.9% in AlexNet while achieving similar pruning rate. Even compared with the relative index compression technique, our method can still reduce the index storage requirement by 52.7% in MNIST and 35.5% in AlexNet. | accepted-poster-papers | The paper proposes a new sparse matrix representation based on Viterbi algorithm with high and fixed index compression ratio regardless of the pruning rate. The method allows for faster parallel decoding and achieves improved compression of index data storage requirement over existing methods (e.g., magnitude-based pruning) while maintaining the pruning rate. The quality of paper seems solid and of interest to a subset of the ICLR audience. | train | [
"BJle65dxG",
"Hkiuu4PlM",
"SktHYC_xM",
"rJJvOXZ7M",
"S1SOzm-mM",
"S1_WfmZQM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper proposes VCM, a novel way to store sparse matrices that is based on the Viterbi Decompressor. Only a subset of sparse matrices can be represented in the VCM format, however, unlike CSR format, it allows for faster parallel decoding and requires much less index space. The authors also propose a novel meth... | [
6,
6,
7,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_S1D8MPxA-",
"iclr_2018_S1D8MPxA-",
"iclr_2018_S1D8MPxA-",
"SktHYC_xM",
"BJle65dxG",
"Hkiuu4PlM"
] |
iclr_2018_ByS1VpgRZ | cGANs with Projection Discriminator | We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.
This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors.
With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator.
We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images.
This new structure also enabled high-quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. | accepted-poster-papers | The paper proposes a simple modification to conditional GANs, where the discriminator involves an inner product term between the condition vector y and the feature vector of x. This formulation is reasonable and well motivated from popular models (e.g., log-linear, Gaussians). Experimentally, the proposed method is evaluated on conditional image generation and super-resolution tasks, demonstrating improved quantitative and qualitative performance over the existing state-of-the-art (AC-GAN).
| val | [
"BJSIkW61f",
"rkyTeFweM",
"SynfBlcgf",
"rk3jNQPGf",
"HJ-udwIGf",
"BJERLD8fz",
"BJYUqi--G",
"Byeml2ZWM",
"HJd7sibbz",
"ByeD_ibZz",
"HJP0Zr3ef",
"r1-ygQixf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"\nI thank the authors for the thoughtful response and updated manuscript. After reading through both, my review score remains unchanged.\n\n=================\n\nThe authors describe a new variant of a generative adversarial network (GAN) for generating images. This model employs a 'projection discriminator' in ord... | [
6,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByS1VpgRZ",
"iclr_2018_ByS1VpgRZ",
"iclr_2018_ByS1VpgRZ",
"BJYUqi--G",
"SynfBlcgf",
"iclr_2018_ByS1VpgRZ",
"SynfBlcgf",
"BJSIkW61f",
"rkyTeFweM",
"iclr_2018_ByS1VpgRZ",
"r1-ygQixf",
"iclr_2018_ByS1VpgRZ"
] |
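A minimal sketch of the projection idea in the cGAN record above: the conditional information enters the discriminator as an inner product between a learned class embedding and the image feature vector, added to an unconditional score. The module below is a hypothetical skeleton; the feature extractor and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class ProjectionDiscriminator(nn.Module):
    def __init__(self, feature_extractor, num_classes, feat_dim):
        super().__init__()
        self.phi = feature_extractor              # x -> (batch, feat_dim) features
        self.psi = nn.Linear(feat_dim, 1)         # unconditional score
        self.embed = nn.Embedding(num_classes, feat_dim)

    def forward(self, x, y):
        h = self.phi(x)
        # output = psi(phi(x)) + <embed(y), phi(x)>  -- the "projection" term
        return self.psi(h).squeeze(1) + (self.embed(y) * h).sum(dim=1)
```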
iclr_2018_S1v4N2l0- | Unsupervised Representation Learning by Predicting Image Rotations | Over the last few years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high-level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that they get as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method on various unsupervised feature learning benchmarks and exhibit state-of-the-art performance in all of them. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in the PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4%, which is only 2.4 points lower than the supervised case. We get similarly striking results when we transfer our unsupervised learned features to various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on:
https://github.com/gidariss/FeatureLearningRotNet | accepted-poster-papers | The paper proposes a new way of learning image representations from unlabeled data by predicting the image rotations. The problem formulation implicitly encourages the learned representation to be informative about the (foreground) object and its rotation. The idea is simple, but it turns out to be very effective. The authors demonstrate strong performance in multiple transfer learning scenarios, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. | train | [
"HJ91THweG",
"HyCI-CKeG",
"BywXN7WMG",
"BJDsmT7NM",
"S1XsOz6fM",
"SyOCqz6fz",
"H1TdLfaff",
"SJhoHzTff",
"SJsHNf6ff",
"r1Tf0Z6fM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper proposes a simple classification task for learning feature extractors without requiring manual annotations: predicting one of four rotations that the image has been subjected to: by 0, 90, 180 or 270º. Then the paper shows that pre-training on this task leads to state-of-the-art results on a number of po... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1v4N2l0-",
"iclr_2018_S1v4N2l0-",
"iclr_2018_S1v4N2l0-",
"SyOCqz6fz",
"HyCI-CKeG",
"HyCI-CKeG",
"BywXN7WMG",
"BywXN7WMG",
"BywXN7WMG",
"HJ91THweG"
] |
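The rotation-prediction pretext task in the record above is easy to sketch: every image yields four rotated copies labelled 0-3, and an ordinary classifier is trained on them. The snippet below is an illustrative helper (NCHW tensor layout assumed), not the authors' released code.

```python
import torch

def make_rotation_batch(images):
    """images: (N, C, H, W) float tensor.  Returns 4N rotated images and labels 0..3."""
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rotated, dim=0)
    y = torch.arange(4).repeat_interleave(images.size(0))  # matches the cat order above
    return x, y

# Training is then ordinary supervised classification on (x, y):
#   logits = convnet(x); loss = F.cross_entropy(logits, y)
```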
iclr_2018_rJGZq6g0- | Emergent Communication in a Multi-Modal, Multi-Step Referential Game | Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of an object, and their information exchange is bidirectional and of arbitrary duration. The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation may vary according to the difficulty of the task. We examine these properties empirically using a dataset consisting of images and textual descriptions of mammals, where the agents are tasked with identifying the correct object. Our experiments indicate that a robust and efficient communication protocol emerges, where gradual information exchange informs better predictions and higher communication bandwidth improves generalization. | accepted-poster-papers | An interesting paper, generally well-written. Though it would be nice to see that the methods and observations generalize to other datasets, it is probably too much to ask as datasets with required properties do not seem to exist. There is a clear consensus to accept the paper.
+ an interesting extension of previous work on emergent communications (e.g., referential games)
+ well written paper
| test | [
"rJhUvu5gf",
"BJ8ZFxKgM",
"SkN953tgG",
"S1RjXkq7z",
"S1oFQJqQG",
"rJuI7k9XG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The setup in the paper for learning representations is different to many other approaches in the area, using to agents that communicate over descriptions of objects using different modalities. The experimental setup is interesting in that it allows comparing approaches in learning an effective representation. The ... | [
7,
7,
7,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_rJGZq6g0-",
"iclr_2018_rJGZq6g0-",
"iclr_2018_rJGZq6g0-",
"rJhUvu5gf",
"SkN953tgG",
"BJ8ZFxKgM"
] |
iclr_2018_rytstxWAW | FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling | The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. Such a model, however, is transductive in nature because parameters are learned through convolutions with both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges for training with large, dense graphs. To relax the requirement of simultaneous availability of test data, we interpret graph convolutions as integral transforms of embedding functions under probability measures. Such an interpretation allows for the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to a batched training scheme as we propose in this work---FastGCN. Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference. We show a comprehensive set of experiments to demonstrate its effectiveness compared with GCN and related models. In particular, training is orders of magnitude more efficient while predictions remain comparably accurate.
| accepted-poster-papers | Graph neural networks (incl. GCNs) have been shown effective on a large range of tasks. However, it has been so far hard (i.e. computationally expensive or requiring the use of heuristics) to apply them to large graphs. This paper aims to address this problem and the solution is clean and elegant. The reviewers generally find it well written and interesting. There were some concerns about the comparison to GraphSAGE (an alternative approach), but these have been addressed in a subsequent revision.
+ an important problem
+ a simple approach
+ convincing results
+ clear and well written
| train | [
"SJce_4YlM",
"r1n9o5jEM",
"BkgVo9s4z",
"SkaxvK_VM",
"B1ymVPEgM",
"HJDVPNYgf",
"H1IdT6AlG",
"B1QpuuTmz",
"ryk5t8Tmf",
"Sy312STmM",
"Hyq35OnXG",
"SJL8DwOff",
"SyMp44dMM",
"Byp_N4_ff",
"HyiEV4dGM",
"rkOyV7wGf",
"SyhcQQvfM",
"r1oFX1wGf",
"rkv4i8Szf",
"HJo69LBfz",
"B1bf9LBMG",
"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"author",
... | [
"Update:\n\nI have read the rebuttal and the revised manuscript. Additionally I had a brief discussion with the authors regarding some aspects of their probabilistic framework. I think that batch training of GCN is an important problem and authors have proposed an interesting solution to this problem. I appreciated... | [
6,
-1,
-1,
-1,
7,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
2,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rytstxWAW",
"iclr_2018_rytstxWAW",
"SkaxvK_VM",
"B1bf9LBMG",
"iclr_2018_rytstxWAW",
"iclr_2018_rytstxWAW",
"iclr_2018_rytstxWAW",
"ryk5t8Tmf",
"Sy312STmM",
"Hyq35OnXG",
"B1bf9LBMG",
"HyiEV4dGM",
"HJDVPNYgf",
"r1oFX1wGf",
"rkOyV7wGf",
"SyhcQQvfM",
"iclr_2018_rytstxWAW",
"... |
iclr_2018_H1vEXaxA- | Emergent Translation in Multi-Agent Communication | While most machine translation systems to date are trained on large parallel corpora, humans learn language in a different way: by being grounded in an environment and interacting with other humans. In this work, we propose a communication game where two agents, native speakers of their own respective languages, jointly learn to solve a visual referential task. We find that the ability to understand and translate a foreign language emerges as a means to achieve shared goals. The emergent translation is interactive and multimodal, and crucially does not require parallel corpora, but only monolingual, independent text and corresponding images. Our proposed translation model achieves this by grounding the source and target languages into a shared visual modality, and outperforms several baselines on both word-level and sentence-level translation tasks. Furthermore, we show that agents in a multilingual community learn to translate better and faster than in a bilingual communication setting. | accepted-poster-papers | The paper considers learning an NMT system while pivoting through images. The task is formulated as a referential game. From the modeling and set-up perspective it is similar to previous work in the area of emergent communication / referential games, e.g., Lazaridou et al (ICLR 17) and especially to Havrylov & Titov (NIPS 17), as similar techniques are used to handle the variable-length channel (RNN encoders / decoders + the ST Gumbel-Softmax estimator). However, its multilingual version is interesting and the results are sufficiently convincing (e.g., comparison to Nakayama and Nishida, 17). The paper would be more attractive for those interested in emergent communication than for the NMT community, as the set-up (using pivoting through images) may be perceived as somewhat exotic by the NMT community. Also, the model is not attention-based (unlike SoA in seq2seq / NMT), and it is not straightforward to incorporate attention (see R2 and author response).
+ an interesting framing of the weakly-supervised MT problem
+ well written
+ sufficiently convincing results
- the set-up and framework (e.g., non-attention based) are questionable from a practical perspective
| val | [
"SyDjKqLEM",
"HJnX0AFeG",
"SyKh0W9eG",
"Bk5lbEjxG",
"r1iwyrpQf",
"r19bjleQz",
"B1z9cegXz",
"S163sxgQf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"My reply to authors' arguments about emergent translation:\n\nI agree that the agents learn to translate without seeing any parallel data. But you are bridging the languages through image which is the common modality. How is this different than bridge based representation learning or machine translation? The only ... | [
-1,
8,
7,
5,
-1,
-1,
-1,
-1
] | [
-1,
5,
3,
5,
-1,
-1,
-1,
-1
] | [
"S163sxgQf",
"iclr_2018_H1vEXaxA-",
"iclr_2018_H1vEXaxA-",
"iclr_2018_H1vEXaxA-",
"iclr_2018_H1vEXaxA-",
"SyKh0W9eG",
"Bk5lbEjxG",
"HJnX0AFeG"
] |
iclr_2018_rJvJXZb0W | An efficient framework for learning sentence representations | In this work we propose a simple and efficient framework for learning sentence representations from unlabelled data. Drawing inspiration from the distributional hypothesis and recent work on learning sentence representations, we reformulate the problem of predicting the context in which a sentence appears as a classification problem. Given a sentence and the context in which it appears, a classifier distinguishes context sentences from other contrastive sentences based on their vector representations. This allows us to efficiently learn different types of encoding functions, and we show that the model learns high-quality sentence representations. We demonstrate that our sentence representations outperform state-of-the-art unsupervised and supervised representation learning methods on several downstream NLP tasks that involve understanding sentence semantics while achieving an order of magnitude speedup in training time. | accepted-poster-papers | Though the approach is not terribly novel, it is quite effective (as confirmed on a wide range of evaluation tasks). The approach is simple and likely to be useful in applications. The paper is well written.
+ simple and efficient
+ high quality evaluation
+ strong results
- novelty is somewhat limited
| train | [
"rJMoj-jxf",
"SJNPXFyeM",
"ByVL483xf",
"SkWHdKwzz",
"BJq_m4imf",
"SJPgI5BR-",
"SJpU59rCW",
"Sk1dtLqAW",
"HkdCWV0Ab",
"BJidT3DCb",
"SyctLIIC-",
"BkzJW9QC-",
"SJbLL_zCb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public",
"public"
] | [
"[REVISION]\n\nThank you for your clarification. I appreciate the effort and think it has improved the paper. I have updated my score accordingly\n\n====== \n\nThis paper proposes a new objective for learning SkipThought-style sentence representations from corpora of ordered sentences. The algorithm is much faster ... | [
6,
8,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJvJXZb0W",
"iclr_2018_rJvJXZb0W",
"iclr_2018_rJvJXZb0W",
"iclr_2018_rJvJXZb0W",
"HkdCWV0Ab",
"SJbLL_zCb",
"BkzJW9QC-",
"SyctLIIC-",
"Sk1dtLqAW",
"iclr_2018_rJvJXZb0W",
"SJpU59rCW",
"iclr_2018_rJvJXZb0W",
"iclr_2018_rJvJXZb0W"
] |
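The sentence-representation record above reformulates context prediction as classification. A minimal sketch of that objective: score each candidate context sentence by a dot product of encodings and apply a softmax over the batch, so the true context is the in-batch positive. The encoders and batch construction are placeholders, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def context_classification_loss(enc_f, enc_g, sentences, contexts):
    """sentences[i] and contexts[i] form a true (sentence, context-sentence) pair.

    The other rows of the batch act as the contrastive candidates.
    """
    u = enc_f(sentences)                    # (B, d)
    v = enc_g(contexts)                     # (B, d)
    scores = u @ v.t()                      # (B, B) candidate scores
    targets = torch.arange(u.size(0), device=u.device)
    return F.cross_entropy(scores, targets) # true context = the diagonal entry
```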
iclr_2018_S1sqHMZCb | NerveNet: Learning Structured Policy with Graph Neural Networks | We address the problem of learning structured policies for continuous control. In traditional reinforcement learning, policies of agents are learned by MLPs which take the concatenation of all observations from the environment as input for predicting actions. In this work, we propose NerveNet to explicitly model the structure of an agent, which naturally takes the form of a graph. Specifically, serving as the agent's policy network, NerveNet first propagates information over the structure of the agent and then predict actions for different parts of the agent. In the experiments, we first show that our NerveNet is comparable to state-of-the-art methods on standard MuJoCo environments. We further propose our customized reinforcement learning environments for benchmarking two types of structure transfer learning tasks, i.e., size and disability transfer. We demonstrate that policies learned by NerveNet are significantly better than policies learned by other models and are able to transfer even in a zero-shot setting.
| accepted-poster-papers | An interesting application of graph neural networks to robotics. The body of a robot is represented as a graph, and the agent’s policy is defined using a graph neural network (GNNs/GCNs) over the graph structure.
The GNN-based policy network performs on par with the best methods on traditional benchmarks, but is shown to be very effective for transfer scenarios: changing robot size or disabling its components. I believe that the reviewers' concern that the original experiments focused solely on centipedes and snakes was (at least partially) addressed in the author response: they showed that their GNN-based model outperforms MLPs on a dataset of 2D walkers.
Overall:
-- an interesting application
-- modeling robot morphology is an under-explored direction
-- the paper is well written
-- experiments are sufficiently convincing (esp. after addressing the concerns re diversity and robustness).
| train | [
"BkjfRYLxz",
"r1r7Vd9gf",
"Hy24AAnlM",
"HyaBB7SmM",
"SkNdxmSmf",
"rkn4g7S7f",
"ryPylXB7G",
"S1FRPUgWG",
"SJ8gfJebG",
"rJm-vMaxG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"This paper proposes NerveNet to represent and learn structured policy for continuous control tasks. Instead of using the widely adopted fully connected MLP, this paper uses Graph Neural Networks to learn a structured controller for various MuJoco environments. It shows that this structured controller can be easily... | [
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1sqHMZCb",
"iclr_2018_S1sqHMZCb",
"iclr_2018_S1sqHMZCb",
"iclr_2018_S1sqHMZCb",
"BkjfRYLxz",
"r1r7Vd9gf",
"Hy24AAnlM",
"rJm-vMaxG",
"rJm-vMaxG",
"iclr_2018_S1sqHMZCb"
] |
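The NerveNet record above builds the policy from message passing over the agent's morphology graph. The module below is a compact, hypothetical sketch of one propagation step (shared message MLP, per-node GRU update) using a dense float adjacency matrix; the sizes and the aggregation scheme are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GraphPropagation(nn.Module):
    """One round of message passing over a fixed robot-morphology graph."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Tanh())
        self.update = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, h, adj):
        # h: (num_nodes, hidden_dim) node states
        # adj: (num_nodes, num_nodes) float adjacency (1.0 where an edge exists)
        m = adj @ self.message(h)          # aggregate messages from neighbours
        return self.update(m, h)           # per-node recurrent state update
```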
iclr_2018_HkMvEOlAb | Learning Latent Representations in Neural Networks for Clustering through Pseudo Supervision and Graph-based Activity Regularization | In this paper, we propose a novel unsupervised clustering approach exploiting the hidden information that is indirectly introduced through a pseudo classification objective. Specifically, we randomly assign a pseudo parent-class label to each observation which is then modified by applying the domain specific transformation associated with the assigned label. Generated pseudo observation-label pairs are subsequently used to train a neural network with Auto-clustering Output Layer (ACOL) that introduces multiple softmax nodes for each pseudo parent-class. Due to the unsupervised objective based on Graph-based Activity Regularization (GAR) terms, softmax duplicates of each parent-class are specialized as the hidden information captured through the help of domain specific transformations is propagated during training. Ultimately we obtain a k-means friendly latent representation. Furthermore, we demonstrate how the chosen transformation type impacts performance and helps propagate the latent information that is useful in revealing unknown clusters. Our results show state-of-the-art performance for unsupervised clustering tasks on MNIST, SVHN and USPS datasets, with the highest accuracies reported to date in the literature. | accepted-poster-papers | The reviewers concerns regarding novelty and the experimental evaluation have been resolved accordingly and all recommend acceptance. I would recommend removing the term "unsupervised" in clustering, as it is redundant. Clustering is, by default, assumed to be unsupervised.
There is some interest in extending this to non-vision domains; however, this is beyond the scope of the current work. | train | [
"H1_YBgxZz",
"r1Eovo2gf",
"rkO0PUnlG",
"r1aVRRsQG",
"SymwYAomz",
"HkMoBrhMf",
"rkjjzrhzM",
"ByAHGrhzz",
"r1BObB2Mz",
"SkSPQrwGf",
"Sy0zGI8Mz",
"BkV2yJNWG",
"ByUAwgHJz",
"Sk_7iiVJG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"This paper presents a method for clustering based on latent representations learned from the classification of transformed data after pseudo-labellisation corresponding to applied transformation. Pipeline: -Data are augmented with domain-specific transformations. For instance, in the case of MNIST, rotations with ... | [
6,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkMvEOlAb",
"iclr_2018_HkMvEOlAb",
"iclr_2018_HkMvEOlAb",
"SkSPQrwGf",
"iclr_2018_HkMvEOlAb",
"iclr_2018_HkMvEOlAb",
"rkO0PUnlG",
"r1Eovo2gf",
"H1_YBgxZz",
"Sy0zGI8Mz",
"BkV2yJNWG",
"iclr_2018_HkMvEOlAb",
"Sk_7iiVJG",
"iclr_2018_HkMvEOlAb"
] |
iclr_2018_HJIoJWZCZ | Adversarial Dropout Regularization | We present a domain adaptation method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by ``fooling'' a special domain classifier network. However, a drawback of this approach is that the domain classifier simply labels the generated features as in-domain or not, without considering the boundaries between classes. This means that ambiguous target features can be generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), which encourages the generator to output more discriminative features for the target domain. Our key idea is to replace the traditional domain critic with a critic that detects non-discriminative features by using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvements over the state of the art. | accepted-poster-papers | The general consensus is that this method provides a practical and interesting approach to unsupervised domain adaptation. One reviewer had concerns with comparing to state of the art baselines, but those have been addressed in the revision.
There were also issues concerning correctness due to a typo. Based on the responses, and on the pseudocode, it seems like there wasn't an issue with the results, just in the way the entropy objective was reported.
You may want to consider reporting the example given by reviewer 2 as a negative example where you expect the method to fail. This will be helpful for researchers using and building on your paper. | test | [
"HJvZyW07M",
"HJ4p6dFeG",
"rJO3y_qgz",
"Hy6M2mybG",
"Sy8PFkAmM",
"rkiADJZmz",
"BkUZUXp7f",
"HytVlE6mG",
"ryU_S767z",
"HkW5HWbQz",
"rJSsGZbmG",
"HyVjZdBGz",
"H1TwZ_HMG",
"SJB7-_BGM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author",
"author",
"author",
"public",
"public",
"author",
"author",
"author"
] | [
"We have double checked our implementation and it was icorrect, so the error was only in the equation written in the original paper draft. Thus the notation error did not affect our experiments.\n\nThe codes are here. We are using Pytorch.\nThis is the minimized objective.\ndef entropy(self,output):\n prob = ... | [
-1,
5,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Sy8PFkAmM",
"iclr_2018_HJIoJWZCZ",
"iclr_2018_HJIoJWZCZ",
"iclr_2018_HJIoJWZCZ",
"HytVlE6mG",
"H1TwZ_HMG",
"rJSsGZbmG",
"rkiADJZmz",
"HkW5HWbQz",
"iclr_2018_HJIoJWZCZ",
"HyVjZdBGz",
"rJO3y_qgz",
"Hy6M2mybG",
"HJ4p6dFeG"
] |
iclr_2018_r1lUOzWCW | Demystifying MMD GANs | We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training. | accepted-poster-papers | This paper does an excellent job at helping to clarify the relationship between various, recently proposed GAN models. The empirical contribution is small, but the KID metric will hopefully be a useful one for researchers. It would be really useful to show that it maintains its advantage when the dimensionality of the images increases (e.g., on Imagenet 128x128). | train | [
"SJsGyNugf",
"rkkFfN5gz",
"rJOKM41-M",
"H1kV32sXz",
"BkP7k4QQG",
"HJ291EQ7z",
"B1WFJNQ7f",
"SkdLkVmXz",
"S1b4HDuez",
"S1YDTsmgG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"This paper claims to demystify MMD-GAN, a generative adversarial network with the maximum mean discrepancy (MMD) as a critic, by showing that the usual estimator for MMD yields unbiased gradient estimates (Theorem 1). It was noted by the authors that biased gradient estimate can cause problem when performing stoch... | [
4,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1lUOzWCW",
"iclr_2018_r1lUOzWCW",
"iclr_2018_r1lUOzWCW",
"BkP7k4QQG",
"iclr_2018_r1lUOzWCW",
"rkkFfN5gz",
"SJsGyNugf",
"rJOKM41-M",
"S1YDTsmgG",
"iclr_2018_r1lUOzWCW"
] |
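The MMD GAN record above proposes the Kernel Inception Distance. A small NumPy sketch of that estimator is below: an unbiased MMD^2 between Inception features of real and generated samples under the cubic polynomial kernel k(x, y) = (x·y/d + 1)^3; feature extraction and the usual averaging over repeated subsets are assumed to happen elsewhere.

```python
import numpy as np

def polynomial_kernel(X, Y):
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3

def kid(real_feats, fake_feats):
    """Unbiased MMD^2 between two sets of Inception features, shapes (n, d) and (m, d)."""
    n, m = len(real_feats), len(fake_feats)
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    # drop diagonal terms for the unbiased within-set estimates
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (n * (n - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (m * (m - 1))
    return term_rr + term_ff - 2.0 * k_rf.mean()
```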
iclr_2018_Hk5elxbRW | Smooth Loss Functions for Deep Top-k Classification | The top-k error is a common measure of performance in machine learning and computer vision. In practice, top-k classification is typically performed with deep neural networks trained with the cross-entropy loss. Theoretical results indeed suggest that cross-entropy is an optimal learning objective for such a task in the limit of infinite data. In the context of limited and noisy data however, the use of a loss function that is specifically designed for top-k classification can bring significant improvements.
Our empirical evidence suggests that the loss function must be smooth and have non-sparse gradients in order to work well with deep neural networks. Consequently, we introduce a family of smoothed loss functions that are suited to top-k optimization via deep learning. The widely used cross-entropy is a special case of our family. Evaluating our smooth loss functions is computationally challenging: a naïve algorithm would require O(n choose k) operations, where n is the number of classes. Thanks to a connection to polynomial algebra and a divide-and-conquer approach, we provide an algorithm with a time complexity of O(kn). Furthermore, we present a novel approximation to obtain fast and stable algorithms on GPUs with single floating point precision. We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of k=5. Our investigation reveals that our loss is more robust to noise and overfitting than cross-entropy. | accepted-poster-papers | The submission proposes a loss surrogate for top-k classification, as in the official ImageNet evaluation. The approach is well motivated, and the paper is very well organized with thorough technical proofs in the appendix and a well-presented main text. The main results are: 1) a theoretically motivated surrogate, 2) that gives up to a couple percent improvement over cross-entropy loss in the presence of label noise or smaller datasets.
It is a bit disappointing that performance is limited in the ideal case and that it does not more gracefully degrade to epsilon better than cross entropy loss. Rather, it seems to give performance epsilon worse than cross-entropy loss in an ideal case with clean labels and lots of data. Nevertheless, it is a step in the right direction for optimizing the error measure to be used during evaluation. The reviewers uniformly recommended acceptance. | train | [
"HykoG7Oef",
"BJbqjU0eM",
"ryOmoYZZM",
"HJfBTmUQz",
"S1Ori7U7z",
"SyFTqXImz",
"H15E5Q8mz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper is clear and well written. The proposed approach seems to be of interest and to produce interesting results. As datasets in various domain get more and more precise, the problem of class confusing with very similar classes both present or absent of the training dataset is an important problem, and this p... | [
6,
7,
8,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hk5elxbRW",
"iclr_2018_Hk5elxbRW",
"iclr_2018_Hk5elxbRW",
"HykoG7Oef",
"BJbqjU0eM",
"ryOmoYZZM",
"iclr_2018_Hk5elxbRW"
] |
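The computational bottleneck mentioned in the top-k record above is a sum over all n-choose-k subsets of class scores, which is an elementary symmetric polynomial of the exponentiated scores. The sketch below shows the plain O(kn) recurrence for those polynomials; it omits the paper's divide-and-conquer and numerical-stability refinements, so it is only illustrative and can overflow for large scores.

```python
import numpy as np

def elementary_symmetric(x, k):
    """sigma_j(x) for j = 0..k in O(k n) time via the standard recurrence.

    sigma_k(exp(s / tau)) is the subset sum that a smooth top-k loss needs;
    this plain-precision version is only safe for small, well-scaled scores.
    """
    e = np.zeros(k + 1)
    e[0] = 1.0
    for xi in x:
        for j in range(k, 0, -1):   # high-to-low so each xi enters a subset at most once
            e[j] += xi * e[j - 1]
    return e

# Example: sigma_2 of [1, 2, 3] is 1*2 + 1*3 + 2*3 = 11
# elementary_symmetric(np.array([1.0, 2.0, 3.0]), 2)[2] == 11.0
```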
iclr_2018_B1Lc-Gb0Z | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with hard-threshold activations is becoming increasingly important, both for network quantization, which can drastically reduce time and energy requirements, and for creating large integrated systems of deep networks, which may have non-differentiable components and must avoid vanishing and exploding gradients for effective learning. However, since gradient descent is not applicable to hard-threshold functions, it is not clear how to learn them in a principled way. We address this problem by observing that setting targets for hard-threshold hidden units in order to minimize loss is a discrete optimization problem, and can be solved as such. The discrete optimization goal is to find a set of targets such that each unit, including the output, has a linearly separable problem to solve. Given these targets, the network decomposes into individual perceptrons, which can then be learned with standard convex approaches. Based on this, we develop a recursive mini-batch algorithm for learning deep hard-threshold networks that includes the popular but poorly justified straight-through estimator as a special case. Empirically, we show that our algorithm improves classification accuracy in a number of settings, including for AlexNet and ResNet-18 on ImageNet, when compared to the straight-through estimator. | accepted-poster-papers | The submission proposes optimization with hard-threshold activations. This setting can lead to compressed networks, and is therefore an interesting setting if learning can be achieved feasibly. This leads to a combinatorial optimization problem due to the non-differentiability of the non-linearity. The submission proceeds to analyze the resulting problem and propose an algorithm for its optimization.
Results show slight improvement over a recent variant of straight-through estimation (Hinton 2012, Bengio et al. 2013), called saturated straight-through estimation (Hubara et al., 2016). Although the improvements are somewhat modest, the submission is interesting for its framing of an important problem and improvement over a popular setting. | train | [
"SJ7YJpueM",
"Byn3CAYlM",
"BkOfh_eWM",
"B168vlvMz",
"HJDD0JDGM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"The paper studies learning in deep neural networks with hard activation functions, e.g. step functions like sign(x). Of course, backpropagation is difficult to adapt to such networks, so prior work has considered different approaches. Arguably the most popular is straight-through estimation (Hinton 2012, Bengio et... | [
7,
7,
7,
-1,
-1
] | [
4,
4,
3,
-1,
-1
] | [
"iclr_2018_B1Lc-Gb0Z",
"iclr_2018_B1Lc-Gb0Z",
"iclr_2018_B1Lc-Gb0Z",
"SJ7YJpueM",
"BkOfh_eWM"
] |
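For reference next to the hard-threshold record above, the (saturated) straight-through estimator that the paper generalizes can be written in a few lines of PyTorch: a sign forward pass with the gradient passed through only where |x| <= 1. This is the baseline trick, not the paper's new target-setting algorithm.

```python
import torch

class SignSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)          # hard-threshold activation (values in {-1, 0, +1})

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # "saturated" straight-through: block the gradient where |x| > 1
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)
```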
iclr_2018_H1WgVz-AZ | Learning Approximate Inference Networks for Structured Prediction | Structured prediction energy networks (SPENs; Belanger & McCallum 2016) use neural network architectures to define energy functions that can capture arbitrary dependencies among parts of structured outputs. Prior work used gradient descent for inference, relaxing the structured output to a set of continuous variables and then optimizing the energy with respect to them. We replace this use of gradient descent with a neural network trained to approximate structured argmax inference. This
“inference network” outputs continuous values that we treat as the output structure. We develop large-margin training criteria for joint training of the structured energy function and inference network. On multi-label classification we report speed-ups
of 10-60x compared to (Belanger et al., 2017) while also improving accuracy. For sequence labeling with simple structured energies, our approach performs comparably to exact inference while being much faster at test time. We then demonstrate improved accuracy by augmenting the energy with a “label language model” that scores entire output label sequences, showing it can improve handling of long-distance dependencies in part-of-speech tagging. Finally, we show how inference networks can replace dynamic programming for test-time inference in conditional random fields, suggestive for their general use for fast inference in structured settings. | accepted-poster-papers | The submission modifies the SPEN framework for structured prediction by adding an inference network in place of the usual combinatorial optimization based inference. The resulting architecture has some similarity to a GAN, and significantly increases the speed of inference.
The submission provides links between two seemingly different frameworks: SPENs and GANs. By replacing inference with a network output, the connection is made, but importantly, this massively speeds up inference and may mark an important step forward in structured prediction with deep learning. | train | [
"H12sn0dlf",
"Sk0CEftxG",
"HytdnPcgG",
"rJKCC5GNM",
"rJsf2997M",
"rJl1nqq7f",
"B1ixj9cmM",
"H1-Y5qqXM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"= Quality = \nOverall, the authors do a good job of placing their work in the context of related research, and employ a variety of non-trivial technical details to get their methods to work well. \n\n= Clarity = \n\nOverall, the exposition regarding the method is good. I found the setup for the sequence tagging ex... | [
7,
5,
9,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1WgVz-AZ",
"iclr_2018_H1WgVz-AZ",
"iclr_2018_H1WgVz-AZ",
"H12sn0dlf",
"H12sn0dlf",
"Sk0CEftxG",
"HytdnPcgG",
"iclr_2018_H1WgVz-AZ"
] |
iclr_2018_rypT3fb0b | LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING | Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive. This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. Another well-known approach for controlling the complexity of DNNs is parameter sharing/tying, where certain sets of weights are forced to share a common value. Some forms of weight sharing are hard-wired to express certain invariances, with a notable example being the shift-invariance of convolutional layers. However, there may be other groups of weights that may be tied together during the learning process, thus further reducing the complexity of the network. In this paper, we adopt a recently proposed sparsity-inducing regularizer, named GrOWL (group ordered weighted l1), which encourages sparsity and, simultaneously, learns which groups of parameters should share a common value. GrOWL has been proven effective in linear regression, being able to identify and cope with strongly correlated covariates. Unlike standard sparsity-inducing regularizers (e.g., l1 a.k.a. Lasso), GrOWL not only eliminates unimportant neurons by setting all the corresponding weights to zero, but also explicitly identifies strongly correlated neurons by tying the corresponding weights to a common value. This ability of GrOWL motivates the following two-stage procedure: (i) use GrOWL regularization in the training process to simultaneously identify significant neurons and groups of parameters that should be tied together; (ii) retrain the network, enforcing the structure that was unveiled in the previous phase, i.e., keeping only the significant neurons and enforcing the learned tying structure. We evaluate the proposed approach on several benchmark datasets, showing that it can dramatically compress the network with slight or even no loss on generalization performance.
| accepted-poster-papers | The paper proposes to regularize via a family of structured sparsity norms on the weights of a deep network. A proximal algorithm is employed for optimization, and results are shown on synthetic data, MNIST, and CIFAR10.
Pros: the regularization scheme is reasonably general, the optimization is principled, the presentation is reasonable, and all three reviewers recommend acceptance.
Cons: the regularization is conceptually not terribly different from other kinds of regularization proposed in the literature. The experiments are limited to quite simple data sets. | val | [
"HJms-iOEz",
"H1fONf_gG",
"Skc-JodVf",
"rkPj2vjeM",
"BkP3B6U4M",
"rkJfM20eG",
"SJk7igVEM",
"r1wvzD27z",
"BkfXrD37G",
"SyXDUD2mG",
"rkwxPE67M"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Thank the authors for their detailed response to my questions. The revision and response provided clearer explantation for the motivation of compressing a deep neural network. Additional experimental results were also included for the uncompressed neural net. I would like to change my rating based on these updates... | [
-1,
6,
-1,
8,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
-1,
5,
-1,
4,
-1,
-1,
-1,
-1,
-1
] | [
"BkfXrD37G",
"iclr_2018_rypT3fb0b",
"SJk7igVEM",
"iclr_2018_rypT3fb0b",
"SyXDUD2mG",
"iclr_2018_rypT3fb0b",
"H1fONf_gG",
"rkJfM20eG",
"H1fONf_gG",
"rkPj2vjeM",
"iclr_2018_rypT3fb0b"
] |
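To make the regularizer described in the record above concrete, here is a small NumPy sketch of evaluating a group ordered weighted l1 (GrOWL) penalty on the rows of a layer's weight matrix. The particular non-increasing weight sequence is an assumption for illustration, not the schedule used in the paper.

```python
import numpy as np

def growl_penalty(W, lam):
    """Group OWL penalty over the rows of W.

    W   : (n_groups, group_size) weight matrix; each row is one group,
          e.g. all outgoing weights of one neuron.
    lam : (n_groups,) non-negative, non-increasing weights.
    The largest lambda is paired with the largest group norm, so large
    (and typically correlated) groups are pulled toward a shared magnitude,
    while small lambdas let unimportant groups shrink to zero.
    """
    norms = np.linalg.norm(W, axis=1)      # l2 norm of each group
    sorted_norms = np.sort(norms)[::-1]    # descending order
    return float(np.dot(lam, sorted_norms))

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))
lam = np.linspace(1.0, 0.1, num=6)         # assumed weight sequence
print(growl_penalty(W, lam))
```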
iclr_2018_S1XolQbRW | Model compression via distillation and quantization | Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.
| accepted-poster-papers | The submission proposes a method for quantization. The approach is reasonably straightforward, and is summarized in Algorithm 1. It is the analysis which is more interesting, showing the relationship between quantization and adding Gaussian noise (Appendix B) - motivating quantization as regularization.
The submission has a reasonable mix of empirical and theoretical results, motivating a simple-to-implement algorithm. All three reviewers recommended acceptance. | val | [
"HkCg0RFlz",
"SkBJ0mdlG",
"SkgcJoogf",
"ryjGLfTQz",
"HJACQSKmG",
"ByEhc3uQG",
"r1DTHwBXG",
"SJG3HPrmM",
"BkV5BwSmG",
"B1iCfvBQf",
"B1lfRZd-G",
"SycJC-ubM",
"Bk1jTZ_Wz",
"HJnHpZ_Wf",
"H1BudrVbM",
"rkMb3Yjlz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The paper proposes to combine two approaches to compress deep neural networks - distillation and quantization. The authors proposed two methods, one largely relying on the distillation loss idea then followed by a quantization step, and another one that also learns the location of the quantization points. Somewhat... | [
7,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1XolQbRW",
"iclr_2018_S1XolQbRW",
"iclr_2018_S1XolQbRW",
"B1iCfvBQf",
"ByEhc3uQG",
"H1BudrVbM",
"SkBJ0mdlG",
"HkCg0RFlz",
"SkgcJoogf",
"iclr_2018_S1XolQbRW",
"SkBJ0mdlG",
"HkCg0RFlz",
"SkgcJoogf",
"iclr_2018_S1XolQbRW",
"rkMb3Yjlz",
"iclr_2018_S1XolQbRW"
] |
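As a rough illustration of the quantized distillation idea summarized in the record above, the NumPy sketch below uniformly quantizes a weight vector to a small number of levels and evaluates a loss that mixes a distillation term (against teacher probabilities) with ordinary cross-entropy. The level count, temperature, and mixing weight are placeholder assumptions, not values from the paper.

```python
import numpy as np

def uniform_quantize(w, num_levels=4):
    # Snap each weight to the nearest of `num_levels` evenly spaced values
    # spanning [min(w), max(w)] -- a simple stand-in for the paper's scheme.
    lo, hi = float(w.min()), float(w.max())
    step = (hi - lo) / (num_levels - 1)
    return lo + np.round((w - lo) / step) * step

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # alpha * KL(teacher_T || student_T) + (1 - alpha) * cross-entropy(labels)
    p_t, p_s = softmax(teacher_logits / T), softmax(student_logits / T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * kl + (1.0 - alpha) * ce

rng = np.random.default_rng(0)
print(uniform_quantize(rng.normal(size=8)))
print(distillation_loss(rng.normal(size=(3, 5)), rng.normal(size=(3, 5)),
                        np.array([0, 2, 4])))
```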
iclr_2018_HyH9lbZAW | Variational Message Passing with Structured Inference Networks | Recent efforts on combining deep models with probabilistic graphical models are promising in providing flexible models that are also easy to interpret. We propose a variational message-passing algorithm for variational inference in such models. We make three contributions. First, we propose structured inference networks that incorporate the structure of the graphical model in the inference network of variational auto-encoders (VAE). Second, we establish conditions under which such inference networks enable fast amortized inference similar to VAE. Finally, we derive a variational message passing algorithm to perform efficient natural-gradient inference while retaining the efficiency of the amortized inference. By simultaneously enabling structured, amortized, and natural-gradient inference for deep structured models, our method simplifies and generalizes existing methods. | accepted-poster-papers | Thank you for submitting your paper to ICLR. The paper presents a general approach for handling inference in probabilistic graphical models that employ deep neural networks. The framework extends Johnson et al. (2016) and Khan & Lin (2017). The reviewers are all in agreement that the paper is suitable for publication. The paper is well written and the use of examples to illustrate the applicability of the methods brings great clarity. The experiments are not the paper's strongest suit and, although the revision has improved this aspect, I would encourage a more comprehensive evaluation of the proposed methods. Nevertheless, this is a strong paper. | train | [
"rkgie2rlf",
"B1ytDAtlG",
"HJGBwE9gG",
"Bkr5LPpXM",
"HkX_IvpXz",
"BJo4LPamG",
"rJKTjyYzG",
"HJsmo1tzM",
"H1T9cJYMz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The authors adapts stochastic natural gradient methods for variational inference with structured inference networks. The variational approximation proposed is similar to SVAE by Jonhson et al. (2016), but rather than directly using the global variable theta in the local approximation for x the authors propose to o... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HyH9lbZAW",
"iclr_2018_HyH9lbZAW",
"iclr_2018_HyH9lbZAW",
"rkgie2rlf",
"B1ytDAtlG",
"HJGBwE9gG",
"rkgie2rlf",
"B1ytDAtlG",
"HJGBwE9gG"
] |
iclr_2018_H1mCp-ZRZ | Action-dependent Control Variates for Policy Optimization via Stein Identity | Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer from large variance in policy gradient estimation, which leads to poor sample efficiency during training. In this work, we propose a control variate method to effectively reduce variance for policy gradient methods. Motivated by Stein’s identity, our method extends the previous control variate methods used in REINFORCE and advantage actor-critic by introducing more flexible and general action-dependent baseline functions. Empirical studies show that our method substantially improves the sample efficiency of the state-of-the-art policy gradient approaches.
 | accepted-poster-papers | Thank you for submitting your paper to ICLR. The reviewers agree that the paper’s development of action-dependent baselines for reducing variance in policy gradient is a strong contribution and that the use of Stein's identity to provide a principled way to think about control variates is sensible. The revision clarified a number of the reviewers’ questions and the resulting paper is suitable for publication in ICLR. | train | [
"rJLc6CN-z",
"Hk7S_RLEG",
"B1Dl__BEf",
"SkRWcmOgz",
"ryP7s5Oxz",
"By16DK9xM",
"S1zJPApmG",
"rJjc58tXz",
"BJ2OqUF7f",
"SJNDc8tQG",
"H13WKlSWG",
"Hyq5QKWWf",
"H1Fan9z1G",
"S17GnS-1G",
"H1siIPk1M"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"public"
] | [
"Hi, thanks for your interest, code has released here: https://github.com/DartML/PPO-Stein-Control-Variate.\n\nWe plan to share the videos of learned policies.",
"After several exchanges with the authors, we have been unable to replicate the results produced in Figure 1 that show the improvement of an action-depe... | [
-1,
-1,
-1,
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Hyq5QKWWf",
"iclr_2018_H1mCp-ZRZ",
"SJNDc8tQG",
"iclr_2018_H1mCp-ZRZ",
"iclr_2018_H1mCp-ZRZ",
"iclr_2018_H1mCp-ZRZ",
"iclr_2018_H1mCp-ZRZ",
"SkRWcmOgz",
"ryP7s5Oxz",
"By16DK9xM",
"rJLc6CN-z",
"iclr_2018_H1mCp-ZRZ",
"S17GnS-1G",
"H1siIPk1M",
"iclr_2018_H1mCp-ZRZ"
] |
iclr_2018_rkcQFMZRb | Variational image compression with a scale hyperprior | We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs, but largely unexplored in image compression using artificial neural networks (ANNs). Unlike existing autoencoder compression methods, our model trains a complex prior jointly with the underlying autoencoder. We demonstrate that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR). Furthermore, we provide a qualitative comparison of models trained for different distortion metrics. | accepted-poster-papers | Thank you for submitting your paper to ICLR. The reviewers and authors have engaged well and the revision has improved the paper. The reviewers are all in agreement that the paper substantially expands the prior work in this area, e.g. by Balle et al. (2016, 2017), and is therefore suitable for publication. Although I understand that the authors have not optimised their compression method for runtime yet, a comment about this prospect in the main text would be a sensible addition. | train | [
"B1i1F5uxz",
"SkZGkFFxG",
"ryY3n25gG",
"SkPhJjZQz",
"r1NAe9bXM",
"B1KUgq-mM",
"S1s7e5WQG",
"Sk_Ge9bmf",
"r1361c-mf",
"ryf4J5ZXf",
"Syqz19WQG",
"rJqPCtZ7z",
"Syy1kl8ez"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"Summary:\n\nThis paper extends the work of Balle et al. (2016, 2017) on using certain types of variational autoencoders for image compression. After encoding pixels with a convolutional net with GDN nonlinearities, the quantized coefficients are entropy encoded. Where before the coefficients were independently enc... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkcQFMZRb",
"iclr_2018_rkcQFMZRb",
"iclr_2018_rkcQFMZRb",
"rJqPCtZ7z",
"Syy1kl8ez",
"B1i1F5uxz",
"SkZGkFFxG",
"SkZGkFFxG",
"SkZGkFFxG",
"ryY3n25gG",
"ryY3n25gG",
"iclr_2018_rkcQFMZRb",
"iclr_2018_rkcQFMZRb"
] |
iclr_2018_H1kG7GZAW | Variational Inference of Disentangled Latent Concepts from Unlabeled Observations | Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc. We consider the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement. We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder's output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality).
 | accepted-poster-papers | Thank you for submitting your paper to ICLR. The reviewers are all in agreement that the paper is suitable for publication, each revising their score upwards in response to the revision that has made the paper stronger.
The authors may want to consider adding a discussion about whether the simple standard Gaussian prior, which is invariant under transformation by an orthogonal matrix, is a sensible one if the objective is to find disentangled representations. Alternatives, such as sparse priors, might be more sensible if a model-based solution to this problem is sought. | test | [
"SkF57lqez",
"SydFTsFkG",
"r1YT_Wtxf",
"BkHttn74M",
"SyAI1RfEM",
"HyhMdTc7G",
"BJdehEFXG",
"HkbQ3I_mz",
"SJ3gvydXz",
"HyOiU1OXz",
"Sy_I8yuXf",
"SyEK7AHez"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"******\nUpdate: revising reviewer score to 6 after acknowledging revisions and improved manuscript\n******\n\nThe authors propose a new regularization term modifying the VAE (Kingma et al 2013) objective to encourage learning disentangling representations.\nSpecifically, the authors suggest to add penalization to ... | [
6,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1kG7GZAW",
"iclr_2018_H1kG7GZAW",
"iclr_2018_H1kG7GZAW",
"SyAI1RfEM",
"HyOiU1OXz",
"BJdehEFXG",
"Sy_I8yuXf",
"SyEK7AHez",
"SkF57lqez",
"r1YT_Wtxf",
"SydFTsFkG",
"iclr_2018_H1kG7GZAW"
] |
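To give a concrete feel for the kind of regularizer the record above describes, here is a small NumPy sketch of one simple variant: the covariance of the inferred latent means over a minibatch is pushed toward the identity, decorrelating the latent dimensions. The penalty weights are assumptions for illustration.

```python
import numpy as np

def disentanglement_penalty(mu, lam_offdiag=10.0, lam_diag=5.0):
    """Penalize deviation of the posterior-mean covariance from identity.

    mu : (batch, latent_dim) approximate-posterior means for a minibatch.
    Off-diagonal covariance entries are driven to zero (decorrelated
    latents) and diagonal entries toward one (each latent stays active).
    """
    mu_c = mu - mu.mean(axis=0, keepdims=True)
    cov = mu_c.T @ mu_c / (mu.shape[0] - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return (lam_offdiag * np.sum(off_diag ** 2)
            + lam_diag * np.sum((np.diag(cov) - 1.0) ** 2))

rng = np.random.default_rng(0)
mu = rng.normal(size=(128, 10))
print(disentanglement_penalty(mu))
```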
iclr_2018_rJNpifWAb | Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches | Stochastic neural net weights are used in a variety of contexts, including regularization, Bayesian neural nets, exploration in reinforcement learning, and evolution strategies. Unfortunately, due to the large number of weights, all the examples in a mini-batch typically share the same weight perturbation, thereby limiting the variance reduction effect of large mini-batches. We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations. We show that flipout is effective at regularizing LSTMs, and outperforms previous methods. Flipout also enables us to vectorize evolution strategies: in our experiments, a single GPU with flipout can handle the same throughput as at least 40 CPU cores using existing methods, equivalent to a factor-of-4 cost reduction on Amazon Web Services. | accepted-poster-papers | Thank you for submitting your paper to ICLR. The idea is simple, easy to implement, and effective. The paper examines the performance fairly thoroughly across a number of different scenarios, showing that the method consistently reduces variance. How this translates into final performance is complex of course, but faster convergence is demonstrated and the revised experiments in Table 2 show that it can lead to improvements in accuracy. | train | [
"HyQ2gfD4G",
"B1NpHn8EG",
"rkLiPl9xz",
"rknUpWqgz",
"Hkh0HMjgM",
"H1MugXp7M",
"ByVfb76Qz",
"ryxgaz6Qf",
"ryqxiGTmG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Thank you for your comment.\n\nVariance reduction is a central issue in stochastic optimization, and countless papers have tried to address it. To summarize, lower variance enables faster convergence and hence improves the sample efficiency. We gave one reference above, but there are many more that we did not ment... | [
-1,
-1,
6,
8,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
3,
4,
-1,
-1,
-1,
-1
] | [
"B1NpHn8EG",
"H1MugXp7M",
"iclr_2018_rJNpifWAb",
"iclr_2018_rJNpifWAb",
"iclr_2018_rJNpifWAb",
"Hkh0HMjgM",
"rkLiPl9xz",
"iclr_2018_rJNpifWAb",
"rknUpWqgz"
] |
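The sketch below illustrates the flipout idea described in the record above for one fully connected layer: a single shared weight perturbation is decorrelated across the minibatch with per-example random sign vectors, so each example effectively sees its own perturbation without materialising one weight matrix per example. Shapes and the perturbation scale are assumptions for the demo.

```python
import numpy as np

def flipout_layer(x, W, delta_W, rng):
    """Pseudo-independent per-example perturbations from one shared sample.

    x       : (batch, d_in) inputs
    W       : (d_in, d_out) mean weights
    delta_W : (d_in, d_out) one sampled perturbation, shared by the batch
    Example n effectively uses W + delta_W * outer(r_n, s_n), where r_n and
    s_n are random +/-1 vectors of size d_in and d_out respectively.
    """
    batch, d_in = x.shape
    d_out = W.shape[1]
    r = rng.choice([-1.0, 1.0], size=(batch, d_in))
    s = rng.choice([-1.0, 1.0], size=(batch, d_out))
    return x @ W + ((x * r) @ delta_W) * s

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
delta_W = 0.1 * rng.normal(size=(3, 2))   # assumed Gaussian perturbation
print(flipout_layer(x, W, delta_W, rng))
```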
iclr_2018_r1l4eQW0Z | Kernel Implicit Variational Inference | Recent progress in variational inference has paid much attention to the flexibility of variational posteriors. One promising direction is to use implicit distributions, i.e., distributions without tractable densities as the variational posterior. However, existing methods on implicit posteriors still face challenges of noisy estimation and computational infeasibility when applied to models with high-dimensional latent variables. In this paper, we present a new approach named Kernel Implicit Variational Inference that addresses these challenges. As far as we know, for the first time implicit variational inference is successfully applied to Bayesian neural networks, which shows promising results on both regression and classification tasks. | accepted-poster-papers | Thank you for submitting your paper to ICLR. This paper was enhanced noticeably in the rebuttal period and two of the reviewers improved their score as a result. There is a good range of experimental work on a number of different tasks. The addition of the comparison with Liu & Feng, 2016 to the appendix was sensible. Please make sure that the general conclusions drawn from this are explained in the main text and also the differences from Tran et al., 2017 (i.e. that the original model can also be implicit in this case). | train | [
"Hk8v9J5eM",
"rkzduZIyf",
"S1jTB1PlM",
"SkExYk0QG",
"BkyjKpa7M",
"S1wm3mKmf",
"S1UY9z47G",
"HkKijfE7G",
"rJVK3zEmf",
"Hkmf7Nukz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Update: I read the other reviews and the authors' rebuttal. Thanks to the authors for clarifying some details. I'm still against the paper being accepted. But I don't have a strong opinion and will not argue against so if other reviewers are willing. \n\n------\n\nThe authors propose Kernel Implicit VI, an algorit... | [
5,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1l4eQW0Z",
"iclr_2018_r1l4eQW0Z",
"iclr_2018_r1l4eQW0Z",
"iclr_2018_r1l4eQW0Z",
"S1wm3mKmf",
"rJVK3zEmf",
"Hk8v9J5eM",
"S1jTB1PlM",
"rkzduZIyf",
"iclr_2018_r1l4eQW0Z"
] |
iclr_2018_Skdvd2xAZ | A Scalable Laplace Approximation for Neural Networks | We leverage recent insights from second-order optimisation for neural networks to construct a Kronecker factored Laplace approximation to the posterior over the weights of a trained network. Our approximation requires no modification of the training procedure, enabling practitioners to estimate the uncertainty of their models currently used in production without having to retrain them. We extensively compare our method to using Dropout and a diagonal Laplace approximation for estimating the uncertainty of a network. We demonstrate that our Kronecker factored method leads to better uncertainty estimates on out-of-distribution data and is more robust to simple adversarial attacks. Our approach only requires calculating two square curvature factor matrices for each layer. Their size is equal to the respective square of the input and output size of the layer, making the method efficient both computationally and in terms of memory usage. We illustrate its scalability by applying it to a state-of-the-art convolutional network architecture. | accepted-poster-papers | This paper gives a scalable Laplace approximation which makes use of recently proposed Kronecker-factored approximations to the Gauss-Newton matrix. The approach seems sound and useful. While it is a rather natural extension of existing methods, it is well executed, and the ideas seem worth putting out there.
| train | [
"HkLWn48Ef",
"HJM4Z-IVz",
"ByVD9ZdxG",
"rJ_9qMuef",
"rJ7aicdgM",
"BJF6oL6mM",
"S1FDsITXf",
"r1Kqi8pQG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Many thanks for writing your rebuttal and for adding experiments on variational inference with fully factorized posterior. I believe that these comparisons add value to the proposal, given that the proposed approach achieves better performance. I'm keen to raise my score due to that, although I still think that th... | [
-1,
-1,
9,
6,
6,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
4,
-1,
-1,
-1
] | [
"r1Kqi8pQG",
"S1FDsITXf",
"iclr_2018_Skdvd2xAZ",
"iclr_2018_Skdvd2xAZ",
"iclr_2018_Skdvd2xAZ",
"ByVD9ZdxG",
"rJ7aicdgM",
"rJ_9qMuef"
] |
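A rough NumPy sketch of how approximate posterior weight samples can be drawn from Kronecker-factored curvature factors, as described in the record above. The toy factors, the damping value, and the exact orientation of the two factors are assumptions; conventions differ between implementations, so treat this purely as an illustration.

```python
import numpy as np

def sample_weights_kfac_laplace(W_map, A, G, damping=1e-2, rng=None):
    """Draw W with vec(W) ~ N(vec(W_map), (A kron G)^-1), approximately.

    W_map : (d_out, d_in) trained (MAP) weights of one layer
    A     : (d_in, d_in) input-covariance curvature factor
    G     : (d_out, d_out) output-side curvature factor
    Using the matrix-normal identity, a sample is
    W_map + L_G^{-T} Z L_A^{-1}, with Cholesky factors L_A, L_G and an
    iid standard-normal matrix Z.
    """
    rng = rng or np.random.default_rng(0)
    d_out, d_in = W_map.shape
    L_A = np.linalg.cholesky(A + damping * np.eye(d_in))
    L_G = np.linalg.cholesky(G + damping * np.eye(d_out))
    Z = rng.normal(size=(d_out, d_in))
    right = np.linalg.solve(L_A.T, Z.T).T          # Z @ L_A^{-1}
    return W_map + np.linalg.solve(L_G.T, right)   # L_G^{-T} @ (Z @ L_A^{-1})

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
A = X.T @ X / 50.0                                 # toy input covariance
G = 2.0 * np.eye(3)                                # toy output-side factor
W_map = rng.normal(size=(3, 4))
print(sample_weights_kfac_laplace(W_map, A, G, rng=rng))
```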
iclr_2018_B1IDRdeCW | The High-Dimensional Geometry of Binary Neural Networks | Recent research has shown that one can train a neural network with binary weights and activations at train time by augmenting the weights with a high-precision continuous latent variable that accumulates small changes from stochastic gradient descent. However, there is a dearth of work to explain why one can effectively capture the features in data with binary weights and activations. Our main result is that the neural networks with binary weights and activations trained using the method of Courbariaux, Hubara et al. (2016) work because of the high-dimensional geometry of binary vectors. In particular, the ideal continuous vectors that extract out features in the intermediate representations of these BNNs are well-approximated by binary vectors in the sense that dot products are approximately preserved. Compared to previous research that demonstrated good classification performance with BNNs, our work explains why these BNNs work in terms of HD geometry. Furthermore, the results and analysis used on BNNs are shown to generalize to neural networks with ternary weights and activations. Our theory serves as a foundation for understanding not only BNNs but a variety of methods that seek to compress traditional neural networks. Furthermore, a better understanding of multilayer binary neural networks serves as a starting point for generalizing BNNs to other neural network architectures such as recurrent neural networks. | accepted-poster-papers | This paper analyzes mathematically why weights of trained networks can be replaced with ternary weights without much loss in accuracy. Understanding this is an important problem, as binary or ternary weights can be much more efficient on limited hardware, and we've seen much empirical success of binarization schemes. This paper shows that the continuous angles and dot products are well approximated in the discretized network. The paper concludes with an input rotation trick to fix discretization failures in the first layer.
Overall, the contribution seems substantial, and the reviewers haven't found any significant issues. One reviewer wasn't convinced of the problem's importance, but I disagree here. I think the paper will plausibly be helpful for guiding architectural and algorithmic decisions. I recommend acceptance.
| train | [
"SkhtYrwef",
"Ske7rLdeM",
"ryDPFMKef",
"rJNQu_zVf",
"HJGnzbfVG",
"B1kLURtzG",
"rkeGHyKMG",
"B1OG_Y_GM",
"rkj_PefMG",
"SyzXDgfff",
"Sy1bHgMGf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"author",
"author",
"author"
] | [
"This paper investigates numerically and theoretically the reasons behind the empirical success of binarized neural networks. Specifically, they observe that:\n\n(1) The angle between continuous vectors sampled from a spherical symmetric distribution and their binarized version is relatively small in high dimension... | [
7,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1IDRdeCW",
"iclr_2018_B1IDRdeCW",
"iclr_2018_B1IDRdeCW",
"HJGnzbfVG",
"rkeGHyKMG",
"SyzXDgfff",
"B1OG_Y_GM",
"iclr_2018_B1IDRdeCW",
"SkhtYrwef",
"Ske7rLdeM",
"ryDPFMKef"
] |
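The central geometric claim in the record above is easy to check numerically: in high dimensions, the angle between a random continuous vector and its binarized version concentrates around a modest value, so dot products are roughly preserved. A tiny NumPy experiment (the dimensions and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_angle_deg(dim, trials=2000):
    # Angle between a standard Gaussian vector and its sign-binarized copy.
    v = rng.normal(size=(trials, dim))
    b = np.sign(v)
    cos = np.sum(v * b, axis=1) / (
        np.linalg.norm(v, axis=1) * np.linalg.norm(b, axis=1))
    return float(np.degrees(np.arccos(cos)).mean())

for dim in (8, 64, 512, 4096):
    print(dim, round(mean_angle_deg(dim), 2))
# As dim grows the angle concentrates near arccos(sqrt(2/pi)), about 37 degrees.
```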
iclr_2018_B1ae1lZRb | Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy | Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems -- the models (often deep networks or wide networks or both) are compute and memory intensive. Low precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low precision networks can be significantly improved by using knowledge distillation techniques. We call our approach Apprentice and show state-of-the-art accuracies using ternary precision and 4-bit precision for many variants of ResNet architecture on ImageNet dataset. We study three schemes in which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline. | accepted-poster-papers | Meta score: 7
The paper combines low precision computation with different approaches to teacher-student knowledge distillation. The experimentation is good, with good experimental analysis. Very clearly written. The main contribution is in the different forms of teacher-student training combined with low precision.
Pros:
- good practical contribution
- good experiments
- good analysis
- well written
Cons:
- limited originality | train | [
"B1QhiA4eG",
"rkPqK_tef",
"Byv_pHRlM",
"B1e0XDKXf",
"H1qebLVff",
"Hkg4HumZz",
"HJ-vEuXWz",
"r1n-7OXbG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The authors investigate knowledge distillation as a way to learn low precision networks. They propose three training schemes to train a low precision student network from a teacher network. They conduct experiments on ImageNet-1k with variants of ResNets and multiple low precision regimes and compare performance w... | [
7,
7,
8,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1ae1lZRb",
"iclr_2018_B1ae1lZRb",
"iclr_2018_B1ae1lZRb",
"iclr_2018_B1ae1lZRb",
"HJ-vEuXWz",
"B1QhiA4eG",
"rkPqK_tef",
"Byv_pHRlM"
] |
iclr_2018_H1Dy---0Z | Distributed Prioritized Experience Replay | We propose a distributed architecture for deep reinforcement learning at scale, that enables agents to learn effectively from orders of magnitude more data than previously possible. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared experience replay memory; the learner replays samples of experience and updates the neural network. The architecture relies on prioritized experience replay to focus only on the most significant data generated by the actors. Our architecture substantially improves the state of the art on the Arcade Learning Environment, achieving better final performance in a fraction of the wall-clock training time. | accepted-poster-papers | meta score: 8
The paper presents a distributed architecture using prioritized experience replay for deep reinforcement learning. It is well-written and the experimentation is extremely strong. The main issue is the originality - technically, it extends previous work in a limited way; the main contribution is practical, and this is validated by the experiments. The experimental support is such that the paper has meaningful conclusions and will surely be of interest to people working in the field. Thus I would say it is comfortably over the acceptance threshold.
Pros:
- good motivation and literature review
- strong experimentation
- well-written and clearly presented
- details in the appendix are very helpful
Cons:
- possibly limited originality in terms of modelling advances
| train | [
"HJhRlOzkM",
"Hkx8IaKgM",
"ry8UxQ6gM",
"rkgPS1VEf",
"HkpmgU67M",
"BkcN9edQM",
"BJrW5xu7f",
"S1uvLgdXG",
"ByN9zlu7f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper examines a distributed Deep RL system in which experiences, rather than gradients, are shared between the parallel workers and the centralized learner. The experiences are accumulated into a central replay memory and prioritized replay is used to update the policy based on the diverse experience accumul... | [
9,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1Dy---0Z",
"iclr_2018_H1Dy---0Z",
"iclr_2018_H1Dy---0Z",
"BkcN9edQM",
"iclr_2018_H1Dy---0Z",
"BJrW5xu7f",
"Hkx8IaKgM",
"HJhRlOzkM",
"ry8UxQ6gM"
] |
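As a minimal illustration of the prioritized replay at the core of the architecture in the record above, the NumPy sketch below converts absolute TD errors into sampling probabilities and importance weights. The alpha and beta exponents are placeholder assumptions.

```python
import numpy as np

def prioritized_sample(td_errors, batch_size, alpha=0.6, beta=0.4, rng=None):
    """Sample transition indices with probability proportional to |TD error|^alpha.

    Returns the sampled indices and importance-sampling weights that
    correct for the non-uniform sampling (normalised by the largest weight).
    """
    rng = rng or np.random.default_rng(0)
    priorities = (np.abs(td_errors) + 1e-6) ** alpha
    probs = priorities / priorities.sum()
    idx = rng.choice(len(td_errors), size=batch_size, p=probs)
    weights = (len(td_errors) * probs[idx]) ** (-beta)
    return idx, weights / weights.max()

rng = np.random.default_rng(0)
td = rng.normal(size=1000)
idx, w = prioritized_sample(td, batch_size=32, rng=rng)
print(idx[:8], w[:8])
```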
iclr_2018_B1Gi6LeRZ | Learning from Between-class Examples for Deep Sound Recognition | Deep learning methods have achieved high performance in sound recognition tasks. Deciding how to feed the training data is important for further performance improvement. We propose a novel learning method for deep sound recognition: Between-Class learning (BC learning). Our strategy is to learn a discriminative feature space by recognizing the between-class sounds as between-class sounds. We generate between-class sounds by mixing two sounds belonging to different classes with a random ratio. We then input the mixed sound to the model and train the model to output the mixing ratio. The advantages of BC learning are not limited only to the increase in variation of the training data; BC learning leads to an enlargement of Fisher’s criterion in the feature space and a regularization of the positional relationship among the feature distributions of the classes. The experimental results show that BC learning improves the performance on various sound recognition networks, datasets, and data augmentation schemes, in which BC learning proves to be always beneficial. Furthermore, we construct a new deep sound recognition network (EnvNet-v2) and train it with BC learning. As a result, we achieved a performance that surpasses the human level. | accepted-poster-papers | meta score: 8
This is a good paper which augments the data by mixing sound classes, and then learns the mixing ratio. Experiments are performed on a number of sound classification datasets, with strong results.
Pros
- novel approach, clearly explained
- very good set of experimentation with excellent results
- good approach to mixing using perceptual criteria
Cons
- discussion doesn't really generalise beyond sound recognition
| train | [
"ryKFRjXBf",
"BJ3tjUzSG",
"ByW22ClHf",
"BJ79_7RNM",
"S103m19Nf",
"HJmtVULeG",
"HJ80q6KlG",
"r1vicbqeG",
"S1O3zOp7f",
"BJlPLX4Mz",
"SJp7SQEGz",
"rJIDfQNfM"
] | [
"author",
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"We have revised the paper, considering the comments from AnonReviewer3.\nMajor changes:\n- The last paragraph of Section 1: modified the description about the novelty of our paper.\n- Section 3.3.2: added the description about the dimension of the feature space.",
"Thanks for your questions.\n1. Each fold has 40... | [
-1,
-1,
-1,
-1,
-1,
9,
4,
8,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B1Gi6LeRZ",
"ByW22ClHf",
"iclr_2018_B1Gi6LeRZ",
"S103m19Nf",
"SJp7SQEGz",
"iclr_2018_B1Gi6LeRZ",
"iclr_2018_B1Gi6LeRZ",
"iclr_2018_B1Gi6LeRZ",
"iclr_2018_B1Gi6LeRZ",
"HJmtVULeG",
"HJ80q6KlG",
"r1vicbqeG"
] |
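A bare-bones NumPy sketch of the between-class mixing in the record above: two waveforms from different classes are mixed with a random ratio, and the training target is the ratio itself. The paper mixes sounds with gains derived from perceived sound pressure; the simple energy-based normalisation here is an assumption made only to keep the example short.

```python
import numpy as np

def bc_mix(x1, x2, y1, y2, num_classes, rng=None):
    """Mix two sounds from different classes; the label is the mixing ratio.

    x1, x2 : 1-D waveforms of equal length, from classes y1 != y2
    Returns the mixed waveform and a soft label with r at y1 and 1-r at y2.
    """
    rng = rng or np.random.default_rng(0)
    r = rng.uniform(0.0, 1.0)
    # Crude loudness equalisation before mixing (stand-in for the paper's
    # sound-pressure-based gain computation).
    g1 = x1 / (np.sqrt(np.mean(x1 ** 2)) + 1e-8)
    g2 = x2 / (np.sqrt(np.mean(x2 ** 2)) + 1e-8)
    mixed = r * g1 + (1.0 - r) * g2
    target = np.zeros(num_classes)
    target[y1], target[y2] = r, 1.0 - r
    return mixed, target

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=16000), rng.normal(size=16000)
mixed, target = bc_mix(x1, x2, y1=3, y2=7, num_classes=10, rng=rng)
print(mixed.shape, target)
```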
iclr_2018_ryiAv2xAZ | Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples | The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish in- and out-of-distributions. Recently, to handle this issue, several threshold-based detectors have been proposed given pre-trained neural classifiers. However, the performance of prior works highly depends on how to train the classifiers since they only focus on improving inference procedures. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first one forces the classifier to be less confident on samples from out-of-distribution, and the second one is for (implicitly) generating the most effective training samples for the first one. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets. | accepted-poster-papers | Meta score: 6
The paper approaches the problem of identifying out-of-distribution data by modifying the objective function to include a generative term. Experiments on a number of image datasets.
Pros:
- clearly expressed idea, well-supported by experimentation
- good experimental results
- well-written
Cons:
- slightly limited novelty
- could be improved by linking to work on semi-supervised learning approaches using GANs
The authors note that ICLR submission 267 (https://openreview.net/forum?id=H1VGkIxRZ) covers similar ground to theirs. | train | [
"B1ja8-9lf",
"B1klq-5lG",
"By_HQdCeG",
"Sk-WbjBMz",
"HypDejBMM",
"SJyZlsSMG",
"ry5v1orGG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I have read authors' reply. In response to authors' comprehensive reply and feedback. I upgrade my score to 6.\n\n-----------------------------\n\nThis paper presents a novel approach to calibrate classifiers for out of distribution samples. In additional to the original cross entropy loss, the “confidence loss” ... | [
6,
7,
6,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ryiAv2xAZ",
"iclr_2018_ryiAv2xAZ",
"iclr_2018_ryiAv2xAZ",
"iclr_2018_ryiAv2xAZ",
"By_HQdCeG",
"B1klq-5lG",
"B1ja8-9lf"
] |
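The first extra loss term described in the record above can be illustrated in a few lines: on samples treated as out-of-distribution, the classifier's predictive distribution is pulled toward the uniform distribution, while ordinary cross-entropy is kept on in-distribution data. A NumPy sketch; the weighting coefficient is an assumption.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_loss(in_logits, in_labels, out_logits, beta=1.0):
    # Cross-entropy on in-distribution samples plus a KL(uniform || p) term
    # that makes predictions on out-of-distribution samples less confident.
    p_in = softmax(in_logits)
    ce = -np.log(p_in[np.arange(len(in_labels)), in_labels] + 1e-12).mean()
    p_out = softmax(out_logits)
    k = out_logits.shape[-1]
    kl_uniform = (np.log(1.0 / k) - np.log(p_out + 1e-12)).mean(axis=-1).mean()
    return ce + beta * kl_uniform

rng = np.random.default_rng(0)
in_logits = rng.normal(size=(4, 10))
out_logits = rng.normal(size=(4, 10))
print(confidence_loss(in_logits, np.array([1, 3, 5, 7]), out_logits))
```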
iclr_2018_SkFAWax0- | VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop | We present a new neural text to speech (TTS) method that is able to transform text to speech in voices that are sampled in the wild. Unlike other systems, our solution is able to deal with unconstrained voice samples and without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and for updating the buffer itself. The input sentence is encoded using a context-free lookup table that contains one entry per character or phoneme. The speakers are similarly represented by a short vector that can also be fitted to new identities, even with only a few samples. Variability in the generated speech is achieved by priming the buffer prior to generating the audio. Experimental results on several datasets demonstrate convincing capabilities, making TTS accessible to a wider range of applications. In order to promote reproducibility, we release our source code and models. | accepted-poster-papers | Meta score: 7
This paper presents a novel architecture for neural network based TTS built around a shifting memory buffer. The authors have made good efforts to evaluate this system against other state-of-the-art neural TTS systems, although this is hampered by the need for re-implementation and the evident lack of optimal hyperparameters for e.g. Tacotron. TTS is hard to evaluate against existing approaches, since it requires subjective user evaluation. But overall, despite its limitations, this is a good and interesting paper which I would like to see accepted.
Pros:
- novel architecture
- good experimentation on multiple databases
- good response to reviewer comments
- good results
Cons:
- some problems with the experimental comparison (baselines compared against)
- writing could be clearer, and sometimes it feels like the authors are slightly overclaiming
I take the point that this might be more suitable for a speech conference, but it seems to me that the paper offers enough to the ICLR community for it to be worth accepting.
| train | [
"HkP46XXlf",
"rJW-33tlG",
"Hy_P77pxM",
"rJzre96bf",
"SkY6kqpWM",
"HkBPy9pZf",
"SyGuaKpZG",
"Hy9R2F6-M",
"r1IFcs7xG",
"SyrJ6XQgM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
"This is an interesting paper investigating a novel neural TTS strategy that can generate speech signals by sampling voices in the wild. The main idea here is to use a working memory with a shifting buffer. I also listened to the samples posted on github and the quality of the generated voices seems to be OK cons... | [
8,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SkFAWax0-",
"iclr_2018_SkFAWax0-",
"iclr_2018_SkFAWax0-",
"SkY6kqpWM",
"HkBPy9pZf",
"rJW-33tlG",
"HkP46XXlf",
"Hy_P77pxM",
"SyrJ6XQgM",
"iclr_2018_SkFAWax0-"
] |