oaimli/PeerSum

* Modalities: Text
* Formats: JSON
* Languages: English
* Libraries: Datasets, pandas

This is PeerSum, a multi-document summarization dataset in the peer-review domain. More details can be found in the paper accepted at EMNLP 2023, Summarizing Multiple Documents with Conversational Structure for Meta-review Generation. The original code and datasets are publicly available on GitHub.

Please use the following code to download the dataset with the datasets library from Hugging Face.

from datasets import load_dataset

# All samples live in a single 'all' split; the 'label' field marks the split.
peersum_all = load_dataset('oaimli/PeerSum', split='all')
peersum_train = peersum_all.filter(lambda s: s['label'] == 'train')
peersum_val = peersum_all.filter(lambda s: s['label'] == 'val')
peersum_test = peersum_all.filter(lambda s: s['label'] == 'test')
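
For a quick sanity check, the following sketch (assuming the loading code above has run) confirms that the three label-based splits partition the full dataset:

# Sketch: verify the label-based splits cover the whole dataset.
# Assumes peersum_all/train/val/test from the loading snippet above.
n_total = len(peersum_all)
n_splits = len(peersum_train) + len(peersum_val) + len(peersum_test)
print(f"total={n_total}, train+val+test={n_splits}")
assert n_total == n_splits, "every sample should carry one of train/val/test"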

The Hugging Face dataset is mainly for multi-document summarization. Each sample contains the following keys (a short usage sketch follows the list):

* paper_id: str (a link to the raw data)
* paper_title: str
* paper_abstract: str
* paper_acceptance: str
* meta_review: str
* review_ids: list(str)
* review_writers: list(str)
* review_contents: list(str)
* review_ratings: list(int)
* review_confidences: list(int)
* review_reply_tos: list(str)
* label: str (train, val, or test)
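
For meta-review generation, the parallel review lists can be flattened into a single source document and paired with meta_review as the target. A minimal sketch, assuming the splits loaded above; the separator string is an illustrative choice, not part of the dataset:

# Sketch: build (source, target) pairs for multi-document summarization.
# Note: review_ratings/review_confidences use -1 for comments without a
# score (e.g. author responses and public comments).
def to_pair(sample):
    source = "\n\n".join(sample["review_contents"])  # reviews and replies
    target = sample["meta_review"]
    return source, target

src, tgt = to_pair(peersum_train[0])
print(src[:200], "...")
print(tgt[:200], "...")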

You can also download the raw data from Google Drive. The raw data contains more information and can be used for other analyses of peer reviews.

