| Model | type | GitHub | abstract | project_page | Space | Dataset | title | authors | arxiv_id | id | OpenReview |
|---|---|---|---|---|---|---|---|---|---|---|---|
[] | Poster | [] | Image restoration garners substantial interest due to the exponential surge in demands for recovering high-quality images from diverse mobile camera devices, adverse lighting conditions, suboptimal shooting environments, and frequent image compression for efficient transmission purposes. Yet this problem gather... | [] | [] | DreamClean: Restoring Clean Image Using Deep Diffusion Prior | [
"Jie Xiao",
"Ruili Feng",
"Han Zhang",
"Zhiheng Liu",
"Zhantao Yang",
"Yurui Zhu",
"Xueyang Fu",
"Kai Zhu",
"Yu Liu",
"Zheng-Jun Zha"
] | 19,402 | https://openreview.net/forum?id=6ALuy19mPa | ||
[] | Poster | [] | Post-hoc out-of-distribution (OOD) detection has garnered intensive attention in reliable machine learning. Many efforts have been dedicated to deriving score functions based on logits, distances, or rigorous data distribution assumptions to identify low-scoring OOD samples. Nevertheless, these estimated scores may fail... | [] | [] | ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection | [
"Bo Peng",
"Yadan Luo",
"Yonggang Zhang",
"Yixuan Li",
"Zhen Fang"
] | 2402.17888 | 19,568 | https://openreview.net/forum?id=1pSL2cXWoz | |
[] | Poster | [
"https://github.com/cszhilu1998/SelfHDR"
] | Merging multi-exposure images is a common approach for obtaining high dynamic range (HDR) images, with the primary challenge being the avoidance of ghosting artifacts in dynamic scenes. Recent methods have proposed using deep neural networks for deghosting. However, the methods typically rely on sufficient data with HD... | [] | [] | Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes | [
"Zhilu Zhang",
"Haoyu Wang",
"Shuai Liu",
"Xiaotao Wang",
"LEI LEI",
"Wangmeng Zuo"
] | 2310.01840 | 18,010 | https://openreview.net/forum?id=jjiOHEcS2c | |
[] | Spotlight Poster | [
"https://github.com/apple/ml-ferret"
] | We introduce Ferret, a new Multimodal Large Language Model (MLLM) capable of understanding spatial referring of any shape or granularity within an image and accurately grounding open-vocabulary descriptions. To unify referring and grounding in the LLM paradigm, Ferret employs a novel and powerful hybrid region represen... | [] | [] | Ferret: Refer and Ground Anything Anywhere at Any Granularity | [
"Haoxuan You",
"Haotian Zhang",
"Zhe Gan",
"Xianzhi Du",
"Bowen Zhang",
"Zirui Wang",
"Liangliang Cao",
"Shih-Fu Chang",
"Yinfei Yang"
] | 2310.07704 | 19,537 | https://openreview.net/forum?id=2msbbX3ydD | |
[] | Poster | [] | Existing vision-language models exhibit strong generalization on a variety of visual domains and tasks. However, such models mainly perform zero-shot recognition in a closed-set manner, and thus struggle to handle open-domain visual concepts by design. There are recent finetuning methods, such as prompt learning, that ... | [] | [] | Overcoming the Pitfalls of Vision-Language Model Finetuning for OOD Generalization | [
"Yuhang Zang",
"Hanlin Goh",
"Joshua M. Susskind",
"Chen Huang"
] | 2401.15914 | 18,711 | https://openreview.net/forum?id=PKICZXVY9M | |
[] | Poster | [] | Supervised learning datasets may contain multiple cues that explain the training set equally well, i.e., learning any of them would lead to the correct predictions on the training data. However, many of them can be spurious, i.e., lose their predictive power under a distribution shift and consequently fail to generaliz... | [] | [] | Unraveling the Key Components of OOD Generalization via Diversification | [
"Harold Luc Benoit",
"Liangze Jiang",
"Andrei Atanov",
"Oguzhan Fatih Kar",
"Mattia Rigotti",
"Amir Zamir"
] | 2312.16313 | 18,844 | https://openreview.net/forum?id=Lvf7GnaLru | |
[] | Poster | [] | Large Vision-Language Models (LVLMs) can understand the world comprehensively by integrating rich information from different modalities, achieving remarkable performance improvements on various multimodal downstream tasks. However, deploying LVLMs is often problematic due to their massive computational/energy costs and... | [] | [] | ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models | [
"Yi-Lin Sung",
"Jaehong Yoon",
"Mohit Bansal"
] | 2310.02998 | 18,067 | https://openreview.net/forum?id=iIT02bAKzv | |
[] | Spotlight Poster | [] | Sparsely activated Mixture-of-Experts (SMoE) has shown promise in scaling up the learning capacity of neural networks; however, it has issues such as: ($a$) $\textit{High Memory Usage,}$ due to duplication of the network layers into multiple copies as experts; and ($b$) $\textit{Redundancy in Experts,}$ as common learnin... | [] | [] | Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy | [
"Pingzhi Li",
"Zhenyu Zhang",
"Prateek Yadav",
"Yi-Lin Sung",
"Yu Cheng",
"Mohit Bansal",
"Tianlong Chen"
] | 2310.01334 | 18,228 | https://openreview.net/forum?id=eFWG9Cy3WK | |
[] | Poster | [] | Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and dataset distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks. Behind this success are rigorously designed, yet expensive, strateg... | [] | [] | Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning | [
"Patrik Okanovic",
"Roger Waleffe",
"Vasilis Mageirakos",
"Konstantinos Nikolakakis",
"Amin Karbasi",
"Dionysios Kalogerias",
"Nezihe Merve Gürel",
"Theodoros Rekatsinas"
] | 2305.18424 | 18,927 | https://openreview.net/forum?id=JnRStoIuTe | |
[] | Poster | [] | Biological cortical neurons are remarkably sophisticated computational devices, temporally integrating their vast synaptic input over an intricate dendritic tree, subject to complex, nonlinearly interacting internal biological processes. A recent study proposed to characterize this complexity by fitting accurate surrogate... | [] | [] | The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks. | [
"Aaron Spieler",
"Nasim Rahaman",
"Georg Martius",
"Bernhard Schölkopf",
"Anna Levina"
] | 2306.16922 | 17,545 | https://openreview.net/forum?id=vE1e1mLJ0U | |
[] | Poster | [] | Recurrent neural networks (RNNs) in the brain and in silico excel at solving tasks with intricate temporal dependencies. Long timescales required for solving such tasks can arise from properties of individual neurons (single-neuron timescale, $\tau$, e.g., membrane time constant in biological neurons) or recurrent inte... | [] | [] | Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks | [
"Sina Khajehabdollahi",
"Roxana Zeraati",
"Emmanouil Giannakakis",
"Tim Jakob Schäfer",
"Georg Martius",
"Anna Levina"
] | 2309.12927 | 17,434 | https://openreview.net/forum?id=xwKt6bUkXj | |
[] | Spotlight Poster | [] | The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded "prompt templates", i.e. lengthy strings discovered via trial and error. Toward a more syste... | [] | [] | DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines | [
"Omar Khattab",
"Arnav Singhvi",
"Paridhi Maheshwari",
"Zhiyuan Zhang",
"Keshav Santhanam",
"Sri Vardhamanan A",
"Saiful Haq",
"Ashutosh Sharma",
"Thomas T. Joshi",
"Hanna Moazam",
"Heather Miller",
"Matei Zaharia",
"Christopher Potts"
] | 2310.03714 | 17,642 | https://openreview.net/forum?id=sY5N0zY5Od | |
[] | Poster | [] | Decision-makers are often experts of their domain and take actions based on their domain knowledge. Doctors, for instance, may prescribe treatments by predicting the likely outcome of each available treatment. Actions of an expert thus naturally encode part of their domain knowledge, and can help make inferences within... | [] | [] | Defining Expertise: Applications to Treatment Effect Estimation | [
"Alihan Hüyük",
"Qiyao Wei",
"Alicia Curth",
"Mihaela van der Schaar"
] | 2403.00694 | 19,582 | https://openreview.net/forum?id=1YPfmglNRU | |
[] | Poster | [] | Large deep learning models have achieved impressive performance across a range of applications. However, their large memory requirements, including parameter memory and activation memory, have become a significant challenge for their practical serving. While existing methods mainly address parameter memory, the importa... | [] | [] | AutoChunk: Automated Activation Chunk for Memory-Efficient Deep Learning Inference | [
"Xuanlei Zhao",
"Shenggan Cheng",
"Guangyang LU",
"Haotian Zhou",
"Bin Jia",
"Yang You"
] | 19,032 | https://openreview.net/forum?id=GQGNLEHmdl | ||
[] | Poster | [] | We study infinite-horizon average-reward Markov decision processes (AMDPs) in the context of general function approximation. Specifically, we propose a novel algorithmic framework named Fixed-Point Local Optimization (FLOP), which incorporates both model-based and value-based incarnations. In particular, FLOP features ... | [] | [] | Sample-efficient Learning of Infinite-horizon Average-reward MDPs with General Function Approximation | [
"Jianliang He",
"Han Zhong",
"Zhuoran Yang"
] | 2404.12648 | 18,170 | https://openreview.net/forum?id=fq1wNrC2ai | |
[] | Spotlight Poster | [] | Offline reinforcement learning (RL) presents a promising approach for learning reinforced policies from offline datasets without the need for costly or unsafe interactions with the environment. However, datasets collected by humans in real-world environments are often noisy and may even be maliciously corrupted, which ... | [] | [] | Towards Robust Offline Reinforcement Learning under Diverse Data Corruption | [
"Rui Yang",
"Han Zhong",
"Jiawei Xu",
"Amy Zhang",
"Chongjie Zhang",
"Lei Han",
"Tong Zhang"
] | 2310.12955 | 19,419 | https://openreview.net/forum?id=5hAMmCU0bK | |
[] | Poster | [] | Extreme Classification (XC) architectures, which utilize a massive one-vs-all classifier layer at the output, have demonstrated remarkable performance on problems with large label sets. Nonetheless, these have also been observed to falter on tail labels with few representative samples. This phenomenon has been attribut... | [] | [] | Enhancing Tail Performance in Extreme Classifiers by Label Variance Reduction | [
"Anirudh Buvanesh",
"Rahul Chand",
"Jatin Prakash",
"Bhawna Paliwal",
"Mudit Dhawan",
"Neelabh Madan",
"Deepesh Hada",
"Vidit Jain",
"SONU MEHTA",
"Yashoteja Prabhu",
"Manish Gupta",
"Ramachandran Ramjee",
"Manik Varma"
] | 19,401 | https://openreview.net/forum?id=6ARlSgun7J | ||
[] | Oral | [] | Existing video-language studies mainly focus on learning short video clips, leaving long-term temporal dependencies rarely explored due to the prohibitive computational cost of modeling long videos. To address this issue, one feasible solution is learning the correspondence between video clips and captions, which however ine... | [] | [] | Multi-granularity Correspondence Learning from Long-term Noisy Videos | [
"Yijie Lin",
"Jie Zhang",
"Zhenyu Huang",
"Jia Liu",
"zujie wen",
"Xi Peng"
] | 2401.16702 | 19,786 | https://openreview.net/forum?id=9Cu8MRmhq2 | |
[] | Oral | [] | In this work, we define a diffusion-based generative model capable of both music generation and source separation by learning the score of the joint probability density of sources sharing a context. Alongside the classic total inference tasks (i.e., generating a mixture, separating the sources), we also introduce and e... | [] | [] | Multi-Source Diffusion Models for Simultaneous Music Generation and Separation | [
"Giorgio Mariani",
"Irene Tallini",
"Emilian Postolache",
"Michele Mancusi",
"Luca Cosmo",
"Emanuele Rodolà"
] | 2302.02257 | 19,737 | https://openreview.net/forum?id=h922Qhkmx1 | |
[] | Spotlight Poster | [] | Learning a precise dynamics model can be crucial for offline reinforcement learning, which, unfortunately, has been found to be quite challenging. Dynamics models that are learned by fitting historical transitions often struggle to generalize to unseen transitions. In this study, we identify a hidden but pivotal factor... | [] | [] | Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning | [
"Fan-Ming Luo",
"Tian Xu",
"Xingchen Cao",
"Yang Yu"
] | 2310.05422 | 19,031 | https://openreview.net/forum?id=GSBHKiw19c | |
[
"warp-ai/wuerstchen"
] | Oral | [] | We introduce Würstchen, a novel architecture for text-to-image synthesis that combines competitive performance with unprecedented cost-effectiveness for large-scale text-to-image diffusion models. A key contribution of our work is to develop a latent diffusion technique in which we learn a detailed but extremely compact... | [
"warp-ai/Wuerstchen"
] | [] | Würstchen: An Efficient Architecture for Large-Scale Text-to-Image Diffusion Models | [
"Pablo Pernias",
"Dominic Rampas",
"Mats Leon Richter",
"Christopher Pal",
"Marc Aubreville"
] | 2306.00637 | 19,738 | https://openreview.net/forum?id=gU58d5QeGv | |
[] | Poster | [] | Diffusion models have emerged as a key pillar of foundation models in visual domains. One of their critical applications is to universally solve different downstream inverse tasks via a single diffusion prior without re-training for each task. Most inverse tasks can be formulated as inferring a posterior distribution o... | [] | [] | A Variational Perspective on Solving Inverse Problems with Diffusion Models | [
"Morteza Mardani",
"Jiaming Song",
"Jan Kautz",
"Arash Vahdat"
] | 2305.04391 | 19,583 | https://openreview.net/forum?id=1YO4EE3SPB | |
[] | Poster | [] | Meta-reinforcement learning (meta-RL) is a promising framework for tackling challenging domains requiring efficient exploration. Existing meta-RL algorithms are characterized by low sample efficiency, and mostly focus on low-dimensional task distributions. In parallel, model-based RL methods have been successful in sol... | [] | [] | MAMBA: an Effective World Model Approach for Meta-Reinforcement Learning | [
"Zohar Rimon",
"Tom Jurgenson",
"Orr Krupnik",
"Gilad Adler",
"Aviv Tamar"
] | 2403.09859 | 19,589 | https://openreview.net/forum?id=1RE0H6mU7M | |
[] | Spotlight Poster | [] | We reveal and address the frequently overlooked yet important issue of _disguised procedural unfairness_, namely, the potentially inadvertent alterations on the behavior of neutral (i.e., not problematic) aspects of data generating process, and/or the lack of procedural assurance of the greatest benefit of the least ad... | [] | [] | Procedural Fairness Through Decoupling Objectionable Data Generating Components | [
"Zeyu Tang",
"Jialu Wang",
"Yang Liu",
"Peter Spirtes",
"Kun Zhang"
] | 2311.14688 | 18,279 | https://openreview.net/forum?id=cxfPefbu1s | |
[] | Poster | [] | In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps. Important challenges in OCL are concerned with automatic adaptation to the particular non-stationary structure of the data, and with quantification of predictive uncertainty. Motivated... | [] | [] | Kalman Filter for Online Classification of Non-Stationary Data | [
"Michalis Titsias",
"Alexandre Galashov",
"Amal Rannen-Triki",
"Razvan Pascanu",
"Yee Whye Teh",
"Jorg Bornschein"
] | 2306.08448 | 18,380 | https://openreview.net/forum?id=ZzmKEpze8e | |
[] | Poster | [] | Recent studies have shown that code language models at scale demonstrate significant performance gains on downstream tasks, i.e., code generation. However, most of the existing works on code representation learning train models at a hundred million parameter scale using very limited pretraining corpora. In this work, we... | [] | [] | CODE REPRESENTATION LEARNING AT SCALE | [
"Dejiao Zhang",
"Wasi Uddin Ahmad",
"Ming Tan",
"Hantian Ding",
"Ramesh Nallapati",
"Dan Roth",
"Xiaofei Ma",
"Bing Xiang"
] | 2402.01935 | 17,524 | https://openreview.net/forum?id=vfzRRjumpX | |
[] | Poster | [] | Combining offline and online reinforcement learning (RL) is crucial for efficient and safe learning. However, previous approaches treat offline and online learning as separate procedures, resulting in redundant designs and limited performance. We ask: *Can we achieve straightforward yet effective offline and online lea... | [] | [] | Uni-O4: Unifying Online and Offline Deep Reinforcement Learning with Multi-Step On-Policy Optimization | [
"Kun LEI",
"Zhengmao He",
"Chenhao Lu",
"Kaizhe Hu",
"Yang Gao",
"Huazhe Xu"
] | 17,610 | https://openreview.net/forum?id=tbFBh3LMKi | ||
[] | Spotlight Poster | [] | Embodied AI models often employ off-the-shelf vision backbones like CLIP to encode their visual observations. Although such general-purpose representations encode rich syntactic and semantic information about the scene, much of this information is often irrelevant to the specific task at hand. This introduces noise wit... | [] | [] | Selective Visual Representations Improve Convergence and Generalization for Embodied AI | [
"Ainaz Eftekhar",
"Kuo-Hao Zeng",
"Jiafei Duan",
"Ali Farhadi",
"Aniruddha Kembhavi",
"Ranjay Krishna"
] | 2311.04193 | 17,987 | https://openreview.net/forum?id=kC5nZDU5zf | |
[] | Oral | [] | Video editing and generation methods often rely on pre-trained image-based diffusion models. During the diffusion process, however, the reliance on rudimentary noise sampling techniques that do not preserve correlations present in subsequent frames of a video is detrimental to the quality of the results. This either pr... | [] | [] | How I Warped Your Noise: a Temporally-Correlated Noise Prior for Diffusion Models | [
"Pascal Chang",
"Jingwei Tang",
"Markus Gross",
"Vinicius C. Azevedo"
] | 19,723 | https://openreview.net/forum?id=pzElnMrgSD | ||
[] | Poster | [] | Diffusion models have recently been shown to be relevant for high-quality speech generation. Most work has been focused on generating spectrograms, and as such, they further require a subsequent model to convert the spectrogram to a waveform (i.e., a vocoder). This work proposes a diffusion probabilistic end-to-end mod... | [] | [] | DiffAR: Denoising Diffusion Autoregressive Model for Raw Speech Waveform Generation | [
"Roi Benita",
"Michael Elad",
"Joseph Keshet"
] | 2310.01381 | 19,029 | https://openreview.net/forum?id=GTk0AdOYLq | |
[] | Poster | [] | Recent studies in using deep reinforcement learning (DRL) to solve Job-shop scheduling problems (JSSP) focus on construction heuristics. However, their performance is still far from optimality, mainly because the underlying graph representation scheme is unsuitable for modelling partial solutions at each construction s... | [] | [] | Deep Reinforcement Learning Guided Improvement Heuristic for Job Shop Scheduling | [
"Cong Zhang",
"Zhiguang Cao",
"Wen Song",
"Yaoxin Wu",
"Jie Zhang"
] | 2211.10936 | 18,004 | https://openreview.net/forum?id=jsWCmrsHHs | |
[] | Poster | [] | Most neural networks for classification primarily learn features differentiated by input-domain related information such as visual similarity of objects in an image. While this focus is natural behavior, it can inadvertently introduce an inductive bias that conflicts with unseen relations in an implicit output-domain d... | [] | [] | Label-Focused Inductive Bias over Latent Object Features in Visual Classification | [
"Ilmin Kang",
"HyounYoung Bae",
"Kangil Kim"
] | 18,303 | https://openreview.net/forum?id=cH3oufN8Pl | ||
[] | Spotlight Poster | [] | Recent progress in text-to-3D generation has been achieved through the utilization of score distillation methods: they make use of the pre-trained text-to-image (T2I) diffusion models by distilling via the diffusion model training objective. However, such an approach inevitably results in the use of random timesteps at... | [] | [] | DreamFlow: High-quality text-to-3D generation by Approximating Probability Flow | [
"Kyungmin Lee",
"Kihyuk Sohn",
"Jinwoo Shin"
] | 2403.14966 | 19,028 | https://openreview.net/forum?id=GURqUuTebY | |
[] | Spotlight Poster | [] | Given that Transformers are ubiquitous in a wide range of tasks, interpreting their internals is a pivotal issue. Still, their particular components, feed-forward (FF) blocks, have typically been less analyzed despite their substantial parameter amounts. We analyze the input contextualization effects of FF blocks by rendering them... | [] | [] | Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Map | [
"Goro Kobayashi",
"Tatsuki Kuribayashi",
"Sho Yokoi",
"Kentaro Inui"
] | 2302.00456 | 17,891 | https://openreview.net/forum?id=mYWsyTuiRp | |
[] | Poster | [] | Information-theoretic generalization analysis has achieved astonishing success in characterizing the generalization capabilities of noisy and iterative learning algorithms. However, current advancements are mostly restricted to average-case scenarios and necessitate the stringent bounded loss assumption, leaving a gap ... | [] | [] | Rethinking Information-theoretic Generalization: Loss Entropy Induced PAC Bounds | [
"Yuxin Dong",
"Tieliang Gong",
"Hong Chen",
"Shujian Yu",
"Chen Li"
] | 19,026 | https://openreview.net/forum?id=GWSIo2MzuH | ||
[] | Oral | [] | Low-Rank Adaptation (LoRA) has recently gained attention for fine-tuning foundation models by incorporating trainable low-rank matrices, thereby reducing the number of trainable parameters. While LoRA offers numerous advantages, its applicability for real-time serving to a diverse and global user base is constrained ... | [] | [] | Batched Low-Rank Adaptation of Foundation Models | [
"Yeming Wen",
"Swarat Chaudhuri"
] | 2312.05677 | 19,716 | https://openreview.net/forum?id=w4abltTZ2f | |
[] | Poster | [] | Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning, and real-environment exploration using the current policy for dynamics model learning. However, due to the complex real-world environment, it is inevitable to learn an imperfect dynamics model with model... | [] | [] | COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL | [
"Xiyao Wang",
"Ruijie Zheng",
"Yanchao Sun",
"Ruonan Jia",
"Wichayaporn Wongkamjan",
"Huazhe Xu",
"Furong Huang"
] | 2310.07220 | 18,007 | https://openreview.net/forum?id=jnFcKjtUPN | |
[] | Spotlight Poster | [] | Visual reinforcement learning (RL) has shown promise in continuous control tasks. Despite its progress, current algorithms are still unsatisfactory in virtually every aspect of performance, such as sample efficiency, asymptotic performance, and robustness to the choice of random seeds. In this paper, we identify... | [] | [] | DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization | [
"Guowei Xu",
"Ruijie Zheng",
"Yongyuan Liang",
"Xiyao Wang",
"Zhecheng Yuan",
"Tianying Ji",
"Yu Luo",
"Xiaoyu Liu",
"Jiaxin Yuan",
"Pu Hua",
"Shuzhen Li",
"Yanjie Ze",
"Hal Daumé III",
"Furong Huang",
"Huazhe Xu"
] | 2310.19668 | 18,821 | https://openreview.net/forum?id=MSe8YFbhUE | |
[] | Poster | [] | Shapelets and CNN are two typical approaches to model time series. Shapelets aim at finding a set of sub-sequences that extract feature-based interpretable shapes, but may suffer from accuracy and efficiency issues. CNN performs well by encoding sequences with a series of hidden representations, but lacks interpretabil... | [] | [] | CNN Kernels Can Be the Best Shapelets | [
"Eric Qu",
"Yansen Wang",
"Xufang Luo",
"Wenqiang He",
"Kan Ren",
"Dongsheng Li"
] | 18,756 | https://openreview.net/forum?id=O8ouVV8PjF | ||
[] | Spotlight Poster | [] | Optimizing for humans' latent preferences remains a grand challenge in route recommendation. Prior research has provided increasingly general methods based on inverse reinforcement learning (IRL), yet no approach has successfully addressed planetary-scale routing problems with hundreds of millions of states and demonst... | [] | [] | Massively Scalable Inverse Reinforcement Learning in Google Maps | [
"Matt Barnes",
"Matthew Abueg",
"Oliver F. Lange",
"Matt Deeds",
"Jason Trader",
"Denali Molitor",
"Markus Wulfmeier",
"Shawn O'Banion"
] | 2305.11290 | 17,395 | https://openreview.net/forum?id=z3L59iGALM | |
[] | Poster | [] | Learning neural subset selection tasks, such as compound selection in AI-aided drug discovery, has become increasingly pivotal across diverse applications. The existing methodologies in the field primarily concentrate on constructing models that capture the relationship between utility function values and subsets with... | [] | [] | Enhancing Neural Subset Selection: Integrating Background Information into Set Representations | [
"Binghui Xie",
"Yatao Bian",
"Kaiwen Zhou",
"Yongqiang Chen",
"Peilin Zhao",
"Bo Han",
"Wei Meng",
"James Cheng"
] | 2402.03139 | 18,216 | https://openreview.net/forum?id=eepoE7iLpL | |
[] | Poster | [
"https://github.com/wudongming97/TopoMLP"
] | Topology reasoning aims to comprehensively understand road scenes and present drivable routes in autonomous driving. It requires detecting road centerlines (lane) and traffic elements, further reasoning their topology relationship, \textit{i.e.}, lane-lane topology, and lane-traffic topology. In this work, we first pre... | [] | [] | TopoMLP: A Simple yet Strong Pipeline for Driving Topology Reasoning | [
"Dongming Wu",
"Jiahao Chang",
"Fan Jia",
"Yingfei Liu",
"Tiancai Wang",
"Jianbing Shen"
] | 2310.06753 | 19,610 | https://openreview.net/forum?id=0gTW5JUFTW | |
[] | Poster | [] | Hyperbolic space has proven to be well-suited for capturing hierarchical relations in data, such as trees and directed acyclic graphs. Prior work introduced the concept of entailment cones, which uses partial orders defined by nested cones in the Poincaré ball to model hierarchies. Here, we introduce the "shadow con... | [] | [] | Shadow Cones: A Generalized Framework for Partial Order Embeddings | [
"Tao Yu",
"Toni J.B. Liu",
"Albert Tseng",
"Christopher De Sa"
] | 2305.15215 | 17,377 | https://openreview.net/forum?id=zbKcFZ6Dbp | |
[] | Poster | [] | The recent wave of generative AI has sparked unprecedented global attention, with both excitement and concern over potentially superhuman levels of artificial intelligence: models now take only seconds to produce outputs that would challenge or exceed the capabilities even of expert humans. At the same time, models sti... | [] | [] | The Generative AI Paradox: "What It Can Create, It May Not Understand" | [
"Peter West",
"Ximing Lu",
"Nouha Dziri",
"Faeze Brahman",
"Linjie Li",
"Jena D. Hwang",
"Liwei Jiang",
"Jillian Fisher",
"Abhilasha Ravichander",
"Khyathi Chandu",
"Benjamin Newman",
"Pang Wei Koh",
"Allyson Ettinger",
"Yejin Choi"
] | 2311.00059 | 19,183 | https://openreview.net/forum?id=CF8H8MS5P8 | |
[] | Poster | [
"https://github.com/ProjectNUWA/LayoutNUWA"
] | Graphic layout generation, a growing research field, plays a significant role in user engagement and information perception. Existing methods primarily treat layout generation as a numerical optimization task, focusing on quantitative aspects while overlooking the semantic information of layout, such as the relationshi... | [] | [] | LayoutNUWA: Revealing the Hidden Layout Expertise of Large Language Models | [
"Zecheng Tang",
"Chenfei Wu",
"Juntao Li",
"Nan Duan"
] | 2309.09506 | 17,744 | https://openreview.net/forum?id=qCUWVT0Ayy | |
[] | Poster | [] | Language modeling at scale has proven very effective and brought unprecedented success to natural language models. Many typical representatives, especially decoder-only models, e.g., BLOOM and LLaMA, and encoder-decoder models, e.g., Flan-T5 and AlexaTM, have exhibited incredible instruction-following capabilities whil... | [] | [] | Are Bert Family Good Instruction Followers? A Study on Their Potential And Limitations | [
"yisheng xiao",
"Juntao Li",
"Zechen Sun",
"Zechang Li",
"Qingrong Xia",
"Xinyu Duan",
"Zhefeng Wang",
"Min Zhang"
] | 17,466 | https://openreview.net/forum?id=x8VNtpCu1I | ||
[] | Poster | [] | Matching cross-modality features between images and point clouds is a fundamental problem for image-to-point cloud registration. However, due to the modality difference between images and points, it is difficult to learn robust and discriminative cross-modality features by existing metric learning methods for feature m... | [] | [] | FreeReg: Image-to-Point Cloud Registration Leveraging Pretrained Diffusion Models and Monocular Depth Estimators | [
"Haiping Wang",
"Yuan Liu",
"Bing WANG",
"YUJING SUN",
"Zhen Dong",
"Wenping Wang",
"Bisheng Yang"
] | 2310.03420 | 19,217 | https://openreview.net/forum?id=BPb5AhT2Vf | |
[] | Poster | [] | Since real-world machine systems run in non-stationary environments, the Continual Test-Time Adaptation (CTTA) task has been proposed to adapt the pre-trained model to continually changing target domains. Recently, existing methods mainly focus on model-based adaptation, which aims to leverage a self-training manner to e... | [] | [] | ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation | [
"Jiaming Liu",
"Senqiao Yang",
"Peidong Jia",
"Renrui Zhang",
"Ming Lu",
"Yandong Guo",
"Wei Xue",
"Shanghang Zhang"
] | 2306.04344 | 17,657 | https://openreview.net/forum?id=sJ88Wg5Bp5 | |
[] | Poster | [
"https://github.com/yuyudeep/hcmt"
] | Recently, many mesh-based graph neural network (GNN) models have been proposed for modeling complex high-dimensional physical systems. Remarkable achievements have been made in significantly reducing the solving time compared to traditional numerical solvers. These methods are typically designed to i) reduce the comput... | [] | [] | Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer | [
"Youn-Yeol Yu",
"Jeongwhan Choi",
"Woojin Cho",
"Kookjin Lee",
"Nayong Kim",
"Kiseok Chang",
"ChangSeung Woo",
"ILHO KIM",
"SeokWoo Lee",
"Joon Young Yang",
"SOOYOUNG YOON",
"Noseong Park"
] | 2312.12467 | 19,309 | https://openreview.net/forum?id=90yw2uM6J5 | |
[] | Spotlight Poster | [] | In reinforcement learning (RL), rewards of states are typically considered additive, and following the Markov assumption, they are independent of states visited previously. In many important applications, such as coverage control, experiment design and informative path planning, rewards naturally have diminishing retur... | [] | [] | Submodular Reinforcement Learning | [
"Manish Prajapat",
"Mojmir Mutny",
"Melanie Zeilinger",
"Andreas Krause"
] | 2307.13372 | 17,918 | https://openreview.net/forum?id=loYSzjSaAK | |
[] | Oral | [] | Privacy estimation techniques for differentially private (DP) algorithms are useful for comparing against analytical bounds, or to empirically measure privacy loss in settings where known analytical bounds are not tight. However, existing privacy auditing techniques usually make strong assumptions on the adversary (e.g.... | [] | [] | One-shot Empirical Privacy Estimation for Federated Learning | [
"Galen Andrew",
"Peter Kairouz",
"Sewoong Oh",
"Alina Oprea",
"Hugh Brendan McMahan",
"Vinith Menon Suriyakumar"
] | 2302.03098 | 19,797 | https://openreview.net/forum?id=0BqyZSWfzo | |
[] | Poster | [
"https://github.com/KuofengGao/Verbose_Images"
] | Large vision-language models (VLMs) such as GPT-4 have achieved exceptional performance across various multi-modal tasks. However, the deployment of VLMs necessitates substantial energy consumption and computational resources. Once attackers maliciously induce high energy consumption and latency time (energy-latency co... | [] | [] | Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images | [
"Kuofeng Gao",
"Yang Bai",
"Jindong Gu",
"Shu-Tao Xia",
"Philip Torr",
"Zhifeng Li",
"Wei Liu"
] | 2401.11170 | 19,194 | https://openreview.net/forum?id=BteuUysuXX | |
[] | Poster | [] | The interaction between users and recommender systems is not only affected by selection bias but also by the neighborhood effect, i.e., the interaction between a user and an item is affected by the interactions between other users and other items, or between the same user and other items, or between other users and the sa... | [] | [] | Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference for Recommendation | [
"Haoxuan Li",
"Chunyuan Zheng",
"Sihao Ding",
"Peng Wu",
"Zhi Geng",
"Fuli Feng",
"Xiangnan He"
] | 19,441 | https://openreview.net/forum?id=52fz5sUAy2 | ||
[] | Spotlight Poster | [] | Collaborative filtering builds personalized models from the collected user feedback. However, the collected data is observational rather than experimental, leading to various biases in the data, which can significantly affect the learned model. To address this issue, many studies have focused on propensity-based method... | [] | [] | Debiased Collaborative Filtering with Kernel-based Causal Balancing | [
"Haoxuan Li",
"Yanghao Xiao",
"Chunyuan Zheng",
"Peng Wu",
"Zhi Geng",
"Xu Chen",
"Peng Cui"
] | 2404.19596 | 19,055 | https://openreview.net/forum?id=Ffjc8ApSbt | |
[] | Poster | [] | Out-of-distribution (OOD) problems in few-shot classification (FSC) occur when novel classes sampled from testing distributions differ from base classes drawn from training distributions, which considerably degrades the performance of deep learning models deployed in real-world applications. Recent studies suggest that... | [] | [] | MetaCoCo: A New Few-Shot Classification Benchmark with Spurious Correlation | [
"Min Zhang",
"Haoxuan Li",
"Fei Wu",
"Kun Kuang"
] | 2404.19644 | 19,133 | https://openreview.net/forum?id=DiWRG9JTWZ | |
[] | Poster | [] | In recent years, advances in the large-scale pretraining of language and text-to-image models have revolutionized the field of machine learning. Yet, integrating these two modalities into a single, robust model capable of generating seamless multimodal outputs remains a significant challenge. To address this gap, we pr... | [] | [] | Jointly Training Large Autoregressive Multimodal Models | [
"Emanuele Aiello",
"LILI YU",
"Yixin Nie",
"Armen Aghajanyan",
"Barlas Oguz"
] | 2309.15564 | 19,416 | https://openreview.net/forum?id=5jcav5RcKw | |
[] | Poster | [] | A central objective in neuroscience is to understand how the brain orchestrates movement. Recent advances in automated tracking technologies have made it possible to document behavior with unprecedented temporal resolution and scale, generating rich datasets which can be exploited to gain insights into the neural contr... | [] | [] | Learning interpretable control inputs and dynamics underlying animal locomotion | [
"Thomas Soares Mullen",
"Marine Schimel",
"Guillaume Hennequin",
"Christian K. Machens",
"Michael Orger",
"Adrien Jouary"
] | 18,835 | https://openreview.net/forum?id=MFCjgEOLJT | ||
[] | Poster | [] | Time-series causal discovery (TSCD) is a fundamental problem of machine learning. However, existing synthetic datasets cannot properly evaluate or predict the algorithms' performance on real data. This study introduces the CausalTime pipeline to generate time-series that highly resemble the real data and with ground t... | [] | [] | CausalTime: Realistically Generated Time-series for Benchmarking of Causal Discovery | [
"Yuxiao Cheng",
"Ziqian Wang",
"Tingxiong Xiao",
"Qin Zhong",
"Jinli Suo",
"Kunlun He"
] | 2310.01753 | 18,059 | https://openreview.net/forum?id=iad1yyyGme | |
[] | Poster | [] | Bayesian optimization is a highly efficient approach to optimizing objective functions which are expensive to query. These objectives are typically represented by Gaussian process (GP) surrogate models which are easy to optimize and support exact inference. While standard GP surrogates have been well-established in Bay... | [] | [] | A Study of Bayesian Neural Network Surrogates for Bayesian Optimization | [
"Yucen Lily Li",
"Tim G. J. Rudner",
"Andrew Gordon Wilson"
] | 2305.20028 | 18,615 | https://openreview.net/forum?id=SA19ijj44B | |
[] | Spotlight Poster | [] | We introduce SocioDojo, an open-ended lifelong learning environment for developing ready-to-deploy autonomous agents capable of performing human-like analysis and decision-making on societal topics such as economics, finance, politics, and culture. It consists of (1) information sources from news, social media, reports... | [] | [] | SocioDojo: Building Lifelong Analytical Agents with Real-world Text and Time Series | [
"Junyan Cheng",
"Peter Chin"
] | 17,662 | https://openreview.net/forum?id=s9z0HzWJJp | ||
[] | Poster | [] | This paper introduces a novel Transitional Dictionary Learning (TDL) framework that can implicitly learn symbolic knowledge, such as visual parts and relations, by reconstructing the input as a combination of parts with implicit relations. We propose a game-theoretic diffusion model to decompose the input into visual p... | [] | [] | Bridging Neural and Symbolic Representations with Transitional Dictionary Learning | [
"Junyan Cheng",
"Peter Chin"
] | 17,562 | https://openreview.net/forum?id=uqxBTcWRnj | ||
[] | Oral | [] | Autoregressive large language models (LLMs) compress knowledge from their training data through next-token conditional distributions. This limits tractable querying of this knowledge to start-to-end autoregressive sampling. However, many tasks of interest---including sequence continuation, infilling, and other forms of... | [] | [] | Amortizing intractable inference in large language models | [
"Edward J Hu",
"Moksh Jain",
"Eric Elmoznino",
"Younesse Kaddar",
"Guillaume Lajoie",
"Yoshua Bengio",
"Nikolay Malkin"
] | 2310.04363 | 19,763 | https://openreview.net/forum?id=Ouj6p4ca60 | |
[] | Poster | [] | Some reinforcement learning (RL) algorithms have the capability of recombining together pieces of previously seen experience to solve a task never seen before during training. This oft-sought property is one of the few ways in which dynamic programming based RL algorithms are considered different from supervised learni... | [] | [] | Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View. | [
"Raj Ghugare",
"Matthieu Geist",
"Glen Berseth",
"Benjamin Eysenbach"
] | 2401.11237 | 17,723 | https://openreview.net/forum?id=qg5JENs0N4 | |
[] | Oral | [] | Gene regulatory network inference (GRNI) is a challenging problem, particularly owing to the presence of zeros in single-cell RNA sequencing data: some are biological zeros representing no gene expression, while some others are technical zeros arising from the sequencing procedure (aka dropouts), which may bias GRNI by... | [] | [] | Gene Regulatory Network Inference in the Presence of Dropouts: a Causal View | [
"Haoyue Dai",
"Ignavier Ng",
"Gongxu Luo",
"Peter Spirtes",
"Petar Stojanov",
"Kun Zhang"
] | 2403.15500 | 19,739 | https://openreview.net/forum?id=gFR4QwK53h | |
[] | Poster | [] | Supervised Fine-Tuning (SFT) on human demonstrations combined with Reinforcement Learning from Human Feedback (RLHF) constitutes a powerful alignment paradigm for Large Language Model (LLM) AI-assistant agents. However, a significant limitation of this approach is its substantial dependency on high-quality human annota... | [] | [] | SALMON: Self-Alignment with Principle-Following Reward Models | [
"Zhiqing Sun",
"Yikang Shen",
"Hongxin Zhang",
"Qinhong Zhou",
"Zhenfang Chen",
"David Daniel Cox",
"Yiming Yang",
"Chuang Gan"
] | 2310.05910 | 17,454 | https://openreview.net/forum?id=xJbsmB8UMx | |
[] | Poster | [] | Preventing the performance decay of Transformers on inputs longer than those used for training has been an important challenge in extending the context length of these models. Though the Transformer architecture has fundamentally no limits on the input sequence lengths it can process, the choice of position encoding us... | [] | [] | Functional Interpolation for Relative Positions improves Long Context Transformers | [
"Shanda Li",
"Chong You",
"Guru Guruganesh",
"Joshua Ainslie",
"Santiago Ontanon",
"Manzil Zaheer",
"Sumit Sanghai",
"Yiming Yang",
"Sanjiv Kumar",
"Srinadh Bhojanapalli"
] | 2310.04418 | 17,693 | https://openreview.net/forum?id=rR03qFesqk | |
[] | Poster | [] | Reinforcement learning (RL) over text representations can be effective for finding high-value policies that can search over graphs. However, RL requires careful structuring of the search space and algorithm design to be effective in this challenge. Through extensive experiments, we explore how different design choices ... | [] | [] | Searching for High-Value Molecules Using Reinforcement Learning and Transformers | [
"Raj Ghugare",
"Santiago Miret",
"Adriana Hugessen",
"Mariano Phielipp",
"Glen Berseth"
] | 2310.02902 | 17,841 | https://openreview.net/forum?id=nqlymMx42E | |
[] | Spotlight Poster | [] | With the waning of Moore's law, optimizing program performance has become a major focus of software research. However, high-level optimizations such as API and algorithm changes remain elusive due to the difficulty of understanding the semantics of code. Simultaneously, pretrained large language models (LLMs) have demon... | [] | [] | Learning Performance-Improving Code Edits | [
"Alexander G Shypula",
"Aman Madaan",
"Yimeng Zeng",
"Uri Alon",
"Jacob R. Gardner",
"Yiming Yang",
"Milad Hashemi",
"Graham Neubig",
"Parthasarathy Ranganathan",
"Osbert Bastani",
"Amir Yazdanbakhsh"
] | 2302.07867 | 18,045 | https://openreview.net/forum?id=ix7rLVHXyY | |
[] | Oral | [] | Direct image alignment is a widely used technique for relative 6DoF pose estimation between two images, but its accuracy strongly depends on pose initialization. Therefore, recent end-to-end frameworks focused on training objectives, such as the Gauss-Newton loss, which increase the convergence basin of the learned feat... | [] | [] | An Analytical Solution to Gauss-Newton Loss for Direct Image Alignment | [
"Sergei Solonets",
"Daniil Sinitsyn",
"Lukas Von Stumberg",
"Nikita Araslanov",
"Daniel Cremers"
] | 19,730 | https://openreview.net/forum?id=mE52zURNGc | ||
[] | Oral | [] | We study a family of distributed stochastic optimization algorithms where gradients are sampled by a token traversing a network of agents in random-walk fashion. Typically, these random-walks are chosen to be Markov chains that asymptotically sample from a desired target distribution, and play a critical role in the co... | [] | [] | Accelerating Distributed Stochastic Optimization via Self-Repellent Random Walks | [
"Jie Hu",
"Vishwaraj Doshi",
"Do Young Eun"
] | 2401.09665 | 19,780 | https://openreview.net/forum?id=BV1PHbTJzd | |
[] | Oral | [] | We show how to obtain improved active learning methods in the agnostic (adversarial noise) setting by combining marginal leverage score sampling with non-independent sampling strategies that promote spatial coverage. In particular, we propose an easily implemented method based on the \emph{pivotal sampling algorithm}, ... | [] | [] | Improved Active Learning via Dependent Leverage Score Sampling | [
"Atsushi Shimizu",
"Xiaoou Cheng",
"Christopher Musco",
"Jonathan Weare"
] | 2310.04966 | 19,770 | https://openreview.net/forum?id=IYxDy2jDFL | |
[] | Poster | [] | Propositional satisfiability (SAT) is an NP-complete problem that impacts many research fields, such as planning, verification, and security. Mainstream modern SAT solvers are based on the Conflict-Driven Clause Learning (CDCL) algorithm. Recent work aimed to enhance CDCL SAT solvers using Graph Neural Networks (GNNs). How... | [] | [] | NeuroBack: Improving CDCL SAT Solving using Graph Neural Networks | [
"Wenxi Wang",
"Yang Hu",
"Mohit Tiwari",
"Sarfraz Khurshid",
"Kenneth McMillan",
"Risto Miikkulainen"
] | 2110.14053 | 17,641 | https://openreview.net/forum?id=samyfu6G93 | |
[] | Oral | [] | Seven years ago, researchers proposed a postprocessing method to equalize the error rates of a model across different demographic groups. The work launched hundreds of papers purporting to improve over the postprocessing baseline. We empirically evaluate these claims through thousands of model evaluations on several ta... | [] | [] | Unprocessing Seven Years of Algorithmic Fairness | [
"André Cruz",
"Moritz Hardt"
] | 2306.07261 | 19,731 | https://openreview.net/forum?id=jr03SfWsBS | |
[] | Poster | [] | Minimax problems are notoriously challenging to optimize. However, we present that two-timescale extragradient can be a viable solution. By utilizing dynamical systems theory, we show that it converges to points that satisfy the second-order necessary condition of local minimax points, under mild conditions. This work ... | [] | [] | Two-timescale Extragradient for Finding Local Minimax Points | [
"Jiseok Chae",
"Kyuwon Kim",
"Donghwan Kim"
] | 2305.16242 | 19,400 | https://openreview.net/forum?id=6CIGhcJYJH | |
[] | Poster | [] | The success of many RL techniques heavily relies on human-engineered dense rewards, which typically demands substantial domain expertise and extensive trial and error. In our work, we propose **DrS** (**D**ense **r**eward learning from **S**tages), a novel approach for learning *reusable* dense rewards for multi-stage ... | [] | [] | DrS: Learning Reusable Dense Rewards for Multi-Stage Tasks | [
"Tongzhou Mu",
"Minghua Liu",
"Hao Su"
] | 2404.16779 | 19,399 | https://openreview.net/forum?id=6CZ50WgfCG | |
[] | Poster | [] | We introduce a novel task within the field of human motion generation, termed dance accompaniment, which necessitates the generation of responsive movements from a dance partner, the "follower", synchronized with the lead dancer's movements and the underlying musical rhythm. Unlike existing solo or group dance generati... | [] | [] | Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment | [
"Li Siyao",
"Tianpei Gu",
"Zhitao Yang",
"Zhengyu Lin",
"Ziwei Liu",
"Henghui Ding",
"Lei Yang",
"Chen Change Loy"
] | 2403.18811 | 19,027 | https://openreview.net/forum?id=GW4j4n2cjH | |
[] | Poster | [
"https://github.com/ZrrSkywalker/Personalize-SAM"
] | Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful promptable framework, revolutionizing the segmentation field. Despite the generality, customizing SAM for specific visual concepts without man-powered prompting is under-explored, e.g., automatically segmenting your pet ... | [
"justin-zk/Personalize-SAM"
] | [] | Personalize Segment Anything Model with One Shot | [
"Renrui Zhang",
"Zhengkai Jiang",
"Ziyu Guo",
"Shilin Yan",
"Junting Pan",
"Hao Dong",
"Yu Qiao",
"Peng Gao",
"Hongsheng Li"
] | 2305.03048 | 19,398 | https://openreview.net/forum?id=6Gzkhoc6YS | |
[] | Poster | [] | Adversarial training improves the robustness of neural networks against adversarial attacks, albeit at the expense of the trade-off between standard and robust generalization. To unveil the underlying factors driving this phenomenon, we examine the layer-wise learning capabilities of neural networks during the transitio... | [] | [] | Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training | [
"Shruthi Gowda",
"Bahram Zonooz",
"Elahe Arani"
] | 2401.14948 | 19,397 | https://openreview.net/forum?id=6IjN7oxjXt | |
[] | Spotlight Poster | [] | This paper focuses on graph metric learning. First, we present a class of maximum mean discrepancy (MMD) based graph kernels, called MMD-GK. These kernels are computed by applying MMD to the node representations of two graphs with message-passing propagation. Compared to classical graph kernels such as the Weisfeiler-L... | [] | [] | MMD Graph Kernel: Effective Metric Learning for Graphs via Maximum Mean Discrepancy | [
"Yan Sun",
"Jicong Fan"
] | 19,024 | https://openreview.net/forum?id=GZ6AcZwA8r | ||
[] | Poster | [] | Federated Domain Adaptation (FDA) describes the federated learning (FL) setting where source clients and a server work collaboratively to improve the performance of a target client where limited data is available. The domain shift between the source and target domains, coupled with limited data of the target client, ma... | [] | [] | Principled Federated Domain Adaptation: Gradient Projection and Auto-Weighting | [
"Enyi Jiang",
"Yibo Jacky Zhang",
"Sanmi Koyejo"
] | 2302.05049 | 19,396 | https://openreview.net/forum?id=6J3ehSUrMU | |
[] | Oral | [] | This paper studies generative flow networks (GFlowNets) to sample objects from the Boltzmann energy distribution via a sequence of actions. In particular, we focus on improving GFlowNet with partial inference: training flow functions with the evaluation of the intermediate states or transitions. To this end, the recent... | [] | [] | Learning Energy Decompositions for Partial Inference in GFlowNets | [
"Hyosoon Jang",
"Minsu Kim",
"Sungsoo Ahn"
] | 2310.03301 | 19,762 | https://openreview.net/forum?id=P15CHILQlg | |
[] | Poster | [] | Recent advancements in Natural Language Processing (NLP) have witnessed the groundbreaking impact of pretrained models, yielding impressive outcomes across various tasks. This study seeks to extend the power of pretraining methodologies to facilitating the prediction over tables in data science, a domain traditionally ... | [] | [] | UniTabE: A Universal Pretraining Protocol for Tabular Foundation Model in Data Science | [
"Yazheng Yang",
"Yuqi Wang",
"Guang Liu",
"Ledell Wu",
"Qi Liu"
] | 2307.09249 | 19,395 | https://openreview.net/forum?id=6LLho5X6xV | |
[] | Oral | [] | Regularization-based methods have so far been among the *de facto* choices for continual learning. Recent theoretical studies have revealed that these methods all boil down to relying on the Hessian matrix approximation of model weights. However, these methods suffer from suboptimal trade-offs between knowledge transfe... | [] | [] | Meta Continual Learning Revisited: Implicitly Enhancing Online Hessian Approximation via Variance Reduction | [
"Yichen Wu",
"Long-Kai Huang",
"Renzhen Wang",
"Deyu Meng",
"Ying Wei"
] | 19,759 | https://openreview.net/forum?id=TpD2aG1h0D | ||
[] | Oral | [] | Distribution shifts over time are common in real-world machine-learning applications. This scenario is formulated as Evolving Domain Generalization (EDG), where models aim to generalize well to unseen target domains in a time-varying system by learning and leveraging the underlying evolving pattern of the distribution ... | [] | [] | Latent Trajectory Learning for Limited Timestamps under Distribution Shift over Time | [
"QIUHAO Zeng",
"Changjian Shui",
"Long-Kai Huang",
"Peng Liu",
"Xi Chen",
"Charles Ling",
"Boyu Wang"
] | 19,746 | https://openreview.net/forum?id=bTMMNT7IdW | ||
[] | Spotlight Poster | [] | Integrating information while recognizing dependence from multiple data sources and enhancing the predictive performance of the multi-output regression are challenging tasks. Multioutput Gaussian Process (MOGP) methods offer outstanding solutions with tractable predictions and uncertainty quantification. However, their ... | [] | [] | Graphical Multioutput Gaussian Process with Attention | [
"Yijue Dai",
"Wenzhong Yan",
"Feng Yin"
] | 19,393 | https://openreview.net/forum?id=6N8TW504aa | ||
[] | Oral | [] | When writing programs, people have the ability to tackle a new complex task by decomposing it into smaller and more familiar subtasks. While it is difficult to measure whether neural program synthesis methods have similar capabilities, we can measure whether they compositionally generalize, that is, whether a model tha... | [] | [] | ExeDec: Execution Decomposition for Compositional Generalization in Neural Program Synthesis | [
"Kensen Shi",
"Joey Hong",
"Yinlin Deng",
"Pengcheng Yin",
"Manzil Zaheer",
"Charles Sutton"
] | 2307.13883 | 19,726 | https://openreview.net/forum?id=oTRwljRgiv | |
[] | Spotlight Poster | [] | Image interpolation based on diffusion models is promising in creating fresh and interesting images. Advanced interpolation methods mainly focus on linear spherical interpolation, delivering remarkable success for images generated by diffusion models. However, existing methods struggle with natural images (not generated... | [] | [] | NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation | [
"PengFei Zheng",
"Yonggang Zhang",
"Zhen Fang",
"Tongliang Liu",
"Defu Lian",
"Bo Han"
] | 2403.08840 | 19,392 | https://openreview.net/forum?id=6O3Q6AFUTu | |
[] | Spotlight Poster | [] | Counterfactual regret minimization (CFR) is a family of iterative algorithms showing promising results in solving imperfect-information games. Recent novel CFR variants (e.g., CFR+, DCFR) have significantly improved the convergence rate of the vanilla CFR. The key to these CFR variants' performance is weighting each it... | [] | [] | Dynamic Discounted Counterfactual Regret Minimization | [
"Hang Xu",
"Kai Li",
"Haobo Fu",
"QIANG FU",
"Junliang Xing",
"Jian Cheng"
] | 19,391 | https://openreview.net/forum?id=6PbvbLyqT6 | ||
[] | Poster | [] | We introduce $\mathcal{L}_1$-MBRL, a control-theoretic augmentation scheme for Model-Based Reinforcement Learning (MBRL) algorithms. Unlike model-free approaches, MBRL algorithms learn a model of the transition function using data and use it to design a control input. Our approach generates an approximate control-affin... | [] | [] | Robust Model Based Reinforcement Learning Using $\mathcal{L}_1$ Adaptive Control | [
"Minjun Sung",
"Sambhu Harimanas Karumanchi",
"Aditya Gahlawat",
"Naira Hovakimyan"
] | 19,023 | https://openreview.net/forum?id=GaLCLvJaoF | ||
[] | Oral | [
"https://github.com/dvlab-research/LongLoRA"
] | We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs), with limited computation cost. Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. For example, training on ... | [] | [] | LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models | [
"Yukang Chen",
"Shengju Qian",
"Haotian Tang",
"Xin Lai",
"Zhijian Liu",
"Song Han",
"Jiaya Jia"
] | 2309.12307 | 19,790 | https://openreview.net/forum?id=6PmJoRfdaK | |
[] | Spotlight Poster | [] | While backpropagation (BP) has achieved widespread success in deep learning, it faces two prominent challenges: computational inefficiency and biological implausibility. These issues arise from the requirements of feedback weight symmetry and the forward/backward pass locking. "Forward learning" (FL), an emerg... | [] | [] | Dictionary Contrastive Learning for Efficient Local Supervision without Auxiliary Networks | [
"Suhwan Choi",
"Myeongho Jeon",
"Yeonjung Hwang",
"Jeonglyul Oh",
"Sungjun Lim",
"Joonseok Lee",
"Myungjoo Kang"
] | 19,021 | https://openreview.net/forum?id=Gg7cXo3S8l | ||
[] | Poster | [] | Recently, multimodal contrastive learning (MMCL) approaches, such as CLIP \citep{radford2021learning}, have achieved a remarkable success in learning representations that are robust against distribution shift and generalize to new domains. Despite the empirical success, the mechanism behind learning such generalizable ... | [] | [] | Investigating the Benefits of Projection Head for Representation Learning | [
"Yihao Xue",
"Eric Gan",
"Jiayi Ni",
"Siddharth Joshi",
"Baharan Mirzasoleiman"
] | 2403.11391 | 19,020 | https://openreview.net/forum?id=GgEAdqYPNA | |
[] | Oral | [
"https://github.com/he-y/Multisize-Dataset-Condensation"
] | While dataset condensation effectively enhances training efficiency, its application in on-device scenarios brings unique challenges. 1) Due to the fluctuating computational resources of these devices, there's a demand for a flexible dataset size that diverges from a predefined size. 2) The limited computational power ... | [] | [] | Multisize Dataset Condensation | [
"Yang He",
"Lingao Xiao",
"Joey Tianyi Zhou",
"Ivor Tsang"
] | 2403.06075 | 19,777 | https://openreview.net/forum?id=FVhmnvqnsI | |
[] | Poster | [] | Equivariance is an important structural property that is captured by architectures such as graph neural networks (GNNs). However, equivariant graph functions cannot produce different outputs for similar nodes, which may be undesirable when the function is trying to optimize some global graph property. In this paper, we... | [] | [] | Orbit-Equivariant Graph Neural Networks | [
"Matthew Morris",
"Bernardo Cuenca Grau",
"Ian Horrocks"
] | 19,019 | https://openreview.net/forum?id=GkJOCga62u | ||
[] | Oral | [
"https://github.com/henryqin1997/InfoBatch"
] | Data pruning aims to obtain lossless performances with less overall cost. A common approach is to filter out samples that make less contribution to the training. This could lead to gradient expectation bias compared to the original data. To solve this problem, we propose InfoBatch, a novel framework aiming to achieve l... | [] | [] | InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning | [
"Ziheng Qin",
"Kai Wang",
"Zangwei Zheng",
"Jianyang Gu",
"Xiangyu Peng",
"xu Zhao Pan",
"Daquan Zhou",
"Lei Shang",
"Baigui Sun",
"Xuansong Xie",
"Yang You"
] | 2303.04947 | 19,779 | https://openreview.net/forum?id=C61sk5LsK6 | |
[] | Oral | [] | Climate prediction traditionally relies on complex numerical simulations of atmospheric physics. Deep learning approaches, such as transformers, have recently challenged the simulation paradigm with complex network forecasts. However, they often act as data-driven black-box models that neglect the underlying physics an... | [] | [] | ClimODE: Climate Forecasting With Physics-informed Neural ODEs | [
"Yogesh Verma",
"Markus Heinonen",
"Vikas Garg"
] | 19,715 | https://openreview.net/forum?id=xuY33XhEGR | ||
[] | Oral | [] | Recent advances in tabular data generation have greatly enhanced synthetic data quality. However, extending diffusion models to tabular data is challenging due to the intricately varied distributions and a blend of data types of tabular data. This paper introduces TABSYN, a methodology that synthesizes tabular data by ... | [] | [] | Mixed-Type Tabular Data Synthesis with Score-based Diffusion in Latent Space | [
"Hengrui Zhang",
"Jiani Zhang",
"Zhengyuan Shen",
"Balasubramaniam Srinivasan",
"Xiao Qin",
"Christos Faloutsos",
"Huzefa Rangwala",
"George Karypis"
] | 2310.09656 | 19,792 | https://openreview.net/forum?id=4Ay23yeuz0 | |
[] | Oral | [] | Large Language Models (LLMs) have recently showcased their remarkable capacities, not only in natural language processing tasks but also across diverse domains such as clinical medicine, legal consultation, and education. LLMs become more than mere applications, evolving into assistants capable of addressing diverse us... | [] | [] | On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs | [
"Jen-tse Huang",
"Wenxuan Wang",
"Eric John Li",
"Man Ho LAM",
"Shujie Ren",
"Youliang Yuan",
"Wenxiang Jiao",
"Zhaopeng Tu",
"Michael Lyu"
] | 19,775 | https://openreview.net/forum?id=H3UayAQWoE | ||
[] | Oral | [
"https://github.com/RuoyuChen10/SMDL-Attribution"
] | Image attribution algorithms aim to identify important regions that are highly relevant to model decisions. Although existing attribution solutions can effectively assign importance to target elements, they still face the following challenges: 1) existing attribution methods generate inaccurate small regions thus misle... | [] | [] | Less is More: Fewer Interpretable Region via Submodular Subset Selection | [
"Ruoyu Chen",
"Hua Zhang",
"Siyuan Liang",
"Jingzhi Li",
"Xiaochun Cao"
] | 2402.09164 | 19,733 | https://openreview.net/forum?id=jKTUlxo5zy | |
[] | Oral | [] | Learning features from data is one of the defining characteristics of deep learning, but our theoretical understanding of the role features play in deep learning is still rudimentary. To address this gap, we introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model thro... | [] | [] | On the Joint Interaction of Models, Data, and Features | [
"Yiding Jiang",
"Christina Baek",
"J Zico Kolter"
] | 2306.04793 | 19,712 | https://openreview.net/forum?id=ze7DOLi394 |
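Since the preview flattens each dataset row into a single pipe-delimited line following the column header at the top (Model, type, GitHub, abstract, project_page, Space, Dataset, title, authors, arxiv_id, id, OpenReview), a small parsing sketch may help when working with a text export of this page. The `parse_row` helper, the assumption that cell text contains no ` | ` separator, and the abbreviated example row are all hypothetical illustrations, not part of the dataset itself.

```python
import json

# Column order as shown in the preview header above.
COLUMNS = [
    "Model", "type", "GitHub", "abstract", "project_page",
    "Space", "Dataset", "title", "authors", "arxiv_id", "id", "OpenReview",
]

def parse_row(line: str) -> dict:
    """Split one pipe-delimited preview row into named fields.

    Assumes cell text itself contains no " | " separator; JSON-style
    list cells (e.g. authors) are decoded into Python lists.
    """
    cells = [cell.strip() for cell in line.split(" | ")]
    row = dict(zip(COLUMNS, cells))
    for key in ("Model", "GitHub", "Space", "Dataset", "authors"):
        if row.get(key, "").startswith("["):
            row[key] = json.loads(row[key])
    return row

# Abbreviated, hypothetical row in the same shape as the preview rows.
example = ('[] | Oral | [] | An example abstract... |  | [] | [] | '
           'Example Title | ["A. Author", "B. Author"] | 2310.00000 | '
           '19000 | https://openreview.net/forum?id=xxxx')
row = parse_row(example)
print(row["type"], row["title"], row["authors"])
# Oral Example Title ['A. Author', 'B. Author']
```

Rows whose abstract happens to contain a literal ` | ` would need a real CSV/JSON export instead of this line-splitting heuristic.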