umd-huang-lab
Name: Furong's Lab
Type: Organization
Bio: This is Dr. Furong Huang's group at University of Maryland.
Twitter: furongh
Blog: https://furong-huang.com
Implementation of ICLR'23 publication "Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication".
ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein and Furong Huang
Official Implementation of the paper "Equal Long-term Benefit Rate: Adapting Static Fairness Notions to Sequential Decision Making" by Yuancheng Xu, Chenghao Deng, Yanchao Sun, Ruijie Zheng, Xiyao Wang, Jieyu Zhao and Furong Huang.
Code and data for our paper "Automatic Pseudo-Harmful Prompt Generation for Evaluating False Refusals in Large Language Models"
Off-policy evaluation in contextual bandits, which evaluates the reward of a target policy given the history of a logged policy, is a task of importance as it provides an estimate of the performance of a new policy without experimenting with it. Existing off-policy evaluation methods in contextual bandits make an oversimplified assumption that the distribution of contexts is stationary. In this paper, we consider a more practical setting of a context/reward distribution shift between the logged data and the contexts observed for evaluating a target policy in the future. We propose an intent shift model which introduces a latent intent variable to capture the distribution shift on context and reward, avoiding the intractable problem of density estimation of contexts in high dimensions. Under the intent shift model, we introduce a consistent spectral-based IPS estimator, characterize its finite-sample complexity, and derive an MSE bound on the performance of the final reward estimation. Experiments demonstrate that the proposed spectral-based IPS estimator outperforms the existing estimators under distribution shift.
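The paper's spectral-based estimator is its own contribution, but the vanilla inverse propensity scoring (IPS) baseline it builds on can be sketched in a few lines. The toy two-action bandit below, and all names in it, are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged data: 2 actions, logging policy chooses uniformly,
# reward is 1 when the action matches the sign of the context.
n = 10_000
contexts = rng.normal(size=n)
actions = rng.integers(0, 2, size=n)
logged_probs = np.full(n, 0.5)                        # uniform logging policy
rewards = (actions == (contexts > 0)).astype(float)

def target_prob(action):
    # Illustrative target policy: pick action 1 with probability 0.8.
    return np.where(action == 1, 0.8, 0.2)

# IPS reweights each logged reward by the ratio of target to logging
# propensities, giving an unbiased estimate of the target policy's value.
ips_estimate = np.mean(rewards * target_prob(actions) / logged_probs)
```

Here the true value of the target policy is 0.8 * 0.5 + 0.2 * 0.5 = 0.5, so the estimate should land near 0.5; a stationary-context assumption is baked into this baseline, which is exactly what the paper relaxes.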
Deep neural networks generalize well on unseen data even though the number of parameters often far exceeds the number of training examples. Recently proposed complexity measures have provided insights into the generalizability of neural networks from the perspectives of PAC-Bayes, robustness, overparametrization, compression, and so on. In this work, we advance the understanding of the relations between a network's architecture and its generalizability from the compression perspective. Using tensor analysis, we propose a series of intuitive, data-dependent and easily-measurable properties that tightly characterize the compressibility and generalizability of neural networks; thus, in practice, our generalization bound outperforms the previous compression-based ones, especially for neural networks using tensors as their weight kernels (e.g. CNNs). Moreover, these intuitive measurements provide further insights into designing neural network architectures with properties favorable for better/guaranteed generalizability. Our experimental results demonstrate that, through the proposed measurable properties, our generalization error bound matches the trend of the test error well. Our theoretical analysis further provides justifications for the empirical success and limitations of some widely-used tensor-based compression approaches. We also discover improvements to the compressibility and robustness of current neural networks when incorporating tensor operations via our proposed layer-wise structure.
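The compression perspective above rests on low-rank structure in weight kernels. As a minimal illustration (a plain truncated SVD on a synthetic low-rank matrix, not the paper's tensor analysis), a weight matrix with low-rank structure can be stored with far fewer parameters at negligible reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 weight matrix constructed to have rank 8.
W = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 64))

# Truncated SVD: keep only the top-r singular triplets.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 8
W_r = (U[:, :r] * s[:r]) @ Vt[:r]      # rank-r reconstruction

rel_err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
# Dense storage: 64*64 = 4096 parameters; factored: 2*64*8 + 8 = 1032.
```

Because W is exactly rank 8 here, the relative error is numerically zero; for real layers the error decays with how fast the spectrum falls off, which is the kind of measurable property a compression-based bound leans on.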
Code for ICLR 2022 publication: Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL. https://openreview.net/forum?id=JM2kFbJvvI
Parallel implementations of decomposed tensorial neural network layers
Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts"
Code for the paper "Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics". https://arxiv.org/abs/2009.00774
We provide an end-to-end differentially private spectral algorithm for learning LDA, based on matrix/tensor decompositions, and establish theoretical guarantees on the utility/consistency of the estimated model parameters. The spectral algorithm consists of multiple algorithmic steps, named "edges", to which noise can be injected to obtain differential privacy. We identify subsets of edges, named "configurations", such that adding noise to all edges in such a subset guarantees differential privacy of the end-to-end spectral algorithm. We characterize the sensitivity of the edges with respect to the input and thus estimate the amount of noise to be added to each edge for any required privacy level. We then characterize the utility loss for each configuration as a function of the injected noise. Overall, by combining the sensitivity and utility characterizations, we obtain an end-to-end differentially private spectral algorithm for LDA and identify the configuration that outperforms the others in any specific regime. We are the first to achieve utility guarantees under the required level of differential privacy for learning in LDA. Overall, our method systematically outperforms differentially private variational inference.
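The sensitivity analysis and edge configurations are specific to the paper's spectral pipeline, but the basic building block, calibrating Laplace noise to an edge's sensitivity and privacy budget, can be sketched as follows. The toy count vector and its sensitivity of 1 are illustrative assumptions:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Standard Laplace mechanism: adding noise with scale
    sensitivity/epsilon makes this step epsilon-differentially private."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

rng = np.random.default_rng(0)

# Toy word-count vector. If changing one document changes the counts by
# at most 1 in L1 norm, the sensitivity of this edge is 1 (an assumption
# made here for illustration).
counts = np.array([120.0, 45.0, 30.0])
private_counts = laplace_mechanism(counts, sensitivity=1.0, epsilon=1.0, rng=rng)
```

The paper's contribution is deciding which subsets of such noisy steps ("configurations") yield end-to-end privacy at the lowest utility loss; this sketch only shows the per-edge mechanism.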
Code for paper "Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies" by Xiangyu Liu, Chenghao Deng, Yanchao Sun, Yongyuan Liang, Furong Huang
Repository for RealFM: A Realistic Mechanism to Incentivize Federated Participation and Contribution
Model-based reinforcement learning algorithms make decisions by building and utilizing a model of the environment. However, none of the existing algorithms attempts to infer the dynamics of a state-action pair from known state-action pairs before visiting it sufficiently many times. We propose a new model-based method called Greedy Inference Model (GIM) that infers unknown dynamics from known dynamics based on the internal spectral properties of the environment. In other words, GIM can "learn by analogy". We further introduce a new exploration strategy which ensures that the agent rapidly and evenly visits unknown state-action pairs. GIM is much more computationally efficient than state-of-the-art model-based algorithms, as the number of dynamic programming operations is independent of the environment size. Lower sample complexity can also be achieved under mild conditions compared against methods without inference. Experimental results demonstrate the effectiveness and efficiency of GIM in a variety of real-world tasks.
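The "learn by analogy" idea can be illustrated, in a very reduced form, with a rank-1 dynamics matrix: once three entries of a rank-1 matrix are known, a fourth follows by a ratio. This toy matrix and the rank-1 assumption are illustrative only and are not GIM's actual spectral model:

```python
import numpy as np

# Toy rank-1 "dynamics" matrix over (state, action) pairs.
u = np.array([0.2, 0.5, 0.3])   # per-state factor (illustrative)
v = np.array([0.4, 0.6])        # per-action factor (illustrative)
D = np.outer(u, v)

# Suppose the pair (state 2, action 1) has never been visited. Under the
# rank-1 assumption, its dynamics can be inferred by analogy from three
# visited pairs: D[2,1] = D[2,0] * D[0,1] / D[0,0].
inferred = D[2, 0] * D[0, 1] / D[0, 0]
```

The point is that spectral (low-rank) structure lets the agent fill in unvisited state-action pairs from visited ones instead of estimating every pair independently, which is what drives the sample-complexity gains claimed above.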
SWIFT: Shared WaIt Free Transmission
Code for "TACO: Temporal Latent Action-Driven Contrastive Loss for Visual Reinforcement Learning"
We implement tensorial neural networks (TNNs), a generalization of existing neural networks by extending tensor operations on low order operands to those on high order operands.
A self-training method for transferring fairness under distribution shifts.
Code for paper "Transfer RL across Observation Feature Spaces via Model-Based Regularization". https://openreview.net/forum?id=7KdAoOsI81C