
Understanding Black-box Predictions via Influence Functions

Overview

How can we explain the predictions of a black-box model? In "Understanding Black-box Predictions via Influence Functions" (Proc. 34th Int. Conf. on Machine Learning, 2017; best paper), Pang Wei Koh and Percy Liang use influence functions, a classic technique from robust statistics, to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. To scale influence functions up to modern machine learning settings, they develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. They show that even on non-convex and non-differentiable models, where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, they demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually indistinguishable training-set attacks. A reproducible, executable, and Dockerized version of the paper's experiments, along with the datasets, is available on CodaLab.
Why use influence functions?

With the rapid adoption of machine learning systems in sensitive applications, there is an increasing need to make black-box models explainable: data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. Influence functions answer a counterfactual question: how would the model's prediction at a test point change if a given training sample were upweighted or removed? There are two ways of measuring this. The first is deletion diagnostics: delete the instance from the training data, retrain the model on the reduced training dataset, and observe the difference in the model parameters or predictions (either individually or over the complete dataset). This is exact but requires one retraining run per training point. The second is the influence function itself: approximate the effect of infinitesimally upweighting the instance using only gradients and Hessian-vector products, with no retraining at all. A minimal sketch of the first option follows; the rest of this note develops the second.
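The sketch below implements deletion diagnostics (leave-one-out retraining) for a toy logistic-regression problem. The synthetic data and every name in it are illustrative assumptions, not code from the paper or the reimplementation discussed below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) + 0.3 * rng.normal(size=200) > 0).astype(int)
x_test = rng.normal(size=(1, 5))

def fit(X, y):
    # L2-regularized logistic regression: keeps the problem strongly convex.
    return LogisticRegression(C=1.0, max_iter=1000).fit(X, y)

base = fit(X, y).predict_proba(x_test)[0, 1]

# Leave-one-out retraining: the change in the test prediction when training
# point i is deleted is its exact, but expensive, influence on that prediction.
loo_influence = np.empty(len(X))
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    loo_influence[i] = fit(X[mask], y[mask]).predict_proba(x_test)[0, 1] - base

print("most helpful / most harmful points:",
      loo_influence.argmin(), loo_influence.argmax())
```

This costs one full retraining run per training point, which is exactly what influence functions avoid.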
The method

For a training point z and parameters \theta \in \Theta, let L(z, \theta) be the loss. Given training points z_1, \dots, z_n, with z_i = (x_i, y_i), the empirical risk minimizer is

\hat{\theta} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta).

To ask how a single training point z matters, upweight it by a small \epsilon and re-solve:

\hat{\theta}_{\epsilon,z} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta) + \epsilon L(z, \theta).

Since each sample carries weight 1/n, removing z altogether corresponds to \epsilon = -1/n. A classical result from robust statistics (Cook's assessment of local influence; the infinitesimal jackknife) gives the influence of this upweighting on the parameters:

\mathcal{I}_{\mathrm{up,params}}(z) = \frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\Big|_{\epsilon=0} = -H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}),

where H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta}) is the empirical Hessian, assumed positive definite; the paper's appendix walks through the standard derivation in the context of loss minimization (M-estimation). The chain rule then gives the influence of z on the loss at a test point z_{\mathrm{test}}:

\mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}}) = \frac{dL(z_{\mathrm{test}}, \hat{\theta}_{\epsilon,z})}{d\epsilon}\Big|_{\epsilon=0} = \nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top} \mathcal{I}_{\mathrm{up,params}}(z) = -\nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta})^{\top} H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z, \hat{\theta}).

Training points with large positive \mathcal{I}_{\mathrm{up,loss}} are harmful (upweighting them increases the test loss); large negative values mark helpful points.

The formula is easiest to read for binary logistic regression, p(y \mid x) = \sigma(y\theta^{\top}x) with y \in \{-1, 1\} and \sigma the sigmoid. There it takes the closed form

\mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}}) = -y_{\mathrm{test}}\, y \cdot \sigma(-y_{\mathrm{test}}\theta^{\top}x_{\mathrm{test}}) \cdot \sigma(-y\theta^{\top}x) \cdot x_{\mathrm{test}}^{\top} H_{\hat{\theta}}^{-1} x.

The factor \sigma(-y\theta^{\top}x) is large exactly when the model fits z poorly, so outliers carry high influence. And the similarity between x and x_{\mathrm{test}} is measured not by a plain inner product but by the weighted product x_{\mathrm{test}}^{\top} H_{\hat{\theta}}^{-1} x, which emphasizes directions of low curvature (low variation) in the training data, a resistance effect. A sketch of the exact computation for a small model follows.
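Here is a minimal sketch of the exact influence computation for a small model, forming the Hessian explicitly with PyTorch autograd. The damping term (added to keep the Hessian invertible) and all helper names are assumptions for illustration, not the paper's released code.

```python
import torch

def avg_loss(theta, X, y, damp=0.01):
    # Mean logistic loss log(1 + exp(-y * x^T theta)) for labels y in {-1, +1},
    # plus a small L2 term (an assumption here) so the Hessian is invertible.
    return torch.nn.functional.softplus(-y * (X @ theta)).mean() \
        + 0.5 * damp * (theta @ theta)

def grad_at(theta, X, y):
    # Flat gradient of the average loss over a (sub)set of points.
    theta = theta.detach().requires_grad_(True)
    return torch.autograd.grad(avg_loss(theta, X, y), theta)[0]

def influence_up_loss(theta, X, y, x, lab, x_test, lab_test):
    # I_up,loss(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z).
    H = torch.autograd.functional.hessian(lambda t: avg_loss(t, X, y), theta)
    s_test = torch.linalg.solve(H, grad_at(theta, x_test[None], lab_test[None]))
    return -(s_test @ grad_at(theta, x[None], lab[None]))

# Removing z corresponds to eps = -1/n, so the predicted change in the test
# loss when z is deleted is roughly -(1/n) * I_up,loss(z, z_test); this can be
# checked against the leave-one-out sketch above (after mapping labels to +-1).
```

For a strongly regularized model like this one, the predicted changes should line up closely with the leave-one-out differences.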
Efficient computation

Explicitly forming and inverting H_{\hat{\theta}} costs O(np^2 + p^3) for n training points and p parameters, which is prohibitive for deep networks. Koh and Liang instead compute, once per test point,

s_{\mathrm{test}} = H_{\hat{\theta}}^{-1} \nabla_{\theta} L(z_{\mathrm{test}}, \hat{\theta}),

after which the influence of every training point is a single dot product, \mathcal{I}_{\mathrm{up,loss}}(z, z_{\mathrm{test}}) = -s_{\mathrm{test}} \cdot \nabla_{\theta} L(z, \hat{\theta}). Two standard techniques produce s_test using only Hessian-vector products (HVPs), which Pearlmutter showed can be computed about as cheaply as gradients:

1. Conjugate gradients. Use the variational characterization H_{\hat{\theta}}^{-1} v = \arg\min_{t} \frac{1}{2} t^{\top} H_{\hat{\theta}} t - v^{\top} t and solve it with CG; each iteration needs one HVP at O(np) cost.

2. Stochastic estimation (LiSSA). Truncate the Neumann series for the inverse: with

S_j = \sum_{i=0}^{j-1} (I - H)^{i} = \frac{I - (I - H)^{j}}{I - (I - H)} = \frac{I - (I - H)^{j}}{H},

we have S_j \to H^{-1} as j \to \infty, provided the spectrum of H lies in (0, 1), which is enforced in practice by damping and rescaling. The recursion S_j v = v + (I - H) S_{j-1} v needs one HVP per step, and substituting the Hessian of a single sampled training example, \nabla_{\theta}^{2} L(z_i, \hat{\theta}), for H gives an unbiased stochastic estimate; averaging several independent runs, and using more recursion steps when approximating the influence, reduces the variance. A sketch of this estimator follows.
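The following is a minimal LiSSA-style sketch for an arbitrary PyTorch model, computing the HVP by double backward. The update rule and the damp/scale/steps constants follow common open-source implementations and are illustrative assumptions, not the paper's exact settings.

```python
import torch

def flat_grad(loss, params, create_graph=False):
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def hvp(loss, params, v):
    # Hessian-vector product via double backward (Pearlmutter's trick):
    # differentiate (gradient . v) with respect to the parameters again.
    g = flat_grad(loss, params, create_graph=True)
    return flat_grad(g @ v, params)

def estimate_s_test(model, loss_fn, x_test, y_test, train_loader,
                    damp=0.01, scale=25.0, steps=1000):
    """Stochastic (LiSSA-style) estimate of H^{-1} grad L(z_test)."""
    params = [p for p in model.parameters() if p.requires_grad]
    v = flat_grad(loss_fn(model(x_test), y_test), params).detach()
    h = v.clone()  # running estimate
    data = iter(train_loader)
    for _ in range(steps):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(train_loader)
            x, y = next(data)
        batch_loss = loss_fn(model(x), y)
        # h <- v + (1 - damp) h - (H h) / scale; the fixed point of h is
        # scale * (H + damp * scale * I)^{-1} v, a damped version of H^{-1} v.
        h = v + (1.0 - damp) * h - hvp(batch_loss, params, h) / scale
    return h / scale

# The influence of each training point z is then -s_test . grad L(z, theta);
# averaging several independent s_test estimates reduces the variance.
```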
What the experiments show

On linear models such as logistic regression on MNIST, where both the Hessian and leave-one-out retraining can be computed exactly, the CG and stochastic approximations closely track the actual change in test loss and parameters. Comparing models is also revealing: on the same image classification task, the influential training images for an RBF SVM are essentially the ones closest to the test image in raw input space, whereas for an Inception network they reflect more abstract, class-specific features, making the two models' very different inductive biases visible. Self-influence, \mathcal{I}_{\mathrm{up,loss}}(z_i, z_i), measures how much a training point reduces its own loss; mislabeled and anomalous points score high, so checking training points in decreasing order of self-influence lets an annotator surface most label errors while inspecting only on the order of 10% of the data. Finally, because the influence is differentiable in the training inputs, it can be used to craft visually indistinguishable training-set attacks: small perturbations to training images that flip the predictions on chosen test points.

A PyTorch reimplementation

This is a PyTorch reimplementation of influence functions from the ICML 2017 best paper (the original release is at github.com/kohpangwei/influence-release; dependencies: Numpy/Scipy/Scikit-learn/Pandas). The idea is to use influence functions to observe the influence of the training samples on a test sample's prediction. Influence can be calculated for a single test image or over the prediction outcomes of an entire dataset, even more than 1000 test samples, with the training points for each test sample ordered by harmfulness. Thus, you can easily find mislabeled images in your dataset, or compress your dataset slightly to the most influential images important for the model's predictions. [Figures in the original README: the helpful training images recovered for a test image of the class "ship", and the same visualization for a second model, DenseNet-100/12.]

The code runs in two modes. The first, calc_img_wise, handles one test image at a time: it computes s_test for the image, then the training gradients, combines them into influences, and moves on to the next image. In this mode grad_z has to be calculated twice per training point, once for the first approximation in s_test and once to combine with the s_test vector. The second mode, calc_all_grad_then_test, first computes and stores grad_z for all training points and only then combines them with each test sample's s_test; since grad_z depends only on the training data, it can be reused across test samples. Keeping the grad_zs only makes sense if they can be loaded faster than they can be recomputed, and they can take significant amounts of disk space (100s of GBs), so this mode pays off only with a fast SSD. TL;DR: the recommended mode is calc_img_wise.

Everything is controlled through config, a dict which contains the parameters used to calculate the influence, divided into parameters affecting the calculation (such as the recursion depth, the number of repetitions to average, and the damping and scaling terms that set the initial conditioning of the Hessian estimate during the s_test calculation) and parameters affecting input and output (such as how many test samples to process). If the influence function is calculated for multiple test samples, the helpful and harmful rankings are reported per test sample. A hedged usage sketch follows.
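Finally, a usage sketch for the reimplementation. The mode name calc_img_wise and the role of config come from the fragments above, but the package name and the exact config keys are assumptions about the interface; consult the repository's README for the real signatures.

```python
import pytorch_influence_functions as ptif  # assumed package/import name

# model: a trained torch.nn.Module; trainloader/testloader: DataLoaders.
config = {
    'outdir': 'outdir',        # where influence results are written
    'gpu': 0,
    'recursion_depth': 5000,   # steps of the s_test approximation
    'r_averaging': 1,          # number of s_test estimates to average
    'damp': 0.01,              # damping term in the LiSSA recursion
    'scale': 25,               # scaling term in the LiSSA recursion
    'test_sample_num': 1,      # how many test samples to process
}

influences, harmful, helpful = ptif.calc_img_wise(
    config, model, trainloader, testloader)
```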
Related notes: a course on neural net training dynamics

Several of the fragments collected here come from a graduate course on neural net training dynamics, which covers influence functions alongside broader questions about optimization. Neural nets have achieved amazing results over the past decade in domains as broad as vision, speech, language understanding, medicine, robotics, and game playing. One would have expected this success to require overcoming significant obstacles that had been theorized to exist: the optimization landscape is non-convex, highly nonlinear, and high-dimensional, so why are we able to train these networks at all? In many cases they have far more than enough parameters to memorize the data, so why do they generalize well? Why neural nets generalize despite their enormous capacity is intimately tied to the dynamics of training. While these topics had consumed much of the machine learning research community's attention when it came to simpler models, the attitude of the neural nets community was to train first and ask questions later; as a result, the practical success of neural nets has outpaced our ability to understand how they work. Some of the relevant ideas were established decades ago (and perhaps forgotten by much of the community), and others are just beginning to be understood today.

The course is about developing the conceptual tools to understand what happens when a neural net trains: in order to have any hope of understanding the solutions it comes up with, we need to understand the problems. It isn't the sort of applied class that gives a recipe for achieving state-of-the-art performance on ImageNet, and neither is it the sort of theory class where theorems are proved for the sake of proving theorems; for modern neural nets, the analysis is more often descriptive, taking the procedures practitioners are already using and figuring out why they (seem to) work. The aim is to give the conceptual tools needed to reason through the factors affecting training in any particular instance.

Topics include: linear regression, a simple model whose gradient descent dynamics can be determined exactly and which already explains why it is a good idea to normalize the inputs, as well as the double descent phenomenon whereby increasing dimensionality can reduce overfitting (highly overparameterized models can behave very differently from more traditional underparameterized ones); the Hessian, used to diagnose slow convergence and to interpret the dependence of a network's predictions on the training data; momentum, including the heavy ball method and why the Nesterov accelerated gradient can further speed up convergence; adaptive gradient methods, normalization, and weight decay, three algorithmic features that have become staples of neural net training; metrics, which give a local notion of distance on a manifold, and the natural gradient; infinite limits and overparameterization, including the Neural Tangent Kernel, which gives an elegant way to understand gradient descent dynamics in function space (in many cases, the distance between two neural nets is more profitably defined in terms of the distance between the functions they represent than between their weight vectors); stochastic optimization, contrasting two models that make vastly different predictions about convergence behavior, the noisy quadratic model and the interpolation regime, together with the effects of data parallelism, which algorithmic choices and optimization techniques are useful at which batch sizes, and short-horizon bias in stochastic meta-optimization; how the gradient noise in SGD can contribute an implicit regularization effect, Bayesian or non-Bayesian; differentiable games; and, finally, bilevel optimization, which draws on everything covered up to that point: gradient-based hyperparameter optimization, self-tuning networks that try to solve bilevel optimization problems by training a network to locally approximate the best-response function, data poisoning framed as an optimal training-set attack on a machine learner, and influence functions themselves (if influence functions are the answer, then what is the question?). Either way, if a network architecture is itself optimizing something, then the outer training procedure is wrestling with these same issues, whether we like it or not.

Logistics: the course uses Python and the JAX deep learning framework, with JAX code examples for the covered algorithms. There are various full-featured deep learning frameworks built on top of JAX, designed to resemble PyTorch or Keras; these are a better choice if you want all the bells and whistles of a near-state-of-the-art model, but keep in mind that some key concepts, such as directional derivatives or Hessian-vector products, might not be so straightforward to use in some frameworks. Assignments include one problem set (a chance to practice the content of the first three lectures, due Feb 10), a paper presentation with an accompanying Colab notebook demonstrating one of the paper's key ideas (done in groups of 2-3; a sign-up sheet will be distributed via email), and a final project, a small research project relating to the course content (more details in the project handout). Lectures are delivered synchronously via Zoom and recorded for asynchronous viewing by enrolled students; students are encouraged to attend each week to ask questions, but may also use office hours or Piazza, and all information about attending virtual lectures, tutorials, and office hours is sent to enrolled students through Quercus. Most weeks target 2 hours of class time, with extra time allocated in case presentations run over.
References

Adler, P., Falk, C., Friedler, S. A., Rybeck, G., Scheidegger, C., Smith, B., and Venkatasubramanian, S. Auditing black-box models for indirect influence. In ICDM, 2016.
Amari, S. Natural gradient works efficiently in learning. Neural Computation, 1998.
Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In ICML, 2019.
Bae, J., Ng, N., Lo, A., Ghassemi, M., and Grosse, R. If influence functions are the answer, then what is the question? In NeurIPS, 2022.
Basu, S., You, X., and Feizi, S. On second-order group influence functions for black-box predictions. In ICML, 2020.
Ben-David, S., Blitzer, J., Crammer, K., Kulesza, A., Pereira, F., and Vaughan, J. W. A theory of learning from different domains. Machine Learning, 2010.
Benjamin, A., Rolnick, D., and Kording, K. Measuring and regularizing networks in function space. In ICLR, 2019.
Biggio, B., Nelson, B., and Laskov, P. Poisoning attacks against support vector machines. In ICML, 2012.
Cohen, J., Kaur, S., Li, Y., Kolter, J. Z., and Talwalkar, A. Gradient descent on neural networks typically occurs at the edge of stability. In ICLR, 2021.
Cook, R. D. Assessment of local influence. Journal of the Royal Statistical Society: Series B, 1986.
Datta, A., Sen, S., and Zick, Y. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In IEEE Symposium on Security and Privacy, 2016.
Debruyne, M., Hubert, M., and Suykens, J. Model selection in kernel based regression using the influence function. Journal of Machine Learning Research, 2008.
Finn, C., Abbeel, P., and Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
Frenay, B. and Verleysen, M. Classification in the presence of label noise: A survey. IEEE Transactions on Neural Networks and Learning Systems, 2014.
Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. In ICLR, 2015.
Goodman, B. and Flaxman, S. European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 2017.
Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., and Tygar, J. Adversarial machine learning. In AISec, 2011.
Jaeckel, L. A. The infinitesimal jackknife. Bell Laboratories Memorandum, 1972.
Kim, K.-H., Hong, S., Roh, B., Cheon, Y., and Park, M. PVANet: Lightweight deep neural networks for real-time object detection. 2016.
Koh, P. W. and Liang, P. Understanding black-box predictions via influence functions. In Proc. 34th International Conference on Machine Learning (ICML), pp. 1885-1894, 2017. arXiv:1703.04730.
Koh, P. W., Ang, K.-S., Teo, H. H. K., and Liang, P. On the accuracy of influence functions for measuring group effects. In NeurIPS, 2019.
Krause, J., Perer, A., and Ng, K. Interacting with predictions: Visual inspection of black-box machine learning models. In CHI, 2016.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1097-1105, 2012.
Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S., and Doshi-Velez, F. An evaluation of the human-interpretability of explanation. 2019.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
Li, B., Wang, Y., Singh, A., and Vorobeychik, Y. Data poisoning attacks on factorization-based collaborative filtering. In NIPS, 2016.
Liu, D. C. and Nocedal, J. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 1989.
Liu, Y., Jiang, S., and Liao, S. Efficient approximation of cross-validation for kernel methods using Bouligand influence function. In ICML, 2014.
Ma, S., Bassily, R., and Belkin, M. The power of interpolation: Understanding the effectiveness of SGD in modern over-parameterized learning. In ICML, 2018.
Maclaurin, D., Duvenaud, D., and Adams, R. P. Gradient-based hyperparameter optimization through reversible learning. In ICML, 2015.
Mandt, S., Hoffman, M. D., and Blei, D. M. Stochastic gradient descent as approximate Bayesian inference. Journal of Machine Learning Research, 2017.
Martens, J. Deep learning via Hessian-free optimization. In ICML, 2010.
Martens, J. and Grosse, R. Optimizing neural networks with Kronecker-factored approximate curvature. In ICML, 2015.
Mei, S. and Zhu, X. The security of latent Dirichlet allocation. In AISTATS, 2015.
Mei, S. and Zhu, X. Using machine teaching to identify optimal training-set attacks on machine learners. In AAAI, 2015.
Mokhtari, A., Ozdaglar, A., and Pattathil, S. A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In AISTATS, 2020.
Molnar, C. Interpretable Machine Learning. 2019. (Chapter 10.5, Influential Instances.)
Nakkiran, P., Neyshabur, B., and Sedghi, H. The deep bootstrap framework: Good online learners are good offline generalizers. In ICLR, 2021.
Ollivier, Y. Riemannian metrics for neural networks I: Feed-forward networks. Information and Inference, 2015.
Pearlmutter, B. Fast exact multiplication by the Hessian. Neural Computation, 1994.
Ribeiro, M. T., Singh, S., and Guestrin, C. "Why should I trust you?": Explaining the predictions of any classifier. In KDD, 2016.
Shallue, C. J., Lee, J., Antognini, J., Sohl-Dickstein, J., Frostig, R., and Dahl, G. E. Measuring the effects of data parallelism on neural network training. Journal of Machine Learning Research, 2019.
Shrikumar, A., Greenside, P., Shcherbina, A., and Kundaje, A. Not just a black box: Learning important features through propagating activation differences. 2016.
Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
Smith, S. L., Dherin, B., Barrett, D. G. T., and De, S. On the origin of implicit regularization in stochastic gradient descent. In ICLR, 2021.
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the Inception architecture for computer vision. In CVPR, 2016.
Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. 2016.
Wojnowicz, M., Cruz, B., Zhao, X., Wallace, B., Wolff, M., Luan, J., and Crable, C. "Influence sketching": Finding influential samples in large-scale regressions. In IEEE BigData, 2016.
Wu, Y., Ren, M., Liao, R., and Grosse, R. Understanding short-horizon bias in stochastic meta-optimization. In ICLR, 2018.
Zhang, G., Li, L., Nado, Z., Martens, J., Sachdeva, S., Dahl, G. E., Shallue, C. J., and Grosse, R. Which algorithmic choices matter at which batch sizes? Insights from a noisy quadratic model. In NeurIPS, 2019.
Zhang, G., Sun, S., Duvenaud, D., and Grosse, R. Noisy natural gradient as variational inference. In ICML, 2018.
Zhang, G., Wang, C., Xu, B., and Grosse, R. Three mechanisms of weight decay regularization. In ICLR, 2019.
Zhang, Z., Rudra, K., and Anand, A. Explain and predict, and then predict again. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining (WSDM), 2021.
