Understanding Black-box Predictions via Influence Functions

How can we explain the predictions of a black-box model? In "Understanding Black-box Predictions via Influence Functions" (Pang Wei Koh and Percy Liang, ICML 2017), the authors revive an old technique from robust statistics, the influence function, and use it to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. The paper proceeds in three steps: influence functions, their definition and theory; efficiently calculating influence functions; and use cases.

Components of influence. Upweighting a training point z by an infinitesimal amount changes the loss at a test point z_test at the rate

    I_up,loss(z, z_test) = -∇_θ L(z_test, θ̂)ᵀ H_θ̂⁻¹ ∇_θ L(z, θ̂),

where θ̂ is the empirical risk minimizer and H_θ̂ = (1/n) Σ_i ∇²_θ L(z_i, θ̂) is the Hessian of the training loss. What is the effect of the training-loss and H_θ̂⁻¹ terms in I_up,loss? Roughly, the training gradient ∇_θ L(z, θ̂) measures how strongly z pulls on the parameters, the inverse Hessian translates that pull into an actual parameter change given the curvature contributed by the rest of the training set, and the test gradient measures how much that parameter change matters for z_test.

To scale up influence functions to modern machine learning settings, the paper never forms or inverts the Hessian explicitly: it only needs gradients and Hessian-vector products, approximating H_θ̂⁻¹ v with the second-order stochastic optimization technique of Agarwal, Bullins, and Hazan. The authors show that even on non-convex and non-differentiable models, where the theory breaks down, approximations to influence functions can still provide valuable information. Use cases include understanding model behavior, debugging models, detecting dataset errors, and identifying training-set attacks on machine learners.
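As a concrete illustration, here is a minimal, self-contained sketch (not the authors' reference implementation) that computes I_up,loss for every training point of a tiny logistic-regression model, where the Hessian is small enough to form and invert explicitly. The toy data, the damping constant, and the variable names are illustrative assumptions.

    import torch

    torch.manual_seed(0)

    # Hypothetical toy binary-classification data standing in for a real dataset.
    n, d = 200, 5
    X = torch.randn(n, d)
    true_w = torch.randn(d)
    y = (X @ true_w > 0).float()
    x_test = torch.randn(d)
    y_test = torch.tensor(1.0)

    w = torch.zeros(d, requires_grad=True)

    def loss_fn(w, x, y):
        # Average logistic loss; works for a batch (x: [m, d]) or a single point (x: [d]).
        return torch.nn.functional.binary_cross_entropy_with_logits(x @ w, y)

    # Train to (approximately) the empirical risk minimizer theta-hat.
    opt = torch.optim.LBFGS([w], max_iter=100)
    def closure():
        opt.zero_grad()
        loss = loss_fn(w, X, y)
        loss.backward()
        return loss
    opt.step(closure)

    # Hessian of the training loss at theta-hat, plus a small damping term
    # (an assumed value; damping is only needed when the Hessian is ill-conditioned).
    H = torch.autograd.functional.hessian(lambda w_: loss_fn(w_, X, y), w.detach())
    H = H + 1e-3 * torch.eye(d)

    # s_test = H^{-1} grad_theta L(z_test, theta-hat)
    grad_test = torch.autograd.grad(loss_fn(w, x_test, y_test), w)[0]
    s_test = torch.linalg.solve(H, grad_test)

    # I_up,loss(z_i, z_test) = - grad_theta L(z_test)^T H^{-1} grad_theta L(z_i),
    # computed here as -grad_i @ s_test (equivalent, since H is symmetric).
    influences = torch.empty(n)
    for i in range(n):
        grad_i = torch.autograd.grad(loss_fn(w, X[i], y[i]), w)[0]
        influences[i] = -grad_i @ s_test

    # Negative influence: upweighting z_i would decrease the test loss (helpful point);
    # positive influence: upweighting z_i would increase it (harmful point).
    print("most helpful training points:", influences.argsort()[:5].tolist())
    print("most harmful training points:", influences.argsort(descending=True)[:5].tolist())

For models with many parameters, forming H_θ̂ explicitly like this is infeasible, which is why the paper turns to Hessian-vector products; a sketch of that estimation appears further below.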
Implementations and resources. The reference implementation can be found here: link. A Dockerfile with the required dependencies can be found here: https://hub.docker.com/r/pangwei/tf1.1/. The datasets for the experiments, together with a reproducible, executable, and Dockerized version of the scripts, are available at the Codalab link. There is also an unofficial Chainer implementation of the paper (which won the ICML 2017 best paper award), as well as a PyTorch reimplementation of influence functions. A recording of the talk "Understanding Black-box Predictions via Influence Functions, Pang Wei Koh, Percy Liang" is available from TechTalksTV on Vimeo.

The reimplementations calculate which training images had the largest effect on the classification of a given test image, precomputing the vectors needed to calculate the influence: it is faster to compute them once and keep them in RAM than to calculate them on-the-fly, because grad_z otherwise has to be calculated twice. Visualised, the output can look like this: the test image on the top left is the test image for which the influences were calculated, next to it are the most helpful training images for getting the correct test outcome of "ship", and the numbers above the images show the actual influence values that were calculated.

Appendix: deriving the influence function I_up,params. For completeness, the paper provides a standard derivation of I_up,params in the context of loss minimization (M-estimation). Upweighting a training point z by ε gives the perturbed minimizer

    θ̂_{ε,z} = argmin_θ (1/n) Σ_i L(z_i, θ) + ε L(z, θ),

and a first-order expansion of the optimality conditions around θ̂ yields

    I_up,params(z) := dθ̂_{ε,z}/dε |_{ε=0} = -H_θ̂⁻¹ ∇_θ L(z, θ̂).

Chaining this parameter change with the gradient of the test loss gives the I_up,loss expression above.
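To make the scaling idea concrete, here is a sketch of the stochastic estimation of s_test = H_θ̂⁻¹ ∇_θ L(z_test, θ̂) using only Hessian-vector products, in the spirit of the recursion used by the paper and the reimplementations above. The names model, loss_fn, and train_loader, and the hyperparameter values (damping, scale, recursion_depth), are assumptions, not the reference settings.

    import torch

    def hvp(loss, params, vec):
        """Hessian-vector product of `loss` w.r.t. `params` with the vector `vec`."""
        grads = torch.autograd.grad(loss, params, create_graph=True)
        dot = sum((g * v).sum() for g, v in zip(grads, vec))
        return torch.autograd.grad(dot, params)

    def estimate_s_test(model, loss_fn, z_test, train_loader,
                        damping=0.01, scale=25.0, recursion_depth=1000):
        """Iteratively approximate H^{-1} v with v = grad_theta L(z_test, theta-hat),
        using one sampled training batch per Hessian-vector product."""
        params = [p for p in model.parameters() if p.requires_grad]
        x_t, y_t = z_test                     # assumed to be a single, already-batched example
        v = torch.autograd.grad(loss_fn(model(x_t), y_t), params)
        h = [vi.clone() for vi in v]          # h_0 = v
        data_iter = iter(train_loader)
        for _ in range(recursion_depth):
            try:
                x, y = next(data_iter)
            except StopIteration:
                data_iter = iter(train_loader)
                x, y = next(data_iter)
            batch_loss = loss_fn(model(x), y)
            Hh = hvp(batch_loss, params, h)
            # h <- v + (1 - damping) * h - (H h) / scale
            # Damping and scaling keep the recursion stable on non-convex models.
            h = [vi + (1 - damping) * hi - Hhi / scale
                 for vi, hi, Hhi in zip(v, h, Hh)]
        # Undo the scaling of the Hessian to recover the estimate of H^{-1} v.
        return [hi / scale for hi in h]

    # Illustrative usage (names are placeholders):
    #   s_test = estimate_s_test(model, loss_fn, (x_test, y_test), train_loader)
    #   grad_z = torch.autograd.grad(loss_fn(model(x_i), y_i), params)
    #   influence_i = -sum((g * s).sum() for g, s in zip(grad_z, s_test))

Once the s_test vector for a test point has been estimated, the influence of each training point reduces to a single dot product with that point's gradient, which is why precomputing and caching these vectors pays off.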
