Posts by Collection

portfolio

Interpretability through Training Samples: Data Attribution for Diffusion Models

Data attribution methods help interpret neural network behavior by linking a model's predictions back to its training data. We extend the first-order influence approximation TracIn to diffusion models by incorporating denoising-timestep dynamics. We show that this influence estimate can be biased by training samples with dominating gradient norms. To address this, we introduce Diffusion-ReTrac, which applies a renormalization technique, enabling notably more localized influence estimation and targeted attribution of training samples.
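As a rough illustration of the idea (not the paper's exact formulation), TracIn scores a training sample by the dot product of its loss gradient with the test sample's gradient, so samples with very large gradient norms can dominate the ranking regardless of directional alignment. A minimal sketch, assuming per-sample gradient vectors are already available as rows of a matrix; the function names and the simple norm-division are illustrative assumptions:

```python
import numpy as np

def tracin_influence(train_grads, test_grad, lr=1.0):
    # Plain TracIn (single checkpoint): influence of each training sample
    # is the inner product of its loss gradient with the test gradient.
    return lr * train_grads @ test_grad

def renormalized_influence(train_grads, test_grad, lr=1.0, eps=1e-12):
    # Renormalized variant (illustrative): dividing by each training
    # sample's gradient norm prevents large-magnitude gradients from
    # dominating the influence ranking.
    norms = np.linalg.norm(train_grads, axis=1) + eps
    return lr * (train_grads @ test_grad) / norms

# Toy example: sample 0 has a huge gradient norm but is poorly aligned
# with the test gradient; sample 1 is small but perfectly aligned.
train_grads = np.array([[10.0, 0.0],
                        [0.6, 0.8]])
test_grad = np.array([0.6, 0.8])

raw = tracin_influence(train_grads, test_grad)        # sample 0 wins on magnitude
renorm = renormalized_influence(train_grads, test_grad)  # sample 1 wins on alignment
```

In the toy example, the raw score ranks the large-norm sample first, while the renormalized score ranks the well-aligned sample first, which is the kind of bias correction the abstract describes.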

publications

talks

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.