Hacking Generative Models with Differentiable Network Bending
In this paper, we 'hack' generative models by injecting a small, trainable module between intermediate layers of a pretrained model. Training only this module pushes the outputs away from the original training distribution and towards a new objective.
Dec 1, 2023
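
To make the idea concrete, here is a minimal sketch of the injection-and-train loop described in the abstract. Everything in it is an illustrative assumption, not the paper's actual setup: `TinyGenerator` is a hypothetical stand-in for a real pretrained generator (e.g. a GAN), `BendModule` is one plausible choice of small trainable module (a residual 1x1 convolution initialised to the identity), and the red-channel loss is a toy stand-in for a new objective.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained generator; in practice this
# would be a real model whose weights are loaded and frozen.
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        self.block2 = nn.Sequential(nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())

    def forward(self, z, bend=None):
        h = self.block1(z)
        if bend is not None:          # inject the bending module mid-network
            h = bend(h)
        return self.block2(h)

# Small trainable module: a residual 1x1 convolution, zero-initialised
# so training starts from the identity and gently perturbs the features.
class BendModule(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, h):
        return h + self.conv(h)

generator = TinyGenerator()
for p in generator.parameters():      # freeze the original model
    p.requires_grad_(False)

bend = BendModule(channels=32)
opt = torch.optim.Adam(bend.parameters(), lr=1e-3)

# Toy stand-in for a new objective: push mean output colour towards red.
for step in range(100):
    z = torch.randn(8, 64, 8, 8)
    img = generator(z, bend=bend)     # gradients flow only into `bend`
    loss = -img[:, 0].mean()          # maximise the red channel
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the generator stays frozen and the module is differentiable, gradients from the new objective flow back through the later layers into the injected module alone, which is what makes the bending both cheap to train and reversible (remove the module and the original model is untouched).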