
I am using the SciPy implementation of the kernel density estimate (KDE) (http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html), which is working fine so far. However, I would now like to obtain the gradient of the KDE at a particular point.

I have looked at the Python source for the library, but haven't been able to figure out how to easily implement this functionality. Is anybody aware of a method to do this?

If you look at the source you referenced, you'll see that the density estimate is built from contributions from all points in the dataset. Assume for the moment there is only one point `points[:,i]` you want to evaluate (lines 219–222):

```
diff = self.dataset - points[:, i, newaxis]
tdiff = dot(self.inv_cov, diff)
energy = sum(diff * tdiff, axis=0) / 2.0
result[i] = sum(exp(-energy), axis=0)
```

In matrix notation (no LaTeX available?), this would be written, for a single dataset point `D` and evaluation point `p`, as

```
d = D - p
t = Cov^-1 d
e = 1/2 d^T t
r = exp(-e)
```
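As a concrete numeric sketch of these four quantities (the dataset point, evaluation point, and covariance below are all made up for illustration):

```python
import numpy as np

# Hypothetical 2-D example: one dataset point D, one evaluation point p.
D = np.array([1.0, 2.0])
p = np.array([0.5, 1.5])
cov = np.array([[1.0, 0.2],
                [0.2, 1.0]])        # assumed bandwidth covariance
inv_cov = np.linalg.inv(cov)

d = D - p                           # difference vector
t = inv_cov @ d                     # Cov^-1 d
e = 0.5 * d @ t                     # quadratic form 1/2 d^T Cov^-1 d
r = np.exp(-e)                      # unnormalized Gaussian contribution
```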

The gradient you're looking for is `grad(r) = (dr/dx, dr/dy)`:

```
dr/dx = d(exp(-e))/dx
      = -de/dx exp(-e)
      = -d(1/2 d^T Cov^-1 d)/dx exp(-e)
      = -(Cov^-1 d) exp(-e)
```
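A quick numeric sanity check of this formula, treating `r` as a function of the components of `d` itself (the covariance and point below are assumed values):

```python
import numpy as np

# Check grad(r) = -(Cov^-1 d) * exp(-e) by central finite differences,
# differentiating with respect to the components of d.
inv_cov = np.array([[2.0, 0.3],
                    [0.3, 1.0]])    # assumed inverse covariance

def r(d):
    return np.exp(-0.5 * d @ inv_cov @ d)

d0 = np.array([0.4, -0.7])
analytic = -(inv_cov @ d0) * r(d0)

h = 1e-6
numeric = np.array([(r(d0 + h * e) - r(d0 - h * e)) / (2 * h)
                    for e in np.eye(2)])
```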

Likewise for `dr/dy`. Hence all you need to do is calculate the term `Cov^-1 d` and multiply it with the result you already obtained.

```
result = zeros((self.d, m), dtype=float)
[...]
diff = self.dataset - points[:, i, newaxis]
tdiff = dot(self.inv_cov, diff)
energy = sum(diff * tdiff, axis=0) / 2.0
grad = dot(self.inv_cov, diff)
result[:, i] = sum(grad * exp(-energy), axis=1)
```
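Putting this together as a standalone helper rather than a patch to the library source (a sketch only: `kde_gradient` is not SciPy API, it assumes an unweighted KDE, and it differentiates with respect to the evaluation point, which accounts for the dropped `-1` mentioned at the end):

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_gradient(kde, x):
    """Gradient of an (unweighted) gaussian_kde at a single point x.

    Since d = D - x, differentiating with respect to x rather than d
    flips the sign, hence no minus in front of tdiff below.
    """
    x = np.asarray(x, dtype=float).reshape(kde.d, 1)
    diff = kde.dataset - x                      # d = D - x, shape (d, n)
    tdiff = kde.inv_cov @ diff                  # Cov^-1 d per data point
    energy = np.sum(diff * tdiff, axis=0) / 2.0
    # Normalization matching gaussian_kde's unweighted estimate.
    norm = np.sqrt(np.linalg.det(2 * np.pi * kde.covariance)) * kde.n
    return (tdiff @ np.exp(-energy)) / norm     # shape (d,)

# Finite-difference sanity check against kde.evaluate.
rng = np.random.default_rng(0)
kde = gaussian_kde(rng.normal(size=(2, 200)))
x = np.array([0.3, -0.1])
g = kde_gradient(kde, x)
h = 1e-6
fd = np.array([(kde(x + h * e) - kde(x - h * e))[0] / (2 * h)
               for e in np.eye(2)])
```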

For some reason I needed to drop the `-1` when calculating `grad` to obtain results consistent with evaluating the density estimate at `p` and `p ± delta` in all four directions, which is a sign I might of course be way off here. The likely explanation: since `d = D - p`, differentiating with respect to `p` rather than `d` contributes an extra factor of `-1` (`dd/dp = -I`), which cancels the minus sign in the derivation above.