 # Python: Gridsearch Without Machine Learning?

I want to optimize an algorithm that has several variable parameters as input.

For machine learning tasks, scikit-learn offers hyperparameter optimization via its grid search functionality (`GridSearchCV`).

Is there a standardized way / library in Python for optimizing parameters that is not limited to machine learning topics?

### Answer1:

You can create a custom pipeline/estimator (see [Rolling your own estimator](http://scikit-learn.org/dev/developers/contributing.html#rolling-your-own-estimator)) with a `score` method to compare the results.

`ParameterGrid` might help you too. It will automatically enumerate all hyper-parameter combinations.
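For example, a minimal sketch of using `ParameterGrid` outside of any estimator (the `objective` function and the parameter names/values here are hypothetical placeholders for your own algorithm):

```
from sklearn.model_selection import ParameterGrid

# Hypothetical objective -- replace with your algorithm's quality measure.
def objective(a, b):
    return (a - 3) ** 2 + (b - 0.5) ** 2

grid = ParameterGrid({'a': [1, 2, 3, 4], 'b': [0.0, 0.25, 0.5, 0.75]})

best_params, best_score = None, float('inf')
for params in grid:  # each entry is a dict, e.g. {'a': 1, 'b': 0.0}
    score = objective(**params)
    if score < best_score:
        best_params, best_score = params, score

print(best_params, best_score)
```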

### Answer2:

You might consider scipy's `optimize.brute`, which is essentially the same idea, though less constrained in its API: you just have to define a function that returns a scalar.

> Minimize a function over a given range by brute force.
>
> Uses the “brute force” method, i.e. computes the function’s value at each point of a multidimensional grid of points, to find the global minimum of the function.

Shameless example copied from the docs:

### Code

```
import numpy as np
from scipy import optimize

params = (2, 3, 7, 8, 9, 10, 44, -1, 2, 26, 1, -2, 0.5)

def f1(z, *params):
    x, y = z
    a, b, c, d, e, f, g, h, i, j, k, l, scale = params
    return (a * x**2 + b * x * y + c * y**2 + d*x + e*y + f)

def f2(z, *params):
    x, y = z
    a, b, c, d, e, f, g, h, i, j, k, l, scale = params
    return (-g*np.exp(-((x-h)**2 + (y-i)**2) / scale))

def f3(z, *params):
    x, y = z
    a, b, c, d, e, f, g, h, i, j, k, l, scale = params
    return (-j*np.exp(-((x-k)**2 + (y-l)**2) / scale))

def f(z, *params):
    return f1(z, *params) + f2(z, *params) + f3(z, *params)

rranges = (slice(-4, 4, 0.25), slice(-4, 4, 0.25))
resbrute = optimize.brute(f, rranges, args=params, full_output=True,
                          finish=optimize.fmin)
print(resbrute[:2])  # minimizer x0 and the function value at the minimum
```

### Out

```
(array([-1.05665192,  1.80834843]), -3.4085818767996527)
```

Brute-force search is not much black magic, and rolling your own implementation is often reasonable (a minimal sketch follows below). The scipy example above has **one more interesting feature**:

> `finish` : callable, optional
>
> An optimization function that is called with the result of brute force minimization as initial guess. `finish` should take `func` and the initial guess as positional arguments, and take `args` as keyword arguments. It may additionally take `full_output` and/or `disp` as keyword arguments. Use `None` if no “polishing” function is to be used. See Notes for more details.

which I would recommend for most use cases (in continuous spaces). But be sure to get a minimal understanding of what it does, because there are use cases where you don't want it (e.g., when the result must stay on the discrete grid, or when function evaluations are slow).
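As mentioned above, a hand-rolled grid search is straightforward. Here is a minimal sketch over a discrete grid (the `objective` function and parameter values are hypothetical); like `finish=None`, it keeps the result on the grid:

```
import itertools

# Hypothetical scalar objective -- replace with your own algorithm.
def objective(a, b):
    return (a - 3) ** 2 + abs(b)

a_values = [1, 2, 3, 4]
b_values = [-1.0, 0.0, 1.0]

# Evaluate every combination and keep the best one.
best = min(
    itertools.product(a_values, b_values),
    key=lambda params: objective(*params),
)
print(best, objective(*best))
```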

If you are using sklearn, you already have scipy installed (it's a dependency).

**Edit:** here is a small plot I created (code) to show what `finish` is doing (local optimization) with a 1d example (not the best example, but easier to plot):

![enter image description here](https://i.stack.imgur.com/pDOeG.png)
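This is not the original code behind the plot, but a minimal sketch of the same idea, assuming a made-up 1d objective `g` with several local minima:

```
import numpy as np
from scipy import optimize

# Made-up 1d objective with several local minima (hypothetical).
def g(x):
    x = np.atleast_1d(x)[0]  # brute/fmin pass a length-1 array in 1d
    return np.sin(3 * x) + 0.1 * x ** 2

rng = (slice(-4, 4, 0.5),)  # deliberately coarse grid

# Pure grid search: the result is one of the grid points.
x_grid = optimize.brute(g, rng, finish=None)

# Grid search plus local polishing: fmin refines the best grid point.
x_polished = optimize.brute(g, rng, finish=optimize.fmin)

print(x_grid, x_polished)
```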
