Reference

Quick Summary

These are the methods and attributes you will use most often:

Minuit(fcn[, grad, errordef, print_level, …]) Construct a Minuit object from the given fcn.
Minuit.from_array_func(type cls, fcn, start) Construct a Minuit object from the given fcn and start sequence.
Minuit.migrad(self[, ncall, resume, …]) Run MIGRAD.
Minuit.hesse(self[, ncall]) Run HESSE to compute parabolic errors.
Minuit.minos(self[, var, sigma, ncall]) Run MINOS to compute asymmetric confidence intervals.
Minuit.values values: iminuit._libiminuit.ValueView Parameter values in a dict-like object.
Minuit.fixed fixed: iminuit._libiminuit.FixedView Access fixation state of a parameter in a dict-like object.
Minuit.valid Check if function minimum is valid.
Minuit.accurate Check if covariance (of the last MIGRAD run) is accurate.
Minuit.fval Last evaluated FCN value
Minuit.nfit Number of fitted parameters (fixed parameters not counted).
Minuit.np_values(self) Parameter values in numpy array format.
Minuit.np_covariance(self) Covariance matrix in numpy array format.
Minuit.np_errors(self) Hesse parameter errors in numpy array format.
Minuit.np_merrors(self) MINOS parameter errors in numpy array format.
Minuit.mnprofile(self, vname[, bins, bound, …]) Calculate MINOS profile around the specified range.
Minuit.draw_mnprofile(self, vname[, bins, …]) Draw MINOS profile in the specified range.

Minuit

class iminuit.Minuit(fcn, grad=None, errordef=None, print_level=0, name=None, pedantic=True, throw_nan=False, use_array_call=False, **kwds)

Construct a Minuit object from the given fcn.

Arguments:

fcn, the function to be optimized, is the only required argument.

Two kinds of function signatures are understood.

  1. Parameters passed as positional arguments

The function has several positional arguments, one for each fit parameter. Example:

def func(a, b, c): ...

The parameters a, b, c must accept a real number.

iminuit automagically detects the parameter names in this case. More information about how the function signature is detected can be found in Function Signature Extraction Ordering below.

  2. Parameters passed as a Numpy array

The function has a single argument which is a Numpy array. Example:

def func(x): ...

Pass the keyword use_array_call=True to use this signature. For more information, see “Parameter Keyword Arguments” further down.

If you work with array parameters a lot, have a look at the static initializer method from_array_func(), which adds some convenience and safety to this use case.
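A minimal sketch contrasting the two call styles (the functions f and f_np and all values below are illustrative, not part of the official documentation):

from iminuit import Minuit

# 1) parameters as positional arguments; names are detected automatically
def f(a, b):
    return (a - 1) ** 2 + (b - 2) ** 2

m1 = Minuit(f, a=0, b=0, error_a=0.1, error_b=0.1, pedantic=False)

# 2) parameters packed into a single array; names must be given explicitly
def f_np(par):
    return (par[0] - 1) ** 2 + (par[1] - 2) ** 2

m2 = Minuit(f_np, use_array_call=True, name=("a", "b"),
            a=0, b=0, error_a=0.1, error_b=0.1, pedantic=False)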

Builtin Keyword Arguments:

  • throw_nan: raise a RuntimeError when fcn returns nan. (Default False)
  • pedantic: warns about parameters that do not have initial value or initial error/stepsize set.
  • name: sequence of strings. If set, this is used to detect parameter names instead of iminuit’s function signature detection.
  • print_level: set the print_level for this Minuit. 0 is quiet. 1 prints output at the end of MIGRAD/HESSE/MINOS. 2 prints debug messages.
  • errordef: Optional. See errordef for details on this parameter. If set to None (the default), Minuit will try to call fcn.errordef and fcn.default_errordef() (deprecated) to set the error definition. If this fails, a warning is raised and a value appropriate for a least-squares function is used.
  • grad: Optional. Provide a function that calculates the gradient analytically and returns an iterable object with one element for each dimension. If None is given, MINUIT will calculate the gradient numerically. (Default None)
  • use_array_call: Optional. Set this to True if your function signature accepts a single numpy array of the parameters. You then also need to pass the name keyword to explicitly name the parameters.

Parameter Keyword Arguments:

iminuit allows the user to set the initial value, the initial step size/error, and the limits of parameters, and to fix parameters, by passing keyword arguments to Minuit.

This is best explained through examples:

def f(x, y):
    return (x-2)**2 + (y-3)**2
  • Initial value (varname):

    #initial value for x and y
    m = Minuit(f, x=1, y=2)
    
  • Initial step size (error_varname):

    #initial step size for x and y
    m = Minuit(f, error_x=0.5, error_y=0.5)
    
  • Limits (limit_varname=tuple):

    #limits x and y
    m = Minuit(f, limit_x=(-10,10), limit_y=(-20,20))
    
  • Fixing parameters:

    #fix x but vary y
    m = Minuit(f, fix_x=True)
    
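The keywords above can be combined freely in a single constructor call. A minimal sketch (values are illustrative):

from iminuit import Minuit

def f(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2

# start x at 1 with step 0.5 and limits; fix y at its starting value 3
m = Minuit(f, x=1, error_x=0.5, limit_x=(-10, 10),
           y=3, fix_y=True,
           pedantic=False)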

Note

You can use dictionary expansion to programmatically set parameters:

kwargs = dict(x=1., error_x=0.5)
m = Minuit(f, **kwargs)

You can also obtain fit arguments from the Minuit object for later reuse. fitarg is automatically updated to the minimum values and the corresponding errors when you run migrad/hesse:

m = Minuit(f, x=1, error_x=0.5)
my_fitarg = m.fitarg
another_fit = Minuit(f, **my_fitarg)
LEAST_SQUARES = 1.0
LIKELIHOOD = 0.5
accurate

Check if covariance (of the last MIGRAD run) is accurate.

args

args: iminuit._libiminuit.ArgsView Parameter values in a list-like object.

See values for details.

See also

values, errors, fixed

contour(self, x, y, bins=50, bound=2, subtract_min=False, **deprecated_kwargs)

2D contour scan.

Return the contour of a function scan over x and y, while keeping all other parameters fixed.

The related mncontour() works differently: for each new pair of x and y in the scan, it minimises the function with respect to the other parameters.

This method is useful to inspect the function near the minimum to detect issues (the contours should look smooth). Use mncontour() to create confidence regions for the parameters. If the fit has only two free parameters, you can use this instead of mncontour().

Arguments:

  • x variable name for X axis of scan
  • y variable name for Y axis of scan
  • bound If bound is a 2x2 array, it is taken as [[v1min, v1max], [v2min, v2max]]. If bound is a number, it specifies how many \(\sigma\) to scan symmetrically around the minimum (minimum ± bound * \(\sigma\)). Default: 2.
  • subtract_min Subtract the minimum from the returned values. Default False.

Returns:

x_bins, y_bins, values

values[y, x]; this index order is chosen so that the result can be passed directly to matplotlib's contour().
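A hedged sketch of feeding the return values to matplotlib (assumes matplotlib is installed; the cost function is illustrative):

import matplotlib.pyplot as plt
from iminuit import Minuit

def f(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2 + (x - y) ** 2

m = Minuit(f, pedantic=False)
m.migrad()

# scan the cost function itself; other parameters (if any) stay fixed
x_bins, y_bins, values = m.contour("x", "y", subtract_min=True)
plt.contour(x_bins, y_bins, values)  # values[y, x] matches matplotlib's convention
plt.xlabel("x")
plt.ylabel("y")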

covariance

Covariance matrix (dict (name1, name2) -> covariance).

See also

matrix()

draw_contour(self, x, y, bins=50, bound=2, **deprecated_kwargs)

Convenience wrapper for drawing contours.

The arguments are the same as contour().

Please read the docs of contour() and mncontour() to understand the difference between the two.

draw_mncontour(self, x, y, nsigma=2, numpoints=100)

Draw MINOS contour.

Arguments:

  • x, y parameter name
  • nsigma number of sigma contours to draw
  • numpoints number of points to calculate for each contour

Returns:

contour

See also

mncontour()

from iminuit import Minuit


def cost(x, y, z):
    return (x - 1) ** 2 + (y - x) ** 2 + (z - 2) ** 2


m = Minuit(cost, print_level=0, pedantic=False)
m.migrad()
m.draw_mncontour("x", "y", nsigma=4)

(Figure: MINOS contours of x and y drawn by draw_mncontour.)
draw_mnprofile(self, vname, bins=30, bound=2, subtract_min=False, band=True, text=True)

Draw MINOS profile in the specified range.

It is obtained by finding MIGRAD results with vname fixed at various places within bound.

Arguments:

  • vname variable name to scan
  • bins number of scanning bins. Default 30.
  • bound If bound is a tuple, it is the (left, right) scanning bound. If bound is a number, it specifies how many \(\sigma\) to scan symmetrically around the minimum (minimum ± bound * \(\sigma\)). Default 2.
  • subtract_min Subtract the minimum from the returned values. This makes it easy to label the confidence interval. Default False.
  • band show green band to indicate the increase of fcn by errordef. Default True.
  • text show text for the location where the fcn is increased by errordef. This is less accurate than minos(). Default True.

Returns:

bins (center points), values, MIGRAD results
from iminuit import Minuit


def cost(x, y, z):
    return (x - 1) ** 2 + (y - x) ** 2 + (z - 2) ** 2


m = Minuit(cost, pedantic=False)
m.migrad()
m.draw_mnprofile("y")

(Figure: MINOS profile of y drawn by draw_mnprofile.)
draw_profile(self, vname, bins=100, bound=2, subtract_min=False, band=True, text=True, **deprecated_kwargs)

A convenience wrapper for drawing the profile using matplotlib.

A 1D scan of the cost function around the minimum, useful to inspect the minimum and the FCN around the minimum for defects.

For a fit with several free parameters this is not the same as the MINOS profile computed by draw_mncontour(). Use mnprofile() or draw_mnprofile() to compute confidence intervals.

If a function minimum was found in a previous MIGRAD call, a vertical line indicates the parameter value. An optional band indicates the uncertainty interval of the parameter computed by HESSE or MINOS.

Arguments:

In addition to the arguments listed for profile(), draw_profile() takes these additional arguments:

  • band show a green band to indicate the increase of fcn by errordef. Note again that this is NOT the MINOS error in general. Default True.
  • text show text at the location where the fcn is increased by errordef. This is less accurate than minos(). Note again that this is NOT the MINOS error in general. Default True.
errordef

FCN increment above the minimum that corresponds to one standard deviation.

Default value is 1.0. errordef should be 1.0 for a least-squares cost function and 0.5 for a negative log-likelihood function. See page 37 of http://hep.fi.infn.it/minuit.pdf. This parameter is sometimes called UP in the MINUIT docs.

To make user code more readable, we provide two named constants:

from iminuit import Minuit
assert Minuit.LEAST_SQUARES == 1
assert Minuit.LIKELIHOOD == 0.5

Minuit(a_least_squares_function, errordef=Minuit.LEAST_SQUARES)
Minuit(a_likelihood_function, errordef=Minuit.LIKELIHOOD)
errors

errors: iminuit._libiminuit.ErrorView Parameter parabolic errors in a dict-like object.

Like values, but instead of reading or writing the values, you read or write the errors (which double as step sizes for MINUIT's numerical gradient estimation).

See also

values, fixed

fcn

Cost function (usually a chi^2 or likelihood function).

fitarg

Current Minuit state in form of a dict.

  • name -> value
  • error_name -> error
  • fix_name -> fix
  • limit_name -> (lower_limit, upper_limit)

This is very useful when you want to save the fit parameters and re-use them later. For example:

m = Minuit(f, x=1)
m.migrad()
fitarg = m.fitarg

m2 = Minuit(f, **fitarg)
fixed

fixed: iminuit._libiminuit.FixedView Access fixation state of a parameter in a dict-like object.

Use this to read or write the fixation state of a parameter based on the parameter index or the parameter name as a string. If you change the state and run migrad(), hesse(), or minos(), the new state is used.

In case of complex fits, it can help to fix some parameters first and only minimize the function with respect to the other parameters, then release the fixed parameters and minimize again starting from that state.
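A minimal sketch of this fix-then-release strategy (the cost function and starting values are illustrative):

from iminuit import Minuit

def f(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2 + 0.1 * (x - y) ** 2

m = Minuit(f, x=0, y=0, pedantic=False)

m.fixed["x"] = True   # first minimize over y only
m.migrad()

m.fixed["x"] = False  # then release x and refine, starting from the previous state
m.migrad()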

See also

values, errors

fmin

Current function minimum data object

from_array_func(type cls, fcn, start, error=None, limit=None, fix=None, name=None, **kwds)

Construct a Minuit object from the given fcn and start sequence.

This is an alternative named constructor for the Minuit object. It is more convenient to use for functions that accept a numpy array.

Arguments:

fcn: The function to be optimized. Must accept a single parameter that is a numpy array.

def func(x): …

start: Sequence of numbers. Starting point for the minimization.

Keyword arguments:

error: Optional sequence of numbers. Initial step sizes. Scalars are automatically broadcasted to the length of the start sequence.

limit: Optional sequence of limits that restrict the range in which a parameter is varied by minuit. Limits can be set in several ways. With inf = float(“infinity”) we get:

  • No limit: None, (-inf, inf), (None, None)
  • Lower limit: (x, None), (x, inf) [replace x with a number]
  • Upper limit: (None, x), (-inf, x) [replace x with a number]

A single limit is automatically broadcasted to the length of the start sequence.

fix: Optional sequence of boolean values. Whether to fix a parameter to the starting value.

name: Optional sequence of parameter names. If names are not specified, the parameters are called x0, …, xN.

All other keywords are forwarded to Minuit, see its documentation.

Example:

A simple example function is passed to Minuit. It accepts a numpy array of the parameters. Initial starting values and error estimates are given:

from iminuit import Minuit
import numpy as np

def f(x):
    mu = (2, 3)
    return np.sum((x-mu)**2)

# error is automatically broadcasted to (0.5, 0.5)
m = Minuit.from_array_func(f, (2, 3),
                           error=0.5)
fval

Last evaluated FCN value

See also

fmin()

gcc

Global correlation coefficients (dict : name -> gcc).

grad

Gradient function of the cost function.

hesse(self, ncall=None, **deprecated_kwargs)

Run HESSE to compute parabolic errors.

HESSE estimates the covariance matrix by inverting the matrix of second derivatives (Hesse matrix) at the minimum. This covariance matrix is valid if your \(\chi^2\) or likelihood profile looks like a hyperparabola around the minimum. This is usually the case, especially when you fit many observations (in the limit of infinite samples this is always the case). If you want to know how your parameters are correlated, you also need to use HESSE.

Also see minos(), which computes the uncertainties in a different way.

Arguments:
  • ncall: integer or None, limit the number of calls made by HESSE. Default: None (uses an internal heuristic of C++ MINUIT).

Returns:

list of current parameter data objects (see Parameter Data Object below).

init_params

List of current parameter data objects set to the initial fit state

is_clean_state(self)

Check if Minuit is in a clean state, i.e. no MIGRAD call has been made yet.

latex_initial_param(self)

Build an iminuit.latex.LatexTable for the initial parameters.

latex_matrix(self)

Build LatexFactory object with correlation matrix.

latex_param(self)

Build an iminuit.latex.LatexTable for the current parameters.

matrix(self, correlation=False, skip_fixed=True)

Error or correlation matrix in tuple-of-tuples format.

merrors

MINOS errors.

migrad(self, ncall=None, resume=True, precision=None, iterate=5, **deprecated_kwargs)

Run MIGRAD.

MIGRAD is a robust minimisation algorithm which earned its reputation in 40+ years of almost exclusive usage in high-energy physics. How MIGRAD works is described in the MINUIT paper.

Arguments:

  • ncall: integer or None, optional; (approximate) maximum number of calls before MIGRAD will stop trying. Default: None (indicates to use MIGRAD’s internal heuristic). Note: MIGRAD may slightly violate this limit, because it checks the condition only after a full iteration of the algorithm, which usually performs several function calls.
  • resume: boolean indicating whether MIGRAD should resume from the previous minimiser attempt (True) or should start from the beginning (False). Default True.
  • precision: override Minuit precision estimate for the cost function. Default: None (= use epsilon of a C++ double). If the cost function has a lower precision (e.g. of a C++ float), setting this to a lower value will accelerate convergence and reduce the rate of unsuccessful convergence.
  • iterate: automatically call Migrad up to N times if convergence was not reached. Default: 5. This simple heuristic makes Migrad converge more often even if the numerical precision of the cost function is low. Setting this to 1 disables the feature.

Return:

MigradResult, a named tuple (fmin, params); see the Function Minimum Data Object and Parameter Data Object sections below.

minos(self, var=None, sigma=1., ncall=None, **deprecated_kwargs)

Run MINOS to compute asymmetric confidence intervals.

MINOS uses the profile likelihood method to compute (asymmetric) confidence intervals. It scans the negative log-likelihood or (equivalently) the least-squares cost function around the minimum to construct an asymmetric confidence interval. This interval may be more reasonable when a parameter is close to one of its parameter limits. As a rule-of-thumb: when the confidence intervals computed with HESSE and MINOS differ strongly, the MINOS intervals are to be preferred. Otherwise, HESSE intervals are preferred.

Running MINOS is computationally expensive when there are many fit parameters. Effectively, it scans over var in small steps and runs MIGRAD to minimise the FCN with respect to all other free parameters at each point. This requires many more FCN evaluations than running HESSE.

Arguments:

  • var: optional variable name to compute the error for. If var is not given, MINOS is run for every variable.
  • sigma: number of \(\sigma\) error. Default 1.0.
  • ncall: integer or None, limit the number of calls made by MINOS. Default: None (uses an internal heuristic by C++ MINUIT).

Returns:

Dictionary of varname to Minos Data Object, containing all errors computed so far, including the current request.
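A minimal sketch of running MINOS and reading the asymmetric errors afterwards (the cost function is illustrative); the same information is also available through the merrors attribute:

from iminuit import Minuit

def f(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2

m = Minuit(f, pedantic=False)
m.migrad()
m.minos()  # run MINOS for all parameters

print(m.merrors["x"].lower, m.merrors["x"].upper)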
mncontour(self, x, y, int numpoints=100, sigma=1.0)

Two-dimensional MINOS contour scan.

This scans over x and y and minimises all other free parameters in each scan point. This works as if x and y are fixed, while the other parameters are minimised by MIGRAD.

This scan produces a statistical confidence region with the profile likelihood method. The contour line represents the values of x and y where the function passes the threshold that corresponds to sigma standard deviations (note that 1 standard deviation in two dimensions has a smaller coverage probability than 68 %).

The calculation is expensive since it has to run MIGRAD at various points.

Arguments:

  • x string variable name of the first parameter
  • y string variable name of the second parameter
  • numpoints number of points on the line to find. Default 100.
  • sigma number of sigma for the contour line. Default 1.0.

Returns:

x MINOS error struct, y MINOS error struct, contour line

contour line is a list of the form [[x1,y1]…[xn,yn]]

mnprofile(self, vname, bins=30, bound=2, subtract_min=False)

Calculate MINOS profile around the specified range.

Scans over vname and minimises the FCN over the other parameters at each point.

Arguments:

  • vname name of variable to scan
  • bins number of scanning bins. Default 30.
  • bound If bound is a tuple, it is the (left, right) scanning bound. If bound is a number, it specifies how many \(\sigma\) to scan symmetrically around the minimum (minimum ± bound * \(\sigma\)). Default 2.
  • subtract_min Subtract the minimum from the returned values. This makes it easy to label the confidence interval. Default False.

Returns:

bins (center points), values, MIGRAD results
narg

Number of parameters.

ncalls_total

Total number of calls to FCN (not just the last operation)

nfit

Number of fitted parameters (fixed parameters not counted).

ngrads_total

Total number of calls to Gradient (not just the last operation)

np_covariance(self)

Covariance matrix in numpy array format.

Fixed parameters are included, the order follows parameters.

Returns:

numpy.ndarray of shape (N,N) (not a numpy.matrix).
np_errors(self)

Hesse parameter errors in numpy array format.

Fixed parameters are included, the order follows parameters.

Returns:

numpy.ndarray of shape (N,).
np_matrix(self, **kwds)

Covariance or correlation matrix in numpy array format.

Keyword arguments are forwarded to matrix().

The name of this function was chosen to be analogous to matrix(), it returns the same information in a different format. For documentation on the arguments, please see matrix().

Returns:

2D numpy.ndarray of shape (N,N) (not a numpy.matrix).
np_merrors(self)

MINOS parameter errors in numpy array format.

Fixed parameters are included (zeros are returned), the order follows parameters.

The format of the produced array follows matplotlib conventions, as in matplotlib.pyplot.errorbar. The shape is (2, N) for N parameters. The first row represents the downward error as a positive offset from the center. Likewise, the second row represents the upward error as a positive offset from the center.

Returns:

numpy.ndarray of shape (2, N).
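For example, a hedged sketch of plotting the fitted values with their MINOS errors (assumes matplotlib; the cost function is illustrative):

import matplotlib.pyplot as plt
from iminuit import Minuit

def f(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2

m = Minuit(f, pedantic=False)
m.migrad()
m.minos()  # np_merrors() requires MINOS results

# shape (2, N) matches the yerr convention of matplotlib.pyplot.errorbar
plt.errorbar(range(m.narg), m.np_values(), yerr=m.np_merrors(), fmt="o")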
np_values(self)

Parameter values in numpy array format.

Fixed parameters are included, the order follows parameters.

Returns:

numpy.ndarray of shape (N,).
parameters

Parameter name tuple

params

List of current parameter data objects

pos2var

Map variable position to name

print_level

Current print level.

  • 0: quiet
  • 1: print minimal debug messages to terminal
  • 2: print more debug messages to terminal
  • 3: print even more debug messages to terminal

Note: Setting the level to 3 has a global side effect on all current instances of Minuit (this is an issue in C++ MINUIT2).

profile(self, vname, bins=100, bound=2, subtract_min=False, **deprecated_kwargs)

Calculate the cost function profile around the specified range.

Arguments:

  • vname variable name to scan
  • bins number of scanning bins. Default 100.
  • bound If bound is a tuple, it is the (left, right) scanning bound. If bound is a number, it specifies how many \(\sigma\) to scan symmetrically around the minimum (minimum ± bound * \(\sigma\)). Default: 2.
  • subtract_min Subtract the minimum from the returned values. This makes it easy to label the confidence interval. Default False.

Returns:

bins (center points), values

See also

mnprofile()

strategy

strategy: ‘unsigned int’ Current minimization strategy.

0: Fast. Does not check a user-provided gradient. Does not improve Hesse matrix at minimum. Extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.

1: Default. Checks user-provided gradient against numerical gradient. Checks and usually improves Hesse matrix at minimum. Extra call to hesse() after migrad() is usually superfluous. If you pass a user-provided gradient to MINUIT, convergence is slower.

2: Careful. Like 1, but does extra checks of intermediate Hessian matrix during minimization. The effect in benchmarks is a somewhat improved accuracy at the cost of more function evaluations. A similar effect can be achieved by reducing the tolerance tol for convergence at any strategy level.

throw_nan

Boolean. Whether to raise a RuntimeError if the function evaluates to nan.

tol

tol: ‘double’ Tolerance for convergence.

The main convergence criterion of MINUIT is edm < edm_max, where edm_max is calculated as edm_max = 0.002 * tol * errordef and edm is the estimated distance to the minimum, as described in the MINUIT paper.
use_array_call

Boolean. Whether to pass parameters as numpy array to cost function.

valid

Check if function minimum is valid.

values

values: iminuit._libiminuit.ValueView Parameter values in a dict-like object.

Use this to read or write the current parameter values based on the parameter index or the parameter name as a string. If you change a parameter value and run migrad(), the minimization will start from that value; the same holds for hesse() and minos().
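A short sketch of reading and writing values by name or by index (the function f is illustrative):

from iminuit import Minuit

def f(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2

m = Minuit(f, x=0, y=0, pedantic=False)
print(m.values["x"], m.values[1])  # read by name or by position

m.values["x"] = 1.5  # write a new starting value
m.migrad()           # the minimization starts from the updated values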

See also

errors, fixed

var2pos

Map variable name to position

Cost functions

Standard cost functions to minimize.

class iminuit.cost.BinnedNLL(n, xe, cdf, verbose=0)

Binned negative log-likelihood.

Use this if only the shape of the fitted PDF is of interest and the data is binned.

Parameters

n: array-like
Histogram counts.
xe: array-like
Bin edge locations, must be len(n) + 1.
cdf: callable
Cumulative distribution function of the form f(xe, par0, par1, …, parN), where xe is a bin edge and par0, … parN are model parameters.
verbose: int, optional

Verbosity level

  • 0: no output (default)
  • 1: print current args and negative log-likelihood value
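A minimal sketch of a binned shape fit with BinnedNLL (assumes numpy and scipy are available; the generated data, bin count, and starting values are illustrative):

import numpy as np
from scipy.stats import norm
from iminuit import Minuit
from iminuit.cost import BinnedNLL

rng = np.random.default_rng(1)
data = rng.normal(1.0, 2.0, size=1000)
n, xe = np.histogram(data, bins=50)

def cdf(xe, mu, sigma):
    # cumulative distribution function evaluated at the bin edges
    return norm.cdf(xe, mu, sigma)

cost = BinnedNLL(n, xe, cdf)
m = Minuit(cost, mu=0.0, sigma=1.0, error_mu=0.1, error_sigma=0.1,
           limit_sigma=(0.01, None), pedantic=False)
m.migrad()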
class iminuit.cost.Cost(args, verbose, errordef)

Common base class for cost functions.

Attributes

mask : array-like or None
If not None, only values selected by the mask are considered. The mask acts on the first dimension of a value array, i.e. values[mask]. Default is None.
verbose : int
Verbosity level. Default is 0.
errordef : int
Error definition constant used by Minuit. For internal use.
class iminuit.cost.ExtendedBinnedNLL(n, xe, scaled_cdf, verbose=0)

Binned extended negative log-likelihood.

Use this if shape and normalization of the fitted PDF are of interest and the data is binned.

Parameters

n: array-like
Histogram counts.
xe: array-like
Bin edge locations, must be len(n) + 1.
scaled_cdf: callable
Scaled cumulative distribution function of the form f(xe, par0, par1, …, parN), where xe is a bin edge and par0, … parN are model parameters.
verbose: int, optional

Verbosity level

  • 0: no output (default)
  • 1: print current args and negative log-likelihood value
class iminuit.cost.ExtendedUnbinnedNLL(data, scaled_pdf, verbose=0)

Unbinned extended negative log-likelihood.

Use this if shape and normalization of the fitted PDF are of interest and the original unbinned data is available.

Parameters

data: array-like
Sample of observations.
scaled_pdf: callable
Scaled probability density function of the form f(data, par0, par1, …, parN), where data is the data sample and par0, … parN are model parameters. Must return a tuple (<integral over f in data range>, <f evaluated at data points>).
verbose: int, optional

Verbosity level

  • 0: no output (default)
  • 1: print current args and negative log-likelihood value
class iminuit.cost.LeastSquares(x, y, yerror, model, loss='linear', verbose=0)

Least-squares cost function (aka chisquare function).

Use this if you have data of the form (x, y +/- yerror).

Parameters

x: array-like
Locations where the model is evaluated.
y: array-like
Observed values. Must have the same length as x.
yerror: array-like or float
Estimated uncertainty of observed values. Must have the same shape as y or be a scalar, which is then broadcast to the same shape as y.
model: callable
Function of the form f(x, par0, par1, …, parN) whose output is compared to observed values, where x is the location and par0, … parN are model parameters.
loss: str or callable, optional

The loss function can be modified to make the fit robust against outliers, see scipy.optimize.least_squares for details. Only “linear” (default) and “soft_l1” are currently implemented, but users can pass any loss function as this argument. It should be a monotonic, twice differentiable function, which accepts the squared residual and returns a modified squared residual.

(Figure: comparison of the built-in loss functions.)
verbose: int, optional

Verbosity level

  • 0: no output (default)
  • 1: print current args and negative log-likelihood value
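A minimal sketch of a straight-line fit with LeastSquares (assumes numpy; the data and starting values are illustrative):

import numpy as np
from iminuit import Minuit
from iminuit.cost import LeastSquares

def line(x, a, b):
    return a + b * x

data_x = np.array([1.0, 2.0, 3.0, 4.0])
data_y = np.array([1.1, 1.9, 3.2, 4.1])
yerror = 0.1  # a scalar is broadcast to the shape of data_y

cost = LeastSquares(data_x, data_y, yerror, line)
m = Minuit(cost, a=0.0, b=1.0, error_a=0.1, error_b=0.1, pedantic=False)
m.migrad()
m.hesse()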
class iminuit.cost.UnbinnedNLL(data, pdf, verbose=0)

Unbinned negative log-likelihood.

Use this if only the shape of the fitted PDF is of interest and the original unbinned data is available.

Parameters

data: array-like
Sample of observations.
pdf: callable
Probability density function of the form f(data, par0, par1, …, parN), where data is the data sample and par0, … parN are model parameters.
verbose: int, optional

Verbosity level

  • 0: no output (default)
  • 1: print current args and negative log-likelihood value

minimize

The iminuit.minimize() function provides the same interface as scipy.optimize.minimize(). If you are familiar with the latter, this allows you to get started with Minuit quickly. Eventually, you may still want to learn the interface of the iminuit.Minuit class, as it provides more functionality if you are interested in parameter uncertainties.

iminuit.minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=None, tol=None, callback=None, options=None)

An interface to MIGRAD using the scipy.optimize.minimize API.

For a general description of the arguments, see scipy.optimize.minimize.

The method argument is ignored. The optimisation is always done using MIGRAD.

The options argument can be used to pass special settings to Minuit. All are optional.

Options:

  • disp (bool): Set to true to print convergence messages. Default: False.
  • tol (float): Tolerance for convergence. Default: None.
  • maxfev (int): Maximum allowed number of function evaluations. Default: None.
  • eps (sequence): Initial step size for the numerical computation of the derivative. Minuit automatically refines this in subsequent iterations and is very insensitive to the initial choice. Default: 1.
Returns: OptimizeResult (dict with attribute access)
  • x (ndarray): Solution of optimization.
  • fun (float): Value of objective function at minimum.
  • message (str): Description of cause of termination.
  • hess_inv (ndarray): Inverse of Hesse matrix at minimum (may not be exact).
  • nfev (int): Number of function evaluations.
  • njev (int): Number of jacobian evaluations.
  • minuit (object): Minuit object internally used to do the minimization. Use this to extract more information about the parameter errors.
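A minimal sketch of the scipy-style interface (the objective function is illustrative):

import numpy as np
from iminuit import minimize

def rosenbrock(par):
    x, y = par
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

result = minimize(rosenbrock, x0=np.array([0.0, 0.0]))
print(result.x, result.fun)
print(result.minuit.errors)  # the underlying Minuit object gives more detail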

Utility Functions

The module iminuit.util provides the describe() function and various functions to manipulate fit arguments. Most of these functions (apart from describe) are for internal use; you should not rely on them in your code. Only the public ones are listed here.

iminuit utility functions and classes.

class iminuit.util.FMin

Function minimum status object.

Create new instance of _FMin(fval, edm, tolerance, nfcn, nfcn_total, up, is_valid, has_valid_parameters, has_accurate_covar, has_posdef_covar, has_made_posdef_covar, hesse_failed, has_covariance, is_above_max_edm, has_reached_call_limit, has_parameters_at_limit, ngrad, ngrad_total)

items()
keys()
values()
exception iminuit.util.HesseFailedWarning

HESSE failed warning.

exception iminuit.util.IMinuitWarning

iminuit warning.

exception iminuit.util.InitialParamWarning

Initial parameter warning.

class iminuit.util.MError

Minos result object.

Create new instance of _MError(name, is_valid, lower, upper, lower_valid, upper_valid, at_lower_limit, at_upper_limit, at_lower_max_fcn, at_upper_max_fcn, lower_new_min, upper_new_min, nfcn, min)

items()
keys()
values()
class iminuit.util.MErrors

Dict from parameter name to Minos result object.

class iminuit.util.Matrix

Matrix data object (tuple of tuples).

Create new matrix.

class iminuit.util.MigradResult

Holds the Migrad result.

Create new instance of MigradResult(fmin, params)

class iminuit.util.Param

Data object for a single Parameter.

Create new instance of _Param(number, name, value, error, is_const, is_fixed, has_limits, has_lower_limit, has_upper_limit, lower_limit, upper_limit)

items()
keys()
values()
class iminuit.util.Params(seq, merrors)

List of parameter data objects.

Make Params from sequence of Param objects and MErrors object.

iminuit.util.arguments_from_inspect(f)

Check inspect.signature for arguments.

iminuit.util.describe(f, verbose=False)

Try to extract the function argument names.
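For example (a short sketch; the function f is illustrative):

from iminuit.util import describe

def f(x, y, z):
    return (x - 1) ** 2 + (y - 2) ** 2 + (z - 3) ** 2

print(describe(f))  # expected to report the detected names: x, y, z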

iminuit.util.make_func_code(params)

Make a func_code object to fake a function signature.

You can make a func_code from a describable object like this:

make_func_code(["x", "y"])

Data objects

iminuit uses various data objects as return values. This section lists them.

Function Minimum Data Object

Subclass of NamedTuple that stores information about the fit result. It is returned by Minuit.get_fmin() and Minuit.migrad(). It has the following attributes:

  • fval: Value of the cost function at the minimum.

  • edm: Estimated Distance to Minimum.

  • nfcn: Number of function calls in the last Migrad call

  • up: Equal to the value of errordef when Migrad ran.

  • is_valid: Whether the function minimum is ok, defined as

    • has_valid_parameters
    • and not has_reached_call_limit
    • and not is_above_max_edm
  • has_valid_parameters: Validity of parameters. This means:

    1. The parameters must have valid errors (if not fixed). A valid error is not necessarily accurate.
    2. The parameter values must be valid.
  • has_accurate_covariance: Whether covariance matrix is accurate.

  • has_posdef_covar: Positive definiteness of the covariance matrix. Must be true if the extremum is a minimum.

  • has_made_posdef_covar: Whether Migrad had to force the covariance matrix to be positive definite by adding a diagonal matrix (should not happen!).

  • hesse_failed: Whether the Hesse call after Migrad failed.

  • has_covariance: Whether a covariance matrix is available.

  • is_above_max_edm: Is the estimated distance to minimum above its goal? This is the convergence criterion of Migrad; if it is violated, Migrad did not converge.

  • has_reached_call_limit: Whether Migrad exceeded the allowed number of function calls.

Minos Data Object

Subclass of NamedTuple which stores information about the Minos result. It is returned by Minuit.minos() (as part of a dictionary from parameter name -> data object). You can also get it from Minuit.merrors. It has the following attributes:

  • lower: lower error value
  • upper: upper error value
  • is_valid: Validity of the Minos error value. This means both lower_valid and upper_valid are true.
  • lower_valid: Validity of lower error
  • upper_valid: Validity of upper error
  • at_lower_limit: minos calculation hits the lower limit on parameters
  • at_upper_limit: minos calculation hits the upper limit on parameters
  • lower_new_min: found a new minimum while scanning cost function for lower error value
  • upper_new_min: found a new minimum while scanning cost function for upper error value
  • nfcn: number of calls to FCN in the last minos scan
  • min: the value of the parameter at the minimum

Parameter Data Object

Subclass of NamedTuple which stores the fit parameter state. It is returned by Minuit.hesse() and as part of the Minuit.migrad() result. You can access the latest parameter state by calling Minuit.get_param_states(), and the initial state via Minuit.get_initial_param_states(). It has the following attributes:

  • number: parameter number
  • name: parameter name
  • value: parameter value
  • error: parameter parabolic error (like those from hesse)
  • is_fixed: is the parameter fixed
  • is_const: is the parameter a constant (we do not support const, but you can always fix the parameter instead)
  • has_limits: parameter has limits set
  • has_lower_limit: parameter has a lower limit set.
  • has_upper_limit: parameter has upper limit set.
  • lower_limit: value of lower limit for this parameter
  • upper_limit: value of upper limit for this parameter

Function Signature Extraction Ordering

  1. Using f.func_code.co_varnames and f.func_code.co_argcount. All functions that are defined like:

    def f(x, y):
        return (x - 2) ** 2 + (y - 3) ** 2
    

    or:

    f = lambda x, y: (x - 2) ** 2 + (y - 3) ** 2
    

    Have these two attributes.

  2. Using f.__call__.func_code.co_varnames and f.__call__.func_code.co_argcount. Minuit knows how to skip the self parameter. This allows you to do things like encapsulating your data within a fitting object:

    class MyLeastSquares:
        def __init__(self, data_x, data_y, data_yerr):
            self.x = data_x
            self.y = data_y
            self.ye = data_yerr
    
        def __call__(self, a, b):
            result = 0.0
            for x, y, ye in zip(self.x, self.y, self.ye):
                y_predicted = a * x + b
                residual = (y - y_predicted) / ye
                result += residual ** 2
            return result
    
  3. If all of this fails, iminuit will try to read the function signature from the docstring.

This order is very similar to PyMinuit's signature detection; in fact, it is a superset of it. The difference is that it allows you to fake the function signature by giving the object a func_code attribute. This allows you to make a generic functor out of your custom cost function. This is explained in the Advanced Tutorial in the docs.
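A minimal sketch of such a functor (the class name Chi2 and the data are illustrative); make_func_code() fakes the signature so that describe() and Minuit see the parameters a and b:

from iminuit import Minuit
from iminuit.util import make_func_code

class Chi2:
    def __init__(self, x, y):
        self.x = x
        self.y = y
        # fake the signature so iminuit detects parameters "a" and "b"
        self.func_code = make_func_code(["a", "b"])

    def __call__(self, a, b):
        return sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(self.x, self.y))

cost = Chi2([1.0, 2.0, 3.0], [2.1, 3.9, 6.2])
m = Minuit(cost, a=1, b=0, error_a=0.1, error_b=0.1, pedantic=False)
m.migrad()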

Note

If you are unsure how iminuit will parse your function signature, you can use describe() to check which argument names are detected.