Reference¶
Quick Summary¶
These are the methods and properties you will probably use most often:
Minuit (fcn, *args, grad, name, **kwds) 
Function minimizer and error computer. 
Minuit.migrad (ncall, iterate) 
Run Migrad minimization. 
Minuit.hesse (ncall) 
Run Hesse algorithm to compute asymptotic errors. 
Minuit.minos (*parameters, cl, ncall) 
Run Minos algorithm to compute confidence intervals. 
Minuit.values 
Access parameter values via an array-like view.
Minuit.errors
Access parameter parabolic errors via an array-like view.
Minuit.merrors
Return a dict-like with Minos data objects.
Minuit.fixed
Access whether parameters are fixed via an array-like view.
Minuit.limits
Access parameter limits via an array-like view.
Minuit.covariance 
Return covariance matrix. 
Minuit.valid 
Return True if the function minimum is valid. 
Minuit.accurate 
Return True if the covariance matrix is accurate. 
Minuit.fval 
Get function value at minimum. 
Minuit.nfit 
Get number of fitted parameters (fixed parameters not counted). 
Minuit.mnprofile (vname, *, size, bound, …) 
Get Minos profile over a specified interval. 
Minuit.draw_mnprofile (vname, *, size, bound, …) 
Draw Minos profile over a specified interval (requires matplotlib). 
Minuit¶

class
iminuit.
Minuit
(fcn: Callable, *args, grad: Optional[Callable] = None, name: Optional[Sequence[str]] = None, **kwds)¶ Function minimizer and error computer.
Initialize Minuit object.
This does not start the minimization or perform any other work yet. Algorithms are started by calling the corresponding methods.
Parameters:  fcn – Function to minimize. See notes for details on what kind of functions are accepted.
 *args – Starting values for the minimization as positional arguments. See notes for details on how to set starting values.
 grad – Function that calculates the gradient and returns an iterable object with one entry for each parameter, which is the derivative for that parameter. If None (default), Minuit will calculate the gradient numerically.
 name – If it is set, it overrides iminuit’s function signature detection.
 **kwds – Starting values for the minimization as keyword arguments. See notes for details on how to set starting values.
Notes
Accepted callables
Minuit reads the function signature of fcn to detect the number and names of the function parameters. Two kinds of function signatures are understood.
Function with positional arguments.
The function has positional arguments, one for each fit parameter. Example:
def fcn(a, b, c): ...
The parameters a, b, c must each accept a real number.
iminuit automatically detects the parameter names in this case. More information about how the function signature is detected can be found in iminuit.util.describe().
Function with arguments passed as a single NumPy array.
The function has a single argument which is a Numpy array. Example:
def fcn_np(x): ...
To use this form, starting values need to be passed to Minuit as an array-like type, e.g. a NumPy array, tuple, or list. For more details, see “Parameter Keyword Arguments” further down.
In some cases, the detection fails, for example for a function like this:
def difficult_fcn(*args): ...
To use such a function, set name.
Parameter initialization
Initial values for the minimization can be set with positional arguments or via keywords. This is best explained through an example:
def fcn(x, y):
    return (x - 2) ** 2 + (y - 3) ** 2
The following ways of passing starting values are equivalent:
Minuit(fcn, x=1, y=2)
Minuit(fcn, y=2, x=1)  # order is irrelevant when keywords are used ...
Minuit(fcn, 1, 2)      # ... but here the order matters
Positional arguments can also be used if the function has no signature:
def fcn_no_sig(*args):
    ...

Minuit(fcn_no_sig, 1, 2)
If the arguments are explicitly named with the name keyword described further below, keywords can be used for initialization:
Minuit(fcn_no_sig, x=1, y=2, name=("x", "y")) # this also works
If the function accepts a single NumPy array, then the initial values must be passed as a single array-like object:
def fcn_np(x):
    return (x[0] - 2) ** 2 + (x[1] - 3) ** 2

Minuit(fcn_np, (1, 2))
Setting the values with keywords is not possible in this case. Minuit deduces the number of parameters from the length of the initialization sequence.

accurate
¶ Return True if the covariance matrix is accurate.
This is an alias for iminuit.util.FMin.has_accurate_covar.

contour
(x: str, y: str, *, size: int = 50, bound: Union[int, Sequence[Sequence[int]]] = 2, subtract_min: bool = False) → Tuple[Sequence[float], Sequence[float], Sequence[Sequence[float]]]¶ Get a 2D contour of the function around the minimum.
It computes the contour via a function scan over two parameters, while keeping all other parameters fixed. The related mncontour() works differently: for each pair of parameter values in the scan, it minimises the function with respect to all other parameters.
This method is useful to inspect the function near the minimum to detect issues (the contours should look smooth). It is not a confidence region unless the function only has two parameters. Use mncontour() to compute confidence regions.
Parameters:  x – First parameter for scan.
 y – Second parameter for scan.
 size – Number of scanning points (Default: 50).
 bound – If bound is a 2x2 array, [[v1min, v1max], [v2min, v2max]]. If bound is a number, it specifies how many \(\sigma\) symmetrically from the minimum (minimum +/- bound * \(\sigma\)) (Default: 2).
 subtract_min – Subtract minimum from return values (Default: False).
Returns:  x (array of float) – Parameter values of first parameter.
 y (array of float) – Parameter values of second parameter.
 fval (2D array of float) – Function values.

covariance
¶ Return covariance matrix.
The square roots of the diagonal elements of the covariance matrix correspond to a standard deviation for each parameter with 68 % coverage probability in the asymptotic limit (large samples). To get k standard deviations, multiply the covariance matrix with k^2.
The submatrix formed by two parameters describes an ellipse. The asymptotic coverage probability of the standard ellipse is lower than 68 %. It can be computed from the \(\chi^2\) distribution with 2 degrees of freedom. In general, to obtain a (hyper)ellipsoid with coverage probability CL, one has to multiply the submatrix of the corresponding k parameters with a factor. For k = 1, 2, 3 and CL = 0.99:
from scipy.stats import chi2

chi2(1).ppf(0.99)  # 6.63...
chi2(2).ppf(0.99)  # 9.21...
chi2(3).ppf(0.99)  # 11.3...

draw_contour
(x: str, y: str, *, size: int = 50, bound: Union[int, Sequence[Sequence[int]]] = 2) → Tuple[Sequence[float], Sequence[float], Sequence[Sequence[float]]]¶ Draw 2D contour around minimum (requires matplotlib).
See contour() for details on parameters and interpretation. Please also read the docs of mncontour() to understand the difference between the two.

draw_mncontour
(x: str, y: str, *, cl: Optional[float] = None, size: int = 100) → Sequence[Sequence[float]]¶ Draw 2D Minos confidence region (requires matplotlib).
See mncontour() for details on parameters and interpretation.
Examples
from iminuit import Minuit

def cost(x, y, z):
    return (x - 1) ** 2 + (y - x) ** 2 + (z - 2) ** 2

cost.errordef = Minuit.LEAST_SQUARES

m = Minuit(cost, x=0, y=0, z=0)
m.migrad()
m.draw_mncontour("x", "y")

draw_mnprofile
(vname: str, *, size: int = 30, bound: Union[int, Sequence[Sequence[int]]] = 2, subtract_min: bool = False, band: bool = True, text: bool = True) → Tuple[Sequence[float], Sequence[float]]¶ Draw Minos profile over a specified interval (requires matplotlib).
See mnprofile() for details and shared arguments. The following additional arguments are accepted.
Parameters:  band – If true, show a band to indicate the Hesse error interval (Default: True).
 text – If true, show a title with the function value and the Hesse error (Default: True).
Examples
from iminuit import Minuit

def cost(x, y, z):
    return (x - 1) ** 2 + (y - x) ** 2 + (z - 2) ** 2

cost.errordef = Minuit.LEAST_SQUARES

m = Minuit(cost, x=0, y=0, z=0)
m.migrad()
m.draw_mnprofile("y")

draw_profile
(vname: str, *, size: int = 100, bound: Union[int, Tuple[int, int]] = 2, subtract_min: bool = False, band: bool = True, text: bool = True) → Tuple[Sequence[float], Sequence[float]]¶ Draw 1D cost function profile over a range (requires matplotlib).
See profile() for details and shared arguments. The following additional arguments are accepted.
Parameters:  band – If true, show a band to indicate the Hesse error interval (Default: True).
 text – If true, show a title with the function value and the Hesse error (Default: True).

errordef
¶ Access FCN increment above the minimum that corresponds to one standard deviation.
Default value is 1.0. errordef should be 1.0 for a least-squares cost function and 0.5 for a negative log-likelihood function. See section 1.5.1 on page 6 of the MINUIT2 User's Guide. This parameter is also called UP in MINUIT documents.
To make user code more readable, we provide two named constants:
m_lsq = Minuit(a_least_squares_function)
m_lsq.errordef = Minuit.LEAST_SQUARES  # == 1

m_nll = Minuit(a_likelihood_function)
m_nll.errordef = Minuit.LIKELIHOOD  # == 0.5

errors
¶ Access parameter parabolic errors via an array-like view.
Like values, but instead of reading or writing the values, you read or write the errors (which double as step sizes for MINUIT's numerical gradient estimation).

fcn
¶ Get cost function (usually a leastsquares or likelihood function).

fixed
¶ Access whether parameters are fixed via an array-like view.
Use to read or write the fixation state of a parameter based on the parameter index or the parameter name as a string. If you change the state and run migrad(), hesse(), or minos(), the new state is used.
In case of complex fits, it can help to fix some parameters first and only minimize the function with respect to the other parameters, then release the fixed parameters and minimize again starting from that state.

fval
¶ Get function value at minimum.
This is an alias for iminuit.util.FMin.fval.

grad
¶ Get gradient function of the cost function.

hesse
(ncall: Optional[int] = None)¶ Run Hesse algorithm to compute asymptotic errors.
The Hesse method estimates the covariance matrix by inverting the matrix of second derivatives (Hesse matrix) at the minimum. To get parameter correlations, you need to use this. The Minos algorithm is another way to estimate parameter uncertainties, see minos().
Parameters: ncall – Approximate upper limit for the number of calls made by the Hesse algorithm. If set to None, use the adaptive heuristic from the Minuit2 library (Default: None).
Notes
The covariance matrix is asymptotically (in large samples) valid. By valid we mean that confidence intervals constructed from the errors contain the true value with a well-known coverage probability (68 % for each interval). In finite samples, this is likely to be true if your cost function looks like a hyperparabola around the minimum.
In practice, the errors very likely have correct coverage if the results from the Minos and Hesse methods agree. It is possible to construct artificial functions where this rule is violated, but in practice it should always work.

init_params
¶ Get list of current parameter data objects set to the initial fit state.

limits
¶ Access parameter limits via an array-like view.
Use to read or write the limits of a parameter based on the parameter index or the parameter name as a string. If you change the limits and run migrad(), hesse(), or minos(), the new state is used.
In case of complex fits, it can help to limit some parameters first, run Migrad, then remove the limits and run Migrad again. Limits will bias the result only if the best fit value is outside the limits, not if it is inside. Limits will affect the estimated Hesse uncertainties if the parameter is close to a limit. They do not affect the Minos uncertainties, because those are invariant to transformations and limits are implemented via a variable transformation.

merrors
¶ Return a dict-like with Minos data objects.
The Minos data objects contain the full status information of the Minos run.

migrad
(ncall: Optional[int] = None, iterate: int = 5)¶ Run Migrad minimization.
Migrad from the Minuit2 library is a robust minimisation algorithm which earned its reputation in 40+ years of almost exclusive usage in high-energy physics. How Migrad works is described in the Minuit paper. It uses first and approximate second derivatives to achieve quadratic convergence near the minimum.
Parameters:  ncall – Approximate maximum number of calls before minimization will be aborted. If set to None, use the adaptive heuristic from the Minuit2 library (Default: None). Note: The limit may be slightly violated, because the condition is checked only after a full iteration of the algorithm, which usually performs several function calls.
 iterate – Automatically call Migrad up to N times if convergence was not reached (Default: 5). This simple heuristic makes Migrad converge more often even if the numerical precision of the cost function is low. Setting this to 1 disables the feature.

minos
(*parameters, cl: Optional[float] = None, ncall: Optional[int] = None)¶ Run Minos algorithm to compute confidence intervals.
The Minos algorithm uses the profile likelihood method to compute (generally asymmetric) confidence intervals. It scans the negative log-likelihood or (equivalently) the least-squares cost function around the minimum to construct a confidence interval.
Notes
Asymptotically (large samples), the Minos interval has a coverage probability equal to the given confidence level. The coverage probability is the probability for the interval to contain the true value in repeated identical experiments.
The interval is invariant to transformations and thus not distorted by parameter limits, unless the limits intersect with the confidence interval. As a rule of thumb: when the confidence intervals computed with the Hesse and Minos algorithms differ strongly, the Minos intervals are preferred. Otherwise, Hesse intervals are preferred.
Running Minos is computationally expensive when there are many fit parameters. Effectively, it scans over one parameter in small steps and runs a full minimisation for all other parameters of the cost function for each scan point. This requires many more function evaluations than running the Hesse algorithm.
Parameters:  *parameters – Names of parameters to generate Minos errors for. If no positional arguments are given, Minos is run for each parameter.
 cl – Confidence level for the confidence interval. If None, a standard 68.3 % confidence interval is produced (Default: None). Setting this to another value requires the scipy module to be installed.
 ncall – Limit the number of calls made by Minos. If None, an adaptive internal heuristic of the Minuit2 library is used (Default: None).

mncontour
(x: str, y: str, *, cl: Optional[float] = None, size: int = 100) → Sequence[Sequence[float]]¶ Get 2D Minos confidence region.
This scans over two parameters and minimises all other free parameters for each scan point. This scan produces a statistical confidence region according to the profile likelihood method with a confidence level cl, which is asymptotically equal to the coverage probability of the confidence region.
The calculation is expensive since a numerical minimisation has to be performed at various points.
Parameters:  x – Variable name of the first parameter.
 y – Variable name of the second parameter.
 cl – Confidence level of the contour. If None, a standard 68 % contour is computed (Default: None). Setting this to another value requires the scipy module to be installed.
 size – Number of points on the contour to find. Default 100. Increasing this makes the contour smoother, but requires more computation time.
Returns: points – Contour points of the form [[x1, y1]…[xn, yn]].
Return type: array of float (N x 2)

mnprofile
(vname: str, *, size: int = 30, bound: Union[int, Sequence[int]] = 2, subtract_min: bool = False) → Tuple[Sequence[float], Sequence[float], Sequence[bool]]¶ Get Minos profile over a specified interval.
Scans over one parameter and minimises the function with respect to all other parameters for each scan point.
Parameters:  vname – Name of parameter to scan.
 size – Number of scanning points (Default: 30).
 bound – If bound is a tuple, (left, right) scanning bound. If bound is a number, it specifies how many \(\sigma\) symmetrically from the minimum (minimum +/- bound * \(\sigma\)) (Default: 2).
 subtract_min – Subtract minimum from return values (Default: False).
Returns:  locations (array of float) – Parameter values where the profile was computed.
 fvals (array of float) – Profile values.
 status (array of bool) – Whether minimisation in each point succeeded or not.

ndof
¶ Get number of degrees of freedom if cost function supports this.
To support this feature, the cost function has to report the number of data points with a property called ndata. Unbinned cost functions should return infinity.

nfcn
¶ Get total number of function calls.

nfit
¶ Get number of fitted parameters (fixed parameters not counted).

ngrad
¶ Get total number of gradient calls.

npar
¶ Get number of parameters.

params
¶ Get list of current parameter data objects.

pos2var
¶ Map variable index to name.

precision
¶ Access estimated precision of the cost function.
Default: None. If set to None, Minuit assumes the cost function is computed in double precision. If the precision of the cost function is lower (because it computes in single precision, for example) set this to some multiple of the smallest relative change of a parameter that still changes the function.

print_level
¶ Access current print level.
You can assign an integer:
 0: quiet (default)
 1: print minimal debug messages to terminal
 2: print more debug messages to terminal
 3: print even more debug messages to terminal
Warning
Setting print_level has the unwanted side-effect of setting the level globally for all Minuit instances in the current Python session.

profile
(vname: str, *, size: int = 100, bound: Union[int, Tuple[int, int]] = 2, subtract_min: bool = False) → Tuple[Sequence[float], Sequence[float]]¶ Calculate 1D cost function profile over a range.
A 1D scan of the cost function around the minimum, useful to inspect the minimum. For a fit with several free parameters this is not the same as the Minos profile computed by mnprofile().
Parameters:  vname – Parameter to scan over.
 size – Number of scanning points (Default: 100).
 bound – If bound is tuple, (left, right) scanning bound. If bound is a number, it specifies an interval of N \(\sigma\) symmetrically around the minimum (Default: 2).
 subtract_min – If true, subtract offset so that smallest value is zero (Default: False).
Returns:  x (array of float) – Parameter values.
 y (array of float) – Function values.

reset
()¶ Reset minimization state to initial state.
Leaves strategy, precision, tol, errordef, and print_level unchanged.

scan
(ncall: Optional[int] = None)¶ Bruteforce minimization.
Scans the function on a regular hypercube grid, whose bounds are defined either by parameter limits if present or by Minuit.values +/- Minuit.errors. Minuit.errors are initialized to very small values by default, too small for this scan. They should be increased before running scan, or limits should be set. The scan evaluates the function exactly at the limit boundary, so the function should be defined there.
Parameters: ncall – Approximate number of function calls to spend on the scan. The actual number will be close to this; the scan uses ncall^(1/npar) steps per cube dimension. If no value is given, a heuristic is used to set ncall.
Notes
The scan can return an invalid minimum; this is not a cause for alarm. It just minimizes the cost function; the EDM value is only computed after the scan found a best point. If the best point still has a bad EDM value, the minimum is considered invalid. But even if it is considered valid, it is probably not accurate, since the tolerance is very lax. One should always run migrad() after the scan.
The implementation here does a full scan of the hypercube in Python. Originally, this was supposed to use MnScan from C++ Minuit2, but MnScan is unsuitable. It does a 1D scan with 41 steps (not configurable) for each parameter in sequence, so it does not actually scan the full hypercube. It first scans one parameter, then starts the scan of the second parameter from the best value of the first, and so on. This fails easily when the parameters are correlated.

simplex
(ncall: Optional[int] = None)¶ Run Simplex minimization.
Simplex from the Minuit2 C++ library is a variant of the Nelder-Mead algorithm to find the minimum of a function. It does not make use of derivatives. Wikipedia has a good article on the Nelder-Mead method.
Parameters: ncall – Approximate maximum number of calls before minimization will be aborted. If set to None, use the adaptive heuristic from the Minuit2 library (Default: None). Note: The limit may be slightly violated, because the condition is checked only after a full iteration of the algorithm, which usually performs several function calls.
Notes
The Simplex method usually converges more slowly than Migrad, but performs better in certain cases, the Rosenbrock function is a notable example. Unlike Migrad, the Simplex method does not have quadratic convergence near the minimum, so it is a good approach to run Migrad after Simplex to obtain an accurate solution in fewer steps. Simplex may also be useful to get close to the minimum from an unsuitable starting point.
The convergence criterion for Simplex is also based on EDM, but the threshold is much more lax than that of Migrad (see Minuit.tol for details). This is so that Simplex stops early when getting near the minimum, to give the user a chance to switch to the more efficient Migrad algorithm to finish the minimization. Early stopping can be avoided by setting Minuit.tol to an accordingly smaller value.

strategy
¶ Access current minimization strategy.
You can assign an integer:
 0: Fast. Does not check a user-provided gradient. Does not improve the Hesse matrix at the minimum. An extra call to hesse() after migrad() is always needed for good error estimates. If you pass a user-provided gradient to MINUIT, convergence is faster.
 1: Default. Checks a user-provided gradient against a numerical gradient. Checks and usually improves the Hesse matrix at the minimum. An extra call to hesse() after migrad() is usually superfluous. If you pass a user-provided gradient to MINUIT, convergence is slower.
 2: Careful. Like 1, but does extra checks of the intermediate Hessian matrix during minimization. The effect in benchmarks is a somewhat improved accuracy at the cost of more function evaluations. A similar effect can be achieved by reducing the tolerance tol for convergence at any strategy level.

throw_nan
¶ Access whether to raise runtime error if the function evaluates to NaN.
If you set this to True, an error is raised whenever the function evaluates to NaN.

tol
¶ Access tolerance for convergence with the EDM criterion.
Minuit detects convergence with the EDM criterion. EDM stands for Estimated Distance to Minimum; it is mathematically described in the MINUIT paper. The EDM criterion is well suited for statistical cost functions, since it stops the minimization when parameter improvements become small compared to parameter uncertainties.
The convergence is detected when edm < edm_max, where edm_max is calculated as
 Migrad: edm_max = 0.002 * tol * errordef
 Simplex: edm_max = tol * errordef
Users can set tol (default: 0.1) to a different value to speed up convergence at the cost of a larger error on the fitted parameters and possibly invalid estimates for parameter uncertainties.
Under some circumstances, Migrad is allowed to violate edm_max by a factor of 10. Users should not try to detect convergence by comparing edm with edm_max, but query iminuit.util.FMin.is_above_max_edm.

valid
¶ Return True if the function minimum is valid.
This is an alias for iminuit.util.FMin.is_valid.

values
¶ Access parameter values via an array-like view.
Use to read or write current parameter values based on the parameter index or the parameter name as a string. If you change a parameter value and run migrad(), the minimization will start from that value, similarly for hesse() and minos().

var2pos
¶ Map variable name to index.
minimize¶
The minimize() function provides the same interface as scipy.optimize.minimize(). If you are familiar with the latter, it lets you get started with Minuit quickly. Eventually, you may still want to learn the interface of the Minuit class, as it provides more functionality if you are interested in parameter uncertainties.

iminuit.
minimize
(fun, x0, args=(), method='migrad', jac=None, hess=None, hessp=None, bounds=None, constraints=None, tol=None, callback=None, options=None)¶ Interface to MIGRAD using the scipy.optimize.minimize API.
For a general description of the arguments, see scipy.optimize.minimize.
Allowed values for method are “migrad” or “simplex”. Default: “migrad”.
The options argument can be used to pass special settings to Minuit. All are optional.
Options:
 disp (bool): Set to true to print convergence messages. Default: False.
 stra (int): Minuit strategy (0: fast/inaccurate, 1: balanced, 2: slow/accurate). Default: 1.
 maxfun (int): Maximum allowed number of iterations. Default: None.
 maxfev (int): Deprecated alias for maxfun.
 eps (sequence): Initial step size to numerically compute the derivative. Minuit automatically refines this in subsequent iterations and is very insensitive to the initial choice. Default: 1.
 Returns: OptimizeResult (dict with attribute access)
 x (ndarray): Solution of optimization.
 fun (float): Value of objective function at minimum.
 message (str): Description of cause of termination.
 hess_inv (ndarray): Inverse of Hesse matrix at minimum (may not be exact).
 nfev (int): Number of function evaluations.
 njev (int): Number of jacobian evaluations.
 minuit (object): Minuit object internally used to do the minimization. Use this to extract more information about the parameter errors.
iminuit.cost¶
Standard cost functions to minimize.
The cost functions defined here should be preferred over custom implementations. They have been optimized with knowledge about implementation details of Minuit to give the highest accuracy and the most robust results. They are partially accelerated with numba, if numba is available.

class
iminuit.cost.
BinnedCost
(n, xe, model, verbose)¶ Base class for binned cost functions.
For internal use.

n
¶ Access bin counts.

xe
¶ Access bin edges.


class
iminuit.cost.
BinnedNLL
(n, xe, cdf, verbose=0)¶ Binned negative loglikelihood.
Use this if only the shape of the fitted PDF is of interest and the data is binned.
Initialize cost function with data and model.
Parameters:  n (array-like) – Histogram counts. If this is an array N x 2, it is interpreted as pairs of sum of weights and sum of weights squared.
 xe (array-like) – Bin edge locations, must be len(n) + 1.
 cdf (callable) – Cumulative density function of the form f(xe, par0, par1, …, parN), where xe is a bin edge and par0, …, parN are model parameters. Must be normalized to unity over the range (xe[0], xe[-1]).
 verbose (int, optional) – Verbosity level. 0: no output (default). 1: print current args and negative log-likelihood value.

cdf
¶ Get cumulative density function.

class
iminuit.cost.
Constant
(value: float)¶ Cost function that represents a constant.
If your cost function produces results that are far away from O(1), adding a constant that brings the value closer to zero may improve the numerical stability.
Initialize constant with a value.

class
iminuit.cost.
Cost
(args: Tuple[str], ndata: float, verbose: int)¶ Base class for all cost functions.
For internal use.

errordef
¶ For internal use.

func_code
¶ For internal use.

ndata
¶ Return number of points in leastsquares fits or bins in a binned fit.
Infinity is returned if the cost function is unbinned.

verbose
¶ Access verbosity level.
Set this to 1 to print all function calls with input and output.


class
iminuit.cost.
CostSum
(*items)¶ Sum of cost functions.
Users do not need to create objects of this class themselves. They should just add cost functions, for example:
nll = UnbinnedNLL(...)
lsq = LeastSquares(...)
ncs = NormalConstraint(...)
csum = nll + lsq + ncs
CostSum is used to combine data from different experiments or to combine normal cost functions with penalty terms (see NormalConstraint).
The parameters of CostSum are the union of all parameters of its constituents.
Supports the sequence protocol to access the constituents.
Warning
CostSum does not work very well with cost functions that accept arrays, because the function signature does not allow one to determine how many parameters are accepted by the function and which parameters overlap between different cost functions.
CostSum works with cost functions that accept arrays only under the condition that all cost functions accept the very same array parameter:
 The array parameter must have the same name in all constituent cost functions.
 All arrays must have the same length.
 The positions in each array must correspond to the same model parameters.
Initialize with cost functions.
Parameters: *items (Cost) – Cost functions. May also be other CostSum functions.

class
iminuit.cost.
ExtendedBinnedNLL
(n, xe, scaled_cdf, verbose=0)¶ Binned extended negative loglikelihood.
Use this if shape and normalization of the fitted PDF are of interest and the data is binned.
Initialize cost function with data and model.
Parameters:  n (array-like) – Histogram counts. If this is an array N x 2, it is interpreted as pairs of sum of weights and sum of weights squared.
 xe (array-like) – Bin edge locations, must be len(n) + 1.
 scaled_cdf (callable) – Scaled cumulative density function of the form f(xe, par0, [par1, …]), where xe is a bin edge and parN are model parameters.
 verbose (int, optional) – Verbosity level. 0: no output (default). 1: print current args and negative log-likelihood value.

scaled_cdf
¶ Get integrated density model.

class
iminuit.cost.
ExtendedUnbinnedNLL
(data, scaled_pdf, verbose=0)¶ Unbinned extended negative loglikelihood.
Use this if shape and normalization of the fitted PDF are of interest and the original unbinned data is available.
Initialize cost function with data and model.
Parameters:  data (array-like) – Sample of observations.
 scaled_pdf (callable) – Density function of the form f(data, par0, [par1, …]), where data is the data sample and parN are model parameters. Must return a tuple (<integral over f in data range>, <f evaluated at data points>).
 verbose (int, optional) – Verbosity level. 0: no output (default). 1: print current args and negative log-likelihood value.

scaled_pdf
¶ Get density model.

class
iminuit.cost.
LeastSquares
(x, y, yerror, model, loss='linear', verbose=0)¶ Least-squares cost function (a.k.a. chi-square function).
Use this if you have data of the form (x, y +/- yerror).
Initialize cost function with data and model.
Parameters:  x (array-like) – Locations where the model is evaluated.
 y (array-like) – Observed values. Must have the same length as x.
 yerror (array-like or float) – Estimated uncertainty of observed values. Must have the same shape as y or be a scalar, which is then broadcast to the same shape as y.
 model (callable) – Function of the form f(x, par0, [par1, …]) whose output is compared to observed values, where x is the location and parN are model parameters.
 loss (str or callable, optional) – The loss function can be modified to make the fit robust against outliers, see scipy.optimize.least_squares for details. Only “linear” (default) and “soft_l1” are currently implemented, but users can pass any loss function as this argument. It should be a monotonic, twice differentiable function, which accepts the squared residual and returns a modified squared residual.
 verbose (int, optional) – Verbosity level. 0: no output (default). 1: print current args and negative log-likelihood value.
Notes
Alternative loss functions make the fit more robust against outliers by weakening the pull of outliers. The mechanical analog of a least-squares fit is a system with attractive forces. The points pull the model towards them with a force whose potential is given by \(\rho(z)\) for a squared offset \(z\). The plot shows the standard potential in comparison with the weaker soft-l1 potential, in which outliers act with a constant force independent of their distance.
[Plot: linear potential compared with the soft-l1 potential]
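As an illustration of what this cost function computes, here is a sketch of the standard ("linear") chi-square sum; this is an assumption-laden sketch of the math, not iminuit's actual implementation:

```python
import numpy as np

# Sketch of the least-squares sum with the default "linear" loss:
# sum of squared, error-normalized residuals.
def chi2(x, y, yerror, model, *par):
    r = (y - model(x, *par)) / yerror
    return np.sum(r ** 2)

def line(x, a, b):  # hypothetical model
    return a + b * x

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.1])
```

With a well-fitting model (a=0, b=2) and yerror=0.1, the residuals are about one standard deviation each, so the sum is close to the number of points.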

loss
¶ Get loss function.

model
¶ Get model of the form y = f(x, par0, [par1, …]).

x
¶ Get explanatory variables.

y
¶ Get samples.

yerror
¶ Get sample uncertainties.

class
iminuit.cost.
MaskedCost
(args, ndata, verbose)¶ Base class for cost functions that support data masking.
For internal use.

mask
¶ Boolean array, array of indices, or None.
If not None, only values selected by the mask are considered. The mask acts on the first dimension of a value array, i.e. values[mask]. Default is None.

ndata
¶ Return number of points in least-squares fits or bins in a binned fit.
Infinity is returned if the cost function is unbinned.


class
iminuit.cost.
NormalConstraint
(args, value, error)¶ Gaussian penalty for one or several parameters.
The Gaussian penalty acts like a pseudo-measurement of the parameter itself, based on a (multivariate) normal distribution. Penalties can be set for one or several parameters at once (which is more efficient). When several parameters are constrained, one can specify the full covariance matrix of the parameters.
Notes
It is sometimes necessary to add a weak penalty on a parameter to avoid instabilities in the fit. A typical example in high-energy physics is the fit of a signal peak above some background. If the amplitude of the peak vanishes, the shape parameters of the peak become unconstrained and the fit becomes unstable. This can be avoided by adding a weak (large uncertainty) penalty on the shape parameters whose pull is negligible if the peak amplitude is non-zero.
This class can also be used to approximately include external measurements of some parameters, if the original cost function is not available or too costly to compute. If the external measurement was performed in the asymptotic limit with a large sample, a Gaussian penalty is an accurate statistical representation of the external result.
Initialize the normal constraint with expected value(s) and error(s).
Parameters:  args (str or sequence of str) – Parameter name(s).
 value (float or array-like) – Expected value(s). Must have same length as args.
 error (float or array-like) – Expected error(s). If 1D, must have same length as args. If 2D, must be the covariance matrix of the parameters.
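The penalty term can be sketched as the usual Gaussian chi-square, (p − value)ᵀ C⁻¹ (p − value); this illustrates the math, it is not iminuit's implementation:

```python
import numpy as np

# Sketch of the Gaussian penalty: quadratic form in the deviation of the
# parameters from their expected values, weighted by the inverse covariance.
def penalty(par, value, cov):
    d = np.atleast_1d(par) - np.atleast_1d(value)
    return float(d @ np.linalg.solve(np.atleast_2d(cov), d))

# two parameters constrained with independent errors 0.1 and 0.2
p = penalty([1.2, 0.8], [1.0, 1.0], np.diag([0.1 ** 2, 0.2 ** 2]))
```

A 2-sigma deviation in one parameter contributes 4 to the penalty; a 1-sigma deviation contributes 1.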

covariance
¶ Get expected covariance of parameters.
Can be 1D (diagonal of covariance matrix) or 2D (full covariance matrix).

value
¶ Get expected parameter values.

class
iminuit.cost.
UnbinnedCost
(data, model, verbose)¶ Base class for unbinned cost functions.
For internal use.

data
¶ Unbinned samples.


class
iminuit.cost.
UnbinnedNLL
(data, pdf, verbose=0)¶ Unbinned negative log-likelihood.
Use this if only the shape of the fitted PDF is of interest and the original unbinned data is available.
Initialize UnbinnedNLL with data and model.
Parameters:  data (array-like) – Sample of observations.
 pdf (callable) – Probability density function of the form f(data, par0, [par1, …]), where data is the data sample and parN are model parameters.
 verbose (int, optional) – Verbosity level. 0: no output (default). 1: print current args and negative log-likelihood value.
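The minimized quantity can be sketched as twice the negative log-likelihood (the factor of two is an assumption here, chosen so that errordef = 1 applies; this is not iminuit's implementation):

```python
import numpy as np

# Sketch of the unbinned negative log-likelihood (times two):
def nll(data, pdf, *par):
    return -2.0 * np.sum(np.log(pdf(data, *par)))

def gauss(x, mu, sigma):  # hypothetical pdf
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
```

Note that only the shape of pdf matters: rescaling the sample size changes the sum but not the location of the minimum in (mu, sigma).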

pdf
¶ Get probability density model.
iminuit.util¶
Data classes and utilities used by iminuit.Minuit
.

class
iminuit.util.
BasicView
(minuit, ndim=0)¶ Array-like view of parameter state.
Derived classes need to implement methods _set and _get to access specific properties of the parameter state.
Not to be initialized by users.

to_dict
() → Dict[str, float]¶ Obtain dict representation.


class
iminuit.util.
ErrorView
(minuit, ndim=0)¶ Array-like view of parameter errors.
Not to be initialized by users.

class
iminuit.util.
FMin
(fmin, nfcn, ngrad, ndof, edm_goal)¶ Function minimum view.
This object provides detailed metadata about the function minimum. Inspect this to check what exactly happened if the fit did not converge. Use the iminuit.Minuit object to get the best-fit values, their uncertainties, or the function value at the minimum. For convenience, you can also get a basic OK from iminuit.Minuit with the methods iminuit.Minuit.valid and iminuit.Minuit.accurate.
See also
iminuit.Minuit.values, iminuit.Minuit.errors, iminuit.Minuit.merrors, iminuit.Minuit.covariance, iminuit.Minuit.fval, iminuit.Minuit.valid, iminuit.Minuit.accurate
Not to be initialized by users.

edm
¶ Get Estimated Distance to Minimum.
Minuit uses this criterion to determine whether the fit converged. It depends on the gradient and the Hessian matrix. It measures how well the current second order expansion around the function minimum describes the function, by taking the difference between the predicted (based on gradient and Hessian) function value at the minimum and the actual value.

edm_goal
¶ Get EDM threshold value for stopping the minimization.
The threshold is allowed to be violated by a factor of 10 in some situations.

errordef
¶ Equal to the value of
iminuit.Minuit.errordef
when Migrad ran.

fval
¶ Get cost function value at the minimum.

has_accurate_covar
¶ Return whether the covariance matrix is accurate.
While Migrad runs, it computes an approximation to the current Hessian matrix. If the strategy is set to 0 or if the fit did not converge, the inverse of this approximation is returned instead of the inverse of the accurately computed Hessian matrix. This property returns False if the approximation has been returned instead of an accurate matrix computed by the Hesse method.

has_covariance
¶ Return whether a covariance matrix was computed at all.
This is false if the Simplex minimization algorithm was used instead of Migrad, in which case no approximation to the Hessian is computed.

has_made_posdef_covar
¶ Return whether the matrix was forced to be positive definite.
While Migrad runs, it computes an approximation to the current Hessian matrix. It can happen that this approximation is not positive definite, but that is required to compute the next Newton step. Migrad then adds an appropriate diagonal matrix to enforce positive definiteness.
If the fit has converged successfully, this should always return False. If Minuit forced the matrix to be positive definite, the parameter uncertainties are unreliable; see has_posdef_covar for more details.

has_parameters_at_limit
¶ Return whether any bounded parameter was fitted close to a bound.
The estimated error for the affected parameters is usually off. May be an indication to remove or loosen the limits on the affected parameter.

has_posdef_covar
¶ Return whether the Hessian matrix is positive definite.
This must be the case if the extremum is a minimum, otherwise it is a saddle point. If it returns False, the fitted result may be correct, but the reported uncertainties are incorrect. This may affect some parameters or all of them. Possible causes:
Model contains redundant parameters that are 100% correlated. Fix: remove the parameters that are 100% correlated.
Cost function is not computed in double precision. Fix: try adjusting iminuit.Minuit.precision or change the cost function to compute in double precision.
Cost function is not analytical near the minimum. Fix: change the cost function to something analytical. Functions are not analytical if:
It does computations based on (pseudo-)random numbers.
It contains vertical steps, for example from code like this:
if cond:
    return value1
else:
    return value2

has_reached_call_limit
¶ Return whether Migrad exceeded the allowed number of function calls.
Returns True true, the fit was stopped before convergence was reached; otherwise returns False.

has_valid_parameters
¶ Return whether parameters are valid.
For it to return True, the following conditions need to be fulfilled:
has_reached_call_limit is False
is_above_max_edm is False
Note: The actual verdict is computed inside the Minuit2 C++ code, so we cannot guarantee that is_valid is exactly equivalent to these conditions.

hesse_failed
¶ Return whether the last call to Hesse failed.

is_above_max_edm
¶ Return whether the EDM value is above the convergence threshold.
Returns True if the fit did not converge; otherwise returns False.

is_valid
¶ Return whether Migrad converged successfully.
For it to return True, the following conditions need to be fulfilled:
has_valid_parameters is True
has_reached_call_limit is False
is_above_max_edm is False
Note: The actual verdict is computed inside the Minuit2 C++ code, so we cannot guarantee that is_valid is exactly equivalent to these conditions.

nfcn
¶ Get number of function calls so far.

ngrad
¶ Get number of function gradient calls so far.

reduced_chi2
¶ Get chi2/ndof of the fit.
This returns NaN if the cost function is unbinned or does not support reporting the degrees of freedom.


class
iminuit.util.
FixedView
(minuit, ndim=0)¶ Array-like view of whether parameters are fixed.
Not to be initialized by users.

exception
iminuit.util.
HesseFailedWarning
¶ HESSE failed warning.

exception
iminuit.util.
IMinuitWarning
¶ Generic iminuit warning.

class
iminuit.util.
LimitView
(minuit)¶ Array-like view of parameter limits.
Not to be initialized by users.

class
iminuit.util.
MError
(*args)¶ Minos data object.

number
¶ Parameter index.
Type: int

name
¶ Parameter name.
Type: str

lower
¶ Lower error.
Type: float

upper
¶ Upper error.
Type: float

is_valid
¶ Whether Minos computation was successful.
Type: bool

lower_valid
¶ Whether downward scan was successful.
Type: bool

upper_valid
¶ Whether upward scan was successful.
Type: bool

at_lower_limit
¶ Whether scan reached lower limit.
Type: bool

at_upper_limit
¶ Whether scan reached upper limit.
Type: bool

at_lower_max_fcn
¶ Whether allowed number of function evaluations was exhausted.
Type: bool

at_upper_max_fcn
¶ Whether allowed number of function evaluations was exhausted.
Type: bool

lower_new_min
¶ Parameter value for new minimum, if one was found in downward scan.
Type: float

upper_new_min
¶ Parameter value for new minimum, if one was found in upward scan.
Type: float

nfcn
¶ Number of function calls.
Type: int

min
¶ Function value at the new minimum.
Type: float
Not to be initialized by users.


class
iminuit.util.
MErrors
¶ Dict-like map from parameter name to Minos result object.

class
iminuit.util.
Matrix
¶ Enhanced Numpy ndarray.
Works like a normal ndarray in computations, but also supports pretty printing in ipython and Jupyter notebooks. Elements can be accessed via indices or parameter names.
Not to be initialized by users.

correlation
()¶ Compute and return correlation matrix.
If the matrix is already a correlation matrix, this effectively returns a copy of the original matrix.
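The conversion performed by correlation() can be sketched as dividing each covariance element by the product of the corresponding standard deviations (an illustration of the math, not the Matrix implementation):

```python
import numpy as np

# Sketch of covariance -> correlation: C_ij / (sigma_i * sigma_j),
# where sigma_i is the square root of the i-th diagonal element.
def correlation(cov):
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

c = correlation(np.array([[4.0, 2.0], [2.0, 9.0]]))
```

The diagonal of the result is 1 by construction, and off-diagonal elements lie in [-1, 1].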

to_table
()¶ Convert matrix to tabular format.
The output is consumable by the external tabulate module.
Examples
>>> import tabulate as tab
>>> from iminuit import Minuit
>>> m = Minuit(lambda x, y: x ** 2 + y ** 2, x=1, y=2).migrad()
>>> tab.tabulate(*m.covariance.to_table())
      x    y
--  ---  ---
x     1    0
y     0    4


class
iminuit.util.
Param
(*args)¶ Data object for a single Parameter.
Not to be initialized by users.

class
iminuit.util.
Params
¶ Tuple-like holder of parameter data objects.

to_table
()¶ Convert parameter data to a tabular format.
The output is consumable by the external tabulate module.
Examples
>>> import tabulate as tab
>>> from iminuit import Minuit
>>> m = Minuit(lambda x, y: x ** 2 + (y / 2) ** 2 + 1, x=0, y=0)
>>> m.fixed["x"] = True
>>> m.migrad().minos()
>>> tab.tabulate(*m.params.to_table())
  pos  name    value    error  error-  error+  limit-  limit+  fixed
-----  ------  -------  -------  ------  ------  ------  ------  -----
    0  x       0        0.1                                     yes
    1  y       0        1.4     -1.0    1.0


class
iminuit.util.
ValueView
(minuit, ndim=0)¶ Array-like view of parameter values.
Not to be initialized by users.

iminuit.util.
describe
(callable)¶ Attempt to extract the function argument names.
Parameters: callable (callable) – Callable whose parameters should be extracted. Returns: A list of strings with the parameter names if successful and an empty list otherwise. Return type: list
Notes
Parameter names are extracted with the following three methods, which are attempted in order. The first to succeed determines the result.
Using obj.func_code. If an object has a func_code attribute, it is used to detect the parameters. Example:
def f(*args):  # no signature
    x, y = args
    return (x - 2) ** 2 + (y - 3) ** 2

f.func_code = make_func_code(("x", "y"))
Users are encouraged to use this mechanism to provide signatures for objects that otherwise would not have a detectable signature. The function make_func_code() can be used to generate an appropriate func_code object. An example where this is useful is shown in one of the tutorials.
Using inspect.signature(). The inspect module provides a general function to extract the signature of a Python callable. It works on most callables, including functors like this:
class MyLeastSquares:
    def __call__(self, a, b):
        ...
Using the docstring. The docstring is textually parsed to detect the parameter names. This requires that a docstring is present which follows the Python standard formatting for function signatures.
Ambiguous cases with positional and keyword arguments are handled in the following way:
# describe returns [a, b];
# *args and **kwargs are ignored
def fcn(a, b, *args, **kwargs):
    ...

# describe returns [a, b, c];
# positional arguments with default values are detected
def fcn(a, b, c=1):
    ...
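Method 2 above can be sketched with inspect directly; this is an illustration only, not the full describe() implementation (which also handles func_code and docstrings):

```python
import inspect

# Sketch of signature-based name extraction: keep only parameters that
# can be passed positionally; *args, **kwargs and keyword-only
# parameters are ignored.
def argnames(callable_):
    pars = inspect.signature(callable_).parameters.values()
    return [p.name for p in pars
            if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)]

def fcn(a, b, *args, **kwargs):   # hypothetical example function
    ...

def fcn2(a, b, c=1):              # defaults are still detected
    ...
```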

iminuit.util.
make_func_code
(params)¶ Make a func_code object to fake a function signature.
Example:
def f(a, b):
    ...

f.func_code = make_func_code(["x", "y"])

iminuit.util.
make_with_signature
(callable, *varnames, **replacements)¶ Return new callable with altered signature.
Parameters:  *varnames (sequence of str) – Replace the first N argument names with these.
 **replacements (mapping of str to str) – Replace old argument name (key) with new argument name (value).
Returns: Callable with the new argument names. Return type: callable

iminuit.util.
merge_signatures
(callables)¶ Merge signatures of callables with positional arguments.
This is best explained by an example:
def f(x, y, z):
    ...

def g(x, p):
    ...

parameters, mapping = merge_signatures(f, g)
# parameters is ('x', 'y', 'z', 'p')
# mapping is ((0, 1, 2), (0, 3))
Parameters: callables – Callables whose parameters can be extracted with describe().
Returns: parameters is the tuple of the merged parameter names. mapping contains the mapping of parameter indices from the merged signature to the original signatures. Return type: tuple(parameters, mapping)

iminuit.util.
propagate
(fn, x, cov)¶ Numerically propagates the covariance into a new space.
The function does error propagation. It computes C’ = J C J^T, where C is the covariance matrix of the input, C’ the matrix of the output, and J is the Jacobi matrix of first derivatives of the mapping function fn. The Jacobi matrix is computed numerically.
Parameters:  fn (callable) – Vectorized function that computes y = fn(x).
 x (arraylike with shape (N,)) – Input vector.
 cov (arraylike with shape (N, N)) – Covariance matrix of input vector.
Returns: y is the result of fn(x); ycov is the propagated covariance matrix.
Return type: y, ycov
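The formula above can be sketched numerically; this illustration estimates the Jacobi matrix with central differences and is not iminuit's implementation:

```python
import numpy as np

# Numerical sketch of C' = J C J^T: estimate J by central differences,
# then sandwich the input covariance between J and J^T.
def propagate_sketch(fn, x, cov, eps=1e-6):
    x = np.asarray(x, dtype=float)
    y = np.asarray(fn(x), dtype=float)
    jac = np.empty((y.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xm = x.copy()
        xp[i] += eps
        xm[i] -= eps
        jac[:, i] = (np.asarray(fn(xp)) - np.asarray(fn(xm))) / (2.0 * eps)
    return y, jac @ np.asarray(cov, dtype=float) @ jac.T

# For a linear map the propagation is exact: var = [2, 1] C [2, 1]^T
y, ycov = propagate_sketch(lambda v: np.array([2.0 * v[0] + v[1]]), [1.0, 2.0], np.eye(2))
```

For non-linear fn the result is a first-order approximation, valid when the input uncertainties are small compared to the curvature scale of fn.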