Parameter estimation is a discipline that provides tools for the efficient use of data in the mathematical modeling of phenomena and the estimation of the constants appearing in these models [2]. It can thus be viewed as a study of inverse problems. Much of parameter estimation can be related to an optimization problem, formulated as follows:
Let $\theta$ be the (state/parameter) vector containing the parameters to be estimated. The dimension of $\theta$, say $m$, is the number of parameters to be estimated. Let $y$ be the (measurement) vector which is the output of the system to be modeled. The system is described by a vector function $f$ which relates $y$ to $\theta$ such that

$$y = f(\theta).$$
In practice, only measurements of the system output corrupted with noise $\varepsilon$ are observed, i.e.,

$$\tilde{y} = f(\theta) + \varepsilon.$$
We usually make a number of measurements of the system, say $N$ ($N \geq m$), and we want to estimate $\theta$ using the measurements $\tilde{y}_1, \dots, \tilde{y}_N$. As the data are noisy, the relation $y = f(\theta)$ is no longer valid. In this case, we write down a function $J(\theta)$
which is to be optimized (without loss of generality, we will minimize the function). This function is usually called the cost function or the objective function.
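As a concrete illustration, the following is a minimal sketch of a least-squares cost function. The scalar exponential model $f(\theta, t) = \theta_1 e^{\theta_2 t}$, the parameter values, and the noise level are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Hypothetical model y = f(theta) evaluated at measurement times t
# (an assumed example model, not one specified in the text).
def f(theta, t):
    return theta[0] * np.exp(theta[1] * t)

# Least-squares cost function J(theta): sum of squared residuals between
# the noisy measurements y_tilde and the model output.
def J(theta, t, y_tilde):
    residuals = y_tilde - f(theta, t)
    return float(residuals @ residuals)

# Simulate noisy data from a known "true" parameter vector.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)        # N = 20 measurements, N >= m = 2
theta_true = np.array([2.0, -1.0])
y_tilde = f(theta_true, t) + 0.01 * rng.standard_normal(t.size)

# The cost at the true parameters is small (only noise remains) and
# grows when the parameter vector is perturbed.
print(J(theta_true, t, y_tilde) < J(theta_true + 0.5, t, y_tilde))  # True
```

Any minimizer of $J$ then serves as the estimate of $\theta$; the choice of $J$ (here, a sum of squares) encodes how residuals are penalized.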
If there are no constraints on $\theta$ and the function $J(\theta)$ has first and second partial derivatives everywhere, the conditions for a (local) minimum are

$$\frac{\partial J(\theta)}{\partial \theta} = 0$$

and

$$\frac{\partial^2 J(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}} > 0.$$

By the last, we mean that the $m \times m$ matrix of second partial derivatives (the Hessian of $J$) is positive definite.
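These two conditions can be checked numerically. The sketch below uses central finite differences on a simple quadratic cost with a known minimizer; the particular cost, matrix, and step sizes are illustrative assumptions:

```python
import numpy as np

# A quadratic cost with a known minimizer, used to illustrate the two
# conditions: zero gradient and a positive definite Hessian.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])           # symmetric positive definite
b = np.array([1.0, -1.0])

def J(theta):
    return 0.5 * theta @ A @ theta - b @ theta   # minimized where A theta = b

theta_star = np.linalg.solve(A, b)

# Central finite-difference gradient of J.
def grad(J, theta, h=1e-6):
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (J(theta + e) - J(theta - e)) / (2 * h)
    return g

# Central finite-difference Hessian, built column by column from the gradient.
def hess(J, theta, h=1e-4):
    m = theta.size
    H = np.zeros((m, m))
    for i in range(m):
        e = np.zeros(m)
        e[i] = h
        H[:, i] = (grad(J, theta + e) - grad(J, theta - e)) / (2 * h)
    return H

# At the minimizer: the gradient vanishes and the (symmetrized) Hessian
# has strictly positive eigenvalues, i.e., it is positive definite.
H = hess(J, theta_star)
print(np.allclose(grad(J, theta_star), 0.0, atol=1e-6))    # True
print(np.all(np.linalg.eigvalsh(0.5 * (H + H.T)) > 0.0))   # True
```

For a quadratic cost the finite-difference Hessian recovers $A$ up to rounding, so the eigenvalue test reproduces the positive definiteness of $A$ itself.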