In statistics, an error term represents the deviation of an actual observation from a model regression line. Regression analysis is used to establish the degree of correlation between two variables, one independent and one dependent; the result is a line that best "fits" the actually observed values of the dependent variable in relation to the independent variable or variables. Put another way, the error term is the term in a model regression equation that accounts for the unexplained difference between the actually observed values of the dependent variable and the results predicted by the model. Hence, the error term is a measure of how accurately the regression model reflects the actual relationship between the independent and dependent variable or variables. A large error term can indicate either that the model can be improved, for example by adding another independent variable that explains some or all of the difference, or that the remaining difference is simply random, meaning that the dependent and independent variables are not correlated to any greater degree.
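As a minimal sketch of this idea, the snippet below fits an ordinary least-squares line to a handful of made-up observations (the data values and variable names are purely illustrative) and then computes each observation's deviation from the fitted line, which is how the error term is estimated in practice:

```python
import numpy as np

# Hypothetical observations: x is the independent variable, y the dependent one.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])

# Fit an ordinary least-squares line: y_hat = intercept + slope * x.
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

# The residuals estimate the error term: observed minus predicted values.
residuals = y - y_hat
print(residuals)        # one deviation per observation
print(residuals.sum())  # with an intercept, OLS forces these to sum to ~0
```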
Also known as the residual or disturbance term, the error term is, by mathematical convention, the last term in a model regression equation and is represented by the Greek letter epsilon (ε). Economists and financial industry professionals regularly make use of regression models, or at least their results, to better understand and forecast a wide range of relationships, such as how changes in the money supply are related to inflation, how stock market prices are related to unemployment rates, or how changes in commodity prices affect specific companies in an economic sector. Hence, the error term is an important quantity to keep track of, because it measures the degree to which a given model fails to reflect, or account for, the actual relationship between the dependent and independent variables.
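To make the convention concrete, a single-variable model of, say, how a stock index (Y) moves with the unemployment rate (X) is often written in the general form Y = α + βX + ε, where α is the intercept, β is the coefficient on the independent variable, and ε is the error term capturing whatever variation in Y the model leaves unexplained; the symbols here are standard placeholders rather than values from any particular study.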
There are actually two types of error terms commonly used in regression analysis: absolute error and relative error. Absolute error is the error term as previously defined, the difference between the actually observed value of the dependent variable and the result predicted by the model. Derived from this, relative error is the absolute error divided by the value predicted by the model. Expressed in percentage terms, relative error is known as percent error, which is helpful because it puts the error term into perspective. For example, when building a regression model to show how well two or more variables are correlated, an absolute error of 1 against a predicted value of 10 (a 10% error) is far worse than an absolute error of 1 against a predicted value of 1 million (a 0.0001% error).
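As a quick illustration of these definitions (the helper functions absolute_error and relative_error are ad hoc names for this sketch, not standard library calls), the snippet below applies them to the two cases just mentioned:

```python
def absolute_error(observed, predicted):
    """Absolute error: size of the difference between observed and predicted values."""
    return abs(observed - predicted)

def relative_error(observed, predicted):
    """Relative error: absolute error divided by the predicted value."""
    return absolute_error(observed, predicted) / abs(predicted)

# Same absolute error of 1, very different relative (percent) errors.
print(relative_error(11, 10) * 100)                 # 10.0    -> a 10% error
print(relative_error(1_000_001, 1_000_000) * 100)   # 0.0001  -> a 0.0001% error
```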