Understanding Polynomial Regression

Tahera Firdose
8 min read · Jun 20, 2023


Polynomial regression is based on the idea that the relationship between the dependent variable and the independent variable may not always be linear. In some cases, the data may exhibit a nonlinear pattern that cannot be captured by a simple straight line. Polynomial regression addresses this by introducing polynomial terms (e.g., x², x³) into the regression equation, allowing for curves and more complex patterns to be modelled.

Key Features of Polynomial Regression:

· Nonlinear Relationships: Polynomial regression is particularly useful when the relationship between the independent and dependent variables is nonlinear. By including polynomial terms, we can model curves, bends, and other nonlinear patterns in the data.

· Degree of the Polynomial: The degree of the polynomial determines the complexity of the curve that can be fit to the data. Higher-degree polynomials can capture more intricate patterns, but they can also lead to overfitting (discussed later).

Let’s dive into the intuition behind the difference between linear and polynomial points using both mathematical formulas and graphs.

Simple Linear Regression: Simple linear regression represents the relationship between two variables, where one is the independent variable (x) and the other is the dependent variable (y). The equation for simple linear regression is:

y = β₀ + β₁x

where y represents the dependent variable, x represents the independent variable, β₀ is the y-intercept, and β₁ is the slope of the regression line.

Multiple Linear Regression: Multiple linear regression involves multiple independent variables (x₁, x₂, …, xₙ) to predict the dependent variable (y). The equation for multiple linear regression is:

y = β₀ + β₁x₁ + β₂x₂ + … + βₙxₙ

where y represents the dependent variable, x₁, x₂, …, xₙ represent the independent variables, β₀ is the y-intercept, and β₁, β₂, …, βₙ are the coefficients associated with each independent variable.

Polynomial Regression: Polynomial regression extends the linear regression model by introducing higher-order polynomial terms. The equation for polynomial regression of degree n is:

y = β₀ + β₁x + β₂x² + β₃x³ + … + βₙxⁿ

where y represents the dependent variable, x represents the independent variable, and β₀, β₁, β₂, …, βₙ are the coefficients of the polynomial terms. The degree n determines the complexity of the polynomial curve.

In all three cases, the goal is to estimate the coefficients (β₀, β₁, β₂, …, βₙ) that minimize the error between the predicted values and the actual values of the dependent variable. This estimation is typically done using techniques such as ordinary least squares (OLS) or gradient descent.
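To make this concrete, here is a minimal sketch of estimating polynomial coefficients by ordinary least squares using NumPy's np.polyfit; the toy data and its true coefficients are hypothetical:

```python
import numpy as np

# Hypothetical toy data generated from y = 1 + 2x + 3x² plus noise
rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y = 1 + 2 * x + 3 * x**2 + rng.normal(0, 0.5, size=x.shape)

# np.polyfit solves the least squares problem for the coefficients,
# returned from the highest degree down: [β₂, β₁, β₀]
coeffs = np.polyfit(x, y, deg=2)
print(coeffs)  # approximately [3, 2, 1]
```

Because the model is linear in the coefficients (only the features x, x², … are nonlinear), ordinary least squares applies exactly as it does for linear regression.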

Advantages of Polynomial Regression:

1. Flexibility: Polynomial regression can capture nonlinear relationships that linear regression cannot represent, allowing for more accurate modeling of complex data patterns.

2. Improved Fit: By accommodating curves and bends, polynomial regression can provide a better fit to the data, resulting in higher accuracy in predictions.

3. Polynomial Interpretation: Polynomial regression can provide insights into the higher-order effects of the independent variables on the dependent variable.

Limitations of Polynomial Regression:

1. Overfitting: As the degree of the polynomial increases, the model can become overly complex and may start to fit noise or random variations in the data, leading to overfitting and poor generalization to new data.

2. Interpretability: Higher-degree polynomials can be challenging to interpret and explain compared to linear models.

3. Extrapolation: Polynomial regression may not accurately predict values outside the range of the observed data, especially with high-degree polynomials.

Implementing Polynomial Regression in Python:

To implement polynomial regression in Python, we will be using the NumPy, Matplotlib, and scikit-learn libraries. Ensure that you have these libraries installed before proceeding.

Import the required libraries:
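These cover everything used in the examples that follow:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
```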

Prepare the data: Before fitting the polynomial regression model, we need to prepare our data by creating arrays for the independent variable and the dependent variable. Assume that we have two arrays, x_poly and y_poly, representing the data points.
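As a minimal sketch, we can generate synthetic nonlinear data; the data-generating function below is hypothetical, chosen only to produce a visible curve:

```python
# Synthetic nonlinear data (hypothetical); scikit-learn expects a 2-D feature array
np.random.seed(42)
x_poly = np.linspace(-3, 3, 100).reshape(-1, 1)
y_poly = 0.5 * x_poly**3 - x_poly**2 + 2 + 2 * np.random.randn(100, 1)
```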

1. Define the degrees for polynomial regression: The degrees variable is a list of the polynomial degrees we want to explore.

2. Perform polynomial regression for each degree and plot the regression lines (the full code follows this list):

· Iterate through each degree in the degrees list.

· Create polynomial features: The PolynomialFeatures class from scikit-learn is used to transform the input data x_poly into polynomial features up to the specified degree.

· Fit the polynomial regression model: An instance of the LinearRegression class is initialized, and the model is fitted using the transformed polynomial features x_poly_transformed and the target variable y_poly.

· Generate predictions: A range of values x_range is created to represent the x-axis of the plot, and these values are transformed into polynomial features x_range_transformed. The model is then used to predict the corresponding y-values y_pred.

· Plot the polynomial regression line: A new figure is created for each degree, and the scatter plot of the original data points is plotted. The predicted y-values y_pred are plotted as a line using x_range, with the degree mentioned in the legend.

· Customize the graph: Axes labels, title, and legend are added to the plot.

· Finally, the plt.show() function is called to display all the plotted polynomial regression lines.
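Putting these steps together, here is a sketch of the loop; the degrees list is illustrative, chosen to match the degrees discussed below:

```python
# Step 1: the degrees to explore (illustrative choice)
degrees = [1, 2, 3, 15, 30]

# Step 2: fit and plot a polynomial regression for each degree
for degree in degrees:
    # Create polynomial features up to the specified degree
    poly_features = PolynomialFeatures(degree=degree)
    x_poly_transformed = poly_features.fit_transform(x_poly)

    # Fit the polynomial regression model
    model = LinearRegression()
    model.fit(x_poly_transformed, y_poly)

    # Generate predictions over a smooth range of x values
    x_range = np.linspace(x_poly.min(), x_poly.max(), 200).reshape(-1, 1)
    x_range_transformed = poly_features.transform(x_range)
    y_pred = model.predict(x_range_transformed)

    # Plot the original data points and the fitted regression line
    plt.figure()
    plt.scatter(x_poly, y_poly, label="Data points")
    plt.plot(x_range, y_pred, color="red", label=f"Degree {degree}")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.title(f"Polynomial Regression (degree {degree})")
    plt.legend()

plt.show()
```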

This code allows us to visualize how the polynomial regression lines vary with different degrees, helping us understand the impact of polynomial degree on the curve fitting ability of the model.

Choosing the appropriate degree for polynomial regression is crucial. A low degree may result in underfitting (degrees 2 and 3), where the model fails to capture the underlying patterns in the data. On the other hand, a high degree (degree 30) can lead to overfitting, where the model fits the noise in the data, resulting in poor generalization to new data.

To determine the optimal degree in polynomial regression, you can use techniques like cross-validation or the R² score and evaluate the model’s performance on a validation set. Here’s an example of how to determine the optimal degree using the R² score in Python:
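This sketch assumes the x_poly and y_poly arrays from earlier and uses a simple train/validation split to score each candidate degree; the range of degrees is illustrative:

```python
from sklearn.model_selection import train_test_split

# Hold out a validation set so high-degree models can't win by fitting noise
x_train, x_val, y_train, y_val = train_test_split(
    x_poly, y_poly, test_size=0.2, random_state=42
)

optimal_degree = None
max_r2 = float("-inf")

for degree in range(1, 31):  # illustrative range of candidate degrees
    poly_features = PolynomialFeatures(degree=degree)
    x_train_transformed = poly_features.fit_transform(x_train)
    x_val_transformed = poly_features.transform(x_val)

    model = LinearRegression()
    model.fit(x_train_transformed, y_train)

    # Score this degree on the validation set
    r2 = r2_score(y_val, model.predict(x_val_transformed))

    # Keep the best degree found so far
    if r2 > max_r2:
        max_r2 = r2
        optimal_degree = degree

print(f"Optimal degree: {optimal_degree}, R-squared: {max_r2:.4f}")
```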

In this updated code, we initialize the optimal_degree and max_r2 variables to None and negative infinity, respectively. As we iterate through each degree, we check if the current R-squared score is higher than the maximum R-squared score found so far. If it is, we update the optimal_degree and max_r2 variables accordingly.

After scoring all the candidate degrees, we print the optimal degree and its corresponding R-squared score.

The output will display the optimal degree and its associated R-squared score, representing the degree that provides the best fit to the data based on the R-squared metric.

From the output, we can see that the highest R-squared is for degree 15.

Evaluating polynomial regression models is essential to understand their performance and interpret the results. Here are some key considerations:

1. Model Evaluation: Common evaluation metrics for regression models, such as mean squared error (MSE), mean absolute error (MAE), and R-squared, can be used to assess the performance of polynomial regression models (a short example follows this list). These metrics provide insights into how well the model fits the data and how much of the variation in the dependent variable is explained by the model.

2. Interpreting Polynomial Coefficients: The coefficients in polynomial regression represent the relationship between the independent variables and the dependent variable. For higher-degree polynomials, interpreting coefficients becomes more complex. While the sign of the coefficient indicates the direction of the relationship, the magnitude and interaction effects may require further analysis.

3. Nonlinear Relationship: Polynomial regression captures nonlinear relationships between variables. It allows us to model curves, bends, and other nonlinear patterns in the data. By visualizing the polynomial regression curve, we can gain insights into the nature of the relationship between the independent and dependent variables.
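As a brief illustration of the first point, these metrics are available directly in scikit-learn; the sketch below assumes the fitted model and validation arrays from the degree-search example above are still in scope:

```python
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Evaluate the fitted model's predictions on the held-out data
y_val_pred = model.predict(x_val_transformed)

mse = mean_squared_error(y_val, y_val_pred)
mae = mean_absolute_error(y_val, y_val_pred)
r2 = r2_score(y_val, y_val_pred)

print(f"MSE: {mse:.3f}  MAE: {mae:.3f}  R-squared: {r2:.3f}")
```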

Dealing with Challenges in Polynomial Regression:

Polynomial regression can present some challenges that need to be addressed:

1. Multicollinearity: When using higher-degree polynomials or including interaction terms, multicollinearity can occur. This means that independent variables become highly correlated, leading to unstable coefficient estimates. Techniques like principal component analysis (PCA) or ridge regression can help mitigate multicollinearity.

2. Regularization Techniques: Regularization methods like Ridge, Lasso, and ElasticNet can be employed to control the complexity of polynomial regression models (a sketch follows this list). These techniques add a penalty term to the least squares equation, reducing the impact of large coefficient values and preventing overfitting.

3. Feature Selection: Polynomial regression with a large number of features can suffer from overfitting and decreased interpretability. Feature selection techniques like backward elimination, forward selection, or regularization-based feature selection can help identify the most relevant features for the model.
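To illustrate the regularization point, here is a sketch of a high-degree polynomial fit with a Ridge penalty; the degree and alpha values are hypothetical, and it assumes the x_poly and y_poly arrays from earlier:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge

# Polynomial features, scaling, then Ridge regression;
# scaling matters because the penalty treats all coefficients alike
ridge_model = make_pipeline(
    PolynomialFeatures(degree=15),
    StandardScaler(),
    Ridge(alpha=1.0),  # alpha controls the penalty strength
)
ridge_model.fit(x_poly, y_poly)
```

Larger alpha values shrink the coefficients more aggressively, trading a little bias for lower variance, which is often a worthwhile exchange for high-degree polynomials.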

Conclusion:

In this post, we explored the concept of polynomial regression, a powerful extension of linear regression that allows us to model nonlinear relationships between variables. We covered various aspects of polynomial regression, including its mathematical foundations, implementation in Python, techniques for selecting the optimal degree, assessing model performance, and handling challenges.
