# Ensemble Learning
Ensemble Learning is a powerful machine learning paradigm that combines multiple models to achieve better performance than any individual model. The idea is to leverage the strengths of different models to improve overall accuracy, robustness, and generalization.
## Introduction
Ensemble Learning is a technique that combines the predictions from multiple machine learning models to make more accurate and robust predictions than a single model. It leverages the diversity of different models to reduce errors and improve performance.
## Types of Ensemble Learning
### Bagging
Bagging, or Bootstrap Aggregating, trains multiple instances of the same model on different bootstrap samples of the training data (subsets drawn with replacement) and combines their predictions by averaging (regression) or majority voting (classification). The most common example of bagging is the `RandomForest` algorithm.
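
As a minimal sketch of the idea, the snippet below uses scikit-learn's generic `BaggingClassifier` with decision trees as the base learner. The dataset and hyperparameters are arbitrary choices for illustration; a fuller Random Forest example appears in the Code Examples section below.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Illustrative data: the Iris dataset (arbitrary choice for demonstration)
X, y = load_iris(return_X_y=True)

# Bagging: 50 decision trees, each fit on a bootstrap sample of the training data
bagging = BaggingClassifier(
    estimator=DecisionTreeClassifier(),  # use base_estimator= on scikit-learn < 1.2
    n_estimators=50,
    bootstrap=True,  # sample the training set with replacement
    random_state=42,
)

# Cross-validated accuracy of the bagged ensemble
print(cross_val_score(bagging, X, y, cv=5).mean())
```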
### Boosting
Boosting focuses on training models sequentially, where each new model corrects the errors made by the previous ones. This way, the ensemble learns from its mistakes, leading to improved performance. `AdaBoost` and `Gradient Boosting` are popular examples of boosting algorithms.
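
To sketch the sequential idea, the example below fits scikit-learn's `GradientBoostingClassifier`, in which each new shallow tree is fit to correct the errors of the ensemble built so far. The dataset and hyperparameters are illustrative assumptions; an AdaBoost example appears in the Code Examples section below.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Illustrative data and split (arbitrary choices for demonstration)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Each of the 100 shallow trees is fit to correct the residual errors of the
# trees built before it; learning_rate scales each tree's contribution
gb_clf = GradientBoostingClassifier(
    n_estimators=100, learning_rate=0.1, max_depth=2, random_state=42
)
gb_clf.fit(X_tr, y_tr)

print(f"Accuracy: {accuracy_score(y_te, gb_clf.predict(X_te)):.3f}")
```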
### Stacking
Stacking involves training multiple models (the base learners) and a meta-model that combines their predictions. The base learners are trained on the original dataset, while the meta-model is trained on the outputs of the base learners. This approach allows leveraging the strengths of different models.
## Advantages and Disadvantages
### Advantages
- **Improved Accuracy**: Combines the strengths of multiple models.
- **Robustness**: Reduces the risk of overfitting and model bias.
- **Versatility**: Can be applied to various machine learning tasks, including classification and regression.
### Disadvantages
- **Complexity**: More complex than individual models, making interpretation harder.
- **Computational Cost**: Requires more computational resources and training time.
- **Implementation**: Can be challenging to implement and tune effectively.
## Key Concepts
- **Diversity**: The models in the ensemble should be diverse to benefit from their different strengths.
- **Voting/Averaging**: For classification, majority voting is used to combine predictions. For regression, averaging is used.
- **Weighting**: In some ensembles, models are weighted based on their accuracy or other metrics, as illustrated in the sketch after this list.
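
To make voting and weighting concrete, here is a minimal sketch using scikit-learn's `VotingClassifier`. The choice of base models, weights, and dataset are assumptions made for illustration only.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Three diverse base models (diversity is what makes the ensemble useful)
estimators = [
    ('lr', LogisticRegression(max_iter=1000)),
    ('knn', KNeighborsClassifier(n_neighbors=5)),
    ('dt', DecisionTreeClassifier(max_depth=3)),
]

# Hard voting: each model casts one vote and the majority class wins.
# The optional weights give more influence to models you trust more.
voting_clf = VotingClassifier(estimators=estimators, voting='hard', weights=[2, 1, 1])

# Cross-validated accuracy of the voting ensemble
print(cross_val_score(voting_clf, X, y, cv=5).mean())
```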
## Code Examples
### Bagging with Random Forest
Below is an example of using Random Forest for classification on the Iris dataset.
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report

# Load dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Initialize Random Forest model
clf = RandomForestClassifier(n_estimators=100, random_state=42)

# Train the model
clf.fit(X_train, y_train)

# Make predictions
y_pred = clf.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy * 100:.2f}%")
print("Classification Report:\n", classification_report(y_test, y_pred))
```
### Boosting with AdaBoost
Below is an example of using AdaBoost for classification on the Iris dataset. It reuses the train/test split and evaluation imports from the Random Forest example above.
```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Initialize base model (a decision stump)
base_model = DecisionTreeClassifier(max_depth=1)

# Initialize AdaBoost model (use base_estimator= on scikit-learn < 1.2)
ada_clf = AdaBoostClassifier(estimator=base_model, n_estimators=50, random_state=42)

# Train the model
ada_clf.fit(X_train, y_train)

# Make predictions
y_pred = ada_clf.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy * 100:.2f}%")
print("Classification Report:\n", classification_report(y_test, y_pred))
```
### Stacking with Multiple Models
Below is an example of using stacking with multiple models for classification on the Iris dataset. It also reuses the train/test split and evaluation imports from the Random Forest example above.
```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import StackingClassifier

# Define base models
base_models = [
    ('knn', KNeighborsClassifier(n_neighbors=5)),
    ('svc', SVC(kernel='linear', probability=True))
]

# Define meta-model
meta_model = LogisticRegression()

# Initialize Stacking model
stacking_clf = StackingClassifier(estimators=base_models, final_estimator=meta_model, cv=5)

# Train the model
stacking_clf.fit(X_train, y_train)

# Make predictions
y_pred = stacking_clf.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy * 100:.2f}%")
print("Classification Report:\n", classification_report(y_test, y_pred))
```
## Conclusion
Ensemble Learning is a powerful technique that combines multiple models to improve overall performance. By leveraging the strengths of different models, it provides better accuracy, robustness, and generalization. However, it comes with increased complexity and computational cost. Understanding and implementing ensemble methods can significantly enhance machine learning solutions.