scikit-learn homepage
  • Install
  • User Guide
  • API
  • Examples
  • Community
    • Getting Started
    • Release History
    • Glossary
    • Development
    • FAQ
    • Support
    • Related Projects
    • Roadmap
    • Governance
    • About us
  • GitHub

Section Navigation

  • 1. Supervised learning
    • 1.1. Linear Models
    • 1.2. Linear and Quadratic Discriminant Analysis
    • 1.3. Kernel ridge regression
    • 1.4. Support Vector Machines
    • 1.5. Stochastic Gradient Descent
    • 1.6. Nearest Neighbors
    • 1.7. Gaussian Processes
    • 1.8. Cross decomposition
    • 1.9. Naive Bayes
    • 1.10. Decision Trees
    • 1.11. Ensembles: Gradient boosting, random forests, bagging, voting, stacking
    • 1.12. Multiclass and multioutput algorithms
    • 1.13. Feature selection
    • 1.14. Semi-supervised learning
    • 1.15. Isotonic regression
    • 1.16. Probability calibration
    • 1.17. Neural network models (supervised)
  • 2. Unsupervised learning
    • 2.1. Gaussian mixture models
    • 2.2. Manifold learning
    • 2.3. Clustering
    • 2.4. Biclustering
    • 2.5. Decomposing signals in components (matrix factorization problems)
    • 2.6. Covariance estimation
    • 2.7. Novelty and Outlier Detection
    • 2.8. Density Estimation
    • 2.9. Neural network models (unsupervised)
  • 3. Model selection and evaluation
    • 3.1. Cross-validation: evaluating estimator performance
    • 3.2. Tuning the hyper-parameters of an estimator
    • 3.3. Tuning the decision threshold for class prediction
    • 3.4. Metrics and scoring: quantifying the quality of predictions
    • 3.5. Validation curves: plotting scores to evaluate models
  • 4. Metadata Routing
  • 5. Inspection
    • 5.1. Partial Dependence and Individual Conditional Expectation plots
    • 5.2. Permutation feature importance
  • 6. Visualizations
  • 7. Dataset transformations
    • 7.1. Pipelines and composite estimators
    • 7.2. Feature extraction
    • 7.3. Preprocessing data
    • 7.4. Imputation of missing values
    • 7.5. Unsupervised dimensionality reduction
    • 7.6. Random Projection
    • 7.7. Kernel Approximation