# API Reference
This is the class and function reference of scikit-learn. Please refer to the full user guide for further details, as the raw specifications of classes and functions may not be enough to give full guidelines on their use. For reference on concepts repeated across the API, see Glossary of Common Terms and API Elements.
## sklearn

| Object | Description |
|---|---|
| `config_context` | Context manager to temporarily change the global scikit-learn configuration. |
| `get_config` | Retrieve the current scikit-learn configuration. |
| `set_config` | Set global scikit-learn configuration. |
| `show_versions` | Print useful debugging information. |
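For example, the global configuration can be inspected and changed temporarily; a minimal sketch (the `assume_finite` flag shown here controls whether input validation checks for NaN/inf):

```python
import sklearn

# Inspect the current global configuration.
config = sklearn.get_config()

# Temporarily skip finite-value validation inside the block.
with sklearn.config_context(assume_finite=True):
    assert sklearn.get_config()["assume_finite"] is True

# The previous configuration is restored on exit.
assert sklearn.get_config()["assume_finite"] == config["assume_finite"]
```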
## sklearn.base

| Object | Description |
|---|---|
| `BaseEstimator` | Base class for all estimators in scikit-learn. |
| `BiclusterMixin` | Mixin class for all bicluster estimators in scikit-learn. |
| `ClassNamePrefixFeaturesOutMixin` | Mixin class for transformers that generate their own names by prefixing. |
| `ClassifierMixin` | Mixin class for all classifiers in scikit-learn. |
| `ClusterMixin` | Mixin class for all cluster estimators in scikit-learn. |
| `DensityMixin` | Mixin class for all density estimators in scikit-learn. |
| `MetaEstimatorMixin` | Mixin class for all meta estimators in scikit-learn. |
| `OneToOneFeatureMixin` | Provides `get_feature_names_out` for simple transformers. |
| `OutlierMixin` | Mixin class for all outlier detection estimators in scikit-learn. |
| `RegressorMixin` | Mixin class for all regression estimators in scikit-learn. |
| `TransformerMixin` | Mixin class for all transformers in scikit-learn. |
| `clone` | Construct a new unfitted estimator with the same parameters. |
| `is_classifier` | Return True if the given estimator is (probably) a classifier. |
| `is_clusterer` | Return True if the given estimator is (probably) a clusterer. |
| `is_regressor` | Return True if the given estimator is (probably) a regressor. |
| `is_outlier_detector` | Return True if the given estimator is (probably) an outlier detector. |
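A minimal sketch of the base utilities (the estimators and parameter values here are illustrative):

```python
from sklearn.base import clone, is_classifier, is_regressor
from sklearn.linear_model import LinearRegression, LogisticRegression

clf = LogisticRegression(C=0.5)

# clone() returns a new, unfitted estimator with the same parameters.
clf_copy = clone(clf)
assert clf_copy is not clf
assert clf_copy.get_params()["C"] == 0.5

# Estimator-type introspection helpers.
assert is_classifier(clf)
assert is_regressor(LinearRegression())
```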
## sklearn.calibration

| Object | Description |
|---|---|
| `CalibratedClassifierCV` | Calibrate probabilities using isotonic, sigmoid, or temperature scaling. |
| `calibration_curve` | Compute true and predicted probabilities for a calibration curve. |
| `CalibrationDisplay` | Calibration curve (also known as reliability diagram) visualization. |
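A minimal sketch of computing a calibration curve by hand (the labels and probabilities are illustrative toy values):

```python
import numpy as np
from sklearn.calibration import calibration_curve

y_true = np.array([0, 0, 0, 1, 1, 1])
y_prob = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])

# Bin the predicted probabilities (2 uniform bins here) and compare
# the mean prediction in each bin with the observed positive rate.
prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=2)
```

A perfectly calibrated model has `prob_true` close to `prob_pred` in every bin.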
## sklearn.cluster

| Object | Description |
|---|---|
| `AffinityPropagation` | Perform Affinity Propagation Clustering of data. |
| `AgglomerativeClustering` | Agglomerative Clustering. |
| `Birch` | Implements the BIRCH clustering algorithm. |
| `BisectingKMeans` | Bisecting K-Means clustering. |
| `DBSCAN` | Perform DBSCAN clustering from vector array or distance matrix. |
| `FeatureAgglomeration` | Agglomerate features. |
| `HDBSCAN` | Cluster data using hierarchical density-based clustering. |
| `KMeans` | K-Means clustering. |
| `MeanShift` | Mean shift clustering using a flat kernel. |
| `MiniBatchKMeans` | Mini-Batch K-Means clustering. |
| `OPTICS` | Estimate clustering structure from vector array. |
| `SpectralBiclustering` | Spectral biclustering (Kluger, 2003). |
| `SpectralClustering` | Apply clustering to a projection of the normalized Laplacian. |
| `SpectralCoclustering` | Spectral Co-Clustering algorithm (Dhillon, 2001). |
| `affinity_propagation` | Perform Affinity Propagation Clustering of data. |
| `cluster_optics_dbscan` | Perform DBSCAN extraction for an arbitrary epsilon. |
| `cluster_optics_xi` | Automatically extract clusters according to the Xi-steep method. |
| `compute_optics_graph` | Compute the OPTICS reachability graph. |
| `dbscan` | Perform DBSCAN clustering from vector array or distance matrix. |
| `estimate_bandwidth` | Estimate the bandwidth to use with the mean-shift algorithm. |
| `k_means` | Perform K-means clustering algorithm. |
| `kmeans_plusplus` | Init n_clusters seeds according to k-means++. |
| `mean_shift` | Perform mean shift clustering of data using a flat kernel. |
| `spectral_clustering` | Apply clustering to a projection of the normalized Laplacian. |
| `ward_tree` | Ward clustering based on a Feature matrix. |
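A minimal sketch of the shared clustering API, using `KMeans` on two synthetic, well-separated blobs (the data is illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
# Two well-separated blobs in 2-D, 20 points each.
X = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 10])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_

# Each blob should receive a single, distinct label.
assert len(set(labels[:20])) == 1
assert len(set(labels[20:])) == 1
assert labels[0] != labels[-1]
```

The same `fit` / `labels_` pattern applies to the other clusterers in the table above; only the hyperparameters differ.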
## sklearn.compose

| Object | Description |
|---|---|
| `ColumnTransformer` | Applies transformers to columns of an array or pandas DataFrame. |
| `TransformedTargetRegressor` | Meta-estimator to regress on a transformed target. |
| `make_column_selector` | Create a callable to select columns to be used with `ColumnTransformer`. |
| `make_column_transformer` | Construct a ColumnTransformer from the given transformers. |
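A minimal sketch combining `ColumnTransformer` with `make_column_selector` on a toy DataFrame (column names and data are illustrative):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer, make_column_selector
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({"age": [20.0, 30.0, 40.0],
                   "city": ["a", "b", "a"]})

# Route numeric columns to scaling and object columns to one-hot encoding.
ct = ColumnTransformer([
    ("num", StandardScaler(), make_column_selector(dtype_include="number")),
    ("cat", OneHotEncoder(), make_column_selector(dtype_include=object)),
])
Xt = ct.fit_transform(df)  # 1 scaled column + 2 one-hot columns
```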
## sklearn.covariance

| Object | Description |
|---|---|
| `EllipticEnvelope` | An object for detecting outliers in a Gaussian distributed dataset. |
| `EmpiricalCovariance` | Maximum likelihood covariance estimator. |
| `GraphicalLasso` | Sparse inverse covariance estimation with an l1-penalized estimator. |
| `GraphicalLassoCV` | Sparse inverse covariance w/ cross-validated choice of the l1 penalty. |
| `LedoitWolf` | LedoitWolf Estimator. |
| `MinCovDet` | Minimum Covariance Determinant (MCD): robust estimator of covariance. |
| `OAS` | Oracle Approximating Shrinkage Estimator. |
| `ShrunkCovariance` | Covariance estimator with shrinkage. |
| `empirical_covariance` | Compute the Maximum likelihood covariance estimator. |
| `graphical_lasso` | L1-penalized covariance estimator. |
| `ledoit_wolf` | Estimate the shrunk Ledoit-Wolf covariance matrix. |
| `ledoit_wolf_shrinkage` | Estimate the shrunk Ledoit-Wolf covariance matrix. |
| `oas` | Estimate covariance with the Oracle Approximating Shrinkage. |
| `shrunk_covariance` | Calculate covariance matrices shrunk on the diagonal. |
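A minimal sketch contrasting the plain maximum-likelihood estimate with Ledoit-Wolf shrinkage (random data for illustration):

```python
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

rng = np.random.RandomState(0)
X = rng.randn(50, 3)

# Plain maximum-likelihood estimate of the covariance matrix.
emp_cov = empirical_covariance(X)

# Shrinkage estimator: the shrinkage coefficient is chosen analytically
# and stored in shrinkage_ (a value in [0, 1]).
lw = LedoitWolf().fit(X)
```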
## sklearn.cross_decomposition

| Object | Description |
|---|---|
| `CCA` | Canonical Correlation Analysis, also known as "Mode B" PLS. |
| `PLSCanonical` | Partial Least Squares transformer and regressor. |
| `PLSRegression` | PLS regression. |
| `PLSSVD` | Partial Least Square SVD. |
## sklearn.datasets

### Loaders

| Object | Description |
|---|---|
| `clear_data_home` | Delete all the content of the data home cache. |
| `dump_svmlight_file` | Dump the dataset in svmlight / libsvm file format. |
| `fetch_20newsgroups` | Load the filenames and data from the 20 newsgroups dataset (classification). |
| `fetch_20newsgroups_vectorized` | Load and vectorize the 20 newsgroups dataset (classification). |
| `fetch_california_housing` | Load the California housing dataset (regression). |
| `fetch_covtype` | Load the covertype dataset (classification). |
| `fetch_file` | Fetch a file from the web if not already present in the local folder. |
| `fetch_kddcup99` | Load the kddcup99 dataset (classification). |
| `fetch_lfw_pairs` | Load the Labeled Faces in the Wild (LFW) pairs dataset (classification). |
| `fetch_lfw_people` | Load the Labeled Faces in the Wild (LFW) people dataset (classification). |
| `fetch_olivetti_faces` | Load the Olivetti faces data-set from AT&T (classification). |
| `fetch_openml` | Fetch dataset from openml by name or dataset id. |
| `fetch_rcv1` | Load the RCV1 multilabel dataset (classification). |
| `fetch_species_distributions` | Loader for species distribution dataset from Phillips et al. (2006). |
| `get_data_home` | Return the path of the scikit-learn data directory. |
| `load_breast_cancer` | Load and return the breast cancer Wisconsin dataset (classification). |
| `load_diabetes` | Load and return the diabetes dataset (regression). |
| `load_digits` | Load and return the digits dataset (classification). |
| `load_files` | Load text files with categories as subfolder names. |
| `load_iris` | Load and return the iris dataset (classification). |
| `load_linnerud` | Load and return the physical exercise Linnerud dataset. |
| `load_sample_image` | Load the numpy array of a single sample image. |
| `load_sample_images` | Load sample images for image manipulation. |
| `load_svmlight_file` | Load datasets in the svmlight / libsvm format into sparse CSR matrix. |
| `load_svmlight_files` | Load dataset from multiple files in SVMlight format. |
| `load_wine` | Load and return the wine dataset (classification). |

### Sample generators

| Object | Description |
|---|---|
| `make_biclusters` | Generate a constant block diagonal structure array for biclustering. |
| `make_blobs` | Generate isotropic Gaussian blobs for clustering. |
| `make_checkerboard` | Generate an array with block checkerboard structure for biclustering. |
| `make_circles` | Make a large circle containing a smaller circle in 2d. |
| `make_classification` | Generate a random n-class classification problem. |
| `make_friedman1` | Generate the "Friedman #1" regression problem. |
| `make_friedman2` | Generate the "Friedman #2" regression problem. |
| `make_friedman3` | Generate the "Friedman #3" regression problem. |
| `make_gaussian_quantiles` | Generate isotropic Gaussian and label samples by quantile. |
| `make_hastie_10_2` | Generate data for binary classification used in Hastie et al. 2009, Example 10.2. |
| `make_low_rank_matrix` | Generate a mostly low rank matrix with bell-shaped singular values. |
| `make_moons` | Make two interleaving half circles. |
| `make_multilabel_classification` | Generate a random multilabel classification problem. |
| `make_regression` | Generate a random regression problem. |
| `make_s_curve` | Generate an S curve dataset. |
| `make_sparse_coded_signal` | Generate a signal as a sparse combination of dictionary elements. |
| `make_sparse_spd_matrix` | Generate a sparse symmetric definite positive matrix. |
| `make_sparse_uncorrelated` | Generate a random regression problem with sparse uncorrelated design. |
| `make_spd_matrix` | Generate a random symmetric, positive-definite matrix. |
| `make_swiss_roll` | Generate a swiss roll dataset. |
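A minimal sketch of two sample generators (the sizes and parameters are illustrative):

```python
from sklearn.datasets import make_blobs, make_classification

# A synthetic 2-class problem with 20 features, 5 of them informative.
X, y = make_classification(n_samples=100, n_features=20,
                           n_informative=5, random_state=0)

# Isotropic Gaussian blobs around 3 centers, in 2-D by default.
Xb, yb = make_blobs(n_samples=60, centers=3, random_state=0)
```

Passing `random_state` makes the generated data reproducible across runs.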
## sklearn.decomposition

| Object | Description |
|---|---|
| `DictionaryLearning` | Dictionary learning. |
| `FactorAnalysis` | Factor Analysis (FA). |
| `FastICA` | FastICA: a fast algorithm for Independent Component Analysis. |
| `IncrementalPCA` | Incremental principal components analysis (IPCA). |
| `KernelPCA` | Kernel Principal component analysis (KPCA). |
| `LatentDirichletAllocation` | Latent Dirichlet Allocation with online variational Bayes algorithm. |
| `MiniBatchDictionaryLearning` | Mini-batch dictionary learning. |
| `MiniBatchNMF` | Mini-Batch Non-Negative Matrix Factorization (NMF). |
| `MiniBatchSparsePCA` | Mini-batch Sparse Principal Components Analysis. |
| `NMF` | Non-Negative Matrix Factorization (NMF). |
| `PCA` | Principal component analysis (PCA). |
| `SparseCoder` | Sparse coding. |
| `SparsePCA` | Sparse Principal Components Analysis (SparsePCA). |
| `TruncatedSVD` | Dimensionality reduction using truncated SVD (aka LSA). |
| `dict_learning` | Solve a dictionary learning matrix factorization problem. |
| `dict_learning_online` | Solve a dictionary learning matrix factorization problem online. |
| `fastica` | Perform Fast Independent Component Analysis. |
| `non_negative_factorization` | Compute Non-negative Matrix Factorization (NMF). |
| `sparse_encode` | Sparse coding. |
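A minimal sketch of the shared `fit` / `transform` decomposition API, using `PCA` on data whose variance is dominated by one axis (synthetic data for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Most of the variance lies along the first axis.
X = rng.randn(200, 3) * np.array([10.0, 1.0, 0.1])

pca = PCA(n_components=2).fit(X)
X2 = pca.transform(X)

# The first component should capture nearly all the variance.
assert pca.explained_variance_ratio_[0] > 0.9
```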
## sklearn.discriminant_analysis

| Object | Description |
|---|---|
| `LinearDiscriminantAnalysis` | Linear Discriminant Analysis. |
| `QuadraticDiscriminantAnalysis` | Quadratic Discriminant Analysis. |
## sklearn.dummy

| Object | Description |
|---|---|
| `DummyClassifier` | DummyClassifier makes predictions that ignore the input features. |
| `DummyRegressor` | Regressor that makes predictions using simple rules. |
## sklearn.ensemble

| Object | Description |
|---|---|
| `AdaBoostClassifier` | An AdaBoost classifier. |
| `AdaBoostRegressor` | An AdaBoost regressor. |
| `BaggingClassifier` | A Bagging classifier. |
| `BaggingRegressor` | A Bagging regressor. |
| `ExtraTreesClassifier` | An extra-trees classifier. |
| `ExtraTreesRegressor` | An extra-trees regressor. |
| `GradientBoostingClassifier` | Gradient Boosting for classification. |
| `GradientBoostingRegressor` | Gradient Boosting for regression. |
| `HistGradientBoostingClassifier` | Histogram-based Gradient Boosting Classification Tree. |
| `HistGradientBoostingRegressor` | Histogram-based Gradient Boosting Regression Tree. |
| `IsolationForest` | Isolation Forest Algorithm. |
| `RandomForestClassifier` | A random forest classifier. |
| `RandomForestRegressor` | A random forest regressor. |
| `RandomTreesEmbedding` | An ensemble of totally random trees. |
| `StackingClassifier` | Stack of estimators with a final classifier. |
| `StackingRegressor` | Stack of estimators with a final regressor. |
| `VotingClassifier` | Soft Voting/Majority Rule classifier for unfitted estimators. |
| `VotingRegressor` | Prediction voting regressor for unfitted estimators. |
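A minimal end-to-end sketch with `RandomForestClassifier` on a synthetic problem (data and hyperparameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(X_train, y_train)
acc = rf.score(X_test, y_test)  # mean accuracy on held-out data
```

The other ensembles in the table above follow the same `fit` / `predict` / `score` interface.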
## sklearn.exceptions

| Object | Description |
|---|---|
| `ConvergenceWarning` | Custom warning to capture convergence problems. |
| `DataConversionWarning` | Warning used to notify implicit data conversions happening in the code. |
| `DataDimensionalityWarning` | Custom warning to notify potential issues with data dimensionality. |
| `EfficiencyWarning` | Warning used to notify the user of inefficient computation. |
| `FitFailedWarning` | Warning class used if there is an error while fitting the estimator. |
| `InconsistentVersionWarning` | Warning raised when an estimator is unpickled with an inconsistent version. |
| `NotFittedError` | Exception class to raise if estimator is used before fitting. |
| `UndefinedMetricWarning` | Warning used when the metric is invalid. |
| `EstimatorCheckFailedWarning` | Warning raised when an estimator check from the common tests fails. |
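A minimal sketch of `NotFittedError`, raised when calling `predict` before `fit` (the estimator and input are illustrative):

```python
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
caught = False
try:
    clf.predict([[0.0, 1.0]])  # predict before fit
except NotFittedError:
    caught = True
```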
## sklearn.experimental

| Object | Description |
|---|---|
| `enable_halving_search_cv` | Enables Successive Halving search-estimators. |
| `enable_iterative_imputer` | Enables IterativeImputer. |
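These modules work by import side effect: importing the enabler makes the experimental estimator importable from its usual location. A minimal sketch (the toy matrix is illustrative):

```python
# Importing the enabler module activates the experimental estimator.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

import numpy as np

X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0]])
X_filled = IterativeImputer(random_state=0).fit_transform(X)
```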
## sklearn.feature_extraction

| Object | Description |
|---|---|
| `DictVectorizer` | Transforms lists of feature-value mappings to vectors. |
| `FeatureHasher` | Implements feature hashing, aka the hashing trick. |