pub struct PyElasticNet { /* private fields */ }
Linear regression with combined L1 and L2 priors as regularizer.
Minimizes the objective function:

    1 / (2 * n_samples) * ||y - Xw||^2_2
    + alpha * l1_ratio * ||w||_1
    + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

If you are interested in controlling the L1 and L2 penalty separately, keep in mind that this is equivalent to:

    a * L1 + b * L2

where:

    alpha = a + b and l1_ratio = a / (a + b)

The parameter l1_ratio corresponds to alpha in the glmnet R package, while alpha corresponds to the lambda parameter in glmnet. Specifically, l1_ratio = 1 is the lasso penalty. Currently, l1_ratio <= 0.01 is not reliable unless you supply your own sequence of alpha.
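The conversion between the two parametrizations is plain arithmetic. As a quick sketch (the helper names penalties_to_params and params_to_penalties are ours, not part of this API):

```python
def penalties_to_params(a, b):
    """Map separate penalty weights a * L1 + b * L2 to the
    (alpha, l1_ratio) parametrization used by the objective above."""
    alpha = a + b
    l1_ratio = a / (a + b)
    return alpha, l1_ratio


def params_to_penalties(alpha, l1_ratio):
    """Inverse mapping: recover the separate L1 and L2 weights."""
    a = alpha * l1_ratio
    b = alpha * (1.0 - l1_ratio)
    return a, b


# A penalty of 1.0 * L1 + 3.0 * L2 corresponds to alpha=4.0, l1_ratio=0.25.
alpha, l1_ratio = penalties_to_params(1.0, 3.0)
print(alpha, l1_ratio)  # 4.0 0.25
```

Note that l1_ratio = 1 (pure lasso) corresponds to b = 0, and l1_ratio = 0 (pure ridge) to a = 0.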
§Parameters
alpha : float, default=1.0
Constant that multiplies the penalty terms. See the notes for the
exact mathematical meaning of this parameter. alpha = 0 is
equivalent to ordinary least squares, solved by the
LinearRegression object. For numerical reasons, using
alpha = 0 with this object is not advised; use the
LinearRegression object instead.
l1_ratio : float, default=0.5
The ElasticNet mixing parameter, with 0 <= l1_ratio <= 1. For
l1_ratio = 0 the penalty is an L2 penalty. For l1_ratio = 1 it
is an L1 penalty. For 0 < l1_ratio < 1, the penalty is a
combination of L1 and L2.
fit_intercept : bool, default=True
Whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (i.e. data is expected to be centered).
copy_X : bool, default=True
If True, X will be copied; else, it may be overwritten.
max_iter : int, default=1000
The maximum number of iterations for the optimization algorithm.
tol : float, default=1e-4
The tolerance for the optimization: if the updates are
smaller than tol, the optimization code checks the
dual gap for optimality and continues until it is smaller
than tol, see Notes below.
warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as
initialization, otherwise, just erase the previous solution.
See the Glossary entry for warm_start.
positive : bool, default=False
When set to True, forces the coefficients to be positive.
random_state : int, RandomState instance, default=None
The seed of the pseudo random number generator that selects a random
feature to update. Used when selection == 'random'.
Pass an int for reproducible output across multiple function calls.
See the Glossary entry for random_state.
selection : {'cyclic', 'random'}, default='cyclic'
If set to 'random', a random coefficient is updated every iteration rather than looping over features sequentially by default. This (setting to 'random') often leads to significantly faster convergence, especially when tol is higher than 1e-4.
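For intuition about max_iter, tol, and selection: models of this family are typically fit by coordinate descent, which minimizes the objective above one coefficient at a time. The following pure-Python toy is an illustrative sketch under our own assumptions, not the actual solver:

```python
def soft_threshold(rho, lam):
    """Proximal operator of lam * |w| (shrinks rho toward zero by lam)."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0


def objective(X, y, w, alpha, l1_ratio):
    """The elastic-net objective from the class description."""
    n = len(y)
    resid = [yi - sum(x * wj for x, wj in zip(row, w)) for row, yi in zip(X, y)]
    return (sum(r * r for r in resid) / (2 * n)
            + alpha * l1_ratio * sum(abs(wj) for wj in w)
            + 0.5 * alpha * (1 - l1_ratio) * sum(wj * wj for wj in w))


def cyclic_sweep(X, y, w, alpha, l1_ratio):
    """One cyclic coordinate-descent sweep (selection='cyclic'):
    each coefficient is set to its exact one-dimensional minimizer in turn."""
    n, p = len(y), len(w)
    for j in range(p):
        # Correlation of feature j with the partial residual, and its scale.
        rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                  for i in range(n)) / n
        z = sum(X[i][j] ** 2 for i in range(n)) / n
        w[j] = soft_threshold(rho, alpha * l1_ratio) / (z + alpha * (1 - l1_ratio))
    return w


X, y = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], [1.0, 2.0, 3.0]
w = [0.0, 0.0]
before = objective(X, y, w, alpha=0.1, l1_ratio=0.5)
w = cyclic_sweep(X, y, w, alpha=0.1, l1_ratio=0.5)
after = objective(X, y, w, alpha=0.1, l1_ratio=0.5)
print(after < before)  # each exact coordinate update can only lower the objective
```

With selection='random' the update order is randomized instead of cyclic; the real solver repeats such sweeps up to max_iter times and stops early using the tol-based criteria described in the Notes.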
§Attributes
coef_ : ndarray of shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the cost function formula).
sparse_coef_ : sparse matrix of shape (n_features,) or (n_targets, n_features)
Sparse representation of the fitted coef_.
intercept_ : float or ndarray of shape (n_targets,)
Independent term in decision function.
n_features_in_ : int
Number of features seen during fit.
n_iter_ : list of int
Number of iterations run by the coordinate descent solver to reach the specified tolerance.
§Examples
>>> from sklears_python import ElasticNet
>>> from sklearn.datasets import make_regression
>>> X, y = make_regression(n_features=2, random_state=0)
>>> regr = ElasticNet(random_state=0)
>>> regr.fit(X, y)
ElasticNet(random_state=0)
>>> print(regr.coef_)
[18.83816119 64.55968437]
>>> print(regr.intercept_)
1.451...
>>> print(regr.predict([[0, 0]]))
[1.451...]
§Notes
To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous NumPy array.
The precise stopping criteria based on tol are the following: First,
check that the maximum coordinate update, i.e. max_j |w_j^new - w_j^old|,
is smaller than tol times the maximum absolute coefficient, max_j |w_j|.
If so, then additionally check whether the dual gap is smaller than
tol times ||y||_2^2 / n_samples.
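The first of those two checks can be written out directly; the helper below and its input values are illustrative, not part of the API:

```python
def coordinate_updates_small(w_old, w_new, tol):
    """True when max_j |w_j^new - w_j^old| < tol * max_j |w_j|, i.e. the
    point at which the solver goes on to check the dual gap."""
    max_update = max(abs(new - old) for new, old in zip(w_new, w_old))
    max_coef = max(abs(new) for new in w_new)
    return max_update < tol * max_coef


print(coordinate_updates_small([1.0, 2.0], [1.0001, 2.0002], tol=1e-3))  # True
print(coordinate_updates_small([1.0, 2.0], [1.5, 2.0], tol=1e-3))        # False
```

In the first call every coefficient moved by far less than tol times the largest coefficient, so the solver would proceed to the dual-gap check; in the second, w_0 moved substantially, so iteration continues.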
§Trait Implementations

impl<'py> IntoPyObject<'py> for PyElasticNet
type Target = PyElasticNet
type Output = Bound<'py, <PyElasticNet as IntoPyObject<'py>>::Target>
fn into_pyobject(self, py: Python<'py>) -> Result<<Self as IntoPyObject<'_>>::Output, <Self as IntoPyObject<'_>>::Error>

impl PyClass for PyElasticNet

impl PyClassImpl for PyElasticNet
const IS_BASETYPE: bool = false
const IS_SUBCLASS: bool = false
const IS_MAPPING: bool = false
const IS_SEQUENCE: bool = false
const IS_IMMUTABLE_TYPE: bool = false
type ThreadChecker = SendablePyClass<PyElasticNet>
type PyClassMutability = <<PyAny as PyClassBaseType>::PyClassMutability as PyClassMutability>::MutableChild
type BaseNativeType = PyAny
PyAny by default; when you declare #[pyclass(extends=PyDict)], it is PyDict.
fn items_iter() -> PyClassItemsIter
fn lazy_type_object() -> &'static LazyTypeObject<Self>
fn dict_offset() -> Option<isize>
fn weaklist_offset() -> Option<isize>

impl PyClassNewTextSignature<PyElasticNet> for PyClassImplCollector<PyElasticNet>
fn new_text_signature(self) -> Option<&'static str>
impl<'a, 'py> PyFunctionArgument<'a, 'py, false> for &'a PyElasticNet

impl<'a, 'py> PyFunctionArgument<'a, 'py, false> for &'a mut PyElasticNet

impl PyMethods<PyElasticNet> for PyClassImplCollector<PyElasticNet>
fn py_methods(self) -> &'static PyClassItems

impl PyTypeInfo for PyElasticNet
fn type_object_raw(py: Python<'_>) -> *mut PyTypeObject
fn type_object(py: Python<'_>) -> Bound<'_, PyType>

impl DerefToPyAny for PyElasticNet
§Auto Trait Implementations
impl Freeze for PyElasticNet
impl RefUnwindSafe for PyElasticNet
impl Send for PyElasticNet
impl Sync for PyElasticNet
impl Unpin for PyElasticNet
impl UnwindSafe for PyElasticNet
§Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T

impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true; otherwise converts self into a Right variant.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true; otherwise converts self into a Right variant.

impl<'py, T> IntoPyObjectExt<'py> for T
where
    T: IntoPyObject<'py>,
fn into_bound_py_any(self, py: Python<'py>) -> Result<Bound<'py, PyAny>, PyErr>
Converts self into an owned Python object, dropping type information.