Title: Sorted L1 Penalized Estimation
Description: Efficient implementations for Sorted L-One Penalized Estimation (SLOPE): generalized linear models regularized with the sorted L1 norm (Bogdan et al. 2015). Supported models include ordinary least-squares regression, binomial regression, multinomial regression, and Poisson regression. Both dense and sparse predictor matrices are supported. In addition, the package features predictor screening rules that enable fast and efficient solutions to high-dimensional problems.
Authors: Johan Larsson [aut, cre], Jonas Wallin [aut], Malgorzata Bogdan [aut], Ewout van den Berg [aut], Chiara Sabatti [aut], Emmanuel Candes [aut], Evan Patterson [aut], Weijie Su [aut], Jakub Kała [aut], Krystyna Grzesiak [aut], Michal Burdukiewicz [aut], Jerome Friedman [ctb] (code adapted from 'glmnet'), Trevor Hastie [ctb] (code adapted from 'glmnet'), Rob Tibshirani [ctb] (code adapted from 'glmnet'), Balasubramanian Narasimhan [ctb] (code adapted from 'glmnet'), Noah Simon [ctb] (code adapted from 'glmnet'), Junyang Qian [ctb] (code adapted from 'glmnet'), Akarsh Goyal [ctb]
Maintainer: Johan Larsson <[email protected]>
License: GPL-3
Version: 0.5.1
Built: 2024-11-01 06:28:16 UTC
Source: https://github.com/jolars/slope
This data set contains observations of abalones, the common name for any of a group of sea snails. The goal is to predict the age of an individual abalone given physical measurements such as sex, weight, and height.
abalone
A list with two items representing 211 observations from 9 variables:
- sex of abalone, 1 for female
- indicates that the abalone is an infant
- longest shell measurement in mm
- diameter, perpendicular to length, in mm
- height in mm, including meat in shell
- weight of the entire abalone
- weight of the meat
- weight of the viscera
- weight of the shell
- number of rings; +1.5 gives the age in years
Only a stratified sample of 211 rows of the original data set is used here.
Pace, R. Kelley and Ronald Barry, Sparse Spatial Autoregressions, Statistics and Probability Letters, 33 (1997) 291-297.
Other datasets: bodyfat, heart, student, wine
The response (y) corresponds to estimates of the percentage of body fat from application of Siri's 1956 equation to measurements of underwater weighing, as well as age, weight, height, and a variety of body circumference measurements.
bodyfat
A list with two items representing 252 observations from 14 variables:
- age (years)
- weight (lbs)
- height (inches)
- neck circumference (cm)
- chest circumference (cm)
- abdomen circumference (cm)
- hip circumference (cm)
- thigh circumference (cm)
- knee circumference (cm)
- ankle circumference (cm)
- biceps circumference (cm)
- forearm circumference (cm)
- wrist circumference (cm)
http://lib.stat.cmu.edu/datasets/bodyfat
https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html
Other datasets: abalone, heart, student, wine
This function can be used in a call to caret::train() to enable model tuning using caret (a usage sketch follows at the end of this section). Note that this function does not work properly with sparse feature matrices or with standardization, due to the way resampling is implemented in caret. For these cases, please use trainSLOPE() instead.
caretSLOPE()
A model description list to be used in the method argument of caret::train().
caret::train(), trainSLOPE(), SLOPE()
Other model-tuning: plot.TrainedSLOPE(), trainSLOPE()
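A minimal sketch of plugging the returned model description into caret; the resampling settings below are illustrative assumptions, not package defaults.

# sketch: tuning a SLOPE model through caret; settings are illustrative
library(caret)
library(SLOPE)

fit <- caret::train(
  x = as.matrix(subset(mtcars, select = c("mpg", "drat", "wt"))),
  y = mtcars$hp,
  method = caretSLOPE(),
  trControl = trainControl(method = "cv", number = 5)
)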
This function returns coefficients from a model fit by SLOPE().
## S3 method for class 'SLOPE'
coef(object, alpha = NULL, exact = FALSE, simplify = TRUE, sigma, ...)
object: an object of class 'SLOPE'
alpha: penalty parameter for SLOPE models; if NULL, the penalties used in the original fit are used
exact: if TRUE and the given alpha is not in the original fit, the model is refit at these values; if FALSE, the coefficients are interpolated (see Details)
simplify: if TRUE, base::drop() is called on the coefficients before they are returned
sigma: deprecated. Please use alpha instead.
...: arguments that are passed on to stats::update() when exact = TRUE
If exact = FALSE and alpha is not in object, then the returned coefficients will be approximated by linear interpolation. If coefficients from another type of penalty sequence (with a different lambda) are required, however, please use SLOPE() to refit the model.
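A minimal sketch of the interpolation behavior described above; the alpha value is illustrative and assumed to lie between two points on the fitted path.

# sketch: interpolated versus exact coefficients (alpha value illustrative)
fit <- SLOPE(mtcars$mpg, mtcars$vs, path_length = 20)
coef(fit, alpha = 0.05)                # linear interpolation along the path
coef(fit, alpha = 0.05, exact = TRUE)  # refits the model at exactly this alpha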
Coefficients from the model.
Other SLOPE-methods: deviance.SLOPE(), plot.SLOPE(), predict.SLOPE(), print.SLOPE(), score()
fit <- SLOPE(mtcars$mpg, mtcars$vs, path_length = 1)
coef(fit)
Model deviance
## S3 method for class 'SLOPE'
deviance(object, ...)
object: an object of class 'SLOPE'
...: ignored
For Gaussian models, this is twice the residual sum of squares. For all other models, twice the negative log-likelihood is returned.
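As a minimal sketch, the returned deviances can be combined with the null deviance stored in the fit (see the Value section of SLOPE() below) to recover the deviance ratio; the abalone data ships with the package.

# sketch: fraction of null deviance explained at each step of the path
fit <- SLOPE(abalone$x, abalone$y, family = "poisson")
dev <- deviance(fit)
head(1 - dev / fit$null_deviance)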
Other SLOPE-methods: coef.SLOPE(), plot.SLOPE(), predict.SLOPE(), print.SLOPE(), score()
fit <- SLOPE(abalone$x, abalone$y, family = "poisson")
deviance(fit)
Diagnostic attributes of patients classified as having heart disease or not.
heart
270 observations from 17 variables represented as a list consisting of a binary factor response vector y, with levels 'absence' and 'presence' indicating the absence or presence of heart disease, and x: a sparse feature matrix of class 'dgCMatrix' with the following variables:
- age
- diastolic blood pressure
- serum cholesterol in mg/dl
- maximum heart rate achieved
- ST depression induced by exercise relative to rest
- the number of major blood vessels (0 to 3) that were colored by fluoroscopy
- sex of the participant: 0 for male, 1 for female
- a dummy variable indicating whether the person suffered angina pectoris during exercise
- indicates a fasting blood sugar over 120 mg/dl
- typical angina
- atypical angina
- non-anginal pain
- indicates an ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
- probable or definite left ventricular hypertrophy by Estes' criteria
- a flat ST curve during peak exercise
- a downwards-sloping ST curve during peak exercise
- reversible defect
- fixed defect
The original data set contained 13 variables. The nominal variables among these were dummy-coded, with the first category removed. No precise information regarding the variables chest_pain, thal, and ecg could be found, which explains their vague definitions here.
Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository http://archive.ics.uci.edu/ml/. Irvine, CA: University of California, School of Information and Computer Science.
https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#heart
Other datasets: abalone, bodyfat, student, wine
Plot the fitted model's regression coefficients along the regularization path.
## S3 method for class 'SLOPE'
plot(
  x,
  intercept = FALSE,
  x_variable = c("alpha", "deviance_ratio", "step"),
  ...
)
x: an object of class 'SLOPE'
intercept: whether to plot the intercept
x_variable: what to plot on the x axis; "alpha" plots the scaling parameter for the penalty sequence, "deviance_ratio" plots the fraction of deviance explained, and "step" plots the step number along the path
...: further arguments passed to or from other methods
An object of class "ggplot", which will be plotted on the current device unless stored in a variable.
Other SLOPE-methods: coef.SLOPE(), deviance.SLOPE(), predict.SLOPE(), print.SLOPE(), score()
fit <- SLOPE(heart$x, heart$y)
plot(fit)
Plot results from cross-validation
## S3 method for class 'TrainedSLOPE'
plot(
  x,
  measure = c("auto", "mse", "mae", "deviance", "auc", "misclass"),
  plot_min = TRUE,
  ci_alpha = 0.2,
  ci_border = FALSE,
  ci_col = "salmon",
  ...
)
x: an object of class 'TrainedSLOPE'
measure: any of the measures used in the call to trainSLOPE()
plot_min: whether to mark the location of the penalty corresponding to the best prediction score
ci_alpha: alpha (opacity) for the fill of the confidence limits
ci_border: color of the border of the confidence limits, or a flag to turn the border off and on
ci_col: color for the fill of the confidence limits
...: ignored
An object of class "ggplot", which will be plotted on the current device unless stored in a variable.
Other model-tuning: caretSLOPE(), trainSLOPE()
# Cross-validation for a SLOPE model
set.seed(123)
tune <- trainSLOPE(
  subset(mtcars, select = c("mpg", "drat", "wt")),
  mtcars$hp,
  q = c(0.1, 0.2),
  number = 10
)
plot(tune, ci_col = "salmon")
This function plots various diagnostics collected during the model fitting resulting from a call to SLOPE(), provided that diagnostics = TRUE.
plotDiagnostics(
  object,
  ind = max(object$diagnostics$penalty),
  xvar = c("time", "iteration")
)
object: an object of class 'SLOPE', fit with diagnostics = TRUE
ind: either "last" or the index of the penalty whose diagnostics should be plotted; defaults to the last penalty on the path
xvar: what to place on the x axis; "time" plots wall-clock time and "iteration" plots the iteration number
An object of class "ggplot", which will be plotted on the current device unless stored in a variable.
x <- SLOPE(abalone$x, abalone$y, diagnostics = TRUE)
plotDiagnostics(x)
Return predictions from models fit by SLOPE().
## S3 method for class 'SLOPE'
predict(object, x, alpha = NULL, type = "link", simplify = TRUE, sigma, ...)

## S3 method for class 'GaussianSLOPE'
predict(
  object,
  x,
  sigma = NULL,
  type = c("link", "response"),
  simplify = TRUE,
  ...
)

## S3 method for class 'BinomialSLOPE'
predict(
  object,
  x,
  sigma = NULL,
  type = c("link", "response", "class"),
  simplify = TRUE,
  ...
)

## S3 method for class 'PoissonSLOPE'
predict(
  object,
  x,
  sigma = NULL,
  type = c("link", "response"),
  exact = FALSE,
  simplify = TRUE,
  ...
)

## S3 method for class 'MultinomialSLOPE'
predict(
  object,
  x,
  sigma = NULL,
  type = c("link", "response", "class"),
  exact = FALSE,
  simplify = TRUE,
  ...
)
object: an object of class 'SLOPE'
x: new data to predict on
alpha: penalty parameter for SLOPE models; if NULL, the penalties used in the original fit are used
type: type of prediction; "link" returns the linear predictors, "response" returns the result of applying the link function, and "class" (binomial and multinomial families only) returns class predictions
simplify: if TRUE, base::drop() is called on the predictions before they are returned
sigma: deprecated. Please use alpha instead.
...: ignored and only here for method consistency
exact: if TRUE and the given alpha is not in the original fit, the model is refit at these values; if FALSE, predictions are interpolated along the path (see coef.SLOPE())
Predictions from the model, with scale determined by type.
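A brief sketch contrasting the prediction scales for a binomial fit, mirroring the example below:

# sketch: linear predictors versus response-scale probabilities
fit <- with(mtcars, SLOPE(cbind(mpg, hp), vs, family = "binomial"))
newx <- with(mtcars, cbind(mpg, hp))
head(predict(fit, newx, type = "link"))      # linear predictor
head(predict(fit, newx, type = "response"))  # probabilities via the link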
stats::predict(), stats::predict.glm(), coef.SLOPE()
Other SLOPE-methods: coef.SLOPE(), deviance.SLOPE(), plot.SLOPE(), print.SLOPE(), score()
fit <- with(mtcars, SLOPE(cbind(mpg, hp), vs, family = "binomial"))
predict(fit, with(mtcars, cbind(mpg, hp)), type = "class")
Print results from SLOPE fit
## S3 method for class 'SLOPE'
print(x, ...)

## S3 method for class 'TrainedSLOPE'
print(x, ...)
x: an object of class 'SLOPE' or 'TrainedSLOPE'
...: other arguments passed to print()
Prints output on the screen
Other SLOPE-methods: coef.SLOPE(), deviance.SLOPE(), plot.SLOPE(), predict.SLOPE(), score()
fit <- SLOPE(wine$x, wine$y, family = "multinomial")
print(fit, digits = 1)
This function generates sequences of regularization weights for use in SLOPE() (or elsewhere).
regularizationWeights(
  n_lambda = 100,
  type = c("bh", "gaussian", "oscar", "lasso"),
  q = 0.2,
  theta1 = 1,
  theta2 = 0.5,
  n = NULL
)
n_lambda: the number of lambdas to generate; this should typically be equal to the number of predictors in your data set
type: the type of lambda sequence to use; see the documentation for lambda in SLOPE()
q: parameter controlling the shape of the lambda sequence, with usage varying depending on the type of path used; it has no effect if a custom lambda sequence is used
theta1: parameter controlling the shape of the lambda sequence when type = "oscar"
theta2: parameter controlling the shape of the lambda sequence when type = "oscar"
n: the number of rows (observations) in the design matrix; needed when type = "gaussian"
Please see SLOPE() for detailed information regarding the parameters in this function, in particular the section Regularization Sequences. Note that these sequences are automatically scaled (unless a value for the alpha parameter is manually supplied) when used in SLOPE(). In this function, no such scaling is attempted.
A vector of length n_lambda with regularization weights.
# compute different penalization sequences
bh <- regularizationWeights(100, q = 0.2, type = "bh")

gaussian <- regularizationWeights(
  100,
  q = 0.2,
  n = 300,
  type = "gaussian"
)

oscar <- regularizationWeights(
  100,
  theta1 = 1.284,
  theta2 = 0.0182,
  type = "oscar"
)

lasso <- regularizationWeights(100, type = "lasso") * mean(oscar)

# Plot a comparison between these sequences
plot(bh, type = "l", ylab = expression(lambda))
lines(gaussian, col = "dark orange")
lines(oscar, col = "navy")
lines(lasso, col = "red3")

legend(
  "topright",
  legend = c("BH", "Gaussian", "OSCAR", "lasso"),
  col = c("black", "dark orange", "navy", "red3"),
  lty = 1
)
This function is a unified interface to return various types of loss for a model fit with SLOPE().
score(object, x, y, measure)

## S3 method for class 'GaussianSLOPE'
score(object, x, y, measure = c("mse", "mae"))

## S3 method for class 'BinomialSLOPE'
score(object, x, y, measure = c("mse", "mae", "deviance", "misclass", "auc"))

## S3 method for class 'MultinomialSLOPE'
score(object, x, y, measure = c("mse", "mae", "deviance", "misclass"))

## S3 method for class 'PoissonSLOPE'
score(object, x, y, measure = c("mse", "mae"))
object: an object of class 'SLOPE'
x: feature matrix
y: response
measure: type of target measure; the available options depend on the model family (see Usage)
The measure along the regularization path, depending on the value in measure.
Other SLOPE-methods: coef.SLOPE(), deviance.SLOPE(), plot.SLOPE(), predict.SLOPE(), print.SLOPE()
x <- subset(infert, select = c("induced", "age", "pooled.stratum"))
y <- infert$case

fit <- SLOPE(x, y, family = "binomial")
score(fit, x, y, measure = "auc")
Fit a generalized linear model regularized with the sorted L1 norm, which applies a non-increasing regularization sequence $\lambda$ to the coefficient vector $\beta$ after having sorted it in decreasing order according to its absolute values.
SLOPE(
  x,
  y,
  family = c("gaussian", "binomial", "multinomial", "poisson"),
  intercept = TRUE,
  center = !inherits(x, "sparseMatrix"),
  scale = c("l2", "l1", "sd", "none"),
  alpha = c("path", "estimate"),
  lambda = c("bh", "gaussian", "oscar", "lasso"),
  alpha_min_ratio = if (NROW(x) < NCOL(x)) 0.01 else 1e-04,
  path_length = if (alpha[1] == "estimate") 1 else 20,
  q = 0.1 * min(1, NROW(x)/NCOL(x)),
  theta1 = 1,
  theta2 = 0.5,
  prox_method = c("stack", "pava"),
  screen = TRUE,
  screen_alg = c("strong", "previous"),
  tol_dev_change = 1e-05,
  tol_dev_ratio = 0.995,
  max_variables = NROW(x),
  solver = c("fista", "admm"),
  max_passes = 1e+06,
  tol_abs = 1e-05,
  tol_rel = 1e-04,
  tol_rel_gap = 1e-05,
  tol_infeas = 0.001,
  tol_rel_coef_change = 0.001,
  diagnostics = FALSE,
  verbosity = 0,
  sigma,
  n_sigma,
  lambda_min_ratio
)
x: the design matrix, which can be either a dense matrix of the standard matrix class or a sparse matrix inheriting from Matrix::sparseMatrix; data frames will be converted to matrices internally
y: the response, which for family = "gaussian" must be numeric; for family = "binomial" or family = "multinomial", it can be a factor
family: model family (objective); see Families for details
intercept: whether to fit an intercept
center: whether to center predictors by their means; defaults to TRUE when x is dense and FALSE when it is sparse
scale: type of scaling to apply to predictors; "l1" scales predictors to have L1 norms of one, "l2" scales predictors to have L2 norms of one, "sd" scales predictors to have a standard deviation of one, and "none" applies no scaling
alpha: scale for the regularization path: either a decreasing numeric vector (possibly of length 1) or a character vector; in the latter case, "path" computes a regularization sequence running from the null (intercept-only) model to an almost unregularized model, while "estimate" estimates a single value of alpha. When a value is entered manually for alpha, the lambda sequence is not rescaled automatically.
lambda: either a character vector indicating the method used to construct the lambda path or a non-increasing numeric vector with length equal to the number of coefficients in the model; see the section Regularization sequences for details
alpha_min_ratio: smallest value for alpha as a fraction of the largest automatically computed value; defaults to 1e-4 when there are more observations than predictors and 0.01 otherwise
path_length: length of the regularization path; note that the path returned may still be shorter due to the early termination criteria given by tol_dev_change, tol_dev_ratio, and max_variables
q: parameter controlling the shape of the lambda sequence, with usage varying depending on the type of path used; it has no effect if a custom lambda sequence is used
theta1: parameter controlling the shape of the lambda sequence when lambda = "oscar"
theta2: parameter controlling the shape of the lambda sequence when lambda = "oscar"
prox_method: method for calculating the proximal operator for the sorted L1 norm (the SLOPE penalty); please see sortedL1Prox() for more information
screen: whether to use predictor screening rules (rules that allow some predictors to be discarded prior to fitting), which improve speed greatly when the number of predictors is larger than the number of observations
screen_alg: what type of screening algorithm to use; "strong" uses the set from the strong screening rule, while "previous" first fits with the previously active set
tol_dev_change: the regularization path is stopped if the fractional change in deviance falls below this value; note that this is automatically set to 0 if alpha is entered manually
tol_dev_ratio: the regularization path is stopped if the deviance ratio exceeds this value
max_variables: criterion for stopping the path in terms of the maximum number of unique, nonzero coefficients (in absolute value) in the model; for the multinomial family, this value is multiplied internally by the number of levels of the response minus one
solver: type of solver to use, either "fista" or "admm"; see the Solvers section for details
max_passes: maximum number of passes (outer iterations) for the solver
tol_abs: absolute tolerance criterion for the ADMM solver
tol_rel: relative tolerance criterion for the ADMM solver
tol_rel_gap: stopping criterion for the duality gap; used only with the FISTA solver
tol_infeas: stopping criterion for the level of infeasibility; used with the FISTA solver and the KKT checks in the screening algorithm
tol_rel_coef_change: relative tolerance criterion for the change in coefficients between iterations, which is reached when the maximum absolute change in any coefficient divided by the maximum absolute coefficient size is less than this value
diagnostics: whether to save diagnostics from the solver (timings and other values depending on the type of solver)
verbosity: level of verbosity for displaying output from the program; 1 displays basic information on the path level, 2 a little more information on the path level, and 3 information from the solver
sigma: deprecated; please use alpha instead
n_sigma: deprecated; please use path_length instead
lambda_min_ratio: deprecated; please use alpha_min_ratio instead
SLOPE() solves the convex minimization problem

$$\operatorname*{minimize}_{\beta} \; f(\beta) + \alpha \sum_{j=1}^{p} \lambda_j |\beta|_{(j)},$$

where $f(\beta)$ is a smooth and convex function and the second part is the sorted L1 norm. In ordinary least-squares regression, $f(\beta)$ is simply the squared norm of the least-squares residuals. See the section Families for specifics regarding the various types of $f(\beta)$ (model families) that are allowed in SLOPE().
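Because the sorted L1 norm pairs the largest weight with the largest coefficient in absolute value, it can be evaluated directly with a sort. A minimal sketch with illustrative values:

# sketch: evaluating the sorted L1 norm directly
beta <- c(0.5, -2, 1)
lambda <- c(3, 2, 1) # non-increasing weights, as SLOPE requires
sum(lambda * sort(abs(beta), decreasing = TRUE))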
By default, SLOPE() fits a path of models, each corresponding to a separate regularization sequence, starting from the null (intercept-only) model and ending at an almost completely unregularized model. These regularization sequences are parameterized using $\lambda$ and $\alpha$, with only $\alpha$ varying along the path. The length of the path can be set manually via path_length, but the path may terminate prematurely depending on the arguments tol_dev_change, tol_dev_ratio, and max_variables. This means that unless these arguments are modified, the path is not guaranteed to be of length path_length.
An object of class "SLOPE"
with the following slots:
coefficients |
a three-dimensional array of the coefficients from the model fit, including the intercept if it was fit. There is one row for each coefficient, one column for each target (dependent variable), and one slice for each penalty. |
nonzeros |
a three-dimensional logical array indicating whether a coefficient was zero or not |
lambda |
the lambda vector that when multiplied by a value in |
alpha |
vector giving the (unstandardized) scaling of the lambda sequence |
class_names |
a character vector giving the names of the classes for binomial and multinomial families |
passes |
the number of passes the solver took at each step on the path |
violations |
the number of violations of the screening rule at each step on the path;
only available if |
active_sets |
a list where each element indicates the indices of the coefficients that were active at that point in the regularization path |
unique |
the number of unique predictors (in absolute value) |
deviance_ratio |
the deviance ratio (as a fraction of 1) |
null_deviance |
the deviance of the null (intercept-only) model |
family |
the name of the family used in the model fit |
diagnostics |
a |
call |
the call used for fitting the model |
Gaussian

The Gaussian model (ordinary least squares) minimizes the following objective:

$$\frac{1}{2} \Vert y - X\beta \Vert_2^2.$$

Binomial

The binomial model (logistic regression) has the following objective:

$$\sum_{i=1}^{n} \log\left(1 + \exp\left(x_i^T \beta\right)\right) - y_i x_i^T \beta,$$

with $y \in \{0, 1\}$.

Poisson

In Poisson regression, we use the following objective:

$$-\sum_{i=1}^{n} \left( y_i x_i^T \beta - \exp\left(x_i^T \beta\right) \right).$$

Multinomial

In multinomial regression, we minimize the full-rank objective

$$-\sum_{i=1}^{n} \left( \sum_{k=1}^{m-1} y_{ik} x_i^T \beta_k - \log\left(1 + \sum_{k=1}^{m-1} \exp\left(x_i^T \beta_k\right)\right) \right),$$

with $y_{ik}$ being the element in an $n$ by $m-1$ matrix, where $m$ is the number of classes in the response.
There are multiple ways of specifying the lambda sequence in SLOPE(). First of all, it is possible to select the sequence manually by passing a non-increasing numeric vector, possibly of length one, as the argument instead of a character. The greater the differences between consecutive values along the sequence, the more clustering behavior the model will exhibit (see the sketch after this paragraph). Note, also, that the scale of the $\lambda$ vector makes no difference if alpha = NULL, since alpha will be selected automatically to ensure that the model is completely sparse at the beginning and almost unregularized at the end. If, however, both alpha and lambda are manually specified, then the scales of both do matter, so make sure to choose them wisely.
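A minimal sketch of supplying a manual sequence, using the bodyfat data shipped with the package; the endpoints of the sequence are illustrative.

# sketch: a manual, non-increasing lambda sequence; larger gaps between
# consecutive weights encourage more clustering among coefficients
p <- ncol(bodyfat$x)
fit <- SLOPE(bodyfat$x, bodyfat$y, lambda = seq(2, 1, length.out = p))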
Instead of choosing the sequence manually, one of the following automatically generated sequences may be chosen.
BH (Benjamini–Hochberg)

If lambda = "bh", the sequence used is that referred to as $\lambda^{(\mathrm{BH})}$ by Bogdan et al., which sets $\lambda$ according to

$$\lambda_i = \Phi^{-1}\left(1 - \frac{iq}{2p}\right)$$

for $i = 1, \dots, p$, where $\Phi^{-1}$ is the quantile function for the standard normal distribution and $q$ is a parameter that can be set by the user in the call to SLOPE().
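The formula translates directly into R; a minimal sketch, which should agree with regularizationWeights(type = "bh") up to floating-point error:

# sketch: the BH sequence computed from its definition
p <- 10
q <- 0.2
lambda_bh <- qnorm(1 - (1:p) * q / (2 * p))
all.equal(lambda_bh, regularizationWeights(p, q = q, type = "bh"))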
Gaussian

This penalty sequence is related to BH, such that

$$\lambda_i = \lambda_i^{(\mathrm{BH})} \sqrt{1 + w(i-1) \sum_{j < i} \lambda_j^2}$$

for $i > 1$, where $w(k) = 1/(n - k - 1)$. We let $\lambda_1 = \lambda_1^{(\mathrm{BH})}$ and adjust the sequence to make sure that it is non-increasing. Note that if $p$ is large relative to $n$, this option will result in a constant sequence, which is usually not what you would want.
OSCAR

This sequence comes from Bondell and Reich and is a linear non-increasing sequence, such that

$$\lambda_i = \theta_1 + (p - i)\theta_2$$

for $i = 1, \dots, p$. We use the parametrization from Zhong and Kwok (2021) but use $\theta_1$ and $\theta_2$ instead of $\lambda_1$ and $\lambda_2$ to avoid confusion and abuse of notation.
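A minimal sketch of the linear parametrization above, with illustrative parameter values:

# sketch: OSCAR weights are linear and non-increasing in i
p <- 5
theta1 <- 1
theta2 <- 0.5
theta1 + theta2 * (p - 1:p)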
lasso

SLOPE is exactly equivalent to the lasso when the sequence of regularization weights is constant, that is,

$$\lambda_i = 1$$

for $i = 1, \dots, p$. Here, again, we stress that the fact that all $\lambda_i$ are equal to one does not matter as long as alpha == NULL, since we scale the vector automatically. Note that this option is only here for academic interest and to highlight the fact that SLOPE is a generalization of the lasso. There are more efficient packages, such as glmnet and biglasso, for fitting the lasso.
There are currently two solvers available for SLOPE: FISTA (Beck and Teboulle 2009) and ADMM (Boyd et al. 2010). FISTA is available for all families, but ADMM is currently only available for family = "gaussian".
Bogdan, M., van den Berg, E., Sabatti, C., Su, W., & Candès, E. J. (2015). SLOPE – adaptive variable selection via convex optimization. The Annals of Applied Statistics, 9(3), 1103–1140.
Bondell, H. D., & Reich, B. J. (2008). Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR. Biometrics, 64(1), 115–123. JSTOR.
Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2010). Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends® in Machine Learning, 3(1), 1–122.
Beck, A., & Teboulle, M. (2009). A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1), 183–202.
plot.SLOPE(), plotDiagnostics(), score(), predict.SLOPE(), trainSLOPE(), coef.SLOPE(), print.SLOPE(), deviance.SLOPE(), sortedL1Prox()
# Gaussian response, default lambda sequence
fit <- SLOPE(bodyfat$x, bodyfat$y)

# Poisson response, OSCAR-type lambda sequence
fit <- SLOPE(
  abalone$x,
  abalone$y,
  family = "poisson",
  lambda = "oscar",
  theta1 = 1,
  theta2 = 0.9
)

# Multinomial response, custom alpha and lambda
m <- length(unique(wine$y)) - 1
p <- ncol(wine$x)

alpha <- 0.005
lambda <- exp(seq(log(2), log(1.8), length.out = p * m))

fit <- SLOPE(
  wine$x,
  wine$y,
  family = "multinomial",
  lambda = lambda,
  alpha = alpha
)
The proximal operator for the sorted L1 norm, which is the penalty function in SLOPE. It solves the problem

$$\operatorname*{arg\,min}_{z} \; \frac{1}{2} \Vert z - x \Vert_2^2 + J(z; \lambda),$$

where $J(z; \lambda) = \sum_{j=1}^{p} \lambda_j |z|_{(j)}$ is the sorted L1 norm.
sortedL1Prox(x, lambda, method = c("stack", "pava"))
x: a vector; in SLOPE, this is the vector of coefficients
lambda: a non-negative and decreasing sequence of weights for the sorted L1 norm; needs to be the same length as x
method: method used in the prox, either "stack" or "pava"
An evaluation of the proximal operator at x and lambda.
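A minimal sketch of a direct call, with illustrative inputs:

# sketch: evaluate the sorted L1 prox at an arbitrary point
sortedL1Prox(c(3, -1, 2), lambda = c(2, 1.5, 0.5))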
M. Bogdan, E. van den Berg, Chiara Sabatti, Weijie Su, and Emmanuel J. Candès, “SLOPE – adaptive variable selection via convex optimization,” Ann Appl Stat, vol. 9, no. 3, pp. 1103–1140, 2015.
A data set of the attributes of 382 students in secondary education collected from two schools. The goal is to predict the grade in math and Portuguese at the end of the third period. See the cited sources for additional information.
student
382 observations from 13 variables represented as a list consisting of a response matrix y with two responses, portugese and math, giving the final scores in period three for the respective subjects. The list also contains x: a sparse feature matrix of class 'dgCMatrix' with the following variables:
- student's primary school, 1 for Mousinho da Silveira and 0 for Gabriel Pereira
- sex of student, 1 for male
- age of student
- urban (1) or rural (0) home address
- whether the family size is larger than 3
- whether parents live together
- mother's level of education (ordered)
- father's level of education (ordered)
- whether the mother was employed in health care
- whether the mother was employed as something other than the specified job roles
- whether the mother was employed in the service sector
- whether the mother was employed as a teacher
- whether the father was employed in health care
- whether the father was employed as something other than the specified job roles
- whether the father was employed in the service sector
- whether the father was employed as a teacher
- school chosen for being close to home
- school chosen for another reason
- school chosen for its reputation
- whether the student attended nursery school
- whether the student has internet access at home
All of the grade-specific predictors were dropped from the data set. (Note that it is not clear from the source why some of these predictors are specific to each grade, such as which parent is the student's guardian.) The categorical variables were dummy-coded. Only the final grades (G3) were kept as dependent variables, whilst the first and second period grades were dropped.
P. Cortez and A. Silva. Using Data Mining to Predict Secondary School Student Performance. In A. Brito and J. Teixeira Eds., Proceedings of 5th FUture BUsiness TEChnology Conference (FUBUTEC 2008) pp. 5-12, Porto, Portugal, April, 2008, EUROSIS, ISBN 978-9077381-39-7. http://www3.dsi.uminho.pt/pcortez/student.pdf
Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository http://archive.ics.uci.edu/ml/. Irvine, CA: University of California, School of Information and Computer Science.
Other datasets: abalone, bodyfat, heart, wine
This function trains a model fit by SLOPE() by tuning its parameters through cross-validation.
trainSLOPE(
  x,
  y,
  q = 0.2,
  number = 10,
  repeats = 1,
  measure = c("mse", "mae", "deviance", "misclass", "auc"),
  ...
)
x: the design matrix, which can be either a dense matrix of the standard matrix class or a sparse matrix inheriting from Matrix::sparseMatrix; data frames will be converted to matrices internally
y: the response, which for family = "gaussian" must be numeric; for family = "binomial" or family = "multinomial", it can be a factor
q: parameter controlling the shape of the lambda sequence, with usage varying depending on the type of path used; it has no effect if a custom lambda sequence is used
number: number of folds (cross-validation)
repeats: number of repeats for each fold (for repeated k-fold cross-validation)
measure: measure to optimize; note that you may supply multiple values here and that, by default, all of the possible measures for the given model will be used
...: other arguments to pass on to SLOPE()
Note that, by default, this method matches all of the available metrics for the given model family against those provided in the argument measure. Collecting these measures is not particularly demanding computationally, so it is almost always best to leave this argument as it is and then choose which measure to focus on in the call to plot.TrainedSLOPE().
An object of class "TrainedSLOPE"
, with the following slots:
summary |
a summary of the results with means, standard errors, and 0.95 confidence levels |
data |
the raw data from the model training |
optima |
a |
measure |
a |
model |
the model fit to the entire data set |
call |
the call |
This function uses the foreach package to enable parallel operation. To enable this, simply register a parallel backend using, for instance, doParallel::registerDoParallel() from the doParallel package before running this function.
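A minimal sketch of registering a backend before training; the core count and tuning settings are illustrative.

# sketch: parallel cross-validation with a doParallel backend
library(doParallel)
registerDoParallel(cores = 2)
tune <- trainSLOPE(bodyfat$x, bodyfat$y, q = c(0.1, 0.2), number = 5)
stopImplicitCluster()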
foreach::foreach(), plot.TrainedSLOPE()
Other model-tuning: caretSLOPE(), plot.TrainedSLOPE()
# 8-fold cross-validation repeated 5 times
tune <- trainSLOPE(
  subset(mtcars, select = c("mpg", "drat", "wt")),
  mtcars$hp,
  q = c(0.1, 0.2),
  number = 8,
  repeats = 5,
  measure = "mse"
)
A data set of results from chemical analysis of wines grown in Italy from three different cultivars.
wine
178 observations from 13 variables represented as a list consisting of a categorical response vector y with three levels, A, B, and C, representing different cultivars of wine, as well as x: a sparse feature matrix of class 'dgCMatrix' with the following variables:
- alcohol content
- malic acid
- ash
- alkalinity of ash
- magnesium
- total phenols
- flavanoids
- nonflavanoid phenols
- proanthocyanins
- color intensity
- hue
- OD280/OD315 of diluted wines
- proline
Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository http://archive.ics.uci.edu/ml/. Irvine, CA: University of California, School of Information and Computer Science.
https://raw.githubusercontent.com/hadley/rminds/master/1-data/wine.csv
https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html#wine
Other datasets: abalone, bodyfat, heart, student