
R for Data Science Lecture 5

Chang Liu

Contents

Regularized Regression
    Ridge penalty
    Lasso penalty
    Elastic Net Regression
K-nearest Neighbors Algorithm
Decision Tree
Bagging (extension: Random forest)
Unique Value
Length
gsub
Sampling
    Stratified Sampling
    Cluster Sampling
    Systematic Sampling


Regularized Regression

Linear models (LMs) provide a simple yet effective approach to predictive modeling. Moreover,
when the assumptions required by LMs are met (e.g., constant variance), the estimated
coefficients are unbiased and, among all linear unbiased estimators, have the lowest variance.
However, modern data sets typically contain a large number of features. As the number of
features grows, those assumptions tend to break down and these models tend to overfit the
training data, causing the out-of-sample error to increase. Regularization methods provide a
means to constrain (regularize) the estimated coefficients, which can reduce the variance and
decrease out-of-sample error.
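
For reference, this is the penalized least-squares objective that glmnet (used below) minimizes; it is the standard textbook formulation rather than something from the original lecture notes. Setting alpha = 0 yields the ridge penalty, alpha = 1 the lasso, and intermediate values the elastic net:

\min_{\beta_0,\,\beta} \; \frac{1}{2n} \sum_{i=1}^{n} \left( y_i - \beta_0 - x_i^{\top}\beta \right)^2 + \lambda \left[ \frac{1-\alpha}{2} \lVert\beta\rVert_2^2 + \alpha \lVert\beta\rVert_1 \right]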

Ridge penalty

library(tidyverse)

## -- Attaching packages --------------------------------------- tidyverse 1.3.1 --


## v ggplot2 3.3.6     v purrr   0.3.4
## v tibble  3.1.6     v dplyr   1.0.8
## v tidyr   1.1.4     v stringr 1.4.0
## v readr   2.1.1     v forcats 0.5.1

## Warning: package 'dplyr' was built under R version 4.0.5

## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()

#define response variable
y <- mtcars$hp

#define matrix of predictor variables
x <- data.matrix(mtcars[, c('mpg', 'wt', 'drat', 'qsec')])

Next, we’ll use the glmnet() function to fit the ridge regression model and specify alpha = 0.
Note that setting alpha equal to 1 is equivalent to using lasso regression, and setting alpha to
some value between 0 and 1 is equivalent to using an elastic net; a sketch of all three calls
appears once the package is loaded below.

library(glmnet)

## Loading required package: Matrix

##
## Attaching package: 'Matrix'

## The following objects are masked from 'package:tidyr':
##
##     expand, pack, unpack
## Loaded glmnet 4.0-2
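
As a minimal sketch (using the x and y defined above), the three penalties differ only in the alpha argument:

#alpha controls the penalty mix: 0 = ridge, 1 = lasso, in between = elastic net
ridge_fit <- glmnet(x, y, alpha = 0)
lasso_fit <- glmnet(x, y, alpha = 1)
enet_fit <- glmnet(x, y, alpha = 0.5)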

#fit ridge regression model
model <- glmnet(x, y, alpha = 0)

#view summary of model
summary(model)


##           Length Class     Mode
## a0        100    -none-    numeric
## beta      400    dgCMatrix S4
## df        100    -none-    numeric
## dim       2      -none-    numeric
## lambda    100    -none-    numeric
## dev.ratio 100    -none-    numeric
## nulldev   1      -none-    numeric
## npasses   1      -none-    numeric
## jerr      1      -none-    numeric
## offset    1      -none-    logical
## call      4      -none-    call
## nobs      1      -none-    numeric
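
Note that summary() only lists the components of the glmnet object. To see what the ridge penalty actually does, the coefficient-path plot is more informative; a minimal sketch using glmnet's built-in plot method:

#plot coefficient paths: one curve per predictor, shrinking toward zero as lambda grows
plot(model, xvar = "lambda", label = TRUE)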

Next, we’ll identify the lambda value that produces the lowest test mean squared error (MSE)
by using k-fold cross-validation.

Fortunately, glmnet has the function cv.glmnet() that automatically performs k-fold
cross-validation using k = 10 folds.

#perform k-fold cross-validation to find optimal lambda value
cv_model <- cv.glmnet(x, y, alpha = 0)

#find optimal lambda value that minimizes test MSE
best_lambda <- cv_model$lambda.min
best_lambda

## [1] 13.27979
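
Besides lambda.min, the cv.glmnet object also stores lambda.1se, the largest lambda whose cross-validated error is within one standard error of the minimum; choosing it trades a little MSE for a more heavily regularized (simpler) model:

#largest lambda within one standard error of the minimum-MSE lambda
cv_model$lambda.1se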

#produce plot of test MSE by lambda value
plot(cv_model)

[Figure: output of plot(cv_model) — cross-validated mean-squared error versus log(λ) for the ridge model, with the number of nonzero coefficients (4) shown along the top axis]

#fit final model using the optimal lambda
best_model <- glmnet(x, y, alpha = 0, lambda = best_lambda)

#view coefficients of final model
coef(best_model)

## 5 x 1 sparse Matrix of class "dgCMatrix"
##                     s0
## (Intercept) 470.705022
## mpg          -3.280904
## wt           18.505501
## drat         -2.468512
## qsec        -17.298734
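
To gauge how well the final ridge model fits the training data, we can compute R-squared by hand (a sketch; the exact value depends on the lambda chosen by cross-validation):

#compute R-squared of the final model on the training data
y_pred <- predict(best_model, newx = x)
sst <- sum((y - mean(y))^2)
sse <- sum((y_pred - y)^2)
1 - sse / sst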


Lasso penalty

#define response variable
y <- mtcars$hp

#define matrix of predictor variables
x <- data.matrix(mtcars[, c('mpg', 'wt', 'drat', 'qsec')])

Next, we will use the glmnet() function to fit the lasso regression model and specify alpha=1.

Note that setting alpha equal to 0 is equivalent to using ridge regression and setting alpha to
some value between 0 and 1 is equivalent to using an elastic net.

To determine what value to use for lambda, we’ll perform k-fold cross-validation and identify
the lambda value that produces the lowest test mean squared error (MSE).

Note that the function cv.glmnet() automatically performs k-fold cross-validation using
k = 10 folds.


library(glmnet)

#perform k-fold cross-validation to find optimal lambda value
cv_model <- cv.glmnet(x, y, alpha = 1)

#find optimal lambda value that minimizes test MSE
best_lambda <- cv_model$lambda.min
best_lambda

## [1] 2.928367

#produce plot of test MSE by lambda value
plot(cv_model)

[Figure: output of plot(cv_model) — cross-validated mean-squared error versus log(λ) for the lasso model, with the number of nonzero coefficients (from 4 down to 0) shown along the top axis]

We can also use the final lasso regression model to make predictions on new observations.

#find coefficients of best model
best_model <- glmnet(x, y, alpha = 1, lambda = best_lambda)
coef(best_model)

## 5 x 1 sparse Matrix of class "dgCMatrix"
##                     s0
## (Intercept) 483.169999
## mpg          -2.981768
## wt           21.029736
## drat          .
## qsec        -19.286215
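
The "." entry for drat means the lasso shrank that coefficient exactly to zero, effectively dropping the predictor from the model. A quick sketch to list the predictors that survive:

#names of coefficients the lasso kept (nonzero entries)
keep <- coef(best_model)[, 1] != 0
rownames(coef(best_model))[keep]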

#define new observation
new <- matrix(c(24, 2.5, 3.5, 18.5), nrow = 1, ncol = 4)

#use lasso regression model to predict response value
predict(best_model, s = best_lambda, newx = new)

## 1
## [1,] 107.3869


Elastic Net Regression

#install.packages("dplyr")
#install.packages("glmnet")
#install.packages("ggplot2")
#install.packages("caret")
library(glmnet)
library(caret)

## Warning: package 'caret' was built under R version 4.0.5

## Loading required package: lattice

##
## Attaching package: 'caret'

## The following object is masked from 'package:purrr':
##
##     lift

# X and Y datasets
Y <- mtcars %>%
  select(disp) %>%
  scale(center = TRUE, scale = FALSE) %>%
  as.matrix()
X <- mtcars %>%
  select(-disp) %>%
  as.matrix()

# Model Building: Elastic Net Regression
control <- trainControl(method = "repeatedcv",
                        number = 5,
                        repeats = 5,
                        search = "random",
                        verboseIter = TRUE)

# Training Elastic Net Regression model
elastic_model <- train(disp ~ .,
                       data = cbind(Y, X),
                       method = "glmnet",
                       preProcess = c("center", "scale"),
                       tuneLength = 25,
                       trControl = control)

## + Fold1.Rep1: alpha=0.63950, lambda=0.004507
## - Fold1.Rep1: alpha=0.63950, lambda=0.004507
## + Fold1.Rep1: alpha=0.04530, lambda=0.016509
## - Fold1.Rep1: alpha=0.04530, lambda=0.016509
## + Fold1.Rep1: alpha=0.97420, lambda=0.004648
## - Fold1.Rep1: alpha=0.97420, lambda=0.004648
## + Fold1.Rep1: alpha=0.34773, lambda=0.060861
## - Fold1.Rep1: alpha=0.34773, lambda=0.060861
## + Fold1.Rep1: alpha=0.51433, lambda=0.001761
## - Fold1.Rep1: alpha=0.51433, lambda=0.001761
## + Fold1.Rep1: alpha=0.31833, lambda=0.479416
## - Fold1.Rep1: alpha=0.31833, lambda=0.479416
## + Fold1.Rep1: alpha=0.74806, lambda=0.327981
## - Fold1.Rep1: alpha=0.74806, lambda=0.327981
## + Fold1.Rep1: alpha=0.14914, lambda=0.028964
## - Fold1.Rep1: alpha=0.14914, lambda=0.028964
## + Fold1.Rep1: alpha=0.84515, lambda=0.897325
## - Fold1.Rep1: alpha=0.84515, lambda=0.897325
## + Fold1.Rep1: alpha=0.93503, lambda=0.082985
## - Fold1.Rep1: alpha=0.93503, lambda=0.082985
## + Fold1.Rep1: alpha=0.38077, lambda=1.148290
## - Fold1.Rep1: alpha=0.38077, lambda=1.148290
## + Fold1.Rep1: alpha=0.84280, lambda=0.045415
## - Fold1.Rep1: alpha=0.84280, lambda=0.045415
## + Fold1.Rep1: alpha=0.98348, lambda=0.006142
## - Fold1.Rep1: alpha=0.98348, lambda=0.006142
## + Fold1.Rep1: alpha=0.78895, lambda=0.042054
## - Fold1.Rep1: alpha=0.78895, lambda=0.042054
## + Fold1.Rep1: alpha=0.52859, lambda=0.428235
## - Fold1.Rep1: alpha=0.52859, lambda=0.428235
## + Fold1.Rep1: alpha=0.73053, lambda=0.015889
## - Fold1.Rep1: alpha=0.73053, lambda=0.015889
## + Fold1.Rep1: alpha=0.90284, lambda=0.445782
## - Fold1.Rep1: alpha=0.90284, lambda=0.445782
## + Fold1.Rep1: alpha=0.32570, lambda=0.081177
## - Fold1.Rep1: alpha=0.32570, lambda=0.081177
## + Fold1.Rep1: alpha=0.75140, lambda=0.561138
## - Fold1.Rep1: alpha=0.75140, lambda=0.561138
## + Fold1.Rep1: alpha=0.03599, lambda=0.001115
## - Fold1.Rep1: alpha=0.03599, lambda=0.001115
## + Fold1.Rep1: alpha=0.25504, lambda=2.954617
## - Fold1.Rep1: alpha=0.25504, lambda=2.954617
## + Fold1.Rep1: alpha=0.86475, lambda=0.043683
## - Fold1.Rep1: alpha=0.86475, lambda=0.043683
## + Fold1.Rep1: alpha=0.10863, lambda=0.154091
## - Fold1.Rep1: alpha=0.10863, lambda=0.154091
## + Fold1.Rep1: alpha=0.98116, lambda=7.405037
## - Fold1.Rep1: alpha=0.98116, lambda=7.405037
## + Fold1.Rep1: alpha=0.94388, lambda=0.002046
## - Fold1.Rep1: alpha=0.94388, lambda=0.002046
##
## (verbose output truncated: because verboseIter = TRUE, caret prints the
## same 25 random (alpha, lambda) candidates for every one of the 25
## fold-repeat combinations; the Fold*.Rep* lines for the remaining folds
## and repeats are omitted here)
## + Fold2.Rep5: alpha=0.74806, lambda=0.327981
## - Fold2.Rep5: alpha=0.74806, lambda=0.327981
## + Fold2.Rep5: alpha=0.14914, lambda=0.028964
## - Fold2.Rep5: alpha=0.14914, lambda=0.028964
## + Fold2.Rep5: alpha=0.84515, lambda=0.897325
## - Fold2.Rep5: alpha=0.84515, lambda=0.897325
## + Fold2.Rep5: alpha=0.93503, lambda=0.082985
## - Fold2.Rep5: alpha=0.93503, lambda=0.082985
## + Fold2.Rep5: alpha=0.38077, lambda=1.148290
## - Fold2.Rep5: alpha=0.38077, lambda=1.148290
## + Fold2.Rep5: alpha=0.84280, lambda=0.045415
## - Fold2.Rep5: alpha=0.84280, lambda=0.045415
## + Fold2.Rep5: alpha=0.98348, lambda=0.006142
## - Fold2.Rep5: alpha=0.98348, lambda=0.006142

Chang Liu 39
R for Data Science Lecture 5

## + Fold2.Rep5: alpha=0.78895, lambda=0.042054


## - Fold2.Rep5: alpha=0.78895, lambda=0.042054
## + Fold2.Rep5: alpha=0.52859, lambda=0.428235
## - Fold2.Rep5: alpha=0.52859, lambda=0.428235
## + Fold2.Rep5: alpha=0.73053, lambda=0.015889
## - Fold2.Rep5: alpha=0.73053, lambda=0.015889
## + Fold2.Rep5: alpha=0.90284, lambda=0.445782
## - Fold2.Rep5: alpha=0.90284, lambda=0.445782
## + Fold2.Rep5: alpha=0.32570, lambda=0.081177
## - Fold2.Rep5: alpha=0.32570, lambda=0.081177
## + Fold2.Rep5: alpha=0.75140, lambda=0.561138
## - Fold2.Rep5: alpha=0.75140, lambda=0.561138
## + Fold2.Rep5: alpha=0.03599, lambda=0.001115
## - Fold2.Rep5: alpha=0.03599, lambda=0.001115
## + Fold2.Rep5: alpha=0.25504, lambda=2.954617
## - Fold2.Rep5: alpha=0.25504, lambda=2.954617
## + Fold2.Rep5: alpha=0.86475, lambda=0.043683
## - Fold2.Rep5: alpha=0.86475, lambda=0.043683
## + Fold2.Rep5: alpha=0.10863, lambda=0.154091
## - Fold2.Rep5: alpha=0.10863, lambda=0.154091
## + Fold2.Rep5: alpha=0.98116, lambda=7.405037
## - Fold2.Rep5: alpha=0.98116, lambda=7.405037
## + Fold2.Rep5: alpha=0.94388, lambda=0.002046
## - Fold2.Rep5: alpha=0.94388, lambda=0.002046
## + Fold3.Rep5: alpha=0.63950, lambda=0.004507
## - Fold3.Rep5: alpha=0.63950, lambda=0.004507
## + Fold3.Rep5: alpha=0.04530, lambda=0.016509
## - Fold3.Rep5: alpha=0.04530, lambda=0.016509
## + Fold3.Rep5: alpha=0.97420, lambda=0.004648
## - Fold3.Rep5: alpha=0.97420, lambda=0.004648
## + Fold3.Rep5: alpha=0.34773, lambda=0.060861
## - Fold3.Rep5: alpha=0.34773, lambda=0.060861
## + Fold3.Rep5: alpha=0.51433, lambda=0.001761
## - Fold3.Rep5: alpha=0.51433, lambda=0.001761
## + Fold3.Rep5: alpha=0.31833, lambda=0.479416
## - Fold3.Rep5: alpha=0.31833, lambda=0.479416
## + Fold3.Rep5: alpha=0.74806, lambda=0.327981
## - Fold3.Rep5: alpha=0.74806, lambda=0.327981

Chang Liu 40
R for Data Science Lecture 5

## + Fold3.Rep5: alpha=0.14914, lambda=0.028964


## - Fold3.Rep5: alpha=0.14914, lambda=0.028964
## + Fold3.Rep5: alpha=0.84515, lambda=0.897325
## - Fold3.Rep5: alpha=0.84515, lambda=0.897325
## + Fold3.Rep5: alpha=0.93503, lambda=0.082985
## - Fold3.Rep5: alpha=0.93503, lambda=0.082985
## + Fold3.Rep5: alpha=0.38077, lambda=1.148290
## - Fold3.Rep5: alpha=0.38077, lambda=1.148290
## + Fold3.Rep5: alpha=0.84280, lambda=0.045415
## - Fold3.Rep5: alpha=0.84280, lambda=0.045415
## + Fold3.Rep5: alpha=0.98348, lambda=0.006142
## - Fold3.Rep5: alpha=0.98348, lambda=0.006142
## + Fold3.Rep5: alpha=0.78895, lambda=0.042054
## - Fold3.Rep5: alpha=0.78895, lambda=0.042054
## + Fold3.Rep5: alpha=0.52859, lambda=0.428235
## - Fold3.Rep5: alpha=0.52859, lambda=0.428235
## + Fold3.Rep5: alpha=0.73053, lambda=0.015889
## - Fold3.Rep5: alpha=0.73053, lambda=0.015889
## + Fold3.Rep5: alpha=0.90284, lambda=0.445782
## - Fold3.Rep5: alpha=0.90284, lambda=0.445782
## + Fold3.Rep5: alpha=0.32570, lambda=0.081177
## - Fold3.Rep5: alpha=0.32570, lambda=0.081177
## + Fold3.Rep5: alpha=0.75140, lambda=0.561138
## - Fold3.Rep5: alpha=0.75140, lambda=0.561138
## + Fold3.Rep5: alpha=0.03599, lambda=0.001115
## - Fold3.Rep5: alpha=0.03599, lambda=0.001115
## + Fold3.Rep5: alpha=0.25504, lambda=2.954617
## - Fold3.Rep5: alpha=0.25504, lambda=2.954617
## + Fold3.Rep5: alpha=0.86475, lambda=0.043683
## - Fold3.Rep5: alpha=0.86475, lambda=0.043683
## + Fold3.Rep5: alpha=0.10863, lambda=0.154091
## - Fold3.Rep5: alpha=0.10863, lambda=0.154091
## + Fold3.Rep5: alpha=0.98116, lambda=7.405037
## - Fold3.Rep5: alpha=0.98116, lambda=7.405037
## + Fold3.Rep5: alpha=0.94388, lambda=0.002046
## - Fold3.Rep5: alpha=0.94388, lambda=0.002046
## + Fold4.Rep5: alpha=0.63950, lambda=0.004507
## - Fold4.Rep5: alpha=0.63950, lambda=0.004507

Chang Liu 41
R for Data Science Lecture 5

## + Fold4.Rep5: alpha=0.04530, lambda=0.016509


## - Fold4.Rep5: alpha=0.04530, lambda=0.016509
## + Fold4.Rep5: alpha=0.97420, lambda=0.004648
## - Fold4.Rep5: alpha=0.97420, lambda=0.004648
## + Fold4.Rep5: alpha=0.34773, lambda=0.060861
## - Fold4.Rep5: alpha=0.34773, lambda=0.060861
## + Fold4.Rep5: alpha=0.51433, lambda=0.001761
## - Fold4.Rep5: alpha=0.51433, lambda=0.001761
## + Fold4.Rep5: alpha=0.31833, lambda=0.479416
## - Fold4.Rep5: alpha=0.31833, lambda=0.479416
## + Fold4.Rep5: alpha=0.74806, lambda=0.327981
## - Fold4.Rep5: alpha=0.74806, lambda=0.327981
## + Fold4.Rep5: alpha=0.14914, lambda=0.028964
## - Fold4.Rep5: alpha=0.14914, lambda=0.028964
## + Fold4.Rep5: alpha=0.84515, lambda=0.897325
## - Fold4.Rep5: alpha=0.84515, lambda=0.897325
## + Fold4.Rep5: alpha=0.93503, lambda=0.082985
## - Fold4.Rep5: alpha=0.93503, lambda=0.082985
## + Fold4.Rep5: alpha=0.38077, lambda=1.148290
## - Fold4.Rep5: alpha=0.38077, lambda=1.148290
## + Fold4.Rep5: alpha=0.84280, lambda=0.045415
## - Fold4.Rep5: alpha=0.84280, lambda=0.045415
## + Fold4.Rep5: alpha=0.98348, lambda=0.006142
## - Fold4.Rep5: alpha=0.98348, lambda=0.006142
## + Fold4.Rep5: alpha=0.78895, lambda=0.042054
## - Fold4.Rep5: alpha=0.78895, lambda=0.042054
## + Fold4.Rep5: alpha=0.52859, lambda=0.428235
## - Fold4.Rep5: alpha=0.52859, lambda=0.428235
## + Fold4.Rep5: alpha=0.73053, lambda=0.015889
## - Fold4.Rep5: alpha=0.73053, lambda=0.015889
## + Fold4.Rep5: alpha=0.90284, lambda=0.445782
## - Fold4.Rep5: alpha=0.90284, lambda=0.445782
## + Fold4.Rep5: alpha=0.32570, lambda=0.081177
## - Fold4.Rep5: alpha=0.32570, lambda=0.081177
## + Fold4.Rep5: alpha=0.75140, lambda=0.561138
## - Fold4.Rep5: alpha=0.75140, lambda=0.561138
## + Fold4.Rep5: alpha=0.03599, lambda=0.001115
## - Fold4.Rep5: alpha=0.03599, lambda=0.001115

Chang Liu 42
R for Data Science Lecture 5

## + Fold4.Rep5: alpha=0.25504, lambda=2.954617


## - Fold4.Rep5: alpha=0.25504, lambda=2.954617
## + Fold4.Rep5: alpha=0.86475, lambda=0.043683
## - Fold4.Rep5: alpha=0.86475, lambda=0.043683
## + Fold4.Rep5: alpha=0.10863, lambda=0.154091
## - Fold4.Rep5: alpha=0.10863, lambda=0.154091
## + Fold4.Rep5: alpha=0.98116, lambda=7.405037
## - Fold4.Rep5: alpha=0.98116, lambda=7.405037
## + Fold4.Rep5: alpha=0.94388, lambda=0.002046
## - Fold4.Rep5: alpha=0.94388, lambda=0.002046
## + Fold5.Rep5: alpha=0.63950, lambda=0.004507
## - Fold5.Rep5: alpha=0.63950, lambda=0.004507
## + Fold5.Rep5: alpha=0.04530, lambda=0.016509
## - Fold5.Rep5: alpha=0.04530, lambda=0.016509
## + Fold5.Rep5: alpha=0.97420, lambda=0.004648
## - Fold5.Rep5: alpha=0.97420, lambda=0.004648
## + Fold5.Rep5: alpha=0.34773, lambda=0.060861
## - Fold5.Rep5: alpha=0.34773, lambda=0.060861
## + Fold5.Rep5: alpha=0.51433, lambda=0.001761
## - Fold5.Rep5: alpha=0.51433, lambda=0.001761
## + Fold5.Rep5: alpha=0.31833, lambda=0.479416
## - Fold5.Rep5: alpha=0.31833, lambda=0.479416
## + Fold5.Rep5: alpha=0.74806, lambda=0.327981
## - Fold5.Rep5: alpha=0.74806, lambda=0.327981
## + Fold5.Rep5: alpha=0.14914, lambda=0.028964
## - Fold5.Rep5: alpha=0.14914, lambda=0.028964
## + Fold5.Rep5: alpha=0.84515, lambda=0.897325
## - Fold5.Rep5: alpha=0.84515, lambda=0.897325
## + Fold5.Rep5: alpha=0.93503, lambda=0.082985
## - Fold5.Rep5: alpha=0.93503, lambda=0.082985
## + Fold5.Rep5: alpha=0.38077, lambda=1.148290
## - Fold5.Rep5: alpha=0.38077, lambda=1.148290
## + Fold5.Rep5: alpha=0.84280, lambda=0.045415
## - Fold5.Rep5: alpha=0.84280, lambda=0.045415
## + Fold5.Rep5: alpha=0.98348, lambda=0.006142
## - Fold5.Rep5: alpha=0.98348, lambda=0.006142
## + Fold5.Rep5: alpha=0.78895, lambda=0.042054
## - Fold5.Rep5: alpha=0.78895, lambda=0.042054

Chang Liu 43
R for Data Science Lecture 5

## + Fold5.Rep5: alpha=0.52859, lambda=0.428235


## - Fold5.Rep5: alpha=0.52859, lambda=0.428235
## + Fold5.Rep5: alpha=0.73053, lambda=0.015889
## - Fold5.Rep5: alpha=0.73053, lambda=0.015889
## + Fold5.Rep5: alpha=0.90284, lambda=0.445782
## - Fold5.Rep5: alpha=0.90284, lambda=0.445782
## + Fold5.Rep5: alpha=0.32570, lambda=0.081177
## - Fold5.Rep5: alpha=0.32570, lambda=0.081177
## + Fold5.Rep5: alpha=0.75140, lambda=0.561138
## - Fold5.Rep5: alpha=0.75140, lambda=0.561138
## + Fold5.Rep5: alpha=0.03599, lambda=0.001115
## - Fold5.Rep5: alpha=0.03599, lambda=0.001115
## + Fold5.Rep5: alpha=0.25504, lambda=2.954617
## - Fold5.Rep5: alpha=0.25504, lambda=2.954617
## + Fold5.Rep5: alpha=0.86475, lambda=0.043683
## - Fold5.Rep5: alpha=0.86475, lambda=0.043683
## + Fold5.Rep5: alpha=0.10863, lambda=0.154091
## - Fold5.Rep5: alpha=0.10863, lambda=0.154091
## + Fold5.Rep5: alpha=0.98116, lambda=7.405037
## - Fold5.Rep5: alpha=0.98116, lambda=7.405037
## + Fold5.Rep5: alpha=0.94388, lambda=0.002046
## - Fold5.Rep5: alpha=0.94388, lambda=0.002046
## Aggregating results
## Selecting tuning parameters
## Fitting alpha = 0.255, lambda = 2.95 on full training set
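The fold-by-fold log above is what caret prints when verboseIter = TRUE: 5-fold cross-validation repeated 5 times, with 25 randomly drawn (alpha, lambda) candidates scored on every fold. As a hedged sketch, a call along the following lines produces this kind of log (the names X, y and elastic_model are assumed to match the objects created earlier in this lecture; the exact call may differ):

library(caret)

# repeated 5-fold CV with a random search over 25 (alpha, lambda) pairs;
# verboseIter = TRUE produces the fold-by-fold log shown above
ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 5,
                     search = "random", verboseIter = TRUE)

elastic_model <- train(x = X, y = y, method = "glmnet",
                       tuneLength = 25, trControl = ctrl)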

# Model Prediction
y_hat_pre <- predict(elastic_model, X)
y_hat_pre

## Mazda RX4 Mazda RX4 Wag Datsun 710 Hornet 4 Drive
## -76.689730 -63.507267 -101.480805 -3.325839
## Hornet Sportabout Valiant Duster 360 Merc 240D
## 93.829305 4.352543 106.753683 -93.017947
## Merc 230 Merc 280 Merc 280C Merc 450SE
## -101.213698 -41.043819 -46.153274 114.245778
## Merc 450SL Merc 450SLC Cadillac Fleetwood Lincoln Continental
## 88.607576 88.721132 183.384131 202.849050
## Chrysler Imperial Fiat 128 Honda Civic Toyota Corolla
## 209.473900 -133.055889 -197.967302 -162.997974
## Toyota Corona Dodge Challenger AMC Javelin Camaro Z28
## -100.028842 86.105235 76.459696 129.422705
## Pontiac Firebird Fiat X1-9 Porsche 914-2 Lotus Europa
## 122.084701 -146.857166 -107.839739 -155.648579
## Ford Pantera L Ferrari Dino Maserati Bora Volvo 142E
## 100.300718 -67.158840 74.668657 -83.272098

# Plot
plot(elastic_model, main = "Elastic Net Regression")

[Figure: "Elastic Net Regression" — RMSE from repeated cross-validation (y-axis, roughly 36-42) plotted against the mixing percentage alpha (x-axis, 0 to 1), with one curve per regularization parameter lambda shown in the legend (values between roughly 0.017 and 0.56).]


K-nearest Neighbors Algorithm

K-Nearest Neighbors (K-NN) is a supervised, non-linear classification algorithm. K-NN is non-parametric, i.e., it makes no assumption about the underlying data or its distribution. It is one of the simplest and most widely used algorithms; its behavior depends on the value of k (the number of neighbors), and it finds applications in many industries, such as finance and healthcare.

In the KNN algorithm, K specifies the number of neighbors, and the algorithm works as follows:

• Choose the number K of neighbors.
• Take the K nearest neighbors of the unknown data point according to distance.
• Among these K neighbors, count the number of data points in each category.
• Assign the new data point to the category with the most neighbors.

For the nearest-neighbor classifier, the distance between two points is expressed as the Euclidean distance, illustrated in the short sketch below.
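As a minimal sketch (the points p and q here are hypothetical, not taken from any dataset in this lecture), the Euclidean distance between two feature vectors can be computed directly in base R:

# two hypothetical points in a 2-dimensional feature space
p <- c(5.1, 3.5)
q <- c(4.9, 3.0)

# Euclidean distance: square root of the summed squared differences
sqrt(sum((p - q)^2))

## [1] 0.5385165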

data(iris) ##load the data

head(iris) ##see the structure


## Sepal.Length Sepal.Width Petal.Length Petal.Width Species


## 1 5.1 3.5 1.4 0.2 setosa
## 2 4.9 3.0 1.4 0.2 setosa
## 3 4.7 3.2 1.3 0.2 setosa
## 4 4.6 3.1 1.5 0.2 setosa
## 5 5.0 3.6 1.4 0.2 setosa
## 6 5.4 3.9 1.7 0.4 setosa

##randomly sample row indices for 90% of the rows in the dataset
ran <- sample(1:nrow(iris), 0.9 * nrow(iris))

##create the min-max normalization function
nor <- function(x) { (x - min(x)) / (max(x) - min(x)) }

##run normalization on the first 4 columns of the dataset because they are the predictors
iris_norm <- as.data.frame(lapply(iris[, c(1, 2, 3, 4)], nor))

summary(iris_norm)

## Sepal.Length Sepal.Width Petal.Length Petal.Width


## Min. :0.0000 Min. :0.0000 Min. :0.0000 Min. :0.00000
## 1st Qu.:0.2222 1st Qu.:0.3333 1st Qu.:0.1017 1st Qu.:0.08333
## Median :0.4167 Median :0.4167 Median :0.5678 Median :0.50000
## Mean :0.4287 Mean :0.4406 Mean :0.4675 Mean :0.45806
## 3rd Qu.:0.5833 3rd Qu.:0.5417 3rd Qu.:0.6949 3rd Qu.:0.70833
## Max. :1.0000 Max. :1.0000 Max. :1.0000 Max. :1.00000

##extract the training set
iris_train <- iris_norm[ran,]
##extract the testing set
iris_test <- iris_norm[-ran,]
##extract the 5th column of the train dataset; it will be used as the 'cl' argument in the knn function
iris_target_category <- iris[ran, 5]
##extract the 5th column of the test dataset to measure accuracy
iris_test_category <- iris[-ran, 5]
##load the package class
library(class)
##run the knn function
pr <- knn(iris_train, iris_test, cl = iris_target_category, k = 13)


##create the confusion matrix
tab <- table(pr, iris_test_category)

##this function divides the number of correct predictions by the total number of predictions, which tells us how accurate the model is
accuracy <- function(x){ sum(diag(x)) / sum(rowSums(x)) * 100 }

accuracy(tab)

## [1] 100

Using the iris dataset that ships with R, the k-nearest neighbor run above achieves 100% accuracy on this particular test split (the exact figure varies with the random split and the choice of k). First, I normalized the data to convert Petal.Length, Sepal.Length, Petal.Width and Sepal.Width into a standardized 0-to-1 range, so that all four predictors sit on the same scale. Our objective is to predict whether a flower is virginica, versicolor, or setosa, which is why column 5 is excluded from the predictors and stored in a separate variable, iris_target_category. I then split the normalized values into a training and a testing dataset. Conceptually, the training points are placed on a graph first; when we run the knn function with all the necessary arguments, each testing point is placed on the same graph and its Euclidean distance to every stored training point is computed. Although we already know which flower each testing observation is, we still predict its class and store the result in the variable pr, so that the predictions can be compared with the true labels. This tells us the accuracy of the model; if we receive, say, 50 new observations in the future and are asked to predict their categories, we can do so with this model.
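Here k = 13 was used; the choice of k affects the result. A quick way to compare a few candidate values on the same split (a sketch reusing the objects defined above):

##evaluate accuracy for several candidate k values on the same split
for (k in c(1, 5, 9, 13, 17)) {
  pred_k <- knn(iris_train, iris_test, cl = iris_target_category, k = k)
  cat("k =", k, "accuracy =", mean(pred_k == iris_test_category) * 100, "\n")
}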


Decision Tree

Decision trees are versatile machine learning algorithms that can perform both classification and regression tasks. They are very powerful algorithms, capable of fitting complex datasets. Besides, decision trees are fundamental components of random forests, which are among the most potent machine learning algorithms available today.

library(tidyverse)
library(rpart)
library(partykit)

## Loading required package: grid

## Loading required package: libcoin

## Loading required package: mvtnorm

happiness2017 <- read_csv("data/happinessdata_2017.csv")

## Rows: 1420 Columns: 10

## -- Column specification --------------------------------------------------------


## Delimiter: ","
## chr (2): country, continent
## dbl (8): year, life_ladder, logGDP, social_support, life_exp, freedom, gener...
##
## i Use `spec()` to retrieve the full column specification for this data.
## i Specify the column types or set `show_col_types = FALSE` to quiet this message.

happiness2017 <- happiness2017 %>%
  mutate(life_exp_category = case_when(life_exp > 60 ~ "Good",
                                       life_exp <= 60 ~ "Poor"))
set.seed(834)
n <- nrow(happiness2017) # number of obs in the full dataset

# Add row IDs to the happiness data
happiness2017 <- happiness2017 %>% rowid_to_column()

# Random sample of 80% of row indices
training_indices <- sample(1:n, size=round(0.8*n))

train <- happiness2017 %>% filter(rowid %in% training_indices)
test <- happiness2017 %>% filter(!(rowid %in% training_indices))

# Fit the tree based on training data
tree1 <- rpart(life_exp_category ~ social_support, data=train)
plot(as.party(tree1), type="simple")

[Tree diagram for tree1: the root splits on social_support at 0.83. Observations with social_support >= 0.83 are classified Good (n = 576, err = 10.6%); the remainder split again on social_support at 0.725, giving Good (n = 315, err = 42.2%) for >= 0.725 and Poor (n = 230, err = 29.6%) otherwise.]

# Fit the tree based on training data
tree2 <- rpart(life_exp_category ~ logGDP + social_support + freedom + generosity, data=train)
plot(as.party(tree2), type="simple")


[Tree diagram for tree2: the root splits on logGDP at 8.723. logGDP >= 8.723 is classified Good (n = 750, err = 8.1%), and logGDP < 8.265 is classified Poor (n = 279, err = 11.5%). In between, social_support < 0.737 gives Poor (n = 34, err = 17.6%); otherwise logGDP >= 8.504 gives Good (n = 44, err = 9.1%), and logGDP < 8.504 splits once more on social_support at 0.833, giving Good (n = 7, err = 0.0%) and Poor (n = 17, err = 0.0%).]

### Evaluate tree1 on the test set
predicted_tree1 <- predict(tree1, newdata = test, type="class")
table(predicted_tree1, test$life_exp_category)

##
## predicted_tree1 Good Poor
## Good 183 47
## Poor 11 39

m <- table(predicted_tree1, test$life_exp_category)

tp <- m[1,1] / sum(m[,1])       # true positive rate: correctly predicted "Good" / all actual "Good"
tn <- m[2,2] / sum(m[,2])       # true negative rate: correctly predicted "Poor" / all actual "Poor"
accuracy <- sum(diag(m))/sum(m) # overall proportion of correct predictions
c(tp, tn, accuracy)

## [1] 0.9432990 0.4534884 0.7928571


predicted_tree2 <- predict(tree2, newdata = test, type="class")


table(predicted_tree2, test$life_exp_category)

##
## predicted_tree2 Good Poor
## Good 188 18
## Poor 6 68

m <- table(predicted_tree2, test$life_exp_category)


tp <- m[1,1] / sum(m[,1])
tn <- m[2,2] / sum(m[,2])
accuracy <- sum(diag(m))/sum(m)
c(tp, tn, accuracy)

## [1] 0.9690722 0.7906977 0.9142857
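Comparing the two trees on the same test set shows that adding logGDP, freedom and generosity mainly lifts the true negative rate (from about 0.45 to 0.79), pushing overall accuracy from roughly 79% to 91%. A compact way to lay the numbers side by side (a sketch, with the values copied from the outputs above):

#side-by-side comparison of the two trees (values from the outputs above)
rbind(tree1 = c(tp = 0.9432990, tn = 0.4534884, accuracy = 0.7928571),
      tree2 = c(tp = 0.9690722, tn = 0.7906977, accuracy = 0.9142857))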


Bagging (extension: Random forest)

Bagging is a powerful method to improve the performance of simple models and to reduce overfitting of more complex models. The principle is easy to understand: instead of fitting the model on one sample of the population, several models are fitted on different samples (drawn with replacement) from the population. These models are then aggregated by using their average, a weighted average, or a voting system (mainly for classification).

Though bagging reduces the explanatory ability of your model, it makes it much more robust and better able to capture the "big picture" in your data.

Building bagged trees is straightforward. Say you want 100 models that you will average; for each of the hundred iterations you will:

• Take a sample with replacement from your original dataset.
• Train a regression tree on this sample (regression trees work like classification trees, but predict a numeric outcome).
• Save the model alongside your other models.

Once you have trained all your models, to get a prediction from your bagged model on new data you will need to:

• Get the estimate from each of the individual trees you saved.
• Average the estimates.

library(rpart)
require(ggplot2)
library(data.table)

##
## Attaching package: 'data.table'

## The following objects are masked from 'package:dplyr':


##
## between, first, last

## The following object is masked from 'package:purrr':


##
## transpose


set.seed(456)

##Reading data
bagging_data=data.table(airquality)
ggplot(bagging_data,aes(Wind,Ozone))+geom_point()+ggtitle("Ozone vs wind speed")

## Warning: Removed 37 rows containing missing values (geom_point).

[Figure: "Ozone vs wind speed" — scatter plot of Ozone (y-axis, about 0-150) against Wind (x-axis, about 5-20).]

data_test=na.omit(bagging_data[,.(Ozone,Wind)])
##Training data
train_index=sample.int(nrow(data_test),size=round(nrow(data_test)*0.8),replace = F)
data_test[train_index,train:=TRUE][-train_index,train:=FALSE]
data_test

## Ozone Wind train
## 1: 41 7.4 TRUE
## 2: 36 8.0 FALSE
## 3: 12 12.6 TRUE
## 4: 18 11.5 TRUE
## 5: 28 14.9 TRUE
## ---
## 112: 14 16.6 TRUE
## 113: 30 6.9 TRUE
## 114: 14 14.3 TRUE
## 115: 18 8.0 TRUE
## 116: 20 11.5 TRUE

##Model without bagging
no_bag_model=rpart(Ozone~Wind,data_test[train_index],control=rpart.control(minsplit=6))
result_no_bag=predict(no_bag_model,bagging_data)

##Training of the bagged model
n_model=100
bagged_models=list()
for (i in 1:n_model)
{
  #draw a bootstrap sample of the training indices (with replacement)
  new_sample=sample(train_index,size=length(train_index),replace=T)
  #fit a tree on the bootstrap sample and store it
  #(control argument assumed to be minsplit=6, matching the un-bagged model above)
  bagged_models=c(bagged_models,list(rpart(Ozone~Wind,data_test[new_sample],control=rpart.control(minsplit=6))))
}

##Getting the estimate from the bagged model
bagged_result=NULL
i=0
for (from_bag_model in bagged_models)
{
  if (is.null(bagged_result))
    #first model: initialize with its predictions
    bagged_result=predict(from_bag_model,bagging_data)
  else
    #running average: fold each new model's predictions into the mean
    bagged_result=(i*bagged_result+predict(from_bag_model,bagging_data))/(i+1)
  i=i+1
}
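Having built the ensemble, a natural check is whether bagging helps out of sample. The following is a sketch (reusing the objects above, not part of the original lecture) comparing the test-set RMSE of the single tree and the bagged ensemble:

##hold-out rows: those not selected into train_index
test_set=data_test[-train_index]

##predictions from the single tree and from the averaged ensemble
pred_single=predict(no_bag_model,test_set)
pred_bagged=rowMeans(sapply(bagged_models,predict,newdata=test_set))

##root mean squared error on the hold-out set
rmse=function(obs,pred) sqrt(mean((obs-pred)^2))
c(single=rmse(test_set$Ozone,pred_single),
  bagged=rmse(test_set$Ozone,pred_bagged))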


Unique Value

df <- data.frame(team=c('A', 'A', 'B', 'B', 'C', 'C'),
                 points=c(90, 99, 90, 85, 90, 85),
                 assists=c(33, 33, 31, 39, 34, 34),
                 rebounds=c(30, 28, 24, 24, 28, 28))

unique(df$team)

## [1] "A" "B" "C"

unique(df$points)

## [1] 90 99 85

#find and sort unique values in 'points' column


sort(unique(df$points))

## [1] 85 90 99

#find and sort unique values in 'points' column in descending order

sort(unique(df$points), decreasing=TRUE)

## [1] 99 90 85

#find and count unique values in 'points' column


table(df$points)

##
## 85 90 99
## 2 3 1
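To count the distinct values in every column at once, one option is a short base-R sketch over the df defined above:

#number of distinct values in each column
sapply(df, function(col) length(unique(col)))

## team points assists rebounds
## 3 3 4 3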


Length

#create vector
my_vector <- c(2, 7, 6, 6, 9, 10, 14, 13, 4, 20, NA)

#calculate length of vector


length(my_vector)

## [1] 11

my_vector <- c(2, 7, 6, 6, 9, 10, 14, 13, 4, 20, NA)

#calculate length of vector, excluding NA values


sum(!is.na(my_vector))

## [1] 10

#create list
my_list <- list(A=1:5, B=c('hey', 'hi'), C=c(3, 5, 7))

#calculate length of entire list


length(my_list)

## [1] 3

#calculate length of first element in list


length(my_list[[1]])

## [1] 5
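Base R also provides lengths() (note the plural), which returns the length of every list element in a single call:

#calculate the length of every element in the list at once
lengths(my_list)

## A B C
## 5 2 3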

#create data frame
df <- data.frame(team=c('A', 'B', 'B', 'B', 'C', 'D'),
                 points=c(10, 15, 29, 24, 30, 31))

#calculate number of rows in data frame
nrow(df)

## [1] 6


#define string
my_string <- "hey there"

#calculate length of string


length(my_string)

## [1] 1

#define string
my_string <- "hey there"

#calculate total characters in string


nchar(my_string)

## [1] 9


Gsub

The gsub() function in R replaces all occurrences of a given pattern within a string:

gsub(pattern, replacement, x)

• pattern: The pattern to look for

• replacement: The replacement for the pattern

• x: The string to search

x <- "This is a fun sentence"

#replace 'fun' with 'great'


x <- gsub('fun', 'great', x)

#view updated string


x

## [1] "This is a great sentence"

#define vector
x <- c('Mavs', 'Mavs', 'Spurs', 'Nets', 'Spurs', 'Mavs')

#replace 'Mavs' with 'M'


x <- gsub('Mavs', 'M', x)

#view updated vector


x

## [1] "M" "M" "Spurs" "Nets" "Spurs" "M"

#define vector
x <- c('A', 'A', 'B', 'C', 'D', 'D')

#replace 'A' or 'B' or 'C' with 'X'


x <- gsub('A|B|C', 'X', x)

#view updated string


x

## [1] "X" "X" "X" "X" "D" "D"

#define data frame


df <- data.frame(team=c('A', 'B', 'C', 'D'),
conf=c('West', 'West', 'East', 'East'),
points=c(99, 98, 92, 87),
rebounds=c(18, 22, 26, 19))

#replace 'West' and 'East' with 'W' and 'E'


df$conf <- gsub('West', 'W', df$conf)
df$conf <- gsub('East', 'E', df$conf)

#view updated data frame


df

## team conf points rebounds


## 1 A W 99 18
## 2 B W 98 22
## 3 C E 92 26
## 4 D E 87 19


Sampling

Stratified Sampling

Researchers often take samples from a population and use the data from the sample to draw
conclusions about the population as a whole.

One commonly used sampling method is stratified random sampling, in which a population is
split into groups and a certain number of members from each group are randomly selected to
be included in the sample.

#make this example reproducible


set.seed(1)

#create data frame


df <- data.frame(grade = rep(c('Freshman', 'Sophomore', 'Junior', 'Senior'), each=100),
                 gpa = rnorm(400, mean=85, sd=3))

#view first six rows of data frame


head(df)

## grade gpa
## 1 Freshman 83.12064
## 2 Freshman 85.55093
## 3 Freshman 82.49311
## 4 Freshman 89.78584
## 5 Freshman 85.98852
## 6 Freshman 82.53859

library(dplyr)

#obtain stratified sample


strat_sample <- df %>%
group_by(grade) %>%
sample_n(size=10)

#find frequency of students from each grade


table(strat_sample$grade)


##
## Freshman Junior Senior Sophomore
## 10 10 10 10

library(dplyr)

#obtain stratified sample


strat_sample <- df %>%
group_by(grade) %>%
sample_frac(size=.15)

#find frequency of students from each grade


table(strat_sample$grade)

##
## Freshman Junior Senior Sophomore
## 15 15 15 15
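Beyond checking the group counts, one might also summarise the sample per stratum (a sketch using the dplyr verbs already loaded above):

#size and mean gpa of each stratum in the sample
strat_sample %>%
  group_by(grade) %>%
  summarise(n = n(), mean_gpa = mean(gpa))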


Cluster Sampling

One commonly used sampling method is cluster sampling, in which a population is split into
clusters and all members of some clusters are chosen to be included in the sample.

set.seed(1)

#create data frame


df <- data.frame(tour = rep(1:10, each=20),
experience = rnorm(200, mean=7, sd=1))

#view first six rows of data frame


head(df)

## tour experience
## 1 1 6.373546
## 2 1 7.183643
## 3 1 6.164371
## 4 1 8.595281
## 5 1 7.329508
## 6 1 6.179532

#randomly choose 4 tour groups out of the 10


clusters <- sample(unique(df$tour), size=4, replace=F)

#define sample as all members who belong to one of the 4 tour groups
cluster_sample <- df[df$tour %in% clusters, ]

#view how many customers came from each tour


table(cluster_sample$tour)

##
## 1 2 3 7
## 20 20 20 20
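The point of drawing the sample is to estimate a population quantity. As a quick sketch (reusing the objects above), compare the cluster-sample mean of experience with the mean over the full data frame:

#cluster-sample estimate vs. the value in the full data frame
c(sample_mean = mean(cluster_sample$experience),
  population_mean = mean(df$experience))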


Systematic Sampling

One commonly used sampling method is systematic sampling, which is implemented with a simple two-step process:

1. Place each member of the population in some order.

2. Choose a random starting point and select every k-th member to be in the sample (where k is the sampling interval).

#make this example reproducible


set.seed(1)

#create a simple function to generate random five-letter last names
randomNames <- function(n = 5000) {
  #paste together five independent random draws of capital letters
  do.call(paste0, replicate(5, sample(LETTERS, n, TRUE), FALSE))
}

#create data frame


df <- data.frame(last_name = randomNames(500),
gpa = rnorm(500, mean=82, sd=3))

#view first six rows of data frame


head(df)

## last_name gpa
## 1 YLGRG 74.66755
## 2 DCVUK 80.74210
## 3 GZXSE 80.89685
## 4 ARZOG 80.31026
## 5 BNVRR 77.83073
## 6 WMWJM 80.10269

#define function to obtain a systematic sample
obtain_sys = function(N, n){
  k = ceiling(N/n)       # sampling interval: select every k-th member
  r = sample(1:k, 1)     # random starting point between 1 and k
  seq(r, r + k*(n-1), k) # row indices of the n selected members
}

#obtain systematic sample


sys_sample_df = df[obtain_sys(nrow(df), 100), ]


#view first six rows of data frame


head(sys_sample_df)

## last_name gpa
## 5 BNVRR 77.83073
## 10 SBTFE 81.51290
## 15 VPJTO 80.63059
## 20 OVQCE 83.80557
## 25 NLJRM 83.35642
## 30 YMCZO 82.31994
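As a quick sanity check (a sketch reusing obtain_sys): with N = 500 and n = 100 the sampling interval is k = ceiling(500/100) = 5, so consecutive selected row numbers should all be exactly 5 apart:

#consecutive selected indices should differ by exactly k = 5
idx <- obtain_sys(nrow(df), 100)
unique(diff(idx))

## [1] 5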
