Brian McKay’s Flu Analysis Data: Model Evaluation

Let’s Begin with Model Evaluation

But first, let’s load some packages…

library(dplyr) #Data wrangling 

Attaching package: 'dplyr'
The following objects are masked from 'package:stats':

    filter, lag
The following objects are masked from 'package:base':

    intersect, setdiff, setequal, union
library(tidyr) #Helps with data wrangling
Warning: package 'tidyr' was built under R version 4.2.3
library(here) #Setting paths
here() starts at C:/GitHub/MADA/kimberlyperez-MADA-portfolio
library(tidyverse) #Data transformation
── Attaching packages ─────────────────────────────────────── tidyverse 1.3.2 ──
✔ ggplot2 3.4.0     ✔ purrr   1.0.1
✔ tibble  3.1.8     ✔ stringr 1.5.0
✔ readr   2.1.4     ✔ forcats 0.5.2
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
library(ggplot2) #Graphs/Visualization
library(tidymodels) #For modeling
── Attaching packages ────────────────────────────────────── tidymodels 1.0.0 ──
✔ broom        1.0.2     ✔ rsample      1.1.1
✔ dials        1.1.0     ✔ tune         1.0.1
✔ infer        1.0.4     ✔ workflows    1.1.2
✔ modeldata    1.1.0     ✔ workflowsets 1.0.0
✔ parsnip      1.0.3     ✔ yardstick    1.1.0
✔ recipes      1.0.4     
── Conflicts ───────────────────────────────────────── tidymodels_conflicts() ──
✖ scales::discard() masks purrr::discard()
✖ dplyr::filter()   masks stats::filter()
✖ recipes::fixed()  masks stringr::fixed()
✖ dplyr::lag()      masks stats::lag()
✖ yardstick::spec() masks readr::spec()
✖ recipes::step()   masks stats::step()
• Learn how to get started at https://www.tidymodels.org/start/

1. Reading in my Cleaned Data

flu_ME<-readRDS(here("fluanalysis","processed_data", "SympAct_cleaned.rds")) #Loading in the data

glimpse(flu_ME) #Looking at the Data 
Rows: 730
Columns: 32
$ SwollenLymphNodes <fct> Yes, Yes, Yes, Yes, Yes, No, No, No, Yes, No, Yes, Y…
$ ChestCongestion   <fct> No, Yes, Yes, Yes, No, No, No, Yes, Yes, Yes, Yes, Y…
$ ChillsSweats      <fct> No, No, Yes, Yes, Yes, Yes, Yes, Yes, Yes, No, Yes, …
$ NasalCongestion   <fct> No, Yes, Yes, Yes, No, No, No, Yes, Yes, Yes, Yes, Y…
$ CoughYN           <fct> Yes, Yes, No, Yes, No, Yes, Yes, Yes, Yes, Yes, No, …
$ Sneeze            <fct> No, No, Yes, Yes, No, Yes, No, Yes, No, No, No, No, …
$ Fatigue           <fct> Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Ye…
$ SubjectiveFever   <fct> Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, No, Yes…
$ Headache          <fct> Yes, Yes, Yes, Yes, Yes, Yes, No, Yes, Yes, Yes, Yes…
$ Weakness          <fct> Mild, Severe, Severe, Severe, Moderate, Moderate, Mi…
$ WeaknessYN        <fct> Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Ye…
$ CoughIntensity    <fct> Severe, Severe, Mild, Moderate, None, Moderate, Seve…
$ CoughYN2          <fct> Yes, Yes, Yes, Yes, No, Yes, Yes, Yes, Yes, Yes, Yes…
$ Myalgia           <fct> Mild, Severe, Severe, Severe, Mild, Moderate, Mild, …
$ MyalgiaYN         <fct> Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, Ye…
$ RunnyNose         <fct> No, No, Yes, Yes, No, No, Yes, Yes, Yes, Yes, No, No…
$ AbPain            <fct> No, No, Yes, No, No, No, No, No, No, No, Yes, Yes, N…
$ ChestPain         <fct> No, No, Yes, No, No, Yes, Yes, No, No, No, No, Yes, …
$ Diarrhea          <fct> No, No, No, No, No, Yes, No, No, No, No, No, No, No,…
$ EyePn             <fct> No, No, No, No, Yes, No, No, No, No, No, Yes, No, Ye…
$ Insomnia          <fct> No, No, Yes, Yes, Yes, No, No, Yes, Yes, Yes, Yes, Y…
$ ItchyEye          <fct> No, No, No, No, No, No, No, No, No, No, No, No, Yes,…
$ Nausea            <fct> No, No, Yes, Yes, Yes, Yes, No, No, Yes, Yes, Yes, Y…
$ EarPn             <fct> No, Yes, No, Yes, No, No, No, No, No, No, No, Yes, Y…
$ Hearing           <fct> No, Yes, No, No, No, No, No, No, No, No, No, No, No,…
$ Pharyngitis       <fct> Yes, Yes, Yes, Yes, Yes, Yes, Yes, No, No, No, Yes, …
$ Breathless        <fct> No, No, Yes, No, No, Yes, No, No, No, Yes, No, Yes, …
$ ToothPn           <fct> No, No, Yes, No, No, No, No, No, Yes, No, No, Yes, N…
$ Vision            <fct> No, No, No, No, No, No, No, No, No, No, No, No, No, …
$ Vomit             <fct> No, No, No, No, No, No, Yes, No, No, No, Yes, Yes, N…
$ Wheeze            <fct> No, No, No, Yes, No, Yes, No, No, No, No, No, Yes, N…
$ BodyTemp          <dbl> 98.3, 100.4, 100.8, 98.8, 100.5, 98.4, 102.5, 98.4, …

2. Splitting the Data

set.seed(321)
data_split_ME<- initial_split(flu_ME, prop=3/4)

train_data_flu<- training(data_split_ME)
test_data_flu<- testing(data_split_ME)
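A quick sanity check (base R, assuming rsample keeps `floor(n * prop)` rows for training): a 3/4 split of the 730 rows should yield the 547 training / 183 test rows that appear in the prediction tibbles below.

```r
# Expected split sizes for a 3/4 split of 730 rows
n_total <- 730
n_train <- floor(n_total * 3/4)  # rows kept for training
n_test  <- n_total - n_train     # remaining rows for testing

c(train = n_train, test = n_test)  # 547 and 183
```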

3. Fitting a Model with a Recipe [Training Data]

#Creating the recipe 
flu_recipe<- recipe(Nausea ~ ., data=train_data_flu)

4. Workflow Creation [Training Data]

#Now Let's set a model
log_flu<- logistic_reg() %>%
  set_engine("glm")

#Creating the workflow
flu_WF<- workflow() %>% 
  add_model(log_flu) %>%
  add_recipe(flu_recipe)

#Fitting the workflow to the training data
flu_fit<- 
  flu_WF %>% 
  fit(data= train_data_flu)

#Extracting 
flu_fit %>%
  extract_fit_parsnip() %>%
  tidy()
# A tibble: 38 × 5
   term                 estimate std.error statistic p.value
   <chr>                   <dbl>     <dbl>     <dbl>   <dbl>
 1 (Intercept)           -0.469      9.35    -0.0501   0.960
 2 SwollenLymphNodesYes  -0.0812     0.230   -0.353    0.724
 3 ChestCongestionYes     0.282      0.249    1.13     0.257
 4 ChillsSweatsYes       -0.122      0.341   -0.358    0.720
 5 NasalCongestionYes     0.220      0.299    0.735    0.462
 6 CoughYNYes            -0.135      0.601   -0.224    0.823
 7 SneezeYes              0.162      0.243    0.666    0.506
 8 FatigueYes             0.220      0.450    0.488    0.626
 9 SubjectiveFeverYes     0.115      0.262    0.439    0.661
10 HeadacheYes            0.469      0.339    1.38     0.166
# … with 28 more rows
#Predicting 
predict(flu_fit, train_data_flu)
Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading
# A tibble: 547 × 1
   .pred_class
   <fct>      
 1 No         
 2 No         
 3 No         
 4 No         
 5 No         
 6 Yes        
 7 Yes        
 8 No         
 9 Yes        
10 No         
# … with 537 more rows
pred_flufit<- augment(flu_fit, train_data_flu)
Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading

Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading
pred_flufit %>% 
  select(Nausea, .pred_No, .pred_Yes)
# A tibble: 547 × 3
   Nausea .pred_No .pred_Yes
   <fct>     <dbl>     <dbl>
 1 No       0.572      0.428
 2 No       0.558      0.442
 3 Yes      0.851      0.149
 4 No       0.776      0.224
 5 Yes      0.898      0.102
 6 Yes      0.0465     0.953
 7 Yes      0.433      0.567
 8 No       0.881      0.119
 9 Yes      0.180      0.820
10 No       0.674      0.326
# … with 537 more rows

5. ROC Curve (1) [Training Data]

pred_flufit %>% #Cool!
  roc_curve(truth= Nausea, .pred_No) %>%
  autoplot()

Let’s check the ROC Curve (1) performance

pred_flufit %>%
  roc_auc(truth= Nausea, .pred_No) #At ~0.78, the ROC-AUC suggests the model is useful
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 roc_auc binary         0.784

ROC Curve (2) [Training Data]

pred_flufit %>% 
  roc_curve(truth= Nausea, .pred_Yes) %>%
  autoplot()

Let’s check the ROC Curve (2) performance

pred_flufit %>%
  roc_auc(truth= Nausea, .pred_Yes) #Note the predictor: yardstick treats the first factor level ("No") as the event by default, so passing .pred_Yes flips the curve and gives 1 - 0.78 = 0.22
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 roc_auc binary         0.216
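The 0.216 above is not a second, worse model: it is the complement of the 0.784 from the previous chunk, because yardstick scores the first factor level ("No") as the event. A minimal base-R sketch with hypothetical toy data illustrates the complement property:

```r
# Toy illustration (base R, made-up data): the AUC computed from P(event)
# equals 1 minus the AUC computed from P(non-event) = 1 - P(event).
auc <- function(scores, labels) {
  # Rank-based (Mann-Whitney) AUC: probability a positive outranks a negative
  r <- rank(scores)
  n_pos <- sum(labels == 1)
  n_neg <- sum(labels == 0)
  (sum(r[labels == 1]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}

# Hypothetical predicted probabilities and outcomes
p_yes <- c(0.9, 0.8, 0.7, 0.4, 0.3, 0.2)
truth <- c(1,   1,   0,   1,   0,   0)

a_yes <- auc(p_yes, truth)      # 8/9 ~ 0.889
a_no  <- auc(1 - p_yes, truth)  # 1/9 ~ 0.111: the complement
```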

6. Let’s Do it Again with the Test Data!

#Creating the recipe 
flu_recipe_test<- recipe(Nausea ~ ., data=test_data_flu)

Workflow Creation [Test Data]

#Now Let's set a model
log_flu_test<- logistic_reg() %>%
  set_engine("glm")

#Creating the workflow
flu_WF_test<- workflow() %>% 
  add_model(log_flu_test) %>%
  add_recipe(flu_recipe_test)

#Fitting the workflow to the test data
flu_fit_test<- 
  flu_WF_test %>% 
  fit(data= test_data_flu)

#Extracting 
flu_fit_test %>%
  extract_fit_parsnip() %>%
  tidy()
# A tibble: 38 × 5
   term                 estimate std.error statistic p.value
   <chr>                   <dbl>     <dbl>     <dbl>   <dbl>
 1 (Intercept)           14.5       17.1       0.847  0.397 
 2 SwollenLymphNodesYes  -0.705      0.487    -1.45   0.148 
 3 ChestCongestionYes     0.0830     0.508     0.163  0.870 
 4 ChillsSweatsYes        1.33       0.706     1.88   0.0600
 5 NasalCongestionYes     1.05       0.626     1.67   0.0943
 6 CoughYNYes             1.22       1.43      0.855  0.392 
 7 SneezeYes              0.130      0.545     0.239  0.811 
 8 FatigueYes             0.721      0.886     0.814  0.416 
 9 SubjectiveFeverYes     1.13       0.647     1.75   0.0796
10 HeadacheYes           -0.445      0.685    -0.650  0.516 
# … with 28 more rows
#Predicting 
predict(flu_fit_test, test_data_flu)
Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading
# A tibble: 183 × 1
   .pred_class
   <fct>      
 1 No         
 2 Yes        
 3 No         
 4 No         
 5 Yes        
 6 No         
 7 Yes        
 8 No         
 9 No         
10 Yes        
# … with 173 more rows
pred_flufit_test<- augment(flu_fit_test, test_data_flu)
Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading

Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading
pred_flufit_test %>% 
  select(Nausea, .pred_No, .pred_Yes)
# A tibble: 183 × 3
   Nausea .pred_No .pred_Yes
   <fct>     <dbl>     <dbl>
 1 No      0.993     0.00689
 2 Yes     0.00627   0.994  
 3 No      0.855     0.145  
 4 No      0.954     0.0464 
 5 Yes     0.0332    0.967  
 6 No      0.775     0.225  
 7 Yes     0.100     0.900  
 8 No      0.780     0.220  
 9 No      0.608     0.392  
10 Yes     0.0603    0.940  
# … with 173 more rows

ROC Curve (1) [Test Data]

pred_flufit_test %>% #Let's make the curve
  roc_curve(truth= Nausea, .pred_No) %>%
  autoplot()

Let’s check the ROC Curve (1) performance

pred_flufit_test %>% #Let's check the performance: 0.5 = no better than chance, ~0.7 = useful, 1 = perfect
  roc_auc(truth= Nausea, .pred_No) #Sitting at 0.86, the ROC-AUC is useful and the test data performs better than the training data.
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 roc_auc binary         0.861

ROC Curve (2) [Test Data]

pred_flufit_test %>% 
  roc_curve(truth= Nausea, .pred_Yes) %>%
  autoplot()

Let’s check the ROC Curve (2) performance

pred_flufit_test %>%
  roc_auc(truth= Nausea, .pred_Yes) #Note the predictor: as before, using .pred_Yes flips the curve, so 0.14 is the complement of the 0.86 above
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 roc_auc binary         0.139

7. Alternative Model with Categorical Outcome

I. Splitting the Data

set.seed(321)
data_split_RN<- initial_split(flu_ME, prop=3/4)

train_data_RN<- training(data_split_RN)
test_data_RN<- testing(data_split_RN)

II. Fitting a Model with a Recipe [Training Data]

#Creating the recipe 
flu_recipe_RN<- recipe(Nausea ~ RunnyNose, data=train_data_RN)

III. Workflow Creation [Training Data]

#Now Let's set a model
log_RN<- logistic_reg() %>%
  set_engine("glm")

#Creating the workflow
flu_WF_RN<- workflow() %>% 
  add_model(log_RN) %>%
  add_recipe(flu_recipe_RN)

#Fitting the workflow to the training data
flu_fit_RN<- 
  flu_WF_RN %>% 
  fit(data= train_data_RN)

#Extracting 
flu_fit_RN %>%
  extract_fit_parsnip() %>%
  tidy()
# A tibble: 2 × 5
  term         estimate std.error statistic   p.value
  <chr>           <dbl>     <dbl>     <dbl>     <dbl>
1 (Intercept)    -0.753     0.173    -4.34  0.0000140
2 RunnyNoseYes    0.135     0.203     0.664 0.507    
#Predicting 
predict(flu_fit_RN, train_data_RN)
# A tibble: 547 × 1
   .pred_class
   <fct>      
 1 No         
 2 No         
 3 No         
 4 No         
 5 No         
 6 No         
 7 No         
 8 No         
 9 No         
10 No         
# … with 537 more rows
pred_RNfit<- augment(flu_fit_RN, train_data_RN)

pred_RNfit %>% 
  select(Nausea, .pred_No, .pred_Yes)
# A tibble: 547 × 3
   Nausea .pred_No .pred_Yes
   <fct>     <dbl>     <dbl>
 1 No        0.650     0.350
 2 No        0.650     0.350
 3 Yes       0.650     0.350
 4 No        0.680     0.320
 5 Yes       0.680     0.320
 6 Yes       0.650     0.350
 7 Yes       0.650     0.350
 8 No        0.650     0.350
 9 Yes       0.650     0.350
10 No        0.650     0.350
# … with 537 more rows

IV. ROC Curve (1) [Runny Nose: Training Data]

pred_RNfit %>% 
  roc_curve(truth= Nausea, .pred_No) %>%
  autoplot()

V. ROC Curve Performance [Runny Nose: Training Data]

pred_RNfit %>% #Let's check the performance: 0.5 = no better than chance, ~0.7 = useful, 1 = perfect
  roc_auc(truth= Nausea, .pred_No) #Sitting at 0.51, the ROC-AUC is not useful; this model performs worse than the one above.
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 roc_auc binary         0.513
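A near-chance AUC is expected here: with a single binary predictor the model produces only two distinct predicted probabilities, so the ROC curve has one interior point and the AUC reduces to (sensitivity + specificity)/2. A base-R sketch with hypothetical 2x2 counts (not the actual training data) shows the arithmetic:

```r
# With one binary predictor, AUC = (sensitivity + specificity) / 2.
# Hypothetical 2x2 counts (for illustration only):
#                RunnyNose=Yes  RunnyNose=No
#   Nausea=Yes        60             40
#   Nausea=No        110             90
sens <- 60 / (60 + 40)    # P(RunnyNose = Yes | Nausea = Yes) = 0.60
spec <- 90 / (110 + 90)   # P(RunnyNose = No  | Nausea = No)  = 0.45

auc_binary <- (sens + spec) / 2  # 0.525: barely above chance
```

The formula follows from the trapezoid rule: the ROC curve runs (0,0) to (1-spec, sens) to (1,1), and that area simplifies to (sens + spec)/2.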

IIIa. Workflow Creation [Test Data]

#Now Let's set a model
log_RNTest<- logistic_reg() %>%
  set_engine("glm")

#Creating the workflow
flu_WF_RNTest<- workflow() %>% 
  add_model(log_RNTest) %>%
  add_recipe(flu_recipe_RN)

#Fitting the workflow to the test data
flu_fit_RNTest<- 
  flu_WF_RNTest %>% 
  fit(data= test_data_RN)

#Extracting 
flu_fit_RNTest %>%
  extract_fit_parsnip() %>%
  tidy()
# A tibble: 2 × 5
  term         estimate std.error statistic p.value
  <chr>           <dbl>     <dbl>     <dbl>   <dbl>
1 (Intercept)    -0.420     0.268    -1.56    0.118
2 RunnyNoseYes   -0.156     0.327    -0.476   0.634
#Predicting 
predict(flu_fit_RNTest, test_data_RN)
# A tibble: 183 × 1
   .pred_class
   <fct>      
 1 No         
 2 No         
 3 No         
 4 No         
 5 No         
 6 No         
 7 No         
 8 No         
 9 No         
10 No         
# … with 173 more rows
pred_RNfitTest<- augment(flu_fit_RNTest, test_data_RN)

pred_RNfitTest %>% 
  select(Nausea, .pred_No, .pred_Yes)
# A tibble: 183 × 3
   Nausea .pred_No .pred_Yes
   <fct>     <dbl>     <dbl>
 1 No        0.603     0.397
 2 Yes       0.603     0.397
 3 No        0.640     0.360
 4 No        0.603     0.397
 5 Yes       0.640     0.360
 6 No        0.640     0.360
 7 Yes       0.640     0.360
 8 No        0.640     0.360
 9 No        0.603     0.397
10 Yes       0.603     0.397
# … with 173 more rows

IVa. ROC Curve [Runny Nose: Test Data]

pred_RNfitTest %>% 
  roc_curve(truth= Nausea, .pred_No) %>%
  autoplot()

Va. ROC Curve Performance [Runny Nose: Test Data]

pred_RNfitTest %>% #Let's check the performance: 0.5 = no better than chance, ~0.7 = useful, 1 = perfect
  roc_auc(truth= Nausea, .pred_No) #Sitting at 0.52, the ROC-AUC is not useful
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 roc_auc binary         0.517

This section added by SARA BENIST

Now, we will be fitting models and predicting BodyTemp from all symptoms.

Create recipe with all symptoms

Following the same steps as above:

#create recipe using all symptoms as predictors of body temp
flu_recBTAS <- 
  recipe(BodyTemp ~ ., data = train_data_flu)

#set model
ln_mod <- linear_reg() %>% 
  set_engine("glm")

#create work flow
flu_wflowBTAS <-
  workflow() %>% 
  add_model(ln_mod) %>% 
  add_recipe(flu_recBTAS)

#create fitted model
flu_fitBTAS <-
  flu_wflowBTAS %>% 
  fit(data = train_data_flu)

#check fitted model
flu_fitBTAS %>% 
  extract_fit_parsnip() %>% 
  tidy()
# A tibble: 38 × 5
   term                 estimate std.error statistic p.value
   <chr>                   <dbl>     <dbl>     <dbl>   <dbl>
 1 (Intercept)           97.8        0.347   282.    0      
 2 SwollenLymphNodesYes  -0.124      0.105    -1.17  0.241  
 3 ChestCongestionYes     0.0731     0.112     0.655 0.513  
 4 ChillsSweatsYes        0.140      0.148     0.949 0.343  
 5 NasalCongestionYes    -0.183      0.131    -1.39  0.164  
 6 CoughYNYes             0.353      0.268     1.31  0.189  
 7 SneezeYes             -0.297      0.110    -2.70  0.00706
 8 FatigueYes             0.360      0.185     1.94  0.0528 
 9 SubjectiveFeverYes     0.361      0.116     3.11  0.00196
10 HeadacheYes            0.0332     0.142     0.233 0.816  
# … with 28 more rows

Here, we can see the fitted model predicts Body Temperature from all symptoms, with most predictors not being statistically significant. Estimates cannot be directly compared without standardizing the variables.

Predictions from trained model

We can also make predictions using the flu_fitBTAS model and the test_data_flu.

#create predictions
flu_augBTAS <- augment(flu_fitBTAS, test_data_flu)
Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading
#check RMSE as metric for model performance
flu_augBTAS %>% 
  rmse(truth = BodyTemp, estimate = .pred)
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 rmse    standard        1.23
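RMSE is just the square root of the mean squared difference between observed and predicted body temperatures, in the same units (degrees F) as the outcome. A minimal base-R version, with hypothetical observed/predicted values:

```r
# RMSE by hand (base R), using hypothetical observed and predicted temperatures
obs  <- c(98.3, 100.4, 100.8, 98.8)
pred <- c(98.9,  99.5, 100.1, 99.0)

rmse_manual <- sqrt(mean((obs - pred)^2))  # ~0.652 degrees for this toy data
```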

The root mean square error is 1.230, indicating this would not be a good model for the data.

We can also use the train_data_flu data to make predictions.

#predict from training data
flu_augRN2 <- augment(flu_fitBTAS, train_data_flu)
Warning in predict.lm(object, newdata, se.fit, scale = 1, type = if (type == :
prediction from a rank-deficient fit may be misleading
#generate RMSE for model performance
flu_augRN2 %>% 
  rmse(truth = BodyTemp, estimate = .pred)
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 rmse    standard        1.08

The RMSE is lower for the training data, but still not an ideal value.

Create recipe with Runny Nose

Follow the same steps with RunnyNose as the predictor.

#create recipe using RunnyNose as predictor of body temp
flu_recBTRN <- 
  recipe(BodyTemp ~ RunnyNose, data = train_data_flu)

#set model
ln_mod <- linear_reg() %>% 
  set_engine("glm")

#create work flow
flu_wflowBTRN <-
  workflow() %>% 
  add_model(ln_mod) %>% 
  add_recipe(flu_recBTRN)

#create fitted model
flu_fitBTRN <-
  flu_wflowBTRN %>% 
  fit(data = train_data_flu)

#check fitted model
flu_fitBTRN %>% 
  extract_fit_parsnip() %>% 
  tidy()
# A tibble: 2 × 5
  term         estimate std.error statistic p.value
  <chr>           <dbl>     <dbl>     <dbl>   <dbl>
1 (Intercept)    99.1      0.0931   1065.    0     
2 RunnyNoseYes   -0.246    0.110      -2.24  0.0252
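Because RunnyNose is the only predictor and it is binary, the fitted values are just two group means: the intercept for RunnyNose = No, and intercept plus slope for RunnyNose = Yes. Using the rounded coefficients shown above:

```r
# Predicted mean body temperatures from the rounded coefficients above
b0 <- 99.1    # intercept: mean BodyTemp when RunnyNose = No
b1 <- -0.246  # shift when RunnyNose = Yes

temp_no  <- b0       # 99.1
temp_yes <- b0 + b1  # 98.854

c(No = temp_no, Yes = temp_yes)
```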

Here, the model predicts Body Temperature from Runny Nose. Having a runny nose is associated with a body temperature about 0.246 degrees lower.

Predictions from trained model

#create predictions
flu_augBTRN <- augment(flu_fitBTRN, test_data_flu)

#check RMSE as metric for model performance
flu_augBTRN %>% 
  rmse(truth = BodyTemp, estimate = .pred)
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 rmse    standard        1.30

The RMSE is similar to the all-symptoms model, with an estimate of 1.299.

#predict from training data
flu_augRN3 <- augment(flu_fitBTRN, train_data_flu)

#generate RMSE for model performance
flu_augRN3 %>% 
  rmse(truth = BodyTemp, estimate = .pred)
# A tibble: 1 × 3
  .metric .estimator .estimate
  <chr>   <chr>          <dbl>
1 rmse    standard        1.15

Using the train_data_flu dataset to predict, the RMSE is lower at 1.149. None of these models appear to be effective at predicting body temperature.