
Improving Feature Selection Through Stability

Are you a data scientist looking for a way to get more actionable insights from your clients’ data? Or maybe you’re looking for an edge in a machine learning competition? Stability selection, a relatively recent feature selection method, can help. Similar to bagging, stability selection repeatedly fits a model on random subsamples of the data set and keeps the features that are selected consistently across subsamples. Stability selection has strong theoretical properties, has received a lot of attention from academic machine learning researchers, and is very easy to implement in R. But the broader data science community hasn’t embraced it yet. In this talk, after briefly discussing why feature selection matters, I’ll explain how stability selection works and what problems it solves. I’ll show how it can easily be implemented using the R package stabs. I’ll also give some examples of when stability selection is useful, and I’ll wrap up by sharing some lessons I’ve learned through my own academic research on stability selection, including some of its limitations. This talk is geared toward data scientists at the intermediate or advanced level who already understand fundamentals like how to use the lasso.
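The core idea described above — repeatedly subsample the data, fit a sparse model each time, and keep the features that are selected in a large fraction of subsamples — can be sketched in a few lines. This is only an illustrative sketch in Python with a toy data set and hand-picked settings (the penalty `alpha = 0.1`, the 60% stability threshold, and the small coordinate-descent lasso are all choices made for this example); the talk itself demonstrates the method with the R package stabs, not this code.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=100):
    """Minimal coordinate-descent lasso for the objective
    (1/2n)||y - Xw||^2 + alpha * ||w||_1 (illustration only)."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # Soft-thresholding update; exactly zero below the threshold
            w[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)

# Toy data: features 0, 1, 2 are truly informative out of 10 (synthetic)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.5 * rng.standard_normal(n)

B = 100           # number of random subsamples
counts = np.zeros(p)
for _ in range(B):
    idx = rng.choice(n, size=n // 2, replace=False)   # subsample half the rows
    w = lasso_cd(X[idx], y[idx], alpha=0.1)
    counts += (w != 0)                                # record which features were selected

freq = counts / B                    # per-feature selection frequency
stable = np.where(freq >= 0.6)[0]    # keep features selected in >= 60% of runs
print(stable)
```

On this toy data the truly informative features are selected in nearly every subsample, while noise features rarely cross the threshold — which is the stability property the abstract refers to. The real method (and stabs) additionally ties the threshold choice to an error-control guarantee rather than picking it by hand.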

Greg Faletto

Ph.D. Student, Dept. of Data Sciences and Operations, USC Marshall School of Business

Los Angeles, California, United States


