Feature Hashing for Scalable Machine Learning with Nick Pentreath
About This Presentation
Feature hashing is a powerful technique for handling high-dimensional features in machine learning. It is fast, simple, memory-efficient, and well suited to online learning scenarios. While an approximation, it has surprisingly low accuracy tradeoffs in many machine learning problems. Feature hashing has been made somewhat popular by libraries such as Vowpal Wabbit and scikit-learn. In Spark MLlib, it is mostly used for text features; however, its use cases extend more broadly. Many Spark users are not familiar with the ways in which feature hashing might be applied to their problems. In this talk, I will cover the basics of feature hashing, and how to use it for all feature types in machine learning. I will also introduce a more flexible and powerful feature hashing transformer for use within Spark ML pipelines. Finally, I will explore the performance and scalability tradeoffs of feature hashing on various datasets.
Slide Content
Feature Hashing for Scalable Machine Learning
Nick Pentreath, IBM
About
• About me
– @MLnick
– Principal Engineer at IBM, working on machine learning & Apache Spark
– Apache Spark committer & PMC
– Author of Machine Learning with Spark
Agenda
• Intro to feature hashing
• HashingTF in Spark ML
• FeatureHasher in Spark ML
• Experiments
• Future work
Feature Hashing
Encoding Features
(Example numeric feature vector: [2.1, 3.2, −0.2, 0.7])
• Most ML algorithms operate on numeric feature vectors
• Features are often categorical – even numerical features are often treated categorically (e.g. by binning continuous features)
Encoding Features
• “one-hot” encoding is popular for categorical features (e.g. [0, 0, 1, 0, 0])
• “bag of words” is popular for text, or token counts more generally (e.g. [0, 2, 0, 1, 1])
High Dimensional Features
• Many domains have very high dense feature dimension (e.g. images, video)
• Here we’re concerned with sparse feature domains, e.g. online ads, ecommerce, social networks, video sharing, text & NLP
• Model sizes can be very large even for simple models
The “Hashing Trick”
• Use a hash function to map feature values to indices in the feature vector
• Categorical example: hash the string “city=dublin”, then take the hash value modulo the vector size to get the feature index (e.g. [0, 0, 1, 0, 0])
• Numeric example: hash the feature name “stock_price”, then take the hash value modulo the vector size to get the index, and store the numeric value there (e.g. [0, 2.6, 0, …, 0])
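A minimal standalone sketch of the trick (not the code from the slides); it assumes MurmurHash3 from the Scala standard library as the hash function and a toy vector size of 16:

import scala.util.hashing.MurmurHash3

// Map a feature, already encoded as a string such as "city=dublin" or
// "stock_price", to an index in a fixed-size feature vector.
def hashedIndex(feature: String, numFeatures: Int): Int = {
  val raw = MurmurHash3.stringHash(feature)
  // non-negative modulo, since the raw hash value may be negative
  ((raw % numFeatures) + numFeatures) % numFeatures
}

// categorical feature: the hashed index gets an indicator value of 1.0
val dublinIdx = hashedIndex("city=dublin", 16)
// numeric feature: the hashed index of the feature *name* stores the value (e.g. 2.6)
val priceIdx = hashedIndex("stock_price", 16)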
Feature Hashing: Pros
• Fast & simple
• Preserves sparsity
• Memory efficient
– Limits feature vector size
– No need to store a mapping from feature name to index
• Online learning
• Easy handling of missing data
• Feature engineering
Feature Hashing: Cons
• No inverse mapping => cannot go from feature indices back to feature names
– Hurts interpretability & feature importances
– But similar issues exist with other dimensionality reduction techniques (e.g. random projections, PCA, SVD)
• Hash collisions…
– Colliding features can impact accuracy
– Signed hash functions can alleviate part of this (see the sketch below)
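One way a signed hash might look, sketched under the same assumptions as the earlier snippet (a second, differently seeded hash decides the sign, so colliding features tend to cancel in expectation rather than always adding); this is an illustration, not what Spark’s FeatureHasher currently does:

import scala.util.hashing.MurmurHash3

// Returns the hashed index together with a sign of +1.0 or -1.0.
def signedHashedFeature(feature: String, numFeatures: Int): (Int, Double) = {
  val raw = MurmurHash3.stringHash(feature)
  val index = ((raw % numFeatures) + numFeatures) % numFeatures
  // a second hash with a different seed decides the sign
  val sign = if (MurmurHash3.stringHash(feature, 42) >= 0) 1.0 else -1.0
  (index, sign)
}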
HashingTF in Spark ML
HashingTF Transformer
• Transforms text (sentences) -> term frequency vectors (aka “bag of words”)
• Uses the “hashing trick” to compute the feature indices
• Feature value is the term frequency (token count)
• Optional parameter to return a binary token occurrence vector instead (see the usage sketch below)
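A minimal usage sketch of HashingTF in Spark ML (the column names and sample data are illustrative, not from the slides):

import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("HashingTFSketch").getOrCreate()
val df = spark.createDataFrame(Seq(
  (0, "feature hashing is fast and simple"),
  (1, "hashing preserves sparsity")
)).toDF("id", "sentence")

val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")

val hashingTF = new HashingTF()
  .setInputCol("words")
  .setOutputCol("features")
  .setNumFeatures(1 << 18)   // feature vector size; 2^18 is the default
  // .setBinary(true)        // optional: binary token occurrence instead of counts

hashingTF.transform(tokenizer.transform(df)).show(truncate = false)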
Hacking HashingTF
• HashingTF can be used for categorical features…
• …but it doesn’t fit neatly into Pipelines (one possible workaround is sketched below)
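One way this hack might look (my illustration, not the code from the slides): encode each categorical value as a single “name=value” token in an array column, then feed that through HashingTF. The string manipulation sits outside the transformer, which is why it composes poorly in a Pipeline.

import org.apache.spark.ml.feature.HashingTF
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{array, col, concat, lit}

val spark = SparkSession.builder.appName("HashingTFHackSketch").getOrCreate()
val df = spark.createDataFrame(Seq(("dublin", 1), ("london", 0))).toDF("city", "label")

// turn the categorical column into a one-element token array, e.g. ["city=dublin"]
val withTokens = df.withColumn("city_tokens", array(concat(lit("city="), col("city"))))

val hashed = new HashingTF()
  .setInputCol("city_tokens")
  .setOutputCol("city_vec")
  .setNumFeatures(1 << 18)
  .transform(withTokens)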
FeatureHasher in Spark ML
FeatureHasher
• Flexible, scalable feature encoding using the hashing trick
• Supports multiple input columns (numeric, or string / boolean, i.e. categorical)
• One-shot feature encoder
• Core logic similar to HashingTF
FeatureHasher
• Operates on an entire Row
• …a UDF with struct input
FeatureHasher
• Determining the feature index
– Numeric: feature name
– String / boolean: “feature=value”
• String encoding => effectively “one hot” with an indicator value of 1.0
FeatureHasher
• Modulo the raw index by the feature vector size
• Hash collisions are additive (currently); a usage sketch follows
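A minimal usage sketch of FeatureHasher (available from Spark 2.3.0; the column names and sample data are illustrative):

import org.apache.spark.ml.feature.FeatureHasher
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("FeatureHasherSketch").getOrCreate()
val df = spark.createDataFrame(Seq(
  ("dublin", true, 2.6),
  ("london", false, 1.3)
)).toDF("city", "clicked", "stock_price")

// string/boolean columns are hashed as "name=value" with indicator value 1.0;
// numeric columns are hashed by name, with the numeric value stored at that index
val hasher = new FeatureHasher()
  .setInputCols("city", "clicked", "stock_price")
  .setOutputCol("features")
  .setNumFeatures(1 << 18)

hasher.transform(df).show(truncate = false)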
Challenges
Raw Data -> StringIndexer -> OneHotEncoder -> VectorAssembler -> OOM!
• The typical one-hot encoding pipeline fails consistently with large feature dimension (a sketch of such a pipeline follows)
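For contrast, a sketch of the kind of pipeline that hits this limit (illustrative column names, Spark 2.x API; with hundreds of millions of distinct categorical values, the label-to-index mapping that StringIndexer must collect and broadcast is typically where the memory problems appear):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer, VectorAssembler}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("OneHotPipelineSketch").getOrCreate()
val df = spark.createDataFrame(Seq(("dublin", 2.6), ("london", 1.3))).toDF("city", "stock_price")

// one StringIndexer + OneHotEncoder pair per categorical column
val indexer = new StringIndexer().setInputCol("city").setOutputCol("city_idx")
val encoder = new OneHotEncoder().setInputCol("city_idx").setOutputCol("city_vec")
val assembler = new VectorAssembler()
  .setInputCols(Array("city_vec", "stock_price"))
  .setOutputCol("features")

val model = new Pipeline().setStages(Array(indexer, encoder, assembler)).fit(df)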
Results
• Criteo 1T logs – 7 day subset
• Can train a model on 1.5b examples
• 300m original features for this subset
• 2^24 hashed features (~16m)
• Impossible with current Spark ML (OOM, 2 GB broadcast limit)
Summary & Future Work
Summary
• Feature hashing is a fast, efficient, flexible tool for feature encoding
• Can scale to high-dimensional sparse data without giving up much accuracy
• Supports multi-column “one-shot” encoding
• Avoids common issues with Spark ML Pipelines using StringIndexer & OneHotEncoder at scale
Future Directions
• Will be part of Spark ML in the 2.3.0 release (Q4 2017)
– Refer to SPARK-13969 for details
• Allow users to specify a set of numeric columns to be treated as categorical
• Signed hash functions
• Internal feature crossing & namespaces (à la Vowpal Wabbit)
• DictVectorizer-like transformer => a one-pass feature encoder for multiple numeric & categorical columns (with inverse mapping) – see SPARK-19962
References
• Hash Kernels
• Feature Hashing for Large Scale Multitask Learning
• Vowpal Wabbit
• scikit-learn