public final class UAX29URLEmailAnalyzer extends StopwordAnalyzerBase

Filters UAX29URLEmailTokenizer with LowerCaseFilter and StopFilter, using a list of English stop words.

Nested classes/interfaces inherited from class Analyzer: Analyzer.ReuseStrategy, Analyzer.TokenStreamComponents
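A minimal usage sketch, assuming Lucene 8.x package locations (this class in org.apache.lucene.analysis.standard); the field name "body" and the sample text are purely illustrative:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.UAX29URLEmailAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class UAX29URLEmailAnalyzerDemo {
  public static void main(String[] args) throws Exception {
    String text = "Mail admin@example.com or see https://example.com/docs for details";
    try (Analyzer analyzer = new UAX29URLEmailAnalyzer();
         TokenStream ts = analyzer.tokenStream("body", text)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();                       // required before the first incrementToken()
      while (ts.incrementToken()) {
        System.out.println(term.toString());
      }
      ts.end();                         // consume end-of-stream state before close
    }
  }
}
```

The email address and the URL should come through as single tokens, lowercased, with stop words such as "or" and "for" removed.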
Modifier and Type | Field and Description
---|---
static int | DEFAULT_MAX_TOKEN_LENGTH: Default maximum allowed token length
static CharArraySet | STOP_WORDS_SET: An unmodifiable set containing some common English words that are usually not useful for searching.
Fields inherited from class StopwordAnalyzerBase: stopwords

Fields inherited from class Analyzer: GLOBAL_REUSE_STRATEGY, PER_FIELD_REUSE_STRATEGY
Constructor and Description
---
UAX29URLEmailAnalyzer(): Builds an analyzer with the default stop words (STOP_WORDS_SET).
UAX29URLEmailAnalyzer(CharArraySet stopWords): Builds an analyzer with the given stop words.
UAX29URLEmailAnalyzer(java.io.Reader stopwords): Builds an analyzer with the stop words from the given reader.
Modifier and Type | Method and Description
---|---
protected Analyzer.TokenStreamComponents | createComponents(java.lang.String fieldName): Creates a new Analyzer.TokenStreamComponents instance for this analyzer.
int | getMaxTokenLength()
protected TokenStream | normalize(java.lang.String fieldName, TokenStream in): Wrap the given TokenStream in order to apply normalization filters.
void | setMaxTokenLength(int length): Set the max allowed token length.
Methods inherited from class StopwordAnalyzerBase: getStopwordSet, loadStopwordSet, loadStopwordSet, loadStopwordSet

Methods inherited from class Analyzer: attributeFactory, close, getOffsetGap, getPositionIncrementGap, getReuseStrategy, getVersion, initReader, initReaderForNormalization, normalize, setVersion, tokenStream, tokenStream
public static final int DEFAULT_MAX_TOKEN_LENGTH
public static final CharArraySet STOP_WORDS_SET
public UAX29URLEmailAnalyzer(CharArraySet stopWords)

Builds an analyzer with the given stop words.

Parameters:
stopWords - stop words

public UAX29URLEmailAnalyzer()

Builds an analyzer with the default stop words (STOP_WORDS_SET).

public UAX29URLEmailAnalyzer(java.io.Reader stopwords) throws java.io.IOException

Builds an analyzer with the stop words from the given reader.

Parameters:
stopwords - Reader to read stop words from

Throws:
java.io.IOException

See Also:
WordlistLoader.getWordSet(java.io.Reader)
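A short sketch of the non-default constructors; the stop word lists here are purely illustrative, and CharArraySet is assumed to live in org.apache.lucene.analysis as in Lucene 8.x:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.standard.UAX29URLEmailAnalyzer;

public class CustomStopWordsDemo {
  public static void main(String[] args) throws IOException {
    // From a CharArraySet (second argument: ignore case when matching stop words).
    CharArraySet stopWords = new CharArraySet(Arrays.asList("the", "a", "an"), true);
    try (UAX29URLEmailAnalyzer fromSet = new UAX29URLEmailAnalyzer(stopWords)) {
      // use fromSet ...
    }

    // From a Reader: words are loaded via WordlistLoader.getWordSet(Reader), one per line.
    try (UAX29URLEmailAnalyzer fromReader =
             new UAX29URLEmailAnalyzer(new StringReader("the\na\nan\n"))) {
      // use fromReader ...
    }

    // Passing an empty set effectively disables stop word removal.
    try (UAX29URLEmailAnalyzer noStops = new UAX29URLEmailAnalyzer(CharArraySet.EMPTY_SET)) {
      // use noStops ...
    }
  }
}
```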
public void setMaxTokenLength(int length)

Set the max allowed token length. Tokens longer than this are chopped up at this length and emitted as multiple tokens; if you need to skip such large tokens instead, increase this max length and then use LengthFilter to remove long tokens. The default is DEFAULT_MAX_TOKEN_LENGTH.

public int getMaxTokenLength()

See Also:
setMaxTokenLength(int)
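A small sketch of adjusting the limit, which can matter for very long URLs; the value 4096 is arbitrary:

```java
import org.apache.lucene.analysis.standard.UAX29URLEmailAnalyzer;

public class MaxTokenLengthDemo {
  public static void main(String[] args) {
    try (UAX29URLEmailAnalyzer analyzer = new UAX29URLEmailAnalyzer()) {
      System.out.println("default max token length: " + analyzer.getMaxTokenLength());

      // Raise the limit so long URLs are less likely to be chopped into multiple tokens.
      analyzer.setMaxTokenLength(4096);
      System.out.println("new max token length: " + analyzer.getMaxTokenLength());
    }
  }
}
```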
protected Analyzer.TokenStreamComponents createComponents(java.lang.String fieldName)

Description copied from class: Analyzer
Creates a new Analyzer.TokenStreamComponents instance for this analyzer.

Specified by:
createComponents in class Analyzer

Parameters:
fieldName - the name of the fields content passed to the Analyzer.TokenStreamComponents sink as a reader

Returns:
the Analyzer.TokenStreamComponents for this analyzer.
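For orientation, a hedged sketch of how an analyzer with this shape can be assembled inside createComponents, using the chain the class description names (UAX29URLEmailTokenizer, LowerCaseFilter, StopFilter). This illustrates the TokenStreamComponents pattern and is not the class's actual source; package locations again assume Lucene 8.x:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer;

// Hypothetical analyzer wired the same way the class description describes.
public final class MyUrlEmailAnalyzer extends Analyzer {
  private final CharArraySet stopWords;

  public MyUrlEmailAnalyzer(CharArraySet stopWords) {
    this.stopWords = stopWords;
  }

  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    // UAX#29 word-break tokenization that keeps URLs and email addresses whole,
    // followed by lowercasing and stop word removal.
    Tokenizer source = new UAX29URLEmailTokenizer();
    TokenStream result = new LowerCaseFilter(source);
    result = new StopFilter(result, stopWords);
    return new TokenStreamComponents(source, result);
  }
}
```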
protected TokenStream normalize(java.lang.String fieldName, TokenStream in)

Description copied from class: Analyzer
Wrap the given TokenStream in order to apply normalization filters. The default implementation returns the TokenStream as-is. This is used by Analyzer.normalize(String, String).

Overrides:
normalize in class Analyzer

Copyright © 2000–2019 The Apache Software Foundation. All rights reserved.