public final class CJKBigramFilter extends TokenFilter
CJK types are set by tokenizers such as StandardTokenizer or ICUTokenizer, but you can also use CJKBigramFilter(TokenStream, int) to explicitly control which of the CJK scripts are turned into bigrams.
By default, when a CJK character has no adjacent characters to form a bigram, it is output in unigram form. If you want to always output both unigrams and bigrams, set the outputUnigrams flag in CJKBigramFilter(TokenStream, int, boolean). This can be used for a combined unigram+bigram approach.
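As an illustration of the flag-based constructors, the following sketch (the class and method names are invented for this example) combines the HAN, HIRAGANA, and KATAKANA flags and enables outputUnigrams; it assumes `source` is a TokenStream whose tokens already carry CJK type attributes, e.g. from StandardTokenizer.

```java
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.cjk.CJKBigramFilter;

public final class CjkBigramWiring {

  /**
   * Wraps an existing TokenStream so that only Han, Hiragana, and Katakana
   * runs are bigrammed (Hangul is left alone), while unigrams are emitted
   * alongside the bigrams.
   */
  static TokenStream bigramHanAndKana(TokenStream source) {
    int flags = CJKBigramFilter.HAN
        | CJKBigramFilter.HIRAGANA
        | CJKBigramFilter.KATAKANA;               // CJKBigramFilter.HANGUL deliberately omitted
    return new CJKBigramFilter(source, flags, true); // true = also output unigrams
  }
}
```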
Unlike ICUTokenizer, StandardTokenizer does not split at script boundaries. Korean Hangul characters are treated the same as letters of many other scripts, and as a result StandardTokenizer can produce tokens that mix Hangul and non-Hangul characters, e.g. "한국abc". Such mixed-script tokens are typed as <ALPHANUM> rather than <HANGUL>, and as a result will not be converted to bigrams by CJKBigramFilter.

In all cases, all non-CJK input is passed through unmodified.
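A typical way to wire this filter into an analysis chain is shown below. This is a minimal sketch (the class name CjkBigramAnalyzer is invented for this example) that feeds StandardTokenizer output into CJKBigramFilter using its default flags.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.cjk.CJKBigramFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public final class CjkBigramAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    // StandardTokenizer assigns the CJK token types (e.g. <HANGUL>)
    // that CJKBigramFilter relies on.
    Tokenizer source = new StandardTokenizer();
    TokenStream result = new CJKBigramFilter(source);
    return new TokenStreamComponents(source, result);
  }
}
```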
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource: AttributeSource.State
| Modifier and Type | Field and Description |
|---|---|
| static java.lang.String | DOUBLE_TYPE: when we emit a bigram, it's then marked as this type |
| static int | HAN: bigram flag for Han Ideographs |
| static int | HANGUL: bigram flag for Hangul |
| static int | HIRAGANA: bigram flag for Hiragana |
| static int | KATAKANA: bigram flag for Katakana |
| static java.lang.String | SINGLE_TYPE: when we emit a unigram, it's then marked as this type |
Fields inherited from class org.apache.lucene.analysis.TokenFilter: input

Fields inherited from class org.apache.lucene.analysis.TokenStream: DEFAULT_TOKEN_ATTRIBUTE_FACTORY
| Constructor and Description |
|---|
| CJKBigramFilter(TokenStream in) |
| CJKBigramFilter(TokenStream in, int flags) |
| CJKBigramFilter(TokenStream in, int flags, boolean outputUnigrams): Create a new CJKBigramFilter, specifying which writing systems should be bigrammed, and whether or not unigrams should also be output. |
| Modifier and Type | Method and Description |
|---|---|
| boolean | incrementToken(): Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. |
| void | reset(): This method is called by a consumer before it begins consumption using TokenStream.incrementToken(). |
Methods inherited from class org.apache.lucene.analysis.TokenFilter: close, end
Methods inherited from class org.apache.lucene.util.AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, endAttributes, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, removeAllAttributes, restoreState, toString
public static final int HAN
public static final int HIRAGANA
public static final int KATAKANA
public static final int HANGUL
public static final java.lang.String DOUBLE_TYPE
public static final java.lang.String SINGLE_TYPE
public CJKBigramFilter(TokenStream in)
public CJKBigramFilter(TokenStream in, int flags)
public CJKBigramFilter(TokenStream in, int flags, boolean outputUnigrams)
public boolean incrementToken() throws java.io.IOException
Description copied from class: TokenStream

Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may arbitrarily change them. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().
Specified by: incrementToken in class TokenStream

Throws: java.io.IOException
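The consumer workflow described above (retrieve attribute references up front, then reset, incrementToken in a loop, end, close) might look like the following sketch; the class name and the sample Japanese text are illustrative only.

```java
import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.cjk.CJKBigramFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

public final class ConsumeBigrams {
  public static void main(String[] args) throws IOException {
    Tokenizer tokenizer = new StandardTokenizer();
    tokenizer.setReader(new StringReader("今日は良い天気"));
    try (TokenStream ts = new CJKBigramFilter(tokenizer)) {
      // Retrieve attribute references once, before consumption begins.
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      TypeAttribute type = ts.addAttribute(TypeAttribute.class);
      ts.reset();                       // required before the first incrementToken()
      while (ts.incrementToken()) {     // advance the stream to the next token
        System.out.println(term.toString() + " (" + type.type() + ")");
      }
      ts.end();                         // finalize end-of-stream state
    }                                   // try-with-resources calls close()
  }
}
```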
public void reset() throws java.io.IOException
Description copied from class: TokenFilter

This method is called by a consumer before it begins consumption using TokenStream.incrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.
If you override this method, always call super.reset(), otherwise some internal state will not be correctly reset (e.g., Tokenizer will throw IllegalStateException on further usage).

NOTE: The default implementation chains the call to the input TokenStream, so be sure to call super.reset() when overriding this method.
Overrides: reset in class TokenFilter

Throws: java.io.IOException
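To illustrate this contract, here is a minimal sketch of a hypothetical TokenFilter (CountingFilter is not part of Lucene) that overrides reset(), chains to super.reset(), and then clears its own state.

```java
import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

/** Hypothetical filter that counts tokens; shown only to illustrate the reset() contract. */
public final class CountingFilter extends TokenFilter {
  private int tokenCount;

  public CountingFilter(TokenStream in) {
    super(in);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {  // 'input' is the wrapped stream from TokenFilter
      tokenCount++;
      return true;
    }
    return false;
  }

  @Override
  public void reset() throws IOException {
    super.reset();    // always chain to the wrapped stream first
    tokenCount = 0;   // then reset this filter's own state
  }
}
```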
Copyright © 2000–2019 The Apache Software Foundation. All rights reserved.