org.apache.lucene.analysis
Class CharTokenizer

java.lang.Object
  extended by org.apache.lucene.util.AttributeSource
      extended by org.apache.lucene.analysis.TokenStream
          extended by org.apache.lucene.analysis.Tokenizer
              extended by org.apache.lucene.analysis.CharTokenizer
All Implemented Interfaces:
Closeable
Direct Known Subclasses:
IndicTokenizer, LetterTokenizer, RussianLetterTokenizer, WhitespaceTokenizer

public abstract class CharTokenizer
extends Tokenizer

An abstract base class for simple, character-oriented tokenizers.

You must specify the required Version compatibility when creating CharTokenizer:

A new CharTokenizer API has been introduced with Lucene 3.1. This API moved from UTF-16 code units to UTF-32 codepoints to eventually add support for supplementary characters. The old char based API has been deprecated and should be replaced with the int based methods isTokenChar(int) and normalize(int).

As of Lucene 3.1 each CharTokenizer constructor expects a Version argument. Based on the given Version, either the new API or a backwards compatibility layer is used at runtime. For Version < 3.1 the backwards compatibility layer ensures correct behavior even for indexes built with previous versions of Lucene. If a Version >= 3.1 is used, CharTokenizer requires the new API to be implemented by the instantiated class. The old char based API no longer needs to be implemented, even if backwards compatibility must be preserved: CharTokenizer subclasses implementing the new API are fully backwards compatible if instantiated with Version < 3.1.

Note: If you use a subclass of CharTokenizer with Version >= 3.1 on an index built with a version < 3.1, created tokens might not be compatible with the terms in your index.
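
For example, a minimal subclass using the new int based API might look like the following sketch. The class name LetterOnlyTokenizer and the use of Version.LUCENE_31 are illustrative and not part of Lucene:

import java.io.Reader;
import org.apache.lucene.analysis.CharTokenizer;
import org.apache.lucene.util.Version;

// Illustrative subclass, not part of Lucene: emits maximal runs of Unicode letters.
public class LetterOnlyTokenizer extends CharTokenizer {

  public LetterOnlyTokenizer(Version matchVersion, Reader in) {
    super(matchVersion, in); // Version >= 3.1 selects the new int based API
  }

  @Override
  protected boolean isTokenChar(int c) {
    // Receives full codepoints, so supplementary characters are classified correctly.
    return Character.isLetter(c);
  }
}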


Nested Class Summary
 
Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource
AttributeSource.AttributeFactory, AttributeSource.State
 
Field Summary
 
Fields inherited from class org.apache.lucene.analysis.Tokenizer
input
 
Constructor Summary
CharTokenizer(AttributeSource.AttributeFactory factory, Reader input)
          Deprecated. use CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0.
CharTokenizer(AttributeSource source, Reader input)
          Deprecated. use CharTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0.
CharTokenizer(Reader input)
          Deprecated. use CharTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0.
CharTokenizer(Version matchVersion, AttributeSource.AttributeFactory factory, Reader input)
          Creates a new CharTokenizer instance
CharTokenizer(Version matchVersion, AttributeSource source, Reader input)
          Creates a new CharTokenizer instance
CharTokenizer(Version matchVersion, Reader input)
          Creates a new CharTokenizer instance
 
Method Summary
 void end()
          This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API).
 boolean incrementToken()
          Consumers (e.g., IndexWriter) use this method to advance the stream to the next token.
protected  boolean isTokenChar(char c)
          Deprecated. use isTokenChar(int) instead. This method will be removed in Lucene 4.0.
protected  boolean isTokenChar(int c)
          Returns true iff a codepoint should be included in a token.
protected  char normalize(char c)
          Deprecated. use normalize(int) instead. This method will be removed in Lucene 4.0.
protected  int normalize(int c)
          Called on each token character to normalize it before it is added to the token.
 void reset(Reader input)
          Expert: Reset the tokenizer to a new reader.
 
Methods inherited from class org.apache.lucene.analysis.Tokenizer
close, correctOffset
 
Methods inherited from class org.apache.lucene.analysis.TokenStream
reset
 
Methods inherited from class org.apache.lucene.util.AttributeSource
addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString
 
Methods inherited from class java.lang.Object
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
 

Constructor Detail

CharTokenizer

public CharTokenizer(Version matchVersion,
                     Reader input)
Creates a new CharTokenizer instance

Parameters:
matchVersion - Lucene version to match; see above
input - the input to split up into tokens

CharTokenizer

public CharTokenizer(Version matchVersion,
                     AttributeSource source,
                     Reader input)
Creates a new CharTokenizer instance

Parameters:
matchVersion - Lucene version to match; see above
source - the attribute source to use for this Tokenizer
input - the input to split up into tokens

CharTokenizer

public CharTokenizer(Version matchVersion,
                     AttributeSource.AttributeFactory factory,
                     Reader input)
Creates a new CharTokenizer instance

Parameters:
matchVersion - Lucene version to match; see above
factory - the attribute factory to use for this Tokenizer
input - the input to split up into tokens
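
A short construction sketch, assuming Lucene 3.1 and the WhitespaceTokenizer subclass listed above; Version.LUCENE_31 and the input strings are illustrative:

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.util.AttributeSource;
import org.apache.lucene.util.Version;

public class ConstructionSketch {
  public static void main(String[] args) throws IOException {
    // Version + Reader constructor.
    WhitespaceTokenizer ws =
        new WhitespaceTokenizer(Version.LUCENE_31, new StringReader("hello world"));
    ws.close();

    // Version + AttributeFactory + Reader constructor, e.g. to supply a custom factory.
    WhitespaceTokenizer wsWithFactory =
        new WhitespaceTokenizer(Version.LUCENE_31,
            AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY,
            new StringReader("hello world"));
    wsWithFactory.close();
  }
}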

CharTokenizer

@Deprecated
public CharTokenizer(Reader input)
Deprecated. use CharTokenizer(Version, Reader) instead. This will be removed in Lucene 4.0.

Creates a new CharTokenizer instance

Parameters:
input - the input to split up into tokens

CharTokenizer

@Deprecated
public CharTokenizer(AttributeSource source,
                     Reader input)
Deprecated. use CharTokenizer(Version, AttributeSource, Reader) instead. This will be removed in Lucene 4.0.

Creates a new CharTokenizer instance

Parameters:
source - the attribute source to use for this Tokenizer
input - the input to split up into tokens

CharTokenizer

@Deprecated
public CharTokenizer(AttributeSource.AttributeFactory factory,
                     Reader input)
Deprecated. use CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be removed in Lucene 4.0.

Creates a new CharTokenizer instance

Parameters:
factory - the attribute factory to use for this Tokenizer
input - the input to split up into tokens
Method Detail

isTokenChar

@Deprecated
protected boolean isTokenChar(char c)
Deprecated. use isTokenChar(int) instead. This method will be removed in Lucene 4.0.

Returns true iff a UTF-16 code unit should be included in a token. Adjacent sequences of characters for which this predicate returns true are emitted as tokens; characters for which it returns false define token boundaries and are not included in tokens.

Note: This method cannot handle supplementary characters. To support all Unicode characters, including supplementary characters, use the isTokenChar(int) method.


normalize

@Deprecated
protected char normalize(char c)
Deprecated. use normalize(int) instead. This method will be removed in Lucene 4.0.

Called on each token UTF-16 code unit to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.

Note: This method cannot handle supplementary characters. To support all Unicode characters, including supplementary characters, use the normalize(int) method.


isTokenChar

protected boolean isTokenChar(int c)
Returns true iff a codepoint should be included in a token. Adjacent sequences of codepoints for which this predicate returns true are emitted as tokens; codepoints for which it returns false define token boundaries and are not included in tokens.

As of Lucene 3.1 the char based API (isTokenChar(char) and normalize(char)) has been deprecated in favor of a Unicode 4.0 compatible int based API that operates on codepoints instead of UTF-16 code units. Subclasses of CharTokenizer must not override the char based methods if a Version >= 3.1 is passed to the constructor.

NOTE: This method will be marked abstract in Lucene 4.0.
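
To illustrate why the int based API matters, the following sketch (plain Java, no Lucene dependencies; the example codepoint U+1D400 is arbitrary) shows a supplementary character that isTokenChar(int) receives as a single value, while the char based API would only ever see its two surrogate code units:

public class SupplementaryDemo {
  public static void main(String[] args) {
    int c = 0x1D400; // U+1D400 MATHEMATICAL BOLD CAPITAL A, outside the BMP

    // The int overloads of java.lang.Character classify the full codepoint.
    System.out.println(Character.isLetter(c));   // true
    // In UTF-16 the same character occupies two code units (a surrogate pair),
    // so a char based isTokenChar(char) could never see it as one character.
    System.out.println(Character.charCount(c));  // 2
  }
}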


normalize

protected int normalize(int c)
Called on each token character to normalize it before it is added to the token. The default implementation does nothing. Subclasses may use this to, e.g., lowercase tokens.

As of Lucene 3.1 the char based API (isTokenChar(char) and normalize(char)) has been deprecated in favor of a Unicode 4.0 compatible int based API that operates on codepoints instead of UTF-16 code units. Subclasses of CharTokenizer must not override the char based methods if a Version >= 3.1 is passed to the constructor.

NOTE: This method will be marked abstract in Lucene 4.0.
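
A sketch of a normalize(int) override, assuming Lucene 3.1; the subclass name is illustrative. It folds every codepoint to lowercase before the codepoint is appended to the token:

import java.io.Reader;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.util.Version;

// Illustrative subclass: whitespace splitting plus lowercase folding.
public class LowerCasingWhitespaceTokenizer extends WhitespaceTokenizer {

  public LowerCasingWhitespaceTokenizer(Version matchVersion, Reader in) {
    super(matchVersion, in);
  }

  @Override
  protected int normalize(int c) {
    // The int overload of Character.toLowerCase handles supplementary codepoints.
    return Character.toLowerCase(c);
  }
}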


incrementToken

public final boolean incrementToken()
                             throws IOException
Description copied from class: TokenStream
Consumers (e.g., IndexWriter) use this method to advance the stream to the next token. Implementing classes must implement this method and update the appropriate AttributeImpls with the attributes of the next token.

The producer must make no assumptions about the attributes after the method has returned: the caller may change them arbitrarily. If the producer needs to preserve the state for subsequent calls, it can use AttributeSource.captureState() to create a copy of the current attribute state.

This method is called for every token of a document, so an efficient implementation is crucial for good performance. To avoid calls to AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class), references to all AttributeImpls that this stream uses should be retrieved during instantiation.

To ensure that filters and consumers know which attributes are available, the attributes must be added during instantiation. Filters and consumers are not required to check for availability of attributes in TokenStream.incrementToken().

Specified by:
incrementToken in class TokenStream
Returns:
false for end of stream; true otherwise
Throws:
IOException
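
A minimal consumer loop, assuming Lucene 3.1 and the CharTermAttribute introduced with it; the input text and the choice of WhitespaceTokenizer are illustrative:

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ConsumeSketch {
  public static void main(String[] args) throws IOException {
    WhitespaceTokenizer ts =
        new WhitespaceTokenizer(Version.LUCENE_31, new StringReader("the quick brown fox"));
    // Retrieve the attribute reference once, outside the loop, as recommended above.
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    try {
      while (ts.incrementToken()) {      // returns false at end of stream
        System.out.println(term.toString());
      }
      ts.end();                          // see end() below
    } finally {
      ts.close();
    }
  }
}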

end

public final void end()
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). Streams implementing the old API should upgrade to use this feature.

This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset of a stream might differ from the offset of the last token, e.g. when one or more whitespace characters follow the last token and a WhitespaceTokenizer was used.

Overrides:
end in class TokenStream
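
A sketch of the trailing-whitespace case mentioned above, assuming Lucene 3.1; the input string and the offsets noted in the comments are the expected values, not verified output:

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.util.Version;

public class FinalOffsetSketch {
  public static void main(String[] args) throws IOException {
    // Two trailing spaces: the last token "fox" ends at offset 3, the stream at offset 5.
    WhitespaceTokenizer ts =
        new WhitespaceTokenizer(Version.LUCENE_31, new StringReader("fox  "));
    OffsetAttribute offsets = ts.addAttribute(OffsetAttribute.class);
    while (ts.incrementToken()) {
      System.out.println("last token end offset: " + offsets.endOffset()); // 3
    }
    ts.end();
    System.out.println("final offset: " + offsets.endOffset()); // 5
    ts.close();
  }
}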

reset

public void reset(Reader input)
           throws IOException
Description copied from class: Tokenizer
Expert: Reset the tokenizer to a new reader. Typically, an analyzer (in its reusableTokenStream method) will use this to re-use a previously created tokenizer.

Overrides:
reset in class Tokenizer
Throws:
IOException
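
A reuse sketch, assuming Lucene 3.1; the inputs are illustrative. This is the pattern an Analyzer's reusableTokenStream method typically follows:

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ReuseSketch {
  public static void main(String[] args) throws IOException {
    WhitespaceTokenizer ts =
        new WhitespaceTokenizer(Version.LUCENE_31, new StringReader("first document"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);

    while (ts.incrementToken()) {
      System.out.println(term.toString());
    }
    ts.end();

    // Point the same tokenizer at new input instead of constructing a new one.
    ts.reset(new StringReader("second document"));
    while (ts.incrementToken()) {
      System.out.println(term.toString());
    }
    ts.end();
    ts.close();
  }
}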