| Packages that use TokenStream | |
|---|---|
| org.apache.lucene.analysis | API and code to convert text into indexable/searchable tokens. | 
| org.apache.lucene.document | The logical representation of a Document for indexing and searching.  | 
| org.apache.lucene.index | Code to maintain and access indices. | 
| Uses of TokenStream in org.apache.lucene.analysis | 
|---|
| Subclasses of TokenStream in org.apache.lucene.analysis | |
|---|---|
| class CachingTokenFilter | This class can be used if the token attributes of a TokenStream are intended to be consumed more than once. |
| class NumericTokenStream | Expert: This class provides a TokenStream for indexing numeric values that can be used by NumericRangeQuery or NumericRangeFilter. |
| class TokenFilter | A TokenFilter is a TokenStream whose input is another TokenStream. |
| class Tokenizer | A Tokenizer is a TokenStream whose input is a Reader. |
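As the table above says, a Tokenizer is a TokenStream that reads from a Reader, and a TokenFilter is a TokenStream that wraps another TokenStream; a concrete analysis chain is just a Tokenizer with filters stacked on top. The following is a minimal sketch of such a chain, not taken from this page; it assumes a Lucene 4.x classpath where WhitespaceTokenizer and LowerCaseFilter take a Version argument, and the class and variable names are illustrative.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ChainSketch {
  public static void main(String[] args) throws Exception {
    // Tokenizer: a TokenStream whose input is a Reader.
    Tokenizer source = new WhitespaceTokenizer(Version.LUCENE_40,
        new StringReader("Hello TokenStream World"));
    // TokenFilter: a TokenStream whose input is another TokenStream.
    TokenStream chain = new LowerCaseFilter(Version.LUCENE_40, source);

    CharTermAttribute term = chain.addAttribute(CharTermAttribute.class);
    chain.reset();                       // required before the first incrementToken()
    while (chain.incrementToken()) {
      System.out.println(term.toString()); // hello / tokenstream / world
    }
    chain.end();
    chain.close();
  }
}
```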
| Fields in org.apache.lucene.analysis declared as TokenStream | |
|---|---|
| protected TokenStream TokenFilter.input | The source of tokens for this filter. |
| protected TokenStream Analyzer.TokenStreamComponents.sink | The sink TokenStream, such as the outer TokenFilter decorating the chain. |
| Methods in org.apache.lucene.analysis that return TokenStream | |
|---|---|
| TokenStream Analyzer.TokenStreamComponents.getTokenStream() | Returns the sink TokenStream. |
| TokenStream Analyzer.tokenStream(String fieldName, Reader reader) | Returns a TokenStream suitable for fieldName, tokenizing the contents of reader. |
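Analyzer.tokenStream(String, Reader) is the usual entry point for consuming analysis output outside of indexing. A minimal consumption loop is sketched below; this page does not name an analyzer or a Lucene version, so StandardAnalyzer and Version.LUCENE_40 are assumptions, and the field name "body" is illustrative.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class ConsumeSketch {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_40);
    // Ask the analyzer for a TokenStream over the reader's contents.
    TokenStream ts = analyzer.tokenStream("body", new StringReader("The quick brown fox"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();                          // mandatory before the first incrementToken()
    while (ts.incrementToken()) {
      System.out.println(term.toString());
    }
    ts.end();                            // record end-of-stream state (e.g. final offset)
    ts.close();                          // release resources held by the stream
  }
}
```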
| Constructors in org.apache.lucene.analysis with parameters of type TokenStream | |
|---|---|
| Analyzer.TokenStreamComponents(Tokenizer source, TokenStream result) | Creates a new Analyzer.TokenStreamComponents instance. |
| CachingTokenFilter(TokenStream input) | Create a new CachingTokenFilter around input, caching its token attributes, which can be replayed again after a call to CachingTokenFilter.reset(). |
| TokenFilter(TokenStream input) | Construct a token stream filtering the given input. |
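The TokenStreamComponents constructor is what a custom Analyzer typically returns from createComponents. A sketch follows, assuming the Lucene 4.x createComponents(String, Reader) signature; the analyzer name and the choice of WhitespaceTokenizer plus LowerCaseFilter are illustrative, not prescribed by this page.

```java
import java.io.Reader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.util.Version;

// Hypothetical analyzer wiring a Tokenizer and a TokenFilter into TokenStreamComponents.
public final class LowercaseWhitespaceAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    // "source": the Tokenizer that reads characters from the Reader.
    Tokenizer source = new WhitespaceTokenizer(Version.LUCENE_40, reader);
    // "result" (the sink): the outermost TokenFilter decorating the chain.
    TokenStream result = new LowerCaseFilter(Version.LUCENE_40, source);
    return new TokenStreamComponents(source, result);
  }
}
```

CachingTokenFilter(TokenStream input) wraps any such stream when its token attributes need to be consumed more than once: consume it once, call reset(), and iterate again, as the row above describes.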
| Uses of TokenStream in org.apache.lucene.document | 
|---|
| Fields in org.apache.lucene.document declared as TokenStream | |
|---|---|
| protected TokenStream Field.tokenStream | Pre-analyzed TokenStream for indexed fields; this is separate from fieldsData because you are allowed to have both, e.g. the field may have a String value but you customize how it is tokenized. |
| Methods in org.apache.lucene.document that return TokenStream | |
|---|---|
| TokenStream Field.tokenStream(Analyzer analyzer) | |
| TokenStream Field.tokenStreamValue() | The TokenStream for this field to be used when indexing, or null. |
| Methods in org.apache.lucene.document with parameters of type TokenStream | |
|---|---|
| void Field.setTokenStream(TokenStream tokenStream) | Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true. |
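Field.setTokenStream is how a pre-analyzed TokenStream is attached to a field that also carries a stored String value, which the field description above explicitly allows. A sketch under the assumption of Lucene 4.x; the method name and the preAnalyzed stream are hypothetical and supplied by the caller.

```java
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;

public class PreAnalyzedSketch {
  // 'preAnalyzed' is a hypothetical TokenStream built elsewhere by the caller.
  public static Document build(String storedText, TokenStream preAnalyzed) {
    Document doc = new Document();
    // TextField.TYPE_STORED is indexed and tokenized, so a custom stream is permitted.
    Field body = new Field("body", storedText, TextField.TYPE_STORED);
    body.setTokenStream(preAnalyzed);   // index from the custom stream, store the String
    doc.add(body);
    return doc;
  }
}
```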
| Constructors in org.apache.lucene.document with parameters of type TokenStream | |
|---|---|
| Field(String name, TokenStream tokenStream) | Deprecated. Use TextField instead. |
| Field(String name, TokenStream tokenStream, Field.TermVector termVector) | Deprecated. Use TextField instead. |
| Field(String name, TokenStream tokenStream, FieldType type) | Create a field with a TokenStream value. |
| TextField(String name, TokenStream stream) | Creates a new un-stored TextField with a TokenStream value. |
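Per the deprecation notes above, the non-deprecated route for indexing a TokenStream-valued field is TextField(String, TokenStream) (or Field with a suitable FieldType). A sketch of adding such a field to a document, assuming Lucene 4.x; the writer, the preAnalyzed stream, and the method name are illustrative inputs, not part of this page.

```java
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

public class TokenStreamFieldSketch {
  // 'writer' and 'preAnalyzed' are hypothetical; the stream could come from any
  // Tokenizer/TokenFilter chain such as the one sketched earlier.
  public static void index(IndexWriter writer, TokenStream preAnalyzed) throws Exception {
    Document doc = new Document();
    // Un-stored, indexed, tokenized field whose terms come from the given stream.
    doc.add(new TextField("body", preAnalyzed));
    writer.addDocument(doc);
  }
}
```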
| Uses of TokenStream in org.apache.lucene.index | 
|---|
| Methods in org.apache.lucene.index that return TokenStream | |
|---|---|
| TokenStream IndexableField.tokenStream(Analyzer analyzer) | Creates the TokenStream used for indexing this field. |
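IndexableField.tokenStream(Analyzer) is the hook the indexing chain uses to obtain a field's terms. A sketch of calling it directly to inspect what a field would contribute, again assuming Lucene 4.x; dumpTerms and its parameters are illustrative.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.index.IndexableField;

public class FieldStreamSketch {
  // Prints the terms the given field would produce at index time.
  // 'field' and 'analyzer' are hypothetical inputs supplied by the caller.
  public static void dumpTerms(IndexableField field, Analyzer analyzer) throws Exception {
    TokenStream ts = field.tokenStream(analyzer);
    if (ts == null) {
      return;                            // e.g. a field that is not indexed
    }
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      System.out.println(term.toString());
    }
    ts.end();
    ts.close();
  }
}
```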