java.lang.Object
  org.apache.lucene.util.AttributeSource
    org.apache.lucene.analysis.TokenStream
      org.apache.lucene.analysis.Tokenizer
        org.apache.lucene.analysis.wikipedia.WikipediaTokenizer
public final class WikipediaTokenizer extends Tokenizer
Extension of StandardTokenizer that is aware of Wikipedia syntax. It is based on the Wikipedia tutorial available at http://en.wikipedia.org/wiki/Wikipedia:Tutorial, but it may not be complete.
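A minimal consumer sketch of this tokenizer, assuming a Lucene 3.x setup where CharTermAttribute and TypeAttribute are available; the sample markup is illustrative only:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;
import org.apache.lucene.analysis.wikipedia.WikipediaTokenizer;

public class WikipediaTokenizerExample {
  public static void main(String[] args) throws Exception {
    String markup = "[[Main Page|start here]] has '''bold''' text and a [[Category:Example]].";

    WikipediaTokenizer tokenizer = new WikipediaTokenizer(new StringReader(markup));
    // Look up attribute references once, before consuming the stream.
    CharTermAttribute termAtt = tokenizer.addAttribute(CharTermAttribute.class);
    TypeAttribute typeAtt = tokenizer.addAttribute(TypeAttribute.class);

    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      // Wikipedia-specific tokens carry types such as WikipediaTokenizer.INTERNAL_LINK
      // or WikipediaTokenizer.BOLD in addition to the standard word types.
      System.out.println(termAtt.toString() + "\t" + typeAtt.type());
    }
    tokenizer.end();
    tokenizer.close();
  }
}
```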
| Nested Class Summary | 
|---|
| Nested classes/interfaces inherited from class org.apache.lucene.util.AttributeSource | 
|---|
| AttributeSource.AttributeFactory, AttributeSource.State | 
| Field Summary | |
|---|---|
| static int | ACRONYM_ID | 
| static int | ALPHANUM_ID | 
| static int | APOSTROPHE_ID | 
| static String | BOLD | 
| static int | BOLD_ID | 
| static String | BOLD_ITALICS | 
| static int | BOLD_ITALICS_ID | 
| static int | BOTH: Output both the untokenized token and the splits | 
| static String | CATEGORY | 
| static int | CATEGORY_ID | 
| static String | CITATION | 
| static int | CITATION_ID | 
| static int | CJ_ID | 
| static int | COMPANY_ID | 
| static int | EMAIL_ID | 
| static String | EXTERNAL_LINK | 
| static int | EXTERNAL_LINK_ID | 
| static String | EXTERNAL_LINK_URL | 
| static int | EXTERNAL_LINK_URL_ID | 
| static String | HEADING | 
| static int | HEADING_ID | 
| static int | HOST_ID | 
| static String | INTERNAL_LINK | 
| static int | INTERNAL_LINK_ID | 
| static String | ITALICS | 
| static int | ITALICS_ID | 
| static int | NUM_ID | 
| static String | SUB_HEADING | 
| static int | SUB_HEADING_ID | 
| static String[] | TOKEN_TYPES: String token types that correspond to the token type int constants | 
| static int | TOKENS_ONLY: Only output tokens | 
| static int | UNTOKENIZED_ONLY: Only output untokenized tokens, which are tokens that would normally be split into several tokens | 
| static int | UNTOKENIZED_TOKEN_FLAG: This flag is used to indicate that the produced "Token" would, if TOKENS_ONLY was used, produce multiple tokens. | 
| Fields inherited from class org.apache.lucene.analysis.Tokenizer | 
|---|
| input | 
| Constructor Summary | |
|---|---|
| WikipediaTokenizer(AttributeSource.AttributeFactory factory, Reader input, int tokenOutput, Set<String> untokenizedTypes): Creates a new instance of the WikipediaTokenizer. | |
| WikipediaTokenizer(AttributeSource source, Reader input, int tokenOutput, Set<String> untokenizedTypes): Creates a new instance of the WikipediaTokenizer. | |
| WikipediaTokenizer(Reader input): Creates a new instance of the WikipediaTokenizer. | |
| WikipediaTokenizer(Reader input, int tokenOutput, Set<String> untokenizedTypes): Creates a new instance of the WikipediaTokenizer. | |
| Method Summary | |
|---|---|
|  void | end(): This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). | 
|  boolean | incrementToken(): Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. | 
|  void | reset(): Resets this stream to the beginning. | 
|  void | reset(Reader reader): Expert: Reset the tokenizer to a new reader. | 
| Methods inherited from class org.apache.lucene.analysis.Tokenizer | 
|---|
| close, correctOffset | 
| Methods inherited from class org.apache.lucene.util.AttributeSource | 
|---|
| addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString | 
| Methods inherited from class java.lang.Object | 
|---|
| clone, finalize, getClass, notify, notifyAll, wait, wait, wait | 
| Field Detail | 
|---|
public static final String INTERNAL_LINK
public static final String EXTERNAL_LINK
public static final String EXTERNAL_LINK_URL
public static final String CITATION
public static final String CATEGORY
public static final String BOLD
public static final String ITALICS
public static final String BOLD_ITALICS
public static final String HEADING
public static final String SUB_HEADING
public static final int ALPHANUM_ID
public static final int APOSTROPHE_ID
public static final int ACRONYM_ID
public static final int COMPANY_ID
public static final int EMAIL_ID
public static final int HOST_ID
public static final int NUM_ID
public static final int CJ_ID
public static final int INTERNAL_LINK_ID
public static final int EXTERNAL_LINK_ID
public static final int CITATION_ID
public static final int CATEGORY_ID
public static final int BOLD_ID
public static final int ITALICS_ID
public static final int BOLD_ITALICS_ID
public static final int HEADING_ID
public static final int SUB_HEADING_ID
public static final int EXTERNAL_LINK_URL_ID
public static final String[] TOKEN_TYPES
public static final int TOKENS_ONLY
public static final int UNTOKENIZED_ONLY
public static final int BOTH
public static final int UNTOKENIZED_TOKEN_FLAG
This flag is used to indicate that the produced "Token" would, if TOKENS_ONLY was used, produce multiple tokens.
| Constructor Detail | 
|---|
public WikipediaTokenizer(Reader input)
Creates a new instance of the WikipediaTokenizer. Attaches the input to a newly created JFlex scanner.
Parameters:
input - The input Reader
public WikipediaTokenizer(Reader input,
                          int tokenOutput,
                          Set<String> untokenizedTypes)
Creates a new instance of the WikipediaTokenizer. Attaches the input to the newly created JFlex scanner.
Parameters:
input - The input
tokenOutput - One of TOKENS_ONLY, UNTOKENIZED_ONLY, BOTH
untokenizedTypes - 
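A construction sketch for this variant. The choice of CATEGORY and INTERNAL_LINK as untokenized types is an illustrative assumption, since untokenizedTypes is not documented here:

```java
import java.io.Reader;
import java.util.HashSet;
import java.util.Set;

import org.apache.lucene.analysis.wikipedia.WikipediaTokenizer;

public class OutputModeExample {
  // Emit both the untokenized span and its splits for category and
  // internal-link tokens; all other token types are tokenized normally.
  static WikipediaTokenizer newBothModeTokenizer(Reader reader) {
    Set<String> untokenizedTypes = new HashSet<String>();
    untokenizedTypes.add(WikipediaTokenizer.CATEGORY);
    untokenizedTypes.add(WikipediaTokenizer.INTERNAL_LINK);
    return new WikipediaTokenizer(reader, WikipediaTokenizer.BOTH, untokenizedTypes);
  }
}
```

The returned tokenizer is then consumed with incrementToken() exactly as shown in the class-level example.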
public WikipediaTokenizer(AttributeSource.AttributeFactory factory,
                          Reader input,
                          int tokenOutput,
                          Set<String> untokenizedTypes)
Creates a new instance of the WikipediaTokenizer. Attaches the input to the newly created JFlex scanner. Uses the given AttributeSource.AttributeFactory.
Parameters:
input - The input
tokenOutput - One of TOKENS_ONLY, UNTOKENIZED_ONLY, BOTH
untokenizedTypes - 
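A sketch of the factory variant, passing the default factory explicitly; in practice a custom AttributeSource.AttributeFactory would be supplied here to control which AttributeImpl classes back the attributes:

```java
import java.io.StringReader;
import java.util.Collections;

import org.apache.lucene.analysis.wikipedia.WikipediaTokenizer;
import org.apache.lucene.util.AttributeSource;

public class FactoryExample {
  static WikipediaTokenizer newTokenizer(String markup) {
    // DEFAULT_ATTRIBUTE_FACTORY reproduces the behavior of the simpler constructors;
    // substitute a custom factory to change how attribute instances are created.
    return new WikipediaTokenizer(
        AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY,
        new StringReader(markup),
        WikipediaTokenizer.TOKENS_ONLY,
        Collections.<String>emptySet());
  }
}
```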
public WikipediaTokenizer(AttributeSource source,
                          Reader input,
                          int tokenOutput,
                          Set<String> untokenizedTypes)
Creates a new instance of the WikipediaTokenizer. Attaches the input to the newly created JFlex scanner. Uses the given AttributeSource.
Parameters:
input - The input
tokenOutput - One of TOKENS_ONLY, UNTOKENIZED_ONLY, BOTH
untokenizedTypes - 
| Method Detail | 
|---|
public final boolean incrementToken()
                             throws IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to
 the next token. Implementing classes must implement this method and update
 the appropriate AttributeImpls with the attributes of the next
 token.
 
 The producer must make no assumptions about the attributes after the method
 has returned: the caller may arbitrarily change it. If the producer
 needs to preserve the state for subsequent calls, it can use
 AttributeSource.captureState() to create a copy of the current attribute state.
 
 This method is called for every token of a document, so an efficient
 implementation is crucial for good performance. To avoid calls to
 AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class),
 references to all AttributeImpls that this stream uses should be
 retrieved during instantiation.
 
 To ensure that filters and consumers know which attributes are available,
 the attributes must be added during instantiation. Filters and consumers
 are not required to check for availability of attributes in
 TokenStream.incrementToken().
Specified by:
incrementToken in class TokenStream
Throws:
IOException
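A consumer sketch following the guidance above: attribute references are looked up once at setup time and reused inside the loop. Reading UNTOKENIZED_TOKEN_FLAG through FlagsAttribute in BOTH mode is an assumption about how the flag is surfaced:

```java
import java.io.StringReader;
import java.util.Collections;

import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.FlagsAttribute;
import org.apache.lucene.analysis.wikipedia.WikipediaTokenizer;

public class ConsumerExample {
  public static void main(String[] args) throws Exception {
    WikipediaTokenizer tokenizer = new WikipediaTokenizer(
        new StringReader("[[Main Page|start here]]"),
        WikipediaTokenizer.BOTH,
        Collections.singleton(WikipediaTokenizer.INTERNAL_LINK));

    // Fetch attribute references during setup, not inside the loop.
    CharTermAttribute termAtt = tokenizer.addAttribute(CharTermAttribute.class);
    FlagsAttribute flagsAtt = tokenizer.addAttribute(FlagsAttribute.class);

    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      boolean untokenized =
          (flagsAtt.getFlags() & WikipediaTokenizer.UNTOKENIZED_TOKEN_FLAG) != 0;
      System.out.println(termAtt.toString() + (untokenized ? "  [untokenized]" : ""));
    }
    tokenizer.end();
    tokenizer.close();
  }
}
```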
public void reset()
           throws IOException
Description copied from class: TokenStream
Resets this stream to the beginning. TokenStream.reset() is not needed for
 the standard indexing process. However, if the tokens of a
 TokenStream are intended to be consumed more than once, it is
 necessary to implement TokenStream.reset(). Note that if your TokenStream
 caches tokens and feeds them back again after a reset, it is imperative
 that you clone the tokens when you store them away (on the first pass) as
 well as when you return them (on future passes after TokenStream.reset()).
Overrides:
reset in class TokenStream
Throws:
IOException
public void reset(Reader reader)
           throws IOException
Description copied from class: Tokenizer
Expert: Reset the tokenizer to a new reader.
Overrides:
reset in class Tokenizer
Throws:
IOException
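A sketch of reusing one tokenizer instance across several inputs via reset(Reader); the extra no-argument reset() call before consuming follows the documented consumer workflow and is assumed to be safe (at worst redundant) here:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.wikipedia.WikipediaTokenizer;

public class ReuseExample {
  public static void main(String[] args) throws Exception {
    String[] docs = {"'''First''' document", "[[Second]] document"};

    // One tokenizer instance, re-pointed at a new Reader for each document.
    WikipediaTokenizer tokenizer = new WikipediaTokenizer(new StringReader(docs[0]));
    CharTermAttribute termAtt = tokenizer.addAttribute(CharTermAttribute.class);

    for (String doc : docs) {
      tokenizer.reset(new StringReader(doc)); // attach the new reader
      tokenizer.reset();                      // reset the stream state itself
      while (tokenizer.incrementToken()) {
        System.out.println(termAtt.toString());
      }
      tokenizer.end();
    }
    tokenizer.close();
  }
}
```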
public void end()
         throws IOException
Description copied from class: TokenStream
This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false
 (using the new TokenStream API). Streams implementing the old API
 should upgrade to use this feature.
 
 This method can be used to perform any end-of-stream operations, such as
 setting the final offset of a stream. The final offset of a stream might
 differ from the offset of the last token, e.g. when one or more whitespace
 characters followed the last token and a WhitespaceTokenizer was used.
Overrides:
end in class TokenStream
Throws:
IOException
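A sketch of using end() to obtain the final offset, assuming the tokenizer reports it through OffsetAttribute as described above; the offset read after end() can exceed the end offset of the last emitted token when trailing whitespace or markup was consumed:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.wikipedia.WikipediaTokenizer;

public class FinalOffsetExample {
  public static void main(String[] args) throws Exception {
    String text = "'''bold''' trailing   "; // note the trailing whitespace
    WikipediaTokenizer tokenizer = new WikipediaTokenizer(new StringReader(text));
    OffsetAttribute offsetAtt = tokenizer.addAttribute(OffsetAttribute.class);

    tokenizer.reset();
    int lastTokenEnd = -1;
    while (tokenizer.incrementToken()) {
      lastTokenEnd = offsetAtt.endOffset();   // end offset of the most recent token
    }
    tokenizer.end();                          // sets the final offset of the stream
    System.out.println("last token ended at " + lastTokenEnd
        + ", final offset is " + offsetAtt.endOffset());
    tokenizer.close();
  }
}
```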