Deprecated Classes |
org.apache.lucene.queryParser.standard.config.AllowLeadingWildcardAttributeImpl
|
org.apache.lucene.queryParser.standard.config.AnalyzerAttributeImpl
|
org.apache.lucene.analysis.ar.ArabicLetterTokenizer
(3.1) Use StandardTokenizer instead. |
org.apache.lucene.queryParser.standard.config.BoostAttributeImpl
|
org.apache.lucene.spatial.geometry.CartesianPoint
|
org.apache.lucene.spatial.tier.CartesianPolyFilterBuilder
|
org.apache.lucene.spatial.tier.CartesianShapeFilter
|
org.apache.lucene.spatial.tier.projections.CartesianTierPlotter
|
org.apache.lucene.analysis.CharArraySet.CharArraySetIterator
Use the standard iterator, which returns char[] instances. |
org.apache.lucene.analysis.cn.ChineseAnalyzer
Use StandardAnalyzer instead, which has the same functionality.
This analyzer will be removed in Lucene 5.0 |
org.apache.lucene.analysis.cn.ChineseFilter
Use StopFilter instead, which has the same functionality.
This filter will be removed in Lucene 5.0 |
org.apache.lucene.analysis.cn.ChineseTokenizer
Use StandardTokenizer instead, which has the same functionality.
This tokenizer will be removed in Lucene 5.0 |
org.apache.lucene.analysis.cjk.CJKTokenizer
Use StandardTokenizer, CJKWidthFilter, CJKBigramFilter, and LowerCaseFilter instead. |
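A minimal migration sketch of the chain named above, assuming Lucene 3.6-era APIs (Version.LUCENE_36 and the helper method buildCJKStream are illustrative, not part of Lucene):
  import java.io.Reader;
  import org.apache.lucene.analysis.LowerCaseFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.cjk.CJKBigramFilter;
  import org.apache.lucene.analysis.cjk.CJKWidthFilter;
  import org.apache.lucene.analysis.standard.StandardTokenizer;
  import org.apache.lucene.util.Version;

  TokenStream buildCJKStream(Reader reader) {
    TokenStream ts = new StandardTokenizer(Version.LUCENE_36, reader);
    ts = new CJKWidthFilter(ts);                     // fold half-width/full-width variants
    ts = new LowerCaseFilter(Version.LUCENE_36, ts);
    return new CJKBigramFilter(ts);                  // emit bigrams over CJK runs
  }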
org.apache.lucene.document.DateField
If you build a new index, use DateTools or
NumericField instead.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0). |
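A hedged sketch of the DateTools/NumericField advice above (the field names and DAY resolution are illustrative):
  import java.util.Date;
  import org.apache.lucene.document.DateTools;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.document.NumericField;

  void addDateFields(Document doc, Date date) {
    // String form: sortable and usable in text range queries.
    doc.add(new Field("created_s",
        DateTools.dateToString(date, DateTools.Resolution.DAY),
        Field.Store.YES, Field.Index.NOT_ANALYZED));
    // Numeric form: usable with NumericRangeQuery.
    doc.add(new NumericField("created_n").setLongValue(date.getTime()));
  }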
org.apache.lucene.queryParser.standard.config.DateResolutionAttributeImpl
|
org.apache.lucene.queryParser.standard.config.DefaultOperatorAttributeImpl
|
org.apache.lucene.queryParser.standard.config.DefaultPhraseSlopAttributeImpl
|
org.apache.lucene.spatial.geometry.shape.DistanceApproximation
This has been replaced with more accurate
math in LLRect. This class will be removed in a future release. |
org.apache.lucene.spatial.tier.DistanceFieldComparatorSource
|
org.apache.lucene.spatial.tier.DistanceFilter
|
org.apache.lucene.spatial.tier.DistanceHandler
|
org.apache.lucene.spatial.tier.DistanceQueryBuilder
|
org.apache.lucene.spatial.DistanceUtils
|
org.apache.lucene.analysis.nl.DutchStemFilter
Use SnowballFilter with
DutchStemmer instead, which has the
same functionality. This filter will be removed in Lucene 5.0 |
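The SnowballFilter replacement above, as a short sketch (this assumes the Snowball DutchStemmer from org.tartarus.snowball.ext; the helper method is illustrative):
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.snowball.SnowballFilter;
  import org.tartarus.snowball.ext.DutchStemmer;

  TokenStream stemDutch(TokenStream input) {
    return new SnowballFilter(input, new DutchStemmer());
  }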
org.apache.lucene.analysis.nl.DutchStemmer
Use org.tartarus.snowball.ext.DutchStemmer instead,
which has the same functionality. This class will be removed in Lucene 5.0 |
org.apache.lucene.spatial.geometry.shape.Ellipse
|
org.apache.lucene.queryParser.standard.config.FieldBoostMapAttributeImpl
|
org.apache.lucene.queryParser.standard.config.FieldDateResolutionMapAttributeImpl
|
org.apache.lucene.index.FieldNormModifier
This class is broken, as it does not correctly take position
overlaps into account. |
org.apache.lucene.search.FilterManager
Used by the remote package, which is deprecated as well. You should
use CachingWrapperFilter if you wish to cache
Filters. |
org.apache.lucene.spatial.geometry.FixedLatLng
|
org.apache.lucene.spatial.geometry.FloatLatLng
|
org.apache.lucene.analysis.fr.FrenchStemFilter
Use SnowballFilter with
FrenchStemmer instead, which has the
same functionality. This filter will be removed in Lucene 5.0 |
org.apache.lucene.analysis.fr.FrenchStemmer
Use org.tartarus.snowball.ext.FrenchStemmer instead,
which has the same functionality. This class will be removed in Lucene 5.0 |
org.apache.lucene.search.suggest.fst.FSTLookup
Use FSTCompletionLookup instead. |
org.apache.lucene.queryParser.standard.config.FuzzyAttributeImpl
|
org.apache.lucene.spatial.geohash.GeoHashDistanceFilter
|
org.apache.lucene.spatial.geohash.GeoHashUtils
|
org.apache.lucene.index.IndexWriter.MaxFieldLength
use LimitTokenCountAnalyzer instead. |
org.apache.lucene.analysis.in.IndicTokenizer
(3.6) Use StandardTokenizer instead. |
org.apache.lucene.store.instantiated.InstantiatedIndex
contrib/instantiated will be removed in 4.0;
you can use the memory codec to hold all postings in RAM |
org.apache.lucene.store.instantiated.InstantiatedIndexReader
contrib/instantiated will be removed in 4.0;
you can use the memory codec to hold all postings in RAM |
org.apache.lucene.store.instantiated.InstantiatedIndexWriter
contrib/instantiated will be removed in 4.0;
you can use the memory codec to hold all postings in RAM |
org.apache.lucene.analysis.ISOLatin1AccentFilter
If you build a new index, use ASCIIFoldingFilter
which covers a superset of Latin 1.
This class is included for use with existing
indexes and will be removed in a future release (possibly Lucene 4.0). |
org.apache.lucene.spatial.geometry.LatLng
|
org.apache.lucene.spatial.tier.LatLongDistanceFilter
|
org.apache.lucene.spatial.geometry.shape.LineSegment
|
org.apache.lucene.spatial.geometry.shape.LLRect
|
org.apache.lucene.queryParser.standard.config.LocaleAttributeImpl
|
org.apache.lucene.queryParser.standard.config.LowercaseExpandedTermsAttributeImpl
|
org.apache.lucene.messages.MessageImpl
Will be moved to a private package inside flexible query parser (Lucene 4.0). |
org.apache.lucene.queryParser.standard.config.MultiFieldAttributeImpl
|
org.apache.lucene.queryParser.standard.MultiFieldQueryParserWrapper
this class will be removed soon; it is a temporary class to be
used during the transition from the old query parser to the new
one |
org.apache.lucene.search.MultiSearcher
If you are using MultiSearcher over
IndexSearchers, please use MultiReader instead; this class
does not properly handle certain kinds of queries (see LUCENE-2756). |
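A hedged sketch of the MultiReader migration (dir1 and dir2 stand in for your Directory instances):
  import java.io.IOException;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.MultiReader;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.store.Directory;

  IndexSearcher openSearcher(Directory dir1, Directory dir2) throws IOException {
    IndexReader r1 = IndexReader.open(dir1);
    IndexReader r2 = IndexReader.open(dir2);
    // One searcher over a composite reader replaces MultiSearcher over two searchers.
    return new IndexSearcher(new MultiReader(r1, r2));
  }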
org.apache.lucene.queryParser.standard.config.MultiTermRewriteMethodAttributeImpl
|
org.apache.lucene.messages.NLS
Will be moved to a private package inside flexible query parser (Lucene 4.0). |
org.apache.lucene.document.NumberTools
For new indexes use NumericUtils instead, which
provides a sortable binary representation (prefix encoded) of numeric
values.
To index and efficiently query numeric values use NumericField
and NumericRangeQuery.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0). |
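A minimal sketch of the NumericField/NumericRangeQuery pairing described above (the field name "price" and the bounds are illustrative):
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.NumericField;
  import org.apache.lucene.search.NumericRangeQuery;

  Document priceDoc(long price) {
    Document doc = new Document();
    doc.add(new NumericField("price").setLongValue(price));
    return doc;
  }

  NumericRangeQuery<Long> priceRange() {
    // matches 10 <= price <= 100 (both endpoints inclusive)
    return NumericRangeQuery.newLongRange("price", 10L, 100L, true, true);
  }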
org.apache.lucene.search.ParallelMultiSearcher
Please pass an ExecutorService to IndexSearcher, instead. |
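A hedged sketch of the ExecutorService form (the pool size is illustrative; the searcher then searches sub-readers concurrently):
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.search.IndexSearcher;

  IndexSearcher parallelSearcher(IndexReader reader) {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    return new IndexSearcher(reader, pool);
  }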
org.apache.lucene.util.Parameter
Use Java 5 enum, will be removed in a later Lucene 3.x release. |
org.apache.lucene.queryParser.core.nodes.ParametricQueryNode
this class will be removed in the future. FieldQueryNode
should be used instead. |
org.apache.lucene.index.PayloadProcessorProvider.DirPayloadProcessor
Use PayloadProcessorProvider.ReaderPayloadProcessor instead. |
org.apache.lucene.spatial.geometry.shape.Point2D
|
org.apache.lucene.queryParser.standard.config.PositionIncrementsAttributeImpl
|
org.apache.lucene.queryParser.standard.QueryParserWrapper
this class will be removed soon; it is a temporary class to be
used during the transition from the old query parser to the new
one |
org.apache.lucene.queryParser.standard.config.RangeCollatorAttributeImpl
|
org.apache.lucene.queryParser.standard.nodes.RangeQueryNode
this class will be removed in the future; TermRangeQueryNode should
be used instead |
org.apache.lucene.queryParser.standard.builders.RangeQueryNodeBuilder
this builder will be removed in the future together with RangeQueryNode |
org.apache.lucene.spatial.geometry.shape.Rectangle
|
org.apache.lucene.search.RemoteCachingWrapperFilter
This package (all of contrib/remote) will be
removed in 4.0. |
org.apache.lucene.search.RemoteSearchable
This package (all of contrib/remote) will be
removed in 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer
Use StandardTokenizer instead, which has the same functionality.
This tokenizer will be removed in Lucene 5.0 |
org.apache.lucene.analysis.ru.RussianLowerCaseFilter
Use LowerCaseFilter instead, which has the same
functionality. This filter will be removed in Lucene 4.0 |
org.apache.lucene.analysis.ru.RussianStemFilter
Use SnowballFilter with
RussianStemmer instead, which has the
same functionality. This filter will be removed in Lucene 4.0 |
org.apache.lucene.search.Searcher
In 4.0 this abstract class is removed/absorbed
into IndexSearcher |
org.apache.lucene.spatial.tier.Shape
|
org.apache.lucene.analysis.shingle.ShingleMatrixFilter
Will be removed in Lucene 4.0. This filter is unmaintained and might not behave
correctly if used with custom Attributes, i.e. Attributes other than
the ones located in org.apache.lucene.analysis.tokenattributes. It also uses
hardcoded payload encoders, which makes it not easily adaptable to other use cases. |
org.apache.lucene.search.SimilarityDelegator
this class will be removed in 4.0. Please
subclass Similarity or DefaultSimilarity instead. |
org.apache.lucene.spatial.tier.projections.SinusoidalProjector
Until we can put in place proper tests and a proper fix. |
org.apache.lucene.analysis.snowball.SnowballAnalyzer
Use the language-specific analyzer in contrib/analyzers instead.
This analyzer will be removed in Lucene 5.0 |
org.apache.lucene.search.regex.SpanRegexQuery
Use new SpanMultiTermQueryWrapper<RegexQuery>(new RegexQuery()) instead.
This query will be removed in Lucene 4.0 |
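The same wrapper call, expanded into a hedged sketch (the field name and pattern are illustrative):
  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.regex.RegexQuery;
  import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper;
  import org.apache.lucene.search.spans.SpanQuery;

  SpanQuery regexSpan() {
    return new SpanMultiTermQueryWrapper<RegexQuery>(
        new RegexQuery(new Term("body", "lu.*ne")));
  }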
org.apache.lucene.analysis.standard.std31.StandardTokenizerImpl31
This class is only for exact backwards compatibility |
org.apache.lucene.analysis.tokenattributes.TermAttributeImpl
This class is not used anymore. The backwards layer in
AttributeFactory uses the replacement implementation. |
org.apache.lucene.analysis.standard.std31.UAX29URLEmailTokenizerImpl31
This class is only for exact backwards compatibility |
org.apache.lucene.analysis.standard.std34.UAX29URLEmailTokenizerImpl34
This class is only for exact backwards compatibility |
org.apache.lucene.spatial.geometry.shape.Vector2D
|
Deprecated Methods |
org.apache.lucene.index.IndexReader.acquireWriteLock()
Write support will be removed in Lucene 4.0. |
org.apache.lucene.analysis.PerFieldAnalyzerWrapper.addAnalyzer(String, Analyzer)
Changing the Analyzer for a field after instantiation prevents
reusability. Analyzers for fields should be set during construction. |
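A hedged sketch of setting per-field analyzers at construction time (the "id" field and analyzer choices are illustrative):
  import java.util.HashMap;
  import java.util.Map;
  import org.apache.lucene.analysis.Analyzer;
  import org.apache.lucene.analysis.KeywordAnalyzer;
  import org.apache.lucene.analysis.PerFieldAnalyzerWrapper;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.util.Version;

  Analyzer buildAnalyzer() {
    Map<String, Analyzer> perField = new HashMap<String, Analyzer>();
    perField.put("id", new KeywordAnalyzer());  // exact-match field
    return new PerFieldAnalyzerWrapper(
        new StandardAnalyzer(Version.LUCENE_36), perField);
  }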
org.apache.lucene.index.IndexWriter.addIndexesNoOptimize(Directory...)
use IndexWriter.addIndexes(Directory...) instead |
org.apache.lucene.queryParser.core.processors.QueryNodeProcessorPipeline.addProcessor(QueryNodeProcessor)
this class now conforms to the List interface, so use
QueryNodeProcessorPipeline.add(QueryNodeProcessor) instead |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, float)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, float) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, int)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, int) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, String, float)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, Collection, float) |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer.addStopWords(IndexReader, String, int)
Stopwords should be calculated at instantiation using
QueryAutoStopWordAnalyzer.QueryAutoStopWordAnalyzer(Version, Analyzer, IndexReader, Collection, int) |
org.apache.lucene.util._TestUtil.arrayToString(int[])
In 3.0 we can use Arrays.toString
instead. |
org.apache.lucene.util._TestUtil.arrayToString(Object[])
In 3.0 we can use Arrays.toString
instead. |
org.apache.lucene.util.LuceneTestCase.assertEquals(double, double)
|
org.apache.lucene.util.LuceneTestCase.assertEquals(float, float)
|
org.apache.lucene.util.LuceneTestCase.assertEquals(String, double, double)
|
org.apache.lucene.util.LuceneTestCase.assertEquals(String, float, float)
|
org.apache.lucene.search.MultiTermQuery.clearTotalNumberOfTerms()
Don't use this method, as it is not thread-safe and is useless. |
org.apache.lucene.search.MultiTermQueryWrapperFilter.clearTotalNumberOfTerms()
Don't use this method, as it is not thread-safe and is useless. |
org.apache.lucene.index.ParallelReader.clone(boolean)
Write support will be removed in Lucene 4.0.
Use ParallelReader.clone() instead. |
org.apache.lucene.index.IndexReader.clone(boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.clone() instead. |
org.apache.lucene.index.MultiReader.clone(boolean)
Write support will be removed in Lucene 4.0.
Use MultiReader.clone() instead. |
org.apache.lucene.index.SegmentReader.clone(boolean)
|
org.apache.lucene.index.SegmentReader.cloneDeletedDocs(BitVector)
|
org.apache.lucene.index.SegmentReader.cloneNormBytes(byte[])
|
org.apache.lucene.index.IndexReader.commit()
Write support will be removed in Lucene 4.0. |
org.apache.lucene.index.IndexReader.commit(Map)
Write support will be removed in Lucene 4.0. |
org.apache.lucene.queryParser.core.nodes.QueryNode.containsTag(CharSequence)
use QueryNode.containsTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.containsTag(CharSequence)
use QueryNodeImpl.containsTag(String) instead |
org.apache.lucene.store.Directory.copy(Directory, Directory, boolean)
should be replaced with calls to
Directory.copy(Directory, String, String) for every file that
needs copying. You can use the following code:
  IndexFileNameFilter filter = IndexFileNameFilter.getFilter();
  for (String file : src.listAll()) {
    if (filter.accept(null, file)) {
      src.copy(dest, file, file);
    }
  }
|
org.apache.lucene.analysis.CharArraySet.copy(Set<?>)
use CharArraySet.copy(Version, Set) instead. |
org.apache.lucene.search.Searcher.createWeight(Query)
never ever use this method in Weight implementations.
Subclasses of Searcher should use Searcher.createNormalizedWeight(org.apache.lucene.search.Query), instead. |
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer)
Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.util.IndexableBinaryStringTools.decode(CharBuffer, ByteBuffer)
Use IndexableBinaryStringTools.decode(char[], int, int, byte[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.search.Similarity.decodeNorm(byte)
Use Similarity.decodeNormValue(byte) instead. |
org.apache.lucene.index.IndexReader.deleteDocument(int)
Write support will be removed in Lucene 4.0.
Use IndexWriter.deleteDocuments(Term) instead |
org.apache.lucene.index.IndexReader.deleteDocuments(Term)
Write support will be removed in Lucene 4.0.
Use IndexWriter.deleteDocuments(Term) instead |
org.apache.lucene.index.ParallelReader.doCommit(Map)
|
org.apache.lucene.index.IndexReader.doCommit(Map)
Write support will be removed in Lucene 4.0. |
org.apache.lucene.index.MultiReader.doCommit(Map)
|
org.apache.lucene.index.FilterIndexReader.doCommit(Map)
|
org.apache.lucene.index.SegmentReader.doCommit(Map)
|
org.apache.lucene.index.ParallelReader.doDelete(int)
|
org.apache.lucene.index.IndexReader.doDelete(int)
Write support will be removed in Lucene 4.0.
Use IndexWriter.deleteDocuments(Term) instead |
org.apache.lucene.index.MultiReader.doDelete(int)
|
org.apache.lucene.index.FilterIndexReader.doDelete(int)
|
org.apache.lucene.index.SegmentReader.doDelete(int)
|
org.apache.lucene.index.ParallelReader.doOpenIfChanged(boolean)
Write support will be removed in Lucene 4.0.
Use ParallelReader.doOpenIfChanged() instead. |
org.apache.lucene.index.IndexReader.doOpenIfChanged(boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.doOpenIfChanged() instead |
org.apache.lucene.index.MultiReader.doOpenIfChanged(boolean)
Write support will be removed in Lucene 4.0.
Use MultiReader.doOpenIfChanged() instead. |
org.apache.lucene.index.SegmentReader.doOpenIfChanged(boolean)
|
org.apache.lucene.index.ParallelReader.doSetNorm(int, String, byte)
|
org.apache.lucene.index.IndexReader.doSetNorm(int, String, byte)
Write support will be removed in Lucene 4.0.
There will be no replacement for this method. |
org.apache.lucene.index.MultiReader.doSetNorm(int, String, byte)
|
org.apache.lucene.index.FilterIndexReader.doSetNorm(int, String, byte)
|
org.apache.lucene.index.SegmentReader.doSetNorm(int, String, byte)
|
org.apache.lucene.index.ParallelReader.doUndeleteAll()
|
org.apache.lucene.index.IndexReader.doUndeleteAll()
Write support will be removed in Lucene 4.0.
There will be no replacement for this method. |
org.apache.lucene.index.MultiReader.doUndeleteAll()
|
org.apache.lucene.index.FilterIndexReader.doUndeleteAll()
|
org.apache.lucene.index.SegmentReader.doUndeleteAll()
|
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer)
Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.util.IndexableBinaryStringTools.encode(ByteBuffer, CharBuffer)
Use IndexableBinaryStringTools.encode(byte[], int, int, char[], int, int)
instead. This method will be removed in Lucene 4.0 |
org.apache.lucene.search.Similarity.encodeNorm(float)
Use Similarity.encodeNormValue(float) instead. |
org.tartarus.snowball.SnowballProgram.eq_s_b(int, String)
for binary back compat. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_s(int, String)
for binary back compat. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_v_b(StringBuilder)
for binary back compat. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.eq_v(StringBuilder)
for binary back compat. Will be removed in Lucene 4.0 |
org.apache.lucene.util.RamUsageEstimator.estimateRamUsage(Object)
Don't create instances of this class; instead use the static
RamUsageEstimator.sizeOf(Object) method. |
org.apache.lucene.index.IndexWriter.expungeDeletes()
|
org.apache.lucene.index.IndexWriter.expungeDeletes(boolean)
|
org.apache.lucene.store.Directory.fileModified(String)
|
org.apache.lucene.search.ChainedFilter.finalResult(OpenBitSetDISI, int)
Either use CachingWrapperFilter, or
switch to a different DocIdSet implementation yourself.
This method will be removed in Lucene 4.0 |
org.apache.lucene.index.IndexReader.flush()
Write support will be removed in Lucene 4.0. |
org.apache.lucene.index.IndexReader.flush(Map)
Write support will be removed in Lucene 4.0. |
org.apache.lucene.queryParser.CharStream.getColumn()
|
org.apache.lucene.benchmark.byTask.feeds.demohtml.SimpleCharStream.getColumn()
|
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getColumn()
|
org.apache.lucene.queryParser.surround.parser.CharStream.getColumn()
|
org.apache.lucene.index.IndexReader.getCommitUserData()
Call IndexReader.getIndexCommit() and then call
IndexCommit.getUserData(). |
org.apache.lucene.index.IndexReader.getCommitUserData(Directory)
Call IndexReader.getIndexCommit() on an open IndexReader, and then call
IndexCommit.getUserData(). |
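The two-step replacement, as a one-method sketch:
  import java.io.IOException;
  import java.util.Map;
  import org.apache.lucene.index.IndexReader;

  Map<String, String> commitUserData(IndexReader reader) throws IOException {
    return reader.getIndexCommit().getUserData();
  }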
org.apache.lucene.index.IndexReader.getCurrentVersion(Directory)
Use IndexReader.getVersion() on an opened IndexReader. |
org.apache.lucene.util.IndexableBinaryStringTools.getDecodedLength(CharBuffer)
Use IndexableBinaryStringTools.getDecodedLength(char[], int, int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.index.IndexWriter.getDefaultWriteLockTimeout()
use IndexWriterConfig.getDefaultWriteLockTimeout() instead |
org.apache.lucene.index.PayloadProcessorProvider.getDirProcessor(Directory)
Use PayloadProcessorProvider.getReaderProcessor(org.apache.lucene.index.IndexReader) instead. You can still select by Directory,
if you retrieve the underlying directory from IndexReader.directory(). |
org.apache.lucene.analysis.StopFilter.getEnablePositionIncrementsVersionDefault(Version)
use StopFilter.StopFilter(Version, TokenStream, Set) instead |
org.apache.lucene.util.IndexableBinaryStringTools.getEncodedLength(ByteBuffer)
Use IndexableBinaryStringTools.getEncodedLength(byte[], int, int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.document.Document.getField(String)
use Document.getFieldable(java.lang.String) instead and cast depending on
data type. |
org.apache.lucene.queryParser.core.config.QueryConfigHandler.getFieldConfig(CharSequence)
use QueryConfigHandler.getFieldConfig(String) instead |
org.apache.lucene.queryParser.core.config.FieldConfig.getFieldName()
use FieldConfig.getField() instead |
org.apache.lucene.queryParser.QueryParser.getFieldQuery(String, String)
Use QueryParser.getFieldQuery(String,String,boolean) instead. |
org.apache.lucene.queryParser.standard.QueryParserWrapper.getFieldQuery(String, String)
Use QueryParserWrapper.getFieldQuery(String, String, boolean) instead |
org.apache.lucene.document.Document.getFields(String)
use Document.getFieldable(java.lang.String) instead and cast depending on
data type. |
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFieldValues(IndexReader, int, String)
|
org.apache.lucene.store.FSDirectory.getFile()
Use FSDirectory.getDirectory() instead. |
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.getFragmentSource(StringBuilder, int[], String[], int, int)
|
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter.getHyphenationTree(Reader)
Don't use Readers with a fixed charset to load XML files, unless programmatically created.
Use HyphenationCompoundWordTokenFilter.getHyphenationTree(InputSource) instead, where you can supply default charset and input
stream, if you like. |
org.apache.lucene.queryParser.CharStream.getLine()
|
org.apache.lucene.benchmark.byTask.feeds.demohtml.SimpleCharStream.getLine()
|
org.apache.lucene.queryParser.standard.parser.JavaCharStream.getLine()
|
org.apache.lucene.queryParser.surround.parser.CharStream.getLine()
|
org.apache.lucene.index.IndexWriter.getMaxBufferedDeleteTerms()
use IndexWriterConfig.getMaxBufferedDeleteTerms() instead |
org.apache.lucene.index.IndexWriter.getMaxBufferedDocs()
use IndexWriterConfig.getMaxBufferedDocs() instead. |
org.apache.lucene.index.IndexWriter.getMaxFieldLength()
use LimitTokenCountAnalyzer to limit number of tokens. |
org.apache.lucene.index.IndexWriter.getMaxMergeDocs()
use LogMergePolicy.getMaxMergeDocs() directly. |
org.apache.lucene.index.LogByteSizeMergePolicy.getMaxMergeMBForOptimize()
Renamed to LogByteSizeMergePolicy.getMaxMergeMBForForcedMerge() |
org.apache.lucene.index.IndexWriter.getMergedSegmentWarmer()
use IndexWriterConfig.getMergedSegmentWarmer() instead. |
org.apache.lucene.index.IndexWriter.getMergeFactor()
use LogMergePolicy.getMergeFactor() directly. |
org.apache.lucene.index.IndexWriter.getMergePolicy()
use IndexWriterConfig.getMergePolicy() instead |
org.apache.lucene.index.IndexWriter.getMergeScheduler()
use IndexWriterConfig.getMergeScheduler() instead |
org.apache.lucene.search.Similarity.getNormDecoder()
Use instance methods for encoding/decoding norm values to enable customization. |
org.apache.lucene.document.AbstractField.getOmitTermFreqAndPositions()
use AbstractField.getIndexOptions() instead. |
org.apache.lucene.index.IndexWriter.getRAMBufferSizeMB()
use IndexWriterConfig.getRAMBufferSizeMB() instead. |
org.apache.lucene.index.IndexWriter.getReader()
Please use IndexReader.open(IndexWriter,boolean) instead. |
org.apache.lucene.index.IndexWriter.getReader(int)
Please use IndexReader.open(IndexWriter,boolean) instead. Furthermore,
this method cannot guarantee the reader (and its
sub-readers) will be opened with the
termInfosIndexDivisor setting because some of them may
have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int). You
should set the requested termInfosIndexDivisor through
IndexWriterConfig.setReaderTermsIndexDivisor(int) and use
IndexWriter.getReader(). |
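A hedged sketch of the recommended open call (the boolean controls whether pending deletes are applied; the helper method is illustrative):
  import java.io.IOException;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.IndexWriter;

  IndexReader nearRealTimeReader(IndexWriter writer) throws IOException {
    return IndexReader.open(writer, true);  // true = apply pending deletes
  }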
org.apache.lucene.index.IndexWriter.getReaderTermsIndexDivisor()
use IndexWriterConfig.getReaderTermsIndexDivisor() instead. |
org.apache.lucene.index.IndexWriter.getSimilarity()
use IndexWriterConfig.getSimilarity() instead |
org.apache.lucene.search.Scorer.getSimilarity()
Store any Similarity you might need privately in your implementation instead. |
org.apache.lucene.search.Query.getSimilarity(Searcher)
Instead of using "runtime" subclassing/delegation, subclass the Weight instead. |
org.apache.lucene.queryParser.core.nodes.QueryNode.getTag(CharSequence)
use QueryNode.getTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.getTag(CharSequence)
use QueryNodeImpl.getTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNode.getTags()
use QueryNode.getTagMap() |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.getTags()
use QueryNodeImpl.getTagMap() instead |
org.apache.lucene.index.IndexWriter.getTermIndexInterval()
use IndexWriterConfig.getTermIndexInterval() |
org.apache.lucene.index.IndexCommit.getTimestamp()
If you need to track commit time of
an index, you can store it in the commit data (see
IndexWriter.commit(Map)). |
org.apache.lucene.search.MultiTermQuery.getTotalNumberOfTerms()
Don't use this method, as it is not thread-safe and is useless. |
org.apache.lucene.search.MultiTermQueryWrapperFilter.getTotalNumberOfTerms()
Don't use this method, as it is not thread-safe and is useless. |
org.apache.lucene.index.IndexWriter.getUseCompoundFile()
use LogMergePolicy.getUseCompoundFile() |
org.apache.lucene.index.IndexCommit.getVersion()
use IndexCommit.getGeneration() instead |
org.apache.lucene.index.IndexWriter.getWriteLockTimeout()
use IndexWriterConfig.getWriteLockTimeout() |
org.apache.lucene.search.MultiTermQuery.incTotalNumberOfTerms(int)
Don't use this method, as it is not thread-safe and is useless. |
org.tartarus.snowball.SnowballProgram.insert(int, int, String)
for binary back compat. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.insert(int, int, StringBuilder)
for binary back compat. Will be removed in Lucene 4.0 |
org.apache.lucene.index.ParallelReader.isOptimized()
|
org.apache.lucene.index.IndexReader.isOptimized()
Check segment count using IndexReader.getSequentialSubReaders() instead. |
org.apache.lucene.index.MultiReader.isOptimized()
|
org.apache.lucene.index.FilterIndexReader.isOptimized()
|
org.apache.lucene.store.instantiated.InstantiatedIndexReader.isOptimized()
|
org.apache.lucene.analysis.standard.ClassicTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value |
org.apache.lucene.analysis.standard.StandardTokenizer.isReplaceInvalidAcronym()
Remove in 3.X and make true the only valid value |
org.apache.lucene.analysis.CharTokenizer.isTokenChar(char)
use CharTokenizer.isTokenChar(int) instead. This method will be
removed in Lucene 4.0. |
org.apache.lucene.index.IndexReader.lastModified(Directory)
If you need to track commit time of
an index, you can store it in the commit data (see
IndexWriter.commit(Map)). |
org.apache.lucene.search.Similarity.lengthNorm(String, int)
Please override computeNorm instead |
org.apache.lucene.search.similar.MoreLikeThis.like(File)
use MoreLikeThis.like(Reader, String) instead |
org.apache.lucene.search.similar.MoreLikeThis.like(InputStream)
use MoreLikeThis.like(Reader, String) instead |
org.apache.lucene.search.similar.MoreLikeThis.like(Reader)
use MoreLikeThis.like(Reader, String) instead |
org.apache.lucene.search.similar.MoreLikeThis.like(URL)
use MoreLikeThis.like(Reader, String) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer.loadStopWords(InputStream, String)
use WordlistLoader.getWordSet(Reader, String, Version)
and CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase.makeDictionary(Version, String[])
Only available for backwards compatibility. |
org.apache.lucene.search.vectorhighlight.BaseFragmentsBuilder.makeFragment(StringBuilder, int[], String[], FieldFragList.WeightedFragInfo)
|
org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>)
use StopFilter.makeStopSet(Version, List) instead |
org.apache.lucene.analysis.StopFilter.makeStopSet(List<?>, boolean)
use StopFilter.makeStopSet(Version, List, boolean) instead |
org.apache.lucene.analysis.StopFilter.makeStopSet(String...)
use StopFilter.makeStopSet(Version, String...) instead |
org.apache.lucene.analysis.StopFilter.makeStopSet(String[], boolean)
use StopFilter.makeStopSet(Version, String[], boolean) instead |
org.apache.lucene.search.SearcherManager.maybeReopen()
see ReferenceManager.maybeRefresh(). |
org.apache.lucene.analysis.CharTokenizer.normalize(char)
use CharTokenizer.normalize(int) instead. This method will be
removed in Lucene 4.0. |
org.apache.lucene.index.IndexReader.open(Directory, boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.open(Directory) instead |
org.apache.lucene.index.IndexReader.open(Directory, IndexDeletionPolicy, boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.open(Directory) instead |
org.apache.lucene.index.IndexReader.open(Directory, IndexDeletionPolicy, boolean, int)
Write support will be removed in Lucene 4.0.
Use IndexReader.open(Directory,int) instead |
org.apache.lucene.index.IndexReader.open(IndexCommit, boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.open(IndexCommit) instead |
org.apache.lucene.index.IndexReader.open(IndexCommit, IndexDeletionPolicy, boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.open(IndexCommit) instead |
org.apache.lucene.index.IndexReader.open(IndexCommit, IndexDeletionPolicy, boolean, int)
Write support will be removed in Lucene 4.0.
Use IndexReader.open(IndexCommit,int) instead |
org.apache.lucene.index.IndexReader.openIfChanged(IndexReader, boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.openIfChanged(IndexReader) instead |
org.apache.lucene.index.IndexWriter.optimize()
|
org.apache.lucene.index.IndexWriter.optimize(boolean)
|
org.apache.lucene.index.IndexWriter.optimize(int)
|
org.apache.lucene.index.SegmentInfos.range(int, int)
use asList().subList(first, last)
instead. |
org.apache.lucene.store.DataInput.readChars(char[], int, int)
Please use readString or readBytes
instead, and construct the string
from those UTF-8 bytes. |
org.apache.lucene.index.SegmentInfos.readCurrentVersion(Directory)
Load the SegmentInfos and then call SegmentInfos.getVersion(). |
org.apache.lucene.index.IndexReader.reopen()
Use IndexReader.openIfChanged(IndexReader) instead |
org.apache.lucene.index.IndexReader.reopen(boolean)
Write support will be removed in Lucene 4.0.
Use IndexReader.openIfChanged(IndexReader) instead |
org.apache.lucene.index.IndexReader.reopen(IndexCommit)
Use IndexReader.openIfChanged(IndexReader,IndexCommit) instead |
org.apache.lucene.index.IndexReader.reopen(IndexWriter, boolean)
Use IndexReader.openIfChanged(IndexReader,IndexWriter,boolean) instead |
org.tartarus.snowball.SnowballProgram.replace_s(int, int, String)
for binary back compat. Will be removed in Lucene 4.0 |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.resizeTermBuffer(int)
|
org.apache.lucene.search.similar.MoreLikeThis.retrieveInterestingTerms(Reader)
use MoreLikeThis.retrieveInterestingTerms(Reader, String) instead. |
org.apache.lucene.search.similar.MoreLikeThis.retrieveTerms(Reader)
use MoreLikeThis.retrieveTerms(Reader, String) instead |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[])
use ReverseStringFilter.reverse(Version, char[]) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int)
use ReverseStringFilter.reverse(Version, char[], int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(char[], int, int)
use ReverseStringFilter.reverse(Version, char[], int, int) instead. This
method will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter.reverse(String)
use ReverseStringFilter.reverse(Version, String) instead. This method
will be removed in Lucene 4.0 |
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Set<?>)
use ElisionFilter.setArticles(Version, Set) instead |
org.apache.lucene.analysis.fr.ElisionFilter.setArticles(Version, Set<?>)
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.queryParser.core.builders.QueryTreeBuilder.setBuilder(CharSequence, QueryBuilder)
use QueryTreeBuilder.setBuilder(String, QueryBuilder) instead |
org.apache.lucene.queryParser.standard.StandardQueryParser.setDateResolution(Map)
this method was renamed to StandardQueryParser.setDateResolutionMap(Map) |
org.apache.lucene.queryParser.standard.StandardQueryParser.setDefaultOperator(DefaultOperatorAttribute.Operator)
|
org.apache.lucene.queryParser.standard.StandardQueryParser.setDefaultPhraseSlop(int)
renamed to StandardQueryParser.setPhraseSlop(int) |
org.apache.lucene.index.IndexWriter.setDefaultWriteLockTimeout(long)
use IndexWriterConfig.setDefaultWriteLockTimeout(long) instead |
org.apache.lucene.analysis.de.GermanStemFilter.setExclusionSet(Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.nl.DutchStemFilter.setExclusionTable(HashSet<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.fr.FrenchStemFilter.setExclusionTable(Map<?, ?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
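A hedged sketch of the KeywordMarkerFilter approach these entries point to (GermanStemFilter and the sample exclusion word are illustrative):
  import java.util.Arrays;
  import org.apache.lucene.analysis.CharArraySet;
  import org.apache.lucene.analysis.KeywordMarkerFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.de.GermanStemFilter;
  import org.apache.lucene.util.Version;

  TokenStream stemWithExclusions(TokenStream input) {
    CharArraySet exclusions =
        new CharArraySet(Version.LUCENE_36, Arrays.asList("autos"), true);
    // Tokens marked as keywords pass through the stemmer unchanged.
    return new GermanStemFilter(new KeywordMarkerFilter(input, exclusions));
  }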
org.apache.lucene.index.IndexWriter.setMaxBufferedDeleteTerms(int)
use IndexWriterConfig.setMaxBufferedDeleteTerms(int) instead. |
org.apache.lucene.index.IndexWriter.setMaxBufferedDocs(int)
use IndexWriterConfig.setMaxBufferedDocs(int) instead. |
org.apache.lucene.index.IndexWriter.setMaxFieldLength(int)
use LimitTokenCountAnalyzer instead. Note that the
behavior changed slightly: the analyzer limits the number of
tokens per token stream created, while this setting limits the
total number of tokens to index. This only matters if you index
many multi-valued fields though. |
org.apache.lucene.index.IndexWriter.setMaxMergeDocs(int)
use LogMergePolicy.setMaxMergeDocs(int) directly. |
org.apache.lucene.index.LogByteSizeMergePolicy.setMaxMergeMBForOptimize(double)
Renamed to LogByteSizeMergePolicy.setMaxMergeMBForForcedMerge(double) |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setMaxShingleSize(int)
Setting maxShingleSize after Analyzer instantiation prevents reuse.
Configure maxShingleSize during construction. |
org.apache.lucene.index.IndexWriter.setMergedSegmentWarmer(IndexWriter.IndexReaderWarmer)
use
IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer)
instead. |
org.apache.lucene.index.IndexWriter.setMergeFactor(int)
use LogMergePolicy.setMergeFactor(int) directly. |
org.apache.lucene.index.IndexWriter.setMergePolicy(MergePolicy)
use IndexWriterConfig.setMergePolicy(MergePolicy) instead. |
org.apache.lucene.index.IndexWriter.setMergeScheduler(MergeScheduler)
use IndexWriterConfig.setMergeScheduler(MergeScheduler) instead |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setMinShingleSize(int)
Setting minShingleSize after Analyzer instantiation prevents reuse.
Configure minShingleSize during construction. |
org.apache.lucene.index.IndexReader.setNorm(int, String, byte)
Write support will be removed in Lucene 4.0.
There will be no replacement for this method. |
org.apache.lucene.index.IndexReader.setNorm(int, String, float)
Write support will be removed in Lucene 4.0.
There will be no replacement for this method. |
org.apache.lucene.document.AbstractField.setOmitTermFreqAndPositions(boolean)
use AbstractField.setIndexOptions(FieldInfo.IndexOptions) instead. |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setOutputUnigrams(boolean)
Setting outputUnigrams after Analyzer instantiation prevents reuse.
Configure outputUnigrams during construction. |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setOutputUnigramsIfNoShingles(boolean)
Setting outputUnigramsIfNoShingles after Analyzer instantiation prevents reuse.
Configure outputUnigramsIfNoShingles during construction. |
org.apache.lucene.index.IndexWriter.setRAMBufferSizeMB(double)
use IndexWriterConfig.setRAMBufferSizeMB(double) instead. |
org.apache.lucene.index.IndexWriter.setReaderTermsIndexDivisor(int)
use IndexWriterConfig.setReaderTermsIndexDivisor(int) instead. |
org.apache.lucene.analysis.standard.ClassicTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value.
See https://issues.apache.org/jira/browse/LUCENE-1068 |
org.apache.lucene.analysis.standard.StandardTokenizer.setReplaceInvalidAcronym(boolean)
Remove in 3.X and make true the only valid value.
See https://issues.apache.org/jira/browse/LUCENE-1068 |
org.apache.lucene.index.IndexWriter.setSimilarity(Similarity)
use IndexWriterConfig.setSimilarity(Similarity) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemDictionary(File)
This prevents reuse of TokenStreams. If you wish to use a custom
stem dictionary, create your own Analyzer with StemmerOverrideFilter |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(File)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(File)
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(File)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(File)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(HashSet<?>)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(Map<?, ?>)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(Map<?, ?>)
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(Map<?, ?>)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer.setStemExclusionTable(String...)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer.setStemExclusionTable(String...)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer.setStemExclusionTable(String...)
use DutchAnalyzer.DutchAnalyzer(Version, Set, Set) instead |
org.apache.lucene.analysis.de.GermanAnalyzer.setStemExclusionTable(String[])
use GermanAnalyzer.GermanAnalyzer(Version, Set, Set) instead |
org.apache.lucene.queryParser.core.nodes.QueryNode.setTag(CharSequence, Object)
use QueryNode.setTag(String, Object) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.setTag(CharSequence, Object)
use QueryNodeImpl.setTag(String, Object) instead |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(char[], int, int)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String)
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermBuffer(String, int, int)
|
org.apache.lucene.index.IndexWriter.setTermIndexInterval(int)
use IndexWriterConfig.setTermIndexInterval(int) |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.setTermLength(int)
|
org.apache.lucene.index.ConcurrentMergeScheduler.setTestMode()
this test mode code will be removed in a future release |
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapper.setTokenSeparator(String)
Setting tokenSeparator after Analyzer instantiation prevents reuse.
Configure tokenSeparator during construction. |
org.apache.lucene.index.IndexWriter.setUseCompoundFile(boolean)
use LogMergePolicy.setUseCompoundFile(boolean). |
org.apache.lucene.index.IndexWriter.setWriteLockTimeout(long)
use IndexWriterConfig.setWriteLockTimeout(long) instead |
org.apache.lucene.store.IndexInput.skipChars(int)
this method operates on old "modified utf8" encoded
strings |
org.tartarus.snowball.SnowballProgram.slice_from(String)
for binary back compat. Will be removed in Lucene 4.0 |
org.tartarus.snowball.SnowballProgram.slice_from(StringBuilder)
for binary back compat. Will be removed in Lucene 4.0 |
org.apache.lucene.index.MultiPassIndexSplitter.split(IndexReader, Directory[], boolean)
use MultiPassIndexSplitter.split(Version, IndexReader, Directory[], boolean) instead.
This method will be removed in Lucene 4.0. |
org.apache.lucene.analysis.CharArraySet.stringIterator()
Use CharArraySet.iterator() , which returns char[] instances. |
org.apache.lucene.search.spell.SpellChecker.suggestSimilar(String, int, IndexReader, String, boolean)
use suggestSimilar(String, int, IndexReader, String, SuggestMode):
- SuggestMode.SUGGEST_WHEN_NOT_IN_INDEX instead of morePopular=false
- SuggestMode.SUGGEST_MORE_POPULAR instead of morePopular=true
|
org.apache.lucene.search.spell.SpellChecker.suggestSimilar(String, int, IndexReader, String, boolean, float)
use suggestSimilar(String, int, IndexReader, String, SuggestMode, float):
- SuggestMode.SUGGEST_WHEN_NOT_IN_INDEX instead of morePopular=false
- SuggestMode.SUGGEST_MORE_POPULAR instead of morePopular=true
|
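The SuggestMode form of the call, as a hedged sketch (the word, suggestion count, and field are illustrative):
  import java.io.IOException;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.search.spell.SpellChecker;
  import org.apache.lucene.search.spell.SuggestMode;

  String[] suggest(SpellChecker checker, IndexReader reader) throws IOException {
    return checker.suggestSimilar("lucene", 5, reader, "title",
        SuggestMode.SUGGEST_MORE_POPULAR);
  }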
org.apache.lucene.store.MockDirectoryWrapper.sync(String)
|
org.apache.lucene.store.FSDirectory.sync(String)
|
org.apache.lucene.store.FileSwitchDirectory.sync(String)
|
org.apache.lucene.store.Directory.sync(String)
use Directory.sync(Collection) instead.
For easy migration you can change your code to call
sync(Collections.singleton(name)). |
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.term()
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termBuffer()
|
org.apache.lucene.analysis.tokenattributes.CharTermAttributeImpl.termLength()
|
org.apache.lucene.store.MockDirectoryWrapper.touchFile(String)
|
org.apache.lucene.store.FSDirectory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0. |
org.apache.lucene.store.RAMDirectory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0. |
org.apache.lucene.store.FileSwitchDirectory.touchFile(String)
|
org.apache.lucene.store.NRTCachingDirectory.touchFile(String)
|
org.apache.lucene.store.Directory.touchFile(String)
Lucene never uses this API; it will be
removed in 4.0. |
org.apache.lucene.index.IndexReader.undeleteAll()
Write support will be removed in Lucene 4.0.
There will be no replacement for this method. |
org.apache.lucene.queryParser.core.nodes.QueryNode.unsetTag(CharSequence)
use QueryNode.unsetTag(String) instead |
org.apache.lucene.queryParser.core.nodes.QueryNodeImpl.unsetTag(CharSequence)
use QueryNodeImpl.unsetTag(String) |
org.apache.lucene.search.Query.weight(Searcher)
never ever use this method in Weight implementations.
Subclasses of Query should use Query.createWeight(org.apache.lucene.search.Searcher), instead. |
org.apache.lucene.store.DataOutput.writeChars(char[], int, int)
Please pre-convert to UTF-8 bytes instead, or use DataOutput.writeString(java.lang.String). |
org.apache.lucene.store.DataOutput.writeChars(String, int, int)
Please pre-convert to UTF-8 bytes
instead, or use DataOutput.writeString(java.lang.String). |
Deprecated Constructors |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, File)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, Hashtable<?, ?>)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicAnalyzer(Version, String...)
use ArabicAnalyzer.ArabicAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(AttributeSource.AttributeFactory, Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(AttributeSource, Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ar.ArabicLetterTokenizer(Reader)
use ArabicLetterTokenizer.ArabicLetterTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |
org.apache.lucene.util.ArrayUtil()
This constructor was not intended to be public and should not be used.
This class contains only static utility methods.
It will be made private in Lucene 4.0 |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, File)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, Map<?, ?>)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianAnalyzer(Version, String...)
use BrazilianAnalyzer.BrazilianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.br.BrazilianStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.store.BufferedIndexInput()
please pass resourceDesc |
org.apache.lucene.store.BufferedIndexInput(int)
please pass resourceDesc |
org.apache.lucene.analysis.CharArraySet(Collection<?>, boolean)
use CharArraySet.CharArraySet(Version, Collection, boolean) instead |
org.apache.lucene.analysis.CharArraySet(int, boolean)
use CharArraySet.CharArraySet(Version, int, boolean) instead |
org.apache.lucene.analysis.CharTokenizer(AttributeSource.AttributeFactory, Reader)
use CharTokenizer.CharTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.analysis.CharTokenizer(AttributeSource, Reader)
use CharTokenizer.CharTokenizer(Version, AttributeSource, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.analysis.CharTokenizer(Reader)
use CharTokenizer.CharTokenizer(Version, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.analysis.cjk.CJKAnalyzer(Version, String...)
use CJKAnalyzer.CJKAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.standard.ClassicAnalyzer(Version, File)
Use ClassicAnalyzer.ClassicAnalyzer(Version, Reader) instead. |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, Set<?>, int, int, int, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, Set, int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[])
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[]) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[], boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], boolean) instead |
org.apache.lucene.analysis.compound.CompoundWordTokenFilterBase(TokenStream, String[], int, int, int, boolean)
use CompoundWordTokenFilterBase.CompoundWordTokenFilterBase(Version, TokenStream, String[], int, int, int, boolean) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, File)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, HashSet<?>)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.cz.CzechAnalyzer(Version, String...)
use CzechAnalyzer.CzechAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, Set<?>)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, Set<?>, int, int, int, boolean)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, Set, int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, String[])
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[]) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(TokenStream, String[], int, int, int, boolean)
use DictionaryCompoundWordTokenFilter.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[], int, int, int, boolean) instead |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[])
Use the constructors taking Set |
org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter(Version, TokenStream, String[], int, int, int, boolean)
Use the constructors taking Set |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, File)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, HashSet<?>)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchAnalyzer(Version, String...)
use DutchAnalyzer.DutchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.nl.DutchStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.nl.DutchStemFilter(TokenStream, Set<?>, Map<?, ?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream)
use ElisionFilter.ElisionFilter(Version, TokenStream) instead |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream, Set<?>)
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.fr.ElisionFilter(TokenStream, String[])
use ElisionFilter.ElisionFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.en.EnglishPossessiveFilter(TokenStream)
Use EnglishPossessiveFilter.EnglishPossessiveFilter(Version, TokenStream) instead. |
org.apache.lucene.document.Field(String, byte[], Field.Store)
Use instead |
org.apache.lucene.document.Field(String, byte[], int, int, Field.Store)
Use instead |
org.apache.lucene.queryParser.core.config.FieldConfig(CharSequence)
use FieldConfig.FieldConfig(String) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer(Version, File)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fr.FrenchAnalyzer(Version, String...)
use FrenchAnalyzer.FrenchAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fr.FrenchStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, File)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, Map<?, ?>)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanAnalyzer(Version, String...)
use GermanAnalyzer.GermanAnalyzer(Version, Set) |
org.apache.lucene.analysis.de.GermanStemFilter(TokenStream, Set<?>)
use KeywordAttribute with KeywordMarkerFilter instead. |
org.apache.lucene.analysis.el.GreekAnalyzer(Version, Map<?, ?>)
use GreekAnalyzer.GreekAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.el.GreekAnalyzer(Version, String...)
use GreekAnalyzer.GreekAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.el.GreekLowerCaseFilter(TokenStream)
Use GreekLowerCaseFilter.GreekLowerCaseFilter(Version, TokenStream) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, Set<?>)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, Set) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, Set<?>, int, int, int, boolean)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, Set, int, int, int, boolean) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, String[])
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[]) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(TokenStream, HyphenationTree, String[], int, int, int, boolean)
use HyphenationCompoundWordTokenFilter.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[], int, int, int, boolean) instead. |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[])
Use the constructors taking Set |
org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter(Version, TokenStream, HyphenationTree, String[], int, int, int, boolean)
Use the constructors taking Set |
org.apache.lucene.store.IndexInput()
please pass resourceDescription |
org.apache.lucene.search.IndexSearcher(Directory)
use IndexSearcher.IndexSearcher(IndexReader) instead. |
org.apache.lucene.search.IndexSearcher(Directory, boolean)
Use IndexSearcher.IndexSearcher(IndexReader) instead. |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexDeletionPolicy, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, boolean, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength, IndexCommit)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
org.apache.lucene.index.IndexWriter(Directory, Analyzer, IndexWriter.MaxFieldLength)
use IndexWriter.IndexWriter(Directory, IndexWriterConfig) instead |
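The IndexWriterConfig-based construction these constructors point to, as a hedged sketch (the Version constant, analyzer, and open mode are illustrative):
  import java.io.IOException;
  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.index.IndexWriterConfig;
  import org.apache.lucene.index.IndexWriterConfig.OpenMode;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.util.Version;

  IndexWriter openWriter(Directory dir) throws IOException {
    IndexWriterConfig conf = new IndexWriterConfig(
        Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36))
        .setOpenMode(OpenMode.CREATE_OR_APPEND);
    return new IndexWriter(dir, conf);
  }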
org.apache.lucene.analysis.LengthFilter(TokenStream, int, int)
Use LengthFilter.LengthFilter(boolean, TokenStream, int, int) instead. |
org.apache.lucene.analysis.LetterTokenizer(AttributeSource.AttributeFactory, Reader)
use LetterTokenizer.LetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LetterTokenizer(AttributeSource, Reader)
use LetterTokenizer.LetterTokenizer(Version, AttributeSource, Reader) instead.
This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LetterTokenizer(Reader)
use LetterTokenizer.LetterTokenizer(Version, Reader) instead. This
will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LowerCaseFilter(TokenStream)
Use LowerCaseFilter.LowerCaseFilter(Version, TokenStream) instead. |
org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource.AttributeFactory, Reader)
use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LowerCaseTokenizer(AttributeSource, Reader)
use LowerCaseTokenizer.LowerCaseTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.LowerCaseTokenizer(Reader)
use LowerCaseTokenizer.LowerCaseTokenizer(Version, Reader) instead. This will be
removed in Lucene 4.0. |
org.apache.lucene.search.similar.MoreLikeThisQuery(String, String[], Analyzer)
use MoreLikeThisQuery.MoreLikeThisQuery(String, String[], Analyzer, String) instead. |
org.apache.lucene.store.NoLockFactory()
This constructor was not intended to be public and should not be used.
It will be made private in Lucene 4.0 |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, File)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, Hashtable<?, ?>)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.fa.PersianAnalyzer(Version, String...)
use PersianAnalyzer.PersianAnalyzer(Version, Set) instead |
org.apache.lucene.index.PKIndexSplitter(Directory, Directory, Directory, Filter)
use PKIndexSplitter.PKIndexSplitter(Version, Directory, Directory, Directory, Filter) instead.
This constructor will be removed in Lucene 4.0. |
org.apache.lucene.index.PKIndexSplitter(Directory, Directory, Directory, Term)
use PKIndexSplitter.PKIndexSplitter(Version, Directory, Directory, Directory, Term)
instead. This constructor will be removed in Lucene 4.0. |
org.apache.lucene.analysis.query.QueryAutoStopWordAnalyzer(Version, Analyzer)
Stopwords should be calculated at instantiation using one of the other constructors |
org.apache.lucene.store.RAMInputStream(RAMFile)
|
org.apache.lucene.util.RamUsageEstimator()
Don't create instances of this class; instead use the static
RamUsageEstimator.sizeOf(Object) method, which also performs no intern checking. |
org.apache.lucene.util.RamUsageEstimator(boolean)
Don't create instances of this class; instead use the static
RamUsageEstimator.sizeOf(Object) method. |
org.apache.lucene.analysis.reverse.ReverseStringFilter(TokenStream)
use ReverseStringFilter.ReverseStringFilter(Version, TokenStream)
instead. This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.reverse.ReverseStringFilter(TokenStream, char)
use ReverseStringFilter.ReverseStringFilter(Version, TokenStream, char)
instead. This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.ru.RussianAnalyzer(Version, Map<?, ?>)
use RussianAnalyzer.RussianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ru.RussianAnalyzer(Version, String...)
use RussianAnalyzer.RussianAnalyzer(Version, Set) instead |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(AttributeSource.AttributeFactory, Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(AttributeSource, Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.ru.RussianLetterTokenizer(Reader)
use RussianLetterTokenizer.RussianLetterTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |
org.apache.lucene.search.Scorer(Similarity)
Use Scorer.Scorer(Weight) instead. |
org.apache.lucene.search.Scorer(Similarity, Weight)
Use Scorer.Scorer(Weight) instead. |
org.apache.lucene.analysis.SimpleAnalyzer()
use SimpleAnalyzer.SimpleAnalyzer(Version) instead |
org.apache.lucene.store.SimpleFSDirectory.SimpleFSIndexInput(File, int, int)
please pass resourceDesc |
org.apache.lucene.analysis.snowball.SnowballAnalyzer(Version, String, String[])
Use SnowballAnalyzer.SnowballAnalyzer(Version, String, Set) instead. |
org.apache.lucene.analysis.standard.StandardAnalyzer(Version, File)
Use StandardAnalyzer.StandardAnalyzer(Version, Reader) instead. |
org.apache.lucene.analysis.standard.StandardFilter(TokenStream)
Use StandardFilter.StandardFilter(Version, TokenStream) instead. |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set<?>)
use StopFilter.StopFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.StopFilter(boolean, TokenStream, Set<?>, boolean)
Use StopFilter.StopFilter(Version, TokenStream, Set) instead |
org.apache.lucene.analysis.StopFilter(Version, TokenStream, Set<?>, boolean)
Use StopFilter.StopFilter(Version, TokenStream, Set) instead |
org.apache.lucene.search.highlight.TextFragment(StringBuffer, int, int)
Use TextFragment.TextFragment(CharSequence, int, int) instead.
This constructor will be removed in Lucene 4.0 |
org.apache.lucene.analysis.th.ThaiWordFilter(TokenStream)
Use the ctor with matchVersion instead! |
org.apache.lucene.analysis.Tokenizer()
use Tokenizer.Tokenizer(Reader) instead. |
org.apache.lucene.analysis.Tokenizer(AttributeSource.AttributeFactory)
use Tokenizer.Tokenizer(AttributeSource.AttributeFactory, Reader) instead. |
org.apache.lucene.analysis.Tokenizer(AttributeSource)
use Tokenizer.Tokenizer(AttributeSource, Reader) instead. |
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(AttributeSource.AttributeFactory, Reader)
use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, AttributeSource.AttributeFactory, Reader) instead. |
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(AttributeSource, Reader)
use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, AttributeSource, Reader) instead. |
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(InputStream)
use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, Reader) instead. |
org.apache.lucene.analysis.standard.UAX29URLEmailTokenizer(Reader)
use UAX29URLEmailTokenizer.UAX29URLEmailTokenizer(Version, Reader) instead. |
org.apache.lucene.analysis.WhitespaceAnalyzer()
use WhitespaceAnalyzer.WhitespaceAnalyzer(Version) instead |
org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource.AttributeFactory, Reader)
use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource.AttributeFactory, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.WhitespaceTokenizer(AttributeSource, Reader)
use WhitespaceTokenizer.WhitespaceTokenizer(Version, AttributeSource, Reader)
instead. This will be removed in Lucene 4.0. |
org.apache.lucene.analysis.WhitespaceTokenizer(Reader)
use WhitespaceTokenizer.WhitespaceTokenizer(Version, Reader) instead. This will
be removed in Lucene 4.0. |