java.lang.Object
  org.apache.lucene.index.IndexWriter

public class IndexWriter
extends Object
implements Closeable, TwoPhaseCommit
An IndexWriter
creates and maintains an index.
The create
argument to the constructor
determines
whether a new index is created, or whether an existing index is
opened. Note that you can open an index with create=true
even while readers are using the index. The old readers will
continue to search the "point in time" snapshot they had opened,
and won't see the newly created index until they re-open. There are
also constructors
with no create
argument which will create a new index
if there is not already an index at the provided path and otherwise
open the existing index.
In either case, documents are added with addDocument
and removed with deleteDocuments(Term)
or deleteDocuments(Query)
. A document can be updated with updateDocument
(which just deletes
and then adds the entire document). When finished adding, deleting
and updating documents, close
should be called.
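For example, a minimal indexing session using the non-deprecated IndexWriter(Directory, IndexWriterConfig) constructor might look like the following sketch (the Version constant, index path and field names are illustrative, assuming a Lucene 3.5-era API):

```java
import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class IndexWriterBasics {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File("/tmp/myindex")); // illustrative path
    IndexWriterConfig conf =
        new IndexWriterConfig(Version.LUCENE_35, new StandardAnalyzer(Version.LUCENE_35));
    IndexWriter writer = new IndexWriter(dir, conf);

    // Add a document carrying a unique "id" term so it can be updated/deleted later.
    Document doc = new Document();
    doc.add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
    doc.add(new Field("body", "hello world", Field.Store.YES, Field.Index.ANALYZED));
    writer.addDocument(doc);

    // updateDocument just deletes by term and then adds the entire document.
    writer.updateDocument(new Term("id", "42"), doc);

    // Delete by term (deleteDocuments(Query) works analogously).
    writer.deleteDocuments(new Term("id", "42"));

    // When finished adding, deleting and updating, close the writer.
    writer.close();
  }
}
```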
These changes are buffered in memory and periodically
flushed to the Directory
(during the above method
calls). A flush is triggered when there are enough
buffered deletes (see setMaxBufferedDeleteTerms(int)
)
or enough added documents since the last flush, whichever
is sooner. For the added documents, flushing is triggered
either by RAM usage of the documents (see setRAMBufferSizeMB(double)
) or the number of added documents.
The default is to flush when RAM usage hits 16 MB. For
best indexing speed you should flush by RAM usage with a
large RAM buffer. Note that flushing just moves the
internal buffered state in IndexWriter into the index, but
these changes are not visible to IndexReader until either
commit()
or close()
is called. A flush may
also trigger one or more segment merges which by default
run with a background thread so as not to block the
addDocument calls (see below
for changing the MergeScheduler
).
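As a sketch of tuning this, the equivalent non-deprecated knobs live on IndexWriterConfig (the 64 MB figure is illustrative; dir and analyzer come from surrounding code):

```java
// Flush by RAM usage with a larger buffer, and disable the doc-count trigger.
IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_35, analyzer);
conf.setRAMBufferSizeMB(64.0);
conf.setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH);
IndexWriter writer = new IndexWriter(dir, conf);
```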
Opening an IndexWriter
creates a lock file for the directory in use. Trying to open
another IndexWriter
on the same directory will lead to a
LockObtainFailedException
. The LockObtainFailedException
is also thrown if an IndexReader on the same directory is used to delete documents
from the index.
Expert: IndexWriter
allows an optional
IndexDeletionPolicy
implementation to be
specified. You can use this to control when prior commits
are deleted from the index. The default policy is KeepOnlyLastCommitDeletionPolicy
which removes all prior
commits as soon as a new commit is done (this matches
behavior before 2.2). Creating your own policy can allow
you to explicitly keep previous "point in time" commits
alive in the index for some time, to allow readers to
refresh to the new commit without having the old commit
deleted out from under them. This is necessary on
filesystems like NFS that do not support "delete on last
close" semantics, which Lucene's "point in time" search
normally relies on.
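A sketch of wiring in a policy, assuming the 3.x IndexWriterConfig.setIndexDeletionPolicy setter; SnapshotDeletionPolicy is a stock wrapper that additionally protects commits while a snapshot is held:

```java
// Keep the last commit, but allow commits to be pinned while snapshotted
// (useful for hot backups and for NFS-style "point in time" readers).
IndexDeletionPolicy policy =
    new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());
IndexWriter writer = new IndexWriter(dir,
    new IndexWriterConfig(Version.LUCENE_35, analyzer).setIndexDeletionPolicy(policy));
```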
Expert:
IndexWriter
allows you to separately change
the MergePolicy
and the MergeScheduler
.
The MergePolicy
is invoked whenever there are
changes to the segments in the index. Its role is to
select which merges to do, if any, and return a MergePolicy.MergeSpecification
describing the merges.
The default is LogByteSizeMergePolicy
. Then, the MergeScheduler
is invoked with the requested merges and
it decides when and how to run the merges. The default is
ConcurrentMergeScheduler
.
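A configuration sketch (the merge factor value is illustrative):

```java
// Choose the merge policy and scheduler explicitly via IndexWriterConfig.
LogByteSizeMergePolicy mergePolicy = new LogByteSizeMergePolicy();
mergePolicy.setMergeFactor(20); // merge more segments at once; illustrative value
IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_35, analyzer)
    .setMergePolicy(mergePolicy)
    .setMergeScheduler(new ConcurrentMergeScheduler()); // the default scheduler
IndexWriter writer = new IndexWriter(dir, conf);
```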
NOTE: if you hit an
OutOfMemoryError then IndexWriter will quietly record this
fact and block all future segment commits. This is a
defensive measure in case any internal state (buffered
documents and deletions) were corrupted. Any subsequent
calls to commit()
will throw an
IllegalStateException. The only course of action is to
call close()
, which internally will call rollback()
, to undo any changes to the index since the
last commit. You can also just call rollback()
directly.
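The recovery pattern this note prescribes, as a sketch:

```java
try {
  writer.addDocument(doc);
  writer.commit();
} catch (OutOfMemoryError oom) {
  // close() internally calls rollback(), undoing changes since the last commit;
  // any later commit() on this writer would throw IllegalStateException.
  writer.close();
  throw oom;
}
```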
NOTE: IndexWriter
instances are completely thread
safe, meaning multiple threads can call any of its
methods, concurrently. If your application requires
external synchronization, you should not
synchronize on the IndexWriter
instance as
this may cause deadlock; use your own (non-Lucene) objects
instead.
NOTE: If you call
Thread.interrupt()
on a thread that's within
IndexWriter, IndexWriter will try to catch this (eg, if
it's in a wait() or Thread.sleep()), and will then throw
the unchecked exception ThreadInterruptedException
and clear the interrupt status on the thread.
Nested Class Summary | |
---|---|
static class |
IndexWriter.IndexReaderWarmer
If getReader() has been called (ie, this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits. |
static class |
IndexWriter.MaxFieldLength
Deprecated. use LimitTokenCountAnalyzer instead. |
Field Summary | |
---|---|
static int |
DEFAULT_MAX_BUFFERED_DELETE_TERMS
Deprecated. use IndexWriterConfig.DEFAULT_MAX_BUFFERED_DELETE_TERMS instead |
static int |
DEFAULT_MAX_BUFFERED_DOCS
Deprecated. use IndexWriterConfig.DEFAULT_MAX_BUFFERED_DOCS instead. |
static int |
DEFAULT_MAX_FIELD_LENGTH
Deprecated. see IndexWriterConfig |
static double |
DEFAULT_RAM_BUFFER_SIZE_MB
Deprecated. use IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB instead. |
static int |
DEFAULT_TERM_INDEX_INTERVAL
Deprecated. use IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL instead. |
static int |
DISABLE_AUTO_FLUSH
Deprecated. use IndexWriterConfig.DISABLE_AUTO_FLUSH instead |
static int |
MAX_TERM_LENGTH
Absolute hard maximum length for a term. |
static String |
WRITE_LOCK_NAME
Name of the write lock in the index. |
static long |
WRITE_LOCK_TIMEOUT
Deprecated. use IndexWriterConfig.WRITE_LOCK_TIMEOUT instead |
Constructor Summary | |
---|---|
IndexWriter(Directory d,
Analyzer a,
boolean create,
IndexDeletionPolicy deletionPolicy,
IndexWriter.MaxFieldLength mfl)
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead |
|
IndexWriter(Directory d,
Analyzer a,
boolean create,
IndexWriter.MaxFieldLength mfl)
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead |
|
IndexWriter(Directory d,
Analyzer a,
IndexDeletionPolicy deletionPolicy,
IndexWriter.MaxFieldLength mfl)
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead |
|
IndexWriter(Directory d,
Analyzer a,
IndexDeletionPolicy deletionPolicy,
IndexWriter.MaxFieldLength mfl,
IndexCommit commit)
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead |
|
IndexWriter(Directory d,
Analyzer a,
IndexWriter.MaxFieldLength mfl)
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead |
|
IndexWriter(Directory d,
IndexWriterConfig conf)
Constructs a new IndexWriter per the settings given in conf . |
Method Summary | |
---|---|
void |
addDocument(Document doc)
Adds a document to this index. |
void |
addDocument(Document doc,
Analyzer analyzer)
Adds a document to this index, using the provided analyzer instead of the value of getAnalyzer() . |
void |
addDocuments(Collection<Document> docs)
Atomically adds a block of documents with sequentially assigned document IDs, such that an external reader will see all or none of the documents. |
void |
addDocuments(Collection<Document> docs,
Analyzer analyzer)
Atomically adds a block of documents, analyzed using the provided analyzer, with sequentially assigned document IDs, such that an external reader will see all or none of the documents. |
void |
addIndexes(Directory... dirs)
Adds all segments from an array of indexes into this index. |
void |
addIndexes(IndexReader... readers)
Merges the provided indexes into this index. |
void |
addIndexesNoOptimize(Directory... dirs)
Deprecated. use addIndexes(Directory...) instead |
void |
close()
Commits all changes to an index and closes all associated files. |
void |
close(boolean waitForMerges)
Closes the index with or without waiting for currently running merges to finish. |
void |
commit()
Commits all pending changes (added & deleted documents, segment merges, added indexes, etc.) to the index, and syncs all referenced index files, such that a reader will see the changes and the index updates will survive an OS or machine crash or power loss. |
void |
commit(Map<String,String> commitUserData)
Commits all changes to the index, specifying a commitUserData Map (String -> String). |
void |
deleteAll()
Delete all documents in the index. |
void |
deleteDocuments(Query... queries)
Deletes the document(s) matching any of the provided queries. |
void |
deleteDocuments(Query query)
Deletes the document(s) matching the provided query. |
void |
deleteDocuments(Term... terms)
Deletes the document(s) containing any of the terms. |
void |
deleteDocuments(Term term)
Deletes the document(s) containing term . |
void |
deleteUnusedFiles()
Expert: remove any index files that are no longer used. |
protected void |
doAfterFlush()
A hook for extending classes to execute operations after pending added and deleted documents have been flushed to the Directory but before the change is committed (new segments_N file written). |
protected void |
doBeforeFlush()
A hook for extending classes to execute operations before pending added and deleted documents are flushed to the Directory. |
protected void |
ensureOpen()
|
protected void |
ensureOpen(boolean includePendingClose)
Used internally to throw an AlreadyClosedException if this IndexWriter has been
closed. |
void |
expungeDeletes()
Deprecated. |
void |
expungeDeletes(boolean doWait)
Deprecated. |
protected void |
flush(boolean triggerMerge,
boolean applyAllDeletes)
Flush all in-memory buffered updates (adds and deletes) to the Directory. |
protected void |
flush(boolean triggerMerge,
boolean flushDocStores,
boolean flushDeletes)
NOTE: flushDocStores is ignored now (hardwired to true); this method is only here for backwards compatibility |
void |
forceMerge(int maxNumSegments)
Forces merge policy to merge segments until there's <= maxNumSegments. |
void |
forceMerge(int maxNumSegments,
boolean doWait)
Just like forceMerge(int) , except you can
specify whether the call should block until
all merging completes. |
void |
forceMergeDeletes()
Forces merging of all segments that have deleted documents. |
void |
forceMergeDeletes(boolean doWait)
Just like forceMergeDeletes() , except you can
specify whether the call should block until the
operation completes. |
Analyzer |
getAnalyzer()
Returns the analyzer used by this index. |
IndexWriterConfig |
getConfig()
Returns the private IndexWriterConfig , cloned
from the IndexWriterConfig passed to
IndexWriter(Directory, IndexWriterConfig) . |
static PrintStream |
getDefaultInfoStream()
Returns the current default infoStream for newly instantiated IndexWriters. |
static long |
getDefaultWriteLockTimeout()
Deprecated. use IndexWriterConfig.getDefaultWriteLockTimeout() instead |
Directory |
getDirectory()
Returns the Directory used by this index. |
PrintStream |
getInfoStream()
Returns the current infoStream in use by this writer. |
int |
getMaxBufferedDeleteTerms()
Deprecated. use IndexWriterConfig.getMaxBufferedDeleteTerms() instead |
int |
getMaxBufferedDocs()
Deprecated. use IndexWriterConfig.getMaxBufferedDocs() instead. |
int |
getMaxFieldLength()
Deprecated. use LimitTokenCountAnalyzer to limit number of tokens. |
int |
getMaxMergeDocs()
Deprecated. use LogMergePolicy.getMaxMergeDocs() directly. |
IndexWriter.IndexReaderWarmer |
getMergedSegmentWarmer()
Deprecated. use IndexWriterConfig.getMergedSegmentWarmer() instead. |
int |
getMergeFactor()
Deprecated. use LogMergePolicy.getMergeFactor() directly. |
MergePolicy |
getMergePolicy()
Deprecated. use IndexWriterConfig.getMergePolicy() instead |
MergeScheduler |
getMergeScheduler()
Deprecated. use IndexWriterConfig.getMergeScheduler() instead |
Collection<SegmentInfo> |
getMergingSegments()
Expert: to be used by a MergePolicy to avoid
selecting merges for segments already being merged. |
MergePolicy.OneMerge |
getNextMerge()
Expert: the MergeScheduler calls this method
to retrieve the next merge requested by the
MergePolicy |
PayloadProcessorProvider |
getPayloadProcessorProvider()
Returns the PayloadProcessorProvider that is used during segment
merges to process payloads. |
double |
getRAMBufferSizeMB()
Deprecated. use IndexWriterConfig.getRAMBufferSizeMB() instead. |
IndexReader |
getReader()
Deprecated. Please use IndexReader.open(IndexWriter,boolean) instead. |
IndexReader |
getReader(int termInfosIndexDivisor)
Deprecated. Please use IndexReader.open(IndexWriter,boolean) instead. Furthermore,
this method cannot guarantee the reader (and its
sub-readers) will be opened with the
termInfosIndexDivisor setting because some of them may
have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int) . You
should set the requested termInfosIndexDivisor through
IndexWriterConfig.setReaderTermsIndexDivisor(int) and use
getReader() . |
int |
getReaderTermsIndexDivisor()
Deprecated. use IndexWriterConfig.getReaderTermsIndexDivisor() instead. |
Similarity |
getSimilarity()
Deprecated. use IndexWriterConfig.getSimilarity() instead |
int |
getTermIndexInterval()
Deprecated. use IndexWriterConfig.getTermIndexInterval() |
boolean |
getUseCompoundFile()
Deprecated. use LogMergePolicy.getUseCompoundFile() |
long |
getWriteLockTimeout()
Deprecated. use IndexWriterConfig.getWriteLockTimeout() |
boolean |
hasDeletions()
|
static boolean |
isLocked(Directory directory)
Returns true iff the index in the named directory is
currently locked. |
int |
maxDoc()
Returns total number of docs in this index, including docs not yet flushed (still in the RAM buffer), not counting deletions. |
void |
maybeMerge()
Expert: asks the mergePolicy whether any merges are necessary now and if so, runs the requested merges and then iterate (test again if merges are needed) until no more merges are returned by the mergePolicy. |
void |
merge(MergePolicy.OneMerge merge)
Merges the indicated segments, replacing them in the stack with a single segment. |
void |
message(String message)
Prints a message to the infoStream (if non-null), prefixed with the identifying information for this writer and the thread that's calling it. |
int |
numDeletedDocs(SegmentInfo info)
Obtain the number of deleted docs for a pooled reader. |
int |
numDocs()
Returns total number of docs in this index, including docs not yet flushed (still in the RAM buffer), and including deletions. |
int |
numRamDocs()
Expert: Return the number of documents currently buffered in RAM. |
void |
optimize()
Deprecated. |
void |
optimize(boolean doWait)
Deprecated. |
void |
optimize(int maxNumSegments)
Deprecated. |
void |
prepareCommit()
Expert: prepare for commit. |
void |
prepareCommit(Map<String,String> commitUserData)
Expert: prepare for commit, specifying commitUserData Map (String -> String). |
long |
ramSizeInBytes()
Expert: Return the total size of all index files currently cached in memory. |
void |
rollback()
Close the IndexWriter without committing
any changes that have occurred since the last commit
(or since it was opened, if commit hasn't been called). |
String |
segString()
|
String |
segString(Iterable<SegmentInfo> infos)
|
String |
segString(SegmentInfo info)
|
static void |
setDefaultInfoStream(PrintStream infoStream)
If non-null, this will be the default infoStream used by a newly instantiated IndexWriter. |
static void |
setDefaultWriteLockTimeout(long writeLockTimeout)
Deprecated. use IndexWriterConfig.setDefaultWriteLockTimeout(long) instead |
void |
setInfoStream(PrintStream infoStream)
If non-null, information about merges, deletes and a message when maxFieldLength is reached will be printed to this. |
void |
setMaxBufferedDeleteTerms(int maxBufferedDeleteTerms)
Deprecated. use IndexWriterConfig.setMaxBufferedDeleteTerms(int) instead. |
void |
setMaxBufferedDocs(int maxBufferedDocs)
Deprecated. use IndexWriterConfig.setMaxBufferedDocs(int) instead. |
void |
setMaxFieldLength(int maxFieldLength)
Deprecated. use LimitTokenCountAnalyzer instead. Note that the
behavior slightly changed - the analyzer limits the number of
tokens per token stream created, while this setting limits the
total number of tokens to index. This only matters if you index
many multi-valued fields though. |
void |
setMaxMergeDocs(int maxMergeDocs)
Deprecated. use LogMergePolicy.setMaxMergeDocs(int) directly. |
void |
setMergedSegmentWarmer(IndexWriter.IndexReaderWarmer warmer)
Deprecated. use IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer)
instead. |
void |
setMergeFactor(int mergeFactor)
Deprecated. use LogMergePolicy.setMergeFactor(int) directly. |
void |
setMergePolicy(MergePolicy mp)
Deprecated. use IndexWriterConfig.setMergePolicy(MergePolicy) instead. |
void |
setMergeScheduler(MergeScheduler mergeScheduler)
Deprecated. use IndexWriterConfig.setMergeScheduler(MergeScheduler) instead |
void |
setPayloadProcessorProvider(PayloadProcessorProvider pcp)
Sets the PayloadProcessorProvider to use when merging payloads. |
void |
setRAMBufferSizeMB(double mb)
Deprecated. use IndexWriterConfig.setRAMBufferSizeMB(double) instead. |
void |
setReaderTermsIndexDivisor(int divisor)
Deprecated. use IndexWriterConfig.setReaderTermsIndexDivisor(int) instead. |
void |
setSimilarity(Similarity similarity)
Deprecated. use IndexWriterConfig.setSimilarity(Similarity) instead |
void |
setTermIndexInterval(int interval)
Deprecated. use IndexWriterConfig.setTermIndexInterval(int) |
void |
setUseCompoundFile(boolean value)
Deprecated. use LogMergePolicy.setUseCompoundFile(boolean) . |
void |
setWriteLockTimeout(long writeLockTimeout)
Deprecated. use IndexWriterConfig.setWriteLockTimeout(long) instead |
static void |
unlock(Directory directory)
Forcibly unlocks the index in the named directory. |
void |
updateDocument(Term term,
Document doc)
Updates a document by first deleting the document(s) containing term and then adding the new
document. |
void |
updateDocument(Term term,
Document doc,
Analyzer analyzer)
Updates a document by first deleting the document(s) containing term and then adding the new
document. |
void |
updateDocuments(Term delTerm,
Collection<Document> docs)
Atomically deletes documents matching the provided delTerm and adds a block of documents with sequentially assigned document IDs, such that an external reader will see all or none of the documents. |
void |
updateDocuments(Term delTerm,
Collection<Document> docs,
Analyzer analyzer)
Atomically deletes documents matching the provided delTerm and adds a block of documents, analyzed using the provided analyzer, with sequentially assigned document IDs, such that an external reader will see all or none of the documents. |
boolean |
verbose()
Returns true if verbose output is enabled (i.e., infoStream != null). |
void |
waitForMerges()
Wait for any currently outstanding merges to finish. |
Methods inherited from class java.lang.Object |
---|
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
Field Detail |
---|
@Deprecated public static long WRITE_LOCK_TIMEOUT
Deprecated. use IndexWriterConfig.WRITE_LOCK_TIMEOUT instead
See Also: setDefaultWriteLockTimeout(long)

public static final String WRITE_LOCK_NAME
Name of the write lock in the index.

@Deprecated public static final int DISABLE_AUTO_FLUSH
Deprecated. use IndexWriterConfig.DISABLE_AUTO_FLUSH instead

@Deprecated public static final int DEFAULT_MAX_BUFFERED_DOCS
Deprecated. use IndexWriterConfig.DEFAULT_MAX_BUFFERED_DOCS instead.
See Also: setMaxBufferedDocs(int)

@Deprecated public static final double DEFAULT_RAM_BUFFER_SIZE_MB
Deprecated. use IndexWriterConfig.DEFAULT_RAM_BUFFER_SIZE_MB instead.
See Also: setRAMBufferSizeMB(double)

@Deprecated public static final int DEFAULT_MAX_BUFFERED_DELETE_TERMS
Deprecated. use IndexWriterConfig.DEFAULT_MAX_BUFFERED_DELETE_TERMS instead
See Also: setMaxBufferedDeleteTerms(int)

@Deprecated public static final int DEFAULT_MAX_FIELD_LENGTH
Deprecated. see IndexWriterConfig
See Also: setMaxFieldLength(int)

@Deprecated public static final int DEFAULT_TERM_INDEX_INTERVAL
Deprecated. use IndexWriterConfig.DEFAULT_TERM_INDEX_INTERVAL instead.
See Also: setTermIndexInterval(int)

public static final int MAX_TERM_LENGTH
Absolute hard maximum length for a term. If a term arrives from the analyzer longer than this length, it is skipped and a message is printed to infoStream, if set (see setInfoStream(java.io.PrintStream)).
Constructor Detail |
---|
@Deprecated public IndexWriter(Directory d, Analyzer a, boolean create, IndexWriter.MaxFieldLength mfl) throws CorruptIndexException, LockObtainFailedException, IOException
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead
Constructs an IndexWriter for the index in d. Text will be analyzed with a. If create is true, then a new, empty index will be created in d, replacing the index already there, if any.
Parameters:
d - the index directory
a - the analyzer to use
create - true to create the index or overwrite the existing one; false to append to the existing index
mfl - Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified via the MaxFieldLength constructor.
Throws:
CorruptIndexException - if the index is corrupt
LockObtainFailedException - if another writer has this index open (write.lock could not be obtained)
IOException - if the directory cannot be read/written to, or if it does not exist and create is false or if there is any other low-level IO error

@Deprecated public IndexWriter(Directory d, Analyzer a, IndexWriter.MaxFieldLength mfl) throws CorruptIndexException, LockObtainFailedException, IOException
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead
Constructs an IndexWriter for the index in d, first creating it if it does not already exist. Text will be analyzed with a.
Parameters:
d - the index directory
a - the analyzer to use
mfl - Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified via the MaxFieldLength constructor.
Throws:
CorruptIndexException - if the index is corrupt
LockObtainFailedException - if another writer has this index open (write.lock could not be obtained)
IOException - if the directory cannot be read/written to or if there is any other low-level IO error

@Deprecated public IndexWriter(Directory d, Analyzer a, IndexDeletionPolicy deletionPolicy, IndexWriter.MaxFieldLength mfl) throws CorruptIndexException, LockObtainFailedException, IOException
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead
Expert: constructs an IndexWriter with a custom IndexDeletionPolicy, for the index in d, first creating it if it does not already exist. Text will be analyzed with a.
Parameters:
d - the index directory
a - the analyzer to use
deletionPolicy - see above
mfl - whether or not to limit field lengths
Throws:
CorruptIndexException - if the index is corrupt
LockObtainFailedException - if another writer has this index open (write.lock could not be obtained)
IOException - if the directory cannot be read/written to or if there is any other low-level IO error

@Deprecated public IndexWriter(Directory d, Analyzer a, boolean create, IndexDeletionPolicy deletionPolicy, IndexWriter.MaxFieldLength mfl) throws CorruptIndexException, LockObtainFailedException, IOException
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead
Expert: constructs an IndexWriter with a custom IndexDeletionPolicy, for the index in d. Text will be analyzed with a. If create is true, then a new, empty index will be created in d, replacing the index already there, if any.
Parameters:
d - the index directory
a - the analyzer to use
create - true to create the index or overwrite the existing one; false to append to the existing index
deletionPolicy - see above
mfl - IndexWriter.MaxFieldLength, whether or not to limit field lengths. Value is in number of terms/tokens
Throws:
CorruptIndexException - if the index is corrupt
LockObtainFailedException - if another writer has this index open (write.lock could not be obtained)
IOException - if the directory cannot be read/written to, or if it does not exist and create is false or if there is any other low-level IO error

@Deprecated public IndexWriter(Directory d, Analyzer a, IndexDeletionPolicy deletionPolicy, IndexWriter.MaxFieldLength mfl, IndexCommit commit) throws CorruptIndexException, LockObtainFailedException, IOException
Deprecated. use IndexWriter(Directory, IndexWriterConfig) instead
Expert: constructs an IndexWriter on a specific commit point, with a custom IndexDeletionPolicy, for the index in d. Text will be analyzed with a.
This is only meaningful if you've used an IndexDeletionPolicy in the past that keeps more than just the last commit.
This operation is similar to rollback(), except that method can only rollback what's been done with the current instance of IndexWriter since its last commit, whereas this method can rollback to an arbitrary commit point from the past, assuming the IndexDeletionPolicy has preserved past commits.
Parameters:
d - the index directory
a - the analyzer to use
deletionPolicy - see above
mfl - whether or not to limit field lengths, value is in number of terms/tokens. See IndexWriter.MaxFieldLength.
commit - which commit to open
Throws:
CorruptIndexException - if the index is corrupt
LockObtainFailedException - if another writer has this index open (write.lock could not be obtained)
IOException - if the directory cannot be read/written to, or if it does not exist and create is false or if there is any other low-level IO error

public IndexWriter(Directory d, IndexWriterConfig conf) throws CorruptIndexException, LockObtainFailedException, IOException
Constructs a new IndexWriter per the settings given in conf.
Note that the passed in IndexWriterConfig is privately cloned; if you need to make subsequent "live" changes to the configuration use getConfig().
Parameters:
d - the index directory. The index is either created or appended according to conf.getOpenMode().
conf - the configuration settings according to which IndexWriter should be initialized.
Throws:
CorruptIndexException - if the index is corrupt
LockObtainFailedException - if another writer has this index open (write.lock could not be obtained)
IOException - if the directory cannot be read/written to, or if it does not exist and conf.getOpenMode() is OpenMode.APPEND or if there is any other low-level IO error

Method Detail |
---|
@Deprecated public IndexReader getReader() throws IOException
Deprecated. Please use IndexReader.open(IndexWriter,boolean) instead.
Expert: returns a readonly reader, covering all committed as well as un-committed changes to the index. This provides a "near real-time" reader, in that changes made during an IndexWriter session can be quickly made available for searching without closing the writer nor calling commit().
Note that this is functionally equivalent to calling flush() and then using IndexReader.open(org.apache.lucene.store.Directory) to open a new reader. But the turnaround time of this method should be faster since it avoids the potentially costly commit().
You must close the IndexReader
returned by
this method once you are done using it.
It's near real-time because there is no hard guarantee on how quickly you can get a new reader after making changes with IndexWriter. You'll have to experiment in your situation to determine if it's fast enough. As this is a new and experimental feature, please report back on your findings so we can learn, improve and iterate.
The resulting reader supports IndexReader.reopen()
, but that call will simply forward
back to this method (though this may change in the
future).
The very first time this method is called, this writer instance will make every effort to pool the readers that it opens for doing merges, applying deletes, etc. This means additional resources (RAM, file descriptors, CPU time) will be consumed.
For lower latency on reopening a reader, you should
call setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer)
to
pre-warm a newly merged segment before it's committed
to the index. This is important for minimizing
index-to-search delay after a large merge.
If an addIndexes* call is running in another thread, then this reader will only search those segments from the foreign index that have been successfully copied over, so far.
NOTE: Once the writer is closed, any
outstanding readers may continue to be used. However,
if you attempt to reopen any of those readers, you'll
hit an AlreadyClosedException
.
IOException
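A near-real-time sketch using the replacement API named in the deprecation note (IndexSearcher is assumed from org.apache.lucene.search):

```java
writer.addDocument(doc);                             // not yet committed
IndexReader reader = IndexReader.open(writer, true); // true = apply buffered deletes
try {
  IndexSearcher searcher = new IndexSearcher(reader);
  // ... run searches that see the uncommitted addDocument above ...
  searcher.close();
} finally {
  reader.close(); // you must close readers obtained this way
}
```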
@Deprecated public IndexReader getReader(int termInfosIndexDivisor) throws IOException
IndexReader.open(IndexWriter,boolean)
instead. Furthermore,
this method cannot guarantee the reader (and its
sub-readers) will be opened with the
termInfosIndexDivisor setting because some of them may
have already been opened according to IndexWriterConfig.setReaderTermsIndexDivisor(int)
. You
should set the requested termInfosIndexDivisor through
IndexWriterConfig.setReaderTermsIndexDivisor(int)
and use
getReader()
.
Expert: like getReader(), except you can specify which termInfosIndexDivisor should be used for any newly opened readers.
termInfosIndexDivisor
- Subsamples which indexed
terms are loaded into RAM. This has the same effect as setTermIndexInterval(int)
except that setting
must be done at indexing time while this setting can be
set per reader. When set to N, then one in every
N*termIndexInterval terms in the index is loaded into
memory. By setting this to a value > 1 you can reduce
memory usage, at the expense of higher latency when
loading a TermInfo. The default value is 1. Set this
to -1 to skip loading the terms index entirely.
IOException
public int numDeletedDocs(SegmentInfo info) throws IOException
IOException
protected final void ensureOpen(boolean includePendingClose) throws AlreadyClosedException
AlreadyClosedException
if this IndexWriter has been
closed.
AlreadyClosedException
- if this IndexWriter is closed

protected final void ensureOpen() throws AlreadyClosedException
AlreadyClosedException
public void message(String message)
@Deprecated public boolean getUseCompoundFile()
LogMergePolicy.getUseCompoundFile()
Get the current setting of whether newly flushed segments will use the compound file format. Note that this just returns the value previously set with setUseCompoundFile(boolean), or the default value (true). You cannot use this to query the status of previously flushed segments.
Note that this method is a convenience method: it
just calls mergePolicy.getUseCompoundFile as long as
mergePolicy is an instance of LogMergePolicy
.
Otherwise an IllegalArgumentException is thrown.
setUseCompoundFile(boolean)
@Deprecated public void setUseCompoundFile(boolean value)
LogMergePolicy.setUseCompoundFile(boolean)
.
Setting to turn on usage of a compound file. When on, multiple files for each segment are merged into a single file when a new segment is flushed.
Note that this method is a convenience method: it just calls
mergePolicy.setUseCompoundFile as long as mergePolicy is an instance of
LogMergePolicy
. Otherwise an IllegalArgumentException is thrown.
@Deprecated public void setSimilarity(Similarity similarity)
IndexWriterConfig.setSimilarity(Similarity)
instead
Similarity.setDefault(Similarity)
@Deprecated public Similarity getSimilarity()
IndexWriterConfig.getSimilarity()
instead
This defaults to the current value of Similarity.getDefault()
.
@Deprecated public void setTermIndexInterval(int interval)
Deprecated. use IndexWriterConfig.setTermIndexInterval(int)
Expert: Set the interval between indexed terms. Large values cause less memory to be used by IndexReader, but slow random-access to terms. Small values cause more memory to be used by an IndexReader, and speed random-access to terms. In particular, numUniqueTerms/interval
terms are read into
memory by an IndexReader, and, on average, interval/2
terms
must be scanned for each random term access.
DEFAULT_TERM_INDEX_INTERVAL
@Deprecated public int getTermIndexInterval()
IndexWriterConfig.getTermIndexInterval()
setTermIndexInterval(int)
public IndexWriterConfig getConfig()
IndexWriterConfig
, cloned
from the IndexWriterConfig
passed to
IndexWriter(Directory, IndexWriterConfig)
.
NOTE: some settings may be changed on the
returned IndexWriterConfig
, and will take
effect in the current IndexWriter instance. See the
javadocs for the specific setters in IndexWriterConfig
for details.
@Deprecated public void setMergePolicy(MergePolicy mp)
IndexWriterConfig.setMergePolicy(MergePolicy)
instead.
@Deprecated public MergePolicy getMergePolicy()
IndexWriterConfig.getMergePolicy()
instead
setMergePolicy(org.apache.lucene.index.MergePolicy)
@Deprecated public void setMergeScheduler(MergeScheduler mergeScheduler) throws CorruptIndexException, IOException
IndexWriterConfig.setMergeScheduler(MergeScheduler)
instead
CorruptIndexException
IOException
@Deprecated public MergeScheduler getMergeScheduler()
IndexWriterConfig.getMergeScheduler()
instead
setMergeScheduler(MergeScheduler)
@Deprecated public void setMaxMergeDocs(int maxMergeDocs)
LogMergePolicy.setMaxMergeDocs(int)
directly.
Determines the largest segment (measured by document count) that may be merged with other segments. Small values (e.g., less than 10,000) are best for interactive indexing, as this limits the length of pauses while indexing to a few seconds. Larger values are best for batched indexing and speedier searches.
The default value is Integer.MAX_VALUE
.
Note that this method is a convenience method: it
just calls mergePolicy.setMaxMergeDocs as long as
mergePolicy is an instance of LogMergePolicy
.
Otherwise an IllegalArgumentException is thrown.
The default merge policy (LogByteSizeMergePolicy
) also allows you to set this
limit by net size (in MB) of the segment, using LogByteSizeMergePolicy.setMaxMergeMB(double)
.
@Deprecated public int getMaxMergeDocs()
LogMergePolicy.getMaxMergeDocs()
directly.
Returns the largest segment (measured by document count) that may be merged with other segments.
Note that this method is a convenience method: it
just calls mergePolicy.getMaxMergeDocs as long as
mergePolicy is an instance of LogMergePolicy
.
Otherwise an IllegalArgumentException is thrown.
setMaxMergeDocs(int)
@Deprecated public void setMaxFieldLength(int maxFieldLength)
LimitTokenCountAnalyzer
instead. Note that the
behavior slightly changed - the analyzer limits the number of
tokens per token stream created, while this setting limits the
total number of tokens to index. This only matters if you index
many multi-valued fields though.
The maximum number of terms that will be indexed for a single field in a document. By default, no more than DEFAULT_MAX_FIELD_LENGTH
terms will be
indexed for a field.
@Deprecated public int getMaxFieldLength()
LimitTokenCountAnalyzer
to limit number of tokens.
setMaxFieldLength(int)
@Deprecated public void setReaderTermsIndexDivisor(int divisor)
IndexWriterConfig.setReaderTermsIndexDivisor(int)
instead.
@Deprecated public int getReaderTermsIndexDivisor()
IndexWriterConfig.getReaderTermsIndexDivisor()
instead.
@Deprecated public void setMaxBufferedDocs(int maxBufferedDocs)
IndexWriterConfig.setMaxBufferedDocs(int)
instead.
When this is set, the writer will flush every
maxBufferedDocs added documents. Pass in DISABLE_AUTO_FLUSH
to prevent triggering a flush due
to number of buffered documents. Note that if flushing
by RAM usage is also enabled, then the flush will be
triggered by whichever comes first.
Disabled by default (writer flushes by RAM usage).
IllegalArgumentException
- if maxBufferedDocs is
enabled but smaller than 2, or it disables maxBufferedDocs
when ramBufferSize is already disabled
See Also: setRAMBufferSizeMB(double)
@Deprecated public int getMaxBufferedDocs()
IndexWriterConfig.getMaxBufferedDocs()
instead.
setMaxBufferedDocs(int)
@Deprecated public void setRAMBufferSizeMB(double mb)
IndexWriterConfig.setRAMBufferSizeMB(double)
instead.
When this is set, the writer will flush whenever
buffered documents and deletions use this much RAM.
Pass in DISABLE_AUTO_FLUSH
to prevent
triggering a flush due to RAM usage. Note that if
flushing by document count is also enabled, then the
flush will be triggered by whichever comes first.
NOTE: the account of RAM usage for pending
deletions is only approximate. Specifically, if you
delete by Query, Lucene currently has no way to measure
the RAM usage of individual Queries so the accounting
will under-estimate and you should compensate by either
calling commit() periodically yourself, or by using
setMaxBufferedDeleteTerms(int)
to flush by count
instead of RAM usage (each buffered delete Query counts
as one).
NOTE: because IndexWriter uses
int
s when managing its internal storage,
the absolute maximum value for this setting is somewhat
less than 2048 MB. The precise limit depends on
various factors, such as how large your documents are,
how many fields have norms, etc., so it's best to set
this value comfortably under 2048.
The default value is DEFAULT_RAM_BUFFER_SIZE_MB
.
IllegalArgumentException
- if ramBufferSize is
enabled but non-positive, or it disables ramBufferSize
when maxBufferedDocs is already disabled

@Deprecated public double getRAMBufferSizeMB()
IndexWriterConfig.getRAMBufferSizeMB()
instead.
setRAMBufferSizeMB(double)
if enabled.
@Deprecated public void setMaxBufferedDeleteTerms(int maxBufferedDeleteTerms)
IndexWriterConfig.setMaxBufferedDeleteTerms(int)
instead.
Determines the minimal number of delete terms required before the buffered in-memory delete terms are applied and flushed. If there are documents buffered in memory at the time, they are merged and a new segment is created.
Disabled by default (writer flushes by RAM usage).
IllegalArgumentException
- if maxBufferedDeleteTerms
is enabled but smaller than 1
See Also: setRAMBufferSizeMB(double)
@Deprecated public int getMaxBufferedDeleteTerms()
IndexWriterConfig.getMaxBufferedDeleteTerms()
instead
setMaxBufferedDeleteTerms(int)
@Deprecated public void setMergeFactor(int mergeFactor)
LogMergePolicy.setMergeFactor(int)
directly.
Note that this method is a convenience method: it
just calls mergePolicy.setMergeFactor as long as
mergePolicy is an instance of LogMergePolicy
.
Otherwise an IllegalArgumentException is thrown.
This must never be less than 2. The default value is 10.
@Deprecated public int getMergeFactor()
LogMergePolicy.getMergeFactor()
directly.
Returns the number of segments that are merged at once and also controls the total number of segments allowed to accumulate in the index.
Note that this method is a convenience method: it
just calls mergePolicy.getMergeFactor as long as
mergePolicy is an instance of LogMergePolicy
.
Otherwise an IllegalArgumentException is thrown.
setMergeFactor(int)
public static void setDefaultInfoStream(PrintStream infoStream)
setInfoStream(java.io.PrintStream)
public static PrintStream getDefaultInfoStream()
setDefaultInfoStream(java.io.PrintStream)
public void setInfoStream(PrintStream infoStream) throws IOException
IOException
public PrintStream getInfoStream()
setInfoStream(java.io.PrintStream)
public boolean verbose()
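A debugging sketch tying setInfoStream, verbose and message together:

```java
writer.setInfoStream(System.out); // merges, deletes, etc. are now logged; may throw IOException
if (writer.verbose()) {           // true iff infoStream != null
  writer.message("finished batch #1"); // prefixed with writer and thread identification
}
```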
@Deprecated public void setWriteLockTimeout(long writeLockTimeout)
Deprecated. use IndexWriterConfig.setWriteLockTimeout(long) instead
Sets the maximum time to wait for a write lock (in milliseconds) for this instance of IndexWriter. Use setDefaultWriteLockTimeout(long) to change the default value for all instances of IndexWriter.
@Deprecated public long getWriteLockTimeout()
IndexWriterConfig.getWriteLockTimeout()
setWriteLockTimeout(long)
@Deprecated public static void setDefaultWriteLockTimeout(long writeLockTimeout)
IndexWriterConfig.setDefaultWriteLockTimeout(long)
instead
@Deprecated public static long getDefaultWriteLockTimeout()
IndexWriterConfig.getDefaultWriteLockTimeout()
instead
setDefaultWriteLockTimeout(long)
public void close() throws CorruptIndexException, IOException
Commits all changes to an index and closes all associated files. Note that this may be a costly operation, so, try to re-use a single writer instead of closing and opening a new one. See commit()
for
caveats about write caching done by some IO devices.
If an Exception is hit during close, eg due to disk full or some other reason, then both the on-disk index and the internal state of the IndexWriter instance will be consistent. However, the close will not be complete even though part of it (flushing buffered documents) may have succeeded, so the write lock will still be held.
If you can correct the underlying cause (eg free up some disk space) then you can call close() again. Failing that, if you want to force the write lock to be released (dangerous, because you may then lose buffered docs in the IndexWriter instance) then you can do something like this:
    try {
      writer.close();
    } finally {
      if (IndexWriter.isLocked(directory)) {
        IndexWriter.unlock(directory);
      }
    }

after which, you must be certain not to use the writer instance anymore.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer, again. See above for details.
close
in interface Closeable
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void close(boolean waitForMerges) throws CorruptIndexException, IOException
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer, again. See above for details.
NOTE: it is dangerous to always call close(false), especially when IndexWriter is not open for very long, because this can result in "merge starvation" whereby long merges will never have a chance to finish. This will cause too many segments in your index over time.
waitForMerges
- if true, this call will block
until all merges complete; else, it will ask all
running merges to abort, wait until those merges have
finished (which should be at most a few seconds), and
then return.
CorruptIndexException
IOException
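Sketch of the two shutdown modes (the graceful flag is hypothetical):

```java
if (graceful) {
  writer.close(true);  // block until running merges complete (same as close())
} else {
  writer.close(false); // abort running merges; habitual use risks merge starvation
}
```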
public Directory getDirectory()
public Analyzer getAnalyzer()
public int maxDoc()
Returns total number of docs in this index, including docs not yet flushed (still in the RAM buffer), not counting deletions.
See Also: numDocs()

public int numDocs() throws IOException
Returns total number of docs in this index, including docs not yet flushed (still in the RAM buffer), and including deletions. NOTE: buffered deletions are not counted. If you really need these to be counted you should call commit() first.
IOException
numDocs()
public boolean hasDeletions() throws IOException
IOException
public void addDocument(Document doc) throws CorruptIndexException, IOException
Adds a document to this index. If the document contains more than setMaxFieldLength(int)
terms for a given field, the remainder are
discarded.
Note that if an Exception is hit (for example disk full) then the index will be consistent, but this document may not have been added. Furthermore, it's possible the index will have one segment in non-compound format even when using compound files (when a merge has partially succeeded).
This method periodically flushes pending documents
to the Directory (see above), and
also periodically triggers segment merges in the index
according to the MergePolicy
in use.
Merges temporarily consume space in the
directory. The amount of space required is up to 1X the
size of all segments being merged, when no
readers/searchers are open against the index, and up to
2X the size of all segments being merged when
readers/searchers are open against the index (see
forceMerge(int)
for details). The sequence of
primitive merge operations performed is governed by the
merge policy.
Note that each term in the document can be no longer than 16383 characters, otherwise an IllegalArgumentException will be thrown.
Note that it's possible to create an invalid Unicode string in java if a UTF16 surrogate pair is malformed. In this case, the invalid characters are silently replaced with the Unicode replacement character U+FFFD.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void addDocument(Document doc, Analyzer analyzer) throws CorruptIndexException, IOException
Adds a document to this index, using the provided analyzer instead of the value of
getAnalyzer()
. If the document contains more than
setMaxFieldLength(int)
terms for a given field, the remainder are
discarded.
See addDocument(Document)
for details on
index and IndexWriter state after an Exception, and
flushing/merging temporary free space requirements.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void addDocuments(Collection<Document> docs) throws CorruptIndexException, IOException
Atomically adds a block of documents with sequentially assigned document IDs, such that an external reader will see all or none of the documents.
WARNING: the index does not currently record which documents were added as a block. Today this is fine, because merging will preserve the block (as long as none of them were deleted). But it's possible in the future that Lucene may more aggressively re-order documents (for example, perhaps to obtain better index compression), in which case you may need to fully re-index your documents at that time.
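A block-indexing sketch (field contents and the child/parent split are illustrative):

```java
// The documents in the List get sequential doc IDs and become visible atomically.
List<Document> block = new ArrayList<Document>();
block.add(childDoc1);   // hypothetical child documents
block.add(childDoc2);
block.add(parentDoc);   // parent placed last in the block by convention
writer.addDocuments(block);
```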
See addDocument(Document)
for details on
index and IndexWriter state after an Exception, and
flushing/merging temporary free space requirements.
NOTE: tools that do offline splitting of an index (for example, IndexSplitter in contrib) or re-sorting of documents (for example, IndexSorter in contrib) are not aware of these atomically added documents and will likely break them up. Use such tools at your own risk!
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void addDocuments(Collection<Document> docs, Analyzer analyzer) throws CorruptIndexException, IOException
Atomically adds a block of documents, analyzed using the provided analyzer, with sequentially assigned document IDs, such that an external reader will see all or none of the documents.
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void updateDocuments(Term delTerm, Collection<Document> docs) throws CorruptIndexException, IOException
Atomically deletes documents matching the provided delTerm and adds a block of documents with sequentially assigned document IDs, such that an external reader will see all or none of the documents. See addDocuments(Collection).
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void updateDocuments(Term delTerm, Collection<Document> docs, Analyzer analyzer) throws CorruptIndexException, IOException
Atomically deletes documents matching the provided delTerm and adds a block of documents, analyzed using the provided analyzer, with sequentially assigned document IDs, such that an external reader will see all or none of the documents. See addDocuments(Collection).
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void deleteDocuments(Term term) throws CorruptIndexException, IOException
Deletes the document(s) containing term.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
term
- the term to identify the documents to be deleted
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void deleteDocuments(Term... terms) throws CorruptIndexException, IOException
Deletes the document(s) containing any of the terms.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
terms
- array of terms to identify the documents
to be deleted
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void deleteDocuments(Query query) throws CorruptIndexException, IOException
Deletes the document(s) matching the provided query.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
query
- the query to identify the documents to be deleted
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void deleteDocuments(Query... queries) throws CorruptIndexException, IOException
Deletes the document(s) matching any of the provided queries.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
queries
- array of queries to identify the documents
to be deleted
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void updateDocument(Term term, Document doc) throws CorruptIndexException, IOException
Updates a document by first deleting the document(s) containing term
and then adding the new
document. The delete and then add are atomic as seen
by a reader on the same index (flush may happen only after
the add).
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
term
- the term to identify the document(s) to be
deleteddoc
- the document to be added
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void updateDocument(Term term, Document doc, Analyzer analyzer) throws CorruptIndexException, IOException
Updates a document by first deleting the document(s) containing term
and then adding the new
document. The delete and then add are atomic as seen
by a reader on the same index (flush may happen only after
the add).
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
term
- the term to identify the document(s) to be
deleteddoc
- the document to be addedanalyzer
- the analyzer to use when analyzing the document
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

@Deprecated public void optimize() throws CorruptIndexException, IOException
CorruptIndexException
IOException
@Deprecated public void optimize(int maxNumSegments) throws CorruptIndexException, IOException
CorruptIndexException
IOException
@Deprecated public void optimize(boolean doWait) throws CorruptIndexException, IOException
CorruptIndexException
IOException
public void forceMerge(int maxNumSegments) throws CorruptIndexException, IOException
Forces merge policy to merge segments until there's <= maxNumSegments. The actual merges to be executed are determined by the MergePolicy.
This is a horribly costly operation, especially when
you pass a small maxNumSegments
; usually you
should only call this if the index is static (will no
longer be changed).
Note that this requires up to 2X the index size free
space in your Directory (3X if you're using compound
file format). For example, if your index size is 10 MB
then you need up to 20 MB free for this to complete (30
MB if you're using compound file format). Also,
it's best to call commit()
afterwards,
to allow IndexWriter to free up disk space.
If some but not all readers re-open while merging is underway, this will cause > 2X temporary space to be consumed as those new readers will then hold open the temporary segments at that time. It is best not to re-open readers while merging is running.
The actual temporary usage could be much less than these figures (it depends on many factors).
In general, once this completes, the total size of the index will be less than the size of the starting index. It could be quite a bit smaller (if there were many pending deletes) or just slightly smaller.
If an Exception is hit, for example due to disk full, the index will not be corrupt and no documents will have been lost. However, it may have been partially merged (some segments were merged but not all), and it's possible that one of the segments in the index will be in non-compound format even when using compound file format. This will occur when the Exception is hit during conversion of the segment into compound format.
This call will merge those segments present in the index when the call started. If other threads are still adding documents and flushing segments, those newly created segments will not be merged unless you call forceMerge again.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
NOTE: if you call close(boolean)
with false, which aborts all running merges,
then any thread still running this method might hit a
MergePolicy.MergeAbortedException
.
maxNumSegments
- maximum number of segments left
in the index after merging finishes
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error
See Also: MergePolicy.findMerges(org.apache.lucene.index.SegmentInfos)
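Typical usage is a one-off pass over a static index, sketched here:

```java
// Merge a no-longer-changing index down to a single segment, then commit so
// IndexWriter can free the disk space held by the replaced segments.
writer.forceMerge(1);
writer.commit();
```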
public void forceMerge(int maxNumSegments, boolean doWait) throws CorruptIndexException, IOException
forceMerge(int)
, except you can
specify whether the call should block until
all merging completes. This is only meaningful with a
MergeScheduler
that is able to run merges in
background threads.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
CorruptIndexException
IOException
@Deprecated public void expungeDeletes(boolean doWait) throws CorruptIndexException, IOException
CorruptIndexException
IOException
public void forceMergeDeletes(boolean doWait) throws CorruptIndexException, IOException
forceMergeDeletes()
, except you can
specify whether the call should block until the
operation completes. This is only meaningful with a
MergeScheduler
that is able to run merges in
background threads.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
NOTE: if you call close(boolean)
with false, which aborts all running merges,
then any thread still running this method might hit a
MergePolicy.MergeAbortedException
.
CorruptIndexException
IOException
@Deprecated public void expungeDeletes() throws CorruptIndexException, IOException
CorruptIndexException
IOException
public void forceMergeDeletes() throws CorruptIndexException, IOException
Forces merging of all segments that have deleted documents. The actual merges to be executed are determined by the MergePolicy. For example,
the default TieredMergePolicy
will only
pick a segment if the percentage of
deleted docs is over 10%.
This is often a horribly costly operation; rarely is it warranted.
To see how
many deletions you have pending in your index, call
IndexReader.numDeletedDocs()
.
NOTE: this method first flushes a new segment (if there are indexed documents), and applies all buffered deletes.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
CorruptIndexException
IOException
public final void maybeMerge() throws CorruptIndexException, IOException
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
CorruptIndexException
IOException
public Collection<SegmentInfo> getMergingSegments()
MergePolicy
to avoid
selecting merges for segments already being merged.
The returned collection is not cloned, and thus is
only safe to access if you hold IndexWriter's lock
(which you do when IndexWriter invokes the
MergePolicy).
Do not alter the returned collection!
public MergePolicy.OneMerge getNextMerge()
MergeScheduler
calls this method
to retrieve the next merge requested by the
MergePolicy
public void rollback() throws IOException
Close the IndexWriter
without committing
any changes that have occurred since the last commit
(or since it was opened, if commit hasn't been called).
This removes any temporary files that had been created,
after which the state of the index will be the same as
it was when commit() was last called or when this
writer was first opened. This also clears a previous
call to prepareCommit()
.
rollback
in interface TwoPhaseCommit
IOException
- if there is a low-level IO error

public void deleteAll() throws IOException
Delete all documents in the index.
This method will drop all buffered documents and will
remove all segments from the index. This change will not be
visible until a commit()
has been called. This method
can be rolled back using rollback()
.
NOTE: this method is much faster than using deleteDocuments( new MatchAllDocsQuery() ).
NOTE: this method will forcefully abort all merges
in progress. If other threads are running forceMerge(int)
, addIndexes(IndexReader[])
or
forceMergeDeletes(boolean)
methods, they may receive
MergePolicy.MergeAbortedException
s.
IOException
public void waitForMerges()
It is guaranteed that any merges started prior to calling this method will have completed once this method completes.
@Deprecated public void addIndexesNoOptimize(Directory... dirs) throws CorruptIndexException, IOException
addIndexes(Directory...)
instead
CorruptIndexException
IOException
public void addIndexes(Directory... dirs) throws CorruptIndexException, IOException
Adds all segments from an array of indexes into this index.
This may be used to parallelize batch indexing. A large document collection can be broken into sub-collections. Each sub-collection can be indexed in parallel, on a different thread, process or machine. The complete index can then be created by merging sub-collection indexes with this method.
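For instance (dir1..dir3 stand for sub-collection indexes built elsewhere):

```java
// Fold independently built sub-indexes into this writer's index.
writer.addIndexes(dir1, dir2, dir3);
writer.commit();
```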
NOTE: the index in each Directory
must not be
changed (opened by a writer) while this method is
running. This method does not acquire a write lock in
each input Directory, so it is up to the caller to
enforce this.
This method is transactional in how Exceptions are handled: it does not commit a new segments_N file until all indexes are added. This means if an Exception occurs (for example disk full), then either no indexes will have been added or they all will have been.
Note that this requires temporary free space in the
Directory
up to 2X the sum of all input indexes
(including the starting index). If readers/searchers
are open against the starting index, then temporary
free space required will be higher by the size of the
starting index (see forceMerge(int)
for details).
NOTE: this method only copies the segments of the incoming indexes and does not merge them. Therefore deleted documents are not removed and the new segments are not merged with the existing ones.
This requires this index not be among those to be added.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

public void addIndexes(IndexReader... readers) throws CorruptIndexException, IOException
Merges the provided indexes into this index. This method is useful if you use extensions of IndexReader. Otherwise, using
addIndexes(Directory...)
is highly recommended for performance
reasons. It uses the MergeScheduler
and MergePolicy
set
on this writer, which may perform merges in parallel.
The provided IndexReaders are not closed.
NOTE: this method does not merge the current segments, only the incoming ones.
See addIndexes(Directory...)
for details on transactional
semantics, temporary free space required in the Directory,
and non-CFS segments on an Exception.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
NOTE: if you call close(boolean)
with false, which aborts all running merges,
then any thread still running this method might hit a
MergePolicy.MergeAbortedException
.
CorruptIndexException
- if the index is corrupt
IOException
- if there is a low-level IO error

protected void doAfterFlush() throws IOException
IOException
protected void doBeforeFlush() throws IOException
IOException
public final void prepareCommit() throws CorruptIndexException, IOException
Expert: prepare for commit. See prepareCommit(Map) for details.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
prepareCommit
in interface TwoPhaseCommit
CorruptIndexException
IOException
prepareCommit(Map)
public final void prepareCommit(Map<String,String> commitUserData) throws CorruptIndexException, IOException
Expert: prepare for commit, specifying
commitUserData Map (String -> String). This does the
first phase of 2-phase commit. This method does all
steps necessary to commit changes since this writer
was opened: flushes pending added and deleted docs,
syncs the index files, writes most of next segments_N
file. After calling this you must call either commit()
to finish the commit, or rollback()
to revert the commit and undo all changes
done since the writer was opened.
You can also just call commit(Map)
directly
without prepareCommit first in which case that method
will internally call prepareCommit.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
prepareCommit
in interface TwoPhaseCommit
commitUserData
- Opaque Map (String->String)
that's recorded into the segments file in the index,
and retrievable by IndexCommit.getUserData()
. Note that when
IndexWriter commits itself during close()
, the
commitUserData is unchanged (just carried over from
the prior commit). If this is null then the previous
commitUserData is kept. Also, the commitUserData will
only "stick" if there are actually changes in the
index to commit.
CorruptIndexException
IOException
TwoPhaseCommit.prepareCommit()
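A two-phase-commit sketch coordinating the index with some other transactional resource (the userData keys are illustrative):

```java
Map<String, String> userData = new HashMap<String, String>();
userData.put("appTxnId", "123"); // hypothetical marker, retrievable via IndexCommit.getUserData()
try {
  writer.prepareCommit(userData); // phase 1: flush, sync, write most of segments_N
  // ... prepare/commit the other resources participating in the transaction ...
  writer.commit();                // phase 2: finish the pending commit
} catch (IOException e) {
  writer.rollback();              // reverts the prepared commit; also closes the writer
  throw e;
}
```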
public final void commit() throws CorruptIndexException, IOException
Commits all pending changes (added & deleted documents, segment merges, added indexes, etc.) to the index, and syncs all referenced index files, such that a reader will see the changes and the index updates will survive an OS or machine crash or power loss. Note that this does not wait for any running background merges to finish. This may be a costly operation, so you should test the cost in your application and do it only when really necessary.
Note that this operation calls Directory.sync on the index files. That call should not return until the file contents & metadata are on stable storage. For FSDirectory, this calls the OS's fsync. But, beware: some hardware devices may in fact cache writes even during fsync, and return before the bits are actually on stable storage, to give the appearance of faster performance. If you have such a device, and it does not have a battery backup (for example) then on power loss it may still lose data. Lucene cannot guarantee consistency on such devices.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
commit
in interface TwoPhaseCommit
CorruptIndexException
IOException
prepareCommit()
,
commit(Map)
public final void commit(Map<String,String> commitUserData) throws CorruptIndexException, IOException
Commits all changes to the index, specifying a commitUserData Map (String -> String). This just calls prepareCommit(Map)
(if you didn't
already call it) and then finishCommit()
.
NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details.
commit
in interface TwoPhaseCommit
CorruptIndexException
IOException
TwoPhaseCommit.commit()
,
TwoPhaseCommit.prepareCommit(Map)
protected final void flush(boolean triggerMerge, boolean flushDocStores, boolean flushDeletes) throws CorruptIndexException, IOException
NOTE: flushDocStores is ignored now (hardwired to true); this method is only here for backwards compatibility
CorruptIndexException
IOException
protected final void flush(boolean triggerMerge, boolean applyAllDeletes) throws CorruptIndexException, IOException
Flush all in-memory buffered updates (adds and deletes) to the Directory.
Parameters:
triggerMerge - if true, we may merge segments (if deletes or docs were flushed) if necessary
applyAllDeletes - whether pending deletes should also be applied
CorruptIndexException
IOException
public final long ramSizeInBytes()
Expert: Return the total size of all index files currently cached in memory.

public final int numRamDocs()
Expert: Return the number of documents currently buffered in RAM.
public void merge(MergePolicy.OneMerge merge) throws CorruptIndexException, IOException
CorruptIndexException
IOException
public String segString() throws IOException
IOException
public String segString(Iterable<SegmentInfo> infos) throws IOException
IOException
public String segString(SegmentInfo info) throws IOException
IOException
public static boolean isLocked(Directory directory) throws IOException
true
iff the index in the named directory is
currently locked.
directory
- the directory to check for a lock
IOException
- if there is a low-level IO error

public static void unlock(Directory directory) throws IOException
Forcibly unlocks the index in the named directory.
Caution: this should only be used by failure recovery code, when it is known that no other process nor thread is in fact currently accessing this index.
IOException
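A recovery sketch matching that caution (e.g. after a crashed process left a stale lock):

```java
// Only safe when no other process or thread is using the index.
if (IndexWriter.isLocked(dir)) {
  IndexWriter.unlock(dir);
}
```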
@Deprecated public void setMergedSegmentWarmer(IndexWriter.IndexReaderWarmer warmer)
Deprecated. use IndexWriterConfig.setMergedSegmentWarmer(org.apache.lucene.index.IndexWriter.IndexReaderWarmer) instead.
Set the merged segment warmer. See IndexWriter.IndexReaderWarmer.
@Deprecated public IndexWriter.IndexReaderWarmer getMergedSegmentWarmer()
Deprecated. use IndexWriterConfig.getMergedSegmentWarmer() instead.
Returns the merged segment warmer. See IndexWriter.IndexReaderWarmer.
public void deleteUnusedFiles() throws IOException
IndexWriter normally deletes unused files itself, during indexing. However, on Windows, which disallows deletion of open files, if there is a reader open on the index then those files cannot be deleted. This is fine, because IndexWriter will periodically retry the deletion.
However, IndexWriter doesn't try that often: only on open, close, flushing a new segment, and finishing a merge. If you don't do any of these actions with your IndexWriter, you'll see the unused files linger. If that's a problem, call this method to delete them (once you've closed the open readers that were preventing their deletion).
In addition, you can call this method to delete
unreferenced index commits. This might be useful if you
are using an IndexDeletionPolicy
which holds
onto index commits until some criteria are met, but those
commits are no longer needed. Otherwise, those commits will
be deleted the next time commit() is called.
IOException
public void setPayloadProcessorProvider(PayloadProcessorProvider pcp)
PayloadProcessorProvider
to use when merging payloads.
Note that the given pcp
will be invoked for every segment that
is merged, not only external ones that are given through
addIndexes(org.apache.lucene.store.Directory...)
. If you want only the payloads of the external segments
to be processed, you can return null
whenever a
PayloadProcessorProvider.ReaderPayloadProcessor
is requested for the Directory
of the
IndexWriter
.
The default is null
which means payloads are processed
normally (copied) during segment merges. You can also unset it by passing
null
.
NOTE: the set PayloadProcessorProvider
will be in effect
immediately, potentially for already running merges too. If you want to be
sure it is used for further operations only, such as addIndexes(org.apache.lucene.store.Directory...)
or
forceMerge(int)
, you can call waitForMerges()
before.
public PayloadProcessorProvider getPayloadProcessorProvider()
PayloadProcessorProvider
that is used during segment
merges to process payloads.