Infinispan / ISPN-1362

Reduce the number of files a FileCacheStore creates


    Details

    • Type: Enhancement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Rejected
    • Affects Version/s: 5.0.0.FINAL, 5.1.0.FINAL, 5.1.1.FINAL
    • Fix Version/s: None
    • Component/s: Loaders and Stores

      Description

      It seems that after ISPN-1300 the FileCacheStore is limited to approximately 4 million files; this is still too many, as the original issue description reports:

      When trying to initialize my index for Hibernate Search with persistence, I get the following exception after several hours of indexing:
      [2011-08-29 11:30:53,425] ERROR FileCacheStore.java:317 Hibernate Search: indexwriter-154 ) ISPN000063: Exception while saving bucket Bucket{entries={_4o.fdt|M|cnwk.foreman.model.SoftwareDownload=ImmortalCacheEntry{key=_4o.fdt|M|cnwk.foreman.model.SoftwareDownload, value=ImmortalCacheValue{value=FileMetadata{lastModified=1314642653425, size=32768}}}}, bucketId='1509281792'}
      java.io.FileNotFoundException: /var/opt/fullTextStore/LuceneIndexesMetadata/1509281792 (Too many open files)
      at java.io.RandomAccessFile.open(Native Method)
      at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
      at org.infinispan.loaders.file.FileCacheStore$BufferedFileSync.createChannel(FileCacheStore.java:494)
      at org.infinispan.loaders.file.FileCacheStore$BufferedFileSync.write(FileCacheStore.java:472)
      at org.infinispan.loaders.file.FileCacheStore.updateBucket(FileCacheStore.java:315)
      at org.infinispan.loaders.bucket.BucketBasedCacheStore.insertBucket(BucketBasedCacheStore.java:137)
      at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:94)
      at org.infinispan.loaders.bucket.BucketBasedCacheStore.storeLockSafe(BucketBasedCacheStore.java:49)
      at org.infinispan.loaders.LockSupportCacheStore.store(LockSupportCacheStore.java:195)
      at org.infinispan.interceptors.CacheStoreInterceptor.visitPutKeyValueCommand(CacheStoreInterceptor.java:210)
      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
      at org.infinispan.interceptors.CacheLoaderInterceptor.visitPutKeyValueCommand(CacheLoaderInterceptor.java:82)
      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
      at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:133)
      at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
      at org.infinispan.interceptors.TxInterceptor.enlistWriteAndInvokeNext(TxInterceptor.java:214)
      at org.infinispan.interceptors.TxInterceptor.visitPutKeyValueCommand(TxInterceptor.java:162)
      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
      at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:114)
      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
      at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:104)
      at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:64)
      at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:119)
      at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:77)
      at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:60)
      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:77)
      at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:274)
      at org.infinispan.CacheImpl.put(CacheImpl.java:515)
      at org.infinispan.CacheSupport.put(CacheSupport.java:51)
      at org.infinispan.lucene.InfinispanIndexOutput.close(InfinispanIndexOutput.java:206)
      at org.apache.lucene.util.IOUtils.closeSafely(IOUtils.java:80)
      at org.apache.lucene.index.FieldsWriter.close(FieldsWriter.java:111)
      at org.apache.lucene.index.FieldsWriter.abort(FieldsWriter.java:121)
      at org.apache.lucene.index.StoredFieldsWriter.abort(StoredFieldsWriter.java:90)
      at org.apache.lucene.index.DocFieldProcessor.abort(DocFieldProcessor.java:71)
      at org.apache.lucene.index.DocumentsWriter.abort(DocumentsWriter.java:421)
      at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:729)
      at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:2042)
      at org.hibernate.search.backend.impl.lucene.works.AddWorkDelegate.performWork(AddWorkDelegate.java:76)
      at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace.doWorkInSync(DirectoryProviderWorkspace.java:96)
      at org.hibernate.search.backend.impl.batchlucene.DirectoryProviderWorkspace$AsyncIndexRunnable.run(DirectoryProviderWorkspace.java:144)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:680)

      The open file limit on my machine has already been increased to try to fix the issue.
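      As a quick diagnostic, the open and maximum file descriptor counts of the JVM process can be read from inside the application. This is a sketch using the standard JDK `com.sun.management.UnixOperatingSystemMXBean` extension (available only on Unix-like JVMs; the `FdCheck` wrapper class is a hypothetical name for illustration):

      ```java
      import java.lang.management.ManagementFactory;
      import java.lang.management.OperatingSystemMXBean;

      public class FdCheck {
          public static void main(String[] args) {
              OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
              // On Unix JVMs the platform MXBean exposes file descriptor counters,
              // which show how close the process is to the "Too many open files" limit.
              if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
                  com.sun.management.UnixOperatingSystemMXBean unixOs =
                          (com.sun.management.UnixOperatingSystemMXBean) os;
                  System.out.println("open fds: " + unixOs.getOpenFileDescriptorCount());
                  System.out.println("max fds:  " + unixOs.getMaxFileDescriptorCount());
              }
          }
      }
      ```

      Watching these counters while indexing would confirm whether the store's open channels, rather than some other resource, are exhausting the limit.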

      This is the configuration used when the exception is thrown:

      <?xml version="1.0" encoding="UTF-8"?>
      <infinispan
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="urn:infinispan:config:4.2 http://www.infinispan.org/schemas/infinispan-config-4.2.xsd"
      xmlns="urn:infinispan:config:4.2">

      <!-- *************************** -->
      <!-- System-wide global settings -->
      <!-- *************************** -->

      <global>

      <!-- Duplicate domains are allowed so that multiple deployments with default configuration
      of Hibernate Search applications work - if possible it would be better to use JNDI to share
      the CacheManager across applications -->
      <globalJmxStatistics
      enabled="true"
      cacheManagerName="HibernateSearch"
      allowDuplicateDomains="true"/>

      <!-- If the transport is omitted, there is no way to create distributed or clustered
      caches. There is no added cost to defining a transport but not creating a cache that uses one,
      since the transport is created and initialized lazily. -->
      <transport
      clusterName="HibernateSearch-Infinispan-cluster"
      distributedSyncTimeout="50000">
      <!-- Note that the JGroups transport uses sensible defaults if no configuration
      property is defined. See the JGroupsTransport javadocs for more flags -->
      </transport>

      <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.
      Hibernate Search takes care to stop the CacheManager so registering is not needed -->
      <shutdown
      hookBehavior="DONT_REGISTER"/>

      </global>

      <!-- *************************** -->
      <!-- Default "template" settings -->
      <!-- *************************** -->

      <default>

      <locking
      lockAcquisitionTimeout="20000"
      writeSkewCheck="false"
      concurrencyLevel="500"
      useLockStriping="false"/>

      <lazyDeserialization
      enabled="false"/>

      <!-- Invocation batching is required for use with the Lucene Directory -->
      <invocationBatching
      enabled="true"/>

      <!-- This element specifies that the cache is clustered. Modes supported: distribution
      (d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as
      with Hibernate Search DirectoryProvider). Replication is recommended for best performance of
      Lucene indexes, but make sure you have enough memory to store the index in your heap.
      Also distribution scales much better than replication on high number of nodes in the cluster. -->
      <clustering
      mode="replication">

      <!-- Prefer loading all data at startup than later -->
      <stateRetrieval
      timeout="60000"
      logFlushTimeout="30000"
      fetchInMemoryState="true"
      alwaysProvideInMemoryState="true"/>

      <!-- Network calls are synchronous by default -->
      <sync
      replTimeout="20000"/>
      </clustering>

      <jmxStatistics
      enabled="true"/>

      <eviction
      maxEntries="-1"
      strategy="NONE"/>

      <expiration
      maxIdle="-1"/>

      </default>

      <!-- ******************************************************************************* -->
      <!-- Individually configured "named" caches. -->
      <!-- -->
      <!-- While default configuration happens to be fine with similar settings across the -->
      <!-- three caches, they should generally be different in a production environment. -->
      <!-- -->
      <!-- Current settings could easily lead to OutOfMemory exception as a CacheStore -->
      <!-- should be enabled, and maybe distribution is desired. -->
      <!-- ******************************************************************************* -->

      <!-- *************************************** -->
      <!-- Cache to store Lucene's file metadata -->
      <!-- *************************************** -->
      <namedCache name="LuceneIndexesMetadata">

      <clustering mode="replication">
      <stateRetrieval
      fetchInMemoryState="true"
      logFlushTimeout="30000"/>
      <sync replTimeout="25000"/>
      </clustering>
      <loaders preload="true">
      <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
      <properties>
      <property name="location" value="/var/opt/fullTextStore"/>
      </properties>
      </loader>
      </loaders>
      </namedCache>

      <!-- **************************** -->
      <!-- Cache to store Lucene data -->
      <!-- **************************** -->
      <namedCache name="LuceneIndexesData">

      <clustering mode="replication">
      <stateRetrieval
      fetchInMemoryState="true"
      logFlushTimeout="30000"/>
      <sync
      replTimeout="25000"/>
      </clustering>
      <loaders>
      <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true">
      <properties>
      <property name="location" value="/var/opt/fullTextStore"/>
      </properties>
      </loader>
      </loaders>
      </namedCache>

      <!-- ***************************** -->
      <!-- Cache to store Lucene locks -->
      <!-- ***************************** -->
      <namedCache
      name="LuceneIndexesLocking">
      <clustering
      mode="replication">
      <stateRetrieval
      fetchInMemoryState="true"
      logFlushTimeout="30000"/>
      <sync
      replTimeout="25000"/>
      </clustering>
      </namedCache>

      </infinispan>

      There are 10160 open files in the cache store when the exception is thrown and a total of 10178 files visible in the cache store.

      Submitting this so the issue can be tracked, as suggested on the Hibernate Search Forums.
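      For context on why the store produces so many files: a bucket-based file store keeps one file per hash bucket, so the file count on disk grows with the number of distinct buckets touched. A minimal illustrative sketch (not Infinispan's actual implementation; `bucketFileName` is a hypothetical helper, named here only to mirror the bucketId visible in the log above):

      ```java
      import java.io.File;

      public class BucketPathDemo {
          // Hypothetical helper: derive a bucket file name from the key's hash code,
          // similar in spirit to the bucketId='1509281792' seen in the exception log.
          static String bucketFileName(Object key) {
              return String.valueOf(key.hashCode());
          }

          static File bucketFile(File storeRoot, Object key) {
              return new File(storeRoot, bucketFileName(key));
          }

          public static void main(String[] args) {
              File root = new File("/var/opt/fullTextStore/LuceneIndexesMetadata");
              // Keys with different hash codes land in different bucket files,
              // so many distinct keys mean many distinct files on disk.
              System.out.println(bucketFile(root, "keyA").getPath());
              System.out.println(bucketFile(root, "keyB").getPath());
          }
      }
      ```

      Under this layout, capping the number of files means either coarsening the key-to-bucket mapping or abandoning the one-file-per-bucket design altogether.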


            People

            Assignee:
            manik Manik Surtani (Inactive)
            Reporter:
            taunderwood Todd Underwood (Inactive)
            Votes:
            1
            Watchers:
            3
