Details
- Type: Bug
- Resolution: Done
- Priority: Major
- Version: 2.10.0-fuse-71-047
Description
I have a simple Camel route that takes files from a camel-file consumer endpoint and sends them to a camel-hdfs producer endpoint:
<from uri="file:/local/workspace/inbox?delete=true"/>
<to uri="hdfs://localhost:9000/local/workspace/outbox/file1"/>
However, my Hadoop server only contains a zero-length file "file1.opened" unless I stop the Camel route or a splitting condition is met via a "splitStrategy" option added to the URI. In those cases, a file called "file1" is created with the proper contents and "file1.opened" disappears.
Looking at the source code, it appears that the close() method of HdfsOutputStream is never called unless the Camel route/context is stopping or the file is being split.
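As a sketch of the workaround mentioned above: adding a splitStrategy option to the producer URI should cause camel-hdfs to close the current stream once the condition fires, so "file1.opened" is finalized without stopping the route. The IDLE:1000 value below is illustrative only, not taken from the report:

```xml
<!-- Illustrative workaround: close the open HDFS file after 1000 ms with no writes.
     The splitStrategy value here is an example; tune it for your workload. -->
<route>
  <from uri="file:/local/workspace/inbox?delete=true"/>
  <to uri="hdfs://localhost:9000/local/workspace/outbox/file1?splitStrategy=IDLE:1000"/>
</route>
```

Note that with a split strategy in place the component may write numbered segment files rather than a single "file1", so the exact output layout should be verified against the camel-hdfs documentation for the version in use.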