Bug
Resolution: Unresolved
Normal
rhos-18.0.18
rhos-storage-glance
Low
When using `glance image-create-via-import` to import large images (>100MB), the S3 backend uses single-part upload instead of multipart upload, resulting in significantly slower upload performance.
The `set_image_data()` function in the import flow doesn't pass the `size` parameter to `image.set_data()`, so it defaults to `size=0` (unknown size). When the S3 driver receives `image_size=0`, the threshold check `0 < 100MB` evaluates as true, so it falls back to single-part upload instead of multipart.
`image.size` is already known at this point (it is set during the staging step), but the import flow never passes it to `set_data()`.
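The decision described above can be sketched as a minimal, self-contained function. This is a hypothetical stand-in for the driver's size check, not the actual glance_store code; it only assumes the driver compares the reported image size against `s3_store_large_object_size` (in MB), as described in this report.

```python
# Hypothetical sketch of the S3 driver's size-based upload decision.
# choose_upload_strategy() is illustrative, not the real driver function.

MB = 1024 * 1024

def choose_upload_strategy(image_size: int, large_object_size_mb: int = 100) -> str:
    """Pick an upload strategy from the reported image size (bytes)."""
    if image_size < large_object_size_mb * MB:
        return "single-part"
    return "multipart"

# Buggy path: set_data() is called without a size, so the driver sees 0
# and picks single-part even though the staged image is 2 GB.
print(choose_upload_strategy(0))             # single-part

# Fixed path: passing the known image.size lets the driver pick multipart
# once the size crosses the 100 MB threshold.
print(choose_upload_strategy(2 * 1024 * MB)) # multipart
```

The fix is therefore a one-line change in the import flow: forward `image.size` to `set_data()` so the driver sees the real size instead of 0.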
Steps to Reproduce:
1. Configure Glance with S3 backend (Swift S3 API)
2. Create a 2GB test image file:
$ dd if=/dev/zero of=/tmp/test_2gb.img bs=1M count=2048
3. Import the image using glance-direct method:
$ glance image-create-via-import --name test_2gb_import --disk-format raw --container-format bare < /tmp/test_2gb.img
4. Monitor Glance API logs for S3 upload activity
Expected Behavior:
- Multipart upload should be used for images > 100MB (default threshold)
- Upload should use parallel threads (s3_store_thread_pools)
- Upload should be chunked (s3_store_large_object_chunk_size)
- Faster upload performance for large images
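As a rough illustration of what the expected multipart path implies, the chunk count for the 2 GB test image can be computed with a 10 MB chunk size (the value used in the workaround config; the deployment default may differ):

```python
# Back-of-the-envelope chunking math for the 2 GB test image, assuming
# s3_store_large_object_chunk_size = 10 (MB) as in the workaround config.

MB = 1024 * 1024
image_size = 2048 * MB
chunk_size = 10 * MB

num_parts = -(-image_size // chunk_size)  # ceiling division
print(num_parts)  # 205 parts
```

With `s3_store_thread_pools = 10`, those ~205 parts would be uploaded roughly ten at a time rather than as one serial 2 GB request.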
Actual Behavior:
- Single PutObject is used (no multipart)
- No parallelization (single-threaded)
- No chunking (one large request)
- Slow upload performance (e.g., 2GB takes ~5 minutes at ~7 MB/s)
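The reported numbers are internally consistent, as a quick arithmetic check shows:

```python
# Sanity check on the reported figures: 2 GB (2048 MB) at ~7 MB/s.
size_mb = 2048
rate_mb_s = 7

seconds = size_mb / rate_mb_s
print(round(seconds))           # ~293 s
print(round(seconds / 60, 1))   # ~4.9 min
```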
Workaround:
Set `s3_store_large_object_size = 0` in glance-api.conf to force multipart upload for all images, even when size is unknown:
[s3_fast]
s3_store_large_object_size = 0
s3_store_large_object_chunk_size = 10
s3_store_thread_pools = 10
Then restart the Glance API service.
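Why this workaround helps can be seen from the threshold comparison itself. Assuming the driver's check is a simple `image_size < threshold` comparison (a hypothetical sketch, not the real driver code), a threshold of 0 means the single-part branch is unreachable, even for an unknown (`0`) size:

```python
# Hypothetical threshold check: with s3_store_large_object_size = 0,
# image_size < 0 MB is never true, so every upload takes the multipart path.

MB = 1024 * 1024

def choose_upload_strategy(image_size: int, large_object_size_mb: int) -> str:
    if image_size < large_object_size_mb * MB:
        return "single-part"
    return "multipart"

print(choose_upload_strategy(0, 0))  # multipart, even with unknown size
```

The trade-off is that small images also go through multipart upload, which adds per-part request overhead; this is why it is a workaround rather than a fix.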