Type: Bug
Resolution: Unresolved
Priority: Normal
Security Level: None
Affects Version: 4.21.0
Description of problem:
Some StorageClass names in production environments were found to be too long to use as file names. Because the Insights Operator writes each StorageClass to a file named after it, gathering does not fail at runtime, but a "File name too long" error occurs when handling that file from the archive. This is due to a limitation of the operating system's file system (typically 255 bytes per path component on Linux) and applies only to the file name, not to the full path.
How reproducible:
Straightforward
Steps to Reproduce:
1. In a running cluster, create a new StorageClass with a name longer than 255 characters.
2. Trigger a new gathering of the Insights Operator.
3. Log in to the Insights Operator pod or download the resulting archive.
4. Try to decompress the archive.
Example StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ztcxbqdsvfywlbxikgoghzencgivfulyulepwzzfzxvbkjlgexcmscugjkxfmuygmrduonrsabqlizasdsvthmjhxujtfzgwoluoeegcmkhdvztacbuzinqyjqlbyybtuvspdfzukabroalbvprjogfrudymmbpiatxlizcvzzfmzqpaxorqcmsabnqawbywsrllhgrjpqsviovqwxacyrfujaduwhiiskzqpvhllehlmmzvpghxndwsnvfn
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: pd.csi.storage.gke.io
parameters:
  replication-type: none
  type: pd-standard
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Actual results:
Trying to decompress the archive produces the following error:
tar: config/storage/storageclasses/ztcxbqdsvfywlbxikgoghzencgivfulyulepwzzfzxvbkjlgexcmscugjkxfmuygmrduonrsabqlizasdsvthmjhxujtfzgwoluoeegcmkhdvztacbuzinqyjqlbyybtuvspdfzukabroalbvprjogfrudymmbpiatxlizcvzzfmzqpaxorqcmsabnqawbywsrllhgrjpqsviovqwxacyrfujaduwhiiskzqpvhllehlmmzvpghxndwsnvfn.json: Cannot open: File name too long
Expected results:
Decompressing the archive succeeds without any error.
Additional info:
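One possible direction for a fix, sketched here as an assumption rather than the operator's actual approach: truncate over-long record names and append a short hash of the original name, so the file name stays within the limit while distinct long names still map to distinct files. `safeFileName` and `maxFileNameLen` below are hypothetical.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// maxFileNameLen is the usual per-component limit on Linux filesystems, in bytes.
const maxFileNameLen = 255

// safeFileName is a hypothetical helper, not the operator's actual code:
// if name+ext would exceed the limit, truncate the name and append a short
// hash of the original so distinct long names still map to distinct files.
func safeFileName(name, ext string) string {
	if len(name)+len(ext) <= maxFileNameLen {
		return name + ext
	}
	sum := fmt.Sprintf("%x", sha256.Sum256([]byte(name)))[:8]
	keep := maxFileNameLen - len(ext) - len(sum) - 1 // 1 for the separator
	return name[:keep] + "-" + sum + ext
}

func main() {
	long := string(make([]byte, 300)) // stand-in for an over-long StorageClass name
	fmt.Println(len(safeFileName(long, ".json"))) // prints 255
}
```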