MGDSRVS-43 - Support Customer with different limits in the fleet
Type: Task
Priority: Major
Resolution: Done
Sprint: MK - Sprint 221
WHAT
The KafkaRequest schema returns kafka_storage_size in the following format:
{ "id": "dummy-kafka-id", ... "kafka_storage_size": "10Gi" ... }
This schema is used for the following endpoints:
In comparison, the SupportedKafkaInstanceSize schema returns 'Quantity' limits in the following format:
{ "instance_types": { "value": { "id": "developer", "display_name": "Trial", "sizes": [ { "id": "x1", "max_data_retention_size": { # same as kafka_storage_size "bytes": 1073741800000 }, ... } ] } } }
This schema is used for the following endpoints:
WHY
The same kind of capacity limit is currently reported in two different formats: KafkaRequest returns a Kubernetes-style quantity string ("10Gi"), while SupportedKafkaInstanceSize returns an object with a raw byte count. These reported capacity limits should use one consistent format.
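For context, the two representations are not directly comparable without a conversion. A minimal Go sketch, assuming the k8s.io/apimachinery resource package (an assumption for illustration; the ticket does not name a parsing library):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// KafkaRequest reports kafka_storage_size as a Kubernetes-style
	// quantity string, e.g. "10Gi".
	q := resource.MustParse("10Gi")

	// SupportedKafkaInstanceSize reports the equivalent limit as a raw
	// byte count, so comparing the two requires an explicit conversion.
	fmt.Printf("%s = %d bytes\n", q.String(), q.Value())
	// Output: 10Gi = 10737418240 bytes
}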
HOW
Reach an agreement on the correct format for these fields and modify the schemas in the OpenAPI spec accordingly.
The naming of kafka_storage_size/max_data_retention_size should also be made consistent in the API response to avoid confusion. A sketch of one possible outcome follows.
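If the agreement lands on the object form already used by SupportedKafkaInstanceSize, the aligned Go models could look roughly like the sketch below. This is illustrative only, not the decided format; QuantityBytes is a hypothetical name, not an existing type in the spec:

package api

// QuantityBytes mirrors the {"bytes": ...} object that
// SupportedKafkaInstanceSize already returns for max_data_retention_size.
type QuantityBytes struct {
	Bytes int64 `json:"bytes"`
}

// KafkaRequest would then report the limit under the same name and in the
// same shape, instead of a "10Gi" quantity string.
type KafkaRequest struct {
	ID                   string        `json:"id"`
	MaxDataRetentionSize QuantityBytes `json:"max_data_retention_size"`
	// ... other KafkaRequest fields elided
}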
DONE
- Both KafkaRequest and SupportedKafkaInstanceSize schemas should have the same return format for 'Quantity' limits such as max_data_retention_size/kafka_storage_size
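Building on the hypothetical types sketched under HOW, a unit test along these lines could pin the criterion down:

package api

import (
	"encoding/json"
	"testing"
)

// Verifies that KafkaRequest serializes the limit in the same
// {"bytes": ...} shape that SupportedKafkaInstanceSize returns.
func TestQuantityLimitFormat(t *testing.T) {
	kr := KafkaRequest{
		ID:                   "dummy-kafka-id",
		MaxDataRetentionSize: QuantityBytes{Bytes: 1073741800000},
	}

	got, err := json.Marshal(kr)
	if err != nil {
		t.Fatal(err)
	}

	want := `{"id":"dummy-kafka-id","max_data_retention_size":{"bytes":1073741800000}}`
	if string(got) != want {
		t.Fatalf("got %s, want %s", got, want)
	}
}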
Guidelines
The following steps should be adhered to:
- Required tests should be put in place - unit, integration, manual test cases (if necessary)
- CI and all relevant tests passing
- Changes have been verified by one additional reviewer against:
- each required environment
- each supported upgrade path
- If the changes could have an impact on the clients (either UI or CLI), a JIRA should be created for making the required changes on the client side and acknowledged by one of the client-side team members
- PR has been merged
is related to:
- MGDSTRM-8982 Use max_data_retention_size on kafka storage expansion (Closed)
- MGDSTRM-8970 Remove kafka_storage_size from the public api (Closed)
- MGDSTRM-8478 Group reported limits as its own object in the KafkaRequest schema (Closed)