Type: Epic
Resolution: Unresolved
Priority: Normal
Summary: Implement mocked APIs with MSW
Status: To Do
Progress: 100% To Do, 0% In Progress, 0% Done
We currently serve mocked APIs with a Python server. Mocked data is stored in folders mirroring the API endpoint structure, with responses kept in JSON files.
While simple, this system is not flexible. It's difficult to guarantee that the mocked JSON files stay accurate because they are not typed, and it's difficult to simulate different kinds of errors or specific conditions that depend on API request params.
Adopting a modern solution like MSW brings several improvements:
- typed responses let us ensure mocks match the corresponding OpenAPI spec
- we can customize error handling and cover specific scenarios
- we can run MSW or use its mocks inside unit tests
- we can reuse MSW or its handlers inside Storybook when covering legacy components
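To make the first two points concrete, here is a minimal sketch of the resolver logic such a mock could use. The endpoint, the `Cluster` shape, and the `simulate` query param are hypothetical, not taken from the real OpenAPI spec; in MSW v2 this logic would be passed to `http.get(...)` from the `msw` package, but it is shown here as a plain function.

```typescript
// Hypothetical cluster type, standing in for one generated from the OpenAPI spec.
interface Cluster {
  id: string;
  name: string;
  status: 'ready' | 'provisioning' | 'failed';
}

interface MockResult {
  status: number;
  body: Cluster | { error: string };
}

// Because the response is typed, a mock that drifts from the spec
// fails to compile instead of silently returning stale JSON.
function resolveCluster(id: string, params: URLSearchParams): MockResult {
  // Error scenarios are driven by request params, something the
  // static JSON-file approach could not express.
  if (params.get('simulate') === 'server-error') {
    return { status: 500, body: { error: 'internal error' } };
  }
  if (id === 'missing') {
    return { status: 404, body: { error: `cluster ${id} not found` } };
  }
  return {
    status: 200,
    body: { id, name: `cluster-${id}`, status: 'ready' },
  };
}
```

For example, `resolveCluster('abc', new URLSearchParams('simulate=server-error'))` yields a 500 response, letting a test cover the failure path without touching the mock data on disk.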
We currently have some scripts (see /mockdata/record*.sh) to record API responses to JSON files and add new clusters to the mocks. After switching to TS mocks we could reuse a base cluster response and manually alter only what's needed, instead of recording whole clusters and their related responses every time. We probably won't need to port all of those scripts; we can investigate which ones are worth keeping.
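The base-response idea could be sketched as follows. The `Cluster` shape and field names here are hypothetical placeholders, not the real spec:

```typescript
// Hypothetical cluster shape used for illustration only.
interface Cluster {
  id: string;
  name: string;
  status: string;
  nodeCount: number;
}

// One recorded (or hand-written) base response, shared by all scenarios.
const baseCluster: Cluster = {
  id: 'base',
  name: 'base-cluster',
  status: 'ready',
  nodeCount: 3,
};

// Derive a variant by overriding only what the scenario needs;
// the spread keeps every other field from the base response.
function makeCluster(overrides: Partial<Cluster>): Cluster {
  return { ...baseCluster, ...overrides };
}

const failedCluster = makeCluster({ id: 'c-42', status: 'failed' });
// failedCluster.nodeCount is still 3, inherited from the base.
```

This is what would replace re-recording a whole cluster per scenario: one base fixture plus small, typed overrides.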
Acceptance Criteria
Replace the Python server with MSW
Convert the existing JSON mocks to MSW handlers with typed responses
Evaluate which recording scripts (see /mockdata/record*.sh) are still needed with the new system and, where needed, convert them to JS/TS.