Story
Resolution: Unresolved
Major
Quality / Stability / Reliability
To keep the quality of test scripts high, I'd like to create an AI tool, maybe a Claude agent in the o/cvo repo, that can perform a code review before a PR is submitted, for example a new Makefile command that can be triggered after the build command. This is just a rough idea.
The code review would include format review, comment review, and naming convention review.
Here are some suggestions from https://github.com/copilot:
- Test Structure & Organization
1. Tests are organized in logical Describe blocks by feature/component
2. Context blocks group related scenarios clearly
3. Related tests are grouped together, not scattered
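A minimal sketch of the Describe/Context organization the items above describe; the `ClusterVersion` feature names and placeholder assertions are hypothetical, not taken from the repo:

```go
package verify_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// One Describe block per feature, with Context blocks per scenario,
// so related specs stay grouped instead of scattered.
var _ = Describe("ClusterVersion upgrade", func() {
	Context("when the target version is valid", func() {
		It("accepts the desired update", func() {
			Expect(true).To(BeTrue()) // placeholder assertion
		})
	})

	Context("when the target version is not in the available updates", func() {
		It("reports a RetrievedUpdates condition", func() {
			Expect(true).To(BeTrue()) // placeholder assertion
		})
	})
})
```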
- Setup & Teardown (BeforeEach/AfterEach)
1. BeforeEach initializes test dependencies properly
2. AfterEach cleans up resources (DB connections, files, mocks, etc.)
3. No shared state between tests (each test is independent)
4. BeforeSuite/AfterSuite used for expensive one-time operations only
5. Cleanup is performed even if tests fail
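A sketch of per-spec setup and cleanup under these rules, assuming a simple temp-directory resource for illustration; `DeferCleanup` runs even when the spec fails:

```go
package verify_test

import (
	"os"
	"path/filepath"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("artifact collection", func() {
	var workDir string

	BeforeEach(func() {
		// Fresh, independent state for every spec: no sharing between tests.
		var err error
		workDir, err = os.MkdirTemp("", "cvo-test-")
		Expect(err).NotTo(HaveOccurred())

		// DeferCleanup runs even if the spec fails, so resources never leak.
		DeferCleanup(func() {
			Expect(os.RemoveAll(workDir)).To(Succeed())
		})
	})

	It("writes results into its own directory", func() {
		Expect(os.WriteFile(filepath.Join(workDir, "result.txt"), []byte("ok"), 0o644)).To(Succeed())
	})
})
```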
- Assertions & Matchers
1. Error assertions include helpful context
2. Assertions are clear and specific
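A sketch of adding helpful context through Gomega's optional annotation arguments; the version-check helper and values are made up for illustration:

```go
package verify_test

import (
	"fmt"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// checkVersion is a hypothetical helper standing in for a real lookup.
func checkVersion(got, want string) error {
	if got != want {
		return fmt.Errorf("version mismatch: got %s, want %s", got, want)
	}
	return nil
}

var _ = It("reports the expected operator version", func() {
	got, want := "4.17.0", "4.17.0"

	// The extra arguments become part of the failure message, giving the
	// reviewer (and CI logs) useful context instead of a bare diff.
	Expect(checkVersion(got, want)).To(Succeed(),
		"operator should report version %s, got %s", want, got)
	Expect(got).To(Equal(want), "unexpected operator version")
})
```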
- Test Data & Fixtures
1. Test data is explicit and easy to understand
2. No hardcoded magic values without explanation
3. Sensitive data is not exposed in tests
- Test Independence & Isolation
1. Tests don't depend on execution order
2. Tests don't share state between runs
3. Tests are isolated from external systems
4. Database/file system operations are isolated
5. No global variables or singletons in tests
- Performance & Flakiness
1. Parallel execution is safe (ginkgo -p)
2. Tests don't interfere with each other
3. No resource leaks (goroutines, connections, files)
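A sketch of one way to keep specs safe under `ginkgo -p`: namespace shared resources per parallel worker. The namespace prefix is illustrative:

```go
package verify_test

import (
	"fmt"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("parallel-safe resources", func() {
	var namespace string

	BeforeEach(func() {
		// GinkgoParallelProcess() returns the index of the current worker,
		// so each parallel process gets its own namespace and specs cannot
		// interfere with each other.
		namespace = fmt.Sprintf("cvo-e2e-%d", GinkgoParallelProcess())
	})

	It("uses an isolated namespace", func() {
		Expect(namespace).NotTo(BeEmpty())
	})
})
```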
- Documentation & Comments
1. Complex test logic is documented
2. Test purpose is clear from the description
3. Edge cases and assumptions are explained
4. Comments explain "why", not "what"
5. No commented-out code
- Code Quality
1. No code duplication (extract into helpers/factories)
2. Helper functions are well-named and documented
3. Test code follows same style as production code
4. No hardcoded paths or environment-specific values
5. Constants used for repeated values
- Ginkgo-Specific Best Practices
1. RegisterFailHandler(Fail) called in test entry point
2. RunSpecs(t, "SuiteName") used correctly
3. Dot imports used appropriately (. "github.com/onsi/ginkgo/v2")
4. Focus specs (FDescribe, FIt) are removed before merge
5. Pending specs (PDescribe, PIt) have clear reasons
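A sketch of the test entry point the Ginkgo-specific items above refer to; the suite name and package are placeholders:

```go
package verify_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// TestVerify is the single entry point `go test` uses to run the Ginkgo suite.
func TestVerify(t *testing.T) {
	// Wire Gomega failures into Ginkgo, then run every spec in the package.
	RegisterFailHandler(Fail)
	RunSpecs(t, "Verify Suite")
}
```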
======================================
- Code Review Template for Ginkgo Tests
*Structure:*
- [ ] Logical organization with Describe/Context blocks
- [ ] Descriptive test names
- [ ] No duplicate tests
*Setup & Cleanup:*
- [ ] Proper BeforeEach/AfterEach
- [ ] Resources cleaned up
- [ ] No shared state
*Assertions:*
- [ ] Uses Gomega matchers
- [ ] Clear and specific
- [ ] One logical assertion per test
*Async Testing:*
- [ ] Eventually/Consistently used correctly
- [ ] No arbitrary time.Sleep
- [ ] Proper timeouts set
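A sketch contrasting an arbitrary time.Sleep with a polled Eventually that sets an explicit timeout and polling interval; the condition being waited on is simulated:

```go
package verify_test

import (
	"sync/atomic"
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("waiting for a condition", func() {
	It("polls instead of sleeping", func() {
		var ready atomic.Bool
		go func() {
			time.Sleep(200 * time.Millisecond) // simulated slow operation
			ready.Store(true)
		}()

		// Instead of time.Sleep(5 * time.Second) followed by a single check,
		// poll the condition with an explicit timeout and interval.
		Eventually(ready.Load).
			WithTimeout(5 * time.Second).
			WithPolling(100 * time.Millisecond).
			Should(BeTrue())
	})
})
```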
*Mocking:*
- [ ] Mocks only external dependencies
- [ ] Clear expectations
- [ ] Not over-specified
*Test Data:*
- [ ] Explicit and clear
- [ ] Builders for complex objects
- [ ] Table-driven for multiple cases
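A sketch of the table-driven style using DescribeTable/Entry from ginkgo/v2; the pre-release helper and version strings are made up for illustration:

```go
package verify_test

import (
	"strings"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// isPrerelease is a hypothetical helper; the real code under test would
// live in the production package.
func isPrerelease(version string) bool {
	return strings.Contains(version, "-")
}

// Each Entry is one explicit, named case, so adding coverage is one line
// and the test data stays easy to read.
var _ = DescribeTable("pre-release detection",
	func(version string, want bool) {
		Expect(isPrerelease(version)).To(Equal(want))
	},
	Entry("GA release", "4.17.0", false),
	Entry("release candidate", "4.17.0-rc.2", true),
	Entry("nightly build", "4.18.0-0.nightly-2024-01-01", true),
)
```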
*Error Handling:*
- [ ] Error cases tested
- [ ] Edge cases covered
- [ ] Error messages verified
*Independence:*
- [ ] Tests don't depend on order
- [ ] No shared state
- [ ] Properly isolated
*Performance:*
- [ ] Fast execution
- [ ] No flaky tests
- [ ] No resource leaks
*Documentation:*
- [ ] Purpose is clear
- [ ] Complex logic explained
- [ ] No commented code
*Quality:*
- [ ] No duplication
- [ ] Good naming
- [ ] Follows conventions
*Coverage:*
- [ ] Adequate coverage
- [ ] Critical paths tested
- [ ] Quality over quantity