Container Tools / RUN-3243

Investigate feasibility and impact of skipping tests on Podman GitHub PRs


    • rhel-container-tools
    • RUN 277

      This spike aims to investigate the technical feasibility, potential benefits, and risks associated with implementing a mechanism to allow developers to intentionally skip some or all tests on Podman GitHub Pull Requests (PRs).

      The primary motivation is to improve developer velocity by reducing CI/CD build times for minor changes or documentation-only PRs, where a full test suite run might be unnecessary and time-consuming. This also extends to intelligently skipping tests when the changes are unrelated to specific environments or EOL distros. This must be balanced against maintaining code quality and preventing regressions.

      Goals of this Spike:

      • Understand Current CI/CD Setup:
        • Map out the current GitHub Actions (or other CI) workflows triggered on Podman PRs.
        • Identify which tests are run and their typical execution times.
        • Determine how PRs are currently classified (e.g., code changes, docs-only, dependencies).
      • Research Skipping Mechanisms:
        • Investigate common patterns and best practices for conditionally skipping CI/CD jobs/steps in GitHub Actions. This might include:
          • Using commit messages (e.g., [ci skip], [skip tests]).
          • Using PR labels (e.g., ci/skip-tests).
          • Analyzing changed files (e.g., paths-ignore in workflows, custom scripts to check git diff).
          • Leveraging GitHub Actions expressions and conditions.
        • Explore tools or actions that facilitate this (e.g., those that check commit messages or PR titles).
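As a rough illustration of the commit-message pattern mentioned above, a minimal gate could look like the sketch below. The marker set (`[ci skip]`, `[skip ci]`, `[skip tests]`) follows common CI conventions and is an assumption; deciding which markers Podman would actually honor is part of this spike.

```shell
# Minimal sketch of a commit-message skip gate (markers are assumptions).
# should_skip: succeeds (exit 0) when the message contains a known marker.
should_skip() {
  case "$1" in
    *"[ci skip]"*|*"[skip ci]"*|*"[skip tests]"*) return 0 ;;
    *) return 1 ;;
  esac
}

# In CI the message would typically come from: git log -1 --pretty=%B
should_skip "docs: fix typo [skip tests]" && echo "skip"
should_skip "fix: real bug in networking" || echo "run"
```

A label-based variant would follow the same shape, testing the PR's labels instead of the commit message.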
      • Investigate Automated Skipping Logic based on Change Type and Environment:
        • Code Changes: How can CI/CD detect if a PR only affects documentation, comments, or very specific, isolated code paths that don't warrant a full test suite? (e.g., using git diff, file path patterns).
        • Distribution/Environment Specificity: Can tests be skipped if the changes are entirely unrelated to specific Linux distributions, operating systems (e.g., macOS, Windows if applicable), or architectures (e.g., s390x, ppc64le)?
        • End-of-Life (EOL) Distros: Can tests for EOL distributions be automatically skipped for general PRs, only running them if the PR specifically targets EOL distro compatibility or fixes a bug unique to them?
        • Other Contextual Skips: Explore other intelligent criteria for skipping tests (e.g., if only build tooling changes, if only dependencies are updated, if specific sub-components are untouched).
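The docs-only detection described above could be sketched as a small path classifier. The patterns below are assumptions about what counts as documentation, not Podman's actual repository rules:

```shell
# Illustrative docs-only detector; path patterns are assumptions.
# docs_only: succeeds only when every changed path looks like documentation.
docs_only() {
  for f in "$@"; do
    case "$f" in
      *.md|docs/*|README*|CONTRIBUTING*) ;;  # documentation-like path
      *) return 1 ;;                         # anything else needs full CI
    esac
  done
  return 0
}

# In CI the file list would come from:
#   git diff --name-only "${BASE_SHA}...${HEAD_SHA}"
docs_only README.md docs/tutorials/basics.md && echo "docs-only: tests skippable"
docs_only README.md libpod/container.go || echo "code touched: run full suite"
```

The same structure extends to the distro/architecture and EOL cases by adding path patterns (or ownership metadata) per environment.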
      • Identify Use Cases for Skipping:
        • Documentation-only changes: PRs that modify only Markdown files, code comments, or other documentation.
        • Trivial code changes: Extremely minor code fixes (e.g., typos in comments, small refactors with no functional impact) where a full test run might be overkill.
        • Dependency updates: For automated dependency updates where a specific set of tests might be sufficient, or only linting/build checks are needed initially.
        • Force-push scenarios (with caution): Investigating if there's a safe way to re-run specific subsets of tests after a force push without rerunning everything.
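For the dependency-update use case, one hypothetical shape is mapping the change type to a test subset. The suite names here ("full", "build-and-lint") are placeholders; mapping them to real Podman CI jobs would be an outcome of the spike:

```shell
# Sketch: choose a test subset by change type (suite names are placeholders).
# select_suite: prints "build-and-lint" for pure dependency bumps, else "full".
select_suite() {
  for f in "$@"; do
    case "$f" in
      go.mod|go.sum|vendor/*) ;;   # dependency-only change so far
      *) echo "full"; return ;;    # any other file: run everything
    esac
  done
  echo "build-and-lint"            # pure dependency bump: lighter checks first
}

select_suite go.mod go.sum vendor/modules.txt
select_suite go.mod pkg/api/server.go
```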
      • Assess Risks and Mitigations:
        • Risk of introducing regressions: How to ensure that necessary tests are always run for functional changes.
        • Developer misuse: How to prevent developers from indiscriminately skipping tests.
        • Maintainability of CI/CD: How to keep the skipping logic clear and easy to maintain, especially with automated rules.
        • Reporting: How to clearly indicate when tests were skipped (manually or automatically) in the GitHub PR UI.
        • Impact on test coverage metrics: How skipped tests might affect internal metrics.
        • Complexity: Balancing the benefits of automated skipping with the complexity of maintaining the logic.
      • Propose a Recommendation:
        • Suggest one or more viable approaches for implementing test skipping, including both manual and automated methods.
        • Outline the pros and cons of each approach.
        • Recommend a policy or guidelines for when and how developers should use the skipping mechanism (manual) and when it will be automatically applied.
        • Provide a high-level plan for implementation, including any necessary changes to GitHub Actions workflows or repository settings.

      Deliverables:

      • A summary document (e.g., Confluence page, Google Doc) detailing:
        • Current CI/CD overview for Podman PRs.
        • Analysis of investigated skipping mechanisms (manual and automated).
        • Identified use cases and proposed policy/guidelines for both manual and automated skips.
        • Assessment of risks and mitigation strategies.
        • A clear recommendation for implementation.
      • A brief presentation or discussion with relevant stakeholders (e.g., maintainers, core contributors) to share findings and solicit feedback.
      • If applicable, a simple proof-of-concept (e.g., a branch with a minimal workflow demonstrating a skipping mechanism, perhaps for docs-only changes).
      • A list of potential follow-up tasks (e.g., new user stories for implementing the chosen solution, updates to contributing guidelines).
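One possible shape for the docs-only proof-of-concept gate mentioned above: check for an explicit marker first, then fall back to a changed-path check. Both the marker and the path patterns are assumptions for illustration:

```shell
# Hypothetical PoC gate: explicit skip marker first, then docs-only path check.
# decide: prints a skip/run decision with a short reason.
decide() {
  msg=$1; shift
  case "$msg" in
    *"[skip tests]"*) echo "skip: explicit marker"; return ;;
  esac
  for f in "$@"; do
    case "$f" in
      *.md|docs/*) ;;                               # documentation-like path
      *) echo "run: non-doc change ($f)"; return ;; # code change: run suite
    esac
  done
  echo "skip: docs-only"
}

decide "docs: update install guide" docs/install.md
decide "fix: volume mount bug" libpod/volume.go
```

In a workflow, the decision would gate subsequent jobs, and the printed reason would surface in the PR checks UI to keep skips visible to reviewers.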

              rh-ee-tizhou Tim Zhou
              mboddu Mohan Boddu
              Votes: 0
              Watchers: 2
