Bug
Resolution: Unresolved
Minor
Description
Summary:
The TestFP8TypesCPU test class is failing during PyTorch CPU unit test execution because an expected exception is not raised.
Test Class: inductor/test_fp8.py::TestFP8TypesCPU
Number of Failing Tests: 1
Platform: CPU
Test Type: Unit Test
Version Information:
- PyTorch Commit: 6bdd8c9
- Branch: main
- Test Date: 2026-01-14
- Python Version: 3.12.11
- Sprint: Sprint 24
Failure Pattern:
Single root cause - expected exception not raised
Common Error:
AssertionError: BackendCompilerFailed not raised
Failing Tests:
1. test_bad_cast_cpu
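To make the reported AssertionError concrete, here is a minimal sketch in the style such a test typically takes; the exact body of test_bad_cast_cpu, the tensor shape, and the specific FP8-to-FP8 cast are assumptions for illustration, not copied from the repository:

import unittest

import torch
from torch._dynamo.exc import BackendCompilerFailed


class FP8BadCastSketch(unittest.TestCase):
    def test_bad_cast_cpu(self):
        def fp8_cast(x, dtype):
            # Cast between two FP8 formats, which inductor is expected to reject
            return x.to(dtype=dtype)

        compiled_cast = torch.compile(fp8_cast, backend="inductor")
        x = torch.randn(16, 16, device="cpu").to(torch.float8_e4m3fn)
        # "AssertionError: BackendCompilerFailed not raised" means this block
        # completes without the expected exception being thrown.
        with self.assertRaises(BackendCompilerFailed):
            compiled_cast(x, torch.float8_e5m2)


if __name__ == "__main__":
    unittest.main()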
Steps to Reproduce:
1. Set up PyTorch environment with Python 3.12
2. Execute the failing test:
TEST_CONFIG=cpu python3 test/run_test.py -i inductor/test_fp8
TEST_CONFIG=cuda python3 test/run_test.py -i inductor/test_fp8
TEST_CONFIG=inductor python3 test/run_test.py -i inductor/test_fp8
3. Observe AssertionError indicating expected exception was not raised
Expected Result:
The test should verify that an invalid FP8 cast operation raises a BackendCompilerFailed exception.
Actual Result:
The test fails because the BackendCompilerFailed exception is not raised for the bad FP8 cast.
Root Cause Analysis:
The failure indicates:
- Invalid FP8 cast operation is not being properly detected by backend compiler
- Compiler is accepting or silently handling the bad cast instead of failing (see the diagnostic sketch after this list)
- FP8 type validation may have been relaxed or is missing on CPU backend
- Test expectation may need updating if behavior changed intentionally
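A small diagnostic along these lines can show whether the compiled path on CPU silently accepts a cast that should be rejected; the specific float8_e4m3fn to float8_e5m2 cast is an assumption used for illustration:

import torch
from torch._dynamo.exc import BackendCompilerFailed

x = torch.randn(8, 8).to(torch.float8_e4m3fn)

# Eager behavior of the FP8-to-FP8 cast
try:
    y = x.to(torch.float8_e5m2)
    print("eager: cast succeeded ->", y.dtype)
except RuntimeError as exc:
    print("eager: cast rejected ->", exc)

# Inductor-compiled behavior of the same cast on CPU
compiled = torch.compile(lambda t: t.to(torch.float8_e5m2), backend="inductor")
try:
    z = compiled(x)
    print("inductor: cast accepted ->", z.dtype)  # behavior the test currently observes
except BackendCompilerFailed as exc:
    print("inductor: cast rejected ->", exc)      # behavior the test expects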
Potential Solutions:
1. Add proper FP8 type validation in CPU backend compiler
2. Ensure invalid cast operations are detected and raise appropriate exceptions (the sketch after this list shows how such an error surfaces as BackendCompilerFailed)
3. Review recent changes to FP8 type handling in inductor
4. Update test expectations if new behavior is correct
5. Check if CPU backend should support FP8 operations differently than GPU
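The following is not the actual inductor fix (the real validation would live inside the CPU backend's lowering); it is only a sketch of the mechanism solutions 1 and 2 rely on, under the assumption that an error raised while a backend compiles a captured graph is surfaced by dynamo as BackendCompilerFailed. The backend name and the graph-metadata keys inspected are illustrative:

import torch
from torch._dynamo.exc import BackendCompilerFailed

FP8_DTYPES = {torch.float8_e4m3fn, torch.float8_e5m2}

def fp8_validating_backend(gm, example_inputs):
    # Inspect the FX graph captured by dynamo and reject graphs mixing FP8 formats.
    seen = set()
    for node in gm.graph.nodes:
        # The metadata key holding the traced fake tensor varies by version.
        val = node.meta.get("val", node.meta.get("example_value"))
        if isinstance(val, torch.Tensor) and val.dtype in FP8_DTYPES:
            seen.add(val.dtype)
    if len(seen) > 1:
        raise RuntimeError(f"Conversion between {sorted(str(d) for d in seen)} is not supported")
    return gm.forward  # fall back to eager execution in this sketch instead of calling inductor

compiled = torch.compile(lambda t: t.to(torch.float8_e5m2), backend=fp8_validating_backend)
x = torch.randn(4, 4).to(torch.float8_e4m3fn)
try:
    compiled(x)
except (BackendCompilerFailed, RuntimeError) as exc:  # older versions may propagate the raw error
    print("cast rejected:", exc)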
Logs:
Test execution logs available in CPU test suite
Log location: /home/ktanmay/Downloads/Run 1-20260120T060019Z-1-001/Run 1/20260114_024940_commit_6bdd8c9/cpu_tests.log
Additional Context:
- FP8 (8-bit floating point) is primarily used for GPU training
- Test verifies error handling for invalid FP8 operations on CPU
- Related ticket AIPCC-8253 exists for a similar failure on the GPU platform
- May indicate a difference in FP8 support between the CPU and GPU backends (a quick capability probe is sketched below)
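To compare FP8 support between the backends mentioned above on a given machine, a quick capability probe such as the following (illustrative only) can be run:

import torch

fp8_dtypes = [torch.float8_e4m3fn, torch.float8_e5m2]
devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])

for device in devices:
    for dtype in fp8_dtypes:
        try:
            t = torch.randn(4, 4, device=device).to(dtype)
            t.to(torch.float32)  # round-trip back to float32
            status = "cast round-trip ok"
        except Exception as exc:
            status = f"failed: {exc}"
        print(f"{device} / {dtype}: {status}")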
Severity: Medium
Priority: P3