Type: Feature Request
Status: Open
Affects Version/s: None
Fix Version/s: None
As mentioned on https://github.com/bytemanproject/byteman/pull/43, it would be useful to have a way of writing tests which assert that a rule fails to compile. Ideally a test could also match on the specific failure reason.
A test added for BYTEMAN-307 could use this functionality, and other IMPORT-related tests could benefit as well, such as ensuring that classes are not visible without an IMPORT.
Andrew's comments from the PR:
It would be possible to implement a test which uses Install to auto-load the agent and Submit to submit the rule, exercises the trigger method (to ensure the rule gets processed), and then uses the same Submit instance to check the status of the rule. As it currently stands, that would require parsing the output of the Submit task to see whether the rule suffered any injection failures.
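To make the "parse the Submit output" workaround concrete, here is a minimal, self-contained sketch of such a check. The helper method and the assumption that failure lines contain an "ERROR" token are purely illustrative; the actual format of the Submit task's output would need to be verified against the Byteman version in use.

```java
import java.util.ArrayList;
import java.util.List;

public class SubmitOutputCheck {

    // Hypothetical helper: scan the text produced by the Submit task for
    // lines reporting an injection failure. The "ERROR" token is an
    // assumption made for illustration, not a documented output contract.
    static List<String> injectionFailures(String submitOutput) {
        List<String> failures = new ArrayList<>();
        for (String line : submitOutput.split("\n")) {
            if (line.contains("ERROR")) {
                failures.add(line.trim());
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        String output = "install rule my rule\n"
                + "ERROR : could not type check rule \"my rule\"";
        System.out.println(injectionFailures(output));
    }
}
```

This is exactly the kind of fragile string matching the proposed Submit API addition would make unnecessary, since tests could read the injection state in a pre-parsed form instead.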
It would be sensible to add something to the Submit API itself which allowed the current injection state of rules to be retrieved in a pre-parsed form. Indeed, on top of that BMUnit could export some capability to validate rule injection status.
For example, BMUnit could retrieve the injection status of all rules before it unsubmits at test or class scope, and fail the test if any rule was not injected correctly, modulo some annotation on the test. The annotation could identify rules which are expected to fail with a parse error, a type error, or a type warning (i.e. must fail that way), and rules which are allowed to fail with one or more of them (may fail). We might even be able to be more specific than that about the details of the failure.
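The proposed annotation might look something like the sketch below. The annotation name, the Kind enum, and all element names are hypothetical, invented here for illustration; BMUnit does not currently provide any of this.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class BMExpectFailureSketch {

    // Hypothetical kinds of injection failure a rule could be expected
    // to produce, matching the categories discussed in the PR comments.
    enum Kind { PARSE_ERROR, TYPE_ERROR, TYPE_WARNING }

    // Hypothetical annotation: everything here is a sketch, not BMUnit API.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface BMExpectFailure {
        // Name of the rule whose injection status is being asserted.
        String rule();

        // Failure kinds the rule MUST exhibit ("must fail that way").
        Kind[] mustFail() default {};

        // Failure kinds the rule MAY exhibit without failing the test.
        Kind[] mayFail() default {};
    }

    // Example test method: the rule is expected to fail type checking,
    // e.g. because it references a class without the required IMPORT.
    @BMExpectFailure(rule = "bad import rule", mustFail = Kind.TYPE_ERROR)
    void testRuleWithMissingImport() {
        // trigger code would go here
    }

    public static void main(String[] args) throws Exception {
        BMExpectFailure a = BMExpectFailureSketch.class
                .getDeclaredMethod("testRuleWithMissingImport")
                .getAnnotation(BMExpectFailure.class);
        System.out.println(a.rule() + " mustFail=" + a.mustFail().length);
    }
}
```

Before unsubmitting at test or class scope, BMUnit could reflect on such annotations and compare them against the (pre-parsed) injection status retrieved from Submit.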
Most of the work could probably be shared with BYTEMAN-308, which is about reporting rule failures in more detail: this feature is about a test being able to expect a failure, while that one is about reporting it to the user.