Suppose that while validating a work item, one of its acceptance criteria fails, but after discussing the failed status with your Product Owner she decides to overlook that criterion for now (because it is a “rare use case, after all”). What test status would you give this “failed but now dropped” test scenario / step: Failed, Blocked, or Untested?
I do not want to create a test report that has any red / failed status, given this particular circumstance. (Note that I have fully documented the test run with these details, for the sake of posterity – in the event this use case returns to us later in the form of a feature enhancement request or customer complaint.)
How about “Accepted as failed”?
I try to avoid any test reporting with hard-wired statuses.
It’s a limitation that hides important information.
There are more statuses than any developer could predict,
e.g. “I found 3 bugs, had to work around 2, still have to investigate 1, and am currently paused because of an interruption”.
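To make that concrete, here is a minimal sketch of what I mean, assuming a home-grown Python harness (the class and field names are hypothetical, not taken from any particular test management tool): the status stays a coarse, tool-friendly label, and free-text notes carry the information the label hides.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    PASSED = "passed"
    FAILED = "failed"
    BLOCKED = "blocked"
    UNTESTED = "untested"


@dataclass
class StepResult:
    step: str
    status: Status                                   # coarse, tool-friendly label
    notes: list[str] = field(default_factory=list)   # the detail the label hides


# Hypothetical example: the "failed but deferred by the PO" case from the question.
result = StepResult(
    step="Acceptance criterion: rare-use-case scenario",
    status=Status.FAILED,
    notes=[
        "Step fails as described in the test run.",
        "PO chose to defer the fix for this release ('rare use case').",
        "Revisit if an enhancement request or customer complaint brings this back.",
    ],
)
print(result.status.value, "-", "; ".join(result.notes))
```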
I think a lot depends on your corporate culture and the nature of the product under test.
If you are a small company with a good professional relationship with the PO, and/or if your product is not public-facing and its functions are not critical to life, limb or prosperity, then by all means mark the test run as Passed, but with reservations or non-material failures noted (and, of course, record your findings and, if they’re not recorded anywhere else, the conversation with the PO, their decision not to address the failure before release, and any reasons they gave).
If, however:
- you are a large organisation and these decisions are a long way down the food chain from senior management; or
- your product is public-facing, and failures would impact public health, safety, livelihoods or savings,
then you must mark the test prominently as FAILED, make it clear what recommendations you gave the PO, and make it clear that the decision to release was not based on your assessment. I know we’re not, as a testing community, in favour of testers “signing off” on products before release, but that’s not how a lot of the public and the judiciary see it. If the product is, in terms of the testing, not “fit for release”, then any blame (and I’m sorry to make it into a blame game) that may accrue from the consequences of that release shouldn’t come to rest on your shoulders. Talking to a hostile press corps should not be in any tester’s job description.