There is a common misconception that the goal of Quality Assurance is simply to verify that code matches documentation. But the name “Quality Assurance” implies something much deeper. Assuring quality is about far more than satisfying a set of requirements created in a vacuum; it is about how the user’s experience is realized in a real-world context.

In the “vacuum” of the mockup phase, everything is perfect. The data is clean, the screen sizes are standard, and the user always follows the intended path. But “Minimum Quality” does not exist in a vacuum. It is defined by the user’s lived experience with the final implementation.

If a feature meets every written requirement but feels clunky, looks broken on an unusual screen size, or creates friction during a non-standard interaction, it has failed the quality test. True quality isn’t found in the plan; it’s found in the implementation.

Defining the Terminology

To achieve a holistic view of quality, we must distinguish between technical errors and experiential failures:

  • A “bug” is a discrepancy between the technical requirements/mockups and the actual implementation. These are objectively wrong; the code failed the instructions.
  • An “issue” is anything that creates a sub-optimal or frustrating user experience, even if it “technically” follows the requirements. These are contextually wrong; the instructions failed the user.

The Safety of the Script

Historically, QA has focused on bugs because they are safe and objective; pointing out a broken rule requires no bravery. However, identifying issues requires a level of professional courage and critical thinking often missing in standard testing.

This “bug-only” mentality may be a byproduct of two factors:

  1. A Lack of Confidence: Many QA professionals haven’t been empowered—or trained—to act as user advocates. Fearing pushback for “subjective” opinions, they retreat to the safety of the requirements document.
  2. Procedural Laziness: It is cognitively easier to follow a checklist than it is to empathetically simulate a user’s frustration. True quality assurance requires the tester to move beyond the script, experiment with edge cases, and think critically about the design’s original intent.

To truly assure quality, we must move past the safety of the checklist. This requires a culture where managers actively encourage subjective feedback. Even if a stakeholder chooses not to address an experiential issue, awareness is always superior to ignorance because it enables informed decision-making. By identifying these concerns early, we allow clients to weigh UX trade-offs before launch, rather than being blindsided by them after.

The Subjectivity Gap

Standard QA often misses “issues” because they are subjective. There is no “wrong” answer in a requirements doc for a layout that looks slightly awkward on an ultra-wide monitor, so it passes.

Quality Assurance should aim to catch both. By identifying the subjective friction points that standard testing overlooks, we ensure a polished final product rather than just a functional one.


How to Test for Subjective Issues

Testing for experience requires moving away from the “ideal” environment of a Figma file and into the messy reality of user behavior. Here are three methods I use to surface these hidden issues:

1. Stress-Test Your Content

Mockups are usually built with “perfect” data—short names, centered images, and exactly three lines of text. Reality is rarely that clean.

  • Testing methods: Test with no content at all, with longer-than-expected strings of text, and with oddly sized images (see the sketch after this list).
  • Action: Flag any configuration that looks “unusual” for design review.
    • Example: When the title is very long, the text overflows the card.
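
To make this concrete, here is a minimal sketch of a content stress test using Playwright with TypeScript. The URL, the .card-title selector, and the sample strings are all hypothetical placeholders; the point is the shape of the probe, not the specifics.

```ts
import { test, expect } from '@playwright/test';

// Content edge cases that “perfect” mockup data never covers.
const edgeCaseTitles = [
  '', // no content at all
  'A far longer title than any mockup ever anticipated '.repeat(4), // very long text
  'Supercalifragilisticexpialidocious-unbreakable-compound-identifier', // one unbreakable word
];

test('card title survives edge-case content', async ({ page }) => {
  await page.goto('https://example.com/cards'); // hypothetical page under test

  const title = page.locator('.card-title').first(); // hypothetical selector

  for (const text of edgeCaseTitles) {
    // Inject the edge case directly into the DOM as a quick visual probe.
    await title.evaluate((el, value) => { el.textContent = value; }, text);

    // Overflow is worth flagging for design review even though no written
    // requirement forbids it.
    const overflows = await title.evaluate((el) => el.scrollWidth > el.clientWidth);
    expect(overflows, `title overflows with: ${JSON.stringify(text)}`).toBe(false);
  }
});
```

A passing assertion here only means nothing overflowed; whether the result actually looks good is still a human judgment call.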

2. Test the “In-Between” Spaces

Users don’t just exist at the specific breakpoints illustrated in the mockups. They exist everywhere in between.

  • Testing methods: Drag the viewport through the full spectrum of sizes, paying special attention to “extra-wide” and “in-between” sizes (see the sketch after this list).
  • Action: Flag breakpoints where the layout feels “awkward”.
    • Example: On viewports between 800px and 1000px, the description wraps excessively, making the card tall and thin.
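
One way to approach this sweep is a small Playwright script like the sketch below. The URL and .card selector are assumptions, and the aspect-ratio check is only a rough heuristic for the “tall and thin” example above; the screenshots still need a human eye, since “awkward” is a judgment call no assertion can make.

```ts
import { test } from '@playwright/test';

test('viewport sweep for awkward in-between layouts', async ({ page }) => {
  await page.goto('https://example.com/cards'); // hypothetical page under test

  // Walk from small-phone width to ultra-wide in 80px steps, pausing at the
  // “in-between” sizes the mockups never illustrated.
  for (let width = 320; width <= 2560; width += 80) {
    await page.setViewportSize({ width, height: 900 });

    // Capture a screenshot at every step for a human review pass.
    await page.screenshot({ path: `sweep/width-${width}.png`, fullPage: true });

    // Rough heuristic for the “tall and thin” failure mode: flag any card
    // whose height exceeds ~2.5x its width.
    const box = await page.locator('.card').first().boundingBox(); // hypothetical selector
    if (box && box.height > box.width * 2.5) {
      console.warn(`Card looks tall and thin at ${width}px wide`);
    }
  }
});
```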

3. Stray from the “Happy Path”

The “Happy Path” is the journey we hope the user takes. QA’s job is to find the alternate routes.

  • Testing methods: Experiment with unusual workflows, weird inputs, and non-linear interactions (see the sketch after this list).
  • Action: Flag unexpected behaviors for functional requirement clarification.
    • Example: When I enter a phone number with a country code (e.g. +1) into the ‘phone number’ field, I cannot submit the form because the field only accepts digits.
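
The phone-number example could be probed with a small parameterized test like the sketch below; the form URL, the #phone field, and the .field-error selector are assumptions to adapt to your own app.

```ts
import { test } from '@playwright/test';

// Phone formats real users actually type; the requirements may have
// imagined only the first one.
const phoneVariants = [
  '5551234567',      // digits only: the “happy path”
  '+1 555 123 4567', // country code, as in the example above
  '(555) 123-4567',  // punctuation
  '555.123.4567',    // dots
];

test('phone field tolerates real-world formats', async ({ page }) => {
  for (const phone of phoneVariants) {
    await page.goto('https://example.com/contact'); // hypothetical form under test

    await page.locator('#phone').fill(phone); // hypothetical field id
    await page.locator('button[type="submit"]').click();

    // Whether a rejection here is a bug or an issue depends on the written
    // requirements; either way, log the behavior so stakeholders can decide.
    const rejected = await page.locator('.field-error').isVisible(); // hypothetical selector
    console.log(`${phone} -> ${rejected ? 'rejected' : 'accepted'}`);
  }
});
```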

Quality is a Choice

Just because an issue isn’t a “bug” doesn’t mean it should be ignored until after launch. Leaving these issues unresolved harms the user experience and, ultimately, your conversion rate.

At the very least, stakeholders should be given the option to address these items. While fixing a bug is a straightforward correction, fixing an issue often requires a collaborative loop: Design provides updated direction, and Requirements are clarified.

If the goal of QA is truly to assure quality, we must look beyond what is “definitively wrong” and fight for what is “exceptionally right.”

Pro Tip: Fixing a bug doesn’t usually require mockup revisions or updated requirements, while fixing an issue often does. Remember that mockups are requirements. If an issue requires a design change, updating the mockup is the best way to document the updated requirements for the development team. In such cases, the written requirements may not need to change, because the revised mockup itself captures the new requirement.

Issues vs. Opportunities

One final distinction to note is the difference between an “issue” and an “opportunity.” While both represent potential improvements, their impact on the release cycle differs:

  • An opportunity is an enhancement (a “nice-to-have”) that improves the product but is not required for a feature to be considered “complete.” These are typically documented for future roadmaps. Opportunities do not block a release.
  • An issue is a critical opportunity that must be resolved to meet the minimum acceptable standard of quality. A feature is not considered “complete” until all issues are addressed. Issues block a release.

Ultimately, the client is the arbiter of what defines “acceptable quality.” Prioritizing which issues require immediate action and which opportunities can be roadmapped for future development is itself a subjective exercise. That informed decision-making can only happen once the suggestions have been identified and logged, which is why surfacing them must be part of Quality Assurance.