In the previous article, I explained the FIRST principles: Fast, Isolated, Repeatable, Self-validated, and Timely. It's a solid blueprint for creating resilient tests and, as the name itself suggests, a very good way for beginners to get started. However, as we gain experience, we see that it's possible to do everything right and still have things not work as expected. That's why I present a more advanced second reading of FIRST: a new mindset.
When the FIRST Principles Don't Work
The FIRST principles are a good starting point, and they establish a series of minimum concepts to follow. They help us understand testing fundamentals, such as how relying on random values or the current date makes a test harder to repeat (it doesn't fulfill Repeatable), and therefore why we need to look for alternatives (aka mocks).
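For instance, here is a minimal sketch of that idea with Jest, assuming a hypothetical generateDiscountCode() that internally combines Math.random() and the current date (the expected value is purely illustrative):
// Hypothetical: generateDiscountCode() builds a code from Math.random() and today's date
test('generates a predictable discount code', () => {
  // Pin down the non-deterministic inputs so the test fulfills Repeatable
  jest.spyOn(Math, 'random').mockReturnValue(0.5);
  jest.useFakeTimers();
  jest.setSystemTime(new Date('2024-01-01'));
  expect(generateDiscountCode()).toBe('M-2024-01-01'); // illustrative expected value
  jest.useRealTimers();
  jest.restoreAllMocks();
});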
But once these minimum principles are achieved, we might still find that the tests don't work well. We can end up creating a very fast, isolated, and repeatable test, but despite passing successfully, the software unexpectedly fails in production.
Indeed, the FIRST principles, despite being good, can be interpreted in various ways. Therefore, we'll add some context to better frame these principles, with a new reading of the FIRST acronym.
The FIRST Mindset
While the initial reading of FIRST provided guidance on how to begin, the expert sees a series of nuances in each property. Thinking about these nuances, we can extract a new meaning for FIRST. A new meaning that doesn't replace the previous reading, but complements it:
- Focused: Tell a clear story
- Integrated: Reflect real usage
- Reliable: Fail for real problems, not technicalities
- Significant: Tests should validate what matters to users
- Thoughtful: Written when we understand what needs testing
Focused
"Narrow scope, clear purpose, right speed"
In the first reading, we had set the goal for the test to be fast. Now we'll also set the goal for it to be well-focused. Because it's possible to create a test that executes quickly but loses sight of our objectives.
Therefore, when writing a test, we'll need to focus it on checking one specific behavior. We'll need to review the test to ensure it exercises only one aspect of our software, not two or more, give it a clear name, and make sure it has a single purpose. It's like the Single Responsibility Principle of testing.
Here, it's important to see that we're not talking so much about how we implement the test, but rather how we approach the test. While the code can easily reveal that we've lost focus, properly focusing the test is more about managing to define the right test than about the code itself. Broadly speaking, we could say it's about avoiding having to use the word "and" in the test name.
An example could be:
// ❌ Not focused: testing too many behaviors
test('counter component', () => {
  const { getByText, getByRole } = render(<Counter />);
  expect(getByText('0')).toBeInTheDocument();
  fireEvent.click(getByRole('button'));
  expect(getByText('1')).toBeInTheDocument();
  fireEvent.click(getByRole('button'));
  expect(getByText('2')).toBeInTheDocument();
  expect(getByRole('button')).toBeEnabled();
  expect(getByText('Count:')).toBeInTheDocument();
});
// ✅ Focused: tests initial render
test('counter starts at zero', () => {
  const { getByText } = render(<Counter />);
  expect(getByText('0')).toBeInTheDocument();
});
// ✅ Focused: tests increment behavior
test('counter increments when clicked', () => {
  const { getByRole, getByText } = render(<Counter />);
  fireEvent.click(getByRole('button'));
  expect(getByText('1')).toBeInTheDocument();
});
Applying this principle, we'll see that many times the tests naturally become fast on their own, because they're better focused on their purpose, which reduces their scope. Moreover, in case of failure, it makes maintenance and debugging easier, since we know exactly which behavior has failed.
Integrated
"Connect what belongs together, isolate what doesn't"
When we talked about Isolated, we mentioned that the test had to be resistant to external effects. We needed to isolate the test from problems with globals, from initialization shared with other tests, … However, the word Isolated can be misleading.
Misinterpreted, it suggests that instead of isolating the test, we should isolate the code being tested. When that happens, we end up using an excessive number of mocks, or testing an insufficient amount of code. And in these cases, we'll often find that a test passes, yet the application fails because the connections between the pieces are wrong.
Therefore, when writing the test, we'll have to think about what needs to be integrated, and whether it needs to be integrated at all. And this will vary test by test, even when the behaviors being tested touch similar parts of the system. For each test, we'll have to think about its natural boundaries, always trying to use the minimum dependencies and integrate the minimum, adding other dependencies only as we need them.
// ❌ Over-isolated: everything mocked
test('checkout process', () => {
  const mockCart = { total: 100 };
  const mockPayment = jest.fn();
  const mockInventory = jest.fn();
  const result = checkout(mockCart, mockPayment, mockInventory);
  expect(result.success).toBe(true);
});
// ✅ Integrated: real workflow
test('completes checkout with real cart and payment', async () => {
  // Real cart with real items
  const cart = new ShoppingCart();
  cart.add({ id: 'book', price: 10 });
  // Real payment processing (test mode)
  const result = await checkout(cart, testPaymentGateway);
  expect(result.success).toBe(true);
  expect(result.orderNumber).toBeTruthy();
});
And we need to connect sparingly. There are things that don't need to be connected, and tests that don't need to be integrated. We should only do it where necessary; otherwise, we'll lose speed and the flexibility to change.
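As a sketch of that balance, reusing the checkout example above: keep the domain objects real and replace only the external boundary. The gateway's charge method and its accepted field are assumptions for illustration, not part of the original example.
// ✅ Connected where it matters, replaced only at the natural boundary
test('declines checkout when the payment is rejected', async () => {
  // Real domain objects: cheap to create and they tell the real story
  const cart = new ShoppingCart();
  cart.add({ id: 'book', price: 10 });
  // Only the external payment provider is faked (assumed gateway shape)
  const rejectingGateway = { charge: jest.fn().mockResolvedValue({ accepted: false }) };
  const result = await checkout(cart, rejectingGateway);
  expect(result.success).toBe(false);
});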
Reliable
"Consistently meaningful results"
It's possible to achieve a completely Repeatable test that nevertheless occasionally fails. The fact is that no matter how much we try to control the test environment to make it repeatable, machine errors and certain test implementations end up causing random failures.
And here the risk arises. If the probability of failure is low, we might simply rerun the test when it fails, treat the result as a temporary environment issue, and dismiss a possible real failure in the code. A failing test should always send us a clear signal that the code needs fixing; if it doesn't, we lose confidence in our tests. Therefore, we must take measures to prevent random failures. And sometimes, tests fail randomly because the test implementation itself is wrong.
Most of the time, the origin of these problems is timeouts: operations that need a certain time to execute and that the test has to wait for. In these scenarios, we'll need to rethink how we wait for the different stages of the test, look for smarter waiting strategies, and think step by step about what can happen.
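A common case, sketched below with Testing Library (the SearchBox component and the '1 result' text are hypothetical): instead of waiting a fixed amount of time and hoping the operation has finished, wait for the observable result itself.
// ❌ Fragile: assumes the search always finishes within 500 ms
test('shows results after searching', async () => {
  render(<SearchBox />);
  fireEvent.change(screen.getByRole('textbox'), { target: { value: 'book' } });
  await new Promise((resolve) => setTimeout(resolve, 500));
  expect(screen.getByText('1 result')).toBeInTheDocument();
});
// ✅ Waits for the condition itself, however long it reasonably takes
test('shows results after searching', async () => {
  render(<SearchBox />);
  fireEvent.change(screen.getByRole('textbox'), { target: { value: 'book' } });
  expect(await screen.findByText('1 result')).toBeInTheDocument();
});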
But one thing to watch carefully for is race conditions: situations where we've unwittingly assumed that operations will run at a certain speed, and where a small change can break a test. For example, imagine a test that verifies we can clear a list: we request the listing, press clear, and check that the list is empty. What could be the race condition here? What happens if loading the listing is slower than the test? If the listing hasn't loaded yet when we check that the list is empty, will the test fail? No, and that is exactly the problem: the test can silently succeed even when the clear code hasn't worked, simply because the list hasn't loaded yet. Therefore, in code like this, we'll need to wait for the list to load before continuing with the test execution.
// ❌ Unreliable: a race condition can make this pass without testing anything
test('clears shopping cart items', async () => {
  render(<ShoppingCart />);
  // Bad practice: not waiting for items to load
  const clearButton = await screen.findByText('Clear Cart');
  fireEvent.click(clearButton);
  // This might pass incorrectly if items haven't loaded yet!
  expect(screen.queryByTestId('cart-item')).not.toBeInTheDocument();
});
// ✅ Reliable: properly waits for the items
test('clears shopping cart items', async () => {
  render(<ShoppingCart />);
  // Wait for items to appear (confirms data was actually loaded)
  await screen.findAllByTestId('cart-item');
  // Now we can safely test the clear functionality
  const clearButton = screen.getByText('Clear Cart');
  fireEvent.click(clearButton);
  // Verify cart is empty
  expect(screen.queryByTestId('cart-item')).not.toBeInTheDocument();
});
Note: A technique that usually works very well is to synchronize with any loading messages that might exist. When you request the load, verify that the loading indicator is present, wait for it to disappear, and only then continue the execution. Well implemented, this can become a shared tool for your tests.
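A minimal sketch of such a shared helper, assuming the app shows a spinner with data-testid="loading" while fetching (that test id is an assumption):
// Shared helper: wait until the app's loading indicator has disappeared
import { screen, waitForElementToBeRemoved } from '@testing-library/react';
export async function waitForLoadingToFinish() {
  // Assumed: the app renders data-testid="loading" while fetching
  if (screen.queryByTestId('loading') === null) return; // nothing to wait for
  await waitForElementToBeRemoved(() => screen.queryByTestId('loading'));
}
// Usage in a test:
// render(<ShoppingCart />);
// await waitForLoadingToFinish();
// fireEvent.click(screen.getByText('Clear Cart'));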
Significant
"Testing what matters to users and business"
And following the same line, and complementing Timely, we need to think about our objectives before we write a single line of code. It's good to know what we want to test, but we also need to be clear about why, and discuss it with the team, especially with business.
The issue is that many of the requirements we encounter, and many of the user stories, tend to be incomplete, full of holes and cases that haven't been considered. Here, we should think about all the tests that are missing (we can limit ourselves to writing the test names) and present the different scenarios we've thought of to business. They'll tell us how important each one is and whether it's needed right away, and we may even help them see possibilities they hadn't considered.
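A lightweight way to keep that list visible in Jest is test.todo, which lets us write the scenario names before implementing anything (the scenarios below are purely illustrative):
// Scenario names drafted before implementation, ready to discuss with business
test.todo('applies the discount code to the cart total');
test.todo('rejects an expired discount code');
test.todo('keeps the original price when the code is removed');
// Jest reports these as "todo", so the missing cases stay visible in every run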
This will help us have much better defined tests, and shorten the necessary implementation time.
Also, although I talk about doing this before implementing, a user story may have more than one aspect, and we can go aspect by aspect instead of doing everything at once. So we can keep asking as we progress and get to know the problem better, about all those cases we had missed. But ideally, always before starting any implementation, whether it's code or test.
Not a replacement
The final mindset, "Thoughtful," brings us full circle to where we began. The original FIRST principles taught us how to write tests, and they remain necessary: they are the minimum we have to accomplish. But we need more. That's why we need to evolve from following principles to developing a mindset.
Principles tell us what to do, but a mindset guides how we think.
A principle says "make tests repeatable," but a mindset asks "what makes this test consistently meaningful?" A principle demands isolation, but a mindset considers what truly needs to be connected or separated.
This evolution reflects our own learning as developers:
- From writing focused tests to thinking about clear testing stories
- From isolating components to understanding natural system boundaries
- From making tests repeatable to ensuring reliable signals
- From validating features to verifying what matters to users
- From writing tests early to writing them thoughtfully
The principles haven't changed — they're still our foundation. But just as a house needs more than a foundation to be a home, our tests need more than principles to be truly valuable. They need the guidance of an experienced mindset.
So the next time you sit down to write a test, remember: You're not just following FIRST principles, you're applying the FIRST mindset too. Because in testing, as in development, it's not just about what we do, but how we think about what we do. Get the right mindset.
Thanks for the read. I usually like to write stories to think about how we understand and apply software engineering, and to make us think about what we could improve. If you liked the article, don't forget to clap, comment and share. For more insights and discussions, explore my most successful stories on Medium.
Are you interested in the 'Improve Your Testing' series?
The previous article: