Tag: Tests

React Component Tests for Humans

React component tests should be interesting, straightforward, and easy for a human to build and maintain.

Yet, the current state of the testing library ecosystem is not sufficient to motivate developers to write consistent JavaScript tests for React components. Testing React components—and the DOM in general—often requires some kind of higher-level wrapper around popular testing frameworks like Jest or Mocha.

Here’s the problem

Writing component tests with the tools available today is boring, and even when you get around to writing them, it involves a lot of hassle. Expressing test logic in a jQuery-like style (chaining) is confusing. It doesn’t jibe with how React components are usually built.

The Enzyme code below is readable, but a bit too bulky because it uses too many words to express something that is ultimately simple markup.

expect(screen.find(".view").hasClass("technologies")).to.equal(true);
expect(screen.find("h3").text()).to.equal("Technologies:");
expect(screen.find("ul").children()).to.have.lengthOf(4);
expect(screen.contains([
  <li>JavaScript</li>,
  <li>ReactJs</li>,
  <li>NodeJs</li>,
  <li>Webpack</li>
])).to.equal(true);
expect(screen.find("button").text()).to.equal("Back");
expect(screen.find("button").hasClass("small")).to.equal(true);

The DOM representation is just this:

<div className="view technologies">   <h3>Technologies:</h3>   <ul>     <li>JavaScript</li>     <li>ReactJs</li>     <li>NodeJs</li>     <li>Webpack</li>   </ul>   <button className="small">Back</button> </div>

What if you need to test heavier components? While the syntax is still bearable, it doesn’t help your brain grasp the structure and logic. Reading and writing several tests like this is bound to wear you out—it certainly wears me out. That’s because React components follow certain principles to generate HTML code at the end. Tests that express the same principles, on the other hand, are not straightforward. Simply using JavaScript chaining won’t help in the long run.

There are two main issues with testing in React:

  • How to even approach writing tests specifically for components
  • How to avoid all the unnecessary noise

Let’s further expand those before jumping into the real examples.

Approaching React component tests

A simple React component may look like this:

function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}

This is a function that accepts a props object and returns a React element, written in JSX syntax.

Since a component can be represented by a function, it is all about testing functions. We need to account for arguments and how they influence the returned result. Applying that logic to React components, the focus in the tests should be on setting up props and testing for the DOM rendered in the UI. Since user actions like mouseover, click, typing, etc. may also lead to UI changes, you will need to find a way to programmatically trigger those too.
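
To make that concrete, here is a minimal sketch, not from the original article, that treats Welcome as a plain function using react-dom/server and Node’s assert: props in, markup out.

import assert from "assert";
import { renderToStaticMarkup } from "react-dom/server";

// The Welcome component from above
function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}

// Props are the arguments; the rendered markup is the return value
const html = renderToStaticMarkup(<Welcome name="Sara" />);
assert.strictEqual(html, "<h1>Hello, Sara</h1>");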

Hiding the unnecessary noise in tests

Tests require a certain level of readability achieved by both slimming the wording down and following a certain pattern to describe each scenario.

Component tests flow through three phases:

  1. Preparation (setup): The component props are prepared.
  2. Render (action): The component needs to render its DOM to the UI before either triggering any actions on it or testing for certain texts and attributes. That’s when actions can be programmatically triggered.
  3. Validation (verify): The expectations are set, verifying certain side effects over the component markup.

Here is an example:

it("should click a large button", () => {   // 1️⃣ Preparation   // Prepare component props   props.size = "large";    // 2️⃣ Render   // Render the Button's DOM and click on it   const component = mount(<Button {...props}>Send</Button>);   simulate(component, { type: "click" });    // 3️⃣ Validation   // Verify a .clicked class is added    expect(component, "to have class", "clicked"); });

For simpler tests, the phases can merge:

it("should render with a custom text", () => {   // Mixing up all three phases into a single expect() call   expect(     // 1️⃣ Preparation     <Button>Send</Button>,      // 2️⃣ Render     "when mounted",     // 3️⃣ Validation     "to have text",      "Send"   ); });

Writing component tests today

Those two examples above look logical but are anything but trivial. Most of the testing tools do not provide such a level of abstraction, so we have to handle it ourselves. Perhaps the code below looks more familiar.

it("should display the technologies view", () => {   const container = document.createElement("div");   document.body.appendChild(container);      act(() => {     ReactDOM.render(<ProfileCard {...props} />, container);   });      const button = container.querySelector("button");      act(() => {     button.dispatchEvent(new window.MouseEvent("click", { bubbles: true }));   });      const details = container.querySelector(".details");      expect(details.classList.contains("technologies")).toBe(true);   expect(details.querySelector("h3").textContent, "to be", "Technologies");   expect(details.querySelector("button").textContent, "to be", "View Bio"); });

Compare that with the same test, only with an added layer of abstraction:

it("should display the technologies view", () => {   const component = mount(<ProfileCard {...props} />);    simulate(component, {     type: "click",     target: "button",   });    expect(     component,     "queried for first",     ".details",     "to exhaustively satisfy",     <div className="details technologies">       <h3>Technologies</h3>       <div>         <button>View Bio</button>       </div>     </div>   ); });

It does look much better. Less code, an obvious flow, and more DOM instead of JavaScript. This is not a fictional test, but something you can achieve with UnexpectedJS today.

The following section is a deep dive into testing React components without getting too deep into UnexpectedJS. Its documentation more than does the job. Instead, we’ll focus on usage, examples, and possibilities.

Writing React Tests with UnexpectedJS

UnexpectedJS is an extensible assertion toolkit compatible with all test frameworks. It can be extended with plugins, and some of those plugins are used in the test project below. Probably the best thing about this library is the handy syntax it provides to describe component test cases in React.

The example: A Profile Card component

The subject of the tests is a Profile card component.

A card component where the person’s name, photo, and number of posts are displayed to the left in a single column with a light red background, and the bio is displayed on the right in paragraph form with a title against a white background.

And here is the full component code of ProfileCard.js:

// ProfileCard.js
import { useState } from "react";

export default function ProfileCard({
  data: {
    name,
    posts,
    isOnline = false,
    bio = "",
    location = "",
    technologies = [],
    creationDate,
    onViewChange,
  },
}) {
  const [isBioVisible, setIsBioVisible] = useState(true);

  const handleBioVisibility = () => {
    setIsBioVisible(!isBioVisible);
    if (typeof onViewChange === "function") {
      onViewChange(!isBioVisible);
    }
  };

  return (
    <div className="ProfileCard">
      <div className="avatar">
        <h2>{name}</h2>
        <i className="photo" />
        <span>{posts} posts</span>
        <i className={`status ${isOnline ? "online" : "offline"}`} />
      </div>
      <div className={`details ${isBioVisible ? "bio" : "technologies"}`}>
        {isBioVisible ? (
          <>
            <h3>Bio</h3>
            <p>{bio !== "" ? bio : "No bio provided yet"}</p>
            <div>
              <button onClick={handleBioVisibility}>View Skills</button>
              <p className="joined">Joined: {creationDate}</p>
            </div>
          </>
        ) : (
          <>
            <h3>Technologies</h3>
            {technologies.length > 0 && (
              <ul>
                {technologies.map((item, index) => (
                  <li key={index}>{item}</li>
                ))}
              </ul>
            )}
            <div>
              <button onClick={handleBioVisibility}>View Bio</button>
              {!!location && <p className="location">Location: {location}</p>}
            </div>
          </>
        )}
      </div>
    </div>
  );
}

We will work with the component’s desktop version. You can read more about device-driven code split in React but note that testing mobile components is still pretty straightforward.

Setting up the example project

Not all tests are covered in this article, but we will certainly look at the most interesting ones. If you want to follow along, view this component in the browser, or check all its tests, go ahead and clone the GitHub repo.

## 1. Clone the project:
git clone git@github.com:moubi/profile-card.git

## 2. Navigate to the project folder:
cd profile-card

## 3. Install the dependencies:
yarn

## 4. Start and view the component in the browser:
yarn start

## 5. Run the tests:
yarn test

Here’s how the <ProfileCard /> component and UnexpectedJS tests are structured once the project has spun up:

/src
  └── /components
      ├── /ProfileCard
      |   ├── ProfileCard.js
      |   ├── ProfileCard.scss
      |   └── ProfileCard.test.js
      └── /test-utils
          └── unexpected-react.js
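
The article doesn’t list the contents of unexpected-react.js, but based on the UnexpectedJS plugin ecosystem it likely resembles this sketch (the exact plugin set is an assumption):

// test-utils/unexpected-react.js — assumed contents, not from the article
import unexpected from "unexpected";
import unexpectedDom from "unexpected-dom";
import unexpectedReaction from "unexpected-reaction";
import unexpectedSinon from "unexpected-sinon";

// Clone the base instance and register the DOM/React/sinon plugins
const expect = unexpected
  .clone()
  .use(unexpectedDom)
  .use(unexpectedReaction)
  .use(unexpectedSinon);

// Re-export the mount/simulate helpers the tests rely on
export { mount, simulate } from "react-dom-testing";
export default expect;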

Component tests

Let’s take a look at some of the component tests. These are located in src/components/ProfileCard/ProfileCard.test.js. Note how each test is organized by the three phases we covered earlier.

  1. Setting up required component props for each test.
beforeEach(() => {
  props = {
    data: {
      name: "Justin Case",
      posts: 45,
      creationDate: "01.01.2021",
    },
  };
});

Before each test, a props object with the required <ProfileCard /> props is composed, where props.data contains the minimum info for the component to render.

  2. Render with a default set of props.

This test checks the whole DOM produced by the component when passing name, posts, and creationDate fields.

And here’s the test case for it:

it("should render default", () => {   // "to exhaustively satisfy" ensures all classes/attributes are also matching   expect(     <ProfileCard {...props} />,     "when mounted",     "to exhaustively satisfy",     <div className="ProfileCard">       <div className="avatar">         <h2>Justin Case</h2>         <i className="photo" />         <span>45{" posts"}</span>         <i className="status offline" />       </div>       <div className="details bio">         <h3>Bio</h3>         <p>No bio provided yet</p>         <div>           <button>View Skills</button>           <p className="joined">{"Joined: "}01.01.2021</p>         </div>       </div>     </div>   ); });
  3. Render with status online.

Now we check if the profile renders with the “online” status icon.

And the test case for that:

it("should display online icon", () => {   // Set the isOnline prop   props.data.isOnline = true;    // The minimum to test for is the presence of the .online class   expect(     <ProfileCard {...props} />,     "when mounted",     "queried for first",     ".status",     "to have class",     "online"   ); });
  4. Render with bio text.

<ProfileCard /> accepts any arbitrary string for its bio.

So, let’s write a test case for that:

it("should display online icon", () => {   // Set the isOnline prop   props.data.isOnline = true;    // The minimum to test for is the presence of the .online class   expect(     <ProfileCard {...props} />,     "when mounted",     "queried for first",     ".status",     "to have class",     "online"   ); });
  5. Render “Technologies” view with an empty list.

Clicking on the “View Skills” link should switch to a list of technologies for this user. If no data is passed, then the list should be empty.

Here’s that test case:

it("should display the technologies view", () => {   // Mount <ProfileCard /> and obtain a ref   const component = mount(<ProfileCard {...props} />);    // Simulate a click on the button element ("View Skills" link)   simulate(component, {     type: "click",     target: "button",   });    // Check if the .details element contains the technologies view   expect(     component,     "queried for first",     ".details",     "to exhaustively satisfy",     <div className="details technologies">       <h3>Technologies</h3>       <div>         <button>View Bio</button>       </div>     </div>   ); });
  6. Render a list of technologies.

If a list of technologies is passed, it will display in the UI when clicking on the “View Skills” link.

Yep, another test case:

it("should display list of technologies", () => {   // Set the list of technologies   props.data.technologies = ["JavaScript", "React", "NodeJs"];     // Mount ProfileCard and obtain a ref   const component = mount(<ProfileCard {...props} />);    // Simulate a click on the button element ("View Skills" link)   simulate(component, {     type: "click",     target: "button",   });    // Check if the list of technologies is present and matches prop values   expect(     component,     "queried for first",     ".technologies ul",     "to exhaustively satisfy",     <ul>       <li>JavaScript</li>       <li>React</li>       <li>NodeJs</li>     </ul>   ); });
  7. Render a user location.

That information should render in the DOM only if it was provided as a prop.

The test case:

it("should display location", () => {   // Set the location    props.data.location = "Copenhagen, Denmark";    // Mount <ProfileCard /> and obtain a ref   const component = mount(<ProfileCard {...props} />);      // Simulate a click on the button element ("View Skills" link)   // Location render only as part of the Technologies view   simulate(component, {     type: "click",     target: "button",   });    // Check if the location string matches the prop value   expect(     component,     "queried for first",     ".location",     "to have text",     "Location: Copenhagen, Denmark"   ); });
  8. Calling a callback when switching views.

This test does not compare DOM nodes but does check if a function prop passed to <ProfileCard /> is executed with the correct argument when switching between the Bio and Technologies views.

it("should call onViewChange prop", () => {   // Create a function stub (dummy)   props.data.onViewChange = sinon.stub();      // Mount ProfileCard and obtain a ref   const component = mount(<ProfileCard {...props} />);    // Simulate a click on the button element ("View Skills" link)   simulate(component, {     type: "click",     target: "button",   });    // Check if the stub function prop is called with false value for isBioVisible   // isBioVisible is part of the component's local state   expect(     props.data.onViewChange,     "to have a call exhaustively satisfying",     [false]   ); });

Running all the tests

Now, all of the tests for <ProfileCard /> can be executed with a simple command:

yarn test

Notice that tests are grouped. There are two independent tests and two groups of tests for each of the <ProfileCard /> views—bio and technologies. Grouping makes test suites easier to follow and is a nice way to organize logically-related UI units.
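
Though the article doesn’t show the grouped suite itself, a sketch of that structure might look like this (the describe names are assumptions):

describe("ProfileCard", () => {
  it("should render default", () => { /* ... */ });
  it("should display online icon", () => { /* ... */ });

  describe("bio view", () => {
    // Tests targeting the default bio view live here
  });

  describe("technologies view", () => {
    // Tests that first switch to the technologies view live here
  });
});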

Some final words

Again, this is meant to be a fairly simple example of how to approach React component tests. The essence is to look at components as simple functions that accept props and return a DOM. From that point on, choosing a testing library should be based on the usefulness of the tools it provides for handling component renders and DOM comparisons. UnexpectedJS happens to be very good at that in my experience.

What should be your next steps? Look at the GitHub project and give it a try if you haven’t already! Check all the tests in ProfileCard.test.js and perhaps try to write a few of your own. You can also look at src/test-utils/unexpected-react.js which is a simple helper function exporting features from the third-party testing libraries.


React Integration Testing: Greater Coverage, Fewer Tests

Integration tests are a natural fit for interactive websites, like ones you might build with React. They validate how a user interacts with your app without the overhead of end-to-end testing. 

This article follows an exercise that starts with a simple website, validates behavior with unit and integration tests, and demonstrates how integration testing delivers greater value from fewer lines of code. The content assumes a familiarity with React and testing in JavaScript. Experience with Jest and React Testing Library is helpful but not required.

There are three types of tests:

  • Unit tests verify one piece of code in isolation. They are easy to write, but can miss the big picture.
  • End-to-end tests (E2E) use an automation framework — such as Cypress or Selenium — to interact with your site like a user: loading pages, filling out forms, clicking buttons, etc. They are generally slower to write and run, but closely match the real user experience.
  • Integration tests fall somewhere in between. They validate how multiple units of your application work together but are more lightweight than E2E tests. Jest, for example, comes with a few built-in utilities to facilitate integration testing; Jest uses jsdom under the hood to emulate common browser APIs with less overhead than automation, and its robust mocking tools can stub out external API calls.

Another wrinkle: In React apps, unit and integration tests are written the same way, with the same tools.

Getting started with React tests

I created a simple React app (available on GitHub) with a login form. I wired this up to reqres.in, a handy API I found for testing front-end projects.

You can either log in successfully or encounter an error message from the API.

The code is structured like this:

LoginModule/
├── components/
⎪   ├── Login.js      // renders LoginForm, error messages, and login confirmation
⎪   └── LoginForm.js  // renders login form fields and button
├── hooks/
⎪   └── useLogin.js   // connects to API and manages state
└── index.js          // stitches everything together

Option 1: Unit tests

If you’re like me, and like writing tests — perhaps with your headphones on and something good on Spotify — then you might be tempted to knock out a unit test for every file. 

Even if you’re not a testing aficionado, you might be working on a project that’s “trying to be good with testing” without a clear strategy and a testing approach of “I guess each file should have its own test?”

That would look something like this (where I’ve added unit to test file names for clarity):

LoginModule/
├── components/
⎪   ├── Login.js
⎪   ├── Login.unit.test.js
⎪   ├── LoginForm.js
⎪   └── LoginForm.unit.test.js
├── hooks/
⎪   ├── useLogin.js
⎪   └── useLogin.unit.test.js
├── index.js
└── index.unit.test.js

I went through the exercise of adding each of these unit tests on GitHub, and created a test:coverage:unit script to generate a coverage report (a built-in feature of Jest). We can get to 100% coverage with the four unit test files:

100% coverage is usually overkill, but it’s achievable for such a simple codebase.

Let’s dig into one of the unit tests created for the useLogin React hook. Don’t worry if you’re not well-versed in React hooks or how to test them.

test('successful login flow', async () => {
  // mock a successful API response
  jest
    .spyOn(window, 'fetch')
    .mockResolvedValue({ json: () => ({ token: '123' }) });

  const { result, waitForNextUpdate } = renderHook(() => useLogin());

  act(() => {
    result.current.onSubmit({
      email: 'test@email.com',
      password: 'password',
    });
  });

  // sets state to pending
  expect(result.current.state).toEqual({
    status: 'pending',
    user: null,
    error: null,
  });

  await waitForNextUpdate();

  // sets state to resolved, stores email address
  expect(result.current.state).toEqual({
    status: 'resolved',
    user: {
      email: 'test@email.com',
    },
    error: null,
  });
});

This test was fun to write (because React Hooks Testing Library makes testing hooks a breeze), but it has a few problems. 

First, the test validates that a piece of internal state changes from 'pending' to 'resolved'; this implementation detail is not exposed to the user, and therefore, probably not a good thing to be testing. If we refactor the app, we’ll have to update this test, even if nothing changes from the user’s perspective.

Additionally, as a unit test, this is just part of the picture. If we want to validate other features of the login flow, such as the submit button text changing to “Loading,” we’ll have to do so in a different test file.

Option 2: Integration tests

Let’s consider the alternative approach of adding one integration test to validate this flow:

LoginModule/
├── components/
⎪   ├── Login.js
⎪   └── LoginForm.js
├── hooks/
⎪   └── useLogin.js
├── index.js
└── index.integration.test.js

I implemented this test and a test:coverage:integration script to generate a coverage report. Just like the unit tests, we can get to 100% coverage, but this time it’s all in one file and requires fewer lines of code.

Here’s the integration test covering a successful login flow:

test('successful login', async () => {
  // mock a successful API response
  jest
    .spyOn(window, 'fetch')
    .mockResolvedValue({ json: () => ({ token: '123' }) });

  const { getByLabelText, getByText, getByRole } = render(<LoginModule />);

  const emailField = getByLabelText('Email');
  const passwordField = getByLabelText('Password');
  const button = getByRole('button');

  // fill out and submit form
  fireEvent.change(emailField, { target: { value: 'test@email.com' } });
  fireEvent.change(passwordField, { target: { value: 'password' } });
  fireEvent.click(button);

  // it sets loading state
  expect(button.disabled).toBe(true);
  expect(button.textContent).toBe('Loading...');

  await waitFor(() => {
    // it hides form elements
    expect(button).not.toBeInTheDocument();
    expect(emailField).not.toBeInTheDocument();
    expect(passwordField).not.toBeInTheDocument();

    // it displays success text and email address
    const loggedInText = getByText('Logged in as');
    expect(loggedInText).toBeInTheDocument();
    const emailAddressText = getByText('test@email.com');
    expect(emailAddressText).toBeInTheDocument();
  });
});

I really like this test, because it validates the entire login flow from the user’s perspective: the form, the loading state, and the success confirmation message. Integration tests work really well for React apps for precisely this use case; the user experience is the thing we want to test, and that almost always involves several different pieces of code working together.

This test has no specific knowledge of the components or hook that makes the expected behavior work, and that’s good. We should be able to rewrite and restructure such implementation details without breaking the tests, so long as the user experience remains the same.

I’m not going to dig into the other integration tests for the login flow’s initial state and error handling, but I encourage you to check them out on GitHub.

So, what does need a unit test?

Rather than thinking about unit vs. integration tests, let’s back up and think about how we decide what needs to be tested in the first place. LoginModule needs to be tested because it’s an entity we want consumers (other files in the app) to be able to use with confidence.

The useLogin hook, on the other hand, does not need to be tested because it’s only an implementation detail of LoginModule. If our needs change, however, and useLogin has use cases elsewhere, then we would want to add our own (unit) tests to validate its functionality as a reusable utility. (We’d also want to move the file because it wouldn’t be specific to LoginModule anymore.)

There are still plenty of use cases for unit tests, such as the need to validate reusable selectors, hooks, and plain functions. When developing your code, you might also find it helpful to practice test-driven development with a unit test, even if you later move that logic higher up to an integration test.

Additionally, unit tests do a great job of exhaustively testing against multiple inputs and use cases. For example, if my form needed to show inline validations for various scenarios (e.g. invalid email, missing password, short password), I would cover one representative case in an integration test, then dig into the specific cases in a unit test.
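
For example, a hypothetical validateEmail helper (not part of the demo app) could be covered exhaustively with Jest’s test.each:

// Hypothetical helper, shown only to illustrate exhaustive unit coverage
const validateEmail = (value) =>
  /\S+@\S+\.\S+/.test(value) ? null : 'Please enter a valid email';

// test.each runs the same assertion across a table of inputs
test.each([
  ['user@example.com', null],
  ['not-an-email', 'Please enter a valid email'],
  ['', 'Please enter a valid email'],
])('validateEmail(%p) returns %p', (input, expected) => {
  expect(validateEmail(input)).toBe(expected);
});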

Other goodies

While we’re here, I want to touch on a few syntactic tricks that helped my integration tests stay clear and organized.

Big waitFor Blocks

Our test needs to account for the delay between the loading and success states of LoginModule:

const button = getByRole('button');
fireEvent.click(button);

expect(button).not.toBeInTheDocument(); // too soon, the button is still there!

We can do this with DOM Testing Library’s waitFor helper:

const button = getByRole('button');
fireEvent.click(button);

await waitFor(() => {
  expect(button).not.toBeInTheDocument(); // ahh, that's better
});

But, what if we want to test some other items too? There aren’t a lot of good examples of how to handle this online, and in past projects, I’ve dropped additional items outside of the waitFor:

// wait for the button
await waitFor(() => {
  expect(button).not.toBeInTheDocument();
});

// then test the confirmation message
const confirmationText = getByText('Logged in as test@email.com');
expect(confirmationText).toBeInTheDocument();

This works, but I don’t like it because it makes the button condition look special, even though we could just as easily switch the order of these statements:

// wait for the confirmation message
await waitFor(() => {
  const confirmationText = getByText('Logged in as test@email.com');
  expect(confirmationText).toBeInTheDocument();
});

// then test the button
expect(button).not.toBeInTheDocument();

It’s much better, in my opinion, to group everything related to the same update together inside the waitFor callback:

await waitFor(() => {
  expect(button).not.toBeInTheDocument();

  const confirmationText = getByText('Logged in as test@email.com');
  expect(confirmationText).toBeInTheDocument();
});

Interestingly, an empty waitFor will also get the job done, because waitFor polls its callback (every 50ms by default) and an empty callback passes as soon as pending updates have flushed. I find this slightly less declarative than putting your expectations inside of the waitFor, but some indentation-averse developers may prefer it:

await waitFor(() => {}); // or maybe a custom util, `await waitForRerender()`

expect(button).not.toBeInTheDocument(); // I pass!
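
If you prefer that route, the custom util hinted at in the comment could be as small as this sketch:

// Wraps the empty waitFor so the intent reads at the call site
const waitForRerender = () => waitFor(() => {});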

For tests with a few steps, we can have multiple waitFor blocks in a row:

const button = getByRole('button');
const emailField = getByLabelText('Email');

// fill out form
fireEvent.change(emailField, { target: { value: 'test@email.com' } });

await waitFor(() => {
  // check button is enabled
  expect(button.disabled).toBe(false);
});

// submit form
fireEvent.click(button);

await waitFor(() => {
  // check button is no longer present
  expect(button).not.toBeInTheDocument();
});

Inline it comments

Another testing best practice is to write fewer, longer tests; this allows you to correlate your test cases to significant user flows while keeping tests isolated to avoid unexpected behavior. I subscribe to this approach, but it can present challenges in keeping code organized and documenting desired behavior. We need future developers to be able to return to a test and understand what it’s doing, why it’s failing, etc.

For example, let’s say one of these expectations starts to fail:

it('handles a successful login flow', async () => {
  // beginning of test hidden for clarity

  expect(button.disabled).toBe(true);
  expect(button.textContent).toBe('Loading...');

  await waitFor(() => {
    expect(button).not.toBeInTheDocument();
    expect(emailField).not.toBeInTheDocument();
    expect(passwordField).not.toBeInTheDocument();

    const confirmationText = getByText('Logged in as test@email.com');
    expect(confirmationText).toBeInTheDocument();
  });
});

A developer looking into this can’t easily determine what is being tested and might have trouble deciding whether the failure is a bug (meaning we should fix the code) or a change in behavior (meaning we should fix the test).

My favorite solution to this problem is using the lesser-known test syntax for each test, and adding inline it-style comments describing each key behavior being tested:

test('successful login', async () => {
  // beginning of test hidden for clarity

  // it sets loading state
  expect(button.disabled).toBe(true);
  expect(button.textContent).toBe('Loading...');

  await waitFor(() => {
    // it hides form elements
    expect(button).not.toBeInTheDocument();
    expect(emailField).not.toBeInTheDocument();
    expect(passwordField).not.toBeInTheDocument();

    // it displays success text and email address
    const confirmationText = getByText('Logged in as test@email.com');
    expect(confirmationText).toBeInTheDocument();
  });
});

These comments don’t magically integrate with Jest, so if you get a failure, the failing test name will correspond to the argument you passed to your test block, in this case 'successful login'. However, Jest’s error messages contain surrounding code, so these it comments still help identify the failing behavior.

For even more explicit errors, there’s a package called jest-expect-message that allows you to define error messages for each expectation:

expect(button, 'button is still in document').not.toBeInTheDocument();

Some developers prefer this approach, but I find it a little too granular in most situations, since a single it often involves multiple expectations.

Next steps for teams

Sometimes I wish we could make linter rules for humans. If so, we could set up a prefer-integration-tests rule for our teams and call it a day.

But alas, we need to find a more analog solution to encourage developers to opt for integration tests in situations like the LoginModule example we covered earlier. Like most things, this comes down to discussing your testing strategy as a team, agreeing on something that makes sense for the project, and — hopefully — documenting it in an ADR.

When coming up with a testing plan, we should avoid a culture that pressures developers to write a test for every file. Developers need to feel empowered to make smart testing decisions, without worrying that they’re “not testing enough.” Jest’s coverage reports can help with this by providing a sanity check that you’re achieving good coverage, even if the tests are consolidated at the integration level.

I still don’t consider myself an expert on integration tests, but going through this exercise helped me break down a use case where integration testing delivered greater value than unit testing. I hope that sharing this with your team, or going through a similar exercise on your codebase, will help guide you in incorporating integration tests into your workflow.


Create Amazingly Stable Tests Your Way — Coded and Code-Less

Testim’s end-to-end test automation delivers the speed and stability of AI-based codeless tests, with the power of code. You get the flexibility to record or code tests, run on third-party grids, fit your workflow and tools including CI, Git and more. Join the Dev Kit beta to start writing stable tests in code.

About Testim

At Testim, we too are developers and testers, striving to release quality software faster. We built Testim because writing stable end-to-end tests was just too hard. We were the first to build an AI-based functional test automation solution. Since we launched in 2014, we’ve been adding features, improving quality, and proving our solution with customers every day.

Create tests your way

There are two ways to create tests in Testim — record them using the Testim Extension or code them in your IDE. Actually, there’s a third and recommended approach for those who want to code—record the test, export it as code and modify it in your IDE.

When you record a test, each UI action leverages our AI-based platform to analyze the DOM, weigh the attributes associated with an element and create Smart Locators that uniquely identify elements. If attributes such as the color, class, text or location of an element are changed by development, our AI will still be able to uniquely identify the element so that your test doesn’t fail.

Customize tests

Testim has many features built into the visual test editor to help you customize your tests such as validation of text, email, PDFs, or conditions and loops. You can also insert JavaScript into any step so that you can handle nearly any UI situation.

If you are into JavaScript, you can also export your test as code and customize, debug, or refactor it in your IDE; the choice is yours.

Execute your cross-browser tests

When your tests are ready, you can run on-demand or schedule a test run within Testim. We also make it really easy to run your test following a CI action. Tests can be parallelized and run across all browsers on our test cloud or any Selenium-compatible test cloud.

Report results

Regardless of how your tests were created (code or codeless), when they run, you will see the results in Testim so you can troubleshoot. If you find an application defect, our bug capture tool makes it really easy to create a bug report, complete with screenshots, video and the test steps to recreate it. Pop it into Jira or Trello to submit your report in minutes. Our reporting shows test run history, including managerial reports so you can show all of the great work you’ve been doing and that releases are ready to ship.

Testim is the fastest way to create your most resilient end-to-end tests, your way—codeless, coded or both. For more information go to www.testim.io.


Types or Tests: Why Not Both?

Every now and then, a debate flares up about the value of typed JavaScript. “Just write more tests!” yell some opponents. “Replace unit tests with types!” scream others. Both are right in some ways, and wrong in others. Twitter affords little room for nuance. But in the space of this article we can try to lay out a reasoned argument for how both can and should coexist.

Correctness: what we all really want

It’s best to start at the end: what we really want out of all this meta-engineering is correctness. I don’t mean the strict theoretical computer science definition of it, but a more general adherence of program behavior to its specification: We have an idea of how our program ought to work in our heads, and the process of programming organizes bits and bytes to make that idea into reality. Because we aren’t always precise about what we want, and because we’d like to have confidence that our program didn’t break when we made a change, we write types and tests on top of the raw code we already have to write just to make things work in the first place.

So, if we accept that correctness is what we want, and types and tests are just automated ways to get there, it would be great to have a visual model of how types and tests help us achieve correctness, and therefore understand where they overlap and where they complement each other.

A visual model of program correctness

If we imagine the entire infinite Turing-complete possible space of everything programs can ever possibly do — inclusive of failures — as a vast gray expanse, then what we want our program to do, our specification, is a very, very, very small subset of that possible space (the green diamond below, exaggerated in size for sake of showing something):

A large gray box with a green diamond in the bottom-right hand corner that is labeled correct.

Our job in programming is to wrangle our program as close to the specification as possible (knowing, of course, we are imperfect, and our spec is constantly in motion, e.g. due to human error, new features or under-specified behavior; so we never quite manage to achieve exact overlap):

The same gray box and green diamond shown earlier, but with a green border around the diamond that is slightly off center to indicate room for error.

Note, again, that the boundaries of our program’s behavior also include planned and unplanned errors for the purposes of our discussion here. Our meaning of “correctness” includes planned errors, but does not include unplanned errors.

Tests and Correctness

We write tests to ensure that our program fits our expectations, but have a number of choices of things to test:

A series of blue, purple, orange, and red dots have been added to the diagram to represent different possible tests.

The ideal tests are the orange dots in the diagram — they accurately test that our program does overlap the spec. In this visualization, we don’t really distinguish between types of tests, but you might imagine unit tests as really small dots, while integration/end-to-end tests are large dots. Either way, they are dots, because no one test fully describes every path through a program. (In fact, you can have 100% code coverage and still not test every path because of the combinatorial explosion!)

The blue dot in this diagram is a bad test. Sure, it tests that our program works, but it doesn’t actually pin it to the underlying spec (what we really want out of our program, at the end of the day). The moment we fix our program to align closer to spec, this test breaks, giving us a false positive.

The purple dot is a valuable test because it tests how we think our program should work and identifies an area where our program currently doesn’t. Leading with purple tests and fixing the program implementation accordingly is also known as Test-Driven Development.

The red test in this diagram is a rare test. Instead of normal (orange) tests that test “happy paths” (including planned error states), this is a test that expects and verifies that “unhappy paths” fail. If this test “passes” where it should “fail,” that is a huge early warning sign that something went wrong — but it is basically impossible to write enough tests to cover the vast expanse of possible unhappy paths that exist outside of the green spec area. People rarely find value testing that things that shouldn’t work don’t work, so they don’t do it; but it can still be a helpful early warning sign when things go wrong.

Types and Correctness

Where tests are single points on the possibility space of what our program can do, types represent categories carving entire sections from the total possible space. We can visualize them as rectangles:

A purple box has been drawn around the green-bordered diamond in the chart to represent the boundary the program’s types draw around it.

We pick a rectangle to contrast the diamond representing the program, because no type system alone can fully describe our program behavior using types alone. (To pick a trivial example of this, an id that should always be a positive integer is a number type, but the number type also accepts fractions and negative numbers. There is no way to restrict a number type to a specific range, beyond a very simple union of number literals.)

Several more differently colored dots are added to the chart to represent different tests. Any that fall outside of the purple box drawn earlier are rejected by the type-checker.

Types serve as a constraint on where our program can go as you code. If our program starts to exceed the specified boundaries of your program’s types, our type-checker (like TypeScript or Flow) will simply refuse to let us compile our program. This is nice, because in a dynamic language like JavaScript, it is very easy to accidentally create a crashing program that certainly wasn’t something you intended. The simplest value add is automated null checking. If foo has no method called bar, then calling foo.bar() will cause the all-too-familiar undefined is not a function runtime exception. If foo were typed at all, this could have been caught by the type-checker while writing, with specific attribution to the problematic line of code (with autocomplete as a concomitant benefit). This is something tests simply cannot do.

We might want to write strict types for our program as though we are trying to write the smallest possible rectangle that still fits our spec. However, this has a learning curve, because taking full advantage of type systems involves learning a whole new syntax and grammar of operators and generic type logic needed to model the full dynamic range of JavaScript. Handbooks and Cheatsheets help lower this learning curve, and more investment is needed here.

Fortunately, this adoption/learning curve doesn’t have to stop us. Since type-checking is an opt-in process with Flow and configurable strictness with TypeScript (with the ability to selectively ignore troublesome lines of code), we have our pick from a spectrum of type safety. We can even model this, too:

Larger green and red boxes have been drawn around the purple one, representing different levels of type-checking strictness.

Larger rectangles, like the big red one in the chart above, represent a very permissive adoption of a type system on your codebase — for example, allowing implicit any and fully relying on type inference to merely restrict our program from the worst of our coding.

Moderate strictness (like the medium-size green rectangle) could represent a more faithful typing, but with plenty of escape hatches, like using explicit instances of any all over the codebase and manual type assertions. Still, the possible surface area of valid programs that don’t match our spec is massively reduced even with this light typing work.

Maximum strictness, like the purple rectangle, keeps things so tight to our spec that it sometimes finds parts of your program that don’t fit (and these are often unplanned errors in your program behavior). Finding bugs in an existing program like this is a very common story from teams converting vanilla JavaScript codebases. However, getting maximum type safety out of our type-checker likely involves taking advantage of generic types and special operators designed to refine and narrow the possible space of types for each variable and function.

Notice that we don’t technically have to write our program first before writing the types. After all, we just want our types to closely model our spec, so really we can write our types first and then backfill the implementation later. In theory, this would be Type-Driven Development; in practice, few people actually develop this way since types intimately permeate and interleave with our actual program code.

Putting them together

What we are eventually building up to is an intuitive visualization of how both types and tests complement each other in guaranteeing our program’s correctness.

Back to the original diagram with a green diamond representing correctness, a green border that is slightly off center that represents parameters for correctness, an orange dot in each border of the green diamond border representing tests, and a purple box border around everything to represent the possible test types.

Our Tests assert that our program specifically performs as intended in select key paths (although there are certain other variations of tests as discussed above, the vast majority of tests do this). In the language of the visualization we have developed, they “pin” the dark green diamond of our program to the light green diamond of our spec. Any movement away by our program breaks these tests, which makes them squawk. This is excellent! Tests are also infinitely flexible and configurable for the most custom of use cases.

Our Types assert that our program doesn’t run away from us by disallowing possible failure modes beyond a boundary that we draw, hopefully as tightly as possible around our spec. In the language of our visualization, they “contain” the possible drift of our program away from our spec (as we are always imperfect, and every mistake we make adds additional failure behavior to our program). Types are also blunt, but powerful (because of type inference and editor tooling) tools that benefit from a strong community supplying types you don’t have to write from scratch.

In short:

  • Tests are best at ensuring happy paths work.
  • Types are best at preventing unhappy paths from existing.

Use them together based on their strengths, for best results!


If you’d like to read more about how Types and Tests intersect, Gary Bernhardt’s excellent talk on Boundaries and Kent C. Dodds’ Testing Trophy were significant influences in my thinking for this article.


Using Cypress to Write Tests for a React Application

End-to-end tests are written to assert the flow of an application from start to finish. Instead of handling the tests yourself — you know, manually clicking all over the application — you can write a test that runs as you build the application. That’s what we call continuous integration and it’s a beautiful thing. Write some code, save it, and let tooling do the dirty work of making sure it doesn’t break anything.

Cypress is just one end-to-end testing framework that does all that clicking work for us and that’s what we’re going to look at in this post. It’s really for any modern JavaScript library, but we’re going to integrate it with React in the examples.

Let’s set up an app to test

In this tutorial, we will write tests to cover a todo application I’ve built. You can clone the repository to follow along as we plug it into Cypress.

git clone git@github.com:kinsomicrote/cypress-react-tutorial.git

Navigate into the application, and install the dependencies:

cd cypress-react-tutorial
yarn install

Cypress isn’t part of the dependencies, but you can install it by running this:

yarn add cypress --dev

Now, run this command to open Cypress:

node_modules/.bin/cypress open

Typing that command to the terminal over and over can get exhausting, but you can add this script to the package.json file in the project root:

"cypress": "cypress open"

Now, all you have to do is run npm run cypress once and Cypress will be standing by at all times. To get a feel for what the application we’ll be testing looks like, you can start the React application by running yarn start.

We will start by writing a test to confirm that Cypress works. In the cypress/integration folder, create a new file called init.spec.js. The test asserts that true is equal to true; we only need it to confirm that Cypress is up and running for the entire application.

describe('Cypress', () => {
  it('is working', () => {
    expect(true).to.equal(true)
  })
})

You should have a list of tests open. Go there and select init.spec.js.

That should cause the test to run and pop up a screen that shows the test passing.

While we’re still in init.spec.js, let’s add a test to assert that we can visit the app by hitting http://localhost:3000 in the browser. This’ll make sure the app itself is running.

it('visits the app', () => {
  cy.visit('http://localhost:3000')
})

We call the method visit() and we pass it the URL of the app. We have access to a global object called cy for calling the methods available to us on Cypress.

To avoid having to write the URL time and again, we can set a base URL to be used throughout the tests we write. Open the cypress.json file in the home directory of the application and define the URL there:

{
  "baseUrl": "http://localhost:3000"
}

You can change the test block to look like this:

it('visits the app', () => {
  cy.visit('/')
})

…and the test should continue to pass. 🤞

Testing form controls and inputs

The test we’ll be writing will cover how users interact with the todo application. For example, we want to ensure the input is in focus when the app loads so users can start entering tasks immediately. We also want to ensure that there’s a default task in there so the list is not empty by default. When there are no tasks, we want to show text that tells the user as much.

To get started, go ahead and create a new file in the integration folder called form.spec.js. The name of the file isn’t all that important. We’re prepending “form” because what we’re testing is ultimately a form input. You may want to call it something different depending on how you plan on organizing tests.

We’re going to add a describe block to the file:

describe('Form', () => {
  beforeEach(() => {
    cy.visit('/')
  })

  it('it focuses the input', () => {
    cy.focused().should('have.class', 'form-control')
  })
})

The beforeEach block is used to avoid unnecessary repetition. For each block of tests, we need to visit the application, and it would be redundant to repeat that line each time; beforeEach ensures Cypress visits the application in each case.

For the test, let’s check that the DOM element in focus when the application first loads has a class of form-control. If you check the source file, you will see that the input element has a class called form-control set on it, and we have autoFocus as one of the element attributes:

<input
  type="text"
  autoFocus
  value={this.state.item}
  onChange={this.handleInputChange}
  placeholder="Enter a task"
  className="form-control"
/>

When you save that, go back to the test screen and select form.spec.js to run the test.

The next thing we’ll do is test whether a user can successfully enter a value into the input field.

it('accepts input', () => {
  const input = "Learn about Cypress"
  cy.get('.form-control')
    .type(input)
    .should('have.value', input)
})

We’ve added some text (“Learn about Cypress”) to the input. Then we make use of cy.get to obtain the DOM element with the form-control class name. We could also do something like cy.get('input') and get the same result. After getting the element, cy.type() is used to enter the value we assigned to the input, then we assert that the DOM element with class form-control has a value that matches the value of input.

In other words:

Does the input have a class of form-control and does it contain Learn About Cypress in it?

Our application should also have two todos that have been created by default when the app runs. It’s important we have a test that checks that they are indeed listed.

What do we want? In our code, we are making use of the list item (<li>) element to display tasks as items in a list. Since we have two items listed by default, it means that the list should have a length of two at start. So, the test will look something like this:

it('displays list of todo', () => {
  cy.get('li')
    .should('have.length', 2)
})

Oh! And what would this app be if a user was unable to add a new task to the list? We’d better test that as well.

it('adds a new todo', () => {
  const input = "Learn about cypress"
  cy.get('.form-control')
    .type(input)
    .type('{enter}')
    .get('li')
    .should('have.length', 3)
})

This looks similar to what we wrote in the last two tests. We obtain the input and simulate typing a value into it. Then, we simulate submitting a task that should update the state of the application, thereby increasing the length from 2 to 3. So, really, we can build off of what we already have!

Changing the value from three to two will cause the test to fail — that’s what we’d expect because the list should have two tasks by default and submitting once should produce a total of three.

You might be wondering what would happen if the user deletes either (or both) of the default tasks before attempting to submit a new task. Well, we could write a test for that as well, but we’re not making that assumption in this example since we only want to confirm that tasks can be submitted. This is an easy way for us to test the basic submitting functionality as we develop and we can account for advanced/edge cases later.

The last feature we need to test is deleting tasks. First, we want to delete one of the default task items and then check that one remains once the deletion happens. It’s the same sort of deal as before, but we should expect one item left in the list instead of the three we expected when adding a new task to the list.

it('deletes a todo', () => {
  cy.get('li')
    .first()
    .find('.btn-danger')
    .click()
    .get('li')
    .should('have.length', 1)
})

OK, so what happens if we delete both of the default tasks in the list and the list is completely empty? Let’s say we want to display this text when no more items are in the list: “All of your tasks are complete. Nicely done!”

This isn’t too different from what we have done before. You can try it out first then come back to see the code for it.

it('deletes all todo', () => {
  cy.get('li')
    .first()
    .find('.btn-danger')
    .click()
    .get('li')
    .first()
    .find('.btn-danger')
    .click()
    .get('.no-task')
    .should('have.text', 'All of your tasks are complete. Nicely done!')
})

Both tests look similar: we get the list item element, target the first one, and make use of cy.find() to look for the DOM element with a btn-danger class name (which, again, is a totally arbitrary class name for the delete button in this example app). We simulate a click event on the element to delete the task item.

I’m checking the element with the .no-task class for that exact empty-state message in this particular test.

Testing network requests

Network requests are kind of a big deal because that’s often the source of data used in an application. Say we have a component in our app that makes a request to the server to obtain data which will be displayed to user. Let’s say the component markup looks like this:

class App extends React.Component {
  state = {
    isLoading: true,
    users: [],
    error: null
  };

  fetchUsers() {
    fetch(`https://jsonplaceholder.typicode.com/users`)
      .then(response => response.json())
      .then(data =>
        this.setState({
          users: data,
          isLoading: false,
        })
      )
      .catch(error => this.setState({ error, isLoading: false }));
  }

  componentDidMount() {
    this.fetchUsers();
  }

  render() {
    const { isLoading, users, error } = this.state;
    return (
      <React.Fragment>
        <h1>Random User</h1>
        {error ? <p>{error.message}</p> : null}
        {!isLoading ? (
          users.map(user => {
            const { username, name, email } = user;
            return (
              <div key={username}>
                <p>Name: {name}</p>
                <p>Email Address: {email}</p>
                <hr />
              </div>
            );
          })
        ) : (
          <h3>Loading...</h3>
        )}
      </React.Fragment>
    );
  }
}

Here, we are making use of the JSON Placeholder API as an example. We can have a test like this to test the response we get from the server:

describe('Request', () => {
  it('displays random users from API', () => {
    cy.request('https://jsonplaceholder.typicode.com/users')
      .should((response) => {
        expect(response.status).to.eq(200)
        expect(response.body).to.have.length(10)
        expect(response).to.have.property('headers')
        expect(response).to.have.property('duration')
      })
  })
})

The benefit of testing the server (as opposed to stubbing it) is that we are certain the response we get is the same as that which a user will get. To learn more about network requests and how you can stub network requests, see this page in the Cypress documentation.
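
As a rough sketch of the stubbing side (this uses Cypress’s cy.intercept command, which is not part of this article’s original code), the same endpoint could be faked so the test never leaves the browser:

it('displays stubbed users', () => {
  // Replace the real response with a canned body: no network involved
  cy.intercept('GET', 'https://jsonplaceholder.typicode.com/users', {
    statusCode: 200,
    body: [{ username: 'jdoe', name: 'John Doe', email: 'jdoe@example.com' }],
  })

  cy.visit('/')

  // The UI should render the stubbed user
  cy.contains('Name: John Doe')
  cy.contains('Email Address: jdoe@example.com')
})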

Running tests from the command line

Cypress tests can run from the terminal without the provided UI:

./node_modules/.bin/cypress run

…or

npx cypress run

Let’s run the form tests we wrote:

npx cypress run --record --spec "cypress/integration/form.spec.js"

Terminal should output the results right there with a summary of what was tested.

There’s a lot more about using Cypress with the command line in the documentation.

That’s a wrap!

Tests are something that either gets people excited or scared, depending on who you talk to. Hopefully what we’ve looked at in this post gets everyone excited about implementing tests in an application and shows how relatively straightforward it can be. Cypress is an excellent tool and one I’ve found myself reaching for in my own work, but there are others as well. Regardless of what tool you use (and how you feel about tests), hopefully you see the benefits of testing and are more compelled to give them a try.


Writing Tests for React Applications Using Jest and Enzyme

While it is important to have a well-tested API, solid test coverage is a must for any React application. Tests increase confidence in the code and help prevent shipping bugs to users.

That’s why we’re going to focus on testing in this post, specifically for React applications. By the end, you’ll be up and running with tests using Jest and Enzyme.

No worries if those names mean nothing to you because that’s where we’re headed right now!

Installing the test dependencies

Jest is a unit testing framework that makes testing React applications pretty darn easy because it works seamlessly with React (because, well, the Facebook team made it), though it is compatible with other JavaScript frameworks as well. It serves as a test runner and ships with a built-in assertion library and the ability to mock functions.

Enzyme is designed to test components, and it’s a great way to write assertions (or scenarios) that simulate actions to confirm the front-end UI is working correctly. In other words, it seeks out components on the front end, interacts with them, and raises a flag if any of them aren’t working the way they’re told they should.

So, Jest and Enzyme are distinct tools, but they complement each other well.

For our purposes, we will spin up a new React project using create-react-app because it comes with Jest configured right out of the box.

yarn create react-app my-app

We still need to install enzyme and enzyme-adapter-react-16 (that number should match whichever version of React you’re using).

yarn add enzyme enzyme-adapter-react-16 --dev

OK, that creates our project and gets us both Jest and Enzyme in our project in two commands. Next, we need to create a setup file for our tests. We’ll call this file setupTests.js and place it in the src folder of the project.

Here’s what should be in that file:

import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

configure({ adapter: new Adapter() });

This brings in Enzyme and sets up the adapter for running our tests.

To make things easier on us, we are going to write tests for a React application I have already built. Grab a copy of the app over on GitHub.

Taking snapshots of tests

Snapshot testing is used to keep track of changes in the app UI. If you’re wondering whether we’re dealing with literal images of the UI, the answer is no, but snapshots are super useful because they capture the code of a component at a moment in time so we can compare the component in one state versus any other possible states it might take.

The first time a test runs, a snapshot of the component code is composed and saved in a new __snapshots__ folder in the src directory. On subsequent test runs, the current UI is compared to the existing snapshot. Here’s a successful snapshot test of the sample project’s App component:

it("renders correctly", () => {   const wrapper = shallow(     <App />   );   expect(wrapper).toMatchSnapshot(); });

Every new snapshot that gets generated when the test suite runs will be saved in the __snapshots__ folder. What’s great about that is, on every subsequent run, Jest will check to see if the component still matches the existing snapshot.
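With an Enzyme snapshot serializer such as enzyme-to-json in place (not covered in this post), the saved file looks something like this. Consider it illustrative rather than exact output:

// src/__snapshots__/App.test.js.snap
// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`renders correctly 1`] = `
<div>
  <h2>
    Random User
  </h2>
</div>
`;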

Let’s create a condition where the test fails. We’ll change the <h2> tag of our component from <h2>Random User</h2> to <h2>CSSTricks Tests</h2>. Now when the tests run, the command line reports a failed test along with a diff of what changed.

If we want our change to pass the test, we either revert the heading to what it was before, or we update the snapshot file. Jest even provides instructions for how to update the snapshot right from the command line, so there’s no need to do it manually:

Inspect your code changes or press `u` to update them.

So, that’s what we’ll do in this case. We press u to update the snapshot, the test passes, and we move on.

Did you catch the shallow method in our snapshot test? That’s from the Enzyme package and instructs the test to run a single component and nothing else, not even any child components that might be inside it. It’s a nice clean way to isolate code and get better information when debugging, and it is especially great for simple, non-interactive components.

In addition to shallow, we also have render for snapshot testing. What’s the difference, you ask? While shallow excludes child components when testing a component, render includes them while rendering to static HTML.
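For instance, here’s a minimal sketch using render. The import path and the heading text are assumptions based on the sample app:

import React from 'react';
import { render } from 'enzyme';
import App from './App'; // path assumed for the sample project

it('renders the component and its children to static HTML', () => {
  // render returns a Cheerio wrapper around the static markup
  const wrapper = render(<App />);
  expect(wrapper.find('h2').text()).toEqual('Random User');
});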

There is one more method in the mix to be aware of: mount. This is the most engaging type of test in the bunch because it fully renders the component (like shallow and render do) and its children (like render does), but puts them in the DOM, which means it can fully test any component that interacts with the DOM API as well as any props that are passed to and from it. It’s a comprehensive test for interactivity. It’s also worth noting that, since it does a full mount, we’ll want to call .unmount() on the component after the test runs so it doesn’t conflict with other tests.
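A minimal sketch of that pattern, again with assumed import paths:

import React from 'react';
import { mount } from 'enzyme';
import App from './App'; // path assumed for the sample project

it('mounts the component in the DOM', () => {
  const wrapper = mount(<App />);
  expect(wrapper.find('h2').exists()).toBe(true);
  // Clean up the full mount so it doesn't conflict with other tests
  wrapper.unmount();
});

In a real suite you’d also want the API call triggered by componentDidMount to be mocked, which is exactly what we cover a little later.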

Testing a component’s lifecycle methods

Lifecycle methods are hooks provided by React that get called at different stages of a component’s lifespan. These methods come in handy when handling things like API calls. Since they are often used in React components, you can have your test suite cover them to ensure all things work as expected.

We fetch data from the API when the component mounts. We can check that the lifecycle method gets called by making use of Jest, which makes it possible for us to mock lifecycle methods used in React applications.

it('calls componentDidMount', () => {
  jest.spyOn(App.prototype, 'componentDidMount')
  const wrapper = shallow(<App />)
  expect(App.prototype.componentDidMount.mock.calls.length).toBe(1)
})

We attach a spy to the component’s prototype, targeting the componentDidMount() lifecycle method. Next, we assert that the lifecycle method is called exactly once by checking the call length.
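One thing to keep in mind: a spy like this sticks around for the rest of the test file unless it is restored. A common cleanup step (an extra suggestion, not something the sample repo requires) looks like this:

afterEach(() => {
  // Put any spied-on methods back to their original implementations
  jest.restoreAllMocks()
})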

Testing component props

How can you be sure that props from one component are being passed to another? We have a test to confirm it, of course! The Enzyme API allows us to create mock props so tests can simulate props being passed between components.

Let’s say we are passing user props from the main App component into a Profile component. In other words, we want the App to inform the Profile with details about user information to render a profile for that user.
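The sample app includes a Profile component already, but if you’re not following along with the repo, a hypothetical version might look roughly like this (note the destructured user prop, which comes up again right after the tests below):

// A hypothetical Profile component, for illustration only
const Profile = ({ user }) => (
  <div>
    <h4>{user.name}</h4>
    <p>{user.email}</p>
  </div>
);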

First, let’s mock the user props:

const user = {
  name: 'John Doe',
  email: 'johndoe@gmail.com',
  username: 'johndoe',
  image: null
}

These tests look a lot like the others in that they’re wrapped around the component. However, we’re using an additional describe layer that names the component being tested, then lets us spell out the props and values we expect to be passed.

describe('<Profile />', () => {
  it('contains h4', () => {
    const wrapper = mount(<Profile user={user} />)
    const value = wrapper.find('h4').text()
    expect(value).toEqual('John Doe')
  })

  it('accepts user props', () => {
    const wrapper = mount(<Profile user={user} />)
    expect(wrapper.props().user).toEqual(user)
  })
})

This particular example contains two tests. In the first test, we pass the user props to the mounted Profile component. Then, we check to see if we can find an <h4> element that corresponds to what we have in the Profile component.

In the second test, we want to check if the props we passed to the mounted component equal the mock props we created above. Note that even though we are destructuring the props in the Profile component, it does not affect the test.

Mock API calls

There’s a part in the project we’ve been using where an API call is made to fetch a list of users. And guess what? We can test that API call, too!

The slightly tricky thing about testing API calls is that we don’t actually want to hit the API. Some APIs have call limits or even costs for making calls, so we want to avoid that. Thankfully, we can use Jest to mock axios requests. See this post for a more thorough walkthrough of using axios to make API calls.

First, we’ll create a new folder called __mocks__ (the double underscores are a Jest convention) in the same directory where our __tests__ folder lives. This is where our mock request file goes; since we’re mocking the axios module, the file is named axios.js.

// __mocks__/axios.js
module.exports = {
  get: jest.fn(() => {
    return Promise.resolve({
      data: [
        {
          id: 1,
          name: 'Jane Doe',
          email: 'janedoe@gmail.com',
          username: 'jdoe'
        }
      ]
    })
  })
}

We want to check and see that the GET request is made. We’ll import axios for that:

import axios from 'axios';

Just below the import statements, we need Jest to replace axios with our mock, so we add this:

jest.mock('axios')

The Jest API has a spyOn() method that takes the object and the name of the method to watch (plus an optional accessType argument for spying on getters and setters). We use jest.spyOn() to spy on the get method we implemented in our __mocks__ file, and it can be used with the shallow, render and mount tests we covered earlier.

it('fetches a list of users', () => {
  const getSpy = jest.spyOn(axios, 'get')
  const wrapper = shallow(<App />)
  expect(getSpy).toBeCalled()
})

We passed the test!
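If you want to take it one step further, you can assert against the data the mock resolves with. Here’s a minimal sketch using the fixture values from our __mocks__/axios.js file:

it('resolves with the mocked user data', async () => {
  // axios is replaced by the manual mock, so get() resolves with our fixture
  const response = await axios.get('https://jsonplaceholder.typicode.com/users')
  expect(response.data).toHaveLength(1)
  expect(response.data[0].name).toEqual('Jane Doe')
})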

That’s a primer into the world of testing in a React application. Hopefully you now see the value that testing adds to a project and how relatively easy it can be to implement, thanks to the heavy lifting done by the joint powers of Jest and Enzyme.
