
Types or Tests: Why Not Both?

Every now and then, a debate flares up about the value of typed JavaScript. “Just write more tests!” yell some opponents. “Replace unit tests with types!” scream others. Both are right in some ways, and wrong in others. Twitter affords little room for nuance. But in the space of this article we can try to lay out a reasoned argument for how both can and should coexist.

Correctness: what we all really want

It’s best to start at the end: what we really want out of all this meta-engineering is correctness. I don’t mean the strict theoretical computer science definition of it, but a more general adherence of program behavior to its specification: we have an idea in our heads of how our program ought to work, and the process of programming organizes bits and bytes to make that idea a reality. Because we aren’t always precise about what we want, and because we’d like to have confidence that our program didn’t break when we made a change, we write types and tests on top of the raw code we already have to write just to make things work in the first place.

So, if we accept that correctness is what we want, and types and tests are just automated ways to get there, it would be great to have a visual model of how types and tests help us achieve correctness, and therefore understand where they overlap and where they complement each other.

A visual model of program correctness

If we imagine the entire infinite Turing-complete possible space of everything programs can ever possibly do — inclusive of failures — as a vast gray expanse, then what we want our program to do, our specification, is a very, very, very small subset of that possible space (the green diamond below, exaggerated in size for the sake of visibility):

A large gray box with a green diamond in the bottom-right hand corner that is labeled correct.

Our job in programming is to wrangle our program as close to the specification as possible (knowing, of course, we are imperfect, and our spec is constantly in motion, e.g. due to human error, new features or under-specified behavior; so we never quite manage to achieve exact overlap):

The same gray box and green diamond shown earlier, but with a green border around the diamond that is slightly off center to indicate room for error.

Note, again, that the boundaries of our program’s behavior also include planned and unplanned errors for the purposes of our discussion here. Our meaning of “correctness” includes planned errors, but does not include unplanned errors.

Tests and Correctness

We write tests to ensure that our program fits our expectations, but have a number of choices of things to test:

A series of red, blue, purple and orange dots has been added to the diagram to represent different possible tests.

The ideal tests are the orange dots in the diagram — they accurately test that our program does overlap the spec. In this visualization, we don’t really distinguish between types of tests, but you might imagine unit tests as really small dots, while integration/end-to-end tests are large dots. Either way, they are dots, because no one test fully describes every path through a program. (In fact, you can have 100% code coverage and still not test every path because of the combinatorial explosion!)

The blue dot in this diagram is a bad test. Sure, it tests that our program works, but it doesn’t actually pin it to the underlying spec (what we really want out of our program, at the end of the day). The moment we fix our program to align closer to spec, this test breaks, giving us a false positive.

The purple dot is a valuable test because it tests how we think our program should work and identifies an area where our program currently doesn’t. Leading with purple tests and fixing the program implementation accordingly is also known as Test-Driven Development.

The red test in this diagram is a rare test. Instead of normal (orange) tests that cover “happy paths” (including planned error states), this is a test that expects and verifies that an “unhappy path” fails. If that unhappy path suddenly “passes” where it should “fail,” that is a huge early warning sign that something went wrong — but it is basically impossible to write enough tests to cover the vast expanse of possible unhappy paths that exist outside of the green spec area. People rarely find value in testing that things that shouldn’t work don’t work, so they don’t do it; but such tests can still be helpful when things go wrong.

Types and Correctness

Where tests are single points on the possibility space of what our program can do, types represent categories carving entire sections from the total possible space. We can visualize them as rectangles:

A purple box has been drawn around the green-bordered diamond in the chart to represent the boundary that our program’s types enforce.

We pick a rectangle to contrast with the diamond representing the program, because no type system can fully describe our program’s behavior using types alone. (To pick a trivial example, an id that should always be a positive integer is typed as a number, but the number type also accepts fractions and negative numbers. There is no way to restrict a number type to a specific range, beyond a very simple union of number literals.)
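Just to make that concrete, here’s a contrived TypeScript sketch (the type names are made up for illustration):

type Id = number;                      // admits 7, but also -7, 0.5, and NaN
type DieRoll = 1 | 2 | 3 | 4 | 5 | 6;  // a union of number literals: exact values only

const id: Id = -1;        // compiles, even though a real id should be positive
const roll: DieRoll = 7;  // Error: Type '7' is not assignable to type 'DieRoll'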

Several more differently colored borders have been added around the diamond in the chart to represent different possible type boundaries. Anything outside of the purple box that was drawn earlier is considered invalid.

Types serve as a constraint on where our program can go as we code. If our program starts to exceed the boundaries specified by its types, our type-checker (like TypeScript or Flow) will simply refuse to let us compile our program. This is nice because, in a dynamic language like JavaScript, it is very easy to accidentally create a crashing program that certainly wasn’t something you intended. The simplest value add is automated null checking. If foo has no method called bar, then calling foo.bar() will cause the all-too-familiar undefined is not a function runtime exception. If foo were typed at all, this could have been caught by the type-checker while writing, with specific attribution to the problematic line of code (with autocomplete as a concomitant benefit). This is something tests simply cannot do.
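A tiny, contrived sketch of that value add (again in TypeScript, with made-up names):

// foo is inferred to have a single property, baz
const foo = { baz: 42 };

foo.baz.toFixed();  // fine
foo.bar();          // compile-time error: Property 'bar' does not exist on type '{ baz: number; }'
                    // in untyped JavaScript, this would only blow up at runtime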

We might want to write strict types for our program as though we are trying to write the smallest possible rectangle that still fits our spec. However, this has a learning curve, because taking full advantage of type systems involves learning a whole new syntax and grammar of operators and generic type logic needed to model the full dynamic range of JavaScript. Handbooks and Cheatsheets help lower this learning curve, and more investment is needed here.

Fortunately, this adoption/learning curve doesn’t have to stop us. Since type-checking is an opt-in process with Flow and configurable strictness with TypeScript (with the ability to selectively ignore troublesome lines of code), we have our pick from a spectrum of type safety. We can even model this, too:

Larger green and red box borders have been drawn around the diamond. Together with the purple box, these represent different levels of type strictness.

Larger rectangles, like the big red one in the chart above, represent a very permissive adoption of a type system on your codebase — for example, allowing implicit any (leaving TypeScript’s noImplicitAny flag off) and fully relying on type inference to merely restrict our program from the worst of our coding.
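A permissive setup like that might correspond to a tsconfig.json along these lines (the flags are real TypeScript options; this particular selection is just illustrative):

{
  "compilerOptions": {
    "allowJs": true,
    "noImplicitAny": false,
    "strictNullChecks": false
  }
}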

Moderate strictness (like the medium-size green rectangle) could represent a more faithful typing, but with plenty of escape hatches, like using explicit instances of any all over the codebase and manual type assertions. Still, the possible surface area of valid programs that don’t match our spec is massively reduced even with this light typing work.

Maximum strictness, like the purple rectangle, keeps things so tight to our spec that it sometimes finds parts of our program that don’t fit (and these are often unplanned errors in our program behavior). Finding bugs in an existing program like this is a very common story from teams converting vanilla JavaScript codebases. However, getting maximum type safety out of our type-checker likely involves taking advantage of generic types and special operators designed to refine and narrow the possible space of types for each variable and function.
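As a small sketch of what that refinement looks like (a discriminated union that the type-checker narrows for us):

type Result =
  | { ok: true; value: string }
  | { ok: false; error: Error };

function unwrap(result: Result): string {
  if (result.ok) {
    return result.value;  // narrowed: only the success branch has .value
  }
  throw result.error;     // narrowed: only the failure branch has .error
}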

Notice that we don’t technically have to write our program first before writing the types. After all, we just want our types to closely model our spec, so really we can write our types first and then backfill the implementation later. In theory, this would be Type-Driven Development; in practice, few people actually develop this way since types intimately permeate and interleave with our actual program code.

Putting them together

What we are eventually building up to is an intuitive visualization of how both types and tests complement each other in guaranteeing our program’s correctness.

Back to the original diagram: a green diamond representing our spec, a slightly off-center green border representing our program, orange dots along that boundary representing tests, and a purple box around everything to represent the boundary drawn by our types.

Our Tests assert that our program specifically performs as intended in select key paths (although there are certain other variations of tests as discussed above, the vast majority of tests do this). In the language of the visualization we have developed, they “pin” the dark green diamond of our program to the light green diamond of our spec. Any movement away by our program breaks these tests, which makes them squawk. This is excellent! Tests are also infinitely flexible and configurable for the most custom of use cases.

Our Types assert that our program doesn’t run away from us, by disallowing possible failure modes beyond a boundary that we draw, hopefully as tightly as possible around our spec. In the language of our visualization, they “contain” the possible drift of our program away from our spec (as we are always imperfect, and every mistake we make adds additional failure behavior to our program). Types are blunt tools, but powerful ones (because of type inference and editor tooling) that benefit from a strong community supplying types you don’t have to write from scratch.

In short:

  • Tests are best at ensuring happy paths work.
  • Types are best at preventing unhappy paths from existing.

Use them together based on their strengths, for best results!


If you’d like to read more about how Types and Tests intersect, Gary Bernhardt’s excellent talk on Boundaries and Kent C. Dodds’ Testing Trophy were significant influences in my thinking for this article.


Using Cypress to Write Tests for a React Application

End-to-end tests are written to assert the flow of an application from start to finish. Instead of handling the tests yourself — you know, manually clicking all over the application — you can write a test that runs as you build the application. That’s what we call continuous integration and it’s a beautiful thing. Write some code, save it, and let tooling do the dirty work of making sure it doesn’t break anything.

Cypress is just one end-to-end testing framework that does all that clicking work for us and that’s what we’re going to look at in this post. It’s really for any modern JavaScript library, but we’re going to integrate it with React in the examples.

Let’s set up an app to test

In this tutorial, we will write tests to cover a todo application I’ve built. You can clone the repository to follow along as we plug it into Cypress.

git clone git@github.com:kinsomicrote/cypress-react-tutorial.git

Navigate into the application, and install the dependencies:

cd cypress-react-tutorial
yarn install

Cypress isn’t part of the dependencies, but you can install it by running this:

yarn add cypress --dev

Now, run this command to open Cypress:

node_modules/.bin/cypress open

Typing that command into the terminal over and over can get exhausting, but you can add this script to the “scripts” section of the package.json file in the project root:

"cypress": "cypress open"

Now, all you have to do is run npm run cypress once and Cypress will be standing by at all times. To get a feel for what the application we’ll be testing looks like, you can start the React application by running yarn start.

We will start by writing a test to confirm that Cypress works. In the cypress/integration folder, create a new file called init.spec.js. The test asserts that true is equal to true; we only need it to confirm that Cypress is up and running for the application.

describe('Cypress', () => {
  it('is working', () => {
    expect(true).to.equal(true)
  })
})

You should have a list of tests open. Go there and select init.spec.js.

That should cause the test to run and pop up a screen that shows the test passing.

While we’re still in init.spec.js, let’s add a test to assert that we can visit the app by hitting http://localhost:3000 in the browser. This’ll make sure the app itself is running.

it('visits the app', () => {
  cy.visit('http://localhost:3000')
})

We call the method visit() and we pass it the URL of the app. We have access to a global object called cy for calling the methods available to us on Cypress.

To avoid having to write the URL time and again, we can set a base URL that will be used throughout the tests we write. Open the cypress.json file in the root directory of the application and define the URL there:

{   "baseUrl": "http://localhost:3000" }

You can change the test block to look like this:

it('visits the app', () => {
  cy.visit('/')
})

…and the test should continue to pass. 🤞

Testing form controls and inputs

The test we’ll be writing will cover how users interact with the todo application. For example, we want to ensure the input is in focus when the app loads so users can start entering tasks immediately. We also want to ensure that there’s a default task in there so the list is not empty by default. When there are no tasks, we want to show text that tells the user as much.

To get started, go ahead and create a new file in the integration folder called form.spec.js. The name of the file isn’t all that important. We’re prepending “form” because what we’re testing is ultimately a form input. You may want to call it something different depending on how you plan on organizing tests.

We’re going to add a describe block to the file:

describe('Form', () => {
  beforeEach(() => {
    cy.visit('/')
  })

  it('focuses the input', () => {
    cy.focused().should('have.class', 'form-control')
  })
})

The beforeEach block is used to avoid unnecessary repetition: every test in this suite needs to visit the application, and it would be redundant to repeat that line in each one. beforeEach ensures Cypress visits the application before each test.

For the test, let’s check that the DOM element in focus when the application first loads has a class of form-control. If you check the source file, you will see that the input element has a class called form-control set on it, and autoFocus as one of the element’s attributes:

<input   type="text"   autoFocus   value={this.state.item}   onChange={this.handleInputChange}   placeholder="Enter a task"   className="form-control" />

When you save that, go back to the test screen and select form.spec.js to run the test.

The next thing we’ll do is test whether a user can successfully enter a value into the input field.

it('accepts input', () => {
  const input = "Learn about Cypress"
  cy.get('.form-control')
    .type(input)
    .should('have.value', input)
})

We’ve added some text (“Learn about Cypress”) to the input. Then we make use of cy.get to obtain the DOM element with the form-control class name. We could also do something like cy.get('input') and get the same result. After getting the element, cy.type() is used to enter the value we assigned to the input, then we assert that the DOM element with class form-control has a value that matches the value of input.

In other words:

Does the input have a class of form-control, and does it contain “Learn about Cypress”?

Our application should also have two todos that have been created by default when the app runs. It’s important we have a test that checks that they are indeed listed.

What do we want? In our code, we are making use of the list item (<li>) element to display tasks as items in a list. Since we have two items listed by default, it means that the list should have a length of two at the start. So, the test will look something like this:

it('displays list of todo', () => {
  cy.get('li')
    .should('have.length', 2)
})

Oh! And what would this app be if a user was unable to add a new task to the list? We’d better test that as well.

it('adds a new todo', () => {
  const input = "Learn about cypress"
  cy.get('.form-control')
    .type(input)
    .type('{enter}')
    .get('li')
    .should('have.length', 3)
})

This looks similar to what we wrote in the last two tests. We obtain the input and simulate typing a value into it. Then, we simulate submitting a task that should update the state of the application, thereby increasing the length from 2 to 3. So, really, we can build off of what we already have!

Changing the value from three to two will cause the test to fail — that’s what we’d expect because the list should have two tasks by default and submitting once should produce a total of three.

You might be wondering what would happen if the user deletes either (or both) of the default tasks before attempting to submit a new task. Well, we could write a test for that as well, but we’re not making that assumption in this example since we only want to confirm that tasks can be submitted. This is an easy way for us to test the basic submitting functionality as we develop and we can account for advanced/edge cases later.
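That said, if you did want to cover that edge case, a sketch of the test might look something like this (assuming the input still renders when the list is empty):

it('adds a todo to an empty list', () => {
  // remove both default tasks first
  cy.get('li').first().find('.btn-danger').click()
  cy.get('li').first().find('.btn-danger').click()
  // then submit a new task and expect it to be the only item
  cy.get('.form-control')
    .type('A brand new task')
    .type('{enter}')
    .get('li')
    .should('have.length', 1)
})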

The last feature we need to test is deleting tasks. First, we want to delete one of the default task items and then see if there is one remaining once the deletion happens. It’s the same sort of deal as before, but we should expect one item left in the list instead of the three we expected when adding a new task to the list.

it('deletes a todo', () => {
  cy.get('li')
    .first()
    .find('.btn-danger')
    .click()
    .get('li')
    .should('have.length', 1)
})

OK, so what happens if we delete both of the default tasks in the list and the list is completely empty? Let’s say we want to display this text when no more items are in the list: “All of your tasks are complete. Nicely done!”

This isn’t too different from what we have done before. You can try it out first then come back to see the code for it.

it.only('deletes all todo', () => {
  cy.get('li')
    .first()
    .find('.btn-danger')
    .click()
    .get('li')
    .first()
    .find('.btn-danger')
    .click()
    .get('.no-task')
    .should('have.text', 'All of your tasks are complete. Nicely done!')
})

Both tests look similar: we get the list item element, target the first one, and make use of cy.find() to look for the DOM element with a btn-danger class name (which, again, is a totally arbitrary class name for the delete button in this example app). We simulate a click event on the element to delete the task item.


Testing network requests

Network requests are kind of a big deal because they’re often the source of data used in an application. Say we have a component in our app that makes a request to the server to obtain data which will be displayed to the user. Let’s say the component looks like this:

class App extends React.Component {
  state = {
    isLoading: true,
    users: [],
    error: null
  };

  fetchUsers() {
    fetch(`https://jsonplaceholder.typicode.com/users`)
      .then(response => response.json())
      .then(data =>
        this.setState({
          users: data,
          isLoading: false,
        })
      )
      .catch(error => this.setState({ error, isLoading: false }));
  }

  componentDidMount() {
    this.fetchUsers();
  }

  render() {
    const { isLoading, users, error } = this.state;
    return (
      <React.Fragment>
        <h1>Random User</h1>
        {error ? <p>{error.message}</p> : null}
        {!isLoading ? (
          users.map(user => {
            const { username, name, email } = user;
            return (
              <div key={username}>
                <p>Name: {name}</p>
                <p>Email Address: {email}</p>
                <hr />
              </div>
            );
          })
        ) : (
          <h3>Loading...</h3>
        )}
      </React.Fragment>
    );
  }
}

Here, we are making use of the JSON Placeholder API as an example. We can have a test like this to test the response we get from the server:

describe('Request', () => {
  it('displays random users from API', () => {
    cy.request('https://jsonplaceholder.typicode.com/users')
      .should((response) => {
        expect(response.status).to.eq(200)
        expect(response.body).to.have.length(10)
        expect(response).to.have.property('headers')
        expect(response).to.have.property('duration')
      })
  })
})

The benefit of testing the server (as opposed to stubbing it) is that we are certain the response we get is the same as that which a user will get. To learn more about network requests and how you can stub network requests, see this page in the Cypress documentation.
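If you do want to stub instead, a sketch using cy.intercept() (available in Cypress 6 and later; the response body here is made up, and this assumes the component above is what’s served at the base URL) might look like:

it('displays stubbed users', () => {
  // serve a canned response instead of hitting the real API
  cy.intercept('GET', 'https://jsonplaceholder.typicode.com/users', {
    statusCode: 200,
    body: [{ id: 1, name: 'Jane Doe', email: 'janedoe@gmail.com', username: 'jdoe' }]
  })
  cy.visit('/')
  cy.contains('Jane Doe')
})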

Running tests from the command line

Cypress tests can run from the terminal without the provided UI:

./node_modules/.bin/cypress run

…or

npx cypress run

Let’s run the form tests we wrote:

npx cypress run --spec "cypress/integration/form.spec.js"

The terminal should output the results right there, along with a summary of what was tested.

There’s a lot more about using Cypress with the command line in the documentation.

That’s a wrap!

Tests are something that either gets people excited or scared, depending on who you talk to. Hopefully what we’ve looked at in this post gets everyone excited about implementing tests in an application and shows how relatively straightforward it can be. Cypress is an excellent tool and one I’ve found myself reaching for in my own work, but there are others as well. Regardless of what tool you use (and how you feel about tests), hopefully you see the benefits of testing and are more compelled to give them a try.


Writing Tests for React Applications Using Jest and Enzyme

While it is important to have a well-tested API, solid test coverage is a must for any React application. Tests increase confidence in the code and help prevent shipping bugs to users.

That’s why we’re going to focus on testing in this post, specifically for React applications. By the end, you’ll be up and running with tests using Jest and Enzyme.

No worries if those names mean nothing to you because that’s where we’re headed right now!

Installing the test dependencies

Jest is a unit testing framework that makes testing React applications pretty darn easy because it works seamlessly with React (because, well, the Facebook team made it, though it is compatible with other JavaScript frameworks). It serves as a test runner and includes an entire assertion library, with the ability to mock functions as well.

Enzyme is designed to test components and it’s a great way to write assertions (or scenarios) that simulate actions that confirm the front-end UI is working correctly. In other words, it seeks out components on the front end, interacts with them, and raises a flag if any of the components aren’t working the way it’s told they should.

So, Jest and Enzyme are distinct tools, but they complement each other well.

For our purposes, we will spin up a new React project using create-react-app because it comes with Jest configured right out of the box.

yarn create react-app my-app

We still need to install enzyme and enzyme-adapter-react-16 (that number should be based on whichever version of React you’re using).

yarn add enzyme enzyme-adapter-react-16 --dev

OK, that creates our project and gets us both Jest and Enzyme in two commands. Next, we need to create a setup file for our tests. We’ll call this file setupTests.js and place it in the src folder of the project.

Here’s what should be in that file:

import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';

configure({ adapter: new Adapter() });

This brings in Enzyme and sets up the adapter for running our tests.

To make things easier on us, we are going to write tests for a React application I have already built. Grab a copy of the app over on GitHub.

Taking snapshots of tests

Snapshot testing is used to keep track of changes in the app UI. If you’re wondering whether we’re dealing with literal images of the UI, the answer is no, but snapshots are super useful because they capture the code of a component at a moment in time so we can compare the component in one state versus any other possible states it might take.

The first time a test runs, a snapshot of the component code is composed and saved in a new __snapshots__ folder in the src directory. On subsequent test runs, the current output is compared to the existing snapshot. Here’s a successful snapshot test of the sample project’s App component.

it("renders correctly", () => {   const wrapper = shallow(     <App />   );   expect(wrapper).toMatchSnapshot(); });

Every new snapshot that gets generated when the test suite runs will be saved in the __snapshots__ folder. What’s great about that is that Jest will check to see if the component still matches the snapshot on subsequent test runs.
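The snapshot file itself is just generated code. With a serializer like enzyme-to-json configured, it contains the rendered markup; here’s a rough sketch of what it might hold (the exact contents depend on your component and setup):

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`renders correctly 1`] = `
<React.Fragment>
  <h2>
    Random User
  </h2>
</React.Fragment>
`;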

Let’s create a condition where the test fails. We’ll change the <h2> tag of our component from <h2>Random User</h2> to <h2>CSSTricks Tests</h2>. Now the component no longer matches the saved snapshot, and the test fails when it runs.

If we want our change to pass the test, we either change the heading to what it was before, or we can update the snapshot file. Jest even provides instructions for how to update the snapshot right from the command line so there’s no need to update the snapshot manually:

Inspect your code changes or press `u` to update them.

So, that’s what we’ll do in this case. We press u to update the snapshot, the test passes, and we move on.

Did you catch the shallow method in our test snapshot? That’s from the Enzyme package and instructs the test to run a single component and nothing else — not even any child components that might be inside it. It’s a nice clean way to isolate code and get better information when debugging and is especially great for simple, non-interactive components.

In addition to shallow, we also have render for snapshot testing. What’s the difference, you ask? While shallow excludes child components when testing a component, render includes them while rendering to static HTML.

There is one more method in the mix to be aware of: mount. This is the most engaging type of test in the bunch because it fully renders components (like shallow and render) and their children (like render) but puts them in the DOM, which means it can fully test any component that interacts with the DOM API as well as any props that are passed to and from it. It’s a comprehensive test for interactivity. It’s also worth noting that, since it does a full mount, we’ll want to make a call to .unmount on the component after the test runs so it doesn’t conflict with other tests.
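In practice, a mounted test often gets wrapped up like this (a sketch of the pattern rather than a test from the project):

it('renders without crashing', () => {
  const wrapper = mount(<App />);
  expect(wrapper.find('h2').text()).toEqual('Random User');
  wrapper.unmount();  // clean up the DOM so this test doesn't conflict with others
});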

Testing a component’s lifecycle methods

Lifecycle methods are hooks provided by React, which get called at different stages of a component’s lifespan. These methods come in handy when handling things like API calls. Since they are often used in React components, you can have your test suite cover them to ensure all things work as expected.

We fetch data from the API when the component mounts. We can check that the lifecycle method gets called by making use of Jest, which makes it possible for us to spy on lifecycle methods used in React applications.

it('calls componentDidMount', () => {
  jest.spyOn(App.prototype, 'componentDidMount')
  const wrapper = shallow(<App />)
  expect(App.prototype.componentDidMount.mock.calls.length).toBe(1)
})

We attach a spy to the component’s prototype, watching the componentDidMount() lifecycle method. Then, we assert that the lifecycle method is called exactly once by checking the call count.

Testing component props

How can you be sure that props from one component are being passed to another? We have a test confirm it, of course! The Enzyme API allows us to create a “mock” function so tests can simulate props being passed between components.

Let’s say we are passing user props from the main App component into a Profile component. In other words, we want the App to inform the Profile with details about user information to render a profile for that user.
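The real Profile component lives in the sample repo, but a hypothetical version that would satisfy the tests below could be as simple as this (a stand-in, not the repo’s actual code):

// a hypothetical stand-in for the repo's Profile component
const Profile = ({ user }) => (
  <div>
    <h4>{user.name}</h4>
    <p>{user.email}</p>
  </div>
);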

First, let’s mock the user props:

const user = {
  name: 'John Doe',
  email: 'johndoe@gmail.com',
  username: 'johndoe',
  image: null
}

Mock functions look a lot like other tests in that they’re wrapped around the components. However, we’re using an additional describe layer that takes the component being tested, then allows us to proceed by telling the test which props and values we expect to be passed.

describe('<Profile />', () => {
  it('contains h4', () => {
    const wrapper = mount(<Profile user={user} />)
    const value = wrapper.find('h4').text()
    expect(value).toEqual('John Doe')
  })

  it('accepts user props', () => {
    const wrapper = mount(<Profile user={user} />);
    expect(wrapper.props().user).toEqual(user)
  })
})

This particular example contains two tests. In the first test, we pass the user props to the mounted Profile component. Then, we check to see if we can find a <h4> element that corresponds to what we have in the Profile component.

In the second test, we want to check that the props we passed to the mounted component equal the mock props we created above. Note that even though we are destructuring the props in the Profile component, it does not affect the test.

Mock API calls

There’s a part in the project we’ve been using where an API call is made to fetch a list of users. And guess what? We can test that API call, too!

The slightly tricky thing about testing API calls is that we don’t actually want to hit the API. Some APIs have call limits or even costs for making calls, so we want to avoid that. Thankfully, we can use Jest to mock axios requests. See this post for a more thorough walkthrough of using axios to make API calls.

First, we’ll create a new folder called __mocks__ in the same directory where our __tests__ folder lives. Inside it, we create an axios.js file that holds the mock implementation Jest will use in place of the real module:

module.exports = {
  get: jest.fn(() => {
    return Promise.resolve({
      data: [
        {
          id: 1,
          name: 'Jane Doe',
          email: 'janedoe@gmail.com',
          username: 'jdoe'
        }
      ]
    })
  })
}

We want to check and see that the GET request is made. We’ll import axios for that:

import axios from 'axios';

Just below the import statements, we need Jest to replace axios with our mock, so we add this:

jest.mock('axios')

The Jest API has a spyOn() method that takes an optional accessType argument, which can be used to check whether we are able to “get” data from an API call. We use jest.spyOn() to call the spied method, which we implemented in our __mocks__ file, and it can be used with the shallow, render and mount tests we covered earlier.

it('fetches a list of users', () => {
  const getSpy = jest.spyOn(axios, 'get')
  const wrapper = shallow(
    <App />
  )
  expect(getSpy).toBeCalled()
})

We passed the test!

That’s a primer into the world of testing in a React application. Hopefully you now see the value that testing adds to a project and how relatively easy it can be to implement, thanks to the heavy lifting done by the joint powers of Jest and Enzyme.
