3 Common Mistakes in Front-end Testing

When doing front-end testing, choosing an appropriate testing strategy matters far more than writing as many tests as possible. With the wrong strategy, you end up with hard-to-maintain, unstable test cases that fall apart the moment the business changes. Maybe that is one of the reasons so many people hate writing tests.

This article walks through 3 common mistakes in front-end testing, in the hope that it helps you steer clear of them.

1. Testing Implementation Details

To be honest, this is my favorite mistake to talk about, because testing implementation details is such a serious problem: tests written this way give you almost no confidence. Below is an example of a test that checks implementation details.

// counter.js
import * as React from 'react'

export class Counter extends React.Component {
  state = {count: 0}
  increment = () => this.setState(({count}) => ({count: count + 1}))
  render() {
    const {count} = this.state
    return <button onClick={this.increment}>{count}</button>
  }
}

// __tests__/counter.js
import * as React from 'react'

// React Testing Library makes it hard to test implementation details,
// so this example uses enzyme instead
import {mount} from 'enzyme'
import {Counter} from '../counter'

test('the increment method increments count', () => {
  const wrapper = mount(<Counter />)
  // don't do this
  expect(wrapper.instance().state.count).toBe(0)
  wrapper.instance().increment()
  expect(wrapper.instance().state.count).toBe(1)
})

Why does the test above count as testing implementation details, and why is that bad? Over-testing implementation details like this has two consequences.

I can break the production code while the tests still pass. For example, if I deliberately misspell the handler name when assigning onClick, the button stops working but the test above stays green.

I can break the tests while refactoring the production code. For example, if I rename increment to updateCount, the test fails even though the component still works perfectly. (A refactor changes how the code is implemented, not what it does, so it should not force changes to the tests.)

Test cases like these are the hardest to maintain because you are constantly updating them, and they never buy you any extra confidence in the code.
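
For contrast, here is a minimal sketch of the same test written against behavior rather than internals, assuming React Testing Library (@testing-library/react) is installed. It only touches what a user can see and do: the rendered count and the click.

// __tests__/counter.behavior.js (hypothetical file name)
import * as React from 'react'
import {render, fireEvent} from '@testing-library/react'
import {Counter} from '../counter'

test('clicking the button increments the displayed count', () => {
  const {getByText} = render(<Counter />)
  // the button renders the current count as its label
  fireEvent.click(getByText('0'))
  expect(getByText('1')).toBeTruthy()
})

Renaming increment to updateCount would leave this test untouched, while broken onClick wiring would make it fail: exactly the two properties the enzyme version lacks.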

2. 100% Code Coverage

Another mistake is forcing 100% code coverage. Interestingly, I often see projects where 100% coverage is mandated. Wherever that rule comes from, it reflects a misunderstanding of what coverage measures, because hitting the number does not give you a corresponding amount of confidence in the code.

Code coverage can only tell you one thing: this line of code was executed by some test. It cannot answer anything beyond that, such as the following three questions.

Does the code work as required by the business?

Does the code work with other code in the project?

How does the code behave when the project fails unexpectedly?

Code coverage has another problem: every covered line raises the overall percentage by the same amount. That means if your goal is simply a higher number, adding tests to the "about" page boosts overall coverage exactly as much as adding tests to the "payment page". And a more serious problem still: chasing the number this way gives you no deeper understanding of your project.

There is no one-size-fits-all coverage target, because every project's needs are different. I generally don't focus much on the coverage number itself, but on whether the important parts of the project are properly covered. After identifying those key parts, I use the coverage report to find edge cases the tests have missed.
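
One practical way to encode this priority is sketched below with Jest's coverageThreshold option; the paths and numbers are illustrative, not recommendations. The idea: demand high coverage where it matters and only a sane baseline everywhere else.

// jest.config.js
// Illustrative only: './src/payment/' is a made-up path standing in
// for whatever the critical part of your project is.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    // a modest baseline for the project as a whole
    global: {statements: 70, branches: 60},
    // strict requirements for the code you actually rely on
    './src/payment/': {statements: 95, branches: 90},
  },
}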

For the record, 100% code coverage is perfectly reasonable for open source libraries: they are generally small and simple enough that full coverage is achievable, and they are critical code that many other projects depend on.

3. Repeated Testing

Compared to integration tests and unit tests, E2E tests draw the most complaints for being slow and flaky. You will never make a single E2E test run as fast or as reliably as a single unit test; that is simply out of reach. But in return, a single E2E test gives you far more confidence in the code than a single unit test, and in many cases confidence that unit tests cannot provide at all. So writing some E2E tests in a project is definitely worth it!

Of course, that doesn't mean we can't make our E2E tests faster and more reliable. Repeating the same steps across tests is a trap people often fall into when writing E2E tests, and it drags down the performance and reliability of the whole suite.

We should run tests in isolation: in theory, each individual E2E test should execute as if a different user were using the software. Does that mean every test run has to click through the registration and login forms to create a fresh user? It sounds right at first: you click the buttons, fill in the user information, register, and log in, because the business flow needs a logged-in user. But no, that's wrong!

Let's step back and ask why we write tests in the first place: so we can deliver projects with more confidence and fewer crashes! Now suppose you have 100 tests that need a logged-in user. How many times must you run the signup/login flow through the UI before you are convinced that flow works? 100 times, or once? Most people would say once, because if it succeeds once it should succeed every time. The remaining 99 runs add no confidence at all; they are pure overhead.

So what should you do? Since we've set up an isolated environment and tests shouldn't share the same user, my recommendation is: every time a test needs a freshly registered, logged-in user, send the same HTTP requests your app sends! Issuing a request is far faster than clicking through inputs on a page to type a username and password, and it produces fewer spurious failures. As long as one test still drives the full registration/login flow through the UI, you lose no confidence in that flow.
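
Here is a rough sketch of that idea in Cypress (any E2E tool with an HTTP client works the same way). The /register and /login endpoints, the response shape, and the token-in-localStorage session scheme are all assumptions to adapt to your own app.

// cypress/support/commands.js
// Hypothetical endpoints and session handling — adjust to your app.
Cypress.Commands.add('loginAsNewUser', () => {
  const user = {username: `user-${Date.now()}`, password: 'test-password'}
  cy.request('POST', '/register', user)
  cy.request('POST', '/login', user).then(({body}) => {
    // persist the session the same way the app itself does
    window.localStorage.setItem('token', body.token)
  })
})

// in a test file: every test gets its own user, with no UI clicking
it('shows the dashboard for a logged-in user', () => {
  cy.loginAsNewUser()
  cy.visit('/dashboard')
})

Keep one dedicated test that registers and logs in through the real form, so the flow itself stays covered.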

Summary

So always remember: we write tests to improve our confidence in the code. If what you're doing doesn't increase that confidence, ask yourself whether it's really worth doing!

That's it for this translated article. It covered 3 misconceptions: over-testing implementation details, mandating 100% coverage, and repeating tests. All three stem from losing sight of the essence of testing, which is to improve confidence in the code. When you find yourself struggling to write test cases, chances are you've gone down a dead end and are writing tests in the wrong direction. That's the moment to stop, step back, and ask: what would actually improve my confidence in this code?


