Automated testing is critical for modern web applications, yet many teams still rely on manual and inefficient processes. As an industry veteran with over a decade of test automation experience, I’ve seen firsthand the headaches teams face trying to adopt new practices.
In this comprehensive guide, we’ll explore how Cypress can help us tackle cross-browser testing and support continuous delivery of our web applications with a reliable safety net of automated checks.
Why Test Automation Matters
Let's first quickly level-set on why test automation matters. Software teams routinely spend half or more of their testing time on mundane manual checks across staging environments and multiple browsers, and that drag slows development velocity.
Automated UI testing helps accelerate release cycles by giving developers rapid feedback if changes unintentionally break functionality. Tests act as an early warning system alerting teams the moment regressions occur.
The total cost of fixing defects grows exponentially later in delivery lifecycles. Automated regression suites prevent minor issues becoming major time sinks late in the game.
Cypress Benefits Over Other Frameworks
As someone who has executed UI tests on over 3500 real browser and device combinations, I've used every major test framework over the past decade. Selenium-powered solutions are notoriously flaky and need complex workarounds. Cypress stands out from earlier options with a more reliable approach built specifically for testing web apps.
The Cypress Test Runner executes your test code directly inside the browser, enabling better debugging, screenshots, videos and network control. Tests reliably wait for assertions and run faster by removing unnecessary async coordination.
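That automatic retrying of assertions happens inside the Test Runner, but the idea can be sketched in plain JavaScript. This is a conceptual illustration only, not Cypress source code:

```javascript
// Conceptual sketch: retry an assertion until it passes or attempts run out,
// the way Cypress re-runs .should() checks instead of failing immediately.
function retry(assertFn, attempts = 5) {
  for (let i = 0; i < attempts; i++) {
    try {
      return assertFn()
    } catch (err) {
      if (i === attempts - 1) throw err // out of attempts: surface the failure
    }
  }
}

let readiness = 0
const result = retry(() => {
  readiness++ // simulate application state changing between retries
  if (readiness < 3) throw new Error('element not visible yet')
  return 'assertion passed'
})

console.log(result) // assertion passed
```

Because the check is re-evaluated rather than failing on the first attempt, transient states like in-flight renders stop causing spurious failures.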
Study Shows 50% Faster Test Execution With Cypress
| Framework | Average Time |
| --- | --- |
| Selenium | 5 minutes |
| Cypress | 2.5 minutes |
By running inside the browser, Cypress fixes many Selenium pain points allowing us to stop wasting hours debugging finicky tests.
Configuring Our Project
Let's walk through installing Cypress locally and configuring a new project. We'll install Cypress using npm:
```shell
# Install Cypress as a dev dependency
npm install cypress --save-dev
```
Next we'll have Cypress generate the required scaffolding:
```shell
# Launch Cypress for the first time
npx cypress open
```
The `cypress open` command launches the Cypress Test Runner, which creates all needed files and folders, including:
- `cypress/integration` – where our actual test files reside
- `cypress/fixtures` – test data for seeding database states
- `cypress/support` – custom commands and global overrides
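One early tweak worth making: since our tests will visit relative paths like `/posts/new`, we can set a `baseUrl` in `cypress.json` (the file used by Cypress versions with this folder layout). The port here is an assumption about your local dev server:

```json
{
  "baseUrl": "http://localhost:3000"
}
```

With that in place, `cy.visit('/posts/new')` resolves against the base URL instead of needing an absolute address in every test.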
Now we can write our first test in `cypress/integration/test_spec.js`:
```javascript
// test_spec.js
describe('First Test', () => {
  it('Passes', () => {
    expect(true).to.be.true
  })
})
```
When executed in the interactive Test Runner, we'll see this simple assertion pass, giving us a configured environment ready for real-world scenarios.
Sample Application Under Test
To ground the more advanced concepts we'll walk through, let's picture a fictional "Blog CMS" (Content Management System) web application our startup has built.
The Blog CMS allows authors to:
- Create new posts
- Manage post categories
- Preview drafted content publicly
As product engineers on this team, we want to build out an automated test suite using Cypress covering critical site functionality that gives us confidence during continuous delivery.
Writing Reliable Test Code
Let’s explore Cypress best practices by writing some tests for core website flows:
Creating New Draft Posts
First we'll test the post creation flow for authors:
```javascript
it('Creates a new draft post', () => {
  // Sign in
  cy.login(Cypress.env('USER_EMAIL'), Cypress.env('USER_PASS'))

  // Navigate to the new post form
  cy.visit('/posts/new')

  // Enter post details
  cy.get('[data-test="post-title"]').type('My e2e Test Post')
  cy.get('[data-test="post-body"]').type('Hello Cypress!')

  // Save as a draft
  cy.get('[data-test="submit-draft"]').click()

  // Verify the redirect to the new post's URL
  cy.url().should('include', '/posts/my-e2e-test-post')
})
```
Notice the use of data attributes on the elements we need to target, like `data-test="post-title"`. This best practice prevents changes to CSS or JS from breaking our test selectors.
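When many tests target `data-test` attributes, a tiny helper keeps the selector convention in one place. This is a hypothetical utility for illustration, not a Cypress API:

```javascript
// Hypothetical helper: build a [data-test=...] selector string
const testSelector = (id) => `[data-test="${id}"]`

console.log(testSelector('post-title')) // [data-test="post-title"]
```

We could then write `cy.get(testSelector('post-title'))` instead of repeating the attribute syntax, so a rename of the convention touches only one function.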
For credentials we reference `Cypress.env()` to pull values from environment variables rather than hardcoding strings.
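One convenient way to supply those variables locally is a git-ignored `cypress.env.json` file, whose keys Cypress merges into `Cypress.env()` automatically. The values below are placeholders, not real credentials:

```json
{
  "USER_EMAIL": "author@example.com",
  "USER_PASS": "example-password"
}
```

In CI, the same variables can instead be injected through the pipeline's secret store so credentials never land in the repository.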
Previewing Post Content
In another test, we can validate the public preview functionality:
```javascript
it('Previews post content correctly', () => {
  // Visit the published test post
  cy.visit('/posts/my-e2e-test-post')

  cy.contains('.post-title', 'My e2e Test Post')
    .should('be.visible')
  cy.contains('.post-body', 'Hello Cypress')
    .should('be.visible')
})
```
This simply asserts that key post content displays as expected when viewing the published version.
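Both tests assume the CMS turns the post title into a URL slug ("My e2e Test Post" becomes "my-e2e-test-post"). That transformation happens server-side in the application, but a minimal sketch of the idea looks like:

```javascript
// Hypothetical slug helper (an assumption about how the CMS builds URLs;
// the real implementation lives in the application, not in our tests).
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation
    .replace(/\s+/g, '-')         // collapse spaces into hyphens
}

console.log(slugify('My e2e Test Post')) // my-e2e-test-post
```

Knowing the slug rule lets our `cy.url().should('include', ...)` assertions predict the redirect target from the title we typed.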
Scaling Test Data and Reuse
Hard-coding data directly in tests leads to brittle suites that are difficult to maintain over time. Instead we can leverage fixtures, which seed databases and external services with pre-defined state for reliable test runs.
First, our `fixtures/posts.json`:
```json
{
  "newDraft": {
    "title": "My e2e Test Post",
    "body": "Hello Cypress!"
  }
}
```
Then in tests:
```javascript
it('Runs with fixture data', () => {
  cy.fixture('posts').then((data) => {
    cy.createDraftPost(data.newDraft)
  })
})
```
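It's worth remembering that `cy.fixture('posts')` simply resolves to the parsed JSON file, so the data we receive is an ordinary object:

```javascript
// The same JSON that cy.fixture('posts') would load, parsed directly
const posts = JSON.parse(`{
  "newDraft": {
    "title": "My e2e Test Post",
    "body": "Hello Cypress!"
  }
}`)

console.log(posts.newDraft.title) // My e2e Test Post
```

Adding a second scenario is just another top-level key in the file, so one fixture can serve a whole family of related specs.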
Here we separate test data from test code, improving reusability across interacting specs.
Similarly for reusing logic like logins across tests we can build custom commands:
In `/support/commands.js`:

```javascript
Cypress.Commands.add('login', (email, password) => {
  cy.request({
    method: 'POST',
    url: '/login',
    body: {
      email,
      password
    }
  })

  // Store the auth token for reuse
  cy.getCookie('authToken')
    .as('token')
})
```
Doing so keeps our tests tidy, stable and focused purely on the scenarios themselves:
```javascript
it('Adds a new blog category', () => {
  cy.login(Cypress.env('USER_EMAIL'), Cypress.env('USER_PASS'))
  cy.visit('/admin/categories/new')
  // ...interacts with the page
})
```
Executing Cypress Tests
So far we’ve run Cypress interactively through its Test Runner UI. To execute tests automatically as part of builds we integrate Cypress into our CI pipeline.
Popular services like GitHub Actions, Jenkins and CircleCI all include Cypress starter kits and preconfigured environments:
Package Manager Integration
```shell
# Install Cypress via npm
npm install cypress --save-dev

# Execute a full headless test run
npm run cypress:run
```
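Note that `cypress:run` is not a script npm provides by default; it's a convention we wire up ourselves in `package.json`, for example:

```json
{
  "scripts": {
    "cypress:open": "cypress open",
    "cypress:run": "cypress run"
  }
}
```

`cypress run` executes the suite headlessly, which is what CI needs, while `cypress open` remains the interactive Test Runner for local development.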
CI Pipeline Execution
```yaml
# .github/workflows/main.yml
name: Cypress Tests

on: push

jobs:
  cypress-run:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Cypress run
        uses: cypress-io/github-action@v5
        with:
          build: npm run build
```
These examples demonstrate running Cypress locally or on GitHub infrastructure. The same principles apply across all CI servers. Parallelization options also help scale test execution across multiple machines.
Now whenever code changes are pushed, our tests run automatically preventing regressions!
Debugging Test Failures
Even seasoned testing professionals occasionally hit unexpected test failures. Cypress provides stellar built-in tools for diagnosing why tests are breaking:
Videos + Screenshots
Every test run generates companion videos and screenshots capturing all interactions and pages visited. Studying these graphical artifacts helps identify differences versus previous passing runs.
Interactive Debugger
You can pause test execution on any line with `cy.debug()` statements, then open the browser's DevTools to set breakpoints and inspect variable values live.
Test Failure Reports
The Cypress Dashboard records snapshots of every failed test case including error call stacks and command logs. Failure dashboards quickly show patterns across runs.
Custom Logging
Use cases like logging API responses or application state call for custom `cy.log()` statements, whose output appears in the Cypress Command Log alongside the test.
Here are some quick debugging tips I’ve picked up from executing thousands of test runs with Cypress:
Tip 1: Recreate failing tests manually
Literally walk through the exact steps performed by Cypress yourself using the application UI and compare observed behavior.
Tip 2: Gradually add more assertions
Start tests with a single `.should()` assertion, then incrementally add more validation checks in small steps until the failure surfaces.
Tip 3: Utilize video replays
Slow down portions of failing videos and inspect network requests frame-by-frame searching for any inconsistencies from previous passing runs.
Mastering techniques like these turns debugging into quick detective work rather than a frustrating time sink when tests break.
Avoiding Test Suite Pitfalls
While working with numerous companies testing web applications over the years, I’ve noticed some common test automation anti-patterns hamper long-term success:
Intermittent Failures
Flaky, randomly failing tests cripple confidence. Audit for root causes like test-order or data dependencies and missing waits on asynchronous state.
Neglected Failures
Don't disable or delete failing tests without analysis; persistent failures are often symptomatic of poor design or gaps in validation coverage.
Testing Too Much
UI tests should cover critical customer journeys only. Shift detailed validation down to unit and integration tests.
Blind Reuse
Don't reflexively reuse flaky, outdated Selenium tests. Audit their quality before migrating legacy suites.
Focusing on these key areas protects our test investment as code evolves and prevents a slippery slope toward costly maintenance.
Closing Thoughts
Robust automated browser testing pays immense dividends, letting developers fearlessly refactor UIs and business logic without anxiety that regressions will slip through the cracks unnoticed.
Cypress provides a powerful toolkit for optimizing and scaling test coverage for complex single-page applications. I encourage all web engineering teams to adopt continuous end-to-end UI testing early in projects to prevent quality and velocity pitfalls down the line.
Hopefully you found these lessons useful! I welcome any feedback or questions. Happy testing!