How to improve performance of your Behat test suites

There are a few different tools in your arsenal when it comes to increasing the performance of your test suite. But instead of trying things at random, it is best to measure where you stand first so you have something to compare against afterwards. This is also a great way to find out where your tests are slow and where to improve.

Stats logger:

Go ahead and install the genesis/behat-stats-logger extension.

composer require --dev genesis/behat-stats-logger

Configure tracking/logging metrics in the behat.yml file:

default:
    suites:
        default:
            contexts:
                - Genesis\Stats\Context\StatsLoggerContext:
                    filePath: test/report/
                    printToScreen: true
                    topReport:
                        count: 5
                        sortBy: maxTime
                    suiteReport:
                        step: true
                    highlight:
                        scenario:
                            red: 7
                            yellow: 3
                        step:
                            red: 3
                            yellow: 2
                            brown: 1
                        suite:
                            red: 80
                            brown: 70
                            yellow: 50

You’ve got some powerful tools here; configure them as you see fit. The extension prints its results to the screen as soon as you run the test suite again, ready for you to analyse. Once you know where the slow bits are, improvements become easy. If you’re running in CI it will also produce JSON reports for you, written to the location configured by the filePath option. The highlight section holds thresholds based on time, i.e. in seconds.

Exploiting test seams:

When writing scenarios, many developers bypass one of the fundamentals of test automation: making good use of seams, i.e. an opening in the application that lets the test drop onto the test subject as quickly as possible. If the user journey allows it, your test should skip any steps that are not strictly part of the test itself. Dropping onto the test subject as quickly and directly as possible gives you the best performance (no unnecessary steps) and the best stability (no unnecessary assertions that could fail). Here is a scenario:

Scenario: The history shows values correctly
  Given I am on the login page
  And I fill in my username and password
  And I submit the form
  Then I should be on the dashboard page
  When I follow the History link
  Then I should be on the History page
  And I should see the correct history

The above scenario is quite problematic.

  • It has more than one Given/When/Then cycle.
  • It makes more assertions than the scenario outlines, i.e. it has many more failure points.
  • It is slow because it executes unnecessary steps.
  • It is harder to maintain because it is repetitive.
  • It is unclear which part is the test subject and which parts are just preconditions.

While logging in may be a prerequisite of the test, there is a seam that can be used to drop onto the page as soon as possible. The above should be rewritten as:

Scenario: The history shows values correctly
  Given I am logged in # Logs the user in by whatever mechanism; it may create a session directly on the server without visiting the login page at all (seam) - best outcome.
  When I am on the History page # Visit the page directly, the link on the dashboard page is a separate test case (seam).
  Then I should see the correct history # Do the assertion as normal.

Simple, concise and clear. Winner.
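To make the "I am logged in" seam concrete, here is a minimal sketch. Behat contexts are written in PHP, but the pattern is language-agnostic, so the sketch below uses Python; FakeApp, FakeBrowser and the SESSID cookie name are stand-ins invented purely for illustration, not part of Behat or Mink:

```python
# Sketch of a login seam: instead of driving the login form through the
# browser, mint a session server-side and hand the browser the cookie.
# Everything here is a stand-in for your real application and driver.

class FakeApp:
    """Stand-in for the application under test."""
    def __init__(self):
        self.sessions = {}

    def create_session(self, username):
        # The seam: create a valid session without touching the UI at all.
        token = f"session-for-{username}"
        self.sessions[token] = username
        return token

class FakeBrowser:
    """Stand-in for the browser driver (e.g. Mink in a Behat context)."""
    def __init__(self):
        self.cookies = {}

    def set_cookie(self, name, value):
        self.cookies[name] = value

def i_am_logged_in(app, browser, username):
    """Step implementation: log in via the seam, skipping the login page."""
    token = app.create_session(username)
    browser.set_cookie("SESSID", token)

app = FakeApp()
browser = FakeBrowser()
i_am_logged_in(app, browser, "charlie")
print(browser.cookies["SESSID"])  # -> session-for-charlie
```

The test never loads the login page, so it is both faster and immune to login-form regressions, which belong in their own scenario.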

Wait vs spin:

Do not use static/fixed wait calls – these are brittle and time-consuming. If a transition completes in 2 seconds but the test waits 5 seconds because the application performs slowly on Charlie’s machine, the test is now 3 seconds slower than it needs to be. Similarly, if the transition takes 5.5 seconds, the test will fail. Introduce flexible, adaptable wait times instead, so your scenarios execute with consistency and stability even in slow environments. Use a spin, which periodically re-runs the check (once every fraction of a second) until it succeeds or the threshold is met. In the above example we could set the threshold to 10 seconds: the test will work in both fast and slow environments, and will perform best on each because it is adaptable. Here is an article on its implementation: http://inevitabletech.uk/blog/testing-js-apps-with-behat/
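As a rough illustration of the spin pattern (sketched in Python rather than PHP; the `spin` helper and the simulated page check are invented for this example, not part of any library):

```python
import time

def spin(assertion, timeout=10.0, interval=0.25):
    """Repeatedly run `assertion` until it stops raising or `timeout` elapses.

    `assertion` is any zero-argument callable that raises AssertionError
    on failure (e.g. a page-content check). On success its return value
    is passed through; on timeout the last failure is re-raised.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            return assertion()
        except AssertionError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

# Simulate a page transition that takes ~0.5s to complete.
start = time.monotonic()
ready_at = start + 0.5

def page_has_loaded():
    assert time.monotonic() >= ready_at, "page not ready yet"
    return "loaded"

# The spin returns as soon as the check passes, instead of always
# sleeping for a fixed 5 seconds (and instead of failing outright
# on a machine where the transition takes longer than the fixed wait).
result = spin(page_has_loaded, timeout=10.0, interval=0.1)
elapsed = time.monotonic() - start
print(result)         # -> loaded
print(elapsed < 5.0)  # -> True
```

The same test adapts to both fast and slow environments: the 10-second threshold is only an upper bound, not the actual cost of every run.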
