
Guidelines on automated testing

Have you experienced too much automation effort and not much output? Tests breaking here and there despite your undying efforts?

Test automation is fantastic: it can save everyone a lot of time. But building a maintainable, stable pack is more important, because that is how the time actually gets saved. Everyone wants to run the pack, but very few can put in the hours to maintain it. Having a guide on how to build such a maintainable pack is essential, and that is what this document intends to cover: how to determine what is suitable for automation, and what and how to automate.

Your decision on what to automate should be a collaboration between you and your QA. Automation does not mean there won't be any manual testing left; there is no substitute for some manual testing, e.g. usability testing. Automation aids or cuts down the amount of manual testing that needs to be done and significantly increases your confidence in the product's functionality before moving on from development to the next stage, narrowing the feedback gap without involving more people.

  

Questions you should ask yourself: 

  • What is the likelihood of this feature changing later on? 
  • How much effort is involved in automating it? 
  • Can the complex parts be tested manually? 
  • What priority of bug can we expect from an issue in this feature? 

The likelihood of change reflects how experimental the feature is. You may not have the answer to this, but something that changes often has more bugs. Campaigns, for example, are short lived and exist purely to drive an increase in sales for a short period of time. On the other hand, your header and footer rarely change, and they are tested almost daily by your navigation of the site or someone else's. Do you need to spend much time automating those? 

Since manual testing isn't going away, it's important to leverage that effort and incorporate it into our decision making. I would suggest not automating features that are exercised daily anyway and are not very critical. 

What to cover? 

  • The core functionality: if something goes wrong with that particular path of execution, the end user won't be able to meet the objective of using the feature. 
  • The rest of the functionality in priority order. 
  • In cases where 90% of a test can be covered easily but covering the remaining 10% would take the same amount of time again, give it a good thought whether it's worth the effort. 
  • Deal with edge cases as you would with exceptions; let's not be automation addicts. Is it reasonable to automate? You may want to decide based on the priority of the bug you'd expect: automate P0s and P1s, don't automate P3s and lower, and choose case by case for P2s. 

  

How to cover? 

I have known test automation to be quite an art. Done wrong, it can be a real stress point. The following qualities are a must-have: 

  1. Resilience 
  2. Breaking tests 
  3. Trace your break 
  4. Elements by name rather than identifiers 
  5. Don’t validate content, but behaviour 
  6. Small scenarios 
  7. No re-validation 
  8. No CSS validation 
  9. Everyone gets it 

 

  1. Resilience: 
    1. In the world of behaviour testing (web apps), we need to target elements to interact with them. There are several ways to target them, and the choice determines how resilient the test pack is. You can use the following (the sketch after this item shows all four side by side): 
      1. XPath – most specific, and breaks very often 
      2. CSS classes – least specific 
      3. CSS ids – specific, and don't change much 
      4. Named locators – content driven 
    2. Your pack will be most resilient if it uses an identifier that doesn't change when the page changes and serves no purpose other than identifying your element. For this, the 'id' property is the best choice because: 
      1. Unlike XPath, it doesn't change when the surrounding page changes. 
      2. It isn't used for css styling much any longer. 
      3. It isn't necessarily content based. A button can have id="primary" but display "Save". The text may later be changed to "Update", but the id is much less likely to change, so your tests will not be affected. 
    3. In cases where you have to test a list of items, classes are better suited, as you can find all the relevant items much more easily and assert them likewise. 
    4. Avoid wait/sleep calls at all costs; use the spin method where needed (a minimal sketch of such a helper follows this item). A wait call assumes that in x seconds an action will have finished and you can progress to the next step. This is problematic in two ways: 
      1. Speed 
        1. Speed is a major factor in any automation pack, so you should take an action as soon as possible. For this reason, use the spin() method provided. It will try to execute the action every second for a number of seconds, and if it never succeeds it will time out. 
      2. Flakiness 
        1. A sleep assumes the test will be ready to proceed in x seconds, but that varies from device to device depending on performance, giving you flakiness. You will find yourself constantly trimming the time, and what works on your machine may not work for your co-workers. Use the spin method instead with, say, 5 seconds: while that seems a lot, if the action can execute after 1 second, or even instantly, it will. It adapts on its own. 
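
To make those locator trade-offs concrete, below is a minimal sketch of the same "Save" button targeted all four ways, assuming a Playwright-style `page.locator` API; the selectors themselves are hypothetical examples, not selectors from a real page.

```typescript
import type { Page } from '@playwright/test';

// The same hypothetical "Save" button targeted four ways,
// from most brittle to most resilient. Pick one in practice.
async function clickSave(page: Page) {
  // 1. XPath: breaks as soon as the surrounding markup changes.
  await page.locator('//div[2]/form/div[3]/button[1]').click();
  // 2. CSS class: shared with styling, so a restyle can break the test.
  await page.locator('.form__actions .button--primary').click();
  // 3. id: stable, and no longer doing double duty for styling.
  await page.locator('#primary').click();
  // 4. Named/content locator: breaks when the copy changes ("Save" -> "Update").
  await page.locator('button', { hasText: 'Save' }).click();
}
```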
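And since the spin() method is described as provided by the pack, here is one possible shape for it: a minimal sketch using nothing beyond standard Promises. Your pack's actual implementation may differ.

```typescript
// Retry an action every second until it succeeds or the budget runs out,
// instead of sleeping for a fixed time. Succeeds as early as the action does.
async function spin<T>(action: () => Promise<T>, timeoutSeconds = 5): Promise<T> {
  const deadline = Date.now() + timeoutSeconds * 1000;
  let lastError: unknown;
  for (;;) {
    try {
      return await action(); // take the action as soon as it is possible
    } catch (err) {
      lastError = err;
      if (Date.now() >= deadline) break; // budget exhausted: time out
      await new Promise((resolve) => setTimeout(resolve, 1000)); // retry in 1s
    }
  }
  throw lastError; // surface the last failure for tracing
}

// Usage: retries for up to 5 seconds, but finishes the moment the click lands.
// await spin(() => page.locator('#apply-button').click(), 5);
```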
  2. Breaking tests: 
    1. Your test pack doesn't serve much purpose if it doesn't break when a feature changes its behaviour. That kind of break is good news: it means you've covered the feature well. 
  3. Trace your break: 
    1. A behaviour test pack looks at the highest level of output, i.e. the UI or service layer. Unlike a unit test, it doesn't care about what code is executed. For this reason, a break in this kind of pack takes more time and effort to trace. To help yourself later on, throw errors wherever you can validate your actions. It's very easy for a test to be signalled as broken on one page when the issue is actually on the previous one; the problem in such a case is missing validation on the previous page. But be careful: over-assertion isn't great either. Build these validations implicitly into your step definitions, as in the sketch after this item. 
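
As a hedged illustration, here is what such an implicit validation might look like in a step definition, assuming Cucumber.js with a Playwright page attached to the World; the selector and URL pattern are hypothetical.

```typescript
import { When } from '@cucumber/cucumber';
import type { Page } from '@playwright/test';

// Hypothetical World shape: assumes a Playwright Page is wired up elsewhere.
interface CustomWorld {
  page: Page;
}

// The step validates its own outcome, so a failure is reported here rather
// than surfacing as a confusing break on a later page.
When('I click on search', async function (this: CustomWorld) {
  await this.page.locator('#search-button').click();
  try {
    await this.page.waitForURL(/\/search-results/, { timeout: 5000 });
  } catch {
    throw new Error(
      `Clicked search but never reached the results page (still on ${this.page.url()})`
    );
  }
});
```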
  4. Elements by name rather than identifiers: (feature files only) 
    1. No one likes 'Given I click on ".homepage__buttons .list__item .item .button--apply"'. Instead, work with ids. We're using kebab case as a convention, so naming your ids sensibly will lead to great things here. You can have a step definition that converts your words into kebab case and finds the right element to click. For example, 'Given I click on "apply button"' would be converted to #apply-button (see the sketch after this item). 
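
A minimal sketch of that step definition, under the same assumed Cucumber.js/Playwright setup; the toKebabCase helper is illustrative rather than an existing utility.

```typescript
import { When } from '@cucumber/cucumber';
import type { Page } from '@playwright/test';

// Hypothetical World shape: assumes a Playwright Page is wired up elsewhere.
interface CustomWorld {
  page: Page;
}

// Convert human-readable words into a kebab-case id: "apply button" -> "apply-button".
function toKebabCase(words: string): string {
  return words.trim().toLowerCase().replace(/\s+/g, '-');
}

// 'Given I click on "apply button"' clicks the element with id="apply-button".
When('I click on {string}', async function (this: CustomWorld, name: string) {
  await this.page.locator(`#${toKebabCase(name)}`).click();
});
```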
  5. Don’t validate content, but behaviour: 
    1. Static content changes often, and that has no impact on the behaviour of the application. A typo in the content usually isn't a major issue: end users can still get through to what they want to do. If you like, cover the presence of the content instead. Content validation makes your tests brittle; sometimes content is changed unexpectedly by external teams, and that is the last thing you want breaking your tests. Content that changes as part of the behaviour of the application is worth testing, e.g. you've sent an email to someone and you display a confirmation of when it will be actioned, plus the email itself. But for messages and generic content, cover their existence only (a sketch follows this item). 
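
A small sketch of the difference, assuming Playwright's assertion API; the locator and copy are hypothetical.

```typescript
import { expect } from '@playwright/test';
import type { Page } from '@playwright/test';

// Cover the existence of generic content, not its exact wording.
async function checkConfirmation(page: Page) {
  // Good: the message is present, whatever the copy says.
  await expect(page.locator('#confirmation-message')).toBeVisible();

  // Brittle: breaks the moment an external team rewords the copy.
  // await expect(page.locator('#confirmation-message'))
  //   .toHaveText('Your email has been sent.');
}
```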
  6. Small scenarios: 
    1. We often have an urge to perform end-to-end tests. In test automation, this isn't great. You'll love it when it's all green, but what if it goes red? Say a scenario has 20 steps and something goes wrong at step 5: nothing after that gets tested, creating uncertainty about how big the issue really is. Your scenarios should be chunked so the feature is covered behaviour by behaviour. The preferred shape is a single Given-When-Then sequence; you may have many "ands" in between, but: given one state, when the following actions are taken, then the following behaviour is produced. This will help you drastically in maintaining the pack, and it keeps confidence high even when certain scenarios fail. 
  7. No re-validation: 
    1. When you implement the above (small scenarios), you'll find yourself repeating certain steps to get into position to assert the next thing. The point is this: do not re-assert your previously made assertions. That creates a dependency on the previous feature working correctly. Your Givens are a means to get into a particular state, not to validate the scenario at hand. If the same validation is repeated everywhere, expect all your tests to fail when that single feature breaks. In fact, bypass that feature completely if you can. For example: 
      1. Scenario: When I search from the home page 
        1. Given I am on the home page 
        2. When I click on search 
        3. Then I should be on the search results page 
      2. Scenario: When I am displayed the sailings on the search results page. 
        1. Given I am on the search results page 
        2. Then I should see sailings with the following set of data:… 
      3. In the above example, the 2nd scenario has to validate sailings on the search results page. To get to that page, you can either repeat the steps in scenario 1, or bypass them entirely by constructing the correct url in the background, breaking the dependency between the two. Do the latter where possible (a sketch follows this item). 
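
A hedged sketch of that bypass as a step definition, again under the assumed Cucumber.js/Playwright setup; the path and query parameters are made up, and `page.goto` with a relative path relies on a configured baseURL.

```typescript
import { Given } from '@cucumber/cucumber';
import type { Page } from '@playwright/test';

// Hypothetical World shape: assumes a Playwright Page is wired up elsewhere.
interface CustomWorld {
  page: Page;
}

// Jump straight to the search results page instead of re-driving the search UI,
// breaking the dependency on the search feature. Path and params are hypothetical.
Given('I am on the search results page', async function (this: CustomWorld) {
  await this.page.goto('/search-results?from=SOU&to=LEH'); // relies on baseURL
});
```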
  8. No CSS validation: 
    1. You may be using the UI layer to perform the test, but you are not deliberately testing the UI. The UI is a means to test the behaviour of the application; some of it consequently gets tested. Mixing the two gives you very brittle tests, so it's best to keep them separate. If you have a library of UI components, introduce an automated testing layer in that pack instead. 
  9. Everyone gets it: 
    1. Your feature files should be understandable by anyone. If someone new joins the team, you should be able to hand these feature files over and they should be able to get up to speed from them. If you have a story that amends the behaviour of an existing feature, your PO should be able to take the existing scenario and amend it to show you exactly what needs changing. This has been done in practice and is definitely a great quality for a test pack to have. 

Your test pack is only as good as the quality of the tests you put into it. By incorporating the above points, you should come out with a very stable and maintainable pack. 
