In my previous blog post, I shared my impressions of the Test Talks event along with highlights from the first two sessions: Process and Strategy & Organization. Today, I’ll cover the final part of the event: Technology & Tools.

Here we go.

The session started with an interesting question the moderator put to all participants: “Do we need to automate test scenarios?” As you can guess, almost everyone in the room responded with an unhesitating “Yes.” The moderator then posed a follow-up question: what portion of our manual test suites has been automated so far? A quick poll followed, and the findings showed that the vast majority of participants had automated only 1-10% of their manual test cases.

According to a report by ISTQB, one of the most reputable software testing organizations, test executives aim to increase automated test coverage to 40% in the short run, which underscores the importance of test automation.

The next topic was shaped around an essential question: “What are your most frequently faced challenges?” Participants’ answers clustered around two main topics:

  • Lack of skills (which drives up maintenance effort)
  • Lack of test data management and test automation process execution

There is a common misconception about test automation: most manual testing professionals feel confident about executing automated tests. However, they often lack the right vision along the way. Unlike manual testing, test automation engineering requires software development skills and/or a strong inclination to build competency in software architecture. It is worth keeping in mind that test automation is also a DevOps activity by its nature.



Test data management strategy, a core component of any test automation strategy, is crucial for establishing sustainable, optimized testing processes. Typically, close to 50% of the overall testing effort goes into test data generation and process management activities. This is where automated testing takes the stage and significantly reduces test data generation and integration effort. Adopting test automation best practices, such as isolating test data from test scripts and the codebase, pays off here.
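To make the isolation idea concrete, here is a minimal, hypothetical sketch of a data-driven test: the test data lives outside the script (inlined here as a string only so the example is self-contained), and the test logic never hard-codes values. The `check_login` function is an invented stand-in for a system under test.

```python
import csv
import io

# Hypothetical test data that would normally live in its own file
# (e.g. login_cases.csv), isolated from the test script and codebase.
TEST_DATA_CSV = """username,password,expected
alice,correct-horse,success
alice,wrong-pass,failure
,correct-horse,failure
"""

def check_login(username: str, password: str) -> str:
    """Stand-in for the system under test (assumed behavior)."""
    if username == "alice" and password == "correct-horse":
        return "success"
    return "failure"

def run_data_driven_tests(data_file) -> list:
    """Run one check per data row; adding cases means editing data, not code."""
    failures = []
    for row in csv.DictReader(data_file):
        actual = check_login(row["username"], row["password"])
        if actual != row["expected"]:
            failures.append(row)
    return failures

failures = run_data_driven_tests(io.StringIO(TEST_DATA_CSV))
```

Because the script only interprets rows, testers can extend coverage by appending data without touching (or re-reviewing) the automation code.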

Quick Tip: It’s important to optimize the number of test cases assigned to each team member (around 200-300) to ensure flawless and manageable test processes.

Besides, an event-based, keyword-driven approach should be adopted to overcome platform dependency in the mobile-specific domain.
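A rough sketch of the keyword-driven idea, with invented keyword names: the test case is written as platform-neutral keywords, and each mobile platform plugs in its own implementations, so the same scenario drives both.

```python
# Each platform supplies its own implementation of the shared keywords.
# The string outputs here simulate driver calls purely for illustration.
IOS_KEYWORDS = {
    "tap": lambda target: f"[iOS] tap {target}",
    "type": lambda target, text: f"[iOS] type '{text}' into {target}",
}
ANDROID_KEYWORDS = {
    "tap": lambda target: f"[Android] tap {target}",
    "type": lambda target, text: f"[Android] type '{text}' into {target}",
}

# The test case itself never mentions a platform.
LOGIN_TEST = [
    ("type", "username_field", "alice"),
    ("type", "password_field", "secret"),
    ("tap", "login_button"),
]

def run(test_steps, keywords):
    """Interpret each (keyword, *args) step against a platform's keyword table."""
    return [keywords[step[0]](*step[1:]) for step in test_steps]

ios_log = run(LOGIN_TEST, IOS_KEYWORDS)
android_log = run(LOGIN_TEST, ANDROID_KEYWORDS)
```

The design choice is that platform dependency is confined to the keyword tables; the scenarios themselves are reusable across iOS and Android.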

The best practices above are in line with the findings of Keytorc’s research on test automation: when these practices were adopted, test data effort and related errors decreased by 50%, data reuse increased by 50%, and invalid defects decreased by 20%.

Generating a variety of synthetic test data is a safe way to obtain well-rounded results; however, some companies turn to production data instead. Using production data is an established practice, but it comes with specific concerns. Strikingly, 70% of companies still use production data without privacy protection precautions. Masking techniques are needed to meet the requirements set by BTK, BDDK, and other regulatory authorities, as well as established information security policies.
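As an illustration of the kind of masking meant here, below is a small, hypothetical sketch of three common techniques: partial masking of names, last-four-digits masking of card numbers, and deterministic pseudonymization of identifiers (the salt and field names are invented for the example).

```python
import hashlib

def mask_name(name: str) -> str:
    """Keep the first letter, mask the rest."""
    return name[0] + "*" * (len(name) - 1) if name else name

def mask_card_number(pan: str) -> str:
    """Reveal only the last four digits, as card receipts commonly do."""
    return "*" * (len(pan) - 4) + pan[-4:]

def pseudonymize(value: str, salt: str = "test-env") -> str:
    """Deterministic pseudonym: the same input always maps to the same
    token, so referential integrity across tables is preserved."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

record = {"name": "Ayse", "card": "4111111111111111", "customer_id": "C-1001"}
masked = {
    "name": mask_name(record["name"]),
    "card": mask_card_number(record["card"]),
    "customer_id": pseudonymize(record["customer_id"]),
}
```

Deterministic pseudonymization is the key detail for test environments: joins between masked tables still work, while the original values never leave production.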

Let’s continue with other commonly encountered challenges in test automation processes. Here are the top two:

  • Low hardware configuration in testing environments
  • Lack of clarity in service level agreements (SLA)

Scalability matters when performance tests run in a testing environment with a lower hardware configuration than production, so SLA metrics should be determined well in advance. Unfortunately, 9 out of 10 companies do not set SLA metrics and load models before the process starts. Speaking of metrics, some useful ones are maximum concurrent users, average response time, and throughput.
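A tiny sketch of what “setting SLA metrics in advance” can look like in practice: agreed thresholds are written down as data, and test results are checked against them automatically. All threshold and measurement values here are invented for illustration.

```python
# Illustrative SLA thresholds agreed before testing starts (invented values).
SLA = {
    "max_concurrent_users": 5000,   # environment must sustain up to this
    "avg_response_time_ms": 800,    # responses must average under this
    "min_throughput_rps": 150,      # requests/second must stay above this
}

# Invented measurements from a hypothetical test run.
measured = {
    "max_concurrent_users": 4200,
    "avg_response_time_ms": 950,    # violates the 800 ms target
    "min_throughput_rps": 180,
}

def sla_violations(sla: dict, results: dict) -> list:
    """Return the names of SLA metrics the run failed to meet."""
    violations = []
    if results["max_concurrent_users"] > sla["max_concurrent_users"]:
        violations.append("max_concurrent_users")
    if results["avg_response_time_ms"] > sla["avg_response_time_ms"]:
        violations.append("avg_response_time_ms")
    if results["min_throughput_rps"] < sla["min_throughput_rps"]:
        violations.append("min_throughput_rps")
    return violations

violations = sla_violations(SLA, measured)
```

With thresholds captured as data up front, pass/fail becomes an objective check rather than a post-hoc negotiation.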



To get the most out of performance testing, teams need to plan the load model accurately in terms of users versus transactions. For instance, instead of creating ten thousand users at once and generating an individual transaction for each user, spinning up the ten thousand users with an appropriate ramp-up interval (e.g., 1 sec) and generating multiple transactions per user is a far more accurate and realistic load model. In the end, you will see a 200% improvement in overall app performance and a 20% decrease in performance defects in production, which is quite a desired outcome (Keytorc Research).
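The difference between the two load models can be sketched as start-time schedules, assuming the 1-second interval from the example (function names are invented for illustration):

```python
def spike_schedule(total_users: int) -> list:
    """All users start at t=0 — the unrealistic 'everyone at once' model."""
    return [0.0] * total_users

def ramp_up_schedule(total_users: int, interval_s: float = 1.0) -> list:
    """User i starts at i * interval_s, spreading arrivals out over time,
    which is closer to how real traffic builds up."""
    return [i * interval_s for i in range(total_users)]

spike = spike_schedule(10_000)
ramp = ramp_up_schedule(10_000, interval_s=1.0)
# With a 1 s interval, the last of 10,000 users starts at t = 9,999 s,
# whereas the spike model throws all 10,000 at the system at t = 0.
```

The point of the ramp-up model is that the system warms up (connection pools, caches, JIT) the way it would in production, so the measurements reflect realistic behavior rather than a thundering-herd artifact.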

The event ended with a draw for “TestIstanbul 2016” tickets and a closing ceremony with a photo shoot for the event’s page. The results gathered during the event will be published as part of the “Test Manager Insight Report 2015-2016.”
