Sunday 17 December 2023

Traffic Parrot 5.43.2 released, what's new?

We have just released version 5.43.2. Here is a list of the changes that came with the release:


  • Added support for editing gRPC response delays in the UI
  • Added support for editing HTTP/gRPC response delays with uniform min/max distribution in the UI
  • Added support for connecting to IBM® MQ with an unspecified or null username/password


  • Library upgrades to fix OWASP issues
  • Shutdown now times out if the tray icon takes too long to close


  • Upgraded protoc from 3.24.4 to 3.25.1
  • Upgraded gRPC from 1.58.0 to 1.60.0
  • Upgraded bundled JRE from 8u382 to 8u392
  • Increased shutdown logging verbosity
  • Windows executable metadata was updated
  • The default virtual service CA certificate was renewed
  • The Amazon SQS HTTP plugin has been retired; please contact us if you need extended support

Wednesday 29 November 2023

Extension to OpenAPI specs that documents which responses to return when requests are invalid.

We have added a specification extension to OpenAPI specs that documents which responses to return when requests are invalid.

It allows your API consumers to understand in more detail what validation responses look like just by looking at the OpenAPI spec. It is handy for external APIs where talking directly to support might take time.

It also allows Traffic Parrot to generate API mocks that mimic those validation rules on the fly from just the OpenAPI spec, without any need for coding. That means API consumers have working mocks ready in milliseconds!

For example, to validate the entire request and return a specific response object, you can add to your OpenAPI example:

- type: schema
  message: ${in} parameter ${name} has invalid value ${value} because ${reason}

We are adding more sophisticated rules to allow embedding even more details in the OpenAPI spec.

Comment below "Keep me posted!" if you want to hear more when ready!

Wednesday 18 October 2023

Traffic Parrot 5.42.2 released, what's new?

We have just released version 5.42.2. Here is a list of the changes that came with the release:


  • Added support for casting to a boolean value
  • Added request matching and response templating examples to check for presence of an optional JSON field:
    {{#if (jsonPath request.body '$.[?(@.field)]') }}field is present{{/if}}
    {{#if (not (jsonPath request.body '$.[?(@.field)]') ) }}field is not present{{/if}}
    {{#if (jsonPath request.body '$.[?(!(@.field))]') }}field is not present{{/if}}
    {{#unless (jsonPath request.body '$.[?(@.field)]') }}field is not present{{/unless}}
  • Added new Couchbase configuration options
    • Added support for executing a Couchbase warmup query on startup, which can improve the performance of subsequent queries
    • Added support for setting Couchbase IO options enableDnsSrv and networkResolution
    • Enable in database-connections.json by setting:
          {
              "connectionId": "couchbase.db",
              "type": "COUCHBASE_CONNECTION",
              "connectionString": "couchbase://localhost:32784",
              "username": "Administrator",
              "password": "password",
              "warmupQuery": "SELECT COUNT(*) FROM bucket_a UNION SELECT COUNT(*) FROM bucket_b",
              "enableDnsSrv": true,
              "networkResolution": "auto"
          }
  • Added support for OpenAPI request schema validation annotations (beta)
    • Enable by setting the validation mode property to one of the following values:
      # OFF will turn off validation
      # DEFAULT_VALIDATION will turn on validation and provide default responses when not annotated
      # ONLY_ANNOTATED will turn on validation only for annotated specifications
    • Place OpenAPI specifications in the openapi configuration directory
    • Requests that match the OpenAPI URL will be validated according to the schema of the parameters
    • By default, Traffic Parrot will return an HTTP 400 response with a human-readable plain text validation message of the first validation error encountered
    • You can customize which fields to validate and the error response by annotating the OpenAPI YAML file with:
              # standard OpenAPI parameters with schema definitions
                      # standard OpenAPI response schema definition
                x-traffic-parrot-validation: # array of validations that trigger this response code
    • To validate the entire request and return a plain text validation message:
      - type: schema
        response: ${in} parameter ${name} has invalid value ${value} because ${reason}
    • To validate the entire request and return a specific response object:
      - type: schema
          code: INVALID_REQUEST
          message: ${in} parameter ${name} has invalid value ${value} because ${reason}
    • To validate particular query/path/header parameters and return a specific response object:
      - type: schema
        - name: style
          in: query
        - name: limit
          in: query
        - name: id
          in: path
        - name: X-Request-Label
          in: header
          message: ${in} parameter ${name} has invalid value ${value} because ${reason}
    • To validate the request body:
      - type: schema
        - name: '*'
          in: requestBody
          code: REQUEST_BODY_INVALID
          message: Request body has invalid value ${value} because ${reason}
    • To perform a specific validation and fall back on a catch-all validation:
      - type: schema
        - name: style
          in: query
          code: INVALID_QUERY
          message: query parameter ${name} has invalid value ${value} because ${reason}
      - type: schema
          code: INVALID_REQUEST
          message: ${in} parameter ${name} has invalid value ${value} because ${reason}
    • Multiple response codes can be annotated, but only one response code per OpenAPI path should have a catch-all validation
    • The template parameters ${name} and ${value} will be populated with the name and value of the field that failed validation
    • The template parameter ${in} will be populated with query/path/header/requestBody
    • The template parameter ${reason} will be populated with a human-readable description of why the validation failed
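Putting the pieces together, a minimal spec using the annotation might look like the following sketch. The path, parameter and field names here are illustrative assumptions, not taken from the release:

```yaml
openapi: 3.0.0
info:
  title: Example API          # hypothetical spec for illustration only
  version: 1.0.0
paths:
  /items/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: OK
        '400':
          description: Validation error
          x-traffic-parrot-validation:   # validations that trigger this 400
            - type: schema
              message: ${in} parameter ${name} has invalid value ${value} because ${reason}
```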


  • Library upgrades to fix OWASP issues
  • Upgrade script now migrates any previous logback configuration
  • Fixed an issue where {{ dataSource }} errors would shadow all other context variables
  • Improved internal error reporting when an expected Handlebars variable is not found or of the wrong type
  • Validate that mapping queue name is not empty when saving a mapping and when starting replay mode
  • Improved performance when using custom request matchers for requests that do not match the standard matcher checks


  • Upgraded WireMock from 2.35.0 to 2.35.1
  • Upgraded protoc from 3.23.4 to 3.24.4
  • Upgraded gRPC from 1.56.1 to 1.58.0
  • Upgraded bundled JRE from 8u372 to 8u382

Saturday 26 August 2023

Traffic Parrot 5.41.7 released, what's new?

We have just released version 5.41.7. Here is a list of the changes that came with the release:


  • Fixed support for modifying more than one header in a Handlebars script:
    {{ modifyResponse 'headerValue' 'header-name-1' 'header-value-1' }}
    {{ modifyResponse 'headerValue' 'header-name-2' 'header-value-2' }}
  • Improved system tray OS compatibility
  • Fixed a bug in the script impacting Apple ARM based M1/M2 processors
  • The UI log file viewer now supports viewing .gz and .zip log files


  • Logback can now be specified as a logging provider in jvm.args
  • For organizations that have parted ways with all versions of Log4j, we now offer a distribution with those JAR files removed entirely

Thursday 10 August 2023

Traffic Parrot 5.41.0 released, what's new?

We have just released version 5.41.0. Here is a list of the changes that came with the release:


  • Random mixed case strings of a fixed length can now be generated using
    {{ randomValue length=10 type='ALPHANUMERIC' mixedcase=true }}
  • Header values can now be set in Handlebars scripts using
    {{ modifyResponse 'headerValue' 'header-name' 'header-value' }}
  • New simplified OpenAPI mock import
    • Enable by setting the corresponding configuration property
    • When OpenAPI response examples are present, they are used directly as mock responses
    • Otherwise, OpenAPI schema data types and field structures are used to generate a valid mock response
    • The request header x-traffic-parrot-select-response-status can be set to a numeric response code to select which response to return
    • The default response returned is the success response
    • The request body is not included in matching, to simplify the response selection
    • The request URL is used for matching, including checking for mandatory path and query parameters
  • Role-based UI access, allowing the definition of read-only UI users that have view permissions but not edit permissions
    • Enable by setting the corresponding configuration property
    • Define users via the corresponding configuration property
    • HTTP basic authentication popup prompts users to specify a username and password
  • Allow loading existing gRPC proto set files in proto-bin with a .set.bin extension (e.g. as output by gRPCurl)
  • Added a UI page with instructions on how to provide the required JAR files when working with IBM® MQ
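As an illustration of the x-traffic-parrot-select-response-status header described above, a consumer could ask a mock generated from the spec for its 404 response like this (the /pets/{id} path is a hypothetical example, not from the release):

```http
GET /pets/1 HTTP/1.1
Host: localhost:8080
x-traffic-parrot-select-response-status: 404
```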


  • Library upgrades to fix OWASP issues
  • {{ evaluate 'variable' }} now recognizes a variable that was set using {{#assign 'variable'}}value{{/assign}}
  • Fixed an issue with the default permissions of the lib/external folder on Linux
  • Fixed an issue with saving very large XML body mappings in the UI


  • Upgraded protoc from 3.21.9 to 3.23.4
  • Upgraded gRPC from 1.51.0 to 1.56.1

Monday 7 August 2023

Aligning company objectives when planning a system simulation project

When planning a system simulation (service virtualization) project, aligning the project with the enterprise strategy is essential. For example, here is what I have been discussing recently with a prospect.

CIO's objective: High uptime.

Director of QA objective: Quality product (feeds into high uptime enterprise objective)

Director of QA key result: Do root cause analysis on all 2023 incidents resulting in lowering downtime of the product (allows a targeted approach to resolving uptime issues)

Director of QA key result: increase automated test coverage by 30% to be able to prevent root cause issues from happening again

Director of QA key result: Create three system simulators (virtual services) to allow for increased automated test coverage for user journeys X, Y and Z

Result: The Director of QA can communicate to the CIO that selected categories of issues causing uptime downgrade will be mitigated going forward, and the expected uptime is increased by X.

If you want to learn about options for your specific situation, please contact us.

Monday 31 July 2023

Case study: estimating the cost of in-house development of mocks

We had a call yesterday with an architect working for a global aerospace enterprise. We discussed the pros and cons of building a mocking tool in-house vs. purchasing off-the-shelf.

He estimated his developer would need around ten days to create the basic version of the mock for one service and then roughly two days a month to maintain it. So, this tool project would take up 34 days of the developer's time in the next 12 months. Roughly a €9,000 ($10,000) cost in the first 12 months (€63k average dev salary / 240 work days a year * 34 days for this project).

This is a very conservative estimate, in my experience, but it's a helpful start.

On top of that, I suggested a back-of-the-envelope calculation: an employee generates, on average, $1,550 in revenue per day for that company ($26b yearly revenue / 70k employees / 240 work days a year), so 34 days would be worth roughly $53,000.

So the cost of developing in-house based on these estimations is MAX($53,000, $10,000) = $53,000 for the first 12 months.
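The back-of-the-envelope arithmetic above can be reproduced in a few lines. All inputs are the rough assumptions from the call, not measured data:

```python
# Rough cost model for the in-house mock estimate discussed above.
DEV_DAYS = 10 + 12 * 2          # 10 days to build + 2 days/month maintenance
AVG_DEV_SALARY_EUR = 63_000     # assumed average developer salary
WORK_DAYS_PER_YEAR = 240

# Direct salary cost of the developer's time on this project
salary_cost_eur = AVG_DEV_SALARY_EUR / WORK_DAYS_PER_YEAR * DEV_DAYS

REVENUE_PER_YEAR_USD = 26_000_000_000
EMPLOYEES = 70_000

# Average revenue an employee generates per work day
revenue_per_employee_day = REVENUE_PER_YEAR_USD / EMPLOYEES / WORK_DAYS_PER_YEAR
opportunity_cost_usd = revenue_per_employee_day * DEV_DAYS

print(round(salary_cost_eur))            # 8925, i.e. roughly EUR 9,000
print(round(revenue_per_employee_day))   # 1548, i.e. roughly $1,550/day
print(round(opportunity_cost_usd))       # 52619, i.e. roughly $53,000
```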

So, the ROI of using Traffic Parrot instead of developing in-house is high, even just for the first service he wanted to mock.

On top of that, Traffic Parrot might be considered a low-risk option for the company, as our tool is already working. It has been tested and used by companies globally since 2016. The company's developer, however, would be starting from scratch, with "can it do XYZ" requests from other developers week by week.

Off-the-shelf can also be future-proof. For example, our clients use our tool in different environments ranging from local, CI/CD and shared environments across many operating systems. Over the years, we have developed features that improve the developer experience in all of those cases, which, when you are developing in-house, you might need in the future, even if you do not see it today.

I also suggested pulling the Director of Product into the discussion. He might have more visibility on the cost of those 34 dev days to the product revenue (I believe it's at least 3-5x the average employee number I used above). He might also have a budget to purchase tools if the line manager does not.

Ultimately, it's the prospect's call to either develop in-house or use off-the-shelf. We are here to help them make an informed decision by walking through the items mentioned above.

If you want to learn about other options for your specific situation, please contact us

Thursday 1 June 2023

How do system simulators (service virtualization) and real systems work together?

We have been asked this week by a Director of QA working for a South American energy company, "How do system simulators and real systems work together?".

Several times, different prospects have asked this question, so we had diagrams ready to show.

The current situation in your test and development environment is most likely that you have problems with third-party or backend systems that block your regression testing and new feature development (all shown in red on the diagram).

Depending on your specific situation, several options are available to add API and system simulators to your environments. One of the popular options among directors asking this question is introducing a new test environment, the "simulated" one. If you do that, most of your regression and new feature testing can be done with simulators (green), allowing for faster time to market for new features and lower costs of introducing changes to your product.

Here is a sample ROI model explaining the benefits of QA using API and system simulators.

If you want to learn about other options for your specific situation, please contact us

Traffic Parrot 5.40.2 released, what's new?

We have just released version 5.40.2. Here is a list of the changes that came with the release:


  • Added Couchbase data source support:
    {{ dataSource 'couchbase.db' 'INSERT INTO PERSON(KEY, VALUE) VALUES ("$id", {"id" : $id,"name" : $name})' id=1000 name='some-name' syntax='N1QL' }}
    {{ dataSource 'couchbase.db' 'SELECT name FROM PERSON USE KEYS "$id"' id=1000 single=true syntax='N1QL' }}
    {{ dataSource 'couchbase.db' 'INSERT INTO PERSON(KEY, VALUE) VALUES ("$id", $object)' id=1000 object=example syntax='N1QL' }}
  • New helper to iterate a certain number of times:
    {{#times 10}}{{@index}}{{#unless @last}},{{/unless}}{{/times}}
  • New HTTP mapping bulk delete API:
    DELETE http://localhost:8080/api/http/mappings/bulk/UUID1,UUID2,UUID3
  • New gRPC mapping bulk delete API:
    DELETE http://localhost:8080/api/grpc/mappings/bulk/UUID1,UUID2,UUID3


  • Library upgrades to fix OWASP issues


  • Upgraded bundled JRE from 8u362 to 8u372
  • Helper {{#equal a b }}true{{else}}false{{/equal}} can now be written inline as {{ equal a b }}
  • When using Handlebars partial files, it is now possible to use a different file extension than .hbs for example {{>example.txt}} will read from file example.txt

12 Microservices Testing Techniques for Dependency Control

Friday 5 May 2023

Traffic Parrot 5.39.5 released, what's new?

We have just released version 5.39.5. Here is a list of the changes that came with the release:


  • Multiple gRPC servers can now be recorded at the same time, for example:
    com.example.Service/method -> host1:port1
    com.example.packageA.* -> host2:port2
    com.example.packageB.* -> host3:port3
  • Added support for Java object Serializable Proto messages sent over JMS connections e.g. RabbitMQ RPCs
  • Added support for sending multiple types of request/response messages (e.g. two different Java Proto request/response pairs) on a single request/response queue pair
  • Added support for RabbitMQ contentType header
  • Added PostgreSQL example connection to database-connections.json
  • JSON request/response field data in mappings can now be externalized to/from MappingData.csv for an alternative way to edit field data across multiple mappings in a table format
  • Added a Tools UI menu with links to a simple file editor and mapping field data externalization


  • JMS advanced request and response types are now retained in the UI form when starting/stopping recording
  • Improved HTTP recording UI error message when there is a problem fetching recording status in the UI
  • Increased the HTTP recording UI status timeout to cope better with slow browser connections
  • If there is a problem decoding a recorded JMS message, fall back on a raw bytes recording representation
  • The error message now mentions which connection ids were actually present when a JMS connection id is not found
  • Removed deprecated "matches script" option from the request body matcher dropdown (replaced by dedicated fields)


  • Upgraded bundled JRE from 8u352 to 8u362
  • When matching JMS requests, mappings for Java object Serializable requests now must have the bodyType specified
  • The default UI HTTPS certificate user-interface-https.jks was renewed

Saturday 29 April 2023

What is the difference between API-first, design-first and code-first APIs?

What does it mean to follow the API-first approach? It means that APIs are first-class citizens that can be products the company offers its customers; they are critical business assets.

What is the API design-first approach? It means working closely with the client, product, architecture, testing and development teams to design the services available in the API and the data format before actually starting to code them.

It contrasts with code-first APIs, where you dive straight into coding and creating the API without any upfront specifications. The API schema can be defined in a format like OpenAPI for HTTP APIs, Protobuf for gRPC or AsyncAPI for RabbitMQ, Kafka and other asynchronous systems.

The API design-first approach is proven to parallelise teamwork and get products to customers much faster. Design-first fits into the API-first approach adopted by companies that treat APIs as first-class citizens critical to their business success.

Difference between API-first, design-first and code-first APIs

We can help you with derisking your API-first and design-first API initiatives. Reach out to us and talk with our development and QA experts.

Sunday 9 April 2023

Traffic Parrot 5.38.1 released, what's new?

We have just released version 5.38.1. Here is a list of the changes that came with the release:


  • Add request/response recording Protobuf type fields to JMS UI

Monday 3 April 2023

Traffic Parrot 5.38.0 released, what's new?

We have just released version 5.38.0. Here is a list of the changes that came with the release:


  • RabbitMQ JMS now supports record/replay of Protobuf messages using their JSON representation
  • Added new dedicated endpoints that can be used for Docker and OpenShift health check probes:
    • http://localhost:8080/api/health
    • http://localhost:8081/__admin/health
  • The trafficparrot.license comment line is now printed in the logs to help clarify which license is being used
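As a sketch of how the health endpoints above could back an OpenShift or Kubernetes probe (the probe values and container spec placement below are illustrative assumptions, not a shipped configuration):

```yaml
# Hypothetical probe definition for a Traffic Parrot container spec
readinessProbe:
  httpGet:
    path: /api/health   # virtual service health endpoint on port 8080
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
```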

Monday 27 March 2023

Traffic Parrot 5.37.8 released, what's new?

We have just released version 5.37.8. Here is a list of the changes that came with the release:


  • Added support for protoc on Apple ARM based M1/M2 processors running inside of Docker

Roll-out Traffic Parrot across the whole enterprise

A number of Traffic Parrot customers use the simulation tool across their whole enterprise as the primary and recommended API, system and service simulation tool.

We often get asked what a large-scale rollout like that would look like. We have created a high-level project overview that captures the process we have seen work well for our existing customers: API and Service Simulation Tool Implementation Project Plan

Tuesday 21 March 2023

How to deploy Traffic Parrot in OpenShift

A significant number of Traffic Parrot customers run our tool in OpenShift. This blog post discusses a template process you can follow to do the same. 

Here’s a quick terminology review to get us on the same page:

  • docker = used to build, push and run container images
  • podman = a docker replacement that Red Hat (the company behind OpenShift) develops and recommends, almost identical to standard docker
  • oc = OpenShift command line interface to deploy and configure OpenShift services etc.
  • helm = tool used to package and deploy OpenShift applications automatically
  • TPLS = Traffic Parrot License Server
  • TP = Traffic Parrot

If the end goal is to deploy Traffic Parrot in OpenShift, you need the following:

  • TPLS Docker image built and pushed to OpenShift Docker registry
  • TPLS Helm Chart built and deployed to OpenShift Cluster
  • TP Docker image built and pushed to OpenShift Docker registry
  • TP Helm Chart built and deployed to OpenShift Cluster

There will be several steps to deploy things in OpenShift that we need to go through:

  1. (One-time learning) Discuss the industry standard Docker and OpenShift workflow
  2. (One-time setup) Establish OpenShift cluster connectivity from your localhost
  3. (One-time setup) Establish Docker registry connectivity from your localhost
  4. (One-time learning) Discuss the client-specific OpenShift workflow (there may be additional steps beyond the industry standard ones)
  5. (One-time setup) Create a TPLS Docker image using podman
  6. (One-time test) Deploy TPLS in OpenShift using oc and test it works as expected
  7. (One-time test) Test that localhost TP can connect to OpenShift TPLS
  8. (One-time setup) Create a TP Docker image using podman
  9. (One-time test) Deploy TP and TPLS in OpenShift using oc and test it works as expected
  10. (One-time setup) Create a pipeline to deploy TPLS in OpenShift from GitHub sources
  11. (One-time setup) Create a pipeline to test TP API mocks and deploy TP in OpenShift from GitHub sources
  12. (One-time setup) Create a client internal confluence page for API mocks developers on how to make changes to Traffic Parrot client API mocks and deploy them using the automated pipelines
  13. (One-time test) Test TP API mocks with a selected use case
  14. (Ongoing process) Develop TP API mocks locally, then commit configuration to GitHub (which will be deployed by a pipeline automatically to OpenShift)
  15. (Ongoing process) Run TP pipeline to test TP API mocks and deploy to OpenShift automatically with rolling releases for partners
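Steps 5-9 can be sketched with standard podman, oc and helm commands. The registry host, project name and chart path below are placeholders, not our actual artifact names:

```shell
# Build and push the TP image to the cluster's registry (names are placeholders)
podman build -t registry.example.com/myproject/traffic-parrot:latest .
podman push registry.example.com/myproject/traffic-parrot:latest

# Log in and select the target project
oc login https://api.cluster.example.com:6443
oc project myproject

# Deploy the Helm chart and verify the rollout
helm install traffic-parrot ./traffic-parrot-chart
oc rollout status deployment/traffic-parrot
```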

We can help you with all of those by providing the standard scripts our customers use.

Monday 20 March 2023

Traffic Parrot 5.37.7 released, what's new?

We have just released version 5.37.7. Here is a list of the changes that came with the release:


  • New HTTP request matching script that can be used to, for example, match a request header using a regular expression
  • New {{evaluate}} helper to evaluate a JavaScript command
  • JMS recording now supports requests that use JMSReplyTo
  • New IBM MQ JMS connection toggle that copies the request message id to the response message id
  • RabbitMQ JMS improvements including:
    • RabbitMQ RPC support
    • RabbitMQ replyTo support
    • RabbitMQ virtual host support
    • Declare queue if it does not already exist in the broker on record/replay start using the arguments specified in the connection e.g. x-max-length
  • Object store improvements including:
    • All operations are now thread safe within a single Traffic Parrot instance
    • A default value can be specified to return if an object is not found
  • property is now also taken into account for {{ dataSource }} CSV file access


  • Allow API requests to set the Accept header with multiple content types, so long as the first one listed is application/json
  • Fixed an issue with the scenario dropdown when no default port specified
  • If there is an exception when starting, e.g. due to a missing property, startup fails immediately instead of waiting the full timeout duration to fail
  • If the trafficparrot.license does not exist on startup, a relevant message is now displayed in the UI
  • Improved compatibility with OpenShift 4 when using a TLS passthrough route to the license server


  • gRPC matching now returns the closest match if multiple candidate responses match and warns in the logs about multiple matches
  • License text files are now bundled into a single file instead of many small files, which were slow to extract on Windows

Friday 24 February 2023

Effective OKR for a Director of Quality Assurance

An effective OKR (Objectives and Key Results) for a Director of Quality Assurance (QA) should include the following attributes:

  • Clear and Specific Objective: The objective of the OKR should be clear, specific, and measurable. For example, "Improve the quality of our software products" is too broad, while "Reduce customer-reported software bugs by 30%" is specific and measurable.
  • Alignment with Company Goals: The OKR should align with the company's goals and mission. It should support the company's strategic objectives and contribute to its success.
  • Challenging but Achievable: The OKR should be challenging enough to motivate the team but achievable within the given time frame. It should stretch the team's capabilities and drive them to perform at their best.
  • Focus on Outcomes, not Activities: The OKR should focus on outcomes, not activities. It should specify the desired outcome, and the team should have the autonomy to decide the best way to achieve it.
  • Measurable Key Results: The Key Results should be measurable and quantifiable and provide clear evidence of progress towards the objective. For example, "Reduce software bugs by 30%" is measurable, while "Improve software quality" is not.
  • Time-Bound: The OKR should have a specific time frame for achieving the objective and key results. This creates a sense of urgency and accountability and helps the team stay focused on the goal.
  • Relevant to the Role: The OKR should be relevant to the Director of QA's role and responsibilities. It should align with their areas of expertise and the available resources.

Overall, an effective OKR for a Director of QA should be clear, specific, challenging but achievable, aligned with company goals, focused on outcomes, measurable, time-bound, and relevant to the role. 

OKRs and KPIs for a QA Manager in 2023

As a QA Manager in 2023, keeping track of your team's performance and ensuring they are meeting company goals is essential. One way to do this is by implementing OKRs (Objectives and Key Results) and KPIs (Key Performance Indicators) for your team.

OKRs are a management tool that helps to align your team's goals with the company's objectives. They consist of an objective, the goal you want to achieve, and key results, which are the measurable outcomes that show progress towards that goal.

On the other hand, KPIs are metrics that help you track your team's performance. They can measure anything from the number of bugs found per sprint to the time it takes to complete a task.

To implement OKRs and KPIs effectively, it's essential to follow a few key steps:

  1. Define your company's objectives: This should be done at the beginning of each quarter or year. These objectives should be specific, measurable, and achievable.
  2. Set individual objectives: Once you have your company objectives in place, set individual objectives for your team members. These should align with the company objectives and be specific to their role.
  3. Define key results: Determine the key results that will show progress towards achieving the objectives. These should be measurable and achievable within the timeframe.
  4. Set KPIs: Identify the KPIs that will help you track your team's performance towards the objectives. These should be measurable, relevant, and timely.
  5. Review and adjust: Regularly review your team's progress towards the objectives and adjust as needed. Celebrate successes and identify areas for improvement.

By implementing OKRs and KPIs, you can ensure that your team is aligned with the company's objectives and track their performance towards achieving them. This will help you identify improvement areas and ensure that your team is working effectively and efficiently.

Wednesday 22 February 2023

OKRs for 2023 for a Director of QA

As a Director of QA, you’re likely always looking for ways to improve your team’s performance and the quality of your products. One effective method for achieving these goals is implementing OKRs – Objectives and Key Results – into your company’s operations. In this blog post, we’ll explore how OKRs can benefit your team and discuss some possible OKRs you could set for 2023.

What Are OKRs?

OKRs are a goal-setting framework that helps organizations define and track their objectives and the measurable outcomes they aim to achieve. OKRs typically include two components: an objective and a set of key results.

The objective is a high-level, qualitative statement that describes what the team wants to achieve. It should be ambitious and aligned with the company’s mission and values. The key results are specific, measurable, and time-bound milestones that help track progress towards the objective.

Benefits of OKRs for QA Directors

OKRs can help QA Directors in several ways:

1. Focuses the team on what’s important

OKRs help align the team’s efforts with the company’s goals, ensuring everyone is working towards the same objectives. This can help eliminate distractions and unnecessary work that doesn’t contribute to the team’s success.

2. Enables better tracking of progress

By setting specific, measurable goals, OKRs enable the team to track progress and adjust their efforts. This helps prevent stagnation and ensures that the team is always moving forward.

3. Provides a framework for continuous improvement

OKRs can help identify areas of improvement and encourage the team to learn and grow. This can lead to better performance and more significant contributions to the company’s success.

OKRs for QA Directors in 2023

Here are some OKRs that QA Directors could set for their teams in 2023:

Improve product quality
Key Results:
Decrease the number of critical defects by 50%
Increase customer satisfaction scores by 10%
Reduce the time to fix defects by 30%

Increase automation coverage
Key Results:
Increase the number of automated tests by 50%
Improve test automation coverage from 50% to 75%
Reduce the time to execute automated tests by 50%

Develop a culture of continuous improvement
Key Results:
Implement a monthly team-wide retrospective
Increase the number of team members receiving training by 50%
Launch a QA mentorship program for junior team members

By setting OKRs, QA Directors can help focus their teams on what’s essential, track progress, and drive continuous improvement. By setting goals aligned with the company’s mission and values, QA Directors can help their teams significantly contribute to its success in 2023 and beyond.

Friday 17 February 2023

3 KPIs for Director of QA and Head of QA for 2023

As a Head of QA, it's essential to measure your team's performance and ensure that your organization's products meet the highest standards of quality. Here are three KPIs that are commonly used in QA departments to track and measure quality:

  • Defect density - Defect density is a metric that measures the number of defects found in a particular phase of software development or a given amount of code. This metric helps you to identify areas of the development process that require improvement and measure the effectiveness of the testing process.
  • Test coverage - Test coverage is the percentage of the software code that has been tested by your team. This KPI provides a way to measure the thoroughness of your testing efforts and helps you identify areas of the code that have not been tested. By increasing test coverage, you can increase the quality of your products and reduce the number of defects that make it into production.
  • Mean time to detect (MTTD) - MTTD measures the time it takes to detect a defect from the moment it is introduced into the code. This KPI is an essential measure of the effectiveness of your testing process and helps you identify the most critical areas for improvement. By reducing MTTD, you can minimize the impact of defects on your customers and ensure that your products are of the highest quality.
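As an illustrative sketch, the three KPIs above can be computed from a defect log. All data and field names here are hypothetical, invented for the example rather than taken from any particular tool:

```python
from datetime import datetime, timedelta

# Hypothetical defect records: when each defect was introduced and detected.
defects = [
    {"introduced": datetime(2023, 1, 2), "detected": datetime(2023, 1, 5)},
    {"introduced": datetime(2023, 1, 10), "detected": datetime(2023, 1, 12)},
    {"introduced": datetime(2023, 1, 20), "detected": datetime(2023, 1, 27)},
]

lines_of_code = 12_000          # size of the code base under measurement
lines_covered_by_tests = 9_000  # lines exercised by the test suite

# Defect density: defects per thousand lines of code (KLOC).
defect_density = len(defects) / (lines_of_code / 1000)

# Test coverage: percentage of the code exercised by tests.
test_coverage = lines_covered_by_tests / lines_of_code * 100

# Mean time to detect (MTTD): average introduced-to-detected interval.
mttd = sum((d["detected"] - d["introduced"] for d in defects), timedelta()) / len(defects)

print(f"Defect density: {defect_density:.2f} defects/KLOC")  # 0.25
print(f"Test coverage: {test_coverage:.0f}%")                # 75%
print(f"MTTD: {mttd.days} days")                             # 4 days
```

In practice the inputs would come from your issue tracker and coverage tooling; the point is that each KPI reduces to a simple, repeatable calculation you can trend over time.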

By tracking these KPIs and continuously working to improve them, you can ensure that your organization's products meet the highest standards of quality and improve customer satisfaction. 

Wednesday 15 February 2023

API mocks and simulators for API consumers - supporting client and partner API integration

If you create APIs for your partners and customers, you might need API mocks and simulators to help them onboard faster and with fewer issues. In this video, we describe the options you have to provide those mocks and simulators to your user base.

Monday 30 January 2023

Traffic Parrot 5.36.4 released, what's new?

We have just released version 5.36.4. Here is a list of the changes that came with the release:


  • New upgrade script supporting Linux/Mac/Windows
  • The tray icon now displays the installation location and has a link to the latest Traffic Parrot documentation
  • The default implementation of {{now}} can be toggled using
    • To use the Handlebars {{now}} helper:
    • To use the WireMock {{now}} helper:
  • Alternatively, you can add the provider parameter per call to dynamically switch between them, otherwise the default implementation specified in will be used
    • To enable this set in
    • To use the WireMock implementation:
      {{now offset='2 years' format='epoch' provider='WIREMOCK'}}
    • To use the Handlebars implementation:
      {{now format='short' provider='HANDLEBARS'}}


  • Fixed a bug when parsing single quotes in OpenAPI specifications


  • Upgraded bundled JRE from 8u345 to 8u352
  • This release is compatible with the MQ JAR up to version
  • Mention both tp.out and trafficparrot.log when there is a problem starting

Thursday 26 January 2023

Building high-performance teams by allowing people to be themselves - Anatoli Iliev

We have asked Anatoli Iliev about his experience building high-performance teams. He is a Software Engineering Manager with a decade of experience building teams for companies such as Infragistics, SumUp and VMware.

What obstacles would typically, in your experience, prevent the team from delivering high-quality software?

Firstly, I would like to start with a quick disclaimer. I'm expressing my own opinion here, and that opinion is not related to any company or any person I have a connection with.

In terms of obstacles, that is a very good question. What are the obstacles that prevent a team from delivering high-quality software? I think there are a few, and they can appear all together or one by one. But the one thing that is probably most influential on the quality of the software the team is producing is the team culture. I really think that the team should have an open culture, a culture which allows every single individual to be themselves and to express their feelings, their concerns, their comments and so on.

This should happen in a so-called healthy environment, so people can be themselves without hurting others. I can say that a team that works as a single unit is surely more successful than individual contributors on their own. I can give you an example. No matter how good a football player is, a single Ronaldo or Messi cannot be better than a whole team playing against them. That's why team spirit is most important, along with the team culture and team values. I think that's the most important thing. However, there are other factors that impact the quality of the software being delivered. I have seen teams, companies and organizations that change their priorities all the time, not every day but two or three times a day, for months. And that affects the quality and delivery of software.

And of course, there is one more factor that's quite important, and I have seen this: the lack of experience in newly formed teams. When you have a newly formed team, you have individuals clashing with each other, trying to dominate, trying to find their place. And this prevents teams from delivering high-quality software, I can say. As long as the team members know that there is a place for everyone and everyone is happy with their place, that is helpful for the quality of delivery.

When you talk about these personalities or egos clashing and people trying to find their place, you said one way you've attacked that was to highlight that there's a place for everyone in this team. Anything specific that you have done that people could use?

I have seen situations where people were nominated to lead, but for one reason or another, they were not able to do it properly. The interesting thing is that these people always know that about themselves. It is a matter of open and honest communication to settle this and readjust things in a way that everyone feels more effective and happier, I can say. That's not always the most comfortable position for everyone. Sometimes people need to step out of their comfort zone to unleash their full potential. But that's more of a sense a leader develops through lots of honest and frank communication with the team. The one thing that's really helpful for me is to be fully transparent with everyone. This helps me set the team in the proper configuration, I can say.

And do you use any traditional Scrum ceremonies to apply full transparency, or how do you handle conflicts when they arise? Would you just approach team members and say let's talk, or do you wait for a specific meeting?

I am using all those Scrum ceremonies, in fact. But for the question you are asking, it's more about the current situation. Sometimes there is a need for a deep technical conversation with arguments on what approach the team should take. If you have two strong individuals arguing about that, it's good to invite the whole team and have this deep technical conversation. On the other hand, there are situations where these two strong individuals are arguing about things that are more on the personal side, like character, approach, and so on.

If you have someone in the team who has been there for a long time and is performing very well, they know that they are, let's say, at the leading edge of the team. And if you have a rising star in the team who is quite pushy and trying to make their way, this can make the established person feel pushed on a personal level. They can start arguing and become defensive about every technical proposition made by the other person. This is something that is handled more in one-to-one conversations. Different from the previous example, one-to-one conversations are the key here. And, of course, this needs to be done openly and frankly.

So the big thing is team culture, and you said that transparency and authenticity drive you towards building that team culture, an open culture. Anything else that comes to mind? Is it important to have clear communication and expectations within the team?

As I mentioned, people should know and feel comfortable with who is who in the team. That's another big thing. They should be themselves and accept others. It takes some time for this to happen, but once you have this culture properly set in the team, it becomes easier for every next person joining the team. That makes sense. At the beginning, it takes more effort to do.

Once you've got this team culture built and you've got a team that's not storming anymore but might be performing, you can talk about looking at the metrics to drive continuous improvement of the software delivery process. What's your experience with that?

Yes, that's exactly the case. You need proper metrics to measure what you are doing and how you are progressing. And there are a few things that are equally important. I can summarize them as three things: customer satisfaction, team velocity, and product quality. These three things are equally important; you cannot have one over the others in any way. Customer satisfaction is important, but you can have a happy customer without having a team performing at their best. That way you are wasting resources, and you could do even better if you optimized the team and increased efficiency. On the other side, you can have good team velocity but low quality. This will affect customer satisfaction and future deliveries because, for example, your architecture is not well defined and you cannot build on top of leaky foundations.

So do you have any examples of how you actually measured those three? So customer satisfaction, what did you look at? For example, if you have a mobile app, you can look at the app store ratings or how many tickets you get for a specific feature. Have you had anything specific you could share?

Well, I'm a big fan of proactive information gathering. I prefer direct communication with customers and users so they can provide their view. I'm a big fan of feedback buttons or forms that are easy to reach, not too noisy or annoying, so people can leave their view with just one or two clicks. I really like the approach that Zoom has: after each Zoom call, you have either a thumbs up or a thumbs down, one-click customer feedback. I'm sure that data means a lot to them. That's something I strive to have in my products.

Having a low barrier for providing feedback is important for getting more responses and gathering more information. At the same time, it's automated enough that you can process and analyze the feedback at scale. And yes, based on the feedback provided, I also directly communicate with customers to gather more information and get a deeper understanding of their needs and concerns.

How do you measure team velocity and can you explain more about it?

Team velocity is a way to measure the amount of work a team completes in a given period of time, such as one sprint, two weeks, or one month. The team should have their own way of estimating work, and over time they should become accustomed to it. This makes it easy to understand how much work was done in a specific period. It's important to note that work should be considered either fully done or not done at all, and not in a half-done state. To accurately measure velocity, it's important to break down work into small, meaningful chunks for the user. This takes time, but eventually it results in a fine-grained velocity that can be accurately measured.
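As a minimal sketch of the idea described above (the sprint data is invented for illustration), velocity is simply the sum of points for fully done items in each period, and an average over recent periods gives a rough forecast:

```python
# Hypothetical sprint history: story points only for items that were
# fully "done" - half-done work is excluded entirely, as described above.
sprint_history = [
    {"sprint": 1, "done_points": [3, 5, 2, 8]},
    {"sprint": 2, "done_points": [5, 3, 3, 2, 1]},
    {"sprint": 3, "done_points": [8, 5, 3]},
]

# Velocity per sprint: total points completed in that sprint.
velocities = [sum(s["done_points"]) for s in sprint_history]

# A simple average of recent sprints gives a forecast for the next one.
average_velocity = sum(velocities) / len(velocities)

print(velocities)        # [18, 14, 16]
print(average_velocity)  # 16.0
```

Breaking work into small, meaningful chunks, as the answer recommends, is what makes these per-sprint sums fine-grained enough to be a useful trend rather than noise.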

And how about quality? So how would you measure quality?

Measuring quality can be relatively straightforward once you have effective communication with your users. One way to measure it is by tracking the number of defects or issues that arise over a certain period. These can be grouped by impact or severity to gauge the level of pain they cause for customers. However, it is important to note that this measurement is only effective when customers are engaged and invested in providing feedback. Without their engagement, it can be difficult to get an accurate picture of the quality of your product.

Measuring the time for reaction to critical customer issues is an important metric for my teams. We measure how quickly we can resolve these issues, which helps improve team culture and overall performance.
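The two measurements described above, grouping defects by severity and tracking reaction time for critical issues, can be sketched like this (all records are hypothetical):

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical defect log for one reporting period.
defects = [
    {"id": "D-101", "severity": "critical",
     "reported": datetime(2023, 3, 1, 9, 0), "resolved": datetime(2023, 3, 1, 15, 0)},
    {"id": "D-102", "severity": "minor",
     "reported": datetime(2023, 3, 2, 10, 0), "resolved": datetime(2023, 3, 9, 10, 0)},
    {"id": "D-103", "severity": "critical",
     "reported": datetime(2023, 3, 5, 8, 0), "resolved": datetime(2023, 3, 5, 18, 0)},
]

# Group by severity to gauge how much pain defects cause customers.
by_severity = Counter(d["severity"] for d in defects)

# Mean time to resolve critical issues - the reaction-time metric.
critical = [d for d in defects if d["severity"] == "critical"]
mean_reaction = sum((d["resolved"] - d["reported"] for d in critical),
                    timedelta()) / len(critical)

print(dict(by_severity))  # {'critical': 2, 'minor': 1}
print(mean_reaction)      # 8:00:00
```

As the answer notes, these numbers are only as good as customer engagement: defects that customers never report never enter the log.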

Tuesday 3 January 2023

Agile and DevOps transformations done in a Waterfall way

Too often, our clients reach out to us seeking API simulation help, and during initial calls, we discover they also need help with the transformation and technology adoption side of the project. We see that they approach the Agile and DevOps transformation in a “Waterfall way”, doing large upfront designs and planning mass migrations.

For example, a company that hired one of our consultants to help with API simulation spent a year before engaging with us building a new tech platform with designs done upfront and little user feedback. The migration to the new platform was problematic due to unforeseen issues during the first usage of the new platform. The company sought our guidance on how to "force" the new platform and API simulation onto the developers and testers. Our feedback in this situation is to keep in mind the J-Curve effect and the change capacity of teams, and instead of forcing the new solution onto teams, listen to their feedback carefully. The issue with the migration could have been prevented by doing smaller-scale incremental migrations with early feedback from the developers and testers working on the projects. We have engaged with the development and testing teams and applied API simulation to solve the teams' high-priority problems, such as high numbers of UAT bugs. An incremental approach of introducing one new technique at a time allowed for building mutual trust between the transformation and project teams.

Sonya Siderova has explained the general principle very well in her article "How the J Curve Effect Defines the Success of Your Transformation Initiative".

If you need help planning a project to use API simulation to accelerate your digital transformation and create automated tests faster, feel free to reach out to us.