Thursday 25 March 2021

Traffic Parrot 5.24.9 released, what's new?

We have just released version 5.24.9. Here is a list of the changes that came with the release:


  • Extended virtual service support for using multiple native IBM® MQ virtual services (beta) linked to separate mappings:
    • The scenarios/VirtualServiceName/ directory now allows configuring:
      • The scenarios/VirtualServiceName/ibm-mq-connections.json file, which specifies the queue manager connections per virtual service
      • The scenarios/VirtualServiceName/ibm-mq-mappings directory, which contains the mapping files per virtual service
    • The UI allows starting MQ replay for multiple virtual services at the same time
  • The configuration files database-connections.json and ibm-mq-connections.json are now configured per virtual service, relative to the virtual service directory, rather than globally
  • These configuration files are now sorted by key when changes are made via the UI
  • The logs now contain the virtual service name to help determine which log lines correspond to which virtual services


  • Improved compatibility when importing OpenAPI examples
  • Fixed an issue where the total number of MQ writeConnectionsToOpen specified in ibm-mq-connections.json was incorrectly being read from the readConnectionsToOpen field
  • Fixed a race condition where a scenario change could result in HTTP mapping file bodies being momentarily loaded from the incorrect directory
  • Fixed an issue where some temporary directories were not being cleaned up properly on shutdown
  • Fixed an issue where refreshing the page after stopping replay mode would result in attempting a form resubmission
  • Fixed an issue where the UI log pages would sometimes be incorrectly detected as binary files by the browser and downloaded instead of displayed


  • The xPath helper now allows iteration over specific extracted nodes as an alternative to using the xPathList helper, which iterates over all of the child nodes
  • The equal helper now additionally considers the arguments to be equal if they convert to the same string representation when rendered (e.g. XML nodes)
  • Upgraded bundled JRE from 8u262 to 8u282
  • Upgraded WireMock from 2.27.1 to 2.27.2
  • Upgraded gRPC from 1.31.1 to 1.36.0
  • Upgraded protoc from 3.12.0 to 3.15.5
  • This release is compatible with the MQ JAR up to version
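The per-virtual-service layout described above can be sketched as follows. The directory and file names, and the readConnectionsToOpen/writeConnectionsToOpen fields, follow the release notes; the queueManager field and the overall JSON shape are illustrative assumptions, not Traffic Parrot's actual schema:

```python
import json
import tempfile
from pathlib import Path

# Sketch of the per-virtual-service layout from the release notes.
# The JSON shape below is an assumption for illustration only.
root = Path(tempfile.mkdtemp())
service_dir = root / "scenarios" / "my-mq-service"
mappings_dir = service_dir / "ibm-mq-mappings"   # mapping files per virtual service
mappings_dir.mkdir(parents=True)

connections = {
    "readConnectionsToOpen": 2,    # field names per the release notes
    "writeConnectionsToOpen": 2,
    "queueManager": "QM1",         # hypothetical field
}
# Persist the file sorted by key, mirroring how the UI now saves changes
conn_file = service_dir / "ibm-mq-connections.json"
conn_file.write_text(json.dumps(connections, indent=2, sort_keys=True))

keys = list(json.loads(conn_file.read_text()))
print(keys == sorted(keys))  # True
```

Keeping the connections file and mappings directory under the virtual service's own directory is what allows each virtual service to be configured and replayed independently.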

Tuesday 23 March 2021

Can Traffic Parrot be used to automate regression testing?

One of the use cases for Traffic Parrot is automating regression testing. Our customers typically use it in those scenarios to simulate backend or third-party dependencies that are slow to respond or where setting up test data is problematic. Below is a sample use case demonstrating Traffic Parrot with Selenium. For more details, please reach out to us so we can help you understand how our product can help automate your regression testing.

Traffic Parrot in regression testing with Selenium
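The pattern above can be sketched in a few lines: a regression test exercises the application against a deterministic stand-in for the slow third-party dependency. In a real setup Traffic Parrot would serve the mock; here a minimal Python stdlib HTTP stub plays that role, and the endpoint and payload are invented for illustration:

```python
import http.server
import json
import threading
import urllib.request

# Stand-in for a mocked third-party payment API. In practice Traffic Parrot
# would provide this mock; a stdlib stub is used here so the sketch runs
# anywhere without dependencies.
class MockBackend(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "PAID"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), MockBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The regression test points at the mock instead of the slow third party,
# so it runs in seconds and does not depend on manual test-data setup.
url = f"http://127.0.0.1:{server.server_port}/payment/123"
with urllib.request.urlopen(url) as resp:
    payment = json.load(resp)
server.shutdown()

print(payment["status"])
```

A Selenium suite follows the same shape: the browser drives the application under test, while the application's backend calls land on the mock instead of the real dependency.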

Tuesday 9 March 2021

Traffic Parrot POC/evaluation project plan

After working with many enterprises as well as startups that have evaluated our product, we have identified a few key points that constitute a successful POC/evaluation plan. Here is the high-level POC plan that has worked for our clients so far.

Timeline: 2-3 weeks

High-level project plan:

  • Pre-requisites stage

    • Day 1: [TP and CLIENT] Schedule POC/evaluation 1-hour kick-off meeting with Traffic Parrot (TP) and client

      • Walk through the plan for the POC

      • Clarify value delivery milestone

      • Roles and responsibilities

      • Initial training plan

      • MNDA

    • Day 1: [CLIENT] Identify the name of the project that will use Traffic Parrot

    • Day 2-3: [CLIENT] Define the POC/evaluation project scope

      • [CLIENT] What is the value delivery milestone - how do we know TP was successful and that the investment in new technology was worthwhile

      • [CLIENT] What tests would you like to run during those 14 days

        • [CLIENT] Prepare specs for the APIs to be mocked

        • [CLIENT] Prepare a list of tests to run

  • Evaluation stage

    • Day 4-18: Evaluate Traffic Parrot product

      • [TP] Share the product installation files, including the evaluation license

      • [CLIENT] Run tests against the Traffic Parrot product

  • Close stage

    • Day 19-20: POC/evaluation Report

      • [CLIENT] prepares a summary report with POC/evaluation outcomes

Example project plan

Example prerequisites outcome delivered by the client to Traffic Parrot.

  • Project name: Finance System Mocking

  • Value delivery milestone

    • If we use Traffic Parrot to mock the third-party payment API, our payments team does not have to rely on their manual process to get API responses, and we will be able to complete our tests in days, not weeks. This will save our team 1-2 months a year in time to market by not having to wait for the third-party manual processes to complete.

  • Roles and responsibilities:

    • John - architect

    • Bob - provides sample JSON requests and responses for the REST APIs

    • Sally - verifies solution provided by TP folks is sufficient

  • Initial training plan

  • POC/evaluation scope

    • Sally and John - Prepare specs for the APIs to be mocked

  • Evaluation

    • Liam from Traffic Parrot - create the first mock based on the API specs provided by Sally and John

    • Sally - create the remaining mocks after seeing the first one created by Liam

  • Report and closing

    • Sally - create the evaluation report

    • John - provide a go/no-go decision from a technical perspective


Recording or creating IBM MQ virtual services manually?

The two most common ways of creating virtual services/mocks/simulators in Traffic Parrot for systems communicating over IBM MQ queues are recording messages to create Traffic Parrot request-to-response mappings, or creating the request-to-response mappings manually based on the system under test's documentation.

Both approaches have their pros and cons. The comparison below gives a high-level overview of the major differences between them. The right solution in your case might be either one, or a combination of both approaches. Please reach out to us if you have specific questions regarding your current project.

The two approaches:

  • Recording: create virtual services/mocks/simulators by recording IBM MQ messages exchanged with the real service/API
  • Manual: create virtual services/mocks/simulators by providing sample IBM MQ request and response messages based on the API/service documentation

Points to weigh when choosing between them:

  • Whether the real service/API has to exist
  • Whether the system under test has to be reconfigured to point at virtual queues
  • Whether the virtual services built are real examples from a test environment, not just hypothetical ones based on documentation
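To make the manual option concrete: a request-to-response mapping can be assembled from sample messages found in the service documentation. The sketch below is illustrative only - the JSON shape, queue names, message bodies, and file name are all assumptions for the example, not Traffic Parrot's actual mapping schema:

```python
import json
import tempfile
from pathlib import Path

# Sample request and response taken from (hypothetical) API documentation
sample_request = "<getBalance><accountId>123</accountId></getBalance>"
sample_response = "<balance><amount>50.00</amount></balance>"

# Illustrative mapping shape: match the documented request on one queue,
# reply with the documented response on another. Not the product's schema.
mapping = {
    "request": {"queue": "REQUEST.QUEUE", "bodyEqualTo": sample_request},
    "response": {"queue": "RESPONSE.QUEUE", "body": sample_response},
}

mappings_dir = Path(tempfile.mkdtemp()) / "ibm-mq-mappings"
mappings_dir.mkdir(parents=True)
mapping_file = mappings_dir / "get-balance.json"
mapping_file.write_text(json.dumps(mapping, indent=2, sort_keys=True))

print(json.loads(mapping_file.read_text())["response"]["queue"])  # RESPONSE.QUEUE
```

Recording produces files of the same kind automatically from observed traffic, which is why the two approaches can be combined: record what the test environment can produce, then hand-craft the edge cases the documentation describes but the environment cannot easily reproduce.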