Sunday, 13 June 2021

Traffic Parrot 5.26.2 released, what's new?

We have just released version 5.26.2. Here is a list of the changes that came with the release:


  • Added a new helper to allow calculating and formatting date offsets in responses e.g. {{ dateOffset (jsonPath request.body '$.date_field') add=true format='yyyy-MM-dd' days=1 }}
  • Added support for accessing a standard set of runtime properties in plugins so that e.g. plugins can be supplied with configuration files at runtime that are placed in virtual service specific or global shared directories


  • Improved support for handlebars partial template files:
    • The root for partials now takes into account the directory specified in trafficparrot.virtualservice.trafficFilesRootUrl=file:.
    • The root for partials when using virtual services is the named virtual service directory, e.g. scenarios/ScenarioName/*.hbs
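
As an illustration, a hypothetical shipping-date response body template could use the new helper to add one day to a date taken from the request (the helper call is the one shown above; the field names around it are assumptions):

```handlebars
{
  "orderDate": "{{ jsonPath request.body '$.date_field' }}",
  "estimatedDelivery": "{{ dateOffset (jsonPath request.body '$.date_field') add=true format='yyyy-MM-dd' days=1 }}"
}
```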

Monday, 10 May 2021

Traffic Parrot component architecture

"Other commercial service virtualization tools we have been using at our company for years offer a client application to create virtual services and a virtual service environment on the server-side to run the simulators. Does Traffic Parrot offer separate components to create virtual services and run virtual services?" - Senior Service Virtualization Specialist working for a national service provider.

Traffic Parrot encapsulates both the client-side and server-side components in one executable. This means that whether you install Traffic Parrot on your laptop or on a server, both installations have the capability to create as well as run virtual services.

Traffic Parrot has adopted this approach because it has proven effective in several next-gen service virtualization offerings that are popular both for traditional monolithic system testing and for container and microservice use cases (e.g. WireMock, Mountebank, or Hoverfly).

Our clients find this characteristic of our offering valuable compared to traditional commercial service virtualization offerings because it allows them to use Traffic Parrot both in traditional monolithic environments and when working with containers and microservices. They say it is especially beneficial when migrating from traditional monolithic architectures to the new world of Kubernetes (OpenShift, Rancher, etc.), because they can support both new and existing infrastructure, allowing for a smooth transition for their developers and testers and less overall time and effort spent on the migration.

Please find a few high-level component architecture comparison diagrams below. If you have any questions or would like to discuss your company's specific use case, please reach out to us and we will be glad to help.

Friday, 7 May 2021

Traffic Parrot 5.25.0 released, what's new?

We have just released version 5.25.0. Here is a list of the changes that came with the release:


Friday, 16 April 2021

Traffic Parrot 5.24.11 released, what's new?

We have just released version 5.24.11. Here is a list of the changes that came with the release:


  • Properties not defined in the properties file will now fall back on default values. The settings UI page and log file will display which properties are using default values rather than values taken from the properties file.


  • Improved diagnostic logging when ports are already in use at startup time
  • Maven MQ examples project now includes MQCSP parameter


  • This release is compatible with the MQ JAR up to version

Thursday, 25 March 2021

Traffic Parrot 5.24.9 released, what's new?

We have just released version 5.24.9. Here is a list of the changes that came with the release:


  • Extended virtual service support for using multiple native IBM® MQ virtual services (beta) linked to separate mappings:
    • The scenarios/VirtualServiceName/ directory now allows for configuring:
    • The scenarios/VirtualServiceName/ibm-mq-connections.json file is used to specify the queue manager connections per virtual service
    • The scenarios/VirtualServiceName/ibm-mq-mappings directory contains the mappings files per virtual service
    • The UI allows starting MQ replay for multiple virtual services at the same time
  • The configuration points database-connections.json and ibm-mq-connections.json are now configured per virtual service, relative to the virtual service directory, rather than globally
  • The file is now sorted by key when changes are made via the UI
  • The logs now contain the virtual service name to help determine which log lines correspond to which virtual services
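
As a sketch, a per-virtual-service scenarios/VirtualServiceName/ibm-mq-connections.json might look like the following. Only the readConnectionsToOpen and writeConnectionsToOpen field names are taken from this changelog; the queue manager connection fields are illustrative assumptions:

```json
{
  "connections": [
    {
      "queueManagerName": "QM1",
      "host": "mq.internal.example.com",
      "port": 1414,
      "channel": "DEV.APP.SVRCONN",
      "readConnectionsToOpen": 2,
      "writeConnectionsToOpen": 2
    }
  ]
}
```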


  • Improved OpenAPI import examples compatibility
  • Fixed an issue where the total number of MQ writeConnectionsToOpen specified in ibm-mq-connections.json was incorrectly being read from the readConnectionsToOpen field
  • Fixed a race condition where a scenario change could result in HTTP mapping file bodies being momentarily loaded from the incorrect directory
  • Fixed an issue where some temporary directories were not being cleaned up properly on shutdown
  • Fixed an issue where refreshing the page after stopping replay mode would result in attempting a form resubmission
  • Fixed an issue where the UI log pages would sometimes be incorrectly detected as binary files by the browser and downloaded instead of displayed


  • The xPath helper now allows iteration over specific extracted nodes as an alternative to using the xPathList helper, which iterates over all of the child nodes
  • The equal helper now additionally considers the arguments to be equal if they convert to the same string representation when rendered (e.g. XML nodes)
  • Upgraded bundled JRE from 8u262 to 8u282
  • Upgraded WireMock from 2.27.1 to 2.27.2
  • Upgraded gRPC from 1.31.1 to 1.36.0
  • Upgraded protoc from 3.12.0 to 3.15.5
  • This release is compatible with the MQ JAR up to version
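
As an illustration of the extended xPath helper, a template along these lines could iterate over only the nodes matched by an expression instead of all child nodes. The iteration syntax and the XML structure here are assumptions for illustration, not taken from the product documentation:

```handlebars
{{#each (xPath request.body '/order/items/item') }}
  <itemName>{{ xPath this 'name/text()' }}</itemName>
{{/each}}
```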

Tuesday, 23 March 2021

Can Traffic Parrot be used to automate regression testing?

One of the use cases for Traffic Parrot is automating regression testing. Our customers typically use it in those scenarios to simulate backend or third-party dependencies that are slow to respond or where setting up test data is problematic. Please find below a sample use case with Selenium. For more details, please reach out to us so we can help you understand how our product can help with automating your regression testing.

Traffic Parrot in regression testing with Selenium

Tuesday, 9 March 2021

Traffic Parrot POC/evaluation project plan

After working with many enterprises as well as startups that have evaluated our product, we have identified a few key points that constitute a successful POC/evaluation plan. Here is the high-level POC plan that has worked for our clients so far.

Timeline: 2-3 weeks

High-level project plan:

  • Pre-requisites stage

    • Day 1: [TP and CLIENT] Schedule POC/evaluation 1-hour kick-off meeting with Traffic Parrot (TP) and client

      • Walk through the plan for the POC

      • Clarify value delivery milestone

      • Roles and responsibilities

      • Initial training plan

      • MNDA

    • Day 1: [CLIENT] Identify the name of the project that will use Traffic Parrot

    • Day 2-3: [CLIENT] Define the POC/evaluation project scope

      • [CLIENT] What is the value delivery milestone - how do we know TP was successful and that the investment in new technology was worthwhile

      • [CLIENT] What tests would you like to run during those 14 days

        • [CLIENT] Prepare specs for the APIs to be mocked

        • [CLIENT] Prepare a list of tests to run

  • Evaluation stage

    • Day 4-18: Evaluate Traffic Parrot product

      • [TP] share the product installation files including the evaluation license

      • [CLIENT] runs tests against Traffic Parrot product

  • Close stage

    • Day 19-20: POC/evaluation Report

      • [CLIENT] prepares a summary report with POC/evaluation outcomes

Example project plan

Example prerequisites outcome delivered by Client to Traffic Parrot.

  • Project name: Finance System Mocking

  • Value delivery milestone

    • If we use Traffic Parrot to mock the third party payment API our payments team does not have to rely on their manual process to get API responses and we will be able to complete our tests in days not weeks. This will save our team 1-2 months a year in time to market not having to wait for the third party manual processes to complete.

  • Roles and responsibilities:

    • John - architect

    • Bob - provides sample JSON requests and responses for the REST APIs

    • Sally - verifies solution provided by TP folks is sufficient

  • Initial training plan

  • POC/evaluation scope

    • Sally and John - Prepare specs for the APIs to be mocked

  • Evaluation

    • Liam from Traffic Parrot - create the first mock based on the API specs provided by Sally and John

    • Sally - create the remaining mocks after seeing the first one created by Liam

  • Report and closing

    • Sally - create the evaluation report

    • John - provide a go/no-go decision from a technical perspective


Recording or creating manually IBM MQ virtual services?

The two most common ways of creating virtual services/mocks/simulators in Traffic Parrot for systems communicating over IBM MQ queues are either recording the messages to create Traffic Parrot request-to-response mappings or creating the request-to-response mappings manually based on the system under test documentation.

Both approaches have their pros and cons. The table below contains a high-level overview of the major differences between them. The right solution in your case might be either one, or a combination of both approaches. Please reach out to us in case you have specific questions regarding your current project.

Create virtual services/mocks/simulators by recording IBM MQ messages sent to the real service/API

Create virtual services/mocks/simulators manually by providing sample IBM MQ request and response messages based on API/service documentation

The real service/API has to exist



Requires reconfiguring of the system under test to point at virtual queues



The virtual services built are real examples from a test environment, not just hypothetical ones based on documentation



Key to the table:
Yellow - a potential risk factor
Green - typically a risk-free area

Friday, 26 February 2021

How can I manage HTTPS certificates with Kubernetes Ingress and Traffic Parrot?

Kubernetes Ingress is often used to expose HTTPS services outside the cluster. There are several options available for managing HTTPS certificates, which are described below.

Option A - Certificates only in Ingress

  • Use TP HTTP for both HTTP and HTTPS use cases
  • Ingress manages all HTTPS certificates
  • Ingress does all TLS offloading
  • Ingress can have different HTTPS certificates per hostname
  • TP only receives HTTP requests, never HTTPS
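
A minimal sketch of Option A using a standard Kubernetes networking.k8s.io/v1 Ingress, where the Ingress holds the certificate and forwards plain HTTP to TP (the hostname, secret name, service name, and port are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: trafficparrot-option-a
spec:
  tls:
    - hosts:
        - mocks.example.com
      secretName: mocks-example-com-tls   # certificate lives in the Ingress layer
  rules:
    - host: mocks.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: trafficparrot   # TP plain HTTP port
                port:
                  number: 8080
```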

Option B - Certificates only in TP

  • Use TP HTTP for HTTP use cases
  • Use TP HTTPS for HTTPS use cases
  • Ingress passes HTTPS connections through to the TP HTTPS backend using an SNI TLS proxy, routing by SNI hostname
  • TP manages all HTTPS certificates
  • TP receives HTTPS requests
  • TP does all TLS offloading
  • Requires compatible HTTPS clients that support client SNI (e.g. not supported in Java 6)
  • Ingress performance overhead incurred due to ingress SNI TLS proxy
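
With the NGINX Ingress Controller, for instance, the SNI TLS passthrough in Option B is enabled with an annotation rather than a tls section; this requires the controller to run with the --enable-ssl-passthrough flag, and other controllers configure passthrough differently. The hostname, service name, and port below are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: trafficparrot-option-b
  annotations:
    # Controller must be started with --enable-ssl-passthrough
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
    - host: mocks.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: trafficparrot-https   # TP terminates TLS itself
                port:
                  number: 8443
```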

Option C - Certificates in both Ingress and TP

  • Use TP HTTP for HTTP use cases
  • Use TP HTTPS for HTTPS use cases
  • Ingress does TLS offloading for original request from the client
  • Ingress has a copy of the certificates used by the client
  • Ingress initiates new HTTPS connection to TP HTTPS backend with client certificates
  • TP receives HTTPS requests
  • TP does TLS offloading for the request from the ingress controller
  • Ingress performance overhead incurred due to initiating new connection with client certificates
  • Maintenance overhead incurred, end users now need to provide the client certificates too


If the same HTTPS certificates are used globally then Option A is the simplest.

If certificates are per virtual service then Option B is preferable over Option C, unless you are using HTTPS clients that do not support SNI, in which case Option C could be used instead.

You may prefer Option A over both Option B and C for certificates per virtual service if you would like HTTPS to be managed fully by Ingress instead of TP. Option A is the simplest infrastructure but means there are two configuration points: TP for the HTTP virtual service configuration and Ingress for the HTTPS certificate configuration.

Monday, 8 February 2021

How can I expose Traffic Parrot virtual services outside my Kubernetes cluster?

When hosting Traffic Parrot virtual services in a Kubernetes cluster, sometimes access to the virtual services is required from traffic outside of the Kubernetes cluster. In these cases, it is typical to use Kubernetes Ingress alongside a DNS name server to provide access to the services inside the cluster. Here are two common options used to achieve this.

Option A - Static domain with unique Ingress path mappings

  • DNS server points static domain record to static Ingress controller IP
  • Unique paths are used to identify a unique backend service port
  • Ingress rules use the path prefix to route to a backend service port
  • Example
    • points to Ingress controller IP using static DNS entry
    • Ingress path prefix maps to backend service1 port 1234
    • Ingress path prefix maps to backend service1 port 4567
    • Ingress path prefix maps to backend service2 port 1234
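
The path-prefix rules in the example above could be sketched as a standard networking.k8s.io/v1 Ingress (the domain and path names are illustrative assumptions; the service names and ports follow the example):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: trafficparrot-static-domain
spec:
  rules:
    - host: mocks.example.com   # static DNS entry pointing at the Ingress controller IP
      http:
        paths:
          - path: /service1-1234
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 1234
          - path: /service1-4567
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 4567
          - path: /service2-1234
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 1234
```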

Option B - Wildcard domain with unique Ingress host mappings

  • DNS server points wildcard domain record to static Ingress controller IP
  • Unique domain names are used to identify a unique backend service port
  • Ingress rules use the host field to route to a backend service port
  • Example
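
Option B could be sketched with one Ingress rule per hostname under a wildcard DNS record (the *.mocks.example.com domain, service names, and ports are all assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: trafficparrot-wildcard-domain
spec:
  rules:
    - host: service1-1234.mocks.example.com   # *.mocks.example.com resolves to the Ingress controller IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 1234
    - host: service2-1234.mocks.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 1234
```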

Monday, 1 February 2021

Traffic Parrot 5.23.0 released, what's new?

We have just released version 5.23.0. Here is a list of the changes that came with the release:


  • Initial release of Traffic Parrot workstation client (beta):
    • The workstation (beta) is part of a new client-server product offering for customers
    • Traffic Parrot workstation clients create virtual services and save configuration files
    • Traffic Parrot servers run virtual services using the configuration files created by the clients
    • Configuration files can be stored and retrieved from version control systems or artifact repositories
  • Initial support for using multiple HTTP/HTTPS ports linked to separate mappings (beta):
    • New virtual services (beta) page to view/add/edit/delete virtual services
    • A distinct HTTP/HTTPS port may be assigned per virtual service
    • Only one virtual service can be edited at a time by selecting it from the dropdown
    • The home page now lists all virtual service names and ports
    • Configuration is currently stored in the scenarios/VirtualServiceName/ directory structure
    • Increasing the number of virtual services will have a direct impact on memory usage of the server
    • Consider increasing the -Xmx128m heap size in your jvm.args configuration if you plan to use multiple virtual services per Traffic Parrot instance
  • New arithmetic operations in dynamic responses


  • Fixed an issue where the navigation bar overlapped the main content on some smaller screen sizes
  • Fixed an issue where transformer names would be duplicated in mapping files on save


  • The scenarios feature is being renamed to virtual services (beta)
  • The API /api/scenarios/* is deprecated and will be renamed to /api/virtualServices/* in a future release
  • The scenarios directory is deprecated and will be renamed to virtual-services in a future release

Friday, 6 November 2020

Data integrity when using API mocks or service virtualization in shared environments

Our customers find that using API mocking or service virtualization in a shared test environment comes with the task of managing potential data integrity issues between the System Under Test and backend or third-party databases. Please find below a set of diagrams highlighting different categories of approaches along with their tradeoffs.

These diagrams highlight categories of solutions; the details will depend heavily on the customer's use case.

Tuesday, 3 November 2020

Four categories of deployment options when using API mocking or service virtualization

Our customers typically see four categories of deployments when developing or testing software in shared test environments.

Please find below a high-level introduction to an example roadmap for removing back-end and third-party dependencies. These diagrams highlight categories of solutions; the details will depend heavily on the customer's use case. Please contact us, and we can advise on a recommended architecture for your specific needs based on our other customers' experience.

No mocking or service virtualization

Friday, 23 October 2020

Can I use Traffic Parrot over VPN?

"Our offshore developers will be using Traffic Parrot. Can you please clarify how that will impact the number of parallel licenses used? If a VPN connection is terminated, the license should stop being used?" - Software Developer working for a global financial institution.

We understand you want to know whether a Traffic Parrot (TP) license will be checked back into the licensing server pool if the connection is dropped (for example, when a VPN connection is closed).

The answer is yes. If the person was running TP remotely via a VPN connection and the dropped VPN connection results in a lack of connectivity to the licensing server, the TP license server will recognise the timeout and will check the license back into the pool after the configurable timeout passes. The TP instance without the connection will shut down, and the developer will not be able to use it.

If, however, the developer was using the VPN to access a development machine inside the organization, then the connection to the licensing server would not be terminated, and the TP license would still be checked out by the TP instance running on the VM.

Please find the sample diagrams below. 

Monday, 19 October 2020

Traffic Parrot 5.22.0 released, what's new?

We have just released version 5.22.0. Here is a list of the changes that came with the release:


  • Made mapping name editable in the UI for all mapping types


  • Ensured internal server errors log the full stack trace


  • Native IBM® MQ connections will now warn once in the logs the first time that a connection is shared between multiple threads, which can degrade performance. The mapping fields receiveThreads and sendThreads control the number of threads per queue. The ibm-mq-connections.json configuration controls the total readConnectionsToOpen and writeConnectionsToOpen per queue manager.
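
For instance, a mapping fragment along these lines would give each queue its own send and receive threads so that connections are not shared across threads. Only the receiveThreads and sendThreads field names come from the note above; the rest of the mapping layout is an illustrative assumption:

```json
{
  "request": { "queue": "STOCK.REQUEST" },
  "response": { "queue": "STOCK.RESPONSE" },
  "receiveThreads": 2,
  "sendThreads": 2
}
```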