Friday 6 November 2020

Data integrity when using API mocks or service virtualization in shared environments

Our customers find that using API mocking or service virtualization in a shared test environment requires managing potential data integrity issues between the System Under Test and backend or third-party databases. Please find below a set of diagrams highlighting different categories of approaches along with their tradeoffs.

These diagrams highlight categories of solutions. The details will depend heavily on the customer's use case.

Tuesday 3 November 2020

Four categories of deployment options when using API mocking or service virtualization

Our customers typically see four categories of deployments when developing or testing software in shared test environments.

Please find below a high-level introduction to an example roadmap for removing back-end and third-party dependencies. These diagrams highlight categories of solutions; the details will depend heavily on the customer's use case. Please contact us, and we can advise on a recommended architecture for your specific needs based on our other customers' experience.

No mocking or service virtualization

Friday 23 October 2020

Can I use Traffic Parrot over VPN?

"Our offshore developers will be using Traffic Parrot. Can you please clarify how that will impact the number of parallel licenses used? If a VPN connection is terminated, the license should stop being used?" - Software Developer working for a global financial institution.

We understand you wanted to know if a Traffic Parrot (TP) license will be checked back into the licensing server pool if the connection is dropped (for example when a VPN connection is closed).

The answer is yes. If a developer was running TP over a VPN connection and closing that VPN connection results in a loss of connectivity to the licensing server, the TP license server will recognise a timeout and check the license back into the pool after the configurable timeout passes. The TP instance that lost the connection will shut down, and the developer will not be able to use it.

If, however, the developer was using the VPN to access a development machine inside the organization, then the connection would not be terminated, and the TP license would still be checked out by the TP instance running on the VM.

Please find the sample diagrams below. 

Monday 19 October 2020

Traffic Parrot 5.22.0 released, what's new?

We have just released version 5.22.0. Here is a list of the changes that came with the release:


  • Made mapping name editable in the UI for all mapping types


  • Ensure internal server errors log the full stack trace


  • Native IBM® MQ connections will now warn once in the logs the first time that a connection is shared between multiple threads, which can degrade performance. The mapping fields receiveThreads and sendThreads control the number of threads per queue. The ibm-mq-connections.json configuration controls the total readConnectionsToOpen and writeConnectionsToOpen per queue manager.

Sunday 11 October 2020

Traffic Parrot 5.21.6 released, what's new?

We have just released version 5.21.6. Here is a list of the changes that came with the release:


  • New property trafficparrot.virtualservice.http.monitorPerformance can be used to enable HTTP response timings in the logs:
    2020-10-07 20:09:32,330 INFO  Request to 'GET /test123' was received on '2020-10-07T19:09:32.326Z' from ''. Response was sent on '2020-10-07T19:09:32.397Z' to ''. Total processing time 71ms
  • Added new PassthroughMessage MQ proxy example to the SDK workspace


  • Fixed support for OpenAPI array examples specified as a comma separated string
  • Ensure startup error causes are recorded in the logs


  • Upgraded bundled JRE from 8u262 to 8u265
  • Display system tray in a background thread to improve startup times on some systems
  • Valid JSON bodies in mapping files will be persisted as inline JSON rather than an escaped string
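
To enable the HTTP response timing log line shown above, the monitorPerformance property can be set in the Traffic Parrot properties file. This is a sketch; using `true` as the value is an assumption based on the property being an on/off switch:

```properties
trafficparrot.virtualservice.http.monitorPerformance=true
```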

Friday 25 September 2020

Traffic Parrot 5.20.2 released, what's new?

We have just released version 5.20.2. Here is a list of the changes that came with the release:


  • Mappings list user interface improvements for all protocols:
    • Added support for customizing which columns are displayed in the mappings list in the UI. The selected settings are stored in the browser to allow different Traffic Parrot users to set different preferences.
    • Edit mapping tags that can be used to organize and search mappings
    • Added new tags column which is hidden by default
    • Display mapping file name when editing a mapping
  • Added mapping file name to the logs when reporting request match status for MQ/JMS/Files mappings


  • Switching scenario with mappings caches enabled now resets the caches
  • Traffic Parrot now shuts down gracefully on Windows when the stop script is run
  • Fixed an issue where the HTTP virtual service would not respond if a mapping file was deleted while the request was being served

Monday 21 September 2020

A global retail bank has migrated one of their departments off CA Lisa and now saves 51% on tool costs per year

In September 2020, after a thorough evaluation, a global retail bank selected Traffic Parrot for their application testing needs. They have used WireMock for HTTP service virtualization needs and Traffic Parrot for IBM® MQ in both their performance testing and system testing environments.

"We have migrated our department off CA Lisa to reduce our operational costs in performance and system testing environments. We used Wiremock for HTTP stubbing and mocking and Traffic Parrot for IBM MQ. Neither Wiremock nor Traffic Parrot provides all the features available in CA Lisa. Still, after an initial proof of concept assessment, we decided those tools' feature sets would be sufficient for our needs and significantly cut costs. It took our team six months to migrate HTTP to Wiremock and another three months working closely with the Traffic Parrot team to improve their offering to meet our load testing and mainframe integration requirements. With this investment, we cut our tooling costs in half."

- Executive working for a global retail bank.

Next steps

Please contact us or call +44 20 3239 7753 (available 9am to 5pm United Kingdom time) for more details on how we can help your team save on service virtualization costs.

Monday 14 September 2020

Traffic Parrot 5.19.4 released, what's new?

We have just released version 5.19.4. Here is a list of the changes that came with the release:


  • Added support for configuring gRPC delays via fixedDelayMilliseconds or distributionDelay in the mapping files
  • Exposed new properties for tuning gRPC performance:
    • trafficparrot.virtualservice.grpc.server.receiveThreads=DOUBLE_NUMBER_OF_PROCESSORS
    • trafficparrot.virtualservice.grpc.server.sendThreads=DOUBLE_NUMBER_OF_PROCESSORS
    • trafficparrot.virtualservice.grpc.server.replay.maxMessagesWaitingToBeSent=1000


  • Traffic Parrot now shuts down gracefully when the launcher process is killed (e.g. during a docker stop)


  • Only consider TCP ports when printing diagnostic information when there is a port clash on startup
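
As an illustration of the gRPC delay support listed above, a mapping-file fragment might look like this. This is a sketch; the exact field placement is an assumption based on the WireMock-style mapping files used elsewhere in Traffic Parrot:

```json
{
  "response": {
    "fixedDelayMilliseconds": 500
  }
}
```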

Saturday 5 September 2020

Traffic Parrot 5.18.0 released, what's new?

We have just released version 5.18.0. Here is a list of the changes that came with the release:



  • Fixes to the gRPC mappings list editing and search mechanism
  • Fixed a bug where the edit modal would incorrectly close automatically if closed and reopened too quickly
  • Fixed a bug where the edit modal would show the previous flash message if closed and reopened too quickly
  • Fixes for JMS object support
  • Fixes for port in use diagnostic message when multiple PIDs are involved


  • The property has been replaced by
  • Logging properties file names indicate which type of logging configuration to use:
    • Names like *.log4j.* are assumed to be Log4j version 1 configuration
    • Names like *.log4j2.* are assumed to be Log4j version 2 configuration
  • The MQ message putDateTime log line now has a millisecond (instead of second) resolution to help investigate performance issues when working with Native IBM MQ
  • Upgraded commons-codec to 1.13 from 1.11
  • Upgraded commons-lang3 from 3.8.1 to 3.11
  • Upgraded commons-io from 2.6 to 2.7
  • Upgraded gson from 2.8.6 to 2.8.7

Wednesday 26 August 2020

How many licenses do I need to purchase for 10 users in a shared QA environment?

"At the beginning, we would have about 10 users that would use Traffic Parrot in a shared QA environment, possibly all at the same time. Can they all use the same instance over the web application user interface of Traffic Parrot? How many licenses would I need to purchase in this case?" - Software Architect working for an E-commerce company.

Please have a look at a definition of how Traffic Parrot (TP) licensing works: What is the "concurrent floating license"?

An instance is a process that may run on a server or a laptop. Every instance you need to run in parallel (at the same time) needs a license.

As teams and test environments scale in size, our customers find that using multiple licenses reduces risk and increases testing throughput (test turnaround time).

Our customers have found that teams working on different subsystems, projects, or collections of related features are best served by having their own license per environment. If multiple teams or initiatives are working in the same shared environment, then we recommend using one TP instance per third-party and backend service in each environment.

In addition, if your developers are using TP on laptops, you can start to run into contention issues once three or four developers are using it at the same time to debug virtual services before commits; even two developers making intensive use of it can benefit from having their own licenses.

These guidelines will minimize the complexity of your virtual services, both for initial use and during their subsequent maintenance, while running TP in shared environments such as QA or SIT.

Performance testing may require high core count machines or multiple load-balanced instances.

We would be happy to schedule a call at your convenience; please contact us to discuss your requirements.

Wednesday 19 August 2020

Why would I need multiple floating licenses?

 "Why would I need multiple floating licenses?" - Software Developer working for an E-commerce company.

Here is how TP (Traffic Parrot) licensing works:

If you want to run more than one TP process in parallel in different places you will need more than one license, one for each process.

Monday 10 August 2020

Traffic Parrot 5.16.0 released, what's new?

We have just released version 5.16.0. Here is a list of the changes that came with the release:


  • Select handlebars helper CSV file caching. When you set the caching property, the CSV file loading performance will be significantly improved.
  • Support for a large number of CPU cores. Previously, when you ran Traffic Parrot on many cores (40 or more), it would fail to start and report an error:
    javax.servlet.ServletException: java.lang.IllegalStateException: Insufficient configured threads: required=212 < max=200 for QueuedThreadPool[qtp318353283]@12f9af83{STARTED,8<=168<=200,i=0,r=20,q=0}[ReservedThreadExecutor@71b3bc45{s=0/20,p=0}]
    The fix is to increase the number of threads Jetty can spin up. This was done by exposing two properties to configure HTTP Jetty server thread queues:


  • Performance improvements to dynamic responses
  • Jetty HTTP Server upgrade from 9.4.20.v20190813 to 9.4.30.v20200611
  • Upgraded Wiremock from 2.25.1 to 2.27.1
  • Upgraded Http client from 2.6.2 to 2.7.0
  • Several other library version upgrades

Loading data from a CSV file to create dynamic responses?

"How can I load data from a CSV file and create a dynamic response based on XML request body?" - Software Consultant at a global software consultancy

You can nest the "xPath" helper in the "select" helper, for example: 

{{select 'Age from UserData.csv where Username equals' (xPath request.body '//User/Username/text()')}}
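
For illustration, here is how that lookup resolves, emulated in Python with a hypothetical UserData.csv and request body (the file contents and values are assumptions for this example, not from the original post):

```python
import csv
import io
from xml.etree import ElementTree

# Hypothetical contents of UserData.csv
csv_text = "Username,Age\nalice,30\nbob,25\n"

# Hypothetical XML request body; the xPath helper extracts //User/Username/text()
request_body = "<User><Username>alice</Username></User>"
username = ElementTree.fromstring(request_body).findtext("Username")

# Emulate: select 'Age from UserData.csv where Username equals' <username>
rows = csv.DictReader(io.StringIO(csv_text))
age = next(row["Age"] for row in rows if row["Username"] == username)
print(age)  # → 30
```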

Friday 17 July 2020

Traffic Parrot 5.15.0 released, what's new?

We have just released version 5.15.0. Here is a list of the changes that came with the release:


  • Traffic Parrot can now connect to IBM MQ queue managers via SSL channels. You can do this via Native IBM MQ. We have added two new ibm-mq-connections.json properties to support Native IBM MQ SSL connections:
    "sslCipherSuite": "TLS_RSA_WITH_AES_128_CBC_SHA",
    "sslPeerName": "OU=TP IBM MQ"
    To provide the server and client certificates you can add the following config to jvm.args:
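    As a hedged sketch of such jvm.args entries (Java clients commonly supply certificates via the standard JSSE system properties; the paths and passwords below are placeholders, not values from the original post):

```
-Djavax.net.ssl.keyStore=/path/to/client-keystore.jks
-Djavax.net.ssl.keyStorePassword=changeit
-Djavax.net.ssl.trustStore=/path/to/truststore.jks
-Djavax.net.ssl.trustStorePassword=changeit
```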

Friday 10 July 2020

Traffic Parrot 5.14.1 released, what's new?

We have just released version 5.14.1. Here is a list of the changes that came with the release:


  • A new property that allows forcing Native IBM MQ message bodies to be logged as printable characters instead of UTF-8


  • The Native IBM MQ messages can now contain UTF-8 characters, for example Arabic text
  • The Native IBM MQ logs are now correctly displaying UTF-8 characters, for example Arabic text
  • The JMS IBM MQ tutorial example fruit ordering system now supports UTF-8 characters, for example Arabic text

Tuesday 7 July 2020

Traffic Parrot 5.13.0 released, what's new?

We have just released version 5.13.0. Here is a list of the changes that came with the release:


  • A new environment variable, TP_STARTUP_WAIT_MILLIS, allows configuring how long the startup scripts will wait for TP to start up, for example:
    test@test-pcs:~/Downloads/trafficparrot-no-jre-5.x.y$ export TP_STARTUP_WAIT_MILLIS=180000
    test@test-pcs:~/Downloads/trafficparrot-no-jre-5.x.y$ ./
    Picked up environment startup timeout in milliseconds 180000
  • A new property that allows specifying a Native IBM MQ replay script that will be run on startup


  • The SDK workspace now allows Native IBM MQ message transformers to specify an MQMessage directly, which allows for creating transforming proxies of messages. A sample has been provided in the SDK workspace.


  • Fixed an issue where users were unable to update Native IBM MQ mappings in renamed files

Saturday 20 June 2020

Cannot run program "hostname": error=2, No such file or directory

"I am getting an error when starting Traffic Parrot: Cannot run program "hostname": error=2, No such file or directory" - Senior Performance Engineer working for an Australian Bank

It looks like an operating system configuration issue. You are missing the "hostname" command.

To check if that is the case, please SSH to the container running TP and execute "hostname" in the shell.

To solve this issue, install "hostname" in your Docker image. For example, add to your Dockerfile (for RHEL/CentOS-based images):

RUN yum -y install hostname

or, for Debian/Ubuntu-based images:

RUN apt-get update && apt-get install -y hostname
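
To confirm whether the command is present inside a container, a quick shell check (a generic sketch, not Traffic Parrot specific) is:

```shell
# Prints the path to hostname if it is installed; otherwise flags that it is missing
command -v hostname || echo "hostname not found - install it in the image"
```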

Tuesday 9 June 2020

Is there a way to import requests and responses (either in JSON/TXT/XML format) to create virtual services?

"Is there a way to import requests and responses (either in JSON, TXT, or XML format) and create HTTP and Native IBM MQ virtual services? Is the OpenAPI specification the only way to create virtual services other than recording?" - Senior Architect working for a global airline

For HTTP, Traffic Parrot supports importing the following formats:
  • WireMock 2.x mappings in ZIP format
  • Swagger 1.x and 2.x
  • OpenAPI 2.x and 3.x
  • RAML 0.8
For Native IBM MQ you can:

Wednesday 27 May 2020

Traffic Parrot 5.12.0 released, what's new?

We have just released version 5.12.0. Here is a list of the changes that came with the release:


  • New property that allows configuring how IBM® MQ queues will be accessed
  • New property that allows skipping logging message body for native IBM® MQ
  • Native IBM® MQ now respects the existing properties that allow for caching mappings in memory
  • You can now configure, per connection to an IBM® MQ broker, how many read and how many write connections to open. You do this in ibm-mq-connections.json. For example:
    "readConnectionsToOpen": 5,
    "writeConnectionsToOpen": 5
  • The Native IBM® MQ mapping allows for configuring how many threads should be used to access the queue, for example:
    "receiveThreads" : 5,
    "sendThreads" : 1
  • You can define in the mapping how many messages the Native IBM® MQ connector will keep in memory at once, which is relevant for scenarios when sending delayed messages, for example:
    "maxMessagesInProgress": 1000000
  • You can start the Native IBM® MQ replay with a script, for example:
    # This is a sample comment 1
    RequestQueueManager:'Request QM1'
    ResponseQueueManager:'Response QM1'
    # This is a sample comment 2
    RequestQueueManager:'Request QM2'
    ResponseQueueManager:'Response QM2'
  • Handlebars templates are now cached, allowing for better performance


  • There have been performance improvements made to the Native IBM® MQ connector. See the performance benchmarks for more details.
  • Native IBM® MQ response messages during a replay have the replyToQueueEmpty and replyToQueueManager set to the connected queue manager if they did not have any of these attributes set on the request message
  • Native IBM® MQ sends the queue manager name along with queue name when sending a message. This is applicable in scenarios with multiple queue managers set up in a cluster without the cluster sender and cluster receiver channels
  • Logging more runtime information on startup
  • Additional logging when connections to IBM® MQ fail

Thursday 7 May 2020

What network config do I need to change when installing Traffic Parrot?

If you are just starting with Traffic Parrot and would like to know which network configurations need changing to use Traffic Parrot, please contact us to confirm your current network setup and discuss how we can help.

You can also have a look at the following setup as a starting point.


Thursday 30 April 2020

What is the "concurrent floating license"?

With Traffic Parrot (TP) floating licensing, a limited number of licenses are shared among a larger number of users or environments over time.

An instance is a process that may run on a server or a laptop. Every instance you want to run in parallel (simultaneously) needs a license. See the diagram below for an example.

When an authorized user or environment wishes to run the application, they request a license from a central license server. The license server allows the application to run if a license is available. When they finish using the application or when the allowed license period expires, the license is reclaimed by the license server and made available to other authorized users or environments or build agents.
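
The check-out/check-in cycle described above can be sketched as a simplified model (an illustration only, not the actual Traffic Parrot license server implementation):

```python
class LicensePool:
    """Simplified model of a floating license server pool."""

    def __init__(self, total):
        self.available = total

    def check_out(self):
        """An instance requests a license; returns False if none are free."""
        if self.available == 0:
            return False
        self.available -= 1
        return True

    def check_in(self):
        """An instance stops (or times out); its license returns to the pool."""
        self.available += 1


pool = LicensePool(total=4)
assert pool.check_out()   # an instance starts and takes a license
pool.check_in()           # the instance shuts down; the license is reclaimed
print(pool.available)     # → 4
```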

Let us say you are running the following configuration:
  • TP installed on 10 Jenkins Slaves (TeamCity build agents, etc) but at most 3 of them use TP at the same time
  • 1 TP instance installed and used in the performance testing environment where the tests are run overnight
  • 1 TP instance installed and used in the system testing environment where the tests are run during the day
In this case, at most 4 instances of Traffic Parrot run inside the organization at the same time: the performance tests run overnight and the system tests run during the day, so those two environments never run concurrently (3 + 1 = 4).
It means you need to purchase 4 floating licenses.

To help us determine how many licenses you might need, please answer these questions:
  • How many environments (QA, SIT, ...) will you have where TP will be running for long periods of time? Please include the protocols used per environment.
  • How many performance testing environments will you have where TP will be running?
  • How many build/pipeline runners (e.g. Jenkins slaves, TeamCity agents, ...) do you have?
  • How many developers and/or testers would use Traffic Parrot on their laptops/desktops/VMs?
  • How many services and APIs would you like to virtualize? Please specify protocols and the number of services (e.g. 120 HTTP, 3 IBM MQ, 25 gRPC, ...).
Then, to get a quote, please fill in the form 

Friday 3 April 2020

What are the challenges you see with third party inbound and outbound testing when using IBM MQ queues?

"What are the challenges you see with third party inbound and outbound testing when using IBM MQ queues?" - Software tester working for a multinational enterprise.

Sometimes third-party systems can cause issues when testing your enterprise systems, for example:
  • The third-party system is not available 24/7 for testing; you need to schedule time on shared test environments used by many teams, which results in a slower time to market
  • The third-party system does not allow for simulating error responses
  • The third-party test environment might not support the load you require for running your performance tests

In this case, you can use a service virtualization or mocking tool to simulate the third-party system.

Here is a sample tutorial for Traffic Parrot if you are using IBM MQ via JMS APIs.

Monday 30 March 2020

Bestow has used Traffic Parrot gRPC mocks to deliver features faster to customers

After a thorough evaluation, Bestow Inc. selected Traffic Parrot's service virtualization and API mocking tool in April 2019 for their application development needs. In this case study, we will look at the details of their infrastructure, how they applied Traffic Parrot, and what issues they have come across.
  • Traffic Parrot is specifically designed to maximize the productivity of developers writing automated tests and to enable them to mock out microservices for local development. Their lightweight platform with gRPC support was a good fit for our Docker and Go-based development environment. They provided strong support during the POC and continue to track the rapid evolution of gRPC, acting as an effective extension to our team.
    Brian Romanko, VP Engineering at Bestow


Bestow has challenged industry assumptions with a new underwriting framework that provides affordable term life insurance in minutes instead of weeks. They use Traffic Parrot to unblock teams and allow them to work independently. Bestow uses Traffic Parrot gRPC mocks in their microservice CI regression testing suites to detect breaking changes in their microservice APIs.

Technology stack: Docker, GoLang and gRPC

The core technology they rely on includes:
  • Container-based infrastructure, running Docker in Kubernetes on GCP
  • Microservices in a variety of languages, including GoLang and Python
  • Microservices communicate using gRPC APIs, with API contracts defined in Proto files
Bestow colocated teams developing a microservice to encourage close communication. gRPC APIs connect microservices, which are sometimes owned by different teams. Bestow designs gRPC APIs using Proto files, which form the contract between microservices.

Problem: teams are blocked waiting for APIs

Starting more than a year ago, Bestow developed multiple microservices in parallel. For example, the Policy Administration team provided gRPC APIs for the Enrollment team to consume. This meant that developers on the Enrollment team were sometimes waiting for the Policy Administration team to deliver their microservice APIs before they could start working.
This led to blocked timelines between teams, which meant Bestow could not deliver at the fast pace required for their customers. It was urgent for Bestow to find a solution to allow the teams to work independently.

Solution: decouple teams by using gRPC mocks

Traffic Parrot was identified as a candidate for a gRPC API mocking solution that could help unblock the timelines between the teams at Bestow. After a two-week technical evaluation by VP of Engineering Brian Romanko, it was clear that the open-source alternatives did not provide adequate capabilities, and Traffic Parrot was chosen to fulfil Bestow's development needs.
Teams at Bestow use Traffic Parrot to develop both sides of their gRPC APIs in parallel, without having to wait for the server code to be written before a client can be tested. They run automated test suites on their CI build agents, with Traffic Parrot running in a Docker container on the agent.

Wednesday 25 March 2020

How to choose a service virtualization tool?

Most companies like to evaluate several tools before they commit to a purchase.

Typically they evaluate the service virtualization tools based on many factors such as:
  • Cost
  • Protocols and technologies supported
  • Features
  • Performance benchmarks
  • Support level

Here are a few additional technical questions that might help you decide which of the tools you are looking at is best:
  • Would you like to have a central team of administrators managing the new tool?
  • What kind of footprint would you like (RAM, disk usage, ...)?
  • What kind of licensing model would work best for your use case?
  • Do you need to source control the virtual services and deployment scripts?
  • Are you looking for a tool that is a better fit for a microservices architecture or for a monolithic architecture?
These questions are based on:

Wednesday 26 February 2020

How long will it take to create 100 virtual services or API mocks and how many people do I need?

“I would like to create API mocks for 100 HTTP SOAP/JSON and JMS Active MQ services. How much time will it take and how many people do I need on my team?” - Software Architect working for a software consultancy

This is a concern many of our clients face. We typically recommend rephrasing the question to include the problem description as well.

Key takeaways:

  • You must define the problem you are solving before you can start a service virtualization project
  • State clearly the value the service virtualization project is to deliver
  • Running a pilot project to demonstrate value is key
  • There are several key factors that influence the length and size of the project
  • Categorising services can help estimate the scope of the project

A significant portion of our customers engages in large scale service virtualization and API mocking projects that need to be managed accordingly. That means they would like to know what is required to complete the project: how many man-days, how many people and what hardware resources are needed.

The typical driver for those big projects is removing bottlenecks such as test data setup times or driving faster time to market.

What we typically recommend is to rephrase the problem in the context of the theory of constraints and narrow the scope down to “what is the amount of mocking we should do to solve the problem well enough so it's not a problem any more and we can focus on other development and testing priorities”.

In other words, it might not be a good idea to aim to create mocks for all services; it might be better to decide what critical mass of mocks will make the problem you are facing go away, so you can focus your efforts on other priorities.

An example of this is a major UK bank that had an issue with setting up test data in their mainframe systems: it was done by an external team and took 2-3 weeks to prepare the data for every sprint's testing efforts. Only a subset of those APIs was being used by the mobile application, which was evolving quite rapidly compared to the other consumers of those APIs. The architects decided to create mocks for a subset of the mainframe services, covering only a subset of their use cases. They drove the effort with the suite of mobile tests, so that the automated regression suite could be run against the API mocks every sprint. This allowed them to reduce the complexity of the mocks because they did not have to account for other teams' use cases and complexities.

Another example is a US insurance company that uses a third-party payment gateway service. They created a mock that covers only a subset of the payment API features but allows them to test all of their use cases of that service.

Having said all of that, the problem of scoping the project still remains. Let’s explore that in more detail.

The time will be different for every project and team but there are ways to estimate

The required complexity of the services or API mocks depends on the complexity of the services themselves and also on the complexity of the usage pattern of those services, e.g. how many test cases you run.

As discussed in How much time it will take to build/virtualize a simple, medium and complex service?:

  • A simple service depending on your team's maturity and setup will typically take a few minutes to a day
  • A medium service can take from a few hours to a few days
  • A complex service can take from a few days to a few weeks

Because of the uncertainty of how long building individual services will take we recommend that you run two activities in parallel:

  • Assign a small team or an individual to deliver the first virtual services to hit the first value realisation milestone
  • Assign a small team or an individual to estimate the size of the full project

Once that is done, you will have a better understanding of the complexity of the project.

One important point to note is that if the value realisation milestone pilot project takes more than several weeks, it might be too big; it should ideally take a couple of days to a few weeks. If it is too big, you might need to revisit your testing strategy and testing pyramid. If you have problems defining your value realisation milestone pilot project, please reach out to us and we will be happy to advise.

How to deliver the first value realisation milestone?

In order to prove that the API mocking and service virtualization approach is capable of a good ROI at your company, we recommend you assign a capable technical person or a couple of people to realise the first milestone. Once that is done, they can share the findings and knowledge with the rest of the team.

The key requirements for the people involved in the first project:

  • At least 12 months of experience working on the project
  • Experience running pilot projects
  • A good understanding of the business domain
  • A good understanding of the system under test
  • A good understanding of the technologies involved
  • Good communication skills
  • A pragmatic approach to problem-solving

We recommend choosing a value realisation milestone that is clear to the business. Here are a few examples of value realisation milestones you could look for:

  • Reduce test data setup time from 2 weeks to 1 minute by using pre-configured Traffic Parrot API mocks instead of waiting on the test data setup team to deliver new test data in the SIT environment mainframe systems
  • Allow for simulating several typical error responses observed in production for one of the services, which will result in better regression testing coverage and no production bugs in that area
  • Have a look at the typical problems solved by service virtualization/mocking/stubbing for more ideas on how to find a value realisation milestone

After going through this exercise, you might have a few candidates for a good value realisation milestone pilot project. The best project to use out of those is the least complex one that takes the least amount of effort; pick the low hanging fruit. You can then demonstrate the ROI fast, and onboard the team to the new testing approach. Once the team is familiar with the new approach to testing, you can tackle more complex projects.

How to estimate the size of the project?

To help you estimate the size of the full project we advise to:

  • Categorise complexity of services
    • Low complexity service
    • Medium complexity service
    • High complexity service
  • Categorise complexity of the individual tests and how they use the services
    • Low complexity test
    • Medium complexity test
    • High complexity test
  • Categorise the value of individual tests by how much value would be captured if those tests started using mocks instead of the real services
    • Little value
    • Moderate value
    • High value
  • Categorise by the rate of change of APIs and tests
    • Low change rate
    • Medium change rate
    • High change rate
  • Categorise services and tests by production critical issue risk
    • Low impact on production incidents
    • Medium impact on production incidents
    • High impact on production incidents

This is to give you an idea of what is involved in delivering the project and to help prioritise work. For example, high complexity tests that use high and medium complexity services will take orders of magnitude longer to deliver than low complexity tests that use low and medium complexity services. You might find low-hanging fruit, such as high-value, low complexity tests, to onboard to virtual services as a priority. The categorisation will also help you prioritise which services and tests to deliver next.

For example, a simple user login test might use one simple service that is simple to mock, but deliver a lot of value, because if the user cannot log in you cannot run 70% of the other test cases.

Another example could be a payment test, which is medium in complexity but critical to the business, because if payments do not work the company does not make money.

Once you have identified the categories, you can assign an estimate of S (small), M (medium) or L (large) to each task, using the rough estimates we have provided in the article above. As you deliver S, M and L tasks one by one, you will gain visibility of how long each size takes to complete in your environment, increasing the accuracy of your overall estimate. You can then assign more people to the project if needed.
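As a sketch of how the sizing might be tallied, here is a minimal example; the hours assigned to each S/M/L bucket are hypothetical placeholders, to be replaced with the delivery times you actually observe:

```java
import java.util.List;
import java.util.Map;

public class EstimateSketch {
    // Hypothetical effort per bucket; calibrate against your own delivered tasks
    static final Map<Character, Integer> HOURS = Map.of('S', 4, 'M', 16, 'L', 40);

    // Sum the estimated hours for a list of sized tasks
    static int totalHours(List<Character> taskSizes) {
        return taskSizes.stream().mapToInt(HOURS::get).sum();
    }

    public static void main(String[] args) {
        // e.g. three small, two medium and one large mocking task
        System.out.println(totalHours(List.of('S', 'S', 'S', 'M', 'M', 'L'))); // prints 84
    }
}
```

As you complete real tasks, updating the `HOURS` map with measured averages is what increases the accuracy of the overall estimate.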

We would be happy to assist you in your estimation efforts, please reach out to us by emailing us at

Should I deliver mocks per-service or per-test?

Depending on the complexity of your tests and of the services you will be mocking, you might drive the change by implementing mocks per service or per test.

For example, a UK retail bank used a third-party threat detection HTTP service with a simple API that takes several input parameters and returns a risk score. This usage pattern remains the same regardless of the complexity of the test cases that rely on connectivity to the service. The developers decided to implement a dynamic mock service that, depending on the request data, returns a predefined risk score. This allowed it to be re-used in many tests.
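A dynamic mock of this kind boils down to a pure function from request data to a predefined score. The sketch below illustrates the idea; the parameter names, thresholds and scores are invented for illustration and are not the bank's actual rules:

```java
import java.util.Map;

public class RiskScoreMock {
    // Map request data to a predefined risk score so one mock serves many tests
    static int riskScore(Map<String, String> requestParams) {
        String country = requestParams.getOrDefault("country", "GB");
        int amount = Integer.parseInt(requestParams.getOrDefault("amount", "0"));
        if (!"GB".equals(country)) return 80; // cross-border: high risk
        if (amount > 10_000) return 50;       // large amount: medium risk
        return 10;                            // default: low risk
    }

    public static void main(String[] args) {
        System.out.println(riskScore(Map.of("country", "GB", "amount", "25000"))); // prints 50
    }
}
```

In practice the same rules would be expressed in your mocking tool's dynamic response configuration rather than hand-written code, but the request-to-score mapping is the part the tests depend on.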

In another example, a UK challenger bank had a user onboarding test that required connectivity to over 20 complex third-party and mainframe services. Those services were used not only in this test but also in other tests not related to user onboarding. The team decided to implement the mocks per test, not per service, as it reduced the complexity of the solution.

We would be happy to help you assess how to approach the mocking on a case-by-case basis, feel free to reach out to us for more details.

Monday 17 February 2020

Why do I have to create a new test environment for service virtualization?

"Why do I have to create a new test environment for service virtualization? Will it be more work for us to test now in two environments?" - Test Architect at a UK test consultancy

Creating a separate environment for a new testing phase is a standard industry approach to implementing API mocking. This short case study provides an example of how it worked at a company we have worked with.

A global media company was developing a new product to be released to the market in 7 months' time. The testers and developers were using a third-party API for mobile number porting. Communication with the service was done asynchronously using files.

There was an issue with that API: they had to wait 24 hours for requests to be processed and responses to be returned.

The developers and testers decided to take control of their testing schedule by mocking the third party service. They did this by introducing a new environment where they tested with service virtualization. They used Traffic Parrot technology to create API mocks.

The testers were happy to have two environments because it gave them more predictability and control over their test cases when testing with mocks. They used the integrated environment only when integrating with new APIs or changing user journeys significantly.

The result was that they were able to save 3 or more days a week in lead time for product development.

Monday 27 January 2020

How to combine microservices and BigData?

"I've just joined a company and the architects love microservices but the developers love Big Data solutions. Do they mix? Can you point me in the direction of where I can read more about marrying the two together?" - Big Data Engineer working for a UK financial startup.

Microservice architectures are a tool to solve specific problems an organisation might have. If you have problems that can be solved by microservice architectures, generate 2-3 options, and if the microservice route looks most promising, go for it.

Our general recommendation would be to focus on the problems you have to solve and the constraints you are working with, generate a few options and possible solutions and choose the one that seems most promising.

There will be certain scenarios where Big Data and microservices will work well together, and others where they will not make sense. We would have to know more details to be of further help; please contact us to schedule a call to discuss your specific requirements.

We recommend reading Sequoia's Guide to Microservices and Martin Fowler's blog on microservices as a good starting point on the problems microservice architectures help solve.

Friday 17 January 2020

Traffic Parrot 5.10.0 released, what's new?

We have just released version 5.10.0. Here is a list of the changes that came with the release:


  • Added support for IBM® MQ message delays in replay mode
  • Added support for specifying multiple IBM® MQ queues to record, using queue manager names from ibm-mq-connections.json, with the following syntax in the Script tab of the UI:
    # This is a sample comment
    QueueManager:'Local Docker MQ 9'
    # This is a sample comment
    QueueManager:'Local Docker MQ 9'


  • Ensured that IBM® MQ channel connections are always released after use

Monday 13 January 2020

How can I distinguish mapping A (which belongs to service A) and mapping B (which belongs to service B)?

"How can I distinguish mapping A (which belongs to service A) and mapping B (which belongs to service B)?" - Intern at a global software development consultancy.

Good question!

You can use the URL of the service. For example, typically, the Users service will have a /users URL. Is this the case in your company as well? If you need more details, please contact us and we will be more than happy to help!
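For instance, here are two hypothetical WireMock-style JSON mappings (the exact mapping format depends on your tooling; these names and URLs are made up for illustration). They are easy to tell apart by their URLs: the first belongs to the Users service, the second to a Payments service, and each would typically live in its own mapping file:

```json
{ "request":  { "method": "GET", "urlPath": "/users/123" },
  "response": { "status": 200, "body": "{\"id\": 123, \"name\": \"Jane\"}" } }

{ "request":  { "method": "GET", "urlPath": "/payments/456" },
  "response": { "status": 200, "body": "{\"paymentId\": 456, \"state\": \"SETTLED\"}" } }
```

Grouping mapping files into per-service directories on top of the URL convention also helps keep them apart.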

Sunday 12 January 2020

How to use Mockito to mock grpc ServiceBlockingStub?

"How do I use Mockito to mock grpc ServiceBlockingStub to throw StatusRuntimeException with a specific status?" - Java developer

You have a few options:
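One option is to mock the blocking stub with Mockito; because the generated stub class is final, this needs Mockito's inline mock maker (for example the mockito-inline artifact). Another option is to avoid mocking the generated code at all and start a real in-process server with io.grpc.inprocess.InProcessServerBuilder. Below is a minimal sketch of the first option; it assumes the stub generated from the standard gRPC hello-world example (GreeterGrpc.GreeterBlockingStub, HelloRequest), so substitute your own generated classes:

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import io.grpc.Status;
import io.grpc.StatusRuntimeException;

public class StubMockExample {
    public static void main(String[] args) {
        // Mocking a final generated class: requires the inline mock maker
        GreeterGrpc.GreeterBlockingStub stub = mock(GreeterGrpc.GreeterBlockingStub.class);

        // Make every call throw a StatusRuntimeException with a specific status
        when(stub.sayHello(any()))
                .thenThrow(Status.UNAVAILABLE.asRuntimeException());

        try {
            stub.sayHello(HelloRequest.getDefaultInstance());
        } catch (StatusRuntimeException e) {
            System.out.println(e.getStatus().getCode()); // prints UNAVAILABLE
        }
    }
}
```

The code under test can then exercise its error-handling branch for that status code.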
Note, however, why mocking a final class, as in this case, might be a bad idea. Mocking final classes or methods can be risky, depending on the case; the devil is in the details. In this situation you are creating a mock of the generated code, so you are assuming how that generated code will behave in the future. gRPC and Protobuf are still rapidly evolving, so it might be risky to make those assumptions: they might change and you won't notice, because you do not check your mocks against the generated code. Hence, it's not a good idea to mock the generated code unless you really have to.

Friday 3 January 2020

QAs testing microservices can use third party and backend mocks to speed up testing

While testing microservices you will often need to mock or stub the third-party and legacy backend systems.

Third-party and backend system mocks and stubs will help you resolve issues with:
  • Setting up test data in third-party and backend systems
  • Simulating API and backend system error messages (for negative or sad-path testing)
  • Simulating protocol-specific errors (for negative or sad-path testing)
  • Simulating slow responses (for performance testing)
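To make the list above concrete, here is a hand-rolled sketch of such a backend stub using only the JDK's built-in HTTP server; a tool like Traffic Parrot gives you the same without writing code, and the paths, port and error body below are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class BackendStub {
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);

        // Simulate a backend error message (sad-path testing)
        server.createContext("/payments", exchange -> {
            byte[] body = "{\"error\":\"insufficient funds\"}".getBytes();
            exchange.sendResponseHeaders(503, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        // Simulate a slow response (performance testing)
        server.createContext("/slow", exchange -> {
            try {
                Thread.sleep(5_000);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
            exchange.sendResponseHeaders(200, -1); // 200 with an empty body
            exchange.close();
        });

        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8081); // port chosen arbitrarily for this sketch
    }
}
```

Pointing the microservice under test at this stub instead of the real backend lets you trigger the error and slow-response paths on demand.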

You can create HTTP, gRPC, JMS, IBM MQ and other types of mocks with Traffic Parrot.
You can run Traffic Parrot in Docker, OpenShift and Kubernetes, which is what you need when working with microservice architectures.

For more details on how to use mocks when testing microservices, watch "How software testers can test microservices".

See the diagram below for example usage of Traffic Parrot when testing microservices: