Sunday, 16 June 2019

How to approach Traffic Parrot speed issues during load testing

We had one of our prospects reach out to us this week:

"I am stuck with my POC (Proof Of Concept project) on Traffic Parrot as I am facing out of memory issues at 30 TPS for 100 concurrent users for HTTP and IBM MQ JMS, please advise." -  Test Automation and Service Virtualization Consultant at an Australian Bank

Typically, we advise starting with the common approaches to speeding up Traffic Parrot for performance testing, which are documented here:

If the guidelines outlined there do not resolve the issue, please contact us, and we will suggest how we can improve the performance of Traffic Parrot in your specific use case.

In this particular case, increasing the maximum heap size from -Xmx128m to -Xmx1g in jvm.args (a Traffic Parrot JVM heap increase from 128MB to 1GB) resolved the issue. The request-to-response mappings used by the prospect were complex enough to need more memory than the default settings allow for 100 concurrent users.
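As a sketch, the heap change can be scripted. The jvm.args contents below are illustrative sample values, not taken from an actual installation; check your own file before editing it:

```shell
# Sketch: bump the Traffic Parrot JVM maximum heap from 128MB to 1GB in jvm.args.
# Create a sample file for illustration (your real file will have more flags).
printf '%s\n' '-Xms64m' '-Xmx128m' > jvm.args

# Replace the maximum heap flag with a 1GB limit
sed -i 's/-Xmx128m/-Xmx1g/' jvm.args

# Show the result; restart Traffic Parrot for the change to take effect
cat jvm.args
```
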

Saturday, 15 June 2019

I want to learn Service Virtualization

"I want to learn Service Virtualization for mocking. How can I do that?" - An anonymous website visitor.

We recommend looking at the Service Virtualization and API mocking tutorials available on our website, starting with Chapter 1: Getting started with stubbing, mocking and service virtualization and then Chapter 2: How to record and replay HTTP SOAP requests and responses to create API mocks.

If you would like to schedule a free 45-minute "Service Virtualization and API mocking" webinar for your company or your team, please reach out to us, and we will organise it at a time convenient for you and your team.

Wednesday, 12 June 2019

Do you have single user licenses?

Recently a website visitor asked:

"Do you have single user licenses?" - QA Lead at a Russell 3000 company, Pennsylvania.

Yes, we do support the single-user scenario. For a single user, we offer a single floating license (only one instance may be running at a time).

If you are interested in single-user licenses, we would be happy to provide a quote according to your requirements. Please contact us for more information.

Friday, 31 May 2019

Error 2085 when creating IBM MQ mocks

Recently we have received a query from an engineer evaluating Traffic Parrot:

"Hi! I am trying to do an MQ stub, but facing some issues. I get an error code 2085, can you please help me out?" - Software Engineer at a Bank, Australia.

Typically, error 2085 (MQRC_UNKNOWN_OBJECT_NAME) means that the queue or topic your application or Traffic Parrot connects to does not exist on the IBM MQ broker.
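If the object really is missing, you can create it on the queue manager yourself, for example with IBM MQ's runmqsc tool. The queue manager and queue names below are placeholders, not names from the original query:

```shell
# List the local queues that exist on queue manager QM1 (names are placeholders)
echo "DISPLAY QLOCAL(*)" | runmqsc QM1

# Define the missing local queue that the application or Traffic Parrot expects
echo "DEFINE QLOCAL('REQUEST.QUEUE')" | runmqsc QM1
```

These commands require access to a running IBM MQ queue manager, so they are shown as a sketch only.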

It is hard to help more by looking at the error code alone. If you see an error like this, please contact us to schedule a quick call with screen sharing to resolve your issue.

Here is a tutorial on IBM MQ JMS:
Here is a video tutorial on IBM MQ JMS:
Here are the docs for JMS:
Here is how you can run IBM MQ in Docker for development and testing purposes:
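For development and testing, IBM publishes an MQ container image; a minimal sketch follows (image name, environment variables and ports are as commonly documented by IBM, but verify against the current IBM MQ container docs):

```shell
# Run a disposable IBM MQ queue manager in Docker for development and testing.
# LICENSE=accept accepts the developer licence terms; MQ_QMGR_NAME names the
# queue manager; 1414 is the client listener port and 9443 the web console.
docker run --rm \
  -e LICENSE=accept \
  -e MQ_QMGR_NAME=QM1 \
  -p 1414:1414 \
  -p 9443:9443 \
  ibmcom/mq
```

This requires Docker and pulls the image on first run, so it is shown as a sketch only.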

Tuesday, 28 May 2019

Is there a free version of Traffic Parrot?

Recently we have received a query from one of our website visitors:

"Hello! Is there free version of Traffic Parrot available?" - Software Engineer at a Software Consultancy, Ukraine.

We do not offer a free version, however, we do have a 14-day trial available.

If you are interested in the enterprise edition, we would be happy to set up a call to discuss budgetary pricing according to your requirements. Please contact us.

Alternatively, you can compare the open source tools available on Wikipedia.

Thursday, 4 April 2019

Service Virtualization Test Container Pattern

Traffic Parrot has first-class support for the Service Virtualization Test Container pattern (also called the API Mock Test Container or API Simulation Test Container pattern).

Let's assume you work with a Jenkins Continuous Integration pipeline and test your microservices there to get to production fast. The more external dependencies the build pipeline has, the higher the risk that those dependencies will cause the build to become unstable and fail for reasons unrelated to code quality. External dependencies should be kept to a minimum. For example, run your database in Docker per build (on a Jenkins slave), not in a shared environment. It is a pattern similar to the "sidecar" container pattern recommended by Jenkins, just used in tests.

We recommend running your API mocks in the same process as your tests (and using Service Virtualization as Code, also known as API Mocking as Code). If you decide to run your API mocks or virtual services in a separate process, though, run them in a Docker container. This way you take your automation to the next level: a fully automated and reproducible build pipeline.
Because of issues with dependencies, avoid using centralized instances running API mocks and virtual services outside your build pipeline. You can use them for exposing mocks to other teams and departments and for manual exploratory testing, but not in your automated CI builds.
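A sketch of the per-build mock container pattern follows. The image name and port are hypothetical placeholders, not a published Traffic Parrot image; adapt them to whatever mock image your team builds or pulls:

```shell
# Start a throwaway API mock container (sidecar-style) at the beginning of the
# CI build. Image name and port are placeholders.
docker run -d --rm --name api-mocks -p 8081:8081 my-registry/api-mocks:build-123

# ...run the test suite against http://localhost:8081 here...

# Tear the mock down at the end of the build so nothing leaks between builds
docker stop api-mocks
```

Because the container is created and destroyed inside the build, every build starts from a known, reproducible state. This requires Docker, so it is shown as a sketch only.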

Running in a container like Docker also means you are doing Infrastructure as code with all of its benefits to your business.

Friday, 29 March 2019

How to run Traffic Parrot behind a firewall? How to use a proxy?

Recently one of the Traffic Parrot users asked us:

"When I try to capture traffic from a HTTP API it does not register anything or generate an error, it just does not capture anything. I have a proxy in my company, but I can not find how to tell the application to go through that proxy to make the recording. Please advise." - A developer at a Colombian bank

Are you running Traffic Parrot inside an organisation where you have to use an HTTP proxy to access APIs?

No problem!

You can find documentation on how to use an HTTP proxy here:
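Traffic Parrot runs on the JVM, so one common approach is the standard Java proxy system properties. Whether your Traffic Parrot version picks these up from jvm.args is an assumption to verify against the documentation; the host and port values below are placeholders:

```shell
# Append the standard JVM proxy settings to jvm.args (illustrative values).
cat >> jvm.args <<'EOF'
-Dhttp.proxyHost=proxy.mycompany.example
-Dhttp.proxyPort=8080
-Dhttps.proxyHost=proxy.mycompany.example
-Dhttps.proxyPort=8080
EOF

# Show the result; restart Traffic Parrot for the change to take effect
cat jvm.args
```
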

Friday, 15 March 2019

How to add requests and responses manually

A user recently asked us this good question.

> I would like your input on the following query: we do not have any RESTful API details apart from one URI and one set of request and response.
> Also, that URI is not accessible in our dev cloud, so we do not have a recording URL to use. How can we do virtualization here using Traffic Parrot?

You can add requests and responses manually to Traffic Parrot by clicking on HTTP->Add/Edit.

Monday, 4 March 2019

Delaying JMS messages in IBM MQ or Active MQ (mocking and service virtualization)

Starting from version 4.4.0 Traffic Parrot supports delayed JMS messages.

Here is the documentation on how to enable delayed messages:

It is useful in scenarios like performance testing, when you would like to simulate a dependency that takes a while to process messages asynchronously and measure its impact on your system's performance.

Wednesday, 13 February 2019

Tutorial: Mocking and simulating gRPC APIs

gRPC is an open-source remote procedure call framework initially developed at Google. Traffic Parrot supports gRPC service virtualization and API mocking.

If you would like to learn more have a look at the new tutorial Chapter 5: Mocking and simulating gRPC.

Saturday, 19 January 2019

Tutorial: How to use Traffic Parrot APIs

Sometimes developers would like to configure Traffic Parrot programmatically. You can do that using Traffic Parrot APIs.
Have a look at the video below for more details. Here is the Postman collection that should allow you to explore what APIs are available. Here you can also find the Swagger/OpenAPI documentation of the APIs.
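As an illustration only, configuring a mock over HTTP looks roughly like the sketch below. The host, port, endpoint paths and payload shape are hypothetical; the real endpoints are in the Postman collection and Swagger/OpenAPI documentation mentioned above:

```shell
# Hypothetical sketch: list and add HTTP mock mappings via a management API.
# Host, port and paths are placeholders; consult the OpenAPI docs for the
# real endpoints and payload format.
curl -s http://localhost:8080/api/http/mappings

curl -s -X POST http://localhost:8080/api/http/mappings \
  -H 'Content-Type: application/json' \
  -d '{"request":  {"method": "GET", "url": "/hello"},
       "response": {"status": 200, "body": "Hello, world!"}}'
```

This requires a running Traffic Parrot instance, so it is shown as a sketch only.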

Wednesday, 19 December 2018

3 bitter types of technical debt

We have noticed a pattern on the projects we have worked on for our clients: technical debt develops in three common categories. The interest you end up paying on that debt looks very different in each of them.

Eventually repaid

Some teams never allow technical debt to grow to levels where they even have to talk about it. It's just business as usual to refactor on a daily basis, so the debt stays very low and negligible.
Other teams may start with a noticeable level of technical debt because they take on work on legacy systems, but they reduce it as they go along by refactoring while adding new features.
Either way, the interest you pay on "eventually repaid" technical debt becomes so low that you barely notice it.


Sustainable

Teams add new features but refactor the code as they go along, so the technical debt stays at more or less the same level. Even though there is noticeable technical debt in the code, the team can deliver features at a sustainable pace.
They are continually repaying the interest on the technical debt, but the amount of debt stays the same. The difference between "eventually repaid" and "sustainable" is the amount of interest you pay on a daily basis while making changes to the code. In "eventually repaid" the interest gets close to zero; in "sustainable" it stays at a constant level throughout the lifecycle of the project.

Compound growth

Teams stop refactoring but continue to add new features. They stop paying interest on the debt, so the interest is consolidated into the debt, which enters a compound-interest cycle and starts growing exponentially.


Similar to the Broken Windows Theory, just applied to code quality: if you see that not repaying technical debt is the norm in your team, you personally don't repay it either; you just add new features, and the debt grows.

This typically ends with postponed releases, bugs in production and a lot of tension in meetings where people blame each other for not working well or hard enough. It takes several postponed releases and a few major bugs visible to senior management for teams to realise they need to repay some debt to go faster.

Unfortunately, the cost of repaying debt is much higher by that point, just because of the compound interest you have to pay back that was consolidated into the debt. In other words, 2 hours invested in repaying technical debt 6 months ago, could be equivalent to 1 day of work today to repay the same amount of debt.

The problem with this approach is that it feels like you are going fast at the start, because you are delivering features and the technical debt is not hurting you much at the very beginning. But you are putting yourself on the compound-interest curve instead of staying linear. Linear and compound curves look similar at the start and very different later on.

In most cases, you want to avoid ending up in this category. An example of where this type of debt is acceptable is when you need to hit a regulatory deadline, where the cost of not hitting the deadline outweighs the cost of repaying the compound debt accumulated later on.

What is the lesson learned here?

Keep in mind which category you want to be in and make a conscious choice to invest in technical excellence where necessary. Aim for low levels of technical debt, or for constant medium levels, and never for compound growth of debt.

Eventually repaid
  - Type of debt: You eventually repay almost all debt.
  - Daily repayment amount: Develop features and refactor the new and old code heavily.
  - Daily interest teams pay: Linear, going down. For example, at first developers are going 24% slower because of debt; eventually 0% slower once they have repaid all of the debt.

Sustainable
  - Type of debt: You keep debt at the same level by paying off all the interest.
  - Daily repayment amount: Develop features and refactor the new and old code a bit.
  - Daily interest teams pay: Linear, constant. For example, developers are constantly 12% slower because of the debt.

Compound growth
  - Type of debt: You do not repay debt; interest is added to the debt, and the debt grows exponentially.
  - Daily repayment amount: Only developing new features, with little to no refactoring.
  - Daily interest teams pay: Grows exponentially. For example, developers are 22% slower initially, but in 12 months' time they are 370% slower compared to having no technical debt.

Next steps

At Traffic Parrot, we are building a framework to help teams visualise, manage and reduce technical debt. It will help you decide on the level of technical debt that is acceptable and make the right choices at the right time so that you do not end up in the “compound growth” category. Sign up here to get notified when it's available:

What is your experience with technical debt?

Friday, 7 December 2018

You need Refactoring to stay Agile and competitive

Do you want to release software to production often and fast, continuously? Do you want to release high-quality software? In other words, do you want to deliver value to your customers before your competitors do it for you?

One of the building blocks for high-quality software is the practice of refactoring. If you cannot refactor but continue to add new features, your software quality will most likely degrade over time, and you will accumulate high levels of technical debt.

I have seen that recently on a project for a client where we had a number of good developers, but because they did not refactor the code on a regular basis while continuing to introduce new features, the project reached a point where there were significant issues with every release. Most of the recent releases would not go to production on time, and when they did, customers discovered major bugs.

The three main contributors to the lack of refactoring, in this case, were (surprise!?):

  1. Not enough interest from the business in investing in technical excellence ("Continuous attention to technical excellence and good design enhances agility") 
  2. Lack of sufficient levels of acceptance/functional UI and API testing. No confidence in the tests meant developers were afraid to change production code in case they break existing features. 
  3. Root cause analysis of issues led to developers having to explain themselves, which taught them to stay under the radar and not change anything that was not strictly necessary. 

How can we resolve these issues? 

  1. Educate the business on the consequences and trade-offs they accept when they do not invest in technical excellence, especially refactoring in this case. For example, a lack of refactoring combined with continually adding new features is likely to result in lower-quality software (and more customer-facing issues the business has to report to the FCA) and less predictable, less frequent releases (and therefore less value delivered to customers, allowing competitors to deliver that value instead). 
  2. Build up a robust suite of tests that the developers trust — for example, a testing pyramid that would include unit tests, integration tests, BDD API acceptance tests, BDD UI acceptance tests and contract tests. 
  3. Create a culture of "The process is to blame, not individuals". Whenever something terrible happens, figure out how to change the development process rather than point at individuals. For example, instead of blaming a developer for introducing a bug, figure out what types of tests should we start writing to avoid these types of bugs leaking to production in the future. 
Transformations like these are a process of continuous improvement. Fortunately, every time we explained the consequences of their choices, this particular client would listen carefully, engage in a dialogue, help us understand their environment better and help us help them change where necessary. The end goal for everybody was always to deliver more value to customers faster.

What is your experience with lack of refactoring?

Thursday, 6 December 2018

Invitation for a group discussion: The meaning of Agile in 2018

Join Wojciech to talk about Agile in software testing at the European Software Testing Summit "Group discussion: The meaning of Agile in 2018" on the 12th of December 2018.

Wojciech will lead a group discussion with the audience about the state of Agile software testing and development, its history and where the industry is today. We will talk about where Agile comes from, the Agile Manifesto, how we practice it today, what works well and where we can improve.