Friday, 3 April 2020

What are the challenges you see with third party inbound and outbound testing when using IBM MQ queues?

"What are the challenges you see with third party inbound and outbound testing when using IBM MQ queues?" - Software tester working for a multinational enterprise.

Sometimes third-party systems can cause issues when testing your enterprise systems, for example:
  • The third-party system is not available 24/7 for testing; you need to schedule time on test environments shared between many teams, which results in a longer time to market
  • The third-party system does not allow for simulating error responses
  • The third-party test environment might not support the load you require for running your performance tests

In this case, you can use a service virtualization or API mocking tool to simulate the third-party system.

Here is a sample tutorial for Traffic Parrot if you are using IBM MQ via JMS APIs.
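To illustrate the request/reply pattern a virtualized MQ responder follows, here is a minimal in-memory sketch in Python, with queue.Queue standing in for real IBM MQ queues. The message names and payloads are made up for illustration; this is not Traffic Parrot's API.

```python
import queue
import threading

# Hypothetical canned replies for the simulated third-party system,
# including an on-demand error response for sad path testing.
CANNED_REPLIES = {
    "GET_BALANCE": '{"balance": 100}',
    "FAIL_ME": '{"error": "SYSTEM_UNAVAILABLE"}',  # simulated error response
}

def third_party_stub(request_q, reply_q):
    """Consume requests and produce canned replies, like a virtualized MQ responder."""
    while True:
        msg = request_q.get()
        if msg is None:  # shutdown signal
            break
        reply_q.put(CANNED_REPLIES.get(msg, '{"error": "UNKNOWN_REQUEST"}'))

request_q, reply_q = queue.Queue(), queue.Queue()
worker = threading.Thread(target=third_party_stub, args=(request_q, reply_q))
worker.start()

request_q.put("GET_BALANCE")
print(reply_q.get())   # canned success response
request_q.put("FAIL_ME")
print(reply_q.get())   # simulated error, available on demand

request_q.put(None)
worker.join()
```

Because the stub is under your control, error responses and unavailable-system scenarios become test data you set up on demand, rather than states you wait for on a shared environment.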

Monday, 30 March 2020

Bestow has used Traffic Parrot gRPC mocks to deliver features faster to customers

After a thorough evaluation, Bestow Inc. selected Traffic Parrot's service virtualization and API mocking tool in April 2019 for their application development needs. In this case study, we will look at the details of their infrastructure, how they applied Traffic Parrot, and what issues they have come across.
  • Traffic Parrot is specifically designed to maximize the productivity of developers writing automated tests and to enable them to mock out microservices for local development. Their lightweight platform with gRPC support was a good fit for our Docker and Go-based development environment. They provided strong support during the POC and continue to track the rapid evolution of gRPC, acting as an effective extension to our team.
    Brian Romanko, VP Engineering at Bestow


Bestow has challenged industry assumptions with a new underwriting framework that provides affordable term life insurance in minutes instead of weeks. They use Traffic Parrot to unblock teams and allow them to work independently. Bestow uses Traffic Parrot gRPC mocks in their microservice CI regression testing suites to detect breaking changes in their microservice APIs.

Technology stack: Docker, GoLang and gRPC

The core technology they rely on includes:
  • Container-based infrastructure, running Docker in Kubernetes on GCP
  • Microservices in a variety of languages, including GoLang and Python
  • Microservices communicate using gRPC APIs, with API contracts defined in Proto files
Bestow co-locates the teams developing a microservice to encourage close communication. gRPC APIs connect microservices, which are sometimes owned by different teams. Bestow designs gRPC APIs using Proto files, which form the contract between microservices.

Problem: teams are blocked waiting for APIs

Starting more than a year ago, Bestow developed multiple microservices in parallel. For example, the Policy Administration team provided gRPC APIs for the Enrollment team to consume. This meant that developers on the Enrollment team were sometimes waiting for the Policy Administration team to deliver their microservice APIs before they could start working.
This led to blocked timelines between teams, which meant Bestow could not deliver at the fast pace required for their customers. It was urgent for Bestow to find a solution to allow the teams to work independently.

Solution: decouple teams by using gRPC mocks

Traffic Parrot was identified as a candidate for a gRPC API mocking solution that could help unblock the timelines between the teams at Bestow. After a two-week technical evaluation by VP of Engineering Brian Romanko, it was clear that the open-source alternatives did not provide adequate capabilities, and Traffic Parrot was chosen to fulfil Bestow's development needs.
Teams at Bestow use Traffic Parrot to develop both sides of their gRPC APIs in parallel, without having to wait for the server code to be written before a client can be tested. They run automated test suites on their CI build agents, with Traffic Parrot running in a Docker container on the agent.

Wednesday, 25 March 2020

How to choose a service virtualization tool?

Most companies like to evaluate several tools before they commit to a purchase.

Typically they evaluate the service virtualization tools based on many factors such as:
  • Cost
  • Protocols and technologies supported
  • Features
  • Performance benchmarks
  • Support level

Here are a few additional technical questions that might help you decide which of the tools you are looking at is the best fit:
  • Would you like to have a central team of administrators managing the new tool?
  • What kind of footprint would you like (RAM, disk usage, ...)?
  • What kind of licensing model would work best for your use case?
  • Do you need to source control the virtual services and deployment scripts?
  • Are you looking for a tool that fits a microservices architecture or a monolithic architecture better?
These questions are based on:

Monday, 27 January 2020

How to combine microservices and BigData?

"I've just joined a company and the architects love microservices but the developers love Big Data solutions. Do they mix? Can you point me in the direction of where I can read more about marrying the two together?" - Big Data Engineer working for a UK financial startup.

Microservice architectures are a tool to solve specific problems an organisation might have. If you have problems that can be solved by using microservice architectures, generate 2-3 options, and if the microservice route looks most promising, go for it.

Our general recommendation would be to focus on the problems you have to solve and the constraints you are working with, generate a few options and possible solutions and choose the one that seems most promising.

There will be certain scenarios where Big Data and microservices work well together, and others where they will not make sense. We would need to know more details to be of further help; please contact us to schedule a call to discuss your specific requirements.

We recommend reading Sequoia's Guide to Microservices and Martin Fowler's blog on microservices as a good starting point on the problems microservice architectures help solve.

Friday, 17 January 2020

Traffic Parrot 5.10.0 released, what's new?

We have just released version 5.10.0. Here is a list of the changes that came with the release:


  • Added support for IBM® MQ message delays in replay mode
  • Added support for specifying multiple IBM® MQ queues to record, using queue manager names from ibm-mq-connections.json, with the following syntax in the script tab in the UI:
    # This is a sample comment
    QueueManager:'Local Docker MQ 9'


  • Ensure IBM® MQ channel connections are always released after use

Monday, 13 January 2020

How can I distinguish mapping A (which belongs to service A) and mapping B (which belongs to service B)?

"How can I distinguish mapping A (which belongs to service A) and mapping B (which belongs to service B)?" - Intern at a global software development consultancy.

Good question!

You can use the URL of the service. For example, typically, the Users service will have a /users URL. Is this the case in your company as well? If you need more details, please contact us and we will be more than happy to help!
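The URL-based convention can be sketched as a tiny resolver. The paths and service names below are hypothetical examples, not tied to any real system:

```python
# Each mapping is owned by the service whose URL prefix it matches.
MAPPINGS = {
    "/users": "Users service (mapping A)",
    "/orders": "Orders service (mapping B)",
}

def service_for(url_path):
    """Return the owning service for a request path based on its first segment."""
    for prefix, service in MAPPINGS.items():
        if url_path == prefix or url_path.startswith(prefix + "/"):
            return service
    return "unknown"

print(service_for("/users/42"))   # → Users service (mapping A)
```

Matching on the prefix plus a trailing slash avoids accidentally routing, say, /usersearch to the Users service.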

Sunday, 12 January 2020

How to use Mockito to mock grpc ServiceBlockingStub?

"How do I use Mockito to mock grpc ServiceBlockingStub to throw StatusRuntimeException with a specific status?" - Java developer

You have a few options, for example enabling Mockito's inline mock maker so that the final generated stub class can be mocked, then configuring the mock to throw a StatusRuntimeException with the desired Status.
Note why mocking the generated code, in this case, might be a bad idea: the devil is in the details. You are creating a mock of the generated code, so you are assuming how that generated code will behave in the future. gRPC and Protobuf are still rapidly evolving, so it is risky to make those assumptions: they might change, and you won't notice, because you do not check your mocks against the generated code. Hence, it's not a good idea to mock the generated code unless you really have to.
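The pattern, and the drift risk it carries, can be sketched language-neutrally. Below is a Python analogue using unittest.mock; the class names are hypothetical stand-ins for generated gRPC code, and create_autospec plays the role of a safeguard that checks calls against the real signature, which a plain mock of generated code lacks:

```python
from unittest.mock import create_autospec

# Stand-in for a generated blocking stub; in real gRPC Java code this
# class is generated from the .proto file. Entirely hypothetical here.
class GreeterBlockingStub:
    def say_hello(self, request):
        raise NotImplementedError

class StatusRuntimeError(Exception):  # analogue of io.grpc.StatusRuntimeException
    def __init__(self, status):
        super().__init__(status)
        self.status = status

# autospec verifies every call against the stub's real signature, so if
# the generated code changes shape, tests fail instead of silently passing.
stub = create_autospec(GreeterBlockingStub, instance=True)
stub.say_hello.side_effect = StatusRuntimeError("UNAVAILABLE")

try:
    stub.say_hello({"name": "test"})
except StatusRuntimeError as e:
    print(e.status)  # → UNAVAILABLE
```

A call with the wrong arity (for example stub.say_hello() with no argument) is rejected by the autospecced mock, which is one way to narrow the gap between the mock and the generated code it imitates.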

Friday, 3 January 2020

QAs testing microservices can use third party and backend mocks to speed up testing

While testing microservices you will often need to mock or stub the third-party and legacy backend systems.

Third-party and backend system mocks and stubs will help you resolve issues with:
  • Setting up test data in third party and backend systems
  • Simulating API and backend system error messages (for negative or sad path testing)
  • Simulating protocol-specific errors (for negative or sad path testing)
  • Simulating slow responses (for performance testing)

You can create HTTP, gRPC, JMS, IBM MQ and other types of mocks with Traffic Parrot.
You can run Traffic Parrot in Docker, OpenShift and Kubernetes, which is what you need when working with microservice architectures.

For more details on how to use mocks when testing microservices, watch "How software testers can test microservices".
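Two of the scenarios listed above, error messages and slow responses, can be shown with a hand-rolled HTTP stub. This is a minimal Python sketch using only the standard library; the routes and the 0.2-second delay are hypothetical, not Traffic Parrot configuration:

```python
import http.server
import threading
import time
import urllib.error
import urllib.request

class StubHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/accounts/slow":
            time.sleep(0.2)             # simulated slow backend (performance testing)
            self.send_response(200)
        elif self.path == "/accounts/error":
            self.send_response(503)     # simulated backend outage (sad path testing)
        else:
            self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):       # keep the output quiet
        pass

server = http.server.HTTPServer(("localhost", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://localhost:{server.server_port}"

print(urllib.request.urlopen(base + "/accounts/ok").status)  # → 200
try:
    urllib.request.urlopen(base + "/accounts/error")
except urllib.error.HTTPError as e:
    print(e.code)  # → 503
server.shutdown()
```

A dedicated tool adds recording, a UI and protocol support on top, but the principle is the same: the test owns the backend's behaviour, so error and latency scenarios are reproducible on demand.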

See the diagram below for example usage of Traffic Parrot when testing microservices:

Sunday, 15 December 2019

Traffic Parrot 5.9.0 released - what's new?

We have just released version 5.9.0. Here is a list of the changes that came with the release:


  • Improvements to gRPC mappings list user interface:
    • Edit mapping tags that can be used to organize and search mappings
    • Added new tags column which is hidden by default


  • WireMock compatible HTTP APIs are now hosted on both:
    • http://localhost:8080/api/http/__admin/*
    • http://localhost:8080/api/http/*

    This means that the standard WireMock client libraries can be used when connecting to the API port e.g. when using WireMock.configureFor("localhost", 8080, "/api/http");

    Custom HTTP clients or a browser may drop the /__admin section of the path, so that API calls such as http://localhost:8080/api/http/mappings can still be made

Sunday, 1 December 2019

Traffic Parrot 5.8.0 released, what's new?

We have just released version 5.8.0. Here is a list of the changes that came with the release:


  • Improvements to gRPC mappings list user interface:
    • Added sorting to columns
    • Added a button to pause auto reload
    • Added a button to select which columns to display
    • Added two new columns which are hidden by default: id and name


  • Pause gRPC mappings list auto reload when highlighting text so that it is possible to highlight and copy text without the auto reload clearing the highlight
  • Allow spaces in gRPC mapping file name template

Monday, 25 November 2019

Traffic Parrot 5.7.3 released, what's new?

We have just released version 5.7.3. Here is a list of the changes that came with the release:


  • gRPC mapping file name templates when saving mappings via the UI can now be customized with the following properties:
    # Template used for gRPC mapping file names
    # Available properties:
    # {{ }}
    # {{ mapping.package }}
    # {{ mapping.service }}
    # {{ mapping.method }}
    # {{ mapping.isSuccess }}
    # {{ }}
    # {{ mapping.status.code }}
    # Available commands:
    # {{ countGrpcMappings package=mapping.package service=mapping.service method=mapping.method success=true offset=1 format='%02d' }}
    {{ mapping.package }}.{{ countGrpcMappings package=mapping.package offset=1 format='%02d' }}.{{ mapping.service }}-{{ mapping.method }}[{{ }}]{{ }}
  • When editing a gRPC mapping in the UI you can now choose whether to keep the existing file name or generate a new one according to the template properties when the mapping is saved
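The numbering behaviour behind a template like the one above (a per-mapping counter with %02d zero-padding) can be illustrated with a short sketch. The function below is a hypothetical re-implementation for illustration only, not Traffic Parrot's template engine:

```python
from collections import defaultdict

# One counter per (package, service, method), mimicking a counting command
# scoped to those template properties.
counters = defaultdict(int)

def mapping_file_name(package, service, method, status):
    """Number files per (package, service, method), zero-padded like format='%02d'."""
    key = (package, service, method)
    counters[key] += 1
    return "{pkg}.{n:02d}.{svc}-{m}[{status}]".format(
        pkg=package, n=counters[key], svc=service, m=method, status=status)

print(mapping_file_name("com.example", "Greeter", "SayHello", "OK"))
# → com.example.01.Greeter-SayHello[OK]
```

Calling it again with the same package, service and method yields the 02 suffix, which is the point of an offset-based counter: repeated recordings of the same method get distinct, sortable file names.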

Thursday, 14 November 2019

How to return different responses for the same URL when mocking REST HTTP APIs in Traffic Parrot

Typical problems solved by service virtualization/mocking/stubbing

In the first chapter of our tutorial series, we have a section that covers some of the typical problems that a service virtualization/mocking/stubbing approach addresses.
Do you experience similar problems? Please email us to schedule a call where we can discuss your requirements.
Here are several of the more typical problems testers experience while testing web applications, mobile applications, legacy systems and APIs, and how service virtualization can help solve them.
  • Too much pressure on the testing team and not enough time to test
    This is a common problem and service virtualization alone will not fix it, but it is an essential part of the solution. If you can utilize service virtualization in your builds and pipelines you can have more confidence in your release cycle. The focus changes from trying to test everything to testing what could be at risk of having issues. These kinds of checks can give you a quick health status of the application, or a piece of it, without dedicating a lot of testing time. Quick feedback means quick wins overall.
  • Testers being blocked by other teams not delivering APIs on time, leaving them unable to test
    In this case, you can use virtual services to simulate the non-existing API. If there are business requirements or documentation, you can create virtual services based on them to use until the real service is available to test. After you verify the system works in isolation according to your expectations, you can wait for the dependent API to be ready and do integration testing using the same tests. This is an example of using TDD for integration testing.
  • Hard to reproduce production bugs resulting in frustration and time loss
    Test data and complex interactions with backend and third-party systems can make it hard to reproduce bugs found in production. Setting up test data in multiple systems simultaneously, simulating error messages and simulating protocol issues are hard-to-accomplish tasks that get in the way of recreating the environment where the bug was found. Fortunately, complex environments like these can be simulated with virtual services, allowing for more flexibility and peace of mind when reproducing bugs.
  • Creating test data is complex and takes time, slowing the testing team down
    Often when you test old systems you have to wait for test data to be created, sometimes it’s even impossible to create the test data you need (especially in third party systems). Also, backend and third party systems can be hard to set up to return error responses, or throttle network responses, on demand. The solution is to use virtual services which are under your control, so you can set up any type of test data on demand.
  • Scheduling time on test environments resulting in inefficient use of tester’s time
    Especially large banks with old mainframe systems experience this issue: test environments are costly to create, so a limited number of them is shared across many testing teams and projects. Those old APIs almost never change, but they are included in many test plans, so you have to schedule time on those environments to run your tests. You can simulate the APIs that never change with virtual services and test your application under test more often, without having to wait for environment availability. You are also less likely to impact other teams by burning through their test data, causing further delays.
  • Cost of third party test transactions can complicate testing efforts
    In most cases, those paid APIs are very simple in nature and very rarely change. The cost of accessing third party APIs can complicate performance testing efforts as well. Simulating them using virtual services can reduce the third party transaction costs.
  • Unreliable performance tests
    When you performance test many components simultaneously, it is often challenging to pinpoint the code change that caused a performance issue. The feedback loop is too large, and you are testing too many moving parts at one time. Ideally, you want to test small code releases in isolation, with a limited number of components. That way, if you see a degradation in performance, you can quickly point to the code changeset that was the likely cause. To run performance tests in isolation, you need virtual services. Most service virtualization tools, on top of test data setup capabilities, also let you define response times for the virtual services, so you can create a production-like environment.
When you select a tool to use for service virtualization, keep the problem you are trying to solve in mind. Ask yourself a few questions about the problem at hand and what service virtualization can do to solve it.
Here are a few sample questions which could jumpstart the process for you:
  • Are service calls to real services slowing automation efforts?
  • Is lack of access to third party or backend services limiting testing efforts?
  • Is using real services the best place to test new code?
  • What problems might we need to address to create and maintain a virtual service for testing?
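Coming back to the question in this post's title, returning different responses for the same URL usually means keeping state in the stub: the mock remembers how many times a URL has been hit and serves the next canned response in a sequence. A minimal sketch, with hypothetical URLs and payloads:

```python
from collections import defaultdict

# Hypothetical canned response sequence for one URL: the first call
# succeeds, later calls simulate the backend degrading.
RESPONSES = {
    "/payments": [(200, '{"status": "ACCEPTED"}'), (503, '{"error": "DOWN"}')],
}

call_counts = defaultdict(int)

def stub_response(url):
    """Return the next canned response for this URL, sticking on the last one."""
    sequence = RESPONSES.get(url, [(404, "{}")])
    index = min(call_counts[url], len(sequence) - 1)
    call_counts[url] += 1
    return sequence[index]

print(stub_response("/payments"))  # first call: the 200 response
print(stub_response("/payments"))  # second call: the 503 response
```

Mocking tools typically expose the same idea as "scenarios" or response sequences; the state machine lives in the tool instead of a dictionary, but the matching-by-URL-plus-state principle is identical.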