Thursday, 17 February 2022

Traffic Parrot 5.32.0 released, what's new?

We have just released version 5.32.0. Here is a list of the changes that came with the release:

Features

  • Added support for using a proxy server with path based routing between Traffic Parrot and the license server

Sunday, 30 January 2022

Traffic Parrot 5.31.2 released, what's new?

We have just released version 5.31.2. Here is a list of the changes that came with the release:

Features

  • Added ability to import and export IBM MQ mappings in ZIP file format
  • JMS response messages now populate the JMSCorrelationID using the JMSCorrelationID from the request message if present, or else the JMSMessageID from the request
  • JMS response messages are now sent to the JMSReplyTo destination if present, or else to the mapping destination
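
The fallback rules above can be sketched in plain Java; the class and method names below are illustrative, not Traffic Parrot internals:

```java
// Illustrative sketch of the JMS response-routing fallback rules described
// above; class and method names are hypothetical, not Traffic Parrot code.
public class JmsResponseRouting {

    /** Response JMSCorrelationID: the request JMSCorrelationID if present, else the request JMSMessageID. */
    public static String resolveCorrelationId(String requestCorrelationId, String requestMessageId) {
        return (requestCorrelationId != null && !requestCorrelationId.isEmpty())
                ? requestCorrelationId
                : requestMessageId;
    }

    /** Response destination: the request JMSReplyTo if present, else the destination configured on the mapping. */
    public static String resolveDestination(String replyTo, String mappingDestination) {
        return (replyTo != null && !replyTo.isEmpty()) ? replyTo : mappingDestination;
    }
}
```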

Changes

  • Updated VS HTTPS certificates with a new 3-year validity period

Wednesday, 26 January 2022

Is trunk-based development the same as a monorepo?

"Is trunk-based development the same as a monorepo?" - an executive working for a large financial institution 

Monorepo means you have one source code repository for many projects/deliverables/modules/..., which means all sources are version controlled in the same place.

Trunk-based development means everybody works on the same branch (trunk/master/...), which allows for high levels of continuous integration: everybody sees the changes you commit as soon as you commit them, and you see other developers' changes just as quickly.

The alternative is feature branching, where you integrate when you merge your branch to the trunk/master.

Do you need help with understanding the details of trunk-based development and monorepos and the impact on your organization? Feel free to contact us.

Monday, 24 January 2022

Accelerate delivery, improve quality and reduce time-to-market

Traffic Parrot can help you with:

  • Unblock testers and developers waiting for APIs (faster time to market)
  • Reproduce production bugs more easily and quickly (faster time to market)
  • Speed up setting up test data (faster time to market, higher quality)
  • Eliminate the need for scheduling time on environments (faster time to market, higher quality)
  • Reduce third-party transaction costs (lower costs)
  • Make your performance tests more reliable (higher quality)
  • Develop automated tests sooner and make builds run faster (faster time to market, higher quality)
Please contact us for more details. Feel free to review our information on "Getting started with stubbing, mocking and service virtualization".



Friday, 7 January 2022

Traffic Parrot 5.30.1 released, what's new?

We have just released version 5.30.1. Here is a list of the changes that came with the release:

Features

  • Added support for connecting to an internal or external ActiveMQ broker via JMS AMQP 1.0
  • Added support for username/password when connecting to external ActiveMQ broker

Changes

  • Include ActiveMQ sample JMS AMQP 1.0 connection

Fixes

  • Fixed an issue with the {{ regex }} helper when using an {{ xPath }} result as a parameter
  • Library upgrades to fix OWASP issues

Tuesday, 4 January 2022

Three case studies on API-first Development and Consumer-Driven Contracts

At Traffic Parrot we have seen several companies embark on the API-first journey, a few of which also approached their API strategy consumer-first and implemented Consumer-Driven Contracts.

In this short article, we summarise two categories of engagement we have seen teams face:

  • Implementing an API-first approach for teams that have not done API-first before and are starting their journey with microservices (up to 5-20 microservices in production)

  • Implementing Consumer-Driven Contract Testing for teams that are already proficient in the API-first development approach and face issues with large-scale contract testing of microservices (more than 50-100 microservices in production)

Useful terms to know

  • API-first development: the move to microservices typically also drives an API-first development approach, where teams define business contracts between each other upfront using APIs. Sample API specification languages: OpenAPI or Protobuf

  • Consumer-Driven Contracts: When you design an API, its syntax and semantics are designed by the API producer team working closely with the teams that will consume the API, as opposed to the producer team making assumptions, based on its data model, about how the API should look. The API consumer teams drive the shape of the APIs.

  • Consumer-Driven Contract Testing: A type of contract testing where automated tests ensure that the contracts designed in a consumer-driven fashion are met. This gives confidence that a new release of a microservice will not introduce breaking API changes in production. A sample tool: Pact-JVM

High-level summary

It has been our clients’ experience that:

  1. The API-first approach is an effective way to parallelise work between teams working in fast-paced environments and microservice-based architectures.

  2. Allowing teams to work closely with each other and design APIs in a consumer-driven fashion helps drive quality and reduce time to market no matter the team size and deployment scale.

  3. Although consumer-driven testing is a well-grounded way of testing contracts, it can be counterproductive to introduce it to teams that are not experienced in the API-first approach or run fewer than 5-20 microservices in production.


Case study: a large media company using an API-first approach and Consumer-driven testing 

A global media company had an existing stack of 100+ microservices that was primarily tested with automated BDD E2E tests. The releases were done every 2-4 weeks.

These tests were costly to maintain because it took significant developer time to write and debug them. The suite took 2 hours to run, and issues took up to a week to investigate before every release.

Developers were often frustrated because the tests were flaky due to the complexity of the system under test, which had many non-deterministic failure points. The tests also prevented them from releasing new features on demand because the suite took a few hours to run.

With this experience, the company decided to avoid E2E testing for the new product they were working on.

The main way they decided to grow confidence in the contracts between new microservices and the behaviour of the product as a whole was to design contracts in a consumer-driven way. The company chose consumer-driven contract testing with Pact-JVM to test those contracts. Most of the teams were entirely new to consumer-driven contracts. 

Equipped with a solid API-first design background across all teams, several years of experience with automated BDD-style acceptance and E2E testing, and TDD at the unit level, they felt confident learning the consumer-driven contract testing approach. Unfortunately, that proved not to be true after a few months of working on the problem. In the teams' experience, about a fifth of the developers picked up the new workflow rapidly, but the majority still had issues understanding it even after a few months, and about a fifth had not fully onboarded onto the new process even after 12 months.

There was a change to infrastructure happening at the same time: moving away from bare metal to Kubernetes and Docker deployments.

The J-Curve Effect of this transformation was noticeable but manageable. Teams adopted just a few new tools and practices as part of this transformation (Pact-JVM, Docker, Kubernetes), and that was enough to keep developers and testers busy changing their daily habits. In retrospect, the investment in consumer-driven contract testing was justified, but the ROI was not significant, close to breaking even.

Case study: a global e-commerce giant using API-first approach

The company decided to move away from a monolithic architecture to more autonomous teams and microservices. As part of that transition, they decided to recommend good practices rather than force the use of specific technologies and solutions onto teams, trusting the teams to make the right judgements on which tools and techniques will make the most positive impact on their daily activities.

The developers wrote integration tests and used the API mocking tool Traffic Parrot to mock dependent components. They also wrote Cucumber/Gherkin BDD acceptance API tests to capture the business requirements (they called these "contract tests"), which use a Docker image of the microservice and a Docker image with the Traffic Parrot API mocks. The BDD tests verify both the microservice's API (request and response) and all communication with dependent components, by asserting on and verifying the interactions on the API mocks.

The company decided to create the API mocks in two ways.

First, if the API that a developer wants to consume already exists, they create the API mocks by recording requests and responses. A developer starts by creating a new test on their computer. They then run the test and create API mocks by recording them. They commit the tests and mocks to the microservice project in Git. In a QA pipeline (a pipeline run per commit to check the quality of the product), they start a Docker container that runs the API mocking tool and mounts the mock definitions from the microservice project.

Second, if the API the microservice will consume does not exist yet, a developer will create the API mocks from OpenAPI specifications for HTTP REST APIs or create the API mocks from protocol buffer specification files for gRPC APIs.

They also develop and run automated E2E smoke tests. This is one of the techniques for testing contracts between microservices, and it makes sure groups of microservices work well together. The E2E test suite is justified because it tests not only the producer side of the contracts, which the BDD tests cover, but also the consumer side, and so provides more confidence. The architects monitor the number of E2E tests and keep the complexity of the suite at a level that does not cripple the release process or daily development activities.

The move to consumer-driven contract testing was not justified at enterprise scale in the eyes of the architects and individual teams, as the J-Curve Effect would be too significant and the ROI not justifiable in a reasonable timeframe. Out of the hundreds of developers working for the company, only a handful decided to use consumer-driven contract testing; they were already very familiar with the API-first approach and had several years of experience in BDD.

Case study: an InsurTech startup using API-first approach

The company had a handful of teams working on a dozen microservices. The microservices were replacing part of a deprecated monolith.

The producer teams designed the APIs for the consumer teams. To manage contracts between teams and allow them to work in parallel, they decided to use API mocks that the API producers created and shared with the API consumers. They created the gRPC API service mocks using the API mocking tool Traffic Parrot.

They also used a handful of manual E2E tests in a pre-production environment to make sure the microservices would work together in the absence of sufficient automated testing. They filled the gaps in automated testing on an ongoing basis, which took 12 months of part-time effort.

To make sure the mocks did not get out of date, and to hit their aggressive deadlines, the company decided to test the API mocks by firing the same request at both a mock and the real microservice. They compared both responses to a contract definition of expected request/response pairs, defined in a company-specific custom format. This way, both the API mocks and the real service were proven to be up to date against the latest definition of the expected behaviour in the contract file. This proved to be a good-enough solution to remove the bottleneck of testing contracts and allow the teams to focus on delivering features to customers.
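
A minimal sketch of that comparison in Java; the `Response` type and method names are hypothetical and stand in for the company's custom contract format:

```java
// Illustrative sketch of the mock-validation approach described above:
// fire the same request at the mock and at the real service, then compare
// both responses against the expected response from the contract file.
// The Response type and names are hypothetical, not the company's format.
import java.util.Objects;

public class MockContractCheck {

    public record Response(int status, String body) {}

    /** The mock is proven up to date when mock, real service and contract all agree. */
    public static boolean mockMatchesContract(Response fromMock,
                                              Response fromRealService,
                                              Response expectedByContract) {
        return Objects.equals(fromMock, expectedByContract)
                && Objects.equals(fromRealService, expectedByContract);
    }
}
```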

The team contemplated using consumer-driven contract testing, but at first glance it seemed like using a cannon to shoot a fly. They decided to stick to their original good-enough testing plan and revisit the issue in 12-24 months, while keeping an eye on the complexity of the existing suite of tests so that maintenance costs do not become significant.

Next steps

At Traffic Parrot we would be happy to provide feedback on your transformation strategy. Reach out to us to schedule a call where we walk through your requirements and provide early feedback. If your experience with Consumer-Driven Contract testing was different, please let us know in the comments below!

Friday, 31 December 2021

Traffic Parrot 5.29.7 released, what's new?

We have just released version 5.29.7. Here is a list of the changes that came with the release:

Features

  • Added new trafficparrot.properties entries to allow configuring separate HTTP and HTTPS ports for the UI:
    • trafficparrot.gui.http.port=8080
    • trafficparrot.gui.http.port.enabled=true
    • trafficparrot.gui.https.port=8079
    • trafficparrot.gui.https.port.enabled=true
    • trafficparrot.gui.https.jks.resource=certificates/user-interface-https.jks
    • trafficparrot.gui.https.jks.password=trafficparrot
  • Added support for connecting to an external RabbitMQ 3 broker via JMS
  • Added support for HTTP webhooks including:
    • UI to configure callback on the HTTP Add/Edit mapping page
    • Send request callback after a fixed delay
    • Use data from the original request and response body in the callback request template
    • Enable callback to be sent or not based on script
  • Added support for uploading data directory when uploading an HTTP mappings zip file
  • Added support for DOTALL option in {{ regexExtract }} helper

Changes

  • Upgraded WireMock from 2.29.1 to 2.31.0
    • WireMock removed support for parameter normalizing in response templates
    • To restore parameter normalizing behaviour, please set: trafficparrot.virtualservice.handlebars.normalizeHelperParameters=true
  • Upgraded gRPC from 1.39.0 to 1.43.1
  • Upgraded protoc from 3.17.3 to 3.19.1
  • Include RabbitMQ sample JMS connection

Fixes

  • Fixed a bug with indexed {{ dataSource }} parameters
  • Fixed a bug with mixed type XLS columns
  • Fixed a bug with the tray icon on Windows
  • Library upgrades to fix OWASP issues

Wednesday, 1 December 2021

Do you need an asynchronous API mocking tool, asynchronous API testing tool or both?

In most cases, companies reach out to us because they are looking for a tool with service virtualization or API mocking capabilities. There are, however, situations where it is unclear if that is the only use case they would like us to help them with, especially when it comes to asynchronous message processing like IBM MQ, ActiveMQ, or RabbitMQ.

To help us better understand your needs, please review the diagram below and let us know whether you would like us to help you with the blue elements on the diagram, the green ones, or both.



Thursday, 18 November 2021

What is the cost of delay of not using API-first development and API mocks?

InfoQ has just published our latest article on “Using API-First Development and API Mocking to Break Critical Path Dependencies”.

Key takeaways:

  • Many organizations are moving towards API-first development in order to decrease the coupling between teams and increase development velocity. Using API mocks can support the goals of this initiative.

  • We present a case study on API-first development that used API mocks to enable teams to work in parallel and be faster to market

  • There is a simple way of estimating the value that parallelizing teamwork using API mocks will bring to your organization, based on the cost of delay

  • The spreadsheet model provided in this article can be used to calculate potential savings 

  • Adoption of API mocking requires just one team -- there is no need to migrate the whole organization at once
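
As a rough illustration of the cost-of-delay arithmetic (the figures and names below are invented examples, not taken from the article or its spreadsheet model):

```java
// Illustrative cost-of-delay estimate; all names and figures are invented
// examples, not the model from the InfoQ article.
public class CostOfDelay {

    /**
     * If a feature is expected to generate valuePerWeek once live, every week
     * the consuming team is blocked waiting on a dependency costs that much.
     * API mocks let the team start weeksUnblocked weeks earlier.
     */
    public static double savings(double valuePerWeek, double weeksUnblocked) {
        return valuePerWeek * weeksUnblocked;
    }
}
```

For example, a feature worth 10,000 per week that ships 4 weeks earlier because the team was unblocked by API mocks saves roughly 40,000.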

If you would like to see a cost of delay calculation sheet specific to your circumstances, please reach out to us by filling in the contact form or calling +44 20 3239 7753. We offer models for different cases like slow CI/CD builds, unavailable third-party dependencies and more.



Wednesday, 17 November 2021

Pros and cons of running API mocks inside CI/CD

Our customers choose to run their mocks both inside and outside their CI/CD builds and pipelines. There are advantages and disadvantages to both approaches.

Typically, if the system under test (the microservice under test) runs inside CI/CD, then our customers also run their API mocks inside CI/CD.

The first diagram below shows how running API mocks inside CI/CD looks; the second shows how running them outside CI/CD looks.

Pros of running inside CI/CD

  • Major: More resilient builds because they do not rely on any external components or infrastructure
  • Major: More resilient builds due to relying on known API mock state by using ephemeral per-test-run API mock instances with the test-primed state instead of long-running API mock instances with undefined state
  • Major: Builds are easier to set up and maintain by starting the API mocking (service virtualization) tool from Maven/Gradle before the tests and tearing it down after the tests
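
The ephemeral, test-primed mock pattern from the list above can be sketched with nothing but the JDK; in a real build you would start an API mocking tool such as WireMock or Traffic Parrot rather than this hand-rolled server:

```java
// Minimal sketch of an ephemeral, per-test-run API mock with test-primed
// state, using only the JDK; real builds would start an API mocking
// (service virtualization) tool such as WireMock or Traffic Parrot instead.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EphemeralMockExample {

    public static String callMockedAccountApi() {
        try {
            // Start the mock on a random free port before the test runs...
            HttpServer mock = HttpServer.create(new InetSocketAddress(0), 0);
            mock.createContext("/accounts/42", exchange -> {
                // ...primed with a known, deterministic canned response...
                byte[] body = "{\"id\":42,\"status\":\"ACTIVE\"}".getBytes();
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            mock.start();
            try {
                // ...exercise the code under test against the known state...
                URI uri = URI.create("http://localhost:" + mock.getAddress().getPort() + "/accounts/42");
                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(HttpRequest.newBuilder(uri).build(), HttpResponse.BodyHandlers.ofString());
                return response.statusCode() + " " + response.body();
            } finally {
                // ...and tear the mock down after the test.
                mock.stop(0);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(callMockedAccountApi());
    }
}
```

Because the mock lives and dies with the test run, the build never depends on a long-running shared instance with undefined state.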

Cons of running inside CI/CD

  • Critical: To run API mocks inside CI/CD the system under test needs to be capable of running inside CI/CD as well
  • Major: People need to be trained on how to use mocks inside CI/CD
  • Major: Need an API mocking (service virtualization) tool with a low footprint that is designed to be run per test or per build (e.g. WireMock, Mountebank or Traffic Parrot)
  • Major: Performance testing might need additional hardware resources to support 1000+ TPS (req/s)
  • Minor: Debugging builds with no access to a running API mock tool instance means the observability of the API mocking tool needs to be high - the logs need to contain enough information to diagnose failures
  • Minor: Additional hardware resources are needed on the build agent/slave/runner





Thursday, 7 October 2021

Typical Kubernetes deployment setup for large scale Traffic Parrot virtual service and API mock installations

Traffic Parrot customers use Kubernetes and Docker to deploy virtual services in environments used by both QA and development teams.  

Below is a typical Kubernetes infrastructure for deploying applications using a structured build automation process.




Our customers typically use Ingress to expose the services outside of the cluster. 

We recommend doing blue-green rolling releases to maintain the uptime of the virtual services.

If you use JMS or IBM MQ, you will need to connect to a message broker hosted outside the cluster.

In this example, Splunk is used as the log aggregator. Flux is often used to keep the cluster's state in a Git repository and sync it to the cluster automatically. We don't recommend using kubectl commands, the Rancher UI, or anything like that to make manual changes to the cluster.

Prometheus can be useful for health checking and making sure things are still up and running.  

Traffic Parrot stores its configuration in JSON configuration files. Those configuration files define which endpoints a particular pod supports and determine what kind of virtual services or API mocks it serves.
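
As an illustration, a WireMock-style JSON mapping of the kind Traffic Parrot uses for HTTP might look like this (the endpoint and values are invented examples):

```json
{
  "request": {
    "method": "GET",
    "urlPath": "/accounts/42"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "body": "{\"id\": 42, \"status\": \"ACTIVE\"}"
  }
}
```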

Delivery Pipeline

This diagram shows an approach for setting up a continuous delivery pipeline to deploy Traffic Parrot containers running API mocks and virtual services in Kubernetes. 



This is an example of a delivery pipeline with the deployment configuration separated from the virtual service configuration. Once the developers have prototyped their virtual services locally, they commit the Traffic Parrot configuration to a system under test (SUT) repository.

In “2. commit TP config”, a Docker image with TP config files is built that can then be used inside the Kubernetes cluster. 

The SUT repo holds many Traffic Parrot configs, and it triggers a pipeline so that one or more of those configs are built into the image.

In “5. Commit TP Deployment”, where Traffic Parrot is deployed with a Docker image name and version, the deployment configuration is held in a Git Flux repository and is synced to the Kubernetes cluster using the Flux-CD tool.


For more information tailored to your company’s specific needs please contact us.

Friday, 10 September 2021

Traffic Parrot 5.28.2 released, what's new?

We have just released version 5.28.2. Here is a list of the changes that came with the release:

Features

  • Added a new property, trafficparrot.gui.enabled, that allows skipping starting the web user interface. By default, the user interface is enabled, but it can be disabled, for example in CI/CD builds or pipelines, to allow TP to use fewer resources.
  • Added a new property to allow skipping MQ RFH headers when matching the body of an MQ request message: trafficparrot.ibmmq.skipHeadersInRequestMessageBodyWhenMatching=true
  • Added UI for viewing archived log files. Note that this currently supports only plain text archived log files without compression, as used in the default trafficparrotserver.log4j.properties configuration
  • Improved the list of gRPC skeletons when multiple oneof or optional fields are used in the response

Changes

  • Upgraded WireMock from 2.29.0 to 2.29.1