We have just released version 5.32.0. Here is a list of the changes that came with the release:
Features
- Added support for using a proxy server with path based routing between Traffic Parrot and the license server
We have just released version 5.31.2. Here is a list of the changes that came with the release:
"Is trunk-based development the same as a monorepo?" - an executive working for a large financial institution
Monorepo means you have one source code repository for many projects/deliverables/modules/... which means all sources are version controlled in the same place.
Trunk-based development means everybody works on the same branch (trunk/master/...), which allows for high levels of continuous integration: everybody sees the changes you commit as soon as you commit them, and you see other developers' changes just as quickly.
The alternative is feature branching, where you integrate when you merge your branch to the trunk/master.
Do you need help understanding the details of trunk-based development and monorepos, and their impact on your organization? Feel free to contact us.
Traffic Parrot can help you with:
We have just released version 5.30.1. Here is a list of the changes that came with the release:
At Traffic Parrot we have seen several companies embark on the API-first journey, a few of which also approached their API strategy consumer-first and implemented Consumer-Driven Contracts.
In this short article, we summarise two categories of engagement we have seen teams face:
Implementing an API-first approach for teams that have not done API-first before and are starting their journey with microservices (up to 5-20 microservices in production)
Implementing Consumer-Driven Contract Testing for teams that are already proficient in the API-first development approach and face issues with large-scale contract testing of microservices (more than 50-100 microservices in production)
API-first development: the move to microservices typically also drives an API-first development approach, where teams define business contracts between each other upfront using APIs. Sample API specification languages: OpenAPI or Protobuf.
Consumer-Driven Contracts: when you design an API, its syntax and semantics are designed by the API producer team working closely with the teams that will consume the API, as opposed to the producer team making assumptions, based on its data model, about how the API should look. The API consumer teams drive the shape of the APIs.
Consumer-Driven Contract Testing: a type of contract testing that uses automated tests to ensure the contracts designed in a consumer-driven fashion are met. This builds confidence that a new release of a microservice will not result in breaking API changes in production. A sample tool: Pact-JVM.
It has been our clients’ experience that:
The API-first approach is an effective way to parallelise work between teams working in fast-paced environments and microservice-based architectures.
Allowing teams to work closely with each other and design APIs in a consumer-driven fashion helps drive quality and reduce time to market no matter the team size and deployment scale.
Although consumer-driven testing is a well-grounded way of testing contracts, it can be counterproductive to introduce it to teams that are not experienced in the API-first approach or that run fewer than 5-20 microservices in production.
A global media company had an existing stack of 100+ microservices that was primarily tested with automated BDD E2E tests. The releases were done every 2-4 weeks.
These tests were costly to maintain because it took significant developer time to write and debug the suite of tests. The suite of tests took 2 hours to run, and up to a week to investigate issues before every release.
Developers were often frustrated as the tests were flaky due to the complexity of the system under test, leading to many non-deterministic failure points. These tests would prevent them from releasing new features on-demand as the tests took a few hours to run.
With this experience, the company decided to avoid E2E testing for the new product they were working on.
The main way they decided to grow confidence in the contracts between new microservices and the behaviour of the product as a whole was to design contracts in a consumer-driven way. The company chose consumer-driven contract testing with Pact-JVM to test those contracts. Most of the teams were entirely new to consumer-driven contracts.
Equipped with a solid API-first design background across all teams, several years of experience with automated BDD-style acceptance and E2E testing, and TDD at the unit level, they felt confident learning the consumer-driven contract testing approach. Unfortunately, that confidence proved misplaced after a few months of working on the problem. In the teams' experience, about a fifth of the developers picked up the new workflow rapidly, but the majority still had trouble understanding it after a few months, and about a fifth had not fully onboarded onto the new process even after 12 months.
There was a change to infrastructure happening at the same time: moving away from bare metal to Kubernetes and Docker deployments.
The J-Curve Effect of this transformation was noticeable but manageable. Teams adopted just a few new tools and practices as part of it (Pact-JVM, Docker, Kubernetes), and that was enough to keep developers and testers busy changing their daily habits. In retrospect, the investment in consumer-driven contract testing was justified, but the ROI was not significant: close to breaking even.
The company decided to move away from a monolithic architecture to more autonomous teams and microservices. As part of that transition, they decided to recommend good practices rather than force the use of specific technologies and solutions onto teams, trusting the teams to make the right judgements on which tools and techniques will make the most positive impact on their daily activities.
The developers wrote integration tests and used the API mocking tool Traffic Parrot to mock dependent components. They also wrote Cucumber/Gherkin BDD acceptance API tests to capture the business requirements (they called these "contract tests"), which use a Docker image of the microservice and a Docker image with the Traffic Parrot API mocks. The BDD tests verify both the microservice's API requests and responses and all communication with dependent components, by running assertions and verifications against the API mocks.
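The pattern above can be sketched in a few lines of Python. Everything here is illustrative: the recording mock, the service, and the data shapes are made up for the example and are not Traffic Parrot's actual API.

```python
# Minimal sketch of a BDD-style test that checks both the microservice's
# own response and its interactions with a mocked dependent component.

class RecordingMock:
    """Stands in for a dependent component and records every request."""
    def __init__(self, canned_response):
        self.canned_response = canned_response
        self.requests = []

    def handle(self, request):
        self.requests.append(request)
        return self.canned_response

def microservice_under_test(order_id, pricing_service):
    # The service calls its dependency and builds its response from it.
    price = pricing_service.handle({"path": "/price", "order": order_id})
    return {"order": order_id, "total": price["amount"]}

pricing_mock = RecordingMock(canned_response={"amount": 42})
response = microservice_under_test("A-1", pricing_mock)

# 1. Verify the microservice's API response.
assert response == {"order": "A-1", "total": 42}
# 2. Verify the interaction with the dependent component on the mock.
assert pricing_mock.requests == [{"path": "/price", "order": "A-1"}]
```

The key design point is the second assertion: the test fails not only when the service returns the wrong answer, but also when it stops talking to its dependency in the agreed way.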
The company decided to create the API mocks in two ways.
First, if the API a developer wants to consume already exists, they create the API mocks by recording requests and responses. A developer starts by creating a new test on their computer. They then run the test and create the API mocks by recording them. They commit the tests and mocks to the microservice project in Git. In a QA pipeline (a pipeline run per commit to check the quality of the product), they start a Docker container that runs the API mocking tool and mounts the mock definitions from the microservice project.
Second, if the API the microservice will consume does not exist yet, a developer will create the API mocks from OpenAPI specifications for HTTP REST APIs or create the API mocks from protocol buffer specification files for gRPC APIs.
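The second case, deriving a stub from a specification before the real API exists, can be sketched as follows. The spec structure below is a simplified stand-in for an OpenAPI document, not Traffic Parrot's actual import format.

```python
# Hedged sketch: building a stub response from an API specification
# when the real API has not been implemented yet.

spec = {
    "paths": {
        "/quotes/{id}": {
            "get": {"example_response": {"id": "placeholder", "premium": 0.0}}
        }
    }
}

def stub_from_spec(spec, path, method):
    """Return the example response declared in the spec for an operation."""
    return spec["paths"][path][method]["example_response"]

stub = stub_from_spec(spec, "/quotes/{id}", "get")
assert "premium" in stub
```

Because the stub is generated from the agreed specification rather than from a running service, consumer teams can start integrating against it on day one.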
They also develop and run automated E2E smoke tests. This is one of the techniques for testing contracts between microservices, and it makes sure groups of microservices work well together. The presence of the E2E test suite is justified because it tests not only the producer side of the contracts, which is covered by the BDD tests, but also the consumer side, and so provides more confidence. The architects monitor the number of E2E tests and keep the complexity of the suite at a level that does not cripple the release process or daily development activities.
The move to consumer-driven contract testing was not justified at enterprise scale in the eyes of the architects and individual teams, as the J-Curve Effect impact would be too significant and the ROI not justifiable in a reasonable timeframe. Out of the hundreds of developers who work for the company, only a handful decided to use consumer-driven contract testing; they were already very familiar with the API-first approach and had several years of experience in BDD.
The company had a handful of teams working on a dozen microservices. The microservices were replacing part of a deprecated monolith.
The producer teams designed the APIs for the consumer teams. To manage contracts between teams and allow them to work in parallel, they decided to use API mocks that the API producers created and shared with the API consumers. They created the gRPC API service mocks using the API mocking tool Traffic Parrot.
They also used a handful of manual E2E tests in a pre-production environment to make sure that the microservices would work together in the absence of sufficient automated testing. They would fill the gaps in automated testing on an ongoing basis, which took 12 months of ongoing part-time effort.
To make sure the mocks did not get out of date, and to hit their aggressive deadlines, the company decided to test the API mocks by firing a request at both a mock and a real microservice. They compared both responses to a contract definition of expected request/response pairs, defined in a company-specific custom format. This way, both the API mocks and the real service were proven to be up to date against the latest definition of the expected behaviour in the contract file. This proved to be a good-enough solution to remove the bottleneck of testing contracts and allow the teams to focus on delivering features to customers.
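The comparison described above can be sketched like this. The contract format below is invented for illustration; the company used their own custom format, and the "mock" and "real service" here are plain functions standing in for HTTP endpoints.

```python
# Sketch: fire the contract's request at both the API mock and the real
# microservice, and compare each response to the expected response pair
# recorded in the contract file.

contract = {
    "request": {"method": "GET", "path": "/accounts/7"},
    "expected_response": {"id": 7, "status": "active"},
}

def meets_contract(contract, respond):
    """Send the contract's request to an endpoint and compare the response."""
    actual = respond(contract["request"])
    return actual == contract["expected_response"]

mock_respond = lambda req: {"id": 7, "status": "active"}  # the API mock
real_respond = lambda req: {"id": 7, "status": "active"}  # the real service

# Both must match the same contract; if either drifts, the check fails,
# which is what keeps the mocks from going stale.
assert meets_contract(contract, mock_respond)
assert meets_contract(contract, real_respond)
```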
The team contemplated using consumer-driven contract testing but, at first glance, it seemed like using a cannon to shoot a fly, so they decided to stick to their original good-enough testing plan and revisit the issue in 12-24 months, while keeping an eye on the complexity of the existing suite of tests so that maintenance costs do not become significant.
We have just released version 5.29.7. Here is a list of the changes that came with the release:
InfoQ has just published our latest article on “Using API-First Development and API Mocking to Break Critical Path Dependencies”.
Key takeaways:
Many organizations are moving towards API-first development in order to decrease the coupling between teams and increase development velocity. Using API mocks can support the goals of this initiative.
We present a case study on API-first development that used API mocks to enable teams to work in parallel and be faster to market
There is a simple way of estimating the value that parallelising teamwork with API mocks will bring to your organization, based on the cost of delay
The spreadsheet model provided in this article can be used to calculate potential savings
Adoption of API mocking requires just one team -- there is no need to migrate the whole organization at once
If you would like to see a cost of delay calculation sheet specific to your circumstances, please reach out to us by replying to this email, filling in the contact form or calling +44 20 3239 7753. We offer models for different cases like slow CI/CD builds, unavailable third party dependencies and more.
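A cost-of-delay estimate in the spirit of the article's spreadsheet model can be as simple as the sketch below. All numbers are made-up inputs for illustration, not figures from the case study.

```python
# Minimal cost-of-delay estimate: how much value is recovered when API
# mocks let a team work in parallel instead of waiting on a dependency.

weekly_cost_of_delay = 50_000    # value lost per week the feature is late
weeks_blocked_without_mocks = 6  # waiting on an unavailable dependency
weeks_blocked_with_mocks = 1     # working in parallel against API mocks

savings = weekly_cost_of_delay * (
    weeks_blocked_without_mocks - weeks_blocked_with_mocks
)
print(f"Estimated saving from parallelising with API mocks: {savings}")
# With these example inputs the estimate is 250000.
```

The real spreadsheet model takes more inputs, but the core of the calculation is the same: weeks of blocked work avoided, multiplied by what each blocked week costs.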
Our customers choose to run their mocks inside and also outside their CI/CD builds and pipelines. There are advantages and disadvantages to both approaches.
Typically, if the system under test (the microservice under test) is running inside CI/CD, then our customers would also run their API mocks inside CI/CD.
The first diagram below shows how running API mocks inside CI/CD looks; the second shows how running them outside CI/CD looks.
Jérôme Petazzoni has a great introduction to Kubernetes:
Traffic Parrot customers use Kubernetes and Docker to deploy virtual services in environments used by both QA and development teams.
Below is a typical Kubernetes infrastructure for deploying applications using a structured build automation process.
Our customers typically use Ingress to expose the services outside of the cluster.
We recommend doing blue-green rolling releases to maintain the uptime of the virtual services.
If you use JMS or IBM MQ, you will need to connect to a message broker that is hosted outside the cluster.
In this example, Splunk is used as the log aggregator. Flux is often used to keep the cluster's state in a Git repository and sync the cluster to it automatically. We don't recommend using kubectl commands, the Rancher UI, or anything similar to make manual changes to the cluster.
Prometheus can be useful for health checking and making sure things are still up and running.
Traffic Parrot stores its configuration in JSON files. Those configuration files define which endpoints are supported by a particular pod and determine what kind of virtual services or API mocks it serves.
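To illustrate the general idea of file-based mock configuration (an endpoint in, a canned response out), here is a made-up JSON mapping parsed in Python. This is NOT Traffic Parrot's actual configuration schema; the field names are hypothetical.

```python
import json

# A hypothetical endpoint-to-response mapping, of the kind that can be
# committed to Git and mounted into a container in a Kubernetes pod.
mapping = json.loads("""
{
    "request": {"method": "GET", "urlPath": "/customers/123"},
    "response": {"status": 200, "body": "{\\"id\\": 123}"}
}
""")

assert mapping["request"]["urlPath"] == "/customers/123"
assert mapping["response"]["status"] == 200
```

Keeping mock definitions in plain files is what makes the Git-driven pipeline described below possible: the files are versioned, reviewed, and baked into images like any other source.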
This diagram shows an approach for setting up a continuous delivery pipeline to deploy Traffic Parrot containers running API mocks and virtual services in Kubernetes.
This is an example of a delivery pipeline with the deployment configuration separated from the virtual service configuration. Once developers have prototyped their virtual services locally, they commit the Traffic Parrot configuration to a system-under-test (SUT) repository.
In “2. commit TP config”, a Docker image with TP config files is built that can then be used inside the Kubernetes cluster.
The SUT repo holds many Traffic Parrot configs and triggers a pipeline, so one or more of those configs are then built into the image.
In “5. Commit TP Deployment”, where Traffic Parrot is deployed with a Docker image name and version, the deployment configuration is held in a Git Flux repository and synced to the Kubernetes cluster using the Flux CD tool.
We have just released version 5.28.2. Here is a list of the changes that came with the release:
Copyright © 2014- Traffic Parrot. All Right Reserved. TrafficParrot is a product and registered trademark of Traffic Parrot Ltd. Company number 10048075. | (UK freephone) 0800 688 9806 | +44 20 3239 7753 | contact@trafficparrot.com