Friday, 29 March 2019

How to run Traffic Parrot behind a firewall? How to use a proxy?

Recently one of the Traffic Parrot users asked us:

"When I try to capture traffic from a HTTP API it does not register anything or generate an error, it just does not capture anything. I have a proxy in my company, but I can not find how to tell the application to go through that proxy to make the recording. Please advise." - A developer at a Colombian bank

Are you running Traffic Parrot inside an organisation where you have to use an HTTP proxy to access APIs?

No problem!

You can find documentation on how to use an HTTP proxy here: https://trafficparrot.com/documentation/4.4.x/user_guide.html#outbound-http-proxy
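As a general illustration (the linked documentation describes the properties Traffic Parrot itself reads, which may differ), Java applications conventionally pick up an outbound proxy from the standard JVM system properties:

```shell
# Standard JVM outbound-proxy flags; a sketch only — check the linked
# Traffic Parrot documentation for the properties it actually supports.
# proxy.mycompany.example and port 8080 are placeholder values.
JAVA_OPTS="-Dhttp.proxyHost=proxy.mycompany.example -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.mycompany.example -Dhttps.proxyPort=8080"
echo "$JAVA_OPTS"
```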

Friday, 15 March 2019

How to add requests and responses manually

A user recently asked us this good question.

> I would like to know your inputs for below query: if we do not have any restful API details apart from one URI and one set of Request and response.
> That to URI is not accessible in our dev cloud. then we do not have any recording url for that to put on since it is not accessible. How can we do virtualization here using traffic parrot

You can add requests and responses manually to Traffic Parrot by clicking on HTTP->Add/Edit.


Monday, 4 March 2019

Delaying JMS messages in IBM MQ or Active MQ (mocking and service virtualization)

Starting from version 4.4.0, Traffic Parrot supports delayed JMS messages.

Here is the documentation on how to enable delayed messages: https://trafficparrot.com/documentation/4.4.x/jms.html#edit-jms-delay

This is useful in scenarios such as performance testing, when you would like to simulate a dependent system that takes a while to process messages asynchronously and observe its impact on your system's performance.

Wednesday, 13 February 2019

Tutorial: Mocking and simulating gRPC APIs

gRPC is an open-source remote procedure call framework initially developed at Google. Traffic Parrot supports gRPC service virtualization and API mocking.

If you would like to learn more have a look at the new tutorial Chapter 5: Mocking and simulating gRPC.

Saturday, 19 January 2019

Tutorial: How to use Traffic Parrot APIs

Sometimes developers would like to configure Traffic Parrot programmatically. You can do that using Traffic Parrot APIs.
Have a look at the video below for more details. Here is the Postman collection that should allow you to explore what APIs are available. Here you can also find the Swagger/OpenAPI documentation of the APIs.

Wednesday, 19 December 2018

3 bitter types of technical debt




We have noticed a pattern across the projects we have worked on for our clients. Technical debt develops in three common categories, and the interest you will pay on that debt looks very different in each of them.

Eventually repaid

Some teams never allow technical debt to grow to a level where they even need to talk about it; refactoring on a daily basis is simply business as usual.
[Chart: very low, negligible technical debt]
Other teams may start with a noticeable level of technical debt because they take on work on legacy systems, but they reduce it as they go along by refactoring while adding new features.
[Chart: eventually repaid technical debt]
Either way, the interest you pay on the technical debt eventually becomes so low that you barely notice it.

Sustainable

[Chart: sustainable technical debt]
Teams add new features but refactor the code as they go along, and the technical debt stays more or less at the same level. This means that even though there is noticeable technical debt in the code, the team can deliver features at a sustainable pace.
They are continually paying the interest on the technical debt, but the amount of the debt stays the same. The difference between “eventually repaid” and “sustainable” is the amount of interest you pay on a daily basis while making changes to the code. In “eventually repaid” the interest gets close to zero; in “sustainable” it stays at a constant level throughout the lifecycle of the project.

Compound growth

Teams stop refactoring but continue to add new features. They stop paying interest on the debt, so the interest is consolidated into the debt, which enters a compound-interest cycle and starts growing exponentially.


[Chart: compound growth technical debt]


Similar to the Broken Windows Theory, applied to code quality: if you see that leaving technical debt unpaid is the norm in your team, you personally stop repaying it too and just add new features, and the debt grows.


This typically ends with postponed releases, bugs in production and a lot of tension in meetings where people blame each other for not working well or hard enough. It takes several postponed releases and a few major bugs visible at senior management levels for teams to realise they need to repay some debt in order to go faster.


Unfortunately, the cost of repaying the debt is much higher by that point, because of the compound interest that was consolidated into it. In other words, 2 hours invested in repaying technical debt 6 months ago could be equivalent to a full day of work today to repay the same amount of debt.
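The arithmetic behind that example can be sketched in a few lines. The 30% monthly rate below is an assumed figure chosen to reproduce the 2-hours-to-1-day illustration, not a measured one:

```python
# Compound growth of repayment cost: a toy model, not measured data.
# Assume unpaid technical-debt "interest" compounds at 30% per month.
hours_then = 2        # effort to repay the debt 6 months ago
monthly_rate = 0.30   # assumed compounding rate
months = 6

hours_now = hours_then * (1 + monthly_rate) ** months
print(round(hours_now, 1))  # ~9.7 hours, roughly one working day
```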


The problem with this approach is that it feels like you are going fast at first, because you are delivering features and the technical debt is not hurting you much at the very beginning. But you are putting yourself on the compound-interest curve instead of staying linear, and linear and compound curves look similar at the start yet very different later on.


In most cases, you want to avoid ending up in this category. An example of where this type of debt is acceptable is when you need to hit a regulatory deadline, where the cost of not hitting the deadline outweighs the cost of repaying the compound debt accumulated later on.

What is the lesson learned here?

Keep in mind which category you want to be in and make a conscious choice to invest in technical excellence where necessary. Aim for low levels of technical debt or constant medium levels, and never for compound growth of debt.



Eventually repaid:
  • Type of debt: you eventually repay almost all debt.
  • Daily repayment amount: develop features and refactor the new and old code heavily.
  • Daily interest teams pay: linear, going down. For example, at first developers go 24% slower because of debt; eventually 0% slower once all of it is repaid.

Sustainable:
  • Type of debt: you keep debt at the same level by paying off all the interest.
  • Daily repayment amount: develop features and refactor the new and old code a bit.
  • Daily interest teams pay: linear, constant. For example, developers are constantly 12% slower because of the debt.

Compound growth:
  • Type of debt: you do not repay debt; interest is added to the debt and the debt grows exponentially.
  • Daily repayment amount: only develop new features, with little to no refactoring.
  • Daily interest teams pay: grows exponentially. For example, developers are 22% slower initially, but in 12 months' time they are 370% slower than if they had no technical debt.


Next steps

At Traffic Parrot, we are building a framework to help teams visualise, manage and reduce technical debt. It will help you decide on the level of technical debt that is acceptable and make the right choices at the right time so that you do not end up in the “compound growth” category. Sign up here to get notified when it's available: https://trafficparrot.com/technical_debt.html


What is your experience with technical debt?

Friday, 7 December 2018

You need Refactoring to stay Agile and competitive

Do you want to release software to production often and fast, continuously? Do you want to release high-quality software? In other words, do you want to deliver value to your customers before your competitors do it for you?

One of the building blocks for high-quality software is the practice of refactoring. If you cannot refactor but continue to add new features, your software quality will most likely degrade over time, and you will accumulate high levels of technical debt.

I saw this recently on a client project. The team had good developers, but because they did not refactor the code regularly while continuing to introduce new features, the project reached a point where every release had significant issues. Most of the recent releases did not reach production on time, and when they did, customers discovered major bugs.

The three main contributors to the lack of refactoring in this case were (surprise!?):


  1. Not enough interest from the business in investing in technical excellence ("Continuous attention to technical excellence and good design enhances agility") 
  2. Lack of sufficient acceptance/functional UI and API testing. No confidence in the tests meant developers were afraid to change production code in case they broke existing features.
  3. Root cause analysis of issues led to developers having to explain themselves, which taught them to stay under the radar and not change anything that was not strictly necessary.

How can we resolve these issues? 


  1. Educate the business on the consequences and trade-offs of not investing in technical excellence, especially refactoring in this case. For example, a lack of refactoring combined with continually adding new features is likely to result in lower-quality software (and hence more customer-facing issues the business has to report to the FCA) and less predictable, less frequent releases (and hence less value delivered to customers, allowing competitors to deliver that value instead).
  2. Build up a robust suite of tests that the developers trust — for example, a testing pyramid that would include unit tests, integration tests, BDD API acceptance tests, BDD UI acceptance tests and contract tests. 
  3. Create a culture of "The process is to blame, not individuals". Whenever something terrible happens, figure out how to change the development process rather than pointing at individuals. For example, instead of blaming a developer for introducing a bug, figure out what types of tests to start writing to avoid such bugs leaking into production in the future.

Transformations like these are a process of continuous improvement. Fortunately, every time we explained the consequences of their choices, this particular client would listen carefully, engage in a dialogue, help us understand their environment better and let us help them change where necessary. The end goal for everybody was always to deliver more value to customers faster.

What is your experience with lack of refactoring?

Thursday, 6 December 2018

Invitation for a group discussion: The meaning of Agile in 2018

Join Wojciech to talk about Agile in software testing at the European Software Testing Summit "Group discussion: The meaning of Agile in 2018" on the 12th of December 2018.

Wojciech will lead a group discussion with the audience about the state of Agile software testing and development, its history and where the industry is today. We will talk about where Agile comes from, the Agile Manifesto, how we practice it today, what works well and where we can improve.

Sunday, 21 October 2018

How much time will it take to build/virtualize a simple, medium or complex service?

Question

I'd like to know how much time it will take to build (virtualize) a simple, medium and complex service

Answer

A simple service, depending on your team's maturity and setup, will typically take a few minutes to a day.
A medium service can take from a few hours to a few days.
A complex service can take from a few days to a few weeks.

Question

What are all the activities involved from start to finish

Answer

That depends on your environment and team structure. As an example, I recommend having a look at http://blog.trafficparrot.com/2016/09/five-steps-on-how-to-implement-service.html

We would like to learn more about your organisation in order to help you deliver your service virtualization project. Please email us at consultation@trafficparrot.com to schedule a call where we can discuss your requirements.

Saturday, 20 October 2018

If-else dynamic API mock responses

Question

How do I generate dynamic API mock/stub responses and use if + else if + else in gRPC?

Answer

You can start with multiple equals blocks one after another. For example:

{{#equal (jsonPath request.body '$.itemCode') '100'}}
    { countryCode: 'ES' }
{{/equal}}
{{#equal (jsonPath request.body '$.itemCode') '200'}}
    { countryCode: 'IC' }
{{/equal}}
{{#equal (jsonPath request.body '$.itemCode') '300'}}
    { countryCode: 'TW' }
{{/equal}}
{{#equal (jsonPath request.body '$.itemCode') '400'}}
    { countryCode: 'CH' }
{{/equal}}

If that does not satisfy your needs, you can do nested equals blocks.

{{#equal thing 'something'}}
   {{#equal 'one' 'two'}}
       Yes!
   {{else}}
       No!
   {{/equal}}
{{else}}
   Not a thing!
{{/equal}}

Saturday, 6 October 2018

Sidecar Container Pattern in Service Virtualization and API mocking

Traffic Parrot has first-class support for the sidecar Docker container pattern.

Let's assume you work with a Jenkins Continuous Integration pipeline and test your microservices there to get to production fast. The more external dependencies the build pipeline has, the greater the risk that those dependencies will make the build unstable and fail for reasons unrelated to code quality. External dependencies should therefore be kept to a minimum. For example, run your database in Docker per build (on a Jenkins slave), not in a shared environment. This is similar to the "sidecar" container pattern recommended by Jenkins.

We recommend running your API mocks in the same process as your tests. If you decide to run your API mocks or virtual services in a separate process, though, run them in a Docker container. This way you take your automation to the next level: a fully automated and reproducible build pipeline.
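One way to apply the pattern is a per-build Docker Compose file that starts the mock next to the service under test. The image names, ports, service names and environment variable below are placeholders for illustration, not official artifacts:

```yaml
# Sketch of a per-build "sidecar" setup; all names and ports are
# placeholders — substitute your own images and port mappings.
version: "3"
services:
  api-mock:
    image: my-registry.example/traffic-parrot:latest   # placeholder image
    ports:
      - "8081:8081"   # mock HTTP port (placeholder)
      - "8082:8082"   # mock admin/UI port (placeholder)
  service-under-test:
    image: my-registry.example/my-microservice:latest  # placeholder image
    environment:
      DEPENDENCY_URL: http://api-mock:8081  # point the service at the mock
    depends_on:
      - api-mock
```

Because the mock lives and dies with the build, every pipeline run starts from a clean, reproducible state.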

Because of issues with dependencies, avoid using centralized instances running API mocks and virtual services outside your build pipeline. You can use them for exposing mocks to other teams and departments and for manual exploratory testing, but not in your automated CI builds.

Wednesday, 19 September 2018

Service Virtualization As Code (API Mocking As Code)

Traffic Parrot has first-class support for the service virtualization as code pattern (also called API mocking as code).

If you have any state or configuration in your IT infrastructure, the best solution in most cases is to keep it in a source control system like Git. For example, if you are running on Kubernetes and Docker, your whole infrastructure might be defined in a source control repository as Dockerfiles and Terraform Kubernetes configuration files. This is called Infrastructure as Code.

It is advisable to do the same with your API mocks and virtual services: store them all in a version control system such as Git. In the case of Traffic Parrot this is possible because all request-to-response mapping files are stored on the filesystem as JSON files. Alternatively, you can use the JUnit TrafficParrotRule directly in your JUnit tests. This way you take your automation to the next level: a fully automated and reproducible build pipeline.

Because of the issues that can arise with manual processes, avoid having API mocks and virtual services that are updated manually and never stored in Git or a similar source control system. Store all your API mocks and virtual services in a source control system.

Wednesday, 8 August 2018

Case study: a Fortune 500 E-Commerce Company Moves To A Microservice Architecture and uses Traffic Parrot

Software architects at a Fortune 500 retail e-commerce company purchased Traffic Parrot Enterprise to speed up their software delivery process and reduce the costs of development and testing. In this article, we will explore why they chose Traffic Parrot and how they have used it in the development and testing of their microservice architecture.

Executive summary:
  • Traffic Parrot is a tool used by developers to deliver high-quality microservices faster
  • Traffic Parrot is designed for autonomous product teams developing microservices; other service virtualization tools do not work well in those environments
  • Purchasing Traffic Parrot resulted in faster time to market and lower long-term maintenance costs

Value to the IT architects

"We are building a new platform based on microservices and needed a testing infrastructure that allowed all of our testers to work autonomously, using local machines to record their communication streams between their services and inside our pipelines in order to virtualize a service. Traffic Parrot, while not as fully featured as offerings from HP and IBM, offered the key capabilities that our teams needed such as HTTPS and JMS IBM MQ and a substantially smaller footprint that allowed it to run locally. We have been very pleased by Traffic Parrot's email support and responsiveness to our requests and would recommend them to other teams doing microservice-based architectures." - Chief Architect at a Fortune 500 company

Developers working for the company are building microservices that handle the backend processing of transactions. The e-commerce website is based on the IBM WebSphere eCommerce platform. The applications they work with communicate via HTTP JSON REST APIs, GRPC, ActiveMQ via JMS and IBM MQ via JMS. All microservices are deployed in OpenShift in Docker containers.

One of the challenges of moving to a microservice architecture is that there are many components that need testing. In most cases, microservice architectures require a Continuous Delivery environment with a Continuous Integration build pipeline for every microservice. That means microservices must be tested in isolation in automated builds. They will also be developed and tested on developers' desktops or laptops.

This particular company had the following technical challenges:
  • Need to create mockups for HTTP, GRPC, JMS ActiveMQ and JMS IBM MQ services
  • Need to deploy in Docker to OpenShift (on Red Hat Enterprise Linux Atomic Host)
  • Need to test in Bamboo
  • Need a graphical/web user interface in the mock
  • Need dynamic responses (scripting) in the mock
  • Need to choose JMS target queue name based on message content


Diagram 1: Production environment setup at the company

There are open-source tools on the market that can be deployed using Docker or run in CI. Unfortunately, they come with limited support for different protocols; you would have to use several tools and develop missing protocols in-house. They also often come with no UI or commercial support.

There are other commercial tools that support many protocols and provide advanced dynamic response templating. They also provide richer user interfaces that allow teams to develop more complex workflows without having to program in Java. Unfortunately, they typically require developers to use thick clients for creating the virtual services, which means everybody has to install them and pay for the licenses. They are also designed to be deployed in a central place managed by one team of administrators rather than to run on any laptop or inside an automated CI build. Also, the mock definition artifacts they produce, compared to open source alternatives, are not easy to version control in Git or Subversion.

It was important for the architects to give more autonomy to the development teams and not to have a centralized tool for creating the virtual services and API mockups. They did not want to create another bottleneck: a centralized administrator team.

They decided not to build the solution in-house, and instead to focus on the development of the e-commerce platform and use external tools like Traffic Parrot to reduce time to market and decrease long-term maintenance costs.

They also did not need all the features that the other bigger tools provide.

Traffic Parrot was chosen by the company because it gave them a mix of what the well-established commercial and open source tools provide. It has more functionality and protocols than the open source tools. It also has a more flexible deployment and licensing model than the other commercial tools. The architects wanted a tool that was simple to use and lightweight. Traffic Parrot satisfied all of those requirements.



Diagram 2: How the developers test microservices in isolation

How developers use Traffic Parrot at the company:
  • Develop the microservice tests in SoapUI on their laptop
  • Start Traffic Parrot on their laptop
  • Use the Traffic Parrot Web UI to record traffic (which could be many different protocols) and create mocks by running the SoapUI tests with the microservice that will hit Traffic Parrot in recording mode
  • Push the mock definitions (request-response mappings) to Git
How the tests are run in Bamboo at the company:
  • Checkout SoapUI tests and Traffic Parrot mocks from Git
  • Maven runs Traffic Parrot plugin to start the HTTP, ActiveMQ and IBM MQ mocks
  • Maven SoapUI plugin runs the tests that hit the microservice
  • The microservice communicates with Traffic Parrot, which is pretending to be the real HTTP, ActiveMQ, gRPC and IBM MQ dependencies
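The Bamboo steps above amount to wiring plugins into the Maven build lifecycle. A sketch of the shape such a pom.xml section might take follows; the mock-starter plugin coordinates and goal names are placeholders, not the real artifact names (the SoapUI plugin coordinates are the published SmartBear ones):

```xml
<!-- Sketch only: the mock-starter plugin coordinates are placeholders. -->
<build>
  <plugins>
    <!-- 1. Start the mocks before, and stop them after, integration tests. -->
    <plugin>
      <groupId>com.example</groupId>                      <!-- placeholder -->
      <artifactId>mock-starter-maven-plugin</artifactId>  <!-- placeholder -->
      <executions>
        <execution>
          <phase>pre-integration-test</phase>
          <goals><goal>start</goal></goals>
        </execution>
        <execution>
          <phase>post-integration-test</phase>
          <goals><goal>stop</goal></goals>
        </execution>
      </executions>
    </plugin>
    <!-- 2. Run the SoapUI tests against the microservice. -->
    <plugin>
      <groupId>com.smartbear.soapui</groupId>
      <artifactId>soapui-maven-plugin</artifactId>
      <executions>
        <execution>
          <phase>integration-test</phase>
          <goals><goal>test</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```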

Value to the business

The multinational e-commerce marketplace is competitive. Moving to a microservice architecture increased the productivity of the IT software department, decreasing costs and time to market for both new and existing products.

To make that transition smooth and fast, the company needed tools that are designed to be used in microservice architectures by a team doing Continuous Integration.

The established commercial offerings were too heavyweight to be used in the CI environment the company has in place, and their licensing costs were too high to justify the value of those tools.

The architects reached out to the Traffic Parrot team for a beta version with JMS support (JMS support is no longer in beta). They decided to purchase an unlimited user license for Traffic Parrot. Over a period of 6 weeks, the Traffic Parrot team delivered the missing functionality as per the requirements of the company architects.

It is difficult to attract and retain development talent. The company has chosen to pay for Traffic Parrot to get guaranteed support and avoid spending internal development time and energy on what would essentially be re-inventing the wheel. They focus their talents on additions to their core offering that increase the value to their end customers.

Summary

If your company is moving to a microservice architecture, you might need more than what the open source tools provide. If you need an API-mocking tool that supports many protocols, has a user-friendly UI, and is designed to be used with microservices and decentralized autonomous teams that follow CI, Agile and DevOps, consider using Traffic Parrot.

Next steps

  • Download Traffic Parrot trial
  • Contact Traffic Parrot to schedule a demo
  • If you have any requirements that are missing in Traffic Parrot, please contact us to discuss potential roadmap acceleration