Saturday, 26 August 2023

Traffic Parrot 5.41.7 released, what's new?

We have just released version 5.41.7. Here is a list of the changes that came with the release:

Fixes

  • Fixed support for modifying more than one header in a Handlebars script:
    {{ modifyResponse 'headerValue' 'header-name-1' 'header-value-1' }}
    {{ modifyResponse 'headerValue' 'header-name-2' 'header-value-2' }}
  • Improved system tray OS compatibility
  • Fixed a bug in the stop.sh script impacting Apple ARM-based M1/M2 processors
  • The UI log file viewer now supports viewing .gz and .zip log files

Changes

  • Logback can now be specified as a logging provider in jvm.args (see the example configuration after this list):
    trafficparrotserver.logging.properties.filename=trafficparrotserver.logback.xml
  • For organizations that have parted ways with all versions of Log4j, we now offer a distribution with those JAR files removed entirely
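
As a sketch, a minimal trafficparrotserver.logback.xml could contain a standard Logback configuration like the one below (the appender, log file name and pattern are illustrative, not the defaults shipped with Traffic Parrot):

    <configuration>
        <appender name="FILE" class="ch.qos.logback.core.FileAppender">
            <file>logs/trafficparrot.log</file>
            <encoder>
                <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
            </encoder>
        </appender>
        <root level="INFO">
            <appender-ref ref="FILE"/>
        </root>
    </configuration>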

Thursday, 10 August 2023

Traffic Parrot 5.41.0 released, what's new?

We have just released version 5.41.0. Here is a list of the changes that came with the release:

Features

  • Random mixed case strings of a fixed length can now be generated using
    {{ randomValue length=10 type='ALPHANUMERIC' mixedcase=true }}
  • Header values can now be set in Handlebars scripts using
    {{ modifyResponse 'headerValue' 'header-name' 'header-value' }}
  • New simplified OpenAPI mock import
    • Enable in trafficparrot.properties by setting:
      trafficparrot.openapi.import.mode=SELECT_RESPONSE_STATUS
      trafficparrot.openapi.skeletons.mode=SELECT_RESPONSE_STATUS
    • When OpenAPI response examples are present, they are used directly as mock responses
    • Otherwise, OpenAPI schema data types and field structures are used to generate a valid mock response
    • The request header x-traffic-parrot-select-response-status can be set to a numeric response code to select which response to return (see the example request after this list)
    • The default response returned is the success response
    • The request body is not included in matching, to simplify the response selection
    • The request URL is used for matching, including checking for mandatory path and query parameters
  • Role-based UI access that allows defining read-only UI users who have view permissions but not edit permissions
    • Enable in trafficparrot.properties by setting:
      trafficparrot.gui.security.mode=LOGIN_PROPERTIES
    • Define users in trafficparrot.gui.login.properties by setting:
      admin=password,traffic-parrot-gui-role,traffic-parrot-gui-edit-role
      readonly=password,traffic-parrot-gui-role
    • HTTP basic authentication popup prompts users to specify a username and password
  • Allow loading existing gRPC proto set files with a .set.bin extension in proto-bin (e.g. as output by gRPCurl)
  • Added a UI page with instructions on how to provide the required JAR files when working with IBM® MQ
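
As an example of the x-traffic-parrot-select-response-status header described above: assuming a mock imported from an OpenAPI specification exposes GET /pets and listens on port 8081 (both the path and the port are illustrative), a client can ask for the 404 response like this:

    curl -H "x-traffic-parrot-select-response-status: 404" http://localhost:8081/pets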

Fixes

  • Library upgrades to fix OWASP issues
  • {{ evaluate 'variable' }} now recognizes a variable that was set using {{#assign 'variable'}}value{{/assign}}
  • Fixed an issue with the default permissions of the lib/external folder on Linux
  • Fixed an issue with saving very large XML body mappings in the UI

Changes

  • Upgraded protoc from 3.21.9 to 3.23.4
  • Upgraded gRPC from 1.51.0 to 1.56.1

Monday, 7 August 2023

Aligning company objectives when planning a system simulation project

When planning a system simulation (service virtualization) project, aligning the project with the enterprise strategy is essential. For example, here is what I have been discussing recently with a prospect.

CIO's objective: High uptime.

Director of QA objective: Quality product (feeds into high uptime enterprise objective)

Director of QA key result: Perform root cause analysis on all 2023 incidents, resulting in lower product downtime (allows a targeted approach to resolving uptime issues)

Director of QA key result: Increase automated test coverage by 30% to prevent root cause issues from happening again

Director of QA key result: Create three system simulators (virtual services) to allow for increased automated test coverage for user journeys X, Y and Z

Result: The Director of QA can communicate to the CIO that selected categories of issues causing downtime will be mitigated going forward, increasing expected uptime by X.


If you want to learn about options for your specific situation, please contact us at https://trafficparrot.com/contact.html

Monday, 31 July 2023

Case study: estimating the cost of in-house development of mocks

We had a call yesterday with an architect working for a global aerospace enterprise. We discussed the pros and cons of building a mocking tool in-house vs. purchasing off-the-shelf.

He estimated his developer would need around ten days to create the basic version of the mock for one service and then roughly two days a month to maintain it. So, this tool project would take up 34 days of the developer's time over the next 12 months. That is roughly a €9,000 ($10,000) cost in the first 12 months (€63k average developer salary / 240 working days a year × 34 days for this project).

This is a very conservative estimate, in my experience, but it's a helpful start.

On top of that, I suggested a back-of-the-envelope calculation: an employee generates, on average, $1,550 in revenue per day for that company ($26b yearly revenue / 70k employees / 240 working days a year), so 34 days are worth roughly $53,000.

So the cost of developing in-house based on these estimates is MAX($53,000, $10,000) = $53,000 for the first 12 months.

So, the ROI of using Traffic Parrot instead of developing in-house is high, even just for the first service he wanted to mock.

On top of that, Traffic Parrot might be considered a low-risk option for the company, as our tool is already working: it has been tested and used by companies globally since 2016. The company's developer, however, would be starting from scratch, fielding "can it do XYZ" requests from other developers week after week.

Off-the-shelf can also be future-proof. For example, our clients use our tool in environments ranging from local machines to CI/CD pipelines and shared environments, across many operating systems. Over the years, we have developed features that improve the developer experience in all of those cases; when developing in-house, you might need those features in the future even if you do not see the need today.

I also suggested pulling the Director of Product into the discussion. He might have more visibility into what those 34 developer days cost in terms of product revenue (I believe it is at least 3-5x the average-employee figure I used above). He might also have a budget to purchase tools if the line manager does not.

Ultimately, it is the prospect's call whether to develop in-house or use an off-the-shelf tool. We are here to help them make an informed decision by walking through the items above.

If you want to learn about other options for your specific situation, please contact us at https://trafficparrot.com/contact.html

Thursday, 1 June 2023

How do system simulators (service virtualization) and real systems work together?

We have been asked this week by a Director of QA working for a South American energy company, "How do system simulators and real systems work together?".

Several times, different prospects have asked this question, so we had diagrams ready to show.

The current situation in your test and development environment is most likely that you have problems with third-party or backend systems (shown in red on the diagram) that block your regression testing and new feature development (also shown in red).

Depending on your specific situation, several options are available to add API and system simulators to your environments. One of the popular options among directors asking this question is introducing a new test environment, the "simulated" one. If you do that, most of your regression and new feature testing can be done with simulators (green), allowing for faster time to market for new features and lower costs of introducing changes to your product.

Here is a sample ROI model explaining the benefits of QA using API and system simulators.

If you want to learn about other options for your specific situation, please contact us at https://trafficparrot.com/contact.html

Traffic Parrot 5.40.2 released, what's new?

We have just released version 5.40.2. Here is a list of the changes that came with the release:

Features

  • Added Couchbase data source support:
    {{ dataSource 'couchbase.db' 'INSERT INTO PERSON(KEY, VALUE) VALUES ("$id", {"id" : $id,"name" : $name})' id=1000 name='some-name' syntax='N1QL' }}
    {{ dataSource 'couchbase.db' 'SELECT name FROM PERSON USE KEYS "$id"' id=1000 single=true syntax='N1QL' }}
    {{ dataSource 'couchbase.db' 'INSERT INTO PERSON(KEY, VALUE) VALUES ("$id", $object)' id=1000 object=example syntax='N1QL' }}
  • New helper to iterate a certain number of times:
    {{#times 10}}{{@index}}{{#unless @last}},{{/unless}}{{/times}}
  • New HTTP mapping bulk delete API (see the curl example after this list):
    DELETE http://localhost:8080/api/http/mappings/bulk/UUID1,UUID2,UUID3
  • New gRPC mapping bulk delete API:
    DELETE http://localhost:8080/api/grpc/mappings/bulk/UUID1,UUID2,UUID3
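
As an example of the bulk delete APIs above, assuming the default port 8080, the HTTP mapping bulk delete endpoint can be called with curl (the UUIDs are placeholders for real mapping ids):

    curl -X DELETE "http://localhost:8080/api/http/mappings/bulk/UUID1,UUID2,UUID3"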

Fixes

  • Library upgrades to fix OWASP issues

Changes

  • Upgraded bundled JRE from 8u362 to 8u372
  • Helper {{#equal a b }}true{{else}}false{{/equal}} can now be written inline as {{ equal a b }}
  • When using Handlebars partial files, it is now possible to use a file extension other than .hbs; for example, {{>example.txt}} will read from the file example.txt

12 Microservices Testing Techniques for Dependency Control

Friday, 5 May 2023

Traffic Parrot 5.39.5 released, what's new?

We have just released version 5.39.5. Here is a list of the changes that came with the release:

Features

  • Multiple gRPC servers can now be recorded at the same time, for example:
    com.example.Service/method -> host1:port1
    com.example.packageA.* -> host2:port2
    com.example.packageB.* -> host3:port3
  • Added support for Java object Serializable Proto messages sent over JMS connections, e.g. RabbitMQ RPCs
  • Added support for sending multiple types of request/response message (e.g. two different Java Proto request/response pairs) on a single request/response queue pair
  • Added support for RabbitMQ contentType header
  • Added PostgreSQL example connection to database-connections.json
  • JSON request/response field data in mappings can now be externalized to/from MappingData.csv for an alternative way to edit field data across multiple mappings in a table format
  • Added a Tools UI menu with links to a simple file editor and mapping field data externalization

Fixes

  • JMS advanced request and response types are now retained in the UI form when starting/stopping recording
  • Improved HTTP recording UI error message when there is a problem fetching recording status in the UI
  • Increased the HTTP recording UI status timeout to cope better with slow browser connections
  • If there is a problem decoding a recorded JMS message, Traffic Parrot now falls back on a raw bytes recording representation
  • Error messages now mention which connection ids were present when a JMS connection id is not found
  • Removed deprecated "matches script" option from the request body matcher dropdown (replaced by dedicated fields)

Changes

  • Upgraded bundled JRE from 8u352 to 8u362
  • When matching JMS requests, mappings for Java object Serializable requests now must have the bodyType specified
  • The default UI HTTPS certificate user-interface-https.jks was renewed

Saturday, 29 April 2023

What is the difference between API-first, design-first and code-first APIs?

What does it mean to follow the API-first approach? It means that APIs are first-class citizens that can be products the company offers its customers; they are critical business assets.

What is the API design-first approach? It means working closely with the client, product, architecture, testing and development teams to design the services available in the API and their data formats before actually starting to code them. With design-first, the API schema can be defined up front in a format like OpenAPI for HTTP APIs, Protobuf for gRPC, or AsyncAPI for RabbitMQ, Kafka and other asynchronous systems.
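
For illustration, a minimal, hypothetical OpenAPI 3.0 definition of the kind a design-first team might agree on before writing any code could look like the following (the endpoint and fields are invented for this example):

    openapi: 3.0.3
    info:
      title: Orders API
      version: 1.0.0
    paths:
      /orders/{orderId}:
        get:
          summary: Get an order by id
          parameters:
            - name: orderId
              in: path
              required: true
              schema:
                type: string
          responses:
            '200':
              description: The order was found
              content:
                application/json:
                  schema:
                    type: object
                    properties:
                      orderId:
                        type: string
                      status:
                        type: string
                        example: SHIPPED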

It contrasts with code-first APIs, where you dive straight into coding and creating the API without any upfront specification.

The API design-first approach is proven to parallelise teamwork and get products to customers much faster. Design-first fits into the API-first approach adopted by companies that treat APIs as first-class citizens critical to their business success.

Difference between API-first, design-first and code-first APIs

We can help you with derisking your API-first and design-first API initiatives. Reach out to us and talk with our development and QA experts.

Sunday, 9 April 2023

Traffic Parrot 5.38.1 released, what's new?

We have just released version 5.38.1. Here is a list of the changes that came with the release:

Features

  • Added request/response recording Protobuf type fields to the JMS UI

Monday, 3 April 2023

Traffic Parrot 5.38.0 released, what's new?

We have just released version 5.38.0. Here is a list of the changes that came with the release:

Features

  • RabbitMQ JMS now supports record/replay of Protobuf messages using their JSON representation
  • Added new dedicated endpoints that can be used for Docker and OpenShift health check probes (see the example health check after this list):
    • http://localhost:8080/api/health
    • http://localhost:8081/__admin/health
  • The trafficparrot.license comment line is now printed in the logs to help clarify which license is being used
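
As a sketch of how the health endpoints above could be wired into a container health check, a Docker HEALTHCHECK instruction might look like this (the interval, timeout and retry values are illustrative, and curl is assumed to be available in the image):

    HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost:8081/__admin/health || exit 1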

Monday, 27 March 2023

Traffic Parrot 5.37.8 released, what's new?

We have just released version 5.37.8. Here is a list of the changes that came with the release:

Fixes

  • Added support for protoc on Apple ARM-based M1/M2 processors running inside Docker

Roll-out Traffic Parrot across the whole enterprise

A number of Traffic Parrot customers use it across their whole enterprise as the primary and recommended API, system and service simulation tool.

We often get asked what a large-scale rollout like that would look like. We have created a high-level project overview that captures the process we have seen work well for our existing customers: API and Service Simulation Tool Implementation Project Plan

Tuesday, 21 March 2023

How to deploy Traffic Parrot in OpenShift

A significant number of Traffic Parrot customers run our tool in OpenShift. This blog post discusses a template process you can follow to do the same. 

Here’s a quick terminology review to get us on the same page:

  • docker = used to build, push and run container images
  • podman = a docker replacement that OpenShift recommends using (developed by Red Hat), almost identical to standard docker
  • oc = the OpenShift command line interface, used to deploy and configure OpenShift services etc.
  • helm = tool used to package and deploy OpenShift applications automatically
  • TPLS = Traffic Parrot License Server
  • TP = Traffic Parrot

If the end goal is to deploy Traffic Parrot in OpenShift, you need the following:

  • TPLS Docker image built and pushed to OpenShift Docker registry
  • TPLS Helm Chart built and deployed to OpenShift Cluster
  • TP Docker image built and pushed to OpenShift Docker registry
  • TP Helm Chart built and deployed to OpenShift Cluster

There are several steps we need to go through to deploy everything in OpenShift:

  1. (One-time learning) Discuss the industry standard Docker and OpenShift workflow
  2. (One-time setup) Establish OpenShift cluster connectivity from your localhost
  3. (One-time setup) Establish Docker registry connectivity from your localhost
  4. (One-time learning) Discuss the client-specific OpenShift workflow (there may be additional steps beyond the industry standard ones)
  5. (One-time setup) Create a TPLS Docker image using podman
  6. (One-time test) Deploy TPLS in OpenShift using oc and test it works as expected
  7. (One-time test) Test that localhost TP can connect to OpenShift TPLS
  8. (One-time setup) Create a TP Docker image using podman
  9. (One-time test) Deploy TP and TPLS in OpenShift using oc and test it works as expected
  10. (One-time setup) Create a pipeline to deploy TPLS in OpenShift from GitHub sources
  11. (One-time setup) Create a pipeline to test TP API mocks and deploy TP in OpenShift from GitHub sources
  12. (One-time setup) Create a client-internal Confluence page for API mock developers on how to make changes to Traffic Parrot client API mocks and deploy them using the automated pipelines
  13. (One-time test) Test TP API mocks with a selected use case
  14. (Ongoing process) Develop TP API mocks locally, then commit configuration to GitHub (which will be deployed by a pipeline automatically to OpenShift)
  15. (Ongoing process) Run TP pipeline to test TP API mocks and deploy to OpenShift automatically with rolling releases for partners

We can help you with all of those steps by providing the standard scripts our customers use.
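
For orientation, a minimal, hypothetical command sequence for steps 2, 3, 8 and 9 (and analogously for the TPLS image in steps 5 and 6) could look like the following; the cluster URL, registry route, project name, chart path and image tag are all placeholders:

    # Steps 2 and 3: log in to the OpenShift cluster and its internal image registry
    oc login https://api.openshift.example.com:6443 --token=<token>
    podman login -u $(oc whoami) -p $(oc whoami -t) default-route-openshift-image-registry.apps.example.com

    # Step 8: build, tag and push the TP image (step 5 is the same idea for the TPLS image)
    podman build -t traffic-parrot:5.41.7 .
    podman tag traffic-parrot:5.41.7 default-route-openshift-image-registry.apps.example.com/my-project/traffic-parrot:5.41.7
    podman push default-route-openshift-image-registry.apps.example.com/my-project/traffic-parrot:5.41.7

    # Steps 6 and 9: deploy to the cluster, either via the Helm chart or oc apply with rendered manifests
    helm install traffic-parrot ./traffic-parrot-chart --namespace my-project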