Sunday, 1 December 2019

Traffic Parrot 5.8.0 released, what's new?

We have just released version 5.8.0. Here is a list of the changes that came with the release:

Features

  • Improvements to the gRPC mappings list user interface:
    • Added sorting to columns
    • Added a button to pause auto reload
    • Added a button to select which columns to display
    • Added two new columns which are hidden by default: id and name

Fixes

  • The gRPC mappings list auto reload now pauses while text is highlighted, so that text can be highlighted and copied without the auto reload clearing the selection
  • Allow spaces in gRPC mapping file name template

Monday, 25 November 2019

Traffic Parrot 5.7.3 released, what's new?

We have just released version 5.7.3. Here is a list of the changes that came with the release:

Features

  • The gRPC mapping file name template used when saving mappings via the UI can now be customized with the following properties:
    #
    # Template used for gRPC mapping file names
    #
    # Available properties:
    # {{ mapping.id }}
    # {{ mapping.package }}
    # {{ mapping.service }}
    # {{ mapping.method }}
    # {{ mapping.isSuccess }}
    # {{ mapping.status.name }}
    # {{ mapping.status.code }}
    #
    # Available commands:
    # {{ countGrpcMappings package=mapping.package service=mapping.service method=mapping.method success=true offset=1 format='%02d' }}
    trafficparrot.virtualservice.grpc.saved.mapping.file.name.template={{ mapping.package }}.{{ countGrpcMappings package=mapping.package offset=1 format='%02d' }}.{{ mapping.service }}-{{ mapping.method }}[{{ mapping.status.name }}]
    trafficparrot.virtualservice.grpc.saved.mapping.file.name.collision.suffix.template=-{{ mapping.id }}
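    For illustration, assuming the first mapping saved in a package for helloworld.Greeter/SayHello with an OK status, the default template above would produce a file name along the following lines (the .json extension is an assumption, based on the mapping file names that appear in the logs elsewhere in these notes):

        helloworld.01.Greeter-SayHello[OK].json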
                                            
  • When editing a gRPC mapping in the UI, you can now choose whether to keep the existing file name or generate a new one from the template properties when the mapping is saved

Thursday, 14 November 2019

How to return different responses for the same URL when mocking REST HTTP APIs in Traffic Parrot

Typical problems solved by service virtualization/mocking/stubbing

In the first chapter of our tutorial series, we have a section that covers some of the typical problems that a service virtualization/mocking/stubbing approach addresses.
Do you experience similar problems? Please email us at consultation@trafficparrot.com to schedule a call where we can discuss your requirements.
Here are several of the typical problems testers experience while testing web applications, mobile applications, legacy systems, and APIs, and how service virtualization can help solve them.
  • Too much pressure on the testing team and not enough time to test
    This is a common problem, and service virtualization alone will not fix it, but it is an essential part of the solution. If you can utilize service virtualization in your builds and pipelines, you can have more confidence in your release cycle. The focus changes from trying to test everything to testing what could be at risk of having issues. These kinds of checks can give you a quick health status of the application, or a piece of it, without dedicating a lot of testing time. Quick feedback means quick wins overall.
  • Testers being blocked by other teams not delivering APIs on time, leaving them unable to test
    In this case, you can use virtual services to simulate the non-existing API. If there are business requirements or documentation, you can create virtual services based on them to use until the real service is available to test. After you verify the system works in isolation according to your expectations, you can wait for the dependent API to be ready and do integration testing using the same tests. This is an example of using TDD for integration testing.
  • Hard-to-reproduce production bugs resulting in frustration and lost time
    Test data and complex interactions with backend and third party systems can cause issues when reproducing bugs found in production. Setting up test data in multiple systems simultaneously, simulating error messages, or simulating protocol issues are hard-to-accomplish tasks that get in the way of reproducing the environment where the bug was found. Fortunately, complex environments like these and many others can be simulated with virtual services, allowing for more flexibility and peace of mind when reproducing bugs.
  • Creating test data is complex and takes time, slowing the testing team down
    Often when you test old systems you have to wait for test data to be created; sometimes it is even impossible to create the test data you need (especially in third party systems). Also, backend and third party systems can be hard to set up to return error responses, or to throttle network responses, on demand. The solution is to use virtual services, which are under your control, so you can set up any type of test data on demand.
  • Scheduling time on test environments resulting in inefficient use of testers' time
    Large banks with old mainframe systems in particular experience this issue: test environments are costly to create, so a limited number of them is shared across many testing teams and projects. Those old APIs almost never change, but they are included in many test plans, so you have to schedule time on those environments to run your tests. You can simulate the API that never changes with virtual services and test your application under test more often, without having to wait for environment availability. You are also less likely to impact other teams by burning through their test data, which would cause further delays.
  • Cost of third party test transactions can complicate testing efforts
    Paid third party APIs are in most cases very simple in nature and very rarely change. The cost of accessing them can complicate performance testing efforts as well. Simulating them using virtual services can reduce the third party transaction costs.
  • Unreliable performance tests
    When you performance test many components simultaneously, it is often challenging to pinpoint the code change that caused a performance issue. The feedback loop is too large, and you are testing too many moving parts at one time. You ideally want to test small code releases in isolation and a limited number of components. That will ensure that if you see a degradation in performance, you can quickly point to the code changeset that was the likely cause. To run performance tests in isolation, you need virtual services. Most service virtualization tools, on top of test data setup capabilities, let you define response times for the virtual services so you can create a production-like environment, as sketched below.
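    For example, a minimal sketch of a WireMock-style JSON mapping with a defined response time (Traffic Parrot's HTTP virtual services are built on WireMock; the URL and body are placeholder values):

      {
        "request": {
          "method": "GET",
          "url": "/accounts/123"
        },
        "response": {
          "status": 200,
          "body": "{ \"balance\": 100 }",
          "fixedDelayMilliseconds": 1500
        }
      }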
When you select a tool to use for service virtualization, keep the problem you are trying to solve in mind. Ask yourself a few questions about the problem at hand and what service virtualization can do to solve it.
Here are a few sample questions which could jumpstart the process for you:
  • Are service calls to real services slowing automation efforts?
  • Is lack of access to third party or backend services limiting testing efforts?
  • Are real services the best place to test new code?
  • What problems might we need to address to create and maintain a virtual service for testing?
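Coming back to this post's title: returning different responses for the same URL usually comes down to matching on something other than the URL, such as a header, query parameter, or request body. Here is a minimal sketch using two more WireMock-style JSON mappings (the URL, header name, and bodies are placeholder values):

    {
      "request": {
        "method": "GET",
        "url": "/customers/1",
        "headers": {
          "X-Test-Scenario": { "equalTo": "happy-path" }
        }
      },
      "response": {
        "status": 200,
        "body": "{ \"name\": \"Jane\" }"
      }
    }

    {
      "request": {
        "method": "GET",
        "url": "/customers/1",
        "headers": {
          "X-Test-Scenario": { "equalTo": "error" }
        }
      },
      "response": {
        "status": 500,
        "body": "{ \"error\": \"simulated failure\" }"
      }
    }

A request to /customers/1 with the X-Test-Scenario: happy-path header gets the 200 response, while the same URL with X-Test-Scenario: error gets the 500 response.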

Sunday, 10 November 2019

Traffic Parrot 5.6.0 released, what's new?

We have just released version 5.6.0. Here is a list of the changes that came with the release:

Features

  • gRPC unmatched requests now display the closest match in both the logs and error response description. For example:
    UNKNOWN: Traffic Parrot Virtual Service: No responses matched the given request. See the Traffic Parrot logs for more details.
                                                   Request was not matched
                                                   =======================
    
    -----------------------------------------------------------------------------------------------------------------------
    | Closest stub                                             | Request                                                  |
    -----------------------------------------------------------------------------------------------------------------------
                                                               |
    helloworldgreetersayhello-38d8ac56-38fe-4a57-89b3-5af8368  |
    874a5.json                                                 |
                                                               |
    ANY                                                        | ANY
    helloworld.Greeter/SayHello                                | helloworld.Greeter/SayHello
                                                               |
    Protocol: GRPC                                             | Protocol: GRPC
                                                               |
    {                                                          | {                                                   <<<<< Body does not match [equalToJson]
      "inputName" : "example"                                  |   "inputName" : "not matching"
    }                                                          | }
                                                               |
    -----------------------------------------------------------------------------------------------------------------------
  • Upgraded WireMock to 2.25.0
  • Upgraded Proto compiler to 3.10.1

Fixes

  • Fixed support for reserved fields in gRPC proto files
  • Errors when compiling proto files are now reported in the logs
  • Swagger/OpenAPI example dates are now considered when performing an import

Changes

  • New toggle trafficparrot.virtualservice.grpc.check.mapping.schema.allow.missing.fields that can be set to true to allow gRPC mapping files to have fields missing when compared to the proto schema. This allows for mappings where fields are intentionally omitted so that the protobuf default values are used.
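    For example, given the following hypothetical proto schema, a mapping response body may intentionally omit the count field so that the protobuf default of 0 is used; with the toggle set to true, the schema check accepts such a mapping:

        // hypothetical proto schema
        message Greeting {
          string name = 1;
          int32 count = 2;
        }

        // mapping response body that intentionally omits "count"
        {
          "name": "example"
        }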
We also made the following changes in the 5.5.3 release:

Fixes

  • Fixed support for gRPC error mappings in the new static analysis check
  • Fixed support for gRPC dynamic response mappings in the new static analysis check
  • Fixed support for gRPC mappings with repeated fields in the new static analysis check
  • More consistent error reporting format for errors in dynamic responses, to make it clear which helper had a problem
  • Always print log file in the Maven/Gradle plugin console if Traffic Parrot fails to start

Features

  • New state management examples added to the examples project

Wednesday, 16 October 2019

Traffic Parrot 5.5.1 released, what's new?

We have just released version 5.5.1. Here is a list of the changes that came with the release:

Features

  • Traffic Parrot can now check that gRPC mapping files match proto schema on startup. You can enable this check to prevent startup if mapping files are out of date when compared to the proto schema.
    • The following property enables the feature: trafficparrot.virtualservice.grpc.check.mapping.schema.on.startup=true
    • Currently supports checking gRPC mapping equalToJson request matchers
    • Currently supports checking gRPC mapping responses
    • This feature is currently in public beta and is subject to change
    • Please send us your feedback at feedback@trafficparrot.com and help us improve this feature

Changes

  • Traffic Parrot now only requires a single port for the UI:
    • trafficparrot.gui.http.port=8080
    The following ports are no longer used by the UI:
    • trafficparrot.virtualservice.http.management.port=8083
    • trafficparrot.virtualservice.jms.management.port=9093
    • trafficparrot.virtualservice.http.port=8081
We also made the following fix in the 5.4.5 release:

Fixes

  • gRPC mapping files will now be written with the correct status code of 0 to represent a successful response status. gRPC mapping files with status code 200 will continue to be interpreted as status code 0 to preserve backwards compatibility.

Thursday, 10 October 2019

How can I expose multiple port numbers for stubs?

"How can I expose multiple port numbers for stubs, so that each downstream component has an individual port number?"

In Traffic Parrot, you have only one port per Traffic Parrot mock instance. If you would like to use separate ports, you will need to start one Traffic Parrot instance per port.
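For example, a sketch of running two instances side by side, assuming one unpacked Traffic Parrot copy per instance, each with its own trafficparrot.properties (the port numbers are placeholder values; the property names are the ones used elsewhere in these notes):

    # instance 1: trafficparrot.properties
    trafficparrot.gui.http.port=8080
    trafficparrot.virtualservice.http.port=8081

    # instance 2: trafficparrot.properties
    trafficparrot.gui.http.port=9080
    trafficparrot.virtualservice.http.port=9081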

Let’s discuss the problem you are trying to solve by using multiple ports; please email us at help@trafficparrot.com

We also recommend reading "3 ways to deploy and share virtual services and API mocks"

Sunday, 15 September 2019

What is the difference between JMS IBM MQ and Native IBM MQ?

Traffic Parrot allows mocking and service virtualization of IBM MQ via both the Java JMS API and the native IBM MQ Java API. What does that mean?

First, let's start with what an API is. In simple terms, an API is a specification that describes how two or more systems will communicate with each other. For example, how your application will communicate with a third party service.

If you are using IBM MQ as your message broker, your application will use an API to put and read messages to and from the IBM MQ queues.

There are several ways your application can put and read messages to and from IBM MQ queues.

For example, if your application is written in Java, it can use the IBM MQ classes for JMS or the IBM MQ classes for Java. So, you would be using either the JMS API or the (Native) MQ Java API.
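To make the distinction concrete, here is a minimal Java sketch that puts the same message on a queue using each API. This is an illustration only: the host, port, channel, queue manager, and queue names are placeholder values, error handling is kept to a minimum, and the classes used are the standard IBM MQ client classes rather than anything Traffic Parrot specific.

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import com.ibm.mq.MQEnvironment;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;
    import com.ibm.mq.jms.MQQueueConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class PutMessageBothApis {

        // IBM MQ classes for JMS: the application codes against the generic JMS API
        static void putViaJms() throws Exception {
            MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
            factory.setHostName("localhost");
            factory.setPort(1414);
            factory.setChannel("DEV.APP.SVRCONN");
            factory.setQueueManager("QM1");
            factory.setTransportType(WMQConstants.WMQ_CM_CLIENT);

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("DEV.QUEUE.1");
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("hello"));
            } finally {
                connection.close();
            }
        }

        // IBM MQ classes for Java: the application codes against the native MQ objects
        static void putViaNativeApi() throws Exception {
            MQEnvironment.hostname = "localhost";
            MQEnvironment.port = 1414;
            MQEnvironment.channel = "DEV.APP.SVRCONN";

            MQQueueManager queueManager = new MQQueueManager("QM1");
            try {
                MQQueue queue = queueManager.accessQueue("DEV.QUEUE.1", CMQC.MQOO_OUTPUT);
                MQMessage message = new MQMessage();
                message.writeString("hello");
                queue.put(message, new MQPutMessageOptions());
                queue.close();
            } finally {
                queueManager.disconnect();
            }
        }
    }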

Both are Java APIs to the same underlying interface to the MQI and share the majority of features. So, it should not matter which API you use for mocking and service virtualization, as they should be compatible; you should be able to swap them around with no impact in many situations. Just remember that if you use features that are only available in the IBM MQ classes for JMS, Traffic Parrot will not be able to support them if you use Traffic Parrot Native IBM MQ.

The recommended way to mock and virtualize IBM MQ with Traffic Parrot is to use the same API your application is using.

If your application is using the IBM MQ classes for JMS, we advise you to use Traffic Parrot JMS IBM MQ. If your application is using the IBM MQ classes for Java, then we recommend Traffic Parrot Native IBM MQ. If your application is using both APIs, we recommend using both in Traffic Parrot as well.

Monday, 26 August 2019

Mocking SOAP services

"Can we mock soap services?" - website visitor, USA.

Yes! Here is a tutorial on how to do that:

Monday, 19 August 2019

Twitter @java: Testing Microservices - Overview of 12 Useful Techniques - Part 1 with Liam Williams

Thanks to @java for the recognition by tweeting Liam and Wojciech's article!


"The combination of a microservice architectural style and container-based infrastructure requires a testing strategy that is compatible with this brave new world. A microservice architecture relies more on over-the-wire (remote) dependencies and less on in-process components, and your testing strategy and test environments need to adapt to these changes." - Wojciech Bulaty and Liam Williams for InfoQ

Read more: https://www.infoq.com/articles/twelve-testing-techniques-microservices-intro/

Trending article: 12 useful techniques when testing microservices

Liam and Wojciech's article "Testing Microservices: Overview of 12 Useful Techniques - Part 1" is trending at position 1 on InfoQ!


"When working with microservices, you have more options because microservices are deployed typically in environments that use containers like Docker. In microservice architectures, your teams are likely to use a wider variety of testing techniques. Also, since microservices communicate more over the wire, you need to test the impact of network connections more thoroughly. Using tools and techniques that better fit the new architecture can allow for faster time to market, less cost, and less risk." - Wojciech Bulaty and Liam Williams for InfoQ.

Read more: https://www.infoq.com/articles/twelve-testing-techniques-microservices-intro/

Saturday, 17 August 2019

Traffic Parrot 5.4.0 released, what's new?

We have just released version 5.4.0. Here is a list of the changes that came with the release:

Features

  • Added a JMS REST API to count the number of requests matching given criteria using POST /api/jms/requests/count and to reset the count using DELETE /api/jms/requests (sketched after this list)
  • See the OpenAPI documentation or import the Postman workspace for more information on the API usage
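  For example, a sketch of the two calls as raw HTTP requests (the host, port, and the criteria field shown are assumptions; the exact request schema is defined in the OpenAPI documentation mentioned above):

    POST /api/jms/requests/count HTTP/1.1
    Host: localhost:8080
    Content-Type: application/json

    { "destinationName": "DEV.QUEUE.1" }

    DELETE /api/jms/requests HTTP/1.1
    Host: localhost:8080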

Fixes

  • Fixed usage of {{ anyNumber }} to match numbers within JSON string fields
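    For example (the field name is illustrative), a request body matcher like the first line below now also matches when the number arrives inside a JSON string field, as in the second line:

        matcher body:           { "age": "{{ anyNumber }}" }
        matching request body:  { "age": "42" }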