Monday 25 November 2019

Traffic Parrot 5.7.3 released, what's new?

We have just released version 5.7.3. Here is a list of the changes that came with the release:

Features

  • gRPC mapping file names used when saving mappings via the UI can now be customized with the following template properties (an example rendered name is shown after this list):
    #
    # Template used for gRPC mapping file names
    #
    # Available properties:
    # {{ mapping.id }}
    # {{ mapping.package }}
    # {{ mapping.service }}
    # {{ mapping.method }}
    # {{ mapping.isSuccess }}
    # {{ mapping.status.name }}
    # {{ mapping.status.code }}
    #
    # Available commands:
    # {{ countGrpcMappings package=mapping.package service=mapping.service method=mapping.method success=true offset=1 format='%02d' }}
    trafficparrot.virtualservice.grpc.saved.mapping.file.name.template={{ mapping.package }}.{{ countGrpcMappings package=mapping.package offset=1 format='%02d' }}.{{ mapping.service }}-{{ mapping.method }}[{{ mapping.status.name }}]
    trafficparrot.virtualservice.grpc.saved.mapping.file.name.collision.suffix.template=-{{ mapping.id }}
                                            
  • When editing a gRPC mapping in the UI, you can now choose whether to keep the existing file name or generate a new one from the template properties when the mapping is saved
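As a rough illustration of the file name template above (the values here are hypothetical, borrowed from the SayHello example further down this page): for a mapping with package helloworld, service Greeter, method SayHello and status OK, saved as the first gRPC mapping for that package, the default template would render a name along the lines of:
    helloworld.01.Greeter-SayHello[OK]
If that name is already taken, the collision suffix template appends -{{ mapping.id }} to make it unique.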

Thursday 14 November 2019

How to return different responses for the same URL when mocking REST HTTP APIs in Traffic Parrot

Typical problems solved by service virtualization/mocking/stubbing

In the first chapter of our tutorial series, we have a section that covers some of the typical problems that a service virtualization/mocking/stubbing approach addresses.
Do you experience similar problems? Please email us at consultation@trafficparrot.com to schedule a call where we can discuss your requirements.
Here are several of the more typical problems testers experience while testing web applications, mobile applications, legacy systems and APIs, and how service virtualization can help solve them.
  • Too much pressure on the testing team and not enough time to test
    This is a common problem and service virtualization alone will not fix it, but it is an essential part of the solution. If you can utilize service virtualization in your builds and pipelines, you can have more confidence in your release cycle. The focus changes from trying to test everything to testing what could be at risk of having issues. These kinds of checks can give you a quick health status of the application, or a piece of it, without dedicating a lot of testing time. Quick feedback means quick wins overall.
  • Testers being blocked by other teams not delivering APIs on time, leaving them unable to test
    In this case, you can use virtual services to simulate the API that does not exist yet. If there are business requirements or documentation, you can create virtual services based on them and use those until the real service is available to test against. After you have verified that the system works in isolation according to your expectations, you can wait for the dependent API to be ready and do integration testing using the same tests. This is an example of applying TDD to integration testing.
  • Hard-to-reproduce production bugs resulting in frustration and lost time
    Test data and complex interactions with backend and third-party systems can make it difficult to reproduce bugs found in production. Setting up test data in multiple systems simultaneously, simulating error messages or simulating protocol issues are hard tasks that get in the way of recreating the environment in which the bug occurred. Fortunately, complex environments like these and many others can be simulated with virtual services, allowing for more flexibility and peace of mind when reproducing bugs.
  • Creating test data is complex and takes time, slowing the testing team down
    Often when you test old systems you have to wait for test data to be created; sometimes it is even impossible to create the test data you need (especially in third-party systems). Backend and third-party systems can also be hard to set up to return error responses, or to throttle network responses, on demand. The solution is to use virtual services that are under your control, so you can set up any type of test data on demand.
  • Scheduling time on test environments resulting in inefficient use of testers' time
    Large banks with old mainframe systems especially experience this issue: the test environments are costly to create, so a limited number of them is shared across many testing teams and projects. Those old APIs almost never change, but they are included in many test plans, so you have to schedule time on those environments to run your tests. You can simulate the APIs that never change with virtual services and test your application more often, without having to wait for environment availability. You are also less likely to impact other teams by burning through their test data and causing further delays.
  • Cost of third party test transactions can complicate testing efforts
    In most cases, those paid APIs are very simple in nature and very rarely change. The cost of accessing third party APIs can complicate performance testing efforts as well. Simulating them using virtual services can reduce the third party transaction costs.
  • Unreliable performance tests
    When you performance test many components simultaneously, it is often challenging to pinpoint the code change that caused a performance issue. The feedback loop is too large, and you are testing too many moving parts at one time. Ideally, you want to test small code releases in isolation, with a limited number of components. That way, if you see a degradation in performance, you can quickly point to the code changeset that was likely the cause. To run performance tests in isolation, you need virtual services. On top of test data setup capabilities, most service virtualization tools also let you define response times for the virtual services, so you can create a production-like environment (see the sketch after this list).
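To make the error-response and response-time points above more concrete, here is a minimal sketch of a virtual service that returns an error and a slow response on demand. Traffic Parrot is built on top of WireMock (see the 5.6.0 release notes below), so the sketch uses WireMock's Java DSL directly; the port, endpoints and payloads are made up for illustration, and in Traffic Parrot itself you would configure the equivalent behaviour through its UI or mapping files rather than Java code.
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class ErrorAndLatencyStubs {
        public static void main(String[] args) {
            WireMockServer server = new WireMockServer(8099); // hypothetical port
            server.start();

            // Simulate a backend error response on demand
            server.stubFor(get(urlEqualTo("/payments/submit"))
                    .willReturn(aResponse()
                            .withStatus(503)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"error\":\"backend temporarily unavailable\"}")));

            // Simulate a production-like response time of roughly 2 seconds
            server.stubFor(get(urlEqualTo("/payments/status"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"status\":\"OK\"}")
                            .withFixedDelay(2000)));
        }
    }
Pointing the application under test at a stub like this instead of the real backend lets you exercise error handling and latency behaviour without touching the real systems.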
When you select a tool to use for service virtualization, keep the problem you are trying to solve in mind. Ask yourself a few questions about the problem at hand and what service virtualization can do to solve it.
Here are a few sample questions which could jumpstart the process for you:
  • Are service calls to real services slowing automation efforts?
  • Is lack of access to third party or backend services limiting testing efforts?
  • Is using real services the best place to test new code?
  • What problems might we need to address to create and maintain a virtual service for testing?
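Coming back to the question in this post's title, returning different responses for the same URL is a typical use of the test data control described above. One way the WireMock engine underneath Traffic Parrot expresses this is scenarios (stateful stubs), where each call to the same URL can move the stub into a new state; another is to match on different request headers, query parameters or bodies. The sketch below is a minimal WireMock Java illustration of the scenario approach; the scenario name, URL, port and bodies are made up and are not Traffic Parrot specifics (see also the state management examples mentioned in the release notes below).
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    import com.github.tomakehurst.wiremock.WireMockServer;
    import com.github.tomakehurst.wiremock.stubbing.Scenario;

    public class DifferentResponsesForSameUrl {
        public static void main(String[] args) {
            WireMockServer server = new WireMockServer(8099); // hypothetical port
            server.start();

            // The first call to /order/42/status returns PROCESSING and advances the scenario state
            server.stubFor(get(urlEqualTo("/order/42/status"))
                    .inScenario("Order lifecycle")
                    .whenScenarioStateIs(Scenario.STARTED)
                    .willReturn(okJson("{\"status\":\"PROCESSING\"}"))
                    .willSetStateTo("Shipped"));

            // Later calls to the same URL return SHIPPED
            server.stubFor(get(urlEqualTo("/order/42/status"))
                    .inScenario("Order lifecycle")
                    .whenScenarioStateIs("Shipped")
                    .willReturn(okJson("{\"status\":\"SHIPPED\"}")));
        }
    }
The same pattern extends to any number of states, so repeated calls to a single URL can walk through a whole lifecycle of responses.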

Sunday 10 November 2019

Traffic Parrot 5.6.0 released, what's new?

We have just released version 5.6.0. Here is a list of the changes that came with the release:

Features

  • Unmatched gRPC requests now display the closest match in both the logs and the error response description. For example:
    UNKNOWN: Traffic Parrot Virtual Service: No responses matched the given request. See the Traffic Parrot logs for more details.
                                                   Request was not matched
                                                   =======================
    
    -----------------------------------------------------------------------------------------------------------------------
    | Closest stub                                             | Request                                                  |
    -----------------------------------------------------------------------------------------------------------------------
                                                               |
    helloworldgreetersayhello-38d8ac56-38fe-4a57-89b3-5af8368  |
    874a5.json                                                 |
                                                               |
    ANY                                                        | ANY
    helloworld.Greeter/SayHello                                | helloworld.Greeter/SayHello
                                                               |
    Protocol: GRPC                                             | Protocol: GRPC
                                                               |
    {                                                          | {                                                   <<<<< Body does not match [equalToJson]
      "inputName" : "example"                                  |   "inputName" : "not matching"
    }                                                          | }
                                                               |
    -----------------------------------------------------------------------------------------------------------------------
  • Upgraded WireMock to 2.25.0
  • Upgraded Proto compiler to 3.10.1

Fixes

  • Fixed support for reserved fields in gRPC proto files
  • Errors when compiling proto files are now reported in the logs
  • Swagger/OpenAPI example dates are now considered when performing an import

Changes

  • New toggle trafficparrot.virtualservice.grpc.check.mapping.schema.allow.missing.fields that can be set to true to allow gRPC mapping files to have fields missing compared to the proto schema. This allows for usages where fields are intentionally omitted so that they pick up the default value.
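    For example, the toggle can be enabled with the following line in your Traffic Parrot properties configuration (the exact file location depends on your installation):
    trafficparrot.virtualservice.grpc.check.mapping.schema.allow.missing.fields=true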
We also made the following changes in the 5.5.3 release:

Fixes

  • Fixed support for gRPC error mappings in the new static analysis check
  • Fixed support for gRPC dynamic response mappings in the new static analysis check
  • Fixed support for gRPC mappings with repeated fields in the new static analysis check
  • More consistent error reporting format for errors in dynamic responses, to make it clear which helper had a problem
  • Always print log file in the Maven/Gradle plugin console if Traffic Parrot fails to start

Features

  • New state management examples added to the examples project