Thursday 29 December 2022

Tech Excellence and Traffic Parrot

We are excited to announce that Traffic Parrot is part of the Tech Excellence network.

Wojciech will speak at a Tech Excellence event in May 2023, sharing his thoughts on "Testing Microservices: 12 Black-Box Testing Techniques". Stay tuned for more details by subscribing to our newsletter: leave your email address in the field on the right!


Tech Excellence

Learn how to deliver quality software faster. Our vision is to raise the bar of technical excellence across the world.



Optivem

Better Software Faster: Helping backend development teams deliver quality software faster with TDD, Clean Architecture & Clean Code.



Monday 19 December 2022

Traffic Parrot 5.35.8 released, what's new?

We have just released version 5.35.8. Here is a list of the changes that came with the release:

Features

  • Added new "does not contain" request matcher support to the UI

Fixes

  • API now allows cross-origin requests
  • Trim whitespace from the JSONPath request matcher to avoid confusing "not matched" issues
  • Fixed an issue exporting/importing HTTP mappings ZIP between different scenarios
  • Fixed a UI issue where gRPC mappings were saved with empty webhooks
  • Library upgrades to fix OWASP issues

Changes

  • UI now makes it clear that HTTP record/proxy/passthrough is not supported in a non-default scenario yet

Tuesday 13 December 2022

How do I ignore JSON array order during request matching?

When matching JSON requests, sometimes the array order is unpredictable, e.g. when the array is used to represent a set of unordered objects.

{
  "exampleId": 1
  "exampleRoot": {
    "exampleArray": [
      {
        "id": 1
      },
      {
        "id": 2
      }
    ]
  }
}

To match this type of request in Traffic Parrot, there are several options; here are some of them:

  • matchesJson matcher with {{anyElements}} wildcard token
    {
      "exampleId": 1,
      "exampleRoot": {
        "exampleArray": "{{anyElements}}"
      }
    }
  • matchesJsonPath matcher, which uses standard JSONPath notation (see the full mapping sketch after this list)
    $[?(@.exampleId == 1 && '1' in @.exampleRoot.exampleArray[*].id && '2' in @.exampleRoot.exampleArray[*].id)]
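
For reference, here is how the matchesJsonPath matcher above could be embedded in a complete HTTP mapping (a minimal sketch in the WireMock-style JSON mapping format that Traffic Parrot uses for HTTP mappings; the method, URL and response body are illustrative assumptions, not part of the original example):

    {
      "request": {
        "method": "POST",
        "urlPath": "/example",
        "bodyPatterns": [
          {
            "matchesJsonPath": "$[?(@.exampleId == 1 && '1' in @.exampleRoot.exampleArray[*].id && '2' in @.exampleRoot.exampleArray[*].id)]"
          }
        ]
      },
      "response": {
        "status": 200,
        "jsonBody": { "result": "matched regardless of array order" }
      }
    }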

You can find more examples of advanced matchers in the Traffic Parrot documentation.

Wednesday 7 December 2022

Traffic Parrot 5.35.6 released, what's new?

We have just released version 5.35.6. Here is a list of the changes that came with the release:

Fixes

  • Library upgrades to fix OWASP issues

Changes

  • HTTP mapping ZIP export now includes data directory

Monday 28 November 2022

Codemotion: Design-first gRPC APIs for microservices: a sample workflow for parallel teamwork

We have just published an article on Codemotion about "Design-first gRPC APIs for microservices: a sample workflow for parallel teamwork".

Key Takeaways:

  • You can allow teams to work in parallel by designing gRPC APIs before starting to code. 
  • When teams work in parallel, you significantly decrease the time-to-market of your products.
  • A sample workflow has a few key stages.
    • The teams start by designing the gRPC API
    • Then, producer and consumer teams can work in parallel on their microservices.
    • The consumer team can use mocks to simulate the backend producer service.
    • Communicating feedback about the API specification during the development phase is essential.
    • Once the microservices are ready, they can test them together and release them to production.
  • You can start by onboarding just one API to this new workflow; you don’t have to migrate your whole company to begin with.
  • Working in parallel is excellent for team morale as there is less hard-deadline pressure.
  • You can estimate the return on investment by using a simple spreadsheet.




Thursday 10 November 2022

Traffic Parrot 5.35.5 released, what's new?

We have just released version 5.35.5. Here is a list of the changes that came with the release:

Features

  • Added support for Apache Thrift service simulation, see a demo video
  • Added ability to configure HTTP basic authentication for the Traffic Parrot user interface
    • Enable in the trafficparrot.properties file by setting:
      trafficparrot.gui.security.mode=LOGIN_PROPERTIES
    • Configure users in the trafficparrot.gui.login.properties file, in Jetty HashLoginService format:
      admin=password,traffic-parrot-gui-role

Fixes

  • Fixed a UI issue with mappings edited multiple times during the same UI session
  • Added support for protoc on Apple ARM based M1/M2 processors
  • Library upgrades to fix OWASP issues

Changes

  • Upgraded bundled JRE from 8u332 to 8u345
  • Upgraded WireMock from 2.33.2 to 2.35.0
  • Upgraded gRPC from 1.46.0 to 1.50.2
  • Upgraded protoc from 3.20.1 to 3.21.9

Thursday 20 October 2022

Need help migrating from Broadcom CA Lisa Service Virtualization to another API mocking tool?

Today, many teams are looking for help migrating their existing service virtualization to more cost-effective and DevOps-friendly solutions, including both open source and commercial tools, such as the ones described above.

Traffic Parrot has released a free CA Lisa MAR Analyzer tool that will help you assess the complexity of your existing CA Lisa (Broadcom Service Virtualization) MAR files and prepare for a migration to other tools.

We can help you migrate your MAR files with our automated MAR migrator. We also support formats from other vendors and open source projects; feel free to contact Traffic Parrot for more details.

You can read more about how a global retail bank has migrated one of their departments off CA Lisa and now saves 51% on tool costs per year on our website.

Tuesday 18 October 2022

The API Landscape: a comprehensive view of all stakeholders creating the programmable economy

We were recently listed on the apidays API landscape, which collects a comprehensive view of all stakeholders creating the programmable economy.
API testing is a key component of the programmable economy. Traffic Parrot mocks and simulates APIs and services so that you can test your applications and microservices without worrying about test data set-up or environment availability.

Friday 14 October 2022

Traffic Parrot 5.34.7 released, what's new?

We have just released version 5.34.7. Here is a list of the changes that came with the release:

Fixes

  • Fixed an issue with the enabled script for HTTP webhook callbacks
  • Fixed an issue with XLS data stores when storing and retrieving values with leading zeros
  • Library upgrades to fix OWASP issues

Wednesday 21 September 2022

Traffic Parrot supports GCP Dialogflow gRPC mocks

When working with complex cloud services such as GCP Dialogflow, it can be useful to use API mocks to test different hypothetical integration scenarios, without the need for a real GCP service to be configured and available.

We have a sample project available which you can use with a free Traffic Parrot trial to mock bidirectional streaming Dialogflow services, without the need for a live GCP instance.

Friday 16 September 2022

Traffic Parrot 5.34.3 released, what's new?

We have just released version 5.34.3. Here is a list of the changes that came with the release:

Fixes

  • Library upgrades to fix OWASP issues

Monday 12 September 2022

How do I forward the HTTP requests I don't have mappings for to another endpoint?

Sometimes we need to work with a mixture of real and mock APIs, for example using mocks for APIs that are not yet developed and real APIs for those that already exist.

To support this use case, you can use Traffic Parrot to forward HTTP requests to another endpoint instead of returning the default "not found" response.

You can use the proxy responses feature to provide a default response for unmatched requests (see the mapping sketch after these steps):
  1. Set a low priority like 10 for the mapping with the proxy response
  2. Leave default priority for other non-proxy response mappings
  3. Now any unmatched requests will return a response from the proxy mapping
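
For example, the catch-all proxy mapping could look roughly like the following (a minimal sketch in the WireMock-style JSON mapping format that Traffic Parrot uses for HTTP mappings; the target base URL is an illustrative placeholder):

    {
      "priority": 10,
      "request": {
        "method": "ANY",
        "urlPattern": ".*"
      },
      "response": {
        "proxyBaseUrl": "https://real-api.example.com"
      }
    }

Requests that match one of your more specific, default-priority mappings are served by those mappings as usual; anything else falls through to this low-priority mapping and is forwarded to the configured base URL.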

Thursday 1 September 2022

How pair-programming brings down knowledge silos

One of the proven ways of eliminating knowledge silos is pair-programming (or pair-testing for QAs). A team of researchers from Pivotal has concluded that a flavour of pair programming with overlapping pair rotation, described in “Sustainable Software Development through Overlapping Pair Rotation”, “encompasses principles, policies, and practices aiming at removing knowledge silos and improving code quality (including discover-ability and readability), hence leading to development sustainability.”

They note that “Conventional wisdom says that team disruptions should be avoided, and that extensive documentation is needed to prevent knowledge loss during team churn. Unfortunately, documentation often quickly becomes out-of-date and unreliable. The theory positions team code ownership with overlapping pair rotation and knowledge pollination as an alternative and potentially more effective strategy to mitigate against knowledge loss”.

From our experience gathered with Traffic Parrot customers, we can confirm these findings and that pair programming can be used as an effective way to eliminate knowledge silos and prevent critical knowledge loss. We have seen pairing developers with testers work equally well at sharing knowledge and helping bring down silos between QA and Development.

Thursday 25 August 2022

3 challenges when introducing API-first to microservices - an interview with Alin Daniel Ferenczi

We have asked Alin Daniel Ferenczi about his experience with the API-first approach.  He is a Solution Architect and Deep Tech Investor with 7 years of experience working to disrupt traditional business workflows through automation or API integrations, microservice architecture and multi-cloud adoption for startups and large enterprises.

Traffic Parrot: Alin, in your experience, what is the difference in API-first approaches for monolithic and microservice architectures?

Alin: The main difference from an architecture standpoint lies in the distribution of responsibilities. Designing a system must focus on following the single responsibility principle. It is one of the simplest and yet most complex design principles, and it can have a major impact on the entire approach. Splitting the responsibilities properly and following an API-first approach allows teams to work in parallel, increasing the speed to market.

Alin: In the monolith architecture, cross-cutting concerns such as logging, error handling, caching and performance monitoring affect only one application that encompasses the APIs. These services are usually more tightly coupled, and their responsibilities can become blurred at times. This implies an additional effort required to keep a close eye and avoid cluttering the code with logic that is redundant for the whole system.

Alin: Designing microservices requires a good segregation of responsibilities. This also implies that cross-cutting concerns need to be designed as individual services or integrated in each service individually which adds additional layers of complexity to each implementation.

Alin: Designing a good API-first architecture from the start is crucial for each system to ensure a good development cycle, faster delivery, scalability and reliability. Deciding on the responsibilities of each component requires a different paradigm between a monolith and microservices. 

Traffic Parrot: Alin, in your experience, what are the three main challenges for companies starting their API-first journey for microservice architectures?

Alin: There are quite a few issues that might arise from developing microservice architectures. My clients, whether they have an existing monolith or are developing a brand-new API-first application based on microservices, seem to struggle the most with synchronization between releases in order to properly test the integration of each API. Integration testing is a big part of a good release cycle, and when the development time between services differs significantly, it may cause unnecessary downtime for development teams. This happens especially if multiple frameworks are used for different APIs and the developers cannot contribute to the other projects.

Alin: API-first adoption additionally requires a shift in thinking to create better reusable APIs, and either the planning phase is too hasty and the reusability is not as expected, or the plan takes so long to establish that other changes arise and need to be considered.

Alin: Another big issue that I have encountered is the lack of adoption of proper tools for documenting, style validation, mocking and versioning of the APIs. Onboarding new hires on existing projects requires proper documentation or sandbox environments so that the developers can try out the API endpoints to get familiar with the environment. A proper DevOps culture and a “set up once” mentality needs to be established together with processes for documentation, testing and releases in order to avoid repetitive tasks and focus the attention of the team towards more creative requirements.

Traffic Parrot: Alin, how do you currently solve those problems?

Alin: We have managed to tackle the issue with synchronization between releases by sticking to a limited set of technologies whenever possible. Adopting an inner source initiative alongside the API-first approach has also enabled people to collaborate better and helped redistribute the free capacity of teams that had a faster pace.

Alin: The inner source adoption has also helped the planning and managing of reusable components. Since everyone has access to every component, it is far easier to find overlaps between each API that is being developed and contribute to the reusable modules’ library.

Alin: Proper tooling might not be only a decision from the technology perspective. Financial aspects need to be taken into account, and depending on the size of the organization it may take months before a tool can be adopted. Another alternative to acquiring tools can be improving the current workflow depending on the needs of each project. Since I work both on architecture and on budget allocations / raising funds depending on the company, I advocate for the adoption of new tools, but if something can be achieved through better workflow management and better DevOps processes, I might decide to postpone or avoid additional costs.

Traffic Parrot: Alin, in your experience, what are the three main challenges for companies already using API-first?

Alin: From my experience, the tasks in API-first architectures that require the most effort and time while providing the least benefit are: understanding the differences between the documentation and the actual implementation, caused by a lack of documentation updates or by technical debt; mocking data for edge cases and for requirements that change regularly; and failing to have a proper error handling system or contingency plans for APIs that are unhealthy.

Alin: The enormous amount of technical debt is an issue that every company struggles with now or will in the future, and it is the most likely reason for a company to refactor one or more APIs in their architecture. This is a very good reason why the microservice architecture should be adopted whenever possible. Technical debt usually comes together with mismatches in the documentation, which can hurt the productivity of other people that need to interact with the application.

Alin: Mocking data for edge cases and requirements can cumulatively take days for people to compose; even if done preventively, it can still take time and additional costs to set up, and if it turns out not to be as expected it can have an impact on the development cycle. In situations where data is time dependent, this can become a major pain point for each release. Even with automated processes it may not cover all cases, and if the data is tied to an external provider, it may not be as accurate as data from the source directly.

Alin: API-first architectures that are structured as microservices imply that each component can fail independently. This can be an advantage if the system is properly structured to handle errors, as it can quickly detect failures using health probes and take action accordingly through disaster recovery plans without taking the entire system down. The issues and struggles that I have encountered at companies have been related to the design or misconfiguration of those plans, whether when checking system health, taking action, reporting and monitoring, or mitigating business risks such as loss of data, breach of SLAs and so on.

Traffic Parrot: Alin, how do you currently address those challenges?

Alin: Technical debt is hard to avoid. Refactoring is necessary from time to time, and if it makes sense from a cost-benefit analysis, I suggest pursuing such tedious tasks. There are many examples of popular apps where refactoring has helped them evolve and adapt to a larger user base. Especially when going from a proof of concept to large-scale adoption of a new service, refactoring can be unavoidable. Knowing to focus on delivery first and come back with improvements later has helped me deliver faster and provide quicker adoption of applications.

Alin: My favorite subject to talk about is automation, and even when developing APIs and applications for automation, there are still cases where we can automate our own processes. Data generation and mocking is a great example of such a case. Pushing to prioritize the automation of repetitive tasks has always been beneficial. If there are also companies such as Traffic Parrot which can provide some of the automation ready-made, and if the ROI of adopting external tools makes sense, it is even more beneficial, so that internal developers can focus on more productive tasks.

Alin: Testing disaster recovery plans regularly and improving them with each iteration is becoming more and more popular within multiple companies that I have interacted with. I am always trying to stay up to date with the latest techniques for proper configuration and to implement them whenever capacity is available or when preparing for a disaster recovery exercise.

Next steps

Would you like to share your experience with the API-first approach? Please reach out to us; we would be very much interested in hearing your thoughts and sharing them with our audience.

Would you like help with your API-first approach? A number of Traffic Parrot clients follow the API-first approach, and we would be happy to help you on your journey. 


Tuesday 2 August 2022

Traffic Parrot 5.34.0 released, what's new?

We have just released version 5.34.0. Here is a list of the changes that came with the release:

Fixes

  • Library upgrades to fix OWASP issues

Tuesday 26 July 2022

Interview: Traffic Parrot API mocking helped an Agile team with automated testing

An interview with an Agile Java Developer working for a global media company that has used Traffic Parrot for a number of years now.

1. What caused you to look for a service virtualization and API mocking solution?

I was working as an Agile Java Developer on a cross-functional team. The team was used to high levels of automation and testing, using techniques such as TDD daily. It was common practice at the company to use both in-memory and over-the-wire mocks to facilitate automated and manual testing.

2. How was mocking handled previously? How would you define the specific challenges / limitations with this approach?

Before adopting Traffic Parrot technology, the mocks we used on this project were all in-memory mocks using libraries such as Mockito. This was sufficient to write unit-level tests but did not allow us to write over-the-wire integration/acceptance tests or enable our manual tester to test microservices in isolation.

3. Was there any urgency to find a solution? What was driving that urgency?

Yes, in the sense that this was part of the existing approach the team took to testing. It became urgent at the point that we needed to write integration/acceptance tests to help demonstrate functionality to the business analysts on the team.

4. How did you go about trying to solve the problem prior to implementing a solution?

We lived with the problem: we skipped writing some tests and skipped some of the manual testing with mocks.

5. Which resources were most useful in creating a vendor list? Least useful?

Google searches and word of mouth were enough; other resources were not explored.

6. How did you discover Traffic Parrot/Mountebank/Wiremock/…?

Google/GitHub/word of mouth referral from other team members.

7. What was compelling enough about Traffic Parrot that led you to evaluate?

The ability to use the same mocks (via shared configuration files) for both automated and manual testing, as well as the presence of a simple UI to configure/enable the mocks as required.

8. Which vendors did you consider? Why?

WireMock was also considered, but it lacked a UI so fell short of being useful for our manual tester on the team.

9. Can you briefly describe the team's decision making process?

The fastest route to a working solution wins. Something that satisfies each role on the team: developers, testers, business analysts.

10. What role did you play in the project?

Agile Java Developer, writing production/test/infrastructure code.

11. What were your most important evaluation criteria?

Programmatically configurable APIs, request matching, request/response monitoring.

12. Who was the ultimate decision maker?

All roles on the team needed to be satisfied with the decision.

13. What was the single most important reason for the win/loss?

Time to solution.

14. How would you articulate Traffic Parrot's value proposition?

Test in isolation, infrastructure as code, share mocks between development/testing/across teams.

15. What do you feel were the key strengths and weaknesses of Traffic Parrot?

Strengths: customer support is excellent, range of configurations from UI driven to code-driven is very helpful

Weaknesses: lacking an extensive library of templates and code examples to get started faster

16. Which vendor did you like the most and why?

Traffic Parrot because it matched our expectations, we didn’t hit any blockers when implementing a solution. 

17. Which vendor did you like the least and why?

With other open source vendors that lack Java language bindings, we would have had to write our own Java bindings for our test mock usage.

18. Can you tell me about pricing? How big of factor did pricing play into your decision?

Pricing was handled two layers above my role; I did not consider pricing in my own decision.

19. What did you like / dislike about our sales and/or customer success or support efforts? How would you compare your evaluation experience with us compared to other vendors?

The support we received was much more timely and helpful than I have experienced with other vendors, including open source. The feedback we provided made its way into the product within days!

20. What could Traffic Parrot have done better?

More brand visibility; we only found out about Traffic Parrot by word of mouth.

21.  How likely is it that you would recommend Traffic Parrot to a friend or colleague on a scale of 0-10 (0=not likely at all, 10=extremely likely)?

10 - when you need a UI, programmatic APIs, infrastructure as code and shared mocks.

22.  What’s the one word that pops into your mind when you think of Traffic Parrot?

Configurable.

23.  How many people use Traffic Parrot at your company?

5

24.  What are your favourite features or capabilities with Traffic Parrot?

Infrastructure as code, ability to version control mock configuration.

25.  What’s the business value you get from Traffic Parrot?

Supports cross-functional teams' ability to move fast.

26.  Were there any unplanned wins? (e.g. capabilities or use cases that you didn't envision during the initial purchase but later realized would be valuable to you?)

We also realised we could use the same technology to provide mocks to other departments, to decouple our development schedules and interoperate via mock API specifications.

27.  What does the future look like? How do you plan to evolve and grow with Traffic Parrot?

We are trying to share our experience with other teams and encourage wider adoption. However, the company culture is such that each team tends to choose its own tooling and is often resistant to recommendations from other teams.


Monday 11 July 2022

Service virtualization in ephemeral containers

Product teams that move fast would like to have more control over the infrastructure of the tools they run. They would also like to run the service virtualization the same way they run their microservices and other tools, in ephemeral containers that they spin up based on demand.

This is a high-level comparison of the two options available: the traditional approach of using service virtualization managed by a central team, and the new approach that is in line with current industry trends.

Traffic Parrot supports the new model and teams that want to migrate from the traditional model to the new model.



The comparison below contrasts centrally managed service virtualization (i.e. a Center of Excellence, COE) with self-service ephemeral service virtualization (also called API simulation).

Management of the SV tool servers
  • Centrally managed: typically a central team of administrators (COE)
  • Self-service ephemeral: the product team that needs the virtual services

Purchasing of SV tool licenses
  • Centrally managed: typically a central team of administrators (COE)
  • Self-service ephemeral: the product team that needs the virtual services

Creation of virtual services
  • Centrally managed: the product team that needs the virtual services
  • Self-service ephemeral: the product team that needs the virtual services

Lifecycle
  • Centrally managed: typically long-lived 24/7 running instances
  • Self-service ephemeral: typically ephemeral instances spun up on demand

Development technology
  • Centrally managed: typically VMs
  • Self-service ephemeral: typically ephemeral containers

Deployment architecture
  • Centrally managed: typically a few shared instances of service virtualization environments
  • Self-service ephemeral: many ephemeral instances running on local laptops, inside CI/CD pipelines, in OpenShift, etc.

Wednesday 6 July 2022

Service Virtualization As Code (API Mocking As Code)

Traffic Parrot has first-class support for the service virtualization as code pattern (also called API mocking as code).

If you have any state or configuration that you manage in your IT infrastructure, the best solution in most cases is to version control it in a source control system like Git. For example, if you are running on Kubernetes and Docker, your whole infrastructure might be defined in a source control repository as Dockerfiles and Terraform Kubernetes configuration files. This is called Infrastructure as Code.

It is advised to do the same with your API mocks and virtual services. All your mocks and virtual services should be stored in a version control system such as Git. In the case of Traffic Parrot, this is possible because all request-to-response mapping files are stored on the filesystem as JSON files. Alternatively, you can use the JUnit TrafficParrotRule directly in your JUnit tests. This way you take your automation to the next level and have a fully automated and reproducible build pipeline.
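
As an illustration, a repository practising service virtualization as code might keep the mock definitions next to the rest of the infrastructure code (a hypothetical layout; the directory and file names are illustrative and the exact structure depends on how you configure Traffic Parrot):

    traffic-parrot-mocks/
      scenarios/
        default/
          get-account-by-id.json     <- request-to-response mapping file
          create-order.json          <- request-to-response mapping file
      data/
        account-123.json             <- response body referenced by a mapping
      Dockerfile                     <- optional image that bundles the mocks for CI/CD

Every change to a mock then goes through the same pull request, review and pipeline process as the rest of your code.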

Because of the issues that can arise with manual processes, avoid having API mocks and virtual services that are updated manually and never stored in Git or a similar source control system. Store all your API mocks and virtual services in a source control system.

The business justification (quoting Wikipedia): "Infrastructure automation enables speed through faster execution when configuring your infrastructure and aims at providing visibility to help other teams across the enterprise work quickly and more efficiently. Automation removes the risk associated with human error, like manual misconfiguration; removing this can decrease downtime and increase reliability. These outcomes and attributes help the enterprise move towards implementing a culture of DevOps".

Monday 13 June 2022

How to get a FedEx testing tracking number?

The short answer (detailed answer below)

To get a FedEx test tracking number, you can use one of the tracking numbers from the list below (copied from Mock Tracking Numbers for FedEx Express and FedEx Ground) or the number 123456789012.

Note that the FedEx Sandbox environment might be unstable during your testing and you might need to use FedEx API mocks instead (either built in-house using Wiremock or ready-made by a vendor).

Each entry below lists the scan event, the test tracking number and the track-by-reference information. Unless marked N/A, all entries use shipment account number 510088000, ship date 15-08-2020 and the destination postal code given at the end of the entry.

  • Shipment information sent to FedEx: 449044304137821 (Customer Reference: 115380173, Ground Shipment ID: DMWsGWdnNN; destination 33126)
  • Tendered: 149331877648230 (Ground Shipment ID: 149331877648230; destination 28752)
  • Picked up: 020207021381215 (Ground Shipment ID: 53089528; destination 30549)
  • Arrived at FedEx location: 403934084723025 (Ground Shipment ID: 403934084723025, Department: 31826722; destination 99206)
  • At local FedEx facility: 920241085725456 (Customer Reference: 0014243047/34684523, Ground Shipment ID: 920241085725456; destination 19720)
  • At destination sort facility: 568838414941 (Shipper Reference: P218101_004154359, Purchase Order: P218101_004154359; destination 85388)
  • Departed FedEx location: 039813852990618 (Customer Reference: 4631962, Ground Shipment ID: THE HOUSE, Department: McGee, Purchase Order: 3385158; destination 24740)
  • On FedEx vehicle for delivery: 231300687629630 (Customer Reference: W62283340102, Ground Shipment ID: 231300687629630, Purchase Order: 6228334; destination 33126)
  • International shipment release: 797806677146 (N/A)
  • Customer not available or business closed (Delivery Exception 007): 377101283611590 (Ground Shipment ID: 377101283611590; destination 95815)
  • Local delivery restriction (Delivery Exception 083): 852426136339213 (Customer Reference: 118402713013; destination 11375)
  • Incorrect address (Delivery Exception 03): 797615467620 (Shipper Reference: OTHER-TK104; destination 70810)
  • Unable to deliver (Shipment Exception 099): 957794015041323 (N/A)
  • Returned to sender/shipper: 076288115212522 (Invoice: 81909; destination 50323)
  • Clearance delay (International): 581190049992 (N/A)
  • Delivered: 122816215025810 (Customer Reference: PO#174724; destination 24273)
  • Hold at location: 843119172384577 (N/A)
  • Shipment canceled: 070358180009382 (Customer Reference: 15241402, Ground Shipment ID: 070358180009382; destination 94545)
  • Duplicate Tracking Number: 713062653486 (Unique Identifier 1: 2457821000~713062653486~FX, Unique Identifier 2: 2457835000~713062653486~FX)

The longer answer

The FedEx Sandbox has numerous stability and test data problems. For example, if you use tracking code 123456789012, you will get a different response status depending on the state of the FedEx Sandbox environment.

If you are performing a one-off manual test of your Magento integration with FedEx API, then using the FedEx Sandbox should be sufficient, and you can go ahead and use one of the codes provided above.

If, however, you are running automated regression or integration tests on a schedule or per commit, then the instability of the environment might cripple your testing. For example, you might see flaky tests and flaky builds and have to spend time investigating them even though there is nothing wrong with your code or tests; it's just an issue with the FedEx Sandbox API.

To resolve the issue with FedEx Sandbox environment stability, you have three options, which are essentially about using test-doubles instead of the real FedEx Sandbox API:

  • Create and use mocks in your code instead of using the FedEx Sandbox, for example with Mockito in Java or unittest.mock in Python. This means you will have to write the FedEx Mockito mocks yourself, and it comes with several tradeoffs. This is a popular option for developers who have time on their hands, can build solutions internally, and are not working to a tight deadline.
  • Create over-the-wire mocks with tools like Wiremock or Mountebank instead of using the FedEx Sandbox (see the sketch after this list). There is a list of such tools on Wikipedia. The issue here, again, is that you have to build those over-the-wire mocks yourself, which can be problematic if it's more than one API and you have a lot of work on your plate anyway.
  • Use ready-made FedEx API mocks like the ones from Traffic Parrot instead of using the FedEx Sandbox. The advantage is that experts create them and you can start using them within minutes: just copy the mock API URL and use it instead of the FedEx one. This is a good idea if you are on a tight deadline. You will have to pay for using them, which can sometimes be a downside of this option.
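
As a rough illustration of the over-the-wire option mentioned above, a single mapping could return a canned FedEx track response (a minimal sketch in WireMock-style mapping JSON; the URL path and the response body are simplified, illustrative assumptions and not the full FedEx schema):

    {
      "request": {
        "method": "POST",
        "urlPath": "/track/v1/trackingnumbers"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "jsonBody": {
          "output": {
            "completeTrackResults": [
              {
                "trackingNumber": "123456789012",
                "trackResults": [
                  { "latestStatusDetail": { "description": "Delivered" } }
                ]
              }
            ]
          }
        }
      }
    }

Pointing your application at the mock's base URL instead of apis-sandbox.fedex.com then gives you a stable response for the test tracking number, independent of the Sandbox's availability.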







Wednesday 8 June 2022

Avoid production data in test environments!

As ThoughtWorks observed in 2021, using production data in test environments has significant risks and drawbacks, for example:

  • test environments have lower security constraints than production environments, so they expose client data to threats
  • incorrect email test alerts sent to real users 
  • obfuscation often only addresses parts of the data sets
  • copying data can infringe on data protection laws

One way our customers avoid copying production data into the test environment is by using synthetic data created in virtual services or API mocks, built with our commercial tool Traffic Parrot or with a combination of open-source tools like WireMock and Mountebank.

This often requires an upfront investment to define user personas that can be reused across test cases but removes the risks listed above. It also allows for increased test coverage and enables automated testing. 
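
For instance, a reusable synthetic persona can live entirely in a mock definition instead of in a copied production record (a minimal, hypothetical sketch in WireMock-style mapping JSON; the URL, field names and values are invented for illustration):

    {
      "request": {
        "method": "GET",
        "urlPath": "/customers/persona-premium-001"
      },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "jsonBody": {
          "customerId": "persona-premium-001",
          "name": "Synthetic Premium Customer",
          "segment": "premium",
          "email": "premium.persona@example.test"
        }
      }
    }

The same persona file can then be reused across manual and automated test cases without exposing any real client data.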

As one of our customers put it:

"We have used Traffic Parrot to simulate third-party HTTP services and test our product in isolation. Traffic Parrot consultants have developed custom-made simulators for us. During the first testing cycle, it allowed us to find 28 issues with our product’s release candidate and fix them early in the software lifecycle. This means completing the testing schedule more than 2 months earlier than anticipated just for the first release alone.
We have also used Traffic Parrot to create third party system simulators that communicate over IBM MQ queues. The previous process took at least 30 minutes of manual effort per test case to verify when the interface was available—and sometimes up to 3 days when testing needed to be scheduled with external teams. Our Traffic Parrot simulators respond within milliseconds and are always up. They enabled our team to create automated tests that rely on those service responses and meet our department automated testing KPIs. This allowed us to reduce the regression testing cycle from 30 days to a week."
QA Director working for a software company responsible for government projects

If you would like to see what would be the Return On Investment (ROI) for your team for switching from using production data in test environments to service virtualization and API mocking, please reach out to us.

Self-service service virtualization tool for platform engineering product teams

As ThoughtWorks have observed, platform engineering product teams are a sensible default for delivering results to customers in competitive, fast-paced environments.

"The emphasis here is on API-driven self-service and supporting tools, with delivery teams still responsible for supporting what they deploy onto the platform. Organizations that consider establishing such a platform team should be very cautious not to accidentally create a separate DevOps team, nor should they simply relabel their existing hosting and operations structure as a platform."  - ThoughtWorks Technology Radar 2017 edition

The commercial tool Traffic Parrot and open-source tools like WireMock or Mountebank are compatible with self-service teams, where there is no centralised admin team managing the service virtualization and API mocking infrastructure. The cross-functional product teams manage Traffic Parrot instances and mocks themselves for their product needs.

As one of our customers put it back in 2017:

"We are building a new platform based on microservices and needed a testing infrastructure that allowed all of our testers to work autonomously, using local machines to record their communication streams between their services and inside our pipelines in order to virtualize a service. Traffic Parrot, while not as fully featured as offerings from HP and IBM, offered the key capabilities that our teams needed, such as HTTPS and JMS IBM MQ, and a substantially smaller footprint that allowed it to run locally. We have been very pleased by Traffic Parrot's email support and responsiveness to our requests and would recommend them to other teams doing microservice-based architectures." - Chief Architect at a Fortune 500 company

And another team that migrated off a centrally managed solution:

"We have migrated our department off CA Lisa to reduce our operational costs in performance and system testing environments. We used WireMock for HTTP stubbing and mocking and Traffic Parrot for IBM MQ. Neither WireMock nor Traffic Parrot provides all the features available in CA Lisa. Still, after initial proof of concept assessment, we have decided those tools' feature sets will be sufficient for our needs and significantly cut costs. It took our team six months to migrate HTTP to WireMock and another three months working closely with the Traffic Parrot team to improve their offering to meet our load testing and mainframe integration requirements. With this investment, we cut our tooling costs in half." - Executive working for a global retail bank.

If you currently use WireMock or Mountebank but need gRPC, JMS, ActiveMQ, RabbitMQ, IBM® WebSphere MQ 7.5+, AMQP, or other protocol support, let’s schedule a 20-minute call to explore if Traffic Parrot is a better fit for your needs.

Friday 3 June 2022

Traffic Parrot 5.33.11 released, what's new?

We have just released version 5.33.11. Here is a list of the changes that came with the release:

Features

  • Added access logging for UI interactions, written to logs/http-ui-access/http-ui-access-yyyy_mm_dd.log

Changes

  • Upgraded bundled JRE from 8u282 to 8u332
  • Upgraded gRPC from 1.43.1 to 1.46.0
  • Upgraded protoc from 3.19.1 to 3.20.1
  • Upgraded WireMock from 2.31.0 to 2.33.2
  • HTTP VS access logs are now written to logs/http-vs-access/http-vs-access-yyyy_mm_dd.log
  • Upgraded Gradle version in examples project from 4.10.1 to 7.4.2
  • Logs file listing in UI is now ordered by last modified date

Fixes

  • Library upgrades to fix OWASP issues

Monday 9 May 2022

Traffic Parrot 5.33.5 released, what's new?

We have just released version 5.33.5. Here is a list of the changes that came with the release:

Features

  • Added a new trafficparrot.properties property to allow sending custom headers to the on-premise license server: trafficparrot.license.usage.custom.headers=ExampleA:ValueA,ExampleB:ValueB

Changes

  • Pressing the Cancel button on the Add mapping form now clears the form fields

Fixes

  • Library upgrades to fix OWASP issues

Tuesday 3 May 2022

Testing FedEx API Integration: How to work around FedEx Sandbox Issues?

Many companies use FedEx to ship goods to their customers and they will often connect to FedEx APIs to fulfil those needs. One challenge developers and testers face early on and then throughout the product lifecycle is testing that the application integrates well with the FedEx APIs.

Diagram 1: Application connects to FedEx APIs



We hear many developers and testers complain about the FedEx Sandbox API environment. Their challenges include outages with no estimated time to fix, intermittent errors, test data issues, intermittent downtime and more. There are also issues with performance testing. See “Appendix A - FedEx API Sandbox Environment Typical Integration Issues” below.

Diagram 2: Development and Testing with FedEx Sandbox APIs


To work around that problem, we see companies use one of the following options:

  • They continue testing with FedEx Sandbox APIs.

    • They live with the FedEx Sandbox API problems mentioned, ultimately compromising on their software quality, incurring additional development and testing costs and compromising on time to market.

  • They test with production FedEx APIs.

    • This has its drawbacks, as you are actually executing real business transactions inside FedEx systems like ordering shipments etc. Although it's easy to get started with, it is typically not sustainable long term if you plan to test your moderate to high complexity applications, particularly if you have automated regression test suites.

  • They build FedEx API mocks or stubs in-house.

    • Mocks allow them more flexibility around testing. The challenge is that the team has to have the capability to develop, maintain and support them.

  • They rely on third-party vendor solutions such as the Traffic Parrot FedEx Sandbox environment, which provides FedEx API mocks and more.

    • Instead of re-inventing the wheel, they purchase ready-made and proven solutions to the problem.


Diagram 3: Using Traffic Parrot FedEx Sandbox and ready-made APIs



The pros, cons and risks of each option are summarised below.

Continue testing with FedEx Sandbox APIs
  • Pros: Supported directly by FedEx.
  • Cons: Has a negative impact on development and testing costs and schedules. Minimal hypothetical-situation testing capabilities. Slows projects as complexity grows. Can get costly as complexity grows.
  • Risks: Schedule slips due to intermittent FedEx Sandbox issues. Risk of not following the testing pyramid.

Test with production FedEx APIs
  • Pros: Supported directly by FedEx. Better uptime and availability than the FedEx Sandbox environment.
  • Cons: Impacts production business transactions, which need to be rolled back. Need to pay for production API usage. Minimal hypothetical-situation testing capabilities. Slows projects as complexity grows. Can get costly as complexity grows.
  • Risks: Risk of committing to real business transactions, impacting real customers or inventories. Risk of not following the testing pyramid.

Build FedEx API mocks or stubs in-house
  • Pros: Hypothetical scenarios can increase your test coverage.
  • Cons: Needs in-house development and maintenance effort. Needs in-house infrastructure to host the mocks. Re-invents the wheel; off-the-shelf solutions are available.
  • Risks: Developing in-house puts schedules at risk, as your team has to develop and maintain the mocks. Developers and testers who work under pressure often do not have time to work on additional tools; they would rather use a ready-made solution.

Rely on third-party vendor solutions such as the Traffic Parrot FedEx Sandbox environment
  • Pros: Quick to get started with, ready within 10-40 minutes. Provides not just mocks but a full Sandbox environment; for more details see “Diagram 4: In-house development of mocks vs. using Traffic Parrot Sandbox environments”.
  • Cons: Needs a purchase from a third-party vendor.


For a detailed comparison between mocks and simulators/sandbox environments, please review Testing Microservices: Examining the Tradeoffs of Twelve Techniques - Part 2.


Diagram 4: In-house development of mocks vs. using Traffic Parrot Sandbox environments


Next steps

Contact us to schedule a free 30-minute call so we can discuss the best option for your situation.



Appendix A - FedEx API Sandbox Environment Typical Integration Issues

Service Unavailable

HTTP/1.1 503 Service Unavailable
X-API-Mode: Sandbox
Content-Encoding: gzip
Content-Type: application/json;charset=UTF-8
Content-Length: 236
Server: Layer7-API-Gateway
Date: Mon, 18 Apr 2022 13:59:07 GMT
Connection: close
Vary: Accept-Encoding
Server-Timing: cdn-cache; desc=MISS
Server-Timing: edge; dur=105
Server-Timing: origin; dur=6067

{
    "transactionId": "18b5df2b-2786-41f3-83e2-22f1307f2672",
    "errors": [
        {
            "code": "SERVICE.UNAVAILABLE.ERROR",
            "message": "The service is currently unavailable and we are working to resolve the issue. We apologize for any inconvenience. Please check back at a later time."
        }
    ]
}


Unknown tracking number

HTTP/1.1 200 OK
X-API-Mode: Sandbox
Content-Encoding: gzip
Content-Type: application/json
Content-Length: 301
Server: Layer7-API-Gateway
Date: Tue, 19 Apr 2022 12:29:04 GMT
Connection: keep-alive
Vary: Accept-Encoding
Set-Cookie: JSESSIONID=7A3EB75A8A39D3F4C5231A03786A1EC8; Path=/track/v1; Secure; HttpOnly; Domain=apis-sandbox.fedex.com
Set-Cookie: __VCAP_ID__=e063ffd3-0bf4-41a1-6598-4e4f; Path=/track/v1; HttpOnly; Secure; Domain=apis-sandbox.fedex.com
Server-Timing: cdn-cache; desc=MISS
Server-Timing: edge; dur=93
Server-Timing: origin; dur=197

{
    "transactionId": "a6a9dbff-7d26-477d-bdbd-688645a9585f",
    "customerTransactionId": "22467615-2115-4c0b-b169-fade80e11aec",
    "output": {
        "completeTrackResults": [
            {
                "trackingNumber": "920241085725456",
                "trackResults": [
                    {
                        "trackingNumberInfo": {
                            "trackingNumber": "920241085725456",
                            "trackingNumberUniqueId": "",
                            "carrierCode": ""
                        },
                        "error": {
                            "code": "TRACKING.TRACKINGNUMBER.NOTFOUND",
                            "message": "Tracking number cannot be found. Please correct the tracking number and try again."
                        }
                    }
                ]
            }
        ]
    }
}


System unavailable

HTTP/1.1 200 OK
X-API-Mode: Sandbox
Content-Encoding: gzip
Content-Type: application/json
Content-Length: 262
Server: Layer7-API-Gateway
Date: Tue, 19 Apr 2022 12:31:05 GMT
Connection: keep-alive
Vary: Accept-Encoding
Set-Cookie: JSESSIONID=1CA447C0AC9255093CB82E169C292684; Path=/track/v1; Secure; HttpOnly; Domain=apis-sandbox.fedex.com
Set-Cookie: __VCAP_ID__=f54d4735-4d59-46e2-7967-6877; Path=/track/v1; HttpOnly; Secure; Domain=apis-sandbox.fedex.com
Server-Timing: cdn-cache; desc=MISS
Server-Timing: edge; dur=92
Server-Timing: origin; dur=5235

{
    "transactionId": "1378803b-d312-4014-af90-a7ad734dd4a8",
    "customerTransactionId": "c5dd0962-7c96-436d-8a6b-35dea6c32f08",
    "output": {
        "completeTrackResults": [
            {
                "trackingNumber": "568838414941",
                "trackResults": [
                    {
                        "trackingNumberInfo": {
                            "trackingNumber": "568838414941",
                            "trackingNumberUniqueId": "",
                            "carrierCode": ""
                        },
                        "error": {
                            "code": "SYSTEM.UNAVAILABLE.EXCEPTION",
                            "message": "default.error"
                        }
                    }
                ]
            }
        ]
    }
}


Ambiguous message Invalid Account Number

Fedex API Rates and Transit Times API + LTL Freight: Account Number Invalid 

400: ACCOUNT.NUMBER.INVALID


Lacking clarity which credentials to use 

Rating is temporarily unavailable - Error in FEDEX


“Error returned in processing transaction

Severity: FAILURE

Source: crs

Code: 7

Message: Rating is temporarily unavailable, please try again later.

LocalizedMessage: Rating is temporarily unavailable, please try again later.”


“try using production credentials. I did has the same issue with test server ans was able to resolve it through production credentials”


General Failure when servers are down

fedex General Failure


“when the FedEx servers are down, you get "General failure" as the error message”


FedEx Sandbox has a limited number of test credentials

fedex shipping web service, test account authentication failed


“appears Fedex has not gotten their act together regarding their test credentials. I was told by someone in Web services tech support that there is a limited array of test credentials and there are some that are simply not valid. Turns out, once you get test credentials with your user account, they will always be the same when requested from that account.”


FedEx Shipping Codes are not clear

FedEx Shipping Codes

“The problem is, the values your 3PL expects may not be the true "carrier" values so there is no way to provide you with such a list. These are the values that FedEx expects for Web Services which is the current list. The XML API has been retired and did not support all services.”


The courtesy rates service has intermittent availability

Unable to obtain courtesy rates in Fed Ex Web Service


“Unable to obtain courtesy rates”


“There is nothing wrong in the code this is mainly due to website issue. I tried the same thing after few hours and it was working perfectly.”


Lack of clarity on available test data

How to get Fedex testing tracking number?


“** Every carrier (UPS, USPS, etc...) has lots of tracking numbers for lots of use-cases, but not FedEx... I find that very strange.


* Also... I am guessing FedEx expects you to use real tracking numbers even in their test environment. It's how I have been testing for multiple use-cases.


I hope this helps anyone, I have spent a while digging around.”


Monday 4 April 2022

What is the difference in service virtualization for monolithic and microservice architectures?

We see our customers embark on a migration to microservices from monolithic architectures. 

We often get asked: how do Service Virtualization (SV) and API mocking fit into both worlds, and what changes when you migrate from a monolithic architecture to a microservices architecture?

In the diagram below, we demonstrate how our customers typically deploy two instances of an SV solution in a monolithic environment. They would have a sandbox service virtualization environment used by developers and testers to create their virtual services, and a test SV environment used by the development and test environments. See Figure 1.

Figure 1: Service Virtualization in a monolithic architecture


When moving from monolithic architectures to microservice architectures, we see our customers:

  • Create production-like environments prepared on-demand using scripts instead of manually configured environments

  • Use containerisation technologies such as Docker and Kubernetes (OpenShift, Rancher and similar) instead of running on bare hardware or virtual machines.

  • Run production-like environments even locally on developers’ and testers’ laptops or desktops and in CI/CD builds and pipelines, instead of developing and testing in shared DEV, SIT and QA environments.

A distributed architecture with many production-like development and test environments on local machines, in CI/CD or in Kubernetes (or OpenShift or Rancher), requires a service virtualization tool that can be deployed in many of those environments as well. Such service virtualization tools allow the new decentralised development and testing approach to be used throughout the lifecycle. This is demonstrated in Figure 2.


Figure 2: Service Virtualization while migrating to a microservices architecture


OpenSource tools like Wiremock or Mountebank allow for that usage pattern, as well as selected commercial offerings such as Traffic Parrot.

Once the migration to microservices has been completed, the centralised SV infrastructure can be deprecated and decommissioned, as demonstrated in Figure 3.


Figure 3: Service Virtualization in microservice architectures


Some of our customers ask us why they would not just use the existing SV approach in the new world of microservices.

Although using centrally managed solutions in microservice architectures is possible, it comes with maintenance costs caused by:

  • The inability to create production-like environments on demand with programmable scripts, resulting in increased maintenance costs

  • Many laptops, CI/CD pipelines and K8s environments connecting to the same SV instance, creating a bottleneck that results in increased maintenance costs and a single point of failure

  • Investment in SV artefacts built in monolithic tooling that is costly to migrate to fit-for-purpose, microservice- and container-compatible SV tooling, incurring long-term migration costs

If you would like to learn more about microservice architectures and what changes when moving away from a monolithic approach, see the introduction to microservices by Martin Fowler and the article by Paul Hammant on release schedules.


Monday 14 March 2022

Traffic Parrot 5.33.2 released, what's new?

We have just released version 5.33.2. Here is a list of the changes that came with the release:

Features

  • Native IBM® MQ mappings with multiple response messages now support zero response messages. This allows for consuming request messages without sending any response message.
  • Added new trafficparrot.properties properties to allow starting JMS queue replay on startup:
    • trafficparrot.jms.start.queue.replay.on.startup=false
    • trafficparrot.jms.start.queue.replay.on.startup.consumeRequestsFromConnectionId=1
    • trafficparrot.jms.start.queue.replay.on.startup.replayResponsesToConnectionId=1
  • Extended virtual service support for using multiple JMS virtual services linked to separate mappings:
    • The scenarios/VirtualServiceName/service.properties file now allows configuring the following (see the sketch after this list):
      • jms.queue.replay.on.startup=true
      • jms.queue.replay.request.connection.id=1
      • jms.queue.replay.response.connection.id=1
    • The scenarios/VirtualServiceName/jms-connections.json file is used to specify the queue manager connections per virtual service
    • The scenarios/VirtualServiceName/jms-mappings directory contains the mappings files per virtual service
    • The UI allows starting JMS replay for multiple virtual services at a time
  • RabbitMQ JMS queue declare arguments can now be specified in the mapping JSON. Please note that the UI does not yet support this feature.
  • RabbitMQ JMS queue names may now be specified in the format exchange/routingKey instead of a direct queue name
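
For example, a per-virtual-service configuration could combine the properties above along these lines (a sketch based on the settings listed in this release; the VirtualServiceName directory and connection ids are illustrative):

    # scenarios/VirtualServiceName/service.properties
    jms.queue.replay.on.startup=true
    jms.queue.replay.request.connection.id=1
    jms.queue.replay.response.connection.id=1

The matching scenarios/VirtualServiceName/jms-connections.json file and scenarios/VirtualServiceName/jms-mappings directory then describe the queue manager connections and mappings for that virtual service.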

Changes

  • Upgraded bundled JRE from 8u292 to 8u322
  • RabbitMQ JMS with AMQP compatibility mode enabled does not support topics and will now fail with an error message

Thursday 17 February 2022

Traffic Parrot 5.32.0 released, what's new?

We have just released version 5.32.0. Here is a list of the changes that came with the release:

Features

  • Added support for using a proxy server with path based routing between Traffic Parrot and the license server

Sunday 30 January 2022

Traffic Parrot 5.31.2 released, what's new?

We have just released version 5.31.2. Here is a list of the changes that came with the release:

Features

  • Added ability to import and export IBM MQ mappings in ZIP file format
  • JMS response messages now populate the response JMSCorrelationID using the JMSCorrelationID from the request message if it is present or else the JMSMessageID from the request
  • JMS response messages are now sent to the JMSReplyTo destination if present or else the mapping destination is used

Changes

  • Updated VS HTTPS certificates with new 3 year validity period