Thursday 28 December 2017

How software testers can create dynamic mocked JSON responses, no programming skills needed!

The recommended way of creating mocks or stubs of third party or backend systems and APIs is to keep them as simple as possible. Unfortunately, that is not always possible, and sometimes it is necessary to create more complex API simulators. In this post, we will go through an example of how to create a simulator of an API that dynamically generates a list of items in a response based on the request.

Let us assume that your application sends requests like the following to a backend API:
{
  "orderId": 3,
  "storeId": 2244,
  "orderItems": [
    {
      "partnumber": "101",
      "quantity": 1
    },
    {
      "partnumber": "102",
      "quantity": 3
    }
  ]
}
And, let us assume that the response from that backend API looks like this:
{
  "orderAccepted": true,
  "orderItems": [
    {
      "orderItemId": 101,
      "quantity": 1
    },
    {
      "orderItemId": 102,
      "quantity": 3
    }
  ],
  "subordersCount": 2
}
Notice that the request contains a list of items, "orderItems", which is then reflected in the response. So, to create the response dynamically, we need to parse the request JSON and build the response JSON from it. We can do that using the Traffic Parrot jsonPathList helper.

Let us also assume that we would like to accept the order only if the number of items in the request is even, and reject it if the number is odd. To do that, we can use the ifEven helper.
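Before looking at the full template, here is how these two helpers behave in isolation for the sample request above, which contains two order items:

{{size (jsonPathList request.body '$.orderItems')}}

renders as 2, and

{{#ifEven (size (jsonPathList request.body '$.orderItems'))}}true{{else}}false{{/ifEven}}

renders as true, because two is an even number.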

Here is a response body that you can copy and paste into Traffic Parrot that will generate a response dynamically based on the request:
{
  "orderAccepted": {{#ifEven (size (jsonPathList request.body '$.orderItems'))}}true{{else}}false{{/ifEven}},
  "orderItems": [
    {{#each (jsonPathList request.body '$.orderItems') }}
      {
        "orderItemId": {{ jsonPath this '$.partnumber' }},
        "quantity": {{ jsonPath this '$.quantity' }}
      }
      {{#unless @last}},{{/unless}}
    {{/each}}
  ],
  "subordersCount": {{size (jsonPathList request.body '$.orderItems')}}
}
Configuring an HTTP response in Traffic Parrot
This approach does not require knowledge of any programming language, so almost anybody can do it. You can use it with the HTTP(S), JMS, MQ and file simulators. To get started with Traffic Parrot, have a look at the quick start guide. Contact us for a free initial 30-minute consultation.




Tuesday 19 December 2017

Mocking and simulating JMS IBM® WebSphere MQ

We have just published a new tutorial on how to mock and simulate JMS IBM® WebSphere MQ. It explains how to use Traffic Parrot system simulation to test your application in isolation. Read more.

Wednesday 13 December 2017

Getting started with stubbing, mocking and service virtualization

We have started creating a series of tutorials on stubbing, mocking, API simulation and service virtualization. The first chapter is on getting started with stubbing, mocking and service virtualization!

Friday 8 December 2017

How can I simulate an outside system that uses MQ?

"In a real test scenario I have one test system which connects to outside system using MQ in production. How can I simulate that outside system in my test environment? Let me give example: XYZ Bank sends a message to processing system with XML fields of ACCNO, NAME, AMT, DATE etc. I am a tester of XYZ Bank Application. I want to verify the request and response. How can I do that? I am a sender and receiver but other system is outside my test environment. How to simulate it?" (source)

The simulator you are referring to is typically called a mock, stub or virtual service. If you would like to simulate the outside system to enable testing of your application you should use a mocking tool (also called service virtualization). Just make sure you use a tool that supports IBM MQ. An example of a tool that can simulate and mock IBM MQ for the purpose of testing is Traffic Parrot.

Tuesday 28 November 2017

How do I mock or simulate a message queue in JMS?


"I have a test message, which format and content I already know. I also have the Java class that parses and reads this message. In real world, this message will come from a JMS message queue. I would like to simulate, mock or generate a message queue on my local PC for testing purposes. How do I mock it?" (source)


To test an application in isolation when the real production JMS provider is not available, you can use one of the following:
  1. JMS mock: When testing your applications you can simulate the non-existing dependencies using test doubles. You can use a JMS mock which will simulate the behaviour of a real JMS provider. You can create a JMS mock using Traffic Parrot. Using a JMS mock will allow you a high level of flexibility during testing. You will be able to test typical production-like scenarios, but also hypothetical situations, by setting up your mock to return almost any type of message. You will also be able to simulate different types of errors, which is often hard to do with real JMS providers. Have a look at this introduction video to JMS service virtualization for ActiveMQ (service virtualization is a different name for a mock) or this one for IBM MQ.
  2. JMS provider test instance: You can run a JMS provider on your laptop or in one of your test environments and connect your application to it instead of the production provider. When you use open source providers like ActiveMQ or RabbitMQ in production, it should be easy to run one of them on your laptop as well, because they are lightweight and free. For IBM WebSphere MQ, you can use the free IBM MQ for Developers.
  3. JMS class mock: You can use Mockito in unit tests to mock interactions with JMS classes (see the sketch below). This solution comes with all the trade-offs of unit testing; for more information on those, see the testing pyramid. If you would like to black-box test your application, use one of the solutions described above.
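As a minimal sketch of option 3, here is what mocking JMS classes with Mockito could look like in a JUnit test. OrderMessageReader and Order are hypothetical application classes used only for illustration; the JMS and Mockito calls are standard:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.TextMessage;

public class OrderMessageReaderTest {
    @org.junit.Test
    public void readsAndParsesATextMessage() throws JMSException {
        // Mock the JMS message itself and give it a known payload
        TextMessage message = mock(TextMessage.class);
        when(message.getText()).thenReturn("<order><accno>12345</accno></order>");

        // Mock the consumer so it hands back our message without a real broker
        MessageConsumer consumer = mock(MessageConsumer.class);
        when(consumer.receive(anyLong())).thenReturn(message);

        // OrderMessageReader is the hypothetical class under test
        Order order = new OrderMessageReader(consumer).readNext();
        assertEquals("12345", order.getAccountNumber());
    }
}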

If you have any more questions or need help with creating your first JMS mock, schedule a free 30-minute consultation call with our consultants today.


Saturday 14 October 2017

Next gen service virtualization for Continuous Delivery and DevOps

On the 13th of September 2017, more than 40 software development and testing enthusiasts interested in service virtualization, API mocking and system simulation met at the “Next Gen Service Virtualization Forum” organised by WireMock and Red Hat Open Innovation Labs. Representatives from tool vendors including Curiosity Software Ireland, Hoverfly, MockLab and Traffic Parrot contributed to a panel discussion. In this report we will explore the highlights of the event. A full recording of the event is available online.

API mocking is becoming a critical piece of infrastructure for software teams

“I hate the term service virtualization. I spent years thinking it meant copying your services into a VMWare image.” Tom Akehurst, WireMock

As part of the introduction, Tom Akehurst described service virtualization as “simulating the interfaces that your application depends on during testing, so that you can test more quickly and more effectively.”
Tom defined the practice by stating: “I use the terms service virtualization and API mocking interchangeably, because conceptually I think they are the same thing. The terminology you use depends on whether you come from a big banking environment or the agile community.” He also said: “Service virtualization or API mocking can be transformative to the performance of a software team. I think it’s an underused technique. There are old-world, big, expensive vendor tools that have been in the service virtualization space for the best part of a decade, but the ideas have not trickled down to people in other sectors. There is an emerging set of tools, some open source and some commercial, that are more modern in nature, tend to fit better with agile practices, play more nicely with other tools, and do not try to lock you into a particular ecosystem. Getting more people doing [API mocking] would be a net positive for teams delivering software.”

How do service virtualization and API mocking fit into DevOps?

Huw Price shared a story about a successful project at a bank, where he used virtual MQ messages to reduce reputational damage. “If you look at most legacy systems, there are mainframes and there are MQ messaging systems. They [software teams at large enterprises] have no idea what’s in them. For example, for one bank client we created pivot tables and showed them what they had [in the MQ messages]. They had masses of data but very little spread of the actual differences of the data. So, if you can actually fill in all the gaps and create those as virtual MQ messages, that means we would stop all of the bugs that result in reputational damage.”
Huw did raise a few issues with the current state of service virtualization. He noticed that the tools still need improvement: “How good is the [API mocking] tech? Pretty average. There is still a lot of hacking and editing. The tech is pretty clunky.” He also expressed a concern with how the tools are used: “A lot of the SIs [software implementation outsourcing companies] like more inefficient products, because they can make money out of them. They can add bodies to them and create a service. I’ve heard $200k to virtualise one service.”
Huw noticed that “DevOps is about keeping requirements, code and tests as close as possible together. Service virtualization is part of the tests.”

Tools for Continuous Delivery are evolving

Wojciech Bulaty shared a story about his experience at a large bank and the lack of tools supporting messaging protocols without getting in the way of Continuous Delivery practices. He described a time when there were open source tools that supported HTTP(S), but none that handled messaging such as JMS or MQ. That led him to develop a new tool, Traffic Parrot, that supports both HTTP(S) and messaging. It is designed for both test and development teams following Continuous Delivery and DevOps practices.

Problems today and the future of service virtualization and API mocking

The panel session discussions highlighted several key issues with service virtualization and API mocking today, and generated a few ideas about where the industry might be heading and what the community should focus on.
Masking and anonymisation are still hard
Huw Price and Dave Syer agreed that masking and anonymisation of test data are hard. There are practices to approach the problem, but the tools could help with it as well. They also agreed that failure testing is becoming more and more important, especially in the world of microservices, where there are more APIs that can fail.
Understanding failure scenarios takes time
Benji Hooper from Hoverfly noticed that “The hard part about virtualizing APIs is really trying to understand what part of the interaction with the dependency or a service are you actually testing. The happy case is always the easiest to work out, but trying to work out how many bad cases are there and what are they like is difficult”. He also remarked that he had a problem virtualizing a messaging service due to a lack of technology, but he is happy that tools like Traffic Parrot that support messaging virtualisation are now available.
Tools are not intuitive enough
Dave suggested that “The tools aren’t actually that good. There is a lot of technology out there, but learning how to use it effectively is quite a challenge. Getting everybody up to speed with the practices is quite time consuming.” Wojciech Bulaty from Traffic Parrot added that this is an especially important issue in large organisations, where there are many developers and testers of various experience levels.
Record and playback has its place but has to be used with caution
Huw and Wojciech agreed that record and playback of traffic has its applications, but has to be used with great care. Huw suggested that a better approach might be “Record and Analyze”. Tom added that record and playback is “one of those things that can gain you productivity in the short term at the expense of costing you a lot in the long term”.
Adoption is below expectation
The panel agreed that adoption of the practice is below where it could be.
Benji shared a story: “The name [service virtualization] does not help. Most people I talk to have no clue, but after 2 minutes they get it. They are probably doing it already with some servlet. The name is too enterprise-like.” Wojciech added: “There are too many people that do not know that they do not know about service virtualization. We here as a community could take it to the next level and make them realise that they don’t know [about service virtualization] by having a 2 minute conversation with them.”
Changes in delivering software
Changes in software delivery and architecture are creating the pull for new tools. If you want to be able to develop at speed, you need tools that support it to stay competitive.

Summary

Service virtualization and API mocking are becoming more and more relevant in today's world of microservices and highly distributed systems. The move towards more Agile delivery models, away from mainframe and monolith development to more distributed microservice-based systems, requires new tooling. The next generation of service virtualization and API mocking tools, including Hoverfly, MockLab, Traffic Parrot and WireMock, are making it easier and more affordable to create an effective software delivery process.

Saturday 7 October 2017

Service virtualization and manual testing in Agile teams, is it still necessary?

This is a story from a time I consulted for a department at a large multinational enterprise. There were around 80 people working there on delivering software products for internal use.

The management had implemented Extreme Programming, a type of agile software development, and had used it for more than ten years. They had more than one hundred middleware applications (microservices) exposing APIs to other departments. Each application was tested by anywhere between hundreds and thousands of different types of automated tests after every code commit.

Despite those high levels of automation and 83-98% automated test code coverage (depending on the application), every team still performed manual exploratory testing of their software, with the help of service virtualization. Let us explore how the management justified the value of manual exploratory testing and service virtualization in that type of environment, and how it was implemented.

Their software delivery lifecycle

Developers would pick up a new story to work on, and start with writing acceptance tests. They would discuss the contents of the acceptance tests with the business and the testers working in the team. They would then proceed to write unit tests and production code.

Once the user story had been implemented and was working according to the developers' expectations, the testers would test the application manually. Quite often they would find unexpected behaviour that the developers did not anticipate. They would discuss with the developers whether immediate code changes were required or a new story needed to be raised to tackle the task shortly.

They would also find serious bugs once every 2-4 user stories. The cost of those bugs leaking to production would have been so high that it easily justified having an extra person on every team: a tester performing manual exploratory testing.

Why is service virtualization useful?

While manually exploring how the application behaves, a tester would simulate different types of hypothetical responses from third party and backend applications. It was often because of those hypothetical scenarios that bugs were found. The more backend or third party systems were involved in one transaction, the more likely it was that permutations of different non-typical responses would result in unexpected system behaviour.

Often, those test scenarios also included valid responses that had not been covered by the developers' acceptance tests. Developers are people too, so they sometimes missed obvious scenarios or made simple mistakes.

So, even though the developers thought they had implemented everything well enough and the new functionality could go to production as is, testers would make another judgment call on what else was likely to happen in production environments. That resulted in new stories or bug fixes being raised.

After testing in isolation, the tester always performed manual integration testing with backend and third party services.

Service virtualization tools used

All this would not be possible without a service virtualization tool well fitted for manual exploratory testing in agile environments. Since there were no appropriate tools available on the market at that time, this particular organisation decided to create one in-house. They spent 14 days over a period of 9 months developing and perfecting a tool for that team. That tool was specific to the applications they were developing.

The developers used an open source tool called WireMock in their acceptance tests. They built a GUI on top of WireMock, with several extensions, to allow for better usability during manual exploratory testing.

Because the developers were using WireMock in their acceptance tests, it was natural for the testers to import the same virtual services (or stubs, as they called them) into their GUI tool. Using the same base technology proved to be very efficient.
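As an illustration, here is a minimal WireMock stub mapping in JSON, of the kind that can be registered by acceptance tests and then imported into a GUI tool like the one described above (the URL and body are made up for this example):

{
  "request": {
    "method": "GET",
    "url": "/api/orders/3"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "body": "{\"orderAccepted\": true}"
  }
}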

Lessons learned on custom-built tools

The technical leads noticed that five teams had developed very similar tools to perform similar service virtualization and stubbing tasks. That totalled around 80 man-days of development effort: an inefficient use of resources.

An improvement on that strategy would be to use an off-the-shelf tool, provided one exists, instead of creating new tools in-house for every new project. It would help hit the ground running, save time and costs, and reduce the risk of bugs in custom-built tooling.

Benefits of a lightweight tool

At Traffic Parrot, we have taken all of those experiences into account while developing a new service virtualization and API mocking tool for Agile teams. Traffic Parrot provides powerful service virtualization and API mocking capabilities, but stays flexible and lightweight enough to be used in highly Agile teams by both developers and testers. Thanks to its many advanced features, it can also be used, very cost effectively, in less Agile environments while transitioning to a new delivery process.

Summary

We have learned that manual testing can be valuable even in an environment with a lot of automation. We had a look at why service virtualization is key to effective manual exploratory testing in highly Agile environments. We also explored how important it is to use off-the-shelf open source or commercial tools fitted for Agile teams, and listed two tools worth trying out.

Next steps

If you have a lot of automated tests but experience costly production bugs every other release, investigate whether manual exploratory testing could help address that problem. You do not have to hire people to run this small experiment. You can start by having existing team members who did not work on a given new piece of functionality put on a “testing hat”. They can test the application pretending they do not know how to code, as a tester would.

If you are not yet using stubbing, API mocking or service virtualization in your testing, but would like to reduce the number of bugs and the risks, give Traffic Parrot a try. Get one of your tech leads to look at the free Traffic Parrot community version, available for download at https://trafficparrot.com.

Introduction to JMS service virtualization and API mocking

Recently we presented "Introduction to JMS service virtualization" at the Next Generation Service Virtualization event. Thanks to Energized Work and Red Hat Open Innovation Labs for having us. Here is the recording of the session:

 

Friday 15 September 2017

Introducing support for JMS Service Virtualization, JMS API mocking and JMS stubbing

JMS: ActiveMQ & IBM MQ

We have just released Traffic Parrot 3.8.0, which supports JMS service virtualization, JMS API mocking and JMS stubbing. Version 3.8.0 supports ActiveMQ and IBM® WebSphere MQ Version 7.5. If you need support for other brokers, please get in touch and we will add them.

HTTPS Proxy

Traffic Parrot version 3.8.0 also supports HTTPS proxy service virtualization, API mocking and stubbing (proxy forwarding using SSL/TLS), which is useful when working with mobile apps (iPhone, Android and Genymotion) that talk HTTP(S) to backend servers and you need scripting capabilities. For more details, see the documentation.

Docker, Kubernetes (K8S)

We have improved support for Docker and Kubernetes.

Maven, Jenkins, Continuous Integration

We have improved integration with continuous integration and continuous delivery tools such as Maven, Gradle, Jenkins, TeamCity, and similar.

Download the latest version.


Saturday 15 July 2017

Stubbing and Service Virtualization Explained for Web Testing 101

Working together with the awesome Ministry Of Testing team, we have published a guide to stubbing and service virtualization.

Stubbing and service virtualization are great tools to understand and use for testing. You can also think of them as simulators for APIs. In the article we will explore the terminology involved with stubbing and service virtualization, and we’ll walk through a simplified example of how a service virtualization tool can work in cooperation with a testing framework. We’ll also explore a few problems testers can face when using these testing techniques.

To get the full version of the article visit https://dojo.ministryoftesting.com/lessons/stubbing-service-virtualization-explained (available to Ministry Of Testing Pro Subscribers only).

Tuesday 13 June 2017

Three 60-second videos on how to apply service virtualization in QA teams


Manual exploratory testing - QA and development working in isolation:


Starting with QA automated testing:


Continuous Delivery team - sharing stubs or virtual services:


Bonus: Getting started with Traffic Parrot in 3 minutes:


Tuesday 9 May 2017

Flaky Tests at Google

Jeff Listfield has published an interesting read on where test flakiness at Google comes from.

The main takeaway from that article is that the larger the binary and the more RAM a test consumed, the more likely it was to be flaky.

So, try to avoid creating large tests, and you should have fewer flaky tests!

It is worth noting that one way of addressing the flakiness of certain types of tests is stubbing, mocking or service virtualization. When used properly, it can help stabilize the environment. Whether it is the right thing to do has to be assessed on a case-by-case basis. For example, system tests will often be run in isolation, but end-to-end or smoke tests will require the whole stack to be deployed.

Monday 1 May 2017

4 tips on "selling" service virtualization inside a large enterprise

What are the next steps if we have done a "POC" and people still don’t want to use service virtualization?

A service virtualization consultant recently asked me the following question.

"We have finished a proof of concept project and demonstrated that service virtualization would work in our company. We have presented our findings to several teams, but none were interested enough to start implementing it in their sprints. What do we do now?" Service Virtualization Specialist at a Fortune 300 company.

Selling service virtualization

There are a few ways this problem can be approached depending on the details and specifics of the organization.

There are four key points to make that should apply to your situation.

1. Selling

You are no longer only a service virtualization specialist once the proof of concept project has finished. Often, you need to become a salesman as well. You need to apply proper sales and marketing practices to win over individuals and teams inside your organization to help them solve their pressing problems.

2. Solving real problems

Good sales people solve people's problems; they do not push solutions. They help people understand that what they offer is a good idea for a given situation if there is a big enough problem to solve, or they walk away.

When you sell, remember you have to deliver value to the teams by solving real, high-priority problems. You have to build rapport and understand why they think they do not need service virtualization, and what problems they have that you could help with. You need to “get a Ph.D.” in their internal and external issues. You also have to understand political issues inside the team that might prevent them from implementing service virtualization. Once you understand the whole environment inside out, you can suggest solutions.

3. Technology adoption lifecycle

Remember that in every large enough organization you will find people who will be very keen to try a new approach, and individuals who will resist using it until everybody else is on board. This is called the technology adoption lifecycle. Look for the innovators and the early majority.

4. Influencers

Some organizations have natural leaders, tech leads, architects or other individuals that developers and testers look up to. You could approach those influencers and work with them to spread service virtualization knowledge.

What are your experiences with “selling” service virtualization in your organization?





Friday 14 April 2017

80/20 service virtualization

How do we create virtual services for all of our test environments?

I was asked this question recently by a managing director at a large test consultancy based in the UK. They work with financial institutions whose old test infrastructure needs service virtualization because of typical problems like third party transaction costs, scheduling time on old mainframe environments, and test data setup issues.

When we unpack that question, we realise that the short answer is: in most cases, you don't need to start by virtualizing everything. Instead, you focus on the low-hanging fruit, start delivering value to testing teams, and scale from there.

80/20 rule in service virtualization

Try to apply the Pareto principle in service virtualization as often as possible. You could interpret it as “80% of the value delivered by service virtualization comes from 20% of the virtual services”. Some managers also say that they would “invest time in picking the low-hanging fruit first”.

An example of applying that mindset would be to find easy-to-virtualize APIs that are causing a lot of problems for the testing teams, virtualize them, and let the test teams use those assets. Once that has been done, look for the next most important APIs to virtualize and repeat the cycle.


Tuesday 11 April 2017

Introduction to Service Virtualization for software testers

A couple of weeks ago we talked about service virtualization at #TesterGathering in London. There were many interesting questions from the audience. Here is the recording of the talk.

Tuesday 21 March 2017

3 tips for creating virtual services for an Enterprise Service Bus (ESB)

How much of an approximation might virtual services be?


I was asked this question recently by a senior technical test infrastructure architect. After drilling down into the details, I found that an example he was interested in was virtualizing an existing ESB (Enterprise Service Bus) in an environment where no stubs or virtual services are used yet.

I would rate virtualizing a whole ESB as a task of medium or hard difficulty, depending on the details. Ideally, avoid virtualizing a large ESB as your first service virtualization project if possible, and pick other, simpler APIs to start with. If you have to do it, though, here are my top 3 tips. Please bear in mind they might not apply in your case; every case is different, especially when virtualizing large API sets.

Don’t boil the ocean


I have made this mistake myself on one of my projects. We were very confident we would be able to deliver virtual services for the whole ESB in a few months, but after a month we realised the APIs were so complex that there was no way we could even understand some of them without spending weeks in discussions with the developers.

If you have more than 50 endpoints in your ESB, do not try to virtualise everything in one go and spend months doing it. Find a small subset of 1 to 10 APIs that cause problems for the test team and virtualize them in a week or two. Then let the testing teams use those virtual services.

Do service virtualization in small increments of work, delivering value to testing teams on a regular basis. Don’t make them wait for months; it is too risky.

Use record-replay if possible


I was once on a project where the requests and responses contained a lot of data. A SOAP request or response could include a hundred different key-value pairs. The XML structure was simple, but the complexity was in the data.

If you have an existing environment with many requests and responses going through the ESB, those are likely quite complex APIs. Creating virtual services from a WSDL file will generate the data in the virtual services for you, but that data is unlikely to be representative; it is very likely to be invalid. To avoid problems like that, use record-replay to create virtual services; this way you capture the request and response structure as well as the data.
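As a hypothetical illustration (the payloads below are invented for this post), a virtual service generated from a WSDL alone tends to respond with placeholder data:

<accountBalanceResponse>
  <accno>?</accno>
  <name>string</name>
  <amt>?</amt>
</accountBalanceResponse>

whereas a virtual service created with record-replay captures representative data as well as structure:

<accountBalanceResponse>
  <accno>553211</accno>
  <name>ACME SUPPLIES LTD</name>
  <amt>1250.00</amt>
</accountBalanceResponse>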

Is virtualizing the ESB a good idea at all?


Ask yourself this question: are you virtualizing in the best place? Maybe it would make sense to virtualise some of the backend systems “below” the ESB instead? Or maybe it would be valuable, and easier, to virtualise somewhere “above” the ESB? The answer depends heavily on your environment. Contact us to schedule a free 20-minute consultation to help you work out the best plan of action in your case.

What is your experience with virtualizing ESBs?

Sunday 5 March 2017

Reducing Third Party Test Transaction Costs

If you have never heard about stubbing or service virtualization, this article should be a good basic introduction to reducing test transaction costs. If you already know about stubbing and service virtualization, have a look at the Traffic Parrot introduction or browse other posts on our blog instead.

Test transaction costs in banking and insurance

Many banking and insurance systems connect to third party services that charge for usage on a per-transaction basis. This often includes, but is not limited to, accessing market data or detecting fraud.

Testing and test transaction costs

When software is tested together with a third party service, you have to pay for the requests your systems make. In many cases, those costs become significant, reaching more than $1k per test suite run. This results in unhealthy decision dynamics, like reducing the amount of testing done or decreasing the frequency of test cycles, which in turn accelerates the accumulation of technical debt.

Reducing the costs

There is a simple solution to that problem. Instead of always testing the system connected to the third party service, you can introduce a new testing phase and test with a test double pretending to be the real system. After the system has been tested in isolation, you can proceed to integration testing, but do only a minimal amount of it. This should reduce the third party test transaction costs to the absolute minimum. We have seen reductions in third party transaction costs of up to 93%; at that rate, a test suite run that used to cost $1,000 in third party transactions would cost only $70.

Other benefits

Service virtualization is a popular practice across the industry. Implementing it will not only reduce third party test transaction costs; it has other advantages as well, for example resolving test data setup and environment uptime issues. For more details, have a look at our introduction to service virtualization video.

Next steps

Contact us for a free 20-minute consultation call to help you get started with service virtualization or have a look at our introduction to service virtualization video.