Saturday, 14 October 2017

Next gen service virtualization for Continuous Delivery and DevOps

On 13 September 2017, more than 40 software development and testing enthusiasts interested in service virtualization, API mocking and system simulation met at the “Next Gen Service Virtualization Forum” organised by WireMock and Red Hat Open Innovation Labs. Representatives from tool vendors including Curiosity Software Ireland, Hoverfly, MockLab and Traffic Parrot also contributed to a panel discussion. In this report we explore the highlights of the event. A full recording of the event is available online.

API mocking is becoming a critical piece of infrastructure for software teams

“I hate the term service virtualization. I spent years thinking it meant copying your services into a VMware image.” Tom Akehurst, WireMock

As part of the introduction, Tom Akehurst described service virtualization as “simulating the interfaces that your application depends on during testing, so that you can test more quickly and more effectively.”
Tom defined the practice by stating: “I use the terms service virtualization and API mocking interchangeably, because conceptually I think they are the same thing. The terminology you use depends on whether you come from a big banking environment or the agile community.” He also said: “Service virtualization or API mocking can be transformative to the performance of a software team. I think it’s an underused technique. Although there are old-world, big, expensive vendor tools that have been in the service virtualization space for the best part of a decade, the ideas have not trickled down to people in other sectors. There is an emerging set of tools, some open source and some commercial, that are more modern in nature, tend to fit better with agile practices, play more nicely with other tools and do not try to lock you in to a particular ecosystem. Getting more people doing [API mocking] would be a net positive for teams delivering software.”
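To make the idea concrete, here is a minimal sketch of API mocking using WireMock's Java DSL. The payment endpoint, port and payload are illustrative assumptions, not examples from the talk:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class PaymentApiMock {
    public static void main(String[] args) {
        // Start an HTTP server that stands in for the real dependency
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Simulate the dependency's interface: a GET to /payments/123
        // returns a canned JSON response, no real service required
        server.stubFor(get(urlEqualTo("/payments/123"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": \"123\", \"status\": \"SETTLED\"}")));

        // The application under test is then pointed at http://localhost:8089
        // instead of the real payment service
    }
}
```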

How do service virtualization and API mocking fit into DevOps?

Huw Price shared a story about a successful project at a bank, where virtual MQ messages were used to reduce the risk of reputational damage. “If you look at most legacy systems, there are mainframes and there are MQ messaging systems. They [software teams at large enterprises] have no idea what’s in them. For example, for one bank client we created pivot tables and showed them what they had [in the MQ messages]. They had masses of data but very little spread of the actual differences of the data. So, if you can actually fill in all the gaps and create those as virtual MQ messages, that means we would stop all of the bugs that result in reputational damage.”
Huw did raise a few issues with the current state of service virtualization. He noted that the tools still need improvement: “How good is the [API mocking] tech? Pretty average. There is still a lot of hacking and editing. The tech is pretty clunky.” He also expressed a concern with how the tools are used: “A lot of the SIs [systems integration outsourcing companies] like more inefficient products, because they can make money out of them. They can add bodies to them and create a service. I’ve heard $200k to virtualise one service.”
Huw observed that “DevOps is about keeping requirements, code and tests as close together as possible. Service virtualization is part of the tests.”

Tools for Continuous Delivery are evolving

Wojciech Bulaty shared a story about his experience at a large bank and the lack of tools that supported messaging protocols without getting in the way of Continuous Delivery practices. He described a time when there were open source tools that supported HTTP(S), but none that handled messaging such as JMS or MQ. That led him to develop a new tool called Traffic Parrot, which supports both HTTP(S) and messaging and is designed for test and development teams following Continuous Delivery and DevOps practices.

Problems today and the future of service virtualization and API mocking

The panel session discussions highlighted several key issues with service virtualization and API mocking today, and generated a few ideas about where the industry might be heading and what the community should focus on.
Masking and anonymisation are still hard
Huw Price and Dave Syer agreed that masking and anonymisation of test data are hard; there are practices for approaching the problem, but the tools could help more. They also agreed that failure testing is becoming more and more important, especially in the world of microservices, where there are more APIs that can fail.
Understanding failure scenarios takes time
Benji Hooper from Hoverfly observed that “The hard part about virtualizing APIs is really trying to understand what part of the interaction with the dependency or a service you are actually testing. The happy case is always the easiest to work out, but trying to work out how many bad cases there are and what they are like is difficult.” He also remarked that he had once struggled to virtualize a messaging service due to a lack of tooling, and was glad that tools like Traffic Parrot that support messaging virtualization are now available.
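As a flavour of what enumerating those bad cases can look like in practice, here is a hedged sketch using WireMock; the endpoints and failure modes are invented for the example:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.http.Fault;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class UnhappyPaths {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // A dependency that answers, but with a server error
        server.stubFor(get(urlEqualTo("/inventory"))
                .willReturn(aResponse().withStatus(503)));

        // A dependency that is up, but pathologically slow
        server.stubFor(get(urlEqualTo("/pricing"))
                .willReturn(aResponse().withStatus(200).withFixedDelay(10_000)));

        // A dependency whose connection drops mid-response
        server.stubFor(get(urlEqualTo("/stock"))
                .willReturn(aResponse().withFault(Fault.CONNECTION_RESET_BY_PEER)));
    }
}
```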
Tools are not intuitive enough
Dave suggested that “The tools aren’t actually that good. There is a lot of technology out there, but learning how to use it effectively is quite a challenge. Getting everybody up to speed with the practices is quite time consuming”, to which Wojciech Bulaty from Traffic Parrot added that this is an especially important issue in large organisations, where there are many developers and testers of varying experience levels.
Record and playback has its place but has to be used with caution
Huw and Wojciech agreed that record and playback of traffic has its applications but has to be used with great care. Huw suggested that a better approach might be “record and analyze”. Tom added that record and playback is “one of those things that can gain you productivity in the short term at the expense of costing you a lot in the long term”.
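For context, this is roughly what record and playback looks like with WireMock's snapshot recorder; the target URL is hypothetical, and as the panel noted, the captured mappings still deserve a review pass:

```java
import com.github.tomakehurst.wiremock.WireMockServer;

public class RecordThenReplay {
    public static void main(String[] args) {
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Proxy traffic through to the real service and capture it
        server.startRecording("http://real-service.example.com");

        // ... exercise the application against http://localhost:8089 ...

        // Persist the captured interactions as stub mappings; from now on
        // they replay without the real service. This is where "record and
        // analyze" matters: review what was captured rather than replaying
        // it blindly.
        server.stopRecording();
    }
}
```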
Adoption is below expectation
The panel agreed that adoption of the practice falls well short of its potential.
Benji shared a story: “The name [service virtualization] does not help. Most people I talk to have no clue, but after 2 minutes they get it. They are probably doing it already with some servlet. The name is too enterprise-like.” Wojciech added: “There are too many people that do not know that they do not know about service virtualization. We here as a community could take it to the next level and make them realise that they don’t know [about service virtualization] by having a 2 minute conversation with them.”
Changes in delivering software
Changes in software delivery and architecture are creating the pull for new tools. Teams that want to develop at speed need tooling that supports it in order to stay competitive.

Summary

Service virtualization and API mocking are becoming more and more relevant in today's world of microservices and highly distributed systems. The move towards more Agile delivery models, away from mainframe and monolith development and towards distributed, microservice-based systems, requires new tooling. The next generation of service virtualization and API mocking tools, including Hoverfly, MockLab, Traffic Parrot and WireMock, is making it easier and more affordable to create an effective software delivery process.

Saturday, 7 October 2017

Service virtualization and manual testing in Agile teams: are they still necessary?

This is a story from a time when I consulted for a department at a large multinational enterprise. Around 80 people worked there, delivering software products for internal use.

The management had implemented the Extreme Programming methodology, a type of agile software development, and had used it for more than ten years. They had more than one hundred middleware applications (microservices) exposing APIs to other departments. Each application was exercised by anywhere from hundreds to thousands of automated tests of different types after every code commit.

Despite those high levels of automation and 83-98% automated test code coverage (depending on the application), every team still performed manual exploratory testing of its software, with the help of service virtualization. Let us explore how the management justified the value of manual exploratory testing and service virtualization in that type of environment, and how it was implemented.

Their software delivery lifecycle

Developers would pick up a new story to work on and start by writing acceptance tests. They would discuss the contents of the acceptance tests with the business and with the testers on the team. They would then proceed to write unit tests and production code.

Once the user story had been implemented and was working as the developers expected, the testers would test the application manually. Quite often they would find unexpected behaviour that the developers had not anticipated. They would then discuss with the developers whether immediate code changes were required or a new story needed to be raised to tackle the issue shortly afterwards.

They would also find a serious bug once every 2-4 user stories. The cost of those bugs leaking to production would have been so high that it easily justified having an extra person on every team: a tester performing manual exploratory testing.

Why is service virtualization useful?

While manually exploring how the application behaved, a tester would simulate different types of hypothetical responses from third-party and backend applications. It was often those hypothetical scenarios that exposed the bugs. The more backend or third-party systems were involved in one transaction, the more likely it was that permutations of non-typical responses would result in unexpected system behaviour.
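For illustration, a hypothetical response of that kind might be set up against a running mock server like this; the endpoint and payload are invented for the example:

```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class ExploratoryScenario {
    public static void main(String[] args) {
        // Point the WireMock client at a mock server that is already
        // running on port 8089 (started standalone or by the test harness)
        configureFor("localhost", 8089);

        // A syntactically valid but surprising backend response:
        // an account with no owner and a negative balance
        stubFor(get(urlEqualTo("/accounts/42"))
                .willReturn(okJson("{\"id\": 42, \"owner\": null, \"balance\": -50.00}")));
    }
}
```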

Often, those test scenarios also included valid responses that had not been covered by the developers' acceptance tests. Developers are people too, so they sometimes missed obvious scenarios or made simple mistakes.

So even when the developers thought they had implemented everything well enough for the new functionality to go to production as is, testers would make another judgment call on what else was likely to happen in production environments. That resulted in new stories or bug fixes being raised.

After testing in isolation, the tester always performed manual integration testing with the backend and third-party services.

Service virtualization tools used

None of this would have been possible without a service virtualization tool well suited to manual exploratory testing in agile environments. Since no appropriate tools were available on the market at the time, this particular organisation decided to create one in-house. They spent 14 days over a period of 9 months developing and refining a tool for that team. The tool was specific to the applications they were developing.

The developers used an open source tool called WireMock in their acceptance tests. They built a GUI on top of WireMock, with several extensions, to improve usability for manual exploratory testing.

Because the developers were using WireMock in their acceptance tests, it was natural for the testers to import the same virtual services (or stubs, as they called them) into their GUI tool. Using the same base technology proved very efficient.
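One way to share stubs like this is to keep WireMock's JSON stub mappings in version control and load them from a common directory; a minimal sketch, where the "shared-stubs" directory name is an illustrative assumption:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class SharedStubsServer {
    public static void main(String[] args) {
        // "shared-stubs" contains a "mappings" folder of JSON stub
        // definitions checked in alongside the acceptance tests, so
        // developers and testers run against the same virtual services
        WireMockServer server = new WireMockServer(
                options().port(8089).usingFilesUnderDirectory("shared-stubs"));
        server.start();
    }
}
```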

Lessons learned on custom-built tools

The technical leads noticed that five teams had developed very similar tools to perform similar service virtualization and stubbing tasks. That totalled around 80 man-days of development effort, which was an inefficient use of resources.

An improvement on that strategy would be to use an off-the-shelf tool, provided one exists, instead of creating new tools in-house for every project. That would help teams hit the ground running, save time and cost, and reduce the risk of bugs in custom-built tooling.

Benefits of a lightweight tool

At Traffic Parrot, we took all of those experiences into account while developing Traffic Parrot, a new service virtualization and API mocking tool for Agile teams. It provides powerful service virtualization and API mocking capabilities while staying flexible and lightweight enough to be used by both developers and testers in highly Agile teams. Thanks to its many advanced features, it can also be used, very cost effectively, in less Agile environments that are transitioning to a new delivery process.

Summary

We have learned that manual testing can be valuable even in an environment with a lot of automation. We had a look at why service virtualization is key to effective manual exploratory testing in highly Agile environments. We also explored the importance of using off-the-shelf open source or commercial tools suited to Agile teams, and listed two tools worth trying out.

Next steps

If you have a lot of automated tests but experience costly production bugs every other release, investigate whether manual exploratory testing could help address the problem. You do not have to hire new people to run this small experiment. You can start by having existing team members who did not work on a given piece of functionality put on a “testing hat”. They can test the application as a tester would, pretending they do not know how the code works.

If you are not yet using stubbing, API mocking or service virtualization in your testing but would like to reduce the number of bugs and the risks, give Traffic Parrot a try. Get one of your tech leads to look at the free Traffic Parrot community version, available for download at https://trafficparrot.com.

Introduction to JMS service virtualization and API mocking

Recently we presented "Introduction to JMS service virtualization" at the Next Generation Service Virtualization event. Thanks to Energized Work and Red Hat Open Innovation Labs for having us. Here is the recording of the session:
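For readers who prefer code to video, here is a minimal sketch of the idea behind JMS virtualization: a stand-in service that consumes requests from a queue and replies with a canned message. The queue name, reply payload and broker setup are illustrative assumptions, not taken from the talk:

```java
import javax.jms.*;

public class JmsVirtualService {
    // The ConnectionFactory comes from whichever JMS provider is in use
    // (e.g. an ActiveMQ or MQ client library); its setup is omitted here
    public static void start(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("ORDER.REQUEST"));

        // Reply to each request with a canned message, correlated by
        // message ID, just as the real backend would
        consumer.setMessageListener(request -> {
            try {
                TextMessage reply =
                        session.createTextMessage("<orderStatus>ACCEPTED</orderStatus>");
                reply.setJMSCorrelationID(request.getJMSMessageID());
                session.createProducer(request.getJMSReplyTo()).send(reply);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        });

        connection.start();
    }
}
```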