Saturday, 15 October 2016

What is Traffic Parrot?

Traffic Parrot helps IT teams deliver faster and with more predictability. It also helps reduce software testing costs. We do that by implementing a practice growing in popularity called service virtualization.

Service virtualization is like using a wind tunnel to test your software. Before going into mass production, a new model of an aircraft is tested in a wind tunnel under different conditions. With Traffic Parrot, you can test your software in a wind tunnel as well.

Are your testers and developers often blocked by other teams? Do you find at least 5 bugs in production per release that you would like to instead find in earlier stages of the SDLC to reduce costs? Is one of the systems you depend on available only 2 days a week when you need it 5 days a week for testing your application? Is setting up test data in external systems expensive or time-consuming? Are you paying high third party system transaction fees every time you run the same suite of tests? Are you waiting for an external team to deliver an API you need for functional testing of your system?

Traffic Parrot helps with these problems by simulating backend and third party dependencies. For example, a tester will use Traffic Parrot to create virtual services that simulate a backend server. That will allow her to continue testing even if the backend server is unavailable. It also helps with setting up test data; it is much easier to do that in a virtual service she has control over. You can manually craft a virtual service based on request/response pairs, or create one by recording the traffic between your application and the backend or third party system.
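To make the idea of a hand-crafted request/response pair concrete, here is a minimal, hypothetical sketch using the open-source WireMock library (one of the tools discussed elsewhere on this blog); the endpoint and payload are made up, and Traffic Parrot itself offers a GUI-driven workflow for the same concept.

import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class BackendStub {
    public static void main(String[] args) {
        // Start a stand-in for the real backend on port 8080
        WireMockServer backend = new WireMockServer(options().port(8080));
        backend.start();

        // Hand-crafted request/response pair: when the application asks for
        // this customer, the virtual service returns a canned response
        backend.stubFor(get(urlEqualTo("/customers/123"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":123,\"name\":\"Jane Doe\"}")));
    }
}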

Saturday, 8 October 2016

Traffic Parrot for Agile testing and development teams

Why is it a good idea to buy Traffic Parrot if you are a manager of an Agile development team?

I have seen it take 3-10 days of development to create an over-the-wire stub with a GUI. Average Java developer pay in the UK is £450pd for contractors or £50k for permanent staff. £50k divided by 252 working days in a year is roughly £200pd for a permanent developer.

A rough estimate of the in-house development cost is a minimum of 3×£200=£600 and a maximum of 10×£450=£4,500. So you could be spending between £600 and £4,500 a year on developing a tool like this even if only one team needs it.

So, a reasonable guess is that you will spend somewhere between £600 and £4,500, say £2,500, on a stub with a GUI if you build it in-house. Why not buy Traffic Parrot for Agile teams for only $49 a month, which is around £500 a year?

If there is more than one team developing a tool like this, which I have seen happen in one organization, you will spend a multiple of that number.

If you are doing pair programming, the estimate could be anywhere from half to double that, depending on the pair.

Please contact us to walk through your situation and run these calculations to see if Traffic Parrot makes sense for your team.

Sunday, 4 September 2016

Five steps on how to implement a service virtualization tool proof of concept project in less than a week

Are you a manager tasked with implementing a service virtualization project? Here is an idea on how to implement a service virtualization proof of concept pilot in less than a week:
  1. Find a small to medium size system or application managed by 2-12 people. It will be easier to work with a smaller team and on a less complex system.
  2. Have developers working closely with testers to find an API to virtualize
    1. It should be stateless; transactional APIs are harder to virtualize
    2. It should be simple, for example, an HTTP SOAP request with not more than 20 tags to a third party, instead of an encoded custom TCP protocol call to a mainframe system
    3. Should cause significant problems that can be measured and showcased, for example:
      • The backend system exposing the API is often unavailable, preventing testing
      • It is hard to set up test data in the backend system; the wait time is more than 4 weeks
      • The API has not yet been released to the test environments and the wait time is more than 8 weeks
      • Third party service test transactions are expensive, limiting the amount of testing done
  3. Choose a service virtualization tool that allows for a fast ROI 
    • A tool that can run on existing hardware, to save time requesting new hardware (for example TrafficParrot, Wiremock or Mountebank)
    • A tool that can be used by both developers in an automated way for long-term maintainability and testers for exploratory testing (for example TrafficParrot)
  4. Create a squad team consisting of at least one person from development, testing and environment management and trust them to do the job properly. Ideally, they should sit next to each other to improve the communication channels.
  5. Relax and trust the team will deliver. Wait for a week.
Keep in mind, this is only one of many examples of how you could implement the pilot project. The devil is in the details. You can request a free, no-obligations, 20-minute over-the-phone service virtualization consultation where we will go through your most pressing problems.


Do you disagree? Leave a comment!


Wednesday, 3 August 2016

My story with service virtualization, CA Lisa and Wiremock





My story with service virtualization

This is my story with service virtualization. If you have ever heard about service virtualization and you would be interested in somebody else's story over a period of five years, please continue reading.

I've been doing software development and testing for more than ten years now, six of which have involved continuous delivery, continuous integration, TDD, BDD, pair programming, and similar agile practices.

Four companies

This is going to be a story about four companies. I would like you to remember that I'm going to be presenting this story from the continuous delivery point of view. So whenever we're talking about something here, this is from the continuous delivery approach mindset.

Agile 2011-2013

The story begins in 2011-2013 when I was working for organizations that were following the agile principles to a certain extent.

Media company

The first example is a media company, which had 100+ different applications (different code bases) running on more than 200 JVMs. There were 50 people working on those: developers, QAs, database guys, ops guys, support. They were separate functional teams, but they were working very closely together.

When it comes to service virtualization, back then we used to call it stubbing. We would build stubs in-house. That was because there were no lightweight products available at the time that would fit the continuous integration model we were using. No open source alternatives or anything like that. Those tools were used by developers and QAs, both for functional and performance testing. What I noticed was that it took two to ten days of development to develop such a tool.

Investment bank

Then the story continues and I go to an investment bank developing a risk system. This was a fairly large system running on 1,000 servers in production managed by around 30 people, very cross-functional teams mixed with functional teams. Developers would be doing quality assurance and production support.

In that organization, we also built stubs in-house for developers and it took only two days of development, on average, because they were very specialized. There wasn't much to be built. And, at that time, there still were no lightweight products available that would suit our needs, so we had to build something in-house. We didn't have to cater for QA requirements just because there were no QAs. Developers did everything.

Retail bank

The story continues to 2013-2015, when I was introducing agile to a retail bank. The bank had some issues around delivery. The releases were taking too long, 6 to 12 months. There were a lot of bugs in late stages like UAT testing and production. And it was quite expensive. The complexity of the system was more or less the same as the risk application I was working on in the investment bank I have mentioned, but the retail bank application was 20 times more expensive than that system (sic!).

There were also some architectural challenges. The webapp was a monolith code base with a lot of low-quality Java code. It was connecting to middleware gateways and C++ middleware, and then to 20+ backend systems via different protocols: HTTP, JMS, and custom TCP protocols. There was very little test automation, very long-running builds, and the reverse testing pyramid in practice: almost no unit tests, some automated functional testing, some acceptance tests, and then a lot of manual testing on top.

They had strictly functional teams that were not working closely together, so developers and QAs did not get along well.

Service virtualization

That was the first time I had heard the term service virtualization. In the past, I had called this over-the-wire stubbing or over-the-wire mocking.

The bank wanted to increase the release schedule predictability and reduce the number of production defects. They hired 15 very experienced and very high profile tech leads to make that change. Those tech leads worked on a few large, strategic projects, like modularizing the codebase and moving to a modern source control system. But they were also working on tactical solutions to pick up the low-hanging fruit and deliver something quickly, because the strategic solutions would take a long time to implement.

One of the things they came up with was to record the 20+ backend systems, automate testing that way and see what happens. That's how I got hired.

I got hired

I was introduced to CA LISA. CA Service Virtualization is the current name for the system.

My first impression was that it was the first time I had seen this record and replay functionality, which was pretty cool. It was doing templating, with so-called magic strings. That was cool as well. And it supported many protocols; for us, the big win was the custom TCP protocol virtualization that the bank needed.

We proceeded to implementation. After six months of trying to do continuous delivery, continuous integration, TDD, and those kinds of things with CA LISA, we found that CA LISA works. We've had several successes with it. We made several teams happier than they were before. They could use a tool to do stubbing or service virtualization. And we had around £300,000 of estimated cost savings.

But we had some learnings along the way. We found that CA LISA didn't play very well with source control systems like Git. It produces XML files. You can keep them in source control, for example in Git, but they are not easy to merge, and it is hard to compare different versions of those files. Keeping in mind my background and what I have seen in the past, I have seen this done much more cleanly.

Also, the centralized deployment infrastructure didn't play very well with our Jenkins build. Ideally what you want to have is everything contained within the build agent and it should do whatever it needs to do to build and test the application. But that didn't play very well with the centralized deployment infrastructure CA LISA provides. Also, because of the licensing model and how heavy the infrastructure was, we couldn't deploy it on every build agent.

Another thing we noticed was that it was very hard to set up the stubs or virtual services from the test. What you usually write are tests split into three logical sections: the "givens" section, the "whens" section, and the "assertions" section. In the givens section, the first section of the test, you set up the prerequisites for the test, including the stubs you need for the test to pass. Then you execute the test. Then, in the last section, you make assertions. Unfortunately, CA LISA wasn't helping us with that. It was possible, but it didn't look as simple as what we had seen in the past.

The BeanShell extensions were causing us problems as well when scaling to other teams. You couldn't use tools like IntelliJ or Eclipse to write those extensions, which is what we had done in the past. In the past I would write an extension to a stub by simply implementing a Java interface, using the code completion provided by an IDE such as IntelliJ or Eclipse. But that wasn't possible here. You had to do it in the LISA workstation, and it was quite counterintuitive for us. You had to learn a lot of new things before you could do something meaningful.

We had some learnings about the infrastructure as well. The offshore VMs provided to our partners had very few resources, and the LISA workstation took a gigabyte of RAM and a lot of disk space. Those VMs did not have enough RAM to run the LISA workstation in the configuration required by our virtual services. In the past, the in-house built stubs were simple web applications and you could just use a web browser to connect to the stub. So we felt that approach might have been a better fit for us at that time.

The infrastructure (separate VSE, separate registry, and separate workstation) caused us a few problems as well. For example, the recommended way was to record only in the workstation, whereas what we quite often wanted to do was record directly on the VSEs. Managing the workstations across many VMs and laptops was also a problem because of versioning.

We had some learnings around the licensing as well. We wanted to scale to more than 70 teams, ideally keeping continuous delivery, continuous integration and TDD in mind. We wanted to have an instance of LISA per developer or QA, but that did not look feasible because of the resources it consumed on a VM or a laptop. So we thought, okay, let's have at least one per team. But even that was a problem because of the licensing model, so we had to abandon that idea.

There were a few general user experience learnings we had. We got the feeling there were too many screens, too many controls, and too many components that we just didn't need. It was a very powerful tool, but we didn't need most of the functionality.

Our main takeaway was that LISA works. It supports custom TCP protocol virtualization, JMS, and HTTP(S). But when we tried to apply it in a continuous delivery environment, it didn't work for us as well as we would have hoped, keeping in mind my personal experiences with stubbing in agile environments.

It's worth mentioning here that other departments in the bank were using CA LISA as well. Those departments were usually in maintenance mode, so there was no focus on faster release times and no focus on things like continuous delivery. As far as I know, CA LISA has been very successful in those environments.

It's also worth mentioning that, in this retail bank, there were two in-house built tools that had been developed to do service virtualization and stubbing (or whatever you want to call it). They were built to facilitate stubbing of specific components. Unfortunately, neither of those tools was able to record the 20+ backend systems. But what we found was that they fit very well into the continuous integration mindset, and it took around two to ten days, on average, to develop such a tool.

2015-2016

The story continues to 2015-2016, when I was back at an organization that does a lot of agile. The company had 130+ different applications (different codebases) running on a lot of JVMs in production. There were 50 people managing all of that, working in functional teams very closely together.

Wiremock

By this time the Wiremock platform, an open source stubbing and service virtualization solution, was mature enough that we could start looking at using it.

We built a service virtualization and over-the-wire stubbing solution on top of Wiremock. It was designed to be used by both DEVs and QAs. DEVs would use the Wiremock remote setup programmatic APIs, and QAs would use a Web GUI built on top of Wiremock for their exploratory and performance testing. The QAs in this team were not very tech-savvy and needed a simple GUI to navigate the tool. What I found was that it took around 15 days of programming over 9 months to build it.

What we found was that Wiremock fits very well into the continuous integration mindset and is a great tool for developers. It has some bugs, as it is still a new tool, but I would use it again if I got the chance. Unfortunately, we had to do some internal development on top of it, just because it wasn't designed with non-tech-savvy QAs in mind.
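To give an idea of what the developers' remote setup looked like, here is a rough, hypothetical sketch using WireMock's Java client API; the hostname, port and endpoint are made up for the example.

import com.github.tomakehurst.wiremock.client.WireMock;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class RemoteStubSetup {
    public static void main(String[] args) {
        // Point the WireMock client at a stub server running remotely
        WireMock remoteStubs = new WireMock("qa-stubs.example.com", 8080);

        // Prime it over HTTP with a canned response for the dependency call
        remoteStubs.register(get(urlEqualTo("/inventory/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withBody("{\"sku\":42,\"inStock\":true}")));
    }
}

The Web GUI the QAs used sat on top of the same kind of remotely running server, so both groups worked against the same stub mappings.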

Next logical step?

What would be a logical next step for you if you were in my position? Well, for me it was combining all of those experiences: traditional and agile methodologies; cross-functional, functional and mixed teams; CA LISA; and using Wiremock and building on top of it. Having been in those four companies, I came up with a list of requirements for a tool that would satisfy my needs in all of those circumstances.

New tool requirements

What I wanted was a tool that would allow me to deploy anywhere, anytime, where the licensing wouldn't get in the way. I also wanted a lightweight tool that would consume tens or hundreds of megabytes of memory instead of gigabytes. With flexible licensing and a very lightweight tool, I could run it on existing hardware.

But that doesn't mean I couldn't run it in a centralized manner. I would like to be able to run it both locally, on a laptop, and centralized, on shared servers, and the tool shouldn't get in the way of either of those choices.

I would like the tool to be easy to integrate with CI tools like Jenkins and TeamCity. I would like to run it separately on build agents in an isolated environment to do CI properly. And I would like the tool to be designed with automation in mind, so easy to integrate with things like JBehave, Cucumber, JUnit, etc.

Also, I would like to keep in mind not only DEVs but QAs as well. Developers want CI, TDD, etc. What the less tech-savvy QAs want is a nice GUI on top of the same service virtualization tool the developers use. They also want to work with the same artifacts that the stub produces when used by developers.

I would like a tool that supports both functional and cross-functional teams, and one that ideally is very simple to use, provides a great user experience, and doesn't need any training.

Traffic Parrot

And this is how I came to develop Traffic Parrot.

If you look at the tools on the market, there's definitely the option of going open source. If I were to go open source, I'd probably go for Wiremock today. But if you want to look at other tools that are not necessarily Java-related, say you want to do some NodeJS or Ruby, I've compiled a list of 40+ tools.

There's also the option of in-house development. It depends on the environment you're in and how experienced your developers are; that would inform how you would do it. From what I have seen, when you've got mid-level to senior developers on your team, it's going to take around 5 to 15 days to develop a QA GUI on top of Wiremock. And if you want to start using other protocols like JMS or custom TCP, it is going to take much longer than that; there's a lot of additional effort required there.

Or you could go for something I've already mentioned: an enterprise-ready solution like Traffic Parrot.

What is Traffic Parrot? If you are looking for an over-the-wire stubbing or service virtualization tool, and you're doing continuous delivery and continuous integration, I would encourage you to go to TrafficParrot.com to learn more about the tool and all the details.

The end

That's my story with service virtualization going through four companies, having different experiences using different types of tools. I hope you enjoyed it. 


Thursday, 14 July 2016

3 do's and don'ts of service virtualization for developers and QAs

Here are the first three things that came to mind today that I have found myself doing, or avoiding, while implementing service virtualization in the past.

Three things I always do


Do: Focus on keeping it simple

I have worked in environments where systems have been designed for testability from day 1 and where there was no focus on that.

What I have found was that creating stateful virtual services or even simulators was sometimes a necessary short-term solution, especially when you are working in an environment where the System Under Test has not been designed for testability. I have also found that systems not designed for testability often use external APIs in a way that makes them hard to virtualize.

I have also been on projects where system testability was part of the design process from the beginning. In those environments, stubbing the external APIs was very simple. The virtual services are almost always stateless and never have any logic.

I have always focused on avoiding creating complicated virtual services. The fact that you can simulate a backend system in your virtual service does not mean you should.

Remember that service virtualization is only a part of the big picture of creating software to deliver business impact. What you want to do is help reduce the costs of software delivery as a whole, not only QA in isolation.

Create the simplest virtual services possible. If you need to create a stateful or simulator virtual service, communicate well with architects and developers to make sure they know how much effort goes into it. It might be a cheaper long term option to change/refactor the use of the APIs and the APIs themselves to make them more testable and eliminate the need for complex simulation.

Action for you: find an often-used virtual service that is complex and talk with developers and architects about it to understand whether it is a good tradeoff to keep maintaining it.

P.S. I have seen situations where a lot of effort went into building emulators in order to get build times down from minutes to seconds. If all your automated CI builds run in less than 20 minutes, you are not doing much manual exploratory testing, and you would still like to reduce the build times, you could explore this option.

Do: Remember what is the System Under Test (SUT) when virtualizing a service

Your system under test is the group of components or applications you are testing. It defines the set of functionalities you are testing.

Let us say we are testing a web application connecting to a backend system. The backend system exposes an API for users to log in. After three unsuccessful logins, the backend system will lock the account and start returning an error indicating that the account has been deactivated.

If I am testing the web application, I would like to see three messages rendered to the user: a successful login, an unsuccessful login, and an account locked message. That can be done with three stateless responses from a virtual service, as sketched below.

Then I would proceed to test the backend system API. I would test that after 3 login attempts I get an account locked message. After that, I would proceed to test both systems in an integration environment, but I would not repeat all the tests; I would only test that I can successfully log in. The number of tests in integration environments should be reduced to a minimum that gives enough confidence to proceed to production (more on the testing pyramid by Google).

So we had three stages of testing: in the first one, the SUT was the web application; in the second one, the SUT was the backend system; in the third one, the SUT was the web application together with the backend system. When the SUT is the web application, it does not make sense to create a stateful virtual service that will return an account locked message after three unsuccessful logins. It would create unnecessary complexity. You have to test that functionality in the backend system anyway, so you know it works. All you test in the web application is whether the messages get rendered correctly.
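A minimal sketch of those three stateless canned responses, assuming the backend login API is stubbed with WireMock; the endpoint, JSON fields and test users are made up for the example, and the static WireMock client is assumed to point at the stub instance.

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class LoginStubs {
    public static void primeLoginResponses() {
        // Successful login for one test user
        stubFor(post(urlEqualTo("/login"))
                .withRequestBody(containing("\"user\":\"happy.path\""))
                .willReturn(aResponse().withStatus(200)
                        .withBody("{\"status\":\"LOGGED_IN\"}")));

        // Invalid credentials for another test user
        stubFor(post(urlEqualTo("/login"))
                .withRequestBody(containing("\"user\":\"wrong.password\""))
                .willReturn(aResponse().withStatus(401)
                        .withBody("{\"status\":\"INVALID_CREDENTIALS\"}")));

        // Account locked for a third test user - no state machine needed,
        // because the web application only has to render the message
        stubFor(post(urlEqualTo("/login"))
                .withRequestBody(containing("\"user\":\"locked.out\""))
                .willReturn(aResponse().withStatus(423)
                        .withBody("{\"status\":\"ACCOUNT_LOCKED\"}")));
    }
}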

Action for you: what is the system under test you are testing today? What does your testing pyramid look like?

Do: Use productized solutions in most cases

You can write your own stubs, mocks, virtual services, simulators, and emulators. While this is an option to consider, you should take a pragmatic approach to this decision.

Ask yourself questions like:

  • What functionality do we need the tool to have?
  • How long will it take us to develop the new tool?
  • How much maintenance can we afford to do further down the line? 
Then you can compare the tools available on the market:
  • Are there any tools available on the market that would satisfy our needs?
  • How much do they cost?
  • Will it be cheaper or faster to implement the tools in-house and then maintain them long term in-house as well?

You might be surprised to find that there are plenty of open source and commercial tools on the market. They should satisfy the requirements of most teams. I have compiled a list of 40+ tools you can download.

If you want to know more, I have prepared a video introduction for developers.

Action for you: look at the service virtualization tools comparison.

Three things I avoid doing


Avoid: Virtualizing unstable APIs

If you virtualize an API, you freeze it in time. What happens if the original API is under heavy development and changes every week, significantly enough to cause breaking changes on your side? You need integration contract tests, consumer-driven contract tests or manual exploratory tests to make sure the original API has not drifted from the virtualized version. If you cannot afford the development of those automated tests, or do not have the resources to spend time testing it manually, you should consider either using the real version of the API or postponing consumption of the API. You could work on other parts of the system as a priority and come back to the unstable API once it is a bit more stable.
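As a rough illustration of such a check, an integration contract test could look something like the hypothetical sketch below, reusing the Apache HttpClient fluent API shown elsewhere on this blog; the URL and field names are made up, and in practice you would run this against the real dependency on a schedule.

import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.junit.Test;

import static junit.framework.TestCase.assertTrue;

public class QuoteApiContractTest {
    @Test
    public void realApiStillReturnsTheFieldsTheVirtualServiceAssumes() throws Exception {
        // Call the real dependency, not the virtual service
        Content content = Request.Get("https://api.example.com/quote?symbol=AAPL")
                .execute()
                .returnContent();

        // The virtual service's canned responses are built around these fields,
        // so fail fast if the real API has drifted
        assertTrue(content.toString().contains("\"LastPrice\""));
        assertTrue(content.toString().contains("\"Symbol\""));
    }
}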

Action for you: how often do the APIs you are developing against change? Once a week? Once a month? Once a year?

Avoid: Replacing one bottleneck with a new one

You will find shared virtual service environments in many companies, where many teams use the same server with multiple virtual services deployed. While this is an option that works, I have found enabling individual teams to have their own instances to be more efficient.

A virtual service or stub per developer or QA, deployable on-demand is an even more attractive option.

I have found that while a shared service virtualization environment is a good option for removing problems with test data setup, environment availability, etc., it creates a strong dependency between the teams using that instance. One team doing their testing impacts many other teams. You need to assign port ranges to teams, manage credentials, make sure the hardware is fast enough to support multiple teams, control the availability of the environment, etc. You suddenly need a new team to manage the virtual service environment.

So, instead of having one environment that all teams in the organisation use, I prefer every team, or every developer and tester, having their own environment. You let people use service virtualization however they want on their own hardware or laptops. It scales very well.

If you would like to do that, it might become very expensive depending on the tool you use, so you need to choose a tool that can run on existing hardware and whose licensing model allows for distributed deployment, like Traffic Parrot, or use a cloud offering like GetSandbox.

Avoid: Using wrong metrics

The ideal situation would be to measure, change one thing, and measure again. It is very hard, though.

What I have seen is teams measuring in different ways: estimating cost savings after the fact (very inaccurate and subjective), counting the number of services virtualized, or counting the number of transactions on the virtual service environment. Those things could be a measure of the complexity of the service virtualization implementation you have chosen, but they do not tell you much about the state of your software delivery lifecycle.

I don't measure anything with regard to service virtualization alone. I would look at the state of the software delivery process as a whole: for example, the number of bugs in late stages of development, release lead times, the number of teams blocked, etc. When you want to measure something in the service virtualization space, think about, for example:

  • Total amount of time spent waiting for APIs to be delivered (lower is better)
  • Total amount of time spent waiting for test data set up
  • Average time to find a defect with big business impact (lower is better)


One exception here is measuring third party transaction costs. It is easy to assess cost savings when you use service virtualization in this case.

Action for you: read about the use of metrics in SDLC.

The end
Thanks for reading. If you disagree, please comment! If you enjoyed this and wanna see more, please like!




Tuesday, 21 June 2016

Traffic Parrot Trademark

“Traffic Parrot” was registered on 3 June 2016 as a trademark of WB Software Consultancy Limited. Class 9: Service virtualization software; Test automation software; Software. Class 42: Software consulting services; Software design and development; Information technology consultancy.

Tuesday, 17 May 2016

Service Virtualization and Given/When/Then

I recently found that I had problems explaining how service virtualization or over-the-wire stubbing (whatever you want to call it) fits into the usual type of testing I do on a daily basis, and what I mean when I say “service virtualization”. Here is a brief explanation of how that could work for developers and QAs.

Given/When/Then testing

In almost all cases when I create software these days I will have some level of automated testing. How much coverage I have depends on the context, but I almost always have a minimal amount of automated tests.

When I write the automated tests I split them into 3 logical sections. Let us imagine I am writing a simple application that talks to Markit market data APIs to fetch stock quotes and display the last price to the user.

The “givens” section is where I set up the system and its dependencies. In this section I will make calls to over-the-wire stubs (or virtual services) to set them up for the test. For example, the test will call a Wiremock instance running in a separate process and set it up so that when the application asks for a stock quote, Wiremock will pretend to be the Markit market data API and return a canned response.

The “whens” section is where we exercise the system under test.

The “thens” section is where we make assertions on the state or interactions.

For example in Java with JUnit it would look like this:
import com.github.tomakehurst.wiremock.junit.WireMockRule;
import com.wbsoftwareconsutlancy.FinanceApplication;
import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static junit.framework.TestCase.assertEquals;

public class FinanceApplicationTest {
    @Rule
    public WireMockRule forecastIoService = new WireMockRule();

    private FinanceApplication financeApplication = new FinanceApplication();

    @Before
    public void setUp() throws Exception {
        financeApplication.start();
    }

    @After
    public void tearDown() throws Exception {
        financeApplication.stop();
    }

    @Test
    public void parsesLastPriceFromStockQuote() throws Exception {
        // Given: the stubbed quote API returns a canned response for AAPL
        forecastIoService.stubFor(get(urlEqualTo("/MODApis/Api/v2/Quote/json?symbol=AAPL"))
                .willReturn(aResponse().withBody("{\"Status\":\"SUCCESS\",\"Name\":\"Apple Inc\",\"Symbol\":\"AAPL\",\"LastPrice\":103.17}")));

        // When: the application is asked for the last price of the stock
        Content content = Request.Get("http://localhost:" + financeApplication.port + "/stock-quote-last-price")
                .execute()
                .returnContent();

        // Then: the last price is parsed out of the stock quote
        assertEquals("103.17", content.toString());
    }
}
 

Service Virtualization

The given/when/then approach has proven to work very well when I was working on a greenfield project or on a system that had been designed with testability in mind. It might be a challenge to follow this pattern if you are working with a legacy codebase that has not been designed with testability in mind. You might have to introduce a more complex, stateful over-the-wire stub (virtual service) or even implement a simulation.

When the system or module I am working on has not been designed with testability in mind, I find myself tackling this complexity by adding more complexity to the testing infrastructure. For example, in order to implement an automated suite of tests, I need to introduce a complex virtual service (stub). Introducing more complexity to tackle existing complexity is a slippery road. There are times when it is a good tradeoff. Unfortunately, in most cases where I have seen it done, it was a good short-term solution that turned into a high-maintenance long-term one.

So what do I do?
So, what are the actions for you today? Find a few complex virtual services you have created that require a lot of your time for maintenance. Understand how much time, effort and money goes into maintaining them. Speak to developers and architects and understand whether it is a good idea to keep on maintaining them, or whether it would be better to refactor selected system under test modules to make them more testable and reduce the overall maintenance costs by getting rid of the complex virtual services.


Tuesday, 1 March 2016

InfoQ publishes "Stubbing, Mocking and Service Virtualization Differences for Test and Development Teams"

Wojciech's article on "Stubbing, Mocking and Service Virtualization Differences for Test and Development Teams" has been recently published on InfoQ. You can find it here. InfoQ has over 800,000 subscribed members and over 1 million unique visitors a month.

Sunday, 24 May 2015

What is the difference between service virtualization, stubs and mocks from a manager's point of view?

This article will be published soon. Please subscribe below to be notified when it is ready.


Subscribe to blog posts via email

Saturday, 23 May 2015

What is the difference between a stub, mock and service virtualization and when do I use what?

Is this article for you?

If you have heard about mocking, stubbing and service virtualization before, but would like to deepen your knowledge, this article is for you. If you have not heard about service virtualization yet, have a look at the quick introduction to service virtualization.
If you are a manager, you might also be interested in what is the difference between service virtualization, stubs and mocks from a manager's point of view.
Developers and QAs please continue reading.

Taking a step back, what is a test double?

It all starts with Test Doubles. A test double allows you to decouple yourself from your dependencies when testing the system under test (SUT). You replace a dependency with an equivalent interface that allows you to exercise a given test case. The most common categories of test double used by developers are (examples in brackets):
  • dummy object (a string “John” or a constant integer 12345)
  • stub (a StubHttpResponse class that always returns the same response “OK”)
  • spy (a SpyHttpResponse class that records all invocations of the onGet method)
  • fake (a FakeDatabase class which persists to an in-memory H2 database instead of an expensive production-like instance of Oracle)
  • mock (a dynamic proxy implementation of UserListener interface, implemented by Mockito and used in a unit test)
The most common categories of a test double used by QAs are (examples in brackets):
  • stub (a servlet in a WAR file created using SoapUI and deployed to a remote Tomcat instance at http://testEnviroment1.mycompany.com:8080/getWeatherService)
  • virtual service (an artifact created with a service virtualization tool and deployed to a remote shared virtual service environment at http://vs-enviroment.mycompany.com:18034/getWeatherService)
It is often hard to say which of these categories a test double fits into; they should be treated as spectrums rather than strict definitions. For example, a given test double could be considered a stub and a spy at the same time.


Mock vs. Stub vs. Virtual Service

The most commonly discussed categories of test doubles are mocks, stubs and virtual services.

A stub is a minimal implementation of an interface, usually returning hardcoded data. The hardcoded data is tightly coupled to the test suite. The tests are going to depend on that data. It is most useful when the suite of tests is simple, and keeping the hardcoded data in the stub is not an issue. Some stubs are handwritten, some can be generated by tools for you. Some developers will say that a stub can also be primed, but you cannot verify an invocation on a stub.

A mock usually verifies outputs against expectations. Those expectations are set in the test. It is also most often created by using a third party library, for example, in Java, Mockito, JMock or WireMock. It is most useful when you have a large suite of tests and a stub is not enough, because each of the tests needs a different data set up. Maintaining a stub in that case could be costly, so you can use a mock instead. Also, the verification that the mock does is a useful tool from a developer's point of view when writing automated tests. On top of that, a mock focuses on interactions rather than state. Mocks are usually stateful; for example, you can verify how many times a given method was called.

That is the main difference between a mock and a stub, from a developer’s point of view.
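To make that difference concrete, here is a tiny, hypothetical Java example; the PaymentGateway interface is made up, the stub is hand-written, and the mock uses Mockito.

import static junit.framework.TestCase.assertTrue;
import static org.mockito.Mockito.*;

public class StubVsMockExample {

    interface PaymentGateway {
        boolean charge(int amountInPence);
    }

    // A hand-written stub: a minimal implementation returning hardcoded data,
    // with no way to verify how it was invoked
    static class AlwaysSuccessfulPaymentGateway implements PaymentGateway {
        public boolean charge(int amountInPence) {
            return true;
        }
    }

    public static void main(String[] args) {
        // Stub: the test only cares about the canned answer
        PaymentGateway stub = new AlwaysSuccessfulPaymentGateway();
        assertTrue(stub.charge(999));

        // Mock: primed by the test, and able to verify interactions afterwards
        PaymentGateway mock = mock(PaymentGateway.class);
        when(mock.charge(999)).thenReturn(true);
        assertTrue(mock.charge(999));
        verify(mock, times(1)).charge(999);
    }
}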

A virtual service is a test double usually provided as SaaS, often created by recording traffic rather than building from scratch based on documentation. It is created using one of the service virtualization platforms. Those tools establish a common ground for teams to communicate and facilitate artifact sharing. Stubs and mocks on the other hand are usually written by developers, occasionally shared with QAs and very rarely shared with other development teams due to the interoperability problems (different software platforms, deployment infrastructures, etc.). Often the service virtualization tools support many protocols (HTTP, MQ, TCP, etc.), whereas stubs and mocks frequently support only one. Most of the service virtualization tools have GUIs. It is another approach to stubbing on a larger scale, bringing QAs to the table as well. Virtual services are always called remotely (over HTTP, TCP, etc.) whereas stubs and mocks most often work in-process directly with classes, methods and functions. This means virtual services are typically used for system testing but stubs and mocks for unit/module/acceptance testing. A virtual service can be considered a stub on steroids. Often, you will see an instance of a virtual service deployed in an environment used by many teams simultaneously, whereas stubs would be individual instances per team. Virtual services often simulate non-functional aspects of the dependencies such as response times.

Download report comparing 40+ service virtualization tools.

All of the approaches mentioned above come with pros and cons. We will have a look at those later.


Stub
  • Data source: Hardcoded data, or data set up by the test.
  • Data coupling: Tightly coupled to the test suite data.
  • Invocation verification: Not used.
  • Invocation protocol: Usually in the same process (JVM, .NET, YARV, etc.). Sometimes over IP, such as HTTP or raw TCP protocols.
  • Created by: DEVs and sometimes QAs.
  • Used by: DEVs and sometimes QAs.
  • Stateful: No.
  • Has a GUI: No.
  • Test phase: Usually unit, integration, system and acceptance tests.

Mock
  • Data source: Data set up by the test.
  • Data coupling: Can be flexible; both tightly and loosely coupled to the test suite data.
  • Invocation verification: Used often.
  • Invocation protocol: Usually in the same process (JVM, .NET, YARV, etc.). Sometimes over IP, such as HTTP or raw TCP protocols.
  • Created by: Mostly DEVs.
  • Used by: DEVs and sometimes QAs.
  • Stateful: Yes.
  • Has a GUI: Sometimes a command line interface.
  • Test phase: Usually unit, integration, system and acceptance tests.

Virtual service
  • Data source: Recorded data (possibly manually modified after the recording), or hardcoded data.
  • Data coupling: Tightly coupled to the test suite data.
  • Invocation verification: Sometimes testers will look at the virtual service logs while doing testing.
  • Invocation protocol: Always over a network layer. Often supports many protocols such as HTTP, MQ, FIX, etc.
  • Created by: Mostly QAs.
  • Used by: Mostly QAs.
  • Stateful: Yes.
  • Has a GUI: Yes.
  • Test phase: Usually system tests.



What is Service Virtualization?

It is the practice of creating virtual services and sharing them across teams. Developers and testers working on the same product can use the same virtual service artifacts, or even the same virtual services. Another example is QA teams across a large enterprise using the same virtual service artifacts. It promotes communication between DEV and QA teams across many departments. It also attempts to address the problem of duplicated effort, where many teams within a large organisation create stubs for the same APIs simultaneously. It is a glorified stub that can be stateful. As usual, all of that comes with a cost; we will look at that later.

Is service virtualization better than stubbing and mocking?

Is using an electric screwdriver better than using a small manual one? It depends on the job you need to do. The small manual screwdriver works very well when you need to disassemble your laptop, which is very fragile and requires careful handling. The electric screwdriver works very well if you have bought a wooden furniture set and need to assemble it yourself. It is a bit more expensive than the manual one as well. That means it is best to have both of those tools in your toolbox.
It is similar with service virtualization, stubs and mocks. They solve different problems. Some problems can be addressed by stubs, mocks and virtual services alike. Some problems should be addressed only with mocks and stubs. To highlight a few of the most common concerns, please have a look at the comparison below.


Stubbing
  • Main pros: A lot of open source software available. A lot of information available on techniques online.
  • Main cons: Tests are tightly coupled to the stub because of the hardcoded data.
  • When to use it: If you are willing to learn how to use stubs, it is a good idea; a moderate level of technical background is often required.
  • When NOT to use it: When the test suite requires a lot of stubs; then the data should live with the tests, so use mocks instead. Avoid using stubs with hardcoded data in acceptance tests.

Mocking
  • Main pros: A lot of open source software available. A lot of information available on techniques online.
  • Main cons: A tool for developers mainly; QAs do not use mocks often. A substantial level of technical background is often required.
  • When to use it: All levels of testing, whilst remembering about the test boundaries and the SUT. If you are willing to learn how to use mocking, it is always a good idea.

Service Virtualization
  • Main pros: Easy to pick up (gentle learning curve). An all-in-one solution. Many protocols supported by most tools. Well tested tools. Can record traffic. Easy to share across teams once the tools are established within the company.
  • Main cons: The tools are expensive. ThoughtWorks observed in July 2014 that “Big enterprise solutions often obstruct effective delivery due to their accumulated bloat, cumbersome licensing restrictions, and feature sets that are driven by check-lists and imaginary requirements far removed from the realities of most development teams”. Some of the tools are sold to CxO-level managers without proper consultation with specialists before the deal, which results in poor user experience and fit. You couple the test to the data in the virtual service, the same as with a stub. The market-leading tools work in a shared virtual service environment model, which creates dependencies between individuals and teams.
  • When to use it: Large-scale problems: a lot of APIs to stub out, and a lot of people who want to use it starting today.
  • When NOT to use it: Small agile teams in small to medium size companies, where developers work in an agile manner and know how to create stubs and mocks themselves using the open source tools. Avoid using it in acceptance tests.

Which problems can be addressed by service virtualization but not by stubbing nor mocking?

If you are in a team or organisation where nobody has ever done much stubbing or mocking and you would like to see a return on investment quickly, writing stubs yourself (especially if you are a QA) can be a daunting task. That is why there are tools with GUIs, so that you can get up to speed quickly. Also, they are extensible, which provides flexibility once you get comfortable with the subject. Some of the service virtualization tools can help you achieve that. So, if you need something to get up to speed quickly, but also need flexibility once you are there, the service virtualization tools might be helpful.
Also, if you are running an organisation with a top-down waterfall approach to software development, with the IT department treated as a cost centre rather than the core of the business (which is inadvisable, because it has proved many times to be a failed software development approach, and even enterprise-scale organisations are moving away from it), you can use the service virtualization tools as another governed and recommended tool for your organisation. You can use some of those tools as a tactical quick win whilst you get your organisation to a place where it is mature enough to consider other approaches.
You need to be careful, though. According to the ThoughtWorks Technology Radar, big enterprise solutions are getting worse at supplying what users need. Often, it is best to go with targeted, simple solutions rather than bloated enterprise platforms: one like Traffic Parrot if you need to deploy it on your own infrastructure, or for example Sandbox if you are looking for a SaaS solution.

A few random examples

Here are a few examples to give you a better idea of when we could use a stub, mock or a virtual service. The list is just to give you an idea of the possible scenarios. It is not exhaustive in any way.
I would use a stub when for example:
  • I am a backend developer working on a small new application that uses a third party library to communicate with an external API. I would then create an interface in my code that will decouple me from that third party library. One implementation of that interface would use the third party library classes and would be used in an end to end test. The other implementation would be a stub that would return hardcoded values, and would be used in my unit tests.
  • I am a tester and I need to test the application my team is working on in isolation. I would create a stub for the HTTP RESTful APIs the application depends on, using in-house built tools provided to me by our team’s developers.
I would use a mock when for example:
  • I am a backend developer working on a fairly large application and I need to decouple myself from my HTTP API dependencies. I would use a remote mocking framework such as WireMock or Mountebank and prime the mocks in the acceptance tests.
  • I am a backend developer and I am working with a codebase with thousands of class unit tests. I would use Mockito to mock the dependency classes in my tests.
I would use a virtual service when for example:
  • I am a developer working on a very large, complex legacy application that has many dependencies, with test coverage of less than 3%. Also, the dependencies of the system are unavailable 60% of the time. I would use a service virtualization tool to create virtual services by recording the traffic. That would help me decouple myself from the downtime of the dependencies. I would also let my testers use those virtual services. This would be a short-term tactical solution before I spend the time on creating an automated suite of tests for my application that would use primed rather than recorded data.
  • I am a frontend developer working with a public SOAP Weather API. I would use Sandbox to generate the virtual service for me using a WSDL file. I would set up the test data in the virtual service so that all my test cases are being represented, like for example temperatures below -60°C which are never being returned from the real Weather API.
  • I am an NFT tester responsible for the performance of a large banking application. I would use tcpdump to record incoming and outgoing traffic in production for a day. Then I would import the outgoing traffic using a service virtualization tool and create a virtual service. I would import the incoming traffic to JMeter using tcpreplay and the JMeter Proxy. I would run a test suite in my performance environment using JMeter, and use the virtual services to decouple myself from dependencies. I would repeat the recording process every week.

How do I start doing service virtualization, stubbing or mocking today?

A good overview of stubs and mocks can be found in the GooS book. If you are a software developer, try using Mockito to learn how to do mocking. If you are a software tester, try doing stubbing using TrafficParrot or SoapUI MockService. It's also worth reviewing any of the enterprise vendors like CA, IBM or Parasoft, but as ThoughtWorks observed in Jan 2014, most of the innovation comes from practitioners contributing to open source.

Need more help?

Contact us:

Subscribe to blog posts via email

Friday, 2 May 2014

Do you have problems with your test environments?

Is one of the systems you depend on available only 2 days a week and you need it 5 days a week for testing?
Is setting up test data in external systems expensive or time consuming?
Are you paying unnecessary third party system transaction fees every time you run the same suite of tests?
Are you waiting for an external team to deliver an API you need for functional testing of your system?
Are you unable to performance test your application because a third party system can’t handle the load?

If you answered yes to any of the above questions you might find interest in learning about service virtualization.

For one of our recent clients, we increased the test infrastructure availability from 2 days a week to 7 days a week. A different client was able to test their software before an external team delivered an API they depended on. We achieved that by replacing parts of the testing environment infrastructure with virtual services. We created them with industry-leading service virtualization software.

Here are several videos by the industry leaders that will introduce you to the subject.

Computer Associates (CA)


Hewlett-Packard (HP)


IBM


Parasoft