Saturday 15 October 2016

What is Traffic Parrot?

Traffic Parrot helps IT teams deliver faster and with more predictability. It also helps reduce software testing costs. We do that by implementing a practice growing in popularity called service virtualization.

Service virtualization is like using a wind tunnel to test your software. A new aircraft model is tested in a wind tunnel under different conditions before it goes into mass production. You can test your software in a wind tunnel as well, by using Traffic Parrot.

Are your testers and developers often blocked by other teams? Do you find at least 5 bugs in production per release, and would you like to find them in earlier stages of the SDLC instead, to reduce costs? Is one of the systems you depend on available only 2 days a week when you need it 5 days a week for testing your application? Is setting up test data in external systems expensive or time-consuming? Are you paying high third party system transaction fees every time you run the same suite of tests? Are you waiting for an external team to deliver an API you need for functional testing of your system?

Traffic Parrot helps with these problems by simulating backend and third party dependencies. For example, a tester will use Traffic Parrot to create virtual services that simulate a backend server, which allows her to continue testing even when the backend server is unavailable. It also helps with setting up test data; it is much easier to do that in a virtual service she has control over. You can craft virtual services manually from request/response pairs, or create them by recording the traffic between your application and the backend or third party system.
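To give a concrete feel for what a hand-crafted request/response pair looks like, here is a minimal sketch using the open-source WireMock Java DSL (a tool discussed elsewhere on this blog) rather than Traffic Parrot's own GUI; the endpoint and payload are invented for illustration only.

import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class BackendStubSketch {
    public static void main(String[] args) {
        // Start a stub server on a local port instead of relying on the real backend
        WireMockServer backendStub = new WireMockServer(8089);
        backendStub.start();

        // Hand-crafted request/response pair: when the application asks for a customer,
        // the stub pretends to be the backend and returns a canned JSON response
        backendStub.stubFor(get(urlEqualTo("/customers/123"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\":123,\"name\":\"Jane Doe\"}")));

        // Point the application under test at http://localhost:8089 instead of the real backend
    }
}

The same request/response pair could equally be captured by recording real traffic and replaying it later.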

Saturday 8 October 2016

Traffic Parrot for Agile testing and development teams

Why is it a good idea to buy Traffic Parrot if you are a manager of an Agile development team?

I have seen it take 3-10 days of development to create an over-the-wire stub with a GUI. Average Java developer pay in the UK is £450pd for contractors or £50k a year for permanent staff; £50k divided by 252 working days in a year is roughly £200pd for a permanent developer.

An estimated cost of in-house development could range from a minimum of 3 × £200 = £600 to a maximum of 10 × £450 = £4,500. So you could be spending between £600 and £4,500 a year on developing a tool like this, even if only one team needs it.

So a reasonable guess is that you will spend somewhere between £600 and £4,500 on a stub with a GUI if you build it in-house, say £2,500. Why not buy Traffic Parrot for Agile teams for only $49 a month, which is around £500 a year?

If there is more than one team developing a tool like this, which I have seen happen in one organization, you will spend a multiple of that number.

If you are doing pair programming, the estimate could be anywhere from half to double, depending on the pair.

Please contact us to walk through your situation and run these calculations to see if Traffic Parrot makes sense for your team.

Sunday 4 September 2016

Five steps on how to implement a service virtualization tool proof of concept project in less than a week

Are you a manager tasked with implementing a service virtualization project? Here is an idea on how to implement a service virtualization proof of concept pilot in less than a week:
  1. Find a small to medium-sized system or application managed by 2-12 people. It will be easier to work with a smaller team and on a less complex system.
  2. Have developers working closely with testers to find an API to virtualize
    1. It should be stateless; transactional APIs are harder to virtualize
    2. It should be simple, for example, an HTTP SOAP request with no more than 20 tags to a third party, rather than an encoded custom TCP protocol call to a mainframe system (see the sketch after this list)
    3. It should cause significant problems that can be measured and showcased, for example:
      • The backend system exposing the API is often unavailable, preventing testing
      • It is hard to set up test data in the backend system; the wait time is more than 4 weeks
      • The API has not yet been released to the test environments and the wait time is more than 8 weeks
      • Third party service test transactions are expensive, limiting the amount of testing done
  3. Choose a service virtualization tool that allows for a fast ROI 
    • A tool that can run on existing hardware, to save time requesting new hardware (for example TrafficParrot, Wiremock or Mountebank)
    • A tool that can be used by both developers in an automated way for long-term maintainability and testers for exploratory testing (for example TrafficParrot)
  4. Create a squad consisting of at least one person each from development, testing, and environment management, and trust them to do the job properly. Ideally, they should sit next to each other to improve the communication channels.
  5. Relax and trust the team will deliver. Wait for a week.
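As a rough illustration of step 2, here is what virtualizing a simple HTTP SOAP API can look like with an open-source tool such as WireMock; the endpoint path, request tag, and response envelope are hypothetical and only show the shape of such a virtual service.

import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class SoapVirtualServiceSketch {
    public static void main(String[] args) {
        WireMockServer virtualService = new WireMockServer(9090);
        virtualService.start();

        // Match a simple SOAP request to a hypothetical third party quote service
        // and return a canned SOAP envelope instead of calling the real system
        virtualService.stubFor(post(urlEqualTo("/thirdparty/QuoteService"))
                .withHeader("Content-Type", containing("text/xml"))
                .withRequestBody(containing("<GetQuoteRequest>"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "text/xml")
                        .withBody("<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                                + "<soapenv:Body><GetQuoteResponse><price>103.17</price></GetQuoteResponse></soapenv:Body>"
                                + "</soapenv:Envelope>")));
    }
}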
Keep in mind that this is only one of many examples of how you could implement the pilot project. The devil is in the details. You can request a free, no-obligations, 20-minute over-the-phone service virtualization consultation where we will go through your most pressing problems.


Do you disagree? Leave a comment!


Thursday 18 August 2016

Service virtualization and stubbing tools list comparison (revisited 2016)





Wednesday 3 August 2016

My story with service virtualization, CA Lisa and Wiremock





My story with service virtualization

This is my story with service virtualization. If you have ever heard about service virtualization and you would be interested in somebody else's story over a period of five years, please continue reading.

I've been doing software development and testing for more than ten years now, six of which have involved continuous delivery, continuous integration, TDD, BDD, pair programming, and similar agile practices.

Four companies

This is going to be a story about four companies. I would like you to remember that I'm going to be presenting this story from the continuous delivery point of view. So whatever we talk about here is seen through the continuous delivery mindset.

Agile 2011-2013

The story begins in 2011-2013 when I was working for organizations that were following the agile principles to a certain extent.

Media company

The first example is a media company, which had 100+ different applications (different code bases) running on more than 200 JVMs. There were 50 people working on those: developers, QAs, database guys, ops guys, support. They were separate functional teams, but they were working very closely together.

When it comes to service virtualization, back then we used to call it stubbing. We built stubs in-house, because there were no lightweight products available at the time that would fit the continuous integration model, and no open source alternatives or anything like that. Those tools were used by developers and QAs for both functional and performance testing. What I noticed was that it took two to ten days of development to build such a tool.

Investment bank

Then the story continues and I go to an investment bank developing a risk system. This was a fairly large system running on 1,000 servers in production managed by around 30 people, very cross-functional teams mixed with functional teams. Developers would be doing quality assurance and production support.

In that organization, we also built stubs in-house for developers and it took only two days of development, on average, because they were very specialized. There wasn't much to be built. And, at that time, there still were no lightweight products available that would suit our needs, so we had to build something in-house. We didn't have to cater for QA requirements just because there were no QAs. Developers did everything.

Retail bank

The story continues to 2013-2015, when I was introducing agile to a retail bank. The bank had some issues around delivery: the releases were taking too long, 6 to 12 months, there were a lot of bugs in late stages like UAT and production, and it was quite expensive. The complexity of the system was more or less the same as the risk application I had worked on in the investment bank mentioned above, but the retail bank application was 20 times more expensive than that system (sic!).

There were also some architectural challenges. The webapp was a monolith code base with a lot of low-quality Java code. It was connecting to middleware gateways and C++ middleware, and then to 20+ backend systems via different protocols: HTTP, JMS, and custom TCP protocols. There was very little test automation, very long-running builds, and the reverse testing pyramid in practice: almost no unit tests, some automated functional testing, some acceptance tests, and then a lot of manual testing on top.

They had strictly functional teams that did not work closely together, so developers and QAs did not get along well.

Service virtualization

That was the first time I had heard the term service virtualization. In the past, I had called this over-the-wire stubbing or over-the-wire mocking.

The bank wanted to increase the release schedule predictability and reduce the number of production defects. They hired 15 very experienced and very high profile tech leads to make that change. Those tech leads worked on a few large, strategic projects, like modularizing the codebase and moving to a modern source control system. But they were also working on tactical solutions to pick the low-hanging fruit and do something, because the strategic solutions would take a long time to implement.

One of the things they came up with was to record the 20+ backend systems, automate testing that way and see what happens. That's how I got hired.

I got hired

I was introduced to CA LISA. CA Service Virtualization is the current name for the system.

My first impression was that it was the first time I had seen this record and replay functionality, which was pretty cool. It supported templating, so-called magic strings. That was cool as well. And it supported many protocols; for us, the big win was the custom TCP protocol virtualization that the bank needed.

We proceeded to implementation. After six months of trying to do continuous delivery, continuous integration, TDD, and those kinds of things with CA LISA, we found that CA LISA works. We've had several successes with it. We made several teams happier than they were before. They could use a tool to do stubbing or service virtualization. And we had around £300,000 of estimated cost savings.

But we had some learnings along the way. What we found was that CA LISA didn't play very well with source control systems like Git. It produces XML files; you can keep them in source control, for example in Git, but they are not easy to merge or to compare across versions. Keeping in mind my background and what I had seen in the past, I had seen this done in much nicer and easier ways.

The centralized deployment infrastructure also didn't play very well with our Jenkins build. Ideally, you want everything contained within the build agent, which should do whatever it needs to do to build and test the application, but that clashed with the centralized deployment infrastructure CA LISA provides. Also, because of the licensing model and how heavy the infrastructure was, we couldn't deploy it on every build agent.

Another thing we noticed is that it was very hard to set up the stubs or virtual services from the test. Tests are usually split into three logical sections: the "givens" section, the "whens" section, and the "assertions" section. In the givens section you set up the prerequisites for the test, including the stubs the test needs to pass; then you execute the test; then, in the last section, you make assertions. Unfortunately, CA LISA wasn't helping us with that. Setting up virtual services from the givens section was possible, but it didn't look as simple as what we had seen in the past.

The BeanShell extensions were causing us problems as well when scaling to other teams. You couldn't use tools like IntelliJ or Eclipse to write those extensions, which was how we had done it in the past. What I had done in the past was write an extension to a stub by simply implementing a Java interface and using the code completion provided by the IDE, IntelliJ or Eclipse. But that wasn't possible here. You had to do it in the LISA workstation, and it was quite counterintuitive for us. You had to learn a lot of new things before you could do anything meaningful.
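To show what I mean by extending a stub by implementing a plain Java interface, here is a sketch; the ResponseModifier interface and its implementation are hypothetical, not an API of CA LISA, WireMock or Traffic Parrot, and only illustrate the kind of extension point that IntelliJ or Eclipse can autocomplete against.

// Hypothetical extension point a stubbing tool could expose
public interface ResponseModifier {
    String name();
    String modify(String cannedResponseBody);
}

// In a separate file: implementing the interface is all that is needed,
// and the IDE provides code completion and refactoring support
public class CurrentTimestampModifier implements ResponseModifier {
    @Override
    public String name() {
        return "current-timestamp";
    }

    @Override
    public String modify(String cannedResponseBody) {
        // Replace a placeholder in the canned response with a fresh timestamp on every call
        return cannedResponseBody.replace("${timestamp}", java.time.Instant.now().toString());
    }
}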

We had some learnings about the infrastructure as well. The offshore VMs provided to our partners had very few resources, and the LISA workstation was taking a gigabyte of RAM and a lot of disk space. Those VMs did not have enough RAM to run the LISA workstation in the configuration our virtual services required. In the past, for the in-house-built stubs, we used simple web applications and you could just use a web browser to connect to the stub. So we felt that approach might have been a better fit for us at the time.

The infrastructure (separate VSE, separate registry, and separate workstation) caused us a few problems as well. For example, the recommended way was to record only in the workstation, while what we often wanted to do was record directly on the VSEs. Managing the workstations across many VMs and many laptops was also a problem because of versioning.

We had some learnings around licensing as well. We wanted to scale to more than 70 teams, ideally keeping continuous delivery, continuous integration and TDD in mind. We wanted to have an instance of LISA per developer or QA, but that did not look feasible because of the resources it consumed on a VM or laptop. So we thought, okay, let's have at least one per team. But that was still a problem because of the licensing model, so we had to abandon the idea.

We also had a few general user experience learnings. We got the feeling there were too many screens, too many controls, and too many components that we just didn't need. It was a very powerful tool, but we didn't need most of the functionality.

Our main takeaway was that LISA works. It supports custom TCP protocol virtualization, JMS, and HTTP(S). But when we tried to apply it in a continuous delivery environment, it didn't work as well as we had hoped, keeping in mind my personal experiences with stubbing in agile environments.

It's worth mentioning here that other departments in the bank were using CA LISA as well. Those departments were usually in maintenance mode, so there was no focus on faster release times and no focus on things like continuous delivery. As far as I know, CA LISA has been very successful in those environments.

It's also worth mentioning that, in this retail bank, there were two in-house-built tools that had been developed to do service virtualization and stubbing (or whatever you want to call it). They were built to facilitate stubbing of specific components. Unfortunately, neither of those tools was able to record the 20-plus backend systems. But what we found was that they fit very well into the continuous integration mindset, and it took around two to ten days, on average, to develop such a tool.

2015-2016

The story continues to 2015-2016, when I went back to an organization that does a lot of agile. The company had 130+ different applications (different codebases) running on a lot of JVMs in production. There were 50 people managing that, working in functional teams very closely together.

Wiremock

By this time the Wiremock platform, an open source stubbing and service virtualization solution, was mature enough that we could start looking at using it.

We built a service virtualization and over-the-wire stubbing solution on top of Wiremock. It was designed to be used by both DEVs and QAs: DEVs would use the Wiremock remote-setup programmatic APIs, and QAs would use a web GUI built on top of Wiremock for their exploratory and performance testing. The QAs on this team were not very tech-savvy and needed a simple GUI to navigate the tool. What I found was that it took around 15 days of programming over 9 months to build it.
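For context, this is roughly what the developer-facing side looked like: the WireMock client API can register stubs on a remotely running WireMock server from an automated test. The host, port, and endpoint below are illustrative only, not the actual setup we had.

import com.github.tomakehurst.wiremock.client.WireMock;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class RemoteStubSetupSketch {
    public static void main(String[] args) {
        // Point the static WireMock client at a remotely running WireMock server
        WireMock.configureFor("stubs.internal.example.com", 8080);

        // Register a stub on that remote instance via its admin API
        stubFor(get(urlEqualTo("/accounts/42"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"accountId\":42,\"status\":\"ACTIVE\"}")));
    }
}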

What we found was that Wiremock fits very well into the continuous integration mindset and it's a great tool for developers. It has some bugs, it's still a new tool, but I would use it again if I got the chance. Unfortunately, we had to do some internal development on top of it just because it wasn't designed with non-tech savvy QAs in mind.

Next logical step?

What would be the logical next step if you were in my position? Well, for me it was combining all of those experiences: the traditional and agile methodologies, the cross-functional, functional and mixed teams, CA LISA, and using Wiremock and building on top of it. Having been at those four companies, I came up with a list of requirements for a tool that would satisfy my needs in all of those circumstances.

New tool requirements

What I wanted was a tool I could deploy anywhere, anytime, where the licensing wouldn't get in the way. I wanted a lightweight tool that would consume tens or hundreds of megabytes of memory instead of gigabytes. With flexible licensing and a very lightweight tool, I could run it on existing hardware.

That doesn't mean I couldn't run it in a centralized manner. I would like to be able to run it both locally and centralized, without the tool getting in the way of either. It shouldn't matter whether you want to run it centralized on some servers or locally on a laptop; the tool should support both.

I would like the tool to be easy to integrate with CI tools like Jenkins and TeamCity. I would like to run it separately on build agents in an isolated environment to do CI properly. And I would like the tool to be designed with automation in mind, so it is easy to integrate with things like JBehave, Cucumber, JUnit, etc.

Also, I would like to keep in mind not only Devs but QAs as well. Developers want CI, TDD, etc. What the less tech-savvy QAs want is a nice GUI on top of the same service virtualization tool the developers use, working with the same artifacts the developers' stubs produce.

I would like a tool that supports both functional and cross-functional teams, and ideally one that is very simple to use and doesn't need any training, providing a great user experience.

Traffic Parrot

And this is how I came to develop Traffic Parrot.

If you look at the tools on the market, there's definitely the option of going open source. If I were to go open source, I'd probably go for Wiremock today. But if you want to look at other tools and things that are not necessarily Java-related, say NodeJS or Ruby or something like that, I've compiled a list of 40+ tools.

There's also the option of in-house development. It depends on the environment you're in and how experienced your developers are; that will inform how you do it. What I have seen is that when you've got mid-level to senior developers on your team, it's going to take around 5 to 15 days to develop a QA GUI on top of Wiremock. And if you want to start using other protocols like JMS or custom TCP, it's going to take much longer than that; there's a lot of additional effort required.

Or you could go for something like I've already mentioned: an enterprise-ready solution like Traffic Parrot.

What is Traffic Parrot? If you are looking for an over-the-wire stubbing or service virtualization tool and you're doing continuous delivery and continuous integration, I would encourage you to go to TrafficParrot.com to learn more about the tool and all the details.

The end

That's my story with service virtualization going through four companies, having different experiences using different types of tools. I hope you enjoyed it. 

Thursday 14 July 2016

3 do's and don'ts of service virtualization for developers and QAs

Here are the first three things that came to my mind today that I have found myself doing or avoiding in the past while implementing service virtualization.

Three things I always do


Do: Focus on keeping it simple

I have worked in environments where systems had been designed for testability from day one, and in environments where there was no focus on that.

What I have found was that creating stateful virtual services or even simulators was sometimes a necessary short-term solution, especially when you are working in an environment where the System Under Test has not been designed for testability. What I have found was that systems not designed for testability often use external APIs in a way that makes them hard to virtualize.

I have also been on projects where system testability was part of the design process from the beginning. In those environments, stubbing the external APIs was very simple. The virtual services are almost always stateless and never have any logic.

I have always focused on avoiding creating complicated virtual services. The fact that you can simulate a backend system in your virtual service does not mean you should.

Remember that service virtualization is only a part of the big picture of creating software to deliver business impact. What you want to do is help reduce the costs of software delivery as a whole, not only QA in isolation.

Create the simplest virtual services possible. If you need to create a stateful or simulator virtual service, communicate well with architects and developers to make sure they know how much effort goes into it. It might be a cheaper long term option to change/refactor the use of the APIs and the APIs themselves to make them more testable and eliminate the need for complex simulation.

Action for you: find an often-used virtual service that is complex and talk with developers and architects about it to understand whether it is a good tradeoff to keep on maintaining it.

P.S. I have seen situations where a lot of effort went into building emulators to get build times down from minutes to seconds. If all your automated CI builds run in less than 20 minutes, you are not doing much manual exploratory testing, and you would still like to reduce the build times, you could explore this option.

Do: Remember what the System Under Test (SUT) is when virtualizing a service

Your system under test is the group of components or applications you are testing. It defines the set of functionalities you are testing.

Let us say we are testing a web application connecting to a backend system. The backend system exposes an API for users to log in. After three unsuccessful logins, the backend system will lock the account and start returning an error indicating that the account has been deactivated.

If I am testing the web application, I would like to see three messages rendered to the user: a successful login, an unsuccessful login, and an account locked message. That can be done with three stateless responses from a virtual service. Then I would proceed to test the backend system API: I would test that after three unsuccessful login attempts I get an account locked message. After that, I would proceed to test both systems in an integration environment, but I would not repeat all the tests; I would only test that I can successfully log in. The number of tests in integration environments should be reduced to the minimum that gives enough confidence to proceed to production (more on the testing pyramid by Google).

So we had three stages of testing: in the first one, the SUT was the web application; in the second one, the SUT was the backend system; in the third one, the SUT was the web application together with the backend system.

When the SUT is the web application, it does not make sense to create a stateful virtual service that will return an account locked message after three unsuccessful logins. It would create unnecessary complexity. You have to test that functionality in the backend system anyway, so you know it works. All you test in the web application is whether the messages get rendered correctly.
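To make the three stateless responses concrete, here is a sketch of how the web application tests could stub the login API with WireMock; the /login endpoint, the user names, and the response payloads are hypothetical.

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class LoginStubsSketch {
    // Assumes a WireMock server running on the default localhost:8080
    public static void stubLoginResponses() {
        // Successful login for one test user
        stubFor(post(urlEqualTo("/login"))
                .withRequestBody(containing("\"username\":\"happy.user\""))
                .willReturn(aResponse().withStatus(200)
                        .withBody("{\"result\":\"LOGIN_OK\"}")));

        // Unsuccessful login for another test user
        stubFor(post(urlEqualTo("/login"))
                .withRequestBody(containing("\"username\":\"wrong.password\""))
                .willReturn(aResponse().withStatus(401)
                        .withBody("{\"result\":\"INVALID_CREDENTIALS\"}")));

        // Account locked for a third test user; no state machine is needed in the virtual service,
        // because counting failed attempts is tested against the backend system itself
        stubFor(post(urlEqualTo("/login"))
                .withRequestBody(containing("\"username\":\"locked.user\""))
                .willReturn(aResponse().withStatus(423)
                        .withBody("{\"result\":\"ACCOUNT_LOCKED\"}")));
    }
}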

Action for you: what is the system under test you are testing today? What does your testing pyramid look like?

Do: Use productized solutions in most cases

You can write your own stubs, mocks, virtual services, simulators, and emulators. While this is an option to consider, you should take a pragmatic approach to this decision.

Ask yourself questions like:

  • What functionality do we need to be able to use the tool?
  • How long will it take us to develop the new tool?
  • How much maintenance can we afford to do further down the line? 
Then you can compare the tools available on the market:
  • Are there any tools available on the market that would satisfy our needs?
  • How much do they cost?
  • Will it be cheaper or faster to implement the tools in-house and then maintain them long term in-house as well?

You might be surprised to find that there are plenty of open source and commercial tools on the market; they should satisfy the requirements of most teams. I have compiled a list of 40+ tools you can download.

If you want to know more, I have prepared a video introduction for developers.

Action for you: look at the service virtualization tools comparison.

Three things I avoid doing


Avoid: Virtualizing unstable APIs

If you virtualize an API, you freeze it in time. What happens if the original API is under heavy development and changes every week, significantly enough to cause breaking changes on your side? You need integration contract tests, consumer-driven contract tests or manual exploratory tests to make sure the original API has not drifted from the virtualized version. If you cannot afford to develop those automated tests, or do not have the resources to test it manually, you should consider either using the real version of the API or postponing consumption of the API. You could work on other parts of the system as a priority and come back to the unstable API once it is a bit more stable.
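As a sketch of what a minimal integration contract test might look like, here is a JUnit test that calls the real API (or a test instance of it) and checks only the fields the virtualized version relies on; the URL and field name are hypothetical.

import org.apache.http.client.fluent.Request;
import org.junit.Test;

import static org.junit.Assert.assertTrue;

public class QuoteApiContractTest {
    @Test
    public void realApiStillReturnsTheFieldsOurStubAssumes() throws Exception {
        // Call the real API, not the virtual service
        String body = Request.Get("https://api.thirdparty.example.com/quotes/AAPL")
                .execute()
                .returnContent()
                .asString();

        // The consumer and the recorded stub both rely on this field being present;
        // if the provider drifts, this test fails before the stubbed tests mislead us
        assertTrue(body.contains("\"LastPrice\""));
    }
}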

Action for you: how often do the APIs you are developing against change? Once a week? Once a month? Once a year?

Avoid: Replacing one bottleneck with a new one

You will find shared virtual service environments in many companies: many teams using the same server with multiple virtual services deployed. While this is an option that works, I have found enabling individual teams to have their own instances to be more efficient.

A virtual service or stub per developer or QA, deployable on-demand is an even more attractive option.

I have found that while a shared service virtualization environment is a good option for removing test data setup problems, environment availability issues, etc., it creates a strong dependency between the teams using that instance. One team doing their testing impacts many other teams. You need to assign port ranges to teams, manage credentials, make sure the hardware is fast enough to support multiple teams, control the availability of the environment, etc. You suddenly need a new team to manage the virtual service environment.

So, instead of having one environment that all teams in the organisation use, I prefer every team, or every developer and tester, having their own environment. You let people use service virtualization however they want on their own hardware or laptops. It scales very well.

If you would like to do that, it might become very expensive depending on the tool you use. You need to choose a tool that can run on existing hardware and whose licensing model allows for distributed deployment, like Traffic Parrot, or use a cloud offering like GetSandbox.

Avoid: Using wrong metrics

The ideal situation would be to measure, change one thing, and measure again. It is very hard, though.

What I have seen is teams measuring in different ways: estimating cost savings after the fact (very inaccurate and subjective), counting the number of services virtualized, or counting the number of transactions on the virtual service environment. Those things could be a measure of the complexity of the service virtualization implementation you have chosen, but they do not tell you much about the state of your software delivery lifecycle.

I don't measure anything with regard to service virtualization alone. I look at the state of the software delivery process as a whole: for example, the number of bugs in late stages of development, release lead times, the number of teams blocked, etc. When you want to measure something in the service virtualization space, think about, for example:

  • Total amount of time spent waiting for APIs to be delivered (lower is better)
  • Total amount of time spent waiting for test data to be set up (lower is better)
  • Average time to find a defect with big business impact (lower is better)


One exception here is measuring third party transaction costs. It is easy to assess cost savings when you use service virtualization in this case.

Action for you: read about the use of metrics in SDLC.

The end
Thanks for reading. If you disagree, please comment! If you enjoyed this and wanna see more, please like!

Tuesday 21 June 2016

Traffic Parrot Trademark

“Traffic Parrot” was registered on 3 June 2016 as a trademark of WB Software Consultancy Limited. Class 9: Service virtualization software; Test automation software; Software. Class 42: Software consulting services; Software design and development; Information technology consultancy.

Tuesday 17 May 2016

Service Virtualization and Given/When/Then

I recently found that I had problems explaining how service virtualization or over-the-wire stubbing (whatever you want to call it) fits into the usual type of testing I do on a daily basis, and what I mean when I say “service virtualization”. Here is a brief explanation of how that could work for developers and QAs.

Given/When/Then testing

In almost all cases when I create software these days I will have some level of automated testing. How much coverage I have depends on the context, but I almost always have a minimal amount of automated tests.

When I write the automated tests I split them into 3 logical sections. Let us imagine I am writing a simple application that talks to Markit market data APIs to fetch stock quotes and display the last price to the user.

The “givens” section is where I set up the system and its dependencies. In this section I will make calls to over-the-wire stubs (or virtual services) to set them up for the test. For example, the test will call a Wiremock instance running in a separate process and set it up so that when the application asks for a stock quote, Wiremock will pretend to be the Markit market data API and return a canned response.

The “whens” section is where we exercise the system under test.

The “thens” section is where we make assertions on the state or the interactions.

For example in Java with JUnit it would look like this:
import com.github.tomakehurst.wiremock.junit.WireMockRule;
import com.wbsoftwareconsutlancy.FinanceApplication;
import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static junit.framework.TestCase.assertEquals;

public class FinanceApplicationTest {
    @Rule
    public WireMockRule forecastIoService = new WireMockRule();

    private FinanceApplication financeApplication = new FinanceApplication();

    @Before
    public void setUp() throws Exception {
        financeApplication.start();
    }

    @After
    public void tearDown() throws Exception {
        financeApplication.stop();
    }

    @Test
    public void parsesLastPriceFromStockQuote() throws Exception {
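        // Given: the stub pretends to be the Markit quote API and returns a canned JSON response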
        forecastIoService.stubFor(get(urlEqualTo("/MODApis/Api/v2/Quote/json?symbol=AAPL"))
                .willReturn(aResponse().withBody("{\"Status\":\"SUCCESS\",\"Name\":\"Apple Inc\",\"Symbol\":\"AAPL\",\"LastPrice\":103.17}")));

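        // When: the application under test is asked for the last price of the stock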
        Content content = Request.Get("http://localhost:" + financeApplication.port + "/stock-quote-last-price")
                .execute()
                .returnContent();

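        // Then: the last price parsed from the canned response is returned to the caller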
        assertEquals("103.17", content.toString());
    }
}
 

Service Virtualization

The given/when/then approach has proven to work very well when I was working on a greenfield project or on a system that had been designed with testability in mind. It might be a challenge to follow this pattern if you are working with a legacy codebase that has not been designed with testability in mind. You might have to introduce a more complex, stateful over-the-wire stub (virtual service) or even implement a simulation.

When the system or module I am working on has not been designed with testability in mind, I find myself tackling this complexity by adding more complexity to the testing infrastructure. For example, in order to implement an automated suite of tests, I need to introduce a complex virtual service (stub). Introducing more complexity to tackle existing complexity is a slippery road. There are times when it is a good tradeoff. Unfortunately, in most cases where I have seen it done, it was a good short-term solution that turned into a high-maintenance long-term one.

So what do I do?
So, what are the actions for you today? Find a few complex virtual services you have created that require a lot of your time for maintenance. Understand how much time/effort/money goes into maintaining them. Speak to developers and architects and understand whether it is a good idea to keep on maintaining them, or whether it would be better to refactor selected system-under-test modules to make them more testable and reduce the overall maintenance costs by getting rid of complex virtual services.

Tuesday 1 March 2016

InfoQ publishes "Stubbing, Mocking and Service Virtualization Differences for Test and Development Teams"

Wojciech's article on "Stubbing, Mocking and Service Virtualization Differences for Test and Development Teams" has recently been published on InfoQ. You can find it here. InfoQ has over 800,000 subscribed members and over 1 million unique visitors a month.