Wednesday, 3 August 2016

My story with service virtualization, CA LISA and WireMock

This is my story with service virtualization. If you have ever heard about service virtualization and are interested in somebody else's experience with it over a period of five years, please read on.

I've been doing software development and testing for more than ten years now, six of which have involved continuous delivery, continuous integration, TDD, BDD, pair programming, and similar agile practices.

Four companies

This is going to be a story about four companies. I would like you to remember that I'm going to be presenting this story from the continuous delivery point of view. So whatever we talk about here, it is from a continuous delivery mindset.

Agile 2011-2013

The story begins in 2011-2013 when I was working for organizations that were following the agile principles to a certain extent.

Media company

The first example is a media company, which had 100+ different applications (different codebases) running on more than 200 JVMs. There were 50 people working on those: developers, QAs, database specialists, ops, and support. They were separate functional teams, but they were working very closely together.

When it comes to service virtualization, back then we used to call it stubbing, and we built our stubs in-house. That was because there were no lightweight products available at the time that would fit the continuous integration model, and no open source alternatives or anything like that. Those tools were used by developers and QAs for both functional and performance testing. What I noticed was that it took two to ten days of development to build such a tool.

Investment bank

Then the story continues and I move to an investment bank to develop a risk system. This was a fairly large system running on 1,000 servers in production, managed by around 30 people in cross-functional teams mixed with functional teams. Developers would be doing quality assurance and production support.

In that organization, we also built stubs in-house for developers, and it took only two days of development on average, because the stubs were very specialized; there wasn't much to be built. At that time, there were still no lightweight products available that would suit our needs, so we had to build something in-house. We didn't have to cater for QA requirements simply because there were no QAs; developers did everything.

Retail bank

The story continues to 2013-2015, when I was introducing agile to a retail bank. The bank had some issues around delivery. The releases were taking too long, 6 to 12 months. There were a lot of bugs in late stages like UAT and production. And it was quite expensive. The complexity of the system was more or less the same as the risk application I had worked on in the investment bank I mentioned, but the retail bank application was 20 times more expensive than that system (sic!).

There were also some architectural challenges. The webapp was a monolithic codebase with a lot of low-quality Java code. It was connecting to middleware gateways and C++ middleware, and then to 20+ backend systems via different protocols: HTTP, JMS, and custom TCP protocols. There was very little test automation, there were very long-running builds, and the reverse testing pyramid was in practice: almost no unit tests, some automated functional testing, some acceptance tests, and then a lot of manual testing on top.

The teams were strictly functional and not working closely together, so developers and QAs did not get along well.

Service virtualization

That was the first time I had heard the term service virtualization. In the past, I had called this over-the-wire stubbing or over-the-wire mocking.

The bank wanted to increase release schedule predictability and reduce the number of production defects. They hired 15 very experienced, high-profile tech leads to make that change. Those tech leads worked on a few large, strategic projects, like modularizing the codebase and moving to a modern source control system. But they were also introducing tactical solutions to pick up the low-hanging fruit and do something in the meantime, because the strategic solutions would take a long time to implement.

One of the things they came up with was to record the 20+ backend systems, automate testing that way, and see what happened. That's how I got hired.

I got hired

I was introduced to CA LISA (CA Service Virtualization is the current name for the product).

My first impression: it was the first time I had seen record and replay functionality, which was pretty cool. It was doing templating with so-called magic strings. That was cool as well. And it supported many protocols; the big win for us was the custom TCP protocol virtualization that the bank needed.

We proceeded to implementation. After six months of trying to do continuous delivery, continuous integration, TDD, and those kinds of things with CA LISA, we found that CA LISA works. We had several successes with it. We made several teams happier than they were before, as they now had a tool to do stubbing or service virtualization. And we had around £300,000 of estimated cost savings.

But we had some learnings along the way. What we found was that CA LISA didn't play very well with source control systems like Git. It produces XML files, which you can keep in source control, but they are not easy to merge or to compare across versions. Keeping in mind my background and what I had seen in the past, I had seen this done much more cleanly and easily.

Also, the centralized deployment infrastructure didn't play very well with our Jenkins builds. Ideally, you want everything contained within the build agent, which should do whatever it needs to do to build and test the application. That didn't fit the centralized deployment infrastructure CA LISA provides. And because of the licensing model and how heavy the infrastructure was, we couldn't deploy it on every build agent.

Another thing we noticed was that it was very hard to set up the stubs or virtual services from within a test. Tests are usually split into three logical sections: the givens, the whens, and the assertions. In the givens section you set up the prerequisites for the test, including the stubs the test needs to pass. Then you execute the action under test, and in the last section you make assertions. Unfortunately, CA LISA wasn't helping us with that first section. Setting up stubs from a test was possible, but it didn't look anywhere near as simple as what we had seen in the past.
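
To show the kind of simplicity we had in mind, here is a minimal sketch of a given/when/then test that sets up an over-the-wire stub in its givens section, using the JUnit rule from WireMock (a tool that appears later in this story). The endpoint, payload, and RatesClient are hypothetical examples:

    import static com.github.tomakehurst.wiremock.client.WireMock.*;
    import static org.junit.Assert.assertEquals;

    import com.github.tomakehurst.wiremock.junit.WireMockRule;
    import org.junit.Rule;
    import org.junit.Test;

    public class ExchangeRateTest {

        // Starts an over-the-wire HTTP stub on port 8089 for each test
        @Rule
        public WireMockRule backendStub = new WireMockRule(8089);

        @Test
        public void displaysExchangeRateFetchedFromTheBackend() {
            // givens: teach the stub how the backend should respond
            stubFor(get(urlEqualTo("/rates/GBPUSD"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"rate\": 1.29}")));

            // when: exercise the application code that calls the backend
            // (RatesClient is a hypothetical client from the system under test)
            RatesClient client = new RatesClient("http://localhost:8089");
            double rate = client.fetchRate("GBPUSD");

            // then: assert on the observable outcome
            assertEquals(1.29, rate, 0.001);
        }
    }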

The BeanShell extensions were causing us problems as well when scaling to other teams. You couldn't use tools like IntelliJ or Eclipse to write those extensions, which is what we were used to. What I had done in the past was write an extension to a stub by simply implementing a Java interface, using the code completion provided by the IDE, IntelliJ or Eclipse. That wasn't possible here. You had to do it in the LISA workstation, which was quite counterintuitive for us, and you had to learn a lot of new things before you could do anything meaningful.
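
For comparison, here is roughly what extending a stub in plain Java can look like. This sketch uses the WireMock 2.x response transformer extension point (again, a tool covered later in this story); the header it adds is a made-up example:

    import com.github.tomakehurst.wiremock.client.ResponseDefinitionBuilder;
    import com.github.tomakehurst.wiremock.common.FileSource;
    import com.github.tomakehurst.wiremock.extension.Parameters;
    import com.github.tomakehurst.wiremock.extension.ResponseDefinitionTransformer;
    import com.github.tomakehurst.wiremock.http.Request;
    import com.github.tomakehurst.wiremock.http.ResponseDefinition;

    // A stub extension written as ordinary Java code, with full IDE support
    public class StaticHeaderTransformer extends ResponseDefinitionTransformer {

        @Override
        public ResponseDefinition transform(Request request, ResponseDefinition responseDefinition,
                                            FileSource files, Parameters parameters) {
            // Decorate every stubbed response with an extra header (made-up example)
            return ResponseDefinitionBuilder.like(responseDefinition)
                    .but()
                    .withHeader("X-Stubbed-By", "wiremock")
                    .build();
        }

        @Override
        public String getName() {
            return "static-header";
        }
    }

An extension like this is registered when the stub server starts, and the IDE gives you code completion and refactoring support the whole way.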

We had some learnings about the infrastructure as well. The VMs provided to our offshore partners had very limited resources, while the LISA workstation needed a gigabyte of RAM and a lot of disk space. Those VMs did not have enough RAM to run the workstation in the configuration required by our virtual services. In the past, our in-house stubs had been simple web applications, so you could just use a web browser to connect to a stub. We felt that approach would probably have served us better at that time.

The infrastructure, a separate VSE, a separate registry, and a separate workstation, caused us a few problems as well. For example, the recommended way was to record only in the workstation, whereas quite often we wanted to record directly on the VSEs. Managing the workstations across many VMs and many laptops was also a problem because of versioning.

We had some learnings around the licensing as well. We wanted to scale to more than 70 teams, ideally keeping continuous delivery, continuous integration, and TDD in mind. We wanted an instance of LISA per developer or QA, but that did not look feasible because of the resources it consumed on a VM or laptop. So we thought, okay, let's have at least one per team. But that was still a problem because of the licensing model, so we had to abandon that idea.

There were a few general user experience learnings as well. We got the feeling there were too many screens, too many controls, and too many components that we just didn't need. It was a very powerful tool, but we didn't need most of the functionality.

Our main takeaway was that LISA works. It supports custom TCP protocol virtualization, JMS, and HTTP(S). But when we tried to apply it in a continuous delivery environment, it didn't work as well as we had hoped, keeping in mind my personal experiences with stubbing in agile environments.

It's worth mentioning here that other departments in the bank were using CA LISA as well. Those departments were usually in maintenance mode, so there was no focus on faster release times and no focus on things like continuous delivery. As far as I know, CA LISA has been very successful in those environments.

It's also worth mentioning that, in this retail bank, there were two in-house tools that had been developed to do service virtualization and stubbing (or whatever you want to call it). They were built to facilitate stubbing of specific components. Unfortunately, neither of those tools was able to record the 20-plus backend systems. But what we found was that they fit very well into the continuous integration mindset, and it took around two to ten days, on average, to develop such a tool.

2015-2016

The story continues to 2015-2016, when I was back at an organization that does a lot of agile. The company had 130+ different applications (different codebases) running on a lot of JVMs in production. There were 50 people managing all of that, working in functional teams very closely together.

WireMock

By this time WireMock, an open source stubbing and service virtualization tool, was mature enough that we could start looking at using it.

We built a service virtualization and over-the-wire stubbing solution on top of WireMock. It was designed to be used by both developers and QAs. Developers would use the WireMock remote setup programmatic APIs, and QAs would use a Web GUI built on top of WireMock for their exploratory and performance testing. The QAs in this team were not very tech-savvy and needed a simple GUI to navigate around the tool. What I found was that it took around 15 days of programming, spread over the last 9 months, to build it.
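
To give a flavour of the developer side, here is a minimal sketch of WireMock's remote setup API: the static client is pointed at a WireMock server running elsewhere, and stubs are registered over its admin API. The hostname and endpoint are made-up examples:

    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    public class RemoteStubSetup {

        public static void main(String[] args) {
            // Point the static WireMock client at a remotely running WireMock server
            // ("stubs.internal.example.com" is a hypothetical hostname)
            configureFor("stubs.internal.example.com", 8080);

            // Register a stub on that remote server via its admin API
            stubFor(post(urlEqualTo("/payments"))
                    .willReturn(aResponse()
                            .withStatus(201)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"status\": \"ACCEPTED\"}")));
        }
    }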

What we found was that WireMock fits very well into the continuous integration mindset, and it's a great tool for developers. It has some bugs, as it's still a young tool, but I would use it again if I got the chance. Unfortunately, we had to do some internal development on top of it, simply because it wasn't designed with non-tech-savvy QAs in mind.

Next logical step?

What would be the logical next step if you were in my position? For me, it was combining all of those experiences: the traditional and agile methodologies; the cross-functional, functional, and mixed teams; CA LISA; and using WireMock and building on top of it. Having been in those four companies, I came up with a list of requirements for a tool that would satisfy my needs in all of those circumstances.

New tool requirements

I wanted a tool that would allow me to deploy anywhere, anytime, where the licensing wouldn't get in the way. I wanted a lightweight tool that would consume tens or hundreds of megabytes of memory instead of gigabytes. With flexible licensing and a lightweight footprint, I could run it on existing hardware.

That doesn't mean I couldn't run it in a centralized manner. I would like to be able to run it both locally, on a laptop, and centralized, on some servers, and the tool shouldn't get in the way of either.

I would like the tool to be easy to integrate with CI tools like Jenkins and TeamCity; I would like to run it on build agents in an isolated environment to do CI properly. And I would like the tool to be designed with automation in mind, so it is easy to integrate with things like JBehave, Cucumber, JUnit, etc.
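
As an illustration of what that self-contained setup can look like for HTTP, here is a minimal sketch using WireMock, where the whole stub lifecycle lives inside the build process (the port and endpoint are arbitrary examples):

    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class BuildAgentStubExample {

        public static void main(String[] args) {
            // The stub starts and stops inside the build itself, so a Jenkins
            // or TeamCity agent needs no centrally managed infrastructure
            WireMockServer server = new WireMockServer(8089);
            server.start();
            try {
                configureFor("localhost", 8089);
                stubFor(get(urlEqualTo("/health"))
                        .willReturn(aResponse().withStatus(200).withBody("OK")));

                // ... run the tests that need the stubbed backend here ...
            } finally {
                server.stop();
            }
        }
    }

Because everything runs in-process, each build agent gets its own isolated stub, with nothing to install or license centrally.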

Also, I would like the tool to keep in mind not only developers but QAs as well. Developers want CI, TDD, etc. What the less tech-savvy QAs want is a nice GUI on top of the same service virtualization tool the developers use. They also want to work with the same artifacts that the stubs produce when used by developers.

I would like a tool that supports both functional and cross-functional teams, and ideally one that is very simple to use and doesn't need any training, providing a great user experience.

Traffic Parrot

And this is how I came to develop Traffic Parrot.

If you look at the tools on the market, there's definitely the option of going open source. If I were to go open source today, I'd probably go for WireMock. But if you want to look at other tools, including ones that are not Java-related, say NodeJS or Ruby, I've compiled a list of 40+ tools.

There's also the option of in-house development. The environment you're in and how experienced your developers are would inform how you do it. What I have seen is that, with mid-level to senior engineers on your team, it's going to take around 5 to 15 days to develop a QA GUI on top of WireMock. And if you want to support other protocols like JMS or custom TCP, it's going to take much longer than that; there's a lot of additional effort required.

Or you could go for something like I've already mentioned: an enterprise-ready solution like Traffic Parrot.

What is Traffic Parrot? If you are looking for an over-the-wire stubbing or service virtualization tool and you're doing continuous delivery and continuous integration, I would encourage you to go to TrafficParrot.com to learn more about the tool and all the details.

The end

That's my story with service virtualization going through four companies, having different experiences using different types of tools. I hope you enjoyed it. 
