Wednesday 26 February 2020

How long will it take to create 100 virtual services or API mocks and how many people do I need?

“I would like to create API mocks for 100 HTTP SOAP/JSON and JMS Active MQ services. How much time will it take and how many people do I need on my team?” - Software Architect working for a software consultancy

This is a concern many of our clients face. We typically recommend rephrasing the question to include the problem description as well.

Key takeaways:

  • You must define the problem you are solving before you can start a service virtualization project
  • State clearly the value the service virtualization project is to deliver
  • Running a pilot project to demonstrate value is key
  • There are several key factors that influence the length and size of the project
  • Categorising services can help estimate the scope of the project


A significant portion of our customers engages in large-scale service virtualization and API mocking projects that need to be managed accordingly. That means they would like to know what is required to complete the project: how many man-days, how many people and what hardware resources are needed.

The typical drivers for those big projects are removing bottlenecks such as long test data setup times, or achieving faster time to market.

What we typically recommend is to rephrase the problem in the context of the theory of constraints and narrow the scope down to: “how much mocking do we need to do to solve the problem well enough that it is no longer a problem, so we can focus on other development and testing priorities?”

In other words, it might not be a good idea to aim to create mocks for all services. It might be better to decide what critical mass of mocks will make the problem you are facing go away, so you can focus your efforts on other priorities.

An example of that would be a major UK bank that had an issue with setting up test data in its mainframe systems: the work was done by an external team that took 2-3 weeks to prepare the data for every sprint’s testing efforts. Only a subset of those APIs was being used by the mobile application, which was evolving quite rapidly compared to the other consumers of those APIs. The architects decided to create mocks for a subset of the mainframe services, and only for a subset of their use cases. They drove the effort with the suite of mobile tests, so that the automated regression suite could be run against API mocks every sprint. This allowed them to reduce the complexity of the mocks because they did not have to account for other teams’ use cases and complexities.

Another example is a US insurance company that uses a third-party payment gateway service. They created a mock that covered only a subset of the payment API features but allowed them to test all of their use cases of that service.

Having said all of that, the problem of scoping the project still remains. Let’s explore that in more detail.

The time will be different for every project and team, but there are ways to estimate it


The required complexity of the service or API mocks depends on the complexity of the services themselves, and also on how those services are used, e.g. how many test cases you run against them.




As discussed in How much time it will take to build/virtualize a simple, medium and complex service?:

  • A simple service will typically take a few minutes to a day, depending on your team's maturity and setup (see the sketch after this list)
  • A medium service can take from a few hours to a few days
  • A complex service can take from a few days to a few weeks
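
To make “simple” concrete, here is a minimal sketch of what a simple service mock could look like. It uses the open-source WireMock Java library purely for illustration (you would typically configure the equivalent through your service virtualization tool rather than in code), and the port, endpoint and payload are made-up examples:

    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    // Illustrative only: a "simple" service mock is often a single canned response.
    // The port, endpoint and payload are made-up examples.
    public class SimpleAccountServiceMock {
        public static void main(String[] args) {
            WireMockServer mock = new WireMockServer(8080); // serve the mock on port 8080
            mock.start();

            // One request pattern, one canned response - typically minutes of work
            mock.stubFor(get(urlEqualTo("/accounts/12345"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"accountId\":\"12345\",\"balance\":100.00}")));
        }
    }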


Because of the uncertainty around how long building individual services will take, we recommend that you run two activities in parallel:

  • Assign a small team or an individual to deliver the first virtual services to hit the first value realisation milestone
  • Assign a small team or an individual to estimate the size of the full project


Once that is done, you will have a better understanding of the complexity of the project.

One important point to note: if the value realisation milestone pilot project takes more than several weeks, it is probably too big; it should ideally take a couple of days to a few weeks. If it is too big, you might need to revisit your testing strategy and testing pyramid. If you have problems defining your value realisation milestone pilot project, please reach out to us at support@trafficparrot.com and we will be happy to advise.

How to deliver the first value realisation milestone?


In order to prove that the API mocking and service virtualization approach can deliver a good ROI at your company, we recommend you assign a capable technical person, or a couple of people, to realise the first milestone. Once that is done, they can share the findings and knowledge with the rest of the team.

The key requirements for the people involved in the first project:

  • At least 12 months of experience working on the project already
  • Experience running pilot projects
  • Good understanding of the business domain
  • Good understanding of the system under test
  • Good understanding of the technologies involved
  • Good communication skills
  • A pragmatic approach to problem-solving


We recommend choosing a value realisation milestone that is clear to the business. Here are a few examples of value realisation milestones you could look for:

  • Reduce test data setup time from 2 weeks to 1 minute by using pre-configured Traffic Parrot API mocks instead of waiting on the test data setup team to deliver new test data in the SIT environment mainframe systems
  • Allow for simulating several typical error responses observed in production for one of the services, which will result in better regression testing coverage and no production bugs in that area (see the sketch after this list)
  • Have a look at the typical problems solved by service virtualization/mocking/stubbing for more ideas on how to find a value realisation milestone
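
As an illustration of the second milestone above, here is a sketch of what simulating a typical production error response could look like, again using the open-source WireMock Java library purely for illustration; the endpoint, delay and error body are assumptions:

    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    // Illustrative only: simulate a typical production error response so regression
    // tests can cover the error-handling paths. Endpoint, delay and body are assumptions.
    public class PaymentErrorMock {
        public static void main(String[] args) {
            WireMockServer mock = new WireMockServer(8080);
            mock.start();

            mock.stubFor(post(urlEqualTo("/payments"))
                    .willReturn(aResponse()
                            .withStatus(503)        // simulate a downstream outage
                            .withFixedDelay(5000)   // 5 second delay to also exercise client timeouts
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"error\":\"payment service unavailable\"}")));
        }
    }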

After going through this exercise, you might have a few candidates for a good value realisation milestone pilot project. The best project to use out of those is the least complex one that takes the least amount of effort; pick the low-hanging fruit. You can then demonstrate the ROI fast, and onboard the team to the new testing approach. Once the team is familiar with the new approach to testing, you can tackle more complex projects.

How to estimate the size of the project?


To help you estimate the size of the full project, we advise you to:

  • Categorise complexity of services
    • Low complexity service
    • Medium complexity service
    • High complexity service
  • Categorise complexity of the individual tests and how they use the services
    • Low complexity test
    • Medium complexity test
    • High complexity test
  • Categorise the value of individual tests by how much value would be captured if those tests started using mocks instead of the real services
    • Little value
    • Moderate value
    • High value
  • Categorise by the rate of change of APIs and tests
    • Low change rate
    • Medium change rate
    • High change rate
  • Categorise services and tests by production critical issue risk
    • Low impact on production incidents
    • Medium impact on production incidents
    • High impact on production incidents

This is to give you an idea of what is involved in delivering the project and to help prioritise work. For example, high-complexity tests that use high- and medium-complexity services will take orders of magnitude longer to deliver than low-complexity tests that use low- and medium-complexity services. You might find low-hanging fruit, such as high-value, low-complexity tests, to onboard to virtual services as a priority. The categorisation will also help you prioritise which services and tests to deliver next.

For example, a simple user login test might use one simple service that is simple to mock, but it delivers a lot of value because if the user cannot log in you cannot run 70% of the other test cases.

Another example could be a payment test, which is medium in complexity but critical to the business, because if payments do not work the company does not make money.

Once you have identified the categories, you can assign an estimate of S (small), M (medium) or L (large) to each task and apply the rough estimates we have provided in the article above. As you work through the tasks and deliver S, M and L tasks, you will get more visibility into how long S, M and L tasks take to complete in your environment and increase the accuracy of your overall estimate. You can then assign more people to the project if needed.
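
As a purely illustrative sketch of the arithmetic, assuming a hypothetical split of the 100 services into S, M and L buckets and day ranges loosely derived from the article referenced above, the overall range could be computed like this (replace all of the numbers with your own):

    // Illustrative only: rough effort range from S/M/L category counts.
    // The counts and per-category day ranges below are assumptions, not measurements.
    public class MockingEffortEstimate {
        public static void main(String[] args) {
            int smallTasks = 60, mediumTasks = 30, largeTasks = 10;   // hypothetical split of 100 services

            // Day ranges loosely based on "a few minutes to a day", "a few hours to a few days"
            // and "a few days to a few weeks" - replace with what you observe in your environment
            double minDays = smallTasks * 0.1 + mediumTasks * 0.5 + largeTasks * 3;
            double maxDays = smallTasks * 1.0 + mediumTasks * 3.0 + largeTasks * 15;

            System.out.printf("Estimated effort: %.0f to %.0f man-days%n", minDays, maxDays);
        }
    }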

We would be happy to assist you in your estimation efforts; please reach out to us by emailing support@trafficparrot.com


Should I deliver mocks per-service or per-test?


Depending on the complexity of your tests and the complexity of the services that will be mocked, you might drive the change by implementing mocks per service or mocks per test.

For example, a UK retail bank used a third-party threat detection HTTP service with a simple API that takes several input parameters and returns a risk score. The usage pattern of this service remains the same regardless of the complexity of the test cases that rely on connectivity to it. The developers decided to implement a dynamic mock service that, depending on the request data, returns a predefined risk score. This allowed it to be reused in many tests.
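
A dynamic mock like the one described above could be sketched as follows. This is an illustration using the open-source WireMock Java DSL, not the bank's actual implementation; the URL, field names and scores are made up:

    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    // Illustrative only: return a predefined risk score depending on the request data,
    // so one mock can be reused across many tests. URL, fields and scores are assumptions.
    public class RiskScoreMock {
        public static void main(String[] args) {
            WireMockServer mock = new WireMockServer(8080);
            mock.start();

            // Specific match: requests for a known "risky" customer get a high score
            mock.stubFor(post(urlEqualTo("/risk-score"))
                    .atPriority(1)
                    .withRequestBody(matchingJsonPath("$.customerId", equalTo("RISKY-CUSTOMER-001")))
                    .willReturn(okJson("{\"riskScore\": 95}")));

            // Fallback: every other request gets a low score
            mock.stubFor(post(urlEqualTo("/risk-score"))
                    .atPriority(10)
                    .willReturn(okJson("{\"riskScore\": 10}")));
        }
    }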

In another example, a UK challenger bank had a user onboarding test that required connectivity to over 20 complex third-party and mainframe services. Those services were used not only in this test but also in other tests not related to user onboarding. The team decided to implement the mocks per test, not per service, as that reduced the complexity of the solution.
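
As a sketch of the per-test approach, each test can start its own mock and register only the handful of responses it needs, rather than simulating the full behaviour of every downstream service. The following is illustrative JUnit 5 and WireMock code with made-up endpoints, not the bank's implementation:

    import com.github.tomakehurst.wiremock.WireMockServer;
    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

    // Illustrative only: per-test mocks register just the responses this one test needs.
    // Endpoints and payloads are made-up examples.
    class UserOnboardingTest {
        private WireMockServer mocks;

        @BeforeEach
        void startMocks() {
            mocks = new WireMockServer(wireMockConfig().dynamicPort());
            mocks.start();
            // Only what this test needs - not the full behaviour of 20+ downstream services
            mocks.stubFor(get(urlEqualTo("/kyc/check")).willReturn(okJson("{\"status\":\"PASSED\"}")));
            mocks.stubFor(get(urlEqualTo("/credit/score")).willReturn(okJson("{\"score\":720}")));
        }

        @Test
        void onboardsNewUser() {
            // Point the system under test at mocks.baseUrl() and run the onboarding flow here
        }

        @AfterEach
        void stopMocks() {
            mocks.stop();
        }
    }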

We would be happy to help you assess how to approach the mocking on a case-by-case basis; feel free to reach out to support@trafficparrot.com for more details.
