Monday, 28 November 2022

Codemotion: Design-first gRPC APIs for microservices: a sample workflow for parallel teamwork

We have just published an article on Codemotion about "Design-first gRPC APIs for microservices: a sample workflow for parallel teamwork".

Key Takeaways:

  • You can allow teams to work in parallel by designing gRPC APIs before starting to code. 
  • When teams work in parallel, you significantly decrease the time-to-market of your products.
  • A sample workflow has a few key stages:
    • The teams start by designing the gRPC API (see the contract sketch after this list).
    • Then, producer and consumer teams can work in parallel on their microservices.
    • The consumer team can use mocks to simulate the backend producer service.
    • Communicating feedback about the API specification during the development phase is essential.
    • Once the microservices are ready, they can test them together and release them to production.
  • You can start by onboarding just one API to this new workflow; you don’t have to migrate your whole company to begin with.
  • Working in parallel is excellent for team morale, as there is less hard-deadline pressure.
  • You can estimate the return on investment by using a simple spreadsheet.
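
For illustration, a design-first workflow typically starts from a .proto contract that both teams agree on before any code is written. Below is a minimal sketch with hypothetical service and message names; it is the kind of artifact the producer team implements and the consumer team mocks.

    syntax = "proto3";

    package orders.v1;  // hypothetical package name, for illustration only

    // Hypothetical producer service: the consumer team can generate a client
    // from this contract and run it against a mock while the producer team
    // implements the real service in parallel.
    service OrderService {
      // Returns the current status of a single order.
      rpc GetOrderStatus (GetOrderStatusRequest) returns (GetOrderStatusResponse);
    }

    message GetOrderStatusRequest {
      string order_id = 1;
    }

    message GetOrderStatusResponse {
      string order_id = 1;
      string status = 2;  // e.g. "PENDING", "SHIPPED", "DELIVERED"
    }

Once this contract is agreed, both teams generate their stubs from the same file, and feedback about the specification during development flows back into the .proto rather than into ad-hoc documentation.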




Thursday, 10 November 2022

Traffic Parrot 5.35.5 released, what's new?

We have just released version 5.35.5. Here is a list of the changes that came with the release:

Features

  • Added support for Apache Thrift service simulation, see a demo video
  • Added ability to configure HTTP basic authentication for the Traffic Parrot user interface
    • Enable in the trafficparrot.properties file by setting:
      trafficparrot.gui.security.mode=LOGIN_PROPERTIES
    • Configure users in the trafficparrot.gui.login.properties file, in Jetty HashLoginService format (see the combined sketch below):
      admin=password,traffic-parrot-gui-role
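
Putting the two settings together, a minimal configuration might look like the sketch below; the property and file names come from the notes above, while the password value is only a placeholder.

    # trafficparrot.properties
    trafficparrot.gui.security.mode=LOGIN_PROPERTIES

    # trafficparrot.gui.login.properties (Jetty HashLoginService format: user=password,role)
    admin=change-me,traffic-parrot-gui-role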

Fixes

  • Fixed a UI issue with mappings edited multiple times during the same UI session
  • Added support for protoc on Apple ARM-based M1/M2 processors
  • Library upgrades to fix OWASP issues

Changes

  • Upgraded bundled JRE from 8u332 to 8u345
  • Upgraded WireMock from 2.33.2 to 2.35.0
  • Upgraded gRPC from 1.46.0 to 1.50.2
  • Upgraded protoc from 3.20.1 to 3.21.9

Thursday, 20 October 2022

Need help migrating from Broadcom CA Lisa Service Virtualization to another API mocking tool?

Today, many teams are looking for help migrating their existing service virtualization to more cost-effective and DevOps-friendly solutions, including both open-source and commercial tools such as the ones described above.

Traffic Parrot has released a free CA Lisa MAR Analyzer tool that will help you assess the complexity of your existing CA Lisa (Broadcom Service Virtualization) MAR files and help you prepare for a migration to other tools.

We can help you migrate your MAR files with our automated MAR migrator. We also support formats from other vendors and open-source projects; feel free to contact Traffic Parrot for more details.

You can read more on our website about how a global retail bank migrated one of its departments off CA Lisa and now saves 51% per year on tool costs.

Tuesday, 18 October 2022

The API Landscape: a comprehensive view of all stakeholders creating the programmable economy

We were recently listed on the apidays API landscape, which collects a comprehensive view of all stakeholders creating the programmable economy.
API testing is a key component of the programmable economy. Traffic Parrot mocks and simulates APIs and services so that you can test your applications and microservices without worrying about test data set-up or environment availability.

Friday, 14 October 2022

Traffic Parrot 5.34.7 released, what's new?

We have just released version 5.34.7. Here is a list of the changes that came with the release:

Fixes

  • Fixed an issue with the enabled script for HTTP webhook callbacks
  • Fixed an issue with XLS data stores when storing and retrieving values with leading zeros
  • Library upgrades to fix OWASP issues

Wednesday, 21 September 2022

Traffic Parrot supports GCP Dialogflow gRPC mocks

When working with complex cloud services such as GCP Dialogflow, it can be useful to rely on API mocks to test different hypothetical integration scenarios without needing a real GCP service to be configured and available.

We have a sample project available which you can use with a free Traffic Parrot trial to mock bidirectional streaming Dialogflow services, without the need for a live GCP instance.
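
As a minimal sketch, assuming the mock exposes its gRPC port on localhost, a test client can be pointed at it over a plain-text gRPC channel; the host and port below are hypothetical placeholders, so check your own Traffic Parrot instance for the real values.

    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;

    public class DialogflowMockSmokeTest {
        public static void main(String[] args) {
            // Hypothetical host/port where the Traffic Parrot gRPC mock listens.
            ManagedChannel channel = ManagedChannelBuilder
                    .forAddress("localhost", 5552)
                    .usePlaintext() // local mocks are typically served without TLS
                    .build();

            // The generated Dialogflow stub (for example SessionsGrpc.newStub(channel))
            // can be created against this channel instead of the real GCP endpoint, so
            // bidirectional streaming calls such as StreamingDetectIntent hit the mock.

            channel.shutdownNow();
        }
    }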

Friday, 16 September 2022

Traffic Parrot 5.34.3 released, what's new?

We have just released version 5.34.3. Here is a list of the changes that came with the release:

Fixes

  • Library upgrades to fix OWASP issues

Monday, 12 September 2022

How do I forward the HTTP requests I don't have mappings for to another endpoint?

Sometimes we need to work with a mixture of real and mock APIs, for example using mocks for APIs that are not yet developed and real APIs for those that already exist.

To support this use case, you can use Traffic Parrot to forward HTTP requests to another endpoint instead of returning the default "not found" response.

You can use the proxy responses feature to provide a default response for unmatched requests (a sketch follows the steps below):
  1. Set a low priority, such as 10, for the mapping with the proxy response
  2. Leave the default priority for the other, non-proxy mappings
  3. Any unmatched requests will now receive a response forwarded by the proxy mapping
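
Traffic Parrot's HTTP mocking is built on WireMock (see the release notes above), so a catch-all proxy mapping expressed in WireMock-style JSON might look roughly like the sketch below; the target base URL is a placeholder and the exact fields available may vary by version.

    {
      "priority": 10,
      "request": {
        "method": "ANY",
        "urlPattern": ".*"
      },
      "response": {
        "proxyBaseUrl": "https://real-api.example.com"
      }
    }

Because all other mappings keep the default, higher priority, they are matched first; only requests that nothing else matches fall through to this proxy mapping.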

Thursday, 1 September 2022

How pair-programming brings down knowledge silos

One of the proven ways of eliminating knowledge silos is pair-programming (or pair-testing for QAs). A team of researchers from Pivotal concluded that a flavour of pair programming with overlapping pair rotation, described in “Sustainable Software Development through Overlapping Pair Rotation”, “encompasses principles, policies, and practices aiming at removing knowledge silos and improving code quality (including discover-ability and readability), hence leading to development sustainability.”

They note that “Conventional wisdom says that team disruptions should be avoided, and that extensive documentation is needed to prevent knowledge loss during team churn. Unfortunately, documentation often quickly becomes out-of-date and unreliable. The theory positions team code ownership with overlapping pair rotation and knowledge pollination as an alternative and potentially more effective strategy to mitigate against knowledge loss”.

From the experience we have gathered with Traffic Parrot customers, we can confirm these findings: pair programming is an effective way to eliminate knowledge silos and prevent critical knowledge loss. We have seen pairing developers with testers work equally well at sharing knowledge and helping bring down silos between QA and Development.

Thursday, 25 August 2022

3 challenges when introducing API-first to microservices - an interview with Alin Daniel Ferenczi

We asked Alin Daniel Ferenczi about his experience with the API-first approach. He is a Solution Architect and Deep Tech Investor with 7 years of experience working to disrupt traditional business workflows through automation, API integrations, microservice architecture and multi-cloud adoption for startups and large enterprises.

Traffic Parrot: Alin, in your experience, what is the difference in API-first approaches for monolithic and microservice architectures?

Alin: The main difference from an architecture standpoint lies in the distribution of responsibilities. Designing a system must focus on following the single responsibility principle. It is at once one of the simplest and one of the most complex design principles, and it can have a major impact on the entire approach. Splitting the responsibilities properly and following an API-first approach allows teams to work in parallel, increasing speed to market.

Alin: In a monolithic architecture, cross-cutting concerns such as logging, error handling, caching and performance monitoring affect only one application, which encompasses all the APIs. These services are usually more tightly coupled, and their responsibilities can become blurred at times. This requires additional effort to keep a close eye on the code and avoid cluttering it with logic that is redundant across the system.

Alin: Designing microservices requires a good segregation of responsibilities. This also implies that cross-cutting concerns need to be designed as individual services or integrated into each service individually, which adds additional layers of complexity to each implementation.

Alin: Designing a good API-first architecture from the start is crucial for every system to ensure a good development cycle, faster delivery, scalability and reliability. Deciding on the responsibilities of each component requires a different paradigm for a monolith than for microservices.

Traffic Parrot: Alin, in your experience, what are the three main challenges for companies starting their API-first journey for microservice architectures?

Alin: There are quite a few issues that might arise when developing microservice architectures. My clients, whether they have an existing monolith or are developing a brand-new API-first application based on microservices, seem to struggle the most with synchronization between releases in order to properly test the integration of each API. Integration testing is a big part of a good release cycle, and when development times for different services diverge significantly, it can cause unnecessary downtime for development teams. This happens especially if multiple frameworks are used for different APIs and the developers cannot contribute to the other projects.

Alin: API-first adoption additionally requires a shift in thinking to create better reusable APIs; either the planning phase is too hasty and the reusability is not as expected, or the plan takes so long to establish that other changes arise and need to be considered.

Alin: Another big issue that I have encountered is the lack of adoption of proper tools for documenting, style validation, mocking and versioning of the APIs. Onboarding new hires on existing projects requires proper documentation or sandbox environments so that developers can try out the API endpoints and get familiar with the environment. A proper DevOps culture and a “set up once” mentality need to be established, together with processes for documentation, testing and releases, in order to avoid repetitive tasks and focus the team’s attention on more creative requirements.

Traffic Parrot: Alin, how do you currently solve those problems?

Alin: We have managed to tackle the issue of synchronization between releases by sticking to a limited set of technologies whenever possible. Adopting an inner-source initiative alongside the API-first approach has also enabled people to collaborate better and helped redistribute the free capacity of teams that had a faster pace.

Alin: The inner-source adoption has also helped with planning and managing reusable components. Since everyone has access to every component, it is far easier to find overlaps between the APIs being developed and to contribute to the reusable modules’ library.

Alin: Proper tooling is not only a technology decision. Financial aspects need to be taken into account, and depending on the size of the organization it may take months before a tool can be adopted. An alternative to acquiring tools can be improving the current workflow depending on the needs of each project. Since I work both on architecture and on budget allocation or raising funds, depending on the company, I advocate for the adoption of new tools, but if something can be achieved through better workflow management and better DevOps processes, I might decide to postpone or avoid the additional cost.

Traffic Parrot: Alin, in your experience, what are the three main challenges for companies already using API-first?

Alin: In my experience, the tasks in API-first architectures that require the most effort and time while providing the least benefit are: understanding the differences between the documentation and the actual implementation, caused by out-of-date documentation or technical debt; mocking data for edge cases and for requirements that change frequently; and setting up proper error handling or contingency plans for APIs that are unhealthy.

Alin: The enormous amount of technical debt is an issue that every company struggles with, or will struggle with in the future, and it is the most likely reason for a company to refactor one or more of its APIs. This is a very good reason why the microservice architecture should be adopted whenever possible. Technical debt usually comes together with mismatches in the documentation, which can hurt the productivity of other people who need to interact with the application.

Alin: Mocking data for edge cases and requirements can cumulatively take days for people to compose; even when done preventively, it still takes time and additional cost to set up, and if it turns out not to be as expected, it can have an impact on the development cycle. In situations where data is time-dependent, this can become a major pain point for each release. Even automated processes may not cover all cases, and if the data is tied to an external provider, it may not be as accurate as data taken directly from the source.

Alin: API-first architectures that are structured as microservices imply that each component can fail independently. This can be an advantage if the system is properly structured to handle errors, as it can quickly detect failures using health probes and take action accordingly through disaster recovery plans without taking the entire system down. The issues and struggles that I have encountered at companies have been related to the design or misconfiguration of those plans, whether in checking system health, taking action, reporting and monitoring, or mitigating business risks such as loss of data, breaches of SLAs, and so on.

Traffic Parrot: Alin, how do you currently address those challenges?

Alin: Technical debt is hard to avoid. Refactoring is necessary from time to time, and if it makes sense from a cost-benefit analysis, I suggest pursuing such tedious tasks. There are many examples of popular apps where refactoring has helped them evolve and adapt to a larger user base. Especially when going from a proof of concept to large-scale adoption of a new service, refactoring can be unavoidable. Knowing when to focus on delivery and come back with improvements later has helped me deliver faster and drive quicker adoption of applications.

Alin: My favorite subject to talk about is automation, and even when developing APIs and applications for automation, there are still cases where we can automate our own processes. Data generation and mocking is a great example. Pushing to prioritize the automation of repetitive tasks has always been beneficial. If companies such as Traffic Parrot can provide some of that automation out of the box, and the ROI of adopting external tools makes sense, it is even more beneficial, because internal developers can focus on more productive tasks.

Alin: Testing disaster recovery plans regularly and improving them with each iteration is becoming more and more popular within the companies that I have interacted with. I am always trying to stay up to date with the latest techniques for proper configuration and to implement them whenever capacity is available or when preparing for a disaster recovery exercise.

Next steps

Would you like to share your experience with the API-first approach? Please reach out to us; we would be very much interested in hearing your thoughts and sharing them with our audience.

Would you like help with your API-first approach? A number of Traffic Parrot clients follow the API-first approach, and we would be happy to help you on your journey. 


Tuesday, 2 August 2022

Traffic Parrot 5.34.0 released, what's new?

We have just released version 5.34.0. Here is a list of the changes that came with the release:

Fixes

  • Library upgrades to fix OWASP issues

Tuesday, 26 July 2022

Interview: Traffic Parrot API mocking helped an Agile team with automated testing

An interview with an Agile Java Developer working for a global media company that has used Traffic Parrot for a number of years now.

1. What caused you to look for a service virtualization and API mocking solution?

I was working as an Agile Java Developer on a cross-functional team. The team was used to high levels of automation and testing, using techniques such as TDD daily. It was common practice at the company to use both in-memory and over-the-wire mocks to facilitate automated and manual testing.

2. How was mocking handled previously? How would you define the specific challenges / limitations with this approach?

Before adopting Traffic Parrot technology, the mocks we used on this project were all in-memory mocks using libraries such as Mockito. This was sufficient to write unit-level tests but did not allow us to write over-the-wire integration/acceptance tests or enable our manual tester to test microservices in isolation.

3. Was there any urgency to find a solution? What was driving that urgency?

Yes, in the sense that this was part of the existing approach the team took to testing. It became urgent at the point that we needed to write integration/acceptance tests to help demonstrate functionality to the business analysts on the team.

4. How did you go about trying to solve the problem prior to implementing a solution?

We lived with the problem: we skipped writing some tests and skipped some of the manual testing with mocks.

5. Which resources were most useful in creating a vendor list? Least useful?

Google searches and word of mouth were enough; other resources were not explored.

6. How did you discover Traffic Parrot/Mountebank/Wiremock/…?

Google/GitHub/word of mouth referral from other team members.

7. What was compelling enough about Traffic Parrot that led you to evaluate?

The ability to use the same mocks (via shared configuration files) for both automated and manual testing, as well as the presence of a simple UI to configure/enable the mocks as required.

8. Which vendors did you consider? Why?

WireMock was also considered, but it lacked a UI, so it fell short of being useful for our manual tester on the team.

9. Can you briefly describe the team's decision making process?

The fastest route to a working solution wins. Something that satisfies each role on the team: developers, testers, business analysts.

10. What role did you play in the project?

Agile Java Developer, writing production/test/infrastructure code.

11. What were your most important evaluation criteria?

Programmatically configurable APIs, request matching, request/response monitoring.

12. Who was the ultimate decision maker?

All roles on the team needed to be satisfied with the decision.

13. What was the single most important reason for the win/loss?

Time to solution.

14. How would you articulate Traffic Parrot's value proposition?

Test in isolation, infrastructure as code, share mocks between development/testing/across teams.

15. What do you feel were the key strengths and weaknesses of Traffic Parrot?

Strengths: customer support is excellent; the range of configurations, from UI-driven to code-driven, is very helpful

Weaknesses: lacks an extensive library of templates and code examples to get started faster

16. Which vendor did you like the most and why?

Traffic Parrot, because it matched our expectations and we didn’t hit any blockers when implementing a solution.

17. Which vendor did you like the least and why?

Other open-source vendors without Java language bindings, because we would have had to write our own Java bindings for our test mock usage.

18. Can you tell me about pricing? How big of factor did pricing play into your decision?

Pricing was handled two layers above my role; I did not consider pricing in my own decision.

19. What did you like / dislike about our sales and/or customer success or support efforts? How would you compare your evaluation experience with us compared to other vendors?

The support we received was much more timely and helpful than I have experienced with other vendors, including open source. The feedback we provided made its way into the product within days!

20. What could Traffic Parrot have done better?

More brand visibility; we only found out about Traffic Parrot by word of mouth.

21.  How likely is it that you would recommend Traffic Parrot to a friend or colleague on a scale of 0-10 (0=not likely at all, 10=extremely likely)?

10 - when you need a UI, programmatic APIs, infrastructure as code and shared mocks.

22.  What’s the one word that pops into your mind when you think of Traffic Parrot?

Configurable.

23.  How many people use Traffic Parrot at your company?

5

24.  What are your favourite features or capabilities with Traffic Parrot?

Infrastructure as code, ability to version control mock configuration.

25.  What’s the business value you get from Traffic Parrot?

Supports cross-functional teams' ability to move fast.

26.   Were there any unplanned wins? (e.g. capabilities or use cases that you didn't envision during the initial purchase but later realized would be valuable to you?)

We also realised we could use the same technology to provide mocks to other departments, to decouple our development schedules and interoperate via mock API specifications.

27.  What does the future look like? How do you plan to evolve and grow with Traffic Parrot?

We are trying to share our experience with other teams and encourage wider adoption. However, the company culture is such that each team tends to choose its own tooling and is often resistant to recommendations from other teams.


Monday, 11 July 2022

Service virtualization in ephemeral containers

Product teams that move fast would like to have more control over the infrastructure of the tools they run. They would also like to run service virtualization the same way they run their microservices and other tools: in ephemeral containers that they spin up on demand.

Below is a high-level comparison of the two options available: the traditional approach of service virtualization managed by a central team, and the new approach that is in line with current industry trends. A sketch of what the self-service model can look like in practice follows the comparison.

Traffic Parrot supports the new model, as well as teams that want to migrate from the traditional model to it.



Centrally managed service virtualization (i.e. a Center Of Excellence, COE) vs. self-service ephemeral service virtualization (also called API simulation):

  • Management of the SV tool servers
    • Centrally managed: typically a central team of administrators (COE)
    • Self-service ephemeral: the product team that needs the virtual services
  • Purchasing of SV tool licenses
    • Centrally managed: typically a central team of administrators (COE)
    • Self-service ephemeral: the product team that needs the virtual services
  • Creation of virtual services
    • Centrally managed: the product team that needs the virtual services
    • Self-service ephemeral: the product team that needs the virtual services
  • Lifecycle
    • Centrally managed: typically long-lived, 24/7 running instances
    • Self-service ephemeral: typically ephemeral instances spun up on demand
  • Development technology
    • Centrally managed: typically VMs
    • Self-service ephemeral: typically ephemeral containers
  • Deployment architecture
    • Centrally managed: typically a few shared instances of service virtualization environments
    • Self-service ephemeral: many ephemeral instances running on local laptops, inside CI/CD pipelines, in OpenShift, etc.
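
As a sketch of what the self-service model can look like in practice, a product team might spin up the mock as an ephemeral container inside a CI/CD job; the image name and port below are hypothetical placeholders rather than the actual Traffic Parrot distribution details.

    # Start an ephemeral API simulation for the duration of a pipeline stage
    # (image name and port are hypothetical placeholders).
    docker run --rm -d --name api-mock -p 8080:8080 my-registry/traffic-parrot:latest

    # ... run the test suite against http://localhost:8080 ...

    # Tear the simulation down when the stage finishes.
    docker stop api-mock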