
Stop the bugs before QA ever sees them

15 April 2026
Picture this: it’s a Tuesday afternoon. A tester on the QA team files a bug. The notification service is returning a userId field as an integer where every consumer in the system expects a string. Somewhere upstream, a backend engineer made a “harmless” refactor four days ago. The code is merged, reviewed, and deployed. The developer has moved on to the next ticket.
Sound familiar? It was a recurring pattern across our core microservices. Not dramatic outages. Just the slow drip of data type mismatches, missing fields, and response shape changes that would surface in QA days after they were introduced. Each one a small tax on the team’s time and momentum.
We could have added more manual checks. Written more thorough test cases. But the real question was: why are these bugs reaching QA at all?
The bugs weren’t hard to prevent. They were just being caught in the wrong place, by the wrong people, at the wrong time.
The root cause: no automated contract enforcement
A microservices architecture is fundamentally a set of promises. Service A promises to return a response that looks a certain way. Service B and the mobile apps promise to consume it correctly. When those promises drift — when the implementation drifts from what consumers expect — bugs happen.
We had OpenAPI specifications for every service. We just weren’t enforcing them automatically. The spec was documentation. Nobody was checking whether the running service actually kept its promise.
That’s the gap contract testing fills. And closing it meant two things had to be true simultaneously.
The prerequisite nobody talks about: API-first
Here’s the thing about contract testing that most articles gloss over: it only works if your specification is trustworthy.
In a lot of teams, the OpenAPI spec is generated from the code — it’s a reflection of what the implementation does, not a definition of what it should do. If you write contract tests against that spec, you’re essentially asking “does the code match itself?” Which isn’t a very useful question.
At PowerVerse, we do things differently. The OpenAPI specification is written before a single line of implementation code. It’s a design artefact — the agreement between the team building the service and every team consuming it. The implementation is built to satisfy the spec, not the other way around.
When the spec is written first, it becomes a genuine contract. Not documentation that describes what happened, but a promise about what will happen.
This changes everything about what contract testing can mean. When we validate a response against the live spec, we’re not checking whether the code matches itself. We’re checking whether the implementation keeps the promise the team made before writing any code. That’s a test worth running.
It also has a practical side effect: breaking changes become visible. If an engineer wants to change a response field, they have to update the spec first — which makes the change a deliberate, reviewable decision rather than an accidental side effect.
What we built
The system has three moving parts: Robot Framework test suites that live in each microservice's repository, a shared Python library called pv_robot_framework that contains the core validation logic, and a Jenkins pipeline that uses the test results to control whether a build is allowed to reach QA.
The design principle was simple: the test should be easy to write, and it should never need to be updated just because the API evolved normally.
Fetching the contract live
The first thing a contract test suite does is fetch the OpenAPI specification directly from the running service. Not from a file we maintain. Not from source control. From the actual endpoint that the service exposes at runtime.
```robotframework
Fetch Openapi Spec    ${API_BASE_URL}/notification-service/docs/json    contract/docs    api_spec.json
Validate Openapi Spec    contract/docs/api_spec.json
```
The Fetch Openapi Spec keyword launches a headless Chromium browser, navigates to the /docs/json endpoint, pulls the specification, and saves it to disk. We then validate the spec itself against OpenAPI 3.1 standards. If the spec is malformed — which occasionally happens during service development — the suite fails early with a clear error before we’ve made a single API call.
This matters because it means the contract tests are always validating against what the service actually serves, right now. If the spec changes, the tests adapt automatically. There’s nothing to update.
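The real keyword drives a headless Chromium browser via the Browser Library, but the essence of the fetch-and-fail-early step can be sketched in plain Python. This is an illustrative sketch, not the actual pv_robot_framework code: the function names are hypothetical, and the sanity check stands in for full OpenAPI 3.1 validation.

```python
import json
import urllib.request


def check_openapi_spec(spec: dict) -> dict:
    """Fail early if the document is not a usable OpenAPI 3.1 spec.

    A minimal stand-in for full spec validation (the real suite uses
    openapi-schema-validator); here we only check the shape we rely on.
    """
    version = str(spec.get("openapi", ""))
    if not version.startswith("3.1"):
        raise ValueError(f"expected an OpenAPI 3.1 spec, got {version or 'none'}")
    if "paths" not in spec:
        raise ValueError("spec has no 'paths' section, nothing to test against")
    return spec


def fetch_openapi_spec(url: str, out_path: str) -> dict:
    """Download the live spec from the running service and save it to disk."""
    with urllib.request.urlopen(url) as resp:
        spec = json.load(resp)
    check_openapi_spec(spec)  # abort before any API call is made
    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(spec, f, indent=2)
    return spec
```

The point of the early check is that a malformed spec fails the suite with one clear error instead of producing dozens of confusing validation failures downstream.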
Making the API call and validating the response
Each test case makes a real HTTP call, asserts the status code, and then passes the response to the validator:
```robotframework
Post Notifications
    ${payload}=    Build Notification With    test-newsletter    ${user_id}
    ${response}=    Post On Session    api_session    /v2/notifications
    ...    json=${payload}    expected_status=201
    Validate Response Body Against Openapi Schema
    ...    ${response.json()}    /v2/notifications    post    201
```
The Validate Response Body Against Openapi Schema keyword, which lives in our shared library, does the heavy lifting. It takes the actual response, navigates the spec to find the expected schema for that endpoint, method, and status code, resolves any $ref references, and runs JSON Schema validation against the result.
If a field is the wrong type, or a required property is missing, or an enum value is invalid — the test fails immediately with a message that identifies the exact field and path that violated the contract.
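The keyword's internals aren't shown here, but the core idea — walk the spec to the right schema, inline local $ref pointers, then check the response against it — can be sketched with the standard library alone. This is a hypothetical reimplementation of a small subset of JSON Schema for illustration; the shared library uses jsonschema and jsonref for the real thing, and every name below is illustrative:

```python
from functools import reduce

# Minimal type map for the JSON Schema subset sketched here.
_TYPES = {"string": str, "integer": int, "number": (int, float),
          "boolean": bool, "object": dict, "array": list, "null": type(None)}


def _resolve(node, root):
    """Inline local $ref pointers like '#/components/schemas/Notification'.

    Assumes non-circular, document-local refs; a real resolver handles more.
    """
    if isinstance(node, dict):
        if "$ref" in node:
            parts = node["$ref"].lstrip("#/").split("/")
            return _resolve(reduce(lambda d, k: d[k], parts, root), root)
        return {k: _resolve(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [_resolve(v, root) for v in node]
    return node


def _check(body, schema, path="$"):
    """Collect violations with the exact field path that broke the contract."""
    errors = []
    expected = schema.get("type")
    # bool is a subclass of int in Python, so guard numeric types explicitly
    if expected in ("integer", "number") and isinstance(body, bool):
        errors.append(f"{path}: expected {expected}, got boolean")
    elif expected and not isinstance(body, _TYPES[expected]):
        errors.append(f"{path}: expected {expected}, got {type(body).__name__}")
    elif expected == "object":
        for req in schema.get("required", []):
            if req not in body:
                errors.append(f"{path}.{req}: required property missing")
        for key, sub in schema.get("properties", {}).items():
            if key in body:
                errors.extend(_check(body[key], sub, f"{path}.{key}"))
    elif expected == "array":
        for i, item in enumerate(body):
            errors.extend(_check(item, schema.get("items", {}), f"{path}[{i}]"))
    if "enum" in schema and body not in schema["enum"]:
        errors.append(f"{path}: {body!r} is not one of {schema['enum']}")
    return errors


def validate_response_body(spec, body, path, method, status):
    """Locate the schema for (path, method, status) and validate body against it."""
    resolved = _resolve(spec, spec)
    schema = (resolved["paths"][path][method]["responses"][str(status)]
              ["content"]["application/json"]["schema"])
    errors = _check(body, schema)
    if errors:
        raise AssertionError("; ".join(errors))
```

Given a spec whose 201 response requires `userId: string`, a body with `userId` as an integer fails with a message pointing at `$.userId`, which is exactly the feedback the engineer needs.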
What this catches: real structural violations. Wrong data types (an integer where a string is expected), missing required fields, unexpected null values on non-nullable properties, unknown enum values, malformed UUIDs or datetime strings. The kind of bugs that are trivial to cause and tedious to track down.
Keeping test data clean with the Builder pattern
We were also deliberate about keeping the tests themselves readable. Constructing payloads inline makes test cases noisy — you end up reading through dictionary construction to find the actual assertion. So we implemented a Fluent Builder pattern in Robot Framework:
```robotframework
# A single expressive line instead of inline dictionary construction
${payload}=    Build Notification With    test-newsletter    ${user_id}

# Preferences test data, just as clean
${update}=    Build Update Notification Preference With    ${False}    9:00    17:00
```
The builder lives in notification_test_builder.robot and uses a chain of With User / With Type / With Channel keywords. It’s consistent across all five services, which means any engineer picking up a test suite in any service immediately understands the structure.
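The Robot keywords are thin wrappers over this idea. Sketched in Python for illustration — the class name, field names like `userId` and `channel`, and the defaults are all hypothetical, since the real builder lives in notification_test_builder.robot:

```python
class NotificationBuilder:
    """Hypothetical fluent builder mirroring the With User / With Type /
    With Channel keyword chain.

    Each with_* step returns self, so calls chain into one readable line
    and build() hands back a plain payload dict.
    """

    def __init__(self):
        # Illustrative defaults; the real builder's fields may differ
        self._payload = {"type": None, "userId": None, "channel": "email"}

    def with_type(self, notification_type: str) -> "NotificationBuilder":
        self._payload["type"] = notification_type
        return self

    def with_user(self, user_id: str) -> "NotificationBuilder":
        self._payload["userId"] = user_id
        return self

    def with_channel(self, channel: str) -> "NotificationBuilder":
        self._payload["channel"] = channel
        return self

    def build(self) -> dict:
        return dict(self._payload)  # copy, so the builder can be reused


# One expressive chain instead of inline dictionary construction
payload = NotificationBuilder().with_type("test-newsletter").with_user("u-123").build()
```

The design choice that matters is returning `self` from every step: the test reads as a sentence, and adding a new optional field later means adding one keyword, not touching every test.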
The gate that makes it real
Writing the tests was only half the job. The other half was putting them in a position where they actually matter.
Contract tests that run alongside regression tests in QA have already lost most of their value. By the time something is in QA, engineers have moved on. The feedback cycle is too long. The fix is too expensive.
So we made the contract tests a mandatory gate in the Jenkins CD pipeline, between the Dev environment and QA:
- Build is deployed to Dev
- Contract tests run against the Dev deployment
- On a clean pass: build is promoted to QA
- On a failure: deployment is blocked, and the engineer who triggered it finds out immediately
This small structural decision changes the economics completely. Instead of a tester discovering a contract violation in QA three days after the fact, the engineer who introduced it finds out within minutes, while the change is still fresh in their mind, in the environment where fixing it is cheapest.
QA now only ever receives builds where every service’s API responses have been verified to structurally conform to their own specification. The floor has been raised.
Write it once, never touch it again
The most pleasant surprise from this system has been how little maintenance it needs.
Traditional contract tests require updates every time an endpoint evolves. Add an optional field? Update the test. Change a response structure? Update the test. Each evolution creates maintenance overhead, and there’s always the risk that the test doesn’t get updated and either silently passes invalid responses or fails for the wrong reason.
Because we validate against the live spec rather than a hardcoded expectation, the tests just work. An engineer adds a new field to the notification response? The contract test validates the new field’s type on the next run automatically. No one needs to touch the test file.
Across five microservices and dozens of endpoints, the cost of maintaining contract coverage is close to zero. The investment is front-loaded: write the test once per endpoint, and then leave it alone.
What we’d tell ourselves at the start
The spec is the real investment
The quality of your contract tests is a direct function of the quality of your OpenAPI specifications. Early on, we found endpoints where the spec was incomplete — fields that were actually returned by the service weren’t described in the spec, so the validator had nothing to check them against. Fixing this was the right move, and it had benefits beyond testing: better documentation, cleaner SDK generation, fewer surprises for consumers.
Centralise the validation logic ruthlessly
Putting Validate Response Body Against Openapi Schema in a shared library was the right call from day one. When we needed to handle edge cases — $ref resolution, the DELETE skip logic, format detection for JSON and YAML specs — we fixed it in one place and all five services got the fix immediately. A small discipline upfront that saves a lot of drift over time.
Pipeline placement is everything
A contract test in the wrong place in your pipeline is barely better than no contract test. The whole value is in catching violations early, at the moment they’re introduced, by the person who introduced them. Put the gate between Dev and QA. Not after QA. Not as a weekly batch run. At the deployment boundary.
API-first is the actual foundation
This is worth saying again plainly: everything described in this article only works because the API spec is written before the code. If your spec is auto-generated from the implementation, your contract tests are circular. The practice of API-first design is what gives the spec its authority, and what gives the tests their meaning.
Where we’re going next
The current system validates response bodies. The next step is validating request payloads too — ensuring that what we send in each test actually conforms to the request schema. This would catch tests that drift from the spec, not just services.
We’re also looking at surfacing contract test results in our Xray/Jira test management board so that contract coverage is visible alongside regression coverage per service. Right now the signal is in the Jenkins pipeline. Making it visible in the QA health dashboard would help stakeholders understand what “contract clean” actually means for a given build.
The bottom line
We didn’t set out to build something sophisticated. We set out to stop a specific, boring class of bugs from reaching QA. A data type that changed. A field that went missing. Things that shouldn’t need a human to catch.
The solution turned out to be straightforward: use the OpenAPI spec as the contract, fetch it live so there’s nothing to maintain, validate every response against it, and put the test in front of the deployment gate where it can actually stop the bugs. The API-first practice is what makes the spec worth trusting.
Five microservices, one shared library, one pipeline gate, and a class of bugs that now gets caught in Dev instead of QA. That feels like a good trade.
Tech stack

- Test framework: Robot Framework
- Custom library: pv_robot_framework (Python)
- Schema validation: openapi-schema-validator, jsonschema, jsonref
- Spec fetching: Playwright / Browser Library (headless Chromium)
- Pipeline: Jenkins CD, Dev → QA gate
- Services covered: notification · inventory · charger · vehicle · asset-controls
- OpenAPI version: 3.1
Written by Hari Prasath, QA Engineer at PowerVerse

