AI seems to be finding its way into almost every job. As an SDET or QA engineer, what if your test cases could be auto-generated by AI and automatically verified with Pytest?
I recently tried integrating OpenAI with Pytest to automate API test generation. Here’s what I learned.
Before diving in, here are the tools and resources you’ll need:
OpenAI Python Library
https://github.com/openai/openai-python
Pytest (Python testing framework)
https://github.com/pytest-dev/pytest
API Under Test — FakeStoreAPI (Cart endpoint)
https://fakestoreapi.com/docs#tag/Carts/operation/addCart
We’ll start by asking OpenAI to generate simple API test cases using structured prompts. Then we’ll use Pytest to run and validate those cases against the real API.
1. Install OpenAI and Pytest
$ pip install openai
$ pip install pytest
https://gist.github.com/shyinlim/76bc4b8a37df414cccf29ea227ef6ab4
2. Define the Test Case Generator Function
We’ll create a function to send a prompt to OpenAI, instructing it to behave like a Senior SDET / QA and return structured API test cases in JSON format.
https://gist.github.com/shyinlim/7190ab613a342ce77c94bc906a55e258
3. Let OpenAI Build and Verify Basic Test Cases
Before defining the test function, we need to prepare a prompt that describes the API’s method, endpoint, and a sample of the request and response structure. This prompt will guide OpenAI to generate the test cases.
https://gist.github.com/shyinlim/1f6fe4fa0cc99edcff9e82504164ebfd
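As an illustration (not the author’s exact gist), a prompt for FakeStoreAPI’s add-cart endpoint might be assembled like this; the sample request and response bodies are my paraphrase of the shapes shown in the FakeStoreAPI docs:

```python
# A structured description of the API under test, fed to the generator.
API_PROMPT = """
API under test:
Method: POST
Endpoint: https://fakestoreapi.com/carts

Sample request body:
{"userId": 1, "products": [{"productId": 2, "quantity": 1}]}

Sample success response (200):
{"id": 11, "userId": 1, "products": [{"productId": 2, "quantity": 1}]}

Generate positive and negative test cases for this endpoint.
"""
```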
Once the prompt is ready, we define a test function that sends this information to OpenAI, retrieves the generated test cases, and runs them using Pytest.
https://gist.github.com/shyinlim/58334767807f41f76d5fbf73b4ac1f60
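Again as a sketch rather than the author’s gist: once the cases come back as JSON, a parametrized Pytest function can replay each one against the live API. The case shape below is hypothetical, standing in for what the OpenAI call would return:

```python
import pytest
import requests

BASE_URL = "https://fakestoreapi.com"

# Hypothetical cases, standing in for the generator's output.
SAMPLE_CASES = [
    {
        "name": "add cart with valid payload",
        "method": "POST",
        "endpoint": "/carts",
        "body": {"userId": 1, "products": [{"productId": 2, "quantity": 1}]},
        "expected_status": 200,
    },
]

@pytest.mark.parametrize("case", SAMPLE_CASES, ids=lambda c: c["name"])
def test_generated_case(case):
    # Replay one generated case against the real endpoint.
    response = requests.request(
        method=case["method"],
        url=BASE_URL + case["endpoint"],
        json=case.get("body"),
        timeout=10,
    )
    assert response.status_code == case["expected_status"]
```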
Here’s a sample of the test cases OpenAI might return:
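The original screenshot of the output isn’t reproduced here, but an illustrative (hypothetical, not captured) response could look like:

```json
{
  "cases": [
    {
      "name": "add cart with valid payload",
      "method": "POST",
      "endpoint": "/carts",
      "body": {"userId": 1, "products": [{"productId": 2, "quantity": 1}]},
      "expected_status": 200
    },
    {
      "name": "add cart with empty body",
      "method": "POST",
      "endpoint": "/carts",
      "body": {},
      "expected_status": 400
    }
  ]
}
```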
AI can quickly generate basic test cases. However, edge cases, business-specific logic, and tricky validation rules often still require human insight.
Use AI as a starting point, then build on it with your domain knowledge.
https://gist.github.com/shyinlim/709a54c3b9491a8f2f0b6aa171145211
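To make that concrete, here’s a small hand-written example of my own (not from the gist): a business rule that a generic prompt is unlikely to surface, expressed as a pure validation helper plus a Pytest case. The rule itself is an assumption for illustration:

```python
def is_valid_cart(payload: dict) -> bool:
    """Hypothetical domain rule the API itself doesn't enforce:
    a cart needs a userId, and every product a positive quantity."""
    if "userId" not in payload:
        return False
    return all(p.get("quantity", 0) > 0 for p in payload.get("products", []))

def test_rejects_negative_quantity():
    # A human-written edge case to layer on top of the AI-generated ones.
    payload = {"userId": 1, "products": [{"productId": 2, "quantity": -5}]}
    assert not is_valid_cart(payload)
```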
This experiment shows a great way to boost testing efficiency by combining OpenAI and Pytest.
It’s not about replacing SDET / QA Engineers, but about helping us get started faster, cover more ground, and focus on the tricky stuff.
The key takeaway?
The magic isn’t just in the AI; it’s in the prompt.
Good prompts don’t just show up by magic; they come from your own mix of domain knowledge, testing experience, and iteration.
Here’s my GitHub repo showing a straightforward example:
https://github.com/shyinlim/openaiwithpytestsimpleversion
Test smart, not hard. And happy testing :)
Sh-Yin Lim (LinkedIn)



