AI seems to be finding its way into almost every job.
As an SDET / QA engineer, what if your test cases could be auto-generated by AI and automatically verified with Pytest?
So I recently tried integrating OpenAI with Pytest to automate API test generation. Here's what I learned.
Before diving in, here's what you'll need: Python, an OpenAI API key, and the two packages installed below.
We'll start by asking OpenAI to generate simple API test cases from structured prompts, then use Pytest to run and validate those cases against the real API.
1. Install OpenAI and Pytest

```bash
$ pip install openai
$ pip install pytest
```
https://gist.github.com/taurus5650/76bc4b8a37df414cccf29ea227ef6ab4?embedable=true
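For reference, a minimal sketch of the setup might look like this, assuming the v1 `openai` Python SDK and an API key exported as `OPENAI_API_KEY` (the embedded gist has the actual code):

```python
import os
from openai import OpenAI

# Read the key from the environment so it never lands in source control.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```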
2. Define the Test Case Generator Function
We’ll create a function to send a prompt to OpenAI, instructing it to behave like a Senior SDET / QA and return structured API test cases in JSON format.
https://gist.github.com/taurus5650/7190ab613a342ce77c94bc906a55e258?embedable=true
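The gist above has the full implementation; a rough sketch of such a generator might look like this. The function name `generate_test_cases`, the `gpt-4o-mini` model, and the JSON shape are my assumptions, and `client` is the OpenAI client created earlier:

```python
import json


def generate_test_cases(prompt: str) -> list[dict]:
    """Ask OpenAI to act as a Senior SDET and return structured test cases."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whichever you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a Senior SDET / QA. Return API test cases as a "
                    "JSON object with a 'test_cases' list."
                ),
            },
            {"role": "user", "content": prompt},
        ],
        response_format={"type": "json_object"},  # nudge the model toward valid JSON
    )
    return json.loads(response.choices[0].message.content)["test_cases"]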
3. Let OpenAI Build and Verify Basic Test Cases
Before defining the test function, we need to prepare a prompt that describes the API’s method, endpoint, and a sample of the request and response structure. This prompt will guide OpenAI to generate the test cases.
https://gist.github.com/taurus5650/1f6fe4fa0cc99edcff9e82504164ebfd?embedable=true
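As a rough illustration (the endpoint and payloads here are hypothetical placeholders, not the gist's actual API), the prompt could be as simple as:

```python
API_PROMPT = """
Generate test cases for the following API.

Method: POST
Endpoint: /api/login
Sample request: {"username": "alice", "password": "secret123"}
Sample response: {"status": "success", "token": "<jwt>"}

For each test case, return: name, request_body, expected_status.
"""
```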
Once the prompt is ready, we define a test function that sends this information to OpenAI, retrieves the generated test cases, and runs them using Pytest.
https://gist.github.com/taurus5650/58334767807f41f76d5fbf73b4ac1f60?embedable=true
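A minimal sketch of that flow, assuming the `generate_test_cases` helper and `API_PROMPT` from the sketches above, plus the `requests` package and a hypothetical service at `localhost:8000`:

```python
import pytest
import requests  # assumed extra dependency for calling the API under test

BASE_URL = "http://localhost:8000"  # hypothetical service under test


@pytest.fixture(scope="module")
def ai_test_cases():
    # One OpenAI call per module keeps the suite fast and cheap.
    return generate_test_cases(API_PROMPT)


def test_login_api(ai_test_cases):
    for case in ai_test_cases:
        response = requests.post(f"{BASE_URL}/api/login", json=case["request_body"])
        assert response.status_code == case["expected_status"], case["name"]
```

Looping inside one test keeps things simple; `pytest.mark.parametrize` would give per-case reporting, but generating cases at collection time means an OpenAI call on every collection run.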
Okay, here's an illustrative sample of what OpenAI might return (the exact output will vary by model and prompt):
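```json
{
  "test_cases": [
    {
      "name": "valid credentials return success",
      "request_body": {"username": "alice", "password": "secret123"},
      "expected_status": 200
    },
    {
      "name": "wrong password is rejected",
      "request_body": {"username": "alice", "password": "wrong"},
      "expected_status": 401
    }
  ]
}
```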
AI can quickly generate basic test cases. However, edge cases, business-specific logic, and tricky validation rules often still require human insight.
Use AI as a starting point, then build on it with your domain knowledge.
https://gist.github.com/taurus5650/709a54c3b9491a8f2f0b6aa171145211?embedable=true
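For example, a hand-written edge case layered on top might look like this (a hypothetical test, not from the gist, reusing the `requests` setup from earlier):

```python
def test_login_rejects_sql_injection():
    """A business-specific edge case the model may not cover on its own."""
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"username": "alice' OR '1'='1", "password": "irrelevant"},
    )
    # Assumed contract: malicious input is rejected, never authenticated.
    assert response.status_code in (400, 401)
```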
This experiment shows how combining OpenAI and Pytest can boost testing efficiency.
It’s not about replacing SDET / QA Engineers, but about helping us get started faster, cover more ground, and focus on the tricky stuff.
The key takeaway?
**The magic isn't just in the AI, it's in the prompt.**
Good prompts don't just show up by magic; they come from your own mix of domain knowledge and hands-on testing experience.
Here's my GitHub repo with a straightforward example:
https://github.com/shyinlim/open_ai_with_pytest_simple_version
Test smart, not hard. Happy testing :)



