
Let AI Write Your Pytest Suite: A Practical Guide for SDETs

AI seems to help almost every job in some way.

As an SDET or QA engineer, what if your test cases could be auto-generated by AI and automatically verified using Pytest?

So I recently tried integrating OpenAI with Pytest to automate API test generation. Here’s what I learned.

What You Need to Know and Prepare

Before diving in, here are some tools and resources you’ll need:

  • OpenAI Python Library: https://github.com/openai/openai-python
  • Pytest (Python testing framework): https://github.com/pytest-dev/pytest
  • API Under Test: FakeStoreAPI (Cart endpoint): https://fakestoreapi.com/docs#tag/Carts/operation/addCart

Setup Overview

Let OpenAI Build and Verify Basic Test Cases

We’ll start by asking OpenAI to generate simple API test cases using structured prompts. Then, we use Pytest to run and validate those cases against the real API.


  1. Install and Import Required Packages

$ pip install openai
$ pip install pytest

https://gist.github.com/taurus5650/76bc4b8a37df414cccf29ea227ef6ab4
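The gist above holds the author’s exact setup; as a rough sketch, the imports and client bootstrap might look like the following. The guarded import and the `make_client` helper are my own illustrative additions, not code from the gist.

```python
import json
import os

import pytest

# Guard the import so the file still loads on machines where the
# OpenAI SDK has not been installed yet.
try:
    from openai import OpenAI
except ImportError:
    OpenAI = None


def make_client():
    """Build an OpenAI client, failing loudly if the SDK or key is missing."""
    if OpenAI is None:
        raise RuntimeError("Run `pip install openai` first.")
    # The SDK also reads OPENAI_API_KEY from the environment by default;
    # passing it explicitly just makes the dependency visible.
    return OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```

Keeping client creation behind a helper means Pytest can still collect the test file even when no API key is configured.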

2. Define the Test Case Generator Function

We’ll create a function to send a prompt to OpenAI, instructing it to behave like a Senior SDET / QA and return structured API test cases in JSON format.

https://gist.github.com/taurus5650/7190ab613a342ce77c94bc906a55e258
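A hedged sketch of what such a generator can look like, under a few assumptions of mine: the model name, the exact persona wording, and the JSON case schema (name / method / endpoint / payload / expected_status) are illustrative, not the gist’s exact code.

```python
import json

# Persona + output-format instructions for the model (assumed wording).
SYSTEM_PROMPT = (
    "You are a Senior SDET. Given an API description, return ONLY a JSON "
    "array of test cases, each with the keys: name, method, endpoint, "
    "payload, expected_status."
)


def parse_test_cases(raw: str) -> list:
    """Parse the model's reply, tolerating a ```json fenced wrapper."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (``` or ```json) and the closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(text)


def generate_test_cases(prompt: str, model: str = "gpt-4o-mini") -> list:
    """Ask OpenAI for structured test cases; returns a list of dicts."""
    from openai import OpenAI  # imported lazily; needs OPENAI_API_KEY set

    client = OpenAI()
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": prompt},
        ],
    )
    return parse_test_cases(reply.choices[0].message.content)
```

Models often wrap JSON in a Markdown fence even when told not to, so a tolerant parser saves a surprising number of flaky failures.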

3. Let OpenAI Build and Verify Basic Test Cases

Before defining the test function, we need to prepare a prompt that describes the API’s method, endpoint, and a sample of the request and response structure. This prompt will guide OpenAI to generate the test cases.

https://gist.github.com/taurus5650/1f6fe4fa0cc99edcff9e82504164ebfd
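As an illustration, a prompt for FakeStoreAPI’s add-cart endpoint could be shaped like this; the sample request and response bodies follow the public docs linked earlier, and the exact wording is my own, not the gist’s.

```python
# Assumed prompt shape: method + endpoint + sample payloads guide the
# model toward concrete, runnable test cases.
CART_PROMPT = """
API under test:
  Method: POST
  Endpoint: https://fakestoreapi.com/carts
  Sample request body:
    {"userId": 1, "products": [{"id": 1, "quantity": 2}]}
  Sample success response:
    {"id": 11, "userId": 1, "products": [{"id": 1, "quantity": 2}]}

Generate positive test cases covering a single product, multiple products,
and minimum valid values. Return ONLY a JSON array of test cases.
"""
```

The more concrete the sample request/response, the less the model has to guess about field names and types.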

Once the prompt is ready, we define a test function that sends this information to OpenAI, retrieves the generated test cases, and runs them using Pytest.

https://gist.github.com/taurus5650/58334767807f41f76d5fbf73b4ac1f60
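One way to wire this up is a single parametrized Pytest test that replays each generated case against the live API. This is a sketch under my own assumptions: the case schema matches the prompt above, `requests` is used for the HTTP calls, and the cases are hard-coded here (in the real flow they would come from the generator) so the example is self-contained.

```python
import pytest
import requests

# Stand-in for generator output; in the real flow these dicts come back
# from the OpenAI call instead of being hard-coded.
GENERATED_CASES = [
    {
        "name": "create_cart_with_valid_single_product",
        "method": "POST",
        "endpoint": "https://fakestoreapi.com/carts",
        "payload": {"userId": 1, "products": [{"id": 1, "quantity": 1}]},
        # FakeStoreAPI fakes the write; adjust the expected code if the
        # service's behavior differs.
        "expected_status": 200,
    },
]


@pytest.mark.parametrize("case", GENERATED_CASES, ids=lambda c: c["name"])
def test_generated_case(case):
    # Replay the generated case against the live endpoint.
    response = requests.request(
        case["method"], case["endpoint"], json=case["payload"], timeout=10
    )
    assert response.status_code == case["expected_status"]
```

Using the case name as the Pytest ID makes failures readable: the report shows `test_generated_case[create_cart_with_valid_single_product]` instead of an opaque index.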

Okay, here are some sample test case names that OpenAI might return:

  • create_cart_with_valid_single_product
  • create_cart_with_multiple_products
  • create_cart_with_minimum_valid_values

Expand with Human-Designed Cases

AI can quickly generate basic test cases. However, edge cases, business-specific logic, and tricky validation rules often still require human insight.

Use AI as a starting point, then build on it with your domain knowledge.

https://gist.github.com/taurus5650/709a54c3b9491a8f2f0b6aa171145211
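For instance, human-designed negative cases might probe rules the model is unlikely to infer from the spec alone. This sketch is illustrative rather than the gist’s code; the endpoint comes from the article, while the payloads and accepted status codes are my assumptions.

```python
import pytest
import requests

CARTS_URL = "https://fakestoreapi.com/carts"  # endpoint under test


@pytest.mark.parametrize(
    "payload",
    [
        {"userId": 1, "products": []},                          # empty cart
        {"userId": 1, "products": [{"id": 1, "quantity": 0}]},  # zero quantity
        {"userId": 1, "products": [{"id": 1, "quantity": -5}]}, # negative quantity
    ],
)
def test_cart_handles_invalid_products(payload):
    response = requests.post(CARTS_URL, json=payload, timeout=10)
    # Assumption: a production cart service would reject these with a 4xx.
    # FakeStoreAPI is permissive, so treat this as a template and tighten
    # the assertion for your real API.
    assert response.status_code in (200, 400, 422)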

Knowing When to Let AI Handle It

AI-generated test functions work best when:

  • API specs are clear and well-defined, including methods, parameters, and expected responses.
  • Field validations are needed (data types, required fields, ranges).
  • Request / response flows are standard, without complex business logic.
  • You need to quickly bootstrap test coverage, especially for repetitive or CRUD endpoints.

You’ll still want to write tests manually when:

  • Business logic is complex or conditional, which AI may not fully understand.
  • Tests are heavily DB-dependent; they often require manual setup and domain knowledge to ensure correctness.
  • Mocking or stubbing other services is needed, especially async or dependent ones.
  • Test results depend on side effects beyond response values, like logs or DB updates.
  • Security testing such as auth, permission, or injection checks is involved.

Final Thoughts

This experiment shows a great way to boost testing efficiency by combining OpenAI and Pytest.

It’s not about replacing SDET / QA Engineers, but about helping us get started faster, cover more ground, and focus on the tricky stuff.

The key takeaway?

The magic isn’t just in the AI; it’s in the prompt.

Good prompts don’t just show up by magic; they come from your mix of:

  • Testing experience
  • Testing techniques
  • Testing mindset
  • Communication skills
  • Product understanding

Here’s my GitHub repo showing a straightforward example: https://github.com/shyinlim/open_ai_with_pytest_simple_version

Test smart, not hard. And happy testing :)
