Developers often ask LLMs like OpenAI Codex to write tests for their code - and they do. The tests compile, run, and even pass, giving a sense of confidence. But how much can we really trust those tests? If an LLM only sees the source code and a few comments, does it truly understand what the software is supposed to do?

The Limits of LLM-Generated Unit Tests

2025/10/24 23:46

The OpenAI Codex documentation includes a simple example prompt:

It sounds effortless - just ask Codex to write tests, and it will. And in most cases, it does: the tests compile, run, and even pass. Everyone seems satisfied.

But this raises a crucial question: are those tests actually good?

Let’s take a step back and think: why do we write tests? We use tests to check our code against the requirements. When we simply ask an LLM to write tests, are we sure the LLM knows all those requirements?

If no additional context is provided, all the LLM has is the code and, at best, inline documentation and comments. But is that enough? Let's check with several examples. To illustrate, let's start with a simple specification.

Requirements

Imagine that we have the following requirements:

  • We need to implement a new Product Service in the service layer of our application.
  • The service should have a method to retrieve the product price by product ID.
  • If the product ID is empty, an exception should be thrown with code 0.
  • The method should retrieve the product by ID from the database (using the product repository service).
  • If the product is not found, another exception should be thrown with code 1.
  • The product price should be returned.
  • The Product entity also has: ID, name, price, and cost price.

We will use PHP as an example, but the conclusions of this article are applicable to all languages.

Baseline Implementation

The following classes make up our starting point:

final class ProductService
{
    public function __construct(private ProductRepository $repository)
    {
    }

    /**
     * Returns the product price or throws on error.
     *
     * @throws EmptyProductIdException When product ID is empty (code 0)
     * @throws ProductNotFoundException When product is not found (code 1)
     */
    public function getProductPrice(string $productId): float
    {
        $productId = trim($productId);

        if ($productId === '') {
            throw new EmptyProductIdException();
        }

        $product = $this->repository->findById($productId);

        if ($product === null) {
            throw new ProductNotFoundException($productId);
        }

        return $product->getPrice();
    }
}

Notice that the getProductPrice method is documented with a straightforward docblock describing its return value and expected exceptions.

The following supporting classes are not central to the article but are included for completeness. Feel free to skip them if you’re focusing on the main idea.

final class Product
{
    public function __construct(
        private string $id,
        private string $name,
        private float $price,
        private float $costPrice
    ) {
    }

    public function getId(): string
    {
        return $this->id;
    }

    public function getName(): string
    {
        return $this->name;
    }

    public function getPrice(): float
    {
        return $this->price;
    }

    public function getCostPrice(): float
    {
        return $this->costPrice;
    }
}

final class ProductNotFoundException extends RuntimeException
{
    public function __construct(string $productId, ?\Throwable $previous = null)
    {
        parent::__construct("Product not found: {$productId}", 1, $previous);
    }
}

final class EmptyProductIdException extends InvalidArgumentException
{
    public function __construct(string $message = 'Product ID is empty', ?\Throwable $previous = null)
    {
        parent::__construct($message, 0, $previous);
    }
}
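The ProductRepository dependency is referenced but not listed in the article. For completeness, a minimal interface consistent with how ProductService uses it might look like this (the exact definition in the original project may differ):

interface ProductRepository
{
    /**
     * Returns the product with the given ID, or null if no such product exists.
     */
    public function findById(string $productId): ?Product;
}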

Baseline Tests

Let's use OpenAI Codex to generate some tests for our project. I used Codex v0.25 with the GPT-5 high model.

For this baseline, I deliberately took the naive approach and asked:

▌Write unit tests for all public methods of the ProductService

Codex generated reasonable tests: two happy-path cases and two error cases. It even inferred from the docblock that exceptions should carry specific error codes.

final class ProductServiceTest extends TestCase
{
    public function testGetProductPriceReturnsPriceForExistingProduct(): void
    {
        $repo = $this->createMock(ProductRepository::class);
        $product = new Product('p1', 'Product 1', 19.99, 10.0);

        $repo->expects($this->once())
            ->method('findById')
            ->with('p1')
            ->willReturn($product);

        $service = new ProductService($repo);

        $price = $service->getProductPrice('p1');

        $this->assertEqualsWithDelta(19.99, $price, 0.0001);
    }

    public function testGetProductPriceTrimsIdAndReturnsPrice(): void
    {
        $repo = $this->createMock(ProductRepository::class);
        $product = new Product('ABC', 'Trimmed Product', 5.5, 2.0);

        $repo->expects($this->once())
            ->method('findById')
            ->with('ABC') // ensure the ID is trimmed before lookup
            ->willReturn($product);

        $service = new ProductService($repo);

        $price = $service->getProductPrice(' ABC ');

        $this->assertEqualsWithDelta(5.5, $price, 0.0001);
    }

    public function testGetProductPriceThrowsOnEmptyId(): void
    {
        $repo = $this->createMock(ProductRepository::class);
        $service = new ProductService($repo);

        $this->expectException(EmptyProductIdException::class);
        $this->expectExceptionCode(0);
        $this->expectExceptionMessage('Product ID is empty');

        $service->getProductPrice(' '); // becomes empty after trim
    }

    public function testGetProductPriceThrowsWhenProductNotFound(): void
    {
        $repo = $this->createMock(ProductRepository::class);

        $repo->expects($this->once())
            ->method('findById')
            ->with('missing')
            ->willReturn(null);

        $service = new ProductService($repo);

        $this->expectException(ProductNotFoundException::class);
        $this->expectExceptionCode(1);
        $this->expectExceptionMessage('Product not found: missing');

        $service->getProductPrice('missing');
    }
}

At first glance, these tests look solid. But that’s also the limitation: they only reflect what’s written in the code and comments. Do they really guard against deeper mistakes?

Mutations

To explore this, I deliberately mutated the ProductService code and asked Codex to generate tests again. Each time, I deleted the existing tests, restarted Codex to clear the context, and used the same prompt: “Write unit tests for all public methods of the ProductService.”

Case 1 -- Flipped Empty Condition

In this mutation, I inverted the condition that checks whether the product ID is empty:

- if ($productId === '') {
+ if ($productId !== '') {

Codex wrote unit tests equivalent to the previous ones, with only minor differences in line order, product names, and similar details. The tests were correct: Codex identified the error in the code, wrote appropriate tests, and even proposed a fix for the method:

Why Tests Fail
- In src/Service/ProductService.php:
  - The condition is inverted:
    - Current: `if ($productId !== '') { throw new EmptyProductIdException(); }`
    - Intended: throw when empty/whitespace after trim.
  - This causes:
    - Non-empty IDs to wrongly throw EmptyProductIdException.
    - Empty/whitespace IDs to continue and hit the repository call unexpectedly.

Suggested Fix (not applied)
- In src/Service/ProductService.php, change the condition to:
  - if ($productId === '') { throw new EmptyProductIdException(); }

This suggests that Codex is able to handle at least obvious logical inversions.

Case 2 -- Flipped Condition During Product Search

In this mutation, I inverted the condition that checks whether the product exists in the repository:

- if ($product === null) {
+ if ($product !== null) {

The outcome was the same as in Case 1: Codex produced correct tests and proposed a fix for the bug.

Case 3 -- Return Cost Price Instead of Price

In this mutation, I changed the method to return the product’s cost price instead of its price:

- return $product->getPrice();
+ return $product->getCostPrice();

Codex again generated four tests, but it did not realize that the method must return the product price rather than the cost price - even though this is stated in the docblock. The generated suite confirmed the mutated behavior instead of the requirement.

...

public function testGetProductPriceReturnsCostPriceWhenFound(): void
{
    $repo = $this->createMock(ProductRepository::class);
    $product = new Product('p1', 'Product 1', /* $price */ 19.99, /* $costPrice */ 7.49);

    $repo->expects($this->once())
        ->method('findById')
        ->with('p1')
        ->willReturn($product);

    $service = new ProductService($repo);

    $price = $service->getProductPrice('p1');

    $this->assertSame(7.49, $price);
}

...

Unlike the earlier mutations, this one slipped through: Codex followed the code and accepted the cost price as if it were correct.
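For contrast, a test derived from the requirements rather than from the code would pin the expected value to the product price and therefore fail on this mutation. A possible sketch (the test name and values are illustrative, not taken from the generated suite):

public function testGetProductPriceReturnsSellingPriceNotCostPrice(): void
{
    $repo = $this->createMock(ProductRepository::class);

    // Price and cost price deliberately differ, so a mix-up cannot go unnoticed.
    $product = new Product('p1', 'Product 1', /* $price */ 19.99, /* $costPrice */ 7.49);
    $repo->method('findById')->with('p1')->willReturn($product);

    $service = new ProductService($repo);

    // The requirement says the method returns the product price, not the cost price.
    $this->assertSame(19.99, $service->getProductPrice('p1'));
}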

Single Coding Session Run

Before drawing conclusions, let’s run another experiment. This time, I asked Codex to write the code itself first, so it had more context before generating tests. All the following prompts were made within a single Codex session:

▌Create a PHP file containing a PHP class. The class is called ProductService. It has a method called getProductPrice. It receives a product ID as a string and checks that it is not empty. If it is empty, throw an exception with error code 0 (empty). It then retrieves the product by ID from the database using another service, ProductRepository. If the repository returns null, throw an exception with error code 1 (not found). Then get the product price. (Create a simple Product class with only ID, name, price, and costPrice.) All classes should be in their own files. File structure: an src folder containing Domain, Exception, Repository, and Service directories.

Codex produced a similar ProductService, though it didn’t add a docblock for the getProductPrice method. This made it even more interesting to see how it would handle mutations:

...

public function getProductPrice(string $productId): float
{
    if (trim($productId) === '') {
        throw ProductException::emptyId();
    }

    $product = $this->productRepository->findById($productId);

    if ($product === null) {
        throw ProductException::notFound($productId);
    }

    return $product->getPrice();
}

...

First, I asked Codex to write tests. No surprises here: four correct unit tests were written, including checks for exception error codes in the negative cases.

Then I mutated the service in the same way as before. The only difference was that I slightly modified the prompt to make Codex understand there were no tests anymore:

▌Check whether tests for all public methods of ProductService still exist, and write them if they are missing

Codex successfully handled the inverted conditions: bugs were fixed automatically, and correct tests were generated.

And the most interesting part: the same happened when I replaced getPrice with getCostPrice:

Fixes Made
- Restored missing test file tests/Service/ProductServiceTest.php.
- Corrected ProductService::getProductPrice to return $product->getPrice().

So, as expected, even without additional context from a docblock, Codex was able to generate correct tests and repair the code, relying on the initial requirements given at the start of the session.

Conclusion

These experiments show that the naive approach to writing tests with an LLM does not deliver the expected results. Yes, tests will be generated -- but they will simply mirror the current code, even if that code contains bugs. An LLM can identify obvious logic errors, but when the code involves complex business rules or formulas, the generated tests will not meet the goals of unit testing.

Here are a few practical lessons:

  • Provide more context. Add inline comments and documentation blocks before generating tests (see the docblock sketch after this list). This may help, but it still cannot guarantee correct unit tests or meaningful bug detection.
  • Write code and tests in the same session. If the LLM writes the code and the tests together, it has a better chance of enforcing the original requirements, as the single-session run demonstrated.
  • Review everything. Unit tests from an LLM should never be committed blindly -- they require the same careful review as hand-written tests.
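
As an illustration of the first point, the docblock (or the prompt itself) can spell out the business rule instead of only the exception contract. A possible sketch of such a docblock (the wording is mine, not from the original project):

/**
 * Returns the selling price of the product (Product::getPrice()), never the cost price.
 *
 * Business rules:
 *  - The product ID is trimmed; an empty ID throws EmptyProductIdException (code 0).
 *  - An unknown ID throws ProductNotFoundException (code 1).
 *
 * @throws EmptyProductIdException
 * @throws ProductNotFoundException
 */
public function getProductPrice(string $productId): float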

LLMs can certainly help with testing, but without clear requirements and human review, they will only certify the code you already have -- not the behavior you actually need.

Disclaimer: Although I'm currently working as a Lead Backend Engineer at Bumble, the content in this article does not refer to my work or experience at Bumble.
