
Researcher uncovers a critical SSRF vulnerability in ChatGPT’s Custom GPT

2025/11/13 18:42

OpenAI has fixed a security flaw in ChatGPT that a researcher found earlier this week within the “Actions” feature of Custom GPTs. Attackers could have exploited a Server-Side Request Forgery (SSRF) bug to expose internal credentials in the model’s cloud environment, the researcher claimed.

The researcher, an Open Security engineer and bug hunter who goes by SirLeeroyJenkins, was creating his first Custom GPT when he “sensed” an SSRF vulnerability. The Actions feature lets users define external APIs with OpenAPI schemas so the AI can call them for specific tasks, such as fetching weather data.

While testing his own API, SirLeeroyJenkins discovered the system returned data from a user-provided URL. Alarmed by this behavior, he conducted more tests, suspecting a potential SSRF issue.

“Once I realized this feature could return data from any user-provided URL, the hacker instinct kicked in,” he said. “I had to check for SSRF.”

SSRF vulnerability could make Custom GPTs unsafe

As explained by Jenkins in his Medium post published earlier this week, Server-Side Request Forgery is a web vulnerability that tricks applications into making requests to unintended destinations. If the application does not properly validate user-supplied URLs, attackers can use the server’s access privileges to reach internal networks or cloud metadata services.
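To make the pattern concrete, here is a minimal sketch of an SSRF-prone “fetch this URL” feature, written for illustration only; it is hypothetical code, not OpenAI’s implementation. The server fetches whatever URL the user supplies, so the request runs with the server’s own network access. A second function shows a first, still-incomplete mitigation: refusing link-local and loopback hosts.

```python
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch_for_user(url: str) -> bytes:
    # VULNERABLE: no check that the host is external -- an attacker can pass
    # http://169.254.169.254/... or an internal hostname and read the response.
    with urlopen(url, timeout=5) as resp:
        return resp.read()

def fetch_for_user_safer(url: str) -> bytes:
    # A first (still incomplete) mitigation: refuse link-local/loopback hosts.
    # Real defenses also need to resolve DNS, re-check after redirects, etc.
    host = urlparse(url).hostname or ""
    if host.startswith("169.254.") or host in ("localhost", "127.0.0.1"):
        raise ValueError("internal addresses are not allowed")
    with urlopen(url, timeout=5) as resp:
        return resp.read()
```

Note that the “safer” variant still validates only the original URL, a gap the researcher’s redirect trick exploits later in this story.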

Basic full-read SSRF chart. Source: SirLeeroyJenkins Medium blog.

SSRF was prevalent enough to make the OWASP Top 10 list in 2021 and has now expanded its potential damage because insecure default configurations in cloud environments can expose critical systems.

Jenkins explained that there are two main SSRF types: full-read and blind. Full-read SSRF returns data from the target service directly to the attacker. Blind SSRF, by contrast, does not reveal the response, but still lets the attacker interact with internal services, for example through timing-based port scanning.
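The signal a blind SSRF gives an attacker can be sketched with a plain socket probe, run here directly rather than through a vulnerable server; this is an illustration of the timing technique, not code from the writeup. Even without seeing a response body, fast refusal versus timeout distinguishes closed from open or filtered ports.

```python
import socket

def port_seems_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # A fast refusal usually means "closed"; hitting the timeout often
        # means "filtered" -- both are useful signals in a blind scan.
        return False
```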

He tested the vulnerability by pointing the Action’s API URL at Azure’s Instance Metadata Service (IMDS), which exposes sensitive cloud credentials. Access to this service normally requires a “Metadata: true” request header, and his initial attempts failed because the feature gave him no way to set that header.
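The request shape involved is public Azure behavior, not code from the writeup: IMDS only answers requests that carry the “Metadata: true” header, which is exactly what a plain SSRF usually cannot add. A sketch of what a legitimate call looks like:

```python
from urllib.request import Request

# Documented Azure IMDS endpoint (link-local address, plain HTTP).
IMDS_URL = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

# IMDS rejects requests missing this header as a defense against naive SSRF.
req = Request(IMDS_URL, headers={"Metadata": "true"})
# On a real Azure VM, urlopen(req) would return instance metadata;
# the same URL without the header is refused.
```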

The Custom GPT feature initially blocked the exploit because it enforced HTTPS URLs, while Azure IMDS operates over plain HTTP. By serving a 302 redirect from an external HTTPS endpoint to the internal metadata URL, he got the server to follow the redirect into the metadata service. Azure still refused the request, however, because the required header was missing.
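The redirect trick can be sketched as an attacker-controlled endpoint that answers any request with a 302 pointing at the internal metadata URL; this is a hypothetical illustration (plain HTTP here for brevity, where the real bypass served the redirect over HTTPS). A fetcher that validates only the original URL and then follows redirects will walk straight into the internal network.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

INTERNAL_TARGET = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

class RedirectHandler(BaseHTTPRequestHandler):
    """Answer every GET with a 302 into the internal metadata service."""

    def do_GET(self):
        # A redirect-following fetcher that validated only the *original*
        # URL will happily follow this Location header internally.
        self.send_response(302)
        self.send_header("Location", INTERNAL_TARGET)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep output quiet

# To run: HTTPServer(("0.0.0.0", 443-capable port), RedirectHandler).serve_forever()
```

The corresponding defense is to re-validate every hop of a redirect chain, not just the first URL.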

“Since the server followed 302 redirects, it returned the response from their internal metadata URL. Mission accomplished, right? Wrong. The response from their metadata service indicated that a required header was not being set,” SirLeeroyJenkins wrote.

Probing further, he found that the feature allowed custom API keys with arbitrary names. By naming a key “Metadata” with the value “true”, he injected the required header and gave the GPT access to the metadata service.
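Why arbitrary header names in “API key” configuration are dangerous can be shown in a few lines; the config shape below is hypothetical, not OpenAI’s actual schema. If the user-chosen key name becomes a request header verbatim, naming the key “Metadata” with value “true” smuggles in exactly the header IMDS requires. An allow-list on the name closes the hole.

```python
def build_outgoing_headers(base: dict, key_name: str, key_value: str) -> dict:
    # VULNERABLE: the user-chosen "API key" name becomes a header verbatim.
    headers = dict(base)
    headers[key_name] = key_value
    return headers

attacker_headers = build_outgoing_headers(
    {"User-Agent": "action-fetcher"}, "Metadata", "true"
)

# One fix: only permit conventional auth header names.
ALLOWED_AUTH_HEADERS = {"Authorization", "X-Api-Key"}

def build_safe_headers(base: dict, key_name: str, key_value: str) -> dict:
    if key_name not in ALLOWED_AUTH_HEADERS:
        raise ValueError(f"disallowed header name: {key_name}")
    headers = dict(base)
    headers[key_name] = key_value
    return headers
```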

Jenkins promptly reported the vulnerability to OpenAI’s Bugcrowd program, and the issue was assigned high severity and then patched.

He also mentioned that Open Security had previously used this type of SSRF attack chain while auditing a vulnerable invoice-generation feature at a major global financial firm.

OpenAI releases GPT-5.1 after the version 5.0 turmoil

In other ChatGPT news, OpenAI announced the launch of GPT-5.1, touting several improvements over version 5.0 in instruction following and adaptive reasoning.

“GPT-5.1 is out! It’s a nice upgrade. I particularly like the improvements in instruction following, and the adaptive thinking. The intelligence and style improvements are good too,” wrote CEO Sam Altman on X late Wednesday.

Tech writer Mehul Gupta tested GPT-5.1 against its predecessor, noting that GPT-5, while polished and helpful, sometimes overcomplicates simple tasks. GPT-5.1’s Instant variant reportedly showed better comprehension and subtle adaptive pauses that produced more “context-aware” responses.

In one test, Gupta asked both models to reply in six words. GPT-5 attempted to overexplain, while GPT-5.1 delivered a concise and correct answer. 

Altman also announced that seven new tone presets, including Default, Friendly, Efficient, Professional, Candid, and Quirky, have been added, though users can also choose to “tune it themselves.”


Source: https://www.cryptopolitan.com/chatgpt-hacked-exploiting-ssrf-vulnerability/

