Azure OpenAI Exploitation & Security Failure: Investigative Dossier

Reporter: Gabriel Majorsky

Email: gabz@songdrop.band

Subscription ID: bfd2dc10-f4ea-4cc0-95c8-6ab6d43d26cf

Azure Ticket: billing - TrackingID#2508120050002143

Executive Summary

This dossier documents a catastrophic failure in Microsoft Azure OpenAI’s security and operational safeguards. My account suffered unauthorized exploitation consuming nearly $25,000 in sponsorship credits over a matter of days. Microsoft internally detected anomalous activity yet failed to intervene, leaving the account exposed to full financial depletion. Following this, the account was forcibly converted to Pay-As-You-Go without consent, amplifying risk. Multiple requests for logs, remediation, and transparency have been ignored or dismissed.

This represents a systemic failure affecting all Azure OpenAI customers, exposing them to financial and security risks. This dossier includes full timelines, communications, and a critique of Azure’s documentation, monitoring, and developer guidance.

Detailed Timeline

May 25, 2025

Sponsorship credits of $25,000 granted for Azure OpenAI account.

June 1–Mid June 2025

Initial GPT-Image-1 access requested. Provisioning and setup completed after approximately 1–2 weeks. User begins generating images and monitoring usage.

"Congratulations! After reviewing your application, we are pleased to inform you that you have been onboarded to the Azure OpenAI Service o3 and gpt-image-1 models." – Microsoft Email
"I received my Azure account and started setting up GPT-Image-1. It took about two weeks to deploy and understand how usage and billing worked." – Gabriel Majorsky

Late June 2025

User requests technical details on GPT-Image-1 performance and hardware to optimize usage.

"One thing stood out, that even this is a private gpt-image-1, it is super slow. It takes 50-60 seconds to generate any image. May I ask the hardware configuration which runs the model?" – Gabriel Majorsky
"Global Standard models tend to have higher response time as they use shared server nodes which retaliate to global pools. Therefore 60s is expected." – Microsoft Support

July – Aug 5, 2025

Microsoft sends multiple emails regarding Code of Conduct violations and potential resource abuse. These alerts confirm internal detection of abnormal usage and outline possible throttling or suspension actions.

"[Action Required] Your Azure OpenAI resources have been observed in violation of the Microsoft Generative AI Services Code of Conduct

We have important information regarding your Azure OpenAI resource(s). Internal monitoring has detected that at least one of your Azure OpenAI resources may be in violation of the Microsoft Acceptable Use Policy. Continued violation of the policy may result in throttling and/or suspension of your resources.

If your request continuously returns 400 response error messages, please refrain from making further requests immediately. This error often indicates you’re violating the Microsoft Acceptable Use Policy by sending harmful requests to Azure OpenAI models. As a result of this activity, the Azure OpenAI engineering team may take action / has taken action on the offending resource(s) in your subscription.

If the action listed is 'throttled', then requests from your resource are currently being throttled. Otherwise, you have 24 hours to remedy this behavior before we take further corrective action. If the action listed is 'resource suspension' then your image generation model deployments under your resource are in a suspended state. If the action listed is 'subscription suspension', then your image generation model deployments under your subscription are currently in a suspended state.

Once you have identified and remedied the source of these errors, and/or to unblock your resource(s) and/or subscription(s), you can open a support case through the Azure Portal at aka.ms/azsupt to work with the engineering team to unblock (refer to Tracking ID YLKM-DV8 and Incident ID 664717743 in your case). If you believe that you have received this message in error, or have any other questions or concerns, please open a support case at aka.ms/azsupt and reference the Incident ID and tracking ID from above."

Aug 2, 2025

Received generic email about upcoming Azure OpenAI pricing changes, creating further confusion about credit usage.

Aug 5, 2025

First warning/suspension notice received. No billing alerts were provided despite abnormal usage; tens of thousands of image requests per day were observed.
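The gap described above — anomalous usage detected internally but never surfaced to the customer — can at least be bounded on the client side. Below is a minimal sketch (all names hypothetical, not an Azure feature) of a trailing-average check over one's own request logs that would have flagged a jump to tens of thousands of requests per day:

```python
from collections import deque

class UsageAnomalyDetector:
    """Flags a day's request count when it exceeds `factor` times the
    trailing average. A client-side stopgap that assumes daily counts
    are available from the application's own request logging."""

    def __init__(self, window: int = 7, factor: float = 3.0):
        self.history = deque(maxlen=window)  # last `window` daily counts
        self.factor = factor

    def record_day(self, request_count: int) -> bool:
        """Record one day's count; return True if it is anomalous."""
        anomalous = False
        if self.history:
            baseline = sum(self.history) / len(self.history)
            anomalous = request_count > self.factor * max(baseline, 1.0)
        self.history.append(request_count)
        return anomalous

detector = UsageAnomalyDetector()
for day in [120, 95, 110, 130]:
    detector.record_day(day)            # normal traffic builds the baseline
print(detector.record_day(40_000))      # tens of thousands/day -> True
```

A check like this catches only volume spikes, not slow bleed, but even this crude signal would have fired days before the credits were exhausted.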

"Corrective Action: resource suspension" – Microsoft Email

Aug 12, 2025

Support ticket opened immediately after noticing $12,902 consumed; approximately $11,900 of the $24,787 in credits remained.

"HI Azure, Something wrong. gpt-image-1 keep charging like $1000/day. I had $17,000 a month ago in credit, and gpt-image-1 keeps charging $1000/day even with low usage. Something is wrong. Would you mind checking it out please ASAP?" – Gabriel Majorsky
"Thank you for the response. We’re reviewing backend billing details for gpt-image-1. Could you provide the Resource URI?" – Azure Support

Aug 12–17, 2025

Abuse escalates. Microsoft systems detect anomalies but allow exploitation to continue. Account loses $1,000+/day.

"I cannot delete the key because the abuser is using it, and Microsoft said not to delete it. I could only set the token limit to 3 to throttle requests." – Gabriel Majorsky

Aug 14, 2025

Credit usage reaches $17,560. No meaningful response from support yet.

Aug 17, 2025

Account nearly depleted: $22,710 used of $24,787 credits.

Aug 18, 2025

All $25,000 sponsorship credits exhausted. Account forcibly converted to Pay-As-You-Go without consent, creating risk of further personal financial liability.

"Your Azure Sponsorship has ended. Your subscriptions were converted to standard pay-as-you-go rates." – Microsoft Email

Aug 19–26, 2025

Multiple emails exchanged. Microsoft repeatedly requests the Resource URI without providing substantive help.

"Could you please share the Resource URI of the OpenAI resource where the gpt-image-1 model is located?" – Azure Support

Aug 27, 2025

Zoom screen sharing session held and the full incident explained. Microsoft assured the reporter the case was being handled seriously, but no logs or remediation were provided.

Aug 29, 2025

Azure dismisses the case with a generic response and refuses to investigate logs or restore credits.

"Unfortunately, there is no possibility to make changes to the sponsorship usage retrospectively." – Azure Support

Sep 1–5, 2025

Follow-up emails requesting escalation, full logs, and investigation. No resolution provided.

"I am extremely disappointed with how my ticket opened 12.08.2025 has been handled. Despite raising my concerns multiple times... I have received no meaningful assistance." – Gabriel Majorsky

Sep 5–7, 2025

Microsoft provides vague assurances but no resolution. Case remains open with high-priority internal incident created.

"We've created an internal incident with high priority regarding the issue you reported." – Azure Support

Systemic Security Failures

Financial Impact

Critical Safety Failure: No Spending Limits in Azure OpenAI

One of the most alarming findings from this incident is the complete lack of spending safeguards in Azure OpenAI. While OpenAI’s official API allows developers to set hard spending limits per API key, Azure OpenAI provides no such mechanism. Once an API key is compromised, there is nothing to stop mass exploitation. The system will continue charging the account until all credits are exhausted, or until the customer notices and manually throttles usage.

Feature                     | OpenAI API                                          | Azure OpenAI
Spending limits per key     | Yes — hard limits can be set at key generation      | No spending limits available
Automated abuse lockout     | Soft throttling can prevent overuse if configured   | None; even when abuse is detected, consumption continues
Fraud/anomaly protection    | Alerts available; usage caps prevent runaway costs  | Alerts exist but do not block usage; account depletion occurs anyway
Impact of a compromised key | Limited to the spending cap set by the user         | Unlimited within sponsorship or pay-as-you-go, leading to catastrophic financial loss

In my case, this design flaw directly led to the unauthorized depletion of $25,000 in sponsorship credits in a matter of days. Despite multiple internal alerts from Microsoft and repeated customer reports, the system allowed the abuse to continue unchecked, demonstrating a systemic security failure.

This exposes every Azure OpenAI customer to significant financial risk, and shows a troubling lack of proactive guidance, educational resources, or preventive infrastructure on the platform.
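Until Azure offers hard caps natively, a developer can approximate one in the client. The sketch below (hypothetical names and prices, not actual Azure rates) tracks estimated spend and refuses requests once a budget is exhausted:

```python
class SpendGuard:
    """Approximates the hard spending cap Azure OpenAI lacks: estimate
    cost client-side and refuse requests past a budget. Per-image price
    here is a placeholder, not an actual Azure rate."""

    def __init__(self, budget_usd: float, cost_per_image_usd: float):
        self.budget = budget_usd
        self.cost_per_image = cost_per_image_usd
        self.spent = 0.0

    def charge(self) -> bool:
        """Reserve budget for one image request; False means stop calling."""
        if self.spent + self.cost_per_image > self.budget:
            return False
        self.spent += self.cost_per_image
        return True

guard = SpendGuard(budget_usd=1.0, cost_per_image_usd=0.25)
print([guard.charge() for _ in range(6)])  # [True, True, True, True, False, False]
print(guard.spent)                         # 1.0
```

Like any client-side measure, this only protects traffic routed through the application itself; a stolen key bypasses it entirely, which is why a platform-side cap is the actual fix being requested.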

Denied Access to Infrastructure Details: Pre-Incident Inquiry

Before the incident occurred, I proactively reached out to Azure support to understand the infrastructure behind the gpt-image-1 model. My goal was to gain insight into why image generation took 60–90 seconds per image, even at maximum quality settings. This inquiry was entirely motivated by responsible engineering: I wanted to optimize usage, understand performance constraints, and ensure safe, efficient operation.

Despite these legitimate questions, Microsoft refused to provide any concrete technical details about the underlying hardware, deployment, or system architecture. Responses were vague, limited to generic documentation, and offered no actionable information for understanding performance or operational behavior.

Key Points:

- Image generation took 50–90 seconds per request, even at maximum quality settings.
- Support attributed the latency to shared Global Standard server nodes serving global pools and called 60 seconds "expected", without disclosing any hardware configuration.
- No concrete details about the underlying hardware, deployment, or system architecture were ever provided; responses were limited to generic documentation.

Implication:

Even before the security incident, Microsoft’s refusal to provide infrastructure information highlights a systemic lack of transparency and developer support. Users are given advanced tools without guidance, education, or safety measures, leaving them vulnerable to performance bottlenecks, unexpected behavior, and, as later occurred, potential financial and security exposure.

Requested MSRC Actions

- Provide full access and usage logs for the affected resources, referencing the incident and tracking IDs already on file.
- Conduct a formal investigation into the unauthorized usage and into why internal detection did not trigger intervention.
- Restore the depleted sponsorship credits and reverse the non-consensual conversion to Pay-As-You-Go.
- Introduce hard spending limits and automated abuse lockout for Azure OpenAI resources.

Next Steps if Ignored

Conclusion

Azure OpenAI has provided advanced AI tools without proper safeguards, fraud detection, anomaly lockout, or clear documentation. This negligence directly caused financial loss and security exposure. Immediate investigation, transparency, and remediation are required. Ignoring this incident is unacceptable both ethically and legally.

Reporter Contact

Gabriel Majorsky

Email: gabz@songdrop.band

Subscription ID: bfd2dc10-f4ea-4cc0-95c8-6ab6d43d26cf

Azure Ticket: billing - TrackingID#2508120050002143