Key Facts
- ✓ A user reported accessing OpenAI Sora videos without watermarks.
- ✓ Video generation results were reportedly ready in 1 second.
- ✓ The user investigated the platform's Network tab to find the vulnerability.
- ✓ The user claimed videos are stored on servers rather than generated on-demand.
Quick Summary
A technology enthusiast has reported discovering a method to access OpenAI Sora generated videos without watermarks. The discovery was made after the user observed that video generation times were significantly faster than expected, leading to the conclusion that files were stored on servers rather than generated in real-time.
The user then investigated the platform's network traffic in search of a security loophole, and the report suggests the video files were ultimately accessed directly. This development follows observations of individuals selling access to watermark-free videos.
Discovery of Anomalous Generation Speeds
The investigation began when the researcher noticed individuals selling access to Sora videos that lacked the standard watermark. The initial assumption was that these sellers combined machine-learning tools with manual editing, such as Photoshop, to process individual frames. However, upon testing the generation process personally, the user noticed a critical detail in the processing time.
The video result was reportedly ready in approximately 1 second. This rapid delivery time contradicted the hypothesis of manual frame-by-frame editing. Consequently, the researcher concluded that the video files must be pre-existing on servers. This realization triggered a shift in approach, moving from skepticism to active technical investigation.
"В тот момент я осознал: ЗНАЧИТ ВИДЕО ЛЕЖАТ НА СЕРВЕРАХ."
— Researcher
The Investigation Process 🕵️
Driven by what the user termed a 'hacker instinct,' the researcher shifted focus to finding a security hole in the platform, beginning with the browser's Network tab. This standard developer tool allows for the monitoring of data requests and responses between the user and the server.
The methodology involved systematic probing of the network traffic. The user described the process as trying various methods of interaction with the interface to trigger specific network responses. This approach is commonly used to identify unsecured endpoints or exposed data. The goal was to locate the direct links to the video files that were being served to the user.
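For illustration, a capture like this can also be filtered offline. The sketch below is a minimal example rather than the researcher's actual code: it assumes the Network tab session was exported as a standard HAR file (named session.har here) and lists the responses whose MIME type marks them as video payloads.

```python
import json

# Hypothetical sketch: parse a HAR export of the browser's Network tab
# ("Save all as HAR" in Chromium-based DevTools) and list video responses.
with open("session.har", encoding="utf-8") as f:
    har = json.load(f)

# Each HAR entry records one request/response pair seen in the Network tab.
for entry in har["log"]["entries"]:
    mime = entry["response"]["content"].get("mimeType", "")
    # Video payloads typically advertise a video/* MIME type, e.g. video/mp4.
    if mime.startswith("video/"):
        print(entry["request"]["url"], mime, entry["response"]["status"])
```

Any URL surfaced this way points to a file the server has already delivered to the browser, which is precisely the kind of direct link the investigation sought.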
Implications of Stored Video Files
The core of the discovery relies on the premise that OpenAI stores generated content on accessible servers. If the generation process took only one second, it implies the video was retrieved rather than rendered from scratch. This distinction is vital for understanding the security posture of the platform.
If files are stored, they become potential targets for unauthorized access. The user's ability to locate these files suggests a lack of sufficient access controls or obfuscation for the generated media. This incident serves as a case study in the risks associated with cloud-based generative AI services where content persistence is required for rapid delivery.
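The timing argument can be made concrete with a small probe. The sketch below is purely illustrative and uses a placeholder URL, not a real Sora endpoint: it measures how quickly a server answers a HEAD request for a video and whether it reports a fixed Content-Length, both of which are consistent with a pre-stored file rather than on-the-fly rendering.

```python
import time
import urllib.request

# Placeholder URL for illustration only; not an actual Sora endpoint.
url = "https://example.com/generated/video.mp4"

# A HEAD request fetches headers without the body, so the response time
# mostly reflects server-side lookup rather than transfer time.
req = urllib.request.Request(url, method="HEAD")
start = time.monotonic()
with urllib.request.urlopen(req, timeout=10) as resp:
    elapsed = time.monotonic() - start
    size = resp.headers.get("Content-Length")

print(f"responded in {elapsed:.2f}s, Content-Length: {size}")
# A sub-second answer with a known, fixed length suggests a static stored
# asset; genuine on-demand rendering of a multi-second clip could not
# plausibly complete in that window.
```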
Resolution and Tools 🛠️
The researcher concluded the investigation by documenting the vulnerability. The user indicated that the solution involved identifying the correct network requests to isolate the video file locations, which allowed the standard watermarking system to be bypassed.
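In practical terms, once a direct link has been isolated, retrieving the stored file takes nothing more than an ordinary HTTP request. The sketch below assumes a hypothetical URL taken from the captured traffic; it is not drawn from the researcher's report.

```python
import urllib.request

# Hypothetical direct link isolated from the network traffic.
video_url = "https://example.com/generated/video.mp4"

# Fetching the URL saves exactly what the server stores at that location;
# no client-side processing (and hence no watermarking step) is involved.
urllib.request.urlretrieve(video_url, "video.mp4")
print("saved video.mp4")
```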
Details regarding the specific vulnerability and the tools used were reportedly made public. The user mentioned a link to GitHub and a 'ready-made product' at the end of their original report. This suggests that the method may be reproducible by others with similar technical skills.
"Во мне проснулся инстинкт хакера, и я начал искать ДЫРУ."
— Researcher