Describe the bug
Occasionally, stream recordings become corrupted and unplayable. When the mp4 produced by OME is analyzed with ffprobe, the error "moov atom not found" is reported. This suggests the recordings are not finalized properly, likely due to abrupt interruptions. We found a way to reproduce this consistently, described below, but it seems this could happen for any reason that causes the internal recording process to exit abruptly.
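A corrupted recording can be confirmed with ffprobe. The following is a minimal sketch of such a check (the file path is a placeholder following the FilePath template in the config below, not an actual recording name):

import subprocess

# Placeholder path; substitute a recording actually produced by the FILE publisher.
recording = "/tmp/recordings/default/live/stream/20240101000000_20240101001000.mp4"

# Ask ffprobe to report errors only; a recording that was never finalized prints
# "moov atom not found" on stderr and exits with a non-zero status.
result = subprocess.run(
    ["ffprobe", "-v", "error", "-show_format", recording],
    capture_output=True,
    text=True,
)
if result.returncode != 0 and "moov atom not found" in result.stderr:
    print(f"{recording}: missing moov atom (recording not finalized)")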
To Reproduce
Steps to reproduce the behavior:
Set Server.xml as follows:
<?xml version="1.0" encoding="UTF-8"?>
<Server version="8">
<Name>OvenMediaEngine</Name>
<!-- Host type (origin/edge) -->
<Type>origin</Type>
<!-- Specify IP address to bind (* means all IPs) -->
<IP>*</IP>
<!-- To get the public IP address(mapped address of stun) of the local server. This is useful when OME cannot obtain a public IP from an interface, such as AWS or docker environment. If this is successful, you can use ${PublicIP} in your settings.-->
<StunServer>stun.l.google.com:19302</StunServer>
<!-- Settings for the ports to bind -->
<Bind>
<Managers>
<API>
<Port>48081</Port>
<WorkerCount>1</WorkerCount>
</API>
</Managers>
<Providers>
<!-- Pull providers
<RTSPC>
<WorkerCount>1</WorkerCount>
</RTSPC>
<OVT>
<WorkerCount>1</WorkerCount>
</OVT>
-->
<!-- Push providers -->
<RTMP>
<Port>${env:OME_RTMP_PROV_PORT:1935}</Port>
<WorkerCount>4</WorkerCount>
</RTMP>
<WebRTC>
<Signalling>
<Port>${env:OME_SIGNALLING_PORT:3333}</Port>
<WorkerCount>1</WorkerCount>
</Signalling>
<IceCandidates>
<TcpRelay>*:3478</TcpRelay>
<!-- TcpForce is an option to force the use of TCP rather than UDP in WebRTC streaming. (You can omit ?transport=tcp accordingly.) If <TcpRelay> is not set, playback may fail. -->
<TcpForce>false</TcpForce>
<IceCandidate>${env:OME_ICE_CANDIDATES:*:10000-10005/udp}</IceCandidate>
<TcpRelayWorkerCount>1</TcpRelayWorkerCount>
</IceCandidates>
</WebRTC>
</Providers>
<Publishers>
<WebRTC>
<Signalling>
<Port>${env:OME_SIGNALLING_PORT:3333}</Port>
<!-- If you want to use TLS, specify the TLS port -->
<TLSPort>${env:OME_SIGNALLING_PORT_SECURE:3334}</TLSPort>
<WorkerCount>1</WorkerCount>
</Signalling>
<IceCandidates>
<!-- If you want to stream WebRTC over TCP, specify IP:Port for TURN server. This uses the TURN protocol, which delivers the stream from the built-in TURN server to the player's TURN client over TCP. For detailed information, refer https://airensoft.gitbook.io/ovenmediaengine/streaming/webrtc-publishing#webrtc-over-tcp-->
<TcpRelay>*:3478</TcpRelay>
<!-- TcpForce is an option to force the use of TCP rather than UDP in WebRTC streaming. (You can omit ?transport=tcp accordingly.) If <TcpRelay> is not set, playback may fail. -->
<TcpForce>false</TcpForce>
<IceCandidate>${env:OME_ICE_CANDIDATES:*:10000-10005/udp}</IceCandidate>
<TcpRelayWorkerCount>1</TcpRelayWorkerCount>
</IceCandidates>
</WebRTC>
<LLHLS>
<Port>799</Port>
<TLSPort>800</TLSPort>
<WorkerCount>1</WorkerCount>
</LLHLS>
<Thumbnail>
<Port>799</Port>
<TLSPort>800</TLSPort>
</Thumbnail>
<OVT>
<Port>${env:OME_OVT_PORT:9999}</Port>
<WorkerCount>8</WorkerCount>
</OVT>
</Publishers>
</Bind>
<Managers>
<Host>
<Names>
<Name>localhost</Name>
</Names>
<TLS>
<CertPath>${env:SSL_CERT_PATH:/opt/ovenmediaengine/cert.pem}</CertPath>
<KeyPath>${env:SSL_KEY_PATH:/opt/ovenmediaengine/privkey.pem}</KeyPath>
<ChainCertPath>${env:SSL_FULLCHAIN_PATH:/opt/ovenmediaengine/fullchain.pem}</ChainCertPath>
</TLS>
</Host>
<API>
<AccessToken>${env:API_ACCESS_TOKEN:zvX6CrZMc3pzVK5NPH3w5JsL}</AccessToken>
</API>
</Managers>
<VirtualHosts>
<!-- You can use wildcard like this to include multiple XMLs -->
<VirtualHost>
<Name>default</Name>
<!--Distribution is a value that can be used when grouping the same vhost distributed across multiple servers. This value is output to the events log, so you can use it to aggregate statistics. -->
<Distribution>example.com</Distribution>
<AdmissionWebhooks>
<ControlServerUrl>${env:CONTROL_SERVER_URL}</ControlServerUrl>
<SecretKey>${env:ADMISSION_SECRET_KEY:ASecureKeyValue}</SecretKey>
<Timeout>3000</Timeout>
<Enables>
<Providers>rtmp</Providers>
</Enables>
</AdmissionWebhooks>
<!-- Settings for multi ip/domain and TLS -->
<Host>
<Names>
<Name>localhost</Name>
</Names>
<TLS>
<CertPath>${env:SSL_CERT_PATH:/opt/ovenmediaengine/cert.pem}</CertPath>
<KeyPath>${env:SSL_KEY_PATH:/opt/ovenmediaengine/privkey.pem}</KeyPath>
<ChainCertPath>${env:SSL_FULLCHAIN_PATH:/opt/ovenmediaengine/fullchain.pem}</ChainCertPath>
</TLS>
</Host>
<!-- Settings for applications -->
<Applications>
<Application>
<Name>live</Name>
<!-- Application type (live/vod) -->
<Type>live</Type>
<OutputProfiles>
<Decodes>
<OnlyKeyframes>true</OnlyKeyframes>
</Decodes>
<!-- Enable this configuration if you want to hardware acceleration using GPU -->
<HardwareAcceleration>false</HardwareAcceleration>
<!-- Primary stream output -->
<OutputProfile>
<Name>bypass_stream</Name>
<OutputStreamName>${OriginStreamName}</OutputStreamName>
<Encodes>
<Audio>
<Bypass>true</Bypass>
</Audio>
<Video>
<Bypass>true</Bypass>
</Video>
<Audio>
<Codec>opus</Codec>
<Bitrate>128000</Bitrate>
<Samplerate>48000</Samplerate>
<Channel>2</Channel>
</Audio>
<Image>
<Codec>webp</Codec>
<Framerate>0.1</Framerate>
</Image>
</Encodes>
</OutputProfile>
<!-- Audio only output -->
<OutputProfile>
<Name>audio_only</Name>
<OutputStreamName>${OriginStreamName}_audio</OutputStreamName>
<Encodes>
<Audio>
<Bypass>true</Bypass>
</Audio>
<Audio>
<Codec>opus</Codec>
<Bitrate>128000</Bitrate>
<Samplerate>48000</Samplerate>
<Channel>2</Channel>
</Audio>
</Encodes>
</OutputProfile>
</OutputProfiles>
<Providers>
<WebRTC />
<RTMP>
<BlockDuplicateStreamName>false</BlockDuplicateStreamName>
</RTMP>
<WebRTC>
<Timeout>30000</Timeout>
</WebRTC>
</Providers>
<Publishers>
<!-- Increase to handle more incoming streams-->
<AppWorkerCount>16</AppWorkerCount>
<!-- Increase to handle more players-->
<StreamWorkerCount>2</StreamWorkerCount>
<WebRTC>
<Timeout>30000</Timeout>
<Rtx>false</Rtx>
<Ulpfec>false</Ulpfec>
<JitterBuffer>false</JitterBuffer>
</WebRTC>
<RTMPPush></RTMPPush>
<LLHLS>
<OriginMode>true</OriginMode>
<CacheControl>
<MasterPlaylistMaxAge>0</MasterPlaylistMaxAge>
<ChunklistMaxAge>0</ChunklistMaxAge>
<ChunklistWithDirectivesMaxAge>60</ChunklistWithDirectivesMaxAge>
<SegmentMaxAge>-1</SegmentMaxAge>
<PartialSegmentMaxAge>-1</PartialSegmentMaxAge>
</CacheControl>
<ChunkDuration>1</ChunkDuration>
<SegmentDuration>6</SegmentDuration>
<SegmentCount>10</SegmentCount>
<CrossDomains>
<Url>http://localhost</Url>
</CrossDomains>
<DVR>
<Enable>true</Enable>
<TempStoragePath>/tmp/ome_dvr/</TempStoragePath>
<MaxDuration>43200</MaxDuration> <!-- Seconds -->
</DVR>
</LLHLS>
<Thumbnail>
<CrossDomains>
<Url>http://localhost</Url>
</CrossDomains>
</Thumbnail>
<OVT />
<!-- File publisher. Required for recording to work; recordings still have to be started by calling the REST API (a sketch of such a call follows this config). -->
<FILE>
<RootPath>/tmp/recordings</RootPath>
<FilePath>/${VirtualHost}/${Application}/${Stream}/${StartTime:YYYYMMDDhhmmss}_${EndTime:YYYYMMDDhhmmss}.mp4</FilePath>
<InfoPath>/${VirtualHost}/${Application}/${Stream}.xml</InfoPath>
<StreamMap>
<Enable>true</Enable>
<Path>./RecordingsMap.xml</Path>
</StreamMap>
</FILE>
</Publishers>
</Application>
</Applications>
</VirtualHost>
</VirtualHosts>
</Server>
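As the FILE publisher comment above notes, recordings still have to be started through the REST API. The exact call is not part of this report; the following is a minimal sketch only, assuming OME's v1 :startRecord endpoint with the API port and access token from the Server.xml above (verify the endpoint, auth scheme, and payload against the OME REST API documentation):

import base64
import json
import urllib.request

# Values taken from the Server.xml above; the recording id and stream name are placeholders.
api_base = "http://localhost:48081/v1"
access_token = "zvX6CrZMc3pzVK5NPH3w5JsL"

# Assumption: the REST API accepts the AccessToken as HTTP Basic credentials.
auth = base64.b64encode(access_token.encode()).decode()

body = json.dumps({
    "id": "corruption-repro",          # arbitrary recording job id (placeholder)
    "stream": {"name": "teststream"},  # stream name published by the test client (placeholder)
}).encode()

req = urllib.request.Request(
    f"{api_base}/vhosts/default/apps/live:startRecord",
    data=body,
    headers={"Authorization": f"Basic {auth}", "Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())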
RecordingsMap.xml
Use the provided Python script to simulate:
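The script itself is not attached in this copy of the issue. Purely as an illustration of the kind of interruption it simulates (assumption: an RTMP publisher that is killed abruptly while a recording is in progress), a hypothetical sketch might look like:

import subprocess
import time

# Hypothetical stand-in for the reproduction script: publish a synthetic RTMP
# stream to OME, then kill the publisher abruptly so the connection drops
# without a clean teardown and the in-progress recording is never finalized.
RTMP_URL = "rtmp://localhost:1935/live/teststream"  # placeholder app/stream names

publisher = subprocess.Popen([
    "ffmpeg", "-re",
    "-f", "lavfi", "-i", "testsrc=size=1280x720:rate=30",
    "-f", "lavfi", "-i", "sine=frequency=440",
    "-c:v", "libx264", "-preset", "veryfast",
    "-c:a", "aac",
    "-f", "flv", RTMP_URL,
])

time.sleep(60)     # let OME ingest and record for a while
publisher.kill()   # SIGKILL: abrupt interruption, no clean shutdown of the stream
publisher.wait()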
Expected behavior
All recordings should be finalized correctly and playable, even if the recording process is interrupted.
Logs
recording-debug.log
Relevant log error:
Server (please complete the following information):
Additional context
The issue was reproduced using the Python script mentioned above. This behavior might be due to FFmpeg not finalizing the recordings properly when interrupted. Suggestions for ensuring recordings are playable despite network instability or process interruptions would be appreciated.