Frequently Asked Questions (FAQ) about the "LIVE555 Streaming Media" libraries

Help support improvements and extensions to the "LIVE555 Streaming Media" software: LIVE555 Funded Projects.

General questions

  1. What is the typical control flow within an application that uses these libraries, and what is the role of the various "Source" and "Sink" classes in the "liveMedia" library?
  2. How can I use this code within a specialized environment (such as an embedded system, or a GUI toolkit)?
  3. Is this code 'thread safe'? I.e., can it be accessed by more than one thread at the same time?
  4. How many concurrent connections/streams can a RTSP server (built using our code) support?
  5. When I use a receiver application (that uses the "LIVE555 Streaming Media" code) to receive an incoming RTP/UDP (or raw-UDP) stream, I see significant network packet loss. Can anything be done to improve this?
  6. Is there any more documentation for these libraries?
  7. I would like to see new feature X added to the code. How soon can you do this for me?

Questions about the libraries' source code

  1. Where is the latest version of the library code? What version of the library do I currently have?
  2. What is the copyright on the source code, and how is it licensed? What are my obligations under this license?
  3. What is the best way to modify or extend the functionality of the code?
  4. I want to subclass one of the supplied C++ classes, but I can't because some definitions that I need are "private:" rather than "protected:". What can I do about this?
  5. Do you make available or support old versions of the library?
  6. Why do you not make the code available under a 'source code repository'?

Questions about the test programs (and using their code as a model for your own applications)

  1. I have successfully used the "testRTSPClient" demo application to receive a RTSP/RTP stream. Using this application code as a model, how can I decode the received video (and/or audio) data?
  2. When I try to receive a stream using the "openRTSP" command-line client, the RTSP protocol exchange appears to work OK, but the resulting data file(s) are empty. What's wrong?
  3. The "test*Streamer" test programs read from a file. Can I modify them so that they take input from a H.264, H.265 or MPEG encoder instead, so I can stream live (rather than prerecorded) video and/or audio?
  4. But what about the "testOnDemandRTSPServer" test program (for streaming via unicast)? How can I modify it so that it takes input from a live source instead of from a file?
  5. Can the RTSP server implementation (e.g., as demonstrated by the "testOnDemandRTSPServer" test program) stream to set-top boxes (STBs)?
  6. Does the RTSP implementation (client and/or server) support 'trick mode' operations (i.e., seek, fast-forward, reverse play)?
  7. The "test*Streamer" and "test*Receiver" test programs use multicast. Can I modify them to use unicast instead?
  8. For many of the "test*Streamer" test programs, the built-in RTSP server is optional (and disabled by default). For "testAMRAudioStreamer", "testMPEG4VideoStreamer", "testH264VideoStreamer", "testH265VideoStreamer", and "testWAVAudioStreamer", however, the built-in RTSP server is mandatory. Why?
  9. Where can I find an example of a MPEG-4 Elementary Stream video file that I can use (as "test.m4e") in the "testMPEG4VideoStreamer" or "testOnDemandRTSPServer" demo applications (or "live555MediaServer")?
  10. Where can I find an example of a H.264 Elementary Stream video file that I can use (as "test.264") in the "testH264VideoStreamer" or "testOnDemandRTSPServer" demo applications (or "live555MediaServer")?
  11. Where can I find an example of a H.265 Elementary Stream video file that I can use (as "test.265") in the "testH265VideoStreamer" or "testOnDemandRTSPServer" demo applications (or "live555MediaServer")?
  12. Where can I find an example of a AAC Audio (ADTS format) file that I can use (as "test.aac") in the "testOnDemandRTSPServer" demo application (or "live555MediaServer")?
  13. How can I stream JPEG video via RTP? There is no demo application for this.
  14. When I ran "testMPEG1or2VideoStreamer", I saw several error messages like "saw unexpected code 0x000001e0". What's wrong?
  15. When I stream a MP3 file (using "testMP3Streamer" or "testOnDemandRTSPServer"), I find that QuickTime Player will not play the stream. What's wrong?
  16. The calls to "doEventLoop()" in the test programs do not return. How can I arrange for "doEventLoop" to return - e.g., if the user clicks on a "stop" button in a GUI?
  17. The event loop implementation provided in the source code (i.e., "BasicTaskScheduler") can handle file/socket I/O events, or delayed (or periodic) tasks. How can I have the event loop handle other kinds of event (perhaps signaled from a separate thread, e.g., running a user interface)?
  18. I tried using one of the test programs to stream my file, but it didn't work. Why?
  19. The test programs worked OK for me, but then I modified one of them, and it no longer works. What's wrong?

Questions about RTP, RTSP, and/or SIP

  1. I tried to play a "rtsp://" URL (using testRTSPClient, openRTSP, VLC, or MPlayer), or a "sip:" URL (using playSIP), but I got an error message "RTP payload format unknown or not supported". Why?
  2. Why do most RTP sessions use separate streams for audio and video? How can a receiving client synchronize these streams?
  3. But I notice that there's an abrupt change in a stream's presentation times after the first RTCP "SR" packet has been received. Is this a bug?
  4. I am developing a RTP receiver (client). I need access to incoming packets' RTP timestamps.
  5. I have a general question about RTP/RTCP, RTSP, or SIP - not specifically related to the LIVE555 Streaming Media software. Where can I get more information?

Questions about the "live-devel" mailing list

  1. How do I ask questions about the "LIVE555 Streaming Media" software (including the "LIVE555 Media Server", the "LIVE555 Proxy Server", and the "LIVE555 HLS Proxy")?
  2. Why do I need to subscribe to the mailing list before I can post to it?
  3. When I posted a message to the mailing list, I received a response saying that my message was being moderated. Why?
  4. Why do you discriminate against people who use unprofessional email addresses ("@gmail.com" etc.)?
  5. Why did nobody answer my question?
  6. I posted a question to the mailing list, but nobody answered it. Can I post it again?

-----

What is the typical control flow within an application that uses these libraries, and what is the role of the various "Source" and "Sink" classes in the "liveMedia" library?

Applications are event-driven, using an event loop "TaskScheduler::doEventLoop()" that works basically as follows:
    while (1) {
        find a task that needs to be done (by looking on the delay queue,
                and the list of network read handlers);
        perform this task;
    }
Also, before entering this loop, applications will typically call
    someSinkObject->startPlaying();
for each sink, to start generating tasks that need to be done.

Data passes through a chain of 'source's and 'sink's - e.g.,

    'source1' -> 'source2' (a filter) -> 'source3' (a filter) -> 'sink'
(Sources that receive data from other sources are also called "filters".)

Whenever an object (a 'sink' or one of the intermediate filter 'source's) wants to get more data, it calls "FramedSource::getNextFrame()" on the object that's to its immediate left. This, in turn, calls the pure virtual function "FramedSource::doGetNextFrame()", which each 'source' object implements.

Each 'source' object's implementation of "doGetNextFrame()" works by arranging for an 'after getting' function to be called (from an event handler) when new data becomes available for the caller.

Note that the flow of data from 'sources' to 'sinks' happens within each application, and doesn't correspond to the sending or receiving of network packets. For example, a server application (such as "testMP3Streamer" or "testOnDemandRTSPServer") that transmits RTP packets will do so using one or more "RTPSink" objects. Each "RTPSink" object will receive data from a chain of other, 'source' or 'filter' objects (e.g., to read data from a file), and, as a side effect, transmit RTP packets. Similarly, a client application (such as "testMP3Receiver" or "openRTSP") that receives RTP packets will do so using one or more "RTPSource" objects. Each "RTPSource" object will receive RTP packets from the network, and pass the network data (without network headers) through a chain of (zero or more) 'filter' objects, ending with a 'sink' object that processes the data - e.g., by writing it to a file ("FileSink"), or by decoding and rendering the audio or video data.
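
As a concrete illustration, here is a minimal sketch of this control flow (not one of the supplied test programs), using two of the supplied classes - "ByteStreamFileSource" and "FileSink" - to copy a file; the file names "in.bin" and "out.bin" are just placeholders:

    #include "liveMedia.hh"
    #include "BasicUsageEnvironment.hh"

    char eventLoopWatchVariable = 0;

    // Called (from within the event loop) once the sink has consumed all of the source's data:
    void afterPlaying(void* /*clientData*/) {
      eventLoopWatchVariable = 1; // causes "doEventLoop()" to return
    }

    int main() {
      // Set up the usage environment and its task scheduler:
      TaskScheduler* scheduler = BasicTaskScheduler::createNew();
      UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

      // Build a trivial 'source' -> 'sink' chain:
      FramedSource* source = ByteStreamFileSource::createNew(*env, "in.bin");
      MediaSink* sink = FileSink::createNew(*env, "out.bin");

      // Start generating tasks, then run the event loop until "afterPlaying()" stops it:
      sink->startPlaying(*source, afterPlaying, NULL);
      env->taskScheduler().doEventLoop(&eventLoopWatchVariable);
      return 0;
    }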


How can I use this code within a specialized environment (such as an embedded system, or a GUI toolkit)?

People usually do this by developing their own subclasses of the "UsageEnvironment" and "TaskScheduler" abstract base classes (see "UsageEnvironment/include/UsageEnvironment.hh"). Note that the released source code includes one particular implementation of these classes: the "BasicUsageEnvironment" library. This uses the Unix (or Windows) console for I/O, and so allows you to develop applications that you can run in a conventional console environment - e.g., for prototyping and debugging. Then, by using your own custom subclasses of "UsageEnvironment" and (perhaps) "TaskScheduler" (i.e., instead of "BasicUsageEnvironment" and "BasicTaskScheduler"), the same code will run, unchanged, in your custom environment.

In particular, to use the code within a GUI toolkit, your "TaskScheduler" subclass's implementation of "doEventLoop()" should be integrated with the GUI toolkit's own event handling mechanism.


Is this code 'thread safe'? I.e., can it be accessed by more than one thread at the same time?

Short answer: No. As noted above, the code assumes a single thread of execution, using an event loop - rather than multiple threads - for concurrency. This generally makes the code much easier to debug, and much easier to port across multiple operating systems, which may have different thread APIs, or no thread support at all. (For even stronger arguments along these same lines, see John Ousterhout's presentation.)

Therefore, although it's true to say that the code is not 'thread safe', that description is also somewhat misleading. It's like saying that a high-speed rail carriage is not 'airworthy'.

Longer answer: More than one thread can still use this code, if only one thread runs the library code proper, and the other thread(s) communicate with the library only via sockets, by setting global 'flag' variables (such as event loop 'watch variables'), or by calling 'event triggers'. (Note that "triggerEvent()" is the only LIVE555 function that may be called from an external (i.e., non-LIVE555) thread.)

Another possible way to access the code from multiple threads is to have each thread use its own "UsageEnvironment" and "TaskScheduler" objects, and thus its own event loop. The objects created by each thread (i.e., using its own "UsageEnvironment") must not interact (except via global variables). Such a configuration is not recommended, however; instead, it is safer to structure such an application as multiple processes, not multiple threads.

In any case, when using the "LIVE555 Streaming Media" code, you should be familiar with event-driven programming, and understand that an event-driven application can perform at least as well as one that uses threads (unless you're actually running on a multiprocessor, in which case it's usually simpler to have your application consist of multiple processes (not just threads) - one running on each processor). Note, in particular, that you do not need multiple threads in order to transmit (or receive) multiple streams concurrently.


How many concurrent connections/streams can a RTSP server (built using our code) support?

There's no fixed limit in our code. In practice, however, the number of open files (sockets) that the underlying operating system allows is often the limiting factor. If you can increase this number (in your operating system), then you can sometimes increase scalability.

In Windows, there is also a limit set by the "FD_SETSIZE" constant, whose default value of 64 allows a maximum of about 32 concurrent clients. You might also wish to increase this value - e.g., by redefining "FD_SETSIZE" at compile time.
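
For example, one way to do this is to define "FD_SETSIZE" (to a larger value) before any Winsock header gets included - e.g., by adding a compiler flag such as "-DFD_SETSIZE=1024" to your Windows build, which has the same effect as the following sketch (the value 1024 is just an example):

    #define FD_SETSIZE 1024   // must appear before <winsock2.h> is included
    #include <winsock2.h>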


When I use a receiver application (that uses the "LIVE555 Streaming Media" code) to receive an incoming RTP/UDP (or raw-UDP) stream, I see significant network packet loss. Can anything be done to improve this?

First, you should make sure that your network has sufficient bandwidth for your data stream.

However, packet loss can also be caused by insufficiently large socket reception buffers in the receiver's operating system. By default, the "LIVE555 Streaming Media" code asks the operating system to allocate at least 50 kBytes of buffer memory for each incoming datagram socket. (Note the call to "increaseReceiveBufferTo()" in "liveMedia/MultiFramedRTPSource.cpp".) However, you can ask to increase this buffer size further, by calling "increaseReceiveBufferTo()" again within your own application code. (Note that "increaseReceiveBufferTo()" returns the actual resulting socket buffer size, as reported by the OS, so you can check the return value to verify the new buffer size.)
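
For example, the following is a minimal sketch of such a call, assuming that "env" is your "UsageEnvironment*" and that "videoSource" is a "RTPSource*" that you have already created (e.g., the result of "MediaSubsession::rtpSource()"); the 2000000-byte request size is just an example:

    #include "GroupsockHelper.hh" // declares "increaseReceiveBufferTo()"

    unsigned newSize = increaseReceiveBufferTo(*env, videoSource->RTPgs()->socketNum(), 2000000);
    *env << "Socket receive buffer size is now " << newSize << " bytes\n";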

It's important to understand that because a LIVE555 Streaming Media application runs as a single thread (never writing to, or reading from, sockets concurrently), if packet loss occurs, then it must be happening either (i) on the network, or (ii) in the operating system of the sender or receiver. There's nothing in our code that can be 'losing' packets.


Is there any more documentation for these libraries?

The best way to understand how to use the libraries is to (i) study the example programs in the "testProgs" directory, (ii) study the library code itself, and (iii) ask questions on the "live-devel" mailing list. (You may also find the 'Doxygen' source code documentation - in particular, the "Medium" class hierarchy - useful.)


I would like to see new feature X added to the code. How soon can you do this for me?

The highest-priority features are those that have been requested by paying consulting clients. If your company is interested in providing funding for the development of a particular feature, please email "support(at)live555.com".

Also, some upcoming features have been given special priority - as LIVE555 Funded Projects. Please contribute to these projects if you wish to see them become part of the released code.

-----

Where is the latest version of the library code? What version of the library do I currently have?

The latest version of the "LIVE555 Streaming Media" source code can be found at http://www.live555.com/liveMedia/public/. Specifically, the latest version of the code is http://www.live555.com/liveMedia/public/live555-latest.tar.gz

Note: You should avoid using software named "live555" that you might find on other web sites (including 'software repository' sites such as GitHub). We do not endorse these versions (note that we do not provide our code from a source code repository); they may contain unknown modifications, and/or bugs (including security vulnerabilities) that have been fixed in the latest version.

To see which version of the code you currently have, look at the file "liveMedia/include/liveMedia_version.hh".


What is the copyright on the source code, and how is it licensed? What are my obligations under this license?

The source code is Copyright Live Networks, Inc. All Rights Reserved. It is licensed under the GNU LGPL.

This FAQ is not a legal document. If you have any questions about your compliance with the LGPL and its conditions, please consult your copyright attorney. However, your obligations under the LGPL include (but are not necessarily limited to) making the source code of the LIVE555 library - including any modifications that you have made to it - available to the users of your product.

Dozens of companies throughout the world have successfully used this software under the LGPL license. If, however (after consulting with your company's copyright attorney), you feel that you have problems complying with the LGPL, then please contact us by emailing "support (at) live555.com", and we can discuss possible options/alternatives.


What is the best way to modify or extend the functionality of the code?

To add new functionality to the code, you should not modify the existing code (unless this is unavoidable). Instead, use C++ subclassing. Add your new subclass definitions and implementations in a separate directory (i.e., separate from the "live/" directory that contains the supplied source code). That way, you can easily upgrade to new versions of the supplied source code - simply by replacing the "live/" directory - without affecting your own new code.

Note also that subclassing the code considerably simplifies your obligations under the LGPL. If you modify the supplied code, and then release a product based on these modifications, then - as noted above - you are required to also make your modified source code available. If, instead, you subclass the supplied code (without modifying it), you are not required to release your subclass code (nor the rest of your application code). Your application can be 'closed source'.


I want to subclass one of the supplied C++ classes, but I can't because some definitions that I need are "private:" rather than "protected:". What can I do about this?

Send an email to the "live-devel" mailing list, and we'll try to accommodate this in the next release of the software. (Be aware, however, that not every such request will be accepted, because by design, some member functions and variables are not intended to be accessible to subclasses, or accessible from outside the class hierarchy.)


Do you make available or support old versions of the library?

No. Because the latest version of the library code contains bug fixes and improvements (possibly including fixes to security vulnerabilities), older versions of the code are not supported. Developers are expected to work with the latest version of the code. (Fortunately, major API changes happen rarely.)

It's important to understand that this software - unlike some other open source projects - does not have separate 'stable' and 'experimental' releases. Instead, there's just one release, and it can be considered 'stable'.


Why do you not make the code available under a 'source code repository'?

Unlike some other open source projects, the source code for this project is provided as a 'tarball', rather than in a source code repository - because old versions of the code are not supported. (A source code repository might also encourage developers to extend the source code by modifying it 'in place' (and then upgrading the code by 'merging diffs'). As noted above, modifying the supplied code 'in place' is something that we discourage; instead, developers should use C++ subclassing to extend the code.)

-----

I have successfully used the "testRTSPClient" demo application to receive a RTSP/RTP stream. Using this application code as a model, how can I decode the received video (and/or audio) data?

The "testRTSPClient" demo application receives each (video and/or audio) frame into a memory buffer, but does not do anything with the frame data. You can, however, use this code as a model for a 'media player' application that decodes and renders these frames. Note, in particular, the "DummySink" class that the "testRTSPClient" demo application uses - and the (non-static) "DummySink::afterGettingFrame()" function. When this function is called, a complete 'frame' (for H.264 or H.265, this will be a "NAL unit") will have already been delivered into "fReceiveBuffer". Note that our "DummySink" implementation doesn't actually do anything with this data; that's why it's called a 'dummy' sink.

If you want to decode (or otherwise process) these frames, you would replace "DummySink" with your own "MediaSink" subclass. Its "afterGettingFrame()" function would pass the data (at "fReceiveBuffer", of length "frameSize") to a decoder. (A decoder would also use the "presentationTime" timestamp to properly time the rendering of each frame, and to synchronize audio and video.)

If you are receiving H.264 video data, there is one more thing that you have to do before you start feeding frames to your decoder. H.264 streams have out-of-band configuration information ("SPS" and "PPS" NAL units) that you may need to feed to the decoder to initialize it. To get this information, call "MediaSubsession::fmtp_spropparametersets()" (on the video 'subsession' object). This will give you a (ASCII) character string. You can then pass this to "parseSPropParameterSets()" (defined in the file "include/H264VideoRTPSource.hh"), to generate binary NAL units for your decoder.

(If you are receiving H.265 video, then you do the same thing, except that there are three separate configuration strings, obtained by calling "MediaSubsession::fmtp_spropvps()", "MediaSubsession::fmtp_spropsps()", and "MediaSubsession::fmtp_sproppps()". Pass each of these three strings, in turn, to "parseSPropParameterSets()", and feed the resulting binary NAL units to your decoder.)
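
The following is a rough sketch of this step for H.264, assuming that "subsession" is the video "MediaSubsession*"; "feedNALUnitToDecoder()" is a placeholder for however your particular decoder accepts NAL units:

    #include "liveMedia.hh" // "parseSPropParameterSets()" is declared in "H264VideoRTPSource.hh"

    unsigned numSPropRecords;
    SPropRecord* sPropRecords
      = parseSPropParameterSets(subsession->fmtp_spropparametersets(), numSPropRecords);
    for (unsigned i = 0; i < numSPropRecords; ++i) {
      feedNALUnitToDecoder(sPropRecords[i].sPropBytes, sPropRecords[i].sPropLength); // placeholder
    }
    delete[] sPropRecords;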

When I try to receive a stream using the "openRTSP" command-line client, the RTSP protocol exchange appears to work OK, but the resulting data file(s) are empty. What's wrong?

RTP/UDP media (audio and/or video) packets from the server are not reaching the client, most likely because there is a firewall somewhere in between that is blocking UDP packets. (Note that the RTSP protocol uses TCP, not UDP.) To correct this, either fix your firewall, or else request RTP-over-TCP streaming, using the "-t" option to "openRTSP".

If, instead, you're using the "testRTSPClient" demo application, note the line

    #define REQUEST_STREAMING_OVER_TCP False
If you change "False" to "True", then the "testRTSPClient" client will request RTP-over-TCP streaming.

The "test*Streamer" test programs read from a file. Can I modify them so that they take input from a H.264, H.265, or MPEG encoder instead, so I can stream live (rather than prerecorded) video and/or audio?

Yes. The easiest way to do this is to change the appropriate "test*Streamer.cpp" file to read from "stdin" (instead of "test.*"), and then pipe the output of your encoder to (your modified) "test*Streamer" application. (Even simpler, if your operating system represents the encoder device as a file, then you can just use the name of this file (instead of "test.*").)

Alternatively, if your encoder presents you with a sequence of frames (or 'NAL units'), rather than a sequence of bytes, then a more efficient solution would be to write your own "FramedSource" subclass that encapsulates your encoder, and delivers audio or video frames directly to the appropriate "*RTPSink" object. This avoids the need for an intermediate 'framer' filter that parses the input byte stream. (If, however, you are streaming H.264, H.265, or MPEG-4 (or MPEG-2 video with "B" frames), then you should insert the appropriate "*DiscreteFramer" filter between your source object and your "*RTPSink" object.)

For a model of how to do that, see "liveMedia/DeviceSource.cpp" (and "liveMedia/include/DeviceSource.hh"). You will need to fill in parts of this code to do the actual reading from your encoder.
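
The following is a hedged sketch of such a "FramedSource" subclass (simplified from the "DeviceSource" model); "myEncoderReadFrame()" is a placeholder for your encoder's own API, assumed to copy one encoded frame into the given buffer (at most "maxSize" bytes) and return its size:

    #include "FramedSource.hh"
    #include <sys/time.h>

    unsigned myEncoderReadFrame(unsigned char* to, unsigned maxSize); // placeholder, not part of LIVE555

    class MyEncoderSource: public FramedSource {
    public:
      static MyEncoderSource* createNew(UsageEnvironment& env) { return new MyEncoderSource(env); }
    protected:
      MyEncoderSource(UsageEnvironment& env) : FramedSource(env) {}
    private:
      virtual void doGetNextFrame() {
        // Deliver the next encoded frame directly into the downstream object's buffer:
        fFrameSize = myEncoderReadFrame(fTo, fMaxSize);
        gettimeofday(&fPresentationTime, NULL);
        // Tell the downstream object that the frame is ready.  (A real implementation would
        // usually arrange for this to happen from an event handler or 'event trigger' instead,
        // so that "doGetNextFrame()" never blocks waiting for the encoder.)
        FramedSource::afterGetting(this);
      }
    };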


But what about the "testOnDemandRTSPServer" test program (for streaming via unicast)? How can I modify it so that it takes input from a live source instead of from a file?

First, you will need to modify "testProgs/testOnDemandRTSPServer.cpp" to set the variable "reuseFirstSource" to "True". This tells the server to use the same input source object, even if more than one client is streaming from the server concurrently.

Then, as above, if your input device is accessible by a file name (including "stdin" for standard input), then simply replace the appropriate "test.*" file name with the file name of your input device.

If, however, you have written your own "FramedSource" subclass (e.g., based on "DeviceSource", as noted above) to encapsulate your input source, then the solution is a little more complicated. In this case, you will also need to define and implement your own new subclass of "OnDemandServerMediaSubsession" that gets its input from your live source, rather than from a file. In particular, you will need to provide your own implementation of the two pure virtual functions "createNewStreamSource()" and "createNewRTPSink()". For a model of how to do this, see the existing "FileServerMediaSubsession" subclass that is used to stream your desired type of data from an input file. (For example, if you are streaming H.264 video, you would use "H264VideoFileServerMediaSubsession" as a model.)
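
The following is a hedged sketch of such a subclass, for live H.264 video, modeled on "H264VideoFileServerMediaSubsession"; "MyEncoderSource" is the hypothetical live-source class sketched in the previous answer, and the bitrate estimate is just an example:

    #include "liveMedia.hh"

    class MyLiveH264Subsession: public OnDemandServerMediaSubsession {
    public:
      static MyLiveH264Subsession* createNew(UsageEnvironment& env) {
        return new MyLiveH264Subsession(env);
      }
    protected:
      MyLiveH264Subsession(UsageEnvironment& env)
        : OnDemandServerMediaSubsession(env, True/*reuseFirstSource*/) {}

      virtual FramedSource* createNewStreamSource(unsigned /*clientSessionId*/,
                                                  unsigned& estBitrate) {
        estBitrate = 500; // kbps, an estimate
        // Feed the live source through a 'discrete framer', as noted above:
        return H264VideoStreamDiscreteFramer::createNew(envir(), MyEncoderSource::createNew(envir()));
      }
      virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock,
                                        unsigned char rtpPayloadTypeIfDynamic,
                                        FramedSource* /*inputSource*/) {
        return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
      }
    };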


Can the RTSP server implementation (e.g., as demonstrated by the "testOnDemandRTSPServer" test program) stream to set-top boxes (STBs)?

Yes, our RTSP server implementation can, in principle, stream to RTSP-compliant STBs, although in practice there are some issues to note. In particular, our RTSP server implementation (including the "LIVE555 Media Server" and the "testOnDemandRTSPServer" demo application) can stream MPEG Transport Stream data (with 'trick play' support) to Amino STBs (in particular, the AmiNet models 103 and 110). Note that, for this to work, the STB's RTSP client software must be configured to use "nCube" mode (the default?), not "Oracle" or "Mediabase" mode. Also, the input source (to the RTSP server) must be a MPEG-2 Transport Stream.


Does the RTSP implementation (client and/or server) support 'trick mode' operations (i.e., seek, fast-forward, reverse play)?

When talking about "trick mode support", it's important to distinguish between RTSP client support, and RTSP server support.

Our RTSP client implementation fully supports 'trick play' operations. Note the "start", "end" and "scale" parameters to "RTSPClient::sendPlayCommand()". (Note also that our "openRTSP" demo RTSP client application has command-line options that can be used to demonstrate client 'trick play' operations.)

Our RTSP server implementation also supports 'trick play' operations, but note that parts of this are (necessarily) media type specific. I.e., there has to be some new code added for each different type of media file that we wish to stream. This functionality has already been provided for some types of media file.

To add 'trick play' support for a media type (that does not already support it), changes need to be made to the corresponding subclass of "ServerMediaSubsession":

  1. To add support for seeking within a stream, you will need to implement virtual functions such as "duration()" and "seekStreamSource()".
  2. To add support for 'fast forward' and/or 'reverse play', you will also need to implement virtual functions such as "testScaleFactor()" and "setStreamSourceScale()".


The "test*Streamer" and "test*Receiver" test programs use multicast. Can I modify them to use unicast instead?

Yes, you can do this, but you should first convince yourself that this is something that you really want to do. If you're streaming over a LAN, then you should continue to use multicast - it's simpler, and allows more than one receiver to access the stream, without data duplication. The only time you should consider using unicast is if you are streaming over a wider-area network that does not support multicast routing. (Note also that the RTSP server that's built in to the "test*Streamer" programs does not work with unicast streams. To play unicast streams from a RTSP server, you should instead use the existing "testOnDemandRTSPServer" test program or the "LIVE555 Media Server" as a model. This is usually better than trying to modify one of the "test*Streamer" applications.)

If you still wish to change the "test*Streamer" programs to stream using unicast, then do the following:

  1. In "test*Streamer.cpp", change "destinationAddressStr" to the (unicast) IP address of the intended destination.
  2. In the corresponding "test*Receiver.cpp", change "sessionAddressStr" to "0.0.0.0".
  3. (optional) If you also want to send RTCP packets (e.g., RTCP Receiver Reports) back to the streaming server, then you will also need to do the following - in "test*Receiver.cpp" - after you've created "rtcpGroupsock". (In this example, suppose that the streaming server has IP address "10.20.30.40" and uses port 6667 for RTCP.):
        struct in_addr serverAddress;
        serverAddress.s_addr = our_inet_addr("10.20.30.40");
        rtcpGroupsock.changeDestinationParameters(serverAddress, 6667, 255);
    


For many of the "test*Streamer" test programs, the built-in RTSP server is optional (and disabled by default). For "testAMRAudioStreamer", "testMPEG4VideoStreamer", "testH264VideoStreamer", "testH265VideoStreamer", and "testWAVAudioStreamer", however, the built-in RTSP server is mandatory. Why?

For those media types (AMR audio, MPEG-4 video, H.264 video, H.265 video, and PCM audio, respectively), the stream includes some codec-specific parameters that are communicated to clients out-of-band, in a SDP description. Because these parameters - and thus the SDP description - can vary from stream to stream, the only effective way to communicate this SDP description to clients is using the standard RTSP protocol. Therefore, the RTSP server is a mandatory part of these test programs.


Where can I find an example of a MPEG-4 Elementary Stream video file that I can use (as "test.m4e") in the "testMPEG4VideoStreamer" or "testOnDemandRTSPServer" demo applications (or "live555MediaServer")?

One way to get a MPEG-4 Video Elementary Stream file is to find a public MPEG-4 RTSP/RTP stream, and then run "openRTSP" on it.

If you search in an online search engine for

+"rtsp://" +".mp4"
then you may find some "rtsp://" URLs for streams that contain MPEG-4 video content. You can try receiving some of these using "openRTSP" (add the "-t" option if you're behind a firewall).

This should give you two files: "video-MP4V-ES-1" and "audio-MPEG4-GENERIC-2". (If, instead, you get a file "video-H264-1", then this is H.264 video, not MPEG-4 video. Try again with another stream.) Rename the file "video-MP4V-ES-1" as "test.m4e", and you will be able to use it in "testMPEG4VideoStreamer" and "testOnDemandRTSPServer".

We have also made some example MPEG-4 Video Elementary Stream (".m4e") files available online here.


Where can I find an example of a H.264 Elementary Stream video file that I can use (as "test.264") in the "testH264VideoStreamer" or "testOnDemandRTSPServer" demo applications (or "live555MediaServer")?

As noted in the answer to the previous question, you may be able to find some "rtsp://" URLs for online streams that contain H.264 video content. You can then use "openRTSP" to record a portion of these streams.

We have also made some example H.264 Video Elementary Stream (".264") files available online here.


Where can I find an example of a H.265 Elementary Stream video file that I can use (as "test.265") in the "testH265VideoStreamer" or "testOnDemandRTSPServer" demo applications (or "live555MediaServer")?

We have made an example file available online here.


Where can I find an example of a AAC Audio (ADTS format) file that I can use (as "test.aac") in the "testOnDemandRTSPServer" demo application (or "live555MediaServer")?

We have made some example files available online here.


How can I stream JPEG video via RTP? There is no demo application for this.

See here and here.

You should be aware, though, that JPEG is a very poor codec for video streaming, because (unlike MPEG, H.264 or H.265 video) there is no inter-frame compression. Every video frame is a 'key' frame, and is sent in its entirety. Also, each frame is typically large (and so takes up many network packets). If any of these network packets gets lost, then the whole frame must be discarded. JPEG video streaming is strongly discouraged, and should be considered (if at all) only for high-bitrate local-area networks with very low packet loss.


When I ran "testMPEG1or2VideoStreamer", I saw several error messages like "saw unexpected code 0x000001e0". What's wrong?

By default, "testMPEG1or2VideoStreamer" assumes that its input is a MPEG (1 or 2) Video Elementary Stream - i.e., a stream that consists only of MPEG video. Your input is probably instead a MPEG Program Stream - a stream that consists of both video and audio, multiplexed together. You can play this stream by uncommenting the line
    #define SOURCE_IS_PROGRAM_STREAM 1
in "testMPEG1or2VideoStreamer.cpp". Alternatively, you could run "testMPEG1or2AudioVideoStreamer" instead of "testMPEG1or2VideoStreamer" (and thereby stream audio as well as video).


When I stream a MP3 file (using "testMP3Streamer" or "testOnDemandRTSPServer"), I find that QuickTime Player will not play the stream. What's wrong?

This is a known (and longstanding) bug in QuickTime Player: It cannot play MP3 audio RTP streams. (It will play MP3 files OK, and will play MPEG layer I or layer II audio RTP streams - but not MPEG layer III (i.e., MP3) RTP streams.)

Blame Apple for this. They have known about this bug for many years, but - for some odd reason - do not consider it a high priority bug.

Instead, we recommend that you use the VLC media player.


The calls to "doEventLoop()" in the test programs do not return. How can I arrange for "doEventLoop" to return - e.g., if the user clicks on a "stop" button in a GUI?

"TaskScheduler::doEventLoop()" takes an optional "watchVariable" parameter that can be used for this purpose. (By setting this variable - perhaps from an external thread - you can signal the event loop (i.e. "doEventLoop()) to exit.) See the definition of "TaskScheduler::doEventLoop()" in the file "UsageEnvironment/include/UsageEnvironment.hh".


The event loop implementation provided in the source code (i.e., "BasicTaskScheduler") can handle file/socket I/O events, or delayed (or periodic) tasks. How can I have the event loop handle other kinds of event (perhaps signaled from a separate thread, e.g., running a user interface)?

One way to do this is to use the "EventTrigger" mechanism that's defined for the "TaskScheduler" class. This lets you define a procedure that will be called - from within the event handler - whenever your custom event is later 'triggered'. (Note that "triggerEvent()" is the only LIVE555 function that may be called from an external (i.e., non-LIVE555) thread.)
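
For example, here is a minimal sketch of this mechanism; "scheduler" is assumed to be your "TaskScheduler*", and "myEventHandler()" is whatever handler you define:

    // The handler runs later, from within the LIVE555 event loop:
    void myEventHandler(void* clientData) {
      // ... respond to the external event ...
    }

    // One-time setup (done from the LIVE555 thread):
    EventTriggerId myEventTriggerId = scheduler->createEventTrigger(myEventHandler);

    // Later - from any thread - signal that the event has occurred:
    scheduler->triggerEvent(myEventTriggerId, NULL/*clientData*/);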

Alternatively, you could subclass "TaskScheduler" to implement your own event loop - but that is more difficult.


I tried using one of the test programs to stream my file, but it didn't work. Why?

First, are you sure that your file is of the correct type? (For example, if you are using "testMPEG1or2VideoStreamer", then your input file ("test.mpg") must be a MPEG Video Elementary Stream file.)

If you're sure that your file is of the correct type, then please put the file on a publicly-accessible web (or FTP) server, and post the URL (not the file itself) to the "live-devel" mailing list, and we'll take a look at it, to see if we can figure out what's wrong.


The test programs worked OK for me, but then I modified one of them, and it no longer works. What's wrong?

Since we don't know what modifications you made, we can't tell :-) But remember: You have complete source code! You began with one of the test programs - code that already works - and then you modified it. Therefore, you should have all the information that you need to figure out what's wrong with your program. (Of course, if you find a genuine bug in the LIVE555 Streaming Media code, then please post it to "live-devel" mailing list.)

-----

I tried to play a "rtsp://" URL (using testRTSPClient, openRTSP, VLC, or MPlayer), or a "sip:" URL (using playSIP), but I got an error message "RTP payload format unknown or not supported". Why?

The problem here is that the "liveMedia" library does not support the "RTP payload format" that is used to stream data with this particular codec.

An "RTP payload format" for a codec is a set of rules that define how the codec's media frames are packed within RTP packets. This is usually defined by an IETF RFC, or - for newer payload formats - an IETF Internet-Draft. However, a few RTP payload formats (usually those whose MIME subtype begins with "X-") are proprietary, are not defined in publically-available documents.

The "liveMedia" library supports many, but not all, RTP payload formats. If you encounter a RTP payload format that is not supported, but which is defined by a publically-available document, then we may be able to add support for it, if there is sufficient interest.


Why do most RTP sessions use separate streams for audio and video? How can a receiving client synchronize these streams?

Sending audio and video in separate RTP streams provides a great deal of flexibility. For example, this makes it possible for a player to receive only the audio stream, but not video (or vice-versa). It would even be possible to have one computer receive and play audio, and a separate computer receive and play video.

These audio and video streams are synchronized using RTCP "Sender Report" (SR) packets - which map each stream's RTP timestamp to 'wall clock' (NTP) time. For more information, see the IETF's RTP/RTCP specification.

Receivers can then use this mapping to synchronize the incoming RTP streams. The LIVE555 Streaming Media code does this automatically: For subclasses of "RTPSource", the "presentationTime" parameter that's passed to the 'afterGettingFunc' of "getNextFrame()" (see "liveMedia/include/FramedSource.hh") will be an accurate, time-synchronized time. (For this to work, you need to have also created a "RTCPInstance" for each RTP source.)

For example, if you use "openRTSP" to receive RTSP/RTP streams, then the contents of each RTP stream (audio and video) are written into separate files. This is done using the "FileSink" class. If you look at the "FileSink::afterGettingFrame()" member function, you'll notice that there's a "presentationTime" parameter for each incoming frame. Some other receiver could use the "presentationTime" parameter to synchronize audio and video.


But I notice that there's an abrupt change in a stream's presentation times after the first RTCP "SR" packet has been received. Is this a bug?

No, this is normal, and expected; there's no bug here. This happens because the first few presentation times - before RTCP synchronization occurs - are just 'guesses' made by the receiving code (based on the receiver's 'wall clock' and the RTP timestamp). However, once RTCP synchronization occurs, all subsequent presentation times will be accurate.

This means that a receiver should be prepared for the fact that the first few presentation times (until RTCP synchronization starts) will not be accurate. The code, however, can check this by calling "RTPSource::hasBeenSynchronizedUsingRTCP()". If this returns False, then the presentation times are not accurate, and should not be used for synchronization. However, once this call returns True, then the presentation times (from then on) will be accurate.
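
For example (a minimal sketch, assuming that "rtpSource" is your "RTPSource*" - e.g., "subsession->rtpSource()"):

    if (rtpSource->hasBeenSynchronizedUsingRTCP()) {
      // From now on, each frame's "presentationTime" is RTCP-synchronized,
      // and can safely be used to synchronize this stream with other streams.
    }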


I am developing a RTP receiver (client). I need access to incoming packets' RTP timestamps.

No, you don't. All you need is the "presentation time". Once RTCP synchronization has started, the presentation time is an exact representation of the RTP timestamp, so the LIVE555 libraries do not expose RTP timestamps to the application developer; doing so would give you no more information (and would only lead to potential confusion). The LIVE555 library automatically converts RTP timestamps to presentation times (and vice versa for servers). Applications that use the LIVE555 libraries never need to concern themselves with RTP timestamps.


I have a general question about RTP/RTCP, RTSP, or SIP - not specifically related to the LIVE555 Streaming Media software. Where can I get more information?

RTP/RTCP is standardized by the IETF's Audio/Video Transport ("avt") working group. In particular, note the RTP/RTCP specification. Also, an excellent book that covers RTP/RTCP in detail is "RTP: Audio and Video for the Internet" by Colin Perkins.

RTSP is standardized by the IETF's Multiparty Multimedia Session Control ("mmusic") working group. (For more information, see www.rtsp.org)

SIP is standardized by the IETF's Session Initiation Protocol ("sip") and Session Initiation Proposal Investigation ("sipping") working groups. (For more information, see Henning Schulzrinne's site.)

-----

How do I ask questions about the "LIVE555 Streaming Media" software (including the "LIVE555 Media Server", the "LIVE555 Proxy Server", and the "LIVE555 HLS Proxy")?

Support for the "LIVE555 Streaming Media" software (including the "LIVE555 Media Server", "LIVE555 Proxy Server", and "LIVE555 HLS Proxy") is handled via the
        live-devel@lists.live555.com
mailing list. Note, however, that before you can post to the mailing list, you must first subscribe to it. Note also (as explained below) that we do not accept postings to the mailing list from unprofessional email addresses ("@gmail.com", etc.).


Why do I need to subscribe to the mailing list before I can post to it?

This is standard for all Internet mailing lists. It helps protect against spam.

(Note that you must subscribe to the mailing list using the same "From:" address that you intend to use to later post messages to the list.)


When I posted a message to the mailing list, I received a response saying that my message was being moderated. Why?

This provides additional protection against spam (because spammers have been known to occasionally forge the "From:" addresses in their messages). Everyone's first posting to the mailing list will be moderated before it gets sent to the list.

(If you learn that your message to the mailing list is being moderated, then please wait until it gets approved and delivered before you post another message to the list - otherwise (to prevent abuse of the mailing list) only the last such message may get approved.)


Why do you discriminate against people who use unprofessional email addresses ("@gmail.com" etc.)?

Anyone can subscribe to the mailing list, to receive messages. (Also, of course, anyone can read the mailing list's messages by reading the online archives.) However, to post to this mailing list, you need to be using a professional email address - i.e., one whose domain name identifies an organization (company or school) that you are affiliated with, or at least a personal custom domain name - not a public email service or 'portal'.

This software has always been intended for use by professional software developers: People who are affiliated with corporations using this software commercially (or schools using this software for research). As the software is open source, however, hobbyists are also free to use it, but to post to this mailing list, they are expected to demonstrate at least a minimal level of 'cluefulness' by using an email address with their own domain name - not just a generic "@gmail", "@yahoo", etc.-type email address. (Note that the underlying email service is not the problem; you can use your own domain name even with a public web-based email service like 'Gmail' or 'Yahoo Mail'.)


Why did nobody answer my question?

Not every question that's posted to this mailing list will get answered. If nobody answers your question, then it might simply be because nobody knows the answer. This might be because your question was specific to your particular environment and/or application (which the rest of us may know little about). Or perhaps it was because you made modifications to the supplied library code. (This is frowned upon; the best way to extend the library code's functionality is via subclassing.) Or perhaps it was because your question can be answered by reading this FAQ. Or perhaps people did not find your question interesting enough to respond to.


I posted a question to the mailing list, but nobody answered it. Can I post it again?

Absolutely not! This is basic mailing list 'netiquette'.

Once your question is posted to the mailing list (you can check this by looking at the list's archives), then rest assured that hundreds of people will get to see it. But sometimes, a question does not get answered (see above). If that happens, then sorry - but do not send the question to the list again.

-----


Live Networks, Inc. (LIVE555.COM)