MMUSIC Working Group                                    M. Garcia-Martin
Internet-Draft                                              M. Willekens
Intended status: Informational                    Nokia Siemens Networks
Expires: May 19, 2008                                              P. Xu
                                                     Huawei Technologies
                                                       November 16, 2007


    Multiple Packetization Times in the Session Description Protocol
                (SDP): Problem Statement & Requirements
           draft-garcia-mmusic-multiple-ptimes-problem-01.txt

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 19, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

Abstract

   This document provides a problem statement and requirements with
   respect to the presence of a single packetization time (ptime/
   maxptime) attribute in SDP media descriptions that contain several
   media formats (audio codecs).

Table of Contents

   1.  Introduction
   2.  Some Definitions
   3.  Some references
   4.  Problem Statement
   5.  Requirements
   6.  Solutions already proposed
     6.1.  Method 1
     6.2.  Method 2
     6.3.  Method 3
     6.4.  Method 4
     6.5.  Method 5
     6.6.  Method 6
     6.7.  Method 7
     6.8.  Method 8
     6.9.  Method 9
     6.10. Method 10
   7.  Conclusion and next steps
   8.  Security Considerations
   9.  IANA Considerations
   10. References
     10.1. Normative References
     10.2. Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements
1.  Introduction

   The Session Description Protocol (SDP) [1] provides a format for
   describing multimedia sessions for the purposes of session
   announcement, session invitation, and other forms of multimedia
   session initiation.  A session description in SDP includes the
   session name and purpose, the media comprising the session, the
   information needed to receive the media (addresses, ports, formats,
   etc.) and some other information.

   In the SDP media description part, the m-line contains the media
   type (e.g., audio), a transport port, a transport protocol (e.g.,
   RTP/AVP) and a media format description that depends on the
   transport protocol.  For the transport protocols RTP/AVP and
   RTP/SAVP, the media format sub-field can contain a list of RTP
   payload type numbers; see the RTP Profile for Audio and Video
   Conferences with Minimal Control [15], Table 4.  For example,
   "m=audio 49232 RTP/AVP 3 15 18" indicates the audio encoders GSM,
   G728 and G729.

   Further, the media description part can contain additional attribute
   lines that complement or modify the media description line.  Of
   interest for this memo are the 'ptime' and 'maxptime' attributes.
   According to RFC 4566 [1], the 'ptime' attribute gives the length of
   time in milliseconds represented by the media in a packet, and
   'maxptime' gives the maximum amount of media that can be
   encapsulated in each packet, expressed as time in milliseconds.
   These attributes modify the whole media description line, which can
   contain an extensive list of payload types.  In other words, these
   attributes are not specific to a given codec.

   RFC 4566 [1] also indicates that it should not be necessary to know
   'ptime' to decode RTP or vat audio, since the 'ptime' attribute is
   intended as a recommendation for the encoding/packetization of
   audio.  However, once more, the existing 'ptime' attribute defines
   the desired packetization time for all the payload types declared in
   the corresponding media description line.

   End-devices can sometimes be configured with several codecs, and a
   different packetization time can be indicated for each codec.
   However, there is no clear way to exchange this type of information
   between different user agents, and this can result in lower voice
   quality, network problems or performance problems in the
   end-devices.

2.  Some Definitions

   The Session Description Protocol (SDP) [1] defines the 'ptime' and
   'maxptime' attributes as follows:

   a=ptime:<packet time>

      This gives the length of time in milliseconds represented by the
      media in a packet.  This is probably only meaningful for audio
      data, but may be used with other media types if it makes sense.
      It should not be necessary to know ptime to decode RTP or vat
      audio, and it is intended as a recommendation for the encoding/
      packetization of audio.  It is a media-level attribute, and it is
      not dependent on charset.

   a=maxptime:<maximum packet time>

      This gives the maximum amount of media that can be encapsulated
      in each packet, expressed as time in milliseconds.  The time
      SHALL be calculated as the sum of the time the media present in
      the packet represents.  For frame-based codecs, the time SHOULD
      be an integer multiple of the frame size.  This attribute is
      probably only meaningful for audio data, but may be used with
      other media types if it makes sense.  It is a media-level
      attribute, and it is not dependent on charset.  Note that this
      attribute was introduced after RFC 2327, and non-updated
      implementations will ignore this attribute.
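   As an illustration (this fragment is constructed for this memo and
   is not taken from [1]), the media description below reuses the
   example of Section 1 and adds both attributes.  Because 'ptime' and
   'maxptime' are media-level attributes, the single values shown apply
   equally to GSM, G728 and G729:

      m=audio 49232 RTP/AVP 3 15 18
      a=ptime:20
      a=maxptime:40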
   RFC 4566 [1] further notes, in the definition of the 'rtpmap'
   attribute:

      Additional encoding parameters MAY be defined in the future, but
      codec-specific parameters SHOULD NOT be added.  Parameters added
      to an "a=rtpmap:" attribute SHOULD only be those required for a
      session directory to make the choice of appropriate media to
      participate in a session.  Codec-specific parameters should be
      added in other attributes (for example, "a=fmtp:").

      Note: RTP audio formats typically do not include information
      about the number of samples per packet.  If a non-default (as
      defined in the RTP Audio/Video Profile) packetization is
      required, the "ptime" attribute is used as given above.

3.  Some references

   Many RFCs refer to the 'ptime' and 'maxptime' attributes to provide
   definitions, recommendations, requirements or default values.

   SDP [1] gives the definitions of 'ptime' and 'maxptime'.

   The SDP offer/answer model [2] gives some requirements on 'ptime'
   for the offerer and the answerer.  If the ptime attribute is present
   for a stream, it indicates the desired packetization interval that
   the offerer would like to receive.  The ptime attribute MUST be
   greater than zero.  The answerer MAY include a non-zero ptime
   attribute for any media stream; this indicates the packetization
   interval that the answerer would like to receive.  There is no
   requirement that the packetization interval be the same in each
   direction for a particular stream.

   The SDP transport independent bandwidth modifier [6] indicates that
   'ptime' could be considered as an input for bandwidth calculations,
   but recommends that it not be used for that purpose and proposes the
   use of another parameter instead.

   SDP conventions for ATM bearer connections [7]: it is not
   recommended that 'ptime' be used in ATM applications, since packet
   period information is provided with other parameters (e.g., the
   profile type and number in the 'm' line, and the 'vsel', 'dsel' and
   'fsel' attributes).  Also, for AAL1 applications, 'ptime' is not
   applicable and should be flagged as an error.  If used in AAL2 and
   AAL5 applications, 'ptime' should be consistent with the rest of the
   SDP description.  The 'vsel', 'dsel' and 'fsel' attributes refer
   generically to codecs.  These can be used for service-specific codec
   negotiation and assignment in non-ATM as well as ATM applications.
   The 'vsel' attribute indicates a prioritized list of one or more
   3-tuples for voice service.  Each 3-tuple indicates a codec, an
   optional packet length and an optional packetization period.  This
   complements the 'm' line information and should be consistent with
   it.  The 'vsel' attribute refers to all directions of a connection.
   For a bidirectional connection, these are the forward and backward
   directions.  For a unidirectional connection, this can be either the
   backward or forward direction.  The 'vsel' attribute is not meant to
   be used with bidirectional connections that have asymmetric codec
   configurations described in a single SDP descriptor.  For these, the
   'onewaySel' attribute should be used.  The 'vsel' line is structured
   with an encodingName, a packetLength and a packetTime.  The
   packetLength is a decimal integer representation of the packet
   length in octets.  The packetTime is a decimal integer
   representation of the packetization interval in microseconds.  The
   parameters packetLength and packetTime can be set to "-" when not
   needed.  Also, the entire 'vsel' media attribute line can be omitted
   when not needed.
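   For illustration (this line follows the format defined in [7] and
   also appears as part of Method 8 in Section 6.8), a 'vsel' attribute
   selecting G729 with a packet length of 10 octets and a packetization
   interval of 10 ms (10000 microseconds) would be written as:

      a=vsel:G729 10 10000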
   The SIP/SDP static dictionary for SigComp [8].

   SIP device requirements and configuration [9]: in some cases,
   operators want to control which codecs may be used in their network.
   The desired subset of codecs supported by the device SHOULD be
   configurable, along with the order of preference.  Service providers
   SHOULD have the possibility of plugging in their own codecs of
   choice.  The codec settings MAY include the packet length and other
   parameters like silence suppression or comfort noise generation.
   The set of available codecs will be used in the codec negotiation
   according to RFC 3264.  Example:
   Codecs="speex/8000;ptime=20;cng=on,gsm;ptime=30"

   RTSP [10]: format-specific parameters are conveyed using the "fmtp"
   media attribute.  The syntax of the "fmtp" attribute is specific to
   the encoding(s) that the attribute refers to.  Note that the
   packetization interval is conveyed using the "ptime" attribute.

   MGCP [11]: the packetization period in milliseconds is encoded as
   the keyword "p", followed by a colon and a decimal number.  If the
   Call Agent specifies a range of values, the range will be specified
   as two decimal numbers separated by a hyphen (as specified for the
   "ptime" parameter for SDP).

   The MGCP ATM package [12]: packet time changed ("ptime(#)"): if
   armed via an R:atm/ptime, a media gateway signals a packetization
   period change through an O:atm/ptime.  The decimal number in
   parentheses is optional; it is the new packetization period in
   milliseconds.  In AAL2 applications, the pftrans event can be used
   to cover packetization period changes (and codec changes).  Voice
   codec selection (vsel): this is a prioritized list of one or more
   3-tuples describing voice service.  Each vsel 3-tuple indicates a
   codec, an optional packet length and an optional packetization
   period.

   The Gateway Control Protocol [13].

   The registration of the MIME text/red sub-type [14].

   The RTP/AVP profile [15].

   The RTP payload format for MPEG-4 audio/visual streams [16].

   The RTP payload format for G.722.1 [17].

   The RTP payload format for AMR and AMR-WB [18]: the maxptime SHOULD
   be a multiple of the frame size.  If this parameter is not present,
   the sender MAY encapsulate any number of speech frames into one RTP
   packet.

   The RTP payload format for distributed speech recognition [19]: the
   maxptime SHOULD be a multiple of the frame pair size (20 ms).  If
   this parameter is not present, maxptime is assumed to be 80 ms.
   Note that, since the performance of most speech recognizers is
   extremely sensitive to consecutive FP losses, if the user of the
   payload format expects a high packet loss ratio for the session, it
   MAY consider explicitly choosing a maxptime value for the session
   that is shorter than the default value.

   The RTP payload format for EVRC and SMV [20]: the parameters
   maxptime and maxinterleave are exchanged at the initial setup of the
   session.  In one-to-one sessions, the sender MUST respect these
   values set by the receiver, and MUST NOT interleave/bundle more
   packets than what the receiver signals that it can handle.  This
   ensures that the receiver can allocate a known amount of buffer
   space that will be sufficient for all interleaving/bundling used in
   that session.  During the session, the sender may decrease the
   bundling value or interleaving length (so that less buffer space is
   required at the receiver), but never exceed the maximum value set by
   the receiver.
   This prevents the situation where a receiver needs to allocate more
   buffer space in the middle of a session but is unable to do so.
   Additionally, senders have the following restrictions: they MUST NOT
   bundle more codec data frames in a single RTP packet than indicated
   by maxptime (see Section 12 of [20]) if it is signaled, and SHOULD
   NOT bundle more codec data frames in a single RTP packet than will
   fit in the MTU of the underlying network.  If maxptime is not
   signaled, the default maxptime value SHALL be 200 milliseconds.

   The RTP payload format for iLBC [21]: the maxptime SHOULD be a
   multiple of the frame size.  This attribute is probably only
   meaningful for audio data, but may be used with other media types if
   it makes sense.  It is a media attribute, and is not dependent on
   charset.  Note that this attribute was introduced after RFC 2327,
   and non-updated implementations will ignore this attribute.  The
   'ptime' parameter cannot be used for the purpose of specifying the
   iLBC operating mode, because for certain values it is impossible to
   distinguish which mode is in use (e.g., when ptime=60, it is
   impossible to tell whether a packet carries 2 frames of 30 ms or 3
   frames of 20 ms).

   The RTP payload format for a 64 kbit/s transparent call [22].

   The RTP payload format for distributed speech recognition [23]: if
   maxptime is not present, maxptime is assumed to be 80 ms.  Note
   that, since the performance of most speech recognizers is extremely
   sensitive to consecutive FP losses, if the user of the payload
   format expects a high packet loss ratio for the session, it MAY
   consider explicitly choosing a maxptime value for the session that
   is shorter than the default value.

   The RTP payload format for AC-3 [24].

   The RTP payload format for BroadVoice speech [25]: the maxptime
   SHOULD be a multiple of the duration of a single codec data frame
   (5 ms).

   The RTP payload format for VMR-WB [26]: the parameters "maxptime"
   and "ptime" should in most cases not affect interoperability;
   however, the setting of these parameters can affect the performance
   of the application.

   The RTP payload format for AMR-WB+ [27].

   The RTP payload MIME type registrations [28].

4.  Problem Statement

   The packetization time is an important parameter that helps in
   reducing the packet overhead.  Many voice codecs use a certain frame
   length to determine the coded voice filter parameters and try to
   find an optimum between the perceived voice quality (measured by the
   Mean Opinion Score (MOS) factor) and the required bitrate.  When a
   packet-oriented network is used for the transfer, the packet header
   induces an additional overhead.  As such, it makes sense to combine
   the data of several voice frames in one packet (up to a Maximum
   Transmission Unit (MTU)) to find a good balance between the required
   network resources, the end-device resources and the perceived voice
   quality, which is influenced by packet loss, packet delay and
   jitter.  When the packet size decreases, the bandwidth efficiency is
   reduced.  When the packet size increases, the packetization delay
   can have a negative impact on the perceived voice quality.

   The RTP Profile for Audio and Video Conferences with Minimal Control
   [15], Table 1, indicates the frame size and default packetization
   time for different codecs.  The G728 codec has a frame size of
   2.5 ms/frame and a default packetization time of 20 ms/packet.  For
   the G729 codec, the frame size is 10 ms/frame and the default
   packetization time is 20 ms/packet.
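   As a rough illustration of this trade-off (the figures below are not
   taken from [15]; they assume the nominal 8 kbit/s rate of G729 and
   uncompressed IPv4/UDP/RTP headers of 20 + 8 + 12 = 40 octets), the
   bandwidth efficiency of G729 can be computed for two packetization
   times:

      ptime=20 ms: 2 frames of 10 octets = 20 octets of payload carried
                   with 40 octets of headers, i.e., the payload is only
                   about 33% of each packet.

      ptime=40 ms: 4 frames = 40 octets of payload with the same 40
                   octets of headers, i.e., a payload share of 50%, at
                   the cost of a higher packetization delay.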
   As more and more telephony traffic is carried over IP networks, the
   quality as perceived by the end-user should be no worse than that of
   the classical telephony services.  For VoIP service providers, it is
   very important that endpoints receive audio with the best possible
   codec and packetization time.  In particular, the packetization time
   depends on the codec selected for the audio communication and on
   other factors, such as the Maximum Transmission Unit (MTU) of the
   network and the type of access network technology.  As such, the
   packetization time is clearly a function of the codec and the
   network access technology.  During the establishment of a new
   session or the modification of an existing session, an endpoint
   should be able to express its preference with respect to the
   packetization time for each codec.  This would mean that the creator
   of the SDP prefers the remote endpoint to use a certain
   packetization time when sending media with that codec.

   RFC 4566 [1] only provides the means for expressing a packetization
   time that affects all the payload types declared in the media
   description line.  Consequently, there is no means to indicate the
   desired packetization time on a per-payload-type basis.

   Implementations have been using proprietary mechanisms for
   indicating the packetization time per payload type, leading to a
   lack of interoperability in this area.  One of these mechanisms is
   the 'maxmptime' attribute, defined in the ITU-T Recommendation V.152
   [3], which "indicates the supported packetization period for all
   codec payload types".  Another one is the 'mptime' attribute,
   defined in the PacketCable Network-Based Call Signaling Protocol
   Specification [4], which indicates "a list of packetization period
   values the endpoint is capable of using (sending and receiving) for
   this connection".  While all of these have similar semantics, there
   is obviously no interoperability between them, creating a nightmare
   for the implementer who happens to be defining a common SDP stack
   for different applications.

   A few RTP payload format descriptions, such as RFC 3267 [18],
   RFC 3016 [16], and RFC 3952 [21], indicate that the packetization
   time for such payloads should be indicated in the 'ptime' attribute
   in SDP.  However, since the 'ptime' attribute affects all the
   payload formats included in the media description line, it is not
   possible to create a media description line that contains all the
   mentioned payload formats together with different packetization
   times.  The workarounds range from accepting a single packetization
   time for all the payload types to creating a media description line
   that contains a single payload type.

   The issue of a given packetization time for a specific codec has
   been captured in past RFCs.  For example, RFC 4504 [9] contains a
   set of requirements for SIP telephony devices.  Section 3.8 of that
   RFC also provides background information on the need for a
   packetization time, which could be set by either the user or the
   administrator of the device, on a per-codec basis.  However, once
   more, if several payload formats are offered in the same media
   description line in SDP, there is no way to indicate different
   packetization times per payload format.

   Below is an example that indicates how the ptime can cause
   interworking problems between different implementations.

      m=audio 1234 RTP/AVP 0 4 8
      a=ptime:30

                                Example 1
   The media formats 0 and 8 are PCM u-law and A-law, which are
   sample-based codecs with a default packetization time of 20 ms.
   However, a packetization time of 30 ms can also be used.  The media
   format 4 is G723, a frame-based codec with a frame size of 30 ms.
   As such, the smallest packetization time common to all these codecs
   is 30 ms.  If the receiver initializes the buffer for its voice
   samples based on this 30 ms value, but the sender sends the media
   with the PCMU codec at its default packetization time of 20 ms, then
   the receiver has to wait for another voice packet before its buffer
   can be filled up to a total duration of 30 ms.  This can cause
   disruptions in the synchronous playback of the digitized voice.

5.  Requirements

   The main requirement comes from the implementation and media gateway
   community using hardware-based solutions, e.g., DSP or FPGA
   implementations with silicon constraints on the amount of buffer
   space.  Some use the ptime and codec information to make certain QoS
   budget calculations.  When the packetization time is known for a
   codec with a certain frame size and frame data rate, the efficiency
   of the throughput can be calculated.

   Currently, the 'ptime' and 'maxptime' attributes are only
   indications and are optional.  When these parameters are used for
   resource reservation and for hardware initialization, a negotiated
   value between the offerer and the answerer becomes a requirement.

   There can be different sources for the ptime/maxptime values, e.g.,
   the RTP/AVP profile, the end-user device configuration, the network
   operator, intermediaries, or the receiver.  The codec and
   ptime/maxptime in the uplink and downlink directions can be
   different.

6.  Solutions already proposed

   Over the last years, different solutions have been proposed and
   implemented with the goal of making the ptime a function of the
   codec instead of the media description containing its list of
   codecs.  The purpose of this list is only to indicate what kind of
   proposals have already been made to solve the SDP interworking
   issues caused by diverging implementations and RFC interpretations.
   It is just a list and does not express any preference for a certain
   solution.

   In all these proposals, a semantic grouping of the codec-specific
   information is made by giving a new interpretation to the sequence
   of the parameters or by providing new additional attributes.  All
   these methods go against the basic rule indicated in the RFCs, which
   state that ptime and maxptime are media specific and NOT codec
   specific.  They do not solve the interworking issues; instead, they
   make them worse due to the many new interpretations and
   implementations.  To avoid further divergence, the implementation
   community is strongly asking for a standardized solution.

6.1.  Method 1

   Write the rtpmap first, followed by the ptime when it is related to
   the codec.

      m=audio 1234 RTP/AVP 4 0
      a=rtpmap:4 G723/8000
      a=rtpmap:0 PCMU/8000
      a=ptime:20
      a=fmtp:4 bitrate=6400

                                Method 1

   Some SDP encoders first write the media line, followed by the
   rtpmaps and then the value attributes.

6.2.  Method 2

   Grouping of all codec-specific information together.

      m=audio 1234 RTP/AVP 4 0
      a=rtpmap:4 G723/8000
      a=fmtp:4 bitrate=6400
      a=rtpmap:0 PCMU/8000
      a=ptime:20

                                Method 2

   Most implementers are in favor of this proposal, i.e.,
   writing the value attributes associated with an rtpmap immediately
   after that rtpmap.

6.3.  Method 3

   Use the ptime for every codec after its rtpmap definition.

      m=audio 1234 RTP/AVP 0 18 4
      a=rtpmap:18 G729/8000
      a=ptime:30
      a=rtpmap:0 PCMU/8000
      a=ptime:40
      a=rtpmap:4 G723/8000
      a=ptime:60

                                Method 3

6.4.  Method 4

   Create a new "mptime" (multiple ptime) attribute with a construct
   similar to the m-line.

      m=audio 1234 RTP/AVP 0 18 4
      a=mptime:40 30 60

                                Method 4

6.5.  Method 5

   Use of a new "x-ptime" attribute.

6.6.  Method 6

   Use of different m-lines with one codec per m-line.

      m=audio 1234 RTP/AVP 0
      a=rtpmap:0 PCMU/8000
      a=ptime:40
      m=audio 1234 RTP/AVP 18
      a=rtpmap:18 G729/8000
      a=ptime:30
      m=audio 1234 RTP/AVP 4
      a=rtpmap:4 G723/8000
      a=ptime:60

                                Method 6

6.7.  Method 7

   Use of the ptime in the fmtp attribute.

      m=audio 1234 RTP/AVP 4 18
      a=rtpmap:18 G729/8000
      a=fmtp:18 annexb=yes;ptime=20
      a=maxptime:40
      a=rtpmap:4 G723/8000
      a=fmtp:4 bitrate=6.3;annexa=yes;ptime=30
      a=maxptime:60

                                Method 7

6.8.  Method 8

   Use of the vsel parameter as done for ATM bearer connections.  The
   following example indicates a first preference of G.729 or G.729a
   (both are interoperable) as the voice encoding scheme.  A packet
   length of 10 octets and a packetization interval of 10 ms are
   associated with this codec.  G726-32 is the second preference stated
   in this line, with an associated packet length of 40 octets and a
   packetization interval of 10 ms.  If the packet length and
   packetization interval are intended to be omitted, then this media
   attribute line contains '-'.

      a=vsel:G729 10 10000 G726-32 40 10000
      a=vsel:G729 - - G726-32 - -

                                Method 8

6.9.  Method 9

   Use of the V.152 "maxmptime" attribute [3].

6.10.  Method 10

   Use of the PacketCable "mptime" attribute [4].

7.  Conclusion and next steps

   This memo advocates the need for a standardized mechanism to
   indicate the packetization time on a per-codec basis, allowing the
   creator of SDP to include several payload formats with different
   packetization times in the same media description line.

   This memo encourages discussion on the MMUSIC WG mailing list in the
   IETF.  The ultimate goal is to define a standard mechanism that
   fulfils the requirements highlighted in this memo.  A further goal
   is to find a solution that does not require changes to
   implementations that have followed the existing RFC guidelines and
   that are able to receive any packetization time.  A clear solution
   has to be described for the resource constraint problem in
   hardware-based implementations, either as an extension or
   modification of the current SDP or as a clarification of how these
   issues can be solved with the existing RFCs.

8.  Security Considerations

   This memo discusses a problem statement and requirements.  As such,
   no protocol that can suffer attacks is defined.

9.  IANA Considerations

   This document does not request IANA to take any action.

10.  References

10.1.  Normative References

   [1]   Handley, M., Jacobson, V., and C. Perkins, "SDP: Session
         Description Protocol", RFC 4566, July 2006.

   [2]   Rosenberg, J. and H. Schulzrinne, "An Offer/Answer Model with
         Session Description Protocol (SDP)", RFC 3264, June 2002.

10.2.  Informative References

   [3]   ITU-T, "Procedures for supporting voice-band data over IP
         networks", ITU-T Recommendation V.152, January 2005.
   [4]   PacketCable, "PacketCable Network-Based Call Signaling
         Protocol Specification", PacketCable
         PKT-SP-EC-MGCP-I11-050812, August 2005.

   [5]   ITU-T, "A Proposal on Codec Negotiation across Multiple
         Networks for End-to-End QoS", ITU-T Study Group 16 AVD-2938,
         September 2006.

   [6]   Westerlund, M., "A Transport Independent Bandwidth Modifier
         for the Session Description Protocol (SDP)", RFC 3890,
         September 2004.

   [7]   Kumar, R. and M. Mostafa, "Conventions for the use of the
         Session Description Protocol (SDP) for ATM Bearer
         Connections", RFC 3108, May 2001.

   [8]   Garcia-Martin, M., Bormann, C., Ott, J., Price, R., and A.
         Roach, "The Session Initiation Protocol (SIP) and Session
         Description Protocol (SDP) Static Dictionary for Signaling
         Compression (SigComp)", RFC 3485, February 2003.

   [9]   Sinnreich, H., Lass, S., and C. Stredicke, "SIP Telephony
         Device Requirements and Configuration", RFC 4504, May 2006.

   [10]  Schulzrinne, H., Rao, A., and R. Lanphier, "Real Time
         Streaming Protocol (RTSP)", RFC 2326, April 1998.

   [11]  Andreasen, F. and B. Foster, "Media Gateway Control Protocol
         (MGCP) Version 1.0", RFC 3435, January 2003.

   [12]  Kumar, R., "Asynchronous Transfer Mode (ATM) Package for the
         Media Gateway Control Protocol (MGCP)", RFC 3441,
         January 2003.

   [13]  Groves, C., Pantaleo, M., Anderson, T., and T. Taylor,
         "Gateway Control Protocol Version 1", RFC 3525, June 2003.

   [14]  Jones, P., "Registration of the text/red MIME Sub-Type",
         RFC 4102, June 2005.

   [15]  Schulzrinne, H. and S. Casner, "RTP Profile for Audio and
         Video Conferences with Minimal Control", STD 65, RFC 3551,
         July 2003.

   [16]  Kikuchi, Y., Nomura, T., Fukunaga, S., Matsui, Y., and H.
         Kimata, "RTP Payload Format for MPEG-4 Audio/Visual Streams",
         RFC 3016, November 2000.

   [17]  Luthi, P., "RTP Payload Format for ITU-T Recommendation
         G.722.1", RFC 3047, January 2001.

   [18]  Sjoberg, J., Westerlund, M., Lakaniemi, A., and Q. Xie,
         "Real-Time Transport Protocol (RTP) Payload Format and File
         Storage Format for the Adaptive Multi-Rate (AMR) and Adaptive
         Multi-Rate Wideband (AMR-WB) Audio Codecs", RFC 3267,
         June 2002.

   [19]  Xie, Q., "RTP Payload Format for European Telecommunications
         Standards Institute (ETSI) European Standard ES 201 108
         Distributed Speech Recognition Encoding", RFC 3557, July 2003.

   [20]  Li, A., "RTP Payload Format for Enhanced Variable Rate Codecs
         (EVRC) and Selectable Mode Vocoders (SMV)", RFC 3558,
         July 2003.

   [21]  Duric, A. and S. Andersen, "Real-time Transport Protocol (RTP)
         Payload Format for internet Low Bit Rate Codec (iLBC) Speech",
         RFC 3952, December 2004.

   [22]  Kreuter, R., "RTP Payload Format for a 64 kbit/s Transparent
         Call", RFC 4040, April 2005.

   [23]  Xie, Q. and D. Pearce, "RTP Payload Formats for European
         Telecommunications Standards Institute (ETSI) European
         Standard ES 202 050, ES 202 211, and ES 202 212 Distributed
         Speech Recognition Encoding", RFC 4060, May 2005.

   [24]  Link, B., Hager, T., and J. Flaks, "RTP Payload Format for
         AC-3 Audio", RFC 4184, October 2005.

   [25]  Chen, J., Lee, W., and J. Thyssen, "RTP Payload Format for
         BroadVoice Speech Codecs", RFC 4298, December 2005.

   [26]  Ahmadi, S., "Real-Time Transport Protocol (RTP) Payload Format
         for the Variable-Rate Multimode Wideband (VMR-WB) Audio
         Codec", RFC 4348, January 2006.

   [27]  Sjoberg, J., Westerlund, M., Lakaniemi, A., and S.
         Wenger, "RTP Payload Format for the Extended Adaptive
         Multi-Rate Wideband (AMR-WB+) Audio Codec", RFC 4352,
         January 2006.

   [28]  Casner, S., "Media Type Registration of Payload Formats in the
         RTP Profile for Audio and Video Conferences", RFC 4856,
         February 2007.

Authors' Addresses

   Miguel A. Garcia-Martin
   Nokia Siemens Networks
   P.O. Box 6
   FIN 02022 Nokia Siemens Networks
   Finland

   Email: miguel.garcia@nsn.com


   Marc Willekens
   Nokia Siemens Networks
   Atealaan 34
   Herentals, BE 2200
   Belgium

   Email: marc.willekens@nsn.com


   Peili Xu
   Huawei Technologies
   Bantian Longgang
   Shenzhen 518129
   China

   Email: xupeili@huawei.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).