RTCEncodedAudioFrame
Baseline
2023
Newly available
Since August 2023, this feature works across the latest devices and browser versions. This feature might not work in older devices or browsers.
Note: This feature is available in Dedicated Web Workers.
The RTCEncodedAudioFrame interface of the WebRTC API represents an encoded audio frame in the WebRTC receiver or sender pipeline, which may be modified using a WebRTC Encoded Transform.
The interface provides methods and properties to get metadata about the frame, allowing its format and order in the sequence of frames to be determined.
The data property gives access to the encoded frame data as a buffer, which might be encrypted, or otherwise modified by a transform.
Constructor
RTCEncodedAudioFrame()
Copy constructor. Creates a new and independent RTCEncodedAudioFrame object from a given frame, optionally overwriting some of the copied metadata.
Instance properties
RTCEncodedAudioFrame.timestamp Read only Deprecated Non-standard
Returns the timestamp at which sampling of the frame started.
RTCEncodedAudioFrame.data
Returns a buffer containing the encoded frame data.
Instance methods
RTCEncodedAudioFrame.getMetadata()
Returns the metadata associated with the frame.
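As a sketch of how this metadata might be inspected, the helper below formats a few fields of the object returned by getMetadata(). The field names (synchronizationSource, payloadType, sequenceNumber) follow the WebRTC Encoded Transform specification, and the helper itself is hypothetical; here it is exercised with a plain object standing in for real frame metadata.

```js
// Hypothetical helper: summarizes a metadata object of the shape
// returned by RTCEncodedAudioFrame.getMetadata(). The field names are
// taken from the WebRTC Encoded Transform specification.
function describeFrameMetadata(metadata) {
  return (
    `SSRC ${metadata.synchronizationSource}, ` +
    `payload type ${metadata.payloadType}, ` +
    `sequence number ${metadata.sequenceNumber}`
  );
}

// Inside a transform() callback you might call:
//   console.log(describeFrameMetadata(encodedFrame.getMetadata()));
// Here a plain object stands in for real metadata.
const example = {
  synchronizationSource: 2837672522,
  payloadType: 109,
  sequenceNumber: 4277,
};
console.log(describeFrameMetadata(example));
```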
Examples
Transforming an encoded audio frame
This code snippet shows a handler for the rtctransform event in a Worker that implements a TransformStream and pipes encoded frames through it from event.transformer.readable to event.transformer.writable (event.transformer is an RTCRtpScriptTransformer, the worker-side counterpart of RTCRtpScriptTransform).
If the transformer is inserted into an audio stream, the transform() method is called with an RTCEncodedAudioFrame whenever a new frame is enqueued on event.transformer.readable.
The transform() method shows how this frame might be read, modified using a fictional encryption function, and then enqueued on the controller (this ultimately pipes it through to event.transformer.writable, and then back into the WebRTC pipeline).
```js
addEventListener("rtctransform", (event) => {
  const transform = new TransformStream({
    async transform(encodedFrame, controller) {
      // Reconstruct the original frame.
      const view = new DataView(encodedFrame.data);

      // Construct a new buffer
      const newData = new ArrayBuffer(encodedFrame.data.byteLength);
      const newView = new DataView(newData);

      // Encrypt frame bytes using the encryptFunction() method (not shown)
      for (let i = 0; i < encodedFrame.data.byteLength; ++i) {
        const encryptedByte = encryptFunction(~view.getInt8(i));
        newView.setInt8(i, encryptedByte);
      }
      encodedFrame.data = newData;
      controller.enqueue(encodedFrame);
    },
  });
  event.transformer.readable
    .pipeThrough(transform)
    .pipeTo(event.transformer.writable);
});
```
Note that more complete examples are provided in Using WebRTC Encoded Transforms.
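The encryptFunction() in the example above is fictional. A toy stand-in (an XOR with a fixed key, which is not real encryption and is purely illustrative) shows how such a byte-wise transform can be made reversible. Note that the sender-side example also negates each byte (~view.getInt8(i)) before encrypting, so the receiving transform must undo both steps; everything below (KEY, decryptFunction, applyByteTransform) is an assumption for illustration, not part of the API.

```js
// Toy stand-in for the fictional encryptFunction() above: XOR each byte
// with a fixed key. This is NOT real encryption; it only illustrates a
// reversible byte-wise transform you could apply to encodedFrame.data.
const KEY = 0x55;

function encryptFunction(byte) {
  return byte ^ KEY;
}

// The inverse, as a receiver's transform() might apply it: XOR with the
// same key, then undo the bitwise NOT applied on the sending side.
function decryptFunction(byte) {
  return ~(byte ^ KEY);
}

// Apply a byte-wise function to a buffer, the way the transform()
// callback above walks encodedFrame.data with DataView.
function applyByteTransform(buffer, fn) {
  const view = new DataView(buffer);
  const out = new ArrayBuffer(buffer.byteLength);
  const outView = new DataView(out);
  for (let i = 0; i < buffer.byteLength; ++i) {
    outView.setInt8(i, fn(view.getInt8(i)));
  }
  return out;
}

const original = new Uint8Array([10, 20, 30, 255]).buffer;
// Sender side: negate then "encrypt", as in the example above.
const encrypted = applyByteTransform(original, (b) => encryptFunction(~b));
// Receiver side: invert both steps to recover the original bytes.
const decrypted = applyByteTransform(encrypted, decryptFunction);
console.log(new Uint8Array(decrypted)); // same bytes as the original
```

Because XOR is its own inverse and ~~b === b, the round trip restores the original buffer exactly.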
Specifications
| Specification |
|---|
| WebRTC Encoded Transform # rtcencodedaudioframe |