Commit b8baab92 authored by Matt Wolenetz, committed by Chromium LUCI CQ

[MSE][WebCodecs] Plumb appendEncodedChunks to WebSourceBuffer::AppendChunks

This change:

1) Implements SourceBuffer::appendEncodedChunks(). This is the first
   promise-based MSE API. Like async appendBuffer(), once the
   synchronous prepareAppend steps are complete, an async task is
   scheduled to complete the chunks' append. Unlike async
   appendBuffer(), the 'update', 'updateend', 'abort', and 'error'
   events involved in an async chunk append are not enqueued; the same
   information is exposed through the promise's resolution or rejection
   instead. Mixing event notification with promise resolution/rejection
   would be confusing and redundant, and the W3C TAG guidelines for
   promise-based APIs require that events originating in a promise's
   async execution be dispatched (not merely enqueued) before the
   promise is resolved or rejected. (A scheduling sketch follows this
   list.)

2) Converts the chunks (using new local helpers in SourceBuffer)
   directly into the type used by the underlying MSE buffering
   implementation (StreamParserBuffers in a circular_deque) during the
   synchronous portion of appendEncodedChunks(). (A conversion sketch
   also follows this list.)

3) The async chunk task is canceled during context destruction; it is
   also used to gate hasPendingActivity. Essentially, this is a third
   async operation the SourceBuffer can perform, in addition to the
   pre-existing async appendBuffer() and async remove(). At most one of
   these three async operations may be pending at a time, which keeps
   the behavior unsurprising and consistent with the existing async MSE
   operations.

4) Adds a new WebSourceBuffer AppendChunks method and a stubbed
   implementation of it in WebSourceBufferImpl.
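
For context, here is a greatly simplified C++ sketch of the
promise-based flow described in 1) and 3). The member and helper names
are taken from the source_buffer.h diff in this change, but the
simplified entry-point signature, the method bodies, the task type, and
the exception details are illustrative assumptions; the real code runs
these steps under the media source attachment's exclusive lock.

// Hypothetical, simplified entry point: |buffer_queue| holds the chunks
// already converted to StreamParserBuffers (see the conversion sketch below).
ScriptPromise SourceBuffer::AppendEncodedChunksForIllustration(
    ScriptState* script_state,
    std::unique_ptr<media::StreamParser::BufferQueue> buffer_queue) {
  // Synchronous portion, after the prepareAppend steps have succeeded.
  updating_ = true;
  pending_chunks_to_buffer_ = std::move(buffer_queue);
  append_encoded_chunks_resolver_ =
      MakeGarbageCollected<ScriptPromiseResolver>(script_state);

  // Schedule the async portion. The TaskHandle lets context destruction
  // cancel the task and lets hasPendingActivity() report the pending append.
  append_encoded_chunks_async_task_handle_ = PostCancellableTask(
      *GetExecutionContext()->GetTaskRunner(TaskType::kMediaElementEvent),
      FROM_HERE,
      WTF::Bind(&SourceBuffer::AppendEncodedChunksAsyncPart,
                WrapWeakPersistent(this)));

  // Unlike appendBuffer(), no 'updatestart' event is enqueued; the returned
  // promise carries the outcome instead.
  return append_encoded_chunks_resolver_->Promise();
}

void SourceBuffer::AppendEncodedChunksAsyncPart() {
  // Asynchronous portion: feed the already-converted StreamParserBuffers to
  // the underlying WebSourceBuffer.
  bool success = web_source_buffer_->AppendChunks(
      std::move(pending_chunks_to_buffer_), &timestamp_offset_);

  updating_ = false;
  if (success) {
    // 'update'/'updateend' are not enqueued; resolution signals completion.
    append_encoded_chunks_resolver_->Resolve();
  } else {
    // The append error path rejects the promise instead of enqueuing 'error'
    // and 'updateend'.
    append_encoded_chunks_resolver_->Reject(MakeGarbageCollected<DOMException>(
        DOMExceptionCode::kSyntaxError, "Chunk append failed."));
  }
  append_encoded_chunks_resolver_ = nullptr;
}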
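
And a conversion sketch for 2): one plausible way to turn a single
encoded video chunk's fields into a media::StreamParserBuffer and
collect it into the BufferQueue that WebSourceBuffer::AppendChunks()
consumes. The helper name, the hard-coded track id, and the decode
timestamp / duration choices are illustrative assumptions; the actual
local helpers in SourceBuffer may differ.

#include "base/memory/scoped_refptr.h"
#include "base/time/time.h"
#include "media/base/demuxer_stream.h"
#include "media/base/stream_parser.h"
#include "media/base/stream_parser_buffer.h"

// Hypothetical helper: build one StreamParserBuffer from a video chunk's raw
// fields and push it onto the queue handed to WebSourceBuffer::AppendChunks().
void AppendVideoChunkToQueue(const uint8_t* data,
                             int data_size,
                             bool is_key_frame,
                             base::TimeDelta timestamp,
                             base::TimeDelta duration,
                             media::StreamParser::BufferQueue* out_queue) {
  constexpr media::StreamParser::TrackId kAssumedTrackId = 1;
  scoped_refptr<media::StreamParserBuffer> buffer =
      media::StreamParserBuffer::CopyFrom(data, data_size, is_key_frame,
                                          media::DemuxerStream::VIDEO,
                                          kAssumedTrackId);
  buffer->set_timestamp(timestamp);
  // MSE coded frame processing needs a decode timestamp. Until the app can
  // supply one in EncodedVideoChunkInit (a possible later refinement), the
  // presentation timestamp is reused here.
  buffer->SetDecodeTimestamp(
      media::DecodeTimestamp::FromPresentationTime(timestamp));
  buffer->set_duration(duration);
  out_queue->push_back(std::move(buffer));
}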

Later changes will update WebSourceBufferImpl::AppendChunks() to send
the buffers through ChunkDemuxer to the
WebCodecsEncodedChunkStreamParser, and will add tests for promise
rejection/abort/success scenarios and for basic end-to-end buffering
and playback of encoded chunks with MSE. Refinements such as supporting
h264 chunk buffering, using a non-hardcoded audio chunk duration (from
a new EncodedAudioChunkInit attribute that is optional in WebCodecs but
required for MSE), and letting the app provide a decode timestamp in
EncodedVideoChunkInit may also come later.
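
For illustration only, a later change filling in the
WebSourceBufferImpl::AppendChunks() stub might mirror the existing
Append() path. ChunkDemuxer::AppendChunks() does not exist yet as of
this change, so its name and signature below, as well as the
timestamp_offset_ bookkeeping, are assumptions.

bool WebSourceBufferImpl::AppendChunks(
    std::unique_ptr<media::StreamParser::BufferQueue> buffer_queue,
    double* timestamp_offset) {
  base::TimeDelta old_timestamp_offset = timestamp_offset_;
  // Hypothetical ChunkDemuxer entry point; a later CL would add it and route
  // the buffers to the WebCodecsEncodedChunkStreamParser.
  bool success =
      demuxer_->AppendChunks(id_, std::move(buffer_queue),
                             append_window_start_, append_window_end_,
                             &timestamp_offset_);
  // As in Append(), surface any frame-processing adjustment of the timestamp
  // offset back to the caller.
  if (timestamp_offset && old_timestamp_offset != timestamp_offset_)
    *timestamp_offset = timestamp_offset_.InSecondsF();
  return success;
}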

BUG=1144908

Change-Id: Ieb5d0942e68f48156bee9290dcb99dad2e280e85
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2574534
Reviewed-by: Daniel Cheng <dcheng@chromium.org>
Reviewed-by: Dan Sanders <sandersd@chromium.org>
Commit-Queue: Matthew Wolenetz <wolenetz@chromium.org>
Cr-Commit-Position: refs/heads/master@{#836201}
parent 4a110138
@@ -146,6 +146,15 @@ bool WebSourceBufferImpl::Append(const unsigned char* data,
  return success;
}

bool WebSourceBufferImpl::AppendChunks(
    std::unique_ptr<media::StreamParser::BufferQueue> buffer_queue,
    double* timestamp_offset) {
  // TODO(crbug.com/1144908): Continue MSE-for-WebCodecs encoded chunk buffering
  // implementation from here through ChunkDemuxer/SourceBufferState/etc.
  NOTIMPLEMENTED();
  return false;
}

void WebSourceBufferImpl::ResetParserState() {
  demuxer_->ResetParserState(id_,
                             append_window_start_, append_window_end_,
@@ -36,6 +36,9 @@ class WebSourceBufferImpl : public blink::WebSourceBuffer {
  bool Append(const unsigned char* data,
              unsigned length,
              double* timestamp_offset) override;
  bool AppendChunks(
      std::unique_ptr<media::StreamParser::BufferQueue> buffer_queue,
      double* timestamp_offset) override;
  void ResetParserState() override;
  void Remove(double start, double end) override;
  bool CanChangeType(const blink::WebString& content_type,
@@ -30,6 +30,7 @@ include_rules = [
"+media/base/audio_renderer_sink.h",
"+media/base/eme_constants.h",
"+media/base/media_log.h",
"+media/base/stream_parser.h",
"+media/base/video_frame_metadata.h",
"+media/base/video_transformation.h",
"+mojo/public",
@@ -31,6 +31,7 @@
#ifndef THIRD_PARTY_BLINK_PUBLIC_PLATFORM_WEB_SOURCE_BUFFER_H_
#define THIRD_PARTY_BLINK_PUBLIC_PLATFORM_WEB_SOURCE_BUFFER_H_

#include "media/base/stream_parser.h"
#include "third_party/blink/public/platform/web_string.h"
#include "third_party/blink/public/platform/web_time_range.h"
@@ -68,14 +69,18 @@ class WebSourceBuffer {
  virtual bool EvictCodedFrames(double current_playback_time,
                                size_t new_data_size) = 0;

  // Appends data and runs the segment parser loop algorithm.
  // The algorithm may update |*timestamp_offset| if |timestamp_offset| is not
  // null.
  // Appends data and runs the segment parser loop algorithm (or more simply
  // appends and processes caller-provided media::StreamParserBuffers in the
  // AppendChunks version). The algorithm and associated frame processing may
  // update |*timestamp_offset| if |timestamp_offset| is not null.
  // Returns true on success, otherwise the append error algorithm needs to
  // run with the decode error parameter set to true.
  virtual bool Append(const unsigned char* data,
                      unsigned length,
                      double* timestamp_offset) = 0;
  virtual bool AppendChunks(
      std::unique_ptr<media::StreamParser::BufferQueue> buffer_queue,
      double* timestamp_offset) = 0;
  virtual void ResetParserState() = 0;
  virtual void Remove(double start, double end) = 0;
@@ -2,6 +2,8 @@ include_rules = [
"-third_party/blink/renderer/modules",
"+media/base/audio_decoder_config.h",
"+media/base/logging_override_if_enabled.h",
"+media/base/stream_parser.h",
"+media/base/stream_parser_buffer.h",
"+media/base/video_decoder_config.h",
"+media/filters",
"+media/formats/mp4/box_definitions.h",
@@ -264,6 +264,19 @@ SourceBuffer* MediaSource::AddSourceBufferUsingConfig(
  String console_message;
  CodecConfigEval eval;

#if BUILDFLAG(USE_PROPRIETARY_CODECS)
  // TODO(crbug.com/1144908): The SourceBuffer needs these for converting h264
  // EncodedVideoChunks. Probably best if these details are put into a new
  // WebCodecs VideoDecoderHelper abstraction (or similar), since this top-level
  // MediaSource impl shouldn't need to worry about the details of specific
  // codec bitstream conversions (nor should the underlying implementation be
  // depended upon to redo work done already in WebCodecs decoder configuration
  // validation.) In the initial prototype, we do not support h264 buffering, so
  // this will fail if these become populated by MakeMediaVideoDecoderConfig,
  // below.
  std::unique_ptr<media::H264ToAnnexBBitstreamConverter> h264_converter;
  std::unique_ptr<media::mp4::AVCDecoderConfigurationRecord> h264_avcc;
#endif  // BUILDFLAG(USE_PROPRIETARY_CODECS)

  if (config->hasAudioConfig()) {
    audio_config = std::make_unique<media::AudioDecoderConfig>();
    eval = AudioDecoder::MakeMediaAudioDecoderConfig(*(config->audioConfig()),
@@ -272,23 +285,23 @@ SourceBuffer* MediaSource::AddSourceBufferUsingConfig(
  } else {
    DCHECK(config->hasVideoConfig());
    video_config = std::make_unique<media::VideoDecoderConfig>();
#if BUILDFLAG(USE_PROPRIETARY_CODECS)
    // TODO(crbug.com/1144908): Give these to the resulting SourceBuffer for use
    // in converting h264 EncodedVideoChunks. Probably best if these details are
    // put into a new WebCodecs VideoDecoderHelper abstraction (or similar),
    // since this top-level MediaSource impl shouldn't need to worry about the
    // details of specific codec bitstream conversions (nor should the
    // underlying implementation be depended upon to redo work done already
    // in WebCodecs decoder configuration validation.)
    std::unique_ptr<media::H264ToAnnexBBitstreamConverter> h264_converter;
    std::unique_ptr<media::mp4::AVCDecoderConfigurationRecord> h264_avcc;
#endif  // BUILDFLAG(USE_PROPRIETARY_CODECS)
    eval = VideoDecoder::MakeMediaVideoDecoderConfig(
        *(config->videoConfig()), *video_config /* out */,
#if BUILDFLAG(USE_PROPRIETARY_CODECS)
        h264_converter /* out */, h264_avcc /* out */,
#endif  // BUILDFLAG(USE_PROPRIETARY_CODECS)
        console_message /* out */);
#if BUILDFLAG(USE_PROPRIETARY_CODECS)
    // TODO(crbug.com/1144908): Initial prototype does not support h264
    // buffering. See above.
    if (eval == CodecConfigEval::kSupported && (h264_converter || h264_avcc)) {
      eval = CodecConfigEval::kUnsupported;
      console_message =
          "H.264 EncodedVideoChunk buffering is not yet supported in MSE. See "
          "https://crbug.com/1144908.";
      video_config.reset();
    }
#endif  // BUILDFLAG(USE_PROPRIETARY_CODECS)
  }

  switch (eval) {
@@ -34,6 +34,7 @@
#include <memory>

#include "base/memory/scoped_refptr.h"
#include "media/base/stream_parser.h"
#include "third_party/blink/public/platform/web_source_buffer_client.h"
#include "third_party/blink/renderer/bindings/core/v8/active_script_wrappable.h"
#include "third_party/blink/renderer/bindings/core/v8/script_promise.h"
@@ -58,6 +59,7 @@ class ExceptionState;
class MediaSource;
class MediaSourceTracer;
class MediaSourceAttachmentSupplement;
class ScriptPromiseResolver;
class ScriptState;
class SourceBufferConfig;
class TimeRanges;
@@ -153,6 +155,7 @@ class SourceBuffer final : public EventTargetWithInlineData,
  bool PrepareAppend(double media_time, size_t new_data_size, ExceptionState&);
  bool EvictCodedFrames(double media_time, size_t new_data_size);
  void AppendBufferInternal(const unsigned char*, size_t, ExceptionState&);
  void AppendEncodedChunksAsyncPart();
  void AppendBufferAsyncPart();
  void AppendError(MediaSourceAttachmentSupplement::ExclusiveKey /* passkey */);
@@ -188,11 +191,18 @@ class SourceBuffer final : public EventTargetWithInlineData,
      const String& type,
      ExceptionState*,
      MediaSourceAttachmentSupplement::ExclusiveKey /* passkey */);
  void AppendEncodedChunks_Locked(
      std::unique_ptr<media::StreamParser::BufferQueue> buffer_queue,
      size_t size,
      ExceptionState* exception_state,
      MediaSourceAttachmentSupplement::ExclusiveKey /* passkey */);
  void AppendBufferInternal_Locked(
      const unsigned char*,
      size_t,
      ExceptionState*,
      MediaSourceAttachmentSupplement::ExclusiveKey /* passkey */);
  void AppendEncodedChunksAsyncPart_Locked(
      MediaSourceAttachmentSupplement::ExclusiveKey /* passkey */);
  void AppendBufferAsyncPart_Locked(
      MediaSourceAttachmentSupplement::ExclusiveKey /* passkey */);
  void RemoveAsyncPart_Locked(
@@ -238,6 +248,7 @@ class SourceBuffer final : public EventTargetWithInlineData,
  AtomicString mode_;
  bool updating_;

  double timestamp_offset_;
  Member<AudioTrackList> audio_tracks_;
  Member<VideoTrackList> video_tracks_;
@@ -245,10 +256,25 @@ class SourceBuffer final : public EventTargetWithInlineData,
  double append_window_end_;
  bool first_initialization_segment_received_;

  // |updating_| logic, per spec, allows at most one of the following async
  // operations to be exclusively pending for this SourceBuffer: appendBuffer(),
  // appendEncodedChunks(), or remove(). The following three sections
  // respectively track the async state for these pending operations:

  // These are valid only during the scope of synchronous and asynchronous
  // follow-up of appendBuffer().
  Vector<unsigned char> pending_append_data_;
  wtf_size_t pending_append_data_offset_;
  TaskHandle append_buffer_async_task_handle_;

  // This resolver is set and valid only during the scope of synchronous and
  // asynchronous follow-up of appendEncodedChunks().
  std::unique_ptr<media::StreamParser::BufferQueue> pending_chunks_to_buffer_;
  Member<ScriptPromiseResolver> append_encoded_chunks_resolver_;
  TaskHandle append_encoded_chunks_async_task_handle_;

  // These are valid only during the scope of synchronous and asynchronous
  // follow-up of remove().
  double pending_remove_start_;
  double pending_remove_end_;
  TaskHandle remove_async_task_handle_;
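An illustrative reading of the exclusivity comment above: given the
three TaskHandle members, pending-activity reporting and
context-destruction cleanup could be expressed roughly as follows. The
bodies are assumptions; the actual SourceBuffer predicates may differ.

bool SourceBuffer::HasPendingActivity() const {
  // Keep the wrapper alive while any async follow-up task is still scheduled;
  // per the |updating_| invariant, at most one of these is active at a time.
  return append_buffer_async_task_handle_.IsActive() ||
         append_encoded_chunks_async_task_handle_.IsActive() ||
         remove_async_task_handle_.IsActive();
}

void SourceBuffer::ContextDestroyed() {
  // Cancel whichever (at most one) async follow-up task is still outstanding.
  append_buffer_async_task_handle_.Cancel();
  append_encoded_chunks_async_task_handle_.Cancel();
  remove_async_task_handle_.Cancel();
}
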
@@ -845,6 +845,14 @@ _CONFIG = [
            'media::.+',
        ]
    },
    {
        'paths': [
            'third_party/blink/renderer/modules/mediasource/',
        ],
        'allowed': [
            'media::.+',
        ]
    },
    {
        'paths': [
            'third_party/blink/renderer/modules/webcodecs/',