Commit 43a49df1 authored by Andres Calderon Jaramillo, committed by Commit Bot

VdaVD: Advertise a VideoFrame's visible size as the coded size.

Problem:

CL:2010243 made it so that the GL image for a video frame is created
using the visible size. This is correct because we want to make sure
that when we composite, we don't sample padding (which would result in
unwelcome green lines). However, the coded size of the video frame sent
to the client still contained the non-visible area. This caused
PaintCanvasVideoRenderer to sometimes issue GL commands with dimensions
exceeding those of the GL image backing the texture. These commands
would subsequently fail validation resulting in the associated video
frame not being rendered on the canvas.

Solution:

This CL is a generalization of CL:2056285, which solved the same problem
but only for the VA-API video decode accelerator. The essence of the
solution here is to override ProvidePictureBuffersWithVisibleRect() in
VdaVideoDecoder. That way, we end up creating the outgoing video frame
with a coded size equal to the visible size, solving the problem above.

Note one caveat: there is an unusual corner case in which an H.264 video
includes information about a visible rectangle whose origin is not at
(0, 0), meaning some of the top and left parts of the video should not
be displayed. Chrome does not currently handle this case well. This CL
does not make that situation better, but it could be addressed in a
follow-up CL given that it's a rare case.

I tested across many boards. For the test cases and screenshots of the
results, see [1] (d08cb982 in a filename denotes the base commit before
this CL).

[1] https://drive.google.com/open?id=1h94xbzNLBrlHBlbmaaasHV1P7HOaPBt1

Bug: 1062002
Test: video in bug gets rendered to canvas in kevin successfully.
Change-Id: I6daf9df07c8588d82d5396908d8f91c92ce3984d
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2122484
Commit-Queue: Andres Calderon Jaramillo <andrescj@chromium.org>
Reviewed-by: Alexandre Courbot <acourbot@chromium.org>
Reviewed-by: Dan Sanders <sandersd@chromium.org>
Cr-Commit-Position: refs/heads/master@{#754840}
parent ff73d746
......@@ -518,6 +518,34 @@ void VdaVideoDecoder::ProvidePictureBuffers(uint32_t requested_num_of_buffers,
textures_per_buffer, dimensions, texture_target));
}
void VdaVideoDecoder::ProvidePictureBuffersWithVisibleRect(
uint32_t requested_num_of_buffers,
VideoPixelFormat format,
uint32_t textures_per_buffer,
const gfx::Size& dimensions,
const gfx::Rect& visible_rect,
uint32_t texture_target) {
// In ALLOCATE mode, |vda_| is responsible for allocating storage for the
// result of the decode. However, we are responsible for creating the GL
// texture to which we'll attach the decoded image. The decoder needs the
// buffer to be of size = coded size, but for the purposes of working with a
// graphics API (e.g., GL), the client does not need to know about the
// non-visible area. For example, for a 360p H.264 video with a visible
// rectangle of 0,0,640x360, the coded size is 640x368, but the GL texture
// that the client uses should be only 640x360. Therefore, we pass
// |visible_rect|.size() here as the requested size of the picture buffers.
//
// TODO(andrescj): this is not correct in the case that the visible rectangle
// does not start at (0, 0). This can happen for some exotic H.264 videos. For
// example, if the coded size is 640x368 and the visible rectangle is
// 2,2,640x360, the size of the picture buffers we pass here should be
// 642x362. That's because the compositor is responsible for calculating the
// UV coordinates in such a way that the non-visible area to the left and on
// the top of the visible rectangle are not displayed.
ProvidePictureBuffers(requested_num_of_buffers, format, textures_per_buffer,
visible_rect.size(), texture_target);
}
void VdaVideoDecoder::ProvidePictureBuffersAsync(uint32_t count,
VideoPixelFormat pixel_format,
uint32_t planes,
......
......@@ -126,6 +126,12 @@ class VdaVideoDecoder : public VideoDecoder,
uint32_t textures_per_buffer,
const gfx::Size& dimensions,
uint32_t texture_target) override;
void ProvidePictureBuffersWithVisibleRect(uint32_t requested_num_of_buffers,
VideoPixelFormat format,
uint32_t textures_per_buffer,
const gfx::Size& dimensions,
const gfx::Rect& visible_rect,
uint32_t texture_target) override;
void DismissPictureBuffer(int32_t picture_buffer_id) override;
void PictureReady(const Picture& picture) override;
void NotifyEndOfBitstreamBuffer(int32_t bitstream_buffer_id) override;
......
......@@ -1264,19 +1264,37 @@ void V4L2SliceVideoDecodeAccelerator::AssignPictureBuffersTask(
return;
}
// If a client allocates a different frame size, S_FMT should be called with
// the size.
if (!image_processor_device_ && coded_size_ != buffers[0].size()) {
const auto& new_frame_size = buffers[0].size();
const gfx::Size pic_size_received_from_client = buffers[0].size();
const gfx::Size pic_size_expected_from_client =
output_mode_ == Config::OutputMode::ALLOCATE
? decoder_->GetVisibleRect().size()
: coded_size_;
if (output_mode_ == Config::OutputMode::ALLOCATE &&
pic_size_expected_from_client != pic_size_received_from_client) {
// In ALLOCATE mode, we don't allow the client to adjust the size. That's
// because the client is responsible only for creating the GL texture and
// its dimensions should match the dimensions we use to create the GL image
// here (eventually).
LOG(ERROR)
<< "The client supplied a picture buffer with an unexpected size (Got "
<< pic_size_received_from_client.ToString() << ", expected "
<< pic_size_expected_from_client.ToString() << ")";
NOTIFY_ERROR(INVALID_ARGUMENT);
return;
} else if (output_mode_ == Config::OutputMode::IMPORT &&
!image_processor_device_ &&
pic_size_expected_from_client != pic_size_received_from_client) {
// If a client allocates a different frame size, S_FMT should be called with
// the size.
v4l2_format format = {};
format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
format.fmt.pix_mp.width = new_frame_size.width();
format.fmt.pix_mp.height = new_frame_size.height();
format.fmt.pix_mp.width = pic_size_received_from_client.width();
format.fmt.pix_mp.height = pic_size_received_from_client.height();
format.fmt.pix_mp.pixelformat = output_format_fourcc_->ToV4L2PixFmt();
format.fmt.pix_mp.num_planes = output_planes_count_;
if (device_->Ioctl(VIDIOC_S_FMT, &format) != 0) {
PLOG(ERROR) << "Failed with frame size adjusted by client"
<< new_frame_size.ToString();
<< pic_size_received_from_client.ToString();
NOTIFY_ERROR(PLATFORM_FAILURE);
return;
}
......@@ -1284,10 +1302,10 @@ void V4L2SliceVideoDecodeAccelerator::AssignPictureBuffersTask(
coded_size_.SetSize(format.fmt.pix_mp.width, format.fmt.pix_mp.height);
// If size specified by ProvidePictureBuffers() is adjusted by the client,
// the size must not be adjusted by a v4l2 driver again.
if (coded_size_ != new_frame_size) {
if (coded_size_ != pic_size_received_from_client) {
LOG(ERROR) << "The size of PictureBuffer is invalid."
<< " size adjusted by the client = "
<< new_frame_size.ToString()
<< pic_size_received_from_client.ToString()
<< " size adjusted by a driver = " << coded_size_.ToString();
NOTIFY_ERROR(INVALID_ARGUMENT);
return;
......@@ -1576,6 +1594,7 @@ void V4L2SliceVideoDecodeAccelerator::ImportBufferForPictureTask(
// (texture_id !=0). Moreover, if an image processor is in use, we will
// create the GL image when its buffer becomes visible in FrameProcessed().
if (iter->texture_id != 0 && !image_processor_) {
DCHECK_EQ(Config::OutputMode::ALLOCATE, output_mode_);
DCHECK_GT(handle.planes.size(), 0u);
size_t index = iter - output_buffer_map_.begin();
......
......@@ -78,19 +78,6 @@ bool IsGeminiLakeOrLater() {
return is_geminilake_or_later;
}
// Returns the size of a rectangle whose upper left corner is at the origin (0,
// 0) and whose bottom right corner is the same as that of |rect|. This is
// useful to get the size of a buffer that contains the visible rectangle plus
// the non-visible area above and to the left of the visible rectangle.
//
// An example to illustrate: suppose the visible rectangle of a decoded frame is
// 10,10,100,100. The size of this rectangle is 90x90. However, we need to
// create a texture of size 100x100 because the client will want to sample from
// the texture starting with uv coordinates corresponding to 10,10.
gfx::Size GetRectSizeFromOrigin(const gfx::Rect& rect) {
return gfx::Size(rect.bottom_right().x(), rect.bottom_right().y());
}
} // namespace
#define RETURN_AND_NOTIFY_ON_FAILURE(result, log, error_code, ret) \
......@@ -608,41 +595,20 @@ void VaapiVideoDecodeAccelerator::TryFinishSurfaceSetChange() {
}
pictures_.clear();
// In ALLOCATE mode, we are responsible for allocating storage for the result
// of the decode. However, the client is responsible for creating the GL
// texture to which we'll attach the decoded image. The decoder needs the
// buffer to be of size = |requested_pic_size_|, but for the purposes of
// working with a graphics API (e.g., GL), the client does not need to know
// about the non-visible area on the bottom or the right of the frame. For
// example, for a 360p H.264 video with a visible rectangle of 0,0,640x360,
// the coded size is 640x368, but the GL texture that the client uses should
// be only 640x360.
//
// In IMPORT mode, the client is responsible for allocating storage for the
// decoder to work with, so in that case, we must request the full coded size
// and the client is responsible for importing the decoded image into a
// graphics API correctly.
const gfx::Size pic_size_to_request_from_client =
(output_mode_ == Config::OutputMode::ALLOCATE)
? GetRectSizeFromOrigin(requested_visible_rect_)
: requested_pic_size_;
DCHECK(gfx::Rect(requested_pic_size_)
.Contains(gfx::Rect(pic_size_to_request_from_client)));
// And ask for a new set as requested.
VLOGF(2) << "Requesting " << requested_num_pics_ << " pictures of size: "
<< pic_size_to_request_from_client.ToString();
VLOGF(2) << "Requesting " << requested_num_pics_
<< " pictures of size: " << requested_pic_size_.ToString()
<< " and visible rectangle = " << requested_visible_rect_.ToString();
const base::Optional<VideoPixelFormat> format =
GfxBufferFormatToVideoPixelFormat(
vaapi_picture_factory_->GetBufferFormat());
CHECK(format);
task_runner_->PostTask(
FROM_HERE,
base::BindOnce(&Client::ProvidePictureBuffersWithVisibleRect, client_,
requested_num_pics_, *format, 1,
pic_size_to_request_from_client, requested_visible_rect_,
vaapi_picture_factory_->GetGLTextureTarget()));
FROM_HERE, base::BindOnce(&Client::ProvidePictureBuffersWithVisibleRect,
client_, requested_num_pics_, *format, 1,
requested_pic_size_, requested_visible_rect_,
vaapi_picture_factory_->GetGLTextureTarget()));
// |client_| may respond via AssignPictureBuffers().
}
......@@ -723,7 +689,7 @@ void VaapiVideoDecodeAccelerator::AssignPictureBuffers(
// Note that the |size_to_bind| is not relevant in IMPORT mode.
const gfx::Size size_to_bind =
(output_mode_ == Config::OutputMode::ALLOCATE)
? GetRectSizeFromOrigin(requested_visible_rect_)
? requested_visible_rect_.size()
: gfx::Size();
std::unique_ptr<VaapiPicture> picture = vaapi_picture_factory_->Create(
......