Commit 03b77088 authored by mcasas, committed by Commit bot

Reland 967793002 Linux Video Capture: Add V4L2VideoCaptureDelegate{Single,Multi}Plane

FileVideoCaptureDevice did not specify a CaptureApiType on Linux
platforms, so the check failed down the road.

Original description: -----------------------------------------

This CL adds support for the V4L2 MPLANE capture API.
The only supported format is YUV420M (triplanar).

A new method, OnIncomingCapturedYuvData(...), is added
to VideoCaptureDeviceClient; this forces adding MOCKs
here and there, plus its own implementation.

The V4L2 MMAP API works by the user mmap()ing a number
of buffers allocated by the V4L2 capture device. If those
buffers are not correctly munmap()ed, bad things (c)
happen. In light of this, the manual buffer lifetime
management is changed to an automatic one. Construction
(mmap()ing) of these so-called BufferTrackers is
planarity-specific (i.e. there is one such ctor in
each of BufferTracker{S,M}Plane), while the dtor
is generic and shared.
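The RAII scheme described above can be sketched roughly as follows (names and details are assumed for illustration, not the actual Chromium code; an anonymous mapping stands in for a real V4L2 device buffer):

```cpp
#include <sys/mman.h>

#include <cstddef>

// Minimal sketch of the BufferTracker idea: the planarity-specific part
// is the mmap()ing done at construction; the destructor munmap()s
// unconditionally, so buffers cannot leak even on error paths.
class BufferTracker {
 public:
  explicit BufferTracker(size_t length)
      : length_(length),
        start_(mmap(nullptr, length, PROT_READ | PROT_WRITE,
                    MAP_ANONYMOUS | MAP_PRIVATE, -1, 0)) {}

  // Generic, planarity-agnostic cleanup.
  ~BufferTracker() {
    if (start_ != MAP_FAILED)
      munmap(start_, length_);
  }

  void* start() const { return start_; }
  bool valid() const { return start_ != MAP_FAILED; }

 private:
  const size_t length_;
  void* const start_;
};
```

In the real delegate the mapping targets the driver's buffers (via the fd and offsets returned by VIDIOC_QUERYBUF), one mapping per plane in the MPLANE case.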

ToT class diagram:
     +------------------------------------+
     | VideoCaptureDeviceLinux            |
     |            +----------------------+|
     | <<ref>> -->| V4L2CaptureDelegate  ||
     |   cnt      |  (struct Buffer)     ||
     |            +----------------------+|
     +------------------------------------+

This CL class scheme:

 +--------------------------+
 | VideoCaptureDeviceLinux  |
 |                          |
 | <<ref_cnt>> ---+         |
 +----------------|---------+
 +----------------v-----------+ v4l2_capture_delegate.{cc,h}
 | +-----------------------+  |
 | |V4L2CaptureDelegate    |  |
 | |  (class BufferTracker)|  |
 | +-----------------------+  |
 +-------^------------------^-+
         |                  |
    +----|-------+ +--------|--+ v4l2_capture_delegate_multi_plane.{cc,h}
    | SPlane     | | MPlane    |
    | (BTSplane) | | (BTMPlane)|
    |            | +-----------+
    +------------+ v4l2_capture_delegate_single_plane.{cc,h}

- VCDevice works on the premise that its calls into
 VCDevice::Client::OnIncomingWhatever() are synchronous.
 That assumption is respected here.

- A bit of cleanup is done in OnIncomingCapturedData()
 regarding rotation/cropping/odd sizes. A unit test
 is added accordingly.
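The odd-size handling reduces to a bit decomposition of each dimension; a small sketch (helper name is illustrative only):

```cpp
// The chopped pixel is the low bit of each dimension; the unrotated
// destination size is the dimension with that bit masked off.
struct ChoppedSize {
  int chopped_width;
  int chopped_height;
  int new_unrotated_width;
  int new_unrotated_height;
};

ChoppedSize ChopToEven(int width, int height) {
  return {width & 1, height & 1, width & ~1, height & ~1};
}
```

For example, a 7x4 input yields a 6x4 destination with one chopped column.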

- VideoCaptureDeviceFactory labels the devices as
 Single or Multi Planar. That label, capture_api_type(),
 needs to ripple through a bunch of files, causing some
 otherwise uninteresting changes in the patchsets.
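The labeling boils down to a small enum-to-string mapping; a hedged sketch (enum values and strings taken from the accompanying media_internals test, the function shape is assumed):

```cpp
#include <map>
#include <string>

// Each Linux device Name carries a CaptureApiType whose string
// representation is surfaced in chrome://media-internals.
enum CaptureApiType { V4L2_SINGLE_PLANE, V4L2_MULTI_PLANE };

std::string GetCaptureApiTypeString(CaptureApiType type) {
  static const std::map<CaptureApiType, std::string> kNames = {
      {V4L2_SINGLE_PLANE, "V4L2 SPLANE"},
      {V4L2_MULTI_PLANE, "V4L2 MPLANE"}};
  return kNames.at(type);
}
```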

BUG=441836

TEST= Compile and insmod vivid.ko into a kernel with
options enabling the MPLANE API (multiplanar=2), then
capture using patched Chromium. Current vivid does
_not_ support any common MPLANE format and needs a
patch:

https://github.com/miguelao/linux/tree/adding_yu12_yv12_nv12_nv21___mplane_formats___with_mods_for_kernel_3_13

which needs to be compiled against Ubuntu 3.13 kernel sources.

For even better coverage, use a normal webcam and
navigate to http://goo.gl/fUcIiP, then open both
the MPlane camera mentioned in the previous paragraph
and the "normal" webcam (this is, partially, how I
tested it).

TBR= posciak@chromium.org, perkj@chromium.org, magjed@chromium.org, dalecurtis@chromium.org

Review URL: https://codereview.chromium.org/1026073002

Cr-Commit-Position: refs/heads/master@{#321856}
parent 66e905bc
...@@ -41,6 +41,16 @@ class MockDeviceClient : public media::VideoCaptureDevice::Client { ...@@ -41,6 +41,16 @@ class MockDeviceClient : public media::VideoCaptureDevice::Client {
const media::VideoCaptureFormat& frame_format, const media::VideoCaptureFormat& frame_format,
int rotation, int rotation,
const base::TimeTicks& timestamp)); const base::TimeTicks& timestamp));
MOCK_METHOD9(OnIncomingCapturedYuvData,
void (const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const media::VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp));
MOCK_METHOD2(ReserveOutputBuffer, MOCK_METHOD2(ReserveOutputBuffer,
scoped_refptr<Buffer>(media::VideoFrame::Format format, scoped_refptr<Buffer>(media::VideoFrame::Format format,
const gfx::Size& dimensions)); const gfx::Size& dimensions));
......
...@@ -59,6 +59,16 @@ class MockDeviceClient : public media::VideoCaptureDevice::Client { ...@@ -59,6 +59,16 @@ class MockDeviceClient : public media::VideoCaptureDevice::Client {
const media::VideoCaptureFormat& frame_format, const media::VideoCaptureFormat& frame_format,
int rotation, int rotation,
const base::TimeTicks& timestamp)); const base::TimeTicks& timestamp));
MOCK_METHOD9(OnIncomingCapturedYuvData,
void (const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const media::VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp));
MOCK_METHOD2(ReserveOutputBuffer, MOCK_METHOD2(ReserveOutputBuffer,
scoped_refptr<Buffer>(media::VideoFrame::Format format, scoped_refptr<Buffer>(media::VideoFrame::Format format,
const gfx::Size& dimensions)); const gfx::Size& dimensions));
......
...@@ -323,6 +323,18 @@ class StubClient : public media::VideoCaptureDevice::Client { ...@@ -323,6 +323,18 @@ class StubClient : public media::VideoCaptureDevice::Client {
FAIL(); FAIL();
} }
void OnIncomingCapturedYuvData(const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const media::VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp) override {
FAIL();
}
scoped_refptr<media::VideoCaptureDevice::Client::Buffer> ReserveOutputBuffer( scoped_refptr<media::VideoCaptureDevice::Client::Buffer> ReserveOutputBuffer(
media::VideoFrame::Format format, media::VideoFrame::Format format,
const gfx::Size& dimensions) override { const gfx::Size& dimensions) override {
......
...@@ -487,7 +487,7 @@ void MediaInternals::UpdateVideoCaptureDeviceCapabilities( ...@@ -487,7 +487,7 @@ void MediaInternals::UpdateVideoCaptureDeviceCapabilities(
device_dict->SetString( device_dict->SetString(
"name", video_capture_device_info.name.GetNameAndModel()); "name", video_capture_device_info.name.GetNameAndModel());
device_dict->Set("formats", format_list); device_dict->Set("formats", format_list);
#if defined(OS_WIN) || defined(OS_MACOSX) #if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
device_dict->SetString( device_dict->SetString(
"captureApi", "captureApi",
video_capture_device_info.name.GetCaptureApiTypeString()); video_capture_device_info.name.GetCaptureApiTypeString());
......
...@@ -109,14 +109,17 @@ class MediaInternalsVideoCaptureDeviceTest : public testing::Test, ...@@ -109,14 +109,17 @@ class MediaInternalsVideoCaptureDeviceTest : public testing::Test,
MediaInternals::UpdateCallback update_cb_; MediaInternals::UpdateCallback update_cb_;
}; };
#if defined(OS_WIN) || defined(OS_MACOSX) #if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
TEST_F(MediaInternalsVideoCaptureDeviceTest, TEST_F(MediaInternalsVideoCaptureDeviceTest,
AllCaptureApiTypesHaveProperStringRepresentation) { AllCaptureApiTypesHaveProperStringRepresentation) {
typedef media::VideoCaptureDevice::Name VideoCaptureDeviceName; typedef media::VideoCaptureDevice::Name VideoCaptureDeviceName;
typedef std::map<VideoCaptureDeviceName::CaptureApiType, std::string> typedef std::map<VideoCaptureDeviceName::CaptureApiType, std::string>
CaptureApiTypeStringMap; CaptureApiTypeStringMap;
CaptureApiTypeStringMap m; CaptureApiTypeStringMap m;
#if defined(OS_WIN) #if defined(OS_LINUX)
m[VideoCaptureDeviceName::V4L2_SINGLE_PLANE] = "V4L2 SPLANE";
m[VideoCaptureDeviceName::V4L2_MULTI_PLANE] = "V4L2 MPLANE";
#elif defined(OS_WIN)
m[VideoCaptureDeviceName::MEDIA_FOUNDATION] = "Media Foundation"; m[VideoCaptureDeviceName::MEDIA_FOUNDATION] = "Media Foundation";
m[VideoCaptureDeviceName::DIRECT_SHOW] = "Direct Show"; m[VideoCaptureDeviceName::DIRECT_SHOW] = "Direct Show";
m[VideoCaptureDeviceName::DIRECT_SHOW_WDM_CROSSBAR] = m[VideoCaptureDeviceName::DIRECT_SHOW_WDM_CROSSBAR] =
...@@ -172,8 +175,10 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest, ...@@ -172,8 +175,10 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest,
#elif defined(OS_WIN) #elif defined(OS_WIN)
media::VideoCaptureDevice::Name("dummy", "dummy", media::VideoCaptureDevice::Name("dummy", "dummy",
media::VideoCaptureDevice::Name::DIRECT_SHOW), media::VideoCaptureDevice::Name::DIRECT_SHOW),
#elif defined(OS_LINUX) || defined(OS_CHROMEOS) #elif defined(OS_LINUX)
media::VideoCaptureDevice::Name("dummy", "/dev/dummy"), media::VideoCaptureDevice::Name(
"dummy", "/dev/dummy",
media::VideoCaptureDevice::Name::V4L2_SINGLE_PLANE),
#else #else
media::VideoCaptureDevice::Name("dummy", "dummy"), media::VideoCaptureDevice::Name("dummy", "dummy"),
#endif #endif
...@@ -187,7 +192,7 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest, ...@@ -187,7 +192,7 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest,
// exactly one device_info in the |device_infos|. // exactly one device_info in the |device_infos|.
media_internals_->UpdateVideoCaptureDeviceCapabilities(device_infos); media_internals_->UpdateVideoCaptureDeviceCapabilities(device_infos);
#if defined(OS_LINUX) || defined(OS_CHROMEOS) #if defined(OS_LINUX)
ExpectString("id", "/dev/dummy"); ExpectString("id", "/dev/dummy");
#else #else
ExpectString("id", "dummy"); ExpectString("id", "dummy");
...@@ -196,10 +201,12 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest, ...@@ -196,10 +201,12 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest,
base::ListValue expected_list; base::ListValue expected_list;
expected_list.AppendString(format_hd.ToString()); expected_list.AppendString(format_hd.ToString());
ExpectListOfStrings("formats", expected_list); ExpectListOfStrings("formats", expected_list);
#if defined(OS_MACOSX) #if defined(OS_LINUX)
ExpectString("captureApi", "QTKit"); ExpectString("captureApi", "V4L2 SPLANE");
#elif defined(OS_WIN) #elif defined(OS_WIN)
ExpectString("captureApi", "Direct Show"); ExpectString("captureApi", "Direct Show");
#elif defined(OS_MACOSX)
ExpectString("captureApi", "QTKit");
#endif #endif
} }
......
...@@ -29,8 +29,10 @@ ...@@ -29,8 +29,10 @@
#include "content/browser/compositor/test/no_transport_image_transport_factory.h" #include "content/browser/compositor/test/no_transport_image_transport_factory.h"
#endif #endif
using ::testing::_;
using ::testing::InSequence; using ::testing::InSequence;
using ::testing::Mock; using ::testing::Mock;
using ::testing::SaveArg;
namespace content { namespace content {
...@@ -46,7 +48,7 @@ class MockVideoCaptureControllerEventHandler ...@@ -46,7 +48,7 @@ class MockVideoCaptureControllerEventHandler
// VideoCaptureControllerEventHandler, to be used in EXPECT_CALL(). // VideoCaptureControllerEventHandler, to be used in EXPECT_CALL().
MOCK_METHOD1(DoBufferCreated, void(VideoCaptureControllerID)); MOCK_METHOD1(DoBufferCreated, void(VideoCaptureControllerID));
MOCK_METHOD1(DoBufferDestroyed, void(VideoCaptureControllerID)); MOCK_METHOD1(DoBufferDestroyed, void(VideoCaptureControllerID));
MOCK_METHOD1(DoBufferReady, void(VideoCaptureControllerID)); MOCK_METHOD2(DoBufferReady, void(VideoCaptureControllerID, const gfx::Size&));
MOCK_METHOD1(DoMailboxBufferReady, void(VideoCaptureControllerID)); MOCK_METHOD1(DoMailboxBufferReady, void(VideoCaptureControllerID));
MOCK_METHOD1(DoEnded, void(VideoCaptureControllerID)); MOCK_METHOD1(DoEnded, void(VideoCaptureControllerID));
MOCK_METHOD1(DoError, void(VideoCaptureControllerID)); MOCK_METHOD1(DoError, void(VideoCaptureControllerID));
...@@ -70,7 +72,7 @@ class MockVideoCaptureControllerEventHandler ...@@ -70,7 +72,7 @@ class MockVideoCaptureControllerEventHandler
const gfx::Rect& visible_rect, const gfx::Rect& visible_rect,
const base::TimeTicks& timestamp, const base::TimeTicks& timestamp,
scoped_ptr<base::DictionaryValue> metadata) override { scoped_ptr<base::DictionaryValue> metadata) override {
DoBufferReady(id); DoBufferReady(id, coded_size);
base::MessageLoop::current()->PostTask( base::MessageLoop::current()->PostTask(
FROM_HERE, FROM_HERE,
base::Bind(&VideoCaptureController::ReturnBuffer, base::Bind(&VideoCaptureController::ReturnBuffer,
...@@ -331,17 +333,17 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) { ...@@ -331,17 +333,17 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
{ {
InSequence s; InSequence s;
EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_1)).Times(1); EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_1)).Times(1);
EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1)).Times(1); EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1,_)).Times(1);
} }
{ {
InSequence s; InSequence s;
EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_1)).Times(1); EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_1)).Times(1);
EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1)).Times(1); EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1,_)).Times(1);
} }
{ {
InSequence s; InSequence s;
EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_2)).Times(1); EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_2)).Times(1);
EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2)).Times(1); EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2,_)).Times(1);
} }
device_->OnIncomingCapturedVideoFrame( device_->OnIncomingCapturedVideoFrame(
buffer, buffer,
...@@ -367,9 +369,9 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) { ...@@ -367,9 +369,9 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
buffer = NULL; buffer = NULL;
// The buffer should be delivered to the clients in any order. // The buffer should be delivered to the clients in any order.
EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1)).Times(1); EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1,_)).Times(1);
EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1)).Times(1); EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1,_)).Times(1);
EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2)).Times(1); EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2,_)).Times(1);
base::RunLoop().RunUntilIdle(); base::RunLoop().RunUntilIdle();
Mock::VerifyAndClearExpectations(client_a_.get()); Mock::VerifyAndClearExpectations(client_a_.get());
Mock::VerifyAndClearExpectations(client_b_.get()); Mock::VerifyAndClearExpectations(client_b_.get());
...@@ -400,16 +402,16 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) { ...@@ -400,16 +402,16 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
// The new client needs to be told of 3 buffers; the old clients only 2. // The new client needs to be told of 3 buffers; the old clients only 2.
EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_2)).Times(kPoolSize); EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_2)).Times(kPoolSize);
EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2)).Times(kPoolSize); EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2,_)).Times(kPoolSize);
EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_1)) EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_1))
.Times(kPoolSize - 1); .Times(kPoolSize - 1);
EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1)).Times(kPoolSize); EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1,_)).Times(kPoolSize);
EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_2)) EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_2))
.Times(kPoolSize - 1); .Times(kPoolSize - 1);
EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2)).Times(kPoolSize); EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2,_)).Times(kPoolSize);
EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_1)) EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_1))
.Times(kPoolSize - 1); .Times(kPoolSize - 1);
EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1)).Times(kPoolSize); EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1,_)).Times(kPoolSize);
base::RunLoop().RunUntilIdle(); base::RunLoop().RunUntilIdle();
Mock::VerifyAndClearExpectations(client_a_.get()); Mock::VerifyAndClearExpectations(client_a_.get());
Mock::VerifyAndClearExpectations(client_b_.get()); Mock::VerifyAndClearExpectations(client_b_.get());
...@@ -447,7 +449,7 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) { ...@@ -447,7 +449,7 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
buffer = NULL; buffer = NULL;
// B2 is the only client left, and is the only one that should // B2 is the only client left, and is the only one that should
// get the buffer. // get the buffer.
EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2)).Times(2); EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2,_)).Times(2);
base::RunLoop().RunUntilIdle(); base::RunLoop().RunUntilIdle();
Mock::VerifyAndClearExpectations(client_a_.get()); Mock::VerifyAndClearExpectations(client_a_.get());
Mock::VerifyAndClearExpectations(client_b_.get()); Mock::VerifyAndClearExpectations(client_b_.get());
...@@ -500,7 +502,7 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) { ...@@ -500,7 +502,7 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
capture_resolution).get()); capture_resolution).get());
ASSERT_FALSE(device_->ReserveOutputBuffer(media::VideoFrame::NATIVE_TEXTURE, ASSERT_FALSE(device_->ReserveOutputBuffer(media::VideoFrame::NATIVE_TEXTURE,
gfx::Size(0, 0)).get()); gfx::Size(0, 0)).get());
EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2)).Times(shm_buffers); EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2,_)).Times(shm_buffers);
EXPECT_CALL(*client_b_, DoMailboxBufferReady(client_b_route_2)) EXPECT_CALL(*client_b_, DoMailboxBufferReady(client_b_route_2))
.Times(mailbox_buffers); .Times(mailbox_buffers);
base::RunLoop().RunUntilIdle(); base::RunLoop().RunUntilIdle();
...@@ -618,30 +620,91 @@ TEST_F(VideoCaptureControllerTest, DataCaptureInEachVideoFormatInSequence) { ...@@ -618,30 +620,91 @@ TEST_F(VideoCaptureControllerTest, DataCaptureInEachVideoFormatInSequence) {
const gfx::Size capture_resolution(10, 10); const gfx::Size capture_resolution(10, 10);
ASSERT_GE(kScratchpadSizeInBytes, capture_resolution.GetArea() * 4u) ASSERT_GE(kScratchpadSizeInBytes, capture_resolution.GetArea() * 4u)
<< "Scratchpad is too small to hold the largest pixel format (ARGB)."; << "Scratchpad is too small to hold the largest pixel format (ARGB).";
const int kSessionId = 100;
// This Test skips PIXEL_FORMAT_TEXTURE and PIXEL_FORMAT_UNKNOWN. // This Test skips PIXEL_FORMAT_TEXTURE and PIXEL_FORMAT_UNKNOWN.
for (int format = 0; format < media::PIXEL_FORMAT_TEXTURE; ++format) { for (int format = 0; format < media::PIXEL_FORMAT_TEXTURE; ++format) {
media::VideoCaptureParams params; media::VideoCaptureParams params;
params.requested_format = media::VideoCaptureFormat( params.requested_format = media::VideoCaptureFormat(
capture_resolution, 30, media::VideoPixelFormat(format)); capture_resolution, 30, media::VideoPixelFormat(format));
const gfx::Size capture_resolution(320, 240);
const VideoCaptureControllerID route(0x99);
// Start with one client. // Start with one client.
controller_->AddClient(route, const VideoCaptureControllerID route_id(0x99);
controller_->AddClient(route_id,
client_a_.get(), client_a_.get(),
base::kNullProcessHandle, base::kNullProcessHandle,
100, kSessionId,
params); params);
ASSERT_EQ(1, controller_->GetClientCount()); ASSERT_EQ(1, controller_->GetClientCount());
device_->OnIncomingCapturedData( device_->OnIncomingCapturedData(
data, data,
params.requested_format.ImageAllocationSize(), params.requested_format.ImageAllocationSize(),
params.requested_format, params.requested_format,
0 /* rotation */, 0 /* clockwise_rotation */,
base::TimeTicks()); base::TimeTicks());
EXPECT_EQ(100, controller_->RemoveClient(route, client_a_.get())); EXPECT_EQ(kSessionId, controller_->RemoveClient(route_id, client_a_.get()));
Mock::VerifyAndClearExpectations(client_a_.get());
}
}
// Test that we receive the expected resolution for a given captured frame
// resolution and rotation. Odd resolutions are also cropped.
TEST_F(VideoCaptureControllerTest, CheckRotationsAndCrops) {
const int kSessionId = 100;
const struct SizeAndRotation {
gfx::Size input_resolution;
int rotation;
gfx::Size output_resolution;
} kSizeAndRotations[] = {{{6, 4}, 0, {6, 4}},
{{6, 4}, 90, {4, 6}},
{{6, 4}, 180, {6, 4}},
{{6, 4}, 270, {4, 6}},
{{7, 4}, 0, {6, 4}},
{{7, 4}, 90, {4, 6}},
{{7, 4}, 180, {6, 4}},
{{7, 4}, 270, {4, 6}}};
// The usual ReserveOutputBuffer() -> OnIncomingCapturedVideoFrame() cannot
// be used since it does not resolve rotations or crops. The memory backed
// buffer OnIncomingCapturedData() is used instead, with a dummy scratchpad
// buffer.
const size_t kScratchpadSizeInBytes = 400;
unsigned char data[kScratchpadSizeInBytes] = {};
media::VideoCaptureParams params;
for (const auto& size_and_rotation : kSizeAndRotations) {
ASSERT_GE(kScratchpadSizeInBytes,
size_and_rotation.input_resolution.GetArea() * 4u)
<< "Scratchpad is too small to hold the largest pixel format (ARGB).";
params.requested_format = media::VideoCaptureFormat(
size_and_rotation.input_resolution, 30, media::PIXEL_FORMAT_ARGB);
const VideoCaptureControllerID route_id(0x99);
controller_->AddClient(route_id, client_a_.get(), base::kNullProcessHandle,
kSessionId, params);
ASSERT_EQ(1, controller_->GetClientCount());
device_->OnIncomingCapturedData(
data,
params.requested_format.ImageAllocationSize(),
params.requested_format,
size_and_rotation.rotation,
base::TimeTicks());
gfx::Size coded_size;
{
InSequence s;
EXPECT_CALL(*client_a_, DoBufferCreated(route_id)).Times(1);
EXPECT_CALL(*client_a_, DoBufferReady(route_id, _))
.Times(1)
.WillOnce(SaveArg<1>(&coded_size));
}
base::RunLoop().RunUntilIdle();
EXPECT_EQ(coded_size.width(), size_and_rotation.output_resolution.width());
EXPECT_EQ(coded_size.height(),
size_and_rotation.output_resolution.height());
EXPECT_EQ(kSessionId, controller_->RemoveClient(route_id, client_a_.get()));
Mock::VerifyAndClearExpectations(client_a_.get()); Mock::VerifyAndClearExpectations(client_a_.get());
} }
} }
......
...@@ -73,21 +73,12 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData( ...@@ -73,21 +73,12 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
if (!frame_format.IsValid()) if (!frame_format.IsValid())
return; return;
// Chopped pixels in width/height in case video capture device has odd // |chopped_{width,height} and |new_unrotated_{width,height}| are the lowest
// numbers for width/height. // bit decomposition of {width, height}, grabbing the odd and even parts.
int chopped_width = 0; const int chopped_width = frame_format.frame_size.width() & 1;
int chopped_height = 0; const int chopped_height = frame_format.frame_size.height() & 1;
int new_unrotated_width = frame_format.frame_size.width(); const int new_unrotated_width = frame_format.frame_size.width() & ~1;
int new_unrotated_height = frame_format.frame_size.height(); const int new_unrotated_height = frame_format.frame_size.height() & ~1;
if (new_unrotated_width & 1) {
--new_unrotated_width;
chopped_width = 1;
}
if (new_unrotated_height & 1) {
--new_unrotated_height;
chopped_height = 1;
}
int destination_width = new_unrotated_width; int destination_width = new_unrotated_width;
int destination_height = new_unrotated_height; int destination_height = new_unrotated_height;
...@@ -95,6 +86,17 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData( ...@@ -95,6 +86,17 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
destination_width = new_unrotated_height; destination_width = new_unrotated_height;
destination_height = new_unrotated_width; destination_height = new_unrotated_width;
} }
DCHECK_EQ(rotation % 90, 0)
<< " Rotation must be a multiple of 90, now: " << rotation;
libyuv::RotationMode rotation_mode = libyuv::kRotate0;
if (rotation == 90)
rotation_mode = libyuv::kRotate90;
else if (rotation == 180)
rotation_mode = libyuv::kRotate180;
else if (rotation == 270)
rotation_mode = libyuv::kRotate270;
const gfx::Size dimensions(destination_width, destination_height); const gfx::Size dimensions(destination_width, destination_height);
if (!VideoFrame::IsValidConfig(VideoFrame::I420, if (!VideoFrame::IsValidConfig(VideoFrame::I420,
dimensions, dimensions,
...@@ -121,14 +123,6 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData( ...@@ -121,14 +123,6 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
int crop_y = 0; int crop_y = 0;
libyuv::FourCC origin_colorspace = libyuv::FOURCC_ANY; libyuv::FourCC origin_colorspace = libyuv::FOURCC_ANY;
libyuv::RotationMode rotation_mode = libyuv::kRotate0;
if (rotation == 90)
rotation_mode = libyuv::kRotate90;
else if (rotation == 180)
rotation_mode = libyuv::kRotate180;
else if (rotation == 270)
rotation_mode = libyuv::kRotate270;
bool flip = false; bool flip = false;
switch (frame_format.pixel_format) { switch (frame_format.pixel_format) {
case media::PIXEL_FORMAT_UNKNOWN: // Color format not set. case media::PIXEL_FORMAT_UNKNOWN: // Color format not set.
...@@ -229,6 +223,76 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData( ...@@ -229,6 +223,76 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
timestamp)); timestamp));
} }
void
VideoCaptureDeviceClient::OnIncomingCapturedYuvData(
const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp) {
TRACE_EVENT0("video", "VideoCaptureController::OnIncomingCapturedYuvData");
DCHECK_EQ(frame_format.pixel_format, media::PIXEL_FORMAT_I420);
DCHECK_EQ(clockwise_rotation, 0) << "Rotation not supported";
scoped_refptr<Buffer> buffer = ReserveOutputBuffer(VideoFrame::I420,
frame_format.frame_size);
if (!buffer.get())
return;
// Blit (copy) here from y,u,v into buffer.data()). Needed so we can return
// the parameter buffer synchronously to the driver.
const size_t y_plane_size = VideoFrame::PlaneAllocationSize(VideoFrame::I420,
VideoFrame::kYPlane, frame_format.frame_size);
const size_t u_plane_size = VideoFrame::PlaneAllocationSize(
VideoFrame::I420, VideoFrame::kUPlane, frame_format.frame_size);
uint8* const dst_y = reinterpret_cast<uint8*>(buffer->data());
uint8* const dst_u = dst_y + y_plane_size;
uint8* const dst_v = dst_u + u_plane_size;
const size_t dst_y_stride = VideoFrame::RowBytes(
VideoFrame::kYPlane, VideoFrame::I420, frame_format.frame_size.width());
const size_t dst_u_stride = VideoFrame::RowBytes(
VideoFrame::kUPlane, VideoFrame::I420, frame_format.frame_size.width());
const size_t dst_v_stride = VideoFrame::RowBytes(
VideoFrame::kVPlane, VideoFrame::I420, frame_format.frame_size.width());
DCHECK_GE(y_stride, dst_y_stride);
DCHECK_GE(u_stride, dst_u_stride);
DCHECK_GE(v_stride, dst_v_stride);
if (libyuv::I420Copy(y_data, y_stride,
u_data, u_stride,
v_data, v_stride,
dst_y, dst_y_stride,
dst_u, dst_u_stride,
dst_v, dst_v_stride,
frame_format.frame_size.width(),
frame_format.frame_size.height())) {
DLOG(WARNING) << "Failed to copy buffer";
return;
}
scoped_refptr<VideoFrame> video_frame = VideoFrame::WrapExternalYuvData(
VideoFrame::I420, frame_format.frame_size,
gfx::Rect(frame_format.frame_size), frame_format.frame_size, y_stride,
u_stride, v_stride, dst_y, dst_u, dst_v, base::TimeDelta(),
base::Closure());
DCHECK(video_frame.get());
BrowserThread::PostTask(
BrowserThread::IO,
FROM_HERE,
base::Bind(
&VideoCaptureController::DoIncomingCapturedVideoFrameOnIOThread,
controller_,
buffer,
video_frame,
timestamp));
};
scoped_refptr<media::VideoCaptureDevice::Client::Buffer> scoped_refptr<media::VideoCaptureDevice::Client::Buffer>
VideoCaptureDeviceClient::ReserveOutputBuffer(VideoFrame::Format format, VideoCaptureDeviceClient::ReserveOutputBuffer(VideoFrame::Format format,
const gfx::Size& dimensions) { const gfx::Size& dimensions) {
......
...@@ -39,6 +39,15 @@ class CONTENT_EXPORT VideoCaptureDeviceClient ...@@ -39,6 +39,15 @@ class CONTENT_EXPORT VideoCaptureDeviceClient
const media::VideoCaptureFormat& frame_format, const media::VideoCaptureFormat& frame_format,
int rotation, int rotation,
const base::TimeTicks& timestamp) override; const base::TimeTicks& timestamp) override;
void OnIncomingCapturedYuvData(const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const media::VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp) override;
scoped_refptr<Buffer> ReserveOutputBuffer(media::VideoFrame::Format format, scoped_refptr<Buffer> ReserveOutputBuffer(media::VideoFrame::Format format,
const gfx::Size& size) override; const gfx::Size& size) override;
void OnIncomingCapturedVideoFrame( void OnIncomingCapturedVideoFrame(
......
...@@ -210,6 +210,12 @@ component("media") { ...@@ -210,6 +210,12 @@ component("media") {
"renderers/video_renderer_impl.h", "renderers/video_renderer_impl.h",
"video/capture/file_video_capture_device.cc", "video/capture/file_video_capture_device.cc",
"video/capture/file_video_capture_device.h", "video/capture/file_video_capture_device.h",
"video/capture/linux/v4l2_capture_delegate.cc",
"video/capture/linux/v4l2_capture_delegate.h",
"video/capture/linux/v4l2_capture_delegate_multi_plane.cc",
"video/capture/linux/v4l2_capture_delegate_multi_plane.h",
"video/capture/linux/v4l2_capture_delegate_single_plane.cc",
"video/capture/linux/v4l2_capture_delegate_single_plane.h",
"video/capture/linux/video_capture_device_chromeos.cc", "video/capture/linux/video_capture_device_chromeos.cc",
"video/capture/linux/video_capture_device_chromeos.h", "video/capture/linux/video_capture_device_chromeos.h",
"video/capture/linux/video_capture_device_linux.cc", "video/capture/linux/video_capture_device_linux.cc",
...@@ -394,6 +400,13 @@ component("media") { ...@@ -394,6 +400,13 @@ component("media") {
] ]
} }
if (is_openbsd) {
sources -= [
"video/capture/linux/v4l2_capture_delegate_multi_plane.cc",
"video/capture/linux/v4l2_capture_delegate_multi_plane.h",
]
}
if (is_ios) { if (is_ios) {
deps += [ "//media/base/mac" ] deps += [ "//media/base/mac" ]
} }
......
...@@ -567,6 +567,12 @@ ...@@ -567,6 +567,12 @@
'video/capture/file_video_capture_device.h', 'video/capture/file_video_capture_device.h',
'video/capture/file_video_capture_device_factory.cc', 'video/capture/file_video_capture_device_factory.cc',
'video/capture/file_video_capture_device_factory.h', 'video/capture/file_video_capture_device_factory.h',
'video/capture/linux/v4l2_capture_delegate.cc',
'video/capture/linux/v4l2_capture_delegate.h',
'video/capture/linux/v4l2_capture_delegate_multi_plane.cc',
'video/capture/linux/v4l2_capture_delegate_multi_plane.h',
'video/capture/linux/v4l2_capture_delegate_single_plane.cc',
'video/capture/linux/v4l2_capture_delegate_single_plane.h',
'video/capture/linux/video_capture_device_chromeos.cc', 'video/capture/linux/video_capture_device_chromeos.cc',
'video/capture/linux/video_capture_device_chromeos.h', 'video/capture/linux/video_capture_device_chromeos.h',
'video/capture/linux/video_capture_device_factory_linux.cc', 'video/capture/linux/video_capture_device_factory_linux.cc',
...@@ -757,6 +763,11 @@ ...@@ -757,6 +763,11 @@
'audio/openbsd/audio_manager_openbsd.cc', 'audio/openbsd/audio_manager_openbsd.cc',
'audio/openbsd/audio_manager_openbsd.h', 'audio/openbsd/audio_manager_openbsd.h',
], ],
}, { # else: openbsd==1
'sources!': [
'video/capture/linux/v4l2_capture_delegate_multi_plane.cc',
'video/capture/linux/v4l2_capture_delegate_multi_plane.h',
],
}], }],
['OS=="linux"', { ['OS=="linux"', {
'conditions': [ 'conditions': [
@@ -31,7 +31,9 @@ void FakeVideoCaptureDeviceFactory::GetDeviceNames(
for (int n = 0; n < number_of_devices_; ++n) {
VideoCaptureDevice::Name name(base::StringPrintf("fake_device_%d", n),
base::StringPrintf("/dev/video%d", n)
#if defined(OS_LINUX)
, VideoCaptureDevice::Name::V4L2_SINGLE_PLANE
#elif defined(OS_MACOSX)
, VideoCaptureDevice::Name::AVFOUNDATION
#elif defined(OS_WIN)
, VideoCaptureDevice::Name::DIRECT_SHOW
@@ -23,6 +23,16 @@ namespace {
class MockClient : public VideoCaptureDevice::Client {
public:
MOCK_METHOD9(OnIncomingCapturedYuvData,
void (const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp));
MOCK_METHOD2(ReserveOutputBuffer,
scoped_refptr<Buffer>(VideoFrame::Format format,
const gfx::Size& dimensions));
@@ -80,6 +90,8 @@ class FakeVideoCaptureDeviceTest : public testing::Test {
}
void SetUp() override {
EXPECT_CALL(*client_, OnIncomingCapturedYuvData(_,_,_,_,_,_,_,_,_))
.Times(0);
EXPECT_CALL(*client_, ReserveOutputBuffer(_,_)).Times(0);
EXPECT_CALL(*client_, OnIncomingCapturedVideoFrame(_,_,_)).Times(0);
}
@@ -51,6 +51,11 @@ void FileVideoCaptureDeviceFactory::GetDeviceNames(
command_line_file_path.value(),
kFileVideoCaptureDeviceName,
VideoCaptureDevice::Name::AVFOUNDATION));
#elif defined(OS_LINUX)
device_names->push_back(VideoCaptureDevice::Name(
command_line_file_path.value(),
kFileVideoCaptureDeviceName,
VideoCaptureDevice::Name::V4L2_SINGLE_PLANE));
#else
device_names->push_back(VideoCaptureDevice::Name(
command_line_file_path.value(),
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef MEDIA_VIDEO_CAPTURE_LINUX_V4L2_VIDEO_CAPTURE_DELEGATE_H_
#define MEDIA_VIDEO_CAPTURE_LINUX_V4L2_VIDEO_CAPTURE_DELEGATE_H_
#if defined(OS_OPENBSD)
#include <sys/videoio.h>
#else
#include <linux/videodev2.h>
#endif
#include "base/files/scoped_file.h"
#include "base/memory/ref_counted.h"
#include "base/memory/scoped_vector.h"
#include "media/video/capture/video_capture_device.h"
namespace media {
// Class doing the actual Linux capture using V4L2 API. V4L2 SPLANE/MPLANE
// capture specifics are implemented in derived classes. Created and destroyed
// on the owner's thread, otherwise living and operating on |v4l2_task_runner_|.
class V4L2CaptureDelegate
: public base::RefCountedThreadSafe<V4L2CaptureDelegate> {
public:
// Creates the appropriate V4L2CaptureDelegate according to parameters.
static scoped_refptr<V4L2CaptureDelegate> CreateV4L2CaptureDelegate(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency);
// Retrieves the number of planes for a given |fourcc|, or 0 if unknown.
static size_t GetNumPlanesForFourCc(uint32_t fourcc);
// Returns the Chrome pixel format for |v4l2_fourcc| or PIXEL_FORMAT_UNKNOWN.
static VideoPixelFormat V4l2FourCcToChromiumPixelFormat(uint32_t v4l2_fourcc);
// Composes a list of usable and supported pixel formats, in order of
// preference, with MJPEG prioritised depending on |prefer_mjpeg|.
static std::list<uint32_t> GetListOfUsableFourCcs(bool prefer_mjpeg);
// Forward-to versions of VideoCaptureDevice virtual methods.
void AllocateAndStart(int width,
int height,
float frame_rate,
scoped_ptr<VideoCaptureDevice::Client> client);
void StopAndDeAllocate();
void SetRotation(int rotation);
protected:
// Class keeping track of SPLANE/MPLANE V4L2 buffers, mmap()ed on construction
// and munmap()ed on destruction. Destruction is identical for S/MPLANE,
// but construction is not, so it is implemented in derived classes.
// Internally it has a vector of planes, which for SPLANE will contain only
// one element.
class BufferTracker : public base::RefCounted<BufferTracker> {
public:
BufferTracker();
// Abstract method to mmap() given |fd| according to |buffer|, planarity
// specific.
virtual bool Init(int fd, const v4l2_buffer& buffer) = 0;
uint8_t* const GetPlaneStart(size_t plane) const {
DCHECK_LT(plane, planes_.size());
return planes_[plane].start;
}
protected:
friend class base::RefCounted<BufferTracker>;
virtual ~BufferTracker();
// Adds a given mmap()ed plane to |planes_|.
void AddMmapedPlane(uint8_t* const start, size_t length);
private:
struct Plane {
uint8_t* start;
size_t length;
};
std::vector<Plane> planes_;
};
V4L2CaptureDelegate(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency);
virtual ~V4L2CaptureDelegate();
// Creates the necessary, planarity-specific, internal tracking schemes.
virtual scoped_refptr<BufferTracker> CreateBufferTracker() const = 0;
// Fills in |format| with the given parameters, in a planarity-dependent way.
virtual bool FillV4L2Format(v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const = 0;
// Finish filling |buffer| struct with planarity-dependent data.
virtual void FinishFillingV4L2Buffer(v4l2_buffer* buffer) const = 0;
// Sends the captured |buffer| to the |client_|, synchronously.
virtual void SendBuffer(
const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const = 0;
// A few accessors for SendBuffer() to access private member variables.
VideoCaptureFormat capture_format() const { return capture_format_; }
VideoCaptureDevice::Client* client() const { return client_.get(); }
int rotation() const { return rotation_; }
private:
friend class base::RefCountedThreadSafe<V4L2CaptureDelegate>;
// Returns the input |fourcc| as a std::string four char representation.
static std::string FourccToString(uint32_t fourcc);
// VIDIOC_QUERYBUFs a buffer from V4L2, creates a BufferTracker for it and
// enqueues it (VIDIOC_QBUF) back into V4L2.
bool MapAndQueueBuffer(int index);
// Fills all common parts of |buffer|. Delegates to FinishFillingV4L2Buffer()
// for filling in the planar-dependent parts.
void FillV4L2Buffer(v4l2_buffer* buffer, int i) const;
void DoCapture();
void SetErrorState(const std::string& reason);
const v4l2_buf_type capture_type_;
const scoped_refptr<base::SingleThreadTaskRunner> v4l2_task_runner_;
const VideoCaptureDevice::Name device_name_;
const int power_line_frequency_;
// The following members are only known on AllocateAndStart().
VideoCaptureFormat capture_format_;
v4l2_format video_fmt_;
scoped_ptr<VideoCaptureDevice::Client> client_;
base::ScopedFD device_fd_;
// Vector of BufferTracker to keep track of mmap()ed pointers and their use.
std::vector<scoped_refptr<BufferTracker>> buffer_tracker_pool_;
bool is_capturing_;
int timeout_count_;
// Clockwise rotation in degrees. This value should be 0, 90, 180, or 270.
int rotation_;
DISALLOW_COPY_AND_ASSIGN(V4L2CaptureDelegate);
};
} // namespace media
#endif // MEDIA_VIDEO_CAPTURE_LINUX_V4L2_VIDEO_CAPTURE_DELEGATE_H_
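FourccToString() is declared above but its body lives elsewhere in the CL. As a minimal sketch of such a helper (the byte order follows V4L2's little-endian fourcc packing; this is an assumption, not the actual Chromium implementation):

```cpp
#include <cstdint>
#include <string>

// Hypothetical sketch: unpack a V4L2 fourcc, which packs four ASCII chars
// little-endian into a uint32_t, back into its readable four-char form.
std::string FourccToString(uint32_t fourcc) {
  std::string s(4, '\0');
  for (int i = 0; i < 4; ++i)
    s[i] = static_cast<char>((fourcc >> (8 * i)) & 0xFF);
  return s;
}
```

For example, V4L2_PIX_FMT_YUV420 (0x32315559) unpacks to "YU12".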
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "media/video/capture/linux/v4l2_capture_delegate_multi_plane.h"
#include <sys/mman.h>
namespace media {
V4L2CaptureDelegateMultiPlane::V4L2CaptureDelegateMultiPlane(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency)
: V4L2CaptureDelegate(device_name,
v4l2_task_runner,
power_line_frequency) {
}
V4L2CaptureDelegateMultiPlane::~V4L2CaptureDelegateMultiPlane() {
}
scoped_refptr<V4L2CaptureDelegate::BufferTracker>
V4L2CaptureDelegateMultiPlane::CreateBufferTracker() const {
return make_scoped_refptr(new BufferTrackerMPlane());
}
bool V4L2CaptureDelegateMultiPlane::FillV4L2Format(
v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const {
format->fmt.pix_mp.width = width;
format->fmt.pix_mp.height = height;
format->fmt.pix_mp.pixelformat = pixelformat_fourcc;
const size_t num_v4l2_planes =
V4L2CaptureDelegate::GetNumPlanesForFourCc(pixelformat_fourcc);
if (num_v4l2_planes == 0u)
return false;
DCHECK_LE(num_v4l2_planes, static_cast<size_t>(VIDEO_MAX_PLANES));
format->fmt.pix_mp.num_planes = num_v4l2_planes;
v4l2_planes_.resize(num_v4l2_planes);
return true;
}
void V4L2CaptureDelegateMultiPlane::FinishFillingV4L2Buffer(
v4l2_buffer* buffer) const {
buffer->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
buffer->length = v4l2_planes_.size();
static const struct v4l2_plane empty_plane = {};
std::fill(v4l2_planes_.begin(), v4l2_planes_.end(), empty_plane);
buffer->m.planes = v4l2_planes_.data();
}
void V4L2CaptureDelegateMultiPlane::SendBuffer(
const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const {
DCHECK_EQ(capture_format().pixel_format, PIXEL_FORMAT_I420);
const size_t y_stride = format.fmt.pix_mp.plane_fmt[0].bytesperline;
const size_t u_stride = format.fmt.pix_mp.plane_fmt[1].bytesperline;
const size_t v_stride = format.fmt.pix_mp.plane_fmt[2].bytesperline;
DCHECK_GE(y_stride, 1u * capture_format().frame_size.width());
DCHECK_GE(u_stride, 1u * capture_format().frame_size.width() / 2);
DCHECK_GE(v_stride, 1u * capture_format().frame_size.width() / 2);
client()->OnIncomingCapturedYuvData(buffer_tracker->GetPlaneStart(0),
buffer_tracker->GetPlaneStart(1),
buffer_tracker->GetPlaneStart(2),
y_stride,
u_stride,
v_stride,
capture_format(),
rotation(),
base::TimeTicks::Now());
}
bool V4L2CaptureDelegateMultiPlane::BufferTrackerMPlane::Init(
int fd,
const v4l2_buffer& buffer) {
for (size_t p = 0; p < buffer.length; ++p) {
// Some devices require mmap() to be called with both READ and WRITE.
// See http://crbug.com/178582.
void* const start =
mmap(NULL, buffer.m.planes[p].length, PROT_READ | PROT_WRITE,
MAP_SHARED, fd, buffer.m.planes[p].m.mem_offset);
if (start == MAP_FAILED) {
DLOG(ERROR) << "Error mmap()ing a V4L2 buffer into userspace";
return false;
}
AddMmapedPlane(static_cast<uint8_t*>(start), buffer.m.planes[p].length);
DVLOG(3) << "Mmap()ed plane #" << p << " of " << buffer.m.planes[p].length
<< "B";
}
return true;
}
} // namespace media
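SendBuffer() above DCHECKs that each plane's bytesperline at least covers the frame width; for I420 the U and V planes are half the Y width. The arithmetic behind those three DCHECKs, as a stand-alone sketch (the helper and struct names are hypothetical):

```cpp
#include <cstddef>

// For an I420 (YUV 4:2:0 triplanar) frame of |width| pixels, the minimum
// usable strides: Y needs a full row of bytes, U and V half a row each.
struct I420MinStrides {
  size_t y;
  size_t u;
  size_t v;
};

I420MinStrides MinI420Strides(size_t width) {
  return {width, width / 2, width / 2};
}
```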
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_MULTI_PLANE_H_
#define MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_MULTI_PLANE_H_
#include "base/memory/ref_counted.h"
#include "media/video/capture/linux/v4l2_capture_delegate.h"
#if defined(OS_OPENBSD)
#error "OpenBSD does not support MPlane capture API."
#endif
namespace base {
class SingleThreadTaskRunner;
} // namespace base
namespace media {
// V4L2 specifics for MPLANE API.
class V4L2CaptureDelegateMultiPlane final : public V4L2CaptureDelegate {
public:
V4L2CaptureDelegateMultiPlane(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency);
private:
// BufferTracker derivation to implement construction semantics for MPLANE.
class BufferTrackerMPlane final : public BufferTracker {
public:
bool Init(int fd, const v4l2_buffer& buffer) override;
private:
~BufferTrackerMPlane() override {}
};
~V4L2CaptureDelegateMultiPlane() override;
// V4L2CaptureDelegate virtual methods implementation.
scoped_refptr<BufferTracker> CreateBufferTracker() const override;
bool FillV4L2Format(v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const override;
void FinishFillingV4L2Buffer(v4l2_buffer* buffer) const override;
void SendBuffer(const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const override;
// Vector to allocate and track as many v4l2_plane structs as planes, needed
// for v4l2_buffer.m.planes. This is a scratchpad marked mutable to enable
// using it in otherwise const methods.
mutable std::vector<struct v4l2_plane> v4l2_planes_;
};
} // namespace media
#endif // MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_MULTI_PLANE_H_
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "media/video/capture/linux/v4l2_capture_delegate_single_plane.h"
#include <sys/mman.h>
namespace media {
scoped_refptr<V4L2CaptureDelegate::BufferTracker>
V4L2CaptureDelegateSinglePlane::CreateBufferTracker() const {
return make_scoped_refptr(new BufferTrackerSPlane());
}
bool V4L2CaptureDelegateSinglePlane::FillV4L2Format(
v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const {
format->fmt.pix.width = width;
format->fmt.pix.height = height;
format->fmt.pix.pixelformat = pixelformat_fourcc;
return true;
}
void V4L2CaptureDelegateSinglePlane::FinishFillingV4L2Buffer(
v4l2_buffer* buffer) const {
buffer->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
}
void V4L2CaptureDelegateSinglePlane::SendBuffer(
const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const {
const size_t data_length = format.fmt.pix.sizeimage;
DCHECK_GE(data_length, capture_format().ImageAllocationSize());
client()->OnIncomingCapturedData(
buffer_tracker->GetPlaneStart(0),
data_length,
capture_format(),
rotation(),
base::TimeTicks::Now());
}
bool V4L2CaptureDelegateSinglePlane::BufferTrackerSPlane::Init(
int fd,
const v4l2_buffer& buffer) {
// Some devices require mmap() to be called with both READ and WRITE.
// See http://crbug.com/178582.
void* const start = mmap(NULL, buffer.length, PROT_READ | PROT_WRITE,
MAP_SHARED, fd, buffer.m.offset);
if (start == MAP_FAILED) {
DLOG(ERROR) << "Error mmap()ing a V4L2 buffer into userspace";
return false;
}
AddMmapedPlane(static_cast<uint8_t*>(start), buffer.length);
return true;
}
} // namespace media
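The DCHECK in SendBuffer() above compares sizeimage against capture_format().ImageAllocationSize(); for a packed 4:2:0 frame that lower bound works out to width * height * 3 / 2 bytes. A hedged sketch of that computation (the helper name is assumed, not Chromium's):

```cpp
#include <cstddef>

// Packed I420 frame size: one full-resolution Y plane plus two
// quarter-resolution chroma planes, i.e. width * height * 3 / 2 bytes.
size_t I420AllocationSize(size_t width, size_t height) {
  return width * height + 2 * ((width / 2) * (height / 2));
}
```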
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_SINGLE_PLANE_H_
#define MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_SINGLE_PLANE_H_
#include "base/memory/ref_counted.h"
#include "media/video/capture/linux/v4l2_capture_delegate.h"
#include "media/video/capture/video_capture_device.h"
namespace base {
class SingleThreadTaskRunner;
} // namespace base
namespace media {
// V4L2 specifics for SPLANE API.
class V4L2CaptureDelegateSinglePlane final : public V4L2CaptureDelegate {
public:
V4L2CaptureDelegateSinglePlane(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency)
: V4L2CaptureDelegate(device_name,
v4l2_task_runner,
power_line_frequency) {}
private:
// BufferTracker derivation to implement construction semantics for SPLANE.
class BufferTrackerSPlane final : public BufferTracker {
public:
bool Init(int fd, const v4l2_buffer& buffer) override;
private:
~BufferTrackerSPlane() override {}
};
~V4L2CaptureDelegateSinglePlane() override {}
// V4L2CaptureDelegate virtual methods implementation.
scoped_refptr<BufferTracker> CreateBufferTracker() const override;
bool FillV4L2Format(v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const override;
void FinishFillingV4L2Buffer(v4l2_buffer* buffer) const override;
void SendBuffer(const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const override;
};
} // namespace media
#endif // MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_SINGLE_PLANE_H_
@@ -25,7 +25,7 @@
namespace media {
static bool HasUsableFormats(int fd, uint32 capabilities) {
const std::list<uint32_t>& usable_fourccs =
VideoCaptureDeviceLinux::GetListOfUsableFourCCs(false);
static const struct {
@@ -48,6 +48,7 @@ static bool HasUsableFormats(int fd, uint32 capabilities) {
}
}
}
DLOG(ERROR) << "No usable formats found";
return false;
}
@@ -182,9 +183,11 @@ void VideoCaptureDeviceFactoryLinux::GetDeviceNames(
!(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT) &&
!(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT_MPLANE)) &&
HasUsableFormats(fd.get(), cap.capabilities)) {
device_names->push_back(VideoCaptureDevice::Name(
base::StringPrintf("%s", cap.card), unique_id,
(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE)
? VideoCaptureDevice::Name::V4L2_MULTI_PLANE
: VideoCaptureDevice::Name::V4L2_SINGLE_PLANE));
}
}
}
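GetDeviceNames() above derives the capture API type from the V4L2 capability bits. The selection can be sketched in isolation as follows (the helper and enum names are hypothetical; the mask value is V4L2_CAP_VIDEO_CAPTURE_MPLANE from <linux/videodev2.h>):

```cpp
#include <cstdint>

// Value of V4L2_CAP_VIDEO_CAPTURE_MPLANE in <linux/videodev2.h>.
constexpr uint32_t kCapVideoCaptureMPlane = 0x00001000;

enum class CaptureApiType { kV4L2SinglePlane, kV4L2MultiPlane };

// Hypothetical helper: devices advertising the multi-planar capture
// capability get the MPLANE delegate, all others the SPLANE one.
CaptureApiType ApiTypeFromCapabilities(uint32_t capabilities) {
  return (capabilities & kCapVideoCaptureMPlane)
             ? CaptureApiType::kV4L2MultiPlane
             : CaptureApiType::kV4L2SinglePlane;
}
```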
@@ -200,10 +203,14 @@ void VideoCaptureDeviceFactoryLinux::GetDeviceSupportedFormats(
return;
supported_formats->clear();
DCHECK_NE(device.capture_api_type(),
VideoCaptureDevice::Name::API_TYPE_UNKNOWN);
const v4l2_buf_type buf_type =
(device.capture_api_type() == VideoCaptureDevice::Name::V4L2_MULTI_PLANE)
? V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE
: V4L2_BUF_TYPE_VIDEO_CAPTURE;
GetSupportedFormatsForV4L2BufferType(fd.get(), buf_type, supported_formats);
return;
}
@@ -20,11 +20,13 @@
namespace media {
class V4L2CaptureDelegate;
// Linux V4L2 implementation of VideoCaptureDevice.
class VideoCaptureDeviceLinux : public VideoCaptureDevice {
public:
static VideoPixelFormat V4l2FourCcToChromiumPixelFormat(uint32 v4l2_fourcc);
static std::list<uint32_t> GetListOfUsableFourCCs(bool favour_mjpeg);
explicit VideoCaptureDeviceLinux(const Name& device_name);
~VideoCaptureDeviceLinux() override;
@@ -38,10 +40,11 @@ class VideoCaptureDeviceLinux : public VideoCaptureDevice {
void SetRotation(int rotation);
private:
static int TranslatePowerLineFrequencyToV4L2(int frequency);
// Internal delegate doing the actual capture setting, buffer allocation and
// circulation with the V4L2 API. Created and deleted in the thread where
// VideoCaptureDeviceLinux lives but otherwise operating on |v4l2_thread_|.
scoped_refptr<V4L2CaptureDelegate> capture_impl_;
base::Thread v4l2_thread_; // Thread used for reading data from the device.
@@ -24,7 +24,14 @@ VideoCaptureDevice::Name::Name() {}
VideoCaptureDevice::Name::Name(const std::string& name, const std::string& id)
: device_name_(name), unique_id_(id) {}
#if defined(OS_LINUX)
VideoCaptureDevice::Name::Name(const std::string& name,
const std::string& id,
const CaptureApiType api_type)
: device_name_(name),
unique_id_(id),
capture_api_class_(api_type) {}
#elif defined(OS_WIN)
VideoCaptureDevice::Name::Name(const std::string& name,
const std::string& id,
const CaptureApiType api_type)
@@ -32,9 +39,7 @@ VideoCaptureDevice::Name::Name(const std::string& name,
unique_id_(id),
capture_api_class_(api_type),
capabilities_id_(id) {}
#elif defined(OS_MACOSX)
VideoCaptureDevice::Name::Name(const std::string& name,
const std::string& id,
const CaptureApiType api_type)
@@ -57,7 +62,19 @@ VideoCaptureDevice::Name::Name(const std::string& name,
VideoCaptureDevice::Name::~Name() {}
#if defined(OS_LINUX)
const char* VideoCaptureDevice::Name::GetCaptureApiTypeString() const {
switch (capture_api_type()) {
case V4L2_SINGLE_PLANE:
return "V4L2 SPLANE";
case V4L2_MULTI_PLANE:
return "V4L2 MPLANE";
default:
NOTREACHED() << "Unknown Video Capture API type!";
return "Unknown API";
}
}
#elif defined(OS_WIN)
const char* VideoCaptureDevice::Name::GetCaptureApiTypeString() const {
switch(capture_api_type()) {
case MEDIA_FOUNDATION:
@@ -41,7 +41,14 @@ class MEDIA_EXPORT VideoCaptureDevice {
Name();
Name(const std::string& name, const std::string& id);
#if defined(OS_LINUX)
// Linux/CrOS targets Capture Api type: it can only be set on construction.
enum CaptureApiType {
V4L2_SINGLE_PLANE,
V4L2_MULTI_PLANE,
API_TYPE_UNKNOWN
};
#elif defined(OS_WIN)
// Windows targets Capture Api type: it can only be set on construction.
enum CaptureApiType {
MEDIA_FOUNDATION,
@@ -49,8 +56,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
DIRECT_SHOW_WDM_CROSSBAR,
API_TYPE_UNKNOWN
};
#elif defined(OS_MACOSX)
// Mac targets Capture Api type: it can only be set on construction.
enum CaptureApiType {
AVFOUNDATION,
@@ -64,7 +70,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
OTHER_TRANSPORT
};
#endif
#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
Name(const std::string& name,
const std::string& id,
const CaptureApiType api_type);
@@ -102,7 +108,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
return unique_id_ < other.id();
}
#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
CaptureApiType capture_api_type() const {
return capture_api_class_.capture_api_type();
}
@@ -133,7 +139,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
private:
std::string device_name_;
std::string unique_id_;
#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
// This class wraps the CaptureApiType to give it a default value if not
// initialized.
class CaptureApiClass {
@@ -195,9 +201,22 @@ class MEDIA_EXPORT VideoCaptureDevice {
virtual void OnIncomingCapturedData(const uint8* data,
int length,
const VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp) = 0;
// Captured a 3-planar YUV frame. Planes are possibly disjoint.
// |frame_format| must indicate I420.
virtual void OnIncomingCapturedYuvData(
const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp) = 0;
// Reserve an output buffer into which contents can be captured directly.
// The returned Buffer will always be allocated with a memory size suitable
// for holding a packed video frame with pixels of |format| format, of
@@ -67,6 +67,16 @@ class MockClient : public VideoCaptureDevice::Client {
MOCK_METHOD2(ReserveOutputBuffer,
scoped_refptr<Buffer>(VideoFrame::Format format,
const gfx::Size& dimensions));
MOCK_METHOD9(OnIncomingCapturedYuvData,
void (const uint8* y_data,
const uint8* u_data,
const uint8* v_data,
size_t y_stride,
size_t u_stride,
size_t v_stride,
const VideoCaptureFormat& frame_format,
int clockwise_rotation,
const base::TimeTicks& timestamp));
MOCK_METHOD3(OnIncomingCapturedVideoFrame,
void(const scoped_refptr<Buffer>& buffer,
const scoped_refptr<VideoFrame>& frame,
@@ -127,6 +137,8 @@ class VideoCaptureDeviceTest : public testing::Test {
VideoCaptureDeviceAndroid::RegisterVideoCaptureDevice(
base::android::AttachCurrentThread());
#endif
EXPECT_CALL(*client_, OnIncomingCapturedYuvData(_,_,_,_,_,_,_,_,_))
.Times(0);
EXPECT_CALL(*client_, ReserveOutputBuffer(_,_)).Times(0);
EXPECT_CALL(*client_, OnIncomingCapturedVideoFrame(_,_,_)).Times(0);
}
@@ -179,7 +191,8 @@ class VideoCaptureDeviceTest : public testing::Test {
}
}
}
DVLOG_IF(1, pixel_format != PIXEL_FORMAT_MAX) << "No camera can capture the"
<< " format: " << VideoCaptureFormat::PixelFormatToString(pixel_format);
return scoped_ptr<VideoCaptureDevice::Name>();
}