Commit d3d37d0f authored by mcasas, committed by Commit bot

Linux Video Capture: Add V4L2VideoCaptureDelegate{Single,Multi}Plane.

This CL adds support for the V4L2 MPLANE capture API.
The only supported format is YUV420M (triplanar).

A new method, OnIncomingCapturedYuvData(...), is added
to VideoCaptureDeviceClient; this forces adding mocking
here and there, plus its own implementation.
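
For reference, its shape as declared in
video_capture_device_client.h below (the declaration in
the base Client interface is analogous):

  void OnIncomingCapturedYuvData(const uint8* y_data,
                                 const uint8* u_data,
                                 const uint8* v_data,
                                 size_t y_stride,
                                 size_t u_stride,
                                 size_t v_stride,
                                 const media::VideoCaptureFormat& frame_format,
                                 int clockwise_rotation,
                                 const base::TimeTicks& timestamp) override;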

The V4L2 MMAP API works via the user mmap()ing a number
of buffers allocated by the V4L2 capture device. If those
buffers are not correctly munmap()ed, bad things (c)
happen. In light of this, the manual buffer lifetime
management is changed to an automatic one. Construction
(mmap()ing) of the so-called BufferTrackers is
planarity-specific (i.e. there is one such ctor in
each of BufferTracker{S,M}Plane), while the dtor
is generic and shared.
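
A minimal sketch of the automatic scheme (not the actual
class; the real BufferTracker in v4l2_capture_delegate.h
below is RefCounted and nested in the delegate): planes
are mmap()ed by a planarity-specific Init() and
munmap()ed unconditionally in the shared dtor, so a
tracker that goes away cannot leak a mapping:

  #include <linux/videodev2.h>
  #include <sys/mman.h>
  #include <cstdint>
  #include <vector>

  class BufferTracker {
   public:
    virtual ~BufferTracker() {  // Shared teardown: munmap() every plane.
      for (const auto& plane : planes_)
        if (plane.start)
          munmap(plane.start, plane.length);
    }
    // mmap()ing is planarity-specific and done by the derived classes.
    virtual bool Init(int fd, const v4l2_buffer& buffer) = 0;

   protected:
    struct Plane { uint8_t* start; size_t length; };
    std::vector<Plane> planes_;
  };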

ToT class diagram:
     +------------------------------------+
     | VideoCaptureDeviceLinux            |
     |            +----------------------+|
     | <<ref>> -->| V4L2CaptureDelegate  ||
     |   cnt      |  (struct Buffer)     ||
     |            +----------------------+|
     +------------------------------------+

This CL class scheme:

 +--------------------------+
 | VideoCaptureDeviceLinux  |
 |                          |
 | <<ref_cnt>> ---+         |
 +----------------|---------+
 +----------------v-----------+ v4l2_capture_delegate.{cc,h}
 | +-----------------------+  |
 | |V4L2CaptureDelegate    |  |
 | |  (class BufferTracker)|  |
 | +-----------------------+  |
 +-------^------------------^-+
         |                  |
    +----|-------+ +--------|--+ v4l2_capture_delegate_multi_plane.{cc,h}
    | SPlane     | | MPlane    |
    | (BTSplane) | | (BTMPlane)|
    |            | +-----------+
    +------------+ v4l2_capture_delegate_single_plane.{cc,h}

- VCDevice works on the premise that its calls into
 VCDevice::Client::OnIncomingWhatever() are synchronous.
 That assumption is respected here.
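
 In code terms: DoCapture() in v4l2_capture_delegate.cc
 below hands the dequeued buffer to the client and then
 immediately re-queues it, which is only safe because the
 client call is synchronous (excerpt, error handling
 elided):

   ioctl(device_fd_.get(), VIDIOC_DQBUF, &buffer);
   SendBuffer(buffer_tracker_pool_[buffer.index], video_fmt_);  // Synchronous.
   ioctl(device_fd_.get(), VIDIOC_QBUF, &buffer);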

- A bit of cleanup is done in OnIncomingCapturedData(),
 regarding rotation, cropping and odd sizes. A unit test
 is added for it; see the excerpt below.
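
 The odd-size handling now reads as a low-bit
 decomposition (excerpt from video_capture_device_client.cc
 further down):

   const int chopped_width = frame_format.frame_size.width() & 1;
   const int new_unrotated_width = frame_format.frame_size.width() & ~1;

 e.g. a 7x4 input is cropped to 6x4 with chopped_width == 1;
 the new CheckRotationsAndCrops test exercises exactly
 these cases.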

- VideoCaptureDeviceFactory labels the devices as
 single or multi planar. That label (capture_api_type())
 needs to ripple through a bunch of files, causing some
 otherwise uninteresting changes in the patchsets.

BUG=441836

TEST= Compile and insmod vivid.ko into a kernel with
options for supporting the MPLANE API (multiplanar=2),
then capture using a patched Chromium. Current vivid
does _not_ support any common MPlane format and needs
a patch:

https://github.com/miguelao/linux/tree/adding_yu12_yv12_nv12_nv21___mplane_formats___with_mods_for_kernel_3_13

which needs to be compiled against Ubuntu 3.13 kernel sources, etc.
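
For example, assuming a vivid module built from that
branch (the exact insmod/modprobe invocation depends on
the setup):

  sudo insmod ./vivid.ko multiplanar=2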

For even better coverage, use a normal webcam and
navigate to http://goo.gl/fUcIiP, then open both
the MPlane camera mentioned in the previous paragraph
and the "normal" webcam (this is, partially, how I
test it).

Review URL: https://codereview.chromium.org/967793002

Cr-Commit-Position: refs/heads/master@{#321612}
parent d5c5bf15
@@ -41,6 +41,16 @@ class MockDeviceClient : public media::VideoCaptureDevice::Client {
                     const media::VideoCaptureFormat& frame_format,
                     int rotation,
                     const base::TimeTicks& timestamp));
+  MOCK_METHOD9(OnIncomingCapturedYuvData,
+               void (const uint8* y_data,
+                     const uint8* u_data,
+                     const uint8* v_data,
+                     size_t y_stride,
+                     size_t u_stride,
+                     size_t v_stride,
+                     const media::VideoCaptureFormat& frame_format,
+                     int clockwise_rotation,
+                     const base::TimeTicks& timestamp));
   MOCK_METHOD2(ReserveOutputBuffer,
                scoped_refptr<Buffer>(media::VideoFrame::Format format,
                                      const gfx::Size& dimensions));
...
@@ -59,6 +59,16 @@ class MockDeviceClient : public media::VideoCaptureDevice::Client {
                     const media::VideoCaptureFormat& frame_format,
                     int rotation,
                     const base::TimeTicks& timestamp));
+  MOCK_METHOD9(OnIncomingCapturedYuvData,
+               void (const uint8* y_data,
+                     const uint8* u_data,
+                     const uint8* v_data,
+                     size_t y_stride,
+                     size_t u_stride,
+                     size_t v_stride,
+                     const media::VideoCaptureFormat& frame_format,
+                     int clockwise_rotation,
+                     const base::TimeTicks& timestamp));
   MOCK_METHOD2(ReserveOutputBuffer,
                scoped_refptr<Buffer>(media::VideoFrame::Format format,
                                      const gfx::Size& dimensions));
...
@@ -323,6 +323,18 @@ class StubClient : public media::VideoCaptureDevice::Client {
     FAIL();
   }
+  void OnIncomingCapturedYuvData(const uint8* y_data,
+                                 const uint8* u_data,
+                                 const uint8* v_data,
+                                 size_t y_stride,
+                                 size_t u_stride,
+                                 size_t v_stride,
+                                 const media::VideoCaptureFormat& frame_format,
+                                 int clockwise_rotation,
+                                 const base::TimeTicks& timestamp) override {
+    FAIL();
+  }
   scoped_refptr<media::VideoCaptureDevice::Client::Buffer> ReserveOutputBuffer(
       media::VideoFrame::Format format,
       const gfx::Size& dimensions) override {
...
@@ -487,7 +487,7 @@ void MediaInternals::UpdateVideoCaptureDeviceCapabilities(
   device_dict->SetString(
       "name", video_capture_device_info.name.GetNameAndModel());
   device_dict->Set("formats", format_list);
-#if defined(OS_WIN) || defined(OS_MACOSX)
+#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
   device_dict->SetString(
       "captureApi",
       video_capture_device_info.name.GetCaptureApiTypeString());
...
@@ -109,14 +109,17 @@ class MediaInternalsVideoCaptureDeviceTest : public testing::Test,
   MediaInternals::UpdateCallback update_cb_;
 };

-#if defined(OS_WIN) || defined(OS_MACOSX)
+#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
 TEST_F(MediaInternalsVideoCaptureDeviceTest,
        AllCaptureApiTypesHaveProperStringRepresentation) {
   typedef media::VideoCaptureDevice::Name VideoCaptureDeviceName;
   typedef std::map<VideoCaptureDeviceName::CaptureApiType, std::string>
       CaptureApiTypeStringMap;
   CaptureApiTypeStringMap m;
-#if defined(OS_WIN)
+#if defined(OS_LINUX)
+  m[VideoCaptureDeviceName::V4L2_SINGLE_PLANE] = "V4L2 SPLANE";
+  m[VideoCaptureDeviceName::V4L2_MULTI_PLANE] = "V4L2 MPLANE";
+#elif defined(OS_WIN)
   m[VideoCaptureDeviceName::MEDIA_FOUNDATION] = "Media Foundation";
   m[VideoCaptureDeviceName::DIRECT_SHOW] = "Direct Show";
   m[VideoCaptureDeviceName::DIRECT_SHOW_WDM_CROSSBAR] =
@@ -172,8 +175,10 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest,
 #elif defined(OS_WIN)
       media::VideoCaptureDevice::Name("dummy", "dummy",
           media::VideoCaptureDevice::Name::DIRECT_SHOW),
-#elif defined(OS_LINUX) || defined(OS_CHROMEOS)
-      media::VideoCaptureDevice::Name("dummy", "/dev/dummy"),
+#elif defined(OS_LINUX)
+      media::VideoCaptureDevice::Name(
+          "dummy", "/dev/dummy",
+          media::VideoCaptureDevice::Name::V4L2_SINGLE_PLANE),
 #else
       media::VideoCaptureDevice::Name("dummy", "dummy"),
 #endif
@@ -187,7 +192,7 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest,
   // exactly one device_info in the |device_infos|.
   media_internals_->UpdateVideoCaptureDeviceCapabilities(device_infos);
-#if defined(OS_LINUX) || defined(OS_CHROMEOS)
+#if defined(OS_LINUX)
   ExpectString("id", "/dev/dummy");
 #else
   ExpectString("id", "dummy");
@@ -196,10 +201,12 @@ TEST_F(MediaInternalsVideoCaptureDeviceTest,
   base::ListValue expected_list;
   expected_list.AppendString(format_hd.ToString());
   ExpectListOfStrings("formats", expected_list);
-#if defined(OS_MACOSX)
-  ExpectString("captureApi", "QTKit");
-#elif defined(OS_WIN)
+#if defined(OS_LINUX)
+  ExpectString("captureApi", "V4L2 SPLANE");
+#elif defined(OS_WIN)
   ExpectString("captureApi", "Direct Show");
+#elif defined(OS_MACOSX)
+  ExpectString("captureApi", "QTKit");
 #endif
 }
...
@@ -29,8 +29,10 @@
 #include "content/browser/compositor/test/no_transport_image_transport_factory.h"
 #endif

+using ::testing::_;
 using ::testing::InSequence;
 using ::testing::Mock;
+using ::testing::SaveArg;

 namespace content {
@@ -46,7 +48,7 @@ class MockVideoCaptureControllerEventHandler
   // VideoCaptureControllerEventHandler, to be used in EXPECT_CALL().
   MOCK_METHOD1(DoBufferCreated, void(VideoCaptureControllerID));
   MOCK_METHOD1(DoBufferDestroyed, void(VideoCaptureControllerID));
-  MOCK_METHOD1(DoBufferReady, void(VideoCaptureControllerID));
+  MOCK_METHOD2(DoBufferReady, void(VideoCaptureControllerID, const gfx::Size&));
   MOCK_METHOD1(DoMailboxBufferReady, void(VideoCaptureControllerID));
   MOCK_METHOD1(DoEnded, void(VideoCaptureControllerID));
   MOCK_METHOD1(DoError, void(VideoCaptureControllerID));
@@ -70,7 +72,7 @@ class MockVideoCaptureControllerEventHandler
                      const gfx::Rect& visible_rect,
                      const base::TimeTicks& timestamp,
                      scoped_ptr<base::DictionaryValue> metadata) override {
-    DoBufferReady(id);
+    DoBufferReady(id, coded_size);
     base::MessageLoop::current()->PostTask(
         FROM_HERE,
         base::Bind(&VideoCaptureController::ReturnBuffer,
@@ -331,17 +333,17 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
   {
     InSequence s;
     EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_1)).Times(1);
-    EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1)).Times(1);
+    EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1,_)).Times(1);
   }
   {
     InSequence s;
     EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_1)).Times(1);
-    EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1)).Times(1);
+    EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1,_)).Times(1);
   }
   {
     InSequence s;
     EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_2)).Times(1);
-    EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2)).Times(1);
+    EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2,_)).Times(1);
   }
   device_->OnIncomingCapturedVideoFrame(
       buffer,
@@ -367,9 +369,9 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
   buffer = NULL;

   // The buffer should be delivered to the clients in any order.
-  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1)).Times(1);
-  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1)).Times(1);
-  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2)).Times(1);
+  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1,_)).Times(1);
+  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1,_)).Times(1);
+  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2,_)).Times(1);
   base::RunLoop().RunUntilIdle();
   Mock::VerifyAndClearExpectations(client_a_.get());
   Mock::VerifyAndClearExpectations(client_b_.get());
@@ -400,16 +402,16 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
   // The new client needs to be told of 3 buffers; the old clients only 2.
   EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_2)).Times(kPoolSize);
-  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2)).Times(kPoolSize);
+  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2,_)).Times(kPoolSize);
   EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_1))
       .Times(kPoolSize - 1);
-  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1)).Times(kPoolSize);
+  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_1,_)).Times(kPoolSize);
   EXPECT_CALL(*client_a_, DoBufferCreated(client_a_route_2))
       .Times(kPoolSize - 1);
-  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2)).Times(kPoolSize);
+  EXPECT_CALL(*client_a_, DoBufferReady(client_a_route_2,_)).Times(kPoolSize);
   EXPECT_CALL(*client_b_, DoBufferCreated(client_b_route_1))
       .Times(kPoolSize - 1);
-  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1)).Times(kPoolSize);
+  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_1,_)).Times(kPoolSize);
   base::RunLoop().RunUntilIdle();
   Mock::VerifyAndClearExpectations(client_a_.get());
   Mock::VerifyAndClearExpectations(client_b_.get());
@@ -447,7 +449,7 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
   buffer = NULL;
   // B2 is the only client left, and is the only one that should
   // get the buffer.
-  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2)).Times(2);
+  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2,_)).Times(2);
   base::RunLoop().RunUntilIdle();
   Mock::VerifyAndClearExpectations(client_a_.get());
   Mock::VerifyAndClearExpectations(client_b_.get());
@@ -500,7 +502,7 @@ TEST_F(VideoCaptureControllerTest, NormalCaptureMultipleClients) {
       capture_resolution).get());
   ASSERT_FALSE(device_->ReserveOutputBuffer(media::VideoFrame::NATIVE_TEXTURE,
                                             gfx::Size(0, 0)).get());
-  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2)).Times(shm_buffers);
+  EXPECT_CALL(*client_b_, DoBufferReady(client_b_route_2,_)).Times(shm_buffers);
   EXPECT_CALL(*client_b_, DoMailboxBufferReady(client_b_route_2))
       .Times(mailbox_buffers);
   base::RunLoop().RunUntilIdle();
@@ -618,30 +620,91 @@ TEST_F(VideoCaptureControllerTest, DataCaptureInEachVideoFormatInSequence) {
   const gfx::Size capture_resolution(10, 10);
   ASSERT_GE(kScratchpadSizeInBytes, capture_resolution.GetArea() * 4u)
       << "Scratchpad is too small to hold the largest pixel format (ARGB).";
+  const int kSessionId = 100;
   // This Test skips PIXEL_FORMAT_TEXTURE and PIXEL_FORMAT_UNKNOWN.
   for (int format = 0; format < media::PIXEL_FORMAT_TEXTURE; ++format) {
     media::VideoCaptureParams params;
     params.requested_format = media::VideoCaptureFormat(
         capture_resolution, 30, media::VideoPixelFormat(format));
-    const gfx::Size capture_resolution(320, 240);
-    const VideoCaptureControllerID route(0x99);
     // Start with one client.
-    controller_->AddClient(route,
+    const VideoCaptureControllerID route_id(0x99);
+    controller_->AddClient(route_id,
                            client_a_.get(),
                            base::kNullProcessHandle,
-                           100,
+                           kSessionId,
                            params);
     ASSERT_EQ(1, controller_->GetClientCount());
     device_->OnIncomingCapturedData(
         data,
         params.requested_format.ImageAllocationSize(),
         params.requested_format,
-        0 /* rotation */,
+        0 /* clockwise_rotation */,
         base::TimeTicks());
-    EXPECT_EQ(100, controller_->RemoveClient(route, client_a_.get()));
+    EXPECT_EQ(kSessionId, controller_->RemoveClient(route_id, client_a_.get()));
     Mock::VerifyAndClearExpectations(client_a_.get());
   }
 }
+
+// Test that we receive the expected resolution for a given captured frame
+// resolution and rotation. Odd resolutions are also cropped.
+TEST_F(VideoCaptureControllerTest, CheckRotationsAndCrops) {
+  const int kSessionId = 100;
+  const struct SizeAndRotation {
+    gfx::Size input_resolution;
+    int rotation;
+    gfx::Size output_resolution;
+  } kSizeAndRotations[] = {{{6, 4}, 0, {6, 4}},
+                           {{6, 4}, 90, {4, 6}},
+                           {{6, 4}, 180, {6, 4}},
+                           {{6, 4}, 270, {4, 6}},
+                           {{7, 4}, 0, {6, 4}},
+                           {{7, 4}, 90, {4, 6}},
+                           {{7, 4}, 180, {6, 4}},
+                           {{7, 4}, 270, {4, 6}}};
+  // The usual ReserveOutputBuffer() -> OnIncomingCapturedVideoFrame() cannot
+  // be used since it does not resolve rotations or crops. The memory backed
+  // buffer OnIncomingCapturedData() is used instead, with a dummy scratchpad
+  // buffer.
+  const size_t kScratchpadSizeInBytes = 400;
+  unsigned char data[kScratchpadSizeInBytes] = {};
+  media::VideoCaptureParams params;
+  for (const auto& size_and_rotation : kSizeAndRotations) {
+    ASSERT_GE(kScratchpadSizeInBytes,
+              size_and_rotation.input_resolution.GetArea() * 4u)
+        << "Scratchpad is too small to hold the largest pixel format (ARGB).";
+    params.requested_format = media::VideoCaptureFormat(
+        size_and_rotation.input_resolution, 30, media::PIXEL_FORMAT_ARGB);
+    const VideoCaptureControllerID route_id(0x99);
+    controller_->AddClient(route_id, client_a_.get(), base::kNullProcessHandle,
+                           kSessionId, params);
+    ASSERT_EQ(1, controller_->GetClientCount());
+    device_->OnIncomingCapturedData(
+        data,
+        params.requested_format.ImageAllocationSize(),
+        params.requested_format,
+        size_and_rotation.rotation,
+        base::TimeTicks());
+    gfx::Size coded_size;
+    {
+      InSequence s;
+      EXPECT_CALL(*client_a_, DoBufferCreated(route_id)).Times(1);
+      EXPECT_CALL(*client_a_, DoBufferReady(route_id, _))
+          .Times(1)
+          .WillOnce(SaveArg<1>(&coded_size));
+    }
+    base::RunLoop().RunUntilIdle();
+
+    EXPECT_EQ(coded_size.width(), size_and_rotation.output_resolution.width());
+    EXPECT_EQ(coded_size.height(),
+              size_and_rotation.output_resolution.height());
+    EXPECT_EQ(kSessionId, controller_->RemoveClient(route_id, client_a_.get()));
+    Mock::VerifyAndClearExpectations(client_a_.get());
+  }
+}
...
@@ -73,21 +73,12 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
   if (!frame_format.IsValid())
     return;

-  // Chopped pixels in width/height in case video capture device has odd
-  // numbers for width/height.
-  int chopped_width = 0;
-  int chopped_height = 0;
-  int new_unrotated_width = frame_format.frame_size.width();
-  int new_unrotated_height = frame_format.frame_size.height();
-  if (new_unrotated_width & 1) {
-    --new_unrotated_width;
-    chopped_width = 1;
-  }
-  if (new_unrotated_height & 1) {
-    --new_unrotated_height;
-    chopped_height = 1;
-  }
+  // |chopped_{width,height}| and |new_unrotated_{width,height}| are the
+  // lowest-bit decomposition of {width, height}, grabbing the odd and even
+  // parts.
+  const int chopped_width = frame_format.frame_size.width() & 1;
+  const int chopped_height = frame_format.frame_size.height() & 1;
+  const int new_unrotated_width = frame_format.frame_size.width() & ~1;
+  const int new_unrotated_height = frame_format.frame_size.height() & ~1;

   int destination_width = new_unrotated_width;
   int destination_height = new_unrotated_height;
@@ -95,6 +86,17 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
     destination_width = new_unrotated_height;
     destination_height = new_unrotated_width;
   }
+
+  DCHECK_EQ(rotation % 90, 0)
+      << " Rotation must be a multiple of 90, now: " << rotation;
+  libyuv::RotationMode rotation_mode = libyuv::kRotate0;
+  if (rotation == 90)
+    rotation_mode = libyuv::kRotate90;
+  else if (rotation == 180)
+    rotation_mode = libyuv::kRotate180;
+  else if (rotation == 270)
+    rotation_mode = libyuv::kRotate270;
+
   const gfx::Size dimensions(destination_width, destination_height);
   if (!VideoFrame::IsValidConfig(VideoFrame::I420,
                                  dimensions,
@@ -121,14 +123,6 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
   int crop_y = 0;
   libyuv::FourCC origin_colorspace = libyuv::FOURCC_ANY;

-  libyuv::RotationMode rotation_mode = libyuv::kRotate0;
-  if (rotation == 90)
-    rotation_mode = libyuv::kRotate90;
-  else if (rotation == 180)
-    rotation_mode = libyuv::kRotate180;
-  else if (rotation == 270)
-    rotation_mode = libyuv::kRotate270;
-
   bool flip = false;
   switch (frame_format.pixel_format) {
     case media::PIXEL_FORMAT_UNKNOWN:  // Color format not set.
@@ -229,6 +223,76 @@ void VideoCaptureDeviceClient::OnIncomingCapturedData(
                  timestamp));
 }

+void VideoCaptureDeviceClient::OnIncomingCapturedYuvData(
+    const uint8* y_data,
+    const uint8* u_data,
+    const uint8* v_data,
+    size_t y_stride,
+    size_t u_stride,
+    size_t v_stride,
+    const VideoCaptureFormat& frame_format,
+    int clockwise_rotation,
+    const base::TimeTicks& timestamp) {
+  TRACE_EVENT0("video", "VideoCaptureController::OnIncomingCapturedYuvData");
+  DCHECK_EQ(frame_format.pixel_format, media::PIXEL_FORMAT_I420);
+  DCHECK_EQ(clockwise_rotation, 0) << "Rotation not supported";
+
+  scoped_refptr<Buffer> buffer = ReserveOutputBuffer(VideoFrame::I420,
+                                                     frame_format.frame_size);
+  if (!buffer.get())
+    return;
+
+  // Blit (copy) here from y,u,v into buffer->data(). Needed so we can return
+  // the parameter buffer synchronously to the driver.
+  const size_t y_plane_size = VideoFrame::PlaneAllocationSize(
+      VideoFrame::I420, VideoFrame::kYPlane, frame_format.frame_size);
+  const size_t u_plane_size = VideoFrame::PlaneAllocationSize(
+      VideoFrame::I420, VideoFrame::kUPlane, frame_format.frame_size);
+  uint8* const dst_y = reinterpret_cast<uint8*>(buffer->data());
+  uint8* const dst_u = dst_y + y_plane_size;
+  uint8* const dst_v = dst_u + u_plane_size;
+
+  const size_t dst_y_stride = VideoFrame::RowBytes(
+      VideoFrame::kYPlane, VideoFrame::I420, frame_format.frame_size.width());
+  const size_t dst_u_stride = VideoFrame::RowBytes(
+      VideoFrame::kUPlane, VideoFrame::I420, frame_format.frame_size.width());
+  const size_t dst_v_stride = VideoFrame::RowBytes(
+      VideoFrame::kVPlane, VideoFrame::I420, frame_format.frame_size.width());
+  DCHECK_GE(y_stride, dst_y_stride);
+  DCHECK_GE(u_stride, dst_u_stride);
+  DCHECK_GE(v_stride, dst_v_stride);
+
+  if (libyuv::I420Copy(y_data, y_stride,
+                       u_data, u_stride,
+                       v_data, v_stride,
+                       dst_y, dst_y_stride,
+                       dst_u, dst_u_stride,
+                       dst_v, dst_v_stride,
+                       frame_format.frame_size.width(),
+                       frame_format.frame_size.height())) {
+    DLOG(WARNING) << "Failed to copy buffer";
+    return;
+  }
+
+  // Note: the wrapped frame points at the destination planes, so it must be
+  // described with the destination strides, not the source ones.
+  scoped_refptr<VideoFrame> video_frame = VideoFrame::WrapExternalYuvData(
+      VideoFrame::I420, frame_format.frame_size,
+      gfx::Rect(frame_format.frame_size), frame_format.frame_size,
+      dst_y_stride, dst_u_stride, dst_v_stride, dst_y, dst_u, dst_v,
+      base::TimeDelta(), base::Closure());
+  DCHECK(video_frame.get());
+
+  BrowserThread::PostTask(
+      BrowserThread::IO,
+      FROM_HERE,
+      base::Bind(
+          &VideoCaptureController::DoIncomingCapturedVideoFrameOnIOThread,
+          controller_,
+          buffer,
+          video_frame,
+          timestamp));
+}
+
 scoped_refptr<media::VideoCaptureDevice::Client::Buffer>
 VideoCaptureDeviceClient::ReserveOutputBuffer(VideoFrame::Format format,
                                               const gfx::Size& dimensions) {
...
@@ -39,6 +39,15 @@ class CONTENT_EXPORT VideoCaptureDeviceClient
                               const media::VideoCaptureFormat& frame_format,
                               int rotation,
                               const base::TimeTicks& timestamp) override;
+  void OnIncomingCapturedYuvData(const uint8* y_data,
+                                 const uint8* u_data,
+                                 const uint8* v_data,
+                                 size_t y_stride,
+                                 size_t u_stride,
+                                 size_t v_stride,
+                                 const media::VideoCaptureFormat& frame_format,
+                                 int clockwise_rotation,
+                                 const base::TimeTicks& timestamp) override;
   scoped_refptr<Buffer> ReserveOutputBuffer(media::VideoFrame::Format format,
                                             const gfx::Size& size) override;
   void OnIncomingCapturedVideoFrame(
...
@@ -211,6 +211,12 @@ component("media") {
     "renderers/video_renderer_impl.h",
     "video/capture/file_video_capture_device.cc",
     "video/capture/file_video_capture_device.h",
+    "video/capture/linux/v4l2_capture_delegate.cc",
+    "video/capture/linux/v4l2_capture_delegate.h",
+    "video/capture/linux/v4l2_capture_delegate_multi_plane.cc",
+    "video/capture/linux/v4l2_capture_delegate_multi_plane.h",
+    "video/capture/linux/v4l2_capture_delegate_single_plane.cc",
+    "video/capture/linux/v4l2_capture_delegate_single_plane.h",
     "video/capture/linux/video_capture_device_chromeos.cc",
     "video/capture/linux/video_capture_device_chromeos.h",
     "video/capture/linux/video_capture_device_linux.cc",
@@ -395,6 +401,13 @@ component("media") {
     ]
   }

+  if (is_openbsd) {
+    sources -= [
+      "video/capture/linux/v4l2_capture_delegate_multi_plane.cc",
+      "video/capture/linux/v4l2_capture_delegate_multi_plane.h",
+    ]
+  }
+
   if (is_ios) {
     deps += [ "//media/base/mac" ]
   }
...
@@ -568,6 +568,12 @@
       'video/capture/file_video_capture_device.h',
       'video/capture/file_video_capture_device_factory.cc',
       'video/capture/file_video_capture_device_factory.h',
+      'video/capture/linux/v4l2_capture_delegate.cc',
+      'video/capture/linux/v4l2_capture_delegate.h',
+      'video/capture/linux/v4l2_capture_delegate_multi_plane.cc',
+      'video/capture/linux/v4l2_capture_delegate_multi_plane.h',
+      'video/capture/linux/v4l2_capture_delegate_single_plane.cc',
+      'video/capture/linux/v4l2_capture_delegate_single_plane.h',
       'video/capture/linux/video_capture_device_chromeos.cc',
       'video/capture/linux/video_capture_device_chromeos.h',
       'video/capture/linux/video_capture_device_factory_linux.cc',
@@ -758,6 +764,11 @@
           'audio/openbsd/audio_manager_openbsd.cc',
           'audio/openbsd/audio_manager_openbsd.h',
         ],
+      }, {  # else: openbsd==1
+        'sources!': [
+          'video/capture/linux/v4l2_capture_delegate_multi_plane.cc',
+          'video/capture/linux/v4l2_capture_delegate_multi_plane.h',
+        ],
       }],
       ['OS=="linux"', {
         'conditions': [
...
@@ -31,7 +31,9 @@ void FakeVideoCaptureDeviceFactory::GetDeviceNames(
   for (int n = 0; n < number_of_devices_; ++n) {
     VideoCaptureDevice::Name name(base::StringPrintf("fake_device_%d", n),
                                   base::StringPrintf("/dev/video%d", n)
-#if defined(OS_MACOSX)
+#if defined(OS_LINUX)
+                                  , VideoCaptureDevice::Name::V4L2_SINGLE_PLANE
+#elif defined(OS_MACOSX)
                                   , VideoCaptureDevice::Name::AVFOUNDATION
 #elif defined(OS_WIN)
                                   , VideoCaptureDevice::Name::DIRECT_SHOW
...
@@ -23,6 +23,16 @@ namespace {

 class MockClient : public VideoCaptureDevice::Client {
  public:
+  MOCK_METHOD9(OnIncomingCapturedYuvData,
+               void (const uint8* y_data,
+                     const uint8* u_data,
+                     const uint8* v_data,
+                     size_t y_stride,
+                     size_t u_stride,
+                     size_t v_stride,
+                     const VideoCaptureFormat& frame_format,
+                     int clockwise_rotation,
+                     const base::TimeTicks& timestamp));
   MOCK_METHOD2(ReserveOutputBuffer,
                scoped_refptr<Buffer>(VideoFrame::Format format,
                                      const gfx::Size& dimensions));
@@ -80,6 +90,8 @@ class FakeVideoCaptureDeviceTest : public testing::Test {
   }

   void SetUp() override {
+    EXPECT_CALL(*client_, OnIncomingCapturedYuvData(_,_,_,_,_,_,_,_,_))
+        .Times(0);
     EXPECT_CALL(*client_, ReserveOutputBuffer(_,_)).Times(0);
     EXPECT_CALL(*client_, OnIncomingCapturedVideoFrame(_,_,_)).Times(0);
   }
...
media/video/capture/linux/v4l2_capture_delegate.cc (new file):
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "media/video/capture/linux/v4l2_capture_delegate.h"
#include <poll.h>
#include <sys/fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include "base/bind.h"
#include "base/files/file_enumerator.h"
#include "base/posix/eintr_wrapper.h"
#include "base/strings/stringprintf.h"
#include "media/base/bind_to_current_loop.h"
#include "media/video/capture/linux/v4l2_capture_delegate_multi_plane.h"
#include "media/video/capture/linux/v4l2_capture_delegate_single_plane.h"
#include "media/video/capture/linux/video_capture_device_linux.h"
namespace media {
// Desired number of video buffers to allocate. The actual number of buffers
// allocated by the V4L2 driver can be higher or lower than this number.
// kNumVideoBuffers should not be too small, or Chrome may not return enough
// buffers back to the driver in time.
const uint32 kNumVideoBuffers = 4;
// Timeout in milliseconds that DoCapture() blocks in poll() waiting for a
// frame from the hardware.
const int kCaptureTimeoutMs = 200;
// The number of consecutive timeouts tolerated before being treated as an
// error.
const int kContinuousTimeoutLimit = 10;
// MJPEG is preferred if the requested width or height is larger than this.
const int kMjpegWidth = 640;
const int kMjpegHeight = 480;
// Typical framerate, in frames per second.
const int kTypicalFramerate = 30;
// V4L2 color formats supported by V4L2CaptureDelegate derived classes.
// This list is ordered by precedence of use -- but see caveats for MJPEG.
static struct {
uint32_t fourcc;
VideoPixelFormat pixel_format;
size_t num_planes;
} const kSupportedFormatsAndPlanarity[] = {
{V4L2_PIX_FMT_YUV420, PIXEL_FORMAT_I420, 1},
{V4L2_PIX_FMT_YUYV, PIXEL_FORMAT_YUY2, 1},
{V4L2_PIX_FMT_UYVY, PIXEL_FORMAT_UYVY, 1},
#if !defined(OS_OPENBSD)
// TODO(mcasas): add V4L2_PIX_FMT_YVU420M when available in bots.
{V4L2_PIX_FMT_YUV420M, PIXEL_FORMAT_I420, 3},
#endif
// MJPEG usually sits fairly low in the list since we don't want to have to
// decode it. However, it is needed for large resolutions due to USB bandwidth
// limitations, so GetListOfUsableFourCcs() can duplicate it on top; see that
// method.
{V4L2_PIX_FMT_MJPEG, PIXEL_FORMAT_MJPEG, 1},
// According to field reports, JPEG works as MJPEG on some gspca webcams; see
// https://code.google.com/p/webrtc/issues/detail?id=529. It is therefore the
// least preferred format.
{V4L2_PIX_FMT_JPEG, PIXEL_FORMAT_MJPEG, 1},
};
// static
scoped_refptr<V4L2CaptureDelegate>
V4L2CaptureDelegate::CreateV4L2CaptureDelegate(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency) {
switch (device_name.capture_api_type()) {
case VideoCaptureDevice::Name::V4L2_SINGLE_PLANE:
return make_scoped_refptr(new V4L2CaptureDelegateSinglePlane(
device_name, v4l2_task_runner, power_line_frequency));
case VideoCaptureDevice::Name::V4L2_MULTI_PLANE:
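    // Note: on OpenBSD the #if below compiles out both the MPlane delegate
    // and the default: label, so V4L2_MULTI_PLANE falls through to
    // NOTIMPLEMENTED().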
#if !defined(OS_OPENBSD)
return make_scoped_refptr(new V4L2CaptureDelegateMultiPlane(
device_name, v4l2_task_runner, power_line_frequency));
default:
#endif
NOTIMPLEMENTED() << "Unknown V4L2 capture API type";
return scoped_refptr<V4L2CaptureDelegate>();
}
}
// static
size_t V4L2CaptureDelegate::GetNumPlanesForFourCc(uint32_t fourcc) {
for (const auto& fourcc_and_pixel_format : kSupportedFormatsAndPlanarity) {
if (fourcc_and_pixel_format.fourcc == fourcc)
return fourcc_and_pixel_format.num_planes;
}
DVLOG(1) << "Unknown fourcc " << FourccToString(fourcc);
return 0;
}
// static
VideoPixelFormat V4L2CaptureDelegate::V4l2FourCcToChromiumPixelFormat(
uint32_t v4l2_fourcc) {
for (const auto& fourcc_and_pixel_format : kSupportedFormatsAndPlanarity) {
if (fourcc_and_pixel_format.fourcc == v4l2_fourcc)
return fourcc_and_pixel_format.pixel_format;
}
// Not finding a pixel format is OK during device capabilities enumeration.
// Let the caller decide if PIXEL_FORMAT_UNKNOWN is an error or not.
DVLOG(1) << "Unsupported pixel format: " << FourccToString(v4l2_fourcc);
return PIXEL_FORMAT_UNKNOWN;
}
// static
std::list<uint32_t> V4L2CaptureDelegate::GetListOfUsableFourCcs(
bool prefer_mjpeg) {
std::list<uint32_t> supported_formats;
for (const auto& format : kSupportedFormatsAndPlanarity)
supported_formats.push_back(format.fourcc);
// Duplicate MJPEG on top of the list depending on |prefer_mjpeg|.
if (prefer_mjpeg)
supported_formats.push_front(V4L2_PIX_FMT_MJPEG);
return supported_formats;
}
// static
std::string V4L2CaptureDelegate::FourccToString(uint32_t fourcc) {
return base::StringPrintf("%c%c%c%c", fourcc & 0xFF, (fourcc >> 8) & 0xFF,
(fourcc >> 16) & 0xFF, (fourcc >> 24) & 0xFF);
}
V4L2CaptureDelegate::BufferTracker::BufferTracker() {
}
V4L2CaptureDelegate::BufferTracker::~BufferTracker() {
for (const auto& plane : planes_) {
if (plane.start == nullptr)
continue;
const int result = munmap(plane.start, plane.length);
PLOG_IF(ERROR, result < 0) << "Error munmap()ing V4L2 buffer";
}
}
void V4L2CaptureDelegate::BufferTracker::AddMmapedPlane(uint8_t* const start,
size_t length) {
Plane plane;
plane.start = start;
plane.length = length;
planes_.push_back(plane);
}
V4L2CaptureDelegate::V4L2CaptureDelegate(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency)
: capture_type_((device_name.capture_api_type() ==
VideoCaptureDevice::Name::V4L2_SINGLE_PLANE)
? V4L2_BUF_TYPE_VIDEO_CAPTURE
: V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE),
v4l2_task_runner_(v4l2_task_runner),
device_name_(device_name),
power_line_frequency_(power_line_frequency),
is_capturing_(false),
timeout_count_(0),
rotation_(0) {
}
V4L2CaptureDelegate::~V4L2CaptureDelegate() {
}
void V4L2CaptureDelegate::AllocateAndStart(
int width,
int height,
float frame_rate,
scoped_ptr<VideoCaptureDevice::Client> client) {
DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
DCHECK(client);
client_ = client.Pass();
// Need to open camera with O_RDWR after Linux kernel 3.3.
device_fd_.reset(HANDLE_EINTR(open(device_name_.id().c_str(), O_RDWR)));
if (!device_fd_.is_valid()) {
SetErrorState("Failed to open V4L2 device driver file.");
return;
}
v4l2_capability cap = {};
if (!((HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QUERYCAP, &cap)) == 0) &&
((cap.capabilities & V4L2_CAP_VIDEO_CAPTURE ||
cap.capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE) &&
!(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT) &&
!(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT_MPLANE)))) {
device_fd_.reset();
SetErrorState("This is not a V4L2 video capture device");
return;
}
// Get supported video formats in preferred order.
// For large resolutions, favour mjpeg over raw formats.
const std::list<uint32_t>& desired_v4l2_formats =
GetListOfUsableFourCcs(width > kMjpegWidth || height > kMjpegHeight);
std::list<uint32_t>::const_iterator best = desired_v4l2_formats.end();
v4l2_fmtdesc fmtdesc = {};
fmtdesc.type = capture_type_;
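  // Keep narrowing |best| towards the front (most preferred end) of
  // |desired_v4l2_formats|: std::find() below only searches the sub-range
  // before the current |best|, so |best| can only improve.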
for (; HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_ENUM_FMT, &fmtdesc)) == 0;
++fmtdesc.index) {
best = std::find(desired_v4l2_formats.begin(), best, fmtdesc.pixelformat);
}
if (best == desired_v4l2_formats.end()) {
SetErrorState("Failed to find a supported camera format.");
return;
}
DVLOG(1) << "Chosen pixel format is " << FourccToString(*best);
video_fmt_.type = capture_type_;
if (!FillV4L2Format(&video_fmt_, width, height, *best)) {
SetErrorState("Failed filling in V4L2 Format");
return;
}
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_S_FMT, &video_fmt_)) < 0) {
SetErrorState("Failed to set video capture format");
return;
}
const VideoPixelFormat pixel_format =
V4l2FourCcToChromiumPixelFormat(video_fmt_.fmt.pix.pixelformat);
if (pixel_format == PIXEL_FORMAT_UNKNOWN) {
SetErrorState("Unsupported pixel format");
return;
}
// Set capture framerate in the form of capture interval.
v4l2_streamparm streamparm = {};
streamparm.type = capture_type_;
// The following line checks that the driver knows about framerate get/set.
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_G_PARM, &streamparm)) >= 0) {
// Now check if the device is able to accept a capture framerate set.
if (streamparm.parm.capture.capability & V4L2_CAP_TIMEPERFRAME) {
// |frame_rate| is float, approximate by a fraction.
streamparm.parm.capture.timeperframe.numerator =
media::kFrameRatePrecision;
streamparm.parm.capture.timeperframe.denominator =
(frame_rate) ? (frame_rate * media::kFrameRatePrecision)
: (kTypicalFramerate * media::kFrameRatePrecision);
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_S_PARM, &streamparm)) <
0) {
SetErrorState("Failed to set camera framerate");
return;
}
DVLOG(2) << "Actual camera driverframerate: "
<< streamparm.parm.capture.timeperframe.denominator << "/"
<< streamparm.parm.capture.timeperframe.numerator;
}
}
// TODO(mcasas): what should be done if the camera driver does not allow
// framerate configuration, or the actual one is different from the desired?
// Set anti-banding/anti-flicker to 50/60Hz. May fail due to the operation
// being unsupported (|errno| == EINVAL in this case) or to plain failure.
if ((power_line_frequency_ == V4L2_CID_POWER_LINE_FREQUENCY_50HZ) ||
(power_line_frequency_ == V4L2_CID_POWER_LINE_FREQUENCY_60HZ) ||
(power_line_frequency_ == V4L2_CID_POWER_LINE_FREQUENCY_AUTO)) {
struct v4l2_control control = {};
control.id = V4L2_CID_POWER_LINE_FREQUENCY;
control.value = power_line_frequency_;
const int retval =
HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_S_CTRL, &control));
if (retval != 0)
DVLOG(1) << "Error setting power line frequency removal";
}
capture_format_.frame_size.SetSize(video_fmt_.fmt.pix.width,
video_fmt_.fmt.pix.height);
capture_format_.frame_rate = frame_rate;
capture_format_.pixel_format = pixel_format;
v4l2_requestbuffers r_buffer = {};
r_buffer.type = capture_type_;
r_buffer.memory = V4L2_MEMORY_MMAP;
r_buffer.count = kNumVideoBuffers;
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_REQBUFS, &r_buffer)) < 0) {
SetErrorState("Error requesting MMAP buffers from V4L2");
return;
}
for (unsigned int i = 0; i < r_buffer.count; ++i) {
if (!MapAndQueueBuffer(i)) {
SetErrorState("Allocate buffer failed");
return;
}
}
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_STREAMON, &capture_type_))
< 0) {
SetErrorState("VIDIOC_STREAMON failed");
return;
}
is_capturing_ = true;
// Post task to start fetching frames from v4l2.
v4l2_task_runner_->PostTask(
FROM_HERE, base::Bind(&V4L2CaptureDelegate::DoCapture, this));
}
void V4L2CaptureDelegate::StopAndDeAllocate() {
DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
// The order is important: stop streaming, clear |buffer_tracker_pool_|,
// thus munmap()ing the v4l2_buffers, and then return them to the OS.
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_STREAMOFF, &capture_type_))
< 0) {
SetErrorState("VIDIOC_STREAMOFF failed");
return;
}
buffer_tracker_pool_.clear();
v4l2_requestbuffers r_buffer = {};
r_buffer.type = capture_type_;
r_buffer.memory = V4L2_MEMORY_MMAP;
r_buffer.count = 0;
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_REQBUFS, &r_buffer)) < 0)
SetErrorState("Failed to VIDIOC_REQBUFS with count = 0");
// At this point we can close the device.
// This is also needed for correctly changing settings later via VIDIOC_S_FMT.
device_fd_.reset();
is_capturing_ = false;
client_.reset();
}
void V4L2CaptureDelegate::SetRotation(int rotation) {
DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
DCHECK(rotation >= 0 && rotation < 360 && rotation % 90 == 0);
rotation_ = rotation;
}
bool V4L2CaptureDelegate::MapAndQueueBuffer(int index) {
v4l2_buffer buffer;
FillV4L2Buffer(&buffer, index);
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QUERYBUF, &buffer)) < 0) {
DLOG(ERROR) << "Error querying status of a MMAP V4L2 buffer";
return false;
}
const scoped_refptr<BufferTracker>& buffer_tracker = CreateBufferTracker();
if (!buffer_tracker->Init(device_fd_.get(), buffer)) {
DLOG(ERROR) << "Error creating BufferTracker";
return false;
}
buffer_tracker_pool_.push_back(buffer_tracker);
// Enqueue the buffer in the driver's incoming queue.
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QBUF, &buffer)) < 0) {
DLOG(ERROR) << "Error enqueuing a V4L2 buffer back into the driver";
return false;
}
return true;
}
void V4L2CaptureDelegate::FillV4L2Buffer(v4l2_buffer* buffer,
int i) const {
memset(buffer, 0, sizeof(*buffer));
buffer->memory = V4L2_MEMORY_MMAP;
buffer->index = i;
FinishFillingV4L2Buffer(buffer);
}
void V4L2CaptureDelegate::DoCapture() {
DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
if (!is_capturing_)
return;
pollfd device_pfd = {};
device_pfd.fd = device_fd_.get();
device_pfd.events = POLLIN;
const int result = HANDLE_EINTR(poll(&device_pfd, 1, kCaptureTimeoutMs));
if (result < 0) {
SetErrorState("Poll failed");
return;
}
// Check if poll() timed out; track the number of consecutive timeouts and
// raise an error if there are too many in a row.
if (result == 0) {
timeout_count_++;
if (timeout_count_ >= kContinuousTimeoutLimit) {
SetErrorState("Multiple continuous timeouts while read-polling.");
timeout_count_ = 0;
return;
}
} else {
timeout_count_ = 0;
}
// Dequeue, send and re-enqueue a buffer if the driver has filled one in.
if (device_pfd.revents & POLLIN) {
v4l2_buffer buffer;
FillV4L2Buffer(&buffer, 0);
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_DQBUF, &buffer)) < 0) {
SetErrorState("Failed to dequeue capture buffer");
return;
}
SendBuffer(buffer_tracker_pool_[buffer.index], video_fmt_);
if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QBUF, &buffer)) < 0) {
SetErrorState("Failed to enqueue capture buffer");
return;
}
}
v4l2_task_runner_->PostTask(
FROM_HERE, base::Bind(&V4L2CaptureDelegate::DoCapture, this));
}
void V4L2CaptureDelegate::SetErrorState(const std::string& reason) {
DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
is_capturing_ = false;
client_->OnError(reason);
}
}  // namespace media
...
media/video/capture/linux/v4l2_capture_delegate.h (new file):
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef MEDIA_VIDEO_CAPTURE_LINUX_V4L2_VIDEO_CAPTURE_DELEGATE_H_
#define MEDIA_VIDEO_CAPTURE_LINUX_V4L2_VIDEO_CAPTURE_DELEGATE_H_
#if defined(OS_OPENBSD)
#include <sys/videoio.h>
#else
#include <linux/videodev2.h>
#endif
#include "base/files/scoped_file.h"
#include "base/memory/ref_counted.h"
#include "base/memory/scoped_vector.h"
#include "media/video/capture/video_capture_device.h"
namespace media {
// Class doing the actual Linux capture using V4L2 API. V4L2 SPLANE/MPLANE
// capture specifics are implemented in derived classes. Created and destroyed
// on the owner's thread, otherwise living and operating on |v4l2_task_runner_|.
class V4L2CaptureDelegate
: public base::RefCountedThreadSafe<V4L2CaptureDelegate> {
public:
// Creates the appropriate V4L2CaptureDelegate according to the API type in
// |device_name|.
static scoped_refptr<V4L2CaptureDelegate> CreateV4L2CaptureDelegate(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency);
// Retrieves the number of planes for a given |fourcc|, or 0 if unknown.
static size_t GetNumPlanesForFourCc(uint32_t fourcc);
// Returns the Chrome pixel format for |v4l2_fourcc| or PIXEL_FORMAT_UNKNOWN.
static VideoPixelFormat V4l2FourCcToChromiumPixelFormat(uint32_t v4l2_fourcc);
// Composes a list of usable and supported pixel formats, in order of
// preference, with MJPEG prioritised depending on |prefer_mjpeg|.
static std::list<uint32_t> GetListOfUsableFourCcs(bool prefer_mjpeg);
// Forward-to versions of VideoCaptureDevice virtual methods.
void AllocateAndStart(int width,
int height,
float frame_rate,
scoped_ptr<VideoCaptureDevice::Client> client);
void StopAndDeAllocate();
void SetRotation(int rotation);
protected:
// Class keeping track of SPLANE/MPLANE V4L2 buffers, mmap()ed on construction
// and munmap()ed on destruction. Destruction is identical for S/MPLANE, but
// construction is not, so it is implemented in derived classes.
// Internally it has a vector of planes, which for SPLANE will contain only
// one element.
class BufferTracker : public base::RefCounted<BufferTracker> {
public:
BufferTracker();
// Abstract method to mmap() the given |fd| according to |buffer|;
// planarity-specific.
virtual bool Init(int fd, const v4l2_buffer& buffer) = 0;
uint8_t* const GetPlaneStart(size_t plane) const {
DCHECK_LT(plane, planes_.size());
return planes_[plane].start;
}
protected:
friend class base::RefCounted<BufferTracker>;
virtual ~BufferTracker();
// Adds a given mmap()ed plane to |planes_|.
void AddMmapedPlane(uint8_t* const start, size_t length);
private:
struct Plane {
uint8_t* start;
size_t length;
};
std::vector<Plane> planes_;
};
V4L2CaptureDelegate(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency);
virtual ~V4L2CaptureDelegate();
// Creates the necessary, planarity-specific, internal tracking scheme.
virtual scoped_refptr<BufferTracker> CreateBufferTracker() const = 0;
// Fills in |format| with the given parameters, in a planarity-dependent way.
virtual bool FillV4L2Format(v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const = 0;
// Finishes filling the |buffer| struct with planarity-dependent data.
virtual void FinishFillingV4L2Buffer(v4l2_buffer* buffer) const = 0;
// Sends the captured |buffer_tracker| to the |client_|, synchronously.
virtual void SendBuffer(
const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const = 0;
// A few accessors for SendBuffer() implementations to access private member
// variables.
VideoCaptureFormat capture_format() const { return capture_format_; }
VideoCaptureDevice::Client* client() const { return client_.get(); }
int rotation() const { return rotation_; }
private:
friend class base::RefCountedThreadSafe<V4L2CaptureDelegate>;
// Returns the input |fourcc| as its four-character std::string representation.
static std::string FourccToString(uint32_t fourcc);
// VIDIOC_QUERYBUFs a buffer from V4L2, creates a BufferTracker for it and
// enqueues it (VIDIOC_QBUF) back into V4L2.
bool MapAndQueueBuffer(int index);
// Fills all common parts of |buffer|. Delegates to FinishFillingV4L2Buffer()
// for filling in the planar-dependent parts.
void FillV4L2Buffer(v4l2_buffer* buffer, int i) const;
void DoCapture();
void SetErrorState(const std::string& reason);
const v4l2_buf_type capture_type_;
const scoped_refptr<base::SingleThreadTaskRunner> v4l2_task_runner_;
const VideoCaptureDevice::Name device_name_;
const int power_line_frequency_;
// The following members are only known on AllocateAndStart().
VideoCaptureFormat capture_format_;
v4l2_format video_fmt_;
scoped_ptr<VideoCaptureDevice::Client> client_;
base::ScopedFD device_fd_;
// Vector of BufferTracker to keep track of mmap()ed pointers and their use.
std::vector<scoped_refptr<BufferTracker>> buffer_tracker_pool_;
bool is_capturing_;
int timeout_count_;
// Clockwise rotation in degrees. This value should be 0, 90, 180, or 270.
int rotation_;
DISALLOW_COPY_AND_ASSIGN(V4L2CaptureDelegate);
};
} // namespace media
#endif  // MEDIA_VIDEO_CAPTURE_LINUX_V4L2_VIDEO_CAPTURE_DELEGATE_H_
...
media/video/capture/linux/v4l2_capture_delegate_multi_plane.cc (new file):
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "media/video/capture/linux/v4l2_capture_delegate_multi_plane.h"
#include <sys/mman.h>
namespace media {
V4L2CaptureDelegateMultiPlane::V4L2CaptureDelegateMultiPlane(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency)
: V4L2CaptureDelegate(device_name,
v4l2_task_runner,
power_line_frequency) {
}
V4L2CaptureDelegateMultiPlane::~V4L2CaptureDelegateMultiPlane() {
}
scoped_refptr<V4L2CaptureDelegate::BufferTracker>
V4L2CaptureDelegateMultiPlane::CreateBufferTracker() const {
return make_scoped_refptr(new BufferTrackerMPlane());
}
bool V4L2CaptureDelegateMultiPlane::FillV4L2Format(
v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const {
format->fmt.pix_mp.width = width;
format->fmt.pix_mp.height = height;
format->fmt.pix_mp.pixelformat = pixelformat_fourcc;
const size_t num_v4l2_planes =
V4L2CaptureDelegate::GetNumPlanesForFourCc(pixelformat_fourcc);
if (num_v4l2_planes == 0u)
return false;
DCHECK_LE(num_v4l2_planes, static_cast<size_t>(VIDEO_MAX_PLANES));
format->fmt.pix_mp.num_planes = num_v4l2_planes;
v4l2_planes_.resize(num_v4l2_planes);
return true;
}
void V4L2CaptureDelegateMultiPlane::FinishFillingV4L2Buffer(
v4l2_buffer* buffer) const {
buffer->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
buffer->length = v4l2_planes_.size();
static const struct v4l2_plane empty_plane = {};
std::fill(v4l2_planes_.begin(), v4l2_planes_.end(), empty_plane);
buffer->m.planes = v4l2_planes_.data();
}
void V4L2CaptureDelegateMultiPlane::SendBuffer(
const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const {
DCHECK_EQ(capture_format().pixel_format, PIXEL_FORMAT_I420);
const size_t y_stride = format.fmt.pix_mp.plane_fmt[0].bytesperline;
const size_t u_stride = format.fmt.pix_mp.plane_fmt[1].bytesperline;
const size_t v_stride = format.fmt.pix_mp.plane_fmt[2].bytesperline;
DCHECK_GE(y_stride, 1u * capture_format().frame_size.width());
DCHECK_GE(u_stride, 1u * capture_format().frame_size.width() / 2);
DCHECK_GE(v_stride, 1u * capture_format().frame_size.width() / 2);
client()->OnIncomingCapturedYuvData(buffer_tracker->GetPlaneStart(0),
buffer_tracker->GetPlaneStart(1),
buffer_tracker->GetPlaneStart(2),
y_stride,
u_stride,
v_stride,
capture_format(),
rotation(),
base::TimeTicks::Now());
}
bool V4L2CaptureDelegateMultiPlane::BufferTrackerMPlane::Init(
int fd,
const v4l2_buffer& buffer) {
for (size_t p = 0; p < buffer.length; ++p) {
// Some devices require mmap() to be called with both READ and WRITE.
// See http://crbug.com/178582.
void* const start =
mmap(NULL, buffer.m.planes[p].length, PROT_READ | PROT_WRITE,
MAP_SHARED, fd, buffer.m.planes[p].m.mem_offset);
if (start == MAP_FAILED) {
DLOG(ERROR) << "Error mmap()ing a V4L2 buffer into userspace";
return false;
}
AddMmapedPlane(static_cast<uint8_t*>(start), buffer.m.planes[p].length);
DVLOG(3) << "Mmap()ed plane #" << p << " of " << buffer.m.planes[p].length
<< "B";
}
return true;
}
}  // namespace media
...
media/video/capture/linux/v4l2_capture_delegate_multi_plane.h (new file):
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_MULTI_PLANE_H_
#define MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_MULTI_PLANE_H_
#include "base/memory/ref_counted.h"
#include "media/video/capture/linux/v4l2_capture_delegate.h"
#if defined(OS_OPENBSD)
#error "OpenBSD does not support MPlane capture API."
#endif
namespace base {
class SingleThreadTaskRunner;
} // namespace base
namespace media {
// V4L2 specifics for MPLANE API.
class V4L2CaptureDelegateMultiPlane final : public V4L2CaptureDelegate {
public:
V4L2CaptureDelegateMultiPlane(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency);
private:
// BufferTracker derivation to implement construction semantics for MPLANE.
class BufferTrackerMPlane final : public BufferTracker {
public:
bool Init(int fd, const v4l2_buffer& buffer) override;
private:
~BufferTrackerMPlane() override {}
};
~V4L2CaptureDelegateMultiPlane() override;
// V4L2CaptureDelegate virtual methods implementation.
scoped_refptr<BufferTracker> CreateBufferTracker() const override;
bool FillV4L2Format(v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const override;
void FinishFillingV4L2Buffer(v4l2_buffer* buffer) const override;
void SendBuffer(const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const override;
// Vector to allocate and track as many v4l2_plane structs as planes, needed
// for v4l2_buffer.m.planes. This is a scratchpad marked mutable to enable
// using it in otherwise const methods.
mutable std::vector<struct v4l2_plane> v4l2_planes_;
};
} // namespace media
#endif  // MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_MULTI_PLANE_H_
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "media/video/capture/linux/v4l2_capture_delegate_single_plane.h"
#include <sys/mman.h>
namespace media {
scoped_refptr<V4L2CaptureDelegate::BufferTracker>
V4L2CaptureDelegateSinglePlane::CreateBufferTracker() const {
return make_scoped_refptr(new BufferTrackerSPlane());
}
bool V4L2CaptureDelegateSinglePlane::FillV4L2Format(
v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const {
format->fmt.pix.width = width;
format->fmt.pix.height = height;
format->fmt.pix.pixelformat = pixelformat_fourcc;
return true;
}
void V4L2CaptureDelegateSinglePlane::FinishFillingV4L2Buffer(
v4l2_buffer* buffer) const {
buffer->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
}
void V4L2CaptureDelegateSinglePlane::SendBuffer(
const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const {
const size_t data_length = format.fmt.pix.sizeimage;
DCHECK_GE(data_length, capture_format().ImageAllocationSize());
client()->OnIncomingCapturedData(
buffer_tracker->GetPlaneStart(0),
data_length,
capture_format(),
rotation(),
base::TimeTicks::Now());
}
bool V4L2CaptureDelegateSinglePlane::BufferTrackerSPlane::Init(
int fd,
const v4l2_buffer& buffer) {
// Some devices require mmap() to be called with both READ and WRITE.
// See http://crbug.com/178582.
void* const start = mmap(NULL, buffer.length, PROT_READ | PROT_WRITE,
MAP_SHARED, fd, buffer.m.offset);
if (start == MAP_FAILED) {
DLOG(ERROR) << "Error mmap()ing a V4L2 buffer into userspace";
return false;
}
AddMmapedPlane(static_cast<uint8_t*>(start), buffer.length);
return true;
}
} // namespace media
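
On the commit message's point about automatic buffer lifetime: every mmap() performed in BufferTrackerSPlane::Init() above (and in its MPLANE sibling) must eventually be matched by munmap(), which the generic BufferTracker destructor now guarantees. The following is a minimal RAII sketch of that pairing, assuming nothing beyond POSIX; it is an illustration, not the CL's BufferTracker.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/types.h>

class ScopedV4L2Mapping {
 public:
  ScopedV4L2Mapping(int fd, size_t length, off_t offset)
      : start_(mmap(NULL, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
                    offset)),
        length_(length) {}
  ~ScopedV4L2Mapping() {
    if (start_ != MAP_FAILED)
      munmap(start_, length_);  // Unmapped on every exit path.
  }
  bool valid() const { return start_ != MAP_FAILED; }
  uint8_t* data() const { return static_cast<uint8_t*>(start_); }

 private:
  void* const start_;
  const size_t length_;
};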
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_SINGLE_PLANE_H_
#define MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_SINGLE_PLANE_H_
#include "base/memory/ref_counted.h"
#include "media/video/capture/linux/v4l2_capture_delegate.h"
#include "media/video/capture/video_capture_device.h"
namespace base {
class SingleThreadTaskRunner;
} // namespace base
namespace media {
// V4L2 specifics for SPLANE API.
class V4L2CaptureDelegateSinglePlane final : public V4L2CaptureDelegate {
public:
V4L2CaptureDelegateSinglePlane(
const VideoCaptureDevice::Name& device_name,
const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
int power_line_frequency)
: V4L2CaptureDelegate(device_name,
v4l2_task_runner,
power_line_frequency) {}
private:
// BufferTracker derivation to implement construction semantics for SPLANE.
class BufferTrackerSPlane final : public BufferTracker {
public:
bool Init(int fd, const v4l2_buffer& buffer) override;
private:
~BufferTrackerSPlane() override {}
};
~V4L2CaptureDelegateSinglePlane() override {}
// V4L2CaptureDelegate virtual methods implementation.
scoped_refptr<BufferTracker> CreateBufferTracker() const override;
bool FillV4L2Format(v4l2_format* format,
uint32_t width,
uint32_t height,
uint32_t pixelformat_fourcc) const override;
void FinishFillingV4L2Buffer(v4l2_buffer* buffer) const override;
void SendBuffer(const scoped_refptr<BufferTracker>& buffer_tracker,
const v4l2_format& format) const override;
};
} // namespace media
#endif  // MEDIA_VIDEO_CAPTURE_LINUX_V4L2_CAPTURE_DELEGATE_SINGLE_PLANE_H_
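
How a concrete delegate gets chosen is not part of this excerpt: V4L2CaptureDelegate::CreateV4L2CaptureDelegate(), referenced from video_capture_device_linux.cc below, lives in v4l2_capture_delegate.cc. The sketch that follows is an assumption about the shape of that planarity dispatch, keyed off the Name's capture_api_type(); it is illustrative only, not a copy of the real factory.

// Sketch only: the real factory is in v4l2_capture_delegate.cc (not shown).
scoped_refptr<V4L2CaptureDelegate> CreateDelegateSketch(
    const VideoCaptureDevice::Name& device_name,
    const scoped_refptr<base::SingleThreadTaskRunner>& v4l2_task_runner,
    int power_line_frequency) {
  switch (device_name.capture_api_type()) {
    case VideoCaptureDevice::Name::V4L2_MULTI_PLANE:
      return make_scoped_refptr(new V4L2CaptureDelegateMultiPlane(
          device_name, v4l2_task_runner, power_line_frequency));
    case VideoCaptureDevice::Name::V4L2_SINGLE_PLANE:
    default:
      return make_scoped_refptr(new V4L2CaptureDelegateSinglePlane(
          device_name, v4l2_task_runner, power_line_frequency));
  }
}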
...@@ -25,7 +25,7 @@
 namespace media {
 static bool HasUsableFormats(int fd, uint32 capabilities) {
-  const std::list<int>& usable_fourccs =
+  const std::list<uint32_t>& usable_fourccs =
       VideoCaptureDeviceLinux::GetListOfUsableFourCCs(false);
   static const struct {
...@@ -48,6 +48,7 @@ static bool HasUsableFormats(int fd, uint32 capabilities) {
       }
     }
   }
+  DLOG(ERROR) << "No usable formats found";
   return false;
 }
...@@ -182,9 +183,11 @@ void VideoCaptureDeviceFactoryLinux::GetDeviceNames(
           !(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT) &&
           !(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT_MPLANE)) &&
          HasUsableFormats(fd.get(), cap.capabilities)) {
-      VideoCaptureDevice::Name device_name(base::StringPrintf("%s", cap.card),
-                                           unique_id);
-      device_names->push_back(device_name);
+      device_names->push_back(VideoCaptureDevice::Name(
+          base::StringPrintf("%s", cap.card), unique_id,
+          (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE_MPLANE)
+              ? VideoCaptureDevice::Name::V4L2_MULTI_PLANE
+              : VideoCaptureDevice::Name::V4L2_SINGLE_PLANE));
     }
   }
 }
...@@ -200,10 +203,14 @@ void VideoCaptureDeviceFactoryLinux::GetDeviceSupportedFormats(
     return;
   supported_formats->clear();
-  const v4l2_buf_type kCaptureTypes[] = {V4L2_BUF_TYPE_VIDEO_CAPTURE,
-                                         V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE};
-  for (const auto& buf_type : kCaptureTypes)
+  DCHECK_NE(device.capture_api_type(),
+            VideoCaptureDevice::Name::API_TYPE_UNKNOWN);
+  const v4l2_buf_type buf_type =
+      (device.capture_api_type() == VideoCaptureDevice::Name::V4L2_MULTI_PLANE)
+          ? V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE
+          : V4L2_BUF_TYPE_VIDEO_CAPTURE;
   GetSupportedFormatsForV4L2BufferType(fd.get(), buf_type, supported_formats);
   return;
 }
...
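
The single buf_type selected above is what GetSupportedFormatsForV4L2BufferType() enumerates against; a driver reports SPLANE and MPLANE formats independently. A standalone sketch of that enumeration loop follows, using plain V4L2 with no Chromium helpers; |fd| is assumed to be an open capture device.

#include <linux/videodev2.h>
#include <stdio.h>
#include <sys/ioctl.h>

void PrintSupportedFormats(int fd, v4l2_buf_type buf_type) {
  v4l2_fmtdesc fmtdesc = {};
  fmtdesc.type = buf_type;  // SPLANE and MPLANE enumerate separately.
  while (ioctl(fd, VIDIOC_ENUM_FMT, &fmtdesc) == 0) {
    printf("fourcc 0x%08x: %s\n", fmtdesc.pixelformat,
           reinterpret_cast<const char*>(fmtdesc.description));
    ++fmtdesc.index;
  }
}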
...@@ -4,101 +4,21 @@
 #include "media/video/capture/linux/video_capture_device_linux.h"
-#include <errno.h>
-#include <fcntl.h>
-#include <poll.h>
 #if defined(OS_OPENBSD)
 #include <sys/videoio.h>
 #else
 #include <linux/videodev2.h>
 #endif
-#include <sys/ioctl.h>
-#include <sys/mman.h>
 #include <list>
 #include <string>
 #include "base/bind.h"
-#include "base/files/file_enumerator.h"
-#include "base/files/scoped_file.h"
-#include "base/posix/eintr_wrapper.h"
 #include "base/strings/stringprintf.h"
+#include "media/video/capture/linux/v4l2_capture_delegate.h"
 namespace media {
-#define GET_V4L2_FOURCC_CHAR(a, index) ((char)( ((a) >> (8 * index)) & 0xff))
-// Desired number of video buffers to allocate. The actual number of allocated
-// buffers by v4l2 driver can be higher or lower than this number.
-// kNumVideoBuffers should not be too small, or Chrome may not return enough
-// buffers back to driver in time.
-const uint32 kNumVideoBuffers = 4;
-// Timeout in milliseconds v4l2_thread_ blocks waiting for a frame from the hw.
-enum { kCaptureTimeoutMs = 200 };
-// The number of continuous timeouts tolerated before treated as error.
-enum { kContinuousTimeoutLimit = 10 };
-// MJPEG is preferred if the width or height is larger than this.
-enum { kMjpegWidth = 640 };
-enum { kMjpegHeight = 480 };
-// Typical framerate, in fps
-enum { kTypicalFramerate = 30 };
-class VideoCaptureDeviceLinux::V4L2CaptureDelegate
-    : public base::RefCountedThreadSafe<V4L2CaptureDelegate> {
- public:
-  V4L2CaptureDelegate(
-      const Name& device_name,
-      const scoped_refptr<base::SingleThreadTaskRunner> v4l2_task_runner,
-      int power_line_frequency);
-  void AllocateAndStart(int width,
-                        int height,
-                        float frame_rate,
-                        scoped_ptr<Client> client);
-  void StopAndDeAllocate();
-  void SetRotation(int rotation);
-  bool DeAllocateVideoBuffers();
- private:
-  // Buffers used to receive captured frames from v4l2.
-  struct Buffer {
-    Buffer() : start(0), length(0) {}
-    void* start;
-    size_t length;
-  };
-  friend class base::RefCountedThreadSafe<V4L2CaptureDelegate>;
-  ~V4L2CaptureDelegate();
-  void DoCapture();
-  bool AllocateVideoBuffers();
-  void SetErrorState(const std::string& reason);
-  const scoped_refptr<base::SingleThreadTaskRunner> v4l2_task_runner_;
-  bool is_capturing_;
-  scoped_ptr<VideoCaptureDevice::Client> client_;
-  const Name device_name_;
-  base::ScopedFD device_fd_;  // File descriptor for the opened camera device.
-  Buffer* buffer_pool_;
-  int buffer_pool_size_;  // Number of allocated buffers.
-  int timeout_count_;
-  VideoCaptureFormat capture_format_;
-  const int power_line_frequency_;
-  // Clockwise rotation in degrees. This value should be 0, 90, 180, or 270.
-  int rotation_;
-  DISALLOW_IMPLICIT_CONSTRUCTORS(V4L2CaptureDelegate);
-};
-// V4L2 color formats VideoCaptureDeviceLinux support.
-static const int32 kV4l2RawFmts[] = {
-    V4L2_PIX_FMT_YUV420,
-    V4L2_PIX_FMT_YUYV,
-    V4L2_PIX_FMT_UYVY
-};
 // USB VID and PID are both 4 bytes long.
 static const size_t kVidPidSize = 4;
...@@ -122,48 +42,18 @@ static bool ReadIdFile(const std::string path, std::string* id) {
   return true;
 }
-// This function translates Video4Linux pixel formats to Chromium pixel formats,
-// should only support those listed in GetListOfUsableFourCCs.
+// Translates Video4Linux pixel formats to Chromium pixel formats.
 // static
 VideoPixelFormat VideoCaptureDeviceLinux::V4l2FourCcToChromiumPixelFormat(
     uint32 v4l2_fourcc) {
-  const struct {
-    uint32 fourcc;
-    VideoPixelFormat pixel_format;
-  } kFourCcAndChromiumPixelFormat[] = {
-      {V4L2_PIX_FMT_YUV420, PIXEL_FORMAT_I420},
-      {V4L2_PIX_FMT_YUYV, PIXEL_FORMAT_YUY2},
-      {V4L2_PIX_FMT_UYVY, PIXEL_FORMAT_UYVY},
-      {V4L2_PIX_FMT_MJPEG, PIXEL_FORMAT_MJPEG},
-      {V4L2_PIX_FMT_JPEG, PIXEL_FORMAT_MJPEG},
-  };
-  for (const auto& fourcc_and_pixel_format : kFourCcAndChromiumPixelFormat) {
-    if (fourcc_and_pixel_format.fourcc == v4l2_fourcc)
-      return fourcc_and_pixel_format.pixel_format;
-  }
-  DVLOG(1) << "Unsupported pixel format: "
-           << GET_V4L2_FOURCC_CHAR(v4l2_fourcc, 0)
-           << GET_V4L2_FOURCC_CHAR(v4l2_fourcc, 1)
-           << GET_V4L2_FOURCC_CHAR(v4l2_fourcc, 2)
-           << GET_V4L2_FOURCC_CHAR(v4l2_fourcc, 3);
-  return PIXEL_FORMAT_UNKNOWN;
+  return V4L2CaptureDelegate::V4l2FourCcToChromiumPixelFormat(v4l2_fourcc);
 }
-// Gets a list of usable Four CC formats prioritised.
 // static
-std::list<int> VideoCaptureDeviceLinux::GetListOfUsableFourCCs(
+std::list<uint32_t> VideoCaptureDeviceLinux::GetListOfUsableFourCCs(
     bool favour_mjpeg) {
-  std::list<int> fourccs;
-  for (size_t i = 0; i < arraysize(kV4l2RawFmts); ++i)
-    fourccs.push_back(kV4l2RawFmts[i]);
-  if (favour_mjpeg)
-    fourccs.push_front(V4L2_PIX_FMT_MJPEG);
-  else
-    fourccs.push_back(V4L2_PIX_FMT_MJPEG);
-  // JPEG works as MJPEG on some gspca webcams from field reports.
-  // Put it as the least preferred format.
-  fourccs.push_back(V4L2_PIX_FMT_JPEG);
-  return fourccs;
+  return V4L2CaptureDelegate::GetListOfUsableFourCcs(favour_mjpeg);
 }
 const std::string VideoCaptureDevice::Name::GetModel() const {
...@@ -207,18 +97,21 @@ void VideoCaptureDeviceLinux::AllocateAndStart(
   if (v4l2_thread_.IsRunning())
     return;  // Wrong state.
   v4l2_thread_.Start();
-  capture_impl_ = new V4L2CaptureDelegate(device_name_,
-                                          v4l2_thread_.message_loop_proxy(),
-                                          GetPowerLineFrequencyForLocation());
+  const int line_frequency =
+      TranslatePowerLineFrequencyToV4L2(GetPowerLineFrequencyForLocation());
+  capture_impl_ = V4L2CaptureDelegate::CreateV4L2CaptureDelegate(
+      device_name_, v4l2_thread_.message_loop_proxy(), line_frequency);
+  if (!capture_impl_) {
+    client->OnError("Failed to create VideoCaptureDelegate");
+    return;
+  }
   v4l2_thread_.message_loop()->PostTask(
       FROM_HERE,
-      base::Bind(
-          &VideoCaptureDeviceLinux::V4L2CaptureDelegate::AllocateAndStart,
-          capture_impl_,
+      base::Bind(&V4L2CaptureDelegate::AllocateAndStart, capture_impl_,
                  params.requested_format.frame_size.width(),
                  params.requested_format.frame_size.height(),
-                 params.requested_format.frame_rate,
-                 base::Passed(&client)));
+                 params.requested_format.frame_rate, base::Passed(&client)));
 }
 void VideoCaptureDeviceLinux::StopAndDeAllocate() {
...@@ -226,309 +119,31 @@ void VideoCaptureDeviceLinux::StopAndDeAllocate() {
     return;  // Wrong state.
   v4l2_thread_.message_loop()->PostTask(
       FROM_HERE,
-      base::Bind(
-          &VideoCaptureDeviceLinux::V4L2CaptureDelegate::StopAndDeAllocate,
-          capture_impl_));
+      base::Bind(&V4L2CaptureDelegate::StopAndDeAllocate, capture_impl_));
   v4l2_thread_.Stop();
-  // TODO(mcasas): VCDLinux called DeAllocateVideoBuffers() a second time after
-  // stopping |v4l2_thread_| to make sure buffers were completely deallocated.
-  // Investigate if that's needed, otherwise remove the following line and make
-  // V4L2CaptureDelegate::DeAllocateVideoBuffers() private.
-  capture_impl_->DeAllocateVideoBuffers();
   capture_impl_ = NULL;
 }
 void VideoCaptureDeviceLinux::SetRotation(int rotation) {
   if (v4l2_thread_.IsRunning()) {
     v4l2_thread_.message_loop()->PostTask(
-        FROM_HERE,
-        base::Bind(
-            &VideoCaptureDeviceLinux::V4L2CaptureDelegate::SetRotation,
-            capture_impl_,
-            rotation));
+        FROM_HERE, base::Bind(&V4L2CaptureDelegate::SetRotation,
+                              capture_impl_, rotation));
   }
 }
-VideoCaptureDeviceLinux::V4L2CaptureDelegate::V4L2CaptureDelegate(
-    const Name& device_name,
-    const scoped_refptr<base::SingleThreadTaskRunner> v4l2_task_runner,
-    int power_line_frequency)
-    : v4l2_task_runner_(v4l2_task_runner),
-      is_capturing_(false),
-      device_name_(device_name),
-      buffer_pool_(NULL),
-      buffer_pool_size_(0),
-      timeout_count_(0),
-      power_line_frequency_(power_line_frequency),
-      rotation_(0) {
-}
-VideoCaptureDeviceLinux::V4L2CaptureDelegate::~V4L2CaptureDelegate() {
-  DCHECK(!client_);
-}
-void VideoCaptureDeviceLinux::V4L2CaptureDelegate::AllocateAndStart(
-    int width,
-    int height,
-    float frame_rate,
-    scoped_ptr<Client> client) {
-  DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
-  DCHECK(client);
-  client_ = client.Pass();
-  // Need to open camera with O_RDWR after Linux kernel 3.3.
-  device_fd_.reset(HANDLE_EINTR(open(device_name_.id().c_str(), O_RDWR)));
-  if (!device_fd_.is_valid()) {
-    SetErrorState("Failed to open V4L2 device driver file.");
-    return;
-  }
-  v4l2_capability cap = {};
-  if (!((HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QUERYCAP, &cap)) == 0) &&
-        (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE &&
-         !(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT)))) {
-    device_fd_.reset();
-    SetErrorState("This is not a V4L2 video capture device");
-    return;
-  }
-  // Get supported video formats in preferred order.
-  // For large resolutions, favour mjpeg over raw formats.
-  const std::list<int>& desired_v4l2_formats =
-      GetListOfUsableFourCCs(width > kMjpegWidth || height > kMjpegHeight);
-  std::list<int>::const_iterator best = desired_v4l2_formats.end();
-  v4l2_fmtdesc fmtdesc = {};
-  fmtdesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-  for (; HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_ENUM_FMT, &fmtdesc)) == 0;
-       ++fmtdesc.index) {
-    best = std::find(desired_v4l2_formats.begin(), best, fmtdesc.pixelformat);
-  }
-  if (best == desired_v4l2_formats.end()) {
-    SetErrorState("Failed to find a supported camera format.");
-    return;
-  }
-  v4l2_format video_fmt = {};
-  video_fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-  video_fmt.fmt.pix.sizeimage = 0;
-  video_fmt.fmt.pix.width = width;
-  video_fmt.fmt.pix.height = height;
-  video_fmt.fmt.pix.pixelformat = *best;
-  if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_S_FMT, &video_fmt)) < 0) {
-    SetErrorState("Failed to set video capture format");
-    return;
-  }
-  // Set capture framerate in the form of capture interval.
-  v4l2_streamparm streamparm = {};
-  streamparm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-  // The following line checks that the driver knows about framerate get/set.
-  if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_G_PARM, &streamparm)) >= 0) {
-    // Now check if the device is able to accept a capture framerate set.
-    if (streamparm.parm.capture.capability & V4L2_CAP_TIMEPERFRAME) {
-      // |frame_rate| is float, approximate by a fraction.
-      streamparm.parm.capture.timeperframe.numerator =
-          media::kFrameRatePrecision;
-      streamparm.parm.capture.timeperframe.denominator = (frame_rate) ?
-          (frame_rate * media::kFrameRatePrecision) :
-          (kTypicalFramerate * media::kFrameRatePrecision);
-      if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_S_PARM, &streamparm)) <
-          0) {
-        SetErrorState("Failed to set camera framerate");
-        return;
-      }
-      DVLOG(2) << "Actual camera driverframerate: "
-               << streamparm.parm.capture.timeperframe.denominator << "/"
-               << streamparm.parm.capture.timeperframe.numerator;
-    }
-  }
-  // TODO(mcasas): what should be done if the camera driver does not allow
-  // framerate configuration, or the actual one is different from the desired?
-  // Set anti-banding/anti-flicker to 50/60Hz. May fail due to not supported
-  // operation (|errno| == EINVAL in this case) or plain failure.
-  if ((power_line_frequency_ == kPowerLine50Hz) ||
-      (power_line_frequency_ == kPowerLine60Hz)) {
-    struct v4l2_control control = {};
-    control.id = V4L2_CID_POWER_LINE_FREQUENCY;
-    control.value = (power_line_frequency_ == kPowerLine50Hz)
-                        ? V4L2_CID_POWER_LINE_FREQUENCY_50HZ
-                        : V4L2_CID_POWER_LINE_FREQUENCY_60HZ;
-    HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_S_CTRL, &control));
-  }
-  capture_format_.frame_size.SetSize(video_fmt.fmt.pix.width,
-                                     video_fmt.fmt.pix.height);
-  capture_format_.frame_rate = frame_rate;
-  capture_format_.pixel_format =
-      V4l2FourCcToChromiumPixelFormat(video_fmt.fmt.pix.pixelformat);
-  if (!AllocateVideoBuffers()) {
-    SetErrorState("Allocate buffer failed (Cannot recover from this error)");
-    return;
-  }
-  const v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-  if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_STREAMON, &type)) < 0) {
-    SetErrorState("VIDIOC_STREAMON failed");
-    return;
-  }
-  is_capturing_ = true;
-  // Post task to start fetching frames from v4l2.
-  v4l2_task_runner_->PostTask(
-      FROM_HERE,
-      base::Bind(&VideoCaptureDeviceLinux::V4L2CaptureDelegate::DoCapture,
-                 this));
-}
-void VideoCaptureDeviceLinux::V4L2CaptureDelegate::StopAndDeAllocate() {
-  DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
-  const v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-  if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_STREAMOFF, &type)) < 0) {
-    SetErrorState("VIDIOC_STREAMOFF failed");
-    return;
-  }
-  // We don't dare to deallocate the buffers if we can't stop the capture
-  // device.
-  if (!DeAllocateVideoBuffers())
-    SetErrorState("Failed to reset buffers");
-  // We need to close and open the device if we want to change the settings.
-  // Otherwise VIDIOC_S_FMT will return error. Sad but true.
-  device_fd_.reset();
-  is_capturing_ = false;
-  client_.reset();
-}
-void VideoCaptureDeviceLinux::V4L2CaptureDelegate::SetRotation(int rotation) {
-  DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
-  DCHECK(rotation >= 0 && rotation < 360 && rotation % 90 == 0);
-  rotation_ = rotation;
-}
-void VideoCaptureDeviceLinux::V4L2CaptureDelegate::DoCapture() {
-  DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
-  if (!is_capturing_)
-    return;
-  pollfd device_pfd = {};
-  device_pfd.fd = device_fd_.get();
-  device_pfd.events = POLLIN;
-  const int result = HANDLE_EINTR(poll(&device_pfd, 1, kCaptureTimeoutMs));
-  if (result < 0) {
-    SetErrorState("Poll failed");
-    return;
-  }
-  // Check if poll() timed out; track the amount of times it did in a row and
-  // throw an error if it times out too many times.
-  if (result == 0) {
-    timeout_count_++;
-    if (timeout_count_ >= kContinuousTimeoutLimit) {
-      SetErrorState("Multiple continuous timeouts while read-polling.");
-      timeout_count_ = 0;
-      return;
-    }
-  } else {
-    timeout_count_ = 0;
-  }
-  // Check if the driver has filled a buffer.
-  if (device_pfd.revents & POLLIN) {
-    v4l2_buffer buffer = {};
-    buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-    buffer.memory = V4L2_MEMORY_MMAP;
-    if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_DQBUF, &buffer)) < 0) {
-      SetErrorState("Failed to dequeue capture buffer");
-      return;
-    }
-    client_->OnIncomingCapturedData(
-        static_cast<uint8*>(buffer_pool_[buffer.index].start),
-        buffer.bytesused,
-        capture_format_,
-        rotation_,
-        base::TimeTicks::Now());
-    if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QBUF, &buffer)) < 0)
-      SetErrorState("Failed to enqueue capture buffer");
-  }
-  v4l2_task_runner_->PostTask(
-      FROM_HERE,
-      base::Bind(&VideoCaptureDeviceLinux::V4L2CaptureDelegate::DoCapture,
-                 this));
-}
-bool VideoCaptureDeviceLinux::V4L2CaptureDelegate::AllocateVideoBuffers() {
-  v4l2_requestbuffers r_buffer = {};
-  r_buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-  r_buffer.memory = V4L2_MEMORY_MMAP;
-  r_buffer.count = kNumVideoBuffers;
-  if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_REQBUFS, &r_buffer)) < 0) {
-    DLOG(ERROR) << "Error requesting MMAP buffers from V4L2";
-    return false;
-  }
-  buffer_pool_size_ = r_buffer.count;
-  buffer_pool_ = new Buffer[buffer_pool_size_];
-  for (unsigned int i = 0; i < r_buffer.count; ++i) {
-    v4l2_buffer buffer = {};
-    buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-    buffer.memory = V4L2_MEMORY_MMAP;
-    buffer.index = i;
-    buffer.length = 1;
-    if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QUERYBUF, &buffer)) < 0) {
-      DLOG(ERROR) << "Error querying status of a MMAP V4L2 buffer";
-      return false;
-    }
-    // Some devices require mmap() to be called with both READ and WRITE.
-    // See http://crbug.com/178582.
-    buffer_pool_[i].start = mmap(NULL, buffer.length, PROT_READ | PROT_WRITE,
-                                 MAP_SHARED, device_fd_.get(), buffer.m.offset);
-    if (buffer_pool_[i].start == MAP_FAILED) {
-      DLOG(ERROR) << "Error mmmap()ing a V4L2 buffer into userspace";
-      return false;
-    }
-    buffer_pool_[i].length = buffer.length;
-    // Enqueue the buffer in the drivers incoming queue.
-    if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_QBUF, &buffer)) < 0) {
-      DLOG(ERROR)
-          << "Error enqueuing a V4L2 buffer back to the drivers incoming queue";
-      return false;
-    }
-  }
-  return true;
-}
-bool VideoCaptureDeviceLinux::V4L2CaptureDelegate::DeAllocateVideoBuffers() {
-  if (!buffer_pool_)
-    return true;
-  for (int i = 0; i < buffer_pool_size_; ++i)
-    munmap(buffer_pool_[i].start, buffer_pool_[i].length);
-  v4l2_requestbuffers r_buffer = {};
-  r_buffer.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
-  r_buffer.memory = V4L2_MEMORY_MMAP;
-  r_buffer.count = 0;
-  if (HANDLE_EINTR(ioctl(device_fd_.get(), VIDIOC_REQBUFS, &r_buffer)) < 0)
-    return false;
-  delete[] buffer_pool_;
-  buffer_pool_ = NULL;
-  buffer_pool_size_ = 0;
-  return true;
-}
-void VideoCaptureDeviceLinux::V4L2CaptureDelegate::SetErrorState(
-    const std::string& reason) {
-  DCHECK(v4l2_task_runner_->BelongsToCurrentThread());
-  is_capturing_ = false;
-  client_->OnError(reason);
-}
+// static
+int VideoCaptureDeviceLinux::TranslatePowerLineFrequencyToV4L2(int frequency) {
+  switch (frequency) {
+    case kPowerLine50Hz:
+      return V4L2_CID_POWER_LINE_FREQUENCY_50HZ;
+    case kPowerLine60Hz:
+      return V4L2_CID_POWER_LINE_FREQUENCY_60HZ;
+    default:
+      // If we have no idea of the frequency, at least try and set it to AUTO.
+      return V4L2_CID_POWER_LINE_FREQUENCY_AUTO;
+  }
+}
 }  // namespace media
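
The value TranslatePowerLineFrequencyToV4L2() returns is ultimately fed to the V4L2_CID_POWER_LINE_FREQUENCY control, as the removed inline code above did and as the delegate now does. A standalone sketch of that consumption, with error handling reduced to the return value:

#include <linux/videodev2.h>
#include <sys/ioctl.h>

// |v4l2_frequency| is e.g. V4L2_CID_POWER_LINE_FREQUENCY_50HZ.
bool SetPowerLineFrequency(int fd, int v4l2_frequency) {
  v4l2_control control = {};
  control.id = V4L2_CID_POWER_LINE_FREQUENCY;
  control.value = v4l2_frequency;
  return ioctl(fd, VIDIOC_S_CTRL, &control) == 0;
}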
...@@ -20,11 +20,13 @@
 namespace media {
+class V4L2CaptureDelegate;
 // Linux V4L2 implementation of VideoCaptureDevice.
 class VideoCaptureDeviceLinux : public VideoCaptureDevice {
  public:
   static VideoPixelFormat V4l2FourCcToChromiumPixelFormat(uint32 v4l2_fourcc);
-  static std::list<int> GetListOfUsableFourCCs(bool favour_mjpeg);
+  static std::list<uint32_t> GetListOfUsableFourCCs(bool favour_mjpeg);
   explicit VideoCaptureDeviceLinux(const Name& device_name);
   ~VideoCaptureDeviceLinux() override;
...@@ -38,10 +40,11 @@ class VideoCaptureDeviceLinux : public VideoCaptureDevice {
   void SetRotation(int rotation);
  private:
+  static int TranslatePowerLineFrequencyToV4L2(int frequency);
   // Internal delegate doing the actual capture setting, buffer allocation and
   // circulation with the V4L2 API. Created and deleted in the thread where
   // VideoCaptureDeviceLinux lives but otherwise operating on |v4l2_thread_|.
-  class V4L2CaptureDelegate;
   scoped_refptr<V4L2CaptureDelegate> capture_impl_;
   base::Thread v4l2_thread_;  // Thread used for reading data from the device.
...
...@@ -24,7 +24,14 @@ VideoCaptureDevice::Name::Name() {}
 VideoCaptureDevice::Name::Name(const std::string& name, const std::string& id)
     : device_name_(name), unique_id_(id) {}
-#if defined(OS_WIN)
+#if defined(OS_LINUX)
+VideoCaptureDevice::Name::Name(const std::string& name,
+                               const std::string& id,
+                               const CaptureApiType api_type)
+    : device_name_(name),
+      unique_id_(id),
+      capture_api_class_(api_type) {}
+#elif defined(OS_WIN)
 VideoCaptureDevice::Name::Name(const std::string& name,
                                const std::string& id,
                                const CaptureApiType api_type)
...@@ -32,9 +39,7 @@ VideoCaptureDevice::Name::Name(const std::string& name,
       unique_id_(id),
       capture_api_class_(api_type),
       capabilities_id_(id) {}
-#endif
-#if defined(OS_MACOSX)
+#elif defined(OS_MACOSX)
 VideoCaptureDevice::Name::Name(const std::string& name,
                                const std::string& id,
                                const CaptureApiType api_type)
...@@ -57,7 +62,19 @@ VideoCaptureDevice::Name::Name(const std::string& name,
 VideoCaptureDevice::Name::~Name() {}
-#if defined(OS_WIN)
+#if defined(OS_LINUX)
+const char* VideoCaptureDevice::Name::GetCaptureApiTypeString() const {
+  switch (capture_api_type()) {
+    case V4L2_SINGLE_PLANE:
+      return "V4L2 SPLANE";
+    case V4L2_MULTI_PLANE:
+      return "V4L2 MPLANE";
+    default:
+      NOTREACHED() << "Unknown Video Capture API type!";
+      return "Unknown API";
+  }
+}
+#elif defined(OS_WIN)
 const char* VideoCaptureDevice::Name::GetCaptureApiTypeString() const {
   switch(capture_api_type()) {
     case MEDIA_FOUNDATION:
...
...@@ -41,7 +41,14 @@ class MEDIA_EXPORT VideoCaptureDevice {
     Name();
     Name(const std::string& name, const std::string& id);
-#if defined(OS_WIN)
+#if defined(OS_LINUX)
+    // Linux/CrOS targets Capture Api type: it can only be set on construction.
+    enum CaptureApiType {
+      V4L2_SINGLE_PLANE,
+      V4L2_MULTI_PLANE,
+      API_TYPE_UNKNOWN
+    };
+#elif defined(OS_WIN)
     // Windows targets Capture Api type: it can only be set on construction.
     enum CaptureApiType {
       MEDIA_FOUNDATION,
...@@ -49,8 +56,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
       DIRECT_SHOW_WDM_CROSSBAR,
       API_TYPE_UNKNOWN
     };
-#endif
-#if defined(OS_MACOSX)
+#elif defined(OS_MACOSX)
     // Mac targets Capture Api type: it can only be set on construction.
     enum CaptureApiType {
       AVFOUNDATION,
...@@ -64,7 +70,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
       OTHER_TRANSPORT
     };
 #endif
-#if defined(OS_WIN) || defined(OS_MACOSX)
+#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
     Name(const std::string& name,
          const std::string& id,
          const CaptureApiType api_type);
...@@ -102,7 +108,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
       return unique_id_ < other.id();
     }
-#if defined(OS_WIN) || defined(OS_MACOSX)
+#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
     CaptureApiType capture_api_type() const {
       return capture_api_class_.capture_api_type();
     }
...@@ -133,7 +139,7 @@ class MEDIA_EXPORT VideoCaptureDevice {
    private:
     std::string device_name_;
     std::string unique_id_;
-#if defined(OS_WIN) || defined(OS_MACOSX)
+#if defined(OS_WIN) || defined(OS_MACOSX) || defined(OS_LINUX)
     // This class wraps the CaptureApiType to give it a default value if not
     // initialized.
     class CaptureApiClass {
...@@ -195,7 +201,20 @@ class MEDIA_EXPORT VideoCaptureDevice {
     virtual void OnIncomingCapturedData(const uint8* data,
                                         int length,
                                         const VideoCaptureFormat& frame_format,
-                                        int rotation,  // Clockwise.
+                                        int clockwise_rotation,
                                         const base::TimeTicks& timestamp) = 0;
+    // Captured a 3 planar YUV frame. Planes are possibly disjoint.
+    // |frame_format| must indicate I420.
+    virtual void OnIncomingCapturedYuvData(
+        const uint8* y_data,
+        const uint8* u_data,
+        const uint8* v_data,
+        size_t y_stride,
+        size_t u_stride,
+        size_t v_stride,
+        const VideoCaptureFormat& frame_format,
+        int clockwise_rotation,
+        const base::TimeTicks& timestamp) = 0;
     // Reserve an output buffer into which contents can be captured directly.
...
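
Since the new callback's comment says the planes are "possibly disjoint", an implementation cannot assume a single contiguous allocation. A hedged sketch of what an OnIncomingCapturedYuvData() receiver might do, copying three strided planes into one packed I420 buffer; the helper names and the |dst| sizing (width * height * 3 / 2 bytes) are assumptions for illustration, not code from this CL.

#include <stdint.h>
#include <string.h>

// Copies one strided plane into a packed destination, advancing |dst|.
static void CopyPlane(const uint8_t* src, size_t src_stride,
                      int width, int height, uint8_t** dst) {
  for (int row = 0; row < height; ++row) {
    memcpy(*dst, src + row * src_stride, width);
    *dst += width;  // Destination is packed: no padding between rows.
  }
}

// Flattens Y, U and V into a contiguous I420 buffer; chroma is 2x2 subsampled.
void FlattenI420(const uint8_t* y, const uint8_t* u, const uint8_t* v,
                 size_t y_stride, size_t u_stride, size_t v_stride,
                 int width, int height, uint8_t* dst) {
  CopyPlane(y, y_stride, width, height, &dst);
  CopyPlane(u, u_stride, width / 2, height / 2, &dst);
  CopyPlane(v, v_stride, width / 2, height / 2, &dst);
}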
...@@ -67,6 +67,16 @@ class MockClient : public VideoCaptureDevice::Client {
   MOCK_METHOD2(ReserveOutputBuffer,
                scoped_refptr<Buffer>(VideoFrame::Format format,
                                      const gfx::Size& dimensions));
+  MOCK_METHOD9(OnIncomingCapturedYuvData,
+               void(const uint8* y_data,
+                    const uint8* u_data,
+                    const uint8* v_data,
+                    size_t y_stride,
+                    size_t u_stride,
+                    size_t v_stride,
+                    const VideoCaptureFormat& frame_format,
+                    int clockwise_rotation,
+                    const base::TimeTicks& timestamp));
   MOCK_METHOD3(OnIncomingCapturedVideoFrame,
                void(const scoped_refptr<Buffer>& buffer,
                     const scoped_refptr<VideoFrame>& frame,
...@@ -127,6 +137,8 @@ class VideoCaptureDeviceTest : public testing::Test {
     VideoCaptureDeviceAndroid::RegisterVideoCaptureDevice(
         base::android::AttachCurrentThread());
 #endif
+    EXPECT_CALL(*client_, OnIncomingCapturedYuvData(_,_,_,_,_,_,_,_,_))
+        .Times(0);
     EXPECT_CALL(*client_, ReserveOutputBuffer(_,_)).Times(0);
     EXPECT_CALL(*client_, OnIncomingCapturedVideoFrame(_,_,_)).Times(0);
   }
...@@ -179,7 +191,8 @@ class VideoCaptureDeviceTest : public testing::Test {
       }
     }
   }
-  DVLOG(1) << "No camera can capture the format: " << pixel_format;
+  DVLOG_IF(1, pixel_format != PIXEL_FORMAT_MAX) << "No camera can capture the"
+      << " format: " << VideoCaptureFormat::PixelFormatToString(pixel_format);
   return scoped_ptr<VideoCaptureDevice::Name>();
 }
...
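
The .Times(0) default added above is a guard pattern: any unexpected trip through the YUV path fails the test until an individual test overrides the expectation. A self-contained gmock sketch of the same pattern against a simplified interface (assumed for illustration, not the real VideoCaptureDevice::Client):

#include <stdint.h>
#include "testing/gmock/include/gmock/gmock.h"

class YuvSink {
 public:
  virtual ~YuvSink() {}
  virtual void OnYuv(const uint8_t* y, const uint8_t* u,
                     const uint8_t* v) = 0;
};

class MockYuvSink : public YuvSink {
 public:
  MOCK_METHOD3(OnYuv, void(const uint8_t*, const uint8_t*, const uint8_t*));
};

// In a test body:
//   MockYuvSink sink;
//   EXPECT_CALL(sink, OnYuv(testing::_, testing::_, testing::_)).Times(0);
//   ...exercise the single-planar path; any YUV callback now fails the test.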