RELAND: media/gpu/vaapi_video_decoder: keep allocated VASurfaces alive
The original CL was reverted due to crashes in betty initialization:
betty is a VM which doesn't support VA, so the VideoDecoderPipeline
constructed and destructed a VaapiVideoDecoder, hitting an unprotected
nullptr |vaapi_wrapper_| in the destructor. Fixed in
crrev.com/c/2339494/2..3.

TBR=andrescj@chromium.org

Original CL description
-----------------------------------------------

Certain platforms and codecs suffer from horrible artifacts (Intel BYT,
H264) or crashes (Intel BSW/BDW, VP9). This was traced to some kind of
error in the driver's tracking of VASurface lifetimes: every time we get
a new resource from the pool to decode onto, it is imported into libva
as a VASurface. This works fine almost everywhere but doesn't play well
on these older platforms (see the CreateSurface() body).

This CL adds a map that keeps the ref-counted VASurfaces alive, indexed
by the unique GpuMemoryBufferId, until the VA Context is destroyed. In
so doing, we observe the "contract" of va.h vaDestroySurfaces() [1]:
"Surfaces can only be destroyed after all contexts using these surfaces
have been destroyed".

[1] https://github.com/intel/libva/blob/libva-2.0.0/va/va.h#L1134

Bug: b:142019786, b:143323596
Change-Id: I593763ec02dad7bba240c8fed9e71de21637a231
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/2339494
Reviewed-by: Miguel Casas <mcasas@chromium.org>
Commit-Queue: Miguel Casas <mcasas@chromium.org>
Cr-Commit-Position: refs/heads/master@{#795026}
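The caching pattern the CL describes can be sketched as follows. This is a minimal, self-contained illustration, not the actual Chromium code: `FakeVASurface`, `FakeDecoder`, and the integer `GpuMemoryBufferId` alias are hypothetical stand-ins, and `std::shared_ptr` stands in for Chromium's ref-counting. The point is that each imported buffer maps to exactly one surface, and all surfaces stay alive until the context is torn down, matching the vaDestroySurfaces() contract quoted above.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <memory>

// Hypothetical stand-ins for the real VASurface and GpuMemoryBufferId.
struct FakeVASurface {
  explicit FakeVASurface(int id) : id(id) {}
  int id;
};
using GpuMemoryBufferId = int;

class FakeDecoder {
 public:
  // Returns the cached surface for |id|, importing (creating) it only on
  // first use, so the driver never sees the same buffer imported twice.
  std::shared_ptr<FakeVASurface> GetOrCreateSurface(GpuMemoryBufferId id) {
    auto it = allocated_surfaces_.find(id);
    if (it != allocated_surfaces_.end())
      return it->second;
    auto surface = std::make_shared<FakeVASurface>(id);
    allocated_surfaces_[id] = surface;
    return surface;
  }

  // Mirrors destroying the VA Context: only at this point does the map
  // drop its references, so no surface dies while the context lives.
  void DestroyContext() { allocated_surfaces_.clear(); }

  std::size_t live_surfaces() const { return allocated_surfaces_.size(); }

 private:
  std::map<GpuMemoryBufferId, std::shared_ptr<FakeVASurface>>
      allocated_surfaces_;
};
```

Because the map holds shared ownership, a caller that still references a surface after `DestroyContext()` keeps that one object alive, while the decoder itself no longer pins any surfaces.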