In a previous blog post I showed how my video streaming setup works. In this post I will describe how I improved the system by harnessing the power of GPUs.
Jellyfin can use the graphics card to decode and encode the video stream, which performs far better than software transcoding (using the CPU only). Since the nodes in my home-lab are relatively low-powered, enabling hardware decoding is a game changer: by using the GPU I can easily double the decoding performance.
Make use of the GPU
On my nodes I have an integrated AMD GPU:
00:01.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Stoney [Radeon R2/R3/R4/R5 Graphics] (rev 81)
The nodes are running Debian. In order to use the GPU we need to install the firmware-amd-graphics package. Once the package is installed and the machine rebooted, the device shows up under /dev/dri.
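On a Debian node this boils down to a couple of commands (a sketch based on the steps above; the install and reboot obviously require root and a reboot window):

```shell
# Install the AMD GPU firmware on the Debian node, then reboot.
sudo apt install firmware-amd-graphics
sudo reboot

# After the reboot, the DRI devices should be visible:
ls /dev/dri
# card0  renderD128
```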
Since I am running Jellyfin on Kubernetes, I need to schedule GPUs in order to use them inside the container. I have AMD GPUs, so I installed the matching device plugin for Kubernetes: https://github.com/RadeonOpenCompute/k8s-device-plugin. This adds the AMD GPU resource and provides a way to label nodes that have a schedulable GPU.
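The plugin is deployed as a DaemonSet. The manifest file name below is an assumption based on the repository layout, so check the repository's README for the current path:

```shell
# Deploy the AMD GPU device plugin (manifest name is an assumption).
kubectl create -f https://raw.githubusercontent.com/RadeonOpenCompute/k8s-device-plugin/master/k8s-ds-amdgpu-dp.yaml

# Nodes with a schedulable GPU should then advertise the resource
# in their capacity/allocatable sections:
kubectl describe node <node-name> | grep amd.com/gpu
```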
I can request a GPU by adding it to the resources of the Jellyfin pod. This is essentially adding amd.com/gpu: "1" to the limits of the container in order to request the GPU, which makes the GPU device accessible inside the container under /dev/dri/. I also needed to give the container the right permissions so that it can interact with the GPU. On Debian, with the firmware-amd-graphics package installed, /dev/dri/renderD128 is owned by the render group (GID 107), and /dev/dri/card0 is owned by the video group (GID 44); I added both. To give the container the correct permissions, we add this to the container spec:
securityContext:
supplementalGroups:
- 107
- 44
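Putting the two pieces together, the relevant part of the Jellyfin container spec looks roughly like this (a sketch; the container name and image tag are placeholders):

```yaml
containers:
  - name: jellyfin
    image: jellyfin/jellyfin      # placeholder image
    resources:
      limits:
        amd.com/gpu: "1"          # request one AMD GPU via the device plugin
    securityContext:
      supplementalGroups:
        - 107                     # render group, owns /dev/dri/renderD128
        - 44                      # video group, owns /dev/dri/card0
```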
Then I needed to enable hardware acceleration in Jellyfin's configuration: under the playback section of the config I selected VAAPI and my device /dev/dri/renderD128.
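Before pointing Jellyfin at the device, it can be worth verifying that VAAPI actually works against it. The vainfo tool (from the vainfo package on Debian, an extra tool not mentioned above) lists the driver and supported profiles:

```shell
sudo apt install vainfo
vainfo --display drm --device /dev/dri/renderD128
# Prints the VAAPI driver version and the supported
# codec profiles/entrypoints for this GPU.
```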
Does it work?
We can check that Jellyfin correctly transcodes using the GPU by looking at the transcoding logs. Under "Stream mapping" we see that ffmpeg is using VAAPI:
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_vaapi))
Stream #0:2 -> #0:1 (ac3 (native) -> aac (native))
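Another way to confirm the GPU is doing the work is to watch its load during playback. On Debian, radeontop (my own addition, not something Jellyfin documents) shows the utilization of AMD GPUs in real time:

```shell
sudo apt install radeontop
sudo radeontop
# GPU usage should climb while a stream is transcoding.
```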
Some small issues remain
Currently, decoding HEVC (H.265) is not well supported on AMD GPUs because of the outdated version of mesa-va-drivers (18.3) shipped inside Jellyfin's container. I'm getting the following error in the transcoding logs:
Failed to render parameter buffer: 6 (invalid VASurfaceID).
Jellyfin needs to bump the version of mesa-va-drivers inside the container image to fix this issue.