I started the scan for my movie collection (roughly 140 movies) on my Raspberry Pi 3B. It has become unresponsive and I can’t ssh in now. It seems to be due to all the ffmpeg instances. I have two questions:
Should I wait for an hour or should I just reboot the server? Also, is there a way to disable the chapter images setting from the web UI? I can’t find it in the settings.
I waited a long time before making this post. Within minutes of posting the system became responsive again. It only picked up on one movie though.
Jellyfin really shouldn’t push the available system resources this hard. It’s impossible for the user to know whether the scan is actually happening when the UI and SSH have locked up. It seems it couldn’t even complete the scan, presumably because it was so excessive with system resources.
If you use some kind of virtualisation and/or containerisation, you can limit RAM and/or CPU usage. That can greatly reduce lockups, if not eliminate them entirely.
Edit: I only now read that it’s a Pi 3B. Not sure if hosting Jellyfin on that device is a good idea… If you insist, though, consider running it in an LXC in between and limiting it to three cores. That should leave one core available for the system so it doesn’t lock up again.
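If you go the LXC route, the exact commands depend on your setup; as a rough sketch with LXD, assuming a container named jellyfin (the name and the memory value are just examples), the limits would look something like this:

# Cap the container at three CPU cores and most of the Pi's RAM
# ("jellyfin" is a placeholder container name)
lxc config set jellyfin limits.cpu 3
lxc config set jellyfin limits.memory 768MiB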
Thanks so much. Sounds like I need to learn a bit more about Docker. That’s how I installed it.
In case you ran it using a docker run command, read this. Otherwise, if you use Compose, try something similar to the following:

services:
  service:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: '3'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M
    cpuset: "1"
    ports:
      - "80:80"
source
I’m using docker compose. Thank you so much, this is fantastic!
I have no idea what your experience level is, so I’m saying this just to make sure: DON’T copy this verbatim. The resources bit is what you’d need to adapt into your own compose file. If you have questions, feel free to ask 👍
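For example, assuming your service is called jellyfin and you’re on Compose v2 (which, as far as I know, honours deploy.resources outside Swarm), the relevant part of your own file might end up looking roughly like this, with values picked for a 1 GB Pi 3B:

services:
  jellyfin:
    image: jellyfin/jellyfin   # keep your existing image, volumes and ports
    deploy:
      resources:
        limits:
          cpus: '3'        # leave one core free for the OS
          memory: 768M     # hard cap so a scan can't starve the Pi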
Noted, thank you. I’ll look into it a bit more and come back if I have questions. I appreciate it!
It seems I’m out of luck on this kernel / hardware. After applying some limitations I get the following when I run the container:
Your kernel does not support memory soft limit capabilities or the cgroup is not mounted. Limitation discarded.
Oh, strange, a quick Google search doesn’t bring up much of anything either. With loads of people having Pis and presumably also having tried to set resource limits, you’d think someone else would’ve posted about it. If it really bugs you, maybe try a fresh install of Raspbian Lite 64-bit and see if things work? Otherwise I think it might just be a limitation of the Pi.
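One thing that might be worth checking first, though: on some Pi images the memory cgroup isn’t enabled by default, and that can produce exactly that message. If that turns out to be the case here, the usual suggestion is to append the cgroup flags to the existing single line in /boot/cmdline.txt and reboot (sketch only, double-check against your OS version):

cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1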
Someone suggested renicing the processes. Not sure how practical it would be but I’ll look into it. Besides that I think I’ll just return to using an SMB server. It was ugly but it was easy :) The Pi4 isn’t expensive these days either. Thanks again, I appreciate it!
…I think a 3B is severely underpowered if you are going to be using transcoding. I’d disable it completely so it basically just serves the files directly.
That’s really interesting. I did a quick test and the RAM usage is still creeping up but I’d need to let it run for longer to see if it plateaus at a lower level. Thanks for the suggestion.
I meant CPU resources. As in, any of those things will hit ffmpeg, which on your CPU runs with very little if any hardware acceleration (not sure if there’s anything for decoding, but most definitely NOT for encoding), and will hog the resources and hamper everything else.
Oh, I see what you mean. Thanks, I’ll look into it.
Can you maybe set niceness, ioniceness and CPU affinity?
If you lower the priority of the type of processes that cause this, responsiveness for everything else will be better.
If you can’t inject things like ionice or nice into the command lines of those processes, maybe use a cronjob to find and renice or ionice -p them.
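As a rough sketch of the cron approach, assuming the offending processes show up as ffmpeg on the host (adjust the name and schedule to taste):

# /etc/cron.d/renice-ffmpeg — every 5 minutes, push any ffmpeg processes
# to the lowest CPU priority and the idle I/O class
*/5 * * * * root pgrep -x ffmpeg | xargs -r renice -n 19 -p
*/5 * * * * root pgrep -x ffmpeg | xargs -r ionice -c 3 -p

Processes started inside a Docker container share the host kernel and show up in the host process table, so this can run on the host itself.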
I never thought of that, good idea. This Docker image spins off a lot of processes, so I wonder if there’s a high-level way to apply that to all of them.