The high CPU issue that is causing midwest.social to slow way down and time out on HTTP requests is still a mystery to me. But it seems to happen less often and for shorter stretches now.
Here is CPU usage over 24 hours:
It’s a bunch of SELECT statements in Postgres that all seem to fire at one time and take a while to complete as a batch. I’ve inspected the logs and haven’t seen anything unusual. Just stuff federating and posts receiving upvotes.
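For anyone digging into something similar: you can watch those batched queries pile up with Postgres's built-in pg_stat_activity view. A rough sketch (no extensions needed; works on any recent Postgres):

```sql
-- List currently running queries, longest-running first.
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 120)   AS query_snippet
FROM pg_stat_activity
WHERE state = 'active'
  AND query NOT ILIKE '%pg_stat_activity%'  -- skip this query itself
ORDER BY runtime DESC;
```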
Yo, I’m a network engineer, also from the Midwest (living in Vietnam). Do you have a Discord or chat platform where I could be of any help?
Yeah, there’s a link to our Matrix chat in the sidebar.
Is this issue still ongoing? I’ve had trouble accessing various threads via Jerboa and the PWA on my Chromebook, and I can’t figure out if it’s a Lemmy-wide issue or instance-specific: everything from 404 errors to the site hanging when I submit a post, along with pages that only load the header.
I’ve been having timeout and 404 issues since I created my account last week. Typically I can’t browse more than 5 minutes before I start getting timeouts, which can last anywhere from several minutes to several hours before anything loads again. Thought it was a Jerboa issue at first, but I’m using a browser now and getting them too.
Unfortunately it still happens, just not nearly as often as it used to. Need to meet up with my database admin to get to the bottom of the issue.
Have you reached out to anyone else who runs a Lemmy server to see if they are also experiencing this?
Do you think it is a bottleneck or just due to large updates from other servers?
I have reached out and nobody seems to have any idea what would cause it.
Has the influx of new members exacerbated this issue? I found this thread because I’m seeing a few ‘504 Gateway Timeout’ errors occasionally.
Beehaw is also noticing these issues. It’s most likely from the influx of new users.
Your image didn’t show up.
Is it the PostgreSQL process that’s going high CPU, or the lemmy_server process, or lemmy-ui?
It’s fixed. Was an inefficient database query.
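For anyone who hits the same thing: one way to track down an inefficient query like that is the pg_stat_statements extension plus EXPLAIN. Just a sketch, and it assumes pg_stat_statements is loaded (shared_preload_libraries in postgresql.conf) and the column names from Postgres 13+:

```sql
-- Top 10 queries by total execution time since stats were last reset.
-- On Postgres 12 and older the columns are total_time / mean_time instead.
SELECT calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 1)  AS mean_ms,
       left(query, 120)                   AS query_snippet
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Then feed the worst offender to the planner to see where the time goes:
-- EXPLAIN (ANALYZE, BUFFERS) <that query>;
```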
Did you hand-modify lemmy_server? Is there a code change/pull request to share?
Thank you.
I manually modified the activity table. I’ll find the migration that didn’t run.
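If anyone wants to check their own instance for the same thing: Lemmy’s schema migrations are managed by Diesel, which records each applied migration in a bookkeeping table, so comparing that list against the migrations/ directory in the Lemmy source shows what hasn’t run. A sketch, assuming Diesel’s default table name, so verify against your own schema:

```sql
-- Each applied migration is logged here; version matches the timestamp
-- prefix of the folder name under migrations/ in the Lemmy repo.
SELECT version, run_on
FROM __diesel_schema_migrations
ORDER BY run_on DESC
LIMIT 20;
```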
Any updates on server stability lately? I’ve been having a lot of connectivity issues the last couple days, and just saw this post on lemmy.world that was pretty interesting.
@seahorse@midwest.social have you been dealing with performance problems like this?
(Side note: how do you embed a link to another instance that will let midwest.social users stay logged in?)
DB memory cache used to be the fastest option on standalone boxes; not sure what hosted providers offer anymore. Gives me a headache recalling the days of taking the phone off the hook so I could concentrate on fixing issues.