• 7 Posts
  • 594 Comments
Joined 4 years ago
Cake day: January 21st, 2021

  • I switched to Immich recently and am very happy.

    1. Immich’s face detection is much better and very rarely fails, especially for non-white faces. But even for white faces, PhotoPrism regularly needed me to review the unmatched faces. I also needed to really turn up the “what is a face” threshold because otherwise it would miss a ton of clear faces. (Then it only missed some, but still had tons of false positives.) Immich, on the other hand, just works.
    2. Immich’s UI is much nicer overall, with lots of small affordances. For example, the “view in timeline” menu item is worth the switch on its own. Also, good riddance to PhotoPrism’s persistent and buggy selection. Someone must have worked really hard on implementing it, but it was just a bad idea.
    3. Immich has an app with uploading, and it lets you view local and uploaded photos in one interface, which is a huge UX win. I couldn’t find a good Android app for uploading to PhotoPrism. You could set up import delays and such, but you would still regularly get partially uploaded files imported and have to clean them up manually.
    4. Immich’s search by content is much better. For example searching for “cat with red and yellow ball” was useless on PhotoPrism, but I found tons of the results I was looking for on Immich.

    The bad:

    1. There is currently terrible jank in the Immich app which makes videos unusable and everything else painful. Apparently this is due to an album sync process running on the main thread. They are working on it. I can’t fathom how a few hundred albums cause this much lag, but 🤷 There is also even worse lag on the location view page, but at least that is just one page.
    2. The Immich app has a lot fewer features than the website. But the website works very well on mobile, so even just using the website (and the app for uploading) is better than PhotoPrism here. The fundamentals are good; it just needs more work.
    3. I liked PhotoPrism’s advanced filters. They were very limited but at least they were there.
    4. Not being able to sort search results by date is a huge usability issue. I often know roughly when the photo I want to find was taken and being able to order by date would be hugely helpful.
    5. You have to eagerly transcode all videos. There is no way to clean up old transcodes and re-transcode on the fly. To be fair, the PhotoPrism story wasn’t great either, because you had to wait for the full video to be transcoded before playback started, leading to a huge delay for videos longer than a few seconds, but at least I could save a few hundred gigs of disk space.

    Honestly, a lot of stuff in PhotoPrism feels like one developer has a weird workflow and optimized the tool for it. Most of it runs counter to what I actually want to do (like automatic title and description generation, the review stuff, or the auto quality rating). Immich is very clearly inspired by Google Photos and takes a lot of things directly from it, but that matches my use case way better. (I was pretty happy with Google Photos until they started refusing to give access to the originals.)


  • Most Intel GPUs are great at transcoding: reliable, widely supported, and quite a bit of transcoding power for very little electrical power.

    I think the main thing I would check is which formats are supported. If the other GPU supports newer formats like AV1, it may be worth it (if you want to store your videos in these more efficient formats, or you have clients that can consume them and will appreciate the reduced bandwidth).

    But overall I would say that if you aren’t having any problems, there’s no need to bother. The onboard graphics are simple and efficient.


  • Yes. As this is a workstation the memory use is highly variable, >95% of the time I would probably barely notice having 32GiB. But other times it is a huge performance win to have that capacity available. Sometimes I am compiling lots of stuff and 32 compilers running + ample disk cache is very important. Other times I am processing lots of data and other times I am running a few VMs.

    It is a bit of a luxury. I think if I were on a tighter budget I would have gone for 64GiB. However, the price difference wasn’t that much, and at least a handful of times I have been quite happy to have that capacity available. Worst case, I just have everything sitting in disk cache after a warm-up, which is a small performance win on every small task.




  • There are three parts to the whole push system.

    1. A push protocol. You get a URL and post a message to it. That message is E2EE and gets delivered to the application.
    2. A way to acquire that URL.
    3. A way to respond to those notifications.

    My point is that 1 is the core, and it is already available across devices, including over Google’s push notification system, and making custom push servers is very easy. It would make sense to keep that interface but provide alternatives to 2 and 3. This way browsers can use the JS API for 2 and 3, while other apps can use a different API. The push server and the app server can remain identical across browsers, apps and anything else. This provides compatibility with the currently reigning system, allows tiny shims for people who don’t want to self-host, and still keeps the option to fully self-host as desired.
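
    To make part 1 concrete: delivering a push message is little more than an HTTP POST. Here is a minimal sketch in TypeScript (the function name and values are mine, and the payload is assumed to already be encrypted per RFC 8291, e.g. by a library like web-push):

    ```typescript
    // Rough sketch of the WebPush delivery step (RFC 8030): the app server POSTs
    // an already-encrypted message to the endpoint URL it was given. That endpoint
    // can live on Google's push service, Mozilla's, or a self-hosted server; the
    // request looks the same either way.
    async function sendPush(endpoint: string, encryptedPayload: Uint8Array): Promise<void> {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: {
          TTL: "3600",                      // how long the push service may queue the message
          "Content-Encoding": "aes128gcm",  // payload is end-to-end encrypted (RFC 8291)
          // An Authorization header with a VAPID token normally goes here as well;
          // libraries such as web-push generate it for you.
        },
        body: encryptedPayload,
      });
      if (!res.ok) {
        throw new Error(`push failed with status ${res.status}`);
      }
    }
    ```

    Everything else (how the app got that endpoint URL, and what the client does when the message arrives) is parts 2 and 3.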





  • IMHO UnifiedPush is just a poor re-implementation of WebPush, which is an open and distributed standard that supports E2EE (and in the browser requires it, so support is universal).

    UnifiedPush would be better as a framework for WebPush providers plus a client API, while using the same protocol and backends as WebPush. (How to get a WebPush endpoint is currently defined as a JS API in browsers, so that part would need to be adapted for other apps.)
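
    For reference, this is roughly the browser JS API in question (a sketch; the function name and key handling are mine). Only the “get an endpoint” step is browser-specific; the delivery protocol stays the same:

    ```typescript
    // How a web app acquires its WebPush endpoint today, via the service worker's
    // PushManager. A non-browser app would need an equivalent of this step, but
    // nothing else about the system has to change.
    async function getPushEndpoint(vapidPublicKey: BufferSource): Promise<string> {
      const registration = await navigator.serviceWorker.ready;
      const subscription = await registration.pushManager.subscribe({
        userVisibleOnly: true,            // required by Chrome
        applicationServerKey: vapidPublicKey,
      });
      // subscription.endpoint is the URL the app server POSTs encrypted messages to.
      return subscription.endpoint;
    }
    ```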








  • This isn’t how YouTube has streamed videos for many, many years.

    Most video and live streams work by serving a sequence of small, self-contained video files (often in the 1–5s range). Sometimes the audio is served as separate files as well (this avoids duplication, since the same audio is often used for all video qualities, and it enables audio-only streaming). This is done for a few reasons, but primarily to allow quite seamless switching between quality levels on the fly.

    Inserting ads in a stream like this is trivial. You just add a few ad chunks between the regular video chunks. The only real complication is that the ad needs to start at a chunk boundary. (And if you want it to be hard to detect you probably want the length of the ad to be a multiple of the regular chunk size). There is no re-encoding or other processing required at all. Just update the “playlist” (the list of chunks in the video) and the player will play the ad without knowing that it is “different” from the rest of the chunks.
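
    As a toy illustration (hand-written and HLS-flavoured; YouTube uses its own DASH-style manifests, but the principle is the same), splicing an ad break into a chunk playlist looks roughly like this:

    ```typescript
    // Build an HLS-style media playlist with an ad break spliced in at a chunk
    // boundary. No re-encoding of the content happens; the ad is just more chunks.
    // Segment names and durations here are made up.
    function buildPlaylist(contentSegs: string[], adSegs: string[], adAfter: number): string {
      const entries = [
        ...contentSegs.slice(0, adAfter),
        "#EXT-X-DISCONTINUITY", // warns the player that encoding parameters may change
        ...adSegs,
        "#EXT-X-DISCONTINUITY",
        ...contentSegs.slice(adAfter),
      ];

      const lines = ["#EXTM3U", "#EXT-X-VERSION:3", "#EXT-X-TARGETDURATION:5"];
      for (const entry of entries) {
        if (entry.startsWith("#")) {
          lines.push(entry);                 // tag lines pass through as-is
        } else {
          lines.push("#EXTINF:4.0,", entry); // each chunk is ~4 seconds in this example
        }
      }
      lines.push("#EXT-X-ENDLIST");
      return lines.join("\n");
    }

    // Insert a two-chunk ad after the third content chunk.
    console.log(buildPlaylist(
      ["video_000.ts", "video_001.ts", "video_002.ts", "video_003.ts"],
      ["ad_000.ts", "ad_001.ts"],
      3,
    ));
    ```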



  • That is a pretty weak argument. The issues are minor and in a library that people are moving off of, to a better-built and more rigorously validated one. Yes, it should have been like that in the first place, but the problem is minor and being addressed.

    I would look more at the various features of Matrix that aren’t encrypted (room names, topics, reactions, …), not to mention the oodles of unencrypted metadata. I really wouldn’t call Matrix a high-privacy system.

    I like Matrix and use it regularly, but it definitely doesn’t have a privacy-first mindset like Signal does. I’m hoping this improves over time, but without strong privacy-first leadership it seems unlikely to happen.