I’m a robotics researcher. My interests include cybersecurity, repeatable & reproducible research, as well as open source robotics and Rust programming.
I fell for it. It took me a minute of game time to figure out what was up and double-check today’s date.
I’m using a recent 42" LG OLED TV as a large, affordable PC monitor that supports 4K@120Hz + HDR@10bit, which is great for gaming or content creation that can appreciate the screen real estate. Anything similarly sized, or even slightly smaller, in the proper PC monitor market costs way more for the same screen area and feature parity.
Unfortunately, such TVs rarely include anything other than HDMI for digital video input, despite the growing trend of connecting gaming PCs in the living room, e.g. over fiber optic HDMI cables. I actually went with a GPU with more than one HDMI output so I could display to both TVs in the house simultaneously.
Also, having an API as well as a remote to control my monitor is kind of nice. Enough folks are using LG TVs in this midsize range as monitors that there are even open source projects to entirely mimic conventional display behaviors:
I also kind of like using the TV as a simple KVM with fewer cables. For example, with audio I can independently control volume and mux output to either speakers or multiple Bluetooth devices from the TV, without having to fiddle around with re-pairing Bluetooth peripherals to each PC or gaming console. That’s particularly nice when swapping from playing games on the PC to watching movies on a Chromecast with a friend over two pairs of headphones, while still keeping the house quiet for the family. That kind of KVM functionality and connectivity is still kind of a premium feature for modestly priced PC monitors. Of course, others find their own use cases for hacking the TV remote APIs:
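To give a flavor of what those projects do under the hood: webOS TVs expose a websocket control API that a script can poke at directly. The sketch below is only from memory of community documentation, so the port, the ssap:// endpoint, and the minimal re-registration payload are all assumptions; newer firmware wants wss:// on port 3001, first-time pairing needs a full permission manifest, and a maintained library like aiopylgtv handles all of that properly.

```python
# Hypothetical sketch: set the volume on an LG webOS TV over its websocket API.
# Protocol details here are assumptions recalled from community docs.
import asyncio
import json

import websockets  # pip install websockets

TV_IP = "192.168.1.50"            # hypothetical TV address
CLIENT_KEY = "stored-client-key"  # key the TV returns after you accept the pairing prompt


async def set_tv_volume(volume: int) -> None:
    async with websockets.connect(f"ws://{TV_IP}:3000") as ws:
        # Re-register with a previously stored client key (first-time pairing
        # requires a larger manifest payload and an on-screen confirmation).
        await ws.send(json.dumps({
            "type": "register",
            "payload": {"client-key": CLIENT_KEY},
        }))
        await ws.recv()  # registration acknowledgement

        # After that it's plain request/response messages against ssap:// URIs.
        await ws.send(json.dumps({
            "id": "volume-1",
            "type": "request",
            "uri": "ssap://audio/setVolume",
            "payload": {"volume": volume},
        }))
        print(await ws.recv())


asyncio.run(set_tv_volume(15))
```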
You could get a fiber optic display/HDMI cable, a fiber optic USB cable, and a USB hub, then just move the desktop tower into another room and run the cables through the walls or ceilings to your display setup. It might only be $100 or so cheaper than a used business thin client, but at least you could still push 4K 120Hz HDR 12bit over some distance without compromise. E.g.:
Looks like Moonlight does have their app up on the App Store for iOS, and Sunshine has binaries for most operating systems. Personally, instead of the Sunshine server, I still use Nvidia’s GeForce Experience software to stream games, as it takes less effort to configure. Of course, the Nvidia option may not be applicable if you’re using integrated or AMD graphics instead.
Although, with Nvidia recently deprecating GameStream support on its Shield devices, Sunshine provides support for the same protocol that Moonlight was originally developed against, and it’s also open source. I’ve not used multi-monitor streaming with GeForce Experience; that’s something Sunshine would be much more flexible in configuring.
As for connectivity, I’m unsure if iOS supports the same USB network tethering feature that Android has. I’d imagine at least the iPhone would, as that’s a core feature/option for mobile hotspot connectivity, but maybe that’s nixed from iPadOS? Alternatively, you could get yourself a USB-C hub or dock with an ethernet adapter and pass-through power delivery, so you can connect your iPad to a wired network and charge it simultaneously.
Or you could just use Wi-Fi, but with wireless networks dropping and retrying packets, that’ll impact latency or bitrate quality when casting displays. Although for something mostly static like Discord windows, that’s probably less of an issue. Windows 11, and maybe 10, also has a hotspot mode, where you could share your wired network via your PC’s wireless radio over an ad hoc Wi-Fi SSID. That could reduce latency and improve signal reception, but you’d have to start the hotspot setting every session, or whenever the device disconnects from Windows’ hotspot for more than 15 minutes or so.
You could try other remote display streaming software as well, like Parsec. However, they have an online account login requirement with their freemium model, so I prefer the open source client Moonlight instead. That said, Parsec is a lot easier to use when streaming from outside your home, or when remotely single-screen co-oping with friends, without having to configure firewalls or domain names.
If you already have a similarly sized tablet, you could just buy a dummy HDMI plug (a few dollars) to add a second virtual display, and then simply cast that screen to the mobile device.
There are pretty nice Android tablets now with 2.5K 120 Hz HDR OLED screens. You can just connect one directly to the computer via USB, enable USB network tethering, then use something like the Moonlight client app with the Sunshine screen casting server. With the wired connection and a high bit rate such as 150 Mbps, you can get single-digit millisecond latency and hardly tell the difference from a native HDMI display.
Tablets like those might be on the high end, but at least you’d have a nice secondary display that’s a bit more multifunctional. Or just go with a cheaper LCD-based tablet or an old iPad, if color accuracy, refresh rate, or resolution isn’t a priority.
A while back, I tried looking into what it would take to modify Android to disable Bluetooth microphones for wireless headsets, allowing call audio to be streamed via regular AAC or aptX, and the call microphone to be captured from the phone’s internal mic. This would prevent the bit rate for call audio and the microphone from being effectively halved when using the ancient HFP/HSP Bluetooth codecs, instead allowing the same call quality as when using a wired headset. This would help when multitasking with different audio sources, such as listening to music while hanging out on Discord, without the music being distorted by the lower bit rate of HFP/HSP. It would also benefit regular VoLTE calls, as their audio quality already exceeds that of the legacy Bluetooth headset profiles.
Although, I didn’t manage to tease apart the mechanics of the audio policy configuration files used by the Android Open Source Project, given the sparse documentation and vague commit history.
I’d certainly be fine with the awkwardness of holding up and speaking to my phone as if it were in speaker mode, while listening to the call over wireless headphones, in order to improve or even double the audio quality. I’ve always wondered what these audio policies fall back to when a Bluetooth device doesn’t have a headset profile, but it’s almost impossible to find high-quality consumer-grade Bluetooth headphones without a microphone nowadays.
For the call setting under Bluetooth audio devices, I really wish they would break out or separate the settings for using the audio device as a source or a sink for call audio. Sort of like how you can disable the HSP/HFP Bluetooth profiles for audio devices in Linux or Windows.
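On Linux, for example, that usually just means pinning the Bluetooth card to an A2DP-only profile. Here’s a rough sketch using the pulsectl Python bindings; the card and profile names vary between PulseAudio and PipeWire, and the calls are from memory, so double-check against pactl list cards and pactl set-card-profile before relying on it.

```python
# Sketch: keep Bluetooth headsets on their high-quality A2DP profile so they
# never drop to the low-bitrate HSP/HFP "headset" profile. Card and profile
# names below are typical examples and will differ per system.
import pulsectl  # pip install pulsectl

with pulsectl.Pulse("pin-a2dp") as pulse:
    for card in pulse.card_list():
        if not card.name.startswith("bluez_card."):
            continue  # only touch Bluetooth audio devices
        # e.g. "a2dp_sink" on PulseAudio, "a2dp-sink" (or a codec-specific
        # variant) on PipeWire; "headset_head_unit" is the HSP/HFP one to avoid.
        a2dp = next((p.name for p in card.profile_list if "a2dp" in p.name), None)
        if a2dp:
            pulse.card_profile_set(card, a2dp)
            print(f"{card.name} -> {a2dp}")
```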
Similarly reported (in more detail) by TechCrunch:
I’ll note that when using multiple windows, I recall that switching the user in one window would switch the user for all the other windows too, so support for simultaneous user sessions would probably have to be added as well.
This screenshot was from a Samsung Galaxy Tab S8 Ultra. You can run four onscreen apps at a time (if you include a floating pop-up window in the mix) with multi-windowing on Android 13 (outside DeX).
Getting the screenshot took a little tinkering. After the first window split, getting the third instance of Sync on screen required using the Samsung side panel to drop an unrelated app into the third quadrant, using the launcher to alt-tab the display to fullscreen the third instance of Sync, alt-tabbing back to fullscreen the three-app multi-window view, and then using the quick app-switch gesture to swap the unrelated app out for the third instance of Sync. It was a little overly complicated.
Multitasking and window tiling in Samsung DeX make it a lot easier, and more intuitive, to replicate this kind of thing, but I still prefer Android’s native launcher layout, as app windows don’t have needless title bars and the same navigation gestures work better when not breaking out the mouse and keyboard.
Thanks so much for your hard work and the terrific beta release!
Here’s to the success of Lemmy, Sync for Lemmy, and the rest of the Fediverse,
Cheers! 🍻
Hello world!
~ from S4L!
E.g. an NBA or sports instance containing /c/NBA, /c/NFL, /c/NHL, and all the related teams.
For anybody wondering what the Mastodon security issue is: CVE-2023-36460, where you can send a toot that creates a webshell on instances that process said toot. #CVE202336460 #TootRoot
Don’t quote me, but I recall reading on GitHub that there are a few things to be refactored before Lemmy can support horizontal scaling approaches.
Could you share the link to that one? Thanks. Looks like this TechCrunch article is sourcing info from emails with advertisers partnered with Reddit, not just from public statements about visitor traffic published by Reddit themselves.
I wonder what the internally measured metrics are. Funny that those earnings metrics would’ve been more readily available had they already IPO’d on the public market.
That looks neat. Although I suspect this would succumb to the same cross-post discoverability issues, where URLs pointing to the same video don’t match string for string. A better approach might be to facilitate inline embedding of HTML video players into Lemmy using browser extensions, where user scripts could be used to preview YouTube links or rewrite them to nocookie embeds, allowing the Lemmy web UI to still avoid the use of cross-origin scripts by default.
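As a rough illustration of why string-for-string matching falls short, here’s the kind of URL normalization a user script (or the server) could apply; this is just a sketch, not anything Lemmy actually ships:

```python
# Sketch: canonicalize the common YouTube URL shapes so links to the same video
# compare equal for cross-post matching. Purely illustrative.
from urllib.parse import urlparse, parse_qs


def canonical_youtube_url(url: str) -> str:
    """Map youtu.be / youtube.com / youtube-nocookie.com links to one form."""
    parts = urlparse(url)
    host = parts.netloc.lower().removeprefix("www.")
    if host == "youtu.be":
        video_id = parts.path.lstrip("/")
    elif host in ("youtube.com", "m.youtube.com", "youtube-nocookie.com"):
        if parts.path.startswith("/embed/"):
            video_id = parts.path.split("/")[2]
        else:
            video_id = parse_qs(parts.query).get("v", [""])[0]
    else:
        return url  # not a YouTube link; leave it untouched
    return f"https://www.youtube.com/watch?v={video_id}"


# All three collapse to the same string, so the posts could be matched:
print(canonical_youtube_url("https://youtu.be/abc123XYZ_0"))
print(canonical_youtube_url("https://www.youtube.com/watch?v=abc123XYZ_0&t=42"))
print(canonical_youtube_url("https://www.youtube-nocookie.com/embed/abc123XYZ_0"))
```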
Found the full transcription for the video from OP author:
Note to self: use youtube.com instead of youtu.be for better cross-post detection and Lemmy integration.
For programming tutorials, yep, I also prefer reading documentation instead. Although, it looks like the tutorial these folks put out doesn’t have much of anything you could copy from, like terminal commands, given it’s a recorded walkthrough of using the graphical web UI. YouTube also now allows searching the auto-generated or manual transcription text, which is handy when creators forget to include timestamped chapters.
Tagging an image is simply associating a string value with an image pushed to a container registry, as a human-readable identifier. Unlike an image ID or image digest SHA, an image tag is only loosely associated, and can be remapped later to another image in the same registry repo, e.g. the latest tag. Untagging is simply removing the tag from the registry, but not necessarily the associated image itself.
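For a concrete feel, here’s a rough sketch with the Docker SDK for Python; the repo and registry names are made up, and it shows the local daemon-side equivalents of tagging and untagging (registry-side untagging goes through the registry’s own API or UI instead):

```python
# Sketch of tag vs. image semantics using the Docker SDK for Python
# (pip install docker). Names like registry.example.com/demo/app are made up.
import docker

client = docker.from_env()

# "Tagging": attach another human-readable name to an existing image.
image = client.images.get("registry.example.com/demo/app:latest")
image.tag("registry.example.com/demo/app", tag="v1.2.3")
client.images.push("registry.example.com/demo/app", tag="v1.2.3")

# "Untagging": removing by name only drops that tag; the underlying image,
# still reachable by ID/digest or any other tags, is left in place.
client.images.remove("registry.example.com/demo/app:latest", noprune=True)

image.reload()
print(image.id)    # content-addressed ID, unchanged by tag shuffling
print(image.tags)  # remaining tags still pointing at this image
```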