I believe that the 67% number for the 2020 election is of eligible voters and not registered voters. While turnout is low, it’s not 25% low.
Yeah, OpenWrt should be great. It uses nftables as the firewall on a Linux distribution. You can configure it through a pretty nice UI, but you also have SSH access to configure everything directly if you want.
The challenge is going to be what the ISP router supports. If it supports bridge mode then things are easy. You just put your router downstream of it and treat it like a modem. Then you configure OpenWrt like it’s the only router in the network. This is the opposite of what you’ve suggested: using the upstream ISP router in pass-through and relying on the OpenWrt router to get the IPv6 GUA prefix. (You might even be able to get a larger prefix delegated if you set the settings to ask for it)
If you don’t have bridge mode then things are harder. There’s some helpful information here: https://forum.openwrt.org/t/ipv6-only-slaac-dumb-aps/192059/19, even though the situation is slightly different, since they also don’t want a firewall. But you probably need to configure the upstream side on the OpenWrt router similarly.
Also, looking more, the TP-Link AX55 isn’t supported by OpenWrt. If you don’t already have it, I’d get something that does. (Or if the default software on the AX55 supports what you want, that’s fine too. I just like having the full control OpenWrt and similar give)
I’d recommend something that you can put OpenWrt or OPNsense/pfSense on. I think the TP-Link Archers support OpenWrt at least.
The ISP router opening things at a port level instead of a host level is kinda insane. Do they only support port forwarding? Or when you open a port range, can you actually send packets from the WAN to any LAN address at that port?
Can you just buy your own modem, and then also use your own router? (If the reason you need the ISP router is that it also acts as a modem).
Does the ISP router also provide your WiFi? If it does you should definitely go with a second router/access point and then disable the one on the ISP router.
Since games don’t have to run with more than user privileges and Steam runs in Flatpak, you could run them under a different user account with very limited permissions.
That said, Flatpak should be pretty secure as far as I’m aware, if you make sure that permissions for the apps are restricted appropriately. I’m not sure how restricted you can make Steam and still have it work, though.
You can use offline mode for Steam if you’re okay with Steam having internet access but not the games. But there’s no way to use Steam entirely offline; internet access is a fundamental part of the system they have.
There’s also a question of what your threat model is. Like, are you trying to prevent casual access to your files by games, or a sophisticated attempt to compromise the system delivered through a game? For the former, Flatpak seems sufficient. For the latter, you probably need a dedicated machine. And there are varying levels in between.
Wait so the images in your post are the after images?
I think something that contributes to people talking past each other here is a difference in belief in how necessary/desirable revolution/overthrow of the U.S. government is. Many of the people who I’ve talked to online who advocate not voting and are also highly engaged believe in revolution as the necessary alternative. Which does make sense. It’s hard to believe that the system is fundamentally genocidal and not worth working within (by voting for the lesser evil) without also believing that the solution is to overthrow that system.
And in that case, we’re discussing the wrong thing. Like, the question isn’t whether you should vote or not; it’s whether the system is worth preserving (and of course what you do to change it, and how much violence in a revolution is necessary/acceptable). If you believe it is worth preserving, then clearly you should vote. And if you believe it isn’t, there’s a stronger case for not voting and instead working on a revolution.
Does anyone here believe that revolution isn’t necessary and also that voting for the lesser evil isn’t necessary?
The opposite is more plausible to me: believing in the necessity of revolution while also voting
Personally, I believe that revolution or its attempt is unlikely to be effective, that voting plus activism is more effective, and that the latter also requires agreement from fewer people in order to make progress on its goals. Tragically, this likely means that thousands more people will be murdered, but I don’t know what can actually be effective at stopping that.
Cool!
I wouldn’t worry about making a second post. We can use all the content that we can get and this is neat
I understand that you have principles. I have principles too. But it sounds like your principles are at least partly based on personal purity, which is what I’m arguing against.
The idea that by voting for Kamala, you’ll be personally tainted by her actions, and that by not voting at all, you avoid this taint.
There’s a good argument in my opinion for not voting if you actually believe it will lead to the best outcome. Like, for example, that if enough people don’t vote it will cause our leaders/parties/etc. to do something better. I just don’t think this is true. And if it’s not true, what remains is a purity argument, which I find selfish, since it prioritizes your internal view of yourself over what happens to other people in the world.
I’m also absolutely in favor of third party candidates that push issues and the electorate to the left. I just think that generally they should drop out at the point when it becomes clear that they aren’t going to win and endorse the person closest to them on the issues.
The other option is that they simultaneously believe they need your vote, but also know that they would lose more voters than they would gain if they did what you’re asking. It’s not entirely clear that this is what’s happening, as there’s not been much indication that Kamala believes what Israel is doing is horrific, but it’s a very real possibility that you aren’t including. And in that case, voting for her remains the best you can do, since you not voting for her won’t convince the other people whose votes she would lose. It will just lead to Trump being elected.
Docker Desktop is not what most people on Linux are using. They’re using Docker Engine directly, which doesn’t run in a VM and doesn’t require virtualization, since the containers share the host’s kernel.
You have two options for setting up https certificates and then some more options for enabling it on the server:
1: You can generate a self-signed certificate. This will produce an angry, scary warning in all browsers and may prevent Chrome from connecting at all (I can’t remember the status of this). Its security is totally fine if you are the one using the service, since you can verify the key is correct.
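If you go the self-signed route, a minimal sketch with openssl might look like this (the filenames and the myserver.local name are placeholders; use your server’s actual hostname or IP):

```shell
# Generate a self-signed certificate and private key, valid for one year.
# "myserver.local" is a placeholder; put your server's hostname or IP there.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt \
  -days 365 -subj "/CN=myserver.local"

# Sanity check: print the subject and expiry date to confirm it worked
openssl x509 -in server.crt -noout -subject -enddate
```

You’d then install server.crt and server.key wherever the service (or reverse proxy) expects its certificate and key.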
2: You can get a certificate for a domain that you own and then point the domain at the server. The best way to do this is probably through letsencrypt. This requires owning a domain, but those are like $12 a year, and highly recommended for any services exposed to the world. (You can continue to use a dynamic DNS setup, but you need one that supports custom domains)
Now that you have a certificate, you need to know: does the service you’re hosting support https directly? If it does, then you install the certificates in it and call it a day. If it doesn’t, then this is where a reverse proxy is helpful. You set up the reverse proxy to use the certificate with https, and it connects to the server over http. This is called SSL termination.
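For the case where the service doesn’t speak https itself, an SSL-terminating nginx server block is a good mental model. This is just a sketch; the domain, certificate paths, and backend port 8080 are all assumptions:

```nginx
server {
    listen 443 ssl;
    server_name myservice.example.com;

    # Certificate and key from letsencrypt (or your self-signed pair)
    ssl_certificate     /etc/letsencrypt/live/myservice.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myservice.example.com/privkey.pem;

    location / {
        # Plain http to the backend on the LAN; this hop is the
        # "SSL termination" step
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The client only ever sees https; the unencrypted hop stays on your own machine/network.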
There’s also the question of certificate renewal if you choose the letsencrypt option. Letsencrypt requires port 80 to do a certificate renewal. If you have a service already running on port 80 (on the router’s external side), then you will have a conflict. This is the second case where a reverse proxy is helpful: it can allow two services (letsencrypt certificate renewal and your other service) to run on the same external port. If you don’t need port 80, then you don’t need it. I guess you could also set up a DNS-based certificate challenge and avoid this issue; that would depend on your DNS provider.
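Sharing port 80 between renewals and an existing service can look roughly like this in nginx (the domain, webroot path, and backend port are assumptions):

```nginx
server {
    listen 80;
    server_name myservice.example.com;

    # Serve letsencrypt HTTP-01 challenge files from a local directory
    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }

    # Everything else still goes to the existing service
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

You’d then renew with something like certbot certonly --webroot -w /var/www/letsencrypt -d myservice.example.com, and only the challenge requests get intercepted.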
So to summarize:
IF the service doesn’t support SSL/https, OR (you want a letsencrypt certificate AND are already using port 80):
THEN use a reverse proxy (or maybe do a DNS challenge with letsencrypt instead)
ELSE:
You don’t need one, but can still use one.
Reverse proxies don’t keep anything private. That’s not what they are for. And if you do use them, you still have to do port forwarding (assuming the proxy is behind your router).
For most home hosting, a reverse proxy doesn’t offer any security improvement over just port forwarding directly to the server, assuming the server provides the access controls you want.
If you’re looking to access your services securely (in the sense that only you will even know they exist), then what you want is a VPN. (For VPNs, you also often have to port forward, though sometimes the forwarding/router firewall hole punching is set up automatically.) If the service already provides authentication and you want to be able to easily share it with friends/family etc., then a VPN is the wrong tool too (but in this case setting up HTTPS is a must, probably through something like letsencrypt).
Now, there’s a problem, because companies have completely corrupted the normal meaning of a VPN with things like NordVPN, which are actually more like proxies and less like VPNs. A self-hosted VPN will allow you to connect to your home network and all the services on it without having to expose those services to the internet.
In a way, VPNs often function in practice like reverse proxies. They both control traffic from the outside before it gets to things inside. But deeper than this they are quite different. A reverse proxy controls access to particular services, usually http based and pretty much always TCP/IP or UDP/IP based. A VPN controls access to a network (hence the name virtual private network). When set up, it shows up on your clients like any other Ethernet cable or WiFi network you would plug in. You can then access other computers that are on the VPN, or given access to the VPN through the VPN server.
The VPN software usually recommended for this kind of setup is WireGuard/OpenVPN or Tailscale/ZeroTier. The first two are more traditional VPN servers, while the second two are more distributed/“serverless” VPN tools.
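To give a rough idea of the traditional option, a minimal WireGuard server config (wg0.conf) is only a few lines. The subnet, port, and keys here are placeholders, not something you can use as-is:

```ini
[Interface]
# The VPN's own private subnet; clients get addresses in 10.0.0.0/24
Address = 10.0.0.1/24
# UDP port you forward from your router to this machine
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# One client (e.g. your laptop); add one [Peer] section per device
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Once a client connects, it can reach hosts on that subnet without any of the underlying services being exposed to the internet.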
I’m sorry if this is a lot of information/terminology. Feel free to ask more questions.
How will a reverse proxy help?
Things that a reverse proxy is often used for:
- SSL termination (the proxy handles https and passes plain http to the service)
- hosting multiple services on a single IP address and port, routed by hostname or path
- load balancing across multiple backend servers
- adding caching or access controls in front of a service
Do any of these match what you’re trying to accomplish? What do you hope to gain by adding a reverse proxy (or maybe some other software better suited to your need)?
Edit: you say you want to keep this service ‘private from the web’. What does that mean? Are you trying to have it so only clients you control can access your service? You say that you already have some services hosted publicly using port forwarding. What do you want to be different about this service? Assuming that you do need it to be secured/limited to a few known clients, you also say that these clients are too weak to run SSL. If that’s the case, then you have two conflicting requirements. You can’t simultaneously have a service that is secure (which generally means cryptographically) and also available to clients which cannot handle cryptography.
Apologies if I’ve misunderstood your situation
Thanks I’ll check it out! From a brief search it looks like at the moment I’ll still have to use the nvidia-libs repo to get cuda: https://github.com/bottlesdevs/Bottles/issues/3301
Huh?? I’m using Kubuntu 24.04 right now and didn’t have to jump through these hoops. That’s weird.
I compile them because I want to use them with my system wine, and not with proton. Proton does that stuff for you for steam games. This is for like CAD software that needs accelerated graphics. I could probably use like wine-ge and let GE compile it for me, but I’m not sure they include all the Nvapi/cuda stuff that’s needed for CAD and not gaming. If there’s an easier way to do it, I’d love to hear! Right now I’m using https://github.com/SveSop/nvidia-libs
I’m a developer that’s been using Ubuntu distros for 20 years and never ran into such issues.
If you’re a developer that’s comfortable with desktop software toolchains, that makes sense. (And checkinstall is wonderful for not polluting your system with random unmanaged files.) But I came at this knowing, like, embedded C++ and Python, and there were just a lot of tools I had to learn, like what make was and how library files are linked/found, etc. And for someone who’s not a developer at all, I imagine that this would be even harder.
I’ve learned a lot, especially because of everyone in this thread
I’m glad!
Re the flatpak issue: what you linked is just saying that flatpak won’t be a default installed program, and packages provided by flatpaks won’t be officially supported by Ubuntu support, as of 23.04. I don’t think this affects your use of Ubuntu in any way. If you want to use flatpaks, just install the program. It will still be packaged in the Ubuntu repositories. 23.04 was over a year ago, and I still use flatpak without a problem on my Kubuntu 24.04 system. It’s just a one-time thing to do sudo apt-get install flatpak (and maybe a second package for KDE’s flatpak PackageKit backend), and it’s like Canonical never made that decision.
The push of snaps instead of debs is a bit more concerning because it removes the deb as an option in the official repositories. But as of right now I think only Mozilla software has this happening? If your timeline is 5-10 years though, this may be more of an issue depending on how hard canonical pushes snaps and how large their downsides remain
All those patches seem like nice things to have, but are more focused on adding hardware support and working around bugs in software/other people’s implementations. If you have one of the affected GPUs/games/etc., then those patches probably make a huge difference, but I’d guess there won’t be noticeable frame rate differences on most systems. I have not tested this claim though, so maybe something on there makes a big difference. What’s nice is all the packaging work they’ve done to make setting things up correctly easy, not necessarily most of the changes themselves. Like, on my system I compile dxvk and various wine Nvidia libs myself since Ubuntu doesn’t package them. And it’s easy to screw that up/it requires some knowledge of compiling things.
Reading your update, I’d still choose whatever distro packages the software you want with the versions/freshness you need. If you’re willing to tweak things, then the performance stuff can be done yourself pretty easily (unless you have broken hardware that isn’t well supported by the mainline kernel), but packaging things/compiling software that isn’t in the repositories is a huge pain. I think this is one of the reasons people choose Arch even with its need to stay on top of updates: the AUR means that you don’t have to figure out how to build software that the distribution maintainers didn’t package. Ubuntu’s PPAs aren’t great (though I don’t have personal Arch experience to compare with).
I’m not sure what performance improvements you’re talking about. As far as I’m aware, the difference between distros on performance is extremely minimal. What does matter is how up to date the DE is in the distribution-provided packages. For example, I wanted some Nvidia+Wayland improvements that were only in KWin 6.1, and so I switched from Kubuntu to Neon in order to get them (and also definitely sacrificed some stability, since more broken packages/combinations get pushed to users than in base Ubuntu). It’s also possible that the kernel version might matter in some cases, but I haven’t run into this personally.
I think the main differences between distros are how apps are packaged and the defaults provided, and if you’re most comfortable with apt-based systems, I’m not sure what benefit there’s going to be to switching (other than the joy of tinkering and learning something new, which can be fun in its own right).
For users less experienced with Linux, the initial effort required to set up Ubuntu for gaming (installing graphics drivers/possibly setting kernel options, etc.) might push them toward a distribution that removes that barrier, but the end state is going to be basically identical to whatever you’ve set up yourself.
The choice between distributions is probably more ‘what do I want the process of getting to my desired end state to be like’ and less ‘how do I want the computer to run’.
Could you post the specific output of the commands that don’t work? It’s almost impossible to help with just ‘it doesn’t work’. Like, when ping fails, what’s the error message? Is it a timeout or a resolution failure? What does the resolvectl command I shared show on the laptop? If you enable logging on the DNS server, do you see the requests coming in when you run the commands that don’t work?
I don’t know a lot about tailscale, but I think that’s likely not relevant to what’s possible (but maybe relevant to how to accomplish it).
It sounds like the main issue here is DNS. If you wanted to/were okay with just IP-based connections, then you could assign each service to a different port on Bob’s box, and then have nginx point those ports at the relevant services. This should be very easy to do with a raw nginx config. I could write one for you if you wanted. It’s pretty easy if you’re not dealing with https/certificates (in which case this method won’t work anyway).
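To make the port-based idea concrete, the raw nginx config could be as small as this (the LAN IP 192.168.1.50 and the ports are made up; use whatever your containers actually listen on):

```nginx
# Each service gets its own listen port on Bob's box.
server {
    listen 8096;
    location / {
        proxy_pass http://192.168.1.50:8096;  # jellyfin container
        proxy_set_header Host $host;
    }
}

server {
    listen 5055;
    location / {
        proxy_pass http://192.168.1.50:5055;  # jellyseerr container
        proxy_set_header Host $host;
    }
}
```

Clients would then connect to bobs-box-ip:8096 and bobs-box-ip:5055 directly, with no hostnames involved.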
Looking quickly on Google for npm (which I’ve never used), this might require adding the ports to the docker config and then using that port in npm (like here). This is likely the simplest solution.
If you want hostnames/https, then you need some sort of DNS. This is a bit harder. You could take over their router like you suggested, or you could use public DNS that points at a private IP (this is the only way I’m suggesting to get publicly trusted SSL certificates).
You might be able to use mDNS to get local DNS at Bob’s house automatically, which would be very clean. You’d basically register names like jellyseer.local and jellyfin.local on Bob’s network from the box, and then set up the proxy manager to proxy based on those domains. You might be able to just do avahi-publish -a -R jellyseer.local 192.168.box.ip and then avahi-publish -a -R jellyfin.local 192.168.box.ip. Then any client that supports mDNS/Avahi will be able to find the service at that host. You can then register those names in nginx/npm and I think things should just work.

To answer your questions directly:
I’d be happy to try and give more specifics if you choose a path similar to one of the above things.