Would I be compromising on the security of my local network and all the devices on it?

I have a ton of local-only self-hosted services, and some hold personal data that I would not want compromised or affected.

Now of course, I can work on securing those local services from each other, but the idea of opening up a port to the public still seems incredibly insecure to me. Is there a way to host services publicly from a local network without compromising security?

I know I could host on a cloud provider or VPS, but for certain things I’d prefer to keep it local (especially for things that may violate VPS providers’ terms of service, like media apps)

  • Shortcake@kbin.social · 2 years ago

    Best advice is not to expose it publicly.
    If you need access outside your LAN, use a VPN into your network, e.g. WireGuard (wg-easy is nice) or Tailscale (there are others as well).
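
    A minimal sketch of what the wg-easy route can look like (the image tag, environment variables, and hostname are illustrative; check the wg-easy docs for current values):

    ```yaml
    # docker-compose.yml -- wg-easy: WireGuard server with a web UI
    services:
      wg-easy:
        image: ghcr.io/wg-easy/wg-easy
        environment:
          - WG_HOST=vpn.example.com     # your public IP or DDNS hostname (placeholder)
          - PASSWORD=change-me          # web UI password (placeholder)
        ports:
          - "51820:51820/udp"           # WireGuard tunnel
          - "51821:51821/tcp"           # admin web UI -- keep this LAN-only
        cap_add:
          - NET_ADMIN
          - SYS_MODULE
        restart: unless-stopped
    ```

    The only thing forwarded on the router is 51820/udp; every other service stays reachable only through the tunnel.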

    If it’s exposed on a domain/subdomain, make sure you use strong usernames and passwords, or put something like Authentik or Authelia in front to buff up security.
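
    For the Authelia route, the usual pattern is nginx’s auth_request: every request gets checked against Authelia before it reaches the app. A rough sketch (container names, ports, and the verify endpoint path vary between Authelia versions, so treat these as placeholders):

    ```nginx
    location / {
        auth_request /internal/authelia;       # gate every request on an auth subrequest
        proxy_pass http://my-app:8080;         # hypothetical backend service
    }

    location = /internal/authelia {
        internal;                              # not reachable from outside
        proxy_pass http://authelia:9091/api/verify;
        proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }
    ```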

      • Atemu@lemmy.ml · 2 years ago

        Well, that’s the crux. “Public for anyone to use” is a huge liability. No public service is really secure. They can be hardened but that’s about it.

        One way to harden a locally hosted setup could be to use Tailscale funnel. It’s effectively a proxy for network traffic to one specific port of a machine on your network. You don’t even need a static IP address or open ports here.
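
        On recent Tailscale versions that looks roughly like the following (the exact flags have changed between releases, so check `tailscale funnel --help` on your version):

        ```shell
        # Expose a local service listening on port 3000 to the public
        # internet via Tailscale's relay; --bg keeps it running in the
        # background across sessions.
        tailscale funnel --bg 3000

        # See what is currently exposed.
        tailscale funnel status
        ```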

        You’re still vulnerable to problems in the specific service you’re exposing though, so it’s highly recommended to harden the service itself. Containerisation can be one option here, as can systemd service hardening.
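
        On the systemd side, a hardening drop-in can go a long way. A sketch (`myapp` is a placeholder service name, and some options may need loosening depending on what the service actually does):

        ```ini
        # /etc/systemd/system/myapp.service.d/hardening.conf
        [Service]
        DynamicUser=yes                # run as a throwaway unprivileged user
        ProtectSystem=strict           # mount /usr, /boot, /etc read-only
        ProtectHome=yes                # hide /home from the service
        PrivateTmp=yes                 # private /tmp, invisible to others
        NoNewPrivileges=yes            # no privilege escalation via setuid etc.
        RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
        CapabilityBoundingSet=         # drop all capabilities
        ```

        `systemd-analyze security myapp` prints a per-unit exposure score, which is handy for checking progress as you tighten things.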

        • F04118F@feddit.nl · 2 years ago

          What’s the effective difference security-wise between just opening a port and using a tailscale funnel to proxy the traffic on that same port?

          • Atemu@lemmy.ml · 2 years ago

            I see two reasons:

            1. It’s a reverse proxy but at a lower layer (not exactly sure whether it’s L3 or L4). Nobody knows your actual IP address, only Tailscale and they’re not telling.
            2. It does not require any port to permanently be exposed to the internet from your network/firewall. No amount of scans of the IPv4 range can find that port because it’s simply not open.
  • donnnnnb@lemm.ee · 2 years ago

    I host Bitwarden behind a Cloudflare Tunnel and have an A record for it on my local DNS, so on the LAN I can access it directly while still hosting it publicly without exposing ports.
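
    The cloudflared side of that setup is a small config file; roughly (tunnel name, hostname, and port are placeholders):

    ```yaml
    # /etc/cloudflared/config.yml
    tunnel: my-tunnel                        # hypothetical tunnel name
    credentials-file: /etc/cloudflared/my-tunnel.json
    ingress:
      - hostname: vault.example.com
        service: http://localhost:8080       # local Bitwarden/Vaultwarden port
      - service: http_status:404             # catch-all for unmatched hosts
    ```

    The local DNS A record then points vault.example.com at the server’s LAN IP, so LAN clients bypass Cloudflare entirely.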

  • macgregor@lemmy.world · 2 years ago

    I host publicly accessible content on my network. There is a risk, yes, and it takes understanding and work to do it safely. My firewall is constantly dropping packets from crawlers/bots all over the world. If you don’t know what you are doing, using something like Tailscale is going to be way safer and easier.

  • benonions@programming.dev · 2 years ago

    If something doesn’t absolutely have to be public, then hosting a VPN or using tailscale (or if you prefer something self-hosted, nebula) can be good too.

    If you DO want the application(s) to be public, then something I tried in the past that worked well:

    I set up a super cheap VPS and then a tunnel (using Nebula) to a VM in my homelab, configuring the Nebula firewall so that only the VM and the VPS could talk to each other.
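
    In Nebula that restriction lives in the firewall section of each node’s config. A sketch (the group name is made up; see the Nebula config reference for the full rule syntax):

    ```yaml
    # config.yml on the homelab VM -- only hosts in the vps-proxy
    # group may connect inbound, and only on the proxied port
    firewall:
      outbound:
        - port: any
          proto: any
          host: any
      inbound:
        - port: 443
          proto: tcp
          groups:
            - vps-proxy
    ```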

    Both the VPS and the VM were set up to allow SSH only with an SSH key. I threw fail2ban on the VPS for good measure. It’s scary to see in the logs just how many bots attempt to log in.
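
    Key-only SSH plus fail2ban is just a few lines of config on each machine (paths are the usual defaults; ban/retry numbers are a matter of taste):

    ```ini
    # /etc/ssh/sshd_config.d/hardening.conf
    PasswordAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin no

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    maxretry = 5        # failed attempts before a ban
    findtime = 10m      # counted within this window
    bantime  = 1h
    ```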

    On the VPS, I installed Nginx Proxy Manager and configured each hostname to proxy to a different port on the VM, where the apps (like Nextcloud, an XMPP server, etc.) were running in Docker.

    Doing things that way, you’re only using the VPS as an HTTP/TCP proxy to the server in your home, not actually using VPS storage/processing power beyond the bare minimum needed to run nginx.
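
    What Nginx Proxy Manager generates per hostname is roughly equivalent to this raw nginx config (the overlay IP 192.168.100.2 and the port are hypothetical stand-ins for the VM’s Nebula address and the app’s container port):

    ```nginx
    server {
        listen 443 ssl;
        server_name cloud.example.com;         # one server block per app

        location / {
            # Nextcloud container on the homelab VM, reached over the
            # Nebula overlay network rather than the public internet
            proxy_pass http://192.168.100.2:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```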