I have a few selfhosted services, but I’m slowly adding more. Currently, they’re all in subdomains like linkding.sekoia.example etc. However, that adds DNS records to fetch and means more setup. Is there some reason I shouldn’t put all my services under a single subdomain with paths (using a reverse proxy), like selfhosted.sekoia.example/linkding?

  • dan@lemm.ee · 25 points · 1 year ago

    The only problem with using paths is that the service might not support it (i.e. it might generate absolute URLs without the path prefix, rather than using relative URLs).

    Subdomains are probably the cleanest way to go.

    • scrchngwsl@feddit.uk · 8 points · 1 year ago

      Agreed, I’ve run into lots of problems trying to get reverse proxies set up on paths, which disappear if you use a subdomain. For that reason I stick with subdomains and a wildcard DNS entry.
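      For anyone weighing the subdomain route: with a reverse proxy like Caddy, the per-service setup is tiny. A minimal sketch, assuming hypothetical hostnames and backend ports (Caddy obtains and renews certificates automatically):

```
# Caddyfile — one site block per service subdomain
# (hostnames and backend ports are illustrative)
linkding.sekoia.example {
	reverse_proxy 127.0.0.1:9090
}
jellyfin.sekoia.example {
	reverse_proxy 127.0.0.1:8096
}
```

      Adding a service is then one more block, plus a DNS record (or no DNS change at all if you use a wildcard record).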

  • Sascamooch@lemmy.sascamooch.com · 23 points · 1 year ago

    I prefer subdomains, personally. A lot of services expect to run on the root of the web server, so although you can sometimes configure them to use a path, it’s kind of a pain.

    Also, migrating services from one server to another will be a lot easier with subdomains since all you have to do is change the A and AAAA records. I use ZeroTier for a lot of my services, and that’s really nice since, even if I move a container to another machine, the container’s ZeroTier IP address will stay the same, so I don’t even need to update DNS. With paths, migration would involve a lot more work.

  • lvl@beehaw.org · 19 points · 1 year ago

    Try not to use paths: you’ll get weird cross-interactions when two pieces of software set the same cookie (session cookies, for example), which will force you to reauthenticate on every path.

    Subdomains are the way to go, especially with wildcard DNS entries and DNS-01 letsencrypt challenges.
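    The wildcard cert itself is a one-time setup. A sketch with certbot, assuming the Cloudflare DNS plugin is installed (substitute your own provider’s plugin and credentials file):

```
# Issue a single wildcard certificate via the DNS-01 challenge
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d 'sekoia.example' -d '*.sekoia.example'
```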

  • Jeena@jemmy.jeena.net · 17 points · edited · 1 year ago

    I started with paths because I didn’t want to pay for an expensive SSL certificate for each service I’m running (no longer a problem now with Let’s Encrypt). But that turned out to be a terrible idea. Once I wanted to host a service on a different server, the problems started. With subdomains you just point your DNS at the correct IP address and that’s it. With paths you have to proxy everything through your one vhost, and it gets really messy. And to be honest, most services expect to run in the root directory, not under a path.

    • SocialDoki@lemmy.blahaj.zone · 4 points · 1 year ago

      Yeah, this is it. The only exception would be if you’re running everything off a single non-virtualized, non-containerized server, which is a bad idea for a whole host of reasons.

  • Midas@ymmel.nl · 11 points · edited · 1 year ago

    I’ve kinda been trimming the number of services I expose through subdomains; it grew so wild because it was so easy. I’d just point a wildcard subdomain at my IP and the Caddy reverse proxy handled the subdomains.

    Just add a wildcard A record that points *. to your IP address.

    It even works with nested subdomains, like “home.” and then “*.home”.
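    In Caddy terms, a wildcard setup like that could be sketched as below. Note this assumes a Caddy build with a DNS provider module, since wildcard certificates require the ACME DNS-01 challenge; hostnames and ports are illustrative:

```
# Caddyfile — one wildcard site, requests routed by Host header
*.sekoia.example {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	@linkding host linkding.sekoia.example
	handle @linkding {
		reverse_proxy 127.0.0.1:9090
	}
	handle {
		respond "Not found" 404
	}
}
```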

  • surfrock66@lemmy.world · 10 points · 1 year ago

    Subdomains; overall cheaper after a certain point to get a wildcard cert, and if you split your services across servers without a reverse proxy, it’s easier to point names at different machines.

    • witten@lemmy.world · 5 points · 1 year ago

      Who still pays for certs?? (I say this as non-snarkily as possible.) I just assumed everyone self-hosting uses Let’s Encrypt.

      • surfrock66@lemmy.world · 2 points · 1 year ago

        Let’s Encrypt is fine for encryption but not identification. I have some stuff I prefer full validation on, specifically when demonstrating services I host at home in the workplace. Having full verification just reduces the questions I have to deal with. It’s about $90/year for a wildcard.

        • witten@lemmy.world · 2 points · 1 year ago

          Wow, okay! I guess that’s a different use case than what I’m typically doing.

  • preciouspupp@sopuli.xyz · 10 points · 1 year ago

    You can make a wildcard domain record and point it at the reverse proxy, which then routes based on SNI. That works if you have HTTPS-only sites. Just an idea.
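    A sketch of what that looks like in nginx (names and ports are hypothetical): the stream module’s ssl_preread reads the SNI from the TLS ClientHello and forwards the raw connection without terminating it, so each backend keeps its own certificate:

```
# nginx.conf — SNI-based TCP routing, no TLS termination at the proxy
stream {
    map $ssl_preread_server_name $backend {
        linkding.sekoia.example  127.0.0.1:9443;
        jellyfin.sekoia.example  127.0.0.1:8443;
        default                  127.0.0.1:4443;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```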

  • I eat words@group.lt · 9 points · 1 year ago

    in addition to all the other good comments - if you ever decide to move a service to another server or something like that, moving subdomains will be much easier.

  • Oida@lemmy.world · 9 points · 1 year ago

    Depends on the usage and also the service. I’m using subfolders for all my Tasmota switches, like https://switch.domain.org/garage. This makes them easier to maintain because I don’t need to mess around with a new subdomain for every new device. On the other hand, I like unique services on a subdomain: video or audio. I can switch the application behind it, but the entry point remains.

  • LordChaos82@fosstodon.org · 4 points · 1 year ago

    @Sekoia I like using subdomains as it is easy to configure in a lot of services. Also, easier to remember if you are giving the URL to someone for access.

  • drdaeman@lemmy.zhukov.al · 2 points · 1 year ago

    Some apps have hardcoded assumptions about their paths, making that kind of setup harder to achieve (you’ll have to patch the apps or do on-the-fly rewrites).

    Then there’s also a potential cookie sharing/collision issue. If apps don’t scope their cookies to specific paths, they may both use a same-named cookie, which can cause weird behavior.

    And if one of the apps is compromised (e.g. has an XSS issue) it’s a bit less secure with paths than with subdomains.

    But don’t let me completely dissuade you - paths are a totally valid approach, especially if you group multiple closely related things together (e.g. Grafana and Prometheus) under the same domain name.

    However, if you feel that setting up a new domain name is a lot of effort, I would recommend investing some time in automating this.
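    A sketch of that grouped-paths idea in nginx (hypothetical names and ports). Both of these apps happen to support sub-path hosting if you configure them for it (Grafana via its root_url setting, Prometheus via --web.external-url), which is exactly the app-side support discussed above:

```
# nginx — two closely related services under one host, split by path
server {
    listen 443 ssl;
    server_name monitoring.sekoia.example;

    location /grafana/ {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
    }
    location /prometheus/ {
        proxy_pass http://127.0.0.1:9090/;
        proxy_set_header Host $host;
    }
}
```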

  • shrugal@lemmy.world · 2 points · 1 year ago

    If you don’t have any restrictions (limited subdomains, service only works on the server root etc.) then it’s really just a personal preference. I usually try paths first, and switch to subdomains if that doesn’t work.

  • Freeman@lemmy.pub · 2 points · 1 year ago

    With paths you can use httpS://192etc/example, but if you use subdomains, how do you connect internally with https? Https://example.192etc won’t work as you can’t mix an ip address with domain resolution.

    You can do this. The reality is it depends on the app.

    But ultimately I used both and pass them through a nginx proxy. The proxy listens for the SNI and passes traffic based on that.

    For example homeassistant doesn’t do well with paths. So it goes to ha.contoso.com.

    Miniflux does handle paths. So it uses contoso.com/rss.

    Plex needs a shitload of headers and paths so I use the default of contoso.com to pass to it along with /web.

    My photo albums use both. And some things even use a separate gTLD.

    But they all run through the same nginx box at the border.

  • TemperateFox@beehaw.org · 4 up / 5 down · edited · 1 year ago

    Everyone is saying subdomains, so I’ll try to give a reason for paths. Using subdomains makes local access a bit harder. With paths you can use httpS://192etc/example, but if you use subdomains, how do you connect internally with HTTPS? httpS://example.192etc won’t work, as you can’t mix an IP address with domain resolution. You’ll have to use http://192etc:port, so no HTTPS for internal access. I got around this by hosting AdGuard as a local DNS server and adding an override so that my domain resolved to the local IP. But this won’t work if you’re connected to a VPN, as it’ll capture your DNS requests; if you use paths you can exclude the IP from the VPN.

    Edit: not sure what you mean by “more setup”; you should be using a reverse proxy either way.
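    One partial workaround for local testing: curl’s --resolve flag pins a hostname to a LAN IP for a single request, so the right SNI and Host header are sent and the certificate validates, without any DNS override (hostname and IP are illustrative):

```
# Talk to the LAN IP directly while presenting the proper hostname
curl --resolve linkding.sekoia.example:443:192.168.1.100 \
     https://linkding.sekoia.example/
```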

    • tkohhh@waveform.social · 3 points · 1 year ago

      If your router has NAT reflection, then the problem you describe is nonexistent. I use the same domain/protocol both inside and outside my network.

        • tkohhh@waveform.social · 1 point · 1 year ago

          I don’t know for sure… but my instinct is that NAT reflection is moot in that case, because your connection is going out past the edge router and doing the DNS query there, which will then direct you back to your public IP. I’m sure there’s somebody around that knows the answer for certain!

        • BombOmOm@lemmy.world · 2 up / 3 down · 1 year ago

          Depends:

          • If you have your VPN set up so it sends all traffic to the internet, then your request will pass through the VPN server and come back to your location from the internet.

          • If you have your VPN set up to exempt LAN traffic, then if you specify a local IP, your traffic will stay on your LAN. However, if you specify the domain, the VPN will almost certainly treat it as internet-bound traffic and route it through its servers. This is possibly avoidable if you can also put your own IP on the exempt list, if that is a feature.

    • Felix@lemmy.one · 3 up / 1 down · 1 year ago

      You’ll have to use http://192etc:port. So no httpS for internal access

      This is not really correct. Using http implies that you want to connect to port 80 without encryption, while https implies an SSL connection to port 443.

      You can still use https on a different port, Proxmox by default exposes itself on https://proxmox-ip:8006 for example.

      It’s still better to use (sub)domains, as then you don’t have to remember strings of numbers.

      • TemperateFox@beehaw.org · 1 up / 1 down · edited · 1 year ago

        I understand, though if the services you’re hosting are all HTTP by themselves and only HTTPS thanks to a reverse proxy, then connecting to the reverse proxy directly will only serve the root service. I’m not aware of a way to reach the subdomains through the reverse proxy when accessing it locally via IP.

        • macgregor@lemmy.world · 4 points · 1 year ago

          Generally a hostname based reverse proxy routes requests based on the host header, which some tools let you set. For example, curl:

          curl -H 'Host: my.local.service.com' http://192.168.1.100
          

          here 192.168.1.100 is the LAN IP address of your reverse proxy and my.local.service.com is the service behind the proxy you are trying to reach. This can be helpful for tracking down network routing problems.

          If TLS (https) is in the mix and you care about it being fully secure even locally it can get a little tricky depending on whether the route is pass through (application handles certs) or terminate and reencrypt (reverse proxy handles certs). Most commonly you’ll run into problems with the client not trusting the server because the “hostname” (the LAN IP address when accessing directly) doesn’t match what the certificate says (the DNS name). Lots of ways around that as well, for example adding the service’s LAN IP address to the cert’s subject alternate names (SAN) which feels wrong but it works.

          Personally I just run a little DNS server so I can resolve the various services to their LAN IP addresses and TLS still works properly. You can use your /etc/hosts file for a quick and dirty “DNS server” for your dev machine.
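          The quick-and-dirty /etc/hosts version of that local DNS server might look like this (LAN IP and hostnames are illustrative):

```
# /etc/hosts on the client — resolve service names to the proxy's LAN IP
192.168.1.100  linkding.sekoia.example  jellyfin.sekoia.example
```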

          • Goldenderp@lemmy.world · 1 point · 1 year ago

            TLS SNI will take care of that issue just fine; most reverse proxies will handle it for you, especially if you use certbot (i.e. usually Let’s Encrypt).

    • Sekoia@lemmy.blahaj.zoneOP · 2 points · 1 year ago

      Edit: not sure what you mean by “more setup”, you should be using a reverse proxy either way.

      I’m using Cloudflare Tunnels (because I don’t have a static IP and I’m behind NAT, so I would need to port forward and stuff, which is annoying). For me specifically, that means I have to do a bit of admin on the Cloudflare dashboard for every subdomain, whereas with paths I can just configure the reverse proxy.
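      For what it’s worth, that per-subdomain admin can also live in the tunnel’s config file rather than the dashboard. A sketch of a cloudflared ingress config (hostnames and ports are illustrative; each hostname still needs a one-time DNS route, e.g. via `cloudflared tunnel route dns`):

```
# config.yml for cloudflared — route tunnel traffic by hostname
tunnel: <tunnel-id>
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: linkding.sekoia.example
    service: http://localhost:9090
  - hostname: jellyfin.sekoia.example
    service: http://localhost:8096
  - service: http_status:404   # required catch-all rule
```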

      • bratling@sh.itjust.works · 1 point · 1 year ago

        because I don’t have a static IP and I’m behind NAT, so I would need to port forward and stuff, which is annoying

        This week I discovered that Porkbun DNS has a nice little API that makes it easy to update your DNS programmatically. I set up Quentin’s DDNS Updater https://github.com/qdm12/ddns-updater

        Setup is a little fiddly, as you have to write some JSON by hand, but once you’ve done that, it’s done and done. (Potential upside: You could use another tool to manage or integrate by just emitting a JSON file.) This effectively gets me dynamic DNS updates.

    • key@lemmy.keychat.org · 1 point · 1 year ago

      I got around this by hosting adguard as a local DNS and added an override so that my domain resolved to the local IP. But this won’t work if you’re connected to a VPN as it’ll capture your DNS requests

      Why didn’t you use your hosts file?