smarx007 2 days ago

Docker has a known security issue with port exposure in that it punches holes through the firewall without asking your permission, see https://github.com/moby/moby/issues/4737

I usually expose ports like `127.0.0.1:1234:1234` instead of `1234:1234`. As far as I understand, it still punches holes this way but to access the container, an attacker would need to get a packet routed to the host with a spoofed IP SRC set to `127.0.0.1`. All other solutions that are better seem to be much more involved.
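
For illustration, a minimal sketch of the difference (the image name and port are placeholders):

    # publishes on all interfaces and gets a DNAT rule punched through the firewall
    $ docker run -d -p 1234:1234 some-image

    # publishes only on the loopback interface
    $ docker run -d -p 127.0.0.1:1234:1234 some-image

The same `127.0.0.1:` prefix works in a docker-compose `ports:` entry, e.g. `"127.0.0.1:1234:1234"`.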

  • bluedino 2 days ago

    Containers are widely used at our company, by developers who don't understand underlying concepts, and they often expose services on all interfaces, or to all hosts.

    You can explain this to them, they don't care, you can even demonstrate how you can access their data without permission, and they don't get it.

    Their app "works" and that's the end of it.

    Ironically enough even cybersecurity doesn't catch them for it, they are too busy harassing other teams about out of date versions of services that are either not vulnerable, or already patched but their scanning tools don't understand that.

    • malfist 2 days ago

      Checklist security at its finest.

      My team where I work is responsible for sending frivolous newsletters via email and sms to over a million employees. We use an OTP for employees to verify they gave us the right email/phone number to send them to. Security sees "email/sms" and "OTP" and therefore tickets us at the highest "must respond in 15 minutes" priority every time an employee complains about having lost access to an email or phone number.

      Doesn't matter that we're not sending anything sensitive. Doesn't matter that we're a team of 4 managing more than a million data points. Every time we push back security either completely ignores us and escalates to higher management, or they send us a policy document about security practices for communication channels that can be used to send OTP codes.

      Security wields their checklist like a cudgel.

      Meanwhile, in our bug bounty program, someone found that a dev had opened a globally accessible instance of the dev employee portal with sensitive information and reported it. Security wasn't auditing for those, since that's not on their checklist.

      • dfsegoat 2 days ago

        I feel this. Recently implemented a very trivial “otp to sign an electronic document” function in our app.

        Security heard “otp” and forced us through a 2-month security/architecture review process for this sign-off feature that we built with COTS libraries in a single sprint.

        • malfist 2 days ago

          Oh I know that feeling. We got in hot water because the codes were 6 digits long and security decided we needed to make them eight digits.

          We pushed back and initially they agreed with us and gave us an exception, but about a year later some compliance audit told them it was no longer acceptable and we had to change it ASAP. About a year after that they told us it needed to be ten alphanumeric characters, and we did a find and replace in the code base for "verification code" and "otp" and called them verification strings, and security went away.

          • dfsegoat 3 hours ago

            Heh. We also got treated to the digit thing. That topic alone ate about 30 mins of meeting time with a VP of eng and 2 seniors in the room.

        • smarx007 2 days ago

          To be fair, I would also be alarmed, albeit not by OTP. "sign an electronic document" and "built with COTS libraries in a single sprint" is essentially begging for a security review. Signatures and their verification are non-trivial, case in point: https://news.ycombinator.com/item?id=42590307

          • talkin 2 days ago

            Nobody said you shouldn’t do any due diligence. But 1 sprint vs 2 months of review really smells like ‘processes over people’. ;)

            • normie3000 2 days ago

              A more positive view would be that the security team may have had different priorities to the product team.

              • robertlagrant a day ago

                Two months of review after the work would be a lot more useful than before.

      • throwaway2037 a day ago

            > My team where I work is responsible for sending frivolous newsletters via email and sms to over a million employees.
        
        "frivolous newsletters" -- Thank you for your honesty!

        Real question: One million employees!? Even Foxconn doesn't have one million employees. That leaves only Amazon and Walmart according to this link: https://www.statista.com/statistics/264671/top-50-companies-...

        • joseda-hg 20 hours ago

          Sending to a million employees doesn't necessarily mean they're all from the same company.

          They might be a third party service for companies to send mail to _their_ employees

      • plagiarist 2 days ago

        I have had to sit through "education" that boiled down to "don't ship your private keys in the production app." Someone needed to tick some security training checkbox, and I drew the short straw.

    • dijit 2 days ago

      This is pretty common, developers are focused on making things that work.

      Sysadmins were always the ones who focused on making things secure, and for a bunch of reasons they basically don’t exist anymore.

      EDIT: what guidelines did I break?

      • bluedino 2 days ago

        > This is pretty common, developers are focused on making things that work.

        True, but over the last twenty years, simple mistakes by developers have caused so many giant security issues.

        Part of being a developer now is knowing at least the basics on standard security practices. But you still see people ignoring things as simple as SQL injection, mainly because it's easy and they might not even have been taught otherwise. Many of these people can't even read a Python error message so I'm not surprised.

        And your cybersecurity department likely isn't auditing source code. They are just making sure your software versions are up to date.

        • bt1a a day ago

          And many of these people haven't debugged anything more complex than a Python error message. (Tastelessly jabbing at needing to earn your marks by slamming into segfaults and pushing gdb.)

      • smarx007 2 days ago

        I don't think you broke any (did not downvote). But you wrote something along the lines of "Sysadmins were always the ones who focused on making things secure, and for a bunch of reasons they basically don’t exist anymore. I guess this is fine." before you edited the last bit out. I think those who downvoted you think that this is plain wrong.

        I guess it's fine if you get rid of sysadmins and have devs splitting their focus across dev, QA, sec, and ops. It's also fine if you have devs focus on dev, QA, and the code part of sec, and sysadmins focus on ops and the network part of sec. Bottom line is - someone needs to focus on sec :) (and on QAing and DBAing)

      • harrall a day ago

        Sometimes when you work less rigidly as a team, covering for others when it’s convenient for you, everyone gets more things done with less stress and less trouble.

        And you go home at 5pm and had a good work day.

      • ocdtrekkie 2 days ago

        I suspect you'll find a lot of intersection between the move to "devops" outfits who "don't need IT anymore" and "there's a lot more security breaches now", but hey, everyone's making money so who cares?

    • queuebert 2 days ago

      > Ironically enough even cybersecurity doesn't catch them for it, they are too busy harassing other teams about out of date versions of services that are either not vulnerable, or already patched but their scanning tools don't understand that.

      Wow, this really hits home. I spend an inordinate amount of time dealing with false positives from cybersecurity.

    • nitwit005 11 hours ago

      There are certainly people that don't care about security out there, but the biggest issue is just how much people are expected to know.

      Docker, AWS, Kubernetes, some wrapper they've put around Kubernetes, a bunch of monitoring tools, etc.

      And none of it will be their main job, so they'll just try to get something working by copying a working example, or reading a tutorial.

    • ropable a day ago

      "Everybody gangsta 'bout infosec until their machine is cryptolockered." (some CISO, probably).

    • calvinmorrison 2 days ago

      Turns out devsecops was just a layoff scheme for sysadmins

  • veyh 2 days ago

    I wonder how many people realize you can use the whole 127.0.0.0/8 address space, not just 127.0.0.1. I usually use a random address in that space for all of a specific project's services that need to be exposed, like 127.1.2.3:3000 for web and 127.1.2.3:5432 for postgres.
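
    A rough sketch of that pattern (the address, images and password are just examples; on Linux the whole 127.0.0.0/8 range answers on loopback without extra setup):

        $ docker run -d -e POSTGRES_PASSWORD=change-me -p 127.1.2.3:5432:5432 postgres:16
        $ docker run -d -p 127.1.2.3:3000:3000 my-web-app
        $ psql -h 127.1.2.3 -p 5432 -U postgres   # reachable locally, not from the network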

    • eadmund a day ago

      Be aware that there is an effort to repurpose most of 127.0.0.0/8: https://www.ietf.org/archive/id/draft-schoen-intarea-unicast...

      It’s well-intentioned, but I honestly believe that it would lead to a plethora of security problems. Maybe I am missing something, but it strikes me as on the level of irresponsibility of handing out guardless chainsaws to kindergartners.

      • pepa65 a day ago

        That is awful and I hope it will never pass. It would be a security nightmare. If passed, it should lead to a very wide review of all software using 127/8, and that will never be comprehensive...

    • 9dev 2 days ago

      Also, many people don’t remember that those zeros in between numbers in IPs can be slashed, so pinging 127.1 works fine. This is also the reason why my home network is a 10.0.0.0/24—don’t need the bigger address space, but reaching devices at 10.1 sure is convenient!

      • diggan a day ago

        I had no idea about this, and been computing for almost 20 years now, thanks!

        Trying to get ping to ping `0.0.0.0` was interesting

            $ ping -c 1 ""
            ping: : Name or service not known
        
            $ ping -c 1 "."
            ping: .: No address associated with hostname
        
            $ ping -c 1 "0."
            ^C
        
            $ ping -c 1 ".0"
            ping: .0: Name or service not known
        
            $ ping -c 1 "0"
            PING 0 (127.0.0.1) 56(84) bytes of data.
            64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms
        
            $ ping -c 1 "0.0"
            PING 0.0 (127.0.0.1) 56(84) bytes of data.
            64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms

    • jerf 2 days ago

      Also a great way around code that tries to block you from hitting resources local to the box. Lots of code out there in the world blocking the specific address "127.0.0.1" and maybe if you were lucky "localhost" but will happily connect to 127.6.243.88 since it isn't either of those things. Or the various IPv6 localhosts.

      Relatedly, a lot of systems in the world either don't block local network addresses, or block an incomplete list, with 172.16.0.0/12 being particularly poorly known.

    • number6 2 days ago

      TIL. I always thought it was /32.

  • dawnerd 2 days ago

    We’ve found this out a few times when someone inexperienced with docker would expose a redis port and run docker compose up on a public accessible machine. Would only be minutes until that redis would be infected. Also blame redis for having the ability to run arbitrary code without auth by default.

  • spr-alex a day ago

    `-p 127.0.0.1:` might not offer all of the protections you would expect, which is arguably a bug in Docker's firewall rules that they're failing to address. They instead choose to say "hey, we don't protect against L2", and have an open issue here: https://github.com/moby/moby/issues/45610.

    This secondary issue with Docker is a bit more subtle: they don't respect the bind address when they do the forwarding into the container. The end result is that machines one hop away can forward packets into the Docker container.

    For a home user the impact could be that the ISP can reach into the container. Depending on risk appetite this can be a concern (Salt Typhoon going after ISPs).

    More commonly it might end up exposing more isolated work-related systems to adjacent networks one hop away.

    • smarx007 a day ago

      What about cloud VMs? I would love to read more about "they don't respect the bind address when they do forwarding into the container" and "machines one hop away can forward packets into the docker container" if you could be so kind!

      Upd: thanks for the link, looks quite bad. I am now thinking that an adjacent VM in a provider like Hetzner or Contabo could be able to pull it off. I guess I will have to finally switch my remaining Docker installations to Podman and/or resort to https://firewalld.org/2024/11/strict-forward-ports

      • spr-alex a day ago

        I can't speak to Hetzner or Contabo. I have tested this attack on AWS and GCP a while back and their L2 segmentation was solid. VMs/containers should be VLAN'd across customers/projects on most mature providers. On some it may not be, though.

        If you care about defense in depth, it may be worth checking L2 forwarding within a project for unexpected pivots an attacker could use. We've seen this come up in pentests.

        I work on SPR, we take special care in our VPN to avoid these problems as well, by not letting docker do the firewalling for us. (one blog post on the issue: https://www.supernetworks.org/pages/blog/docker-networking-c...).

        As an aside, there's a closely related issue with one-hop attacks via conntrack as well, which we locked down in October.

  • anthropodie 2 days ago

    And this was one of the reasons why I switched to Podman. I haven't looked back since.

    • MortyWaves 2 days ago

      I want to use Podman, but I keep reading that the team considers podman-compose a crappy workaround they don’t really want to keep.

      This is daunting because:

      Take 50 random popular open source self-hostable solutions and the instructions are invariably: normal bare installation or docker compose.

      So what’s the ideal setup when using podman? Use compose anyway and hope it won’t be deprecated, or use SystemD as Podman suggests as a replacement for Compose?

      • diggan 2 days ago

        > So what’s the ideal setup when using podman? Use compose anyway and hope it won’t be deprecated, or use SystemD as Podman suggests as a replacement for Compose?

        After moving from bare to compose to docker-compose to podman-compose and bunch of things in-between (homegrown Clojure config-evaluators, ansible, terraform, make/just, a bunch more), I finally settled on using Nix for managing containers.

        It's basically the same as docker-compose except you get to do it with proper code (although Nix :/ ) and, as an extra benefit, get to avoid YAML.

        You can switch the backend/use multiple ones as well, and it's relatively easy to configure as long as you can survive learning the basics of the language: https://wiki.nixos.org/wiki/Docker

        • 0xCMP 2 days ago

          Of course, that means you need to run NixOS for that to work (which I also do everywhere) and there are networking problems with Docker/Podman in NixOS you need to address yourself. Whereas Docker "runs anywhere" these days.

          Worth noting the tradeoffs, but I agree using Nix for this makes life more pleasant and easy to maintain.

          • diggan 2 days ago

            > that means you need to run NixOS for that to work

            Does it? I'm pretty sure you're able to run Nix (the package manager) on Arch Linux for example, and I'm also pretty sure you can do that on things like macOS too, though that I haven't tested myself.

            Or maybe something regarding this has changed recently?

            • 0xCMP 2 days ago

              Sorry, yes, building it is fine, but managing them with Nix (e.g. dealing with which ports to expose, etc., like in the article) requires NixOS.

              edit: I actually never checked, but I guess nothing stops home-manager or nix-darwin from working too, but I don't think either supports running containers by default. EOD all NixOS does is make a systemd service which runs `docker run ..` for you.

          • libeclipse 2 days ago

            You don't need NixOS to use Nix as a package manager/build system

            • brnt 2 days ago

              If you configure your server(s) through nix and nix containers, then even without another host OS you are basically running nix.

      • anthropodie 2 days ago

        Podman supports Kubernetes YAML or the quadlets option. It's fairly easy to convert docker-compose to one of these.

        Nowadays I just ask genAI to convert docker-compose to one of the above options and it almost always works.

      • thedanbob 2 days ago

        I use docker compose for development because it's easy to spin up an entire project at once. Tried switching to podman compose but it didn't work out of the box and I wasn't motivated to fix it.

        For "production" (my homelab server), I switched from docker compose to podman quadlets (systemd) and it was pretty straightforward. I actually like it better than compose because, for example, I can ensure a container's dependencies (e.g. database, filesystem mounts) are started first. You can kind of do that with compose but it's very limited. Also, systemd is much more configurable when it comes to dealing with service failures.
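
        For anyone curious, a minimal sketch of what a rootless quadlet looks like (the unit names, image and port are placeholders, and `postgres.service` is assumed to be another quadlet-generated unit):

            $ cat ~/.config/containers/systemd/whoami.container
            [Unit]
            Description=Example web container
            Requires=postgres.service
            After=postgres.service

            [Container]
            Image=docker.io/traefik/whoami:latest
            PublishPort=127.0.0.1:8080:80

            [Install]
            WantedBy=default.target

            $ systemctl --user daemon-reload
            $ systemctl --user start whoami.service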

      • Cyph0n 2 days ago

        There is a third option: enable the Docker socket and use Docker Compose as usual.

        https://github.com/containers/podman/blob/main/docs/tutorial...

        • mschuster91 2 days ago

          Docker Compose would not prevent you from doing a "publish port to 0.0.0.0/0", it's not much more than a (very convenient) wrapper around "docker build" and "docker run".

          And many, if not almost all, examples of docker-compose descriptor files don't care about that. Images that use different networks for exposed services and backend services (db, redis, ...) are the rare exception.

          • Cyph0n 2 days ago

            Are you sure about that? Because I was under the impression that these firewall rules are configured by Docker. So if you use Docker Compose with Podman emulating the Docker socket, this shouldn’t happen.

            Maybe someone more knowledgeable can comment.

      • somebehemoth 2 days ago

        Rootless Podman running services with quadlets is not a bad start.

        • smarx007 2 days ago

          Is there a tool/tutorial that assumes that I already have a running docker compose setup instead of starting with some toy examples? Basically, I am totally excited about using the systemd that I already have on my system instead of adding a new daemon/orchestrator, but I feel that the gap between quadlet 101 and migrating quite a complex docker compose YAML to podman/quadlet is quite large.

          • somebehemoth 2 days ago

            There was no such tool when I learned how to do this. Quadlet is relatively new (podman 5) so lots of podman/systemd documentation refers to podman commands that generate systemd unit files. I agree there is a gap.

          • anthropodie 18 hours ago

            Search for podlet. It lets you do what you want.

        • pahae 2 days ago

          Quadlets are pretty nice but require podman > 4.4 to function properly. Debian 12, for example, still only has podman ~4.3 in its repos.

        • Quizzical4230 2 days ago

          I'm still using systemd. Podman keeps telling me to use quadlets :)

      • eadmund a day ago

        Honestly, I just use a small k8s cluster, and convert the docker compose config to k8s config.

  • geye1234 2 days ago

    I am not a security person at all. Are you really saying that it could potentially cause iptables to open ports without an admin's knowing? Is that shockingly, mind-bogglingly bad design on Docker's part, or is it just me?

    Worse, the linked bug report is from a DECADE ago, and the comments underneath don't seem to show any sense of urgency or concern about how bad this is.

    Have I missed something? This seems appalling.

    • smarx007 2 days ago

      Your understanding is correct, unfortunately. Not only that, the developers are also reluctant to make 127.0.0.1:####:#### the default in their READMEs and docker-compose.yml files because UsEr cOnVeNiEnCe, e.g. https://github.com/louislam/uptime-kuma/pull/3002 closed WONTFIX

      • geye1234 2 days ago

        Amazing. I just don't know what to say, except that anyone who doesn't know how to open a firewall port has no business running Docker, or trying to understand containerization.

        As someone says in that PR, "there are many beginners who are not aware that Docker punches the firewall for them. I know no other software you can install on Ubuntu that does this."

        Anyone with a modicum of knowledge can install Docker on Ubuntu -- you don't need to know a thing about ufw or iptables, and you may not even know what they are. I wonder how many machines now have ports exposed to the Internet or some random IoT device as a result of this terrible decision?

    • jeroenhd a day ago

      > without an admin's knowing

      For people unfamiliar with Linux firewalls or the software they're running: maybe. First of all, Docker requires admin permissions, so whoever is running these commands already has admin privileges.

      Docker manages its own iptables chain. If you rely on something like UFW that works by using default chains, or its own custom chains, you can get unexpected behaviour.

      However, there's nothing secret happening here. Just listing the current firewall rules should display everything Docker permits and more.

      Furthermore, the ports opened are the ones declared in the command line (-p 1234) or in something like docker-compose declarations. As explained in the documentation, not specifying an IP address will open the port on all interfaces. You can disable this behaviour if you want to manage it yourself, but then you would need some kind of scripting integration to deal with the variable behaviour Docker sometimes has.

      From Docker's point of view, I sort of agree that this is expected behaviour. People finding out afterwards often misunderstand how their firewall works, and haven't read or fully understood the documentation. For beginners, who may not be familiar with networking, Docker "just works" and the firewall in their router protects them from most ills (hackers present in company infra excluded, of course).

      Imagine having to adjust your documentation to go from "to try out our application, run `docker run -p 8080 -p 1234 some-app`" to "to try out our application, run `docker run -p 8080 -p 1234 some-app`, then run `nft add rule ip filter INPUT tcp dport 1234 accept;nft add rule ip filter INPUT tcp dport 8080 accept;` if you use nftables, or `iptables -A INPUT -p tcp --dport 1234 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT; iptables -A INPUT -p tcp --dport 8080 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT` if you use iptables, or `sudo firewall-cmd --add-port=1234/tcp;sudo firewall-cmd --add-port=8080/tcp; sudo firewall-cmd --runtime-to-permanent` if you use firewalld, or `sudo ufw allow 1234; sudo ufw allow 8080` if you use UFW, or if you're on Docker for Windows, follow these screenshots to add a rule to the firewall settings and then run the above command inside of the Docker VM". Also don't forget to remove these rules after you've evaluated our software, by running the following commands: [...]

      Docker would just not gain any traction as a cross-platform deployment model, because managing it would be such a massive pain.

      The fix is quite easy: just bind to localhost (specify `-p 127.0.0.1:1234:1234` instead of `-p 1234:1234`) if you want to run stuff on your local machine, or an internal IP that's not routed to the internet if you're running this stuff over a network. Unfortunately, a lot of developers publishing their Docker containers don't tell you to do that, but in my opinion that's more of a software product problem than a Docker problem. In many cases, I do want applications to be reachable on all interfaces, and having to specify each and every one of them (especially scripting that with the occasional address changes) would be a massive pain.

      For this article, I do wonder how this could've happened. For a home server to be exposed like that, the server would need to be hooked to the internet without any additional firewalls whatsoever, which I'd think isn't exactly typical.

    • yencabulator a day ago

      > Is that shockingly, mind-bogglingly bad design on Docker's part, or is it just me?

      This is the default for most aspects of Docker. Reading the source code & git history is a revelation of how badly things can be done, as long as you burn VC money for marketing. Do yourself a favor and avoid all things by that company / those people, they've never cared about quality.

    • tomjen3 a day ago

      To run Docker, you need to be an admin or in the Docker group, which warns you that it is equivalent to having sudo rights, AKA be an admin.

      As for it not being explicitly permitted, no ports are exposed by default. You must provide the docker run command with -p, for each port you want exposed. From their perspective, they're just doing exactly what you told them to do.

      Personally, I think it should default to giving you an error unless you specified what IPs to listen on, but this is far from as big of an issue as people make it out to be.

      The biggest issue is that it is a ginormous foot gun for people who don't know Docker.

      • diggan a day ago

        I don't remember the particular syntax, but isn't there a difference between binding a port on the address the container runs on vs. binding a port on the host address?

        Maybe it's the difference between "-P" and "-p", or specifying both ports ("8080:8080") instead of just "8080", but there is a difference, especially since one wouldn't be reachable outside of your machine and the other one would be, in the worst case binding to 0.0.0.0.

        • johntash a day ago

          You can specify the interface address to listen on, like "127.0.0.1:8080:8080" or "192.168.1.100:8080:8080". I have a lot of containers exposed like this but bind specifically to a vpn ip on the host so that they don't get exposed externally by default.

          • diggan 19 hours ago

            The trouble is that docker seems to default to using 0.0.0.0, so if you do `docker run -it -p 8080 node:latest` for example, now that container accepts incoming connections on port :32768 or whatever docker happens to assign it, which is bananas default behavior.
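
            A quick way to see what actually got published (the container name and host port below are illustrative; Docker picks them):

                $ docker run -it -p 8080 node:latest
                # in a second terminal:
                $ docker ps --format '{{.Names}}\t{{.Ports}}'
                suspicious_wing    0.0.0.0:32768->8080/tcp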

        • tomjen3 15 hours ago

          -p exposes the port from the container on a specific port on the host machine. -P does the same, but for all ports listed as exposed in the container.

          If you just run a container, it will expose zero ports, regardless of any config made in the Docker image or container.

          The way you're supposed to use Docker is to create a Docker network, attach the various containers there, and expose only the ports on specific containers that you need external access to. All containers on the same network can connect to each other, with zero externally exposed ports.
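
          A rough sketch of that pattern (the names, images and password are placeholders):

              $ docker network create backend
              $ docker run -d --name db --network backend -e POSTGRES_PASSWORD=change-me postgres:16
              $ docker run -d --name app --network backend -p 127.0.0.1:8080:8080 my-app
              # "app" reaches "db" by name over the backend network; only 8080 is
              # published on the host, and only on loopback.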

          The trouble is just that this is not really explained well for new users, and so ends up being that aforementioned foot gun.

  • disambiguation a day ago

    Interesting trick, it was definitely an "Oh shit" moment when I learned the hard way that ports were being exposed. I think setting an internal docker network is another simple fix for this, though it complicates talking to other machines in the same firewall.

  • aaomidi 2 days ago

    Tbh I prefer not exposing any ports directly, and then throwing Tailscale on the network used by docker. This automatically protects everything behind a private network too.

    • diggan 2 days ago

      A FOSS alternative is to spin up a $5 VPS on some trusted host, then use WireGuard (FOSS FTW) to do basically exactly the same, but cheaper, without giving away control and with better privacy.

      There is a bunch of software that makes this easier than trivial too, one example: https://github.com/g1ibby/auto-vpn/

      • eadmund a day ago

        Or you can use headscale (BSD) with the Tailscale client (BSD), which is still FOSS but also very very easy to use.

    • yx827ha 2 days ago

      I agree, Tailscale FTW! You didn't even need to integrate it with docker. Just add a subnet route and everything just works. It's a great product.

    • smarx007 2 days ago

      I would love to read a write-up on that! Are you doing something like https://tailscale.com/blog/docker-tailscale-guide ?

      • aaomidi 2 days ago

        Yep! That's very similar to what I do.

        I have a tailscale container, and a traefik container. Then I use labels with all my other containers to expose themselves on Traefik.

    • Manouchehri 2 days ago

      Another option is using Cloudflare Tunnels (`cloudflared`), and stacking Cloudflare Access on top (for non-public services) to enforce authentication.

      • aaomidi 2 days ago

        Just FYI, Cloudflare closes any idle connection that's been around longer than 10 seconds.

    • bakugo 2 days ago

      Important to note that, even if you use Tailscale, the firewall punching happens regardless, so you still have to make sure you either:

      1. Have some external firewall outside of the Docker host blocking the port

      2. Explicitly tell Docker to bind to the Tailscale IP only

      • aaomidi 2 days ago

        > the firewall punching happens regardless

        Does it? I think it only happens if you specifically enumerate the ports. You do not need to enumerate the ports at all if you're using Tailscale as a container.

        • bakugo a day ago

          Oh, I didn't realize you meant running Tailscale in docker, my bad. Then yeah, that's safe.

  • wutwutwat a day ago

    From the linked issue

    > by running docker images that map the ports to my host machine

    If you start a docker container and map port 8080 of the container to port 8080 on the host machine, why would you expect port 8080 on the host machine to not be exposed?

    I don't think you understand what mapping and opening a port does if you think that when you tell docker to expose a port on the host machine that it's a bug or security issue when docker then exposes a port on the host machine...

    docker supports many network types, vlans, host attached, bridged, private, etc. There are many options available to run your containers on if you don't want to expose ports on the host machine. A good place to start: if you don't want ports exposed on the host machine, then you probably should not start your docker container up with host networking and a port exposed on that network...

    Regardless of that, your container host machines should be behind a load balancer w/ firewall and/or a dedicated firewall, so containers poking holes (because you told them to and then got mad at it) shouldn't be an issue

    • rpcope1 a day ago

      I think the unintuitive thing is that by "port mapping", Docker is doing DNAT which doesn't trigger the input firewall rules. Unless you're relatively well versed in the behavior of iptables or nftables, you probably expect the "port mapping" to work like a regular old application proxy (which would obey firewall rules blocking all inputs) and not use NAT and firewall rules (and all of the attendant complexity that brings).
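
      If you want to see it for yourself, published ports show up as DNAT rules in the nat table's DOCKER chain rather than as input rules (a sketch; requires root):

          $ sudo iptables -t nat -L DOCKER -n   # DNAT rules for published container ports live here...
          $ sudo iptables -L INPUT -n           # ...so they never hit the INPUT chain your firewall rules filter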

  • josephcsible 2 days ago

    It only exposes ports if you pass the command-line flag that says to do so. How is that "without asking your permission"?

    • dizhn 9 hours ago

      It should have the proxy set up but leave opening the port to the user.

      No other server software that I know of touches the firewall to make its own services accessible. Though I am aware that the word being used is "expose". I personally only have private IPs on my docker hosts when I can and access them with wireguard.

  • adriancr 2 days ago

    Securing it is straightforward; too bad it's not the default: https://docs.docker.com/engine/network/packet-filtering-fire...
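
    The documented hook is the DOCKER-USER chain, which is evaluated before Docker's own forwarding rules. A hedged sketch (the interface name and allowed subnet are assumptions about your setup):

        # drop traffic forwarded to containers that arrives on the external
        # interface unless it comes from the local subnet
        $ sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP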

    • smarx007 2 days ago

      Do I understand the bottom two sections correctly? If I am using ufw as a frontend, I need to switch to firewalld instead and modify the 'docker-forwarding' policy to only forward to the 'docker' zone from loopback interfaces? Would be good if the page described how to do it, esp. for users who are migrating from ufw.

      More confusingly, firewalld has a different feature to address the core problem [1] but the page you linked does not mention 'StrictForwardPorts' and the page I linked does not mention the 'docker-forwarding' policy.

      [1]: https://firewalld.org/2024/11/strict-forward-ports

      • jeroenhd a day ago

        UFW and Docker don't work well together. Both of them call iptables (or nftables) in a way that assumes they're in control of most of the firewall, which means they can conflict or simply not notice each other's rules. For instance, UFW's rules to block all traffic get overridden by Docker's rules, because there is no active block rule (that's just the default, normally) and Docker just added a rule. UFW doesn't know about firewall chains it didn't create (even though it probably should start listing Docker ports at some point, Docker isn't exactly new...) so `ufw status` will show you only the manually configured UFW rules.

        What happens when you deny access through UFW and permit access through Docker depends entirely on which of the two firewall services was loaded first, and software updates can cause them to reload arbitrarily so you can't exactly script that easily.

        If you don't trust Docker at all, you should move away from Docker (e.g. to podman) or from UFW (e.g. to firewalld). This can be useful on hosts where multiple people spawn containers, so others won't mess up and introduce risks outside of your control as a sysadmin.

        If you're in control of the containers that get run, you can prevent container from being publicly reachable by just not binding them to any public ports. For instance, in many web interfaces, I generally just bind containers to localhost (-p 127.0.0.1:8123:80 instead of -p 80) and configure a reverse proxy like Nginx to cache/do permission stuff/terminate TLS/forward requests/etc. Alternatively, binding the port to your computer's internal network address (-p 192.168.1.1:8123:80 instead of -p 80) will make it pretty hard for you to misconfigure your network in such a way that the entire internet can reach that port.

        Another alternative is to stuff all the Docker containers into a VM without its own firewall. That way, you can use your host firewall to precisely control what ports are open where, and Docker can do its thing on the virtual machine.

      • adriancr a day ago

        I'm not sure about ufw/firewalld. Maybe docs aren't clear there either

        I configured iptables and had no trouble blocking WAN access to docker...

        In addition to that, there's the default bind address in daemon.json, plus specifying bindings to localhost directly in compose / manually.

  • plagiarist 2 days ago

    The ports thing is what convinced me to transition to Podman. I don't need a container tool doing ports on my behalf.

    Why am I running containers as a user that needs to access the Docker socket anyway?

    Also, shoutout to the teams that suggest easy setup running their software in a container by mounting the Docker socket into its file system.

  • znpy 2 days ago

    I avoid most docker problems by running unprivileged containers via rootless podman, on a rocky-linux based host with selinux enabled.

    At this point docker should be considered legacy technology, podman is the way to go.

    • diggan 2 days ago

      Would that actually save you in this case? OP had their container exposed to the internet, listening for incoming remote connections. Wouldn't matter in that case if you're running an unprivileged container, podman, rocky-linux or with selinux, since everything is just wide open at that point.

      • smarx007 2 days ago

        I think Podman does not punch holes in the firewall the way Docker does. I.e., to expose a container on port 8080 on the WAN in Podman, you need to both expose 8080:8080 and use, for example, firewalld to open port 8080. Which I consider correct behaviour.
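
        In practice, something like this (a sketch assuming firewalld and a placeholder image):

            $ podman run -d -p 8080:8080 my-app         # binds 0.0.0.0:8080, but the firewall still blocks it
            $ sudo firewall-cmd --add-port=8080/tcp     # open it at runtime
            $ sudo firewall-cmd --runtime-to-permanent  # persist the rule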

        • diggan 2 days ago

          Sure, but the issue here wasn't that the default behavior surprised OP. OP needed a service that was accessible from a remote endpoint, so they needed to have some connection open. They just (for some reason) chose to do it over the public internet instead of a private network.

          But regardless of software used, it would have led to the same conclusion, a vulnerable service running on the open internet.

      • dboreham 2 days ago

        I think it's more about whether traffic is bound to localhost or a routable interface. Podman has different behavior vs Docker.

        • smarx007 2 days ago

          I think exposing 8080:8080 would result in sockets bound to 0.0.0.0:8080 in either Docker or Podman. You still need 127.0.0.1:8080:8080 for the socket binding to be 127.0.0.1:8080 in Podman. The only difference is that Podman would not punch holes in the firewall after binding on 0.0.0.0:8080, thus preventing an unintended exposure given that the firewall is set up to block all incoming connections except on 443, for example.

          Edit: just confirmed this to be sure.

              $ podman run --rm -p 8000:80 docker.io/library/nginx:mainline
              $ podman ps 
              CONTAINER ID  IMAGE                             COMMAND               CREATED         STATUS         PORTS                 NAMES
              595f71b33900  docker.io/library/nginx:mainline  nginx -g daemon o...  40 seconds ago  Up 41 seconds  0.0.0.0:8000->80/tcp  youthful_bouman
              $ ss -tulpn | rg 8000
          
              tcp   LISTEN 0      4096                                          *:8000             *:*    users:(("rootlessport",pid=727942,fd=10))

  • globular-toast 2 days ago

    This is only an issue if you run Docker on your firewall, which you absolutely should not.

    • Volundr 2 days ago

      Do you not run firewalls on your internal facing machines to make sure they only have the correct ports exposed?

      Security isn't just an at the edge thing.

      • globular-toast a day ago

        No. That would be incredibly annoying and it's probably why docker overrides it as it would cause all manner of confusion.

        • Volundr 17 hours ago

          You really, really should. Just because someone is inside your network is no reason to just give them the keys to the kingdom.

          And I don't see any reason why having to allow a postgres or apache or whatever run through Docker through your firewall is any more confusing than allowing them through your firewall when installed via APT. It's more confusing that the firewall DOESN'T protect Docker services like everything else.

    • smarx007 2 days ago

      Ideally, yes. But in reality, this means that if you just want to have 1 little EC2 VM on AWS running Docker, you now need to create a VM, a VPC, an NLB/ALB in front of the VPC ($20/mo+, right?) and assign a public IP address to that LB instead. For a VM like t4g.nano, it could mean going from a $3/mo bill to $23/mo ($35 in case of a NAT gateway instead of an LB?) bill, not to mention the hassle of all that setup. Hetzner, on the other hand, has a free firewall included.

      • coder543 2 days ago

        Your original solution of binding to 127.0.0.1 generally seems fine. Also, if you're spinning up a web app and its supporting services all in Docker, and you're really just running this on a single $3/mo instance... my unpopular opinion is that docker compose might actually be a fine choice here. Docker compose makes it easy for these services to talk to each other without exposing any of them to the outside network unless you intentionally set up a port binding for those services in the compose file.
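
        A minimal sketch of that (the image names and password are placeholders): the database gets no `ports:` entry, so it is only reachable on the compose network, while only the web service is published, and only on loopback.

            $ cat docker-compose.yml
            services:
              web:
                image: my-web-app
                ports:
                  - "127.0.0.1:8080:8080"
                depends_on:
                  - db
              db:
                image: postgres:16
                environment:
                  POSTGRES_PASSWORD: change-me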

        • evantbyrne 2 days ago

          You should try swarm. It solves a lot of challenges that you would otherwise have while running production services with compose. I built rove.dev to trivialize setup and deployments over SSH.

          • coder543 2 days ago

            What does swarm actually do better for a single-node, single-instance deployment? (I have no experience with swarm, but on googling it, it looks like it is targeted at cluster deployments. Compose seems like the simpler choice here.)

            • evantbyrne 2 days ago

              Swarm works just as well in a single host environment. It is very similar to compose in semantics, but also does basic orchestration that you would have to hack into compose, like multiple instances of a service and blue/green deployments. And then if you need to grow later, it can of course run services on multiple hosts. The main footgun is that the Swarm management port does not have any security on it, so that needs to be locked down either with rove or manual ufw config.

          • smarx007 2 days ago

            Interesting, in my mind Swarm was more or less dead and the next step after docker+compose or podman+quadlet was k3s. I will check out Rove, thanks!

      • sigseg1v 2 days ago

        In AWS why would you need a NLB/ALB for this? You could expose all ports you want all day from inside the EC2 instance, but nobody is going to be able to access it unless you specifically allow those ports as inbound in the security group attached to the instance. In this case you'd only need a load balancer if you want to use it as a reverse proxy to terminate HTTPS or something.

        • smarx007 2 days ago

          TIL, thank you! I used such security groups with OpenStack and OCI but somehow didn't think about them in connection with EC2.

      • Fnoord 2 days ago

        There's no good reason a VM or container on Hetzner cannot use a firewall like iptables. If that makes the service too expensive, you increase cost or otherwise lower resources. A firewall is a very simple, essential part of network security. Every simple IoT device running Linux can run iptables, too.

        • smarx007 2 days ago

          I guess you did not read the link I posted initially. When you set up a firewall on a machine to block all incoming traffic on all ports except 443 and then run docker compose exposing port 8000:8000 and put a reverse proxy like caddy/nginx in front (e.g. if you want to host multiple services on one IP over HTTPS), Docker punches holes in the iptables config without your permission, making both ports 443 and 8000 open on your machine.

          @globular-toast was not suggesting an iptables setup on a VM, instead they are suggesting to have a firewall on a totally different device/VM than the one running docker. Sure, you can do that with iptables and /proc/sys/net/ipv4/ip_forward (see https://serverfault.com/questions/564866/how-to-set-up-linux...) but that's a whole new level of complexity for someone who is not an experienced network admin (plus you now need to pay for 2 VMs and keep them both patched).

          • Fnoord 2 days ago

            Either you run a VM inside the VM or indeed two VMs. Jumphost does not require a lot of resources.

            The problem here is the user does not understand that exposing 8080 on an external network means it is reachable by everyone. If you use an internal network between database and application, cache and application, application and reverse proxy, and put proper auth on the reverse proxy, you're good to go. Guides do suggest this. They even explain LE for the reverse proxy.

        • akerl_ 2 days ago

          Docker by default modifies iptables rules to allow traffic when you use the options to launch a container with port options.

          If you have your own firewall rules, docker just writes its own around them.

          • Fnoord 2 days ago

            I always have to define 'external: true' on the network. Which I don't do with databases. I link it to an internal network, shared with the application. You can do the same with your web application, thereby only needing auth on the reverse proxy. Then you use whitelisting on that port, or you use a VPN. But I also always use a firewall that the OCI daemon does not have root access to.

            • 01HNNWZ0MV43FF 2 days ago

              I thought "external" referred to whether the network was managed by compose or not

              • Fnoord 2 days ago

                Yeah, true, but I have set it up in such a way that such a network is an exposed bridge whereas the other networks created by docker-compose are not. It isn't even possible to reach these from outside. They're not routed, and each of these backends uses the standard Postgres port, so with 1:1 NAT it'd give errors. Even on 127.0.0.1 it does not work:

                    $ nc 127.0.0.1 5432 && echo success || echo no success
                    no success

                Example snippet from docker-compose:

                DB/cache (e.g. Postgres & Redis, in this example Postgres):

                    [..]
                    ports:
                      - "5432:5432"
                    networks:
                      - backend
                    [..]
                
                App:

                    [..]
                    networks:
                      - backend
                      - frontend
                    [..]
                
                    networks:
                      frontend:
                        external: true
                      backend:
                        internal: true

                • akerl_ 2 days ago

                  Nobody is disputing that it is possible to set up a secure container network. But this post is about the fact that the default docker behavior is an insecure footgun for users who don’t realize what it’s doing.

rpadovani 2 days ago

> "None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet."

This prompts a reflection that, as an industry, we should do a better job of providing solid foundations.

When I check tutorials on how to drill in the wall, there is (almost) no warning about how I could lose a finger doing so. It is expected that I know I should be careful around power tools.

How do we make some information part of common sense? "Minimize the surface of exposure on the Internet" should be drilled into everyone, but we are clearly not there yet.

  • V__ 2 days ago

    I don't think it's that unreasonable for a database guide not to mention it. This is more of a general server/docker security thing. Just as I wouldn't expect an application guide to tell me not to use windows xp because it's insecure.

    Most general Docker guides, on the other hand, mention not to expose containers directly to the internet, and if a container has to be exposed to do so behind a reverse proxy.

    • diggan 2 days ago

      > if a container has to be exposed to do so behind a reverse proxy.

      I see this mentioned everywhere in the comments here, but they seem to miss that the author explicitly wanted it to be exposed, and the compromise would have happened regardless of whether the traffic went directly to the container or via a reverse proxy.

      The proper fix for OP is to learn about private networks, not put a reverse proxy in front and still leave it running on the public internet...

  • heresie-dabord 2 days ago

    > as an industry, we should do a better job of providing solid foundations.

    Here is the fundamental confusion: programming is not an industry, it is a (ubiquitous) type of tooling used by industries.

    Software itself is insecure in its tooling and in its deployment. So we now have a security industry struggling to improve software.

    Some software companies are trying to improve but software in the $cloud is just as big a mess as software on work devices and personal devices.

  • johnchristopher 2 days ago

    > When I check tutorials on how to drill in the wall, there is (almost) no warning about how I could lose a finger doing so. It is expected that I know I should be careful around power tools.

    I think the analogy and the example work better when the warning is that you should be careful when drilling in walls because there may be an electrical wire that will be damaged.

    • malfist 2 days ago

      To your point, guides don't warn too much about electrical wires because building codes and practices make it really hard to hit one. Code requires metal plates where electrical wires go through studs so you can't drill into them, and every stud finder in existence these days also detects AC behind them.

      We didn't make the guides better, we made the tradespeople make it so any novice can't burn down the house by not following a poorly written tutorial.

      • diggan 2 days ago

        > We didn't make the guides better

        That sucks, because that means anything not built to that standard (which I guess is a US one?) could lead the person to hurt themselves/the house.

        One doesn't exclude the other, and most likely both are needed if you're aiming to actually eliminate the problem as well as you can.

      • kQq9oHeAz6wLLS 2 days ago

        > every stud finder in existence these days

        Slightly pedantic point of order: you mean to say every stud finder for sale these days, not in existence, for the old stud finders still exist.

        Okay, that's all. Carry on.

        • diggan 2 days ago

          If we're being pedantic, then I'd say "old stud finders" are still being sold (second hand for example), so "every stud finder for sale these days" isn't correct either.

          Best to just say "most" or "some" to cover all corner cases :)

    • kevindamm 2 days ago

      or, if not sealed up properly, provides an avenue for pests to crawl through.

  • mihaaly 2 days ago

    Probably the "we can do everything and anything right now, easy peasy, for serious use or just for the heck of it" attitude needs to be dialed down. The industry promises the heavens with devilish charm while sometimes releasing not-even-half-cooked, unnecessary garbage, with bells and whistles to distract from the poor quality: rushed, not-thought-through illusions that can chop all your imaginary limbs off in a sidestep, or the moment your attention lapses.

    Things that can cause big trouble, like table saws, have pretty elaborate protection mechanisms built in. Railings in high places, seatbelts, and safety locks come to mind as well, among countless unmentioned ones protecting those paying attention and those who are not alike. Of course, decades of serious accidents prompted these measures, and mostly it is regulated now rather than being a courtesy of the manufacturer; other industries matured to this level. Probably the IT industry still needs some growing up, and fewer children playing adults - some kicking in the ass for making such rubbish, dangerous solutions. Less magic, more down-to-earth reliability.

  • jeroenhd a day ago

    Between MongoDB running without a password by default and quick start guides brushing over anything security related, the industry can use a more security-conscious mindset.

    However, security is hard and people will drop interest in your project if it doesn't work automatically within five minutes.

    The hard part is at what experience level the warnings can stop. Surely developer documentation doesn't need the "docker exposes ports by default" lesson repeated every single time, but there are a _lot_ of "beginner" tutorials on how to set up software through containers that ignore any security stuff.

    For instance, when I Google "how to set up postgres on docker", this article was returned, clearly aimed at beginners: https://medium.com/@jewelski/quickly-set-up-a-local-postgres... This will set up an easily guessable password on both postgres and pgadmin, open to the wider network without warning. Not so bad when run on a VM or Linux computer, quite terrible when used for a small project on a public cloud host.

    The problems caused by these missing warnings are almost always the result of lacking knowledge about how Docker configures its networks, or how (Linux) firewalls in general work. However, most developers I've met don't know or care about these details. Networking is complicated beyond the bare basics and security gets in the way.

    With absolutely minimal impact on usability, all those guides that open ports to the entire internet can just prepend 127.0.0.1 to their port definitions. Everyone who knows what they're doing will remove them when necessary, and the beginners need to read and figure out how to open ports if they do want them exposed to the internet.

  • grayhatter a day ago

    That's an interesting takeaway. I just quoted the exact same line from the blog to a friend, with my response being

    > why didn't somebody stop me?!

    I'm not sure if "the industry" has a problem with relaying the reality that the internet is full of malicious people who will try to hack you.

    My takeaway was closer to: the author knew better but thought some mix of 1) no one would provide incomplete information, 2) I'm not a target, 3) containers are magic and are safe. I say that because they admit as much immediately afterwards.

    > Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.

  • tossandthrow 2 days ago

    Just like people shouldn't just buy industrial welding machines, SCUBA equipment or a parachute and "wing it" I think the same can be said here.

    As a society we already have the structures set up: the author would have been more than welcome to attend a course or a study programme in server administration that would prepare them to run their own server.

    I myself wouldn't even venture into exposing a server to the internet to maintain in my free time, and that is with a postgraduate degree in an engineering field and more than 20 years of experience.

    • kibwen 2 days ago

      > Just like people shouldn't just buy industrial welding machines, SCUBA equipment or a parachute and "wing it" I think the same can be said here.

      I find this to be extremely sad.

      Unlike welding or diving, there is no inherent physical risk to life and limb in running a server. I should be able to stand up a server and leave it running, unattended and unadministered, and then come back to it 20 years later to find it happily humming along unpwned. The fact that this isn't true isn't due to any sort of physical inevitability, it's just because we, the collective technologists, are shit at what we do.

      • darkwater 2 days ago

        No. It's not so easy, because in most cases you have to choose between security, flexibility and usability. Obviously it's not a 100% accurate framing, but generally speaking it tends to be true. Sum it up over several decades of development and you get why we cannot have something that is really, really easy to use, flexible, and secure by default.

        • Gud 2 days ago

          We do, it's called FreeBSD. In my experience, many Linux distributions also qualify. To keep a modern *nix secure and up to date is simple.

          • darkwater 2 days ago

            Which would help exactly 0 in this scenario, where someone is exposing a port directly on the Internet. Also, FreeBSD is even more niche than Linux; I doubt it would stand up to the average-user stress test.

            • Gud 2 days ago

              Absolutely it would, because jails don't do weird shit like this from the get-go. With FreeBSD, you have to deliberately open ports, not the other way around. I don't understand your second sentence. "average user stress test"??

              • diggan a day ago

                > With FreeBSD, you have to deliberately open ports

                The issue outlined in the article happened because the author deliberately opened their service to the public internet. Replacing Linux with FreeBSD wouldn't have prevented the compromise.

      • notatoad a day ago

        > Unlike welding or diving, there is no inherent physical risk to life and limb in running a server.

        Good news! There is no inherent risk to life or limb because you left your server exposed. As OP discovered, you might come back to find it running a crypto miner, and that's just really not that big of a deal. Maybe we're not all shit at what we do; rather, we have appropriately valued the seriousness of the risks involved and made the decision that locking everything down to be impossible to hack isn't actually worth the trade-offs to usability, convenience, and freedom.

        You can leave your iPad running, unattended and unadministered, for 20 years if that's what you want, and come back to find it un-pwned.

      • tossandthrow 2 days ago

        There is quite a distance from

        > stand up a server and leave it running, unattended and unadministered

        to, what was my proposition, maintain a server with active access from the internet.

        Just what you describe I do myself: I have several home servers running, but none accept incoming connections from the internet, and the attack surface is much smaller.

      • lopkeny12ko 2 days ago

        What motivates this attitude? Software, like anything else, needs to be actively maintained. This is a positive sign of technology evolution and improvement over time. To expect to run some software for 20 years without needing to apply a single security patch is ridiculous, and probably exactly the attitude that caused the author to get himself in this situation.

        • kibwen 2 days ago

          > To expect to run some software for 20 years without needing to apply a single security patch is ridiculous

          The whole point of my comment is that it's only "ridiculous" because of path dependency and the choices that we have made. There's no inherent need for this to be true, and to think otherwise is just learned helplessness.

          • oarsinsync 2 days ago

            Has any production software ever been written that didn't suffer from some kind of bug or exploit?

            I don’t think imperfection is a choice we’ve made. I think imperfection is part of our nature.

            That said, the current state of software development is absolutely a choice, and a shockingly poor one in my opinion.

          • ocdtrekkie 2 days ago

            Better security design fixes this. Sandstorm fixed this for self-hosters ten years ago (Sandstorm is designed to run unmaintained or actively malicious apps relatively safely), but people are still choosing the quick and easy path over the secure one.

            • ferfumarma 2 days ago

              This is so true.

              Sandstorm has been part of my selfhosted stack since it was a start-up, and it has worked for a decade with virtually zero attention, and no exploits I am aware of.

              If there are other hosted apps that want a really easy on-ramp for new users: packaging for sandstorm is an easy way to create one.

    • WaxProlix 2 days ago

      You can't just click a few buttons and have industrial machinery - and when you DO get it there's a ton of safety warnings on and around it. And I don't agree with your fundamental premise; self owned computing should be for everyone. It shouldn't be - at least for some subset of basics - arcane or onerous.

      • tossandthrow 2 days ago

        Like your sibling, I think you also misunderstand my statement: I do run local servers, but none are connected to the internet.

        I definitely believe it is for everyone to have a NAS, a Home Assistant box, or a NUC set up to run some docker containers.

        Just don't let them accept connections from the internet.

        For most normal home setups it is actually super hard to make them accept incoming requests, as you need to set up port forwarding or put the server in front of your router.

        The default is that the server is not reachable from the internet.

      • fullspectrumdev 2 days ago

        You absolutely can. Have you a credit card and a web browser? You can buy all sorts of heavy machinery and have it shipped to your door!

        • WaxProlix 2 days ago

          You've introduced a new element here - the credit card. And if you did have the money and whimsy it'd still show up with (regulated, mandatory, industry-standardized) safety documentation.

          • oarsinsync 2 days ago

            The credit card (or rather, money) was required to purchase the computer, much like it’s required to purchase other power tools or industrial machinery

          • fullspectrumdev 2 days ago

            I guess that depends where you order from. You can get some crazy machines from Alibaba/Aliexpress and the “documentation” they come with is usually… well it leaves a lot to be desired.

      • bennythomsson 2 days ago

        Most of the computing people have at home is some locked-down cloud crap which neither you nor an attacker can do anything with.

        It's not hackable, though, in the original sense of the word, so it's not interesting to the crowd at HN. Docker is, for everybody, good and bad.

    • fullspectrumdev 2 days ago

      I guess we have different risk tolerances.

      The best way to learn is to do. Sure, you might make some mistakes along the way, but fuck it. That’s how you learn.

    • 10729287 2 days ago

      And yet, OP here seems very comfortable with computer stuff. Can't imagine the regular Joe buying a NAS from Synology, with all the promises made by the company.

      • tossandthrow 2 days ago

        These are not, per default, exposed to the internet.

  • bennythomsson 2 days ago

    It is widely known not to expose anything to the public internet unless it's hardened and/or sandboxed. A random service you use for playing around definitely does not meet this description, and most people do know that, just like they know what a power tool can do to your fingers.

AlgebraFox 2 days ago

Tailscale is a great solution for this problem. I too run a home server with Nextcloud and other stuff, but protected behind a Tailscale (WireGuard) VPN. I can't even imagine exposing something like my family's personal data over the internet, no matter how convenient it is.

But I sympathize with OP. He is not a developer, and it is sad that whatever software engineers produce is vulnerable to script kiddies. Exposing a database or any server with a good password should not be exploitable in any way. C and C++ have been failing us for decades, yet we continue to use such unsafe stacks.

  • mattrighetti 2 days ago

    > C and C++ have been failing us for decades, yet we continue to use such unsafe stacks.

    I'm not sure — what do C and C++ have to do with this?

    • timcambrant 2 days ago

      They are not memory safe by design. See: https://xeiaso.net/blog/series/no-way-to-prevent-this/

      Of course all languages can produce insecure binaries, but C/C++ buffer overflows and similar vulnerabilities are likely what AlgebraFox refers to.

      • mattrighetti 2 days ago

        > They are not memory safe by design

        I'm aware of that, but the C/C++ thing seemed more like a rant, hence my question.

        I've searched up the malware and it doesn't seem to use memory exploitation. Rust is not going to magically protect you against any security issue caused by cloud misconfiguration.

        • timcambrant a day ago

          I think it was a rant, but still related to the post. Its point is that we need to minimize the attack surface of our infrastructure, even at home. People tend to expose services unintentionally, but what's so bad about that? After all, they are password protected.

          Well, even when these exposed services are not built to cause harm or provide admin privileges, like all software they tend not to be memory safe. This gives a lucky attacker a way in from just a single exposed port on the network. I can see where comments on memory-unsafe languages fit in here, although vulnerabilities such as XSS apply no matter what language we build software with.

        • lopkeny12ko 2 days ago

          What is the point you're trying to make here? Are you waiting for some malware that exploits a buffer overrun to infect you before conceding that C/C++ is a terrible choice for memory-safe code?

          • akerl_ 2 days ago

            It just seems totally unrelated to this post.

  • WaxProlix 2 days ago

    Thanks, I've got a homelab/server with a few layers of protection right now, but had been wanting to just move to a vpn based approach - this looks really nice and turnkey, though I dislike the requirement of using a 3P IDP. Still, promising. Cheers.

  • bennythomsson 2 days ago

    If you make a product that is so locked down by default that folks need to jump through 10 hoops before anything works, then your support forums will be full of people whining that it doesn't work, and everybody goes to the competition that is more plug and play.

    Realize why Windows still dominates Linux on the average PC desktop? This is why.

  • smpretzer 2 days ago

    I just switched to Tailscale for my home server just before the holidays and it has been absolutely amazing. As someone who knows very little about networking, it was pretty painless to set up. Can’t really speak to the security of the whole system, but I tried my best to follow best practices according to their docs.

  • luismedel 2 days ago

    There are a lot of vulnerability categories. Memory unsafety is the origin of some of them, but not all.

    You could write a similar rant about any development stack, and all your rants would be 100% unrelated to your point: never expose a home-hosted service to the internet unless you seriously know your shit.

  • krater23 2 days ago

    C and C++ are not accountable for all the evil in the world. Yes I know, some Rust evangelists want to tell us that, but most servers get owned through configuration mistakes.

  • rane 2 days ago

    What do you need Tailscale for? Why isn't Wireguard enough?

    • _heimdall 2 days ago

      There's nothing wrong with wireguard at all if you already have the hosting service available. The core value add for Tailscale is that they provide/host the service coordinating your wireguard network.

      If I'm not mistaken, there's a self-hosted alternative that lets you run the core of Tailscale's service yourself if you're interested in managing WireGuard.

      • bennythomsson 2 days ago

        What kind of "hosting service" are you referring to? Just run wireguard on the home server, or your router, and that's it. No more infra required.

        • _heimdall 2 days ago

          I meant to say hosted service there, I.e. running a wireguard server to negotiate the VPN connections.

          The main reason I haven't jumped into hosting WireGuard rather than using Tailscale is that I reach for Tailscale to avoid exposing my home server to the public internet.

          • rane a day ago

            What could be the issue with exposing WireGuard at a random port to the public internet?

            It works over UDP so it doesn't even send any acknowledgement or error response to unauthenticated or non-handshake packets.

            • _heimdall a day ago

              There may not be an issue at all, I'm just gun shy about opening any ports publicly. I don't do networking often and have never focused on it enough to feel confident in my setup and maintenance.

    • ErneX 2 days ago

      I think it’s easier to manage, plus you get ACL functionality. You can use Headscale for the control server and tailscale clients.

    • HomeDeLaPot 2 days ago

      The author mentioned closing their VPN port so people would stop trying to break in, but this also cut off the author's access.

      Tailscale allows you to connect to your home network without opening a port to allow incoming connections.

  • yobid20 a day ago

    c and c++ are not failing us and are not unsafe.

Shank 2 days ago

For all intents and purposes, the only ports you should ever forward are ones that are explicitly designed for being public facing, like TLS, HTTP, and SSH. All other ports should be closed. If you’re ever reaching for DMZ, port forwarding, etc., think long and hard about what you’re doing. This is a perfect problem for Tailscale or WireGuard. You want a remote database? Tailscale.

I even get a weird feeling these days with SSH listening on a public interface. A database server, even with a good password/ACLs, just isn’t a great safe idea unless you can truly keep on top of all security patches.
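
A rough sketch of the "remote database over Tailscale" idea, assuming Tailscale is already installed on both machines and the database isn't published on a public interface (user, database and the tailnet address are placeholders):

  # on the home server and on the laptop
  sudo tailscale up

  # on the server: find its tailnet address
  tailscale ip -4

  # on the laptop: connect over the tailnet instead of a forwarded port
  psql -h <server-tailnet-ip> -U myuser mydb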

  • downrightmike a day ago

    Good time to make sure UPnP is not enabled. It's an authentication-less protocol. Yeah, you read that right: no auth needed.

matharmin 2 days ago

I think I'm missing something here - what is specific about Docker in the exploit? Nowhere is it mentioned what the actual exploit was, and whether for example a non-containerized postgres would have avoided it.

Should the recommendation rather be "don't expose anything from your home network publicly unless it's properly secured"?

  • phoronixrly 2 days ago

    From TFA:

    > This was somewhat releiving, as the latest change I made was spinning up a postgres_alpine container in Docker right before the holidays. Spinning it up was done in a hurry, as I wanted to have it available remotely for a personal project while I was away from home. This also meant that it was exposed to the internet, with open ports in the router firewall and everything. Considering the process had been running for 8 days, this means that the infection occured just a day after creating the database. None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet. Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.

    Seems like they opened up a postgres container to the Internet (IIRC docker does this whether you want to or not, it punches holes in iptables without asking you). Possibly misconfigured authentication or left a default postgres password?

    • armsaw 2 days ago

      Docker would punch through the host firewall by default, but the database wouldn’t be accessible to the internet unless the user opened the ports on their router firewall as well, which based on the article, it sounds like they did. Making the assumption they’re using a router firewall…

      In this case, seems like Docker provided a bit of security in keeping the malware sandboxed in the container, as opposed to infecting the host (which would have been the case had the user just run the DB on bare metal and opened the same ports)

      • phoronixrly 2 days ago

        That's a bit of a stretch here... Had the attackers' goal been to escape from the docker container, they would have done it. They may even have done it; we can't know, as OP does not seem to have investigated thoroughly beyond seeing some errors and then stopping the container...

        Also, had it been a part of the host distro, postgres may have had selinux or apparmor restrictions applied that could have prevented further damage apart from a dump of the DB...

    • harrall a day ago

      Docker doesn’t expose ports by default. It only bypasses your firewall if you choose to explicitly publish a port.

      OP explicitly forwarded a port in Docker to their home network.

      OP explicitly forwarded their port on their router to the Internet.

      OP may have run Postgres as root.

      OP may have used a default password.

      OP got hacked.

      Imagine having done these same steps on a bare metal server.

      • phoronixrly a day ago

        I do imagine:

        1. postgres would have a sane default pg_hba disallowing remote superuser access.

        2. postgres would not be running as root.

        3. postgres would not have a default superuser password, as it uses peer authentication by default.

        4. If ran on a redhat-derived distro, postgres would be a subject to selinux restrictions.

        And yes, all of these can be circumvented by an incompetent admin.
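
        For reference, a distro-packaged pg_hba.conf looks roughly like this (auth methods vary by version); note that there is no rule for non-local addresses at all:

          # TYPE  DATABASE  USER      ADDRESS        METHOD
          local   all       postgres                 peer
          local   all       all                      peer
          host    all       all       127.0.0.1/32   scram-sha-256
          host    all       all       ::1/128        scram-sha-256
          # no "host ... 0.0.0.0/0 ..." line, so remote clients are rejected outright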

    • globular-toast 2 days ago

      > Seems like they opened up a postgres container to the Internet

      Yes, but so what? Getting access to a postgres instance shouldn't allow arbitrary execution on the host.

      > IIRC docker does this whether you want to or not, it punches holes in iptables without asking you

      Which is only relevant if you run your computer directly connected to the internet. That's a dumb thing to do regardless. The author probably also opened their firewall or forwarded a port to the host, which Docker cannot do.

      • echelon_musk 2 days ago

        Also from TFA:

        > it was exposed to the internet, with open ports in the router firewall

        Upvoted because you're right that the comments in this thread have nothing to do with what happened here.

        The story would have been no different if OP had created an Alpine Linux container and exposed SSH to the internet with SSH password authentication enabled and a weak password.

        It's nothing to do with Docker's firewalling.

        • 63stack a day ago

          >The story would have been no different if OP had created an Alpine Linux container and exposed SSH to the internet with SSH password authentication enabled and a weak password.

          What? The story would have been VERY different, obviously that's asking for trouble. Opening a port to your database running in a docker container is not a remote execution vulnerability, or if it is, the article is failing to explain how.

      • 63stack a day ago

        I feel like you and grandparent are the only people who read the article, because I'm wondering the same thing.

        The article never properly explains how the attack happened. Having a port exposed to the internet on any container is a remote execution vulnerability? What? How? Nobody would be using docker in that case.

        The article links to a blog post as a source on the vulnerability, but the linked post is a general "how to secure" article; there is nothing in it about remote code execution.

      • phoronixrly 2 days ago

        Are you sure about that? Last I checked pg admins had command execution on the DB host, as well as FS r/w and traversal.

        See https://www.postgresql.org/docs/current/sql-copy.html#id-1.9...

        Specifically the `filename` and `PROGRAM` parameters.

        And that is documented expected out of the box behaviour without even looking for an exploit...
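
        A minimal illustration of that documented behaviour, assuming superuser access has already been obtained (host and credentials are placeholders):

          # COPY ... FROM PROGRAM runs the command as the OS user the server runs as
          psql "host=db.example.com user=postgres password=guessed" \
            -c "CREATE TABLE IF NOT EXISTS loot(line text)" \
            -c "COPY loot FROM PROGRAM 'id'" \
            -c "TABLE loot"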

        • 63stack a day ago

          It's funny that you said TFA a few comments earlier, because you seem to have not read the article either, or are making some great leaps here.

          If the break-in had happened as you explain, the article would also mention that:

          * the attacker gained access to the postgres user or equally privileged user

          * they used specific SQL commands to execute code

          * would have not claimed the vulnerability was about docker containers and exposed ports

          And the takeaway would not be "be careful with exposing your home server to the internet", but rather "anyone with admin privileges to postgres is able to execute arbitrary code".

          • phoronixrly a day ago

            The article would only say that if OP were competent enough to determine exactly what went wrong. I did read the article; however, I do not agree with its conclusions, as simply opening a postgres port to the Internet with correctly configured authentication is not fatal (though admittedly inadvisable).

  • tommy_axle a day ago

    This is one that can sneak up on you even when you're not intentionally exposing a port to the internet. Docker manages iptables directly by default (you can disable that, but the networking between compose services will be messed up). Another common case where this can bite you is when using an iptables front-end like ufw and thinking you're exposing just the application. Unless you bind to localhost, Postgres in this case will be exposed. My recommendation is to review iptables -L directly and, where possible, use firewalls closer to the perimeter (e.g. the one from your VPS provider) instead of relying solely on iptables on the same node.
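
    A couple of quick checks along those lines (output and chain contents will differ per machine):

      # what Docker has actually published, straight from the packet filter
      sudo iptables -L DOCKER -n -v

      # what is listening on the host, and on which addresses (0.0.0.0 vs 127.0.0.1)
      sudo ss -tlnp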

    • globular-toast a day ago

      All this talk of iptables etc is really confusing. People don't use iptables rules on servers do they? Ubuntu server has the option to enable ufw but it's disabled by default because it would be a really annoying default for a server which is by definition supposed to have services. I couldn't imagine trying to wrangle firewall rules across every box on the network vs using network segregation and firewall appliances at the edges. Is there some confusion here between running docker on your dev box vs running it on a server to intentionally run network services?

      • junon a day ago

        Yes, they do. At least back when I was at ZEIT, docker definitely used iptables directly. I know this because I was patching them as part of our infra that managed Docker at the time.

acidburnNSA 2 days ago

I really like the "VPN into home first" philosophy of remote access to my home IT. I was doing OpenVPN into my DD-WRT router for years, and now it's WireGuard into OpenWrt. It's quite easy for me to VPN in first and then do whatever: check security cams, control the house via Home Assistant, print stuff, access my ZFS shared drive, run big scientific simulations or whatever on the big computer, etc. The router VPN endpoint is open to attack, but I think it's a relatively small attack surface.

  • 6ak74rfy 2 days ago

    > I think it's a relatively small attack surface.

    Plus, you can obfuscate that too by using a random port for Wireguard (instead of the default 51820): if Wireguard isn't able to authenticate (or pre-authenticate?) a client, it'll act as if the port is closed. So, a malicious actor/bot wouldn't even know you have a port open that it can exploit.
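
    A minimal sketch of the server side (keys, addresses and the port number are placeholders):

      # /etc/wireguard/wg0.conf
      [Interface]
      Address = 10.8.0.1/24
      # any non-default UDP port; forward only this UDP port on the router
      ListenPort = 48219
      PrivateKey = <server-private-key>

      [Peer]
      PublicKey = <client-public-key>
      AllowedIPs = 10.8.0.2/32

    Bring it up with wg-quick up wg0; unauthenticated probes to that port simply get no reply.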

  • rsolva 2 days ago

    I use WireGuard to access all in-home stuff as well, but there is one missing feature and one bug with the official WireGuard app for android that is inconvenient:

    - Missing feature; do not connect when on certain SSIDs.
    - Bug: When the WG connection is active and I put my phone in Flightmode (which I do every night), it drains the battery from full to almost empty during the night.

    • Mister_Snuggles 2 days ago

      > - Missing feature; do not connect when on certain SSIDs.

      I'm very surprised by this omission as this feature exists on the official iOS client.

  • Mister_Snuggles 2 days ago

    I've taken this approach as well. The WireGuard clients can be configured to make this basically transparent based on what SSID I'm connected to. I used to do similar with IPSec/IKEv2, but WireGuard is so much easier to manage.

    The only thing missing on the client is Split DNS. With my IPSec/IKEv2 setup, I used a configuration profile created with Apple Configurator plus some manual modifications to make DNS requests for my internal stuff go through the tunnel and DNS requests for everything else go to the normal DNS server.

    My compromise for WireGuard is that all DNS goes to my home network, but only packets destined for my home subnets go through the tunnel.

gobblegobble2 2 days ago

> Fortunately, despite the scary log entries showing attempts to change privileges and delete critical folders, it seemed that all the malicious activity was contained within the container.

OP can't prove that. The only way is to scrap the server completely and start with a fresh OS image. If OP has no backup and ansible repo (or anything similar) to configure a new home server quickly, then I guess another valuable lesson was learned here.

  • diggan 2 days ago

    Not 100% sure what you mean by "scrapping" the server; do you suggest just reinstalling the OS? I'd default to assuming the hardware itself is compromised somehow, if I'm assuming someone had root access. If you were doing automated backups from something you assume was compromised, I'm not sure restoring from backups is a great idea either.

    • sgarland 2 days ago

      I think it’s reasonable to take a measured view of attacks. I doubt someone installing crypto miners has a hardware rootkit.

      • diggan 2 days ago

        I'm guessing it's an automated attack, where it found running services and then threw payloads at it until it got something. Once they're there, since docker isn't a real security barrier, I'd consider it all bets off.

        Especially when it comes to my home network, I would rather be safe than sorry. How would you even begin to investigate a rootkit since it can clean up after itself and basically make itself invisible?

        Particularly when it comes to Kinsing attacks, as there seem to have been rootkits detected in tandem with it, which appears to be exactly what OP got hit by (although they could only see the coin miner).

        • sgarland 2 days ago

          For crypto miners, it’s pretty easy to tell if your servers are in your house. Even if they aren’t, if you have any kind of metrics collection, you’ll notice the CPU spike.

          My general feeling is that if someone wants to install a hardware rootkit on my extremely boring home servers, it’s highly unlikely that I’ll be able to stop them. I can do best practices (like not exposing things publicly), but ultimately I can’t stop Mossad; on the other hand, I am an unlikely target for anything other than script kiddies and crypto miners.

          • diggan 2 days ago

            > For crypto miners, it’s pretty easy to tell if your servers are in your house. Even if they aren’t, if you have any kind of metrics collection, you’ll notice the CPU spike.

            Sure, but if you already know that this specific cryptominer has been found together with rootkits, and you know rootkits aren't as easy to detect, what's your approach to validating whether you're infected or not?

            Maybe I'm lucky that I can tear down/up my infrastructure relatively easily (thanks NixOS), but I wouldn't take my chances when it's so close to private data.

            • sgarland a day ago

              NixOS isn't going to do anything against a hardware rootkit, which is what I originally mentioned. My home infra's base layer is Proxmox, with VMs built with Packer + Ansible, but that still has the same problem.

              That's my point – you can do best practices all day long, but short of observing sudden shifts (or long-term trends) in collected metrics, you're not going to be able to notice, let alone defend, against sophisticated attacks. There has been malware that embeds itself into HDD firmware. Good luck.

phartenfeller 2 days ago

A lot of people comment about Docker firewall issues. But it still doesn't answer how an exposed postgres instance leads to arbitrary code execution.

My guess is that the attacker figured out or used the default password for the superuser. A quick lookup reveals that a pg superuser can create extensions and run some system commands.

I think the takeaway here is that the pg image should autogenerate a strong password or not start unless the user defines a strong one. Currently it just runs with "postgres" as the default username and password.
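
Until something like that exists, a rough sketch of a safer way to start it yourself (image tag and port are just examples):

  # generate a strong password and keep the published port on loopback
  export POSTGRES_PASSWORD="$(openssl rand -base64 24)"
  docker run -d --name pg \
    -e POSTGRES_PASSWORD \
    -p 127.0.0.1:5432:5432 \
    postgres:16-alpine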

  • diggan 2 days ago

    > I think the takeaway here is that the pg image should autogenerate a strong password or not start unless the user defines a strong one. Currently it just runs with "postgres" as the default username and password.

    Takeaway for beginner application hosters (aka "webmasters") is to never expose something on the open internet unless you're 100% sure you absolutely have to. Everything should default to using a private network, and if you need to accept external connections, do so via some bastion host that isn't actually hosted on your network, which reaches into your private network via proper connections.

    • phartenfeller a day ago

      There are a ton of hobby VPSes with a simple website, CMS, email server, or maybe a Plex. Do you really think these kinds of scenarios should use a bastion host?

      • diggan 19 hours ago

        Since we're talking about a home network, definitely. Don't accept connections from public internet straight into your home unless you're OK with the consequences of that.

j_bum 2 days ago

Ok - curious if anyone can provide some feedback for me on this.

I am running Immich on my home server and want to be able to access it remotely.

I’ve seen the options of using wireguard or using a reverse proxy (nginx) with Cloudflare CDN, on top of properly configured router firewalls, while also blocking most other countries. Lots of this understanding comes from a YouTube guide I watched [0].

From what I understand, people say reverse proxy/Cloudflare is faster for my use case, and if everything is configured correctly (which it seems like OP totally missed the mark on here), the threat of breaches into my server should be minimal.

Am I misunderstanding the “minimal” nature of the risk when exposing the server via a reverse proxy/CDN? Should I just host a VPN instead even if it’s slower?

Obviously I don’t know much about this topic. So any help or pointing to resources would be greatly appreciated.

[0] https://youtu.be/Cs8yOmTJNYQ?si=Mwv8YlEf934Y3ZQk

  • 63stack a day ago

    You don't need any of this, and the article is completely bogus. Having a port forwarded to a database in a container is not a security vulnerability unless the database itself has a vulnerability. The article fails to explain how they actually got remote code execution, blames it on some Docker container vulnerability, and links to a random article as a source that has nothing to do with what is being claimed.

    What you have to understand is that having an immich instance on the internet is only a security vulnerability if immich itself has a vulnerability in it. Obviously, this is a big if, so if you want to protect against this scenario, you need to make sure only you can access this instance, and you have a few options here that don't involve 3rd parties like cloudflare. You can make it listen only on the local network, and then use ssh port tunneling, or you can set up a vpn.
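
    A minimal sketch of the SSH-tunnel option (hostname is a placeholder, and 2283 is assumed to be Immich's default web port):

      # forward a local port to the Immich instance that only listens on the home LAN
      ssh -N -L 2283:127.0.0.1:2283 you@home-server.example
      # then open http://localhost:2283 on the laptop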

    Cloudflare has been spamming the internet with "burglars are burgling in the neighbourhood, do you have burglar alarms" articles, youtube is also full of this.

  • RajT88 2 days ago

    Reverse proxy is pretty good - you've isolated the machine from direct access so that is something.

    I'm in the same boat. I've got a few services exposed from a home server via NGINX with a LetsEncrypt cert. That removes direct network access to your machine.

    Ways I would improve my security:

    - Adding a WAF (ModSecurity) to NGINX - big time investment!

    - Switching from public facing access to Tailscale only (Overlay network, not VPN, so ostensibly faster). Lots of guys on here do this - AFAIK, this is pretty secure.

    Reverse proxy vs. Overlay network - the proxy itself can have exploitable vulnerabilities. You should invest some time in seeing how nmap can identify NGINX services, and see if those methods can be locked down. Good debate on it here:

    https://security.stackexchange.com/questions/252480/blocking...

  • depaulagu 2 days ago

    Piggybacking on your request, I would also like feedback. I also run some services on my home computer. The setup I'm currently using is a VPN (Wireguard) redirecting a UDP port from my router to my PC. Although I am a Software Engineer, I don't know much about networks/infra, so I chose what seemed to me the most conservative approach.

    • bennythomsson 2 days ago

      To both of you, wireguard is the way to go.

      So, parent poster: yes, you are doing it right.

      Grandparent: Use a VPN, close everything else.

      • j_bum 2 days ago

        Thanks, Benny!

  • vinay_ys 2 days ago

    Well, you are better off using Google Photos for securely accessing your photos over Internet. It is not a matter of securing it once, but one of keeping it secure all the time.

    • j_bum 2 days ago

      I suppose yes, it is more reassuring and "easy" to pay a cloud provider. But we have more data than I'm willing to flush away money on cloud storage for.

      As such, I’m hosting Immich and am figuring out remote access options. This kind of misses the point of my question.

      • vinay_ys 2 days ago

        If cheap is what you are looking for, then yes, WireGuard running on your home server is the way to go. Instead of exposing your home server directly to the Internet, I would put it behind a Cloudflare Zero Trust network access product (it's free).

  • dns_snek 2 days ago

    If you care about privacy, I wouldn't even consider using Cloudflare or any other CDN, because they get to see your personal data in plain "text". Can you forward a port from the internet to your home network, or are you stuck in some CG-NAT hell?

    If you can, then you can just forward the port to your Immich instance, or put it behind a reverse proxy that performs some sort of authentication (password, certificate) before forwarding traffic to Immich. Alternatively you could host your own Wireguard VPN and just expose that to the internet - this would be my preferred option out of all of these.

    If you can't forward ports, then the easiest solution will probably be a VPN like Tailscale that will try to punch holes in NAT (to establish a fast direct connection, might not work) or fall back to communicating via a relay server (slow). Alternatively you could set up your own proxy server/VPN on some cheap VPS but that can quickly get more complex than you want it to be.

    • j_bum 2 days ago

      Yikes… I had no idea about CDN being able to see raw data.

      > forward a port

      From what I understand, my Eero router system will let me forward ports from my NAS. I haven’t tested this to see if it works, but I have the setting available in my Eero app.

      > forward port to Immich instance

      Can you expand on this further? Wouldn’t this just expose me to the same vulnerabilities as OP? If I use nginx as a reverse proxy, would I be mitigating the risk?

      Based on other advice, it seems like the self hosted VPN (wireguard) is the safest option, but slower.

      The path of least resistance for daily use sounds ideal (RP), but I wonder if the risk minimization from VPN is worth potential headaches.

      Thanks so much for responding and giving some insight.

      • dns_snek 2 days ago

        > Can you expand on this further? Wouldn’t this just be exposing myself to the same vulnerabilities as OP?

        Yeah I wouldn't do this personally, I just mentioned it as the simplest option. Unless it's meant to be a public service, I always try to at least hide it from automated scanners.

        > If I use nginx as a reverse proxy, would I be mitigating the risk?

        If the reverse proxy performs additional authentication before allowing traffic to pass onto the service it's protecting, then yes, it would.

        One of my more elegant solutions has been to forward a port to nginx and configure it to require TLS client certificate verification. I generated and installed a certificate on each of my devices. It's seamless for me in day to day usage, but any uninvited visitors would be denied entry by the reverse proxy.
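
        A rough sketch of that nginx setup (paths, names and the upstream port are placeholders):

          # require a valid client certificate before anything is proxied upstream
          server {
              listen 443 ssl;
              server_name home.example.com;

              ssl_certificate         /etc/nginx/tls/server.crt;
              ssl_certificate_key     /etc/nginx/tls/server.key;
              ssl_client_certificate  /etc/nginx/tls/device-ca.crt;  # CA that signed the device certs
              ssl_verify_client       on;                            # no valid cert, no entry

              location / {
                  proxy_pass http://127.0.0.1:8123;
              }
          }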

        However support for client certificates is spotty outside of browsers, across platforms, which is unfortunate. For example HomeAssistant on Android supports it [1] (after years of pleading), but the iOS version doesn't. [2] NextCloud for iOS however supports it [3].

        In summary, I think any kind of authentication added at the proxy would be great for both usability and security, but it has very spotty support.

        > Based on other advice, it seems like the self hosted VPN (wireguard) is the safest option, but slower.

        I think so. It shouldn't be slow per se, but it's probably going to affect battery life somewhat and it's annoying to find it disconnected when you try to access Immich or other services.

        [1] https://github.com/home-assistant/android/pull/2526

        [2] https://community.home-assistant.io/t/secure-communication-c...

        [3] https://github.com/nextcloud/ios/pull/2908

fleetside72 2 days ago

The exploit is not known in this case. The claim that it was password protected seems like an unverified statement. No pg_hba.conf content was provided; this Docker image must have a very open default config for postgres.

`Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.`

joshghent 2 days ago

Despite people slating the author, I think this is a reasonable oversight. On the surface, spinning up a Postgres instance in Docker seems secure because it’s contained. I know many articles claim “Docker= Secure”.

Whilst easy to point to common sense needed, perhaps we need to have better defaults. In this case, the Postgres images should only permit the cli, and nothing else.

  • Fnoord 2 days ago

    Every guide out there says to link Postgres to the application (the one using Postgres), so the Postgres network is not reachable. Then, even if it were exposed, a firewall would need to be configured to allow access. Another thing every guide does is suggest a reverse proxy, decreasing the attack surface. Then such a reverse proxy would need some kind of authentication. Instead, I simply run it behind WireGuard. There's still plenty that can go wrong, such as a backdoor in the Postgres image (you used docker pull), not upgrading it while it contains serious vulnerabilities, or a backdoor in some other image.

  • lopkeny12ko 2 days ago

    > spinning up a Postgres instance in Docker seems secure because it’s contained

    This doesn't make any sense. Running something in a container doesn't magically make it "secure." Where does this misconception come from?

    • diggan 2 days ago

      > Where does this misconception come from?

      When docker first appeared, a lot of people explaining docker to others said something along the lines "It's like a fast VM you can create with a Dockerfile", leading a bunch of people to believe it's actually not just another process + some more stuff, but instead an actual barrier between host/guest like in a proper VM.

      I remember talking about this a lot when explaining docker to people in the beginning, and how they shouldn't use it for isolation, but now after more than a decade with that misconception still being popular, I've lost energy about it...

f00l82 2 days ago

With the advent of WireGuard, I no longer see a reason to expose anything other than that to the outside world. Just VPN back in and work like usual.

wrren 2 days ago

I’d recommend using something like Tailscale for these use cases and general access, there’s no need to expose services to the internet much of the time.

  • sweca 2 days ago

    Tailscale also has Funnel, which would be more secure for public exposure.

elashri 2 days ago

The usual route people take is to use a VPN/Tailscale/Cloudflare Tunnels etc., only expose things locally, and require being on the VPN network to access services. The other route is to not expose any ports and rely on a reverse proxy. Actually, you can combine the two approaches, and it is relatively easy for non-SWE homelab hobbyists.

  • hebocon 2 days ago

    I use HAProxy on PFSense to expose a home media server (among other services) for friends to access. It runs on a privileged LXC (because NFS) but as an unprivileged user.

    Is this reckless? Reading through all this makes me wonder if SSHFS (instead of NFS) with limited scope might be necessary.

    • packtreefly 2 days ago

      That's a popular architecture, but I personally wouldn't run part of the application stack (HAProxy) on my network firewall, and would instead opt to move it to the media server.

      Suppose you have the media server in its own VLAN/Subnet, chances are good that the firewall is instrumental in enforcing that security boundary. If any part of the layer-7 attack surface is running on the firewall... you probably get the idea.

Sirikon 2 days ago

I'm curious about what the password for postgres looked like.

This exact thing happened to a friend, but there was no need to exploit obscure memory safety vulnerabilities, the password was “password”.

My friend learnt that day that you can start processes on the machine from Postgres.

Quizzical4230 2 days ago

My self-hosting setup uses Cloudflare tunnels to host the website without opening any ports. And Tailscale VPN to directly access the machine. You may want to look at it!

paulnpace 2 days ago

I had ~500 kbps sustained attacks against my residential ISP for years.

I learned how to do everything through SSH - it is really an incredible Swiss Army knife of a tool.

  • malfist 2 days ago

    I remember working at a non-tech company whose firewall blocked anything not on ports 22, 80 and 443. I got real good at port forwarding with SSH.

LelouBil 2 days ago

I was exposing my services the same way for a long time; now I only expose web services via Cloudflare, with an iptables configuration to reject everything on port 443 not coming from them.
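
A rough sketch of that kind of rule using ipset and Cloudflare's published IPv4 list (whether INPUT is the right chain depends on whether the service runs on the host or in Docker):

  # build a set of Cloudflare's ranges and only accept 443 from it
  sudo ipset create cloudflare hash:net
  curl -s https://www.cloudflare.com/ips-v4 | while read -r net; do sudo ipset add cloudflare "$net"; done
  sudo iptables -A INPUT -p tcp --dport 443 -m set --match-set cloudflare src -j ACCEPT
  sudo iptables -A INPUT -p tcp --dport 443 -j REJECT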

I also use knockd for port knocking to allow the ssh port, just in case I need to log in to my server without having access to one of my devices with Wireguard, but I may drop this since it doesn't seem very useful.

lmaoguy a day ago

“ None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet.”

Unless you want it publicly available and expect people to try and illegitimately access it, never expose it to the internet.

bennythomsson 2 days ago

> None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet.

Um that's pretty ... naïve?

Even closing your VPN port goes a little far. WireGuard, for example, doesn't even reply on the port if no acceptable signature is presented. Assuming the VPN software is recent, that's as close as you can come to having everything closed.

rr808 2 days ago

I'd kinda like a VPN with my own storage, but I gave up; this is one of the main reasons. I just don't have time to keep an eye on all this stuff.

gus_ 2 days ago

The article misses one critical point in these attacks:

practically all these attacks require downloading remote files to the server once they gain access, using curl, wget or bash.

Restricting arbitrary downloads from curl, wget or bash (or better, any binary) makes these attacks pretty much useless.

Also these cryptominers are usually dropped to /tmp, /var/tmp or /dev/shm. They need internet access to work, so again, restricting outbound connections per binary usually mitigates these issues.

https://www.aquasec.com/blog/kinsing-malware-exploits-novel-...
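
One coarse, Docker-level approximation of this (not per-binary, but it cuts off the download and call-home steps entirely) is to put the database on an internal network with no route to the outside; "myapp" below is a placeholder for whatever container actually needs the DB:

  # an --internal network gets no outbound connectivity at all
  docker network create --internal dbnet
  docker run -d --name pg --network dbnet \
    -e POSTGRES_PASSWORD=change-me postgres:16-alpine
  # the app container joins the same network to reach the DB
  docker network connect dbnet myapp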

  • packtreefly a day ago

    > Restricting arbitrary downloads from curl, wget or bash (or better, any binary) makes these attacks pretty much useless.

    Any advice on what that looks like for a docker container? My border firewall isn't going to know which binary made the request, and I'm not aware of per-process restrictions of that kind.

justinl33 a day ago

a simple docker run --memory=512m --cpus=1 could have at least limited the damage here.

  • Pooge a day ago

    I don't think so; it would have made the malware harder to spot.

daneel_w 2 days ago

"None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet."

Really? Nor any Docker guides warning in general about exposing a container to the Internet?

OutOfHere 2 days ago

How is it that your router is allowing the incoming connection? It doesn't make sense.

ocdtrekkie 2 days ago

Docker is a terrible self-hosting tool reason #4056. Let's give everyone a footgun and recommend they use it at work and their house.

Especially with a tool you don't have an enterprise-class firewall in front of, security needs to be automatic, not an afterthought.

2OEH8eoCRo0 2 days ago

Docker's poor design strikes again!

immibis 2 days ago

Every server gets constantly probed for SSH. Since half the servers on the Internet haven't been taken over yet, it doesn't seem like SSH has significant exploits (well, there was that one signal handler race condition).

Unless you're trying to do one of those designs that cloud vendors push to fully protect every single traffic flow, most people have some kind of very secure entry point into their private network and that's sufficient to stop any random internet attacks (doesn't stop trojans, phishing, etc). You have something like OpenSSH or Wireguard and then it doesn't matter how insecure the stuff behind that is, because the attacker can't get past it.

  • jpc0 2 days ago

    It's also common practice to do what everyone here recommends and put things behind a firewall.

    The separation of control and function has been a security practice for a long time.

    Port 80 and 443 can be open to the internet, and in 2024 whatever port wireguard uses. All other ports should only be accessible from the local network.

    With VPS providers this isn't always easy to do. My preferred VPS provider, however, provides a separate firewall, which makes that easier.

  • Fnoord 2 days ago

    OpenSSH has no currently known flaws, but in the past it contained a couple. For example, the xz backdoor utilized OpenSSH, and it has contained a remote vulnerability in the past (around 2003). Furthermore, some people use password auth as well as weak (low-entropy or reused, breached) passwords. Instead, only use public key authentication. And tarpit the mofos brute-forcing SSH (e.g. with fail2ban). They always do it over IPv4, not IPv6. So another useful layer (aside from not using IPv4) is to whitelist the IPv4 addresses that require access to the SSH server. There is no reason for the entire world to need access to your home network's SSH server. Or, at the very least, don't use port 22. When in doubt: check your logs.
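
    For the key-only part, a minimal sshd_config sketch (reload sshd afterwards; the user and address range are placeholders):

      # /etc/ssh/sshd_config
      PasswordAuthentication no
      KbdInteractiveAuthentication no
      PermitRootLogin prohibit-password
      # optionally restrict who may log in, and from where:
      # AllowUsers youruser@203.0.113.0/24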

    • wl 2 days ago

      Also, if you’re running SSH on a non-standard port, block Censys’ IP ranges. They port scan the entire internet constantly and bad actors use their scans to target their attacks. Once I did that, the number of attempted SSH logins went to zero pretty quickly.

      • Fnoord 2 days ago

        Solid advice! I've had certain countries in my blocklist thus far, and now I have added Censys (I did not know that was the company behind Shodan). I've also added the Tor exit node list to my blocklist, since nothing good comes from any of these. I used this blocklist for the latter [1] (the Censys ranges I just added manually, as it is only 12 entries in total).

        [1] https://github.com/7c/torfilter

        • jcgl a day ago

          I hope you'll reconsider your stance on Tor exit nodes; many people use the Tor network to avoid censorship or even just bolster their own privacy. Blacklisting users on the basis of their Tor usage is hostile to their goals of privacy and anti-censorship.

          • Fnoord a day ago

            There's no reason Tor exit nodes need to access my home network. Zero. I do use BitTorrent but behind a VPN; this remains unaffected, though if it were I would block traffic which isn't supposed to go through Tor (since BitTorrent over Tor is not recommended).

            As a rule of thumb, I will gladly pass on Tor traffic, but no exit node, and I understand if network admins want to block entry node, too. It is a decision everyone who maintains a network has to make themselves.

            The reason I block it is also the same reason I block banana republics like CN and RU: these don't prosecute people who break the law with regards to hacking. Why should one accept unrestricted traffic from these?

            In the end, the open internet was once a TAZ [1] and unfortunately with the commercialization of the internet together with massive changes in geopolitics the ship sailed.

            [1] https://en.m.wikipedia.org/wiki/Temporary_Autonomous_Zone

            • immibis 2 hours ago

              You don't have to help commercialize it.

behringer a day ago

Your first mistake is allowing chinese/russian traffic to your server...

cynicalsecurity 2 days ago

> None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet.

This is like IT security 101.

> A quick IP search revealed that it was registed with a Chinese ISP and located in the Russian Federation.

Welcome to the Internet. Follow me. Always block China and Russia from any of your services. You are never going to need them and the only thing that can come from them is either a remote attack or unauthorized data sent to them.

  • euroderf 2 days ago

    > Always block China and Russia from any of your services.

    But does this add much security if bad actors are using VPNs ?

    • philipwhiuk 2 days ago

      They'd need to run a VPN with end point in a less hostile country and so there's an element of enforcement on those endpoints.

      At any rate, blocking China and Russia isn't ever presented as a foolproof defence, it just raises the barrier to attack.

  • philipwhiuk 2 days ago

    Yeah, the problem today is that there are many guides on MVPing software which don't tell you basic security.

    The guy doesn't have a software background.

    This is basically the problem with the 'everyone-should-code' approach - a little knowledge can be dangerous.

  • ocdtrekkie 2 days ago

    Yes, but go further: Are you ever using your home server outside your own country? If the answer is no, block everything but your country. You can remove the vast majority of your attack surface by just understanding where you need to be exposed.

  • immibis 2 days ago

    Blocking China and Russia adds no security. Attacks can come from anywhere. And when you start blocking IPs, you may accidentally block yourself.