I’m currently researching the best method for running a static website from Docker.

The site consists of a single HTML file, a bunch of CSS files, and a few JS files. Nothing needs to be preprocessed on the server side. The website uses JS to request some JSON files, though. Handling of those files is done via client-side JS; the server only needs to serve the files.

The website is intended to be used as selfhosted web application and is quite niche so there won’t be much load and not many concurrent users.

I boiled it down to the following options:

  1. BusyBox in a self-made Docker container, manually running httpd, or The smallest Docker image …
  2. php:latest (ignoring the fact, that the built-in webserver is meant for development and not for production)
  3. Nginx serving the files (though the recent freenginx fork situation gives me pause)

I found information online for all of these variants. Of the options, I actually prefer the BusyBox route because it seems the cleanest with the least overhead (I just need to serve the files; the rest is done on the client).
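
For reference, the BusyBox variant I have in mind would be roughly this (untested sketch; paths are placeholders):

FROM busybox:stable
# Copy the static site into the image.
COPY ./site/ /var/www/
EXPOSE 80
# Run BusyBox httpd in the foreground, serving /var/www on port 80.
CMD ["httpd", "-f", "-p", "80", "-h", "/var/www/"]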

Do you have any other ideas? How do you host static content?

  • CameronDev@programming.dev · 10 months ago

    Just go nginx, anything else is faffing about. BusyBox may not be security tested, so best to avoid it on the internet. PHP is pointless when it's a static site with no PHP. I'd avoid freenginx until it's clear that it is going to be supported. There is nothing wrong with stock nginx; the fork is largely political rather than technical.

    • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

      PHP is pointless when it's a static site with no PHP

      Absolutely, but it has a built-in webserver that can serve static files, too (I constantly use that in my dev environment).

      But I guess you’re mostly right about just using Nginx. I already have multiple containers running it, most of them just serving static files. But each of those containers is about 50 megabytes compressed, just for Nginx alone.

      • CameronDev@programming.dev · 10 months ago

        Having PHP installed is just unnecessary attack surface.

        Are you really struggling for space so much that 50 MB matters? An 8 GB USB stick can hold that 160 times over.

        • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

          Having PHP installed is just unnecessary attack surface.

          Yes! Especially running its built-in webserver outside your dev environment. They “advertise” doing so in their Docker package documentation, though. Every project without PHP is a good project. It’s still an option - at least technically.

          Are you really struggling for space so much that 50 MB matters?

          In a way, yes. I just want to optimize my stuff as much as possible. No unneeded tools, no overhead, a super clean environment, etc. Firing up another Nginx container just doesn’t feel right anymore. (Even though it seems to be possible to manually “hack” file serving into NPM - which would make it a multi-purpose container serving various sites and proxying requests.)

          The machine I use as Docker host also has a pretty low-end CPU and a measly 4 gigabytes of RAM, so every resource not wasted is a good resource.

          • CameronDev@programming.dev · 10 months ago

            RAM is not the same as storage; that 50 MB Docker image isn’t going to require 50 MB of RAM to run. But don’t let me hold you back from your crusade :D

            • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

              Thanks for educating me on basic computer knowledge! 🤣

              Applications need RAM, though. A full-fledged webserver with all the bells and whistles likely needs more RAM than a specialized single-binary static file delivery server.

              • CameronDev@programming.dev · 10 months ago

                Sorry, wasn’t meant to be condescending, you just seem fixated on file size when it sounds like RAM (and/or CPU?) is what you really want to optimise for? I was just pointing out that they aren’t necessarily correlated with Docker image size.

                If you really want to cut down on CPU and RAM, and are okay with very limited functionality, you could probably write your own webserver to serve static files? Plain HTTP is not hard. But you’d want to steer clear of Python and Node, as they drag in the whole interpreter overhead.

                • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

                  I care about everything: RAM usage, file size, etc. I’m a minimalist when it comes to software - use as little of every resource as possible. After once writing a router in Python I figured I could do this in Lua, too, but never actually tried. Maybe this would be a nice weekend project?

                  Even if Nginx is the way to go, I’m currently experimenting with SWS, which was suggested here. Technical aspects aside: the software is actively developed and the maintainer provides their own Docker images (easy for deploying a container based on them) and a package for my distribution (easy for development testing).

      • lemmyvore@feddit.nl · 10 months ago

        Absolutely, but it has a built-in webserver that can serve static files, too (I constantly use that in my dev environment).

        How about Python? You can get an HTTP server going with just python3 -m http.server from the dir where the files are. Worth remembering because Python is super common and probably already installed in many places (be it on host or in containers).

        • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

          I once built a router in Python, but it was annoying. As much as I like Python, I dislike coding in it just as much. Just firing up a web server with it is no big deal, though.

          I was even thinking of Node.js, but that comes with a whole different set of issues. It would allow for future server-side extensions of the project, though.

  • marcos@lemmy.world · 10 months ago

    The answer is: get a minimal Linux image, add nginx or Apache, and put your content in the relevant place. (Basically, your third option.)

    Do not bother about the future of nginx. Changing the web server on that image is the easiest thing in the world.

  • sudneo@lemmy.world · 10 months ago

    I personally package the files in a scratch or distroless image and use https://github.com/static-web-server/static-web-server, which is a Rust server, quite tiny. This is very similar to nginx or httpd, but the static nature of the binary removes clutter, reduces attack surface (because you can use smaller base images), and reduces the size of the image.
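
    A rough sketch of the Dockerfile (image tag and default root dir are from memory, so double-check against the SWS docs; for the scratch/distroless variant you would copy the single binary into the smaller base instead):

    FROM ghcr.io/static-web-server/static-web-server:2
    # Assuming the image serves /public by default; otherwise point
    # SERVER_ROOT (or --root) at wherever the files are copied.
    COPY ./site/ /public/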

    • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

      Thanks, this actually looks pretty great. From the description it’s basically BusyBox httpd, but with Nginx-level stability, production-readiness, and functionality. It also seems to be actively developed.

  • Swarfega@lemm.ee · 10 months ago

    I just use nginx in Docker. It runs on a Pi 4, so it needs to be lightweight. I’m sure there are lighter httpd servers to use, but it works for me. I also run Nginx Proxy Manager to create a reverse proxy and to manage the certificate renewals that come from Let’s Encrypt.

  • rglullis@communick.news · 10 months ago

    Caddy can serve the files and deal with SSL certificates in case you put this on a public domain.

    • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

      My setup already has Nginx Proxy Manager to handle SSL. This is specifically about serving files from within a docker container with as little overhead as possible.

  • lemmyvore@feddit.nl · 10 months ago

    I see from your other comments that you’re already running nginx in other containers. The simplest solution would be to make use of one of them. Zero overhead since you’re not adding any new container. 🙂

    You mentioned you’re using NPM; well, NPM already has a built-in nginx host that you can reach by making a proxy host pointed at http://127.0.0.1:80 and adding the following to the “Advanced” tab:

    location / {
      root /data/nginx/local_static;
      index index.html;
    }
    

    Replace the root location with whatever dir you want, use a volume option on the NPM container to map the dir to the host, put your files in there and that’s it.
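
    For the volume part, a rough excerpt of what the NPM service could look like in docker-compose.yml (the host dir name is just an example):

    services:
      app:
        image: jc21/nginx-proxy-manager:latest
        volumes:
          # Map a host dir with the site files to the path used as root above.
          - ./my-static-site:/data/nginx/local_static:ro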

      • lemmyvore@feddit.nl · 10 months ago

        Yeah it’s not exactly an obvious feature. I don’t even remember how I stumbled onto it, I think I was looking at the /data dirs and noticed the default one.

        I haven’t tried using it for more than one site but I think that if you add multiple domain names to the same proxy host they go to the same server instance and you might be able to tweak the “Advanced” config to serve all of them as virtual hosts.

        It’s not necessarily a bad thing to have a separate nginx host. For example I have a PHP app that has its own nginx container because I want to keep all the containers for it in one place and not mix it up with NPM.

  • arran 🇦🇺@aussie.zone · 10 months ago

    The BusyBox one seems great as it comes with a shell. PHP looks like it would add some issues.

    Personally, since I use Go, I would create a Go app with the files embedded, which I would package as a deb, an rpm, and a Docker image using “goreleaser”:

    package main

    import (
        "embed"
        "io/fs"
        "log"
        "net/http"
    )

    //go:embed static/*
    var content embed.FS

    func main() {
        // Strip the "static/" prefix so the embedded files are served from
        // the site root, with index.html as the default page.
        site, err := fs.Sub(content, "static")
        if err != nil {
            log.Fatal(err)
        }

        // Serve the embedded static files.
        http.Handle("/", http.FileServer(http.FS(site)))

        // Start the server.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

    That would be all the code, but it allows for expansion later. However, the image goreleaser builds doesn’t come with BusyBox in it, so you can’t docker exec into it. https://goreleaser.com/customization/docker/

    Most of the other options, including the PHP one, seem to include a scripting language or a bunch of other system tools, etc. I think that’s overkill.

    • sudneo@lemmy.world · 10 months ago

      I would consider the lack of a shell a benefit in this scenario. You really don’t want the extra attack surface and tooling.

      Considering you also manage the host, if you want to see what’s going on inside the container (which for such a simple image is more likely something you do once, while building it the first time), you can use unshare to spawn a bash process in the container’s namespaces (e.g., unshare -m -p […] -t PID bash, or something like this - I am going by memory).
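
      A sketch of that idea from the host side, using nsenter to join the existing namespaces (the container name is just an example):

      # Get the PID of the container's main process.
      PID=$(docker inspect --format '{{.State.Pid}}' static-site)

      # Join the container's PID and network namespaces while keeping the
      # host's mount namespace, so the host's bash and tools stay available.
      sudo nsenter --target "$PID" --pid --net -- bash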

  • Lodra@programming.dev · 10 months ago

    The simplest way is certainly to use a hosted service like GitHub Pages. These make it so easy to create static websites.

    If you’re not flexible on that detail, then I next recommend Go actually. You could write a tiny web server and embed the static files into the app at build time. In the end, you’d have a single binary that acts as a web server and has your content. Super easy to dockerize.

    Things like authentication will complicate the app over time. If you need extra features like this, then I recommend using common tools like nginx as suggested by others.

  • Possibly linux@lemmy.zip · 10 months ago

    If you’re looking for a small size, you could build a custom image with Buildroot and lighttpd. It is way, way overkill, but it would be the smallest.

    For something easier, use the latest image of your web server of choice and then pass through a directory with the files. From there you can automate patching with Watchtower, as sketched below.
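
    A sketch of that simpler variant with compose (image choice, paths, and ports are just examples):

    services:
      static-site:
        image: nginx:alpine
        volumes:
          - ./site:/usr/share/nginx/html:ro   # nginx:alpine's default web root
        ports:
          - "8080:80"
        restart: unless-stopped

      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        restart: unless-stopped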

    • okamiueru@lemmy.world · 10 months ago

      The first thing you mention is such a fun and useful exercise. But as you point out, way overkill. Might even be dangerous to expose it. I got mine to 20 kB on top of BusyBox.

      There is something that tickles the right spots when a complete container image is significantly smaller than the average JS payload of “modern” websites.

  • justcallmelarry@lemmy.dbzer0.com · 10 months ago

    I’ve always used an nginx alpine image and have been very happy with it.

    Not sure how this fork business is turning out, and I have also heard conflicting opinions on whether to care or not…

    If you do wish for something simple that is not nginx, I’m also very happy with Caddy, which can also handle SSL certificates for you if you plan to make it publicly reachable.

  • summerof69@lemm.ee · 10 months ago

    Err, FROM webserver + COPY /path/to/content /path/to/server/directory? You don’t even expect users, what’s there to discuss?
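
    E.g., with nginx as an arbitrary example image:

    FROM nginx:alpine
    COPY ./site/ /usr/share/nginx/html/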

  • CetaceanNeeded@lemmy.world · 10 months ago

    I just use nginx:alpine; if freenginx proves to be the better option later, it should be fairly trivial to switch the base image.

    • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

      Yes, Freenginx should/would/will be a drop-in replacement, at least in the beginning. We’ll see how this works out over time. Forks made purely out of frustration have never lived long enough to gain a user base and attract devs. But it’s an “anti corporate bullshit” fork, and that alone puts it on my watchlist.

  • ptman@sopuli.xyz · 10 months ago

    Forget about Docker. Run Caddy or some similar webserver that is a single file, placed next to the assets to serve.

    • sudneo@lemmy.world · 10 months ago

      Containers are perfectly suitable for serving static sites. You get isolation and versioning at the absolutely negligible cost of duplicating a binary (the webserver - which, in the case of the one I linked in my comment, is 5 MB of space). Also, you get autostart of the server if you use compose, which is equivalent to what you would do with a systemd unit, I suppose.

      You can then use a reverse-proxy to simply route to the different containers.

      • jj4211@lemmy.world · 10 months ago

        But if you already have an nginx or other web server that’s otherwise required to start up (which is in all likelihood the case), you don’t need any more auto startup; the “reverse proxy” that’s already started can just serve it. I would say that container orchestration versioning can be helpful in some scenarios, but a simple git repository for a static website is way more useful, since it’s got the right tooling to annotate changes very specifically on demand.

        That reverse proxy is ultimately also a static file server. There’s really no value in spinning up more web servers for a strictly static site.

        Folks have gone overboard assuming docker or similar should wrap every little thing. It sometimes adds complexity without making anything simpler. It can simplify some scenarios, but adding a static site to a webserver is not a scenario that enjoys any benefit.

        • sudneo@lemmy.world · 10 months ago

          It really depends; if your setup is Docker-based (as OP’s seems to be), adding something outside of it is not a good solution. I am talking, for example, about Traefik or Caddy with the Docker plugin.

          By versioning I meant that when you do a push to master, you can have a release which produces a new image. This makes it IMHO simpler than having just git and local files.

          I really don’t see the added complexity. I gain isolation (sure, static sites have tiny attack surfaces), easy portability (if I want to move machines it’s one command), and neat organization (essentially no local fs paths to manage), and the overhead is a three-line Dockerfile and a couple of MB needed to duplicate a webserver binary. Of course it is a matter of preference, but I honestly don’t see the cons.

      • smileyhead@discuss.tchncs.de · 10 months ago

        Serving a static app with Caddy:

        sudo apt install caddy
        sudo systemctl enable --now caddy

        Then in /etc/caddy/Caddyfile:

        example.com {
           root * /var/www/html
           file_server
        }
        

        That’s all, really.

        • sudneo@lemmy.world · 10 months ago

          If there is already another reverse proxy, doing this IMHO is worse than just running a container and adding one more rule in the proxy (if needed - with Traefik it’s not, for example). I also build all my servers with IaC and a repeatable setup, so installing stuff manually breaks the model (I want to be able to migrate servers with minimal manual action, as I’ve already had to do twice…).

          The job is simple either way, I would say it mostly depends on which ecosystem someone is buying into and what secondary requirements one has.

      • jj4211@lemmy.world · 10 months ago

        Because serving static files doesn’t really require any flexibility in web serving code.

        If your setup has an nginx or similar as a reverse proxy entry point, you can just tell it to serve the directory. Why bother making an entirely new chroot and proxy hop when you have absolutely zero requirements beyond what the reverse proxy already provides? Now, if you don’t have that entry point, fine, but at least 99% of the time I see some web server acting as the initial arbiter into services that would have all the capability to just serve the files.
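
        E.g., a server block like this in the already-running reverse proxy’s nginx config is all it takes (hostname and path are placeholders):

        server {
            listen 80;
            server_name static.example.com;

            root /srv/static-site;
            index index.html;
        }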

          • jj4211@lemmy.world · 10 months ago

            For 90% of static site requirements, it scales fine. That entry-point reverse proxy is faster at fetching content to serve via filesystem calls than it is at making an HTTP call to another HTTP service. For self-hosting types of applications, that percentage goes up to 99.9%.

            If you are in a situation where serving the files through your reverse proxy directly does not scale, throwing more containers behind that proxy won’t help in the static content scenario. You’ll need to do something like a CDN, and those like to consume straight directory trees, not containers.

            For a dynamic backend, maybe. Mainly because you might screw up and your backend code needs to be isolated to mitigate security oopsies. Often it’s also useful for managing dependencies, but that facet is less useful for Go, where the resulting binary is pretty much self-contained except maybe some light usage of libc.

    • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 10 months ago

      I already have a fully set up docker environment that serves all sorts of things (including some containers that serve special static content using Nginx).

  • Sockenklaus@sh.itjust.works · 8 months ago

    I’ve read that you’re trying for minimal resource overhead.

    Is lighttpd still a thing? Back in the day I used it to deliver very simple static HTTP pages with minimal resource usage.

    I found a Docker image that’s only about 4 MB, but it’s two years old, so I don’t know how well maintained lighttpd is these days.

    • 𝘋𝘪𝘳𝘬@lemmy.ml (OP) · 8 months ago

      The old age of the Docker image is a bit of a red flag to me.

      I settled on SWS since the Docker image and a locally installable version are actively maintained by the creator. It just serves static files and, optionally, directory listings as JSON (which comes in quite handy).

  • Decronym@lemmy.decronym.xyz (bot) · 8 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    Git            Popular version control system, primarily for code
    HTTP           Hypertext Transfer Protocol, the Web
    SSL            Secure Sockets Layer, for transparent encryption
    nginx          Popular HTTP server

    4 acronyms in this thread; the most compressed thread commented on today has 14 acronyms.
