Signal’s president reveals the cost of running the privacy-preserving platform—not just to drum up donations, but to call out the for-profit surveillance business models it competes against.

The encrypted messaging and calling app Signal has become a one-of-a-kind phenomenon in the tech world: It has grown from the preferred encrypted messenger for the paranoid privacy elite into a legitimately mainstream service with hundreds of millions of installs worldwide. And it has done this entirely as a nonprofit effort, with no venture capital or monetization model, all while holding its own against the best-funded Silicon Valley competitors in the world, like WhatsApp, Facebook Messenger, Gmail, and iMessage.

Today, Signal is revealing something about what it takes to pull that off—and it’s not cheap. For the first time, the Signal Foundation that runs the app has published a full breakdown of Signal’s operating costs: around $40 million this year, projected to hit $50 million by 2025.

Signal’s president, Meredith Whittaker, says her decision to publish the detailed cost numbers in a blog post for the first time—going well beyond the IRS disclosures legally required of nonprofits—was more than just a frank appeal for year-end donations. By revealing the price of operating a modern communications service, she says, she wanted to call attention to how competitors pay these same expenses: either by profiting directly from monetizing users’ data or, she argues, by locking users into networks that very often operate with that same corporate surveillance business model.

“By being honest about these costs ourselves, we believe that helps provide a view of the engine of the tech industry, the surveillance business model, that is not always apparent to people,” Whittaker tells WIRED. Running a service like Signal—or WhatsApp or Gmail or Telegram—is, she says, “surprisingly expensive. You may not know that, and there’s a good reason you don’t know that, and it’s because it’s not something that companies who pay those expenses via surveillance want you to know.”

Signal pays $14 million a year in infrastructure costs, for instance, including the price of servers, bandwidth, and storage. It uses about 20 petabytes per year of bandwidth, or 20 million gigabytes, to enable voice and video calling alone, which comes to $1.7 million a year. The biggest chunk of those infrastructure costs, fully $6 million annually, goes to telecom firms to pay for the SMS text messages Signal uses to send registration codes to verify new Signal accounts’ phone numbers. That cost has gone up, Signal says, as telecom firms charge more for those text messages in an effort to offset the shrinking use of SMS in favor of cheaper services like Signal and WhatsApp worldwide.

Another $19 million a year or so out of Signal’s budget pays for its staff. Signal now employs about 50 people, a far larger team than a few years ago. In 2016, Signal had just three full-time employees working in a single room in a coworking space in San Francisco. “People didn’t take vacations,” Whittaker says. “People didn’t get on planes because they didn’t want to be offline if there was an outage or something.” While that skeleton-crew era is over—Whittaker says it wasn’t sustainable for those few overworked staffers—she argues that a team of 50 people is still a tiny number compared to services with similar-sized user bases, which often have thousands of employees.

read more: https://www.wired.com/story/signal-operating-costs/

archive link: https://archive.ph/O5rzD

  • PlexSheep@feddit.de · 1 year ago

    Honestly, I’m not sure what you are talking about. Could you elaborate more?

    Are you implying that sending some hash is better than sending the secret and letting the server deal with it?

    • uis@lemmy.world · 1 year ago

      Sorry it took me a long time to reply to you.

      When used for login, it prevents a MITM attacker (assuming you aren’t using an app the attacker sent you) from stealing your password, because hash functions are extremely hard to reverse. When used for both registration and login, your password doesn’t even leave your computer. There are even password managers that don’t store any passwords at all; they just generate them by hashing your secret together with the server name.
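
      Roughly, the login case could look like the sketch below. This is only an illustration of the idea, not any particular app’s actual scheme; the function name and KDF parameters are made up:

      ```python
      # Minimal sketch: derive the login secret on the client so the raw password
      # never leaves the machine.
      import hashlib

      def derive_login_secret(password: str, username: str, server_name: str) -> str:
          # Salting with username + server name means the same password produces
          # different secrets on different services.
          salt = f"{username}@{server_name}".encode()
          return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000).hex()

      # Only the derived secret is sent; the server stores and checks it like any
      # other password (ideally hashing it again server-side).
      print(derive_login_secret("correct horse battery staple", "alice", "example.org"))
      ```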

      • PlexSheep@feddit.de · 1 year ago

        How does this prevent MITM attacks? The secret you send to the server, be it called hash or password, is what’s used to authenticate the user. For the purpose of client/server communication, this “password” that exists only on your host is not relevant, as it’s only used to generate the real secret.

        A hypothetical MITM attacker would still gain access to that secret, without needing to care how it was generated, whether by hashing something on your host or by coming up with semi-random letters yourself.

        The secret sent to the server becomes the de facto password.

        Now, about those password managers: they are a thing, but I don’t have experience using them. Though a disadvantage is that if a site gets breached, you have to do something weird with your password manager, so that a different password is produced from your secret key and the domain name. This can be done with a counter that needs to be manually adjusted, but that’s weird from a usability point of view.
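
        As far as I understand it, such a manager boils down to something like the sketch below (names and parameters are made up; the counter is the part that gets bumped after a breach):

        ```python
        # Sketch of a stateless, deterministic password manager: the site password
        # is re-derived from a master secret, the domain, and a counter, so nothing
        # has to be stored. Bumping the counter after a breach yields a new password.
        import base64
        import hashlib

        def site_password(master_secret: str, domain: str, counter: int = 0, length: int = 20) -> str:
            salt = f"{domain}:{counter}".encode()
            digest = hashlib.pbkdf2_hmac("sha256", master_secret.encode(), salt, 600_000)
            return base64.b85encode(digest).decode()[:length]

        print(site_password("my master secret", "example.org"))             # original
        print(site_password("my master secret", "example.org", counter=1))  # after a breach
        ```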

        • uis@lemmy.world · 1 year ago

          How does this prevent MITM attacks? The secret you send to the server, be it called hash or password, is what’s used to authenticate the user.

          Maybe I phrased it incorrectly. It prevents an attacker from getting the password and using it again in the future.

          For the purpose of client/server communication, this “password” that exists only on your host is not relevant, as it’s only used to generate the real secret.

          A salted hash, if not implemented with possible MITM attacks in mind, can indeed be used by an attacker. Resisting them is easy and can be done with channel-binding techniques, such as using the channel’s public key as part of the salt. In that case, if an attacker does pull off a MITM attack, the server will simply reject the hash, because it won’t match the one the server expects.
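
          For a concrete picture, here’s a toy sketch of that channel binding; the key values and KDF parameters are obviously made up:

          ```python
          # Toy channel binding: mix the channel's public key into the salt, so a
          # MITM sitting on a different channel produces a hash the server rejects.
          import hashlib
          import hmac

          def channel_bound_hash(password: str, channel_pubkey: bytes, salt: bytes) -> bytes:
              # The client computes this over the key it sees on its connection;
              # the server computes it over its own key.
              return hashlib.pbkdf2_hmac("sha256", password.encode(), salt + channel_pubkey, 600_000)

          salt = b"per-user-salt"
          expected = channel_bound_hash("hunter2", b"genuine-server-key", salt)  # server side
          via_mitm = channel_bound_hash("hunter2", b"attacker-key", salt)        # client talking to a MITM

          # The values differ, so the login is rejected and the intercepted hash is
          # useless against the real server.
          print(hmac.compare_digest(expected, via_mitm))  # False
          ```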

          The secret sent to the server becomes the de facto password.

          Passwords are secrets, but not every secret is a password.

          but that’s weird from a usability point of view.

          HOTP exists. HOTP is used.
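
          For reference, HOTP (RFC 4226) is just an HMAC over a shared counter, truncated to a short code; a compact sketch:

          ```python
          # Compact HOTP (RFC 4226): HMAC-SHA1 over a shared 8-byte counter,
          # dynamically truncated to a short numeric code.
          import hashlib
          import hmac
          import struct

          def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
              mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
              offset = mac[-1] & 0x0F                                  # dynamic truncation
              code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
              return str(code % 10 ** digits).zfill(digits)

          # Test vectors from RFC 4226, Appendix D (secret "12345678901234567890"):
          assert hotp(b"12345678901234567890", 0) == "755224"
          print(hotp(b"12345678901234567890", 1))  # 287082
          ```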

          • PlexSheep@feddit.de · 1 year ago

            Maybe I phrased it incorrectly. It prevents an attacker from getting the password and using it again in the future.

            In what circumstances besides reusing passwords does this matter?

            To make this discussion extra long: if you’re creating a hash based on a local password and then sharing that as the secret with the server, which then treats it with regular password security, that is beneficial for security as far as I can see, since it makes sure the “password”/secret is strong and pseudorandom.

            • uis@lemmy.world · 1 year ago

              In what circumstances besides reusing passwords does this matter?

              Happens more often than you’d imagine.

              To make this discussion extra long: if you’re creating a hash based on a local password and then sharing that as the secret with the server, which then treats it with regular password security, that is beneficial for security as far as I can see, since it makes sure the “password”/secret is strong and pseudorandom.

              Didn’t I mention two places where hashing can be used? Let’s take Lemmy as an example. There is a /login endpoint that takes a username and password and returns a token, and there is a /register endpoint that takes lots of arguments, including a username and password. The hashing you are talking about now replaces the plain-text password with a generated secret; it prevents the server from learning the password that is also used to generate secrets for other platforms.

              Now add two hypothetical endpoints, /gettmptok and /verify. The first takes a username and returns a temporary token. The second takes the username, the temporary token, and a hash of the password salted with the channel’s (public) key and the temporary token, and returns… let’s say a boolean value, meaning this hash becomes a valid token. If an attacker tries to MITM here, the server will reject the token, because the salt is wrong, so the hash won’t match the expected one. Even without channel binding, the attacker cannot get a secret to log in with again if the user logs out of the session, forcefully closes it from another one, or the token is invalidated for any other reason.

              Got it EXTRA long.
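
              To make it even longer, the flow above might look roughly like this; the endpoint names are the hypothetical ones from the description, and the exact hash construction is just for illustration:

              ```python
              # Rough sketch of the hypothetical /gettmptok + /verify flow described
              # above. Endpoint names, fields, and the hash construction are illustrative.
              import hashlib
              import hmac
              import secrets

              def login_proof(password: str, channel_pubkey: bytes, tmp_token: bytes) -> bytes:
                  # Hash of the password, salted with the channel's public key and the
                  # one-time token issued for this login attempt.
                  return hashlib.sha256(password.encode() + channel_pubkey + tmp_token).digest()

              # --- /gettmptok (server) ---
              tmp_token = secrets.token_bytes(16)   # sent to the client, remembered per user

              # --- /verify (client) ---
              client_proof = login_proof("hunter2", b"channel-key-the-client-sees", tmp_token)

              # --- /verify (server) ---
              # For simplicity this assumes the server can recompute the same value; a
              # real scheme would work from a stored verifier, not the plain password.
              expected = login_proof("hunter2", b"channel-key-the-server-uses", tmp_token)

              # Matches only if both sides saw the same channel key (no MITM). A relayed
              # proof is also bound to a token that is spent after one use, so it
              # cannot be replayed later.
              print(hmac.compare_digest(client_proof, expected))  # False here: the keys differ
              ```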

              • PlexSheep@feddit.de · 1 year ago

                I fail to see how this prevents any MITM attack where the attacker pretends to be the server, but besides that, it just seems overly complicated.

                • uis@lemmy.world · 1 year ago

                  With channel binding, the public keys the server and the client see are different, which makes the salt different on the client and the server, which makes the hash different on the client and the server. The server doesn’t get the hash it expects, so it replies with a 403, GTFO. And as a bonus, the attacker doesn’t get your password.