Some arguments for IPv6 (including from practical experience)

This is intended to be an ongoing list of use cases that come up rather than a proper post, so I won’t date updates to it. My use cases come largely from running servers and experimenting with devices at home.

(1) Running multiple servers at different domains on the same subnet on ports 80 and 443. These could even be different servers on the same machine: useful, for example, if you run a web server and a proxy server on the same ports, or maybe a PHP and a Java server platform – but you need more than one address, which is where IPv6 comes in. With IPv4 and NAT you cannot forward different domains to different internal servers on the same port unless you have multiple public IPv4 addresses, and now that IPv4 addresses have run out, only organisations that bought large blocks long ago have those. Scarcity has made IPv4 addresses expensive, and small users cannot realistically hope for more than one or two, if they can get any at all. With IPv6 there are, quite literally, trillions of addresses available per person. (A small sketch of this use case follows after this list.)
(2) Publicly addressing multiple devices in a household: IP security cameras, just your mobile and tablet, or any other Internet-enabled device. This is not a security risk provided you have a properly configured IPv6 firewall in addition to your IPv4 one, which your router will already be providing for you. With trillions of addresses to pick from, individual addresses are very hard to guess, and IPv6 devices usually choose their own addresses (and change them over time), rather than taking a lease from the router as with IPv4. That adds up to better security.
(3) As some countries find it impossible to obtain IPv4 addresses, they will switch to IPv6-only networks. You cannot reach the IPv6 network from IPv4, or vice versa. All servers and home networks can run IPv6, but because IPv4 addresses cannot be obtained everywhere, increasing parts of the world will not be running IPv4 at all, and anything that is IPv4-only will simply be invisible to them. The current stop-gap of running dual-stack networks will eventually break down, because it still requires an IPv4 address for each server or network and is therefore no more scalable than running IPv4-only networks was.
(4) The proliferation of devices per person running on multiple networks (home broadband, work broadband or VPN, phone data connection) will simply make IPv4 and other stop-gap solutions like carrier-grade NAT unworkable in the long run.
(5) The solution is well established and could have been put in place long ago: there is no technical reason not to have done it as routers have already been replaced and software can be updated even on old devices to allow IPv6 networking.
(6) Although for many use cases IPv6 simply replaces IPv4 as an addressing system and has no impact on speed, in others it will speed up the Internet because it is more efficient at certain things, such as addressing multiple devices on a subnet. Technologies that could be built on IPv6 are not being developed because so many people cannot use them while IPv4 is still run in parallel with it.
(7) The ordinary user will only see a better Internet and will not have to do anything to use IPv6 and retire IPv4: software and systems will be updated so it happens for them, and in most cases it already has. We only need to plan now to switch off IPv4; otherwise, sooner or later, it will happen anyway. Around a third of Internet traffic is already IPv6. The currently small amount of IPv6-only traffic (i.e. traffic that cannot travel over IPv4 because it is to locations without IPv4 addresses) will grow, just as general IPv6 traffic has done over the last few years. As soon as this includes major services that millions use, IPv4's days are done. If Google or Facebook abandons IPv4, so will the rest of the world except for legacy purposes, largely confined to internal subnets. Having a public IPv4 address, once many millions of people cannot see it, will be virtually pointless. Nor will it be especially useful for hiding Internet traffic, since security agencies will still be able to monitor what is left of the IPv4 network; it is easiest to find people in a place with almost no traffic. Perhaps IPv4 will eventually be switched off; it hardly matters when, once everybody has moved to IPv6 anyway.
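To illustrate point (1): with a routed IPv6 prefix you can simply give one machine several global addresses and bind a different service to each. A rough sketch with iproute2, using the 2001:db8::/32 documentation prefix in place of your real one (the interface name is also just an example):

# add two extra global addresses from your delegated prefix to the same interface
sudo ip -6 addr add 2001:db8:1:2::80/64 dev eth0
sudo ip -6 addr add 2001:db8:1:2::81/64 dev eth0
# each web server can then listen on its own address on port 443, e.g. in Nginx:
#   listen [2001:db8:1:2::80]:443 ssl;
#   listen [2001:db8:1:2::81]:443 ssl;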


Fixing WebDAV with Nginx request headers

In the past I have used SabreDAV to provide WebDAV, CardDAV and CalDAV services, but you can also provide simple WebDAV with Nginx itself: its ngx_http_dav_module supplies the basic PUT, DELETE, MKCOL, COPY and MOVE methods and, if you need the additional PROPFIND, OPTIONS, LOCK and UNLOCK methods, you can add the nginx-dav-ext-module as well – enough, for example, for a fully read-write WebDAV share that works nicely with the OSX / MacOS Finder. Without the extended module, some clients will be read-only.

However, this runs into several problems with some clients.

Firstly, I still run OSX 10.11.6 El Capitan, whose Finder is not entirely compliant with the WebDAV standard. It does not add a trailing slash to folders consistently, resulting in errors and the inability to create, delete or move/rename folders. Creating and deleting folders can be fixed by adding a trailing slash to the request URI in Nginx, but moving (i.e. renaming) folders relies on the Destination request header, which the Nginx core modules cannot change. There are, however, additional Nginx modules that can, and in this tutorial I present two alternative ways to do it:

  1. Headers-More module
  2. Lua module

A more serious problem arises with cURL, which attempts to carry out duplicate requests on the root / folder before carrying them out on the request URI. This is fatal in the case of DELETE and MOVE in particular, since it deletes the root folder along with all its contents, losing data and potentially exposing the parent folder (OSX Finder, by contrast, simply loses the connection). It can be resolved in the Nginx setup without any additional modules by simply blocking requests to the root folder, which clients should have no need to make anyway.
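For reference, this is roughly what a folder rename looks like over WebDAV with cURL (the host, path and credentials are placeholders), and it is exactly this kind of request that goes wrong when a client repeats it against / first:

# rename /files/old/ to /files/new/ over WebDAV (placeholder host and credentials)
curl -k -u user:password -X MOVE \
     -H "Destination: https://dav.example.com/files/new/" \
     https://dav.example.com/files/old/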

If you are not using nginx-extras then the Headers-More and Lua modules will not be available, and if you are not using nginx-full or nginx-extras then the DAV Ext module will not be available either. Because I was running Nginx 1.17.5 to patch the recent php7.x-fpm bug CVE-2019-11043, I had to compile these modules separately to get them to work on a production server, having first tested them with Nginx 1.14.2 on Raspbian Buster. Compiling Headers-More is considerably more straightforward than the Lua module, because the latter needs a number of dependencies compiled too, and the configuration for Headers-More is simpler in any case. There are tutorials elsewhere on compiling them as dynamic modules; you can then copy the resulting modules into the standard install rather than recompiling the Nginx core as well, provided the version number matches the one you have installed from the package.
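Roughly, building Headers-More as a dynamic module looks like this (the version number and paths are illustrative; the Nginx sources must match the version of the installed binary exactly):

# you will need the usual nginx build dependencies (e.g. libpcre3-dev zlib1g-dev libssl-dev)
# fetch nginx sources matching the installed binary, plus the module itself
wget https://nginx.org/download/nginx-1.17.5.tar.gz && tar xf nginx-1.17.5.tar.gz
git clone https://github.com/openresty/headers-more-nginx-module.git
cd nginx-1.17.5
# build only the dynamic module, not the whole server
./configure --with-compat --add-dynamic-module=../headers-more-nginx-module
make modules
# copy the module into place and load it from nginx.conf with:
#   load_module modules/ngx_http_headers_more_filter_module.so;
sudo cp objs/ngx_http_headers_more_filter_module.so /usr/lib/nginx/modules/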

Finally, I tidied up some problems with OSX / MacOS creating a lot of excess .DS_Store and ._ files, which cause problems for WebDAV. Those requests are dropped silently to avoid clogging up the logs.

dav_ext_lock_zone zone=foo:10m;
client_body_temp_path /var/dav/tmp;

#this section redirects from 80 to 443. If you are not using https, use the directives from the 443 section
# below here instead.
server {
  listen 80;
  # always use SSL
  location / {
    if ($request_method = POST) {
      # use temporary to allow for POST to go through
      # 301 will only work for GET/HEAD/OPTIONS
      return 307 https://$host$request_uri;
    }
    return 301 https://$host$request_uri;
  }
}

server {
  listen 443 http2 ssl default_server;
  listen [::]:443 http2 ssl default_server;
  #
  # Note: You should disable gzip for SSL traffic.
  # See: https://bugs.debian.org/773332

  client_max_body_size 0;
  proxy_read_timeout 300;  # answer from server, 5 min
  proxy_send_timeout 300;  # chunks to server, 5 min
  proxy_set_header  Host $host;
  proxy_set_header  X-Real-IP $remote_addr;
  proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header  X-Forwarded-Proto $scheme;
  port_in_redirect  off;
  #ssl on; # deprecated: the 'ssl' parameter on the listen directives above already enables TLS
  ssl_session_timeout 5m;
  ssl_certificate /etc/ssl/certs/localhost-selfsigned.crt;
  ssl_certificate_key /etc/ssl/private/localhost-selfsigned.key;
  ssl_protocols TLSv1.2 TLSv1.3;
  ssl_prefer_server_ciphers on;
  ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4";
  root /var/www/html;

  # Add index.php to the list if you are using PHP
  index index.html index.htm;

  server_name _;

  location / {
    # You could alter this. It is just here for my purposes.
    if ($remote_user != "admin") { set $r $remote_user; }
      root /var/dav/$r;

      # First attempt to serve request as file, then
      # as directory, then fall back to index.html
      #try_files $uri $uri/ /index.html;

      # enable creating directories without trailing slash
      set $x $uri$request_method;
      if ($x ~ [^/]MKCOL$) {
        rewrite ^(.*)$ $1/;
      }

      ## add a trailing slash to the Destination header URI whenever the request
      ## URI has one. This fixes the problem with (some versions of?) OSX/MacOS
      ## Finder where renaming folders fails.
      set $destination $http_destination;
      if ($uri ~ "^(.*[^/])/$") { set $destination $http_destination/; }
      more_set_input_headers "destination: $destination";

      # This prevents some unexpected behaviour from cURL that carries out
      # every request first on / before it does so on the URI. In the case
      # of DELETE and MOVE this is fatal because it deletes the root folder
      set $x $uri$request_method;
      if ($x ~ ^/(PUT|DELETE|MKCOL|COPY|MOVE)$) {
        return 403; # forbidden is the correct response
        #return 418; # a teapot's response to not following standards - testing only
      }

      # add a trailing slash to the request URI if it is a directory
      if (-d $request_filename) { rewrite ^(.*[^/])$ $1/ break; }

      dav_methods PUT DELETE MKCOL COPY MOVE;
      dav_ext_methods PROPFIND OPTIONS LOCK UNLOCK;
      dav_ext_lock zone=foo;
      create_full_put_path  on;
      dav_access    user:rw group:r all:r;
      #autoindex     on; # does not work with fancyindex module (overrides it) but enable it if you like
      auth_basic "restricted";
      auth_basic_user_file /var/www/.htpasswd;
      #below you can specify the access restrictions. In this case, only people on the 141.142 network
      #can write/delete/etc. Everyone else can view.
      #limit_except GET PROPFIND OPTIONS{
        #allow 141.142.0.0/16;
        #deny  all;
      #}
      allow all;

      ## function to add a trailing slash to the destination header URI
      ## if there is one in the request URI. This fixes the problem with
      ## (some versions of?) OSX/MacOS Finder where renaming folders fails
      access_by_lua_block {
        --[[ **COMMENTED OUT BECAUSE THE more_set_input_headers ABOVE IS EASIER **
        function modify_dest_header()
          ngx.req.read_body()
          -- grab the request headers
          local headers, err = ngx.req.get_headers()
          if err == "truncated" then
            -- one can choose to ignore or reject the current request here
            -- but we are doing nothing
          end

          if not headers then
            --ngx.say("failed to get request headers: ", err)
            -- do nothing and process no further
            return
          end

          -- check to see if the URI has a final slash
          local m = ngx.re.match(ngx.var.uri, "^(.*[^/])/$")
          if (m) then
            -- check through the headers for destination key and URI value
            for key, val in pairs(headers) do
              if (key == "destination") then
                -- append a trailing slash to the destination URI value
                --ngx.say(key, ": ", val) -- TESTING ONLY
                ngx.req.set_header("destination",val.."/")
              end
            end
          end

        end

        modify_dest_header()
        ** END OF COMMENTED-OUT SECTION ** ]]--
      }
  }

  # deny writing of Apple .[_.]DS_Store files
  #
  location ~ /\.DS_Store {
    access_log off;
    error_log off;
    log_not_found off;
    deny all;
  }
  location ~ /\._.DS_Store {
    access_log off;
    error_log off;
    log_not_found off;
    deny all;
  }
  # deny writing of Apple ._ files, which can prevent writing ordinary files
  # by OSX if an earlier failure has left only the ._ file there.
  #
  location ~ /\._ {
    access_log off;
    error_log off;
    log_not_found off;
    deny all;
  }
}
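With everything in place, a quick sanity check from the command line (host and credentials are placeholders) should show the DAV methods working:

# OPTIONS should list the WebDAV methods in the Allow header
curl -k -u user:password -X OPTIONS -i https://dav.example.com/files/
# create a folder, upload a file into it (assuming a local hello.txt), then list the folder
curl -k -u user:password -X MKCOL https://dav.example.com/files/test/
curl -k -u user:password -T ./hello.txt https://dav.example.com/files/test/hello.txt
curl -k -u user:password -X PROPFIND -H "Depth: 1" https://dav.example.com/files/test/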

You should now have a WebDAV instance that works even with non-compliant clients that do not fully implement the standards, or with command-line clients like cURL that otherwise behave in ways that could compromise the server.


HTTP/3, UDP and a faster, more seamless Internet

Why is this interesting?

The few readers who have hitherto wandered to this blog, plagued as it is by my obsessional interest in either the eye-wateringly technical details of software installation or else the equally obscure details of Celtic languages, may find it refreshing to hear that this post is going to explain why the new HTTP/3 standard is going to make the Internet much better – mostly by being a LOT faster.

HTTP/3 200
server: h2o
content-length: 193
date: Fri, 01 Nov 2019 10:32:48 GMT
content-type: text/html
last-modified: Tue, 29 Oct 2019 09:15:07 GMT
etag: "5db8031b-c1"
accept-ranges: bytes
alt-svc: h3-23=":8081"

(For those who are interested in in-depth instructions on how to set up HTTP/3 on the h2o web server, please see the updated version of my previous post here.)

You might have heard that HTTP/2 came out in 2015; it contains a host of optimisations and technical improvements that make it far faster than the aged HTTP/1.1 (1999), itself a revision of the now archaic HTTP/1.0 (1996) and, practically back at the dawn of time, the retrospectively named HTTP/0.9 (1991)*. The latter two are of largely historical interest by now, and many command-line tools don't even bother to support them, because you would have to be a very pedantic (or specialist) person indeed to need to distinguish them from HTTP/1.1. So why, you may ask, would I make such a fuss about HTTP/3, just another improvement? Why should I care how or why it works, if it works anyway?

Well, it is NOT "just another improvement". It fundamentally changes the way that the Internet works by doing something very unexpected. Up to now, most reliable transfers of data have been made using the ageing Transmission Control Protocol (TCP), dating from the publication of RFC 675 back in 1974. That was a while back, it must be said, particularly in computing terms. The UNIX Epoch, widely understood to mark the era of modern computing, dates from 1 January 1970. (This is often the default date for files that haven't got one of their own or have lost it somehow, in case you were wondering where you might have seen it before.)

A view of the Chrome Canary Developer Tools showing HTTP/3 as “http2+quic/99”

(*HTTP/0.9 was essentially HTTP/1.0 without any headers. It had no version name at the time.)

What is HTTP/3, then?

Instead, it has been decided that HTTP/3 will work over the User Datagram Protocol (UDP), which is blazingly fast but, until now, has been totally unsuitable for sending information that must arrive complete and in the correct order. It's great for short messages, for streaming, for images in games and so forth. However, it has the great failing that it does not control the order in which packets arrive (packets being the discrete parts of data that make up everything you or anybody else sends over the Internet), and it does nothing to reassemble them afterwards. Since the browser manufacturers decided to prefer HTTPS at the time HTTP/2 was adopted in 2015, some 80% of the Internet is encrypted using increasingly good versions of TLS (still commonly known by the name of its now defunct predecessor, SSL) and is thus a great deal more secure. If messages broken up into little pieces and reassembled in the wrong order would create chaos for plain HTTP (which is why we used TCP instead of UDP), how much worse would it be over HTTPS?

The answer is that it would be trash. If you encrypt something, you cannot guess at what its parts are until they are decrypted again; you have to know the order of the packets or it simply won't work. UDP on its own is therefore a complete waste of time for things like the World Wide Web, email and nearly all the other applications people use on the Internet. It works for some important aspects of gaming, for example, but you still need TCP for anything that has to be organised and for the rest of the game (like any program) to work. UDP alone is disastrous for encryption or big data: anything that really needs not to be mangled and rearranged randomly in transmission.

Into the breach steps QUIC. To cut a long story and a tedious list of technical descriptions short, it is similar to a completely re-engineered kind of TCP that works over UDP and thus benefits from its speed. It makes sure the packets arrive in order. It works seamlessly when you change from your wi-fi connection to your data connection and back again, whereas your phone or laptop would otherwise complain that the connection was interrupted. We will keep HTTP/2 and earlier over TCP as fallbacks for the foreseeable future. HTTP/3 uses QUIC in order to make sure it works like TCP would – only a LOT faster.
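In practice a server advertises HTTP/3 to clients with an Alt-Svc response header sent over HTTP/1.1 or HTTP/2, so you can check whether a site offers it with an ordinary cURL (substitute a hostname you actually want to test):

# look for an alt-svc header advertising an h3-* endpoint
curl -sI https://example.com/ | grep -i '^alt-svc'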

What are the benefits?

It may be convenient that your video keeps playing as you move from your wi-fi to your phone’s data connection. But we live in a world of big data. What does that mean?

The Internet services and sites that we depend on move a LOT of data. Optimising this by a few percent makes new technology possible; optimising it by, say, 20% would make it a LOT faster. I haven't done any benchmarking myself, because HTTP/3 is cutting-edge and there is little information out there yet beyond complex instructions for compiling servers and tools, but some very clever programmers have, and it is that sort of massive difference in speed and effectiveness. There is, after all, rather a big difference in our understanding of transmission protocols between 1974 and 2019: some 45 years of development.

Imagine you are a national library. Imagine you are a physicist with a huge data set. Imagine you are Facebook or Google with massive amounts of our data to move about. Then imagine, as a user, how much MORE data you (or your phone) could move in a shorter space of time: the things that make your videos work, that may make video conferencing far better, the quality of video calls close to perfect. The extent to which web technologies can take advantage of this to build apps and services that we have not hitherto thought practically achievable at scale should not be underestimated.

Will it be adopted?

Yes. HTTP/2 went from being unknown to being used by every major web service within months, faster than any such major version had been adopted before. There are fewer than a dozen known HTTP/3 test servers, yet Facebook already provides access to its entire site over it, despite the protocol being experimental and no browser having enabled it by default yet. Browsers get updated every other week, so this functionality WILL soon be in them. Cloudflare have enabled it on ALL their servers for ALL their customers, and that is a LOT of customers. They drove HTTP/2 adoption and much more. They use Nginx (a major web server, arguably the most important since Apache, responsible for a huge slice of the modern Internet), which already has test HTTP/3 functionality, and in all likelihood that will be rolled out to server administrators within the next year or so.

Watch this space. HTTP/3 is much, much bigger than HTTP/2. I’ve got a test server because I am nerdy enough to want one. But soon everybody will be doing it. Big companies know it because for them it means money. You will just see things get faster and maybe forget why. But lots of new technology will be possible because of it, just because we can move stuff around faster and more seamlessly.


Compiling and administering the h2o web server

Original version posted on 2015-10-25 10:26 GMT

Update: 2019-11-01, edited 2019-11-04, added systemd configuration 2019-11-14, edited 2021-08-26

Although one no longer needs to compile h2o for Debian or Ubuntu, it has recently come to my attention that test servers for experimental HTTP/3 support (over QUIC and UDP in place of TCP) have become available. Among these is h2o, but you currently need to compile the latest version rather than install the packaged one. Consequently, it seemed a good moment to make some updates and corrections to these instructions. There are two main ways to see HTTP/3 responses in action:

(1) Firstly, you can use the latest nightly developer version of Google Chrome Canary with the command-line arguments --enable-quic --quic-version=h3-23 (now --enable-quic --quic-version=h3-29), as described by Cloudflare in their recent blog post, and then enable the developer tools and the Protocol column in the Network tab. At first this masqueraded as "http2+quic/99", but it is really HTTP/3; it now shows h3-29 (or a later incremental version as it becomes available) for HTTP/3 (2021-08-26).

A view of the Chrome Canary Developer Tools showing HTTP/3 as “http2+quic/99”

(2) Alternatively, for a clearer command-line response, you can compile cURL with quiche and BoringSSL in order to make a request using HTTP/3. You can also compile it with ngtcp2 and nghttp3 against a patched version of OpenSSL, but I found this problematic, and I have not yet been able to get it working properly on ARM, e.g. the Raspberry Pi. Even so, an instruction similar to the following returned my first HTTP/3 header successfully (at the time using port 8081 rather than 8443, and version h3-23):

$ curl --http3 https://myserver.net:8081/ -I -k

$ curl --http3 https://myserver.net:8443/ -I -k    # (2021-08-26)
HTTP/3 200
server: h2o
content-length: 193
date: Fri, 01 Nov 2019 10:32:48 GMT
content-type: text/html
last-modified: Tue, 29 Oct 2019 09:15:07 GMT
etag: "5db8031b-c1"
accept-ranges: bytes
alt-svc: h3-23=":8081"

Note (2019-11-02): HTTP/3 test end point now available on h2o

Interestingly for the adoption of this new standard, Cloudflare are backing it and Facebook have enabled HTTP/3 as well as the other test servers listed here. You can also compile a patched version of Nginx following these instructions by Cloudflare (who use it to deliver their proxy servers for customers) but I haven’t yet tried this because I use Nginx in production on all my servers.

Note that Nginx is intending to implement HTTP/3 by the end of 2021 (2021-08-26).

Update: 2018-10-18

There is no longer any need to compile h2o for Debian, since you can simply install the packages. I was using Debian 9 when I originally tried this; I have since also succeeded on Ubuntu 19.10.

There are now file includes using the !file directive, but these do not work with wildcards, so you cannot include a whole folder like sites-enabled. The method I outline below, concatenating the files into a temporary file, is therefore still required if you want to administer h2o in the way you would Apache or Nginx.

Because you can’t define how .php files will be handled per directory in the way that you can in Nginx, there is no way to have HHVM fall back to php7.0-fpm if it falls over, by catching 502 errors. All you can do in h2o is define custom error documents, which can’t be from custom locations on the server but must be somewhere in the web root or folders below.

Update: since HHVM ceased to support PHP the above struck out section is no longer especially relevant, although you could of course use it to provide support for Hack in the same way. You can also provide support for Python, Perl and others, as with Nginx.

I created custom error pages using PHP so that I could produce output like the Nginx error page, including the server software header, but that is rather pointless in the case of 502 Bad Gateway, since that usually happens when PHP itself has fallen over, in which case you would see h2o's plain "Internal Server Error" response anyway! I did it like this:

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center><?=$_SERVER['SERVER_SOFTWARE'];?></center>
</body>
</html>
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->
<!-- a padding to disable MSIE and Chrome friendly error page -->

Background

Installing h2o no longer (2018-10-18) requires compilation from source unless you wish to use HTTP/3 (2019-11-01). Compiling is probably not for the faint-hearted, but it is perfectly possible if you are comfortable researching software dependencies when problems arise. There are no guarantees, and you may need to trace errors caused by issues specific to the configuration or installed software on your server. I succeeded using Ubuntu 14.04 Trusty Tahr LTS, while I have not yet succeeded on Debian 8 Jessie, although in principle it should be perfectly possible. My latest successful attempt (2019-11-01) was on Ubuntu 19.10 Eoan Ermine.

As previously with Nginx, it is comparatively easy to use h2o with HHVM or PHP-FPM via FastCGI in order to provide PHP support. You can use ps-watcher to increase the reliability of HHVM by bringing it back up if it falls over, as I described in my previous post about HHVM with Nginx. It is not possible, apparently, to provide automatic fallback to PHP-FPM. I’ve only been able to set up one or the other. However, I don’t think this is a major disadvantage. (See note above about HHVM no longer supporting PHP.)

What is a little more involved, though simple enough in principle, is setting up h2o to start up and operate in the standard way, using distributed config files for virtual hosts as packaged for servers such as Apache and Nginx. This is complicated by the fact that the language chosen for configuration, YAML, does not fully [edited 2018-10-21] support include statements: the custom !file directive is now available, but it still does not allow wildcard * includes. YAML here is a programmer's choice rather than a good systems-administration choice, at least if we are going to be purist and refuse include statements because they are not part of YAML. We can achieve a similar effect using /etc/init.d scripts, however, and I present a practical work-around that I created to do so below, following the installation instructions.

Kazuho Oku’s amazing work on this new generation HTTP/2 and now HTTP/3 server has provided a blazingly fast, efficient web server. Now is the time for work to make it more usable in practice.

Unless you want experimental HTTP/3 support (2019-11-01), jump on to “Configure h2o with distributed config files for virtual hosts” below, since the compiling steps are no longer required (2018-10-18) if you install the package normally in Debian or Ubuntu.

Installing h2o from source

Update: as of 2019-11-01 the packaged version of libuv-dev will now do the job, so you can now skip compiling it and install it with:

sudo apt install libuv-dev

First of all, we need to compile libuv 1.x, because the packaged libuv-dev does not currently meet the version required by h2o. So we must first ensure that the package is uninstalled:

sudo apt-get remove libuv-dev

If you don't have the general compilation tools, install them now; we will need them for several other steps along the way as well:

sudo apt-get install libtool automake make

Now we can get on and do the job. (You’ll need unzip of course if it’s not installed.) Good luck!

wget https://github.com/libuv/libuv/archive/v1.x.zip
sudo apt-get install unzip
unzip v1.x.zip
cd libuv-1.x
sh autogen.sh
./configure
make
sudo make install
cd

If this has succeeded, we must now install wslay as follows:

INSTALL DEPENDENCIES

sudo apt install libcunit1 libcunit1-dev nettle-dev

THEN EITHER

wget https://github.com/tatsuhiro-t/wslay/archive/master.zip
unzip master.zip
cd wslay-master

OR

git clone https://github.com/tatsuhiro-t/wslay.git
cd wslay

THEN

autoreconf -i
automake
autoconf
./configure
make
sudo make install

Now, if you want to compile h2o with mruby, which is used for custom script processing of requests in h2o configuration, then we must compile it now as well. It needs various additional tools first, as you’ll notice in the first line:

INSTALL DEPENDENCIES

sudo apt-get install ruby gcc bison clang

THEN EITHER

wget https://github.com/mruby/mruby/archive/1.1.0.tar.gz
tar xvf 1.1.0.tar.gz
cd mruby-1.1.0

OR

git clone https://github.com/mruby/mruby.git
cd mruby

THEN
make
make install
sudo cp build/host/lib/libmruby.a /usr/local/lib/
sudo cp build/host/lib/libmruby_core.a /usr/local/lib/
sudo cp -R include/mr* /usr/local/include/
cd

If you have got this far, it’s time to compile h2o itself:

EITHER

wget https://github.com/h2o/h2o/archive/v1.5.2.tar.gz
sudo tar xvf v1.5.2.tar.gz
cd h2o-1.5.2

OR

git clone https://github.com/h2o/h2o.git
cd h2o

THEN
# h2o builds with CMake, so install it if you have not already
sudo apt-get install cmake
cmake -DWITH_BUNDLED_SSL=on -DWITH_MRUBY=ON .
make
sudo make install
cd

If you don’t want mruby for any reason, you can leave out -DWITH_MRUBY=ON above and don’t need to compile it either.

I really hope that this has worked for you. Now you should have h2o installed. It’s time to set it up.

Configure h2o with distributed config files for virtual hosts

First create /etc/h2o/h2o.conf as follows. The additions for HTTP/3 are marked with a comment in the listing below.

# H2O config file -- /etc/h2o/h2o.conf
# to find out the configuration commands, run: h2o --help

server-name: "h2o"
user: www-data
access-log: "|rotatelogs -l -f -L /var/log/h2o/access.log -p /usr/share/h2o/compress_logs /var/log/h2o/access.log.%Y-%m-%d 86400"
error-log: "|rotatelogs -l -f -L /var/log/h2o/error.log -p /usr/share/h2o/compress_logs /var/log/h2o/error.log.%Y-%m-%d 86400"
#error-log: /var/log/h2o/error.log
#access-log: /var/log/h2o/access.log
#access-log: /dev/stdout

pid-file: /run/h2o.pid
listen: 80
listen: &ssl_listen
  port: 8443
  ssl:
    certificate-file: /etc/ssl/certs/server.crt
    key-file: /etc/ssl/private/server.key
    #cipher-suite: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384
    minimum-version: TLSv1.2
    cipher-suite: "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
    # Oldest compatible clients: Firefox 27, Chrome 30, IE 11 on Windows 7, Edge, Opera 17, Safari 9, Android 5.0, and Java 8
    # see: https://wiki.mozilla.org/Security/Server_Side_TLS
# The following three lines enable HTTP/3
listen:
  <<: *ssl_listen
  type: quic # Doesn't work in h2o 2.2.5 hence no http/3 available
# See security issue https://www.mozilla.org/en-US/security/advisories/mfsa2015-44/
header.set: "Alt-Svc: h3-29=\":8443\"; ma=39420000"

expires: 1 year
file.dirlisting: off
file.send-gzip: on
limit-request-body: 1024
#num-threads: 4

file.mime.addtypes:
  application/atom+xml: .xml
  application/zip: .zip

header.set: "strict-transport-security: max-age=39420000; includeSubDomains; preload"
#header.set: "content-security-policy: default-src 'none';style-src 'unsafe-inline';img-src https://example.com data: ;"
header.set: "x-frame-options: deny"

file.custom-handler:                  # handle PHP scripts using php-cgi (FastCGI mode)
  extension: .php
  fastcgi.connect:
    #port: /var/run/hhvm/hhvm.sock
    #type: unix
    port: 9000
    type: tcp
    #port: /run/php/php7.3-fpm.sock
    #type: unix

file.index: [ 'index.php', 'index.html' ]

hosts:
  "0.0.0.0:80":     
    #enforce-https: on                                     
    paths:
      /:
        #file.dir: /usr/share/h2o/examples/doc_root.alternate
        file.dir: /var/www/default
      #/backend:
        #proxy.reverse.url: http://127.0.0.1:8080/
        #fail: 
    #access-log: /dev/stdout
  "0.0.0.0:4443":
    #enforce-https: on
    listen:
      port: 4443
      ssl:
        certificate-file: /etc/ssl/certs/server.crt
        key-file: /etc/ssl/private/server.key
    paths:
      /:
        #file.dir: /usr/share/h2o/examples/doc_root.alternate
        file.dir: /var/www/default
      #/backend:
        #proxy.reverse.url: http://127.0.0.1:8080/
    #access-log: /dev/stdout

We will create /var/www/default containing nothing at all so that the server has something secure and predictable to fall back to if someone reaches it directly by IP address. You should always do this with any web server for good systems administration, including Nginx and Apache. It is better than choosing /var/www because this contains the other web roots and a fallback here can gain access to them if the paths are known.

sudo mkdir /var/www/default

Now we are going to create /etc/h2o/sites-available and /etc/h2o/sites-enabled following the pattern for Apache and Nginx. We will use these in the next section.

sudo mkdir /etc/h2o/sites-available
sudo mkdir /etc/h2o/sites-enabled

Now we must create an example virtual host e.g. /etc/h2o/sites-available/example.com as follows:

  "example.com:80":
    #enforce-https: on
    paths:
      /:
        file.dir: /var/www/example.com
    #access-log: /dev/stdout
    header.set: "content-security-policy: default-src 'none';style-src 'unsafe-inline';img-src https://example.com data: ;"
  "example.com:443":
    #enforce-https: on
    listen:
      port: 443
      ssl:
        certificate-file: /etc/ssl/certs/server.crt
        key-file: /etc/ssl/private/server.key
    paths:
      /:
        file.dir: /var/www/example.com
    #access-log: /dev/stdout
    header.set: "content-security-policy: default-src 'none';style-src 'unsafe-inline';img-src https://example.com data: ;"

Now we also need to create a folder and link the config files.

sudo mkdir /var/www/example.com
sudo ln -s /etc/h2o/sites-available/example.com /etc/h2o/sites-enabled/example.com
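Because h2o cannot wildcard-include sites-enabled, you can check what the combined configuration will look like, and whether h2o accepts it, by concatenating the files by hand and running h2o in the foreground, exactly as the init script further below does:

# build the combined config by hand, the same way the init script does
cat /etc/h2o/h2o.conf /etc/h2o/sites-enabled/* > /tmp/h2o.conf
# run h2o in the foreground against it; any configuration errors are printed to the terminal
sudo h2o -c /tmp/h2o.conf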

Configuring start-up with systemd

The following unit should work, but check that the h2o binary is in the right location (/usr/sbin/h2o, /usr/bin/h2o, /usr/local/bin/h2o, /opt/h2o, etc.) and adjust ExecStart accordingly.

[Unit]
Description=Optimized HTTP/1.x, HTTP/2 server
After=network.target

[Service]
Type=simple
ExecStart=/usr/sbin/h2o -c /etc/h2o/h2o.conf

[Install]
WantedBy=multi-user.target
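Assuming you save the unit as /etc/systemd/system/h2o.service (that path is my assumption; adjust to taste), reload systemd and enable it:

sudo systemctl daemon-reload
sudo systemctl enable --now h2o
sudo systemctl status h2o

Note that, as written, the unit reads /etc/h2o/h2o.conf directly; if you rely on the sites-enabled concatenation described above and below, point the -c option at the combined file instead.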

You should find that this works now.

Configure start-up using init.d

I have never used upstart even though at one time Ubuntu used it before abandoning it in favour of systemd (above). Prior to using systemd to start h2o, I also used the old-fashioned sysvinit system that still exists in all Linux distributions and is relied on in Ubuntu 14.04 for major software packages that include web servers such as Apache and Nginx.

The following script should work, but check that the h2o binary is in the right location (/usr/sbin/h2o, /usr/bin/h2o, /usr/local/bin/h2o, /opt/h2o, etc.) and adjust the DAEMON variable in the script accordingly.

We will now create /etc/init.d/h2o as follows:

#!/bin/sh

### BEGIN INIT INFO
# Provides:          h2o
# Required-Start:    $local_fs $remote_fs $network $syslog
# Required-Stop:     $local_fs $remote_fs $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the h2o web server
# Description:       starts h2o using start-stop-daemon
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
RUN_DIR=/tmp
DAEMON=/usr/local/bin/h2o
DAEMON_OPTS='-c /tmp/h2o.conf -m daemon'
NAME=h2o
DESC=h2o
# Note: the stop/restart/status actions below read $RUN_DIR/h2o.pid, so the
# pid-file directive in your h2o configuration must point at that same file.

# Include h2o defaults if available
if [ -f /etc/default/h2o ]; then
	. /etc/default/h2o
fi

test -x $DAEMON || exit 0

set -e

. /lib/lsb/init-functions

case "$1" in
	start)
		echo -n "Starting $DESC: "
		# Check if the ULIMIT is set in /etc/default/h2o
		if [ -n "$ULIMIT" ]; then
			# Set the ulimits
			ulimit $ULIMIT
		fi
	        rm -f $RUN_DIR/h2o.conf
        	cat /etc/h2o/h2o.conf /etc/h2o/sites-enabled/* > $RUN_DIR/h2o.conf
		$DAEMON $DAEMON_OPTS
		echo "$NAME."
		;;

	stop)
		echo -n "Stopping $DESC: "
		kill -TERM `cat $RUN_DIR/h2o.pid`
		echo "$NAME."
		;;

	restart|force-reload)
		echo -n "Restarting $DESC: "
		if [ -f $RUN_DIR/h2o.pid ]; then
			kill -TERM `cat $RUN_DIR/h2o.pid`
		fi
		sleep 1
		# Check if the ULIMIT is set in /etc/default/h2o
		if [ -n "$ULIMIT" ]; then
			# Set the ulimits
			ulimit $ULIMIT
		fi
	        rm -f $RUN_DIR/h2o.conf
        	cat /etc/h2o/h2o.conf /etc/h2o/sites-enabled/* > $RUN_DIR/h2o.conf
		$DAEMON $DAEMON_OPTS
		echo "$NAME."
		;;

        reload)
                echo -n "Reloading $DESC: "
                if [ -f $RUN_DIR/h2o.pid ]; then
                        kill -TERM `cat $RUN_DIR/h2o.pid`
                fi
                sleep 1
                # Check if the ULIMIT is set in /etc/default/h2o
                if [ -n "$ULIMIT" ]; then
                        # Set the ulimits
                        ulimit $ULIMIT
                fi
                rm -f $RUN_DIR/h2o.conf
                cat /etc/h2o/h2o.conf /etc/h2o/sites-enabled/* > $RUN_DIR/h2o.conf
                $DAEMON $DAEMON_OPTS
                echo "$NAME."
                ;;

	status)
		status_of_proc -p $RUN_DIR/$NAME.pid "$DAEMON" h2o && exit 0 || exit $?
		;;
	*)
		echo "Usage: $NAME {start|stop|restart|reload|force-reload|status|configtest}" >&2
		exit 1
		;;
esac

exit 0

Now we must change the permissions to enable this script:

sudo chmod +x /etc/init.d/h2o

Finally we must enable it on start-up and, if appropriate, stop and disable start-up of Nginx or Apache so that these don’t conflict. I will use Nginx as an example here but you can substitute Apache or another server if you are already running these:

sudo service nginx stop
sudo update-rc.d nginx disable

sudo chmod +x /etc/init.d/h2o
sudo chown root:root /etc/init.d/h2o
sudo update-rc.d h2o defaults
sudo update-rc.d h2o enable

It is now time to start up the service:

sudo service h2o start

Update: the next step is not necessary if you have installed from a package (2018-10-18).

Finally, there are a number of things to move into place:

cd ~/h2o-1.5.2
sudo mkdir /usr/share/doc/h2o
sudo cp -r doc/* /usr/share/doc/h2o
sudo cp LICENSE /usr/share/doc/h2o/
sudo cp README.md /usr/share/doc/h2o/
sudo cp Changes /usr/share/doc/h2o/
sudo mkdir /usr/share/h2o
sudo cp -r share/h2o/* /usr/share/h2o
sudo cp -r examples /usr/share/h2o

All done!

Hooray, if this has all worked, you now have h2o working in a way following standard systems administration methods for web servers.


Installing FreeSwitch on Raspbian 8

This is an updated version of some instructions that I found elsewhere. Thanks to Tom O’Connor. Some key details needed to be changed and dependencies met for it to work so I decided to document them in brief, to pass on the help that I received. I’m going to keep this short otherwise, as you can find out more from Tom’s experience.

Incidentally, FreeSwitch 1.6 won't compile on Debian Wheezy, so you'll need to stick with 1.4 for that. It may also be worth knowing that 1.6 won't compile on plain i386 (older 32-bit x86) machines and currently needs at least an i686-class processor or, better, a 64-bit (x86_64) one. Neither of these points is relevant to ARM, and therefore to Raspberry Pi units, but it may interest some readers anyway; if you're interested in Raspbian you may well also use Debian proper on other machines.

Install the components. The build moaned about numerous missing dependencies, which I have added here. It is not always immediately obvious from the errors exactly what is missing, so I had to do some research online, add the package, run ./configure, try again, and so on. It took a lot of attempts to get all of them, which was frustratingly slow.

sudo apt-get update
sudo apt-get install build-essential git-core autoconf automake libtool libncurses5 libncurses5-dev make libjpeg-dev pkg-config unixodbc unixodbc-dev zlib1g-dev libcurl4-openssl-dev libexpat1-dev libssl-dev screen libtool-bin sqlite3 libsqlite3-dev libpcre3 libpcre3-dev libspeex-dev libspeexdsp-dev libldns-dev libedit-dev liblua5.1-0-dev libopus-dev libsndfile-dev
screen -S compile

Now you are in a screen session. This is because it's a long job and you don't want it interrupted, forcing you to start all over again (you will probably end up doing that often enough as it is). The Git repository has moved, hence the change to the original instructions.

sudo -s
cd /usr/local/src
git clone https://freeswitch.org/stash/scm/fs/freeswitch.git freeswitch.git
cd freeswitch.git
./bootstrap.sh

In fact, for a basic installation on a Raspberry Pi, you probably don't want to alter modules.conf to include FLITE, because of the memory footprint.

./configure
make && make install && make all install cd-sounds-install cd-moh-install

That seems to be about all that's needed. It takes a very long time to configure everything and compile all the files on a Raspberry Pi. I used a model B of the original version, and no doubt it will be faster on an RPi 2 or RPi 3. I hope you are feeling patient, but if you like Raspberry Pi projects then you must be! You might also want to look into cross-compiling this instead.

An alternative is Asterisk with its GUI, FreePBX. I've heard that Asterisk is a bigger beast, so I've steered clear of it for now; I've had success with FreeSwitch in the past.


HHVM with PHP-FPM fallback on Nginx

HHVM is the Hip Hop Virtual Machine, developed under the PHP licence by Facebook:

HHVM is an open-source virtual machine designed for executing programs written in Hack and PHP. HHVM uses a just-in-time (JIT) compilation approach to achieve superior performance while maintaining the development flexibility that PHP provides. HHVM runs much of the world’s existing PHP. […]

Essentially, Facebook have provided a more strongly typed, better version of PHP called Hack but they have also created a migration strategy for old code, in addition to providing PHP using Just In Time (JIT) methods on a virtual machine. Presumably people will continue to use PHP ad infinitum anyway.

Install HHVM (and PHP-FPM if you haven’t already)

In order to get HHVM working on Ubuntu 14.04 LTS (Trusty Tahr), I followed Digital Ocean's instructions, with reference to Bjørn Johansen's basic instructions and his further instructions for using HHVM with PHP-FPM as a fallback in the event that HHVM should fail. I give them full credit for this, though I have adapted them slightly in the odd place below. Import the GnuPG public key for the HHVM repository, add the repository, update the sources, then install HHVM:

sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0x5a16e7281be7a449
sudo add-apt-repository "deb http://dl.hhvm.com/ubuntu $(lsb_release -sc) main"
sudo apt-get update
sudo apt-get install hhvm

Make sure HHVM starts when the system is booted:

sudo update-rc.d hhvm defaults

Optionally, replace php5-cli with HHVM for command line scripts:

sudo /usr/bin/update-alternatives --install /usr/bin/php php /usr/bin/hhvm 60

You could then uninstall php5-cli if you like. I will presume that you are using PHP-FPM but if not, for any reason:

sudo apt-get install php5-fpm

Configure HHVM on Nginx with PHP-FPM as fallback

We will assume here that you are using Nginx, but if not:

sudo apt-get install nginx

(You will need to prevent Nginx from conflicting over ports used by other servers such as Apache, by uninstalling them, moving them to different ports, or disabling them; if you disable them, make sure they stay disabled when the server restarts. That is beyond our scope here.) Now add the config, which I chose to include from /etc/nginx/php-hhvm.conf, but which you could instead add to each config file in /etc/nginx/sites-available:

        # pass the PHP scripts to FastCGI server
        #
        location ~ \.(hh|php)$ {
                fastcgi_intercept_errors on;
                error_page 502 = @fallback;

                try_files $uri $uri/ =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

                fastcgi_keep_conn on;

                # Using a port:
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param   SERVER_NAME $host;
                # Using a web socket:
                ##fastcgi_pass unix:/var/run/hhvm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }

        location @fallback {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include         fastcgi_params;
                fastcgi_index   index.php;
                fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param   SERVER_NAME $host;
                # Using a web socket:
                fastcgi_pass    unix:/var/run/php5-fpm.sock;
        }

Now, if you decide not to repeat the block in every single config file, add the following include line to each of them instead:

	include /etc/nginx/php-hhvm.conf;

Now restart Nginx:

sudo service nginx restart

Test the fallback

You can now test it as follows (replacing the URL with your own):

curl -I https://example.com

If your x509 server certificate (HTTPS) is self-signed or otherwise gets refused, try:

curl -Ik https://example.com

You should now see this header (or similar) in the response:

X-Powered-By: HHVM/3.4.0

Now kill HHVM:

sudo service hhvm stop

Try again with cURL as before. This time you should get something like this:

X-Powered-By: PHP/5.5.9-1ubuntu4.5

However, you may get nothing if, like me, you have changed the settings in /etc/php5/fpm/php.ini not to expose the header:

; http://php.net/expose-php
expose_php = Off

Now I am going to do the same in /etc/hhvm/php.ini as well because it may foil some of the less competent hackers:

; Set 0 to hide the software and version or 1 to show off that we're using HHVM ;-)
expose_php = 0

Restart HHVM automatically

Now we need to make sure HHVM is restarted automatically if it should ever fail, by installing ps-watcher as follows:

sudo apt-get install ps-watcher

Edit /etc/ps-watcher.conf (you might have to create it) and add the following lines:

[hhvm]
occurs = none
action = service hhvm restart

Now enable ps-watcher to start:

sudo sed -i -e 's/# startup=1/startup=1/g' /etc/default/ps-watcher

Start ps-watcher:

sudo service ps-watcher start

If you now kill HHVM manually as above (or it ever falls over), you should see it come back up within about 150 seconds, which is ps-watcher's default polling interval; you can change that by passing a different --sleep value to ps-watcher if you like. With thanks to Bjørn Johansen and also to Digital Ocean, from whom I have adapted these instructions for my own needs.


HTTP/2 is here

In order to get HTTP/2 working, it's no longer necessary to use experimental servers like h2o as I did earlier this year, although it should be said that h2o is blazingly fast and worth considering seriously, especially for projects that need to move large amounts of data quickly. Apache now has the experimental mod_http2 module and Nginx has its HTTP/2 module, so you can do it fairly simply with mainstream servers too.

I followed some instructions on how to upgrade to Nginx 1.9.5. However, my version of Ubuntu (Trusty Tahr, 14.04 LTS) only has Nginx 1.9.4, and the Nginx mainline (development) repository doesn't seem to have been updated yet, so I added Chris Lea's experimental repository instead.

In brief:

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx

You'll note that I have effectively installed nginx-full, the standard version pulled in by the nginx metapackage, but you can choose any of the three flavours (nginx-light, nginx-full, nginx-extras) according to what you need.

Then you need to change all the lines like the following in the config files in /etc/nginx/sites-available, which use SPDY, the predecessor of HTTP/2:

listen 443 ssl spdy;
listen [::]:443 ssl spdy;

These can quite simply be changed to the following. Unfortunately, I had tons of them on my server, so it took ages! I know, I should have done it with a find and replace (there is a sketch of one after the listing below); maybe next time I will learn my lesson.

listen 443 ssl http2;
listen [::]:443 ssl http2;
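Something along these lines would have done the whole job in one go (a sketch; keep the .bak backups it creates until you are happy with the result):

# swap spdy for http2 in every site config, keeping .bak backups
sudo sed -i.bak 's/ssl spdy;/ssl http2;/g' /etc/nginx/sites-available/*
# check the config still parses, then reload
sudo nginx -t && sudo service nginx reload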

Lastly, I had added a header for SPDY in my custom /etc/tls.conf, which I include in the site files to avoid repeating code. Here is the relevant line, which I can now simply comment out:

#add_header        Alternate-Protocol  443:npn-spdy/2,443:npn-spdy/3,443:npn-spdy/3.1;

And that is it!


Browser misinformation about secure sites

Put simply, a certificate is the document that tells you that a secure connection has been made using mathematical encryption that is, in most cases, currently impossible to break: messages are being sent between you and the server using a very, very good code. If people intercept those messages, they are going to have a hard time reading them unless they have the key to the code, which is secret. That's how it all works.

The thing we are talking about is the secure HTTPS variety of HTTP, which uses security called Transport Layer Security (TLS), often known informally by the name of its predecessor, the Secure Sockets Layer (SSL). But you don't need to be confused by the jargon: you may know HTTPS better as the padlock symbol by the address bar.

It’s very likely that you will have seen this message in Chrome:

Your connection is not private
Attackers may be trying to steal your information from whatever.domain.com (for example,
passwords, messages or credit cards). NET::ERR_CERT_AUTHORITY_INVALID

Or else this message in Firefox:

This Connection is Untrusted
You have asked Firefox to connect securely to whatever.domain.com, but we can’t confirm that your connection is secure.
Normally, when you try to connect securely, sites will present trusted identification to prove that you are going to the right place. However, this site’s identity can’t be verified.
What Should I Do?
If you usually connect to this site without problems, this error could mean that someone is trying to impersonate the site, and you shouldn’t continue.

Or else this message in Safari:

Safari can’t verify the identity of website “whatever.domain.com”.
The certificate for this website is invalid. You might be connecting to
a website that is pretending to be “whatever.domain.com”, which could put your
confidential information at risk. Would you like to connect to the website
anyway?

I'm afraid that I don't have access to Internet Explorer (scheduled to be replaced by Microsoft with a new product, codenamed Project Spartan), so I would appreciate any comments about the message it gives here; I have only Mac OS X and Linux devices. I am being lazy about checking the message Android gives me, but the same problems arise with mobile devices.

The implication of all of these messages is that there is something wrong with these https:// (secure) connections. It isn’t as simple as this: in short, they are pretty much lying to you. There are several issues at stake here, all of which depend on the precise wording:

1. “Not private” (Chrome)

This claim is untrue. The issue is in fact that the connection is secure and may well be private, but there is an unverified possibility that it has been made with a server other than the one claimed, i.e. intercepted: the interceptor may or may not then pass traffic on to and from the real server in order to gain information. Simply, we don't know whether it is safe or dangerous. Or at least, your computer doesn't know, whether or not you do personally.

Then again, it might just as easily be the correct server: the issue is simply that Chrome does not know that. What is definitely true is that it's a far better connection than an unsecured http:// connection, because at least you know that the rest of the Internet cannot see the traffic, i.e. it is more private by an order of magnitude than broadcasting private information in clear text. I am not telling you to trust it, but it is safer than no encryption at all.

Of course, if you are not intending to enter any private information into this web site anyway, it is misleading to make you worry about it because you are not then automatically at risk. Web traffic does not automatically put your private information at risk unless you exchange it, i.e. you are on a site where you need to be logged in, you are buying things etc. Don’t let the browsers fool you into believing that you are always at risk in some ill-defined way.

2. “Untrusted” (Firefox)

The idea that the connection is untrusted rather than unverified is untrue, although it is not quite as terrible. The question is, trusted by whom? How and why? You, as an ordinary user, have not examined the certificates supplied by the browser either, so you only have it on trust from that browser that they are valid and hence “trusted”. It’s possible though rather unlikely that you downloaded a bad copy of the browser because even that download site was impersonated. But did you check this? I bet you didn’t.

The certificate on the site that you are connecting to may be equally trustworthy but you haven’t imported it into your browser yet and probably don’t know how to do so. The browser isn’t giving you a good idea of how to do that, either.

So you’ll never get to decide who to trust and do something about it, as this system of certificates originally intended. The system itself works but has been hijacked by browsers and commercial interests, as we will read further below. Use it for your safety, but use it carefully and with knowledge of how they are trying to manipulate your lack of technical expertise.

3. “Trusted identification” (Firefox)

This concept is misleading: trusted by whom? How and why? Again, as in (2) above, the commercial certificate authorities (CAs) are not more trustworthy and, in fact, are able and likely to allow security authorities in their countries (usually the USA, UK and other western nations) to have access to those certificates, enabling the connection to be intercepted by government agents. You may not be worried about that aspect, as an ordinary user with nothing to hide from the government (at the moment) or else you may be. But I bet you didn’t know that either: you should have had that choice to decide. You didn’t get told about that choice.

In fact, unknown certificate authorities might be more trustworthy because you know that the government or other parties that you know about do not have access to the private key for the root certificate, not necessarily less trustworthy in every case. Simply, it depends on which certificate or certificate authority we are talking about.

4. “Invalid” (Chrome and Safari)

The idea that the certificate is invalid is in most cases likely to be untrue, although it is sometimes possible. The question not made clear enough is why the browser considers it invalid: (a) simply being from an unknown authority is not evidence that the certificate is invalid, as is incorrectly claimed here; (b) if, for example, it is out of date, or if the domain name on the certificate differs from the one you are connecting to (which does also happen), then yes, it is invalid. This happens occasionally, but mostly through careless administration.

Mixing up these two things creates a simple lie that does nothing to make the Internet more secure or to help ordinary users understand what certificates actually do. Certificates are the root of all Internet security, and they are a good thing when used well.

5. Scaremongering

The phrases “Get me out of here!” and “Back to safety” are simple scaremongering, designed to make the majority of people without technical knowledge run for cover.

What is really happening here then?

People are misled into believing that “trusted” certificates are good and that “untrusted” certificates are bad, without having any idea of why some are trusted and some are not, who issues them and what those authorities actually do for them.

The truth is that those big commercial certificate authorities make money for nothing.

In about five minutes on a Linux server, I can create my own root certificate authority and issue certificates that are as good as, and sometimes better than, theirs. I can then sign other people’s certificates, which declares that those certificates are trusted by my certificate authority. If you import my root certificate into your browser, every certificate signed by me will afterwards be trusted by your browser.
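
To give you an idea of how little magic is involved, here is roughly what those five minutes look like using OpenSSL. The names, dates and key sizes are only examples, not a recipe to follow blindly:

# 1. Create the root certificate authority: a private key and a self-signed certificate
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -key rootCA.key -sha256 -days 3650 -subj "/CN=My Root CA" -out rootCA.pem

# 2. Create a key and a certificate signing request (CSR) for a web site
openssl genrsa -out mysite.key 2048
openssl req -new -key mysite.key -subj "/CN=mysite.example" -out mysite.csr

# 3. Sign the site's certificate with the root CA
openssl x509 -req -in mysite.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -sha256 -days 825 -out mysite.crt

Anybody who imports rootCA.pem into their browser will then trust mysite.crt, exactly as described above.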

How do they get away with it?

The advantage that they have over me is an agreement with the major browsers: their certificates are automatically trusted and mine are not, so mine produce the nasty red error. This commercial stitch-up means that ordinary users will never want my certificates, because the browsers will never incorporate my root certificate, so my certificate authority is useless.

Meanwhile, people who set up secure web sites are forced to buy from the commercial certificate authorities, who simply sit back, wave a magic wand and wait for your money in exchange for a service that anybody can provide for nothing. There is almost no effort required on their part.

Are their certificates “safer”?

Their certificates are no guarantee of security. To get one, I need one small thing: an email address on that domain name. I can buy a domain name for less than £10 and have my email forwarded via an address on that domain. That is enough proof to buy a certificate. From then on, the browsers vouch that I am trustworthy. But there is no reason to believe that my site is safe or that I am not trying to steal your information!

I am not the only one saying this

It’s all over the internet. The best synopsis that I have ever read is by Andrews and Arnold, an extremely professional and expert supplier of broadband and internet services based in Great Britain. The problem with most information is that it’s not aimed at the average reader.

What should you do?

At present there is not much you can do immediately, except make your own decisions about which sites to trust. But you should complain to the browser makers about the scaremongering. They need to make it clear that “unverified” does not mean “untrusted” (which they should not be asserting on your behalf anyway) and, even worse, that “unverified” or “untrusted” is entirely different from “invalid”. You can actually influence them if enough people ask.

Do not stop using encrypted https:// connections: they keep you safe. Be aware that the absence of a scary red browser warning does not mean a site is safe, and the presence of one does not mean it is unsafe. If you are concerned, check that the domain on the certificate is the one you actually tried to visit and that the dates are valid: the browser will show you this if you select the advanced option to see more details of the certificate. Even this is not a guarantee of safety, though, because those details are easy for anyone to get right.
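
If you are comfortable with a terminal, you can also check those details without relying on the browser at all. As a sketch (the domain is only an example), the following prints the subject, issuer and validity dates of the certificate a server actually presents:

# Fetch the server's certificate and show who it was issued to, who signed it and when it expires
openssl s_client -connect example.net:443 -servername example.net < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates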

Don’t be scared by everything. Consider whether or not the page you are looking at sends any of your private information over the internet: are you logged in, have you entered anything into a form, have you explicitly given permission to use your private data, is the connection https:// (look for the padlock symbol in the address bar)? Some sites can be dangerous, but the browser does provide protections.

Ultimately the only protection is to go to sites that you trust. Always (re-)type the address yourself rather than following links (and especially avoid doing so from emails, even if they seem ok, as these are often faked).

Don’t let your choices be made for you

Do not let the browser make your choices for you. That is complacency, and it can lead you to visit sites that are unsafe while avoiding others that may well be perfectly safe. It can sometimes be safe to override the warnings and view a site anyway. Take note of the warnings and do consider them, but don’t treat them as absolute truth.


Reviving the full Welsh breakfast

This is a variant on the usual sort of full breakfast that you see across the British Isles and Ireland, i.e. served with fried eggs, toast and so on. I am a vegetarian, so I don’t add the (for me) horrible bits like bacon and sausages. There are several important differences:

Leeks in a pan

(1) Braise some finely cut leeks in butter until they are soft. You can caramelise them slightly if you prefer them that way, but otherwise cover with a lid and make sure they stay wet enough to braise rather than fry, adding a few drops of water if necessary. Don’t burn the butter or it will taste bad. Leeks are the really distinctive part of a Welsh breakfast.

(2) Add Glamorgan sausages, a traditional vegetarian sausage from South Wales containing herbs, cheese, leeks and flour amongst other things. I buy mine, but I ought to learn how to make them.

(3) Mushrooms. If you don’t like them, you’re missing out. The browner of the commonly available types are nicest: field mushrooms, portobello and so on. Traditionally you should add herbs, most commonly thyme, sage, chives, garlic chives or rosemary, though non-traditional alternatives like herbes de Provence or basil mint (which, despite the name, is like neither basil nor mint) are good too. On occasion I add some garlic. A little wild garlic is also nice, just for flavour (though you could use it as a vegetable, as below, if you like garlic); it is seasonal.

(4) In South Wales especially, and in coastal areas, there is a preference for a type of seaweed mush called laverbread. I had some of this when I was younger and wasn’t that impressed, but my tastes have changed and, to be fair, I really need to try it again. It has a very strong, unique taste, so be warned; the iodine is very good for you. A very nice, milder substitute is spinach, which is a great source of dietary iron: for best results, wilt it slowly with a few drops of water in a covered pan. Again, take care not to burn it. If you like garlic, you could use the leaves of wild garlic in larger quantities during the season: it is milder and sweeter than garlic.

This is the full Welsh Breakfast. Sadly, very few of the above elements are now commonly seen, as most people have just fallen back on the so-called full “English” breakfast, which in reality is not just English but is common to all of the British Isles and Ireland.

(For those readers who are not familiar with Great Britain, the word England does not cover the entire extent of Great Britain, which is also comprised of Wales, Cornwall and Scotland; meanwhile, the province of Northern Ireland (the larger part but not all of Ulster) remains within the same political entity as Great Britain, while the southern part of Ireland seceded from it: together, they form the “United Kingdom of Great Britain and Northern Ireland”. The incorrect habit of saying “England” for all of these is about as offensive as referring to Canada as part of America, Belgium as part of France or Austria as part of Germany. We are in the terrible habit of saying “Holland” for all of the Netherlands, which is a similar mistake. Using the acronym UK is politically correct jargon, however: we have been known throughout history as Great Britain, or Britain for short, notwithstanding the inclusion of Ireland and later Northern Ireland in the same political state. The “Great” is not a claim of greatness, but is by comparison with the former “Less Britain” i.e. Brittany, a former political state now governed by and included within the French Republic. Note that we don’t say “France and Corsica” for political correctness, or even “the French Republic” in normal speech, just “France” and “Corsica” separately. Similarly, “(Great) Britain” and “Northern Ireland” are fine for most non-official purposes.)


HTTP/2 – a faster Web


Background

The last three versions of HTTP were 0.9 (1991), 1.0 (1996) and the present version 1.1 (1997, revised in 1999 and 2007), all of which are text protocols. The majority of internet traffic now uses HTTP/1.1, but HTTP/1.0 is still used by certain tools that do not need persistent (keep-alive) connections, the key innovation of HTTP/1.1. Other improvements in 1.1 included chunked transfer encoding, HTTP pipelining and byte serving, all designed to speed up data transfers to clients.

HTTP/2

This year, version 2 (not 2.0) has finally been released. It grew out of SPDY, a protocol created by Google and, on the server side, largely pioneered by the Nginx web server, which is still responsible for almost all SPDY traffic. HTTP/2 contains a raft of further optimisations but keeps the existing HTTP semantics (methods, status codes, headers), so it interoperates with the existing web. Nginx has committed to implementing HTTP/2 by the last quarter of 2015, the only major web server to have made such a commitment to date.

Unlike its predecessors, it is a binary protocol, which means that data is sent as a considerably more efficient stream. One small consequence is that you can no longer do the equivalent of this over either port 80 (HTTP) or port 443 (HTTPS):

telnet myserver.net 80
GET / HTTP/1.1
Host: myserver.net
Connection: close

This is sad for people who like old tools to keep working, but it won’t be long before tools are in place to do the same job over HTTP/2 as well as the older versions, and it’s not the end of the world if we can’t test connections with the tried-and-trusted but venerable telnet, which in any case sees little real use these days because the telnet protocol itself is completely insecure.

It is already possible to make standards-compliant HTTP/2 requests using Firefox or Google Chrome Canary (the cutting-edge development version of Chrome). To see it, right-click on the page, select Inspect Element and then the Network tab, and refresh the page. In Chrome Canary you will need to add the Protocol column by clicking on the table headings and selecting it. However, you will find it relatively hard at present to find a web server capable of HTTP/2; this is only now beginning to be possible.

Servers that work now

I have tested h2o, an experimental, optimised web server that supports HTTP/2 as well as previous versions of the protocol. The other options are nghttp2 (which also includes an experimental HTTP/2 proxy) and Trusterd. You will notice that these need to be compiled, i.e. they are not yet available as packages in any major Linux distribution. The process requires a little more than average sysadmin skills, though it was relatively easy with h2o once the various dependencies were installed.

Please note that none of the on-line HTTP response header testing tools that I can find is yet capable of HTTP/2, so the server will simply respond over HTTP/1.1 as requested; you therefore need to do a bit more to verify that HTTP/2 is actually working, as described above – so far only two major browsers can show the protocol in action.
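
If you have compiled nghttp2 anyway, its nghttp command-line client is another way to check a server from the terminal: in verbose mode it prints the negotiated protocol and the individual HTTP/2 frames. The address below is only an example; point it at your own test server:

# -v prints the frames as they are exchanged, -n discards the downloaded body
nghttp -nv https://myserver.net/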

Another thing you will notice is that these servers appear to fall back to HTTP/1.1 when the connection is insecure, i.e. HTTP rather than HTTPS. This isn’t actually set by the servers but by the browsers: both Chrome and Firefox require HTTPS for HTTP/2, an issue that has been contentious because of the commercial control of X.509 TLS certificates. The main gain in mandating HTTPS is that it would eliminate a large class of man-in-the-middle (MITM) attacks that arise where only part of a site is secure, i.e. where an attacker monitors traffic and then presents a fake certificate in order to intercept traffic to the real secure site, allowing the attacker to read all of the encrypted data, including passwords.

Ok, so why do we care?

If you run a small site, you will probably see rather little change, though it may well be that sites served from content management systems like WordPress, Joomla! and Drupal, which have rather large payloads, as well as any sites with lots of images, CSS or JavaScript, will load noticeably faster. If you have a decent, high-spec server, you may not even notice this.

Those who will care, however, are people who need to transfer very large amounts of data for big web services. Social networks like Twitter and Facebook will want to do this, for instance, and you will most probably see their page load times decrease. Anybody building data-driven web projects, for example in universities and industry, will see the benefits.

This will be significant for our work at Morgan Price Networks during 2015–16 in particular, as the new technology is gradually implemented by the major web servers. We will probably be using it in production before most people, since we use Nginx for preference (though we still use Apache too): it’s easy to configure securely, it’s a great reverse proxy and, most of all, it moves data fast.
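
Assuming the final Nginx syntax follows the pattern of its existing SPDY support, enabling the new protocol should be little more than a one-word change to the listen directive. The sketch below shows a minimal TLS server block with SPDY as it works today; the commented-out http2 parameter is my expectation of what the HTTP/2 equivalent will look like, not shipped syntax, and the names and paths are only examples:

server {
    # SPDY today (requires Nginx built with the SPDY module)
    listen 443 ssl spdy;
    # Expected HTTP/2 equivalent once support lands:
    # listen 443 ssl http2;

    server_name myserver.net;
    ssl_certificate     /etc/ssl/certs/myserver.crt;
    ssl_certificate_key /etc/ssl/private/myserver.key;
}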
