Tuesday, 26 March 2019

Using HTTP Basic Auth (with Logout!) in a Go Application

HTTP Basic Auth (Wikipedia) is a thing that is actually still quite useful despite its neglect in modern web standards. By neglect, I mean that it hasn't meaningfully changed since its most recent specification in RFC 7617, and in particular there is still no standard logout mechanism: modern browsers aggressively cache the credentials sent in the Authorization header, which is effectively where the login state is stored. However, with some tricks it can still be used reliably in modern browsers. JavaScript required, sorry :(

First, a note: don't even consider using HTTP basic auth on your public-facing page unless it's served behind an HTTPS reverse proxy! The username and password sent between client and server are merely base64-encoded, not encrypted, and only HTTPS with TLS will guarantee that the entire request, including the critical Authorization header, is encrypted.

Given that caveat, here is a complete minimal example of using HTTP basic auth to gate access to a Go net/http app.

Go Playground Example <-- this won't work in the Playground -- copy and build locally


Q: Go's http lib supports TLS to serve out endpoints. Why didn't you just do that instead of serving out an HTTP app behind a reverse proxy?
A: I couldn't get HTTP basic auth working together with Go's built-in TLS serving. Perhaps I missed something. Let me know if I'm wrong, and how to do it. Thanks.

Q: How do I support multiple users/roles using the example you give?
A: No idea. I think it could be done, with auxiliary logic to track separate session users/passwords, but this is left as an exercise for the reader. [Meaning, like all my college profs ... I forget/I can't be arsed to work it out right now.]


Wednesday, 6 March 2019

bacillμs - a simple build automation server written in Go

Most non-trivial software projects that do rapid releases use a build automation server. One of the most popular solutions for this is Jenkins. There are many others.

While Jenkins is easy to install and use in my experience, I wanted to learn some others to broaden my expertise, like Concourse or Buildbot. The former turned out to be hellish to install: I burned a few evenings hitting head-scratching dead-ends in the startup config, and the forums were filled with users asking the dev team for updated installation instructions, met with brusque dismissals if one wanted to use it outside of the dev-blessed containers (ie., use it as a black box, never mind how it really works). The latter, while easier to get up and running in a 'hello world' configuration, seemed difficult to configure further into a real-world setup. It seemed to me that these things, in general, are overly complicated.

So, in a fit of insanity I wrote my own simple build automation server in Go. No containers, Java VM, or dependencies. Use whatever scripting language you want. Total line count is under 1k.

Of course, it's nowhere near production quality, and probably violates every Go coding standard there is, but it does the essential things one might expect: responding to git triggers or webhooks from web-based systems like gitlab, and providing a no-frills web dashboard. Jobs may be scheduled (externally via cron), or launched manually from the dashboard. There's even a simple way to display the stages of a job's run in the live view, aka a simple 'pipeline' status.

Suggestions welcome.

Friday, 8 June 2018

Obscure git issue: git push hangs silently on un-writeable repos

I'm making this blog post mostly as a note to myself; but since I couldn't find a solution posted elsewhere, it might also help someone else encountering a 'git push hangs' issue...

Situation: I recently installed Gogs, an awesome github-style self-hosted web service written in Go. I have a bunch of repos already in /var/git/, and didn't want to move them (or at least wanted it to appear they hadn't moved), to preserve the ability to use go get (see this post about setting that up) and raw command-line git without changing the repo URIs everywhere they were already checked out on remote systems.

To put some of these repos under the purview of Gogs, yet keep them visible in /var/git/, I made soft symlinks in /var/git/ and moved the actual repos in question to /home/git/gogs-repositories/<user>/<repo>.git. By default these have permissions allowing the git user (and the Gogs web app) to manage the repo, eg:

drwxrwxr-x  6 git git 4096 Jun  8 20:48 go_login.git

However, since these were my legacy repos, the symlinks in /var/git/ are owned by another user for the reasons above, and I didn't want the git user dedicated to Gogs owning things there, outside of the git user's home tree.

As it turns out, if the owner of the symlink in /var/git/ pointing to the moved repo within /home/git/gogs-repositories/ doesn't have write permissions to the repo, git push will just silently hang after one supplies ssh:// credentials.

Solution: Add the legacy user to group git, and add group write permissions to all repos linked to in this way in /home/git/gogs-repositories/ .
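Concretely, the fix was along these lines (run as root; 'alice' stands in for whatever user owns the symlinks in /var/git/):

```shell
# let the legacy user share group ownership with the git/Gogs user
usermod -aG git alice

# grant group write access on the Gogs-managed repositories
chmod -R g+w /home/git/gogs-repositories/
```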

This was a head-scratcher, since git-daemon wrote nothing to /var/log/daemon.log indicating an issue, at least on my setup -- perhaps git-daemon can be made more verbose?


Tuesday, 24 April 2018

Export Go Packages via 'go get' From Your Own Server

Self-Hosting Go Packages With Support For go get

[NOTE: Since originally posting I've clued in that what's documented here is only one way of achieving what's commonly referred to as 'vanity URLs' or 'vanity imports'. Adding this note here just to help anyone searching find this post more easily. -R.]

Go has a really neat package import tool, go get, to fetch packages from upstream sources into one's own local $GOPATH package tree. The 'big' hosting sites like github support use of go get from their project hosting spaces, which is cool, but they charge extra for hosting private code repos, or for having more than a small fixed number of contributors, or impose other annoying limitations. Understandably these sites need some way to monetize their cloud offerings, but for individuals or those with their own infrastructure there should be other ways that don't depend on the 'cloud' (ie., someone else's servers).

While the collaborative aspects of these sites and web-based features are their main draw (encouraging public pull requests for distributed development), perhaps you or your company want the convenience of using go get for your own repositories, but don't want to entrust your code repositories to one of these external entities.

Note: If you're considering moving off of github and self-hosting your repos, consider Gogs. It's really easy to set up and feels very familiar if you're used to github. Also, see my other post for notes on how to let Gogs refer to your legacy repos whilst preserving traditional access to them in their original locations.

The go get command and its import mechanism are described in the go command documentation, but to be frank, the docs for the go import mechanism aren't too clear on exactly how to set up one's own server to support it. One can't just go get a repo that is available via git clone without a lot of setup first.

Basic requirements:

  • Proper DNS 'A' record info for your package server
  • A common webserver (ie., apache v2 is used here but others are supported)
  • HTTPS enabled (ie., a properly-configured, authority-signed server cert -- sorry, self-signed won't work)
  • The git-http-backend helper tool (included with most git distributions)
  • Properly configured web server rewrite rules for calling git-http-backend when requests from go get are seen by your server

All these bits need to be set up 'just so' for the go get command to work smoothly, and the go docs don't really spell out the full setup, probably due to the myriad platforms and web servers out there.

I'll show my setup here, which isn't the most common (Funtoo Linux + Apache v2), but it should adapt easily to other systems; with some path adjustments this should apply to Ubuntu and other popular Linux distros.

I pieced together this tutorial from several sources, and also studied the verbose output of go get -d -v to see just what the command was assuming when it tried to fetch things.

Basic Theory of 'go get'

The go get command works over SSH, HTTP or HTTPS, though it refuses to use plain HTTP unless one specifies the -insecure flag. This means you'll generally want to get your server's HTTPS cert setup working, both to avoid specifying that flag every time and, in the case of private repositories, to protect your proprietary source code from travelling over the open internet whenever go get is run.

The tool looks for files with special <meta> tags, which specify where to redirect the partial URI given by the go get command to the git-http-backend tool. In this way, one can store the actual repositories nearly anywhere on the system and move them around, without breaking the package URI published to users.

go get can fetch packages contained in each <meta> tag via either the ssh:// or https:// protocols. The ssh:// protocol will require a shell account on the hosting server for each of your contributors -- they'll be prompted for their password before go get will proceed to pull anything. This is good for private groups wishing to share both read (pull) and commit (push) access. For public repos or projects where you want team members to submit patches via other means like email or an external review tool, the https:// method is appropriate -- however it will require a web server with valid authority-signed cert to allow HTTPS.

Proper DNS 'A' record setup

You'll need to ensure your domain allows proper HTTP/HTTPS access at the bare domain (ie., should resolve and redirect sensibly to -- throughout this post I'll use as a stand-in for your own domain). go get and package imports in go source code expect just a domain name, not a host.domain syntax, eg. the Go source import statement

import ""

... implies one has previously performed

$ go get

... which expects the server at to resolve web requests with no host prefix. If you serve regular web content from the same server, you'll probably already have an 'A' record for, but go get will require an 'A' record for plain as well. While you're doing this you might as well add a permanent redirect from to if you don't already have it.

Check your DNS configuration (if you control it yourself) or ask your admin to ensure there's an 'A' record for which maps to the same IP address as Sometimes this is named the '@' entry.
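In BIND-style zone file terms, that pair of records looks something like this (domain and IP address are placeholders -- substitute your own):

```
@    IN  A  203.0.113.10
www  IN  A  203.0.113.10
```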

Apache modules required: mod_rewrite, mod_cgi, mod_alias, mod_env

The web server needs to do some URL rewriting and CGI operations in order to send go get requests to git-http-backend (ie., fetching git repos with the http:// or https:// prefix). For this you'll need to ensure the following Apache modules are enabled: mod_rewrite, mod_cgi, mod_alias, mod_env.

Enable the above modules by adding LoadModule directives in whatever manner your server expects, eg., /etc/apache2/httpd.conf; then add the following .htaccess rule to your web root (mine, using apache2, is in /var/www/localhost/htdocs/.htaccess):

RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

This rule redirects requests of the form to (a 301 permanent redirect).

Configuring RewriteRule to allow proper <meta> tags per-repo

Now you need to somehow let Apache distinguish regular web traffic from 'go get' queries, which implicitly look for files served with a <meta> tag that is unique per package.

I experimented for a while without success, adding multiple <meta> tags, one for each repo, to my webroot's index.html <head> section, until I realized that 'go get' was only looking at the first <meta> tag it found. It turns out 'go get' expects there to be only one <meta> tag per file, so each exported go package must have its own file with its own <meta> tag.

The solution is not to put <meta> tags into the webroot index.html at all, but rather to use another mod_rewrite rule to distinguish 'go get' requests by the repo name and point them to a unique URL for each. These URLs should reside within a subfolder of the webroot.

Add this line to the .htaccess file in your webroot (see the rewrite setup above; mine was /var/www/localhost/htdocs/.htaccess):

RewriteRule ^go/(.*)$ pkg/$1 [QSA]

The [QSA] flag means 'Query String Append': it keeps any CGI-style GET params from the original URL and re-appends them to the end of the rewritten URL. This matters for 'go get', which sends a '?go-get=1' param for its own purposes (so a request for is rewritten to

Now, with the above rule, let's say you have a file structure like this in your webroot (one file per exported package):

pkg/foo
pkg/bar
pkg/private-baz

.. and git repositories served by your git-daemon in /var/git/foo.git, /var/git/bar.git, and /var/git/private-baz.git. You can set up files in the webroot that contain <meta> tags pointing to each:

<meta name="go-import" content=" git">

<meta name="go-import" content=" git">

<meta name="go-import" content=" git ssh://">

The files themselves don't need to be html files. They can be text files with just the <meta> tag.

NOTE 1: In the examples above, each go get exported repo is within a go/ subdirectory. This is required to give the apache2 server a pathname root to 'hook onto' for its RewriteRule; otherwise there's no way to tell other requests within your web server's URI space apart from ones specifically meant for go get. The sub-directory doesn't need to be named 'go', it could be anything -- just as github places repos under your username, eg.<user>/<repo>.

NOTE 2: make sure your git-daemon has --export-all, or a file named git-daemon-export-ok in each public git repo. Test with regular git clone commands to verify each is fetchable before trying to use go get with <meta> tags. Repositories exported with ssh:// appear to use the git-daemon-export-ok file when determining whether a repo is available via go get, whilst ones exported in the <meta> tag via https:// obey the Apache SetEnv statements (see below) which set the export permissions, since they're served via the git-http-backend helper rather than via ssh.
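For example, to mark one repo exportable and sanity-check it before involving 'go get' (paths and domain follow the layout above; adjust to your own):

```shell
# mark the repo as exportable to git-daemon
touch /var/git/foo.git/git-daemon-export-ok

# verify a plain clone works first
git clone git://
```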

More on Public vs. Private Package Repos

If you have some private packages that are not yet ready for the public eye, make note of the above example: the repo named 'private-baz' was exported in the <meta> tag via ssh://, not https://, so it will ask for authentication via ssh (password, phrase or host-key).

Exporting via <meta> tags but using ssh:// in the git repo URI doesn't require your webserver to have HTTPS set up, but it will require the -insecure flag to 'go get' just to convince it to fetch the <meta> redirection info. So it's still annoying, and worth going full HTTPS on your webserver even if you're not publishing anonymous read-only (pull) go packages.

Finally, note the ssh:// URI for git repos usually has a slightly different path than git:// or https:// read-only URIs (note the /var/git/ path component in the third private-baz repo).

You can even serve out multiple users' repos via 'go get' this way, since using git with the ssh:// (git+ssh) URI syntax lets a git-daemon otherwise configured to serve public repos from /var/git (or wherever) also serve individual users' private repos from their home dirs. For example, I have public repos in /var/git/ and private repos in ~user/git/, and both can be served to the 'go get' command via appropriate <meta> tags defined as above, with the private ones doing authentication as expected.

git-http-backend Setup

In your main apache2 config (eg., httpd.conf or similar) add this:

SetEnv GIT_PROJECT_ROOT /var/git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

RewriteCond %{QUERY_STRING} service=git-receive-pack
#RewriteCond %{REQUEST_URI} /git-receive-pack$
RewriteRule ^/git/ - [E=AUTHREQUIRED:yes]

<LocationMatch "^/git/">
  AuthType Basic
  AuthName "Git Access"
  Require all granted
  #apache 1.x# Deny from env=AUTHREQUIRED
  #apache 1.x# Require group committers
  #apache 1.x# Satisfy Any
</LocationMatch>


Now, after all of the above, I discovered go get refuses to import packages from a server with a self-signed cert! What a pain.

If you don't already have HTTPS with a certificate-authority signed cert on your server, you'll need to get one. Either consult your business IT department for the server hosting all of this, or set up EFF's certbot utility. Thankfully the EFF has made it relatively easy for regular people to get a free certificate with valid signing for personal servers.

On Gentoo or Funtoo, the steps to install a LetsEncrypt cert are (as root):

# emerge app-crypt/certbot app-crypt/certbot-apache
# certbot certonly --webroot -w /var/www/localhost/htdocs/ -d -w /var/www/localhost/htdocs/ -d

Now, verify the Apache configuration from all previous steps and restart the web server:

# apache2ctl configtest
# rc-config restart apache2

Now test out your fancy go get-able package server!

[from some other host or account]
$ go get
$ ls $GOPATH/src/

This is the minimum setup just to get HTTPS working with Apache v2 for your primary domain, to make go get happy. If you have multiple 'vhost' domains or other complex requirements, you're on your own.. I'm still trying to get my server to serve full HTTPS for all of the domains it hosts.


While the go get command is the preferred way for golang programmers to fetch external packages into their working $GOPATH tree, the documentation is not extremely helpful in setting up all of the server-side bits that are required to support it. Individuals or organizations may want a mixture of public (read-only) as well as private/group read/write (pull/push) repos exported via go get without the risks or costs associated with hosting via an external party.

A self-hosted golang package server supporting the standard go get command can be implemented by configuring a webserver with proper type 'A' domain records, HTTPS plus a valid authority-signed certificate, proper git-http-backend tool configuration, URL rewrite rules and package export <meta> tags placed within the webroot on a per-package basis.

Sunday, 28 May 2017

Avoiding Mysterious "Permission denied" Errors During Windows Development: Check Your Services!

This is a short post to answer a question I've seen go unanswered, or incorrectly answered, on at least three major programming forums.

If, while developing on Windows (regardless of IDE or programming tool), you can compile your program the 'first time' -- in Eclipse, mingw, golang, whatever -- and run it, but on the next compile/run cycle you get "Permission denied" from the linker -- check that you don't have the "Application Experience" service disabled. It must be set to Manual, Automatic, or Automatic (Delayed Start)... just not Disabled. Otherwise you'll experience this issue, regardless of language, compiler, or IDE, at least on Windows 7.
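If you prefer the command line, something like this should do it from an elevated prompt (AeLookupSvc is, I believe, the Application Experience service's internal name on Windows 7 -- verify with 'sc query' on your own system):

```shell
REM set the Application Experience service to Manual start
sc config AeLookupSvc start= demand
```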

If you're like me you might be in the habit of turning off all sorts of Windows Services on your personal machines to minimize the background crap going on. It's always a matter of tuning to determine what's really required and what isn't. For programming with rapid edit/build/run/debug cycles, it's common to run into the permission issue described above.

I have no idea (nor does anyone else, it would seem -- Microsoft isn't telling on its own forums) exactly why this service must be enabled to prevent the OS from locking a recently-terminated program's executable on disk.

Perhaps (and this is a wild-ass guess), due to the way Application Experience monitors programs, to report if they crash, there's some other OS component in Windows that is waiting for a handshake from the Application Experience service before letting the executable be removed. This might make sense if one needs to send dumps of the .exe or something as part of a crash report.

Anyways... no one elsewhere has reported a definitive answer to this, so I'm noting it here mostly for myself, for posterity. I've run into it at least 3 times on various Windows systems doing development and it's highly annoying. Usually it's been a few years since I set up my box and I've forgotten how I solved it the last time...

Thursday, 26 November 2015

ZINK hAppy Printer Setup on Windows 7

ZINK Imaging Inc. makes some neat devices, the hAppy and hAppy+, which are basically super-fancy labelmakers. From what I understand the company was founded by refugees from Polaroid after it shut down the division that had just developed a cool new digital version of the Polaroid process.

I picked up a hAppy printer on eBay a while ago, thinking it might be neat as an alternative way of producing my c@rd password gen/recall cards. The fact the device can print directly onto shiny peel 'n stick labels, in multiple widths, is quite nifty. The widest roll is 2" (50mm) which just so happens to be nearly the width of a credit card.

While it's marketed at the craft market (think soccer moms and kids' birthday parties), it could be put to use in so many other ways. The manufacturer's site and installation instructions are focused completely on its AirPrint capabilities: printing from a smartphone via iOS or Android over WiFi, using their proprietary apps (which, to be honest, are pretty good for the craft market -- easy to use and lots of neat clip art to get designs done quick for those birthday parties, weddings and craft parties).

However, I want to use it to print my own stuff, from a laptop at a mobile kiosk, using my own algorithmically-generated security cards. I can do it from the smartphone, but it requires a lot of manual fiddling with resizing/cropping in their phone app, even though my c@rd generator app spits out fully-specified, sized images ready for printing. I even encode the exact print size in the EXIF data, so it should really be 100% automated.

The only issue was that ZINK doesn't officially ship or publish *any* Windows USB drivers! The hAppy printer has a microUSB port, so I wondered if it could just act as a regular printer.

Turns out, the answer is YES. ZINK didn't help one iota, though. Multiple emails to their tech support just went into a black hole. They've either already moved onto the next product line, or just have no real humans watching their emails. Perhaps they're already going bankrupt, who knows. Bad customer support anyways.

There's lots of info on the 'net about using the ZINK range with Apple's AirPrint, which leads one to believe that the ugly Apple iTunes software (which for some gawd-awful reason is the only official way to obtain AirPrint capability on Windows!?) is the path to printing bliss here.. but it is NOT the right solution. Don't put that crap on your machine; it's a dead end for this device. I managed to get my Windows 7 PC to 'see' the hAppy via Wifi, but it wouldn't print anything, just giving obscure errors in the print spool manager.

So... this device shows up in Windows 7 (64-bit) upon initial plug-in via USB as a generic 'Unspecified' entry in Control Panel\All Control Panel Items\Devices and Printers. You'll see a blank box labelled just 'hAppy Printer', in the 'Unspecified' Section below 'Printers and Faxes'. That means it's showing up as a USB device, but Windows doesn't yet know how to print to it.

Right-clicking into Properties, you'll see the usual device Properties dialog, with the usual buttons -- except that 'Update Driver', 'Disable', 'Uninstall', etc. are greyed out. The only one available is 'Driver Details', so click that and you'll also see it says no drivers have been installed for the device. Not promising.

However, Windows has a little-known class of support drivers, 'USB Printing', which sit beneath more specific vendor printer drivers. And, it turns out, ZINK does in fact have a Windows USB driver for this printer! It's just not published anywhere outside of Microsoft's Windows Driver database. So... the only way to get this installed is to go through the Control Panel:

1. In Control Panel -> Devices and Printers, choose 'Add a Printer'.

2. Choose 'Local Printer', and change the dropdown beside 'Use an Existing Port' from LPT1: to USB00x: (whichever one is there; if you have an existing printer it might be USB002: or USB003: or higher..)

3. Click Next. If the next dialog doesn't have a manufacturer named 'ZINK' in the left pane, you need to click 'Windows Update' and wait about 5 minutes (or longer!). It eventually will come back, and 'ZINK' should hopefully appear in the left pane's list of manufacturers.

4. Choose 'hAppy' from the printer list in the right pane. DO NOT choose 'hAppy XPS' (not that I've tried it, but XPS is a virtual printer driver, which we don't want -- unless you want to print to files for later physical printing).
5. Complete the wizard, and in Control Panel -> Devices and Printers you should now see your new 'ZINK hAppy' in the Printers and Faxes section.

Now, printing to labels using Windows Explorer's default right-click -> Print will work. You'll have to fiddle with setting default Printer Preferences, setting a Custom paper size preset to match the width of your roll, and so on (I'll write details of that later if anyone cares -- it's a bit fidgety, but relatively straightforward). Be mindful of the print dialog's 'Scale to Fit' settings etc. which will affect the printout size. With two or three tries I was able to come up with presets that printed out at my expected size. The 'Kiss Cut' vs. 'Full Cut' setting didn't seem to make any difference, however -- the hAppy always auto-chopped my print after it was done, which was a bit annoying, but no biggie.

Strange that ZINK hasn't officially published that this driver is available; it seems to be relatively complete, enough so that they submitted it to Microsoft's official driver database. It seems aware of all of the device's settings (normal vs. Vivid print mode, roll widths, etc.). It even reports the remaining roll length, so it seems to be the real deal.

Finally... here's the driver (7zip archive) for Windows 7 x64. As Admin, extract it to C:\Windows\System32\DriverStore\FileRepository.

[Link updated 2018-11-09]

Friday, 18 July 2014

Google Yanks Yet Another Useful Feature with No Warning: Labs "Add any gadget by URL"

Boo Google, boo.

Well, if anyone needed another example of why it's just a bad idea to rely on Google APIs for anything important, here's another one. Today I saw a teensy note appear at the top of my gmail announcing the Labs "Add any gadget by URL" gadget is being deprecated.

I used this feature to develop my "Hashcash mint" gadget, and it's the easiest way to add such gadgets to the sidebar in one's gmail.

I've sent them feedback asking what migration strategy one should use and how exactly someone is supposed to embed gadgets in their gmail page now.. hopefully I'll hear back within the next few days.

Perhaps I need to look further into the OAuth2 system or something; but I fear the response will be something like "Port it to a Chrome extension" (which I've already done, but that's not cross-browser, is it?).

Does anyone else know of APIs which allow embedding of one's custom gadgets into pages like GMail, Calendar, etc. without using some kind of Google walled-garden to host them?


EDIT: No response yet.. the entire Google Developers section on gadgets is woefully out-of-date. I submitted about 4 support requests relating to dead links, missing sections, and just plain wrong info. Sigh. I get the feeling the entire gadget API has been left out to die, at least for people outside of Google.

Well if so, I'm glad I've only written one Google gadget and chrome extension, rather than investing more effort. Google just drives away devs by constantly deprecating things willy-nilly.