
Friday, 8 June 2018

Obscure git issue: git push hangs silently on un-writeable repos

I'm making this blog post mostly as a note to myself, but since I couldn't find a posted solution elsewhere, it might also help someone else encountering a 'git push hangs' issue...

Situation: I recently installed Gogs (https://gogs.io), an awesome github-style self-hosted web service, written in Go. I already have a bunch of repos in /var/git/ and didn't want to move them (or at least wanted to make it appear they hadn't moved), to preserve the ability to use go get (see this post about setting that up) and raw command-line git without changing the repo URIs everywhere they were already checked out on remote systems.

To put some of these repos under the purview of Gogs, yet keep them visible in /var/git/, I moved the actual repos to
/home/git/gogs-repositories/<user>/<repo>.git and left symlinks behind in /var/git/. By default, the moved repos will have permissions allowing git (and the Gogs web app) to manage them, eg:

drwxrwxr-x  6 git git 4096 Jun  8 20:48 go_login.git

However, since these were my legacy repos, the symlink at the old location in /var/git/ is owned by another user; for the reasons above I didn't want the git user dedicated to Gogs owning anything there, outside of its own home tree.
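For concreteness, the move-plus-symlink looked roughly like this (the repo and user names here are just placeholders):

# mv /var/git/go_login.git /home/git/gogs-repositories/someuser/go_login.git
# ln -s /home/git/gogs-repositories/someuser/go_login.git /var/git/go_login.git
# chown -h legacyuser /var/git/go_login.git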

As it turns out, if the owner of the symlink in /var/git/ pointing to the moved repo within /home/git/gogs-repositories/ doesn't have write permissions to the repo, git push will just silently hang after one supplies ssh:// credentials.

Solution: Add the legacy user to group git, and add group write permissions to all repos linked to in this way in /home/git/gogs-repositories/ .
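On my setup that boiled down to something like the following (user and repo names are placeholders to adapt):

# usermod -aG git legacyuser
# chmod -R g+w /home/git/gogs-repositories/someuser/go_login.git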

This was a head-scratcher, since git-daemon writes nothing to /var/log/daemon.log indicating an issue, at least on my setup -- perhaps git-daemon can be made to be more verbose?

-R.

Tuesday, 24 April 2018

Export Go Packages via 'go get' From Your Own Server

Self-Hosting Go Packages With Support For go get


[NOTE: Since originally posting I've clued in that what's documented here is only one way of achieving what's commonly referred to as 'vanity URLs' or 'vanity imports'. Adding this note here just to help anyone searching find this post more easily. -R.]

Go has a really neat package import tool, go get, to fetch packages from upstream sources into one's own local $GOPATH package tree. The 'big' sites like github.com, gitlab.io and others support use of go get from their project hosting spaces, which is cool, but they charge extra for hosting private code repos, limit you to a small fixed number of contributors, or impose other annoying restrictions. Understandably these sites need some way to monetize their cloud offerings, but for individuals or those with their own infrastructure there should be other ways that don't depend on the 'cloud' (ie., someone else's servers).

While the collaborative aspects of these sites and web-based features are their main draw (encouraging public pull requests for distributed development), perhaps you or your company want the convenience of using go get for your own repositories, but don't want to entrust your code repositories to one of these external entities.

Note: If you're considering moving off of github and self-hosting your repos, consider Gogs.io. It's really easy to set up and feels very familiar if you're used to github. Also, see my other post for notes on how to let Gogs.io refer to your legacy repos whilst preserving traditional access to your old repos in their original locations. 

The go get command and its import mechanism are described in the go command documentation, but to be frank, the docs for the go import mechanism aren't too clear on exactly how to set up one's own server to support it. One can't just go get a repo that is available via git clone without a lot of setup first.

Basic requirements:

  • Proper DNS 'A' record info for your package server
  • A common webserver (ie., apache v2 is used here but others are supported)
  • HTTPS enabled (ie., a properly-configured, authority-signed server cert -- sorry, self-signed won't work)
  • The git-http-backend helper tool (included with most git distributions)
  • Properly configured web server rewrite rules for calling git-http-backend when requests from go get are seen by your server


All these bits need to be set up 'just so' for the go get command to work smoothly, and the go docs don't really spell out the full setup, probably due to the myriad platforms and web servers out there.

I'll show here my setup, which isn't the most common, but should adapt easily to other systems: Funtoo Linux + Apache v2. With some path adjustments this should apply to Ubuntu and other popular Linux distros.

I pieced together this tutorial from the following sources:

https://askjong.com/howto/automatically-enable-https-on-your-website-with-effs-certbot-deploying-lets-encrypt-certificates
https://kasunh.wordpress.com/2011/01/15/git-over-https/
https://www.creang.com/howtoforge/howto_set_up_git_over_https_with_apache_on_ubuntu/

I also studied the verbose output of go get -d -v to see just what the command was assuming when it tried to fetch things.

Basic Theory of 'go get'


The go get command works over SSH, HTTP or HTTPS, though it refuses to use plain HTTP unless one specifies the -insecure flag. This means generally you'll want to get your server's HTTPS cert setup working to avoid having to specify this every time, and, in the case of private repositories, to protect your proprietary source code from travelling over the open internet whenever go get is run.

The tool looks for files with special <meta> tags, which specify where to redirect the partial URI given by the go get command to the git-http-backend tool. In this way, one can store the actual repositories nearly anywhere on the system and move them around, without breaking the package URI published to users.

go get can fetch the repositories referenced in <meta> tags via either the ssh:// or https:// protocols. The ssh:// protocol will require a shell account on the hosting server for each of your contributors -- they'll be prompted for their password before go get will proceed to pull anything. This is good for private groups wishing to share both read (pull) and commit (push) access. For public repos, or projects where you want team members to submit patches via other means like email or an external review tool, the https:// method is appropriate -- however it will require a web server with a valid authority-signed cert to allow HTTPS.

Proper DNS 'A' record setup


You'll need to ensure your domain allows proper HTTP/HTTPS access with the bare domain (ie., foo.com should redirect to www.foo.com). go get and package imports in go source code expect just a domain name, not a host.domain syntax, eg. the Go source import statement

import "example.org/go/mylib"

... implies one has previously performed

$ go get example.org/go/mylib

... which expects the server at example.org to resolve web requests with no host prefix. If you serve regular web content from the same server, you'll probably already have an 'A' record for www.example.org, but go get will require an 'A' record also for plain example.org. While you're doing this you might as well add a permanent redirect from example.org to www.example.org if you don't already have it.

Check your DNS configuration (if you control it yourself) or ask your admin to ensure there's an 'A' record for example.org  which maps to the same IP address as www.example.org. Sometimes this is named the '@' entry.
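A quick way to check that both names resolve (assuming you have dig handy) is:

$ dig +short example.org A
$ dig +short www.example.org A

Both should return the same IP address.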

Apache modules required: mod_rewrite, mod_cgi, mod_alias, mod_env


The web server needs to do some URL rewriting and CGI operations in order to send go get requests to git-http-backend (ie., fetching git repos with the http:// or https:// prefix). For this you'll need to ensure the following Apache modules are enabled: mod_rewrite, mod_cgi, mod_alias, mod_env.

Enable the above modules by adding LoadModule directives in whatever manner your server expects (eg., in /etc/apache2/httpd.conf; a sketch of those lines follows the rewrite example below), then add the following .htaccess rule to your web root (mine, using apache2, is /var/www/localhost/htdocs/.htaccess):

RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

This rule rewrites requests of the form

http://example.org/foo

to

http://www.example.org/foo
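For reference, the LoadModule lines on a typical Apache 2 install look roughly like this -- module paths vary by distribution, so treat these as an assumption to adapt rather than exact lines:

LoadModule rewrite_module modules/mod_rewrite.so
LoadModule cgi_module modules/mod_cgi.so
LoadModule alias_module modules/mod_alias.so
LoadModule env_module modules/mod_env.so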


Configuring RewriteRule to allow proper <meta> tags per-repo


Now you need to somehow let Apache distinguish regular web traffic from 'go get' queries, which implicitly look for files served with a <meta> tag that is unique per package.

I experimented for a while without success, adding multiple <meta> tags, one for each repo, to my webroot's index.html <head> section, until I realized that 'go get' was only looking at the first <meta> tag it found. It turns out 'go get' expects there to be only one <meta> tag in a file, so each exported go package must have its own file with its own <meta> tag.

The solution is not to put <meta> tags into the webroot index.html at all, but rather to use another mod_rewrite rule to distinguish 'go get' requests by the repo name and point them to a unique URL for each. These URLs should reside within a subfolder of the webroot.

Add this line to the .htaccess file in your webroot (see 1. above, mine was /var/www/localhost/htdocs/.htaccess):

RewriteRule ^go/(.*)$ pkg/$1 [QSA]

The [QSA] flag means 'Query String Append': it keeps any CGI-style GET params from the original URL and puts them back onto the end of the re-written URL. This matters for 'go get', which sends a '?go-get=1' param for its own purposes.

Now, with the above rule, let's say you have a file structure like this in your webroot:

/var/www/localhost/htdocs/pkg/
/var/www/localhost/htdocs/pkg/foo
/var/www/localhost/htdocs/pkg/bar
/var/www/localhost/htdocs/pkg/private-baz

.. and git repositories served by your git-daemon in /var/git/foo.git, /var/git/bar.git, and /var/git/private-baz.git. You can set up files in the webroot that contain <meta> tags pointing to each:

[/var/www/localhost/htdocs/pkg/foo]
<meta name="go-import" content="example.com/go/foo git https://example.com/git/foo">

[/var/www/localhost/htdocs/pkg/bar]
<meta name="go-import" content="example.com/go/bar git https://example.com/git/bar">

[/var/www/localhost/htdocs/pkg/private-baz]
<meta name="go-import" content="example.com/go/private-baz git ssh://example.com/var/git/private-baz">

The files themselves don't need to be html files. They can be text files with just the <meta> tag.
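Before involving go get at all, you can sanity-check that the rewrite rule and the per-package file are being served -- something like this (a hypothetical example; adjust the domain and package name, and the second line is simply the expected output, ie. the contents of pkg/foo):

$ curl -sL 'https://example.com/go/foo?go-get=1'
<meta name="go-import" content="example.com/go/foo git https://example.com/git/foo">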

NOTE 1: In the examples above, each go get exported repo is within a go/ subdirectory. This is required to give the apache2 server a pathname root to 'hook onto' for its RewriteRule; otherwise there's no way to tell other requests within your web server's URI space apart from ones specifically meant for go get. The sub-directory doesn't need to be named 'go', it could be anything; just as github places repos under your username, eg. github.com/ThisUser/that-repo.

NOTE 2: make sure your git-daemon has --export-all, or a file named git-daemon-export-ok in each public git repo. Test with regular git clone commands to verify each is fetchable before trying to use go get with <meta> tags. Repositories exported with ssh:// appear to use the git-daemon-export-ok file when determining whether a repo is available via go get, whilst ones exported in the <meta> tag via https:// listen to the Apache SetEnv statements (see below) which set the export permissions, since they're being served via the git-http-backend helper rather than via ssh.
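For example, to mark a public repo exportable and verify it's fetchable with plain git first (the repo name is illustrative, and the https:// URL assumes the git-http-backend setup described below):

# touch /var/git/foo.git/git-daemon-export-ok
$ git clone https://example.com/git/foo /tmp/foo-clone-test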

More on Public vs. Private Package Repos

If you have some private packages that are not yet ready for the public eye, make note of the above example: the repo named 'private-baz' was exported in the <meta> tag via ssh://, not https://, so it will ask for authentication via ssh (password, passphrase or host key).

Exporting via <meta> tags but using ssh:// in the git repo URI doesn't require your webserver to have HTTPS set up, but it does require the -insecure flag to 'go get' to convince it to even fetch the <meta> redirection info. So it's still annoying, and it's worth going full HTTPS on your webserver even if you're not publishing anonymous read-only (pull) go packages.
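That is, without HTTPS on the webserver the fetch has to look like this (using the private-baz example from above):

$ go get -insecure example.com/go/private-baz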

Finally, note the ssh:// URI for git repos usually has a slightly different path than git:// or https:// read-only URIs (note the /var/git/ path component in the third private-baz repo).

You can even serve out multiple users' repos via 'go get' this way, since using git with the ssh:// (git+ssh) URI syntax lets a host whose git-daemon is otherwise configured to serve public repos from /var/git (or wherever) also serve out individual users' private repos from their home dirs. For example, I have public repos in /var/git/ and private repos in ~user/git/, and both can be served to the 'go get' command via appropriate <meta> tags defined as above, with the private ones doing authentication as expected.
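A <meta> file for a repo living under a user's home dir might look like this (the path, user and package names are just illustrative):

[/var/www/localhost/htdocs/pkg/userlib]
<meta name="go-import" content="example.com/go/userlib git ssh://example.com/home/user/git/userlib">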

git-http-backend Setup


In your main apache2 config (eg., httpd.conf or similar) add this:

SetEnv GIT_PROJECT_ROOT /var/git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

RewriteCond %{QUERY_STRING} service=git-receive-pack
#[OR]
#RewriteCond %{REQUEST_URI} /git-receive-pack$
RewriteRule ^/git/ - [E=AUTHREQUIRED:yes]
<LocationMatch "^/git/">
  #apache 1.x# Deny from env=AUTHREQUIRED

  AuthType Basic
  AuthName "Git Access"
  Require all granted
  #apache 1.x# Require group committers
  #apache 1.x# Satisfy Any
</LocationMatch>
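With the config in place, a quick way to confirm Apache is actually handing requests to git-http-backend is something like this (repo name illustrative):

$ git ls-remote https://example.com/git/foo

If the ScriptAlias and SetEnv lines are correct, this should list the repo's refs rather than returning an HTML error page.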


LetsEncrypt


Now, after all of the above, I discovered go get refuses to import packages with a self-signed cert! What a pain.

If you don't already have HTTPS with a certificate-authority signed cert on your server, you'll need to get one. Either consult your business IT department for the server hosting all of this, or set up EFF's certbot utility. Thankfully the EFF has made it relatively easy for regular people to get a free certificate with valid signing for personal servers.

On Gentoo or Funtoo, the steps to install a LetsEncrypt cert are (as root):

# emerge app-crypt/certbot app-crypt/certbot-apache
#
# certbot certonly --webroot -w /var/www/localhost/htdocs/ -d example.com -w /var/www/localhost/htdocs/ -d www.example.com

Now, verify the Apache configuration from all previous steps and restart the web server:

# apache2ctl configtest
# rc-config restart apache2

Now test out your fancy go get-able package server!

[from some other host or account]
$ go get example.com/go/foo
$ ls $GOPATH/src/example.com/go/foo

This is the minimum setup just to get HTTPS working with Apache v2 for your primary domain, to make go get happy. If you have multiple 'vhost' domains or other complex requirements, you're on your own... I'm still trying to get my server to serve full HTTPS for all of the domains it hosts.

Conclusion

While the go get command is the preferred way for golang programmers to fetch external packages into their working $GOPATH tree, the documentation is not extremely helpful in setting up all of the server-side bits that are required to support it. Individuals or organizations may want a mixture of public (read-only) as well as private/group read/write (pull/push) repos exported via go get without the risks or costs associated with hosting via an external party.

A self-hosted golang package server supporting the standard go get command can be implemented by configuring a webserver with proper type 'A' domain records, HTTPS plus a valid authority-signed certificate, proper git-http-backend tool configuration, URL rewrite rules and package export <meta> tags placed within the webroot on a per-package basis.

Sunday, 28 May 2017

Avoiding Mysterious "Permission denied" Errors During Windows Development: Check Your Services!

This is a short post to answer a question I've seen go un-answered, or incorrectly answered, on at least three major programming forums.

If, while developing on Windows (regardless of IDE or programming tool), you can compile your program the 'first time' -- in Eclipse, mingw, golang, whatever -- and run it, but on the next compile/run cycle you get "Permission denied" from the linker -- check that you don't have the "Application Experience" service disabled. It must be set to Manual, Automatic, or Automatic (Delayed Start)... just not Disabled. Otherwise you'll experience this issue, regardless of language, compiler, or IDE, at least on Windows 7.
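If you prefer the command line to services.msc, the equivalent should be roughly the following, run from an elevated prompt -- AeLookupSvc is, as far as I know, the internal name of the Application Experience service on Windows 7:

C:\> sc config AeLookupSvc start= demand
C:\> sc start AeLookupSvc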

If you're like me you might be in the habit of turning off all sorts of Windows Services on your personal machines to minimize the background crap going on. It's always a matter of tuning to determine what's really required and what isn't. For programming with rapid edit/build/run/debug cycles, it's common to run into the permission issue described above.

I have no idea (nor does anyone else, it would seem -- Microsoft isn't telling on its own forums) exactly why this service must be enabled to prevent the OS from locking a recently-terminated program executable on disk.

Perhaps (and this is a wild-ass guess), due to the way Application Experience monitors programs, to report if they crash, there's some other OS component in Windows that is waiting for a handshake from the Application Experience service before letting the executable be removed. This might make sense if one needs to send dumps of the .exe or something as part of a crash report.

Anyways... no one elsewhere has reported a definitive answer to this, so I'm noting it here mostly for myself, for posterity. I've run into it at least 3 times on various Windows systems doing development and it's highly annoying. Usually it's been a few years since I set up my box and I've forgotten how I solved it the last time...


Thursday, 26 November 2015

ZINK hAppy Printer Setup on Windows 7

ZINK Imaging Inc. makes some neat devices, the hAppy and hAppy+, which are basically super-fancy labelmakers. From what I understand the company was founded by refugees from Kodak after they shut down the Polaroid division, just after they had developed a cool new digital version of the Polaroid process.

I picked up a hAppy printer on eBay a while ago, thinking it might be neat as an alternative way of producing my c@rd password gen/recall cards. The fact the device can print directly onto shiny peel 'n stick labels, in multiple widths, is quite nifty. The widest roll is 2" (50mm) which just so happens to be nearly the width of a credit card.

While it's marketed at the craft market (think soccer moms and kids' birthday parties), it could be put to use in so many other ways. The manufacturer's site and installation instructions are focused completely on its AirPrint capabilities: printing from a smartphone via iOS or Android over WiFi, using their proprietary apps (which, to be honest, are pretty good for the craft market -- easy to use and lots of neat clip art to get designs done quick for those birthday parties, weddings and craft parties).

However, I want to use it to print my own stuff, from a laptop at a mobile kiosk, using my own algorithmically-generated security cards. I can do it from the smartphone but it requires a lot of manual fiddling with resizing/cropping in their phone app, even though my c@rd generator app spits out fully-specified, sized images ready for printing. I even encode the exact print size in the EXIF data, so it should really be 100% automated.

The only issue was that ZINK doesn't officially ship or publish *any* Windows USB drivers! The hAppy printer has a microUSB port, so I wondered if it could just act as a regular printer.

Turns out, the answer is YES. ZINK didn't help one iota, though. Multiple emails to their tech support just went into a black hole. They've either already moved on to the next product line, or just have no real humans watching their emails. Perhaps they're already going bankrupt, who knows. Bad customer support anyways.

Lots of info is out there on the 'net about using the ZINK range with Apple's AirPrint, which leads one to believe that the ugly Apple iTunes software (which for some gawd-awful reason is the only official way to obtain AirPrint capability on Windows!?) is the path to printing bliss here... but it is NOT the right solution. Don't put that crap on your machine; it's a dead end for this device. I managed to get my Windows 7 PC to 'see' the hAppy via Wifi, but it wouldn't print anything, just giving obscure errors in the print spool manager.

So... this device shows up in Windows 7 (64-bit) upon initial plug-in via USB as a generic 'Unspecified' entry in Control Panel\All Control Panel Items\Devices and Printers. You'll see a blank box labelled just 'hAppy Printer', in the 'Unspecified' Section below 'Printers and Faxes'. That means it's showing up as a USB device, but Windows doesn't yet know how to print to it.

Right-clicking into Properties, you'll see the usual device Properties dialog, with the usual buttons -- except that 'Update Driver', 'Disable', 'Uninstall', etc. are greyed out. The only one available is 'Driver Details', so click that and you'll also see it says no drivers have been installed for the device. Not promising.

However, Windows has a little-known class of support drivers, 'USB Printing', which sit beneath more specific vendor printer drivers. And, it turns out, ZINK does in fact have a Windows USB driver for this printer! It's just not published anywhere outside of Microsoft's Windows Driver database. So... the only way to get this installed is to go through the Control Panel:


1. In Control Panel -> Devices and Printers, choose 'Add a Printer'.
2. Choose 'Local Printer', and change the dropdown beside 'Use an Existing Port' from LPT1: to USB00x: (whichever one is there; if you have an existing printer it might be USB002: or USB003: or higher...)

3. Click Next. If the next dialog doesn't have a manufacturer named 'ZINK' in the left pane, you need to click 'Windows Update' and wait about 5 minutes (or longer!). It eventually will come back, and ZINK should now appear in the left pane's list of manufacturers.
4. Choose 'hAppy' from the printer list in the right pane. DO NOT choose 'hAppy XPS' (not that I've tried it, but XPS is a virtual printer driver, which we don't want -- unless you want to print to files for later physical printing).
5. Complete the wizard, and in Control Panel -> Devices and Printers you should now see your new 'ZINK hAppy' in the Printers and Faxes section.

Now, printing to labels using Windows Explorer's default right-click -> Print will work. You'll have to fiddle with setting default Printer Preferences, setting a Custom paper size preset to match the width of your roll, and so on (I'll write details of that later if anyone cares -- it's a bit fidgety, but relatively straightforward). Be mindful of the print dialog's 'Scale to Fit' settings etc. which will affect the printout size. With two or three tries I was able to come up with presets that printed out at my expected size. The 'Kiss Cut' vs. 'Full Cut' setting didn't seem to make any difference, however -- the hAppy always auto-chopped my print after it was done, which was a bit annoying, but no biggie.

Strange that ZINK hasn't officially published that this driver is available; it seems to be relatively complete, enough so that they submitted it to Microsoft's official driver database. It seems aware of all of the device's settings (normal vs. Vivid print mode, roll widths, etc.). It even reports the remaining roll length, so it seems to be the real deal.

Finally... here's the driver (7zip archive) for Windows 7 x64. As Admin, extract it to C:\Windows\System32\DriverStore\FileRepository.

[Link updated 2018-11-09]

Friday, 18 July 2014

Google Yanks Yet Another Useful Feature with No Warning: Labs "Add any gadget by URL"

Boo Google, boo.

Well, if anyone needed another example of why it's just a bad idea to rely on Google APIs for anything important, here's another one. Today I saw a teensy note appear at the top of my gmail announcing the Labs "Add any gadget by URL" gadget is being deprecated.

I used this feature to develop my "Hashcash mint" gadget, and it's the easiest way to add such gadgets to the sidebar in one's gmail.

I've sent them feedback asking what migration strategy one should use and how exactly someone is supposed to embed gadgets in their gmail page now. Hopefully I'll hear back within the next few days.

Perhaps I need to look further into the OAuth2 system or something; but I fear the response will be something like "Port it to a Chrome extension" (which I've already done, but that's not cross-browser, is it?).

Does anyone else know of APIs which allow embedding of one's custom gadgets into pages like GMail, Calendar etc. without using some kind of Google walled-garden to host them?

---

EDIT: No response yet.. the entire Google Developers section on gadgets is woefully out-of-date. I submitted about 4 support requests relating to dead links, missing sections, and just plain wrong info. Sigh. I get the feeling the entire gadget API has been left out to die, at least for people outside of Google.

Well if so, I'm glad I've only written one Google gadget and chrome extension, rather than investing more effort. Google just drives away devs by constantly deprecating things willy-nilly.

Saturday, 11 January 2014

mt-crypt - a simple stream cipher based on the Mersenne Twister PRNG

It's often said: "Anyone can write a crypto algorithm they can't break themselves". True enough, but that doesn't mean it should be forbidden to study or write crypto software because, well, one isn't an expert in studying or writing crypto software -- that would be a Catch-22 of sorts.

What's the entry point then, for someone who wants to understand? One learns by doing.

In light of the Snowden leaks of 2013 I think it's time that writing crypto became something to be encouraged rather than discouraged. Treating crypto software as a 'read-only' institution can't work -- we simply cannot trust a few blessed experts to write our crypto libraries and utilities for us. People outside of the shadowed halls of three-letter agencies must gain their own expertise, or at least become familiar enough with the concepts to ask intelligent questions, look for backdoors and weaknesses, try to apply new defenses to what should be private data, and just generally not be passive victims.

It has become more important than ever that there be a diversity of implementation, and that programmers who might have been hesitant before to write crypto programs start doing so.

I'm not saying one should apply any ol' system to important data. Algorithms must be carefully examined by many eyes to look for problems and determine the real security of any new system. But if everyone's afraid to write new systems, there won't be very much to examine, and not enough alternatives to keep those who would spy on us off-balance.

If the NSA's goal was/is to subvert the most prevalent cryptosystems, it makes sense to ensure there are no prevalent cryptosystems. We need to take the side of the mythical Hydra, whose winning strategy is to grow two heads for each one cut off, making things even harder for the attacker.

To do this, we all need to better understand just what strong crypto is and how to implement it so there are more choices for everyone to use, and no 'master key' -- no 'most popular solution' -- for those that would spy indiscriminately on us all by exploiting a single weakness. If the haystack cannot adequately hide the needle, we need to throw on a lot more hay, and add a few million more needles in there. If the opponent's resources are superior, an open battlefield no longer is tenable and we must use guerilla tactics -- divide and conquer, stay in small groups, change routines constantly.

[OK I'm done with the metaphors... and I'm probably on all the watch lists now :/]

With the above in mind, I started reading about alternative, non-NIST algorithms to see what's out there and how I might implement my own crypto utilities. I started out trying to implement a standard block cipher along the lines of DES or AES, trying to understand what S-boxes (substitution) and P-boxes (permutation/confusion) did, and how to use them.

Most block ciphers apply S-box and P-box manipulations, one after the other, over multiple rounds with some sort of key schedule, to achieve an invertible transformation. Lots of arcane math goes into S-box design, which made me a bit queasy... I still need to learn a lot about that before I'm confident about designing such a thing.

Another big worry with block ciphers is the problem of padding. If you're trying to use a block cipher on files, or when encapsulating packets, the file as a whole, or each packet's payload, may not be a multiple of the blocksize; so the ciphertext must be padded somehow to be a multiple of the blocksize. There are 'padding oracle' attacks that make it dangerous to use block ciphers unless one understands the block padding issues completely. See here or here for some good explanations on how one can 'unzip' an encrypted CBC-mode block train if the padding scheme is known. Indeed, having any predetermined format or structure outside of the ciphertext itself can open things up to oracle attacks it seems.

So a stream cipher seems to be a safer entry point for learning and research for a would-be crypto programmer. A stream cipher doesn't encrypt a block at a time, rather it's a byte- or bit-oriented system that can handle arbitrary-length data. There is no concept of a blocksize, and thus no opportunity or temptation to introduce 'meta-data'. Usually this requires a 'cryptographically-secure PRNG' (pseudo-random number generator); that is, one that is very hard to predict so long as the seed (which is mathematically derived from the key) isn't known.

A cryptographic PRNG must meet a much higher standard for randomness and non-predictability than the 'usual' PRNGs used in standard libraries and games...

I knew of the Mersenne Twister (MT), a really good PRNG with an astoundingly huge period (2^19937-1). That sounded like a good candidate, but the literature says even that isn't cryptographically-secure enough, since despite a long period it still is subject to prediction attacks given enough PRNG output. The authors of MT wrote a paper addressing this, with an ingenious and simple scheme to harden the use of MT -- this consists of two things: a) a non-linear transformation on the PRNG output, and b) throwing away most bits of the PRNG output before combining with the plaintext. It would seem this makes it very hard indeed to know the state of the PRNG and to predict its output as used with a stream cipher.

A distinguisher is still plausible (see here), but no one has published a full-on key recovery attack as of yet and the researchers make no assertion that this is even possible from the distinguisher. The so-far-theoretical attack lets one determine that a stream of ciphertext likely is using Mersenne Twister as its underlying PRNG by observing 2^50 bits of contiguous ciphertext.

Now that's still a lot of data -- 2^47 bytes. Though some claim this cipher is 'broken' in the pure cryptographic sense, it seems it would be perfectly salvageable if one were to re-seed conservatively.

The idea would essentially be to re-seed on some interval, say every 2^16 to 2^32 bytes of output, based on the PRNG stream itself; thus a sort of (perfect-?)forward secrecy on the reseed schedule would be maintained (an attacker would need to first predict the MT output itself to know when the next re-seed would occur; but they shouldn't be able to do that, since the re-seeding would always occur at much less than 2^50 output bits).

The simplicity of the cryptMT stream cipher seems compelling (unless someone proves otherwise), and the authors' follow-up algorithm, cryptMT v3, is much more complex from a design and code perspective (they focused on speed-ups for SIMD instruction sets, which is nice, but I'd rather have a simpler algorithm). Crypto designers want security, but as we've learned from the recent Heartbleed fiasco, we also want simplicity of design. Complexity introduces the very real possibility of flaws that undo all other efforts.

[Note to self: the attack described in the above paper talks about LSBs of the 8 MSBs of the accum output. Perhaps some kind of xor-with-parity of the internal cryptMT accum value could further obfuscate the LSB to harden against this attack, or using multiple MT generators with divergent seeds and states, combining them via XOR, could make it infeasible to predict a single MT stream state?]


Wednesday, 27 November 2013

Gmail Hashcash Notifications To Your Phone

Gmail Inbox Setup for Hashcash Notifications To Your Phone

If you install my Hashcash for Gmail google script to scan & validate your incoming email according to the presence or absence of hashcash stamps, and you just want notifications for verified emails coming in, you can tweak how the inbox notifies you on your mobile phone. Here's one possible setup:

It's quickest to set this up from the desktop:

  1. From the gmail web interface Gear->Settings
  2. Inbox tab
  3. Inbox type 'Priority'
  4. For Inbox sections 1-4, Options->Remove Section
  5. Redefine Inbox section 1. Options->More Options...->Show All From Label [#$]
  6. Save Changes

Then on your Android phone, configure Gmail:
  1. Menu->More->Settings
  2. Account Settings->[your email address]
  3. Check 'Priority Inbox' (make default for this account)
  4. Labels to Notify:
       - Inbox: off
       - Priority Inbox: off (was: [subtle, always, notify for every msg])
       - [#$]: [subtle, always, notify for every msg]

NOTE: This is a pretty strict set-up, in that you won't get notifications for anything that doesn't have a valid Hashcash stamp. Not too useful until most people in your local network are also using Hashcash. That's the problem with network effects... the usefulness of a thing only becomes apparent once many people are using it together.

If you look into Gmail's Priority inbox rules it's a pretty powerful way to tailor how/when you get notifications.