How Do You Check Your Vote?

How do you check your vote?

It’s a simple question with a simple, disturbing answer.

You cannot check your vote!

And when I say “you” I mean you. I don’t mean the system, the authorities, electoral officials, foreign auditors, or any third party. I mean you – just you.

There isn’t an electoral system on Earth that allows voters to check their votes. Americans cannot check their votes. Canadians cannot check their votes. Germans cannot check their votes.  Indians cannot check their votes. The same holds for the British, the Japanese, the Koreans, the Australians, the French, and the Danes. Everywhere people cast ballots they are denied the means to personally check them. Amazingly, billions of people all over the world go through the motions of voting without demanding a rigorous system to check their votes and assess the integrity of their elections.

Well this is planet moron!

“Oh, come on,” cried Karen. “Surely you can trust law-abiding and well-ordered electoral systems to properly count ballots?  And, aren’t you undermining the public’s confidence in the democratic process by demanding ways to verify votes? Are you a crypto-fascist?  An Asian supremacist?  A wife beater? Do you want the terrorists to win? Do you eat black puppies?”

Let’s all calm down and look at the “checking your vote problem” with Informed Naked Ape Protocol (iNap) in mind.  When you cast your vote you are forced to trust the electoral system. Informed Naked Ape Protocol has much to say about trust.

iNap #9: If you don’t control it you cannot trust it.

I don’t control the electoral system in the United States. I didn’t control it in Canada either. If you don’t personally and absolutely control a thing you can never trust it. I’ve voted in American and Canadian electoral systems and trusted neither because:

iNap #2: Trust is for imbeciles.

“Oh, John you’re so negative, so cynical, so filled with bitterness.”

It gets worse.

iNap #4: Assume corruption.

It’s wise to assume that any system you deal with is corrupt. Corruption is the default state. Things are either innocently or intentionally fucked up until there are deep, open, and relentlessly verified scientific arguments to the contrary.

iNap #7: Practice relentless verification.

iNap #10: Only scientific and mathematical arguments are admissible.

How can I analyze the integrity of Idaho’s electoral system? I would need unfettered access to state voter registrations, paper ballots, voting machine memories, and electoral officials. I would have to personally inspect every component of the entire system to pass judgment. I lack such access and so do you. Don’t pretend otherwise. You cannot verify the integrity of your electoral system. Once again you are forced to trust and as I’ve said before and will repeat until my dying day:

iNap #2: Trust is for imbeciles.

It’s time to stop believing in electoral systems because that is precisely what we are doing. We believe our systems are sound yet tolerate their inability to meet basic needs like personal vote checking. Would you believe your money was safe in a bank that didn’t let you check account balances? Well imbeciles, that’s what you’re doing when you vote! Belief in anything is a giant red flag because:

iNap #3: “Belief” is a bullshit word.

All modern voting systems are broken. Without a sound, open-source, mathematically rigorous means of checking personal votes and verifying vote aggregates, elections are nothing more than cynical and insulting public relations spectacles “full of sound and fury, signifying nothing.”

Yes, Karen, all our electoral systems are broken but they don’t have to be. It’s entirely possible, indeed technically trivial, to create voting systems that:

  1. Count only registered votes.
  2. Permit voters to check their votes.
  3. Allow full secure public vote aggregation verification.
  4. Satisfy the highest standards of public disclosure and scrutiny.

iNap #6: Demand full analytic disclosure.

In the next few posts, I will outline such a system. We know how to do this; the creators of blockchains and public-key cryptography have already done all the hard work. We could have much better electoral systems up and running in months. The major obstacles are not technical; they are entirely political.
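To make the vote-checking idea concrete, here is a toy sketch built on a hash commitment, the same primitive underlying those cryptographic systems. Everything in it, the salt handling, the published list, the function names, is my own illustrative assumption, not a description of any deployed electoral system:

```python
import hashlib
import secrets

def commit_ballot(ballot: str) -> tuple[str, str]:
    """Voter side: generate a private salt and a public commitment."""
    salt = secrets.token_hex(16)  # the voter keeps this secret
    digest = hashlib.sha256((salt + ballot).encode()).hexdigest()
    return salt, digest           # only the digest goes on the public list

def check_vote(ballot: str, salt: str, public_list: list[str]) -> bool:
    """Voter side: recompute the commitment and confirm it was recorded."""
    digest = hashlib.sha256((salt + ballot).encode()).hexdigest()
    return digest in public_list

# A voter commits to a ballot; the authority publishes the digest...
salt, digest = commit_ballot('candidate-42')
public_list = ['...other voters...', digest]

# ...and the voter, personally, with no trusted third party, checks it.
print(check_vote('candidate-42', salt, public_list))   # True
print(check_vote('candidate-007', salt, public_list))  # False
```

A real system needs far more than this: proofs that the published list sums to the announced tally, defenses against vote selling and coercion, and full public disclosure. But the core voter-side check really is this simple.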

GitHub’s Silly Master Plan

The kids at GitHub have tested positive for Mad Woke Disease (MWD) – again!

The last outbreak was over codes of conduct; this time it’s about naming Git repository master branches! If you’re wondering what a Git master branch is and why infantile wokesters are acting out, I envy you. Perhaps you should stop reading now. MWD is a pathetic mental illness. It robs sufferers of perspective and judgment and often induces inane bouts of vacuous virtue signaling.

Still here – you were warned – let’s dig into the tiresome technicalities.

GitHub is a major website that hosts tens of thousands of Git code repositories. Code repositories are specialized databases that track program component changes. The most commonly tracked components are source code files, but other binary components like image files are also tracked. Code repositories make it possible for widely dispersed programmers to collaborate on large projects without irreversibly wrecking each other’s work. Hence GitHub, and its competitors, matter to software developers.

So what’s this master branch?

Program code evolves like living organisms, and just as relationships between organisms are shown on “tree of life” diagrams, program relationships are displayed on repository branch diagrams. Complex programs have pedigrees that look like inbred royal family trees.

Images from: David’s Commonplace Book and endjin blog.

If you look closely at the left side of the previous diagram you’ll notice that the line of green dots beside the inbreeding graphic is labeled “master”.

I know what you’re thinking. How did white supremacists infiltrate the woke thinking denizens of GitHub and embed racially loaded terms like “master” in code repositories? Obviously, wherever the word “master” occurs it’s referring to slavery and the brutal suppression of people of color1 because words have only one meaning and context is always irrelevant. To atone for their terminological sins GitHub is planning to rename the “master” branch something currently innocuous like “main”.

With all the shit plugging global toilets why did GitHub’s plumbers choose this silly turd to flush?

The use of the word “master” is a longstanding Git convention. Unlike other branches, the master branch cannot be renamed. This is a minor annoyance, and I support efforts to allow other names, but having a fixed name for the master branch has advantages. It simplifies automatic program builds, as they don’t have to determine what the master branch is called today. If the master branch is forcibly renamed it will break thousands of builds all over the world. Think of it as a digital black lives matter riot.
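For what it’s worth, builds don’t have to hardcode the name at all. A hedged sketch in Python (the helper names are mine; it assumes the conventional `git symbolic-ref refs/remotes/origin/HEAD` query for discovering a repository’s default branch):

```python
import subprocess

def default_branch_from_ref(symbolic_ref: str) -> str:
    """Extract a branch name from symbolic-ref output,
    e.g. 'refs/remotes/origin/master' -> 'master'."""
    return symbolic_ref.strip().rsplit('/', 1)[-1]

def repo_default_branch() -> str:
    """Ask Git (not a hardcoded string) what the default branch is."""
    ref = subprocess.run(
        ['git', 'symbolic-ref', 'refs/remotes/origin/HEAD'],
        capture_output=True, text=True, check=True).stdout
    return default_branch_from_ref(ref)

print(default_branch_from_ref('refs/remotes/origin/master'))  # master
print(default_branch_from_ref('refs/remotes/origin/main'))    # main
```

A build that queries the name this way survives a rename; the thousands that splice `master` into strings do not.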

Here are a few rhetorical questions:

  • If the “master” branch is renamed “main” will it fix racism?

  • Will renaming the master branch demonstrably improve a single black life?

  • What will we call Chess Grandmasters2?

  • Must we rename Masters Degrees to Main Degrees?

  • What about the Masters Golf Tournament?

  • Or masterclasses?

And, it goes without saying, that we can never master anything, in the sense of high achievement, ever again because it’s clearly woke racist3.

  1. “People of color” is woke but the transposition “colored people” is not! The improper use of word order or pronouns is now a capital offense. First time offenders will be let off with a Twitter beating but if you persist, like some nagging people of vagina, it’s straight to the burning pyre for you.↩︎

  2. Contrary to widespread woke opinion “Grandmaster” is not a KKK rank.↩︎

  3. Woke racism should not be confused with real racism.↩︎

Better Blogging with Jupyter Notebooks on WordPress

When I discovered Jupyter notebooks a few years ago I instantly recognized their potential as a technical blogging tool. Jupyter notebooks support mixtures of text, mathematics, program code, and graphics in a completely interactive environment. It’s easy to convert notebook JSON .ipynb files to markdown, \LaTeX, and HTML so it’s not a big leap to use Jupyter as a super-editor for blog posts with heavy doses of code, mathematics, and graphics.
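If you haven’t looked inside an .ipynb file, it’s worth a peek: the whole notebook is one JSON dictionary holding a list of typed cells, which is why these format conversions are straightforward. A minimal sketch (the notebook content below is a hand-built stand-in, not a real file):

```python
import json

# A hand-built stand-in for a tiny notebook file's contents.
nb_json = json.dumps({
    "nbformat": 4, "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["# A heading with $e^{i\\pi} = -1$"]},
        {"cell_type": "code", "metadata": {}, "execution_count": 1,
         "outputs": [], "source": ["print('hello')"]},
    ],
})

# Walk the cells the way a converter such as nbconvert would.
nb = json.loads(nb_json)
kinds = [cell["cell_type"] for cell in nb["cells"]]
print(kinds)  # ['markdown', 'code']
```

Because the container is plain JSON, slicing, merging, and rewriting notebooks is ordinary dictionary surgery, a fact the hacks below exploit.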

I converted a few simple notebooks into HTML and tried loading them to my blog; the results did not amuse me! Raw notebook HTML is not suitable for WordPress, which imposes some serious constraints on low-cost and free blogs. You cannot:

  1. Use arbitrary JavaScript.
  2. Import standalone CSS styles.
  3. Use non-standard plugins.

By setting up your own site or upgrading your account you can shed these limitations. I’ve considered both options, but there’s just something about software vendors teasing users with basic features while nagging them to spend more on upgrades that gets my goat. I’m used to such abuse from the likes of Adobe and would advise WordPress to dial back the upgrade nagging.

Fortunately, it’s not necessary to upgrade your account to make excellent use of Jupyter notebooks. With a few simple notebook file hacks, you can compose in Jupyter and post to WordPress.

Hack #1: nb2wp

Benny Prijono has created a handy Python program nb2wp that converts Jupyter notebooks to WordPress-oriented HTML. nb2wp uses BeautifulSoup, (a great software name if there ever was one), and the Python utility pynliner to convert the HTML generated by the Jupyter nbconvert utility to a WordPress-oriented form.

nb2wp HTML can be pasted, (read Benny’s instructions), into the WordPress block editor. My post Using jodliterate was composed in this way.

nb2wp notebook HTML is treated as a single block editor block. This makes it hard to use the block editor, which brings me to the next hack.

Hack #2: nb2subnb

Notebooks are stored as simple JSON files. It’s easy to split notebooks into n smaller notebooks. The following Python program, (available here), cuts notebook files into smaller sub-notebooks.

In [1]:
import json
import os

# NB_DIRECTORY, text_in_source, and insert_nb_metadata are
# defined elsewhere in the full script.

def nb2subnb(filename, *, cell_type='markdown',
             single_nb=False, keep_cells=[], keep_texts=[]):
    """(nb2subnb) splits out typed cells of jupyter
    notebooks into n sub-notebooks.

       # split into n markdown cell notebooks
       nb2subnb(nb_file)

       # split into n code cell notebooks
       nb2subnb(nb_file, cell_type='code')

       # all markdown cells in single notebook
       nb2subnb(nb_file, single_nb=True)

       # code cells with numbers in range as single notebook
       nb2subnb(nb_file, cell_type='code', single_nb=True,
                keep_cells=list(range(20)))

       # markdown cells with strings 'Bhagavad' or 'github'
       nb2subnb(nb_file, keep_texts=['Bhagavad', 'github'])

       # all code cells in range with string 'pacman'
       nb2subnb(nb_file, cell_type='code', single_nb=True,
                keep_texts=['pacman'], keep_cells=list(range(30)))
    """
    with open(filename) as in_file:
        nb_data = json.load(in_file)

    # notebook file name without extension/path
    nbname = os.path.basename(os.path.splitext(filename)[0])

    nb_cells, one_cell, nb_files = dict(), list(), list()

    for cnt, cell in enumerate(nb_data['cells']):
        if cell['cell_type'] == cell_type:

            if not single_nb:
                # single cell notebooks
                nb_cells, one_cell = dict(), list()
                if 0 == len(keep_cells) or cnt in keep_cells:
                    if text_in_source(cell, keep_texts):
                        one_cell.append(cell)
                        nb_cells["cells"] = one_cell
                        nb_cells = insert_nb_metadata(
                            nb_data, nb_cells)
                        nb_out_file = NB_DIRECTORY + \
                            nbname + '-' + cell_type + \
                            '-' + str(cnt) + '.ipynb'
                        with open(nb_out_file, 'w') as out_file:
                            json.dump(nb_cells, out_file, indent=1)
                        nb_files.append(nb_out_file)
            else:
                # single notebook with only (cell_type) cells
                if 0 == len(keep_cells) or cnt in keep_cells:
                    if text_in_source(cell, keep_texts):
                        one_cell.append(cell)
                        nb_cells["cells"] = one_cell

    if single_nb:
        nb_out_file = NB_DIRECTORY + \
            nbname + '-' + cell_type + '-only.ipynb'
        nb_cells = insert_nb_metadata(nb_data, nb_cells)
        with open(nb_out_file, 'w') as out_file:
            json.dump(nb_cells, out_file, indent=1)
        nb_files.append(nb_out_file)

    # list of generated sub-notebooks
    return nb_files

nb2wp can be applied to the sub-notebooks to produce smaller, more block-editor-friendly HTML files. Blog posts can then be assembled by picking and pasting the smaller blocks.

Combining nb2wp and nb2subnb

nb2wpblk combines the actions of nb2wp and nb2subnb to generate n HTML files.

In [2]:
# append nb2* script directory to system path
import sys
In [3]:
# notebook file
nb_file = r'C:\temp\nb2wp\UsingJodliterate.ipynb'
In [4]:
nb2subnb_opts = {
    'single_nb': False,
    'cell_type': 'code',
    'keep_cells': [2, 8, 21],
    'keep_texts': []
}
In [5]:
import nb2wput as nbu

# split notebook into selected parts and convert to HTML 
nbu.nb2wpblk(nb_file, nb2subnb_parms=nb2subnb_opts)
Using template: full
Using CSS files ['C:\\temp\\nb2wp\\style.css']
Saving CSS to C:\temp\nb2wp\tmp\style.css
C:\temp\nb2wp\tmp\UsingJodliterate-code-2.html: 7054 bytes written in 4.488s
Using template: full
Using CSS files ['C:\\temp\\nb2wp\\style.css']
Saving CSS to C:\temp\nb2wp\tmp\style.css
C:\temp\nb2wp\tmp\UsingJodliterate-code-8.html: 4715 bytes written in 4.177s
Using template: full
Using CSS files ['C:\\temp\\nb2wp\\style.css']
Saving CSS to C:\temp\nb2wp\tmp\style.css
C:\temp\nb2wp\tmp\UsingJodliterate-code-21.html: 8683 bytes written in 5.437s

The utilities referenced in this post are available on GitHub here. Help yourself and blog better!

Using jodliterate

The JODSOURCE addon, (a part of the JOD system), contains a handy literate programming tool that enables the generation of beautiful J source code documents.

The Bible, Koran, and Bhagavad Gita of Literate Programming is Donald Knuth’s masterful tome of the same name.

Knuth applied Literate Programming to his \TeX systems and produced what many consider enduring masterpieces of program documentation.

jodliterate is certainly not worthy of \TeX level accolades but with a little work it’s possible to produce fine documents. This J kernel notebook outlines how you can install and use jodliterate. Jupyter notebooks are typically executed but to accommodate J users who do not have Jupyter this notebook is also available on GitHub as a static PDF document.

Notebook Preliminaries

In [1]:
NB. show J kernel version
9!:14 ''
In [2]:
NB. load JOD in a clear base locale
load 'general/jod' [ clear ''

NB. The distributed JOD profile automatically RESETME's.
NB. To safely use dictionaries with many J tasks they must
NB. be READONLY. To prevent opening the same put dictionary
NB. READWRITE comment out (dpset) and restart this notebook.
dpset 'RESETME'

NB. Converting Jupyter notebooks to LaTeX is 
NB. simplified by ASCII box characters.
portchars ''

NB. Verb to show large boxed displays in
NB. the notebook without ugly wrapping.
sbx_ijod_=: ' ... ' ,"1~ 75&{."1@":

Installing jodliterate

To use jodliterate you need to:

  1. Install a current version of J.
  2. Install the J addons JOD, JODSOURCE, and JODDOCUMENT.
  3. Build the JOD development dictionaries from JODSOURCE.
  4. Install a current version of pandoc.
  5. Install a current version of \TeX and \LaTeX.
  6. Make the jodliterate J script.
  7. Run jodliterate on a JOD group with pandoc compatible document fragments.
  8. Compile the files of the previous step to produce a PDF.

When presented with long lists of program prerequisites my impulse is to run! Life is too short for configuration wars. Everything should be easy. Installing jodliterate requires more work than phone apps but compared to enterprise installations setting up jodliterate is trivial. We’ll go through it step by step.

Step 1: Install a current version of J

J is freely available. J installation instructions can be found on the J Wiki on this page.

Follow the appropriate instructions for your OS.

Note: JOD runs on Windows, Linux, and MacOS versions of J, hence these are the only platforms that currently support jodliterate.

Step 2: Install the J addons JOD, JODSOURCE and JODDOCUMENT

After installing J install the J addons. J addons are installed with the J package manager pacman. Pacman has three IDE flavors: a command-line flavor and two GUI flavors. The GUI flavors depend on JQT or JHS. The GUI flavors of pacman are only available on some versions of J whereas the command line version is part of the base J install and is available on all platforms.

I install all the addons. I recommend that you do the same.

JOD depends on some J modules like jfiles, regex, and task that are sometimes distributed as addons. If you install all the addons, JOD’s modules and dependencies are installed together.

Installing addons with command line pacman

Start J and do:

In [3]:
NB. install J addons with command-line pacman

load 'pacman'    NB. load pacman jpkg services
In [4]:
'help' jpkg ''   NB. what can you do for me?
Valid options are:
 history, install, manifest, remove, reinstall, search,
 show, showinstalled, shownotinstalled, showupgrade,
 status, update, upgrade

In [5]:
NB. install all addons
NB. see

NB. uncomment next line if addons not installed
NB. 'install' jpkg '*'  NB.
In [6]:
3 {. 'showinstalled' jpkg '' NB. first few installed addons
|api/expat|1.0.11|1.0.11|libexpat                     |
|api/gles |1.0.31|1.0.31|Modern OpenGL API            |
|api/java |1.0.2 |1.0.2 |api: Java to J shared library|
In [7]:
'showupgrade' jpkg ''  NB. list addon updates

Installing addons with JQT GUI pacman

I mostly use the Windows JQT version of pacman to install and maintain J addons. You can find pacman on the tools menu.

pacman shows all available addons and provides tools for installing, updating, and removing them.

The GUI version is easy to use. Press the Select All button and then press the Install button to install all the addons. To update addons select the Upgrades menu and select the addons you want to update.

Step 3: Build the JOD development dictionaries from JODSOURCE

JOD source code is distributed in the form of JOD dictionary dumps. Dictionary dumps are large J scripts that serialize JOD dictionaries. Dumps contain everything stored in dictionaries. You will find source code, binary data, test scripts, documentation, build macros, and more in typical JOD dictionaries.

jodliterate is stored as a JOD dictionary group. A dictionary group is simply a collection of J words with optional header and post-processor scripts. JOD generates J scripts from groups. Before we can make jodliterate we must load the JOD development dictionaries. The JODSOURCE addon includes a J script that loads development dictionaries.

Again, start J and do:

In [8]:
require 'general/jod'
In [9]:
NB. set a JODroot user folder 
NB. if not set /jod/ is the default

NB. use paths for your OS
UserFolders_j_=: UserFolders_j_ , 'JODroot';'c:/temp'

NB. show added folder
UserFolders_j_ {~ (0 {"1 UserFolders_j_) i. <'JODroot'
In [10]:
NB. load JOD developement dictionaries
load_dev_tmp=: 3 : 0
if. +./ (;:'joddev jod utils') e. od '' do.
  'dev dictionaries exist'

load_dev_tmp 0
dev dictionaries exist
In [11]:
NB. joddev, jod, utils should exist

erase 'load_dev_tmp'
(;:'joddev jod utils') e. od ''
1 1 1

Step 4: Install a current version of pandoc

pandoc is easily one of the most useful markup utilities on the intertubes. If you routinely deal with markup formats like markdown, XML, \LaTeX, or JSON and you aren’t using pandoc, you are working too hard.

Be lazy! Install pandoc.

jodliterate uses the task addon to shell out to pandoc. Recent versions of pandoc support J syntax highlighting.

In [12]:
NB. show pandoc version from J - make sure you are running 
NB. a recent version of pandoc. There may be different
NB. versions in many locations on various systems.

ppath=: '"C:\Program Files\Pandoc\pandoc"'
THISPANDOC_ajodliterate_=: ppath
shell THISPANDOC_ajodliterate_,' --version'
Compiled with pandoc-types 1.20, texmath 0.12, skylighting 0.8.3
Default user data directory: C:\Users\john\AppData\Roaming\pandoc
Copyright (C) 2006-2019 John MacFarlane
This is free software; see the source for copying conditions.
There is no warranty, not even for merchantability or fitness
for a particular purpose.

In [13]:
NB. make sure your version of pandoc 
NB. supports J syntax-highlighting

NB. appends line feed character if necessary
tlf=:] , ((10{a.)"_ = {:) }. (10{a.)"_

NB. J is on the supported languages list
pcmd=: THISPANDOC_ajodliterate_,' --list-highlight-languages'
(<;._2 tlf (shell pcmd) -. CR) e.~ <,'j'

Step 5: Install a current version of LaTeX

jodliterate uses \LaTeX to compile PDF documents. When setjodliterate runs it sets an output directory and writes a \LaTeX preamble file JODLiteratePreamble.tex to it. It’s a good idea to review this file to get an idea of the \LaTeX packages jodliterate uses. It’s possible that some of these packages are not in your \LaTeX distribution and will have to be installed.

To ease the burden of \LaTeX package maintenance I use freely available \TeX versions that automatically install missing packages.

  1. On Windows I use MiKTeX
  2. On other platforms I use TeXLive

If your system automatically installs packages the first time you compile jodliterate output it may fetch missing packages from The Comprehensive \TeX Archive Network (CTAN). If new packages are installed reprocess your files a few times to ensure all the required packages are downloaded and installed.

Step 6: Make the jodliterate J script

Once the JOD development dictionaries are built (Step 3) making jodliterate is easy. Start J and do:

In [14]:
require 'general/jod'

NB. open dictionaries
od ;:'joddev jod utils' [ 3 od ''
|1|opened (rw/ro/ro) ->|joddev|jod|utils|
In [15]:
NB. generate jodliterate
sbx mls 'jodliterate'
+-+--------------------+------------------------------------+               ... 
|1|load script saved ->|c:/jod/joddev/script/jodliterate.ijs|               ... 
+-+--------------------+------------------------------------+               ... 

mls creates a standard J load script. Once generated this script can be loaded with the standard J load utility. You can test this by restarting J without JOD and loading jodliterate.

In [16]:
NB. load generated script
load 'jodliterate'
NB. (jodliterate) interface word(s):
NB. --------------------------------
NB. THISPANDOC      NB. full pandoc path - use (pandoc) if on shell path
NB. grplit          NB. make latex for group (y)
NB. ifacesection    NB. interface section summary string
NB. ifc             NB. format interface comment text
NB. setjodliterate  NB. prepare LaTeX processing - sets out directory writes preamble

NOTE: adjust pandoc path if version (pandoc is not >=

Step 7: Run jodliterate on a JOD group with pandoc compatible document fragments

This sounds a lot worse than it is. There is a group in utils called sunmoon that has an interesting pandoc compatible document fragment.

Start J and do:

In [17]:
require 'general/jod'

od 'utils' [ 3 od ''
|1|opened (ro) ->|utils|
In [18]:
NB. display short explanations for (sunmoon) words
sbx hlpnl }. grp 'sunmoon'
+-----------------+-------------------------------------------------------- ... 
|IFACEWORDSsunmoon|interface words (IFACEWORDSsunmoon) group                ... 
|NORISESET        |indicates sun never rises or sets in (sunriseset0) and ( ... 
|ROOTWORDSsunmoon |root words (ROOTWORDSsunmoon) group                      ... 
|arctan           |arc tangent                                              ... 
|calmoons         |calendar dates of new and full moons                     ... 
|cos              |cosine radians                                           ... 
|fromjulian       |converts Julian day numbers to dates, converse (tojulian ... 
|moons            |times of new and full moons for n calendar years         ... 
|round            |round (y) to nearest (x) (e.g. 1000 round 12345)         ... 
|sin              |sine radians                                             ... 
|sunriseset0      |computes sun rise and set times - see group documentatio ... 
|sunriseset1      |computes sun rise and set times - see group documentatio ... 
|tabit            |promotes only atoms and lists to tables                  ... 
|tan              |tan radians                                              ... 
|today            |returns todays date                                      ... 
|yeardates        |returns all valid dates for n calendar years             ... 
+-----------------+-------------------------------------------------------- ... 
In [19]:
NB. display part of the (sunmoon) group document header
NB. this is pandoc compatible markdown - note the LaTeX
NB. commands - pandoc allows markdown/LaTeX mixtures
900 {. 2 9 disp 'sunmoon'
`sunmoon` is a collection of basic astronomical algorithms
The key verbs are `moons`, `sunriseset0` and `sunriseset1.`  
All of these verbs were derived from BASIC programs published
in *Sky & Telescope* magazine in the 1990's. The rest of
the verbs in `sunmoon` are mostly date and trigonometric

\subsection{\texttt{sunmoon} Interface}

~~~~ { .j }
  calmoons      NB. calendar dates of new and full moons                     
  moons         NB. times of new and full moons for n calendar years         
  sunriseset0   NB. computes sun rise and set times - see group documentation
  sunriseset1   NB. computes sun rise and set times - see group documentation

\subsection{\textbf\texttt{sunriseset0} \textsl{v--} sunrise and sunset times}

This  verb has been adapted from a BASIC program submitted by
Robin  G.  Stuart  *Sky & Telescope's*  shortest  sunrise/set
program  cont
In [20]:
NB. run jodliterate on (sunmoon)
require 'jodliterate'

NB. set the output directory - when 
NB. running in Jupyter use a subdirectory
NB. of your notebook directory.

ltxpath=: 'C:\Users\john\AnacondaProjects\testfolder\grplit\' 
setjodliterate ltxpath
In [21]:
NB. (grplit) returns a list of generated 
NB. LaTeX and command files. The *.bat 
NB. file compiles the generated LaTeX

,. grplit 'sunmoon'
|1                                                                |
|C:\Users\john\AnacondaProjects\testfolder\grplit\sunmoon.tex     |
|C:\Users\john\AnacondaProjects\testfolder\grplit\sunmooncode.tex |
|C:\Users\john\AnacondaProjects\testfolder\grplit\sunmoon.bat     |

Step 8: Compile the files of the previous step to produce a PDF

In [22]:
_250 {. shell ltxpath,'sunmoon.bat'
gular.otf><c:/program files/miktex 2.9/fonts/ope
Output written on sunmoon.pdf (22 pages, 107711 bytes).
Transcript written on sunmoon.log.

(base) C:\Users\john\AnacondaProjects\testfolder\grplit>endlocal

In [23]:
NB. uncomment to display generated PDF 
 NB. shell ltxpath,'sunmoon.pdf'

Storing jodliterate pandoc compatible document fragments in JOD

Effective use of jodliterate requires a melange of Markdown, \LaTeX, JOD, and J skills combined with a healthy attitude about experimentation. You have to try things and see if they work!

However, before you can try jodliterate document fragments you have put them in JOD dictionaries.

jodliterate uses two types of document fragments:

  1. markdown overview group documents.
  2. \LaTeX overview macros.

Markdown group documents are transformed by pandoc into \LaTeX but the overview macros are not altered in any way. This enables the use of arbitrarily complex \LaTeX. The following examples show how to insert document fragments.

Create a jodliterate Demo Dictionary

In [24]:
NB. create a demo dictionary - (didnum) insures new name
require 'general/jod'

NB. new dictionary in default JOD directory
sbx newd itslit_ijod_=: 'aaa',":didnum_ajod_ ''
+-+---------------------+------------------------------------------+------- ... 
|1|dictionary created ->|aaa327403631806685638405507439206657280913|c:/user ... 
+-+---------------------+------------------------------------------+------- ... 
In [25]:
NB. 1 if new dictionary created
(<itslit) e. od ''
In [26]:
od itslit [ 3 od '' NB. open only new dictionary
|1|opened (rw) ->|aaa327403631806685638405507439206657280913|
In [27]:
NB. define some words
freq=:~. ; #/.~
movmean=:-@[ (+/ % #)\ ]
geomean=:# %: */
bmi=: 704.5"_ * ] % [: *: [

wlst=: ;:'freq movmean geomean bmi polyprod'

NB. put in dictionary
put wlst

NB. short word explanations
t=: ,:  'freq';'frequency distribution'
t=: t , 'movmean';'moving mean'
t=: t , 'geomean';'geometric mean of a list'
t=: t , 'bmi';'body mass index - (x) inches (y) lbs'
t=: t , 'polyprod';'polynomial product'

0 8 put t
|1|5 word explanation(s) put in ->|aaa327403631806685638405507439206657280913|
In [28]:
NB. make header and macro groups
grp 'litheader' ; wlst
grp 'litmacro'  ; wlst
|1|group <litmacro> put in ->|aaa327403631806685638405507439206657280913|
In [29]:
IFACEWORDSlitheader=: wlst
put 'IFACEWORDSlitheader'
|1|1 word(s) put in ->|aaa327403631806685638405507439206657280913|

Use Group Document Overview Markdown

In [30]:
NB. add group header markdown
litheader=: (0 : 0)
`litheader` is a markdown demo group. 

This markdown text will be converted 
by `pandoc` to \LaTeX. A group interface will be 
generated from the `IFACEWORDSlitheader`
list. Interface lists are usually, but 
not always, associated with a *class group*.

\subsection{\texttt{litheader} Interface}


NB. store markdown as a JOD group document
2 9 put 'litheader';litheader
|1|1 group document(s) put in ->|aaa327403631806685638405507439206657280913|
In [31]:
NB. run jodliterate on group
ltxpath=: 'C:\Users\john\AnacondaProjects\testfolder\grplit\' 
setjodliterate ltxpath
{: grplit 'litheader'
In [32]:
NB. compile latex
_250 {. shell ltxpath,'litheader.bat'
lar.otf><c:/program files/miktex 2.9/fonts/o
Output written on litheader.pdf (4 pages, 47726 bytes).
Transcript written on litheader.log.

(base) C:\Users\john\AnacondaProjects\testfolder\grplit>endlocal

In [33]:
NB. uncomment to show PDF
NB. shell ltxpath,'litheader.pdf'

Use Macro Overview LaTeX

In [34]:
NB. add a LaTeX overview - this code will not 
NB. be altered by jodliterate. The suffix
NB. '_oview_tex' is required to associate 
NB. the overview with the group 'litmacro'

litmacro_oview_tex=: (0 : 0)

This \LaTeX\ code will not be 
touched by \texttt{jodliterate}. 

\subsection{Business Babel}

``Truth management is enabled.''

\emph{Excerpt from an actual business document!}
Obviously composed in an irony free zone.

\subsection{Some Complicated \LaTeX}


\[
\frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} =
1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}}
{1+\frac{e^{-8\pi}} {1+\ldots} } } }
\]


NB. store LaTeX as JOD text macro 
4 put 'litmacro_oview_tex';LATEX_ajod_;litmacro_oview_tex
|1|1 macro(s) put in ->|aaa327403631806685638405507439206657280913|
In [35]:
NB. run jodliterate on group
{: grplit 'litmacro'
In [36]:
NB. compile latex
_250 {. shell ltxpath,'litmacro.bat'
e1/public/lm/lmsy6.pfb><C:/Program Files/MiKTeX 2.9/fonts/type1/public/lm/lms
Output written on litmacro.pdf (4 pages, 138976 bytes).
Transcript written on litmacro.log.

(base) C:\Users\john\AnacondaProjects\testfolder\grplit>endlocal

In [37]:
NB. display PDF
NB. shell ltxpath,'litmacro.pdf'

Using jodliterate with larger J systems

The main jodliterate verb grplit works with single JOD groups. Larger systems are typically made from many groups. JOD macro and test scripts are one way to work around this limitation. The JOD development dictionaries contain several macros that illustrate this approach.

In [38]:
od ;:'joddev jod utils' [ 3 od ''

NB. list macros with substring 'latex'
4 2 dnl 'latex'
In [39]:
NB. display start of macro that 
NB. applies jodliterate to JOD code
250 {. 4 disp 'buildjodlatex'
NB.*buildjodlatex s--  generates syntax highlighted JOD source LaTeX.
NB. Files are written to the put dictionary's document directory.
NB. assumes: current versions of pandoc (pandoc or later)
NB.          check noun (THISPANDOC

Final Remarks

jodliterate is an idiosyncratic, anal-retentive software utility; it’s mainly for people who consider source code an art form. Nobody likes ugly undocumented art!

If you have any questions, suggestions, or complaints please leave a comment on this post. To include others, join one of the J discussion forums and post your queries there.

May the source be with you!

WordPress conversion from UsingJodliterate.ipynb by nb2wp v0.3.1

More J Pandoc Syntax HighLighting

Syntax highlighting is essential for blogging program code. Many blog hosts recognize this and provide tools for highlighting programming languages. This host has a nifty highlighting tool that handles dozens of mainstream programming languages. Unfortunately, one of my favorite programming languages, J, (yes, it’s a single-letter name), is way out of the mainstream and is not supported.

There are a few ways to deal with this problem.

  1. Eschew J highlighting.
  2. Upgrade[1] your subscription and install custom syntax highlighters that can handle arbitrary language definitions.
  3. Find another blog host that freely supports custom highlighters.
  4. Roll your own or customize an existing highlighter.

A few years ago I went with the fourth option and hacked the superb open-source tool pandoc. The grim details are described in this blog post. My hack produced a customized version of pandoc with J highlighting. I still use my hacked version and I’d probably stick with it if current pandoc versions had not introduced must-have features like converting Jupyter notebooks to Markdown, PDF, LaTeX and HTML. Jupyter is my default thinking-things-through programming environment. I’ve even taken to blogging with Jupyter notebooks. If you write and explain code you owe it to yourself to give Jupyter a try.

Unwilling to eschew J highlighting or forgo Jupyter, I was on the verge of re-hacking pandoc when I read the current pandoc documentation and saw that J is now officially supported by pandoc. You can verify this with the shell commands:

pandoc --version
pandoc --list-highlight-languages

The pandoc developers made my day! I felt like Wayne meeting a rock star.

Highlighting J is now a simple matter of placing J code in markdown blocks like:

  ~~~~ { .j }
      ... code code code ...
  ~~~~

and issuing shell commands like:

pandoc --highlight-style tango --metadata title="J test" -s -o jpdh.html

The previous command generated the HTML of this post, which I pasted into the Classic Editor. Not only do I get J code highlighting on the cheap, I also get footnotes which, for god freaking sakes,[2] are not supported by the new WordPress block editor for low budget blogs.

The source markdown used for this post is available here – enjoy!

NB. Some J code I am currently using to test TAB
NB. delimited text files before loading them with SSIS.

NB. read TAB delimited table files as symbols - see long document
readtd2s=:[: s:@<;._2&> (9{a.) ,&.>~ [: <;._2 [: (] , ((10{a.)"_ = {:) }. (10{a.)"_) (13{a.) -.~ 1!:1&(]`<@.(32&>@(3!:0)))

tdkeytest=:4 : 0

NB.*tdkeytest v-- test natural key columns  of TAB delimited text
NB. files.
NB. Many of the raw tables of the ETL process depend on  compound
NB. primary keys. This verb applies a basic  test of primary  key
NB. columns. Passing this test  makes it very  likely  the  table
NB. will load  without key constraint  violations.  Failures  are
NB. still possible depending  on how  text  data is converted  to
NB. other  datatypes. Failure of this test indicates  a very high
NB. chance of key constraint violations.
NB. dyad:  il =. blclColnames tdkeytest clFile
NB.   f0=. 'C:\temp\dailytsv\raw_ST_BU.txt'
NB.   k0=. ;:'BuId XMLFileDate'
NB.   k0 tdkeytest f0
NB.   f1=. 'C:\temp\dailytsv\raw_ST_Item.txt'
NB.   k1=. ;:'BuId ItemId XMLFileDate'
NB.   k1 tdkeytest f1

NB. first row is header
h=. 0{d=. readtd2s y

NB. key column positions
'header key column(s) missing' assert -.(#h) e. p=. h i. s: x

c=. #d=. }. d
b=. ~:p {"1 d

NB. columns unique, rowcnt, nonunique rowcnt
if. r=. c = +/b do.
  r , c , 0
else.
  NB. there are duplicates - show some sorted duplicate keys
  k=. p {"1 d
  d=. d {~ I. k e. k #~ -.b
  d=. (/: p {"1 d) { d
  b=. ~:p {"1 d
  m=. +/b
  smoutput (":m),' duplicate key blocks'
  n=. DUPSHOW <. m
  smoutput 'first ',(":n),' duplicate row key blocks'
  smoutput (<p { h) ,&.> n {. ,. b <;.1 p {"1 d
  r , c , #d
end.
)

  1. The pay more option is always available.
  2. This host is beginning to remind me of Adobe. Stop taking away longstanding features when upgrading!

Extracting SQL code from SSIS dtsx packages with Python lxml

Lately, I’ve been refactoring a sprawling SSIS (SQL Server Integration Services) package that ineffectually wrestles with large XML files. In this programmer’s opinion using SSIS for heavy-duty XML parsing is geeky self-abuse, so I’ve opted to replace an eye-ball-straining[1] SSIS package with half a dozen, “as simple as possible but no simpler”, Python scripts. If the Python is fast enough for production, great! If not, the scripts will serve as a clear model[2] for something faster.

I’m only refactoring[3] part of a larger ETL process so whatever I do it must mesh with the rest of the mess.

So where is the rest of the SSIS mess?

SSIS’s visual editor does a wonderful job of hiding the damn code!

This is a problem!

If only there was a simple way to troll through large sprawling SSIS spider-webby packages and extract the good bits. Fortunately, Python’s XML parsing tools can be easily applied to SSIS dtsx files. SSIS dtsx files are XML files. The following code snippets illustrate how to hack these files.

First import the required Python modules. lxml is not always included in Python distributions. Use the pip or conda tools to install this module.

# imports
import os
from lxml import etree

Set an output directory. I’m running on a Windows machine. If you’re on a Mac or Linux machine adjust the path.

# set sql output directory - create it if missing
sql_out = r"C:\temp\dtsxsql"
if not os.path.isdir(sql_out):
    os.makedirs(sql_out)

Point to the dtsx package you want to extract code from.

# dtsx files
dtsx_path = r'C:\Users\john\AnacondaProjects\testfolder\bixml'
ssis_dtsx = dtsx_path + r'\ParseXML.dtsx'

Read and parse the SSIS package.

tree = etree.parse(ssis_dtsx)
root = tree.getroot()

lxml renders XML namespace tags like <DTS:Executable> in braced Clark notation, shown here with the namespace URI elided as {}Executable. The following
shows all the transformed element tags in the dtsx package.
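This brace-wrapped form is standard Clark notation for namespace-qualified names; Python’s standard-library ElementTree renders tags the same way, as this toy example shows (the namespace URI here is invented, not the real DTS one):

```python
import xml.etree.ElementTree as ET

# a tiny stand-in for a dtsx fragment; the namespace URI is made up
doc = '<DTS:Executable xmlns:DTS="urn:example:dts"><DTS:ObjectData/></DTS:Executable>'
root = ET.fromstring(doc)

# the DTS: prefix is replaced by the URI in braces
print(root.tag)     # {urn:example:dts}Executable
print(root[0].tag)  # {urn:example:dts}ObjectData
```

This is why the extraction code below matches on tags like "{}Executable" rather than "DTS:Executable".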

# collect unique element tags in dtsx
ele_set = set()
for ele in root.xpath(".//*"):
    ele_set.add(ele.tag)
print(ele_set)
{'InnerObject', '{}PrecedenceConstraint', '{}ObjectData', '{}PackageParameter', '{}LogProvider', '{}Envelope', '{}Executable', 'ExpressionTask', '{}Body', '{}Variable', '{}ForEachVariableMapping', '{}PrecedenceConstraints', '{}SelectedLogProviders', 'ForEachFileEnumeratorProperties', '{}ForEachVariableMappings', '{}ForEachEnumerator', '{}SelectedLogProvider', '{}DesignTimeProperties', '{}LogProviders', '{}LoggingOptions', '{}Variables', 'FEFEProperty', 'FileSystemData', 'ProjectItem', '{}Property', '{}anyType', '{}Executables', '{}ParameterBinding', '{}ResultBinding', '{}VariableValue', 'FEEADO', 'BinaryItem', '{}PackageParameters', '{}PropertyExpression', '{}SqlTaskData', 'ScriptProject'}

Using the transformed element tags, blast over the dtsx and suck out the bits of interest.

# extract sql code in source statements and write to *.sql files 
total_bytes = 0
package_name = root.attrib['{}ObjectName'].replace(" ","")
for cnt, ele in enumerate(root.xpath(".//*")):
    if ele.tag == "{}Executable":
        attr = ele.attrib
        for child0 in ele:
            if child0.tag == "{}ObjectData":
                for child1 in child0:
                    sql_comment = attr["{}ObjectName"].strip()
                    if child1.tag == "{}SqlTaskData":
                        dtsx_sql = child1.attrib["{}SqlStatementSource"]
                        dtsx_sql = "-- " + sql_comment + "\n" + dtsx_sql
                        sql_file = sql_out + "\\" + package_name + str(cnt) + ".sql"
                        total_bytes += len(dtsx_sql)
                        print((len(dtsx_sql), sql_comment, sql_file))
                        with open(sql_file, "w") as file:
                            file.write(dtsx_sql)
print(('total sql code bytes',total_bytes))
   (2817, 'Add Record to ZipProcessLog', 'C:\\temp\\dtsxsql\\2_ParseXML225.sql')
    (48, 'Dummy SQL - End Loop for ZipFileName', 'C:\\temp\\dtsxsql\\2_ParseXML268.sql')
    (1327, 'Add Record to XMLProcessLog', 'C:\\temp\\dtsxsql\\2_ParseXML293.sql')
    (546, 'Delete Prior Loads in XMLProcessLog Table for Looped XML', 'C:\\temp\\dtsxsql\\2_ParseXML304.sql')
    (759, 'Delete Prior Loads to Node Table for Looped XML', 'C:\\temp\\dtsxsql\\2_ParseXML312.sql')
    (48, 'Dummy SQL - End Loop for XMLFileName', 'C:\\temp\\dtsxsql\\2_ParseXML320.sql')
    (1862, 'Set Variable DeletePriorImportNodeFlag', 'C:\\temp\\dtsxsql\\2_ParseXML356.sql')
    (55, 'Shred XML to Node Table', 'C:\\temp\\dtsxsql\\2_ParseXML365.sql')
    (1011, 'Update LoadEndDatetime and XMLRecordCount in XMLProcessLog', 'C:\\temp\\dtsxsql\\2_ParseXML371.sql')
    (1060, 'Update LoadEndDatetime and XMLRecordCount in XMLProcessLog - Shred Failure', 'C:\\temp\\dtsxsql\\2_ParseXML382.sql')
    (675, 'Load object VariablesList (Nodes to process for each XML File Category)', 'C:\\temp\\dtsxsql\\2_ParseXML412.sql')
    (1175, 'Set Variable ZipProcessFlag - Has Zip Had A Prior Successful Run', 'C:\\temp\\dtsxsql\\2_ParseXML461.sql')
    (224, 'Set ZipProcessed Status (End Zip)', 'C:\\temp\\dtsxsql\\2_ParseXML474.sql')
    (238, 'Set ZipProcessed Status (Zip Already Processed)', 'C:\\temp\\dtsxsql\\2_ParseXML480.sql')
    (231, 'Set ZipProcessing Status (Zip Starting)', 'C:\\temp\\dtsxsql\\2_ParseXML486.sql')
    (609, 'Update LoadEndDatetime in ZipProcessLog', 'C:\\temp\\dtsxsql\\2_ParseXML506.sql')
    (613, 'Update ZipLog UnzipCompletedDateTime Equal to GETDATE', 'C:\\temp\\dtsxsql\\2_ParseXML514.sql')
    (1610, 'Update ZipProcessLog ExtractedFileCount', 'C:\\temp\\dtsxsql\\2_ParseXML522.sql')
    ('total sql code bytes', 14908)

The code snippets in this post are available in this Jupyter notebook: Extracting SQL code from SSIS dtsx packages with Python lxml. Download and tweak for your dtsx nightmare!

  1. I frequently run into SSIS packages that cannot be viewed on 4K monitors when fully zoomed out.
  2. Python’s readability is a major asset when disentangling mess-ware.
  3. Yes, I’ve railed about the word “refactoring” in the past but I’ve moved on and so should you. “A foolish consistency is the hobgoblin of little minds.”

Sudden Genocide

If genocide is sudden, painless, unexpected, complete and absolute, if a people simply vanish without screams, without fear, without anticipation, if one nanosecond they are and the next nanosecond they are not, and if somehow you are responsible, are you a war criminal or a savior? Ultimately we all vanish, usually with screams, usually with fear, usually with anticipation. For years I’ve longed for a sudden painless death: a dismembering that tears me apart faster than my nerves can relay pain. Imagine such an end for all of us – at once – now. Why linger? We will go extinct. There will be a last human. And if the last human dies with screams, with fear, with anticipation how is that better than sudden genocide?

Who Thought Blinking Windfarms was a Good Idea?

One night, a few weeks ago, I was driving west on I86 near American Falls when I spotted a long string of blinking red lights. The lights stretched over a large arc of the horizon. My first thought was “Jesus H. Christ now what?” As an amateur astronomer, I have climbed mountains to get away from light pollution. Now some jackwagon was ruining an entire rural horizon with a goddamn string of synchronized windfarm lights.

May I ask why?

Windfarms blink at night to warn planes they’re flying too low. Please! The towers are well under 400 meters. If you are flying a plane below 400 meters in mountainous locales like Idaho you have far more serious problems than running into wind turbines. This is another example of stupid regulation. There is absolutely no good reason for lighting up entire landscapes. It wrecks the view, distracts drivers (cars on I86 were slowing down to get a better view), wastes energy, rapes the night sky, and reminds everyone what an environmental tax-subsidized eyesore windfarms are. Don’t even think of disagreeing. When was the last time you looked at a landscape littered with wind turbines and thought, “This is so much better than it was before”?

Yes, I know windfarms are saving us from global warming. If you believe that you are probably exactly the type of person that signed off on ruining an entire county’s nighttime view with goddamn blinking windfarms.

Trey and Kate: Review

This will be a completely biased review. I have a close relationship with the author so everything I say must be verified. Please buy Trey and Kate, read it, and make up your mind. With that caveat out of the way let’s get started.

Trey and Kate is a tale about an on-again, off-again Millennial romance that plays out in Kingston, Ontario. The two leads are not exactly star-crossed lovers. They’re both partly broken and struggling with mental illness, past-life hallucinations, deficient friends, and uncomprehending divorced families. Kate is bipolar and goes on and off her meds throughout. Trey is stuck in a dead-end barista job: remember Millennials. He’s mourning a deceased unloving father and has only two reliable relationships: his cat and his mother. Trey and Kate’s dreams hint at a shared past and promise a joint future but offer little practical guidance.

With destiny seemingly on their side, you would expect their romance to go smoothly; it does not. When Kate goes off her meds she’s impulsive and prone to risky behavior. The book’s best passages detail her sordid bouts of random sex with total strangers. It’s almost prostitution, but Kate doesn’t have the business sense of a prostitute. Of course, this doesn’t help her relationship with Trey. To his credit or shame, he forgives her, but we’re not sure if his forgiveness is self-pity. Trey’s self-esteem is so low he finds it almost comical that any woman could love him. Welcome to the club, Trey. Trey and Kate’s interaction is at once frustrating, satisfying, embarrassing, irritating, and fulfilling.

Trey and Kate is the author’s first book. I know the author struggled to put the book together. Its best parts are purely descriptive; when the author shows us what the characters are seeing and feeling, the prose tells. When the text ventures into rhetorical semi-poetic asides it hollows out. Trey and Kate feels like a screenplay disguised as a novel. This is partly due to the almost cinematic presence of the setting, Kingston, Ontario, a dull stone-filled town that would be unlivable without Lake Ontario, and Kingston’s wretched weather, which is every bit as bad as it’s portrayed in Trey and Kate. I’d encourage the author to keep writing, rewriting, and experimenting. There are good stories to tell here.

The Return of the Prodigal Blogger

It’s been ages since my last blog post. Yes, I’ve been a very, very bad blogger. Lesser men would throw themselves at the metaphorical feet of their readers and beg for forgiveness, but if you’re expecting apologies you don’t know me! I write for myself; if you choose to read my ramblings, well, that’s on you.

Since my last entry I have:

  1. Retired. I finally pulled the plug on being a so-called productive member of society. Now I’m an old Social Security draining parasite. Since I no longer pay net taxes I am effectively dead to the state and they would love it if I was actually dead. Dead people are easier to finance. Unfortunately, I’ve always been on to the deep state and my new mission in life is to claw back every single tax dollar I ever paid with butt loads of interest. When I snuff off this mortal coil I am going to stick you with a giant uncollectable I OWE FUCKING YOU! If you create financially unsustainable systems that encourage abuse, well guess what, you’re going to get abused. God, I’m loving being a bitter old man; it’s what I was born to be.
  2. Continued to pursue my hobbies, especially photography. This year (2019) I set the mini-goals of shooting, on average, one picture per day and scanning at least three hundred prints and slides. This may not sound like a lot but it takes me time to select the best images, process RAW files, restore film scans, edit or hack pictures, write captions and compute keywords.  I treat every uploaded image as a milliblogging opportunity.  Some of my image captions are longer than some of my blog posts.
  3. Taken on some new family responsibilities and obligations.
  4. Taken some trips.
  5. Worked on various personal software projects.

Arthur C. Clarke once remarked that only unimaginative people get bored. I’ve always had something on my mind and I’ve always lived in my head. This has always been my problem and my strength. With retirement, I am casting off my shriveled shackles of pretense. I’m not even going to pretend to care about other people’s problems. I will think about what I find fascinating and do what I find worthwhile.

Call it retirement privilege, snowflakes!

Now get back to work and pay your taxes; you have old parasites to support.