Turning JOD Dump Script Tricks

Have you ever wondered how extremely prolific bloggers do it? How is it possible to crank out thousands of blog entries per year without creating a giant stinking pile of mediocre doo-doo? Like most deep medium mysteries it’s not very deep and there are no mysteries. The spewers, people who post like teenage girls tweet, use two basic strategies:

  1. Multiple authors: The heroic image of the lone blogger waging holy war against a sea of Internet idiocy is largely a myth. Many popular prolific blogs are the work of many hands. The editor at Analyze the Data not the Drivel eschews this tactic. Apparently he’s an incontinent and argumentative prima donna that sane people steer clear of.
  2. Content recycling: In elementary school this was called copying. Now that we’re all grown up we use terms like, “excerpting”, “abstracting”, and my favorite “re-purposing.” The basic idea is simple. Take something you’ve written elsewhere and repackage it as something new. Hey, all the cool kids are doing it!

The following is a slightly edited new appendix I have just added to the JOD manual. I am working to properly publish the JOD manual mostly so I can say that I’ve written a legitimate, albeit strange and queer, book.

I created this post by running the LaTeX code of the manual appendix through the excellent utility pandoc, tweaking the resulting markdown, and then using pandoc again to generate HTML for this blog. pandoc is a great “re-purposing” tool!
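
The round trip is short enough to script. A minimal sketch (not my actual build commands, and the file names are placeholders) that shells out to pandoc from J with the 2!:0 foreign:

   NB. sketch: LaTeX -> markdown -> HTML with pandoc (placeholder file names)
   2!:0 'pandoc appendix.tex -f latex -t markdown -o appendix.markdown'
   NB. ... hand tweak appendix.markdown ...
   2!:0 'pandoc appendix.markdown -f markdown -t html -o appendix.html'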

Finally, re-purposing is not entirely cynical. The act of moving material from one medium to another exposes problems. I found a few editing errors while creating this post that eluded my LaTeX eyes. If you find more, this is your chance to tell me what a moron I am.

Turning JOD Dump Script Tricks

Dump script generation is my favorite JOD feature. Dump scripts serialize JOD dictionaries; they are mainly used to back up dictionaries and interact with version control systems. However, dump scripts are general J scripts and can do much more! Maintaining a stable of healthy JOD dictionaries is easier if you can turn a few dump script tricks.[1]

  1. Flattening reference paths: Open JOD dictionaries define a reference path. For example, if you open the following dictionaries:
       NB. open four dictionaries
       od ;:'smugdev smug image utils'
    |1|opened (ro/ro/ro/ro) ->|smugdev|smug|image|utils|

    the reference path is /smugdev/smug/image/utils.

    When objects are retrieved each dictionary on the path is searched in reference path order. If there are no compelling reasons to maintain separate dictionaries you can improve JOD retrieval performance and simplify dictionary maintenance by flattening all or part of the path.

    To flatten the reference path do:

       NB. reopen the first three dictionaries on the path
       od ;:'smugdev smug image' [ 3 od ''
    |1|opened (ro/ro/ro) ->|smugdev|smug|image|
       NB. dump to a temporary file (df)
       df=: {: showpass make jpath '~jodtemp/smugflat.ijs'
    |1|object(s) on path dumped ->|c:/jodtemp/smugflat.ijs|
       NB. create a new flat dictionary
       newd 'smugflat';jpath '~jodtemp/smugflat' [ 3 od ''
    |1|dictionary created ->|smugflat|c:/jodtemp/smugflat/|
       NB. open the flat dictionary and (utils)
       od ;:'smugflat utils'
    |1|opened (rw/ro) ->|smugflat|utils|
       NB. reload dump script ... output not shown ...  
       0!:0 df

    The collapsed path /smugflat/utils will return the same objects as the longer path. It is important to understand that the collapsed dictionary smugflat does not necessarily contain the same objects found in the three original dictionaries smugdev, smug and image. If objects with the same name exist in the original dictionaries, only the first one found will be in the collapsed dictionary. A quick sanity check for flattened paths appears after this list.

  2. Merging dictionaries: If two dictionaries contain no overlapping objects it might make sense to merge them. This is easily achieved with dump scripts. To merge two or more dictionaries do:
       NB. open and dump first dictionary
       od 'dict0' [ 3 od ''
    |1|opened (rw) ->|dict0|
       df0=: {: showpass make jpath '~jodtemp/dict0.ijs'
    |1|object(s) on path dumped ->|c:/jodtemp/dict0.ijs|
       NB. open and dump second dictionary
       od 'dict1' [ 3 od ''
    |1|opened (rw) ->|dict1|
       df1=: {: showpass make jpath '~jodtemp/dict1.ijs'
    |1|object(s) on path dumped ->|c:/jodtemp/dict1.ijs|
       NB. create new merge dictionary
       newd 'mergedict';jpath '~jodtemp/mergedict' [ 3 od ''
    |1|dictionary created ->|mergedict|c:/jodtemp/mergedict/|
       NB. open merge dictionary and run dump scripts
       od 'mergedict'
    |1|opened (rw) ->|mergedict|
       NB. reload dump scripts ... output not shown ...  
       0!:0 df0  
       0!:0 df1

    Be careful when merging dictionaries. If there are common objects the last object loaded is the one retained in the merged dictionary.

  3. Updating master file parameters: When a new parameter is added to jodparms.ijs it will not be available in existing dictionaries. With dump scripts you can rebuild existing dictionaries and update parameters. To rebuild a dictionary with new or custom parameters do:
       NB. save current dictionary registrations
       (toHOST ; 1 { 5 od '') write_ajod_ jpath '~temp/jodregister.ijs'
       NB. open dictionary requiring parameter update 
       od 'dict0' [ 3 od ''
    |1|opened (rw) ->|dict0|
       NB. dump dictionary and close
       df=: {: showpass make jpath '~jodtemp/dict0.ijs'
    |1|object(s) on path dumped ->|c:/jodtemp/dict0.ijs|
       3 od ''
    |1|closed ->|dict0|
       NB. erase master file and JOD object id file
       ferase jpath '~addons/general/jod/jmaster.ijf'
       ferase jpath '~addons/general/jod/jod.ijn'
       NB. recycle JOD - this recreates (jmaster.ijf) and (jod.ijn) 
       NB. using the new dictionary parameters defined in (jodparms.ijs)   
       (jodon , jodoff) 1
    1 1
       NB. re-register dictionaries
       load jpath '~temp/jodregister.ijs'
       NB. create a new dictionary - it will have the new parameters
       newd 'dict0new';jpath '~jodtemp/dict0new' [ 3 od ''
    |1|dictionary created ->|dict0new|c:/jodtemp/dict0new/|
       od 'dict0new'
    |1|opened (rw) ->|dict0new|
       NB. reload dump script ... output not shown ...
       0!:0 df  

    Before executing complex dump script procedures, back up your JOD dictionary folders and play with dump scripts on test dictionaries. Dump scripts are essential JOD dictionary maintenance tools but like most powerful tools they must be used with care.
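
Here is the sanity check promised in the path-flattening item above: fetch the same word from the long path and from the flattened path and compare definitions. This is only a sketch; it assumes the JOD retrieval verb get and uses a placeholder word name (myword).

   NB. sketch: does the flat path serve up the same object as the long path?
   NB. ((myword) is a placeholder name)
   od ;:'smugdev smug image utils' [ 3 od ''
   get 'myword'
   long=: 5!:5 <'myword'      NB. linear representation from the long path
   od ;:'smugflat utils' [ 3 od ''
   get 'myword'
   long -: 5!:5 <'myword'     NB. 1 means the flat path returned the same definition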

  [1] Spicing up one’s rhetoric with a double entendre like “turning tricks” may be construed as a microaggression. The point of colored language is to memorably make a point. You are unlikely to forget turning dump script tricks.

Cutting the Stinking Tauntaun and other Adventures in Software Archeology

The other day a software project that I had spent a year on was “put on the shelf.”  A year of effort was unceremoniously flushed down the software sewer.  A number of colleagues asked me, “How do you feel about this?” Would you believe relieved?

This is not false bravado or stupid sunny optimism. I am not naïve. I know this failure will hurt me. In modern corporations the parties that launch failures, usually management, seldom take the blame. Blame, like groundwater, sinks to the lowest levels. In software circles, it pools on the coding grunts doing the real work. It’s never said, but always implied, that software failures are the exclusive result of programmer flaws. If only we had worked harder and put in those sixty or eighty hour weeks for months at a time; if only we were a higher level of coding wizard that could crank out thousands of error-free lines of code per hour, like the entirely fictional software geniuses on TV, things might have been different.

Indulging hypotheticals is like arguing with young Earth creationists about the distance of galaxies.

“Given the absolute constancy of the speed of light how is it possible to see the Andromeda galaxy, something we know is over two million light years away, in a six thousand-year-old universe?”

The inevitable reply to this young Earth killing query is always something to the effect that God could have made the universe with light en route.  God could have also made farts smell like Chanel #5 but sadly he[1] did not.  Similarly, God has also failed to staff corporations with legions of Spock level programmers ready and willing to satisfy whatever au courant notion sets up brain-keeping in management skulls.  The Andromeda galaxy is millions of light-years away, we don’t live on a six-thousand-year-old Earth, and software is not created by magic.  Projects work or flounder for very real reasons. I was not surprised by the termination of this project. I was surprised that it took so long to give up on something that was never going to work out as expected.

Working with bad legacy code always reminds me of the scene in The Empire Strikes Back when Han Solo rescues Luke on the ice world by cutting open his fallen Tauntaun with a light saber. When the Tauntaun’s guts spill out Solo says, “I thought they smelled bad on the outside.” Legacy code may look pretty bad on the outside, just wait until you cut into it. Only then do you release the full foul forceful stench.

The project, let’s call it the Stinking Tauntaun Project, was an exercise in legacy system reformation. Our task was adding major new features to a hoary old system written in a programming language that resembles hieroglyphics. I know and regularly use half a dozen programming languages. With rare exceptions, programming languages are more alike than different. From the time of Turing, we have known that all programming languages are essentially the same. You can always implement one language in another; all programming systems exploit this fundamental equivalence. It’s theoretically possible to compile SQL into JavaScript and run the resulting code on an eight-bit interpreter written in ancient Batch; I just wouldn’t recommend you start a SOX-compliant software project to create such insanity. This goes double if only one programmer in your organization knows ancient Batch! The Stinking Tauntaun Project wasn’t this crazy, but there were clear signs that we were setting out on what has been accurately described as a Death March.

Avoiding Death Marches, or finding ways to shift blame for them, is an essential survival skill for young corporate programmers. It’s not an exaggeration to say that a lot of modern software is built on the naiveté of the young. When you first learn to program you are seduced by its overpowering rationality. Here is a world founded on logic, mathematics and real engineering constraints. Things work and don’t work, for non-bullshit reasons! There aren’t any pointless arguments with ideological metrosexual pinheads about “privilege” or “gender.” Programs don’t give a crap about the skin color of the programmer or whether their mother was an abusive bull dyke. After a few years of inhaling rarefied logic fumes young programmers often make the gigantic and catastrophic mistake of assuming that the greater naked ape society in which they live also works on bullshit-free principles. It’s at this delicate stage that you can drive the young programmer hard. There’s an old saying that salesmen like money and engineers like work, so in most companies the salesmen get the money and the engineers get the work. As the great sage Homer Simpson would say, “It’s funny because it’s true!”

Death Marches shatter the naïve. Gaining a greater understanding of the nasty world we live in always does a naked ape good, but there’s a cost: your career will suffer and you will suffer; enlightenment is proportional to pain! The school of hard knocks is a real thing and, unlike that party school you boozed through, no one graduates alive. I’d strongly recommend a few Death Marches for every working programmer. To paraphrase Tolstoy, “All successful software projects are alike; every unsuccessful project fails in its own way!” You may consider what follows my modest contribution to the vast, comprehensively ignored, literature of software Death Marches.

Code Smell #1: A large mass of poorly written test-free legacy code that does something important.

“Refactoring” is a word that I love to mock, but the refactoring literature showcases an astonishingly precise term: “code smell.” Code smells are exactly what you expect. Something about the code stinks. The Stinking Tauntaun code base didn’t just stink, it reeked like diuretic dog shit on choleric vomit. This was well-known throughout the company.  My general advice to young programmers that catch a whiff of rotting code is simple: flee, run, abort, get out of Dodge! Do not spill your precious bodily fluids cleaning up the messes of others.

When I was recruited, a number of experienced hieroglyphic programmers asked me the pointed question, “How do you deal with giant stinking piles of legacy doo-doo?” Maybe it wasn’t quite phrased like that but the intention was clear. They were telling me that they were dealing with a seething mass of half-baked crappy code that somehow, despite its manifold flaws, was useful to the organization and couldn’t be put out of its well-deserved misery.

I answered honestly: “Systems that have survived for many years, despite their problems and flaws, meet a need. You have to respect that.” I still believe this! Every day I boot up ancient command shells that look pretty much like they did thirty years ago. Why do I use these old tools? It’s simple; they do key tasks very efficiently. I try new tools all the time. I spent a few months playing with PowerShell. It’s a very slick modern shell but it has one big problem. It takes far longer to load than ancient DOS or Bash shells. When I want to execute a few simple commands I find the few extra seconds it takes to get a PowerShell up and running highly annoying. Why should I wait when I don’t need the wonderful new features of PowerShell? The Stinking Tauntaun System also met user needs and, just as I hold my nose and keep using clunky, ill-conceived, incoherent old shells, Stinking Tauntaun users held their noses and used the good bits of what is, overall, a bad system.

Code Smell #2: No fully automatic version controlled test suite.

There is a tale, possibly apocryphal, from the dawn of programming that goes something like this: a young programmer of the ENIAC generation was having a bad day. He was trying to get some program running and was getting frustrated “re-wiring.” Our young programmer was experiencing one of the first “debugging” sessions back when real insects came into play. In a moment of life searing insight our young programmer realized that from now on most of his time would be consumed hunting down errors in faulty programs.  The tale goes dark here. I’ve often wondered what happened to the young programmer. Did he have a happy life or did he, like Sisyphus, push that damn program up the hill only to have it crash down again, and again, until management cried stop!

Debugging is a necessary evil. If you program — you debug. It’s impossible to eliminate debugging but at least you can approach it intelligently, and to this day, the only successful method for controlling program errors is the fully automatic version controlled ever-expanding test suite. Systems with such suites actually improve with time. Systems without such suites always degenerate and take down a few naïve programmers as they decay. The Stinking Tauntaun System lacked an automatic test suite. The Stinking Tauntaun System lacked a non-automatic test suite. The Stinking Tauntaun had no useful test cases whatsoever! Testing The Stinking Tauntaun was more painful than dealing with its code. The primary tester had nightmares about The Stinking Tauntaun.


My Power-Point-less illustration of Quality over Time for a system that lacks a fully automatic ever expanding version controlled test suite. Systems without well designed test rigs are almost without exception complete crap. Unless someone is paying you huge bucks it’s best to start sending out resumes when asked to fix such systems. You can ruin your health stirring your yummy ice cream into the shit but it’s almost certain the final mixture will taste more like shit than ice cream.

It’s the 21st century people! In ENIAC days you could overlook missing automatic test suites. Nowadays a missing automatic test suite is a damning indictment of the organization that tolerates such heinous crimes against programming humanity!

Code Smell #3: Only one programmer is able to work with legacy code.

When I was hired there were a few hieroglyphic programmers on staff. No single programmer was saddled with the accumulated mistakes of prior decades. We could divide and deal with the pain. During this happy time our work was support oriented. We were not undertaking large projects. Then an inevitable bout of corporate reorganization occurred, followed by terminations and a stream of defectors fleeing the new order. Eventually only one hieroglyphic programmer remained: moi! Most programmers would have joined the exodus and not taken on the sins of other fathers but as I have already pointed out I am a software whore and proud of it. Still, even software sluts must avoid ending up like the “there can be only one” Highlander. Most of those sword duels ended rather badly as I recall.

Code Smell #4: Massive routines with far-ranging name scope.

With noxious fumes already cutting off our air supply we should have stopped in our tracks but we soldiered on because The Stinking Tauntaun System did something important and alternatives were not readily available. The corporation recognized this and started a number of parallel projects to explore New Stinking Tauntauns. I wasn’t lucky enough to work on any of the parallel projects. I had to get in the stall and shovel the Tauntaun manure.

We weren’t entirely clueless when we started Stinking Tauntaun work.  We recognized the precarious hieroglyphic programmer resource constraint and tried to divide the work into units that alleviated it. We decided to implement part of the new features in another programming language and call the new module from the old system. The hope was this would divide the work and possibly create something that might be used outside The Stinking Tauntaun System. This part of the project went reasonably well. The new module is entirely new code and works very well. Unfortunately, it was a small part of the greater effort of bolting new features into The Stinking Tauntaun System.

Stinking Tauntaun code is magisterial in its monolithic madness. The hieroglyphic programming language is famous for its concise coding style yet the main routine in the Stinking Tauntaun System is close to 10,000 lines long! I had never seen hieroglyphic language routines of such length. I’ve only seen comparable programs a few times in my battle-scarred career. As a summer student I marveled at a 20,000 line, single routine, FORTRAN program. At first I thought it was the output of some cross compiler but no, some insane, bat shit crazy government drone (I was working for a government agency in western Canada) had coded it by hand — on freaking punch cards no less! Such stunning ineptitude is rare: most programmers have a passing acquaintance with modular design and know enough to reconsider things when code gets out of hand. We all have different thresholds for judging out-of-hand-ness; for me it’s any routine that’s longer than sixty lines, including the damn comments.

A 10,000 line routine is a monumental red flag but The Stinking Tauntaun’s length was not the biggest problem. There is this thing programmers call “scope.” Scope marks out name boundaries. It sets limits on where names have meaning or value. Ideally name scope is limited to the smallest feasible extent. You really don’t want global names! You absolutely don’t want global names like “X,” or “i,” or “T,” cavorting in your code! Most of the cryptic names in the Stinking Tauntaun System had far-ranging scope. Vast scopes make it difficult to safely make local changes, which makes changing code risky and time-consuming. Large scopes also increase the burden on testers. If tiny changes have far-ranging consequences you have to retest large parts of the system every time you tweak it. All of this sucks up time and drains the pure bodily fluids of IT staff.
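
To see the difference in miniature, here is a toy J sketch of mine (nothing to do with the actual hieroglyphic code). J marks local assignment with =. and global assignment with =: and the whole scope problem lives in the gap between the two.

   f=: 3 : 0
loc=. 2 * y        NB. local name: exists only while f runs
GLOB=: 2 * y       NB. global name: lingers after f returns; anything may come to depend on it
loc + GLOB
)
   f 3
12
   GLOB             NB. the global leaks out of f; (loc) is already gone
6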

Code Smell #5: Basic assumptions about what is possible quickly prove wrong.

It was looking dark. Timid souls would have run but we plowed on. Our naïve hope rested on the theory that we could change a few touch points in The Stinking Tauntaun’s code base and then get out of Dodge before the dark kludge forest closed in. My first estimate of how many routines I would have to touch was around twenty. Two months into the project I had already altered over a hundred with no end in sight. The small orthogonal change we hoped to make was not possible because The Stinking Tauntaun System did not sensibly do one thing in one place. It did sort of the same thing in many places. You couldn’t make a small change and have things sensibly flow throughout the system.  I worked to isolate and modularize the mess but I should have just waved my hands and spent my days on LinkedIn looking for another job.

Code Smell #6: A development culture steeped in counterproductive ceremony.

On top of all these screeching sirens yelling stop there were other impediments. I work in a SOX saturated environment. Readers of my blog know that I consider SOX to be one of the biggest time-wasting piles of DC idiocies ever pushed on American companies. Like many “government solutions” SOX did not drain the intended swamp but forever saddled us with inane time-wasting ceremony and utterly stupid levels of micromanagement. Is Active Directory management really a concern of freaking government? Somehow it’s become one!  One of my favorite consequences of SOX is the religious ordering of development environments and the sacraments for promoting code changes. During the long bruising Stinking Tauntaun project many releases boiled down to me making changes to one file! Pushing a new version to test should have been a simple file copy.

Of course it couldn’t be that simple. Numerous “tickets” had to be approved. Another branch of IT, that was even more stressed and suffered higher levels of turnover than ours, was dragged in to execute a single trivial file copy. In many cases more bytes were generated by all this pointless time-wasting ceremony than I had changed in that single file. Of course all this took time, and as hard as I looked for a useless-time-wasting-bullshit category in our web-based hours tracking system, I couldn’t find one. There’s a management mantra: you can only improve what you measure.  Funny how companies never measure the bullshit.

In retrospect we should have junked The Stinking Tauntaun System or opted for a radical rewrite. It’s ironic but something like this is what’s going to happen. If I were young I would be bitter, but I cashed in on this project. In my consulting days I was always on the lookout for Stinking Tauntauns: they were freaking gold mines for those of us who have acquired a taste for the bracing putrid fumes of rotting code.

[1] If you are the type of person that gets your panties in a knot about the gender of hypothetical supreme beings please go away.

JOD Update: Version 0.9.97*

In the last year much has changed in the J world.

  1. There are new official J 8.0x builds for all supported platforms.
  2. The QT based IDE JDE has matured and is in widespread use.
  3. The column oriented J database JD is drawing new users to J and enticing J veterans to reconsider how we use databases.
  4. There is a small group of J system builders experimenting with additions, extensions and revisions of core J source code.

In short, there have been enough changes to revisit and update JOD.

JOD version 0.9.97*[1] is the first JOD update in many years that mocks the god of software compatibility. In particular:

  1. The syntax of the jodhelp verb has changed.
  2. The jodsource addon no longer uses a zip file to distribute JOD dump files.
  3. JOD online help will no longer be supported or updated.
  4. Volume size is no longer checked before creating new JOD dictionaries.
  5. There is a new version of jod.pdf.

jodhelp changes (#1, #3 and #5)

jodhelp has always been a kludge. In programmer speak a kludge is some half-baked facility added to a system after more essential features have stabilized. The original versions of jodhelp pointed at my rough notes. It was all the “documentation” I needed! Then others started using JOD, which resulted in an “evolved” online version of my notes. I originally thought that hosting my notes online would simultaneously serve user needs and cut the amount of time I spent maintaining documentation. In retrospect this wasn’t even wrong!

I used Google Documents to host my notes. If you’ve ever wondered why completely free Google Documents hasn’t obliterated expensive Microsoft Word or hoary old excellent LaTeX, I invite you to maintain a set of long-duration documents with Google Documents. During jodhelp’s online lifetime the basic internal format of Google Documents changed in a screw-your-old-documents upgrade, which forced me to spend days repairing broken hyperlinks and reformatting. I was not amused; you still get what you pay for!

I originally chose Google Documents because of its alleged global accessibility. Sadly, Google Documents is now often blocked by corporate and national firewalls. Even when it isn’t blocked it renders like a dog peeing on a fire hydrant. All these problems forced me to rewrite JOD documentation with a completely reliable tool: good old-fashioned LaTeX. The result of my labors, jod.pdf, is now distributed by the joddocument addon and is easily browsed with jodhelp.

After jod.pdf’s appearance another irritant surfaced: synchronizing jod.pdf and the online version. I tried using pandoc and markdown to generate both the online and PDF versions from the same source files but jod.pdf is too complex for not-too-fancy portable approaches. I was faced with a choice: lower my jod.pdf standards, or get rid of something I never really liked. I opted to drown a child and abandon online help. I don’t expect a lot of mourners at the funeral.

Using the new version of jodhelp requires installing the addon joddocument and configuring a J PDF reader. It’s also a good idea to define a JQT PF key to pop up JOD help with a keystroke. To configure a J PDF reader edit the configuration file base.cfg; this file is directly available from the JQT Edit\Configure menu. base.cfg defines a number of operating system dependent utilities. Make changes to the systems you use, save your changes, and restart J. The following example shows my Win64 system settings.

 case. 'Win' do.
   BoxForm=: 1
   Browser=: 'c:/Program Files (x86)/Google/Chrome/Application/chrome.exe'
   Browser_nox=: ''
   EPSReader=: 'c:/program files/ghostgum/gsview/gsview64.exe'
   PDFReader=: 'c:/uap/sumatra/SumatraPDF.exe'
   XDiff=: 'c:/uap/WinMerge-2.14.0-exe/winmergeu.exe'
   Editor=: 'c:/uap/notepad++/notepad++.exe %f'
   Editor_nox=: '' 

I use SumatraPDF to read PDF files on Windows. It’s a fast, lightweight, program that efficiently renders jod.pdf. Good PDF readers are available for all commonly used platforms.

To define JQT PF keys edit the configuration file userkeys.cfg.[2] This file is also directly available from the Edit\Configure menu. My JOD specific PF keys are:

 F3;1;Require JOD;require 'general/jod'
 Shift+F3;1;JOD Help;jodhelp 0
 F6;1;Dev Dicts;od cut 'joddev jod utils' [ 3 od ''
 Shift+F6;1;Fit Dev Dicts;od cut 'jodfit joddev jod utils' [ 3 od ''
 Ctrl+Shift+F6;1;Test Dev Dicts;od cut 'jodtest joddev jod utils' [ 3 od ''

Pressing Shift+F3 executes jodhelp 0 which pops up JOD help.

jodsource changes (#2)

The jodsource addon is a collection of JOD dump scripts. Dump scripts are serialized versions of binary JOD dictionaries. When executed they merge objects into the current JOD put dictionary. I use them primarily to move dictionaries around but they have other uses as well. Prior to this version I distributed the three main JOD development dump scripts, joddev, jod, and utils, in one compressed zip file to reduce the size of JAL downloads.

The distributed script jodsourcesetup.ijs used the zfiles addon to extract these scripts and rebuild JOD development dictionaries. This worked on 32 bit Windows systems but failed elsewhere. J now runs on 32/64 bit Windows, Mac, Linux, iOS and Android systems. To better support all these variants I eliminated the zfiles dependency and pruned the JOD development dictionaries. The result is a more portable and smaller jodsource addon.
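
If you are curious about what jodsourcesetup.ijs does on your behalf, the manual steps look roughly like the following sketch. The dictionary name comes from the list above; the temporary path and the location of the dump script on your machine are assumptions.

   NB. sketch: rebuild one development dictionary from its dump script
   NB. (the (~jodtemp) path and the dump script location are assumptions)
   newd 'joddev';jpath '~jodtemp/joddev' [ 3 od ''
   od 'joddev'
   0!:0 <jpath '~addons/general/jodsource/joddev.ijs'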

Bye bye volume sizing (#4)

Early versions of JOD ran in the now bygone era of floppy disks. It was possible to create many JOD dictionaries on a single standard 800 kilobyte 3.5 inch floppy. Compared to modern porcine-ware, JOD, which many J’ers consider a huge system, is lithe and lean. In floppy days it was important to check if there was enough space on a floppy before creating another huge 48K empty JOD dictionary. This is a bit ridiculous today! If you don’t have 48K free on whatever device you are running you have far more serious problems than not being able to create JOD dictionaries.

Volume sizing code remained in JOD for years until it started giving me problems. Returning the size of very large network volumes can be time-consuming and there are serious portability issues. Every operating system calls different facilities to return volume sizes. Even worse, security settings on corporate networks and cloud architectures sometimes refuse to divulge national secrets like free byte counts.

To eliminate all these headaches this version of JOD no longer checks volume size when the FREESPACE noun is zero. To restore the previous behavior edit the JOD parameter file and change the line FREESPACE=:0 to whatever byte count you want. Alternatively, you could NGAF[3] and just assume you have 48K free on your terabyte size volumes.

Still to come

You may have surmised from JOD’s version number that the system is still not feature complete. The JOD manual lists a few words that I am planning to implement. I only develop JOD when I need something or I am bored out of my mind at work and need a break. Such intermittent motivators seldom ensure project completion but I have found a new reason to finish JOD. To list a book on Goodreads or Amazon you need an ISBN. The hardcopy version of the JOD manual is a sort-of-published book. To complete the publishing process I need an ISBN. If I am going to bother with such formalities I might as well complete the system the manual describes. So there you have it, a new software development motivator: vanity.

  [1] The version number is *’ed because you are always a point release from done!
  [2] userkeys.cfg is only available for J 8.03 systems.
  [3] Not Give a F%&k!

Jacks Repository

The other day I attempted to browse a J script described in an old blog post only to find that my employer’s network monkeys had blocked the file sharing service. I’ve railed about IT control freaks in the past. They will not rest until it’s impossible to do useful work. I fumed and grumbled until I perceived a bigger problem. I have so many references to program code in this blog that it’s getting tedious tracking them down. Wouldn’t it be nice if my hacks were neatly organized in one coherent repository?

Let me introduce jacks. jacks, or “J-hacks”, organizes the J related code referenced in this blog into a single GitHub repository. Most of the scripts in jacks are one-offs but some have proven so useful that it makes sense to store them in a repository and track changes. From now on jacks will be the first place to look for code from this blog. You pull the contents of jacks into a new Git repository with the commands:

git init
git remote add jacks https://github.com/bakerjd99/jacks.git
git pull jacks master

It took me a few moments to settle on the name “jacks.” I considered “jokes” because programmers often take their code too seriously and “jocks” because J programmers are wild, out of control, convention-eschewing code jocks, but jacks won out when I remembered the refrain “jack be nimble, jack be quick, jack jump over” whatever coding problem is pissing you off.

JHS meets MathJax

With the release of J 7.01, Jsoftware “deprecated” COM. J 6.02 can run as a COM automation server but J 7.01 cannot.[1] Throwing COM under the bus is hardly radical. Microsoft, COM’s creator, has been holding COM’s head underwater for years. Many .Net programmers cringe when they hear the word “COM” and the greater nonwindows[2] world never really accepted it. COM is a complex, over-engineered, proprietary dead-end. Yet despite its bloated deficiencies a lot of useful software is COM based. So with COM going away, at least for J programmers, the hunt is on for viable replacements, and while we’re hunting let’s rethink our entire approach to building J GUI applications.

J GUI applications are traditional desktop applications. They’re built on native GUIs like Windows Forms and GTK and when done well they look and act like GUI applications coded in other languages. This is all good but there is a fundamental problem with desktop GUIs. There are many desktop GUIs and they do not travel well. Programmers have spent many dollars and days creating so-called cross-platform GUIs but, if you wander off the Windows, Mac and Linux reservation, the results are not particularly portable. And, as portable GUIs rarely outperform native alternatives, programmers tend to stick in their tribal silos. GUI programming is a bitch, has always been a bitch and will always be a bitch. It’s time to divorce the desktop GUI bitch.

All divorces, even the geeky GUI variety, are hard. When you finally cut the knot you’re not entirely sure what you’re doing or where you’ll end up. All you know is that there is a better way and with respect to J GUI applications I believe that JHS is that way. JHS leverages the large CSS, HTML and JavaScript (CHJ) world and in recent years some impressive browser-based applications have emerged from that world. The application that changed my mind about JavaScript and browser-based applications in general is something called MathJax.

MathJax typesets mathematics. It renders both LaTeX and MathML using fully scalable browser fonts. This is better than what WordPress does. The following Ramanujan identity taken from MathJax examples renders on WordPress as an image.

\frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} =  1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}}  {1+\frac{e^{-8\pi}} {1+\ldots} } } }

MathJax renders the same expression with scalable fonts and supports downloading the expression as LaTeX or MathML text. This is pretty impressive for browser JavaScript. I wondered how hard it would be to use MathJax with JHS and was pleased to find it’s easy peasy.

Writing a basic JHS application is a straightforward task of setting up three JHS J nouns: HBS, CSS and JS.[3] HBS is a sequence of J sentences where each sentence yields valid HTML when executed. JHS generates a simple web page from HBS and returns it to the browser. MathJaxDemo HBS is:

 HBS=: 0 : 0
 '<hr>','treset' jhb 'Reset'

 '<hr>',jhh1 'Typeset with MathJax and J'
 '<hr>',jhh1 'Typeset Random Expression Tables'
 '<br/>','ttable' jhb'Typeset Random Expression Array' 
 '<br/>','restable' jhspan''        

CSS is exactly what you expect: CSS style definitions (a minimal sketch of the CSS noun appears after the JS listing below). Finally, JS is application specific JavaScript. MathJaxDemo JS matches HBS page events with corresponding JHS server handlers. This demo uses ajax for all event handlers.

JS=: 0 : 0

 function ev_ttable_click(){jdoajax([],"");}
 function ev_tquad_click(){jdoajax([],"");}
 function ev_tmaxwell_click(){jdoajax([],"");}
 function ev_tramaujan_click(){jdoajax([],"");}
 function ev_tcrossprod_click(){jdoajax([],"");}
 function ev_treset_click(){jdoajax([],"");}

 function ev_ttable_click_ajax(ts){jbyid("restable").innerHTML=ts[0]; MathJax.Hub.Typeset();}
 function ev_tquad_click_ajax(ts){jbyid("resquad").innerHTML=ts[0]; MathJax.Hub.Typeset();}
 function ev_tmaxwell_click_ajax(ts){jbyid("resmaxwell").innerHTML=ts[0]; MathJax.Hub.Typeset();}
 function ev_tramaujan_click_ajax(ts){jbyid("resramaujan").innerHTML=ts[0]; MathJax.Hub.Typeset();}
 function ev_tcrossprod_click_ajax(ts){jbyid("rescrossprod").innerHTML=ts[0]; MathJax.Hub.Typeset();}

 function ev_treset_click_ajax(ts){
   jbyid("restable").innerHTML=ts[0]; jbyid("resquad").innerHTML=ts[0];
   jbyid("resmaxwell").innerHTML=ts[0]; jbyid("resramaujan").innerHTML=ts[0];
   jbyid("rescrossprod").innerHTML=ts[0];
 }
)
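
The demo’s CSS noun is not reproduced here; it is ordinary CSS wrapped in a J noun. A minimal sketch of the form it takes (the selectors below are illustrative, not the demo’s actual style sheet):

CSS=: 0 : 0
body {font-family: sans-serif; margin: 12px;}
span {display: block; margin-top: 8px;}
)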

Running the JHS MathJaxDemo is a simple matter of:

  1. Downloading the J scripts in the MathJaxDemo folder to a local directory.
  2. Editing J’s ~config/folders.cfg file and pointing to your download directory with the name MathJaxDemo. You are set up correctly if jpath '~MathJaxDemo' returns your path.
  3. Loading the demo: load '~MathJaxDemo/MathJaxDemo.ijs'
  4. Browsing to the site:

It’s not hard to use JHS as a general application web server. JHS provides many common controls right out of the box but to compete with desktop applications it’s necessary to supplement JHS with JavaScript libraries like MathJax. In coming posts I will explore how to use industrial strength JavaScript grids and graphics with JHS.

MathJaxDemo Screen Shot

Screen shot of JHS MathJaxDemo running on Ubuntu.

  [1] Bill Lam has pointed out that J 7.01 can function as a COM client. The JAL addon tables/wdooo controls OpenOffice using Ole Automation which is one of the many manifestations of COM.
  [2] On a purely numerical basis there is no greater nonwindows world.
  [3] To learn about JHS programming study the JHS demos and the JHS browser application.

Turn your Blog into an eBook

If you have worked through the exhausting procedure of converting your blog to LaTeX (see posts 1, 2 and 3), you will be glad to hear that turning your blog into an image-free eBook is almost effortless. In this post I will describe how I convert my blog into EPUB and MOBI eBooks.

eBooks: how the cool kids are reading

eBook readers like Kindles, Nooks, iPads and many cell phones are optimized for plain old prose. They excel at displaying reflowable text in a variety of fonts, sizes and styles. One eBook reader feature, dear to my old fart eyes, is the ability to increase the size of text. All eBooks are potentially large print editions. There are other advantages: most readers can store hundreds, if not thousands, of books, making them portable libraries. It’s now technically possible to hand a kindergarten student a little tablet that holds every single book he will use from preschool to graduate school. The only obstacle is the rapacious textbook industry and their equally rapacious eBook publishing enablers. But fear not, open source man will save the day. The days of overpriced digital goods are over! I will never pay more than a few bucks for an eBook because I can make my own and so can you! Let’s get together and kill off another industry that so has it coming!


Native eBook file formats like EPUB and MOBI do not handle complex page layouts well. If your document contains a lot of mathematics, figures and well-placed illustrations, stick with PDF workflows.[1] You will save yourself and your readers a lot of grief. But, if your document is a prose masterpiece, a veritable great American novel, then “publishing” it as an EPUB or MOBI is a great way to target eBook readers. EPUBs and MOBIs can be compiled from many sources. I start with the LaTeX files I created for the PDF version of this blog because I hate doing the same boring task twice. By far the most time-consuming part of converting WordPress export XML to LaTeX is editing the pandoc-generated *.tex files to resolve figures and fix odd run-together words and paragraphs. To preserve these edits I use pandoc to convert my edited *.tex to *.markdown files.


Markdown is a very simple text oriented format. A markdown file is completely readable exactly the way it is. All you need is a text editor. Even text editors are overkill. You could compose markdown with early 20th century mechanical typewriters; it’s a low tech format for the ages: perfect for prose.

The J verb MarkdownFrLatex [2] calls pandoc and converts my *.tex files to *.markdown. I place the markdown files in their own directory and track changes to them with Git. MarkdownFrLatex strips out image inclusions and removes typographic flourishes. When it succeeds it writes a simple markdown file and when it fails it writes a *.baddown file. Baddown files are *.tex files that contain lstlistings and complex figure environments that are best resolved with manual edits. After removing such problematic LaTeX environments the J verb FixBaddown calls pandoc and turns the baddown files into markdown files.
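
Neither verb is reproduced here (both live in TeXfrWpxml.ijs), but the heart of the conversion step is a single shell call to pandoc. A stripped-down sketch, not the real MarkdownFrLatex, with no baddown handling and a hypothetical verb name:

NB. sketch only: convert one *.tex file to *.markdown with pandoc
NB. (mdfrtex is a hypothetical stand-in for MarkdownFrLatex)
mdfrtex=: 3 : 0
md=. (_4 }. y),'.markdown'      NB. swap the .tex extension for .markdown
2!:0 'pandoc "',y,'" -f latex -t markdown -o "',md,'"'
md
)
   mdfrtex 'somepost.tex'
somepost.markdown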

Generating EPUB and MOBI files

When the conversion to markdown is complete I run MainMarkdown to mash all my files into one large markdown file with an eBook header. The eBook header for this blog is:

% Analyze the Data not the Drivel
% John D. Baker
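
MainMarkdown is also in TeXfrWpxml.ijs; the mash-up itself amounts to prepending this header and catenating the markdown files. A bare-bones sketch with placeholder file names:

NB. sketch only: prepend the eBook header and join markdown files into bm.markdown
NB. (post1.markdown and post2.markdown are placeholder names)
hdr=. '% Analyze the Data not the Drivel',LF,'% John D. Baker',LF
body=. (1!:1 <'post1.markdown') , 1!:1 <'post2.markdown'
(hdr,body) 1!:2 <'bm.markdown'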

The first few lines of the consolidated bm.markdown file are:

% Analyze the Data not the Drivel
% John D. Baker

#[What’s In it for


*Posted: 05 Sep 2009 22:44:50*

[Facebook](http://www.facebook.com) is huge: they brag about a user
count well north of one hundred million. If only 0.5% of their users are
active that’s 500,000 *concurrent users.* How many expensive servers
does it take to support such a load? .....

Generating an EPUB from bm.markdown is a simple matter of opening up your favorite command line shell and issuing the pandoc command:

pandoc -S --epub-cover-image=bmcover.jpg -o bm.epub bm.markdown

You can read the resulting EPUB file bm.epub on any EPUB eBook reader. Here’s a screen shot of bm.epub on my iPhone.

iPhone loaded with my blog

The last step converts bm.epub to bm.mobi. MOBI is a native Kindle format. Pandoc can generate MOBI from bm.markdown but it inexplicably omits a table of contents. No problemo:  I use Calibre to convert bm.epub to bm.mobi. Calibre properly converts the embedded EPUB table of contents to MOBI.  Here’s bm.mobi on a Kindle.

Kindle loaded with my blog

All the “published” versions of this blog are available on the Download this Blog page so please help yourself!

[1] LaTeX is usually compiled to PDF making it one of hundreds of PDF workflows.

[2] All the J verbs referenced in this post are in the script TeXfrWpxml.ijs