JOD Update: Version 0.9.97*

In the last year much has changed in the J world.

  1. There are new official J 8.0x builds for all supported platforms.
  2. The Qt-based IDE JDE has matured and is in widespread use.
  3. The column-oriented J database JD is drawing new users to J and enticing J veterans to reconsider how we use databases.
  4. There is a small group of J system builders experimenting with additions, extensions and revisions of core J source code.

In short, there have been enough changes to warrant revisiting and updating JOD.

JOD version 0.9.97*1 is the first JOD update in many years that mocks the god of software compatibility. In particular:

  1. The syntax of the jodhelp verb has changed.
  2. The jodsource addon no longer uses a zip file to distribute JOD dump files.
  3. JOD online help will no longer be supported or updated.
  4. Volume size is no longer checked before creating new JOD dictionaries.
  5. There is a new version of jod.pdf.

jodhelp changes (#1, #3 and #5)

jodhelp has always been a kludge. In programmer speak a kludge is some half-baked facility added to a system after more essential features have stabilized. The original versions of jodhelp pointed at my rough notes. It was all the “documentation” I needed! Then others started using JOD, which resulted in an “evolved” online version of my notes. I originally thought that hosting my notes online would simultaneously serve user needs and cut the amount of time I spent maintaining documentation. In retrospect this wasn’t even wrong!

I used Google Documents to host my notes. If you’ve ever wondered why completely free Google Documents hasn’t obliterated expensive Microsoft Word or hoary old but excellent \LaTeX, I invite you to maintain a set of long-lived documents with Google Documents. During jodhelp’s online lifetime the basic internal format of Google Documents changed in a screw-your-old-documents upgrade that forced me to spend days repairing broken hyperlinks and reformatting. I was not amused; you still get what you pay for!

I originally chose Google Documents because of its alleged global accessibility. Sadly, Google Documents is now often blocked by corporate and national firewalls. Even when it isn’t blocked it renders like a dog peeing on a fire hydrant. All these problems forced me to rewrite JOD documentation with a completely reliable tool: good old-fashioned \LaTeX. The result of my labors, jod.pdf, is now distributed by the joddocument addon and is easily browsed with jodhelp.

After jod.pdf’s appearance another irritant surfaced: synchronizing jod.pdf and the online version. I tried using pandoc and markdown to generate both the online and PDF versions from the same source files but jod.pdf is too complex for not-too-fancy portable approaches. I was faced with a choice: lower my jod.pdf standards or get rid of something I never really liked. I opted to drown a child and abandon online help. I don’t expect a lot of mourners at the funeral.

Using the new version of jodhelp requires installing the joddocument addon and configuring a J PDF reader. It’s also a good idea to define a JQT PF key to pop up JOD help with a keystroke. To configure a J PDF reader edit the configuration file:

 ~config/base.cfg

This file is directly available from the JQT Edit\Configure menu. base.cfg defines a number of operating-system-dependent utilities. Make changes to the systems you use, save your changes, and restart J. The following example shows my Win64 system settings.

 case. 'Win' do.
   BoxForm=: 1
   Browser=: 'c:/Program Files (x86)/Google/Chrome/Application/chrome.exe'
   Browser_nox=: ''
   EPSReader=: 'c:/program files/ghostgum/gsview/gsview64.exe'
   PDFReader=: 'c:/uap/sumatra/SumatraPDF.exe'
   XDiff=: 'c:/uap/WinMerge-2.14.0-exe/winmergeu.exe'
   Editor=: 'c:/uap/notepad++/notepad++.exe %f'
   Editor_nox=: '' 

I use SumatraPDF to read PDF files on Windows. It’s a fast, lightweight program that efficiently renders jod.pdf. Good PDF readers are available for all commonly used platforms.

To define JQT PF keys edit the configuration file:2

 ~config/userkeys.cfg
 

This file is also directly available from the Edit\Configure menu. My JOD specific PF keys are:

 F3;1;Require JOD;require 'general/jod'
 Shift+F3;1;JOD Help;jodhelp 0
 F6;1;Dev Dicts;od cut 'joddev jod utils' [ 3 od ''
 Shift+F6;1;Fit Dev Dicts;od cut 'jodfit joddev jod utils' [ 3 od ''
 Ctrl+Shift+F6;1;Test Dev Dicts;od cut 'jodtest joddev jod utils' [ 3 od ''
 

Pressing Shift+F3 executes jodhelp 0 which pops up JOD help.
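
If you prefer typing to PF keys, the same commands work directly in a J session; this simply repeats what the F3 and Shift+F3 definitions above do:

 require 'general/jod'   NB. load JOD
 jodhelp 0               NB. pops up jod.pdf in the configured PDF reader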

jodsource changes (#2)

The jodsource addon is a collection of JOD dump scripts. Dump scripts are serialized versions of binary JOD dictionaries. When executed they merge objects into the current JOD put dictionary. I use them primarily to move dictionaries around but they have other uses as well. Prior to this version I distributed the three main JOD development dump scripts, joddev, jod, and utils in one compressed zip file to reduce the size of JAL downloads.

The distributed script jodsourcesetup.ijs used the zfiles addon to extract these scripts and rebuild JOD development dictionaries. This worked on 32-bit Windows systems but failed elsewhere. J now runs on 32- and 64-bit Windows, Mac, Linux, iOS and Android systems. To better support all these variants I eliminated the zfiles dependency and pruned the JOD development dictionaries. The result is a more portable and smaller jodsource addon.
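
Rebuilding the development dictionaries is now a two-liner; a minimal sketch, assuming the jodsource addon’s default JAL install path for the setup script:

 require 'general/jod'                                 NB. load JOD
 load '~addons/general/jodsource/jodsourcesetup.ijs'   NB. assumed path; rebuilds the joddev, jod and utils dictionaries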

Bye bye volume sizing (#4)

Early versions of JOD ran in the now bygone era of floppy disks. It was possible to create many JOD dictionaries on a single standard 800 kilobyte 3.5 inch floppy. Compared to modern porcine-ware, JOD, which many J’ers consider a huge system, is lithe and lean. In floppy days it was important to check if there was enough space on a floppy before creating another huge 48K empty JOD dictionary. This is a bit ridiculous today! If you don’t have 48K free on whatever device you are running you have far more serious problems than not being able to create JOD dictionaries.

Volume sizing code remained in JOD for years until it started giving me problems. Returning the size of very large network volumes can be time-consuming and there are serious portability issues. Every operating system calls different facilities to return volume sizes. Even worse, security settings on corporate networks and cloud architectures sometimes refuse to divulge national secrets like free byte counts.

To eliminate all these headaches this version of JOD no longer checks volume size when the FREESPACE noun is zero. To restore the previous behavior you have to edit the file

 ~addons/general/jod.ijs

and change the line FREESPACE=:0 to whatever byte count you want. Alternatively, you could NGAF3 and just assume you have 48K free on your terabyte size volumes.
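
For example, a hypothetical 50 MB check would be:

 FREESPACE=:50000000  NB. hypothetical value: require at least 50,000,000 free bytes before creating dictionaries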

Still to come

You may have surmised from JOD’s version number that the system is still not feature complete. The JOD manual lists a few words that I am planning to implement. I only develop JOD when I need something or I am bored out of my mind at work and need a break. Such intermittent motivators seldom ensure project completion but I have found a new reason to finish JOD. To list a book on Goodreads or Amazon you need an ISBN. The hardcopy version of the JOD manual is a sort-of-published book. To complete the publishing process I need an ISBN. If I am going to bother with such formalities I might as well complete the system the manual describes. So there you have it, a new software development motivator: vanity.


  1. The version number is *‘ed because you are always a point release from done!
  2. userkeys.cfg is only available for J 8.03 systems.
  3. Not Give a F%&k!

APL Software Archaeology .dbi Edition


Have yourself a merry little APL Christmas.

I joke that my job title should be software archaeologist because I often find myself resurrecting, not refactoring, code that dates to primitive and primeval eras. The language I’m typically hired to resurrect is APL. APL, the language with funny symbols, is a software vampire. People keep paying us to kill it, but no matter how many stakes we pound through its heart it keeps coming back.

There are good reasons for this. APL embodies many timeless ideas and I’m confident that programming in the future will look a lot more like APL than many expect. If you doubt me just press the Siri button on your iPhone and ask, “Integrate X squared times sine X from 0 to 2.” What comes back has more of an APL than QWERTYUIOP flavor. Strange Unicode characters are creeping into many mainstream languages. This is a good thing because restricting programming to the miserly key sets of ancient typewriters was, is, and always will be a spectacularly bad idea. Ken Iverson deserves rich accolades for pointing this out more than fifty years ago and beating this drum incessantly during his lifetime. Iverson taught that notation is a tool of thought and that if you care about ideas you must care about how they are expressed. Why is this even remotely controversial?


Siri’s results use appropriate mathematical notations. As we move away from keyboards programming languages and mathematical notation will merge. APL was way ahead of its time in this respect.

The genius of APL continues to exert influence on many programming languages, but APL’s rise had little to do with its abstract notation and a lot to do with how it was implemented. APL was one of the first programming environments that nonprogrammers could use. It was the spreadsheet of the late 1960’s and 1970’s and just like spreadsheets of today a lot of utterly horrid, poorly structured, lame amateur messes were created with it. If you’ve ever cracked open a gigantic Excel model that looks like it was developed by a roomful of quarreling ADHD afflicted unionized chimpanzees then you know what the standard APL mess feels like. Many programmers blamed APL for this just like gun control advocates blame firearms for shootings. They argued that it would have been impossible to concoct such monsters in clean compiled languages like Pascal. “It wouldn’t even compile.” This is not even wrong. I’ve dealt with plenty of dreadful messes that do compile! The tool is always neutral; don’t blame the paintbrush for the painting.

Allowing rubes to code yields mountains of rubbish and the occasional ruby. It will shock many programmers to learn they are not the only smart people in the world. It turns out that nonprogrammers occasionally have good ideas and, miraculously, some of them can ably express their ideas in code. Before spreadsheets such user rubies congealed in APL where some still run. Part of my day job is extracting these precious stones from layers and layers of kludges, hacks, patch jobs, retro-fits and workarounds and recoding them in modern programming languages like C# and JavaScript.

Recently I recovered1 an ancient inverted file system embedded in the APL systems of my employer and rendered it in C#. This system uses the extension .dbi. I don’t know who created this system; the code is old. The most recent code comments date from the year 2000, but I am pretty sure that .dbi files predate component files in APL+WIN, formerly STSC APL, which pushes the design back to the 1980’s or earlier. I know many APL’ers check this blog. If any of you know who created the original .dbi APL code please leave a note.

Somehow this .dbi system survived unsupported, with few user complaints, for decades of daily use. How is this possible? Astonishingly, good ideas age well and the core .dbi idea is inverted data. Modern high-performance databases make heavy use of this method. Inversion is so effective that hoary old interpreted APL code still beats compiled and optimized ADO.Net when fetching large numeric vectors and tables.
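
To make the idea concrete, here is a toy J sketch of inverted, column-wise storage; it only illustrates the principle and is not the .dbi code:

   names=: ;:'alice bob carol'   NB. each field stored as its own array
   ages=: 34 42 29               NB. numeric column: one contiguous array
   (+/ % #) ages                 NB. summarizing a column never touches the names
35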

Restoring the .dbi system was a two-step process.2 I first converted the APL system to J. I used J because it is a close relative of APL but not so close that you can cut and paste. Translating nontrivial APL to J forces you to understand the APL at the nitty-gritty level. The translation to J also allowed me to fix the APL interface. The original system used global variables, rampant branches and other lamentable coding practices that C# will not abide. After matching the APL and J systems I translated the J to C# and then rematched all three systems.

Comparing multiple systems is a very effective testing technique. I found bugs in all three systems. I fixed the J and C# bugs but left the original APL code unchanged. Software archaeology is a delicate field. You don’t “fix” old code just like you don’t correct errors in cuneiform tablets. Original and important program code belongs in museums with other significant cultural artifacts.

Original inverted file code probably belongs in a museum. This .dbi APL code is old, but it certainly derives from earlier programs so it’s not museum worthy. Even if it were, the APL and C# .dbi systems belong to my employer. However, I am placing the J scaffold version, which matches the performance of the other systems, into the public domain. The script is available on GitHub and here. The .dbi system gets right down to bits in some cases and illustrates some J techniques for dealing with indexed binary inverted file data. Enjoy!


  1.  .dbi files held many gigabytes of actuarially tuned data. Dumping them was not an option. We either had to convert to a new store or produce a component that could read old data in new systems.
  2. Restoring old code is somewhat like restoring old pictures. When working on old pictures you’re always tempted to improve them. With pictures you usually have a choice. This may not hold for old code. Changes in software may force updates.

Jacks Repository

The other day I attempted to browse a J script described in an old blog post only to find that my employer’s network monkeys had blocked the file sharing service. I’ve railed about IT control freaks in the past. They will not rest until it’s impossible to do useful work. I fumed and grumbled until I perceived a bigger problem. I have so many references to program code in this blog that it’s getting tedious tracking them down. Wouldn’t it be nice if my hacks were neatly organized in one coherent repository?

Let me introduce jacks. jacks, or “J-hacks”, organizes the J related code referenced in this blog into a single GitHub repository. Most of the scripts in jacks are one-offs but some have proven so useful that it makes sense to store them in a repository and track changes. From now on jacks will be the first place to look for code from this blog. You pull the contents of jacks into a new Git repository with the commands:

git init
git remote add jacks https://github.com/bakerjd99/jacks.git
git pull jacks master

It took me a few moments to settle on the name “jacks.” I considered “jokes” because programmers often take their code too seriously and “jocks” because J programmers are wild, out-of-control, convention-eschewing code jocks, but jacks won out when I remembered the refrain “jack be nimble, jack be quick, jack jump over” whatever coding problem is pissing you off.

WordPress to LaTeX with Pandoc and J: Using TeXfrWpxml.ijs (Part 3)

WordPress to LaTeX

In this post I will describe how to use the J script TeXfrWpxml.ijs to generate LaTeX source from WordPress export XML.  I am assuming you have worked through (Part 1) and (Part 2) and have:

  1. Successfully installed and tested Pandoc.
  2. Installed and tested a version of J.
  3. Set up appropriate directories (Part 2).
  4. Know how to use LaTeX.

Item #4 is a big if.  Inexperienced LaTeX users will probably not enjoy a lot of success with this procedure as the source generated by TeXfrWpxml.ijs requires manual edits to produce good results.  However, if you’re not a LaTeX guru, do not get discouraged. It’s not difficult to create blog documents like bm.pdf.

Step 1: download WordPress Export XML

How to download WordPress export XML is described here.  Basically you go to your blog’s dashboard, select Tools, choose Export  and select the All content option.

Tools > Export > All Content

When you press the Download Export File  button your browser will download a single XML file that contains all your posts and comments. Remember where you save this file. I put my export XML here.

c:/pd/blog/wordpress/analyzethedatanotthedrivel.wordpress.xml

Step 2: download TeXfrWpxml.ijs

Download TeXfrWpxml.ijs and remember where you save it.  I put this script here.

c:/pd/blog/TeXfrWpxml.ijs

Step 3: start J and load TeXfrWpxml.ijs

TeXfrWpxml.ijs was generated from JOD dictionaries. With JOD it’s easy to capture root word dependencies and produce complete standalone scripts. TeXfrWpxml.ijs needs only the standard J load profile to run.  It does not require any libraries or external references and should run on all Windows and Linux versions of J after 6.01.  Loading this script is a simple matter of executing:

load 'c:/pd/blog/TeXfrWpxml.ijs'

This script runs even in the J 7.01 console, the most stripped-down J runtime.

Step 4: review directories and necessary LaTeX files

The conversion script assumes the proper directories are set up: see Part 2. The first time you run TeXfrWpxml.ijs it’s a good idea to check that the directories and files the script is expecting are the ones you want to process. You can verify the settings by displaying TEXFRWPDIR, TEXINCLUSIONS, TEXROOTFILE and TEXPREAMBLE.

  TEXFRWPDIR
c:/pd/blog/wp2latex/
  TEXINCLUSIONS
inclusions
  TEXROOTFILE
bm.tex
  TEXPREAMBLE
bmamble.tex

If all these directories and files exist go to step (5).

Step 5: make sure you are online

The first time you run the converter it will attempt to download all the images referenced in your blog. This is where wget.exe gets executed.  Obviously to download anything you must be connected to the Internet.

Step 6: run LatexFrWordpress

Run the verb LatexFrWordpress.  The monadic version of this verb takes a single argument: the complete path and file name of the export XML file you downloaded in step (1).

xml=: 'c:/pd/blog/wordpress/analyzethedatanotthedrivel.wordpress.xml'

LatexFrWordpress xml

As the verb runs you will see output like:

   LatexFrWordpress xml
What's In it for Facebook?
downloading: c:/pd/blog/wp2latex/inclusions/demotivational-posters-facebook-you.jpg
1 downloaded; 0 not downloaded; 0 skipped
Fake Programming
downloading: c:/pd/blog/wp2latex/inclusions/672169130_vajvn-M.png
1 downloaded; 0 not downloaded; 0 skipped
Laws or Suggestions
downloading: c:/pd/blog/wp2latex/inclusions/i-B5mfdRF-M.jpg
1 downloaded; 0 not downloaded; 0 skipped
Lens Lust

... many lines omitted ...

downloading: c:/pd/blog/wp2latex/inclusions/i-mNK4RHL-M.png
1 downloaded; 0 not downloaded; 0 skipped
WordPress to LaTeX with Pandoc and J: LaTeX Directories (Part 2)
0 downloaded; 0 not downloaded; 1 skipped
+-++
|1||
+-++

When the verb terminates you should have a directory c:/pd/blog/wp2latex full of *.tex files:  one file for each blog post. Now the hard work starts.

Step 7: editing LaTeX posts

The conversion from WordPress XML to LaTeX produces files that require manual edits. The more images, video, tables and other elements in your posts the more demanding these edits will become. My blog has about one image per post. Most of these images are wrapped by text. LaTeX has a mind of its own when it comes to floating figures and getting illustrations to behave requires far more parameter tweaking than it should. This is a longstanding weakness of LaTeX that pretty much everyone bitches about. My advice is to start at the front of your document and work through it post by post. The files generated by LatexFrWordpress do not attempt to place figures for you but they do bolt in ready-made figure templates as comments that you can experiment with. Each post file is also set up for separate LaTeX compilation. You don’t have to compile your entire blog to tweak one post. The one good thing about this edit step is that once you have sorted out your old posts you do not have to revisit them unless you make major global document changes. The next time you run LatexFrWordpress it will only bring down new posts and images.

Step 8: compile your LaTeX blog

I use batch files and shell scripts to drive LaTeX compilations.  I processed my blog with this batch file.

echo off
rem process blog posting (bm.tex) root file
title Running Blog Master/LaTeX ...

rem first pass for aux file needed by bibtex
lualatex bm

rem generate/reset bbl file
bibtex bm
makeindex bm

rem resolve all internal references - may
rem comment out when debugging entire document
lualatex bm
lualatex bm

rem display pdf - point to preferred PDF reader
title Blog Master/LaTeX complete displaying PDF ...
"C:\Program Files\SumatraPDF\SumatraPDF.exe" bm.pdf

The presence of Unicode APL, see this post, forced me to use lualatex. I needed some very nonstandard APL fonts. See bm.pdf — also available on the Download this Blog page — to judge the effectiveness of my edits. Producing nice figure-laden typeset blog documents is work but, as I will describe in the next post, producing image-free eBooks is a simple and far less laborious variation on this process.

WordPress to LaTeX with Pandoc and J: LaTeX Directories (Part 2)

WordPress to LaTeX

In this post I will describe the LaTeX directory structure the J script TeXfrWpxml.ijs is expecting. To convert WordPress export XML to LaTeX with this script you will have to set up similar directories.

LaTeX documents are built from *.tex[1] files. This makes LaTeX more like a compiled programming language than a word processing program. There are advantages and disadvantages to the LaTeX way. In LaTeX’s favor, the system is enormously adaptable, versatile and powerful. There is very little that LaTeX/TeX and associates cannot do.  Unfortunately, “with great power comes great responsibility.” LaTeX is demanding! You have to study LaTeX like any other programming language. It’s not for everyone but for experienced users it’s the best way to produce documents with the highest typographic standards.

LaTeX directory structure

To use LaTeX efficiently it’s wise to pick a document directory structure and stick with it. I use a simple directory layout. Each document has a root directory. The root directory used by TeXfrWpxml.ijs is:

Windows c:/pd/blog/wp2latex
Linux /home/john/pd/blog/wp2latex

I put my document specific *.tex, *.bib, *.sty and other LaTeX/TeX files in the root. To handle graphics I create an immediate subdirectory called inclusions.

c:/pd/blog/wp2latex/inclusions

The inclusions directory holds the document’s *.png, *.jpg, *.pdf, *.eps and other graphics files.  To reference files in the inclusions directory with the standard LaTeX graphicx package insert

\usepackage{color,graphicx,subfigure,sidecap}
\graphicspath{{./inclusions/}}

in your preamble. Finally, to track document changes I create a Git repository in the root directory.

c:/pd/blog/wp2latex/.git

Self-contained directories

I take care to keep my document directories self-contained. Zipping up the root and inclusions directory collects all the document’s files. This means that I sometimes have to copy files that are used in more than one document. Many LaTeX users maintain a common directory for such files but I’ve found that common directories complicate moving documents around. You’re always forgetting something in the damn common directory or you are copying a buttload of mostly irrelevant files from one big confusing common directory to another.

TeXfrWpxml.ijs files

The TeXfrWpxml.ijs script searches for these files in the root directory.

bm.tex Main LaTeX root file
bmamble.tex LaTeX preamble

bm.tex references bmtitlepage.tex.  I prefer a separate title page file; simply comment out this file if you create titles in other ways. The zip file wp2latex.zip contains a test directory in the format expected by TeXfrWpxml.ijs.  It also has a subset of my blog posts already converted to LaTeX. To get ready for WordPress to LaTeX with Pandoc and J: Using TeXfrWpxml.ijs (Part 3) download wp2latex.zip and attempt to compile bm.tex.  You might have to download a number of LaTeX packages.  Once you have successfully compiled bm.tex you are ready for the next step.


[1] LaTeX uses many other file types but key files are usually *.tex files.

WordPress to LaTeX with Pandoc and J: Prerequisites (Part 1)

There are no quick WordPress to LaTeX fixes

WordPress to LaTeX

Over the next three posts I will describe how to convert WordPress’s export XML to LaTeX source code.  I know that many of you are looking for a quick WordPress to LaTeX fix; unfortunately there are no quick fixes. The two formats come from different worlds and are used in different ways.  Producing useful LaTeX source from WordPress export XML will require manual edits.  My goal here is to minimize manual edits, produce high quality LaTeX source and to outline what you will have to contend with. To get an idea of what you can expect download the LaTeX compiled version of this post.

Visual and Logical composition

WordPress and LaTeX are examples of the two basic approaches, visual and logical, taken by writing software.  Visual systems value appearance. It matters what things look like and no effort is spared to get the right look. Logical systems value content. What’s said is far more important than what it looks like. Logical systems impose order and structure and typically defer visual elements.  As you might expect there is no such thing as a pure visual or logical writing system. Successful systems use both approaches to a greater or lesser degree. Composing WordPress blog posts is roughly 35% visual and 65% logical.[1]  LaTeX composition is about 10% visual and 90% logical. The numbers do not line up; there is a basic mismatch here.

Many format X to LaTeX converters tackle this mismatch by attempting to maintain visual fidelity. This is a catastrophic error that renders the entire conversion useless.  Here’s a hint. If you’re using a predominantly logical system like LaTeX you don’t give a rodent’s posterior about visual fidelity. This method dispenses with all but the most basic of visual elements. No attempt is made to preserve fonts, type sizes, image scale, justification, hyphenation, text color and so forth.  The goal is to produce working LaTeX source that can be transformed to whatever final layout the author desires.

Prerequisite Software

I use two programs to transform WordPress export XML to LaTeX:  the J programming language and John MacFarlane’s Pandoc.  Pandoc is an excellent text mark-up to mark-up converter.  It wisely avoids attempting to convert entire complex documents and focuses on getting parts of documents right.  It does a particularly good job of converting HTML to LaTeX which is a crucial part of this process.  I use Pandoc to transform the HTML embedded in WordPress export XML CDATA elements to *.tex files and I use J to preprocess and post process Pandoc inputs and outputs and to stitch everything together into a set of LaTeX ready files.

Download Pandoc from here. I use the Windows command line version. There are Linux and Mac versions as well. Download J from here.  The easiest J install is the 32 bit Windows J 6.02 version. Other versions require additional steps to configure and deploy. If you are already a J user there is no need to install a particular system but you will need:

  1. The task library require 'task'
  2. The utility program wget.exe

Both of these components are typically part of the J distribution.

Install and check prerequisites

To continue download and install Pandoc and J and run the following tests; if you succeed your system is ready for WordPress to LaTeX with Pandoc and J: LaTeX Directories (Part 2).

Pandoc Test:

Download the test file: cdata.html and run Pandoc from the command line:

pandoc -o cdata.tex cdata.html

cdata.html is an example of the HTML code you find in WordPress export XML CDATA elements.  Note: required files are also available in the files sidebar in the WordPress to LaTeX directory.

J Test:

Start a J session and enter the following commands:

require 'task'

shell 'wget --help'

shell 'wget http://conceptcontrol.smugmug.com/photos/i-mNK4RHL/0/L/i-mNK4RHL-L.png'

If the shell command is properly loaded and wget.exe is found you will see help text. The second shell command downloads an image file.  Downloading post images is part of the overall conversion process.


[1] Actually this is not bad. Page layout systems are far worse. A typical layout system might be 90% visual and 10% logical making layout systems polar opposites of LaTeX.

Common Table Expression (CTE) SQLServer Queries with J

I’ve been blogging long enough to observe that your awesome posts are often ignored while your little “one-offs” sometimes strike gold. This is particularly true for “code example” posts. When I’m trolling for code — the geek equivalent of trolling for babes — I don’t want to read about the author’s programming philosophy or what the hell he thinks about the idiot in the White House. Just show me the code and shut up!

So, taking my own advice, the following is an example of a SQLServer Common Table Expression (CTE) query.

WITH cte1
     AS (SELECT c.obj_id                            AS obj_id,
                a.created                           AS creation_date,
                Datediff(DAY, a.created, Getdate()) AS day_cnt
         FROM   [HIST].[dbo].[History] a
                JOIN [HIST].[dbo].[HistorySummary] b
                  ON a.rowid = b.rowid
                JOIN [HIST].[dbo].[ObjectNames] c
                  ON c.obj_name = b.obj_name),
     cte2
     AS (SELECT cte1.obj_id AS obj_id,
                CASE
                  WHEN cte1.day_cnt <= 30 THEN 1
                  ELSE 0
                END         AS d0,
                CASE
                  WHEN ( cte1.day_cnt > 30 )
                       AND ( cte1.day_cnt <= 60 ) THEN 1
                  ELSE 0
                END         AS d1,
                CASE
                  WHEN ( cte1.day_cnt > 60 )
                       AND ( cte1.day_cnt <= 90 ) THEN 1
                  ELSE 0
                END         AS d2,
                CASE
                  WHEN ( cte1.day_cnt > 90 ) THEN 1
                  ELSE 0
                END         AS d3
         FROM   cte1)
SELECT obj_id,
       SUM(d0) AS OneMonth,
       SUM(d1) AS TwoMonths,
       SUM(d2) AS ThreeMonths,
       SUM(d3) AS Overdue
FROM   cte2
GROUP  BY obj_id

CTE queries essentially create temporary virtual tables during query execution. They are similar to nested SQL queries but are easier to code, more general (see recursive CTE queries), and often perform better than their convoluted equivalents. This example creates two virtual tables, cte1 and cte2, that are then used to compute a quick histogram.

You can call CTE queries with the J ODBC interface. The following assumes a SQLServer ODBC connection with DSN history.

NB. odbc interface
require 'dd'

NB. read CTE query
HistoryAgeSQL=. read 'c:/temp/HistoryAge.sql'

NB. connect sqlserver database
ch =. ddcon 'dsn=history'

NB. select with CTE query
sh =. HistoryAgeSQL ddsel ch

NB. fetch results
data=. ddfet sh,_1

As a final note it’s worth comparing the SQL CTE histogram code with J equivalents. The two following J verbs, taken from the J wiki (here and here), compute histograms.

NB. computes histograms uses right open intervals
histogram=:<:@(#/.~)@(i.@#@[ , I.)

NB. variation on (histogram) uses left open intervals
histogram2=:<:@(#/.~)@(i.@>:@#@[ , |.@[ (#@[ - I.) ])

They scale to tens of millions of data points; returning results in a few seconds on my laptop. The SQL CTE, shown above, takes about three seconds running on 180,000 row tables on my employer’s full warp servers.  If I could convince the masses to adopt languages like J a large part of my job would disappear. Fortunately, the world is hostile to terse elegance!