This page was written by Dr Mark Tarver and
incorporates a visual timeline of the Shen
programming language from its very beginnings
in 1989. It links to original reports
from the University of Leeds and material
from the Lambda Associates web site, which is
archived on this page.
Shen began many years ago in
1989 when I was working at the Laboratory for
the Foundations of Computer Science at
Edinburgh. The original idea was to bring to
the Lisp environment many of the advantages
of ML; specifically pattern-matching. A lot
of my ideas at that time were influenced by
ML, although I much preferred Lisp.
The first iteration: Rational Lisp
In 1990 I took up a post at Leeds, bringing my work with
me. While cycling in the Lake District in the spring
of that year, I had the insight that since type
checking was inherently deductive in nature, the type
discipline of a language could be specified as a
series of deduction rules. Using the techniques of
high-performance theorem-proving, these rules could
be compiled down into an efficient type checker. The
type-discipline would be free from the procedural
baggage of low-level encoding and presented in a form
that was clear, concise, and accessible to
experimentation and extension by researchers in type
theory. This was a bold conception that seemed eminently
reasonable, but which in fact took a further 15 years
of development to realise completely. The initial
implementation was called Rational Lisp and
you can find the first report on it here.
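The insight can be illustrated with a toy example. The sketch below is hypothetical Python (not Rational Lisp or SEQUEL code, whose notation was quite different): each branch of the checker corresponds one-to-one to a sequent-style typing rule for the simply-typed lambda calculus, so the checker does nothing but apply the deduction rules.

```python
# Hypothetical sketch (plain Python, not Rational Lisp or SEQUEL code):
# a type checker whose branches correspond one-to-one to sequent-style
# typing rules for the simply-typed lambda calculus.

def check(env, term):
    """Derive a type for `term` under the assumptions `env`."""
    kind = term[0]
    if kind == "var":
        # Γ, x : A ⊢ x : A
        return env[term[1]]
    if kind == "lam":
        # Γ, x : A ⊢ e : B  entails  Γ ⊢ (λx : A. e) : A → B
        _, x, a, body = term
        return ("->", a, check({**env, x: a}, body))
    if kind == "app":
        # Γ ⊢ f : A → B and Γ ⊢ e : A  entail  Γ ⊢ (f e) : B
        _, f, e = term
        tf, te = check(env, f), check(env, e)
        if tf[0] == "->" and tf[1] == te:
            return tf[2]
        raise TypeError("premises of the application rule not satisfied")
    raise TypeError("no rule applies")

# (λx : int. x) n, where n : int, checks to int
print(check({"n": "int"}, ("app", ("lam", "x", "int", ("var", "x")), ("var", "n"))))
```

Because each branch is a transliterated rule, the type discipline stays declarative; making such rule-driven checking fast is exactly the engineering problem described below.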
The second iteration: SEQUEL
Two years later the prototype development issued in the
language SEQUEL (SEQUEnt processing Language),
written in Lisp and introduced publicly in 1992, and
presented to the International
Joint Conference on Artificial Intelligence in 1993.
SEQUEL anticipated many of the features of Shen, in
particular the use of Gentzen's sequent notation to
formulate the type rules. SEQUEL actually contained
the type theory for a substantial portion of Common
Lisp (over 300 Common Lisp system functions were
represented in SEQUEL), necessitating a sizeable
program of more than 23,000 lines of source code. SEQUEL
was not only designed to support type secure Common
Lisp programming, it was also supposed to support
direct encoding of theorem-provers by giving the user
the power to enter logic rules in sequent notation to
the SEQUEL system.
SEQUEL sustained Andrew Adams's (1994) reimplementation
INDUCT of the Boyer-Moore theorem-prover. His M.Sc.
project was written in 6,000 lines of SEQUEL,
generated nearly 30,000 lines of Common Lisp, and
gained him a distinction. In 1993 the state of the art
was a SPARC II, so computing power was limited. Loading and
type checking INDUCT took several minutes.
SEQUEL was a compromise of the ideals of deductive typing
for several reasons.
1. The sequent compiler was inefficient.
2. Since SEQUEL was heavily configured to support
Common Lisp, the language was not consistent with
the lambda calculus (since Common Lisp does not support
lambda calculus features like currying and partial
applications).
3. SEQUEL inherited case-insensitivity from Common
Lisp and the use of the empty list NIL to mean false.
4. SEQUEL lacked a formal semantics.
5. SEQUEL also lacked a proof of type correctness.
Over ten years and two countries all these problems were
solved. Until 1996, SEQUEL stayed in operation with minor
amendments. Most of my creative time from 1993-1996
was spent writing poetry. SEQUEL continued to support
final year projects, but in late 1996 I returned to
computer science research.
The third iteration: Qi
The first modest change was to introduce case-sensitivity
and the use of proper booleans in place of T and NIL.
Large parts of the implementation were rewritten,
reducing the source code by several thousand lines.
In 1998 overloading in SEQUEL was dropped and the
entire list of function types was placed on a hash
table. This was in a sense a move away from Lisp and
towards a cleaner model and a smaller language. The
plus was the elimination of 10,000 lines of code.
In late 1997, I applied for three years' unpaid leave
from my job as a lecturer in order to concentrate on
finishing this work. One year was granted, and from
1998 to 1999 a lot of work was expended in providing
the new language with a formal type theory. After a
false start, the current type theory was evolved in
1999 and the semantics was developed soon afterwards. In
that same period, a lot of work was done in testing
whether deductive typing was really a practical
option. Early results were not encouraging: 30-line
programs could take as many seconds to type check.
The goals of deductive typing - complete declarative
specification and a type checking procedure that was
conceptually distinct from the rules that drove it -
meant that performance-enhancing hardwired hacks were
not allowed. Eventually a technology was developed
and the performance benchmarks showed that a 166MHz
Pentium under CLisp could just about run the system.
These days, because of hardware improvements, these
technical challenges would seem trivial.
The implementation went through two complete rewrites.
The first rewrite was very thoroughgoing and
introduced partial applications and currying into the
language, making it lambda calculus consistent.
At this time I was deeply involved in Taoism and,
feeling the need for a name for the new
implementation, called it Qi, the name for the
life-force in Taoism.
In 2001 a robust version of a compiler-compiler (first
devised by my colleague Dr Gyuri Lajos) was built and
used to encode some of the complex parsing routines
needed in Qi. This was the compiler-compiler Qi-YACC,
which was later used in Qi.
In 2002 I left the UK for America bearing Qi 1.4 with
me. Qi 1.4 was used to build almost the whole of Qi
2.0, which was a clean rebuild of the whole system. Qi
2.0 was never released, however, because of an
improvement to the type checking algorithm T used in
it. In 2003, I developed the experimental algorithm T* and
type checking gained a factor of 7 speedup over 1.4.
In late 2003 I put the finishing touches on a
correctness proof for the type theory and T and T*.
In 2003 Qi won me the Promising Inventor Award from
the State University of New York. Carl Shapiro,
currently of Google, attended my functional
programming class at Stony Brook that same year. He
was to exert a significant influence on the development
of Qi. In 2004, Qi 2.0 was revised to work with T* and the
result was Qi 3.2. After some more debugging and
revisions, Qi 4.0 emerged. Carl's interest in
SEQUEL made me revisit my old work. In 2005 I
constructed a new model for compiling Horn clauses
called the Abstract Unification Machine.
The AUM compiles Prolog into virtual machine
instructions for a functional language. AUM
technology gave 100 KLIPS under CLisp and 400 KLIPS
under the faster CMU Lisp and quadrupled the speed of
the type checker. The beta version, Qi 5.0, used
prototype AUM technology. Qi 6.1, which used the AUM,
again improved performance.
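Unification is the core operation such a machine must make fast. As an illustration only (this is ordinary Python under assumed conventions, not the AUM's actual instruction set or compilation scheme), first-order unification can be sketched as:

```python
# Illustrative sketch only: first-order unification in plain Python.
# The representation (uppercase strings as logic variables, tuples as
# compound terms) is an assumption for illustration, not how the AUM
# represents terms.

def is_var(t):
    # Prolog convention: identifiers starting with an uppercase letter
    # are logic variables.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until reaching an unbound variable
    # or a non-variable term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    # Return an extended substitution making a and b equal,
    # or None if they cannot be unified. (Occurs check omitted
    # for brevity.)
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# unify f(X, b) with f(a, Y) yields the bindings {X: a, Y: b}
print(unify(("f", "X", "b"), ("f", "a", "Y"), {}))
```

Compiling such logic down to instructions for a functional language, rather than interpreting it, is what lifted the KLIPS figures quoted above.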
Qi 6.1 was also the first version of Qi to use the
preclude and include commands for managing large type
theories. Qi 6.1 was released in April 2005, along
with the web publication of Functional Programming in
Qi and at the same time as the creation of Lambda
Associates, which is archived here. At 6,500
lines of machine-generated Lisp, Qi 6.1 was three
times smaller than the SEQUEL of 1993.
The appearance of Qi was swiftly followed by a serious
illness that laid me up for 2006 and most of 2007.
Following a partial recovery in 2008, a factorising
version of the Qi compiler was introduced which made
Qi competitive with the fastest hand-compiled Lisp
code. The revised language, Qi II, corresponding to
the text Functional Programming in Qi, was
released in 2008.
The fourth iteration: Shen
The chain of events that precipitated Shen began with an
invitation to address the European Conference on Lisp
in Milan in 2009. The invitation came about because
the invited speaker, Kent Pitman, had himself been
taken ill with cancer, which meant, understandably,
that he had to withdraw. I was invited to take his
place. I was ill too and at first declined, since I was
healing at that time. But the organiser was frantic
and I finally accepted. My address proposed a
language like Qi but based on a primitive instruction
set that was so small that it could be translated
onto almost any platform. Qi had been implemented in
Common Lisp, which has over 700 system functions. But
of that large number, Qi only used 118 system
functions in its implementation.
I estimated that Qi could be implemented in an even
smaller instruction set of fewer than 50 primitive
functions, and that they should be so carefully
chosen as to be easily implemented and widely
portable. This instruction set defined a very simple
Lisp, closer in spirit to Lisp 1.5, the original Lisp
from which the gargantuan Common Lisp descended. This
micro-Lisp was later to be called KLambda.
This idea was the subject of my talk at the conference,
but it was ignored since my criticisms of Common Lisp
were not well received. Feeling I was wasting my
time, I left computer science in 2009 and journeyed
to India. Shen therefore remained a dream until my partner Dr
Pamesa called me back and asked that I
complete the work. She died before this was done.
In 2010 an appeal was launched to fund this research and
happily it succeeded. Armed with some means of
subsistence, I returned to designing and building the
new implementation during 2011. Eventually what
emerged in September 2011 was a clean, portable
language under a free license, implemented in 43
primitive KLambda instructions running initially
under Common Lisp. The new language was named Shen,
the highest form of energy in Taoism and the Chinese
word for 'spirit'.
Shen introduced quite a number of features not found in
Qi, including string and vector handling through
pattern matching, the capacity to read streams from
non-text files through an 8-bit reader, and an
advanced and powerful macro system. Designed for
portability, Shen was slower than Qi, which was
optimised for Lisp, but it vindicated itself rapidly:
within 18 months Shen had been ported to platforms
including Common Lisp and Ruby. During this process
KLambda acquired three more instructions.
The first edition describing Shen was written during the
first part of 2012 and was a fairly hasty rehash of Functional
Programming in Qi, adapted to meet the needs of
the new language standard. The resulting text was
accurate, but rather improvised during a period when
I was in and out of clinics.
After an improvement in 2013, I decided to rewrite the text
to cover more carefully the details of Shen and to
impart more flow and depth to the treatment of the
language. The more leisured approach, and the value of
being able to see the results of the early work on
Shen, allowed me to make additions and improvements to
the work, which was subsequently published.