parsing
we’ll also define a few helper functions to make reading text a little easier, without having to perform any bounds checks whenever we read tokens.
```js
export const eof = "end of file";

lexer.current = (state) => {
    return state.position < state.input.length
        ? state.input.charAt(state.position)
        : eof;
};

lexer.advance = (state) => ++state.position;
```
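these helpers hang off a `lexer` namespace object whose `init` isn't shown in this section. assuming it just pairs the input string with a cursor position (mirroring the parser's `init` later on), it presumably looks something like this:

```js
// assumed setup from earlier in the article: the lexer namespace object,
// and an init that wraps the input string together with a cursor position
export const lexer = {};

lexer.init = (input) => {
    return {
        input,
        position: 0,
    };
};
```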
our lexer will run in a loop, producing tokens until it hits the end of input or an error.
```js
export function lex(input) {
    let tokens = [];
    let state = lexer.init(input);
    while (true) {
        let start = state.position;
        let kind = lexer.nextToken(state);
        let end = state.position;
        tokens.push({ kind, start, end });
        if (kind == eof || kind == "error") break;
    }
    return tokens;
}
```
remember that error handling is important! we mustn’t forget that the user can produce invalid input - such as this string:
{example}
haku does not have curly braces in its syntax, so that’s clearly an error! reporting this to the user will be a much better experience than, perhaps… getting stuck in an infinite loop.
now for the most important part - that `lexer.nextToken` we used will be responsible for reading a token from the input and returning what kind of token it has read. for now, let's make it detect parentheses. of course, we also need to handle end of input - whenever our lexer runs out of characters to consume - as well as any characters we don't expect.
```js
lexer.nextToken = (state) => {
    let c = lexer.current(state);

    if (c == "(" || c == ")") {
        lexer.advance(state);
        return c;
    }

    if (c == eof) return eof;

    lexer.advance(state);
    return "error";
};
```
with all that frameworking in place, let’s test if our lexer works!
```js
export function printTokens(input) {
    let tokens = lex(input);
    for (let { kind, start, end } of tokens) {
        if (kind == "error") {
            let errorString = input.substring(start, end);
            console.log(`unexpected characters at ${start}..${end}: '${errorString}'`);
        } else {
            console.log(`${kind} @ ${start}..${end}`);
        }
    }
}

printTokens(`()((()))`);
```
```
( @ 0..1
) @ 1..2
( @ 2..3
( @ 3..4
( @ 4..5
) @ 5..6
) @ 6..7
) @ 7..8
end of file @ 8..8
```
…seems pretty perfect!
next up are whitespace and comments - so let's write another function that will skip over those.
```js
lexer.skipWhitespaceAndComments = (state) => {
    while (true) {
        let c = lexer.current(state);

        if (c == " " || c == "\t" || c == "\n" || c == "\r") {
            lexer.advance(state);
            continue;
        }

        if (c == ";") {
            while (lexer.current(state) != "\n" && lexer.current(state) != eof) {
                lexer.advance(state);
            }
            lexer.advance(state); // skip over the newline, too
            continue;
        }

        break;
    }
};
```
except instead of looking at whitespace and comments in the main token reading function, we'll do that outside of it, to avoid getting whitespace caught up in the actual tokens' `start..end` spans.

```js
export function lex(input) {
    let tokens = [];
    let state = lexer.init(input);
    while (true) {
        lexer.skipWhitespaceAndComments(state); // <--
        let start = state.position;
        let kind = lexer.nextToken(state);
        let end = state.position;
        tokens.push({ kind, start, end });
        if (kind == eof || kind == "error") break;
    }
    return tokens;
}
```
we’ll introduce a function that will tell us if a given character is a valid character in an identifier.
since S-expressions are so minimal, it is typical to allow all sorts of characters in identifiers - in our case, we’ll allow alphanumerics, as well as a bunch of symbols that seem useful. and funky!
```js
export const isIdentifier = (c) =>
    /^[a-zA-Z0-9+~!@$%^&*=<>+?/.,:\\|-]$/.test(c);
```
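the updated `nextToken` below also leans on a `lexer.advanceWhile` helper that isn't shown in this section. assuming it simply keeps consuming characters while a predicate holds for the current one, a minimal sketch could look like this:

```js
// a minimal sketch - keeps advancing while `predicate` holds for the current character.
// this terminates at end of input, because the eof marker fails both isIdentifier and isDigit.
lexer.advanceWhile = (state, predicate) => {
    while (predicate(lexer.current(state))) {
        lexer.advance(state);
    }
};
```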
now we can add identifiers to `nextToken`:

```js
lexer.nextToken = (state) => {
    let c = lexer.current(state);

    if (isIdentifier(c)) {
        lexer.advanceWhile(state, isIdentifier);
        return "identifier";
    }

    if (c == "(" || c == ")") {
        lexer.advance(state);
        return c;
    }

    if (c == eof) return eof;

    lexer.advance(state);
    return "error";
};
```
defining integers is going to be a similar errand to identifiers, so I’ll spare you the details and just dump all the code at you:
```js
export const isDigit = (c) => c >= "0" && c <= "9";

lexer.nextToken = (state) => {
    let c = lexer.current(state);

    if (isDigit(c)) {
        lexer.advanceWhile(state, isDigit);
        return "integer";
    }

    if (isIdentifier(c)) {
        lexer.advanceWhile(state, isIdentifier);
        return "identifier";
    }

    if (c == "(" || c == ")") {
        lexer.advance(state);
        return c;
    }

    if (c == eof) return eof;

    lexer.advance(state);
    return "error";
};
```
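as a quick sanity check (this particular test is mine, not from the article), running `printTokens` on a small arithmetic expression should now yield identifiers and integers alongside the parentheses:

```js
printTokens("(+ 1 2)");
// expected output:
//   ( @ 0..1
//   identifier @ 1..2
//   integer @ 3..4
//   integer @ 5..6
//   ) @ 6..7
//   end of file @ 7..7
```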
an amen break
💡 for the curious: here's why I implement lexers like this!
many tutorials will have you implementing lexers such that data is parsed into the language's data types. for instance, integer tokens would be parsed into JavaScript `number`s. I don't like this approach for a couple of reasons.
pre-parsing data like this pollutes your lexer code with wrangling tokens into useful data types. I prefer it if the lexer is only responsible for reading back strings.
implemented my way, it can concern itself only with chewing through the source string; no need to extract substrings out of the input or anything.
there’s also a performance boost from implementing it this way: lazy parsing, as I like to call it, allows us to defer most of the parsing work until it’s actually needed. if the token never ends up being needed (e.g. due to a syntax error,) we don’t end up doing extra work eagerly!
if that doesn’t convince you, consider that now all your tokens are the exact same data structure, and you can pack them neatly into a flat array.
if you’re using a programming language with flat arrays, that is. such as Rust or C.
I'm implementing this in JavaScript of course, but it's still neat not having to deal with mass `if`-osis when extracting data from tokens - you're always guaranteed a token will have a `kind`, `start`, and `end`.
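to illustrate (this tiny example is mine): every token our lexer produces has the exact same shape, no matter its kind.

```js
// lex("(hi 42)") produces tokens that all look alike - only the fields' values vary:
// { kind: "(",           start: 0, end: 1 }
// { kind: "identifier",  start: 1, end: 3 }
// { kind: "integer",     start: 4, end: 6 }
// { kind: ")",           start: 6, end: 7 }
// { kind: "end of file", start: 7, end: 7 }
```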
there are many parsing strategies we could go with, but in my experience you can’t go simpler than good ol’ recursive descent.
since a recursive descent parser works a lot like our lexer, we'll start off with a similar set of helper functions.
```js
parser.init = (tokens) => {
    return {
        tokens,
        position: 0,
    };
};

parser.current = (state) => state.tokens[state.position];

parser.advance = (state) => {
    if (state.position < state.tokens.length - 1) {
        ++state.position;
    }
};
```
note however that instead of letting `current` read out of bounds, we instead clamp `advance` to the very last token - which is guaranteed to be `end of file`.

our S-expression syntax boils down to the following EBNF grammar:
Expr = "integer" | "identifier" | List; List = "(" , { Expr } , ")";
we'll start by implementing the `Expr = "integer" | "identifier"` rule. parsing integers and identifiers is as simple as reading their single token, and returning a node for it:

```js
parser.parseExpr = (state) => {
    let token = parser.current(state);
    switch (token.kind) {
        case "integer":
        case "identifier":
            parser.advance(state);
            return { ...token };

        default:
            parser.advance(state);
            return {
                kind: "error",
                error: "unexpected token",
                start: token.start,
                end: token.end,
            };
    }
};
```
we'll wrap initialization and `parseExpr` in another function, which will accept a list of tokens and return a syntax tree, hiding the complexity of managing the parser state underneath.

```js
parser.parseRoot = (state) => parser.parseExpr(state);

export function parse(input) {
    let state = parser.init(input);
    let expr = parser.parseRoot(state);
    if (parser.current(state).kind != eof) {
        let strayToken = parser.current(state);
        return {
            kind: "error",
            error: `found stray '${strayToken.kind}' token after expression`,
            start: strayToken.start,
            end: strayToken.end,
        };
    }
    return expr;
}
```
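as a quick illustration (my own example, not from the article), feeding the parser a second top-level expression produces one of those stray-token errors:

```js
// two top-level expressions - the second one has nothing to attach to
parse(lex("1 2"));
// => { kind: "error", error: "found stray 'integer' token after expression", start: 2, end: 3 }
```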
this function also checks that there aren't any tokens after we're done parsing the root `Expr` production. it wouldn't be very nice UX if we let the user input tokens that didn't do anything!

now it's time to parse some lists. for that, we'll introduce another function, which will be called by `parseExpr` with an existing `(` token. its task will be to read as many expressions as it can, until it hits a closing parenthesis `)`, and then construct a node out of that.

```js
parser.parseList = (state, leftParen) => {
    parser.advance(state);

    let children = [];
    while (parser.current(state).kind != ")") {
        if (parser.current(state).kind == eof) {
            return {
                kind: "error",
                error: "missing closing parenthesis ')'",
                start: leftParen.start,
                end: leftParen.end,
            };
        }
        children.push(parser.parseExpr(state));
    }

    let rightParen = parser.current(state);
    parser.advance(state);

    return {
        kind: "list",
        children,
        start: leftParen.start,
        end: rightParen.end,
    };
};
```
and the last thing left to do is to hook it up to our `parseExpr`, in response to a `(` token:

```js
parser.parseExpr = (state) => {
    let token = parser.current(state);
    switch (token.kind) {
        case "integer":
        case "identifier":
            parser.advance(state);
            return { ...token };

        case "(":
            return parser.parseList(state, token); // <--

        default:
            parser.advance(state);
            return {
                kind: "error",
                error: "unexpected token",
                start: token.start,
                end: token.end,
            };
    }
};
```
now let’s try parsing an S-expression!
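the snippet below uses a `printTree` helper that only gets its pretty-printing form later in this section. to produce the JSON output shown here, an earlier version of it presumably just lexes, parses, and dumps the tree - something along these lines:

```js
// assumed first version of printTree - dumps the raw parse tree as indented JSON
export function printTree(input) {
    let tokens = lex(input);
    let tree = parse(tokens);
    console.log(JSON.stringify(tree, null, 2));
}
```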
printTree("(hello! ^^ (nested nest))");
{ "kind": "list", "children": [ { "kind": "identifier", "start": 1, "end": 7 }, { "kind": "identifier", "start": 8, "end": 10 }, { "kind": "list", "children": [ { "kind": "identifier", "start": 12, "end": 18 }, { "kind": "identifier", "start": 19, "end": 23 } ], "start": 11, "end": 24 } ], "start": 0, "end": 25 }
I don’t know about you, but I personally find the JSON output quite distracting and long. I can’t imagine how long it’ll be on even larger expressions!
to counteract that, let’s write an S-expression pretty printer:
```js
export function exprToString(expr, input) {
    let inputSubstring = input.substring(expr.start, expr.end);
    switch (expr.kind) {
        case "integer":
        case "identifier":
            return inputSubstring;
        case "list":
            return `(${expr.children.map((expr) => exprToString(expr, input)).join(" ")})`;
        case "error":
            return `<error ${expr.start}..${expr.end} '${inputSubstring}': ${expr.error}>`;
    }
}
```
let’s try something more complicated, with comments and such.
```js
export function printTree(input) {
    let tokens = lex(input);
    let tree = parse(tokens);
    console.log(exprToString(tree, input));
}

printTree(`
    (def add-two ; Add two to a number.
        (fn (n)
            (+ n 2)))
`);
```
```
(def add-two (fn (n) (+ n 2)))
```
looks like it works!
amen break, part two
here's a fun piece of trivia: I wrote a Nim S-expression parser for Rosetta Code way back in July 2019.
it's definitely not how I would write a parser nowadays. the overall structure is pretty similar, but the syntax tree representation is quite different - it doesn't use the lazy parsing trick I talked about before.
interpretation
we’ll again start off by defining a function that initializes our interpreter’s state.
right now there isn't really anything to initialize, but recall that we don't have our tokens parsed into any meaningful data yet, so we'll need access to the source string to do that.
```js
treewalk.init = (input) => {
    return { input };
};
```
in the meantime, let’s prepare a couple convenient little wrappers to run our code:
```js
export function run(input, node) {
    let state = treewalk.init(input);
    return treewalk.eval(state, node);
}

export function printEvalResult(input) {
    try {
        let tokens = lex(input);
        let ast = parse(tokens);
        let result = run(input, ast);
        console.log(result);
    } catch (error) {
        console.log(error.toString());
    }
}
```
so let’s patch those integers in!
this is where we'll need that source string of ours - we don't actually have a JavaScript `number` representation of the integers, so we'll need to parse them into place.

```js
treewalk.eval = (state, node) => {
    switch (node.kind) {
        case "integer":
            let sourceString = state.input.substring(node.start, node.end);
            return parseInt(sourceString);

        default:
            throw new Error(`unhandled node kind: ${node.kind}`);
    }
};
```
traditionally, in Lisp-like languages, a list expression always represents a function application, with the head of the list being the function to call, and the tail of the list being the arguments to apply to the function.
let’s implement that logic then!
```js
export const builtins = {};

treewalk.eval = (state, node) => {
    switch (node.kind) {
        case "integer":
            let sourceString = state.input.substring(node.start, node.end);
            return parseInt(sourceString);

        case "list": // <--
            let functionToCall = node.children[0];
            let builtin =
                builtins[state.input.substring(functionToCall.start, functionToCall.end)];
            return builtin(state, node);

        default:
            throw new Error(`unhandled node kind: ${node.kind}`);
    }
};
```
we could try this out now, except we don’t actually have any builtins! so I’ll add a few in, so that we can finally perform our glorious arithmetic:
```js
function arithmeticBuiltin(op) {
    return (state, node) => {
        let result = treewalk.eval(state, node.children[1]);
        for (let i = 2; i < node.children.length; ++i) {
            result = op(result, treewalk.eval(state, node.children[i]));
        }
        return result;
    };
}

builtins["+"] = arithmeticBuiltin((a, b) => a + b);
builtins["-"] = arithmeticBuiltin((a, b) => a - b);
builtins["*"] = arithmeticBuiltin((a, b) => a * b);
builtins["/"] = arithmeticBuiltin((a, b) => a / b);
```
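with those in place, we can finally evaluate something. this particular test is my own, but given the builtins above, an expression like the one below should print 14:

```js
// (* 3 4) evaluates to 12 first, then (+ 2 12) gives 14
printEvalResult("(+ 2 (* 3 4))");
// expected output: 14
```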
a brief intermission
all we have to do is swap out the evaluation kernel…
```js
import { getKernel } from "treehouse/components/literate-programming/eval.js";

let kernel = getKernel();

export const defaultKernelInit = kernel.init;
kernel.init = () => {
    return defaultKernelInit();
};

export const defaultKernelEvalModule = kernel.evalModule;
kernel.evalModule = async (state, source, language, params) => {
    if (language == "haku") {
        printEvalResult(source);
        return true;
    } else {
        return await defaultKernelEvalModule(state, source, language, params);
    }
};
```