LiquiDoc is a documentation build utility for true single-sourcing of technical content and data in a Jekyll/Asciidoctor-based toolchain. It is especially suited for documentation projects with various required output formats from complex, single-sourced codebases.
Broadly, LiquiDoc is intended for any project that generates technical content for use in documentation, user interfaces, and even back-end code. The highly configurable command-line utility (and fledgling Ruby gem) engages the Liquid template engine to parse complex data into rich text output.
LiquiDoc’s primary source repository is on GitHub. Use the Issues feature for support requests.
This manual covers LiquiDoc version 0.12.x.
Data sources can be flat files in XML, JSON, CSV, and our preferred human-editable format: YAML. LiquiDoc can ingest semi-structured data from any plaintext file by pattern matching with regular expressions.
LiquiDoc can further coordinate build operations, including rendering static websites using Jekyll and Asciidoctor, but these dependencies are likely to become optional by the 1.0 release.
Installation
Your system must be running Ruby 2.3 or later (2.6+ recommended). See rubyinstaller.org if you’re on Windows, or preferably use Microsoft’s Windows Subsystem for Linux on Windows 10 or 11 and then follow these instructions as a Linux user.
We strongly recommend all macOS and Linux users employ a Ruby version manager such as rbenv (preferred) or RVM.
The fastest way to install is by running gem install liquidoc, but we strongly recommend using Bundler to manage all Ruby dependencies.
1. Create a file called Gemfile in your project’s root directory.
2. Populate the file with LiquiDoc dependencies.

   A LiquiDoc project Gemfile:

   source 'https://rubygems.org'
   gem 'liquidoc'

3. Open a terminal (command prompt). If you don’t have a preferred terminal application, use your OS’s search feature and look for terminal.
4. Navigate to your project root directory. Example:

   cd Documents/workspace/my_project

5. Run bundle install to prepare dependencies. If you do not have Bundler installed, Ruby will tell you. Enter gem install bundler, let Bundler install, then repeat this step.
LiquiDoc should now be ready to run with Bundler support, which is the strongly recommended approach.
Basic Parsing
Give LiquiDoc (1) any proper YAML, JSON, XML, or CSV (with header row) data file and (2) a template mapping any of the data to token variables with Liquid markup — LiquiDoc returns STDOUT feedback or writes a new file (or multiple files) based on that template.
bundle exec liquidoc -d _data/sample.yml -t _templates/liquid/sample.asciidoc -o _output/sample.adoc
This single-action invocation of LiquiDoc ingests data from the YAML file sample.yml, reads the Liquid-formatted template sample.asciidoc, and generates the AsciiDoc-formatted file sample.adoc.
Add --verbose to any liquidoc command to see the steps the utility is taking.
To ingest data from multiple files, pass multiple paths to the -d/--data option, separated by commas.
bundle exec liquidoc -d _data/source1.yml,_data/source2.json -t my_template.html -o my_artifact.html
In this example, data from source1.yml will be passed to the template in an object called source1, and source2.json will be ingested as the source2 object.
{{ source1.param1 }}
{% for item in source2 %}
{{ item.name }}
{% endfor %}
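For illustration, hypothetical source files shaped to satisfy that template might look like this (file contents invented for this example).

_data/source1.yml:

```yaml
param1: Some value
```

_data/source2.json:

```json
[
  { "name": "First item" },
  { "name": "Second item" }
]
```

With these inputs, the template would output "Some value" followed by each item’s name in turn.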
Basic Configuration
The best way to use LiquiDoc is with a configuration file. This not only makes the command line much easier to manage (requiring just a configuration file path argument), it also adds the ability to perform more complex build routines and manage them with source control.
Here is a very simple build routine instructed by a LiquiDoc config:
- action: parse (1)
data: source_data_file.json (2)
builds: (3)
- template: liquid_template.html (4)
output: _output/output_file.html (5)
- template: liquid_template.markdown (4)
output: _output/output_file.md (5)
(1) The top-level - denotes a new, consecutively executed “step” in the build. The action: parameter determines what type of action this step will perform. The options are parse, migrate, render, deploy, and execute.
(2) If the data: setting’s value is a string, it must be the filename of a format automatically recognized by LiquiDoc: .yml, .json, .xml, or .csv. Otherwise, data: must contain subordinate settings for file: and type:.
(3) The builds: section contains a list of procedures to perform on the data. It can include as many subroutines as you wish to perform. This one instructs two builds.
(4) The template: setting should be a Liquid-formatted file (see Templating with Liquid).
(5) The output: setting is a path and filename where you wish the output to be saved. It can also be stdout to write to console.
When you have established a configuration file, you can call it with the -c option on the command line.
bundle exec liquidoc -c _configs/cfg-sample.yml --stdout
Repeat without the --stdout flag, and you’ll find the generated files in _output/, as defined in the configuration.
Parse Actions
The primary type of action performed by LiquiDoc during a build step is parsing semi-structured data into any flat format desired.
Data Sources
Valid data sources come in a few different types.
There are the built-in data types (YAML, JSON, XML, CSV) versus the free-form type (files processed using regular expressions, designated by the regex data type).
There is also a divide between simple one-record-per-line data types (CSV and regex), which produce one set of parameters for every line in the source file, versus nested data types that can reflect far more complex structures.
Native Nested Data (YAML, JSON, XML)
The native nested formats are actually the most straightforward.
So long as your filename has a conventional extension, you can just pass a file path for this setting. That is, if your file ends in .yml, .json, or .xml, and your data is properly formatted, LiquiDoc will parse it appropriately.
- action: parse
data: _data/source_data_file.json
builds:
- template: _templates/liquid_template.html
output: _output/output_file.html
For standard-format files that have nonstandard file extensions (for example, .js rather than .json for a JSON-formatted file), you must declare a type explicitly.
- action: parse
data:
file: _data/source_data_file.js
type: json
builds:
- template: _templates/liquid_template.html
output: _output/output_file.html
Once LiquiDoc knows the right file type, it will parse the file into a Ruby object for further processing.
CSV Data
Data ingested from CSV files will use the first row as key names for columnar data in the subsequent rows, as shown below.
name,description,default,required
enabled,Whether project is active,,true
timeout,The duration of a session (in seconds),300,false
The above source data, parsed as a CSV file, will yield an array of hashes. Each array item is a structure — what Ruby calls a hash — representing a row from the source file (except the first row, which establishes parameter keys). As represented in the CSV example above, if the structure contains more than one key-value pair (more than one “column” in the source), all such pairs will be siblings, not nested or hierarchical.
data[0].name #=> enabled
data[0].description #=> Whether project is active
data[0].default #=> nil
data[0].required #=> true
data[1].name #=> timeout
data[1].description #=> The duration of a session (in seconds)
data[1].default #=> 300
data[1].required #=> false
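The array-of-hashes shape described above can be sketched with Ruby’s standard CSV library. This is an illustration of the data shape only, not necessarily how LiquiDoc implements ingestion internally.

```ruby
require 'csv'

# The CSV sample from above, inlined for a self-contained demonstration.
source = <<~CSV
  name,description,default,required
  enabled,Whether project is active,,true
  timeout,The duration of a session (in seconds),300,false
CSV

# Parse with the header row supplying keys, yielding an array of hashes --
# the same shape handed to Liquid templates.
rows = CSV.parse(source, headers: true).map(&:to_h)

puts rows[0]['name']     # => enabled
puts rows[1]['default']  # => 300
```

Note that values arrive as strings (or nil for empty fields); whether a consumer treats "true" as a boolean is up to the template or downstream code.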
Unstructured Data Ingest
Unstructured data files can be ingested as well, as long as records are delineated by lines (as with CSV) and each line meets a consistent pattern we can “scrape” for data to organize. This method generates arrays of structures similarly to the CSV approach.
Unstructured records are parsed using regular expression (“regex”) patterns. Any file organized with one record per line may be consumed and parsed by LiquiDoc, provided you tell the parser which variables to extract from where. The parser reads each line individually, applying your regex pattern to extract data from named groups, then storing the results as variables for the associated parsing action.
Learn regular expressions
If you deal with docs but are not a regex user, become one. Regular expressions are incredibly powerful and can save hours of error-prone manual work, such as complex find-and-replace operations.
A_B A thing that *SnASFHE&"\|+1Dsaghf true
G_H Some text for &hdf'" 1t`F false
- action: parse
data:
file: _data/sample.free
type: regex
pattern: ^(?<code>[A-Z_]+)\s(?<description>.*)\s(?<required>true|false)\n
builds:
- template: _templates/liquid_template.html
output: _output/output_file.html
Let’s take a closer look at that regex pattern.
^(?<code>[A-Z_]+)\s(?<description>.*)\s(?<required>true|false)\n
We see the named groups code, description, and required. This maps nicely to a new array.
data[0].code #=> A_B
data[0].description #=> A thing that *SnASFHE&"\|+1Dsaghf
data[0].required #=> true
data[1].code #=> G_H
data[1].description #=> Some text for &hdf'" 1t`F
data[1].required #=> false
Free-form/regex parsing is obviously more complicated than the other data types. Its use case is usually when you simply cannot control the form your source takes.
The regex type is also handy when the content of some fields would be burdensome to store in conventional semi-structured formats like those natively parsed by LiquiDoc. This is the case for jumbled content containing characters that require escaping, so you can store source matter like that from the example above in the rawest possible form.
AsciiDoc Attributes Ingest
The attribute data set in any proper AsciiDoc document can be used as a source. This method can be useful for single-sourcing data that must appear in a README that cannot include such data from another file.
AsciiDoc attributes objects are among the simplest to work with. They consist only of flat key-value pairs. Values must be strings or numbers.
- action: parse
data: README.adoc
builds:
- output: readme-attributes.yml
- output: readme-attributes.json
The ingest process uses the final state of an attribute parsed by the Asciidoctor Ruby API. This means you can use cumulative settings, whereby you can use attribute tokens to make up the value of subsequently set attributes.
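For instance, a hypothetical README header (attribute names invented for illustration) shows how cumulative attributes resolve before ingest:

```asciidoc
= My Project
:version: 3.1
:download_url: https://example.com/releases/v{version}
```

Because the final state of each attribute is what gets ingested, download_url would arrive as https://example.com/releases/v3.1.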
Default Output Formats (Direct Conversions)
LiquiDoc can directly convert any supported semi-structured data input format to either YAML or JSON output.
Simply provide no template parameter, and make sure the output file has a proper extension (.yml or .json).
- action: parse
data: _data/testdata.xml
builds:
- output: _build/frontend/testdata.json
This feature is in need of validation. XML and CSV output will be added in a future release if direct conversions prove useful.
For more on Liquid templating, see Templating with Liquid.
Passing Additional Variables
In addition to (or instead of) data files, parse operations accept fixed variables and environment variables.
Fixed/Config Variables
Fixed variables are defined using a per-build structure called variables: in the config file. Each build operation can accept a distinct set of variables.
- action: parse
data: schema.yml
builds:
- name: parse-basic-nav
template: _templates/side-nav.html
output: _output/side-nav-basic.html
variables:
product:
edition: basic
- name: parse-premium-nav
template: _templates/side-nav.html
output: _output/side-nav-prem.html
variables:
product:
edition: premium
This configuration will use the same data and templates to generate two distinct output files.
Each build uses an identical Liquid template (side-nav.html) to parse its distinct side-nav-<edition>.html file. Inside that template, we might find a block of Liquid code hiding some navigation items from the basic edition, and vice versa.
<li><a href="home">Home</a></li>
<li><a href="dash">Dashboard</a></li>
{% if vars.product.edition == "basic" %}
<li><a href="upgrade">Upgrade!</a></li>
{% elsif vars.product.edition == "premium" %}
<li><a href="billing">Billing</a></li>
{% endif %}
This portion of the example config presses two versions of the Liquid template side-nav.html into two different nav menus, either to be served on two parallel sites or on one site with the ability to select front-end elements depending on user status. In this example, only the menu shown to premium users will contain the billing link; basic users will see an upgrade prompt.
Environment/Execution Variables
The other way to pass variables into builds is during the execution of the LiquiDoc gem. When performing a configured build, pass config variables to a dynamic configuration file in order to trigger different settings or routines, as documented in Dynamic LiquiDoc Build Configurations.
Passing Variables to Direct Conversions
Data being converted directly to a default output format is also eligible for injection of variables from the command line or config file.
bundle exec liquidoc -d data/original.xml -o _build/converted.json -v env=staging -v lang=en-us
The previous example command is functionally identical to the following configuration step.
- action: parse
data: data/original.xml
builds:
- output: _build/converted.json
variables:
env: staging
lang: en-us
If original.xml contains one key-value pair (<test>true</test>), the resulting JSON will situate the additional variables alongside it.
{
"test": true,
"env": "staging",
"lang": "en-us"
}
Multiple File Ingest
Parse actions can ingest an indefinite number of data sources, with some restrictions.
The parse action’s data: parameter can accept an array of paths to any supported semi-structured data format, given the standard file extensions (.csv, .yml, .json, .xml). Any other file, whether in a nonstandard format or with a nonstandard file extension, must first be converted to a standard format.
- action: parse
data:
- lib/strings/common-en.json
- data/app-strings.yml
- lang/settings.xml
builds:
- template: _templates/env-config.liquid.yaml
output: target/env-config-lang.yml
variables:
language:
short: en
full: English
- template: all-strings.liquid.json
output: all-strings.json
In this example, we imagine generating a couple of files useful to different parts of a documentation app, including common strings and language settings. In each Liquid template, we have access to several data objects.
The vars. scope carries anything passed as variables: in the build step. For example, {{vars.language.full}} would resolve to English in this example build.
For the ingested files, a scope is named after the source filename, minus its extension. In the above example, we could access variables from common-en.json as {{common-en.keyname}}, and so forth.
Base filenames of files ingested in the same parse action must be distinct.
LiquiDoc’s multi-file datasource ingest works very similarly to Jekyll’s templating, where always-available data objects are derived from files in the data directory. The key difference is that files must be explicitly listed for each parse action in order for their data to be available.
This functionality also resembles the multi-file attributes ingest in render operations, which uses the same parameter, data:. But whereas attribute-file ingest accepts a sub-data block indicator, that feature would be redundant in parse operations and thus is not available. Entire files are ingested and passed to the designated templates during parsing.
Converting Multiple Data Files to a Default Format
Just as LiquiDoc will objectify a series of data files for a templated conversion, it can also merge numerous files into a unified data object and output it as a single JSON or YAML file. The resulting data file will carry an object named for each file, as with standard multi-file ingest, and any passed variables are situated at the root.
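A sketch of such a merging step might look like the following (file paths and the variable are invented for illustration; note there is no template: setting, and the output extension selects JSON):

```yaml
- action: parse
  data:
    - data/app-strings.yml
    - lib/strings/common-en.json
  builds:
    - output: _build/all-strings.json
      variables:
        generated_by: liquidoc
```

The resulting JSON would contain app-strings and common-en objects named for their source files, with generated_by sitting at the root.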
Output
After this parsing, files are written in any of the given output formats, or else just written to console as STDOUT (when you add the --stdout flag to your command or set output: stdout in your config file).
Liquid templates can be used to produce any plaintext format imaginable.
Just format valid syntax with your source data and Liquid template, then save
with the proper extension, and you’re all set.
Migrate Actions
During the build process, different tools handle file assets variously, so your images and other embedded files are not always where they need to be relative to the current procedure. Migrate actions copy resource files to a temporary/uncommitted directory during the build procedure so they can be readily accessed by subsequent steps.
In addition to designating action: migrate, migrate operations require just a few simple settings.
- action: migrate
source: index.adoc
target: _build/
- action: migrate
source: assets/images
target: _build/img
options:
inclusive: false
- action: migrate
source: tmp/{{imported_file}}.adoc
target: _build/{{portal_path}}/{{imported_file}}.adoc
options:
missing: warn
The second action step above copies all the files and folders in assets/images and adds them to _build/img. It will only recreate the contents of the source directory, not the directory path itself, because the inclusive: option is set to false (its default value is true). When both the source and target paths are directories and inclusive is true, the files are copied to target/source/. When inclusive is false, they copy to target/.
Individual files must be listed in individual steps, one per step, as in the first step above.
In case of a missing source directory or file to be migrated, the default behavior is to exit the build operation (missing: exit). This can be overridden so that the migrate action is skipped when the source is missing. Setting the option missing: warn logs a warning to console, and missing: skip will only print a warning under --verbose operations.
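For example, a hypothetical migrate step (invented paths) that tolerates an absent source file:

```yaml
- action: migrate
  source: tmp/generated-notes.adoc
  target: _build/
  options:
    missing: skip
```

If tmp/generated-notes.adoc does not exist, the build continues, mentioning the skipped step only under --verbose.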
Render Actions
Presently, all render actions convert AsciiDoc-formatted source files into rich-text documents, such as PDFs and HTML pages. LiquiDoc uses Asciidoctor’s Ruby engine and various other plugins to generate output in a few supported formats.
First let’s look at a render action configuration step.
- action: render
source: book-index.adoc
data: _configs/asciidoctor.yml
builds:
- output: _build/publish/codewriting-book-draft.pdf
theme: theme/pdf-theme.yml
- output: _build/publish/codewriting-book-draft.html
theme: theme/site.css
Each action for rendering a conventionally structured book-style document requires an index: the primary AsciiDoc file to process, labeled source: in our configuration. This file can contain all of your AsciiDoc content, if you wish. Alternatively, it can be made up entirely of include:: macros, creating a linear map of your document’s contents, which may themselves be more AsciiDoc files, code examples, and so forth.
= This File Can Contain Regular AsciiDoc Markup
include::chapter-01.adoc[]
include::code-sample.rb[tags="booksample"]
include::code-sample.js[lines="22..33"]
After the title line, the first macro instruction in this example will embed the entire file chapter-01.adoc, parsing and rendering its AsciiDoc-formatted contents in the process.
The second instruction extracts part of the file code-sample.rb and embeds it here. Inside code-sample.rb, content is tagged with comment code to mark what we wish to extract. In the case of a Ruby file, you would expect to find code like the following in the source.
# tag::booksample[]
def exampleblock
puts "This is an example for my book."
end
# end::booksample[]
For AsciiDoc source code, you would use the // comment notation.
// tag::booksample[]
purpose::
to demonstrate inclusion.
// end::booksample[]
The third instruction in our example AsciiDoc index file was simply include::code-sample.js[lines="22..33"]. This dangerous little bugger extracts a fixed span of code lines, as designated.
Static Site Render Operations
Static-site generators are critical tools in just about any docs-as-code infrastructure. LiquiDoc starts with Jekyll support, with more generators (Awestruct and possibly Grain) to come; each generator added will maintain all of its native capabilities and do most of the heavy lifting.
LiquiDoc’s role is primarily to help your preferred SSG handle your source in ways consistent with any other rendering and file managing your docs codebase requires. For example, the jekyll-asciidoc extension that enables Jekyll builds to parse AsciiDoc markup only honors attributes set in Jekyll config files. Therefore, just before triggering the build, LiquiDoc loads all the accumulated AsciiDoc parameters into a new config file from which Jekyll draws AsciiDoc attribute assignments.
Jekyll
A Jekyll render operation calls bundle exec jekyll build from the command line pretty much the way you would do it manually. You still need a Jekyll configuration file with the usual settings in it. This is established in your build-config block.
- action: render
data: globals.yml
builds:
- backend: jekyll
properties:
files:
- _configs/jekyll-global.yml
- _configs/jekyll-portal-1.yml
arguments:
destination: build/site/user-basic
attributes:
portal_term: Guide
The backend: designation of jekyll is required, and at least one file under properties:files: is strongly encouraged for proper Jekyll behavior. LiquiDoc will write an additional YAML file containing all of the Asciidoctor attributes, to be appended to this list when the build command is run. This captures attributes offered up in the action-level data: file and in the attributes: section of the build step. The arguments: block is made up of key-value parameters that establish or override any Jekyll config settings.
The action-level parameter source: is left blank in this example. This setting cannot be used to designate a Jekyll source path. If the above action had a second build step, such as a single output doc, the source would have relevance as the index file for that document.
Setting AsciiDoc Attributes
For basic render actions, the source: file and other .adoc files determine most of the rest of the content source files (if any) using AsciiDoc includes.
But Asciidoctor renderings can be configured and manipulated by attribute settings at other stages.
Basically, we are trying to maximize our readiness to ingest document data and build properties from a wide range of sources.
This way inline substitutions can be made out of data living outside the source tree of any particular document, passed into the document build in the form of YAML data converted into — you guessed it — AsciiDoc attributes.
AsciiDoc attributes are not the same as Asciidoctor configuration properties. While both kinds create substitutions that are expressed the same way ({property_name}), they are set differently in your LiquiDoc configuration.
LiquiDoc provides several means for adding attributes to your documents, in addition to the ways you might be used to setting attributes (inside your docfiles and command line). They are listed below in the order of assignment/substitution. Therefore, an identical value defined explicitly in each subsequent space will overwrite any set in the previous stages.
The order of substitution is as follows.
After that, we’ll demonstrate even more ways to ingest datasets.
AsciiDoc document inline
The most common way to set variables is inside your AsciiDoc source files — typically at the top of your index.adoc file or the equivalent. Any parameters set there will cascade through your included files for parsing. This is a good place to establish defaults, but they can be overwritten by the other four means of setting AsciiDoc attributes.

Example — Setting AsciiDoc attributes inline:

:some_var: My value
:imagesdir: ./img
Document data file
A YAML-formatted data file containing a stack of key-value pairs can be passed to Asciidoctor.

Example AsciiDoc attributes data file:

imagesdir: assets/images
basedir: _build
my_custom_var: Some text, can include spaces and most punctuation

This file must be called out in your configuration using the top-level data: setting.

Example AsciiDoc data file setting for attributes ingest:

- action: render
  source: my_index.adoc
  data: _data/asciidoctor.yml
  builds:
    - output: myfile.html

You may also pass multiple files and/or just a sub-block of a given file (a named variable with its own nested data). See below.
Per-build properties files
With document-wide attributes set, we begin overwriting them on a per-build basis for different renderings of that same source document. For starters, LiquiDoc can extract attributes from still more data files at this stage, like so:

Example — Attribute extraction from build-specific data files:

- output: _build/publish/manual-europe.pdf
  properties:
    files: _conf/jekyll.yml,_data/europe.yml
- output: _build/publish/manual-china.pdf
  properties:
    files: _conf/jekyll.yml,_data/china.yml

The properties:files setting can take the form of a comma-delimited list or a YAML array, and it can filter to specific subdata (see below). These per-build properties files are meant to carry document settings, so for static site renderings (e.g., Jekyll), they should be YAML files formatted for Jekyll configuration reads.
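The same per-build settings shown above can be written with a YAML array instead of the comma-delimited string (equivalent forms):

```yaml
- output: _build/publish/manual-europe.pdf
  properties:
    files:
      - _conf/jekyll.yml
      - _data/europe.yml
```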
Per-build in LiquiDoc config
So if your document is a book, and your builds are an HTML edition and a PDF edition, you can pass distinct settings to each.

Example per-build attribute settings in config file:

- action: render
  source: my_book.adoc
  data: _data/asciidoctor.yml
  builds:
    - output: my_book.html
      attributes:
        edition: HTML
    - output: my_book.pdf
      attributes:
        edition: PDF
    - output: my_book_special.pdf
      attributes:
        edition: Special

Imagine this affecting content in the book file.

Example book index with variable content:

= My Awesome Book: {edition} Edition

include::chapter-1.adoc[]
include::chapter-2.adoc[]

ifeval::["{edition}" == "Special"]
include::chapter-3.adoc[]
endif::[]

The AsciiDoc code above that might be least familiar to you is the conditional code, represented by the ifeval::[] and endif::[] markup. Here we see how passing attributes at the build-iteration level gives us all kinds of cool powers. Not only are we setting the subtitle with a variable; if we’re building the special edition, we add a chapter the other two editions ignore.
Command-line arguments
There is yet a way to override all of this, which is also handy for testing variables out without editing any files: pass arguments via the -a option on the command line. The -a option flag accepts an argument in the format of key=value, where key is the name of your attribute and value is your optional assignment for that attribute. You may pass as many attributes as you like this way, up to the capacity of your shell’s command line, which is probably plenty.

Example — Setting global build attributes on the CLI:

bundle exec liquidoc -c _configs/my_book.yml -a edition='Very Special NSFW'
More Ways to Ingest Attributes Data

Multiple attribute files
You may also specify more than one attribute file by separating filenames with commas. They will be ingested in order.

Specific subdata
You may specify a particular block in your data file by designating it with a colon.

Example — Listing multiple data files & designating a nested block:

data:
  - asciidoc.yml
  - product.yml:settings.attributes

Example — Designating a data block, alternate format:

properties:
  files: asciidoc.yml,product.yml:settings.attributes

Here we see the comma used as a delimiter between files and the colon as an indicator that a block designator follows. In this case, the render action will load the settings.attributes block from the product.yml file.

Example — Designating data blocks within a properties file:

properties:
  files:
    - countries.yml:cn
    - edition.yml:enterprise.premium

In this last case, we’re passing locale settings for a premium edition targeted to a Chinese audience.
Render Build Settings Overview
Certain AsciiDoc/Asciidoctor settings are determinant enough that they can be set using parameters in the build config. Establishing these as per-build settings in your config file will override anywhere else they are set, except on the command line.
These settings do not necessarily have 1:1 correspondence to AsciiDoc(tor) attributes.
output
The filename for saving rendered content. This build setting is required for render operations that generate a single file. Static site generation renders, however, target a directory set in the SSG’s config.

backend
The backend determines the rendering context. When building single-file output, the backend is typically determined from the output: filename and/or the doctype:. Some renderers, such as Jekyll, require specific backend designations (jekyll). Valid options are html5, pdf, and jekyll, with more to come.

doctype
Overrides the Asciidoctor doctype attribute. Valid values are:

book
Generates a book-formatted document in PDF, HTML, or ePub.

article
Generates an article-formatted document in PDF, HTML, or ePub.

manpage
Generates Linux man page format.

deck
Generates an HTML/JavaScript slide deck. (Not yet implemented.)

style
Points either to a YAML configuration for PDF styles or a CSS stylesheet for HTML rendering.

variables
Designates one or more nested variables alongside ingested data in parse actions.

properties
Designates a file or files for settings and additional explicit configuration at the build level for render actions.
Algolia Search Indexing for Jekyll
If you’re using Jekyll to build sites, LiquiDoc makes indexing your files with the Algolia cloud search service a matter of configuration, not development. The heavy lifting is performed by the jekyll-algolia plugin, but LiquiDoc can handle indexing even a complex site by using the same configuration that built your HTML content (which is what Algolia actually indexes).
You will need a free community (or premium) Algolia account to take advantage of Algolia’s indexing service and REST API. Simply create a named index, then visit the API Keys screen to collect the rest of the info you’ll need to get going.
Two hard-coding steps are required to prep your source to handle Algolia index pushes.
1. Add a block to your main Jekyll configuration file.

   Example Jekyll Algolia configuration:

   algolia:
     application_id: 'your-application-id' (1)
     search_only_api_key: 'your-search-only-api-key' (2)
     extensions_to_index: [adoc] (3)

   (1) From the top bar of your Algolia interface.
   (2) From the API Keys screen of your Algolia interface.
   (3) List as many extensions as apply, separated by commas.

2. Add a block to your build config.

   - action: render
     data: globals.yml
     builds:
       - backend: jekyll
         properties:
           files:
             - _configs/jekyll-global.yml
             - _configs/jekyll-portal-1.yml
         arguments:
           destination: build/site/user-basic
         attributes:
           portal_term: Guide
         search:
           index: 'portal-1'
The index: parameter is for the name of the index you are pushing to. (An Algolia “app” can have multiple “indices”.) This entry configures but does not trigger an indexing operation.
Indexing is invoked by command-line flags. Add --search-index-push or --search-index-dry along with the --search-api-key='your-admin-api-key-here' argument in order to invoke the indexing operation. The --search-index-dry flag merely tests content packaging, whereas --search-index-push connects to the Algolia REST API and attempts to push your content for indexing and storage.
bundle exec liquidoc -c _configs/build-docs.yml --search-index-push --search-index-api-key='90f556qaa456abh6j3w7e8c10t48c2i57'
This operation performs a complete build, including each render operation, before the Algolia plugin processes content and pushes each build to the indexing service, in turn.
To add modern site search for your users, add Algolia’s InstantSearch functionality to your front end!
Deploy Actions
Mainstream deployment platforms are better suited to tying all your operations together, but we plan to bake a few common operations in to help you get started. For true build-and-deployment control, consider build tools such as Make, Rake, and Gradle, or deployment tools like Travis CI, CircleCI, and Jenkins.
Jekyll Serve
For testing purposes, however, spinning up a local webserver with the same stroke that you build a site is pretty rewarding and time saving, so we’ll start there.
For now, this functionality is limited to adding a --deploy
flag to your liquidoc
command.
This will attempt to serve files from the destination:
set for the associated Jekyll build.
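For example, a render step whose Jekyll build sets a destination might look like this (a sketch; paths hypothetical, nesting per the Algolia example earlier):

```yaml
- action: render
  data: globals.yml
  builds:
  - backend: jekyll
    properties:
      files:
      - _configs/jekyll-global.yml
      arguments:
        destination: build/site/user-basic
```

Running bundle exec liquidoc -c _configs/build-docs.yml --deploy would then attempt to serve the built site from build/site/user-basic.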
LiquiDoc-automated deployment of Jekyll sites is both limited and untested under nonstandard conditions. Non-local deployment should be handled by external continuous integration/delivery (CI/CD) tools.
Execute Actions
LiquiDoc lets you invoke shell commands from within a build routine.
A basic execute
action requires just two parameters: action: execute
and command: <shell command>
.
Because shell commands can be dangerous, LiquiDoc will warn you if your config contains any, listing them and prompting you to approve.
To override this, add --unsafe to your liquidoc command.
The command:
value is a string identical to any one-line shell command, which will be executed in the system’s current shell environment (probably Bash).
- action: execute
  command: git checkout release/docs/3.1.x
An execute action with no options listed will be performed, with results printed to console, if applicable.
The above command would generate Git feedback, whereas a successful rm somefile.txt
command would not.
Failed commands will not cause the LiquiDoc routine to halt; LiquiDoc will simply move on to the next stage.
To suppress output, add stdout: false
to options:
.
- action: execute
  command: git checkout release/docs/3.1.x
  options:
    stdout: false
Output to File
To capture the output of a given command, add options:
to the execute
instructions.
Writing results to a file is enabled with the outfile:
option.
- action: execute
  command: ls -l imports/product3/
  options:
    stdout: true
    outfile:
      path: _build/pre/products3_dirlist.stdout
      prepend: "perms\tqty\tuser\tgroup\tsize\tmonth\tday\ttime\tpath"
      append: EOF
When writing results to an outfile, optionally insert text at the top or bottom of your new file using prepend:
and append:
settings.
perms qty user group size month day time path
total 96
-rw-r--r-- 1 brian antifa 30314 Jan 8 13:16 install.adoc
-rw-r--r-- 1 brian antifa 1833 Jan 8 13:16 intro.adoc
-rw-r--r-- 1 brian antifa 52 Jan 8 13:16 overview.adoc
-rw-r--r-- 1 brian antifa 5125 Jan 8 13:16 resources.adoc
EOF
When the outfile: option is in use, the stdout option defaults to false.
Set it to true to both capture output in a file and print it to screen.
Error Handling
The status of each command is tracked, and errors that result in an exit status of 1
can optionally halt the entire LiquiDoc routine.
To enable this, you must add an error:
block to the options, with a child parameter response: exit
, as shown below.
The default behavior is to continue processing (response: ignore
).
- action: execute
  command: git checkout release/docs/3.1.x
  options:
    error:
      response: exit
      message: Failed to checkout branch; Make sure local head is clean!
You may optionally provide a second child, message:, followed by the string users will see when they encounter an error here.
If the command throws an error, this message will appear even if you choose not to exit processing.
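For instance, a sketch of a non-halting configuration that still reports failures (command and message are hypothetical):

```yaml
- action: execute
  command: rsync -a _build/site/ /var/www/docs/
  options:
    error:
      response: ignore
      message: Sync failed, but the build routine will continue.
```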
Templating with Liquid
Shopify’s open-source Liquid templating language and engine are used for parsing complex variable data in plaintext markup, typically to generate iterated (looping) output. For instance, a data structure of glossary terms and definitions might need to be looped over and pressed into more publish-ready markup, such as Markdown, AsciiDoc, reStructuredText, LaTeX, or HTML.
Any valid Liquid-formatted template is accepted, in the form of a text file with any extension.
Data Objects
Ingested data objects can take the form of serialized arrays or non-serialized structures.
For data sourced in CSV format or extracted through regex source parsing, all data is passed to the Liquid template parser as an array object called data
, containing one or more “rows” of data.
This also applies to YAML and JSON data that are serialized (array-formatted) at their root.
- Item1
- Item2
[
"Item1",
"Item2"
]
Once ingested, such data can be expressed using the data.
scope.
{% for i in data %}
* {{i}}
{% endfor %}
XML files typically contain arrays/collections nested in parent elements.
<items>
<item>Item1</item>
<item>Item2</item>
</items>
For nested arrays of this type, the key name is that of the containing element.
{% for i in items %}
* {{i}}
{% endfor %}
Data sourced as non-serialized structures in YAML, XML, or JSON may be similarly expressed, using their nested identifiers, with keys determined in the origin contents.
Additional variables passed during gem execution may be expressed under the vars.
scope ({{vars.variable_key}}
) or at the root ({{variable_key}}
, for direct conversions).
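As a sketch, assuming a variable passed on the command line as --var product_name=LiquiDoc, a template could read it either way:

```liquid
Welcome to {{ vars.product_name }}!
Welcome to {{ product_name }}!
```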
Iteration
Looping through known data formats is fairly straightforward. A for loop iterates through your data, item by item. Each item or row contains one or more key-value pairs.
{% for row in data %}{{ row.name }}::
{{ row.description }}
+
[horizontal.simple]
Required:: {% if row.required == "true" %}*Yes*{% else %}No{% endif %}
{% endfor %}
In the example above (a rows.asciidoc Liquid template for outputting AsciiDoc plaintext markup), we’re instructing Liquid to iterate through our data items, generating a data structure called row each time.
The double-curly-bracketed tags convey variables to evaluate.
This means {{ row.name }}
is intended to express the value of the name
parameter in the item presently being parsed.
The other curious marks such as ::
and [horizontal.simple]
are AsciiDoc markup — they are the formatting we are trying to introduce to give the content form and semantic relevance.
The template above would generate the following AsciiDoc output:
A_B::
A thing that *SnASFHE&"\|+1Dsaghf
+
[horizontal.simple]
Required::: *Yes*
G_H::
Some text for &hdf'" 1t`F
+
[horizontal.simple]
Required::: No
The generically styled AsciiDoc rich text reflects the distinctive structure with (very little) more elegance.
- A_B
-
A thing that *SnASFHE&"\|+1Dsaghf
Required Yes
- G_H
-
Some text for &hdf'" 1t`F
Required No
The implied structures are far more evident when displayed as HTML, derived from Asciidoctor’s parsing of the LiquiDoc-generated AsciiDoc source above.
<div class="dlist data-line-1">
<dl>
<dt class="hdlist1">A_B</dt>
<dd>
<p>A thing that *SnASFHE&"\|+1Dsaghf</p>
<div class="hdlist data-line-5 simple">
<table>
<tr>
<td class="hdlist1">
Required
</td>
<td class="hdlist2">
<p><strong>Yes</strong></p>
</td>
</tr>
</table>
</div>
</dd>
<dt class="hdlist1">G_H</dt>
<dd>
<p>Some text for &hdf'" 1t`F</p>
<div class="hdlist data-line-11 simple">
<table>
<tr>
<td class="hdlist1">
Required
</td>
<td class="hdlist2">
<p>No</p>
</td>
</tr>
</table>
</div>
</dd>
</dl>
</div>
Remember, all this started out as that little old free-form text file.
A_B A thing that *SnASFHE&"\|+1Dsaghf true G_H Some text for &hdf 1t`F false
LiquiDoc’s Liquid API
LiquiDoc’s core Liquid templating engine uses standard Liquid tags and filters, with some key exceptions, expressed below. For the full LiquiDoc Liquid API, see the complete reference.
Tags
The standard set of tags used by Liquid (and Jekyll) will work in your LiquiDoc-processed templates, with a couple of significant caveats.
LiquiDoc’s link tag is quite different from Jekyll’s.
LiquiDoc implements the include tag very similarly to Jekyll, with variables handled as in Jekyll’s version. However, there is no include_relative tag — in LiquiDoc operations, template paths must be relative to the base directory and passed to the calling template.
LiquiDoc does not have an equivalent of Jekyll’s highlight tag.
For now, Liquid’s core tags are best documented in Shopify’s official Liquid documentation, as augmented by their Liquid for Designers guide. Ignore Jekyll’s tag documentation when working on LiquiDoc templates.
Filters
LiquiDoc is proud to offer the largest set of Liquid filters available in one place. LiquiDoc supports:
-
all of Liquid’s core filters
-
nearly all of Jekyll’s filters
-
other Ruby text-manipulation packages
-
our own set of custom filters
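Filters are chained with the pipe character, as in standard Liquid. A hypothetical parse-template snippet applying the Liquid core downcase filter and Jekyll’s slugify filter to a data value:

```liquid
{{ row.name | downcase }}
{{ row.name | slugify }}
```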
Your Jekyll templates will still use Jekyll’s exact set of filters. When we talk about LiquiDoc filters, we are referring to those used in LiquiDoc parse actions only.
Configuring a LiquiDoc Build
Like any software or documentation build tool, routine configuration is key. Everything needs to be just so in a build. Order matters, and resources must be used wisely.
Rather than discuss build strategies broadly here, I have opted to move all my recommendations to the LiquiDoc Content Management Framework. LiquiDoc CMF’s bootstrap repository has more, but the LiquiDoc CMF Guides are the real authority. For now, look there for LDCMF-specific as well as broader strategic build insights.
This section is repeated from the introduction.
Basic Configuration
The best way to use LiquiDoc is with a configuration file. This not only makes the command line much easier to manage (requiring just a configuration file path argument), it also adds the ability to perform more complex build routines and manage them with source control.
Here is very simple build routine instructed by a LiquiDoc config:
- action: parse (1)
  data: source_data_file.json (2)
  builds: (3)
  - template: liquid_template.html (4)
    output: _output/output_file.html (5)
  - template: liquid_template.markdown (4)
    output: _output/output_file.md (5)
1. The top-level - denotes a new, consecutively executed “step” in the build. The action: parameter determines what type of action this step will perform. The options are parse, migrate, render, deploy, and execute.
2. If the data: setting’s value is a string, it must be the filename of a format automatically recognized by LiquiDoc: .yml, .json, .xml, or .csv. Otherwise, data: must contain subordinate settings for file: and type:.
3. The builds: section contains a list of procedures to perform on the data. It can include as many subroutines as you wish to perform. This one instructs two builds.
4. The template: setting should be a Liquid-formatted file (see [liquid-templating]).
5. The output: setting is a path and filename where you wish the output to be saved. Can also be stdout to write to console.
When you have established a configuration file, you can call it with the option -c
on the command line.
bundle exec liquidoc -c _configs/cfg-sample.yml --stdout
Repeat without the --stdout flag, and you’ll find the generated files in _output/ , as defined in the configuration.
Dynamic LiquiDoc Build Configurations
As long as we are invoking Liquid to manipulate files with templates in our parse operations, we might as well use it to parse our config files themselves. This is an advanced procedure for injecting programmatic functionality into your builds. If you are comfortable with Liquid templating and basic LiquiDoc build config structure, you are ready to learn dynamic configuration.
As of LiquiDoc 0.9.0, config files can be parsed (preprocessed) at the top of a build. That is, your config files can contain variables, conditionals, and iterative loops — any Liquid tags and filters supported by LiquiDoc.
All you have to do is add Liquid tags to your YAML configuration file.
If the Liquid markup in your config file expects variables, pass those variables on the liquidoc
CLI using --var key=value
.
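A minimal sketch of such a dynamic config, assuming a variable passed as --var draft=true (all paths hypothetical):

```yaml
{% if vars.draft %}
- action: parse
  data: data/draft-meta.yml
  builds:
  - template: _templates/draft-banner.asciidoc
    output: _build/snippets/draft-banner.adoc
{% endif %}
```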
Using Config Variables
Dynamic configurations typically expect variables to be passed in, either to directly populate values in the config file or to differentially trigger conditional tags in the config file.
Let’s first take a look at a sample dynamic configuration to see if we can understand what it is trying to do.
build-config.yml: dynamic LiquiDoc configuration for alternate builds

- action: parse
  data: data/subjects.yml:{{ vars.product_slug }}
  builds:
  - template: product-datasheet.asciidoc
    output: product-datasheet_{{ vars.product_slug }}.adoc
This config file wants to build a product datasheet for a specific product, which it expects to be indicated by a config variable called product_slug
.
Config variables are passed using the --var varname='var val'
format, where varname
is any key that exists as a Liquid variable in your config file, and 'var val'
is its value, wrapped in single quotes.
Let’s say in this case, we want to generate the datasheet for the Windows Enterprise edition of our product.
bundle exec liquidoc -c _configs/build-config.yml -v product_slug=win-ent
The -v option is an alias for --var.
This will cause our dynamic configuration to look for a data block formatted like so: data/subjects.yml:win-ent
.
So long as our subjects.yml
file contains a top-level data structure called win-ent
, we’re off to the races.
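A hypothetical data/subjects.yml meeting that requirement might look like:

```yaml
win-ent:
  name: Product Server for Windows, Enterprise Edition
  platform: Windows
win-exp:
  name: Product Server for Windows, Express Edition
  platform: Windows
```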
Eliminating Config Variables
Equally as cool as enabling custom builds by accepting what amount to environment variables, we can also handle big, repetitive builds with Liquid looping. Let’s try that file again with some powerful tweaks.
build-config.yml: dynamic LiquiDoc configuration for iterative builds

{% assign products = "win-exp,win-ent,mac-exp,mac-ent,ubu-exp,ubu-ent" | split: "," %}
{% for slug in products %}
- action: parse
  data: data/subjects.yml:{{ slug }}
  builds:
  - template: product-datasheet.asciidoc
    output: product-datasheet_{{ slug }}.adoc
{% endfor %}
Now we are building six data sheets using eight lines of code. And notice what is missing: no more vars.-scoped variables, just local ones.
Dynamic configurations are limited only by your imagination.
Using Environment Variables with Dynamic Configuration
- action: parse
  data: schema.yml
  builds:
  - name: parse-basic-nav
    template: _templates/side-nav.html
    output: _output/side-nav-basic.html
    variables:
      product:
        edition: {{ vars.edition }}
      environment: {{ vars.environment }}
With a configuration like this, our side-nav.html
template can further process variables, such as base_url
in the example snippet below.
side-nav.html: Liquid template with variables passed

{% if vars.environment == "staging" %}
{% assign base_url = "http://staging.int.example.com" %}
{% elsif vars.environment == "production" %}
{% assign base_url = "http://example.com" %}
{% endif %}
LiquiDoc {{ vars.product.edition }}
<ul class="nav">
{% for page in site.data.pages %}
<li><a href="{{ base_url }}/{{ page.path }}">{{ page.name }}</a></li>
{% endfor %}
</ul>
To set the values of vars.edition and vars.environment in the config file, add, for instance, --var edition=basic --var environment=staging to your liquidoc command.
Constraining Build Options with Dynamic Configuration
Another way to use dynamic configuration is to conditionalize steps in the build. Recipe-based configuration will eventually be added to LiquiDoc, but for now you can toggle parts of your build on and off using conditionals governed by environment variables. For instance,
build-config.yml: configuration with conditionalized steps

{% assign build_pdf = true %}
{% assign build_html = true %}
{% case recipe %}
{% when 'pdfonly' %}
{% assign build_html = false %}
{% when 'nopdf' %}
{% assign build_pdf = false %}
{% endcase %}
- action: render
  data: _configs/asciidoctor.yml
  source: content/product-datasheet.adoc
  builds:
{% if build_html %}
  - backend: html5
    output: product-datasheet.html
{% endif %}
{% if build_pdf %}
  - backend: pdf
    output: product-datasheet.pdf
{% endif %}
With a build config like this, optionally invoking --var recipe=nopdf
, for instance, will suppress the PDF substep during the build routine.
Liquid Loops in Configs
Aside from implementing conditional elements in your configs, dynamism also introduces looping. Repetitive procedures that differ only in a few details, yet take up lots of vertical space when written out sequentially, can be difficult to manage. If you’re building lots of parallel documents from the same source with minimal differences in each configuration action or build step, you may find yourself wishing you could write once and execute five times.
With Liquid’s for loops, you can do just that. Review this code and imagine how much vertical space is saved.
{% assign portals = "one,two,three,four,five" | split: "," %}
{% assign langs = "en,es" | split: "," %}
- stage: parse-strings
  action: parse
  data: data/strings.yml
  builds:
{% for prod in portals %}
{% for lang in langs %}
  - output: strings-{{prod}}-{{lang}}.yml
    template: string-processing.yaml
    variables:
      portal: {{prod}}
      lang: {{lang}}
{% endfor %}
{% endfor %}
This code saves the space and maintenance of ten output: blocks (five portals times two languages).
In Liquid, loops can only iterate through arrays.
Comma-delimited lists can be converted to arrays using the split filter to divide its contents into items.
The | split: "," notation here tells Liquid we wish to apply this filter so
the variable portals can become an array.
Ingesting Data Files into Configs
Just as a dynamic config can accept variables at build time, it can also be passed whole data files, including complex objects.
Just assign data files to the command using the --data
flag, with multiple files separated by commas.
bundle exec liquidoc -c _configs/build.yml -d data/products.yml,data/global.yml
This command tells LiquiDoc to pass the products.yml and global.yml files into the dynamic build config, where they can be referenced as Liquid variables under those object names.
{% assign prods = products.products %}
{% assign vols = global.volumes %}
{% assign manuals = vols | where: 'type','manual' %}
{% for prod in prods %}
- action: parse
  data: data/manifests/{{ prod.slug }}.yml
  builds:
  - output: _build/includes/topics-meta_{{ prod.slug }}_portal.adoc
    template: _templates/liquid/topics-meta.asciidoc
{% assign prod_vols = vols | where: "prod",prod.slug | where: "type","manual" %}
{% for vol in prod_vols %}
  - output: _build/content/{{ prod.slug }}/{{ vol.slug }}-index.adoc
    template: _templates/liquid/manual-index.asciidoc
    variables:
      title: "{{ vol.title }}"
{% endfor %}
{% endfor %}
Just as with multi-file parse actions, top-level scopes are named after the file from which they were ingested. This means we must tell LiquiDoc which files to use. Since we are assigning files to the build routine itself, we attach them on the command line.
bundle exec liquidoc -c build-config.yml -d data/products.yml,data/global.yml
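The assign tags at the top of the config above expect top-level scopes named products and global, implying files named products.yml and global.yml with structures roughly like these hypothetical sketches (keys inferred from the template above):

```yaml
# data/products.yml (hypothetical)
products:
- slug: win-ent
  name: Product Server for Windows, Enterprise Edition

# data/global.yml (hypothetical)
volumes:
- slug: install-guide
  title: Installation Guide
  type: manual
  prod: win-ent
```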
Reference
Config Parameters Matrix
Here is a table of established configuration settings, as they pertain to each key LiquiDoc action.
| Parameter | Parse | Migrate | Render | Execute |
|---|---|---|---|---|
| Main Per-stage Settings | | | | |
| action | Required | Required | Required | Required |
| data | Optional | N/A | Optional | N/A |
| source | N/A | Required | Required | N/A |
| target | N/A | Required | N/A | N/A |
| command | N/A | N/A | N/A | Required |
| options | N/A | Optional | Optional | Optional |
| stage | Optional | Optional | Optional | Optional |
| builds | Required | N/A | Required | N/A |
| Per-Build Settings | | | | |
| output | Required | N/A | Optional* | N/A |
| backend | N/A | N/A | Optional | N/A |
| config | N/A | N/A | Optional | N/A |
| template | Optional | N/A | N/A | N/A |
| style | N/A | N/A | Optional | N/A |
| attributes | N/A | N/A | Optional | N/A |
| variables | Optional | N/A | N/A | N/A |
| includes_dirs | Optional | N/A | N/A | N/A |
| properties | N/A | N/A | Optional | N/A |
| search | N/A | N/A | Optional | N/A |
*The output
setting is considered optional for render operations
because static site generators target a directory set in the SSG’s config file.
Supported Liquid Tags and Filters
LiquiDoc supports all standard Liquid tags and filters, as well as all of Jekyll’s custom Liquid filters.
License
The MIT License (MIT) Copyright (c) 2017 Rocana, Inc Copyright (c) 2017-2019 Brian Dominick and Codewriting, LLC Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.