Technical writing with Sphinx

This website contains information and tutorials to set up Sphinx for continuous deployment (regular and automated publishing of the docs) with totally free software.

The fundamental idea behind this approach is to treat documentation like code. Developers and DevOps have been refining their workflows to deliver software. It only makes sense to me that technical writers should steal as much as possible from their findings.

The idea of the workflow shown in this website is to solve the following problems.

Problem list

  • You are the only tech writer at your company and you do not have time both to maintain the documentation platform and to write the docs.

  • You want to enable developers to contribute to the docs.

  • You still want to have full control over the docs to steer it in the right direction.

  • You want to be able to write the docs too, or fix other contributors’ documentation.

  • You do not want to break the bank with licenses.

  • You do not want to be forced to a proprietary format (vendor lock-in).

Solution

The solution is to use simple tools to build the documentation, test it, and publish it. It should be fast enough to allow you to update the docs in a few minutes and most importantly, the publishing should be automatic.

The solution uses:

  • A repository to store the documentation source files: GitHub

  • A static website generator: Sphinx

  • A continuous integration tool: Travis CI

If you are using different tools, like Bitbucket, Jekyll and CircleCI, or a local install of Git, Hugo and another deployment platform, the content of this website should still be meaningful.

Technical information about technical writing itself is not easy to find on the Internet, so I hope this website can help some other technical writers.

To see why I think Sphinx is one of the best generators (in my context), see Sphinx features.

To see how to set it up for continuous deployment, see Continuous deployment.

Finally, if you are really new to “modern” technical writing based on lightweight markup languages and static website generators, here is a quick recap.

Modern technical writing

What’s a static website generator?

It’s a piece of software, generally free and open source, that turns plain text files into a simple website. Because generators are open source, you can modify anything, or use other people’s modifications (extensions). Most importantly, it’s great because it’s fast and simple. And you know… less is more.
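
For the curious, the whole flow fits in a few commands. Here is a minimal sketch with Sphinx (assuming Python and pip are installed; sphinx-quickstart asks a few interactive questions):

pip install sphinx
sphinx-quickstart docs   # scaffold a project in the docs folder
make -C docs html        # turn the text files into a website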

Why would you do that, when software like MadCap Flare or oXygen is designed specifically for writing docs?

This kind of software works, but you depend on a heavy architecture that is generally slower and harder to customize, if customization is possible at all. These tools can even use a proprietary format, so migrating to another system becomes really hard.

Another issue is licenses. By design, these applications mean that only a chosen few colleagues can edit the docs, typically technical writers. If you work in software, the people with the knowledge are the developers, since they write the code. Why would you want to lock them out of the docs through your choice of tooling? If that is what you want, it should strictly be a workflow decision, not a side effect of the tooling.

Why Sphinx over other static website generators?

You probably did your research and you’re probably thinking that:

  • Jekyll is cool but it might be popular because it’s been one of the first static website generators around.

  • Markdown is cool but it’s not semantic and lacks a proper standard (CommonMark sounds great but still lacks widespread support).

  • Using these 2 together works well, but you’ll likely end up with some HTML/MD mix that will be impossible (or expensive) to convert automatically the day you want to migrate to the next cool language/technology.

  • Hugo is blazing fast, but its support of reStructuredText is limited.

  • Other generators look really promising but the small community makes you feel that you should not use them for professional purposes.

  • Sphinx is not perfect either but I found that it provides more features than the other famous generators. More about this in the features section.

For more information, there are plenty of very nice blog posts about this; here are a few links from blogs you should read:

Once again, this page offers a quick recap, not a comparison between traditional tooling and these lightweight tools. These tools might not be applicable in your context.

Table of contents

Sphinx features

Sphinx offers some great features for technical writing. Some of them are provided out-of-the-box, some of them require the installation of extensions.

Refer to the official documentation for an overview. Some features are not obvious from the official docs, though, so this section covers a few of them.

Feature list

Displaying the full table of contents

By default, Sphinx displays the TOC based on the page that the user is reading.

The scope changes depending on the page, which can be a nightmare for certain users. I’m one of these users: I find it extremely confusing, and it prevents me from understanding the logic of the TOC designed by the writer.

Fear not: it is possible to display one static TOC by using the fulltoc extension. Refer to the installation procedure.
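
A minimal sketch of that setup, assuming the extension is packaged as sphinxcontrib-fulltoc on PyPI:

pip install sphinxcontrib-fulltoc
# then add 'sphinxcontrib.fulltoc' to the extensions list in conf.py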

You can now split your TOC in as many sections as you need, and they will stay with the user regardless of the page they are reading.

Example

The following code generates 2 separate sections, each with its own structure.

.. toctree::
   :caption: Integration guides

   myguide1.rst
   myguide2.rst

.. toctree::
   :caption: API tutorials

   blablaapi.rst
   blibliapi.rst

Targeted publishing

Targeted publishing consists of generating a specific version of the documentation for a specific audience from a single set of source files. This is commonly referred to as single sourcing.

An obvious example is the difference between internal and public documentation. The internal version of the documentation contains all the public documentation and additional internal-only pages.

Let’s make Sphinx generate 2 versions of the documentation from one repository.

Tagging the internal content

Before Sphinx can build the internal or public version of the documentation, you must tag all the internal content with the only directive.

This directive takes one parameter: a tag/keyword of your choice. The content of the directive (the text indented under it) is what is considered “tagged”. In our case, the internal tag sounds relatively sane to describe the tagged content. Other typical examples are versions (1.2, 1.3…) or user types (admin, dev, end user…).

Title
-----

Some public text here...

.. only:: internal

   This text is only displayed in the internal documentation.

Some more public text here.

If you build the output using make html, the internal text does not appear, so let’s configure Sphinx to build the internal output in which the text must appear.

Building the internal version

The Sphinx command line (sphinx-build) can take the -t argument, which lets you specify which tags should be taken into consideration during the build.
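
Invoked directly, that looks like this (a sketch, assuming the default source and build folders); the steps below wire the same flag into the Makefile:

# Build the HTML output with the "internal" tag enabled
sphinx-build -b html -t internal source build/html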

  1. Open the Makefile created by Sphinx in the root folder of the repository. It should look similar to the Makefile of the project used to build this website:

    # You can set these variables from the command line.
    SPHINXOPTS    = --color
    SPHINXBUILD   = python -msphinx
    SPHINXPROJ    = sphinxtechnicalwriting
    SOURCEDIR     = source
    BUILDDIR      = build
    
    # Put it first so that "make" without argument is like "make help".
    help:
            @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
    
    .PHONY: help Makefile
    
    # Catch-all target: route all unknown targets to Sphinx using the new
    # "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
    %: Makefile
            @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
    

    This file contains targets: specific keywords that you pass to make, such as make html, to run a set of instructions.

    There are 2 targets in this file:

    1. help: it displays the help.

    2. %: this is a special target that runs for any command that is not help.

  2. Add a new target called htmlinternal:

    htmlinternal:
      @echo "Building internal docs"
      @$(SPHINXBUILD) -M html "$(SOURCEDIR)" "buildinternal" $(SPHINXOPTS) $(O) -t internal
    
  3. To build the internal documentation, run:

    make htmlinternal
    

    The output files are in the buildinternal/html folder.

    Note

    To build the public documentation (or rather, the non-internal documentation), run:

    make html
    

Important

This is a great feature, but its behaviour is a bit buggy. Make sure to test your output when you use it.
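
One cheap way to test it: build both outputs and compare them. A minimal sketch, assuming the htmlinternal target defined above:

make html
make htmlinternal
# the only differences should be the internal-tagged content
diff -rq build/html buildinternal/html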

Tools

The beauty of static website generators is that you or any contributor can choose any tool to edit the source files.

Once again, the idea here is to leverage what developers have been battle-testing for a while, such as these editing tools:

  • VIM: a free and amazing keyboard-only editor with a steep learning curve that lets you edit files at the speed of light.

  • Visual Studio Code: a free and highly customizable editor with a huge community behind it.

  • Atom: the direct competitor of VS Code.

  • Sublime: a proprietary editor with extensions and great performance.

VS Code

For me, Visual Studio Code is the best editor paired with the following extensions:

About VS Code:

Customizing VS Code

VS Code is highly customizable. Refer to the following links to see examples of what you can do to boost your productivity:

Tasks

VS Code allows you to create all sorts of tasks. A task is basically a command line, or a set of command lines, that you can call directly from the VS Code interface. For example, in software development, a task could call a set of instructions to build the software and automatically run it in order to test it.

Build the documentation

What is the one action that any tech writer needs to do when writing docs without a visual (WYSIWYG) tool? Build the docs. I personally build every few paragraphs, not necessarily to check the output, but just to make sure that I did not make mistakes while writing rST.

Let’s create a task that triggers a build from a keyboard shortcut:

  1. Open a new task file:

    1. Press Cmd-Shift-P.

    2. Type task.

    3. Select Tasks: Configure default build task and select the default option. A new file opens containing the default skeleton of a task.

  2. Replace the whole content with the following code:

    {
     "version": "2.0.0",
     "tasks": [
        {
          "label": "build",
          "type": "shell",
          "command": "make html",
          "group": {
              "kind": "build",
              "isDefault": true
          }
        }
      ]
    }
    

    Notice the command field: it contains make html, the Sphinx command to build the HTML output. That’s all we need to create our most useful task.

  3. Save.

  4. To run it, press Cmd-Shift-B and VS Code builds the docs. Cool beans.

    Note

    If you get another VS Code question about scanning the task output, select Never scan the task output.

That’s cool: no more window switching when you want to build. But you still have to open your browser and find the output of the file that you were editing.

Open the output of the current file

The idea of this task is to look at which file is displayed in VS Code, which should be the file you have been working on, and open its output in your browser.

  1. Open the task file:

    1. Press Cmd-Shift-P.

    2. Type task.

    3. Select Tasks: Configure default build task and select the default option. The task file you edited earlier opens.

  2. Replace the current content with the following code:

    {
      "version": "2.0.0",
      "tasks": [
        {
          "label": "build",
          "command": "make html",
          "type": "shell",
          "presentation": {
            "reveal": "always"
          },
          "group": {
            "kind": "build",
            "isDefault": true
          }
        },
        {
          "label": "open page",
          "command": "open `find build/html -name ${fileBasenameNoExtension}.html`",
          "type": "shell",
          "presentation": {
            "reveal": "always"
          },
          "group": {
            "kind": "build",
            "isDefault": true
          }
        }
      ]
    }
    

    This file now contains 2 tasks. The first one is the build command that we created before. The new one, labeled open page, runs an open command on the Sphinx build directory, looking for the output of the file that is currently open in VS Code.

    Note

    The command will probably get confused if you want to open a file that does not have a unique name in your repository, but aside from this case, it’s a great time saver.

  3. Save.

  4. To run it, press Cmd-Shift-B and select open page. The page opens in your default browser.

If you find yourself repeating the same actions, think about adding more tasks.
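
For example, the standard Sphinx Makefile ships a linkcheck target that verifies external links (one of the tests we automate later); wrapping it in another task works exactly like the build task above:

# Sphinx's built-in link checker, a good candidate for its own task
make linkcheck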

You can make these tasks available to other users by committing the tasks.json file in the .vscode folder at the root of your repository.

Settings

The settings for VS Code partly depend on the extensions that you installed, but here are some recommended core settings. To use them:

  1. Press Cmd+Shift+P, type settings, and select Preferences: Open User Settings.

  2. The most important thing in the core settings is the ruler position. I set mine at 114, which helps for GitHub reviews since it’s the maximum number of characters on one line.

    Paste the following settings into the settings window.

    {
        "editor.detectIndentation": true,
        "editor.minimap.enabled": false,
        "editor.renderIndentGuides": true,
        "editor.scrollBeyondLastLine": false,
        "editor.tabSize": 2,
        "editor.useTabStops": false,
        "editor.rulers": [
            114
        ],
        "editor.wordWrapColumn": 114,
        "explorer.confirmDelete": false,
        "explorer.confirmDragAndDrop": false,
        "files.trimTrailingWhitespace": true,
        "git.confirmSync": false,
        "git.enableSmartCommit": true,
        "git.autofetch": true,
        "window.title": "${activeEditorLong}${separator}${rootName}",
        "restructuredtext.linter.run": "off",
        "workbench.editor.enablePreview": false,
        "workbench.editor.enablePreviewFromQuickOpen": false,
        "workbench.editor.showTabs": true,
        "workbench.startupEditor": "welcomePage"
    }
    

Tip

If you use the VIM plugin, you can also set the line size to 114 ("vim.textwidth": 114). This allows you to use the auto line formatting feature by selecting a paragraph and hitting gq in normal mode.

Continuous deployment

Continuous deployment is a way to deploy (publish) software automatically and in short cycles. Typically, the process is: build the software, test it, deploy it. If any of these steps fails, the process is interrupted.

Sphinx, like other static website generators, is really easy to use and works well alongside GitHub and Travis CI.

Prerequisites

In this tutorial we assume that you have some prerequisite knowledge about:

  • GitHub (how to use it on a basic level, what is a branch, what is a pull request…)

  • Sphinx (how to install it, how to build the output, what are extensions…)

Final setup

The result of the tutorial is the following setup:

  1. A central Git repository that contains the documentation sources.

  2. Every time a pull request is sent to the master repository:

    1. Build the documentation using the source files of the branch.

    2. Run tests on the documentation:

      1. Check the links

      2. Check the spelling

      3. Check the English quality

    3. If all the tests pass, merge the pull request into Master.

  3. On every merge into Master:

    1. Build the docs.

    2. Run the tests (same tests as on the pull request).

    3. If the tests pass, publish the output to GitHub Pages.

This kind of setup saves any technical writer a lot of time and is fairly simple to create.

The other advantage is that it allows contributions from any member of your company, as long as they have access to GitHub and a basic understanding of it.

Let’s get started.

Tutorial

Configuring the repository

Before you begin, make sure you are the admin of the documentation repo.

Protect the master branch so that no one is allowed to push to it:

  1. Connect to your GitHub account and open your repository.

  2. Click Settings > Branches.

  3. In Protected Branches, select Master and select the following options:

    • Protect this branch

    • Require pull request reviews before merging

    • Require status checks to pass before merging

    • Require branches to be up to date before merging

  4. Click Save changes.

If you are the admin of your repo, you’re now the only captain on board, which is good in this case: nobody can mess up the published documentation (the master branch) without you knowing about it.

Next step: Creating a development environment.

Creating a development environment

Your repository hosts the content that must be built with our static website generator. The generator itself is not in the repository; only the documentation files and other configuration files are hosted. The simple rule is: do not host files that can be generated.

This means that anyone who wants to modify the documentation must set up Sphinx and the rest of the toolchain to build it. This required setup is called a development environment.

As you can imagine, setting up a development environment can be tedious. Because we want to promote contributions, let’s make this one-time setup as painless as possible.

  1. Create a requirements.txt file at the root of your repository. This file should list everything needed to build the documentation (Python modules, Sphinx extensions, etc.).

  2. Go through your Sphinx conf.py file and add the name of each extension to the requirements.txt file, one extension name per line.

    Example

    The requirements.txt file of the project used to build the docs you are reading now contains:

    sphinx
    sphinxcontrib-mermaid
    
  3. Push this file to the master branch of your repository.

This file is ready to be used by pip to install every Python module needed by your docs platform.

To use it, contributors who already have Python installed enter:

pip install -r requirements.txt
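
If contributors prefer an isolated setup, here is a minimal sketch using Python’s built-in venv module (the .venv folder name is just a convention):

python3 -m venv .venv            # create the virtual environment
source .venv/bin/activate        # activate it in the current shell
pip install -r requirements.txt  # install the docs dependencies into it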

Why do we care about our contributors so early in the project? Because Travis CI can be seen as one: a special kind of contributor, the lazy type, that only clones and builds the documentation and never pushes docs updates (sad).

Note

This can also be simplified using Docker.

Next step: Linking GitHub and Travis.

Linking GitHub and Travis

Travis CI is a service that integrates with GitHub and can run scripts whenever specific GitHub events happen, such as a push, a pull request, etc.

To set it up:

  1. Go to travis-ci.org.

  2. Click Sign in with GitHub then click Authorize travis-ci.

  3. Refresh the page after a few seconds then click your profile name at the top right corner, then click Accounts.

    This page lists all the repositories of your GitHub account.

  4. Click the toggle next to your documentation repository to tell Travis to monitor it.

  5. Click the gear icon to open the settings.

  6. Select:

    • Build only if .travis.yml is present

    • Build branch updates

    • Build pull request updates

  7. Go to GitHub and click Settings > Applications > Authorized OAuth Apps.

    You should see Travis CI in the list of services already added.

Travis now has access to the repositories you ticked. The next step is to tell Travis what to do with your repo.

Next step: Setting up tests.

Setting up tests

Now that Travis can tap into the repository, we can prepare tasks for it to perform. An essential part is testing. The first obvious test is: can the docs be built? Other usual tests are:

  • Is the spelling ok?

  • Do the docs match my style guide?

We can test these using Vale, a command-line tool that checks text against user-defined rules. Let’s set it up.

Defining styles

To understand how to define styles, see the official docs.

Let’s define 3 rules in a style of our own:

  • Forbid please or thank you.

  • Forbid double spaces.

  • Forbid the use of uncertain tenses (should, ought…).

  1. At the root of your project, create a folder named styles, and inside it a folder named mystyles (the style name we reference in the configuration below).

  2. Create a file named Polite.yml that contains:

    extends: existence
    message: 'Do not use “%s” in technical documentation.'
    level: error
    ignorecase: true
    tokens:
      - please
      - thank you
    
  3. Create a file named Spacing.yml that contains:

    extends: existence
    message: "'%s' has a double space."
    level: error
    nonword: true
    tokens:
      - '[a-z][.?!][A-Z]'
      - '[.?!] {2,}[A-Z]'
      - '[a-zA-Z]  [A-Za-z]'
    
  4. Create a file named Tenses.yml that contains:

    extends: existence
    message: "'%s' is an uncertain tense. Use the present instead."
    ignorecase: true
    level: error
    tokens:
      - ought
      - shall
    
  5. Add these 3 files to the styles/mystyles folder.

  6. At the root of your project, create a file named .vale.ini that contains:

    StylesPath = ./styles
    MinAlertLevel = suggestion
    
    [*.{md,rst}]
    BasedOnStyles = mystyles
    
    vale.Redundancy = YES
    vale.Repetition = YES
    vale.GenderBias = YES
    

There are compilations of ready-made styles available; see Vale styles.

Running Vale

We’ve defined some styles; let’s check whether our documentation contains issues:

  1. Install Vale.

  2. Open the terminal to your project folder and run:

    vale source
    

Vale reports the errors in your project, if any.
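
You can also narrow a run while editing. A couple of sketches (the file path is hypothetical; --minAlertLevel is the same flag used later in the CI configuration):

# Lint a single file
vale source/index.rst
# Report only errors, ignoring warnings and suggestions
vale --minAlertLevel error source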

Note

Vale errors do not prevent Sphinx from building.

Next step: Publishing the docs with Travis.

Publishing the docs with Travis

We need to tell Travis to:

  1. Install everything required by our project.

  2. Build the docs.

  3. Run some tests via Vale. See the previous step.

  4. Publish the docs on GitHub pages.

Generating a Travis token

You need to create a personal access token to allow Travis to use your GitHub account.

  1. Log in to your GitHub account and go to Settings > Developer settings > Personal access tokens.

  2. Click Generate new token, add a description such as Travis token, tick the repo checkbox, then click Generate token.

  3. Copy the token.

  4. From the Travis settings page of your repository, add a new encrypted environment variable called token:

    [Screenshot: the token environment variable added in the Travis settings]
Creating the Travis config file

To automate the publishing of the documentation, Travis looks for a .travis.yml file to know what to do. Let’s define it:

  1. Create .travis.yml at the root of your repository.

  2. Dump the following content into the file:

    dist: xenial # needed to use python 3.7
    language: python
    branches:
      only:
        - master
    python:
      - 3.7
    install:
      - pip install -U pip
      - pip install pipenv
      - pipenv install # set up the environment
      - curl -sfL https://install.goreleaser.com/github.com/ValeLint/vale.sh | sh -s v2.1.0 # install vale
      - export PATH="./bin:$PATH"
    script:
      - skip # Travis requires a `script` property, but we do not need it
    stages:
      - build and test
      - deploy
    jobs:
      include:
        - stage: build and test # This stage builds and lints in parallel
          name: Build
          script: pipenv run make html # build the docs
        - script: vale --minAlertLevel error source # run vale
          name: Test
        - stage: deploy
          name: Deploy to GitHub Pages
          if: (NOT type IN (pull_request)) AND (branch = master) # only deploy if merging on master
          script: pipenv run make html
          deploy:
            provider: pages # deploy on github pages
            skip_cleanup: true
            github_token: "$token" # defined via the Travis interface
            local_dir: build/html
    

    This file tells Travis to apply the following steps:

    1. Set up a system that runs Python 3.7.

    2. Upgrade pip, install pipenv, install all the Python modules listed in the Pipfile, and install Vale.

    3. Define 2 stages: build and test and deploy. Stages run sequentially, but the jobs they contain run in parallel. A failed stage cancels the following stages.

      build and test does the following:

      1. Run make html to build the docs.

      2. Run Vale to check for style issues.

      The deploy stage publishes the docs to GitHub Pages if the build and the tests are successful.

  3. Commit and push your file to your master branch.

From now on, every time you push to the master branch, Travis builds the latest version of the docs and publishes the output on GitHub Pages, just like this website.

You can bend this setup as needed; for example, you can call your own publishing script to publish your output files on Amazon S3, or to copy the output files to your Apache server… Whatever works for you!

If you managed to do all this by yourself, you should be able to befriend a developer to complete the project.

Using Docker

Docker is a way to make any project easier and potentially faster to build.

In a nutshell, Docker allows you to build images. An image contains everything required by a project, typically:

  • The operating system, such as Linux.

  • The dependencies, such as Python, Sphinx, and your Sphinx extensions.

  • Your source code (or docs files).

When you run an image, Docker creates a container: an isolated environment in which you can run your app (it can feel like a lightweight virtual machine). It might sound a bit similar to the virtual environment mentioned in Creating a development environment, but it’s completely standalone.

It also offers valuable caching/layering features that help speed up your builds. Read more about its main concepts on the Docker website.

Creating a Dockerfile

The first step towards building our project with Docker is to create an image. This image defines what we need to build our project. See the Docker docs.

The standard name for a Docker file is Dockerfile (genius stuff).

Choosing the base image

As mentioned earlier, we must choose the operating system first. There are many images for various operating systems on Docker Hub. These are called base images, because they are the starting point of any Docker image.

We could install any of these images, and then install what we need, such as Python and our dependencies. This is a task that all Python developers perform often, so in order to speed up the process, the Python foundation already provides many images that contain Python. See the list of images.

Let’s pick a Python version: we want to use Python 3.8.2. The choice itself is somewhat arbitrary; more importantly, it is the version used in our Pipfile, so we have to match it for consistency.

The second choice is the type of Python image. As you can see, there are Buster images, Alpine images, and more. These correspond to different versions of Linux. Python cannot run on its own; it must run on an operating system, so these Python images are themselves based on other images (the Linux images).

To keep the choice simple, it’s good practice to use the smallest image possible. Let’s choose 3.8.2-slim-buster. It’s a standard choice for a lot of Python projects and a rather lightweight image that contains most of what we need.

To use it in our Dockerfile, we simply write:

FROM python:3.8.2-slim-buster
Installing the dependencies

We have a base image that runs Python on Linux (Debian). We can add our dependencies.

This is a more complicated step as we need to understand what’s in the base image, and what’s not. In our case:

  • We need everything from our Pipfile, so we install pipenv.

  • We want to use our Makefile to build the docs, so we install make.

It’s also good practice to install security updates and to get rid of cached files.

To do this, we use:

RUN apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y --no-install-recommends make && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/* && \
  pip install --no-cache-dir --upgrade pip && \
  pip install --no-cache-dir pipenv

The RUN instruction allows us to execute any command. These commands are standard Debian commands to install packages.

Note

Without getting into too much detail: no caching is needed in a Docker image, and the lighter the image the better, so we delete the caches using apt-get clean, rm -rf /var/lib/apt/lists/*, and --no-cache-dir.

Note

The RUN instruction (as well as COPY and ADD) also creates layers. Layering is a topic in itself, but Docker recommends separating instructions for things that almost never change from things that change more often, to optimize build times. In our case, we don’t need much, and these packages never change, so they go into the initial RUN instruction.

To modify what’s installed in this image, you would typically add package names after make. The rest can stay if you require pipenv.

Setting the work directory

Our image doesn’t contain any of our files at the moment, only Linux and the extra packages. In the next step, we will copy files from our repository to the image, but first, let’s set the working directory to the name of the repository.

We do this with:

WORKDIR /sphinxtechnicalwriting

This folder is now the default path for all the following commands we will run.

Copying files

The objective of the image is to install our project dependencies, which are listed in our Pipfile. We have installed pipenv, so before we can use it to install our dependencies, we must copy the Pipfile and Pipfile.lock files into our image. If the image does not contain them, it cannot install anything.

We do this with:

COPY Pipfile Pipfile.lock /sphinxtechnicalwriting/

Notice that we copy them to our working directory.

Installing the dependencies

We have copied our dependency list to our image; we can now install the dependencies in the image.

We do this with:

RUN pipenv install --system --deploy

This command is similar to what we used in Creating a development environment, but slightly modified for Docker use. It doesn’t create a virtual environment: it installs everything at system level, and it installs exactly the packages pinned in the lock file. See the Pipenv docs.

Building the image

We have the following Dockerfile:

FROM python:3.8.2-slim-buster

# Update package listing and install security updates and
# make and pipenv

RUN apt-get update && \
    apt-get -y upgrade && \
    apt-get install -y --no-install-recommends make && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir pipenv

# Set the working directory to our repo root
WORKDIR /sphinxtechnicalwriting

# Only copy the Pipfile
COPY Pipfile Pipfile.lock /sphinxtechnicalwriting/

# Install the packages
RUN pipenv install --system --deploy

To build it, run:

docker build -t sphinx_image .

This creates an image named sphinx_image.
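
As a quick sanity check, you can list the image to confirm it exists and see its size:

docker image ls sphinx_image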

Using the image

Once you have built the image, you can run any command in it using:

docker run sphinx_image <command>

For example:

docker run sphinx_image echo "hello"

prints hello.

We created the image to build the docs so let’s use our Makefile:

docker run sphinx_image make html

This outputs make: *** No rule to make target 'html'.  Stop., which is normal since there is no Makefile in this image; it contains only our dependencies.

Let’s mount a volume to share our repository with the container:

docker run -v $(pwd):/sphinxtechnicalwriting sphinx_image make html

The docs are built, and the output is in the same folder as when you run the build locally.
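
To preview the result, a minimal sketch using Python’s built-in HTTP server on the host (assumes Python 3.7+ for the --directory flag):

# Serve the generated site at http://localhost:8000
python3 -m http.server --directory build/html 8000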

Next steps

There’s no real benefit in using Docker if you’ve already set up a local environment. But if you haven’t, you can build the docs in 2 commands, which is great:

docker build -t sphinx_image .
docker run -v $(pwd):/sphinxtechnicalwriting sphinx_image make html

You can also use this image in your CI pipeline to get reproducible builds, and speed them up by using a Docker image registry.
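
For example, here is a sketch of pushing the image to a registry so that CI machines can pull it instead of rebuilding it (the registry host and path are hypothetical):

# Tag and push the image; replace the registry with your own
docker tag sphinx_image registry.example.com/docs/sphinx_image:latest
docker push registry.example.com/docs/sphinx_image:latest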