Remove all remaining "$ " prefixes from docs, closes #2140

Also document sqlite-utils create-view
pull/2141/head
Simon Willison 2023-08-11 10:44:34 -07:00
parent 4535568f2c
commit 943df09dcc
14 changed files with 108 additions and 41 deletions

View file

@@ -32,7 +32,10 @@ The one exception is the "root" account, which you can sign into while using Dat
To sign in as root, start Datasette using the ``--root`` command-line option, like this::
-$ datasette --root
+datasette --root
+
+::
http://127.0.0.1:8001/-/auth-token?token=786fc524e0199d70dc9a581d851f466244e114ca92f33aa3b42a139e9388daa7
INFO: Started server process [25801]
INFO: Waiting for application startup.
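Once the server starts, visiting the printed URL in your browser signs you in as root. A sketch of verifying the flow from the terminal, assuming the default port, where ``$TOKEN`` stands for the token printed at startup and ``/-/actor.json`` reports the current actor::

    # Exchange the one-time token for a signed actor cookie
    curl -sL -c /tmp/cookies.txt "http://127.0.0.1:8001/-/auth-token?token=$TOKEN"
    # Confirm the cookie now authenticates as root
    curl -s -b /tmp/cookies.txt http://127.0.0.1:8001/-/actor.json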

View file

@@ -924,7 +924,10 @@ Prior to this release the Datasette ecosystem has treated authentication as excl
You'll need to install plugins if you want full user accounts, but default Datasette can now authenticate a single root user with the new ``--root`` command-line option, which outputs a one-time use URL to :ref:`authenticate as a root actor <authentication_root>` (:issue:`784`)::
-$ datasette fixtures.db --root
+datasette fixtures.db --root
+
+::
http://127.0.0.1:8001/-/auth-token?token=5b632f8cd44b868df625f5a6e2185d88eea5b22237fd3cc8773f107cc4fd6477
INFO: Started server process [14973]
INFO: Waiting for application startup.
@@ -1095,7 +1098,7 @@ You can now create :ref:`custom pages <custom_pages>` within your Datasette inst
:ref:`config_dir` (:issue:`731`) allows you to define a custom Datasette instance as a directory. So instead of running the following::
-$ datasette one.db two.db \
+datasette one.db two.db \
--metadata=metadata.json \
--template-dir=templates/ \
--plugins-dir=plugins \
@@ -1103,7 +1106,7 @@ You can now create :ref:`custom pages <custom_pages>` within your Datasette inst
You can instead arrange your files in a single directory called ``my-project`` and run this::
-$ datasette my-project/
+datasette my-project/
Also in this release:
@@ -1775,7 +1778,10 @@ In addition to the work on facets:
Added new help section::
-$ datasette --help-config
+datasette --help-config
+
+::
Config options:
default_page_size Default page size for the table view
(default=100)

View file

@@ -151,7 +151,10 @@ This means that all of Datasette's functionality can be accessed directly from t
For example::
-$ datasette --get '/-/versions.json' | jq .
+datasette --get '/-/versions.json' | jq .
+
+.. code-block:: json
{
"python": {
"version": "3.8.5",

View file

@@ -133,13 +133,19 @@ Running Black
Black will be installed when you run ``pip install -e '.[test]'``. To test that your code complies with Black, run the following in your root ``datasette`` repository checkout::
-$ black . --check
+black . --check
+
+::
All done! ✨ 🍰 ✨
95 files would be left unchanged.
If any of your code does not conform to Black you can run this to automatically fix those problems::
-$ black .
+black .
+
+::
reformatted ../datasette/setup.py
All done! ✨ 🍰 ✨
1 file reformatted, 94 files left unchanged.
@@ -160,11 +166,14 @@ Prettier
To install Prettier, `install Node.js <https://nodejs.org/en/download/package-manager/>`__ and then run the following in the root of your ``datasette`` repository checkout::
-$ npm install
+npm install
This will install Prettier in a ``node_modules`` directory. You can then check that your code matches the coding style like so::
-$ npm run prettier -- --check
+npm run prettier -- --check
+
+::
> prettier
> prettier 'datasette/static/*[!.min].js' "--check"
@@ -174,7 +183,7 @@ This will install Prettier in a ``node_modules`` directory. You can then check t
You can fix any problems by running::
-$ npm run fix
+npm run fix
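Before opening a pull request it can save a review round-trip to chain both style checks, a sketch assuming the tools were installed as described above::

    black . --check && npm run prettier -- --check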
.. _contributing_documentation:
@@ -322,10 +331,17 @@ Upgrading CodeMirror
Datasette bundles `CodeMirror <https://codemirror.net/>`__ for the SQL editing interface, e.g. on `this page <https://latest.datasette.io/fixtures>`__. Here are the steps for upgrading to a new version of CodeMirror:
-* Install the packages with `npm i codemirror @codemirror/lang-sql`
+* Install the packages with::
+
+    npm i codemirror @codemirror/lang-sql
-* Build the bundle using the version number from package.json with:
-    node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js -f iife -n cm -o datasette/static/cm-editor-6.0.1.bundle.js -p @rollup/plugin-node-resolve -p @rollup/plugin-terser
+* Build the bundle using the version number from package.json with::
+
+    node_modules/.bin/rollup datasette/static/cm-editor-6.0.1.js \
+        -f iife \
+        -n cm \
+        -o datasette/static/cm-editor-6.0.1.bundle.js \
+        -p @rollup/plugin-node-resolve \
+        -p @rollup/plugin-terser
-* Update version reference in the `codemirror.html` template
+* Update the version reference in the ``codemirror.html`` template.

View file

@@ -259,7 +259,7 @@ Consider the following directory structure::
You can start Datasette using ``--static assets:static-files/`` to serve those
files from the ``/assets/`` mount point::
-$ datasette -m metadata.json --static assets:static-files/ --memory
+datasette -m metadata.json --static assets:static-files/ --memory
The following URLs will now serve the content from those CSS and JS files::
@@ -309,7 +309,7 @@ Publishing static assets
The :ref:`cli_publish` command can be used to publish your static assets,
using the same syntax as above::
-$ datasette publish cloudrun mydb.db --static assets:static-files/
+datasette publish cloudrun mydb.db --static assets:static-files/
This will upload the contents of the ``static-files/`` directory as part of the
deployment, and configure Datasette to correctly serve the assets from ``/assets/``.
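Once the deploy finishes you can spot-check that an asset resolves, a sketch in which the service URL and ``app.css`` file name are hypothetical::

    curl -I https://my-service-abc123.run.app/assets/app.css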
@@ -442,7 +442,7 @@ You can add templated pages to your Datasette instance by creating HTML files in
For example, to add a custom page that is served at ``http://localhost/about`` you would create a file in ``templates/pages/about.html``, then start Datasette like this::
-$ datasette mydb.db --template-dir=templates/
+datasette mydb.db --template-dir=templates/
You can nest directories within pages to create a nested structure. To create a ``http://localhost:8001/about/map`` page you would create ``templates/pages/about/map.html``.
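Scaffolding that nested page from the shell might look like this, a sketch with placeholder content::

    mkdir -p templates/pages/about
    echo '<h1>Map</h1>' > templates/pages/about/map.html
    datasette mydb.db --template-dir=templates/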
@@ -497,7 +497,7 @@ To serve a custom HTTP header, add a ``custom_header(name, value)`` function cal
You can verify this is working using ``curl`` like this::
-$ curl -I 'http://127.0.0.1:8001/teapot'
+curl -I 'http://127.0.0.1:8001/teapot'
HTTP/1.1 418
date: Sun, 26 Apr 2020 18:38:30 GMT
server: uvicorn

View file

@@ -56,7 +56,7 @@ Create a file at ``/etc/systemd/system/datasette.service`` with the following co
Add a random value for the ``DATASETTE_SECRET`` - this will be used to sign Datasette cookies such as the CSRF token cookie. You can generate a suitable value like so::
-$ python3 -c 'import secrets; print(secrets.token_hex(32))'
+python3 -c 'import secrets; print(secrets.token_hex(32))'
This configuration will run Datasette against all database files contained in the ``/home/ubuntu/datasette-root`` directory. If that directory contains a ``metadata.yml`` (or ``.json``) file or a ``templates/`` or ``plugins/`` sub-directory those will automatically be loaded by Datasette - see :ref:`config_dir` for details.
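One way to wire the generated value into the unit file, sketched as an excerpt with a placeholder where the real secret belongs::

    # /etc/systemd/system/datasette.service (excerpt)
    [Service]
    Environment=DATASETTE_SECRET=<generated-hex-value>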

View file

@@ -260,14 +260,17 @@ Speeding up facets with indexes
The performance of facets can be greatly improved by adding indexes on the columns you wish to facet by.
Adding indexes can be performed using the ``sqlite3`` command-line utility. Here's how to add an index on the ``state`` column in a table called ``Food_Trucks``::
-$ sqlite3 mydatabase.db
+sqlite3 mydatabase.db
+
+::
SQLite version 3.19.3 2017-06-27 16:48:08
Enter ".help" for usage hints.
sqlite> CREATE INDEX Food_Trucks_state ON Food_Trucks("state");
Or using the `sqlite-utils <https://sqlite-utils.datasette.io/en/stable/cli.html#creating-indexes>`__ command-line utility::
-$ sqlite-utils create-index mydatabase.db Food_Trucks state
+sqlite-utils create-index mydatabase.db Food_Trucks state
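You can confirm the index was created using the ``sqlite-utils indexes`` subcommand, a sketch assuming a reasonably recent sqlite-utils release::

    sqlite-utils indexes mydatabase.db Food_Trucks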
.. _facet_by_json_array:

View file

@@ -177,14 +177,14 @@ Configuring FTS using sqlite-utils
Here's how to use ``sqlite-utils`` to enable full-text search for an ``items`` table across the ``name`` and ``description`` columns::
-$ sqlite-utils enable-fts mydatabase.db items name description
+sqlite-utils enable-fts mydatabase.db items name description
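Once enabled, you can try a search directly from the terminal, a sketch in which the search term is a stand-in::

    sqlite-utils search mydatabase.db items 'chair'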
Configuring FTS using csvs-to-sqlite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If your data starts out in CSV files, you can use Datasette's companion tool `csvs-to-sqlite <https://github.com/simonw/csvs-to-sqlite>`__ to convert that file into a SQLite database and enable full-text search on specific columns. For a file called ``items.csv`` where you want full-text search to operate against the ``name`` and ``description`` columns you would run the following::
-$ csvs-to-sqlite items.csv items.db -f name -f description
+csvs-to-sqlite items.csv items.db -f name -f description
Configuring FTS by hand
~~~~~~~~~~~~~~~~~~~~~~~

View file

@@ -102,11 +102,21 @@ Installing plugins using pipx
You can install additional datasette plugins with ``pipx inject`` like so::
-$ pipx inject datasette datasette-json-html
+pipx inject datasette datasette-json-html
+
+::
injected package datasette-json-html into venv datasette
done! ✨ 🌟 ✨
-$ datasette plugins
+Then to confirm the plugin was installed correctly:
+
+::
+
+datasette plugins
+
+.. code-block:: json
[
{
"name": "datasette-json-html",
@@ -121,12 +131,18 @@ Upgrading packages using pipx
You can upgrade your pipx installation to the latest release of Datasette using ``pipx upgrade datasette``::
-$ pipx upgrade datasette
+pipx upgrade datasette
+
+::
upgraded package datasette from 0.39 to 0.40 (location: /Users/simon/.local/pipx/venvs/datasette)
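Newer pipx releases can reportedly upgrade injected plugins in the same step; this is a sketch and the ``--include-injected`` flag is an assumption worth checking against ``pipx upgrade --help``::

    pipx upgrade --include-injected datasette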
To upgrade a plugin within the pipx environment use ``pipx runpip datasette install -U name-of-plugin`` - like this::
-% datasette plugins
+datasette plugins
+
+.. code-block:: json
[
{
"name": "datasette-vega",
@@ -136,7 +152,12 @@ To upgrade a plugin within the pipx environment use ``pipx runpip datasette inst
}
]
-$ pipx runpip datasette install -U datasette-vega
+Now upgrade the plugin::
+
+pipx runpip datasette install -U datasette-vega
+
+::
Collecting datasette-vega
Downloading datasette_vega-0.6.2-py3-none-any.whl (1.8 MB)
|████████████████████████████████| 1.8 MB 2.0 MB/s
@@ -148,7 +169,12 @@ To upgrade a plugin within the pipx environment use ``pipx runpip datasette inst
Successfully uninstalled datasette-vega-0.6
Successfully installed datasette-vega-0.6.2
-$ datasette plugins
+To confirm the upgrade::
+
+datasette plugins
+
+.. code-block:: json
[
{
"name": "datasette-vega",

View file

@@ -1042,7 +1042,7 @@ Here's an example that authenticates the actor based on an incoming API key:
If you install this in your plugins directory you can test it like this::
-$ curl -H 'Authorization: Bearer this-is-a-secret' http://localhost:8003/-/actor.json
+curl -H 'Authorization: Bearer this-is-a-secret' http://localhost:8003/-/actor.json
Instead of returning a dictionary, this function can return an awaitable function which itself returns either ``None`` or a dictionary. This is useful for authentication functions that need to make a database query - for example:

View file

@@ -131,7 +131,7 @@ You can also specify plugins you would like to install. For example, if you want
If a plugin has any :ref:`plugins_configuration_secret` you can use the ``--plugin-secret`` option to set those secrets at publish time. For example, using Heroku with `datasette-auth-github <https://github.com/simonw/datasette-auth-github>`__ you might run the following command::
-$ datasette publish heroku my_database.db \
+datasette publish heroku my_database.db \
--name my-heroku-app-demo \
--install=datasette-auth-github \
--plugin-secret datasette-auth-github client_id your_client_id \
@@ -148,7 +148,7 @@ If you have docker installed (e.g. using `Docker for Mac <https://www.docker.com
Here's example output for the package command::
-$ datasette package parlgov.db --extra-options="--setting sql_time_limit_ms 2500"
+datasette package parlgov.db --extra-options="--setting sql_time_limit_ms 2500"
Sending build context to Docker daemon 4.459MB
Step 1/7 : FROM python:3.11.0-slim-bullseye
---> 79e1dc9af1c1

View file

@@ -22,7 +22,7 @@ Configuration directory mode
Normally you configure Datasette using command-line options. For a Datasette instance with custom templates, custom plugins, a static directory and several databases this can get quite verbose::
-$ datasette one.db two.db \
+datasette one.db two.db \
--metadata=metadata.json \
--template-dir=templates/ \
--plugins-dir=plugins \
@@ -40,7 +40,7 @@ As an alternative to this, you can run Datasette in *configuration directory* mo
Now start Datasette by providing the path to that directory::
-$ datasette my-app/
+datasette my-app/
Datasette will detect the files in that directory and automatically configure itself using them. It will serve all ``*.db`` files that it finds, will load ``metadata.json`` if it exists, and will load the ``templates``, ``plugins`` and ``static`` folders if they are present.
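Assembling such a directory from scratch might look like this, a sketch reusing the file names from the verbose example above::

    mkdir -p my-app/templates my-app/plugins my-app/static
    cp one.db two.db metadata.json my-app/
    datasette my-app/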
@@ -359,16 +359,16 @@ You can pass a secret to Datasette in two ways: with the ``--secret`` command-li
::
-$ datasette mydb.db --secret=SECRET_VALUE_HERE
+datasette mydb.db --secret=SECRET_VALUE_HERE
Or::
-$ export DATASETTE_SECRET=SECRET_VALUE_HERE
-$ datasette mydb.db
+export DATASETTE_SECRET=SECRET_VALUE_HERE
+datasette mydb.db
One way to generate a secure random secret is to use Python like this::
-$ python3 -c 'import secrets; print(secrets.token_hex(32))'
+python3 -c 'import secrets; print(secrets.token_hex(32))'
cdb19e94283a20f9d42cca50c5a4871c0aa07392db308755d60a1a5b9bb0fa52
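The generation and export steps combine into a single line; note that minting a fresh secret on every launch will invalidate previously signed cookies::

    export DATASETTE_SECRET=$(python3 -c 'import secrets; print(secrets.token_hex(32))')
    datasette mydb.db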
Plugin authors make use of this signing mechanism in their plugins using :ref:`datasette_sign` and :ref:`datasette_unsign`.

View file

@@ -156,7 +156,10 @@ The `shapefile format <https://en.wikipedia.org/wiki/Shapefile>`_ is a common fo
Try it now with the North America shapefile available from the University of North Carolina `Global River Database <http://gaia.geosci.unc.edu/rivers/>`_ project. Download the file and unzip it (this will create files called ``narivs.dbf``, ``narivs.prj``, ``narivs.shp`` and ``narivs.shx`` in the current directory), then run the following::
-$ spatialite rivers-database.db
+spatialite rivers-database.db
+
+::
SpatiaLite version ..: 4.3.0a Supported Extensions:
...
spatialite> .loadshp narivs rivers CP1252 23032
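With the data imported you can serve it with the SpatiaLite extension loaded so geometry functions work, a sketch that relies on Datasette finding the extension in a standard location::

    datasette rivers-database.db --load-extension=spatialite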

View file

@@ -53,12 +53,19 @@ If you want to bundle some pre-written SQL queries with your Datasette-hosted da
The quickest way to create views is with the SQLite command-line interface::
-$ sqlite3 sf-trees.db
+sqlite3 sf-trees.db
+
+::
SQLite version 3.19.3 2017-06-27 16:48:08
Enter ".help" for usage hints.
sqlite> CREATE VIEW demo_view AS select qSpecies from Street_Tree_List;
<CTRL+D>
+
+You can also use the `sqlite-utils <https://sqlite-utils.datasette.io/>`__ tool to `create a view <https://sqlite-utils.datasette.io/en/stable/cli.html#creating-views>`__::
+
+sqlite-utils create-view sf-trees.db demo_view "select qSpecies from Street_Tree_List"
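You can then query the new view with the same tool, a sketch using sqlite-utils' default JSON output::

    sqlite-utils sf-trees.db "select * from demo_view limit 5"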
.. _canned_queries:
Canned queries