I've run the black code formatting tool against everything:
black tests datasette setup.py
I also added a new unit test, in tests/test_black.py, which will fail if the code does not
conform to black's exacting standards.
This unit test only runs on Python 3.6 or higher, because black itself doesn't run on 3.5.
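For illustration, a minimal sketch of how such a test could work (the real
tests/test_black.py may differ), using pytest's skipif marker and black's
--check mode:

    # Hypothetical sketch - not necessarily the real tests/test_black.py.
    import subprocess
    import sys
    from pathlib import Path

    import pytest

    code_root = Path(__file__).parent.parent


    @pytest.mark.skipif(
        sys.version_info[:2] < (3, 6), reason="black requires Python 3.6 or higher"
    )
    def test_code_is_black_formatted():
        # "black --check" exits non-zero if any file would be reformatted
        result = subprocess.run(
            [sys.executable, "-m", "black", "--check", "tests", "datasette", "setup.py"],
            cwd=str(code_root),
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        assert result.returncode == 0, result.stdout.decode("utf-8")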
At the moment it's not easy to tell whether the hook is being called
in (for example) the row or table view, as in both cases the
`database` and `table` parameters are provided.
This passes the `view_name` added in #441 to the `extra_body_script`
hook.
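For example, a hypothetical plugin could now vary its script per view. The
sketch below assumes the usual pattern of Datasette's pluggy-based hooks,
where a plugin declares only the arguments it needs:

    # Hypothetical plugin sketch: view_name distinguishes the row view from
    # the table view, even though both supply database and table.
    from datasette import hookimpl


    @hookimpl
    def extra_body_script(view_name, database, table):
        if view_name == "row":
            return "console.log('row page for %s/%s');" % (database, table)
        if view_name == "table":
            return "console.log('table page for %s/%s');" % (database, table)
        return ""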
Thanks @russss!
* Add register_output_renderer hook
This changeset refactors out the JSON renderer and then adds a hook and
dispatcher system to allow custom output renderers to be registered.
The CSV output renderer is untouched because supporting streaming
renderers through this system would be significantly more complex, and
probably not worthwhile.
We can't simply allow hooks to be called at request time because we need
a list of supported file extensions when the request is being routed in
order to resolve ambiguous database/table names. So, renderers need to
be registered at startup.
I've tried to make this API independent of Sanic's request/response
objects so that this can remain stable during the switch to ASGI. I'm
using dictionaries to keep it simple and to make adding additional
options in the future easy.
Fixes #440
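A hypothetical sketch of a plugin registering a renderer through this hook,
assuming the dictionary-based registration described above; the exact callback
signature here is an assumption, not a definitive API reference:

    # Illustrative only - callback arguments and data keys are assumptions.
    import json

    from datasette import hookimpl


    def render_demo(args, data, view_name):
        # Return a plain dictionary rather than a framework response object,
        # keeping the renderer decoupled from Sanic.
        return {
            "body": json.dumps(
                {"view_name": view_name, "rows": data["rows"]}, default=repr
            ),
            "content_type": "application/json; charset=utf-8",
            "status_code": 200,
        }


    @hookimpl
    def register_output_renderer(datasette):
        # Registered at startup so the .demo extension is already known
        # when requests are routed.
        return {"extension": "demo", "callback": render_demo}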
Prior to this commit Datasette would calculate the content hash of every
database and redirect to a URL containing that hash, like so:
https://v0-27.datasette.io/fixtures => https://v0-27.datasette.io/fixtures-dd88475
This assumed that all databases were opened in immutable mode and were not
expected to change.
This will be changing as a result of #419 - so this commit takes the first step
in implementing that change by changing this default behaviour. Datasette will
now only redirect hash-free URLs under two circumstances:
* The new `hash_urls` config option is set to true (it defaults to false).
* The user passes `?_hash=1` in the URL
The extra_css_urls and extra_js_urls hooks now take additional optional
parameters.
Also refactored them out of the Datasette class and into RenderMixin.
Plus improved plugin documentation to explicitly list parameters.
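For illustration, a hypothetical plugin taking advantage of the optional
parameters; the specific parameter names used here (template, database, table)
are assumptions - the plugin documentation lists the definitive set:

    # Hypothetical sketch - parameter names are assumptions, see the plugin docs.
    from datasette import hookimpl


    @hookimpl
    def extra_css_urls(template, database, table):
        # Only load the extra stylesheet on table pages
        if template == "table.html":
            return ["https://example.com/table-extras.css"]
        return []


    @hookimpl
    def extra_js_urls():
        # The extra parameters are optional - a plugin can ignore them entirely
        return ["https://example.com/table-extras.js"]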
* table.csv?_stream=1 to download all rows - refs #266
This option causes Datasette to serve ALL rows in the table, by internally
following the _next= pagination links and serving everything out as a stream.
Also added a new config option, allow_csv_stream, which can be used to disable
this feature.
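A rough sketch of the idea (not Datasette's actual implementation): keep
fetching pages and yielding their rows until there is no _next token left.
fetch_page here is a hypothetical helper.

    # Illustrative sketch only - fetch_page is a hypothetical helper that
    # returns (rows, next_token) for a given pagination token.
    def stream_all_rows(fetch_page):
        next_token = None
        while True:
            rows, next_token = fetch_page(next_token)
            for row in rows:
                yield row
            if next_token is None:
                break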
* New config option max_csv_mb limiting size of CSV export
These new querystring arguments can be used to request expanded foreign keys
in both JSON and CSV formats.
?_labels=on turns on expansions for ALL foreign key columns
?_label=COLUMN1&_label=COLUMN2 can be used to pick specific columns to expand
e.g. `Street_Tree_List.json?_label=qSpecies&_label=qLegalStatus`
    {
        "rowid": 233,
        "TreeID": 121240,
        "qLegalStatus": {
            "value": 2,
            "label": "Private"
        },
        "qSpecies": {
            "value": 16,
            "label": "Sycamore"
        },
        "qAddress": "91 Commonwealth Ave",
        ...
    }
The labels option also works for the HTML and CSV views.
HTML defaults to `?_labels=on`, so if you pass `?_labels=off` you can disable
foreign key expansion entirely - or you can use `?_label=COLUMN` to request
just specific columns.
If you expand labels on CSV you get additional columns in the output:
`/Street_Tree_List.csv?_label=qLegalStatus`
rowid,TreeID,qLegalStatus,qLegalStatus_label...
1,141565,1,Permitted Site...
2,232565,2,Undocumented...
I also refactored the existing foreign key expansion code.
Closes #233. Refs #266.
Tables and custom SQL query results can now be exported as CSV.
The easiest way to do this is to use the .csv extension, e.g.
/test_tables/facet_cities.csv
By default this is served as Content-Type: text/plain so you can see it in
your browser. If you want to download the file (using text/csv and with an
appropriate Content-Disposition: attachment header) you can do so like this:
/test_tables/facet_cities.csv?_dl=1
We link to the CSV and downloadable CSV URLs from the table and query pages.
The links use ?_size=max and so by default will return 1,000 rows.
Also fixes #303 - table names ending in .json or .csv are now detected and
URLs are generated that look like this instead:
/test_tables/table%2Fwith%2Fslashes.csv?_format=csv
The ?_format= option is available for everything else too, but we link to the
.csv / .json versions in most cases because they are aesthetically pleasing.
If the user requests some _facet= options that do not successfully execute in
the configured facet_time_limit_ms, we now show a warning message like this:
These facets timed out: rowid, Title
To build this I had to clean up our SQLite interrupted logic. We now raise a
custom InterruptedError exception when SQLite terminates due to exceeding a
time limit.
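A simplified sketch of that mechanism, assuming the standard sqlite3
progress-handler approach to enforcing a time limit; the names here are
illustrative rather than Datasette's actual code:

    # Simplified, illustrative sketch - not Datasette's actual implementation.
    import sqlite3
    import time


    class InterruptedError(Exception):
        """Custom exception raised when SQLite gives up because the time
        limit was exceeded (distinct from the built-in InterruptedError)."""


    def execute_with_time_limit(conn, sql, ms):
        deadline = time.time() + ms / 1000
        # The handler runs every N SQLite opcodes; returning a truthy value
        # interrupts the current query.
        conn.set_progress_handler(lambda: time.time() > deadline, 1000)
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.OperationalError as e:
            if "interrupt" in str(e):
                raise InterruptedError(sql)
            raise  # a genuine SQL error, not a timeout
        finally:
            conn.set_progress_handler(None, 1000)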
In implementing this I found and fixed a logic error: in some cases invalid
SQL was being generated for our faceting calculations, and the resulting
sqlite3.OperationalError had been incorrectly captured and treated as a
timeout.
Refs #255. Closes #269.