Datasette


An open source multi-tool for exploring and publishing data

Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API.

Datasette is aimed at data journalists, museum curators, archivists, local governments, and anyone else who has data they want to share with the world.

Explore a demo, watch a video about the project, or try it out by uploading and publishing your own CSV data.

Want to stay up-to-date with the project? Subscribe to the Datasette Weekly newsletter for tips, tricks and news on what's new in the Datasette ecosystem.

Installation

If you are on a Mac, Homebrew is the easiest way to install Datasette:

brew install datasette

You can also install it using pip or pipx:

pip install datasette
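
Using pipx instead keeps Datasette and its plugins in their own isolated environment:

pipx install datasette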

Datasette requires Python 3.6 or higher. We also have detailed installation instructions covering other options such as Docker.
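
If you prefer Docker, the official datasetteproject/datasette image can serve a database mounted from your current directory. A sketch, assuming a file called fixtures.db (swap in your own database):

docker run -p 8001:8001 -v $(pwd):/mnt \
    datasetteproject/datasette \
    datasette -p 8001 -h 0.0.0.0 /mnt/fixtures.db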

Basic usage

datasette serve path/to/database.db

This will start a web server on port 8001; visit http://localhost:8001/ to access the web interface.

serve is the default subcommand; you can omit it if you like.
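
For example, the following two commands are equivalent:

datasette serve path/to/database.db
datasette path/to/database.db

You can also pass -p/--port to listen on a different port:

datasette path/to/database.db -p 8080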

Use Chrome on OS X? You can run datasette against your browser history like so:

 datasette ~/Library/Application\ Support/Google/Chrome/Default/History

Now visiting http://localhost:8001/History/downloads will show you a web interface to browse your downloads data:

Screenshot: the downloads table rendered by Datasette
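
The accompanying API serves the same data as JSON: append .json to the path. For example, with the server above still running:

curl http://localhost:8001/History/downloads.json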

metadata.json

If you want to include licensing and source information in the generated Datasette website, you can do so using a JSON file that looks something like this:

{
    "title": "Five Thirty Eight",
    "license": "CC Attribution 4.0 License",
    "license_url": "http://creativecommons.org/licenses/by/4.0/",
    "source": "fivethirtyeight/data on GitHub",
    "source_url": "https://github.com/fivethirtyeight/data"
}

Save this in metadata.json and run Datasette like so:

datasette serve fivethirtyeight.db -m metadata.json

The license and source information will be displayed on the index page and in the footer. They will also be included in the JSON produced by the API.
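
Metadata can also be nested to target a specific database or table. A minimal sketch, assuming your database file is named fivethirtyeight.db:

{
    "title": "Five Thirty Eight",
    "databases": {
        "fivethirtyeight": {
            "source": "fivethirtyeight/data on GitHub",
            "source_url": "https://github.com/fivethirtyeight/data"
        }
    }
}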

datasette publish

If you have Heroku or Google Cloud Run configured, Datasette can deploy one or more SQLite databases to the internet with a single command:

datasette publish heroku database.db

Or:

datasette publish cloudrun database.db

This will create a Docker image containing both the Datasette application and the specified SQLite database files. It will then deploy that image to Heroku or Cloud Run and give you a URL to access the resulting website and API.
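
publish accepts additional options, so you can bundle a metadata file and bake plugins into the deployed image. A sketch for Cloud Run (datasette-vega is just an example plugin, and my-datasette is a hypothetical service name):

datasette publish cloudrun database.db \
    --service my-datasette \
    -m metadata.json \
    --install datasette-vega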

See Publishing data in the documentation for more details.