Mirrored from https://github.com/bellingcat/auto-archiver
Merge branch 'main' into tests/add_module_tests

commit 3fce593aad
@@ -32,3 +32,4 @@ archived/
 dist*
 docs/_build/
 docs/source/autoapi/
+docs/source/modules/autogen/
@@ -15,7 +15,7 @@ build:
     # https://python-poetry.org/docs/managing-dependencies/#dependency-groups
     # VIRTUAL_ENV needs to be set manually for now.
     # See https://github.com/readthedocs/readthedocs.org/pull/11152/
-    - VIRTUAL_ENV=$READTHEDOCS_VIRTUALENV_PATH poetry install --only docs
+    - VIRTUAL_ENV=$READTHEDOCS_VIRTUALENV_PATH poetry install --with docs

 sphinx:
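The change from `--only docs` to `--with docs` matters for the Read the Docs build: Poetry's `--only` installs just the docs dependency group and skips the project's main dependencies, while `--with` installs both, so Sphinx AutoAPI can import the package. In short (standard Poetry flags):

```shell
# --only docs: only the docs group, main dependencies skipped
# --with docs: main dependencies plus the docs group,
#              so the package itself is importable during the build
poetry install --with docs
```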
@@ -0,0 +1,49 @@
# Contributing to Auto Archiver

Thank you for your interest in contributing to Auto Archiver! Your contributions help improve the project and make it more useful for everyone. Please follow the guidelines below to ensure a smooth collaboration.

### 1. Reporting a Bug

If you encounter a bug, please create an issue on GitHub with the following details:

* Describe the bug: Provide a clear and concise description of the issue.
* Steps to reproduce: Include the steps needed to reproduce the bug.
* Expected behavior: Describe what you expected to happen.
* Actual behavior: Explain what actually happened.
* Screenshots/logs: If applicable, attach screenshots or logs to help diagnose the problem.
* Environment: Mention the OS, Python version, and any other relevant details.

### 2. Writing a Patch/Fix and Submitting Pull Requests

If you'd like to fix a bug or improve existing code:

1. Open a pull request on GitHub and link it to the relevant issue.
2. Make sure to document your pull request with a clear description of what changes were made and why.
3. Wait for review and make any requested changes.

### 3. Creating New Modules

If you want to add a new module to Auto Archiver:

1. Ensure your module follows the existing [coding style and project structure](https://auto-archiver.readthedocs.io/en/latest/development/creating_modules.html).
2. Write clear documentation explaining what your module does and how to use it.
3. Ideally, include unit tests for your module!
4. Follow the steps in Section 2 to submit a pull request.

### 4. Do You Have Questions About the Source Code?

If you have any questions about how the source code works or need help using Auto Archiver:

📝 Check the [Auto Archiver](https://auto-archiver.readthedocs.io/en/latest/) documentation.

👉 Ask your questions in the [Bellingcat Discord](https://www.bellingcat.com/follow-bellingcat-on-social-media/).

### 5. Do You Want to Contribute to the Documentation?

We welcome contributions to the documentation!

📖 Please read [Contributing to the Auto Archiver Documentation](https://auto-archiver.readthedocs.io/en/latest/development/docs.html) to learn how you can help improve the project's documentation.

------------------

Thank you for contributing to Auto Archiver! 🚀
README.md
@@ -9,327 +9,31 @@
<!-- [](https://vk-url-scraper.readthedocs.io/en/latest/?badge=latest) -->

Auto Archiver is a Python tool to automatically archive content on the web in a secure and verifiable way. It takes URLs from different sources (e.g. a CSV file, Google Sheets, the command line, etc.) and archives the content of each one. It can archive social media posts, videos, images and webpages. Content can be enriched, then saved either locally or remotely (S3 bucket, Google Drive). The status of the archiving process can be appended to a CSV report, or, if using Google Sheets, back to the original sheet.

<div class="hidden_rtd">

**[See the Auto Archiver documentation for more information.](https://auto-archiver.readthedocs.io/en/latest/)**

</div>

Read the [article about Auto Archiver on bellingcat.com](https://www.bellingcat.com/resources/2022/09/22/preserve-vital-online-content-with-bellingcats-auto-archiver-tool/).

Python tool to automatically archive social media posts, videos, and images from Google Sheets, the console, and more. Uses different archivers depending on the platform, and can save content to local storage, an S3 bucket (Digital Ocean Spaces, AWS, ...), and Google Drive. If using Google Sheets as the source of links, it will be updated with information about the archived content. It can be run manually or on an automated basis.

## Installation

There are 3 ways to use the auto-archiver:
1. (easiest installation) via docker
2. (local python install) `pip install auto-archiver`
3. (legacy/development) clone and manually install from the repo (see the legacy [tutorial video](https://youtu.be/VfAhcuV2tLQ))

View the [Installation Guide](installation/installation.md) for full instructions.

But **you always need a configuration/orchestration file**, which is where you'll configure where/what/how to archive. Make sure you read [orchestration](#orchestration).

To get started quickly using Docker:

`docker pull bellingcat/auto-archiver && docker run bellingcat/auto-archiver`

## How to install and run the auto-archiver

Or pip:

### Option 1 - docker

`pip install auto-archiver && auto-archiver --help`

[](https://hub.docker.com/r/bellingcat/auto-archiver)

## Contributing

Docker works like a virtual machine running inside your computer: it isolates everything and makes installation simple. Since it is an isolated environment, when you need to pass it your orchestration file or get downloaded media out of docker, you will need to connect folders on your machine with folders inside docker using the `-v` volume flag.

We welcome contributions to the Auto Archiver project! See the [Contributing Guide](https://auto-archiver.readthedocs.io/en/latest/contributing.html) for how to get involved!

1. install [docker](https://docs.docker.com/get-docker/)
2. pull the auto-archiver docker [image](https://hub.docker.com/r/bellingcat/auto-archiver) with `docker pull bellingcat/auto-archiver`
3. run the docker image locally in a container: `docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml`. Breaking this command down:
   1. `docker run` tells docker to start a new container (an instance of the image)
   2. `--rm` makes sure this container is removed after execution (less garbage locally)
   3. `-v $PWD/secrets:/app/secrets` - your secrets folder
      1. `-v` is a volume flag, which means a folder that you have on your computer will be connected to a folder inside the docker container
      2. `$PWD/secrets` points to a `secrets/` folder in your current working directory (where your console points to); we use this folder as a best practice to hold all the secrets/tokens/passwords/... you use
      3. `/app/secrets` is the path inside the docker container where this folder will be mounted
   4. `-v $PWD/local_archive:/app/local_archive` - (optional) if you use local_storage
      1. `-v` same as above, this is a volume instruction
      2. `$PWD/local_archive` is a folder `local_archive/` in case you want to archive locally and have the files accessible outside docker
      3. `/app/local_archive` is a folder inside docker that you can reference in your orchestration.yml file
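As a concrete illustration of the volume flags, here is a hypothetical one-off invocation that mounts only the secrets folder and archives a single placeholder URL via the `cli_feeder` override described under [orchestration](#orchestration):

```bash
# a sketch, assuming your orchestration file is set up to use the cli_feeder;
# the URL is a placeholder
docker run --rm -v $PWD/secrets:/app/secrets bellingcat/auto-archiver \
  --config secrets/orchestration.yaml --cli_feeder.urls="https://example.com/some-post"
```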
### Option 2 - python package

<details><summary><code>Python package instructions</code></summary>

1. make sure you have python 3.10 or higher installed
2. install the package with your preferred package manager: `pip/pipenv/conda install auto-archiver` or `poetry add auto-archiver`
3. test it's installed with `auto-archiver --help`
4. run it with your orchestration file, passing any flags you want on the command line: `auto-archiver --config secrets/orchestration.yaml` if your orchestration file is inside `secrets/`, which we advise

You will also need [ffmpeg](https://www.ffmpeg.org/), [firefox](https://www.mozilla.org/en-US/firefox/new/) and [geckodriver](https://github.com/mozilla/geckodriver/releases), and optionally [fonts-noto](https://fonts.google.com/noto), similar to the local installation.

</details>

### Option 3 - local installation
This can also be used for development.

<details><summary><code>Legacy instructions, only use if docker/package is not an option</code></summary>

Install the following locally:
1. [ffmpeg](https://www.ffmpeg.org/) must be installed locally for this tool to work.
2. [firefox](https://www.mozilla.org/en-US/firefox/new/) and [geckodriver](https://github.com/mozilla/geckodriver/releases) on a PATH folder like `/usr/local/bin`.
3. [Poetry](https://python-poetry.org/docs/#installation) for dependency management and packaging.
4. (optional) [fonts-noto](https://fonts.google.com/noto) to deal with multiple unicode characters during selenium/geckodriver's screenshots: `sudo apt install fonts-noto -y`.

Clone and run:
1. `git clone https://github.com/bellingcat/auto-archiver`
2. `poetry install`
3. `poetry run python -m src.auto_archiver --config secrets/orchestration.yaml`

Note: add the plugin [poetry-plugin-shell](https://github.com/python-poetry/poetry-plugin-shell) and run `poetry shell` to activate the virtual environment.
This allows you to run the auto-archiver without the `poetry run` prefix.

</details><br/>

# Orchestration
The archiver work is orchestrated by the following workflow (we call each one a **step**):
1. **Feeder** gets the links (from a spreadsheet, from the console, ...)
2. **Archiver** tries to archive the link (twitter, youtube, ...)
3. **Enricher** adds more info to the content (hashes, thumbnails, ...)
4. **Formatter** creates a report from all the archived content (HTML, PDF, ...)
5. **Database** knows what's been archived and also stores the archive result (spreadsheet, CSV, or just the console)

To set up an auto-archiver instance, create an `orchestration.yaml` which contains the workflow you would like. We advise you put this file into a `secrets/` folder and do not share it with others, because it will contain passwords and other secrets.

The structure of the orchestration file is split into 2 parts: `steps` (what **steps** to use) and `configurations` (how those steps should behave). Here's a simplification:
```yaml
# orchestration.yaml content
steps:
  feeder: gsheet_feeder
  archivers: # order matters
    - youtubedl_archiver
  enrichers:
    - thumbnail_enricher
  formatter: html_formatter
  storages:
    - local_storage
  databases:
    - gsheet_db

configurations:
  gsheet_feeder:
    sheet: "your google sheet name"
    header: 2 # row with header for your sheet
  # ... configurations for the other steps here ...
```

To see all the available `steps` (which archivers, storages, databases, ... exist), check the [example.orchestration.yaml](example.orchestration.yaml).

All the `configurations` in the `orchestration.yaml` file (you can name it differently, but you need to pass it in the `--config FILENAME` argument) can be seen in the console by using the `--help` flag. They can also be overwritten; for example, if you are using the `cli_feeder` to archive from the command line and want to provide the URLs, you should do:

```bash
auto-archiver --config secrets/orchestration.yaml --cli_feeder.urls="url1,url2,url3"
```

Here's the complete workflow that the auto-archiver goes through:
```{mermaid}
graph TD
    s((start)) --> F(fa:fa-table Feeder)
    F -->|get and clean URL| D1{fa:fa-database Database}
    D1 -->|is already archived| e((end))
    D1 -->|not yet archived| a(fa:fa-download Archivers)
    a -->|got media| E(fa:fa-chart-line Enrichers)
    E --> S[fa:fa-box-archive Storages]
    E --> Fo(fa:fa-code Formatter)
    Fo --> S
    Fo -->|update database| D2(fa:fa-database Database)
    D2 --> e
```

## Orchestration checklist
Use this checklist to make sure you did all the required steps:
* [ ] you have a `/secrets` folder with all your configuration files, including
  * [ ] an orchestration file, e.g. `orchestration.yaml`, pointing to the correct location of other files
  * [ ] (optional, if you use Google Sheets) a `service_account.json` (see [how-to](https://gspread.readthedocs.io/en/latest/oauth2.html#for-bots-using-service-account))
  * [ ] (optional, for telegram) an `anon.session`, which appears after the first run, where you log in to telegram
    * if you use private channels you need to add `channel_invites` and set `join_channels=true` at least once
  * [ ] (optional, for VK) a `vk_config.v2.json`
  * [ ] (optional, for Google Drive storage) a `gd-token.json` (see [help script](scripts/create_update_gdrive_oauth_token.py))
  * [ ] (optional, for instagram) an `instaloader.session` file, which appears after the first run and login to instagram
  * [ ] (optional, for browsertrix) a `profile.tar.gz` file
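Putting the checklist together, a populated `secrets/` folder might look like this (an illustrative sketch assuming Google Sheets, Telegram, VK, and browsertrix are all in use; only the orchestration file is always required):

```bash
ls secrets/
# orchestration.yaml   service_account.json   anon.session
# gd-token.json        vk_config.v2.json      profile.tar.gz
```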
#### Example invocations
The recommended way to run the auto-archiver is through Docker. The invocations below will run the auto-archiver Docker image using a configuration file that you have specified.

```bash
# all the configurations come from ./secrets/orchestration.yaml
docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml

# uses the same configurations but for another google docs sheet
# with a header on row 2 and with some different column names
# notice that columns is a dictionary, so you need to pass it as JSON; it will override only the values provided
docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml --gsheet_feeder.sheet="use it on another sheets doc" --gsheet_feeder.header=2 --gsheet_feeder.columns='{"url": "link"}'

# all the configurations come from orchestration.yaml, and s3 files are marked as private
docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml --s3_storage.private=1
```

The auto-archiver can also be run locally, if prerequisites are correctly configured. Equivalent invocations are below.

```bash
# all the configurations come from ./secrets/orchestration.yaml
auto-archiver --config secrets/orchestration.yaml

# uses the same configurations but for another google docs sheet
# with a header on row 2 and with some different column names
# notice that columns is a dictionary, so you need to pass it as JSON; it will override only the values provided
auto-archiver --config secrets/orchestration.yaml --gsheet_feeder.sheet="use it on another sheets doc" --gsheet_feeder.header=2 --gsheet_feeder.columns='{"url": "link"}'

# all the configurations come from orchestration.yaml, and s3 files are marked as private
auto-archiver --config secrets/orchestration.yaml --s3_storage.private=1
```

### Extra notes on configuration
#### Google Drive
To use Google Drive storage you need the id of the shared folder in the `config.yaml` file. The folder must be shared with the service account, e.g. `autoarchiverservice@auto-archiver-111111.iam.gserviceaccount.com`, and then you can use `--storage=gd`.

#### Telethon + Instagram with telegram bot
The first time you run, you will be prompted to authenticate with the associated phone number; alternatively, you can put your `anon.session` in the root.

#### Atlos
When integrating with [Atlos](https://atlos.org), you will need to provide an API token in your configuration. You can learn more about Atlos and how to get an API token [here](https://docs.atlos.org/technical/api). You will have to provide this token to the `atlos_feeder`, `atlos_storage`, and `atlos_db` steps in your orchestration file. If you use a custom or self-hosted Atlos instance, you can also specify the `atlos_url` option to point to your custom instance's URL. For example:

```yaml
# orchestration.yaml content
steps:
  feeder: atlos_feeder
  archivers: # order matters
    - youtubedl_archiver
  enrichers:
    - thumbnail_enricher
    - hash_enricher
  formatter: html_formatter
  storages:
    - atlos_storage
  databases:
    - console_db
    - atlos_db

configurations:
  atlos_feeder:
    atlos_url: "https://platform.atlos.org" # optional
    api_token: "...your API token..."
  atlos_db:
    atlos_url: "https://platform.atlos.org" # optional
    api_token: "...your API token..."
  atlos_storage:
    atlos_url: "https://platform.atlos.org" # optional
    api_token: "...your API token..."
  hash_enricher:
    algorithm: "SHA-256"
```

## Running on Google Sheets Feeder (gsheet_feeder)
The `--gsheet_feeder.sheet` property is the name of the Google Sheet to check for URLs.
This sheet must have been shared with the Google Service account used by `gspread`.
This sheet must also have specific columns (case-insensitive) in the `header` row, as specified in [gsheet_feeder.__manifest__.py](src/auto_archiver/modules/gsheet_feeder/__manifest__.py). The default names of these columns and their purpose are:

Inputs:

* **Link** *(required)*: the URL of the post to archive
* **Destination folder**: custom folder for the archived file (regardless of storage)

Outputs:
* **Archive status** *(required)*: status of the archive operation
* **Archive location**: URL of the archived post
* **Archive date**: date archived
* **Thumbnail**: embeds a thumbnail for the post in the spreadsheet
* **Timestamp**: timestamp of the original post
* **Title**: post title
* **Text**: post text
* **Screenshot**: link to a screenshot of the post
* **Hash**: hash of the archived HTML file (which contains hashes of the post's media) - for checksums/verification
* **Perceptual Hash**: perceptual hashes of found images - these can be used for de-duplication of content
* **WACZ**: link to a WACZ web archive of the post
* **ReplayWebpage**: link to a ReplayWebpage viewer of the WACZ archive

For example, this is a spreadsheet configured with all of the columns for the auto archiver and a few URLs to archive. (Note that the column names are not case sensitive.)

*[screenshot]*

Now the auto archiver can be invoked, with this command in this example: `docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver:dockerize --config secrets/orchestration-global.yaml --gsheet_feeder.sheet "Auto archive test 2023-2"`. Note that the sheet name has been overridden/specified in the command line invocation.

When the auto archiver starts running, it updates the "Archive status" column.

*[screenshot]*

The links are downloaded and archived, and the spreadsheet is updated to the following:

*[screenshot]*

Note that the first row is skipped, as it is assumed to be a header row (`--gsheet_feeder.header=1`, and you can change it if you use more rows above). Rows with an empty URL column, or a non-empty archive column, are also skipped. All sheets in the document will be checked.

The "archive location" link contains the path of the archived file, in local storage, S3, or in Google Drive.

*[screenshot]*

---
## Development
Use `python -m src.auto_archiver --config secrets/orchestration.yaml` to run from the local development environment.
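The same entry point can also be invoked through Poetry without activating a shell first:

```bash
# equivalent invocation via Poetry's managed environment
poetry run python -m src.auto_archiver --config secrets/orchestration.yaml
```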
### Testing

Tests are split using `pytest.mark` into 'core' and 'download' tests. Download tests will hit the network and make API calls (e.g. Twitter, Bluesky etc.) and should be run regularly to make sure that APIs have not changed.

Tests can be run as follows:
```
# run core tests
pytest -ra -v -m "not download" # or poetry run pytest -ra -v -m "not download"

# run download tests
pytest -ra -v -m "download" # or poetry run pytest -ra -v -m "download"

# run all tests
pytest -ra -v # or poetry run pytest -ra -v
```
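To preview which tests a marker expression selects without hitting the network, pytest's collection mode is handy (standard pytest flags, shown as a convenience sketch):

```
# list the download-marked tests without running them
pytest -m "download" --collect-only -q
```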
#### Docker development
Working with docker locally:

* `docker compose up` to build the first time and run a local image with the settings in `secrets/orchestration.yaml`
* To modify/pass additional command line args, use `docker compose run auto-archiver --config secrets/orchestration.yaml [OTHER ARGUMENTS]`
* To rebuild after code changes, just pass the `--build` flag, e.g. `docker compose up --build`

Manual release to Docker Hub:

* `docker image tag auto-archiver bellingcat/auto-archiver:latest`
* `docker push bellingcat/auto-archiver`
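Note that the tag/push pair above assumes an image named `auto-archiver` already exists locally; one way to produce it (a sketch, not necessarily the project's release path) is:

```bash
# build a local image from the repo root before tagging and pushing
docker build -t auto-archiver .
```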
### Building the Docs

The documentation is built using [Sphinx](https://www.sphinx-doc.org/en/master/) and [AutoAPI](https://sphinx-autoapi.readthedocs.io/en/latest/) and hosted on ReadTheDocs.
To build the documentation locally, run the following commands:

**Install required dependencies:**
- Install the docs group of dependencies:
```shell
# only the docs dependencies
poetry install --only docs

# or for all dependencies
poetry install
```
- Either use [poetry-plugin-shell](https://github.com/python-poetry/poetry-plugin-shell) to activate the virtual environment: `poetry shell`
- Or prepend the following commands with `poetry run`

**Create the documentation:**
- Build the documentation:
```
# Using makefile (Linux/macOS):
make -C docs html

# or using sphinx directly (Windows/Linux/macOS):
sphinx-build -b html docs/source docs/_build/html
```
- If you make significant changes and want a fresh build, run `make -C docs clean` to remove the old build files.

**Viewing the documentation:**
```shell
# to open the documentation in your browser
open docs/_build/html/index.html

# or run autobuild to automatically update the documentation when you make changes
sphinx-autobuild docs/source docs/_build/html
```

#### RELEASE
* update the version in [version.py](src/auto_archiver/version.py)
* go to github releases > new release > use `vx.y.z` for matching version notation
* the package is automatically updated on PyPI
* the docker image is automatically pushed to Docker Hub
@@ -0,0 +1,4 @@
.hidden_rtd {
    display: none;
}
@@ -0,0 +1,46 @@
API Reference
=============

These pages are intended for developers of the `auto-archiver` package,
and include documentation on the core classes and functions used by
the auto-archiver.

Core Classes
------------

.. toctree::
   :titlesonly:

   {% for page in pages|selectattr("is_top_level_object") %}
   {% if page.name == 'core' %}
   {{ page.include_path }}
   {% endif %}
   {% endfor %}

Util Functions
--------------

.. toctree::
   :titlesonly:

   {% for page in pages|selectattr("is_top_level_object") %}
   {% if page.name == 'utils' %}
   {{ page.include_path }}
   {% endif %}
   {% endfor %}

Core Modules
------------

.. toctree::
   :titlesonly:

   {% for page in pages|selectattr("is_top_level_object") %}
   {% if page.name != 'core' and page.name != 'utils' %}
   {{ page.include_path }}
   {% endif %}
   {% endfor %}
@@ -0,0 +1 @@
{% extends "python/data.rst" %}
@@ -0,0 +1,104 @@
{% if obj.display %}
{% if is_own_page %}
{{ obj.id }}
{{ "=" * obj.id | length }}

{% endif %}
{% set visible_children = obj.children|selectattr("display")|list %}
{% set own_page_children = visible_children|selectattr("type", "in", own_page_types)|list %}
{% if is_own_page and own_page_children %}
.. toctree::
   :hidden:

{% for child in own_page_children %}
   {{ child.include_path }}
{% endfor %}

{% endif %}
.. py:{{ obj.type }}:: {% if is_own_page %}{{ obj.id }}{% else %}{{ obj.short_name }}{% endif %}{% if obj.args %}({{ obj.args }}){% endif %}

{% for (args, return_annotation) in obj.overloads %}
   {{ " " * (obj.type | length) }} {{ obj.short_name }}{% if args %}({{ args }}){% endif %}

{% endfor %}
{% if obj.bases %}
{% if "show-inheritance" in autoapi_options %}

   Bases: {% for base in obj.bases %}{{ base|link_objs }}{% if not loop.last %}, {% endif %}{% endfor %}
{% endif %}


{% if "show-inheritance-diagram" in autoapi_options and obj.bases != ["object"] %}
   .. autoapi-inheritance-diagram:: {{ obj.obj["full_name"] }}
      :parts: 1
{% if "private-members" in autoapi_options %}
      :private-bases:
{% endif %}

{% endif %}
{% endif %}
{% if obj.docstring %}

   {{ obj.docstring|indent(3) }}
{% endif %}
{% for obj_item in visible_children %}
{% if obj_item.type not in own_page_types %}

   {{ obj_item.render()|indent(3) }}
{% endif %}
{% endfor %}
{% if is_own_page and own_page_children %}
{% set visible_attributes = own_page_children|selectattr("type", "equalto", "attribute")|list %}
{% if visible_attributes %}
Attributes
----------

.. autoapisummary::

{% for attribute in visible_attributes %}
   {{ attribute.id }}
{% endfor %}


{% endif %}
{% set visible_exceptions = own_page_children|selectattr("type", "equalto", "exception")|list %}
{% if visible_exceptions %}
Exceptions
----------

.. autoapisummary::

{% for exception in visible_exceptions %}
   {{ exception.id }}
{% endfor %}


{% endif %}
{% set visible_classes = own_page_children|selectattr("type", "equalto", "class")|list %}
{% if visible_classes %}
Classes
-------

.. autoapisummary::

{% for klass in visible_classes %}
   {{ klass.id }}
{% endfor %}


{% endif %}
{% set visible_methods = own_page_children|selectattr("type", "equalto", "method")|list %}
{% if visible_methods %}
Methods
-------

.. autoapisummary::

{% for method in visible_methods %}
   {{ method.id }}
{% endfor %}


{% endif %}
{% endif %}
{% endif %}
@@ -0,0 +1,38 @@
{% if obj.display %}
{% if is_own_page %}
{{ obj.id }}
{{ "=" * obj.id | length }}

{% endif %}
.. py:{{ obj.type }}:: {% if is_own_page %}{{ obj.id }}{% else %}{{ obj.name }}{% endif %}
{% if obj.annotation is not none %}

   :type: {% if obj.annotation %} {{ obj.annotation }}{% endif %}
{% endif %}
{% if obj.value is not none %}

{% if obj.value.splitlines()|count > 1 %}
   :value: Multiline-String

   .. raw:: html

      <details><summary>Show Value</summary>

   .. code-block:: python

      {{ obj.value|indent(width=6,blank=true) }}

   .. raw:: html

      </details>

{% else %}
   :value: {{ obj.value|truncate(100) }}
{% endif %}
{% endif %}

{% if obj.docstring %}

   {{ obj.docstring|indent(3) }}
{% endif %}
{% endif %}
@@ -0,0 +1 @@
{% extends "python/class.rst" %}
@@ -0,0 +1,21 @@
{% if obj.display %}
{% if is_own_page %}
{{ obj.id }}
{{ "=" * obj.id | length }}

{% endif %}
.. py:function:: {% if is_own_page %}{{ obj.id }}{% else %}{{ obj.short_name }}{% endif %}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %}
{% for (args, return_annotation) in obj.overloads %}

   {%+ if is_own_page %}{{ obj.id }}{% else %}{{ obj.short_name }}{% endif %}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %}
{% endfor %}
{% for property in obj.properties %}

   :{{ property }}:
{% endfor %}

{% if obj.docstring %}

   {{ obj.docstring|indent(3) }}
{% endif %}
{% endif %}
@@ -0,0 +1,21 @@
{% if obj.display %}
{% if is_own_page %}
{{ obj.id }}
{{ "=" * obj.id | length }}

{% endif %}
.. py:method:: {% if is_own_page %}{{ obj.id }}{% else %}{{ obj.short_name }}{% endif %}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %}
{% for (args, return_annotation) in obj.overloads %}

   {%+ if is_own_page %}{{ obj.id }}{% else %}{{ obj.short_name }}{% endif %}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %}
{% endfor %}
{% for property in obj.properties %}

   :{{ property }}:
{% endfor %}

{% if obj.docstring %}

   {{ obj.docstring|indent(3) }}
{% endif %}
{% endif %}
@@ -0,0 +1,156 @@
{% if obj.display %}
{% if is_own_page %}
{{ obj.id }}
{{ "=" * obj.id|length }}

.. py:module:: {{ obj.name }}

{% if obj.docstring %}
.. autoapi-nested-parse::

   {{ obj.docstring|indent(3) }}

{% endif %}

{% block submodules %}
{% set visible_subpackages = obj.subpackages|selectattr("display")|list %}
{% set visible_submodules = obj.submodules|selectattr("display")|list %}
{% set visible_submodules = (visible_subpackages + visible_submodules)|sort %}
{% if visible_submodules %}
Submodules
----------

.. toctree::
   :maxdepth: 1

{% for submodule in visible_submodules %}
   {{ submodule.include_path }}
{% endfor %}


{% endif %}
{% endblock %}
{% block content %}
{% set visible_children = obj.children|selectattr("display")|list %}
{% if visible_children %}
{% set visible_attributes = visible_children|selectattr("type", "equalto", "data")|list %}
{% if visible_attributes %}
{% if "attribute" in own_page_types or "show-module-summary" in autoapi_options %}
Attributes
----------

{% if "attribute" in own_page_types %}
.. toctree::
   :hidden:

{% for attribute in visible_attributes %}
   {{ attribute.include_path }}
{% endfor %}

{% endif %}
.. autoapisummary::

{% for attribute in visible_attributes %}
   {{ attribute.id }}
{% endfor %}
{% endif %}


{% endif %}
{% set visible_exceptions = visible_children|selectattr("type", "equalto", "exception")|list %}
{% if visible_exceptions %}
{% if "exception" in own_page_types or "show-module-summary" in autoapi_options %}
Exceptions
----------

{% if "exception" in own_page_types %}
.. toctree::
   :hidden:

{% for exception in visible_exceptions %}
   {{ exception.include_path }}
{% endfor %}

{% endif %}
.. autoapisummary::

{% for exception in visible_exceptions %}
   {{ exception.id }}
{% endfor %}
{% endif %}


{% endif %}
{% set visible_classes = visible_children|selectattr("type", "equalto", "class")|list %}
{% if visible_classes %}
{% if "class" in own_page_types or "show-module-summary" in autoapi_options %}
Classes
-------

{% if "class" in own_page_types %}
.. toctree::
   :hidden:

{% for klass in visible_classes %}
   {{ klass.include_path }}
{% endfor %}

{% endif %}
.. autoapisummary::

{% for klass in visible_classes %}
   {{ klass.id }}
{% endfor %}
{% endif %}


{% endif %}
{% set visible_functions = visible_children|selectattr("type", "equalto", "function")|list %}
{% if visible_functions %}
{% if "function" in own_page_types or "show-module-summary" in autoapi_options %}
Functions
---------

{% if "function" in own_page_types %}
.. toctree::
   :hidden:

{% for function in visible_functions %}
   {{ function.include_path }}
{% endfor %}

{% endif %}
.. autoapisummary::

{% for function in visible_functions %}
   {{ function.id }}
{% endfor %}
{% endif %}


{% endif %}
{% set this_page_children = visible_children|rejectattr("type", "in", own_page_types)|list %}
{% if this_page_children %}
{{ obj.type|title }} Contents
{{ "-" * obj.type|length }}---------

{% for obj_item in this_page_children %}
{{ obj_item.render()|indent(0) }}
{% endfor %}
{% endif %}
{% endif %}
{% endblock %}
{% else %}
.. py:module:: {{ obj.name }}

{% if obj.docstring %}
   .. autoapi-nested-parse::

      {{ obj.docstring|indent(6) }}

{% endif %}
{% for obj_item in visible_children %}
   {{ obj_item.render()|indent(3) }}
{% endfor %}
{% endif %}
{% endif %}
@@ -0,0 +1 @@
{% extends "python/module.rst" %}
@@ -0,0 +1,21 @@
{% if obj.display %}
{% if is_own_page %}
{{ obj.id }}
{{ "=" * obj.id | length }}

{% endif %}
.. py:property:: {% if is_own_page %}{{ obj.id }}{% else %}{{ obj.short_name }}{% endif %}
{% if obj.annotation %}

   :type: {{ obj.annotation }}
{% endif %}
{% for property in obj.properties %}

   :{{ property }}:
{% endfor %}

{% if obj.docstring %}

   {{ obj.docstring|indent(3) }}
{% endif %}
{% endif %}
@@ -0,0 +1 @@
from scripts import generate_module_docs
@@ -0,0 +1,105 @@
# iterate through all the modules in auto_archiver.modules and turn the __manifest__.py file into a markdown table
import io
from pathlib import Path

from ruamel.yaml import YAML

from auto_archiver.core.module import available_modules
from auto_archiver.core.base_module import BaseModule

MODULES_FOLDER = Path(__file__).parent.parent.parent.parent / "src" / "auto_archiver" / "modules"
SAVE_FOLDER = Path(__file__).parent.parent / "source" / "modules" / "autogen"

type_color = {
    'feeder': "<span style='color: #FFA500'>[feeder](/core_modules.md#feeder-modules)</span>",
    'extractor': "<span style='color: #00FF00'>[extractor](/core_modules.md#extractor-modules)</span>",
    'enricher': "<span style='color: #0000FF'>[enricher](/core_modules.md#enricher-modules)</span>",
    'database': "<span style='color: #FF00FF'>[database](/core_modules.md#database-modules)</span>",
    'storage': "<span style='color: #FFFF00'>[storage](/core_modules.md#storage-modules)</span>",
    'formatter': "<span style='color: #00FFFF'>[formatter](/core_modules.md#formatter-modules)</span>",
}

TABLE_HEADER = ("Option", "Description", "Default", "Type")


def generate_module_docs():
    yaml = YAML()
    SAVE_FOLDER.mkdir(exist_ok=True)
    modules_by_type = {}

    header_row = "| " + " | ".join(TABLE_HEADER) + "|\n" + "| --- " * len(TABLE_HEADER) + "|\n"
    configs_cheatsheet = "\n## Configuration Options\n"
    configs_cheatsheet += header_row

    for module in sorted(available_modules(with_manifest=True), key=lambda x: (x.requires_setup, x.name)):
        # generate the markdown file from the __manifest__.py file.
        manifest = module.manifest
        for type in manifest['type']:
            modules_by_type.setdefault(type, []).append(module)

        description = "\n".join(l.lstrip() for l in manifest['description'].split("\n"))
        types = ", ".join(type_color[t] for t in manifest['type'])
        readme_str = f"""
# {manifest['name']}
```{{admonition}} Module type

{types}
```
{description}
"""
        if not manifest['configs']:
            readme_str += "\n*This module has no configuration options.*\n"
        else:
            config_yaml = {}
            config_table = header_row
            for key, value in manifest['configs'].items():
                # normalize type names for display in the tables
                type = value.get('type', 'string')
                if type == 'auto_archiver.utils.json_loader':
                    type = 'json'
                elif type == 'str':
                    type = 'string'

                default = value.get('default', '')
                config_yaml[key] = default
                help = "**Required**. " if value.get('required', False) else "Optional. "
                help += value.get('help', '')
                config_table += f"| `{module.name}.{key}` | {help} | {default} | {type} |\n"
                configs_cheatsheet += f"| `{module.name}.{key}` | {help} | {default} | {type} |\n"
            readme_str += "\n## Configuration Options\n"
            readme_str += "\n### YAML\n"
            yaml_string = io.BytesIO()
            yaml.dump({module.name: config_yaml}, yaml_string)

            readme_str += f"```{{code}} yaml\n{yaml_string.getvalue().decode('utf-8')}\n```\n"

            readme_str += "\n### Command Line:\n"
            readme_str += config_table

        # add a link to the autodoc refs
        readme_str += f"\n[API Reference](../../../autoapi/{module.name}/index)\n"
        # create the module.type folder; the file is written into each of the module's type folders
        for type in manifest['type']:
            type_folder = SAVE_FOLDER / type
            type_folder.mkdir(exist_ok=True)
            with open(type_folder / f"{module.name}.md", "w") as f:
                print("writing", type_folder / f"{module.name}.md")
                f.write(readme_str)
    generate_index(modules_by_type)

    with open(SAVE_FOLDER / "configs_cheatsheet.md", "w") as f:
        f.write(configs_cheatsheet)


def generate_index(modules_by_type):
    readme_str = ""
    for type in BaseModule.MODULE_TYPES:
        modules = modules_by_type.get(type, [])
        module_str = f"## {type.capitalize()} Modules\n"
        for module in modules:
            module_str += f"\n[{module.manifest['name']}](/modules/autogen/{module.type[0]}/{module.name}.md)\n"
        with open(SAVE_FOLDER / f"{type}.md", "w") as f:
            print("writing", SAVE_FOLDER / f"{type}.md")
            f.write(module_str)
        readme_str += module_str

    with open(SAVE_FOLDER / "module_list.md", "w") as f:
        print("writing", SAVE_FOLDER / "module_list.md")
        f.write(readme_str)
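For local experimentation, the generator can be invoked directly; a hypothetical one-liner (the exact import path depends on where the scripts package lives in the repo, so treat it as an assumption):

```bash
# hypothetical manual run; adjust the import path to match the repo layout
poetry run python -c "from scripts import generate_module_docs; generate_module_docs.generate_module_docs()"
```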
@@ -1,742 +0,0 @@

Configs
-------

This section documents all configuration options available for various components.

InstagramAPIArchiver
--------------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - access_token
     - None
     - a valid instagrapi-api token
   * - api_endpoint
     - None
     - API endpoint to use
   * - full_profile
     - False
     - if true, will download all posts, tagged posts, stories, and highlights for a profile; if false, will only download the profile pic and information.
   * - full_profile_max_posts
     - 0
     - Use to limit the number of posts to download when full_profile is true. 0 means no limit. The limit is applied softly, since posts are fetched in batches, once each for posts, tagged posts, and highlights
   * - minimize_json_output
     - True
     - if true, will remove empty values from the json output

InstagramArchiver
-----------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - username
     - None
     - a valid Instagram username
   * - password
     - None
     - the corresponding Instagram account password
   * - download_folder
     - instaloader
     - name of a folder to temporarily download content to
   * - session_file
     - secrets/instaloader.session
     - path to the instagram session, which saves session credentials

InstagramTbotArchiver
---------------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - api_id
     - None
     - telegram API_ID value, go to https://my.telegram.org/apps
   * - api_hash
     - None
     - telegram API_HASH value, go to https://my.telegram.org/apps
   * - session_file
     - secrets/anon-insta
     - optional, records the telegram login session for future usage; '.session' will be appended to the provided value.
   * - timeout
     - 45
     - timeout to fetch the instagram content in seconds.
TelethonArchiver
----------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - api_id
     - None
     - telegram API_ID value, go to https://my.telegram.org/apps
   * - api_hash
     - None
     - telegram API_HASH value, go to https://my.telegram.org/apps
   * - bot_token
     - None
     - optional, but allows access to more content such as large videos, talk to @botfather
   * - session_file
     - secrets/anon
     - optional, records the telegram login session for future usage; '.session' will be appended to the provided value.
   * - join_channels
     - True
     - disables the initial setup with channel_invites config, useful if you have a lot and get stuck
   * - channel_invites
     - {}
     - (JSON string) private channel invite links (format: t.me/joinchat/HASH OR t.me/+HASH) and (optional but important to avoid hanging for minutes on startup) channel id (format: CHANNEL_ID taken from a post url like https://t.me/c/CHANNEL_ID/1); the telegram account will join any new channels on setup

TwitterApiArchiver
------------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - bearer_token
     - None
     - [deprecated: see bearer_tokens] twitter API bearer_token, which is enough for archiving; if not provided you will need consumer_key, consumer_secret, access_token, access_secret
   * - bearer_tokens
     - []
     - a list of twitter API bearer_tokens, which is enough for archiving; if not provided you will need consumer_key, consumer_secret, access_token, access_secret, and if provided you can still add those for better rate limits. CSV of bearer tokens if provided via the command line
   * - consumer_key
     - None
     - twitter API consumer_key
   * - consumer_secret
     - None
     - twitter API consumer_secret
   * - access_token
     - None
     - twitter API access_token
   * - access_secret
     - None
     - twitter API access_secret
VkArchiver
----------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - username
     - None
     - valid VKontakte username
   * - password
     - None
     - valid VKontakte password
   * - session_file
     - secrets/vk_config.v2.json
     - valid VKontakte password

YoutubeDLArchiver
-----------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - facebook_cookie
     - None
     - optional facebook cookie to have more access to content, from browser, looks like 'cookie: datr= xxxx'
   * - subtitles
     - True
     - download subtitles if available
   * - comments
     - False
     - download all comments if available, may lead to large metadata
   * - livestreams
     - False
     - if set, will download live streams, otherwise will skip them; see --max-filesize for more control
   * - live_from_start
     - False
     - if set, will download live streams from their earliest available moment, otherwise starts now.
   * - proxy
     -
     - http/socks (https seems to not work atm) proxy to use for the webdriver, eg https://proxy-user:password@proxy-ip:port
   * - end_means_success
     - True
     - if True, any archived content will mean a 'success'; if False this archiver will not return a 'success' stage. This is useful for cases when yt-dlp will archive a video but ignore other types of content, like images or text-only pages, that the subsequent archivers can retrieve.
   * - allow_playlist
     - False
     - If True will also download playlists, set to False if the expectation is to download a single video.
   * - max_downloads
     - inf
     - Use to limit the number of videos to download when a channel or long page is being extracted. 'inf' means no limit.
   * - cookies_from_browser
     - None
     - optional browser for ytdl to extract cookies from, can be one of: brave, chrome, chromium, edge, firefox, opera, safari, vivaldi, whale
   * - cookie_file
     - None
     - optional cookie file to use for Youtube, see instructions here on how to export from your browser: https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp
AAApiDb
-------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - api_endpoint
     - None
     - API endpoint where calls are made to
   * - api_token
     - None
     - API Bearer token.
   * - public
     - False
     - whether the URL should be publicly available via the API
   * - author_id
     - None
     - which email to assign as author
   * - group_id
     - None
     - which group of users have access to the archive, in case public=false
   * - allow_rearchive
     - True
     - if False then the API database will be queried prior to any archiving operations and stop if the link has already been archived
   * - store_results
     - True
     - when set, will send the results to the API database.
   * - tags
     - []
     - what tags to add to the archived URL

AtlosDb
-------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - api_token
     - None
     - An Atlos API token. For more information, see https://docs.atlos.org/technical/api/
   * - atlos_url
     - https://platform.atlos.org
     - The URL of your Atlos instance (e.g., https://platform.atlos.org), without a trailing slash.

CSVDb
-----

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - csv_file
     - db.csv
     - CSV file name
HashEnricher
------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - algorithm
     - SHA-256
     - hash algorithm to use
   * - chunksize
     - 16000000
     - number of bytes to use when reading files in chunks (if this value is too large you will run out of RAM), default is 16MB

ScreenshotEnricher
------------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - width
     - 1280
     - width of the screenshots
   * - height
     - 720
     - height of the screenshots
   * - timeout
     - 60
     - timeout for taking the screenshot
   * - sleep_before_screenshot
     - 4
     - seconds to wait for the pages to load before taking screenshot
   * - http_proxy
     -
     - http proxy to use for the webdriver, eg http://proxy-user:password@proxy-ip:port
   * - save_to_pdf
     - False
     - save the page as pdf along with the screenshot. PDF saving options can be adjusted with the 'print_options' parameter
   * - print_options
     - {}
     - options to pass to the pdf printer

SSLEnricher
-----------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - skip_when_nothing_archived
     - True
     - if true, will skip enriching when no media is archived
ThumbnailEnricher
-----------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - thumbnails_per_minute
     - 60
     - how many thumbnails to generate per minute of video, can be limited by max_thumbnails
   * - max_thumbnails
     - 16
     - limit the number of thumbnails to generate per video, 0 means no limit

TimestampingEnricher
--------------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - tsa_urls
     - ['http://timestamp.digicert.com', 'http://timestamp.identrust.com', 'http://timestamp.globalsign.com/tsa/r6advanced1', 'http://tss.accv.es:8318/tsa']
     - List of RFC3161 Time Stamp Authorities to use, separate with commas if passed via the command line.

WaczArchiverEnricher
--------------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - profile
     - None
     - browsertrix-profile (for profile generation see https://github.com/webrecorder/browsertrix-crawler#creating-and-using-browser-profiles).
   * - docker_commands
     - None
     - if a custom docker invocation is needed
   * - timeout
     - 120
     - timeout for WACZ generation in seconds
   * - extract_media
     - False
     - If enabled all the images/videos/audio present in the WACZ archive will be extracted into separate Media and appear in the html report. The .wacz file will be kept untouched.
   * - extract_screenshot
     - True
     - If enabled the screenshot captured by browsertrix will be extracted into separate Media and appear in the html report. The .wacz file will be kept untouched.
   * - socks_proxy_host
     - None
     - SOCKS proxy host for browsertrix-crawler, use in combination with socks_proxy_port. eg: user:password@host
   * - socks_proxy_port
     - None
     - SOCKS proxy port for browsertrix-crawler, use in combination with socks_proxy_host. eg 1234
   * - proxy_server
     - None
     - SOCKS server proxy URL, in development

WaybackArchiverEnricher
-----------------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - timeout
     - 15
     - seconds to wait for successful archive confirmation from wayback; if more than this passes, the result contains the job_id so the status can later be checked manually.
   * - if_not_archived_within
     - None
     - only tell wayback to archive if no archive is available before the number of seconds specified, use None to ignore this option. For more information: https://docs.google.com/document/d/1Nsv52MvSjbLb2PCpHlat0gkzw0EvtSgpKHu4mk0MnrA
   * - key
     - None
     - wayback API key. to get credentials visit https://archive.org/account/s3.php
   * - secret
     - None
     - wayback API secret. to get credentials visit https://archive.org/account/s3.php
   * - proxy_http
     - None
     - http proxy to use for wayback requests, eg http://proxy-user:password@proxy-ip:port
   * - proxy_https
     - None
     - https proxy to use for wayback requests, eg https://proxy-user:password@proxy-ip:port
WhisperEnricher
|
||||
---------------
|
||||
|
||||
The following table lists all configuration options for this component:
|
||||
|
||||
.. list-table:: Configuration Options
|
||||
:header-rows: 1
|
||||
:widths: 25 20 55
|
||||
|
||||
* - **Key**
|
||||
- **Default**
|
||||
- **Description**
|
||||
* - api_endpoint
|
||||
- None
|
||||
- WhisperApi api endpoint, eg: https://whisperbox- api.com/api/v1, a deployment of https://github.com/bellingcat/whisperbox- transcribe.
|
||||
* - api_key
|
||||
- None
|
||||
- WhisperApi api key for authentication
|
||||
* - include_srt
|
||||
- False
|
||||
- Whether to include a subtitle SRT (SubRip Subtitle file) for the video (can be used in video players).
|
||||
* - timeout
|
||||
- 90
|
||||
- How many seconds to wait at most for a successful job completion.
|
||||
* - action
|
||||
- translate
|
||||
- which Whisper operation to execute
|
||||
|
||||
AtlosFeeder
-----------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - api_token
     - None
     - An Atlos API token. For more information, see https://docs.atlos.org/technical/api/
   * - atlos_url
     - https://platform.atlos.org
     - The URL of your Atlos instance (e.g., https://platform.atlos.org), without a trailing slash.

CLIFeeder
---------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - urls
     - None
     - URL(s) to archive, either a single URL or a list of URLs; should not come from config.yaml

GsheetsFeeder
-------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - sheet
     - None
     - name of the sheet to archive
   * - sheet_id
     - None
     - (alternative to sheet name) the id of the sheet to archive
   * - header
     - 1
     - index of the header row (starts at 1)
   * - service_account
     - secrets/service_account.json
     - service account JSON file path
   * - columns
     - {'url': 'link', 'status': 'archive status', 'folder': 'destination folder', 'archive': 'archive location', 'date': 'archive date', 'thumbnail': 'thumbnail', 'timestamp': 'upload timestamp', 'title': 'upload title', 'text': 'text content', 'screenshot': 'screenshot', 'hash': 'hash', 'pdq_hash': 'perceptual hashes', 'wacz': 'wacz', 'replaywebpage': 'replaywebpage'}
     - names of columns in the google sheet (stringified JSON object)
   * - allow_worksheets
     - set()
     - (CSV) only worksheets whose name is included here are processed (overrides block_worksheets); leave empty to allow all
   * - block_worksheets
     - set()
     - (CSV) explicitly block some worksheets from being processed
   * - use_sheet_names_in_stored_paths
     - True
     - if True the stored files path will include 'workbook_name/worksheet_name/...'
HtmlFormatter
-------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - detect_thumbnails
     - True
     - if true, will group thumbnails generated by the thumbnail enricher by id 'thumbnail_00'

AtlosStorage
------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - path_generator
     - url
     - how to store the file in terms of directory structure: 'flat' sets to root; 'url' creates a directory based on the provided URL; 'random' creates a random directory.
   * - filename_generator
     - random
     - how to name stored files: 'random' creates a random string; 'static' uses a replicable strategy such as a hash.
   * - api_token
     - None
     - An Atlos API token. For more information, see https://docs.atlos.org/technical/api/
   * - atlos_url
     - https://platform.atlos.org
     - The URL of your Atlos instance (e.g., https://platform.atlos.org), without a trailing slash.

GDriveStorage
-------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - path_generator
     - url
     - how to store the file in terms of directory structure: 'flat' sets to root; 'url' creates a directory based on the provided URL; 'random' creates a random directory.
   * - filename_generator
     - random
     - how to name stored files: 'random' creates a random string; 'static' uses a replicable strategy such as a hash.
   * - root_folder_id
     - None
     - root google drive folder ID to use as storage, found in URL: 'https://drive.google.com/drive/folders/FOLDER_ID'
   * - oauth_token
     - None
     - JSON filename with Google Drive OAuth token: check the auto-archiver repository scripts folder for create_update_gdrive_oauth_token.py. NOTE: storage used will count towards the owner of the GDrive folder, therefore it is best to use oauth_token_filename over service_account.
   * - service_account
     - secrets/service_account.json
     - service account JSON file path, same as used for Google Sheets. NOTE: storage used will count towards the developer account.

LocalStorage
------------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - path_generator
     - url
     - how to store the file in terms of directory structure: 'flat' sets to root; 'url' creates a directory based on the provided URL; 'random' creates a random directory.
   * - filename_generator
     - random
     - how to name stored files: 'random' creates a random string; 'static' uses a replicable strategy such as a hash.
   * - save_to
     - ./archived
     - folder where to save archived content
   * - save_absolute
     - False
     - whether the path to the stored file is absolute or relative in the output result, incl. formatters (WARN: absolute paths leak the file structure)
S3Storage
---------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - path_generator
     - url
     - how to store the file in terms of directory structure: 'flat' sets to root; 'url' creates a directory based on the provided URL; 'random' creates a random directory.
   * - filename_generator
     - random
     - how to name stored files: 'random' creates a random string; 'static' uses a replicable strategy such as a hash.
   * - bucket
     - None
     - S3 bucket name
   * - region
     - None
     - S3 region name
   * - key
     - None
     - S3 API key
   * - secret
     - None
     - S3 API secret
   * - random_no_duplicate
     - False
     - if set, it will override `path_generator`, `filename_generator` and `folder`. It will check if the file already exists and, if so, will not upload it again. Creates a new root folder path `no-dups/`
   * - endpoint_url
     - https://{region}.digitaloceanspaces.com
     - S3 bucket endpoint, {region} is inserted at runtime
   * - cdn_url
     - https://{bucket}.{region}.cdn.digitaloceanspaces.com/{key}
     - S3 CDN url; {bucket}, {region} and {key} are inserted at runtime
   * - private
     - False
     - if true S3 files will not be readable online

Storage
-------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - path_generator
     - url
     - how to store the file in terms of directory structure: 'flat' sets to root; 'url' creates a directory based on the provided URL; 'random' creates a random directory.
   * - filename_generator
     - random
     - how to name stored files: 'random' creates a random string; 'static' uses a replicable strategy such as a hash.

Gsheets
-------

The following table lists all configuration options for this component:

.. list-table:: Configuration Options
   :header-rows: 1
   :widths: 25 20 55

   * - **Key**
     - **Default**
     - **Description**
   * - sheet
     - None
     - name of the sheet to archive
   * - sheet_id
     - None
     - (alternative to sheet name) the id of the sheet to archive
   * - header
     - 1
     - index of the header row (starts at 1)
   * - service_account
     - secrets/service_account.json
     - service account JSON file path
   * - columns
     - {'url': 'link', 'status': 'archive status', 'folder': 'destination folder', 'archive': 'archive location', 'date': 'archive date', 'thumbnail': 'thumbnail', 'timestamp': 'upload timestamp', 'title': 'upload title', 'text': 'text content', 'screenshot': 'screenshot', 'hash': 'hash', 'pdq_hash': 'perceptual hashes', 'wacz': 'wacz', 'replaywebpage': 'replaywebpage'}
     - names of columns in the google sheet (stringified JSON object)
@@ -1,23 +1,33 @@
# Configuration file for the Sphinx documentation builder.
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Project information -----------------------------------------------------
import sys
import os
from importlib.metadata import metadata

sys.path.append(os.path.abspath('../scripts'))
from scripts import generate_module_docs

# -- Project Hooks -----------------------------------------------------------
# convert the module __manifest__.py files into markdown files
generate_module_docs()


# -- Project information -----------------------------------------------------
package_metadata = metadata("auto-archiver")
project = package_metadata["name"]
authors = package_metadata["authors"]
authors = "Bellingcat"
release = package_metadata["version"]

language = 'en'

# -- General configuration ---------------------------------------------------
extensions = [
    "autoapi.extension",  # Generate API documentation from docstrings
    "myst_parser",  # Markdown support
    'sphinxcontrib.mermaid',  # Mermaid diagrams
    "autoapi.extension",  # Generate API documentation from docstrings
    "sphinxcontrib.mermaid",  # Mermaid diagrams
    "sphinx.ext.viewcode",  # Source code links
    "sphinx_copybutton",
    "sphinx.ext.napoleon",  # Google-style and NumPy-style docstrings
    # "sphinx.ext.autodoc",  # Include custom docstrings
    "sphinx.ext.autosectionlabel",
    # 'sphinx.ext.autosummary',  # Summarize module/class/function docs
]

@@ -27,24 +37,25 @@ exclude_patterns = []

# -- AutoAPI Configuration ---------------------------------------------------
autoapi_type = 'python'
autoapi_dirs = ["../../src"]
autoapi_dirs = ["../../src/auto_archiver/core/", "../../src/auto_archiver/utils/"]
# get all the modules and add them to the autoapi_dirs
autoapi_dirs.extend([f"../../src/auto_archiver/modules/{m}" for m in os.listdir("../../src/auto_archiver/modules")])
autodoc_typehints = "signature"  # Include type hints in the signature
autoapi_ignore = []  # Ignore specific modules
autoapi_ignore = ["*/version.py", ]  # Ignore specific modules
autoapi_keep_files = True  # Option to retain intermediate JSON files for debugging
autoapi_add_toctree_entry = True  # Include API docs in the TOC
autoapi_template_dir = None  # Use default templates
autoapi_python_use_implicit_namespaces = True
autoapi_template_dir = "../_templates/autoapi"
autoapi_options = [
    "members",
    "undoc-members",
    "show-inheritance",
    "show-module-summary",
    "imported-members",
]


# -- Markdown Support --------------------------------------------------------
myst_enable_extensions = [
    "colon_fence",  # ::: fences
    "deflist",  # Definition lists
    "html_admonition",  # HTML-style admonitions
    "html_image",  # Inline HTML images

@@ -53,12 +64,16 @@ myst_enable_extensions = [
    "linkify",  # Auto-detect links
    "substitution",  # Text substitutions
]
myst_heading_anchors = 2
myst_fence_as_directive = ["mermaid"]

source_suffix = {
    ".rst": "restructuredtext",
    ".md": "markdown",
}

# -- Options for HTML output -------------------------------------------------
html_theme = 'furo'
# html_static_path = ['_static']
html_theme = 'sphinx_book_theme'
html_static_path = ["../_static"]
html_css_files = ["custom.css"]
@@ -1,34 +0,0 @@

Configurations
==============

This section of the documentation provides guidelines for configuring the tool.

File Reference
--------------

Below is the content of the `example.orchestration.yaml` file:

.. raw:: html

   <details>
   <summary>View example.orchestration.yaml</summary>

.. literalinclude:: ../../example.orchestration.yaml
   :language: yaml
   :caption: example.orchestration.yaml

.. raw:: html

   </details>

Configs
-------

This section of the documentation will show the custom configurations for the individual steps of the tool.

.. include:: _auto/configs.rst

@@ -0,0 +1,2 @@
```{include} ../../CONTRIBUTING.md
```
@@ -0,0 +1,27 @@
# Module Documentation

These pages describe the core modules that come with `auto-archiver` and provide the main functionality for archiving websites on the internet. There are five core module types:

1. Feeders - these 'feed' information (the URLs) from various sources to the `auto-archiver` for processing
2. Extractors - these 'extract' the page data for a given URL that is fed in by a feeder
3. Enrichers - these 'enrich' the data extracted in the previous step with additional information
4. Storages - these 'store' the data in a persistent location (on disk, Google Drive etc.)
5. Databases - these 'store' the status of the entire archiving process in a log file or database.

```{include} modules/autogen/module_list.md
```

```{toctree}
:maxdepth: 1
:caption: Core Modules
:hidden:

modules/config_cheatsheet
modules/feeder
modules/extractor
modules/enricher
modules/storage
modules/database
```
@@ -1,6 +0,0 @@

Developer Guidelines
====================

This section of the documentation provides guidelines for developers who want to modify or contribute to the tool.
@@ -0,0 +1,52 @@
# Creating Your Own Modules

Modules are what's used to extend `auto-archiver` to process different websites or media, and/or to transform the data in a way that suits your needs. In most cases, the [Core Modules](../core_modules.md) should be sufficient for everyday use, but the most common use cases for making your own modules include:

1. Extracting data from a website which doesn't work with the current core extractors.
2. Enriching or altering the data before saving with additional information that the core enrichers do not offer.
3. Storing your data in a different format/location from what the core storage providers offer.

## Setting up the folder structure

1. First, decide what type of module you wish to create. Check the types of modules on the [](../core_modules.md) page to decide what type you need. (Note: a module can be more than one type, more on that below.)
2. Create a new python package (a folder) with the name of your module (in this tutorial, we'll call it `awesome_extractor`).
3. Create the `__manifest__.py` and the `awesome_extractor.py` files in this folder.

When done, you should have a module structure as follows:

```
.
├── awesome_extractor
│   ├── __manifest__.py
│   └── awesome_extractor.py
```

Check out the [core modules](https://github.com/bellingcat/auto-archiver/tree/main/src/auto_archiver/modules) in the `auto-archiver` repository for examples of the folder structure for real-world modules.

## Populating the Manifest File

The manifest file is where you define the core information of your module. It is a python dict containing important information; here's an example file:

```{include} ../../../tests/data/test_modules/example_module/__manifest__.py
:name: __manifest__.py
:literal:
:parser: python
```

## Creating the Python Code

The next step is to create your module code. First, create a class which subclasses the base module types from `auto_archiver.core`. Here's an example class for the `awesome_extractor` module, which is an `extractor`:

```{code-block} python
:filename: awesome_extractor.py

from auto_archiver.core import Extractor, Metadata


class AwesomeExtractor(Extractor):

    def download(self, item: Metadata) -> Metadata | False:
        url = item.get_url()
        # download the content and create the metadata object
        metadata = ...
        return metadata
```
@@ -0,0 +1,34 @@

# Developer Guidelines

This section of the documentation provides guidelines for developers who want to modify or contribute to the tool.


## Developer Install

1. Clone the project using `git clone https://github.com/bellingcat/auto-archiver.git`
2. Install poetry using `curl -sSL https://install.python-poetry.org | python3 -` ([other installation methods](https://python-poetry.org/docs/#installation))
3. Install dependencies with `poetry install`

## Running

4. Run the code with `poetry run auto-archiver [my args]`

```{note}
Add the [poetry-plugin-shell](https://github.com/python-poetry/poetry-plugin-shell) plugin and run `poetry shell` to activate the virtual environment.
This allows you to run auto-archiver without the `poetry run` prefix.
```

### Optional Development Packages

Install development packages (used for unit tests etc.) using:
`poetry install --with dev`


```{toctree}
:hidden:
creating_modules
docker_development
testing
docs
release
```
@@ -0,0 +1,5 @@
## Docker development

Working with docker locally:
* `docker compose up` to build the first time and run a local image with the settings in `secrets/orchestration.yaml`
* To modify/pass additional command line args, use `docker compose run auto-archiver --config secrets/orchestration.yaml [OTHER ARGUMENTS]`
* To rebuild after code changes, just pass the `--build` flag, e.g. `docker compose up --build`
@@ -0,0 +1,38 @@

### Building the Docs

The documentation is built using [Sphinx](https://www.sphinx-doc.org/en/master/) and [AutoAPI](https://sphinx-autoapi.readthedocs.io/en/latest/) and hosted on ReadTheDocs.
To build the documentation locally, run the following commands:

**Install required dependencies:**
- Install the docs group of dependencies:
```shell
# only the docs dependencies
poetry install --only docs

# or for all dependencies
poetry install
```
- Either use [poetry-plugin-shell](https://github.com/python-poetry/poetry-plugin-shell) to activate the virtual environment: `poetry shell`
- Or prepend the following commands with `poetry run`

**Create the documentation:**
- Build the documentation:
```shell
# using the makefile (Linux/macOS):
make -C docs html

# or using sphinx directly (Windows/Linux/macOS):
sphinx-build -b html docs/source docs/_build/html
```
- If you make significant changes and want a fresh build, run `make -C docs clean` to remove the old build files.

**Viewing the documentation:**
```shell
# open the documentation in your browser
open docs/_build/html/index.html

# or run autobuild to automatically rebuild the documentation when you make changes
sphinx-autobuild docs/source docs/_build/html
```
@@ -0,0 +1,15 @@
# Release Process

```{note} This is a work in progress.
```

1. Update the version number in [version.py](src/auto_archiver/version.py)
2. Go to GitHub releases > new release > use `vx.y.z` to match the version notation
   1. the package is automatically updated on PyPI
   2. the docker image is automatically pushed to Docker Hub


Manual release to Docker Hub:
* `docker image tag auto-archiver bellingcat/auto-archiver:latest`
* `docker push bellingcat/auto-archiver`
@@ -0,0 +1,21 @@
# Testing

`pytest` is used for testing. There are two main types of tests:

1. 'core' tests, which should be run on every change
2. 'download' tests, which hit the network. These tests will do things like make API calls (e.g. Twitter, Bluesky etc.) and should be run regularly to make sure that APIs have not changed.

## Running Tests

1. Make sure you've installed the dev dependencies with `poetry install --with dev`
2. Tests can be run as follows:
```bash
#### Command prefix of 'poetry run' removed here for simplicity
# run core tests
pytest -ra -v -m "not download"
# run download tests
pytest -ra -v -m "download"
# run all tests
pytest -ra -v
```
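
The core/download split works via pytest markers. As a minimal sketch of how a network-hitting test might be tagged (this assumes a `download` marker is registered in the project's pytest configuration; the test name and body are illustrative):

```python
import pytest


@pytest.mark.download
def test_example_api_call():
    # Hits the network, so it is selected by -m "download"
    # and excluded by -m "not download".
    ...
```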
@@ -0,0 +1,79 @@
# Auto Archiver Configuration
# Steps are the modules that will be run in the order they are defined

steps:
  feeders:
  - cli_feeder
  extractors:
  - generic_extractor
  - telegram_extractor
  enrichers:
  - thumbnail_enricher
  - meta_enricher
  - pdq_hash_enricher
  - ssl_enricher
  - hash_enricher
  databases:
  - console_db
  - csv_db
  storages:
  - local_storage
  formatters:
  - html_formatter

# Global configuration

# Authentication
# a dictionary of authentication information that can be used by extractors to login to websites.
# you can use a comma separated list for multiple domains on the same line (common usecase: x.com,twitter.com)
# Common login 'types' are username/password, cookie, api key/token.
# There are two special keys for using cookies: cookies_file and cookies_from_browser.
# Some Examples:
# facebook.com:
#   username: "my_username"
#   password: "my_password"
# or for a site that uses an API key:
# twitter.com,x.com:
#   api_key
#   api_secret
# youtube.com:
#   cookie: "login_cookie=value ; other_cookie=123" # multiple 'key=value' pairs should be separated by ;

authentication: {}

# Logging settings for your project. See the logging settings with --help

logging:
  level: INFO
  file:
  rotation:

# These are the global configurations that are used by the modules

local_storage:
  path_generator: flat
  filename_generator: static
  save_to: ./local_archive
  save_absolute: false
html_formatter:
  detect_thumbnails: true
thumbnail_enricher:
  thumbnails_per_minute: 60
  max_thumbnails: 16
generic_extractor:
  subtitles: true
  comments: false
  livestreams: false
  live_from_start: false
  proxy: ''
  end_means_success: true
  allow_playlist: false
  max_downloads: inf
csv_db:
  csv_file: db.csv
ssl_enricher:
  skip_when_nothing_archived: true
hash_enricher:
  algorithm: SHA-256
  chunksize: 16000000
@@ -0,0 +1,30 @@

# Archiving Overview

The archiver archives web pages using the following workflow:
1. **Feeder** gets the links (from a spreadsheet, from the console, ...)
2. **Extractor** tries to extract content from the given link (e.g. videos from YouTube, images from Twitter, ...)
3. **Enricher** adds more info to the content (hashes, thumbnails, ...)
4. **Formatter** creates a report from all the archived content (HTML, PDF, ...)
5. **Database** knows what's been archived and also stores the archive result (spreadsheet, CSV, or just the console)

Each step in the workflow is handled by 'modules' that interact with the data in different ways. For example, the Twitter Extractor Module extracts information from the Twitter website, and the Screenshot Enricher Module takes screenshots of the given page. See the [core modules page](core_modules.md) for an overview of all the modules that are available.

Auto-archiver must have at least one module defined for each step of the workflow. This is done by setting the [configuration](installation/configurations.md) for your auto-archiver instance.

Here's the complete workflow that the auto-archiver goes through:

```mermaid
graph TD
    s((start)) --> F(fa:fa-table Feeder)
    F -->|get and clean URL| D1{fa:fa-database Database}
    D1 -->|is already archived| e((end))
    D1 -->|not yet archived| a(fa:fa-download Archivers)
    a -->|got media| E(fa:fa-chart-line Enrichers)
    E --> S[fa:fa-box-archive Storages]
    E --> Fo(fa:fa-code Formatter)
    Fo --> S
    Fo -->|update database| D2(fa:fa-database Database)
    D2 --> e
```
@@ -0,0 +1,47 @@
# How-To Guides

## How to use Google Sheets to load and store archive information
The `--gsheet_feeder.sheet` property is the name of the Google Sheet to check for URLs.
This sheet must have been shared with the Google Service account used by `gspread`.
This sheet must also have specific columns (case-insensitive) in the `header` row - see the [Gsheet Feeder Docs](modules/autogen/feeder/gsheet_feeder.md) for more info. The default names of these columns and their purposes are:

Inputs:

* **Link** *(required)*: the URL of the post to archive
* **Destination folder**: custom folder for the archived file (regardless of storage)

Outputs:
* **Archive status** *(required)*: status of the archive operation
* **Archive location**: URL of the archived post
* **Archive date**: date archived
* **Thumbnail**: embeds a thumbnail for the post in the spreadsheet
* **Timestamp**: timestamp of the original post
* **Title**: post title
* **Text**: post text
* **Screenshot**: link to a screenshot of the post
* **Hash**: hash of the archived HTML file (which contains hashes of the post media) - for checksums/verification
* **Perceptual Hash**: perceptual hashes of found images - these can be used for de-duplication of content
* **WACZ**: link to a WACZ web archive of the post
* **ReplayWebpage**: link to a ReplayWebpage viewer of the WACZ archive
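
If you prefer to keep these settings in your orchestration file rather than pass them on the command line, a minimal sketch of the feeder configuration looks like this (option names are from the Gsheet Feeder configuration tables; the sheet name and remapped columns are illustrative):

```yaml
gsheet_feeder:
  sheet: "My Archiving Sheet"   # name of the Google Sheet to check for URLs
  header: 1                     # index of the header row (starts at 1)
  service_account: secrets/service_account.json
  columns:                      # remap only the column names that differ from the defaults
    url: link
    status: archive status
```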
For example, this is a spreadsheet configured with all of the columns for the auto archiver and a few URLs to archive. (Note that the column names are not case sensitive.)

![]()

Now the auto archiver can be invoked, in this example with: `docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver:dockerize --config secrets/orchestration-global.yaml --gsheet_feeder.sheet "Auto archive test 2023-2"`. Note that the sheet name has been overridden/specified in the command line invocation.

When the auto archiver starts running, it updates the "Archive status" column.

![]()

The links are downloaded and archived, and the spreadsheet is updated to the following:

![]()

Note that the first row is skipped, as it is assumed to be a header row (`--gsheet_feeder.header=1`; you can change this if you use more rows above). Rows with an empty URL column, or with a non-empty archive column, are also skipped. All sheets in the document will be checked.

The "archive location" link contains the path of the archived file, in local storage, S3, or in Google Drive.

![]()

---
@@ -0,0 +1,17 @@

```{include} ../../README.md
```

```{toctree}
:maxdepth: 2
:hidden:
:caption: Contents:

Overview <self>
contributing
installation/installation.rst
core_modules.md
how_to
development/developer_guidelines
autoapi/index.rst
```
@@ -1,26 +0,0 @@
.. auto-archiver documentation master file, created by
   sphinx-quickstart on Sun Jan 12 20:35:50 2025.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Auto Archiver documentation
===========================

.. note::

   This is a work in progress.

.. include:: ../../README.md
   :parser: myst

.. toctree::
   :maxdepth: 1
   :hidden:
   :caption: Contents:

   user_guidelines
   developer_guidelines
   configurations
@@ -0,0 +1,6 @@
# Configuration Cheat Sheet

Below is a list of all configurations for the core modules in Auto Archiver.

```{include} ../modules/autogen/configs_cheatsheet.md
```
@@ -0,0 +1,98 @@

# Configuration

This section of the documentation provides guidelines for configuring the tool.

## Configuring using a file

The recommended way to configure auto-archiver for long-term and deployed projects is a configuration file, typically called `orchestration.yaml`. This is a YAML file containing all the settings for your entire workflow.

The structure of the orchestration file is split into two parts: `steps` (what [steps](../flow_overview.md) to use) and `configurations` (settings for different modules). Here's a simplification:
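
The sketch below is abridged from the generated example file shown further down; the module names are illustrative:

```yaml
steps:                # which modules to run for each step of the workflow
  feeders:
  - cli_feeder
  extractors:
  - generic_extractor
  storages:
  - local_storage
  formatters:
  - html_formatter

# ...followed by per-module settings, for example:
local_storage:
  save_to: ./local_archive
```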
A default `orchestration.yaml` will be created for you the first time you run auto-archiver (without any arguments). Here's what it looks like:

<details>
<summary>View example orchestration.yaml</summary>

```{literalinclude} ../example.orchestration.yaml
:language: yaml
:caption: orchestration.yaml
```

</details>

## Configuring from the Command Line

You can run auto-archiver directly from the command line, without the need for a configuration file. Command line arguments are parsed using the format `module_name.config_value`. For example, a config value of `api_key` in the `instagram_extractor` module would be passed on the command line with the flag `--instagram_extractor.api_key=API_KEY`.

The command line arguments are useful for testing or editing config values and enabling/disabling modules on the fly. When you are happy with your settings, you can store them back in your configuration file by passing the `-s/--store` flag on the command line.

```bash
auto-archiver --instagram_extractor.api_key=123 --other_module.setting --store
# will store the new settings into the configuration file (default: orchestration.yaml)
```

```{note} Arguments passed on the command line override those saved in your settings file. Save them to your config file using the -s or --store flag.
```

## Seeing all Configuration Options

View the configurable settings for the core modules on the individual doc pages for each [](../core_modules.md).
You can also view all settings available for the modules you have on your system using the `--help` flag in auto-archiver.

```{code-block} console
:caption: Example output when using the --help flag with auto-archiver

$ auto-archiver --help
...
Positional Arguments:
  urls                  URL(s) to archive, either a single URL or a list of urls, should not come from config.yaml

Options:
  --help, -h            show a full help message and exit
  --version             show program's version number and exit
  --config CONFIG_FILE  the filename of the YAML configuration file (defaults to 'config.yaml')
  --mode {simple,full}  the mode to run the archiver in
  -s, --store, --no-store
                        Store the created config in the config file
  --module_paths MODULE_PATHS [MODULE_PATHS ...]
                        additional paths to search for modules
  --feeders STEPS.FEEDERS [STEPS.FEEDERS ...]
                        the feeders to use
  --enrichers STEPS.ENRICHERS [STEPS.ENRICHERS ...]
                        the enrichers to use
  --extractors STEPS.EXTRACTORS [STEPS.EXTRACTORS ...]
                        the extractors to use
  --databases STEPS.DATABASES [STEPS.DATABASES ...]
                        the databases to use
  --storages STEPS.STORAGES [STEPS.STORAGES ...]
                        the storages to use
  --formatters STEPS.FORMATTERS [STEPS.FORMATTERS ...]
                        the formatter to use
  --authentication AUTHENTICATION
                        A dictionary of sites and their authentication methods (token, username etc.) that extractors can use to log into a website. If passing this on the command line, use a JSON string. You may
                        also pass a path to a valid JSON/YAML file which will be parsed.
  --logging.level {INFO,DEBUG,ERROR,WARNING}
                        the logging level to use
  --logging.file LOGGING.FILE
                        the logging file to write to
  --logging.rotation LOGGING.ROTATION
                        the logging rotation to use

Wayback Machine Enricher:
  Submits the current URL to the Wayback Machine for archiving and returns either a job ID or the...

  --wayback_extractor_enricher.timeout TIMEOUT
                        seconds to wait for successful archive confirmation from wayback, if more than this passes the result contains the job_id so the status can later be checked manually.
  --wayback_extractor_enricher.if_not_archived_within IF_NOT_ARCHIVED_WITHIN
                        only tell wayback to archive if no archive is available before the number of seconds specified, use None to ignore this option. For more information:
                        https://docs.google.com/document/d/1Nsv52MvSjbLb2PCpHlat0gkzw0EvtSgpKHu4mk0MnrA
  --wayback_extractor_enricher.key KEY
                        wayback API key. to get credentials visit https://archive.org/account/s3.php
  --wayback_extractor_enricher.secret SECRET
                        wayback API secret. to get credentials visit https://archive.org/account/s3.php
  --wayback_extractor_enricher.proxy_http PROXY_HTTP
                        http proxy to use for wayback requests, eg http://proxy-user:password@proxy-ip:port
  --wayback_extractor_enricher.proxy_https PROXY_HTTPS
                        https proxy to use for wayback requests, eg https://proxy-user:password@proxy-ip:port
```
@@ -0,0 +1,92 @@
# Installing Auto Archiver

```{toctree}
:maxdepth: 1
:hidden:

configurations.md
config_cheatsheet.md
```

There are 3 main ways to use the auto-archiver:
1. Easiest: [via docker](#installing-with-docker)
2. Local Install: [using pip](#installing-locally-with-pip)
3. Developer Install: [see the developer guidelines](../development/developer_guidelines)

But **you always need a configuration/orchestration file**, which is where you'll configure where/what/how to archive. Make sure you read [orchestration](#orchestration).

## Installing with Docker

[](https://hub.docker.com/r/bellingcat/auto-archiver)

Docker works like a virtual machine running inside your computer: it isolates everything and makes installation simple. Since it is an isolated environment, when you need to pass it your orchestration file or get downloaded media out of docker, you will need to connect folders on your machine with folders inside docker via the `-v` volume flag.

1. Install [docker](https://docs.docker.com/get-docker/)
2. Pull the auto-archiver docker [image](https://hub.docker.com/r/bellingcat/auto-archiver) with `docker pull bellingcat/auto-archiver`
3. Run the docker image locally in a container: `docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml`. Breaking this command down:
   1. `docker run` tells docker to start a new container (an instance of the image)
   2. `--rm` makes sure this container is removed after execution (less garbage locally)
   3. `-v $PWD/secrets:/app/secrets` - your secrets folder
      1. `-v` is a volume flag which means a folder that you have on your computer will be connected to a folder inside the docker container
      2. `$PWD/secrets` points to a `secrets/` folder in your current working directory (where your console points to); we use this folder as a best practice to hold all the secrets/tokens/passwords/... you use
      3. `/app/secrets` points to the path inside the docker container where this folder can be found
   4. `-v $PWD/local_archive:/app/local_archive` - (optional) if you use local_storage
      1. `-v` same as above, this is a volume instruction
      2. `$PWD/local_archive` is a folder `local_archive/` in case you want to archive locally and have the files accessible outside docker
      3. `/app/local_archive` is a folder inside docker that you can reference in your orchestration.yml file

### Example invocations

The invocations below will run the auto-archiver Docker image using a configuration file that you have specified:

```bash
# all the configurations come from ./secrets/orchestration.yaml
docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml
# uses the same configurations but for another google docs sheet
# with a header on row 2 and with some different column names
# notice that columns is a dictionary so you need to pass it as JSON and it will override only the values provided
docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml --gsheet_feeder.sheet="use it on another sheets doc" --gsheet_feeder.header=2 --gsheet_feeder.columns='{"url": "link"}'
# all the configurations come from orchestration.yaml and specifies that s3 files should be private
docker run --rm -v $PWD/secrets:/app/secrets -v $PWD/local_archive:/app/local_archive bellingcat/auto-archiver --config secrets/orchestration.yaml --s3_storage.private=1
```

## Installing Locally with Pip

1. Make sure you have python 3.10 or higher installed
2. Install the package with your preferred package manager: `pip/pipenv/conda install auto-archiver` or `poetry add auto-archiver`
3. Test it's installed with `auto-archiver --help`
4. Install the other local dependency requirements (see [Installing Local Requirements](#installing-local-requirements))
5. Run it with your orchestration file and pass any flags you want in the command line: `auto-archiver --config secrets/orchestration.yaml` if your orchestration file is inside a `secrets/` folder, which we advise

### Example invocations

Once all your [local requirements](#installing-local-requirements) are correctly installed, the invocations below will run auto-archiver with a configuration file that you have specified:

```bash
# all the configurations come from ./secrets/orchestration.yaml
auto-archiver --config secrets/orchestration.yaml
# uses the same configurations but for another google docs sheet
# with a header on row 2 and with some different column names
# notice that columns is a dictionary so you need to pass it as JSON and it will override only the values provided
auto-archiver --config secrets/orchestration.yaml --gsheet_feeder.sheet="use it on another sheets doc" --gsheet_feeder.header=2 --gsheet_feeder.columns='{"url": "link"}'
# all the configurations come from orchestration.yaml and specifies that s3 files should be private
auto-archiver --config secrets/orchestration.yaml --s3_storage.private=1
```

### Installing Local Requirements

If using the local installation method, you will also need to install the following dependencies locally:

1. [ffmpeg](https://www.ffmpeg.org/) - for handling of downloaded videos
2. [firefox](https://www.mozilla.org/en-US/firefox/new/) and [geckodriver](https://github.com/mozilla/geckodriver/releases) on a path folder like `/usr/local/bin` - for taking webpage screenshots with the screenshot enricher
3. (optional) [fonts-noto](https://fonts.google.com/noto) to deal with multiple unicode characters during selenium/geckodriver's screenshots: `sudo apt install fonts-noto -y`
4. [Browsertrix Crawler docker image](https://hub.docker.com/r/webrecorder/browsertrix-crawler) for the WACZ enricher/archiver


## Developer Install

[See the developer guidelines](../development/developer_guidelines)
@@ -0,0 +1,15 @@
# Database Modules

Database modules are used to store the status and results of the extraction and enrichment processes somewhere. The database modules are responsible for creating and managing entries for each item that has been processed.

The default (enabled) databases are the CSV Database and the Console Database.
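
As a sketch, enabling both defaults in your orchestration file looks like this (module names and the `csv_file` value are taken from the example configuration file; the comments are interpretive):

```yaml
steps:
  databases:
  - console_db   # reports the archiving status of each item to the console
  - csv_db       # appends one row per archived item to a CSV file

csv_db:
  csv_file: db.csv   # where the CSV database is written
```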
```{include} autogen/database.md
```

```{toctree}
:maxdepth: 1
:hidden:
:glob:
autogen/database/*
```
@@ -0,0 +1,14 @@
# Enricher Modules

Enricher modules are used to add additional information to the items that have been extracted. Common enrichment tasks include adding metadata to items, such as the hash of the item, a screenshot of the webpage when the item was extracted, or general metadata like the date and time the item was extracted.
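
As a sketch, enabling the hash enricher and pinning its options looks like this (option names and values are taken from the example configuration file; `SHA3-512` is the documented alternative algorithm):

```yaml
steps:
  enrichers:
  - hash_enricher

hash_enricher:
  algorithm: SHA-256    # can also be SHA3-512
  chunksize: 16000000   # bytes read per chunk when hashing large files
```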
```{include} autogen/enricher.md
```

```{toctree}
:maxdepth: 1
:hidden:
:glob:
autogen/enricher/*
```
@@ -0,0 +1,18 @@
# Extractor Modules

Extractor modules are used to extract the content of a given URL. Typically, one extractor will work for one website or platform (e.g. a Telegram extractor or an Instagram extractor); however, there are several wide-ranging extractors which work for a wide range of websites.

Extractors that are able to extract content from a wide range of websites include:
1. Generic Extractor: parses videos and images on sites using the powerful yt-dlp library.
2. Wayback Machine Extractor: sends pages to the Wayback Machine for archiving, and stores the link.
3. WACZ Extractor: runs a web browser to 'browse' the URL and save a copy of the page in WACZ format.
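
As a sketch, the `steps` section of an orchestration file enabling two extractors looks like this (module names taken from the example configuration file; the comments are interpretive):

```yaml
steps:
  extractors:
  - generic_extractor    # yt-dlp based, works across many sites
  - telegram_extractor   # site-specific extractor
```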
```{include} autogen/extractor.md
```

```{toctree}
:maxdepth: 1
:hidden:
:glob:
autogen/extractor/*
```
@@ -0,0 +1,20 @@
# Feeder Modules

Feeder modules are used to feed URLs into the `auto-archiver` for processing. Feeders can take these URLs from a variety of sources, such as a file, a database, or the command line.

The default feeder is the command line feeder (`cli_feeder`), which allows you to input URLs directly into the `auto-archiver` from the command line.

Command line feeder usage:
```{code} bash
auto-archiver [options] -- URL1 URL2 ...
```

```{include} autogen/feeder.md
```

```{toctree}
:maxdepth: 1
:glob:
:hidden:
autogen/feeder/*
```
@@ -0,0 +1,13 @@
# Formatter Modules

Formatter modules are used to format the data extracted from a URL into a specific format. Currently the most widely-used formatter is the HTML formatter, which formats the data into an easily viewable HTML page.

```{include} autogen/formatter.md
```

```{toctree}
:maxdepth: 1
:hidden:
:glob:
autogen/formatter/*
```
@@ -0,0 +1,15 @@
# Storage Modules

Storage modules are used to store the data extracted from a URL in a persistent location. This can be on your local hard disk, or on a remote server (e.g. S3 or Google Drive).

The default is to store the files downloaded (e.g. images, videos) in a local directory.
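
As a sketch, enabling local storage and choosing where files land looks like this (option names and defaults are from the configuration tables earlier in this document; the comments are interpretive):

```yaml
steps:
  storages:
  - local_storage

local_storage:
  save_to: ./archived          # folder where archived content is saved
  path_generator: url          # 'flat', 'url' or 'random' directory structure
  filename_generator: random   # 'random' or 'static' (hash-based) file names
```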
```{include} autogen/storage.md
```

```{toctree}
:maxdepth: 1
:hidden:
:glob:
autogen/storage/*
```
@@ -0,0 +1,16 @@

```{include} ../../README.md
```

```{toctree}
:maxdepth: 2
:hidden:
:caption: Contents:

Overview <self>
installation/installation.rst
core_modules.md
how_to
development/developer_guidelines
autoapi/index.rst
```
@@ -1,11 +0,0 @@

User Guidelines
===============

This section of the documentation provides guidelines for users who want to use the tool,
without needing to modify the code.
To see the developer guidelines, see :ref:`developer_guidelines`.

.. note::

   This is a work in progress.
@@ -1,156 +0,0 @@
steps:
  # only 1 feeder allowed
  feeder: gsheet_feeder # defaults to cli_feeder
  archivers: # order matters, uncomment to activate
    - bluesky_archiver
    # - vk_archiver
    # - telethon_archiver
    # - telegram_archiver
    # - twitter_archiver
    # - twitter_api_archiver
    # - instagram_api_archiver
    # - instagram_tbot_archiver
    # - instagram_archiver
    # - tiktok_archiver
    - youtubedl_archiver
    # - wayback_archiver_enricher
    # - wacz_archiver_enricher
  enrichers:
    - hash_enricher
    # - meta_enricher
    # - metadata_enricher
    # - screenshot_enricher
    # - pdq_hash_enricher
    # - ssl_enricher
    # - timestamping_enricher
    # - whisper_enricher
    # - thumbnail_enricher
    # - wayback_archiver_enricher
    # - wacz_archiver_enricher
    # - pdq_hash_enricher # if you want to calculate hashes for thumbnails, include this after thumbnail_enricher
  formatter: html_formatter # defaults to mute_formatter
  storages:
    - local_storage
    # - s3_storage
    # - gdrive_storage
  databases:
    - console_db
    # - csv_db
    # - gsheet_db
    # - mongo_db

configurations:
  gsheet_feeder:
    sheet: "your sheet name"
    header: 1
    service_account: "secrets/service_account.json"
    # allow_worksheets: "only parse this worksheet"
    # block_worksheets: "blocked sheet 1,blocked sheet 2"
    use_sheet_names_in_stored_paths: false
    columns:
      url: link
      status: archive status
      folder: destination folder
      archive: archive location
      date: archive date
      thumbnail: thumbnail
      timestamp: upload timestamp
      title: upload title
      text: textual content
      screenshot: screenshot
      hash: hash
      pdq_hash: perceptual hashes
      wacz: wacz
      replaywebpage: replaywebpage
  instagram_tbot_archiver:
    api_id: "TELEGRAM_BOT_API_ID"
    api_hash: "TELEGRAM_BOT_API_HASH"
    # session_file: "secrets/anon"
  telethon_archiver:
    api_id: "TELEGRAM_BOT_API_ID"
    api_hash: "TELEGRAM_BOT_API_HASH"
    # session_file: "secrets/anon"
    join_channels: false
    channel_invites: # if you want to archive from private channels
      - invite: https://t.me/+123456789
        id: 0000000001
      - invite: https://t.me/+123456788
        id: 0000000002

  twitter_api_archiver:
    # either bearer_token only
    bearer_token: "TWITTER_BEARER_TOKEN"
    # OR all of the below
    # consumer_key: ""
    # consumer_secret: ""
    # access_token: ""
    # access_secret: ""
  instagram_archiver:
    username: "INSTAGRAM_USERNAME"
    password: "INSTAGRAM_PASSWORD"
    # session_file: "secrets/instaloader.session"

  vk_archiver:
    username: "or phone number"
    password: "vk pass"
    session_file: "secrets/vk_config.v2.json"

  youtubedl_archiver:
    subtitles: true
    # use one of the following two methods to authenticate in youtube - either provide a cookies file or use the cookies of the given browser
    # for more information, see https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp
    # cookie_file: "secrets/youtube_cookies.txt"
    # cookies_from_browser: firefox
    # proxy: socks5://proxy-user:password@proxy-ip:port

  screenshot_enricher:
    width: 1280
    height: 2300
    # to save as pdf, uncomment the following lines and adjust the print options
    # save_to_pdf: true
    # print_options:
    #   for all options see https://www.selenium.dev/selenium/docs/api/py/webdriver/selenium.webdriver.common.print_page_options.html
    #   background: true
    #   orientation: "portrait"
    #   scale: 1
    #   page_width: 8.5in
    #   page_height: 11in
    #   margin_top: 0.4in
    #   margin_bottom: 0.4in
    #   margin_left: 0.4in
    #   margin_right: 0.4in
    #   page_ranges: ""
    #   shrink_to_fit: true

  wayback_archiver_enricher:
    timeout: 10
    key: "wayback key"
    secret: "wayback secret"
  hash_enricher:
    algorithm: "SHA3-512" # can also be SHA-256
  wacz_archiver_enricher:
    profile: secrets/profile.tar.gz
  local_storage:
    save_to: "./local_archive"
    save_absolute: true
    filename_generator: static
    path_generator: flat
  s3_storage:
    bucket: your-bucket-name
    region: reg1
    key: S3_KEY
    secret: S3_SECRET
    endpoint_url: "https://{region}.digitaloceanspaces.com"
    cdn_url: "https://{bucket}.{region}.cdn.digitaloceanspaces.com/{key}"
    # if private:true S3 urls will not be readable online
    private: false
    # with 'random' you can generate a random UUID for the URL instead of a predictable path, useful to still have public but unlisted files; the alternative is 'default', or omit it from the config
    key_path: random
  gdrive_storage:
    path_generator: url
    filename_generator: random
    root_folder_id: folder_id_from_url
    oauth_token: secrets/gd-token.json # needs to be generated with scripts/create_update_gdrive_oauth_token.py
    service_account: "secrets/service_account.json"
  csv_db:
    csv_file: "./local_archive/db.csv"
@@ -1,5 +1,24 @@
# This file is automatically @generated by Poetry 2.0.1 and should not be changed by hand.

[[package]]
name = "accessible-pygments"
version = "0.0.5"
description = "A collection of accessible pygments styles"
optional = false
python-versions = ">=3.9"
groups = ["docs"]
files = [
    {file = "accessible_pygments-0.0.5-py3-none-any.whl", hash = "sha256:88ae3211e68a1d0b011504b2ffc1691feafce124b845bd072ab6f9f66f34d4b7"},
    {file = "accessible_pygments-0.0.5.tar.gz", hash = "sha256:40918d3e6a2b619ad424cb91e556bd3bd8865443d9f22f1dcdf79e33c8046872"},
]

[package.dependencies]
pygments = ">=1.5"

[package.extras]
dev = ["pillow", "pkginfo (>=1.10)", "playwright", "pre-commit", "setuptools", "twine (>=5.0)"]
tests = ["hypothesis", "pytest"]

[[package]]
name = "alabaster"
version = "1.0.0"
@@ -723,24 +742,6 @@ future = "*"
[package.extras]
dev = ["Sphinx (==2.1.0)", "future (==0.17.1)", "numpy (==1.16.4)", "pytest (==4.6.1)", "pytest-mock (==1.10.4)", "tox (==3.12.1)"]

[[package]]
name = "furo"
version = "2024.8.6"
description = "A clean customisable Sphinx documentation theme."
optional = false
python-versions = ">=3.8"
groups = ["docs"]
files = [
    {file = "furo-2024.8.6-py3-none-any.whl", hash = "sha256:6cd97c58b47813d3619e63e9081169880fbe331f0ca883c871ff1f3f11814f5c"},
    {file = "furo-2024.8.6.tar.gz", hash = "sha256:b63e4cee8abfc3136d3bc03a3d45a76a850bada4d6374d24c1716b0e01394a01"},
]

[package.dependencies]
beautifulsoup4 = "*"
pygments = ">=2.7"
sphinx = ">=6.0,<9.0"
sphinx-basic-ng = ">=1.0.0.beta2"

[[package]]
name = "future"
version = "1.0.0"
@@ -1020,6 +1021,27 @@ files = [
[package.dependencies]
attrs = ">=19.2.0"

[[package]]
name = "linkify-it-py"
version = "2.0.3"
description = "Links recognition library with FULL unicode support."
optional = false
python-versions = ">=3.7"
groups = ["docs"]
files = [
    {file = "linkify-it-py-2.0.3.tar.gz", hash = "sha256:68cda27e162e9215c17d786649d1da0021a451bdc436ef9e0fa0ba5234b9b048"},
    {file = "linkify_it_py-2.0.3-py3-none-any.whl", hash = "sha256:6bcbc417b0ac14323382aef5c5192c0075bf8a9d6b41820a2b66371eac6b6d79"},
]

[package.dependencies]
uc-micro-py = "*"

[package.extras]
benchmark = ["pytest", "pytest-benchmark"]
dev = ["black", "flake8", "isort", "pre-commit", "pyproject-flake8"]
doc = ["myst-parser", "sphinx", "sphinx-book-theme"]
test = ["coverage", "pytest", "pytest-cov"]

[[package]]
name = "loguru"
version = "0.7.3"
@@ -1654,6 +1676,34 @@ files = [
    {file = "pycryptodomex-3.21.0.tar.gz", hash = "sha256:222d0bd05381dd25c32dd6065c071ebf084212ab79bab4599ba9e6a3e0009e6c"},
]

[[package]]
name = "pydata-sphinx-theme"
version = "0.16.1"
description = "Bootstrap-based Sphinx theme from the PyData community"
optional = false
python-versions = ">=3.9"
groups = ["docs"]
files = [
    {file = "pydata_sphinx_theme-0.16.1-py3-none-any.whl", hash = "sha256:225331e8ac4b32682c18fcac5a57a6f717c4e632cea5dd0e247b55155faeccde"},
    {file = "pydata_sphinx_theme-0.16.1.tar.gz", hash = "sha256:a08b7f0b7f70387219dc659bff0893a7554d5eb39b59d3b8ef37b8401b7642d7"},
]

[package.dependencies]
accessible-pygments = "*"
Babel = "*"
beautifulsoup4 = "*"
docutils = "!=0.17.0"
pygments = ">=2.7"
sphinx = ">=6.1"
typing-extensions = "*"

[package.extras]
a11y = ["pytest-playwright"]
dev = ["pandoc", "pre-commit", "pydata-sphinx-theme[doc,test]", "pyyaml", "sphinx-theme-builder[cli]", "tox"]
doc = ["ablog (>=0.11.8)", "colorama", "graphviz", "ipykernel", "ipyleaflet", "ipywidgets", "jupyter_sphinx", "jupyterlite-sphinx", "linkify-it-py", "matplotlib", "myst-parser", "nbsphinx", "numpy", "numpydoc", "pandas", "plotly", "rich", "sphinx-autoapi (>=3.0.0)", "sphinx-copybutton", "sphinx-design", "sphinx-favicon (>=1.0.1)", "sphinx-sitemap", "sphinx-togglebutton", "sphinxcontrib-youtube (>=1.4.1)", "sphinxext-rediraffe", "xarray"]
i18n = ["Babel", "jinja2"]
test = ["pytest", "pytest-cov", "pytest-regressions", "sphinx[test]"]

[[package]]
name = "pygments"
version = "2.19.1"
@@ -2360,22 +2410,25 @@ websockets = ">=11"
 test = ["httpx", "pytest (>=6)"]
 
 [[package]]
-name = "sphinx-basic-ng"
-version = "1.0.0b2"
-description = "A modern skeleton for Sphinx themes."
+name = "sphinx-book-theme"
+version = "1.1.3"
+description = "A clean book theme for scientific explanations and documentation with Sphinx"
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.9"
 groups = ["docs"]
 files = [
-    {file = "sphinx_basic_ng-1.0.0b2-py3-none-any.whl", hash = "sha256:eb09aedbabfb650607e9b4b68c9d240b90b1e1be221d6ad71d61c52e29f7932b"},
-    {file = "sphinx_basic_ng-1.0.0b2.tar.gz", hash = "sha256:9ec55a47c90c8c002b5960c57492ec3021f5193cb26cebc2dc4ea226848651c9"},
+    {file = "sphinx_book_theme-1.1.3-py3-none-any.whl", hash = "sha256:a554a9a7ac3881979a87a2b10f633aa2a5706e72218a10f71be38b3c9e831ae9"},
+    {file = "sphinx_book_theme-1.1.3.tar.gz", hash = "sha256:1f25483b1846cb3d353a6bc61b3b45b031f4acf845665d7da90e01ae0aef5b4d"},
 ]
 
 [package.dependencies]
-sphinx = ">=4.0"
+pydata-sphinx-theme = ">=0.15.2"
+sphinx = ">=5"
 
 [package.extras]
-docs = ["furo", "ipython", "myst-parser", "sphinx-copybutton", "sphinx-inline-tabs"]
+code-style = ["pre-commit"]
+doc = ["ablog", "folium", "ipywidgets", "matplotlib", "myst-nb", "nbclient", "numpy", "numpydoc", "pandas", "plotly", "sphinx-copybutton", "sphinx-design", "sphinx-examples", "sphinx-tabs", "sphinx-thebe", "sphinx-togglebutton", "sphinxcontrib-bibtex", "sphinxcontrib-youtube", "sphinxext-opengraph"]
+test = ["beautifulsoup4", "coverage", "defusedxml", "myst-nb", "pytest", "pytest-cov", "pytest-regressions", "sphinx_thebe"]
 
 [[package]]
 name = "sphinx-copybutton"
@@ -2746,6 +2799,21 @@ tzdata = {version = "*", markers = "platform_system == \"Windows\""}
 [package.extras]
 devenv = ["check-manifest", "pytest (>=4.3)", "pytest-cov", "pytest-mock (>=3.3)", "zest.releaser"]
 
+[[package]]
+name = "uc-micro-py"
+version = "1.0.3"
+description = "Micro subset of unicode data files for linkify-it-py projects."
+optional = false
+python-versions = ">=3.7"
+groups = ["docs"]
+files = [
+    {file = "uc-micro-py-1.0.3.tar.gz", hash = "sha256:d321b92cff673ec58027c04015fcaa8bb1e005478643ff4a500882eaab88c48a"},
+    {file = "uc_micro_py-1.0.3-py3-none-any.whl", hash = "sha256:db1dffff340817673d7b466ec86114a9dc0e9d4d9b5ba229d9d60e5c12600cd5"},
+]
+
+[package.extras]
+test = ["coverage", "pytest", "pytest-cov"]
+
 [[package]]
 name = "uritemplate"
 version = "4.1.1"
@@ -3101,4 +3169,4 @@ test = ["pytest (>=8.1,<9.0)", "pytest-rerunfailures (>=14.0,<15.0)"]
 [metadata]
 lock-version = "2.1"
 python-versions = ">=3.10,<3.13"
-content-hash = "9ca114395e73af8982abbccc25b385bbca62e50ba7cca8239e52e5c1227cb4b0"
+content-hash = "c2503c982b9362c3757f39432cdaa8375b45e2d4a0497fa80c2b82a65d1eedf7"
@@ -72,7 +72,8 @@ sphinxcontrib-mermaid = "^1.0.0"
 sphinx-autobuild = "^2024.10.3"
 sphinx-copybutton = "^0.5.2"
 myst-parser = "^4.0.0"
-furo = "^2024.8.6"
+sphinx-book-theme = "^1.1.3"
+linkify-it-py = "^2.0.3"
 
 
 [project.scripts]
@@ -48,6 +48,7 @@ authentication: {}
 
 logging:
   level: INFO
 
 """)
 # note: 'logging' is explicitly added above in order to better format the config file
@@ -1,3 +1,8 @@
+"""
+Database module for the auto-archiver that defines the interface for implementing database modules
+in the media archiving framework.
+"""
+
 from __future__ import annotations
 from abc import abstractmethod
 from typing import Union
@@ -5,6 +10,11 @@ from typing import Union
 from auto_archiver.core import Metadata, BaseModule
 
 class Database(BaseModule):
+    """
+    Base class for implementing database modules in the media archiving framework.
+
+    Subclasses must implement the `fetch` and `done` methods to define platform-specific behavior.
+    """
 
     def started(self, item: Metadata) -> None:
        """signals the DB that the given item archival has started"""
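To make the documented interface concrete, here is a minimal sketch of a database module built on this base class. The class is invented for illustration, and the exact signatures of `fetch` and `done` are assumptions inferred from the docstring above:

```python
from typing import Union

from loguru import logger

from auto_archiver.core import Metadata
from auto_archiver.core.database import Database


class LogDatabase(Database):
    """Hypothetical database module that only logs archival progress."""

    def started(self, item: Metadata) -> None:
        logger.info(f"started archiving {item.get_url()}")

    def fetch(self, item: Metadata) -> Union[Metadata, bool]:
        # no persistence layer here, so nothing is ever found in the "cache"
        return False

    def done(self, item: Metadata, cached: bool = False) -> None:
        # assumed signature: called once the item has been fully archived
        logger.success(f"done archiving {item.get_url()}")
```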
@@ -1,5 +1,5 @@
 """
-Enrichers are modular components that enhance archived content by adding
+Base module for Enrichers – modular components that enhance archived content by adding
 context, metadata, or additional processing.
 
 These add additional information to the context, such as screenshots, hashes, and metadata.
@@ -13,7 +13,16 @@ from abc import abstractmethod
 from auto_archiver.core import Metadata, BaseModule
 
 class Enricher(BaseModule):
-    """Base classes and utilities for enrichers in the Auto-Archiver system."""
+    """Base classes and utilities for enrichers in the Auto-Archiver system.
+
+    Enricher modules must implement the `enrich` method to define their behavior.
+    """
 
     @abstractmethod
-    def enrich(self, to_enrich: Metadata) -> None: pass
+    def enrich(self, to_enrich: Metadata) -> None:
+        """
+        Enriches a Metadata object with additional information or context.
+
+        Takes the metadata object to enrich as an argument and modifies it in place, returning None.
+        """
+        pass
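For reference, a minimal sketch of what an enricher written against this interface could look like. The class is hypothetical, and the `Metadata.set`/`Metadata.media` accessors are assumed from their use elsewhere in the codebase:

```python
import os

from loguru import logger

from auto_archiver.core import Metadata
from auto_archiver.core.enricher import Enricher


class SizeEnricher(Enricher):
    """Hypothetical enricher that records the total size of archived media."""

    def enrich(self, to_enrich: Metadata) -> None:
        # enrichers modify the Metadata object in place, as the docstring requires
        total = sum(os.path.getsize(m.filename) for m in to_enrich.media if m.filename)
        to_enrich.set("total_bytes", total)
        logger.debug(f"media for {to_enrich.get_url()} totals {total} bytes")
```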
@@ -29,14 +29,24 @@ class Extractor(BaseModule):
     valid_url: re.Pattern = None
 
     def cleanup(self) -> None:
-        # called when extractors are done, or upon errors, cleanup any resources
+        """
+        Called when extractors are done, or upon errors, to clean up any resources
+        """
         pass
 
     def sanitize_url(self, url: str) -> str:
-        # used to clean unnecessary URL parameters OR unfurl redirect links
+        """
+        Used to clean unnecessary URL parameters OR to unfurl redirect links
+        """
         return url
 
     def match_link(self, url: str) -> re.Match:
+        """
+        Returns a match object if the given URL matches the valid_url pattern, or False/None if not.
+
+        Normally used in the `suitable` method to check if the URL is supported by this extractor.
+
+        """
         return self.valid_url.match(url)
 
     def suitable(self, url: str) -> bool:
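A sketch of how `valid_url`, `sanitize_url`, and the `match_link`-based `suitable` check could come together in a concrete extractor; the class, the URL pattern, and the `download` body are illustrative assumptions rather than an existing module:

```python
import re

from auto_archiver.core import Metadata
from auto_archiver.core.extractor import Extractor


class ExampleExtractor(Extractor):
    """Hypothetical extractor that only claims example.com links."""

    # `suitable` can rely on `match_link`, which matches URLs against this pattern
    valid_url: re.Pattern = re.compile(r"https?://(www\.)?example\.com/\S+")

    def sanitize_url(self, url: str) -> str:
        # drop query-string trackers before archiving
        return url.split("?", 1)[0]

    def download(self, item: Metadata) -> Metadata:
        # a real extractor would fetch the page/media here and attach it to `item`
        item.set_title("Example post")
        return item
```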
@@ -1,3 +1,7 @@
+"""
+The feeder base module defines the interface for implementing feeders in the media archiving framework.
+"""
+
 from __future__ import annotations
 from abc import abstractmethod
 from auto_archiver.core import Metadata
@@ -5,5 +9,17 @@ from auto_archiver.core import BaseModule
 
 class Feeder(BaseModule):
 
+    """
+    Base class for implementing feeders in the media archiving framework.
+
+    Subclasses must implement the `__iter__` method to define platform-specific behavior.
+    """
+
     @abstractmethod
-    def __iter__(self) -> Metadata: return None
+    def __iter__(self) -> Metadata:
+        """
+        Returns an iterator (use `yield`) over the items to be archived.
+
+        These should be instances of Metadata, typically created with Metadata().set_url(url).
+        """
+        return None
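Since the docstring spells out the `yield`/`Metadata().set_url(url)` convention, a minimal feeder might look like this; the class name and the hard-coded URL list are invented for illustration:

```python
from auto_archiver.core import Metadata
from auto_archiver.core.feeder import Feeder


class ListFeeder(Feeder):
    """Hypothetical feeder that yields a fixed list of URLs."""

    urls = ["https://example.com/post/1", "https://example.com/post/2"]

    def __iter__(self) -> Metadata:
        for url in self.urls:
            # each archivable item is a Metadata object seeded with its URL
            yield Metadata().set_url(url)
```

The orchestrator can then simply iterate over the feeder to obtain the items to archive.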
@@ -1,9 +1,24 @@
+"""
+Base module for formatters – modular components that format metadata into media objects for storage.
+
+The most commonly used formatter is the HTML formatter, which takes metadata and formats it into an HTML file for storage.
+"""
+
 from __future__ import annotations
 from abc import abstractmethod
 from auto_archiver.core import Metadata, Media, BaseModule
 
 
 class Formatter(BaseModule):
+    """
+    Base class for implementing formatters in the media archiving framework.
+
+    Subclasses must implement the `format` method to define their behavior.
+    """
 
     @abstractmethod
-    def format(self, item: Metadata) -> Media: return None
+    def format(self, item: Metadata) -> Media:
+        """
+        Formats a Metadata object into a user-viewable format (e.g. HTML) and stores it if needed.
+        """
+        return None
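A minimal sketch of a formatter against this interface; the plain-text output and the `Media(filename=...)` construction are illustrative assumptions, not the built-in HTML formatter mentioned above:

```python
from auto_archiver.core import Media, Metadata
from auto_archiver.core.formatter import Formatter


class PlainTextFormatter(Formatter):
    """Hypothetical formatter that renders an item as a plain-text summary."""

    def format(self, item: Metadata) -> Media:
        path = "archive_summary.txt"
        with open(path, "w") as f:
            f.write(f"{item.get_url()}\n")
            for media in item.media:
                f.write(f"- {media.filename}\n")
        # the returned Media object is what gets handed to the configured storages
        return Media(filename=path)
```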
@@ -1,3 +1,7 @@
+"""
+Base module for Storage modules – modular components that store media objects in various locations.
+"""
+
 from __future__ import annotations
 from abc import abstractmethod
 from typing import IO
@@ -13,6 +17,12 @@ from auto_archiver.modules.hash_enricher.hash_enricher import HashEnricher
 from auto_archiver.core.module import get_module
 
 class Storage(BaseModule):
+
+    """
+    Base class for implementing storage modules in the media archiving framework.
+
+    Subclasses must implement the `get_cdn_url` and `uploadf` methods to define their behavior.
+    """
 
     def store(self, media: Media, url: str, metadata: Metadata=None) -> None:
         if media.is_stored(in_storage=self):
             logger.debug(f"{media.key} already stored, skipping")
@@ -22,10 +32,18 @@ class Storage(BaseModule):
         media.add_url(self.get_cdn_url(media))
 
     @abstractmethod
-    def get_cdn_url(self, media: Media) -> str: pass
+    def get_cdn_url(self, media: Media) -> str:
+        """
+        Returns the URL of the media object stored in the CDN.
+        """
+        pass
 
     @abstractmethod
-    def uploadf(self, file: IO[bytes], key: str, **kwargs: dict) -> bool: pass
+    def uploadf(self, file: IO[bytes], key: str, **kwargs: dict) -> bool:
+        """
+        Uploads (or saves) a file to the storage service/location.
+        """
+        pass
 
     def upload(self, media: Media, **kwargs) -> bool:
         logger.debug(f'[{self.__class__.__name__}] storing file {media.filename} with key {media.key}')
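To illustrate the two abstract methods, here is a sketch of a storage module that writes to a local folder. The class is hypothetical; the use of `media.key` follows the `store`/`upload` code above, and the `file://` URL scheme is an assumption:

```python
import os
import shutil
from typing import IO

from auto_archiver.core import Media
from auto_archiver.core.storage import Storage


class LocalFolderStorage(Storage):
    """Hypothetical storage that copies files into a local folder."""

    root = "./archive_store"

    def get_cdn_url(self, media: Media) -> str:
        # the "CDN" here is just the local filesystem
        return "file://" + os.path.abspath(os.path.join(self.root, media.key))

    def uploadf(self, file: IO[bytes], key: str, **kwargs: dict) -> bool:
        dest = os.path.join(self.root, key)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "wb") as out:
            shutil.copyfileobj(file, out)
        return True
```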
@@ -11,6 +11,8 @@
     "api_token": {
         "default": None,
         "help": "An Atlos API token. For more information, see https://docs.atlos.org/technical/api/",
+        "required": True,
+        "type": "str",
     },
     "atlos_url": {
         "default": "https://platform.atlos.org",
@@ -1,13 +0,0 @@
-def get_atlos_config_options():
-    return {
-        "api_token": {
-            "default": None,
-            "help": "An Atlos API token. For more information, see https://docs.atlos.org/technical/api/",
-            "type": str
-        },
-        "atlos_url": {
-            "default": "https://platform.atlos.org",
-            "help": "The URL of your Atlos instance (e.g., https://platform.atlos.org), without a trailing slash.",
-            "type": str
-        },
-    }
@@ -0,0 +1,32 @@
+{
+    "name": "Atlos Storage",
+    "type": ["storage"],
+    "requires_setup": True,
+    "dependencies": {
+        "python": ["loguru", "boto3"],
+        "bin": []
+    },
+    "description": """
+    Stores media files in [Atlos](https://www.atlos.org/).
+
+    ### Features
+    - Saves media files to Atlos, organizing them into folders based on the provided path structure.
+
+    ### Notes
+    - Requires setup with Atlos credentials.
+    - Files are uploaded to the specified `root_folder_id` and organized by the `media.key` structure.
+    """,
+    "configs": {
+        "api_token": {
+            "default": None,
+            "help": "An Atlos API token. For more information, see https://docs.atlos.org/technical/api/",
+            "required": True,
+            "type": "str"
+        },
+        "atlos_url": {
+            "default": "https://platform.atlos.org",
+            "help": "The URL of your Atlos instance (e.g., https://platform.atlos.org), without a trailing slash.",
+            "type": "str"
+        },
+    }
+}
@@ -32,7 +32,6 @@
 
 GDriveStorage: A storage module for saving archived content to Google Drive.
 
-Author: Dave Mateer, (And maintained by: )
 Source Documentation: https://davemateer.com/2022/04/28/google-drive-with-python
 
 ### Features
@@ -20,5 +20,6 @@
     - Processes HTML content of messages to retrieve embedded media.
     - Sets structured metadata, including timestamps, content, and media details.
     - Does not require user authentication for Telegram.
+
     """,
 }
@@ -1,5 +1,5 @@
 {
-    "name": "telethon_extractor",
+    "name": "Telethon Extractor",
     "type": ["extractor"],
     "requires_setup": True,
     "dependencies": {
@@ -40,5 +40,9 @@ To use the `TelethonExtractor`, you must configure the following:
 - **Bot Token**: Optional, allows access to additional content (e.g., large videos) but limits private channel archiving.
 - **Channel Invites**: Optional, specify a JSON string of invite links to join channels during setup.
 
+### First Time Login
+The first time you run, you will be prompted to authenticate with the associated phone number; alternatively, you can place your `anon.session` file in the root directory.
+
+
 """
 }
@@ -1,6 +1,6 @@
 {
     "name": "WACZ Enricher",
-    "type": ["enricher", "archiver"],
+    "type": ["enricher", "extractor"],
     "entry_point": "wacz_enricher::WaczExtractorEnricher",
     "requires_setup": True,
     "dependencies": {
@@ -1,6 +1,6 @@
 {
     "name": "Wayback Machine Enricher",
-    "type": ["enricher", "archiver"],
+    "type": ["enricher", "extractor"],
     "entry_point": "wayback_extractor_enricher::WaybackExtractorEnricher",
     "requires_setup": True,
     "dependencies": {
@@ -2,7 +2,6 @@
 # we need to explicitly expose the available imports here
 from .misc import *
 from .webdriver import Webdriver
-from .atlos import get_atlos_config_options
 
 # handy utils from ytdlp
 from yt_dlp.utils import (clean_html, traverse_obj, strip_or_none, url_or_none)
@@ -1,13 +0,0 @@
-def get_atlos_config_options():
-    return {
-        "api_token": {
-            "default": None,
-            "help": "An Atlos API token. For more information, see https://docs.atlos.org/technical/api/",
-            "cli_set": lambda cli_val, _: cli_val
-        },
-        "atlos_url": {
-            "default": "https://platform.atlos.org",
-            "help": "The URL of your Atlos instance (e.g., https://platform.atlos.org), without a trailing slash.",
-            "cli_set": lambda cli_val, _: cli_val
-        },
-    }
@@ -1,11 +1,29 @@
 {
     # Display Name of your module
     "name": "Example Module",
+    # The author of your module (optional)
+    "author": "John Doe",
+    # Optional version number, for your own versioning purposes
+    "version": 2.0,
     # The type of the module, must be one (or more) of the built-in module types
     "type": ["extractor", "feeder", "formatter", "storage", "enricher", "database"],
+    # a boolean indicating whether or not a module requires additional user setup before it can be used
+    # for example: adding API keys, installing additional software etc.
     "requires_setup": False,
-    "dependencies": {"python": ["loguru"]},
+    # a dictionary of dependencies for this module, that must be installed before the module is loaded.
+    # Can be python dependencies (external packages, or other auto-archiver modules), or you can
+    # provide external bin dependencies (e.g. ffmpeg, docker etc.)
+    "dependencies": {
+        "python": ["loguru"],
+        "bin": ["bash"],
+    },
+    # configurations that this module takes. These are argparse-compliant dictionaries, that are
+    # used to create command line arguments when the programme is run.
+    # The full name of the config option will become: `module_name.config_name`
+    "configs": {
+        "csv_file": {"default": "db.csv", "help": "CSV file name"},
+        "required_field": {"required": True, "help": "required field in the CSV file"},
+    },
     # A description of the module, used for documentation
     "description": "This is an example module",
 }
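A manifest like this is normally paired with an entry-point module class. The sketch below assumes the convention, visible in other modules in this diff, that each declared config surfaces as an attribute on the module instance (e.g. `self.csv_file`); the class itself and its base class choice are invented for illustration:

```python
from loguru import logger

from auto_archiver.core import Metadata
from auto_archiver.core.database import Database


class ExampleModule(Database):
    """Hypothetical entry point for the manifest above."""

    def done(self, item: Metadata, cached: bool = False) -> None:
        # configs declared in the manifest become attributes, so the CLI flag
        # `example_module.csv_file` is readable here as self.csv_file
        logger.info(f"would append {item.get_url()} to {self.csv_file}")
```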