Bridgy Fed

Got an IndieWeb site? Want to interact with federated social networks like Mastodon, Hubzilla, and more? Bridgy Fed is for you.

Bridgy Fed connects different decentralized social network protocols. It currently supports the fediverse (eg Mastodon) via ActivityPub, Bluesky via the AT Protocol, and the IndieWeb via webmentions and microformats2. Farcaster and Nostr are under consideration. Bridgy Fed translates profiles, likes, reposts, mentions, follows, and more from any supported network to any other. See the user docs and developer docs for more details.

https://fed.brid.gy/

License: This project is placed in the public domain. You may also use it under the CC0 License.

Development

Development reference docs are at bridgy-fed.readthedocs.io. Pull requests are welcome! Feel free to ping me in #indieweb-dev with any questions.

First, fork and clone this repo. Then, install the Google Cloud SDK and run gcloud components install cloud-firestore-emulator to install the Firestore emulator. Once you have them, set up your environment by running these commands in the repo root directory:

gcloud config set project bridgy-federated
python3 -m venv local
source local/bin/activate
pip install -r requirements.txt

Now, run the tests to check that everything is set up ok:

gcloud emulators firestore start --host-port=:8089 --database-mode=datastore-mode < /dev/null >& /dev/null &
python3 -m unittest discover

Finally, run this in the repo root directory to start the web app locally:

GAE_ENV=localdev FLASK_ENV=development flask run -p 8080

If you send a pull request, please include (or update) a test for the new functionality!

If you hit an error during setup, check out the oauth-dropins Troubleshooting/FAQ section.

You may need to change granary, oauth-dropins, mf2util, or other dependencies as well as Bridgy Fed. To do that, clone their repo locally, then install them in "source" mode with e.g.:

pip uninstall -y granary
pip install -e <path to granary>

To deploy to the production instance on App Engine (if @snarfed has added you as an owner), run:

gcloud -q beta app deploy --no-cache --project bridgy-federated *.yaml

How to add a new protocol

  1. Determine how you'll map the new protocol to other existing Bridgy Fed protocols, specifically identity, protocol inference, events, and operations. Add those to the existing tables in the docs in a PR. This is an important step before you start writing code.
  2. Implement the id and handle conversions in ids.py.
  3. If the new protocol uses a new data format - which is likely - add that format to granary in a new file with functions that convert to/from ActivityStreams 1 and tests. See nostr.py and test_nostr.py for examples, and the first sketch after this list.
  4. Implement the protocol in a new .py file as a subclass of both Protocol and User. Implement the send, fetch, serve, and target_for methods from Protocol, and handle and web_url from User (a rough skeleton is sketched after this list).
  5. TODO: add a new usage section to the docs for the new protocol.
  6. TODO: does the new protocol need any new UI or signup functionality? Unusual, but not impossible. Add that if necessary.
  7. Protocol logos may be emoji or image files. If yours is a file, add it to static/. Then add the emoji or file <img> tag in the Protocol subclass's LOGO_HTML constant.
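
For step 3, here's a minimal, hypothetical sketch of the conversion functions such a granary module might expose, following the to/from ActivityStreams 1 pattern that modules like nostr.py use. The module name, field names, and mappings below are invented for illustration; real converters handle many more object and verb types.

# granary/newformat.py (hypothetical): convert between the new
# protocol's native format and ActivityStreams 1.

def to_as1(obj):
    """Converts a native-format object to an ActivityStreams 1 dict."""
    return {
        'objectType': 'note',
        'content': obj.get('text'),         # invented field mapping
        'published': obj.get('created_at'),
    }

def from_as1(obj):
    """Converts an ActivityStreams 1 object to the native format."""
    return {
        'text': obj.get('content'),
        'created_at': obj.get('published'),
    }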
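
And for steps 4 and 7, a rough skeleton of what the new protocol module might look like. The class constants and method signatures here are simplified and not copied from the real Protocol and User classes; check protocol.py, models.py, and existing implementations like web.py, activitypub.py, and atproto.py for the actual interfaces and required behavior.

# newproto.py (hypothetical skeleton for a new protocol)
from models import User
from protocol import Protocol

class NewProto(User, Protocol):
    """Bridges a hypothetical NewProto network."""
    LABEL = 'newproto'
    # emoji logo, or an <img> tag for a file added under static/
    LOGO_HTML = '🆕'

    @classmethod
    def send(cls, obj, url, **kwargs):
        """Delivers an outgoing activity to a target on this network."""
        raise NotImplementedError()

    @classmethod
    def fetch(cls, obj, **kwargs):
        """Fetches an object from this network into obj."""
        raise NotImplementedError()

    @classmethod
    def serve(cls, obj):
        """Renders an object in this network's native format."""
        raise NotImplementedError()

    @classmethod
    def target_for(cls, obj, shared=False):
        """Returns the delivery target (eg inbox URL) for an object."""
        raise NotImplementedError()

    @property
    def handle(self):
        """Returns this user's human-readable handle on the network."""
        raise NotImplementedError()

    def web_url(self):
        """Returns this user's profile page URL on the network."""
        raise NotImplementedError()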

Stats

I occasionally generate stats and graphs of usage and growth via BigQuery, like I do with Bridgy. Here's how.

  1. Export the full datastore to Google Cloud Storage. Include all entities except MagicKey. Check to see if any new kinds have been added since the last time this command was run.

    gcloud datastore export --async gs://bridgy-federated.appspot.com/stats/ --kinds Follower,Object
    

    Note that --kinds is required. From the export docs:

    Data exported without specifying an entity filter cannot be loaded into BigQuery.

  2. Wait for it to be done with gcloud datastore operations list | grep done.

  3. Import it into BigQuery:

    for kind in Follower Object; do
      bq load --replace --nosync --source_format=DATASTORE_BACKUP datastore.$kind gs://bridgy-federated.appspot.com/stats/all_namespaces/kind_$kind/all_namespaces_kind_$kind.export_metadata
    done
    
  4. Check the jobs with bq ls -j, then wait for them with bq wait.

  5. Run the full stats BigQuery query. Download the results as CSV.

  6. Open the stats spreadsheet. Import the CSV, replacing the data sheet.

  7. Check out the graphs! Save full size images with OS or browser screenshots, thumbnails with the Download Chart button.