mirror of https://gitlab.com/rysiekpl/libresilient
QUICKSTART: first jab at a simple alt-fetch scenario; fixes here and there
parent c3635371d1
commit 2e261e3ce8
@@ -23,7 +23,7 @@ We are going to assume a simple website, consisting of:

- `01-first.html`
- `02-second.html`

In fact, this hypothetical website is very similar to (and only a bit simpler than) [Resilient.Is](https://resilient.is/), the homepage of this project. For the purpose of this tutorial, we will assume we are hosting our website on [`example.org`](https://en.wikipedia.org/wiki/Example.org) as the primary original domain.

## First steps

@@ -39,7 +39,7 @@ To start, we need:

This is the heart of LibResilient. Once loaded, it will use the supplied configuration (in `config.json`) to load and configure plugins. Plugins in turn will perform actual requests and other tasks.

- the [`fetch` plugin](https://gitlab.com/rysiekpl/libresilient/-/blob/master/plugins/fetch.js)\
This LibResilient plugin uses the basic [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) to retrieve content.\
LibResilient expects plugins in the `plugins/` subdirectory of the directory where the `service-worker.js` script is located, so this file should be saved as `/plugins/fetch.js` for our hypothetical website.

- `config.json`\

@@ -141,6 +141,8 @@ Our `config.json` should now look like this:

Note the addition of the `cache` plugin config, and a "cache" component in `loggedComponents`. The `cache` plugin does not require any other configuration to work, so everything remains nice and simple.

When handling a request, LibResilient tries to retrieve the content using plugins in the order they are specified in the `config.json` file. Specifying `fetch` before `cache` effectively means: try retrieving the content using the `fetch` plugin, and if that fails, use the `cache` plugin.

You will also note the additional key in the config file: `defaultPluginTimeout`. This defines how long (in ms; `1000` means "1 second") LibResilient waits for a response from a plugin before deciding that it is not going to work and moving on to the next plugin. By default this is set to `10000` (so, 10 s), which is almost certainly too long for a website as simple as the one in our example. One second seems reasonable.

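
The full example file is not shown in this diff; as a rough sketch (derived from the complete config shown further down, minus the `alt-fetch` entry), a `fetch`-then-`cache` setup might look like this:

```json
{
    "plugins": [{
        "name": "fetch"
    },{
        "name": "cache"
    }],
    "loggedComponents": ["service-worker", "fetch", "cache"],
    "defaultPluginTimeout": 1000
}
```
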
What this gives us is that any content successfully retrieved by `fetch` will now be cached for offline use. If the website goes down for whatever reason (and the `fetch` plugin starts returning errors or just times out), users who had visited before will continue to have access to content they had already accessed.

@@ -159,7 +161,7 @@ What this gives us is that any content successfully retrieved by `fetch` will no

### Cache-first?

What if we do it the other way around, and specify the `cache` plugin before the `fetch` plugin? In that case we end up with a so-called ["cache-first"](https://apiumhub.com/tech-blog-barcelona/service-worker-caching/#Cache_first) strategy.

In the case of LibResilient this means that the first time a visitor loads our example website, their cache is empty, so the `cache` plugin will fail to return content. This will lead LibResilient to try the next configured plugin, which in this case is `fetch`. Content will get fetched by it, and then stashed locally by the `cache` plugin.

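
For comparison, a cache-first variant would simply list `cache` before `fetch`; a minimal sketch (hypothetical, not taken from the tutorial's files) might look like this:

```json
{
    "plugins": [{
        "name": "cache"
    },{
        "name": "fetch"
    }],
    "loggedComponents": ["service-worker", "fetch", "cache"],
    "defaultPluginTimeout": 1000
}
```
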
@@ -167,6 +169,70 @@ Next time that same visitor loads that particular resource, it will be served fr

> ### Note on stashing in LibResilient
>
> LibResilient treats stashing plugins in a special way. If the configuration includes multiple transport plugins, with a stashing plugin (like the `cache` plugin) between them, then:
>
> - when content is retrieved by a transport plugin (like `fetch`) specified *before* a stashing plugin, that content is then stashed by the stashing plugin for later offline use.
> - if all transport plugins specified *before* a stashing plugin fail and stashed content exists, it is provided as the response; LibResilient will then run, in the background, any transport plugins specified *after* the stashing plugin to try to retrieve a fresh version of the content; if any of these succeeds, the response will be stashed by the stashing plugin.

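
To make the ordering concrete, here is a rough sketch of a plugin list with a stashing plugin sitting between two transport plugins; it mirrors the `fetch` / `cache` / `alt-fetch` setup used further down in this quickstart (trimmed to just the `plugins` key, with a placeholder endpoint):

```json
{
    "plugins": [{
        "name": "fetch"
    },{
        "name": "cache"
    },{
        "name": "alt-fetch",
        "endpoints": ["https://example.gitlab.io/"]
    }]
}
```
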
For the time being, let's keep using the `fetch`-then-`cache` option, though.

## Alternative transport

We have a working Service Worker, we have it configured to retrieve content using the standard HTTPS fetch, and we made sure that successful requests are stashed for later use (using the `cache` stashing plugin). This makes it possible for our visitors to access content they have already accessed, even if our website is unavailable for whatever reason.

But it does not let them get new content in such a case. For that we need an alternative transport plugin.

The simplest available is the `alt-fetch` transport plugin. It still uses the Fetch API, but instead of fetching content from the original website address, it uses other, configured endpoints. We will need:

- the [`alt-fetch` plugin](https://gitlab.com/rysiekpl/libresilient/-/blob/master/plugins/alt-fetch.js)\
This LibResilient plugin performs [`fetch()`](https://developer.mozilla.org/en-US/docs/Web/API/fetch) requests against configured alternative endpoints in order to retrieve content.

- some actual alternative endpoint where our website content can be made available\
This should obviously not be within our domain, nor hosted on our main hosting, as the whole point of this is to have an alternative, independent location where content is available in case of any problems with the primary way of accessing our website.\
Let's assume we're using [Gitlab Pages](https://docs.gitlab.com/ee/user/project/pages/), as `example.gitlab.io`; of course, any static hosting location would do just fine!

- relevant configuration changes to `config.json` to enable the `alt-fetch` plugin and tell it where it can find the content.

The updated website structure now looks like this:

- `index.html`
- `favicon.ico`
- `/assets/`
    - `style.css`
    - `logo.png`
    - `font.woff`
- `/blog/`
    - `01-first.html`
    - `02-second.html`
- `config.json`
- `libresilient.js`
- `service-worker.js`
- `/plugins/`
    - `fetch.js`
    - `cache.js`
    - **`alt-fetch.js`**

And the `config.json` file contains:

```json
{
    "plugins": [{
        "name": "fetch"
    },{
        "name": "cache"
    },{
        "name": "alt-fetch",
        "endpoints": [
            "https://example.gitlab.io/"
        ]
    }],
    "loggedComponents": ["service-worker", "fetch", "cache"],
    "defaultPluginTimeout": 1000
}
```

The `alt-fetch` plugin *requires* the `endpoints` configuration key. That key contains, you guessed it, endpoints to try to retrieve the content from. And yes, there can be many of them — we will use that later on!
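
For example, with more than one alternative location, the `endpoints` array of the `alt-fetch` entry could be extended along these lines (the second URL is purely hypothetical, only to show the shape):

```json
{
    "name": "alt-fetch",
    "endpoints": [
        "https://example.gitlab.io/",
        "https://alt-mirror.example.com/"
    ]
}
```
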
For now, we also need to make sure that our content actually ends up on `https://example.gitlab.io/`. That's on you: LibResilient can't really help with that. It's up to the website administrator to make sure the content ends up where it needs to be for LibResilient to be able to access and retrieve it.

If our website content is available on both `https://example.org/` *and* `https://example.gitlab.io/`, this configuration finally makes it possible to provide content to visitors of our site who have visited it at least once, *even if the main domain is down*.