Previously, repo2docker only performed a shallow clone when no refspec
was provided. We now also perform a shallow clone when the refspec is
explicitly set to HEAD. Binder always specifies a refspec, often
HEAD, so this should improve some build times.
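The clone decision described above can be sketched as a small predicate (the function name is illustrative, not repo2docker's actual API):

```python
def can_shallow_clone(ref):
    """Return True when a depth-1 (shallow) clone is safe.

    Illustrative sketch: a shallow clone only fetches the tip of the
    default branch, so it works when no refspec is given or when the
    refspec is explicitly HEAD; any other ref (branch, tag, SHA) may
    not be reachable from a shallow fetch.
    """
    return ref is None or ref == "HEAD"
```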
- We should stop auto-updating Python 3.6 now, so we can move on to
newer notebook packages - a newer notebook version ought to fix
issues like https://github.com/2i2c-org/infrastructure/issues/1170,
for example.
- No longer explicitly pin nbconvert, as whatever version notebook
brings in should be sufficient.
- Bump up ipywidgets and jupyterlab while we're at it.
- No longer pin jupyterhub-singleuser either, as that too works
properly with Python 3.7.
MAMBA_EXE is mamba itself, not micromamba.
Use the variable everywhere, so switching fully to micromamba can happen in one place,
assuming they are fully compatible (they _almost_ are, except for `env update` vs `install` in a couple of places).
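That single switching point can be sketched like this (the helper name is hypothetical; the `env update` vs `install` divergence is the one noted above):

```python
import os


def update_env_command(exe, env_file, prefix):
    """Build the command that applies an environment.yml to a prefix.

    Hypothetical helper: with the executable passed in from one place,
    moving from mamba to micromamba is a one-line change elsewhere.
    """
    if os.path.basename(exe) == "micromamba":
        # micromamba lacks `env update`; `install -f` fills the same role
        return [exe, "install", "--yes", "-p", prefix, "-f", env_file]
    return [exe, "env", "update", "-p", prefix, "-f", env_file]
```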
Without this, you *always* needed a repo2docker_config.py
file to configure anything. This PR makes r2d match the
behavior of most traitlets-based applications (like jupyter_server,
jupyterhub, nbconvert, etc.).
Fixes https://github.com/jupyterhub/repo2docker/issues/1112
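For example, with this change a file like the following in the working directory is picked up automatically, with no explicit config flag (`log_level` comes from traitlets' base `Application`; any repo2docker-specific trait would work the same way):

```python
# repo2docker_config.py -- loaded automatically, like other traitlets apps
c.Repo2Docker.log_level = "DEBUG"
```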
Matches expectations in the directive sequences.
This isn't strictly required after preassemble, because we have no root steps between preassemble and assemble,
so we could remove the `last_user = "root"` there instead.
Includes a regression test.
r-recommended is a collection of common CRAN packages,
which cause conflicts when trying to install older R.
These same packages can be regular dependencies retrieved from CRAN.
A PPA is a specific kind of apt repository, hosted on
launchpad.net. We use https://cran.r-project.org/bin/linux/ubuntu/,
which is just a regular apt repository. The PPA terminology
always confused me, so this just clears that up.
- MRAN doesn't seem to have R 4.1-specific snapshots, so let's
default to RSPM for anything 4.1+.
- Snapshot dates in 2022 will also result in using RSPM.
- Install a different version of RStudio for R < 4.1, as the
latest RStudio doesn't seem to support those older R versions.
- Clean up how Shiny is installed - install it with the same
apt invocation as RStudio (saves time), and install shiny-proxy
from PyPI instead of GitHub. The release on PyPI is the same
as our previous GitHub pin.
- Remove an outdated comment about different behavior for R 3.6 - I
think we now get all our R versions from the same apt repo. Plus,
the conditional was running extra scripts, not just adding extra apt
package repos.
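The version/date split in the first two bullets can be sketched as follows (an illustrative function, not the actual implementation):

```python
from datetime import date


def choose_r_repo(r_version, snapshot_date):
    """Pick the CRAN snapshot provider; illustrative sketch only.

    RSPM for R 4.1+ (MRAN has no 4.1-specific snapshots), and for
    snapshot dates in 2022 (per the note above); MRAN otherwise.
    """
    if r_version >= (4, 1) or snapshot_date >= date(2022, 1, 1):
        return "rspm"
    return "mran"
```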
We were doing this from an old MRAN snapshot. I moved the pin
ahead a little, so IRkernel can also be installed from CRAN
instead of from GitHub. R <= 4.0 gets the old version, and anything
newer gets a more recent version of devtools. This gives us
fast installs for IRkernel with binary packages.
Also adds an R 4.0 and an R 4.1 test.
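The version split above amounts to a sketch like this (labels are placeholders; the real pin dates live in the buildpack):

```python
def devtools_channel(r_version):
    """Illustrative only: R <= 4.0 keeps the older devtools pin,
    anything newer gets a recent devtools, letting IRkernel come
    from CRAN as a fast binary install instead of from GitHub."""
    if r_version <= (4, 0):
        return "old-devtools-pin"
    return "recent-devtools"
```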
packagemanager.rstudio.com is a CRAN mirror provided
by RStudio, with *binary packages* prebuilt for many Linux
distributions! https://www.rstudio.com/blog/announcing-public-package-manager/
has more excellent detail. It cuts down install times for R packages
by almost 90% in some cases!
Like MRAN (which we use now), they also provide daily snapshots
of CRAN
(https://docs.rstudio.com/rspm/news/#rstudio-package-manager-2021090).
The CRAN URL for a particular date can be fetched via an API
call. We call that API, and if there is no snapshot for that date
(anything before Oct 2017), we fall back to MRAN. Adds a test
for this fallback.
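The fallback logic can be sketched like this; `rspm_lookup` stands in for the real RSPM API call (a hypothetical seam, injected so the logic is testable without the network):

```python
from datetime import date

RSPM_EARLIEST = date(2017, 10, 1)  # per the note above: nothing before Oct 2017


def cran_snapshot_url(snapshot_date, rspm_lookup):
    """Resolve a snapshot date to a CRAN URL, falling back to MRAN.

    Illustrative sketch: rspm_lookup(date) should return the RSPM
    snapshot URL, or None when RSPM has no snapshot for that date.
    """
    if snapshot_date >= RSPM_EARLIEST:
        url = rspm_lookup(snapshot_date)
        if url:
            return url
    # MRAN serves daily CRAN snapshots at a predictable URL
    return f"https://mran.microsoft.com/snapshot/{snapshot_date.isoformat()}"
```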
One possible issue with switching existing Binder repos to binary
builds rather than source builds: the binary builds sometimes
require an apt package to be installed, and will fail if it is
not. We had to install the zmq library apt package, for example -
source installs compile zmq from source, which is exactly the work
the binary packages skip, and where the speedup comes from. But
unlike Python wheels or conda packages, these binary builds are not
self-contained - they are linked to apt packages from the specific
distros. So some repos that worked before might fail now.
We can choose a more recent cut-off date to prevent this from happening.