Mirror of https://github.com/jupyterhub/repo2docker

Merge pull request #909 from yuvipanda/feat/new-base

Let `FROM <base_image>` in the Dockerfile template be configurable

commit e8eab153e7
@@ -1,8 +1,8 @@
 # syntax = docker/dockerfile:1.3
-ARG ALPINE_VERSION=3.16
+ARG ALPINE_VERSION=3.17
 FROM alpine:${ALPINE_VERSION}
 
-RUN apk add --no-cache git python3 python3-dev py-pip build-base
+RUN apk add --no-cache git python3 python3-dev py3-pip py3-setuptools build-base
 
 # build wheels in first image
 ADD . /tmp/src
@@ -16,7 +16,7 @@ RUN mkdir /tmp/wheelhouse \
 FROM alpine:${ALPINE_VERSION}
 
 # install python, git, bash, mercurial
-RUN apk add --no-cache git git-lfs python3 py-pip bash docker mercurial
+RUN apk add --no-cache git git-lfs python3 py3-pip py3-setuptools bash docker mercurial
 
 # install hg-evolve (Mercurial extensions)
 RUN pip3 install hg-evolve --user --no-cache-dir
@@ -0,0 +1,33 @@
+# Change the base image used by Docker
+
+You may change the base image used in the `Dockerfile` that repo2docker uses to build images.
+This is equivalent to changing the `FROM <base_image>` line in the Dockerfile.
+
+To do so, use the `base_image` traitlet when invoking `repo2docker`.
+Note that this is not configurable by individual repositories; it is configured when you invoke the `repo2docker` command.
+
+```{note}
+By default repo2docker builds on top of the `buildpack-deps:bionic` base image, an Ubuntu-based image.
+```
+
+## Requirements for your base image
+
+`repo2docker` will only work if a specific set of packages exists in the base image.
+Only images that match the following criteria are supported:
+
+- Ubuntu-based distributions (minimum `18.04`)
+- Contains a set of base packages installed with [the `buildpack-deps` image family](https://hub.docker.com/_/buildpack-deps).
+
+Other images _may_ work, but are not officially supported.
+
+## This will affect reproducibility 🚨
+
+Changing the base image may have an impact on the reproducibility of repositories that are built.
+There are **no guarantees that repositories will behave the same way as other repo2docker builds if you change the base image**.
+For example, these are two scenarios that would make your repositories non-reproducible:
+
+- **Your base image is different from `ubuntu:bionic`.**
+  If you change the base image in a way that differs from repo2docker's default (the Ubuntu `bionic` image), then repositories that **you** build with repo2docker may be significantly different from those that **other** instances of repo2docker build (e.g., those from [`mybinder.org`](https://mybinder.org)).
+- **Your base image changes over time.**
+  If you choose a base image whose composition changes over time (e.g., an image provided by some other community), then repositories built with your base image may change in unpredictable ways.
+  We recommend choosing a base image that you know to be stable and trustworthy.
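As an illustration of the documentation above, a traitlets-style configuration file might look like the following. This is a hedged sketch: the file name and the way it is loaded are assumptions for the example, not taken from this diff; only the `Repo2Docker.base_image` trait itself is introduced by this PR.

```python
# Hypothetical config file for illustration only. `c` is the configuration
# object that traitlets-based applications expose in config files.
c.Repo2Docker.base_image = "docker.io/library/buildpack-deps:jammy"
```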
@@ -15,3 +15,4 @@ Select from the pages listed below to get started.
 lab_workspaces
 jupyterhub_images
 deploy
+base_image
@@ -447,6 +447,21 @@ class Repo2Docker(Application):
         """,
     )
 
+    base_image = Unicode(
+        "docker.io/library/buildpack-deps:bionic",
+        config=True,
+        help="""
+        Base image to use when building docker images.
+
+        Only images that match the following criteria are supported:
+        - Ubuntu based distributions, minimum 18.04
+        - Contains a set of base packages installed with the buildpack-deps
+          image family: https://hub.docker.com/_/buildpack-deps
+
+        Other images *may* work, but are not officially supported.
+        """,
+    )
+
     def get_engine(self):
         """Return an instance of the container engine.
 
@@ -793,12 +808,14 @@ class Repo2Docker(Application):
 
         with chdir(checkout_path):
             for BP in self.buildpacks:
-                bp = BP()
+                bp = BP(base_image=self.base_image)
                 if bp.detect():
                     picked_buildpack = bp
                     break
             else:
-                picked_buildpack = self.default_buildpack()
+                picked_buildpack = self.default_buildpack(
+                    base_image=self.base_image
+                )
 
             picked_buildpack.platform = self.platform
             picked_buildpack.appendix = self.appendix
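The for/else selection above can be sketched in isolation. This is an explanatory stand-in (the `pick_buildpack` helper and `DummyBP` class are invented for the example, not part of repo2docker): every candidate buildpack is now constructed with the configured base image, and so is the fallback default.

```python
def pick_buildpack(buildpacks, default_buildpack, base_image):
    """Return the first buildpack whose detect() matches, else the default."""
    for BP in buildpacks:
        bp = BP(base_image=base_image)
        if bp.detect():
            return bp
    # no buildpack matched: fall back, still passing the base image through
    return default_buildpack(base_image=base_image)


class DummyBP:
    """Toy buildpack used only to exercise pick_buildpack."""

    def __init__(self, base_image):
        self.base_image = base_image

    def detect(self):
        return False


chosen = pick_buildpack([DummyBP], DummyBP, "buildpack-deps:bionic")
print(chosen.base_image)  # buildpack-deps:bionic
```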
@@ -14,9 +14,19 @@ def rstudio_base_scripts(r_version):
     shiny_proxy_version = "1.1"
     shiny_sha256sum = "80f1e48f6c824be7ef9c843bb7911d4981ac7e8a963e0eff823936a8b28476ee"
 
-    rstudio_url = "https://download2.rstudio.org/server/bionic/amd64/rstudio-server-2022.02.1-461-amd64.deb"
-    rstudio_sha256sum = (
-        "239e8d93e103872e7c6d827113d88871965f82ffb0397f5638025100520d8a54"
-    )
+    # RStudio server has different builds based on whether OpenSSL 3 or 1.1 is available in the base
+    # image. 3 is present in Jammy+, 1.1 until then. Instead of hardcoding URLs based on distro, we
+    # check for the dependency itself directly in the code below. You can find these URLs at
+    # https://posit.co/download/rstudio-server/, toggling between Ubuntu 22 (for openssl3) and earlier versions (openssl 1.1).
+    # You may forget about openssl, but openssl never forgets you.
+    rstudio_openssl3_url = "https://download2.rstudio.org/server/jammy/amd64/rstudio-server-2022.12.0-353-amd64.deb"
+    rstudio_openssl3_sha256sum = (
+        "a5aa2202786f9017a6de368a410488ea2e4fc6c739f78998977af214df0d6288"
+    )
+
+    rstudio_openssl1_url = "https://download2.rstudio.org/server/bionic/amd64/rstudio-server-2022.12.0-353-amd64.deb"
+    rstudio_openssl1_sha256sum = (
+        "bb88e37328c304881e60d6205d7dac145525a5c2aaaf9da26f1cb625b7d47e6e"
+    )
 
     rsession_proxy_version = "2.0.1"
@@ -27,11 +37,18 @@ def rstudio_base_scripts(r_version):
             # but here it's important because these recommend r-base,
             # which will upgrade the installed version of R, undoing our pinned version
             rf"""
-            curl --silent --location --fail {rstudio_url} > /tmp/rstudio.deb && \
-            curl --silent --location --fail {shiny_server_url} > /tmp/shiny.deb && \
-            echo '{rstudio_sha256sum} /tmp/rstudio.deb' | sha256sum -c - && \
-            echo '{shiny_sha256sum} /tmp/shiny.deb' | sha256sum -c - && \
             apt-get update > /dev/null && \
+            if apt-cache search libssl3 > /dev/null; then \
+                RSTUDIO_URL="{rstudio_openssl3_url}" ;\
+                RSTUDIO_HASH="{rstudio_openssl3_sha256sum}" ;\
+            else \
+                RSTUDIO_URL="{rstudio_openssl1_url}" ;\
+                RSTUDIO_HASH="{rstudio_openssl1_sha256sum}" ;\
+            fi && \
+            curl --silent --location --fail ${{RSTUDIO_URL}} > /tmp/rstudio.deb && \
+            curl --silent --location --fail {shiny_server_url} > /tmp/shiny.deb && \
+            echo "${{RSTUDIO_HASH}} /tmp/rstudio.deb" | sha256sum -c - && \
+            echo '{shiny_sha256sum} /tmp/shiny.deb' | sha256sum -c - && \
             apt install -y --no-install-recommends /tmp/rstudio.deb /tmp/shiny.deb && \
             rm /tmp/*.deb && \
             apt-get -qq purge && \
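The shell branch above can be expressed as a small Python helper for clarity. This is an explanatory sketch only (the function name is invented); in the real build the decision happens inside the container at image build time, via `apt-cache search libssl3`.

```python
def choose_rstudio_build(libssl3_available):
    """Pick the RStudio .deb (url, sha256) matching the base image's OpenSSL."""
    # OpenSSL 3 ships with Ubuntu Jammy and later; 1.1 before that
    openssl3 = (
        "https://download2.rstudio.org/server/jammy/amd64/rstudio-server-2022.12.0-353-amd64.deb",
        "a5aa2202786f9017a6de368a410488ea2e4fc6c739f78998977af214df0d6288",
    )
    openssl1 = (
        "https://download2.rstudio.org/server/bionic/amd64/rstudio-server-2022.12.0-353-amd64.deb",
        "bb88e37328c304881e60d6205d7dac145525a5c2aaaf9da26f1cb625b7d47e6e",
    )
    return openssl3 if libssl3_available else openssl1


url, sha = choose_rstudio_build(libssl3_available=True)
```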
@@ -14,7 +14,7 @@ import jinja2
 
 # Only use syntax features supported by Docker 17.09
 TEMPLATE = r"""
-FROM buildpack-deps:bionic
+FROM {{base_image}}
 
 # Avoid prompts from apt
 ENV DEBIAN_FRONTEND=noninteractive
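The template change above is what makes the first `FROM` line configurable. A minimal sketch of the substitution, using stdlib `string.Template` in place of the Jinja2 rendering the project actually does (so `${base_image}` here stands in for Jinja2's `{{base_image}}`):

```python
from string import Template

# Stand-in for the head of the Dockerfile template in base.py
DOCKERFILE_HEAD = Template(
    "FROM ${base_image}\n"
    "\n"
    "# Avoid prompts from apt\n"
    "ENV DEBIAN_FRONTEND=noninteractive\n"
)


def render_head(base_image="docker.io/library/buildpack-deps:bionic"):
    """Render the head of the Dockerfile for a given base image."""
    return DOCKERFILE_HEAD.substitute(base_image=base_image)


print(render_head("ubuntu:22.04").splitlines()[0])  # FROM ubuntu:22.04
```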
@@ -211,7 +211,6 @@ class BuildPack:
     Specifically used for creating Dockerfiles for use with repo2docker only.
 
     Things that are kept constant:
-    - base image
     - some environment variables (such as locale)
     - user creation & ownership of home directory
     - working directory
@@ -221,9 +220,13 @@ class BuildPack:
 
     """
 
-    def __init__(self):
+    def __init__(self, base_image):
+        """
+        base_image specifies the base image to use when building docker images
+        """
         self.log = logging.getLogger("repo2docker")
         self.appendix = ""
+        self.base_image = base_image
         self.labels = {}
         if sys.platform.startswith("win"):
             self.log.warning(
@@ -257,6 +260,8 @@ class BuildPack:
             # Utils!
             "less",
             "unzip",
+            # Gives us envsubst
+            "gettext-base",
         }
 
     @lru_cache()
@@ -535,6 +540,7 @@ class BuildPack:
             appendix=self.appendix,
             # For docker 17.09 `COPY --chown`, 19.03 would allow using $NBUSER
             user=build_args.get("NB_UID", DEFAULT_NB_UID),
+            base_image=self.base_image,
         )
 
     @staticmethod
@@ -16,6 +16,9 @@ class LegacyBinderDockerBuildPack:
     This buildpack has been deprecated.
     """
 
+    def __init__(self, *args, **kwargs):
+        pass
+
     def detect(self):
         """Check if current repo should be built with the Legacy BuildPack."""
         log = logging.getLogger("repo2docker")
@@ -196,6 +196,7 @@ class RBuildPack(PythonBuildPack):
             "libapparmor1",
             "sudo",
             "lsb-release",
+            "libssl-dev",
         ]
 
         return super().get_packages().union(packages)
@@ -216,7 +217,10 @@ class RBuildPack(PythonBuildPack):
         # Construct a snapshot URL that will give us binary packages for Ubuntu Bionic (18.04)
         if "upsi" in snapshots:
             return (
-                "https://packagemanager.posit.co/all/__linux__/bionic/"
+                # Env variables here are expanded by envsubst in the Dockerfile, after sourcing
+                # /etc/os-release. This allows us to use distro specific variables here to get
+                # appropriate binary packages without having to hard code version names here.
+                "https://packagemanager.posit.co/all/__linux__/${VERSION_CODENAME}/"
                 + snapshots["upsi"]
             )
         raise ValueError(
@@ -262,7 +266,10 @@ class RBuildPack(PythonBuildPack):
         # Hardcoded rather than dynamically determined from a date to avoid extra API calls
         # Plus, we can always use packagemanager.posit.co here as we always install the
         # necessary apt packages.
-        return "https://packagemanager.posit.co/all/__linux__/bionic/2022-01-04+Y3JhbiwyOjQ1MjYyMTU7NzlBRkJEMzg"
+        # Env variables here are expanded by envsubst in the Dockerfile, after sourcing
+        # /etc/os-release. This allows us to use distro specific variables here to get
+        # appropriate binary packages without having to hard code version names here.
+        return "https://packagemanager.posit.co/all/__linux__/${VERSION_CODENAME}/2022-06-03+Y3JhbiwyOjQ1MjYyMTU7RkM5ODcwN0M"
 
     @lru_cache()
     def get_build_scripts(self):
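Both snapshot URLs now defer the distro codename to build time, where the Dockerfile sources `/etc/os-release` and pipes the URL through `envsubst`. A rough stdlib approximation of that expansion step (the helper name is invented for illustration):

```python
import re


def expand_version_codename(url, os_release_text):
    """Replace ${VERSION_CODENAME} with the value found in /etc/os-release text."""
    match = re.search(r"^VERSION_CODENAME=(\S+)$", os_release_text, re.MULTILINE)
    codename = match.group(1) if match else ""
    return url.replace("${VERSION_CODENAME}", codename)


os_release = "NAME=Ubuntu\nVERSION_CODENAME=jammy\nID=ubuntu\n"
url = expand_version_codename(
    "https://packagemanager.posit.co/all/__linux__/${VERSION_CODENAME}/2022-06-03",
    os_release,
)
print(url)  # https://packagemanager.posit.co/all/__linux__/jammy/2022-06-03
```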
@@ -343,16 +350,18 @@ class RBuildPack(PythonBuildPack):
             rf"""
             R RHOME && \
             mkdir -p /etc/rstudio && \
-            echo 'options(repos = c(CRAN = "{cran_mirror_url}"))' > /opt/R/{self.r_version}/lib/R/etc/Rprofile.site && \
-            echo 'r-cran-repos={cran_mirror_url}' > /etc/rstudio/rsession.conf
+            EXPANDED_CRAN_MIRROR_URL="$(. /etc/os-release && echo {cran_mirror_url} | envsubst)" && \
+            echo "options(repos = c(CRAN = \"${{EXPANDED_CRAN_MIRROR_URL}}\"))" > /opt/R/{self.r_version}/lib/R/etc/Rprofile.site && \
+            echo "r-cran-repos=${{EXPANDED_CRAN_MIRROR_URL}}" > /etc/rstudio/rsession.conf
             """,
         ),
         (
             "${NB_USER}",
             # Install a pinned version of devtools, IRKernel and shiny
             rf"""
-            R --quiet -e "install.packages(c('devtools', 'IRkernel', 'shiny'), repos='{self.get_devtools_snapshot_url()}')" && \
-            R --quiet -e "IRkernel::installspec(prefix='$NB_PYTHON_PREFIX')"
+            export EXPANDED_CRAN_MIRROR_URL="$(. /etc/os-release && echo {cran_mirror_url} | envsubst)" && \
+            R --quiet -e "install.packages(c('devtools', 'IRkernel', 'shiny'), repos=Sys.getenv(\"EXPANDED_CRAN_MIRROR_URL\"))" && \
+            R --quiet -e "IRkernel::installspec(prefix=Sys.getenv(\"NB_PYTHON_PREFIX\"))"
             """,
         ),
     ]
@@ -45,10 +45,10 @@ class Git(ContentProvider):
                 self.log.error(
                     f"Failed to check out ref {ref}", extra=dict(phase=R2dState.FAILED)
                 )
-                if ref == "master":
+                if ref == "master" or ref == "main":
                     msg = (
-                        "Failed to check out the 'master' branch. "
-                        "Maybe the default branch is not named 'master' "
+                        f"Failed to check out the '{ref}' branch. "
+                        f"Maybe the default branch is not named '{ref}' "
                         "for this repository.\n\nTry not explicitly "
                         "specifying `--ref`."
                     )
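The error-message change above generalizes the old master-only hint to either common default-branch name. Sketched as a standalone helper (the function is invented for illustration; in repo2docker this logic lives inline in the Git content provider):

```python
def checkout_failure_message(ref):
    """Build the user-facing hint shown when checking out `ref` fails."""
    if ref == "master" or ref == "main":
        # The repository probably uses a different default branch name
        return (
            f"Failed to check out the '{ref}' branch. "
            f"Maybe the default branch is not named '{ref}' "
            "for this repository.\n\nTry not explicitly "
            "specifying `--ref`."
        )
    return f"Failed to check out ref {ref}"


print(checkout_failure_message("main").splitlines()[0])
```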
@@ -100,6 +100,14 @@ def run_repo2docker():
     return run_test
 
 
+@pytest.fixture()
+def base_image():
+    """
+    Base ubuntu image to use when testing specific BuildPacks
+    """
+    return "buildpack-deps:bionic"
+
+
 def _add_content_to_git(repo_dir):
     """Add content to file 'test' in git repository and commit."""
     # use append mode so this can be called multiple times
@@ -6,21 +6,21 @@ from repo2docker import buildpacks
 
 
 @pytest.mark.parametrize("binder_dir", ["binder", ".binder", ""])
-def test_binder_dir(tmpdir, binder_dir):
+def test_binder_dir(tmpdir, binder_dir, base_image):
     tmpdir.chdir()
     if binder_dir:
         os.mkdir(binder_dir)
 
-    bp = buildpacks.BuildPack()
+    bp = buildpacks.BuildPack(base_image)
     assert binder_dir == bp.binder_dir
     assert bp.binder_path("foo.yaml") == os.path.join(binder_dir, "foo.yaml")
 
 
-def test_exclusive_binder_dir(tmpdir):
+def test_exclusive_binder_dir(tmpdir, base_image):
     tmpdir.chdir()
     os.mkdir("./binder")
     os.mkdir("./.binder")
 
-    bp = buildpacks.BuildPack()
+    bp = buildpacks.BuildPack(base_image)
     with pytest.raises(RuntimeError):
         _ = bp.binder_dir
@@ -7,41 +7,41 @@ from repo2docker.buildpacks import LegacyBinderDockerBuildPack, PythonBuildPack
 from repo2docker.utils import chdir
 
 
-def test_legacy_raises():
+def test_legacy_raises(base_image):
     # check legacy buildpack raises on a repo that triggers it
     with TemporaryDirectory() as repodir:
         with open(pjoin(repodir, "Dockerfile"), "w") as d:
             d.write("FROM andrewosh/binder-base")
 
         with chdir(repodir):
-            bp = LegacyBinderDockerBuildPack()
+            bp = LegacyBinderDockerBuildPack(base_image)
             with pytest.raises(RuntimeError):
                 bp.detect()
 
 
-def test_legacy_doesnt_detect():
+def test_legacy_doesnt_detect(base_image):
     # check legacy buildpack doesn't trigger
     with TemporaryDirectory() as repodir:
         with open(pjoin(repodir, "Dockerfile"), "w") as d:
             d.write("FROM andrewosh/some-image")
 
         with chdir(repodir):
-            bp = LegacyBinderDockerBuildPack()
+            bp = LegacyBinderDockerBuildPack(base_image)
             assert not bp.detect()
 
 
-def test_legacy_on_repo_without_dockerfile():
+def test_legacy_on_repo_without_dockerfile(base_image):
     # check legacy buildpack doesn't trigger on a repo w/o Dockerfile
     with TemporaryDirectory() as repodir:
         with chdir(repodir):
-            bp = LegacyBinderDockerBuildPack()
+            bp = LegacyBinderDockerBuildPack(base_image)
             assert not bp.detect()
 
 
 @pytest.mark.parametrize("python_version", ["2.6", "3.0", "4.10", "3.99"])
-def test_unsupported_python(tmpdir, python_version):
+def test_unsupported_python(tmpdir, python_version, base_image):
     tmpdir.chdir()
-    bp = PythonBuildPack()
+    bp = PythonBuildPack(base_image)
     bp._python_version = python_version
     assert bp.python_version == python_version
     with pytest.raises(ValueError):
@@ -12,7 +12,7 @@ from repo2docker.buildpacks import (
 )
 
 
-def test_cache_from_base(tmpdir):
+def test_cache_from_base(tmpdir, base_image):
     cache_from = ["image-1:latest"]
     fake_log_value = {"stream": "fake"}
     fake_client = MagicMock(spec=docker.APIClient)
@@ -21,7 +21,7 @@ def test_cache_from_base(tmpdir):
 
     # Test base image build pack
     tmpdir.chdir()
-    for line in BaseImage().build(
+    for line in BaseImage(base_image).build(
         fake_client, "image-2", 100, {}, cache_from, extra_build_kwargs
     ):
         assert line == fake_log_value
@@ -30,7 +30,7 @@ def test_cache_from_base(tmpdir):
     assert called_kwargs["cache_from"] == cache_from
 
 
-def test_cache_from_docker(tmpdir):
+def test_cache_from_docker(tmpdir, base_image):
     cache_from = ["image-1:latest"]
     fake_log_value = {"stream": "fake"}
     fake_client = MagicMock(spec=docker.APIClient)
@@ -42,7 +42,7 @@ def test_cache_from_docker(tmpdir):
     with tmpdir.join("Dockerfile").open("w") as f:
         f.write("FROM scratch\n")
 
-    for line in DockerBuildPack().build(
+    for line in DockerBuildPack(base_image).build(
         fake_client, "image-2", 100, {}, cache_from, extra_build_kwargs
     ):
         assert line == fake_log_value
@@ -7,20 +7,20 @@ import pytest
 from repo2docker import buildpacks
 
 
-def test_empty_env_yml(tmpdir):
+def test_empty_env_yml(tmpdir, base_image):
     tmpdir.chdir()
     p = tmpdir.join("environment.yml")
     p.write("")
-    bp = buildpacks.CondaBuildPack()
+    bp = buildpacks.CondaBuildPack(base_image)
     py_ver = bp.python_version
     # If the environment.yml is empty, python_version will get the default Python version
     assert py_ver == bp.major_pythons["3"]
 
 
-def test_no_dict_env_yml(tmpdir):
+def test_no_dict_env_yml(tmpdir, base_image):
     tmpdir.chdir()
     q = tmpdir.join("environment.yml")
     q.write("numpy\n " "matplotlib\n")
-    bq = buildpacks.CondaBuildPack()
+    bq = buildpacks.CondaBuildPack(base_image)
     with pytest.raises(TypeError):
         py_ver = bq.python_version
@@ -12,8 +12,8 @@ from repo2docker.buildpacks import BuildPack
 URL = "https://github.com/binderhub-ci-repos/repo2docker-ci-clone-depth"
 
 
-def test_buildpack_labels_rendered():
-    bp = BuildPack()
+def test_buildpack_labels_rendered(base_image):
+    bp = BuildPack(base_image)
     assert "LABEL" not in bp.render()
     bp.labels["first_label"] = "firstlabel"
     assert 'LABEL first_label="firstlabel"\n' in bp.render()
@@ -13,7 +13,7 @@ from repo2docker.buildpacks import BaseImage, DockerBuildPack
 basedir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
 
 
-def test_memory_limit_enforced(tmpdir):
+def test_memory_limit_enforced(tmpdir, base_image):
     fake_cache_from = ["image-1:latest"]
     fake_log_value = {"stream": "fake"}
     fake_client = MagicMock(spec=docker.APIClient)
@@ -27,7 +27,7 @@ def test_memory_limit_enforced(tmpdir):
     # Test that the buildpack passes the right arguments to the docker
     # client in order to enforce the memory limit
     tmpdir.chdir()
-    for line in BaseImage().build(
+    for line in BaseImage(base_image).build(
         fake_client,
         "image-2",
         memory_limit,
@@ -48,14 +48,16 @@ def test_memory_limit_enforced(tmpdir):
 
 
 @pytest.mark.parametrize("BuildPack", [BaseImage, DockerBuildPack])
-def test_memlimit_argument_type(BuildPack):
+def test_memlimit_argument_type(BuildPack, base_image):
     # check that an exception is raised when the memory limit isn't an int
     fake_log_value = {"stream": "fake"}
     fake_client = MagicMock(spec=docker.APIClient)
     fake_client.build.return_value = iter([fake_log_value])
 
     with pytest.raises(ValueError) as exc_info:
-        for line in BuildPack().build(fake_client, "image-2", "10Gi", {}, [], {}):
+        for line in BuildPack(base_image).build(
+            fake_client, "image-2", "10Gi", {}, [], {}
+        ):
             pass
 
     assert "The memory limit has to be specified as an" in str(exc_info.value)
@@ -6,7 +6,7 @@ from repo2docker import buildpacks
 
 
 @pytest.mark.parametrize("binder_dir", ["", ".binder", "binder"])
-def test_combine_preassemble_steps(tmpdir, binder_dir):
+def test_combine_preassemble_steps(tmpdir, binder_dir, base_image):
     tmpdir.chdir()
     if binder_dir:
         os.mkdir(binder_dir)
@@ -19,7 +19,7 @@ def test_combine_preassemble_steps(tmpdir, binder_dir):
     with open(os.path.join(binder_dir, "runtime.txt"), "w") as f:
         f.write("r-2019-01-30")
 
-    bp = buildpacks.RBuildPack()
+    bp = buildpacks.RBuildPack(base_image)
     files = bp.get_preassemble_script_files()
 
     assert len(files) == 2
@@ -10,7 +10,7 @@ from repo2docker import buildpacks
 @pytest.mark.parametrize(
     "runtime_version, expected", [("", "4.2"), ("3.6", "3.6"), ("3.5.1", "3.5")]
 )
-def test_version_specification(tmpdir, runtime_version, expected):
+def test_version_specification(tmpdir, runtime_version, expected, base_image):
     tmpdir.chdir()
 
     with open("runtime.txt", "w") as f:
@@ -18,17 +18,17 @@ def test_version_specification(tmpdir, runtime_version, expected):
             runtime_version += "-"
         f.write(f"r-{runtime_version}2019-01-01")
 
-    r = buildpacks.RBuildPack()
+    r = buildpacks.RBuildPack(base_image)
    assert r.r_version.startswith(expected)
 
 
-def test_version_completion(tmpdir):
+def test_version_completion(tmpdir, base_image):
     tmpdir.chdir()
 
     with open("runtime.txt", "w") as f:
         f.write("r-3.6-2019-01-01")
 
-    r = buildpacks.RBuildPack()
+    r = buildpacks.RBuildPack(base_image)
     assert r.r_version == "3.6.3"
 
 
@@ -40,17 +40,17 @@ def test_version_completion(tmpdir):
         ("r-3.5-2019-01-01", (2019, 1, 1)),
     ],
 )
-def test_mran_date(tmpdir, runtime, expected):
+def test_mran_date(tmpdir, runtime, expected, base_image):
     tmpdir.chdir()
 
     with open("runtime.txt", "w") as f:
         f.write(runtime)
 
-    r = buildpacks.RBuildPack()
+    r = buildpacks.RBuildPack(base_image)
     assert r.checkpoint_date == date(*expected)
 
 
-def test_snapshot_rspm_date():
+def test_snapshot_rspm_date(base_image):
     test_dates = {
         # Even though there is no snapshot specified in the interface at https://packagemanager.posit.co/client/#/repos/1/overview
        # For 2021 Oct 22, the API still returns a valid URL that one can install
@@ -61,11 +61,12 @@ def test_snapshot_rspm_date():
         date(2022, 1, 1): date(2022, 1, 1),
     }
 
-    r = buildpacks.RBuildPack()
+    r = buildpacks.RBuildPack(base_image)
     for requested, expected in test_dates.items():
         snapshot_url = r.get_rspm_snapshot_url(requested)
         assert snapshot_url.startswith(
-            "https://packagemanager.posit.co/all/__linux__/bionic/"
+            # VERSION_CODENAME is handled at runtime during the build
+            "https://packagemanager.posit.co/all/__linux__/${VERSION_CODENAME}/"
             + expected.strftime("%Y-%m-%d")
         )
@@ -75,7 +76,7 @@ def test_snapshot_rspm_date():
 
 @pytest.mark.parametrize("expected", [date(2019, 12, 29), date(2019, 12, 26)])
 @pytest.mark.parametrize("requested", [date(2019, 12, 31)])
-def test_snapshot_mran_date(requested, expected):
+def test_snapshot_mran_date(requested, expected, base_image):
     def mock_request_head(url):
         r = Response()
         if url == "https://mran.microsoft.com/snapshot/" + expected.isoformat():
@@ -86,7 +87,7 @@ def test_snapshot_mran_date(requested, expected):
         return r
 
     with patch("requests.head", side_effect=mock_request_head):
-        r = buildpacks.RBuildPack()
+        r = buildpacks.RBuildPack(base_image)
         assert (
             r.get_mran_snapshot_url(requested)
             == f"https://mran.microsoft.com/snapshot/{expected.isoformat()}"