Mirror of https://github.com/jupyterhub/repo2docker
Merge remote-tracking branch 'upstream/master' into pipenv-support
commit 2b0aa99d84
@ -0,0 +1,34 @@
---
name: Bug report
about: Create a report to help us repair something that is currently broken
title: ''
labels: ''
assignees: ''

---
<!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->

### Bug description

<!-- Use this section to clearly and concisely describe the bug. -->

#### Expected behaviour

<!-- Tell us what you thought would happen. -->

#### Actual behaviour

<!-- Tell us what actually happens. -->

### How to reproduce

<!-- Use this section to describe the steps that a user would take to experience this bug. -->

1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

### Your personal set up

<!-- Tell us a little about the system you're using. You can see the guidelines for setting up and reporting this information at https://repo2docker.readthedocs.io/en/latest/contributing/contributing.html#setting-up-for-local-development. -->

- OS: [e.g. linux, OSX]
- Docker version: `docker version` <!-- Run this command to get your version. -->
- repo2docker version: `repo2docker --version` <!-- Run this command to get your version. -->

@ -0,0 +1,29 @@
---
name: Feature request
about: Suggest a new feature or a big change to repo2docker
title: ''
labels: 'needs: discussion'
assignees: ''

---
<!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->

### Proposed change

<!-- Use this section to describe the feature you'd like to be added. -->

### Alternative options

<!-- Use this section to describe alternative options and why you've decided on the proposed feature above. -->

### Who would use this feature?

<!-- Describe the audience for this feature. This information will affect who chooses to work on the feature with you. -->

### How much effort will adding it take?

<!-- Try to estimate how much work adding this feature will require. This information will affect who chooses to work on the feature with you. -->

### Who can do this work?

<!-- What skills are needed? Who can be recruited to add this feature? This information will affect who chooses to work on the feature with you. -->

@ -0,0 +1,14 @@
---
name: Support question
about: Ask a question about using repo2docker
title: ''
labels: ''
assignees: ''

---

🚨 Please do **not** open an issue for support questions. Instead, please search for similar issues or post on http://discourse.jupyter.org/c/questions. 🚨

More people read the forum than this issue tracker; it is also indexed by search engines, which makes it easier for others to discover.

For more details: https://discourse.jupyter.org/t/a-proposal-for-jupyterhub-communications/505

@ -1,7 +1,26 @@
# Contributing to repo2docker development

:sparkles: Thank you for thinking about contributing to repo2docker! :sparkles:

(And thank you particularly for coming to read the guidelines! :heart_eyes:)

The repo2docker developer documentation is all rendered on our documentation website: [https://repo2docker.readthedocs.io](https://repo2docker.readthedocs.io).
If you're here, you're probably looking for the [Contributing to repo2docker development](https://repo2docker.readthedocs.io/en/latest/contributing/contributing.html) page.

Please make sure you've read the following sections before opening an issue/pull request:

* [Process for making a contribution](https://repo2docker.readthedocs.io/en/latest/contributing/contributing.html#process-for-making-a-contribution).
  * These steps talk you through choosing the right issue template (bug report or feature request) and making a change.
* [Guidelines to getting a Pull Request merged](https://repo2docker.readthedocs.io/en/latest/contributing/contributing.html#guidelines-to-getting-a-pull-request-merged).
  * These are tips and tricks to help make your contribution as smooth as possible for you and for the repo2docker maintenance team.

There are a few other pages to highlight:

* [Our roadmap](https://repo2docker.readthedocs.io/en/latest/contributing/roadmap.html)
  * We use the roadmap to develop a shared understanding of the project's vision and direction amongst the community of users, contributors, and maintainers.
    This is a great place to get a feel for what the maintainers are thinking about for the short, medium, and long term future of the project.
* [Design of repo2docker](https://repo2docker.readthedocs.io/en/latest/design.html)
  * This page explains some of the design principles behind repo2docker.
    It's a good place to understand _why_ the team has made the decisions that they have along the way!
  * We absolutely encourage discussion around refactoring, updating or extending repo2docker, but please make sure that you've understood this page before opening an issue to discuss the change you'd like to propose.
* [Common developer tasks and how-tos](https://repo2docker.readthedocs.io/en/latest/contributing/tasks.html)
  * Some notes on running tests, buildpack dependencies, creating a release, updating the changelog and keeping the pip files up to date.

@ -2,17 +2,48 @@
Changelog
=========

Version x.x.x
=============

Release date: TBD

New features
------------

API changes
-----------

Bug fixes
---------
- Prevent building the image as root if --user-id and --user-name are not specified
  in :pr:`676` by :user:`Xarthisius`.


Version 0.9.0
=============

Release date: 2019-05-05

New features
------------
- Support for julia `Project.toml`, `JuliaProject.toml` and `Manifest.toml` files in :pr:`595` by
  :user:`davidanthoff`.
- Set JULIA_PROJECT globally, so that every julia instance starts with the
  julia environment activated in :pr:`612` by :user:`davidanthoff`.
- Update Miniconda version to 4.6.14 and Conda version to 4.6.14 in :pr:`637` by
  :user:`jhamman`.
- Install notebook into `notebook` env instead of `root`.
  Activate conda environments and shell integration via ENTRYPOINT
  in :pr:`651` by :user:`minrk`.
- Support for `.binder` directory in addition to `binder` directory for location of
  configuration files, in :pr:`653` by :user:`jhamman`.
- Updated contributor guide and issue templates for bugs, feature requests,
  and support questions in :pr:`654` and :pr:`655` by :user:`KirstieJane` and
  :user:`betatim`.
- Create a page naming and describing the "Reproducible Execution
  Environment Specification" (the specification used by repo2docker)
  in :pr:`662` by :user:`choldgraf`.

API changes
-----------

@ -27,6 +58,14 @@ Bug fixes
  buildpack in :pr:`633` by :user:`betatim`.
- Update to version 5.7.6 of the `notebook` package used in all environments
  in :pr:`628` by :user:`betatim`.
- Update to version 5.7.8 of the `notebook` package and version 2.0.12 of
  `nteract-on-jupyter` in :pr:`650` by :user:`betatim`.
- Switch to newer version of jupyter-server-proxy to fix websocket handling
  in :pr:`646` by :user:`betatim`.
- Update to pip version 19.0.3 in :pr:`647` by :user:`betatim`.
- Ensure ENTRYPOINT is an absolute path in :pr:`657` by :user:`yuvipanda`.
- Fix handling of `--build-memory-limit` values without a postfix in :pr:`652`
  by :user:`betatim`.


Version 0.8.0

@ -195,7 +195,7 @@ A script that can contain simple commands to be run at runtime (as an
`ENTRYPOINT <https://docs.docker.com/engine/reference/builder/#entrypoint>`_
to the docker container). If you want this to be a shell script, make sure the
first line is ``#!/bin/bash``. The last line must be ``exec "$@"``
or equivalent.
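
A minimal ``start`` script following these rules might look like this (the variable name and value are illustrative only, not something repo2docker requires):

```shell
#!/bin/bash
# Example start script: export a variable that software installed in the
# container expects, then hand control back to the image's default command.
export EXAMPLE_VAR="hello"
exec "$@"
```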

Use this to set environment variables that software installed in your container
expects to be set. This script is executed each time your binder is started and

@ -1,45 +1,95 @@
# Contributing to repo2docker development

Thank you for thinking about contributing to repo2docker!
This is an open source project that is developed and maintained entirely by volunteers.
*Your contribution* is integral to the future of the project.
THANK YOU!

## Types of contribution

There are many ways to contribute to repo2docker:

* **Update the documentation.**
  If you're reading a page or docstring and it doesn't make sense (or doesn't exist!), please let us know by opening a bug report.
  It's even more amazing if you can give us a suggested change.
* **Fix bugs or add requested features.**
  Have a look through the [issue tracker](https://github.com/jupyter/repo2docker/issues) and see if there are any tagged as ["help wanted"](https://github.com/jupyter/repo2docker/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22).
  As the label suggests, we'd love your help!
* **Report a bug.**
  If repo2docker isn't doing what you thought it would do, then open a [bug report](https://github.com/jupyter/repo2docker/issues/new?template=bug_report.md).
  That issue template will ask you a few questions described in more detail below.
* **Suggest a new feature.**
  We know that there are lots of ways to extend repo2docker!
  If you're interested in adding a feature, then please open a [feature request](https://github.com/jupyter/repo2docker/issues/new?template=feature_request.md).
  That issue template will ask you a few questions described in detail below.
* **Review someone's Pull Request.**
  Whenever somebody proposes changes to the repo2docker codebase, the community reviews the changes, and provides feedback, edits, and suggestions.
  Check out the [open pull requests](https://github.com/jupyter/repo2docker/pulls?q=is%3Apr+is%3Aopen+sort%3Aupdated-desc) and provide feedback that helps improve the PR and get it merged.
  Please keep your feedback positive and constructive!
* **Tell people about repo2docker.**
  As we said above, repo2docker is built by and for its community.
  If you know anyone who would like to use repo2docker, please tell them about the project!
  You could give a talk about it, or run a demonstration.
  The sky is the limit :rocket::star2:.

If you're not sure where to get started, then please come and say hello in our [Gitter channel](https://gitter.im/jupyterhub/binder), or open a discussion thread at the [Jupyter discourse forum](https://discourse.jupyter.org/).

## Process for making a contribution

This outlines the process for getting changes to the repo2docker project merged.

1. Identify the correct issue template: [bug report](https://github.com/jupyter/repo2docker/issues/new?template=bug_report.md) or [feature request](https://github.com/jupyter/repo2docker/issues/new?template=feature_request.md).

   **Bug reports** ([examples](https://github.com/jupyter/repo2docker/issues?q=is%3Aissue+is%3Aopen+label%3Abug), [new issue](https://github.com/jupyter/repo2docker/issues/new?template=bug_report.md)) will ask you for a description of the problem, the expected behaviour, the actual behaviour, how to reproduce the problem, and your personal set up.
   Bugs can include problems with the documentation, or code not running as expected.

   It is really important that you make it easy for the maintainers to reproduce the problem you're having.
   This guide on creating a [minimal, complete and verifiable example](https://stackoverflow.com/help/mcve) is a great place to start.

   **Feature requests** ([examples](https://github.com/jupyter/repo2docker/labels/needs%3A%20discussion), [new issue](https://github.com/jupyter/repo2docker/issues/new?template=feature_request.md)) will ask you for the proposed change, any alternatives that you have considered, a description of who would use this feature, and a best guess of how much work it will take and what skills are required to accomplish it.

   Very easy feature requests might be updates to the documentation to clarify steps for new users.
   Harder feature requests may be to add new functionality to the project and will need more in-depth discussion about who can complete and maintain the work.

   Feature requests are a great opportunity for you to advocate for the use case you're suggesting.
   They help others understand how much effort it would be to integrate the work, and - if you're successful at convincing them that this effort is worth it - make it more likely that they choose to work on it with you.

2. Open an issue.
   Getting consensus with the community is a great way to save time later.
3. Make edits in [your fork](https://help.github.com/en/articles/fork-a-repo) of the [repo2docker repository](https://github.com/jupyter/repo2docker).
4. Make a [pull request](https://help.github.com/en/articles/about-pull-requests).
   Read the [next section](#guidelines-to-getting-a-pull-request-merged) for guidelines for both reviewers and contributors on merging a PR.
5. Edit [the changelog](./../../changelog) by appending your feature / bug fix to the development version.
6. Wait for a community member to merge your changes.
   Remember that **someone else must merge your pull request**.
   That goes for new contributors and long term maintainers alike.
7. (optional) Deploy a new version of repo2docker to mybinder.org by [following these steps](http://mybinder-sre.readthedocs.io/en/latest/deployment/how.html)
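
The branch-and-commit part of these steps can be sketched purely locally; every repository, branch, and file name below is illustrative (in practice you would `git clone` your fork instead of creating a scratch repository):

```shell
# Stand-in for a clone of your fork (illustrative only).
mkdir scratch-repo && cd scratch-repo
git init -q
git config user.email "you@example.com" && git config user.name "Your Name"
echo "initial" > notes.txt
git add notes.txt && git commit -qm "Initial commit"

# Make your edits on a branch and commit them, changelog update included.
git checkout -qb my-contribution
echo "my fix" >> notes.txt
git commit -qam "Describe the change; update the changelog"
git log --oneline   # the branch is now ready to push and open a PR from
```

After pushing the branch to your fork, GitHub will offer to open the pull request for you.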

## Guidelines to getting a Pull Request merged

These are not hard rules to be enforced by 🚓 but they are suggestions written by the repo2docker maintainers to help complete your contribution as smoothly as possible for both you and for them.

* **Create a PR as early as possible**, marking it with `[WIP]` while you work on it.
  This avoids duplicated work, lets you get high level feedback on functionality or API changes, and/or helps find collaborators to work with you.
* **Keep your PR focused.**
  The best PRs solve one problem.
  If you end up changing multiple things, please open separate PRs for the different conceptual changes.
* **Add tests to your code.**
  PRs will not be merged if Travis is failing.
* **Apply [PEP8](https://www.python.org/dev/peps/pep-0008/)** as much as possible, but not too much.
  If in doubt, ask.
* **Use merge commits** instead of merge-by-squashing/-rebasing.
  This makes it easier to find all changes since the last deployment (`git log --merges --pretty=format:"%h %<(10,trunc)%an %<(15)%ar %s" <deployed-revision>..`) and makes your PR easier to review.
* **Make it clear when your PR is ready for review.**
  Prefix the title of your pull request (PR) with `[MRG]` if the contribution is complete and should be subjected to a detailed review.
* **Enter your changes into the [changelog](./../../changelog)** in `docs/source/changelog.rst`.
* **Use commit messages to describe _why_ you are proposing the changes you are proposing.**
* **Try to not rush changes** (the definition of rush depends on how big your changes are).
  Remember that everyone in the repo2docker team is a volunteer and we can not (nor would we want to) control their time or interests.
  Wait patiently for a reviewer to merge the PR.
  (Remember that **someone else** must merge your PR, even if you have the admin rights to do so.)

## Setting up for Local Development

@ -90,7 +90,7 @@ See the subsections below for more detailed instructions.
change log (details below) and commit the change log, then update
the pull request.

### Make a Pull Request

Once you've made the commit, please make a Pull Request to the `jupyterhub/repo2docker`

@ -104,19 +104,54 @@ test to prevent the bug from coming back/the feature breaking in the future.

We try to make a release of repo2docker every few months if possible.

We follow [semantic versioning](https://semver.org/).

A new release will automatically be created when a new git tag is created
and pushed to the repository (using
[Travis CI](https://github.com/jupyter/repo2docker/blob/master/.travis.yml#L52)).

To create a new release, follow these steps:

### Confirm that the changelog is ready

[The changelog](https://github.com/jupyter/repo2docker/blob/master/docs/source/changelog.rst)
should reflect all significant enhancements and fixes to repo2docker and
its documentation. In addition, ensure that the correct version is displayed
at the top, and create a new `dev` section if needed.

### Create a new tag and push it

First, tag a new release locally:

```bash
V=0.7.0; git tag -am "release $V" $V
```

Then push this change up to the master repository:

```bash
git push origin --tags
```

Travis should automatically run the tests and, if they pass, create a
new release on the [repo2docker PyPI](https://pypi.org/project/jupyter-repo2docker/).
Once this has completed, make sure that the new version has been updated.

### Create a new release on the GitHub repository

Once the new release has been pushed to PyPI, we need to create a new
release on the [GitHub repository releases page](https://github.com/jupyter/repo2docker/releases). Once on that page, follow these steps:

* Click "Draft a new release"
* Choose a tag version using the same tag you just created above
* The release name is simply the tag version
* The description is [a link to the Changelog](https://github.com/jupyter/repo2docker/blob/master/docs/source/changelog.rst),
  ideally with an anchor to the latest release.
* Finally, click "Publish release"

That's it!

## Update the change log

To add your change to the change log, find the relevant Feature/Bug
fix/API change section for the next release near the top of the file;

@ -156,12 +191,12 @@ should be superseded by either a next release candidate, or the final
release for that version (bugfix version 0).

## Keeping the Pipfile and requirements files up to date

We now have both a `dev-requirements.txt` and a `Pipfile` for repo2docker, so
it is important to keep these in sync/up-to-date.

Both files use pip requirement specifiers, so if you are, for example, updating the Sphinx version
in the `doc-requirements.txt` (currently `Sphinx = ">=1.4,!=1.5.4"`) you can use the
same syntax to update the Pipfile, and vice versa.
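
For example, the Sphinx pin above would be expressed like this in each file (the `[dev-packages]` section name is illustrative; use whichever section the package actually lives in):

```
# doc-requirements.txt -- pip requirement specifier syntax
Sphinx>=1.4,!=1.5.4

# Pipfile -- the same specifier as a quoted TOML value
[dev-packages]
Sphinx = ">=1.4,!=1.5.4"
```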

@ -7,7 +7,7 @@ The philosophy for the repo2docker buildpacks includes:
- using common configuration files for familiar installation and packaging tools
- allowing configuration files to be combined to compose more complex setups
- specifying default locations for configuration files
  (in the repository's root, ``binder``, or ``.binder`` directory)

When designing `repo2docker` and adding to it in the future, the

@ -42,6 +42,7 @@ Please report `Bugs <https://github.com/jupyter/repo2docker/issues>`_,
   :caption: Complete list of configuration files

   config_files
   specification

.. toctree::
   :maxdepth: 2

@ -0,0 +1,32 @@
.. _specification:

====================================================
The Reproducible Execution Environment Specification
====================================================

repo2docker scans a repository for particular :ref:`config-files`, such
as ``requirements.txt`` or ``REQUIRE``. The collection of files, their contents,
and the resulting actions that repo2docker takes is known
as the **Reproducible Execution Environment Specification** (or REES).

The goal of the REES is to automate and encourage existing community best practices
for reproducible computational environments. This includes installing packages using
community-standard specification files and their corresponding tools,
such as ``requirements.txt`` (with ``pip``), ``REQUIRE`` (with Julia), or
``apt.txt`` (with ``apt``). While repo2docker automates the
creation of the environment, a human should be able to look at a REES-compliant
repository and reproduce the environment using common, clear steps without
repo2docker software.

Currently, the definition of the REE Specification is the following:

    Any directory containing zero or more files from the :ref:`config-files` list is a
    valid reproducible execution environment as defined by the REES. The
    configuration files have to all be placed either in the root of the
    directory, in a ``binder/`` sub-directory or a ``.binder/`` sub-directory.

For example, the REES recognises ``requirements.txt`` as a valid config file.
The file format is as defined by the ``requirements.txt`` standard of the Python
community. A REES-compliant tool will install a Python interpreter (of unspecified version)
and perform the equivalent action of ``pip install -r requirements.txt`` so that the
user can afterwards run Python and use the packages installed.
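
As an illustration, a human could reproduce such an environment by hand with steps like the following; the directory name and file contents are examples only:

```shell
mkdir rees-demo && cd rees-demo
# A requirements.txt in the repository root makes this a valid REES directory.
printf '# pinned dependencies would be listed here\n' > requirements.txt
# Install a Python interpreter environment (version unspecified by the REES)...
python3 -m venv env
# ...and perform the equivalent of `pip install -r requirements.txt`.
./env/bin/pip install -q -r requirements.txt
```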

@ -10,7 +10,8 @@ Using ``repo2docker``
order to run ``repo2docker``. For more information on installing
``repo2docker``, see :ref:`install`.

``repo2docker`` can build a reproducible computational environment for any repository that
follows :ref:`specification`. repo2docker is called with a URL/path to a repository. It then
performs these steps:

1. Inspects the repository for :ref:`configuration files <config-files>`. These will be used to build

@ -72,13 +73,14 @@ specify the ``branch-name`` or ``commit-hash``. For example::
Where to put configuration files
================================

``repo2docker`` will look for configuration files in:

* A folder named ``binder/`` in the root of the repository.
* A folder named ``.binder/`` in the root of the repository.
* The root directory of the repository.

``repo2docker`` searches for these folders in order (``binder/``, ``.binder/``,
root). Only configuration files in the first identified folder are considered.

Check the complete list of :ref:`configuration files <config-files>` supported
by ``repo2docker`` to see how to configure the build process.
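
The lookup order described above can be sketched in a few lines of Python (``find_config_dir`` is an illustrative helper written for this example, not part of the repo2docker API):

```python
import os

def find_config_dir(repo_root):
    """Return the folder repo2docker would read configuration from:
    binder/ first, then .binder/, falling back to the repository root."""
    for candidate in ("binder", ".binder"):
        if os.path.isdir(os.path.join(repo_root, candidate)):
            return candidate
    return ""  # the repository root itself

def config_path(repo_root, filename):
    """Path of a configuration file inside the first identified folder."""
    return os.path.join(repo_root, find_config_dir(repo_root), filename)
```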

@ -7,6 +7,7 @@ from .app import Repo2Docker
from . import __version__
from .utils import validate_and_generate_port_mapping, is_valid_docker_image_name


def validate_image_name(image_name):
    """
    Validate image_name read by argparse

@ -299,7 +300,12 @@ def make_r2d(argv=None):
    r2d.user_name = args.user_name

    if args.build_memory_limit:
        # if the string only contains numerals we assume it should be an int
        # and specifies a size in bytes
        if args.build_memory_limit.isnumeric():
            r2d.build_memory_limit = int(args.build_memory_limit)
        else:
            r2d.build_memory_limit = args.build_memory_limit

    if args.environment and not r2d.run:
        print('To specify environment variables, you also need to run '

@ -8,6 +8,7 @@ Usage:
python -m repo2docker https://github.com/you/your-repo
"""
import argparse
import errno
import json
import sys
import logging

@ -650,6 +651,19 @@ class Repo2Docker(Application):
                      extra=dict(phase='building'))

        if not self.dry_run:
            if self.user_id == 0:
                self.log.error(
                    'Root as the primary user in the image is not permitted.\n'
                )
                self.log.info(
                    "The uid and the username of the user invoking repo2docker "
                    "are used to create a mirror account in the image by default. "
                    "To override that behavior pass --user-id <numeric_id> and "
                    "--user-name <string> to repo2docker.\n"
                    "Please see repo2docker --help for more details.\n"
                )
                sys.exit(errno.EPERM)

        build_args = {
            'NB_USER': self.user_name,
            'NB_UID': str(self.user_id),

@ -155,9 +155,13 @@ RUN ./{{ s }}
# Add start script
{% if start_script is not none -%}
RUN chmod +x "{{ start_script }}"
ENV R2D_ENTRYPOINT "{{ start_script }}"
{% endif -%}

# Add entrypoint
COPY /repo2docker-entrypoint /usr/local/bin/repo2docker-entrypoint
ENTRYPOINT ["/usr/local/bin/repo2docker-entrypoint"]

# Specify the default command to run
CMD ["jupyter", "notebook", "--ip", "0.0.0.0"]

@ -167,6 +171,11 @@ CMD ["jupyter", "notebook", "--ip", "0.0.0.0"]
{% endif %}
"""

ENTRYPOINT_FILE = os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    "repo2docker-entrypoint",
)


class BuildPack:
    """

@ -414,12 +423,26 @@ class BuildPack:
        """
        return None

    @property
    def binder_dir(self):
        has_binder = os.path.isdir("binder")
        has_dotbinder = os.path.isdir(".binder")

        if has_binder and has_dotbinder:
            raise RuntimeError(
                "The repository contains both a 'binder' and a '.binder' "
                "directory. However they are mutually exclusive.")

        if has_dotbinder:
            return ".binder"
        elif has_binder:
            return "binder"
        else:
            return ""

    def binder_path(self, path):
        """Locate a file"""
        return os.path.join(self.binder_dir, path)

    def detect(self):
        return True
@@ -493,18 +516,28 @@ class BuildPack:
         src_path = os.path.join(os.path.dirname(__file__), *src_parts)
         tar.add(src_path, src, filter=_filter_tar)

+        tar.add(ENTRYPOINT_FILE, "repo2docker-entrypoint", filter=_filter_tar)
+
         tar.add('.', 'src/', filter=_filter_tar)

         tar.close()
         tarf.seek(0)

-        limits = {
-            # Always disable memory swap for building, since mostly
-            # nothing good can come of that.
-            'memswap': -1
-        }
+        # If you work on this bit of code, check the corresponding code in
+        # buildpacks/docker.py where it is duplicated
+        if not isinstance(memory_limit, int):
+            raise ValueError("The memory limit has to be specified as an "
+                             "integer but is '{}'".format(type(memory_limit)))
+        limits = {}
+        if memory_limit:
+            # We always want to disable swap. Docker expects `memswap` to
+            # be the total allowable memory, *including* swap, while `memory`
+            # covers non-swap memory only. Setting both to the same value
+            # means no swap is used.
+            limits = {
+                'memory': memory_limit,
+                'memswap': memory_limit
+            }

         build_kwargs = dict(
             fileobj=tarf,
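The swap-disabling rule in those comments can be sketched on its own. A minimal sketch with a hypothetical helper name (the real code builds this dict inline and passes it as Docker's `container_limits`):

```python
def build_limits(memory_limit):
    """Build the limits dict for a Docker build.

    Docker treats `memswap` as memory *plus* swap, so setting it equal
    to `memory` leaves zero bytes of swap available.
    """
    if not isinstance(memory_limit, int):
        raise ValueError(
            "The memory limit has to be specified as an "
            "integer but is '{}'".format(type(memory_limit))
        )
    if not memory_limit:
        # 0 means "no limit": pass no constraints at all
        return {}
    return {"memory": memory_limit, "memswap": memory_limit}
```

Note the quirk this also demonstrates: a limit of `0` yields an empty dict, so an unlimited build sends no `memory`/`memswap` keys to Docker at all.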
@@ -643,7 +676,12 @@ class BaseImage(BuildPack):
         return []

     def get_start_script(self):
-        start = self.binder_path('./start')
+        start = self.binder_path('start')
         if os.path.exists(start):
-            return start
+            # Return an absolute path to start.
+            # This is important when built container images start with
+            # a working directory that is different from ${REPO_DIR}.
+            # This isn't a problem for anything else, since start is
+            # the only path evaluated at container start time rather than build time.
+            return os.path.join('${REPO_DIR}', start)
         return None
@@ -28,7 +28,7 @@ class CondaBuildPack(BaseImage):
         """
         env = super().get_build_env() + [
             ('CONDA_DIR', '${APP_BASE}/conda'),
-            ('NB_PYTHON_PREFIX', '${CONDA_DIR}'),
+            ('NB_PYTHON_PREFIX', '${CONDA_DIR}/envs/notebook'),
         ]
         if self.py2:
             env.append(('KERNEL_PYTHON_PREFIX', '${CONDA_DIR}/envs/kernel'))
@@ -42,9 +42,10 @@ class CondaBuildPack(BaseImage):

         """
         path = super().get_path()
-        path.insert(0, '${CONDA_DIR}/bin')
         if self.py2:
             path.insert(0, '${KERNEL_PYTHON_PREFIX}/bin')
+        path.insert(0, '${CONDA_DIR}/bin')
+        path.insert(0, '${NB_PYTHON_PREFIX}/bin')
         return path

     def get_build_scripts(self):
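Because each `insert(0, ...)` pushes earlier entries to the right, the *last* insert wins the PATH race. A small sketch of the resulting precedence (hypothetical helper; the `${...}` strings are left unexpanded exactly as the buildpack emits them):

```python
def conda_search_path(base_path, py2=False):
    """Replicate the insert order above: the notebook env ends up first,
    then the conda base, then (for py2) the kernel env, then the rest."""
    path = list(base_path)
    if py2:
        path.insert(0, "${KERNEL_PYTHON_PREFIX}/bin")
    path.insert(0, "${CONDA_DIR}/bin")
    path.insert(0, "${NB_PYTHON_PREFIX}/bin")
    return path
```

So the notebook environment's binaries always shadow the base conda install, which is the point of moving `NB_PYTHON_PREFIX` into its own `envs/notebook` env in this change.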
@@ -97,6 +98,7 @@ class CondaBuildPack(BaseImage):
         """
         files = {
             'conda/install-miniconda.bash': '/tmp/install-miniconda.bash',
+            'conda/activate-conda.sh': '/etc/profile.d/activate-conda.sh',
         }
         py_version = self.python_version
         self.log.info("Building conda environment for python=%s" % py_version)
@@ -174,16 +176,15 @@ class CondaBuildPack(BaseImage):
         """
         assembly_scripts = []
         environment_yml = self.binder_path('environment.yml')
-        env_name = 'kernel' if self.py2 else 'root'
+        env_prefix = "${KERNEL_PYTHON_PREFIX}" if self.py2 else "${NB_PYTHON_PREFIX}"
         if os.path.exists(environment_yml):
             assembly_scripts.append((
                 '${NB_USER}',
                 r"""
-                conda env update -n {0} -f "{1}" && \
-                conda clean -tipsy && \
-                conda list -n {0} && \
-                rm -rf /srv/conda/pkgs
-                """.format(env_name, environment_yml)
+                conda env update -p {0} -f "{1}" && \
+                conda clean --all -f -y && \
+                conda list -p {0}
+                """.format(env_prefix, environment_yml)
             ))
         return super().get_assemble_scripts() + assembly_scripts
@@ -0,0 +1,11 @@
+# enable conda and activate the notebook environment
+CONDA_PROFILE="${CONDA_DIR}/etc/profile.d/conda.sh"
+test -f $CONDA_PROFILE && . $CONDA_PROFILE
+if [[ "${KERNEL_PYTHON_PREFIX}" != "${NB_PYTHON_PREFIX}" ]]; then
+    # if the kernel is a separate env, stack them
+    # so both are on PATH
+    conda activate ${KERNEL_PYTHON_PREFIX}
+    conda activate --stack ${NB_PYTHON_PREFIX}
+else
+    conda activate ${NB_PYTHON_PREFIX}
+fi
@@ -1,5 +1,5 @@
 # AUTO GENERATED FROM environment.py-3.7.yml, DO NOT MANUALLY MODIFY
-# Frozen on 2019-03-23 18:02:15 UTC
+# Frozen on 2019-04-26 14:12:19 UTC
 name: r2d
 channels:
 - conda-forge
@@ -16,71 +16,71 @@ dependencies:
 - defusedxml=0.5.0=py_1
 - entrypoints=0.3=py37_1000
 - ipykernel=5.1.0=py37h24bf2e0_1002
-- ipython=7.4.0=py37h24bf2e0_0
+- ipython=7.5.0=py37h24bf2e0_0
 - ipython_genutils=0.2.0=py_1
 - ipywidgets=7.4.2=py_0
 - jedi=0.13.3=py37_0
-- jinja2=2.10=py_1
+- jinja2=2.10.1=py_0
 - jsonschema=3.0.1=py37_0
 - jupyter_client=5.2.4=py_3
 - jupyter_core=4.4.0=py_0
 - jupyterlab=0.35.4=py37_0
 - jupyterlab_server=0.2.0=py_0
 - libffi=3.2.1=he1b5a44_1006
-- libgcc-ng=8.2.0=hdf63c60_1
 - libsodium=1.0.16=h14c3975_1001
-- libstdcxx-ng=8.2.0=hdf63c60_1
 - markupsafe=1.1.1=py37h14c3975_0
 - mistune=0.8.4=py37h14c3975_1000
 - nbconvert=5.4.1=py_2
 - nbformat=4.4.0=py_1
 - ncurses=6.1=hf484d3e_1002
-- notebook=5.7.6=py37_0
+- notebook=5.7.8=py37_0
 - openssl=1.1.1b=h14c3975_1
-- pandoc=2.7.1=0
+- pandoc=2.7.2=0
 - pandocfilters=1.4.2=py_1
-- parso=0.3.4=py_0
-- pexpect=4.6.0=py37_1000
+- parso=0.4.0=py_0
+- pexpect=4.7.0=py37_0
 - pickleshare=0.7.5=py37_1000
-- pip=19.0.3=py37_0
+- pip=19.1=py37_0
 - prometheus_client=0.6.0=py_0
 - prompt_toolkit=2.0.9=py_0
-- ptyprocess=0.6.0=py37_1000
+- ptyprocess=0.6.0=py_1001
 - pygments=2.3.1=py_0
-- pyrsistent=0.14.11=py37h14c3975_0
-- python=3.7.2=h381d211_0
+- pyrsistent=0.15.1=py37h516909a_0
+- python=3.7.3=h5b0a415_0
 - python-dateutil=2.8.0=py_0
-- pyzmq=18.0.1=py37h0e1adb2_0
+- pyzmq=18.0.1=py37hc4ba49a_1
 - readline=7.0=hf8c457e_1001
 - send2trash=1.5.0=py_0
-- setuptools=40.8.0=py37_0
+- setuptools=41.0.1=py37_0
 - six=1.12.0=py37_1000
 - sqlite=3.26.0=h67949de_1001
-- terminado=0.8.1=py37_1001
+- terminado=0.8.2=py37_0
 - testpath=0.4.2=py_1001
-- tk=8.6.9=h84994c4_1000
-- tornado=6.0.1=py37h14c3975_0
+- tk=8.6.9=h84994c4_1001
+- tornado=6.0.2=py37h516909a_0
 - traitlets=4.3.2=py37_1000
 - wcwidth=0.1.7=py_1
 - webencodings=0.5.1=py_1
 - wheel=0.33.1=py37_0
 - widgetsnbextension=3.4.2=py37_1000
 - xz=5.2.4=h14c3975_1001
-- zeromq=4.2.5=hf484d3e_1006
+- zeromq=4.3.1=hf484d3e_1000
 - zlib=1.2.11=h14c3975_1004
+- libgcc-ng=8.2.0=hdf63c60_1
+- libstdcxx-ng=8.2.0=hdf63c60_1
 - pip:
-  - alembic==1.0.8
+  - alembic==1.0.9
   - async-generator==1.10
   - chardet==3.0.4
   - idna==2.8
   - jupyterhub==0.9.4
-  - mako==1.0.8
-  - nteract-on-jupyter==2.0.0
+  - mako==1.0.9
+  - nteract-on-jupyter==2.0.12
   - pamela==1.0.0
   - python-editor==1.0.4
   - python-oauth2==1.1.0
   - requests==2.21.0
-  - sqlalchemy==1.3.1
-  - urllib3==1.24.1
+  - sqlalchemy==1.3.3
+  - urllib3==1.24.2
 prefix: /opt/conda/envs/r2d
@@ -1,5 +1,5 @@
 # AUTO GENERATED FROM environment.py-2.7.yml, DO NOT MANUALLY MODIFY
-# Frozen on 2019-03-23 17:59:14 UTC
+# Frozen on 2019-04-26 14:09:19 UTC
 name: r2d
 channels:
 - conda-forge
@@ -20,34 +20,34 @@ dependencies:
 - jupyter_client=5.2.4=py_3
 - jupyter_core=4.4.0=py_0
 - libffi=3.2.1=he1b5a44_1006
-- libgcc-ng=8.2.0=hdf63c60_1
 - libsodium=1.0.16=h14c3975_1001
-- libstdcxx-ng=8.2.0=hdf63c60_1
 - ncurses=6.1=hf484d3e_1002
 - openssl=1.1.1b=h14c3975_1
 - pathlib2=2.3.3=py27_1000
-- pexpect=4.6.0=py27_1000
+- pexpect=4.7.0=py27_0
 - pickleshare=0.7.5=py27_1000
-- pip=19.0.3=py27_0
+- pip=19.1=py27_0
 - prompt_toolkit=1.0.15=py_1
-- ptyprocess=0.6.0=py27_1000
+- ptyprocess=0.6.0=py_1001
 - pygments=2.3.1=py_0
 - python=2.7.15=h721da81_1008
 - python-dateutil=2.8.0=py_0
-- pyzmq=18.0.1=py27h0e1adb2_0
+- pyzmq=18.0.1=py27hc4ba49a_1
 - readline=7.0=hf8c457e_1001
 - scandir=1.10.0=py27h14c3975_0
-- setuptools=40.8.0=py27_0
+- setuptools=41.0.1=py27_0
 - simplegeneric=0.8.1=py_1
 - singledispatch=3.4.0.3=py27_1000
 - six=1.12.0=py27_1000
 - sqlite=3.26.0=h67949de_1001
-- tk=8.6.9=h84994c4_1000
+- tk=8.6.9=h84994c4_1001
 - tornado=5.1.1=py27h14c3975_1000
 - traitlets=4.3.2=py27_1000
 - wcwidth=0.1.7=py_1
 - wheel=0.33.1=py27_0
-- zeromq=4.2.5=hf484d3e_1006
+- zeromq=4.3.1=hf484d3e_1000
 - zlib=1.2.11=h14c3975_1004
+- libgcc-ng=8.2.0=hdf63c60_1
+- libstdcxx-ng=8.2.0=hdf63c60_1
 prefix: /opt/conda/envs/r2d
@@ -75,7 +75,7 @@ dependencies:
   - idna==2.7
   - jupyterhub==0.9.4
   - mako==1.0.7
-  - notebook==5.7.6
+  - notebook==5.7.8
   - nteract-on-jupyter==1.9.6
   - pamela==0.3.0
   - python-editor==1.0.3
@@ -1,5 +1,5 @@
 # AUTO GENERATED FROM environment.py-3.6.yml, DO NOT MANUALLY MODIFY
-# Frozen on 2019-03-23 18:00:20 UTC
+# Frozen on 2019-04-26 14:10:28 UTC
 name: r2d
 channels:
 - conda-forge
@@ -15,71 +15,71 @@ dependencies:
 - defusedxml=0.5.0=py_1
 - entrypoints=0.3=py36_1000
 - ipykernel=5.1.0=py36h24bf2e0_1002
-- ipython=7.4.0=py36h24bf2e0_0
+- ipython=7.5.0=py36h24bf2e0_0
 - ipython_genutils=0.2.0=py_1
 - ipywidgets=7.4.2=py_0
 - jedi=0.13.3=py36_0
-- jinja2=2.10=py_1
+- jinja2=2.10.1=py_0
 - jsonschema=3.0.1=py36_0
 - jupyter_client=5.2.4=py_3
 - jupyter_core=4.4.0=py_0
 - jupyterlab=0.35.4=py36_0
 - jupyterlab_server=0.2.0=py_0
 - libffi=3.2.1=he1b5a44_1006
-- libgcc-ng=8.2.0=hdf63c60_1
 - libsodium=1.0.16=h14c3975_1001
-- libstdcxx-ng=8.2.0=hdf63c60_1
 - markupsafe=1.1.1=py36h14c3975_0
 - mistune=0.8.4=py36h14c3975_1000
 - nbconvert=5.4.1=py_2
 - nbformat=4.4.0=py_1
 - ncurses=6.1=hf484d3e_1002
-- notebook=5.7.6=py36_0
+- notebook=5.7.8=py36_0
 - openssl=1.1.1b=h14c3975_1
-- pandoc=2.7.1=0
+- pandoc=2.7.2=0
 - pandocfilters=1.4.2=py_1
-- parso=0.3.4=py_0
-- pexpect=4.6.0=py36_1000
+- parso=0.4.0=py_0
+- pexpect=4.7.0=py36_0
 - pickleshare=0.7.5=py36_1000
-- pip=19.0.3=py36_0
+- pip=19.1=py36_0
 - prometheus_client=0.6.0=py_0
 - prompt_toolkit=2.0.9=py_0
-- ptyprocess=0.6.0=py36_1000
+- ptyprocess=0.6.0=py_1001
 - pygments=2.3.1=py_0
-- pyrsistent=0.14.11=py36h14c3975_0
+- pyrsistent=0.15.1=py36h516909a_0
 - python=3.6.7=h381d211_1004
 - python-dateutil=2.8.0=py_0
-- pyzmq=18.0.1=py36h0e1adb2_0
+- pyzmq=18.0.1=py36hc4ba49a_1
 - readline=7.0=hf8c457e_1001
 - send2trash=1.5.0=py_0
-- setuptools=40.8.0=py36_0
+- setuptools=41.0.1=py36_0
 - six=1.12.0=py36_1000
 - sqlite=3.26.0=h67949de_1001
-- terminado=0.8.1=py36_1001
+- terminado=0.8.2=py36_0
 - testpath=0.4.2=py_1001
-- tk=8.6.9=h84994c4_1000
-- tornado=6.0.1=py36h14c3975_0
+- tk=8.6.9=h84994c4_1001
+- tornado=6.0.2=py36h516909a_0
 - traitlets=4.3.2=py36_1000
 - wcwidth=0.1.7=py_1
 - webencodings=0.5.1=py_1
 - wheel=0.33.1=py36_0
 - widgetsnbextension=3.4.2=py36_1000
 - xz=5.2.4=h14c3975_1001
-- zeromq=4.2.5=hf484d3e_1006
+- zeromq=4.3.1=hf484d3e_1000
 - zlib=1.2.11=h14c3975_1004
+- libgcc-ng=8.2.0=hdf63c60_1
+- libstdcxx-ng=8.2.0=hdf63c60_1
 - pip:
-  - alembic==1.0.8
+  - alembic==1.0.9
   - async-generator==1.10
   - chardet==3.0.4
   - idna==2.8
   - jupyterhub==0.9.4
-  - mako==1.0.8
-  - nteract-on-jupyter==2.0.0
+  - mako==1.0.9
+  - nteract-on-jupyter==2.0.12
   - pamela==1.0.0
   - python-editor==1.0.4
   - python-oauth2==1.1.0
   - requests==2.21.0
-  - sqlalchemy==1.3.1
-  - urllib3==1.24.1
+  - sqlalchemy==1.3.3
+  - urllib3==1.24.2
 prefix: /opt/conda/envs/r2d
@@ -1,11 +1,11 @@
 # AUTO GENERATED FROM environment.yml, DO NOT MANUALLY MODIFY
-# Generated on 2019-03-23 18:00:20 UTC
+# Generated on 2019-04-26 14:10:28 UTC
 dependencies:
 - python=3.6.*
 - ipywidgets==7.4.2
 - jupyterlab==0.35.4
 - nbconvert==5.4.1
-- notebook==5.7.6
+- notebook==5.7.8
 - pip:
-  - nteract_on_jupyter==2.0.0
+  - nteract_on_jupyter==2.0.12
   - jupyterhub==0.9.4
@@ -1,5 +1,5 @@
 # AUTO GENERATED FROM environment.py-3.7.yml, DO NOT MANUALLY MODIFY
-# Frozen on 2019-03-23 18:02:15 UTC
+# Frozen on 2019-04-26 14:12:19 UTC
 name: r2d
 channels:
 - conda-forge
@@ -16,71 +16,71 @@ dependencies:
 - defusedxml=0.5.0=py_1
 - entrypoints=0.3=py37_1000
 - ipykernel=5.1.0=py37h24bf2e0_1002
-- ipython=7.4.0=py37h24bf2e0_0
+- ipython=7.5.0=py37h24bf2e0_0
 - ipython_genutils=0.2.0=py_1
 - ipywidgets=7.4.2=py_0
 - jedi=0.13.3=py37_0
-- jinja2=2.10=py_1
+- jinja2=2.10.1=py_0
 - jsonschema=3.0.1=py37_0
 - jupyter_client=5.2.4=py_3
 - jupyter_core=4.4.0=py_0
 - jupyterlab=0.35.4=py37_0
 - jupyterlab_server=0.2.0=py_0
 - libffi=3.2.1=he1b5a44_1006
-- libgcc-ng=8.2.0=hdf63c60_1
 - libsodium=1.0.16=h14c3975_1001
-- libstdcxx-ng=8.2.0=hdf63c60_1
 - markupsafe=1.1.1=py37h14c3975_0
 - mistune=0.8.4=py37h14c3975_1000
 - nbconvert=5.4.1=py_2
 - nbformat=4.4.0=py_1
 - ncurses=6.1=hf484d3e_1002
-- notebook=5.7.6=py37_0
+- notebook=5.7.8=py37_0
 - openssl=1.1.1b=h14c3975_1
-- pandoc=2.7.1=0
+- pandoc=2.7.2=0
 - pandocfilters=1.4.2=py_1
-- parso=0.3.4=py_0
-- pexpect=4.6.0=py37_1000
+- parso=0.4.0=py_0
+- pexpect=4.7.0=py37_0
 - pickleshare=0.7.5=py37_1000
-- pip=19.0.3=py37_0
+- pip=19.1=py37_0
 - prometheus_client=0.6.0=py_0
 - prompt_toolkit=2.0.9=py_0
-- ptyprocess=0.6.0=py37_1000
+- ptyprocess=0.6.0=py_1001
 - pygments=2.3.1=py_0
-- pyrsistent=0.14.11=py37h14c3975_0
-- python=3.7.2=h381d211_0
+- pyrsistent=0.15.1=py37h516909a_0
+- python=3.7.3=h5b0a415_0
 - python-dateutil=2.8.0=py_0
-- pyzmq=18.0.1=py37h0e1adb2_0
+- pyzmq=18.0.1=py37hc4ba49a_1
 - readline=7.0=hf8c457e_1001
 - send2trash=1.5.0=py_0
-- setuptools=40.8.0=py37_0
+- setuptools=41.0.1=py37_0
 - six=1.12.0=py37_1000
 - sqlite=3.26.0=h67949de_1001
-- terminado=0.8.1=py37_1001
+- terminado=0.8.2=py37_0
 - testpath=0.4.2=py_1001
-- tk=8.6.9=h84994c4_1000
-- tornado=6.0.1=py37h14c3975_0
+- tk=8.6.9=h84994c4_1001
+- tornado=6.0.2=py37h516909a_0
 - traitlets=4.3.2=py37_1000
 - wcwidth=0.1.7=py_1
 - webencodings=0.5.1=py_1
 - wheel=0.33.1=py37_0
 - widgetsnbextension=3.4.2=py37_1000
 - xz=5.2.4=h14c3975_1001
-- zeromq=4.2.5=hf484d3e_1006
+- zeromq=4.3.1=hf484d3e_1000
 - zlib=1.2.11=h14c3975_1004
+- libgcc-ng=8.2.0=hdf63c60_1
+- libstdcxx-ng=8.2.0=hdf63c60_1
 - pip:
-  - alembic==1.0.8
+  - alembic==1.0.9
   - async-generator==1.10
   - chardet==3.0.4
   - idna==2.8
   - jupyterhub==0.9.4
-  - mako==1.0.8
-  - nteract-on-jupyter==2.0.0
+  - mako==1.0.9
+  - nteract-on-jupyter==2.0.12
   - pamela==1.0.0
   - python-editor==1.0.4
   - python-oauth2==1.1.0
   - requests==2.21.0
-  - sqlalchemy==1.3.1
-  - urllib3==1.24.1
+  - sqlalchemy==1.3.3
+  - urllib3==1.24.2
 prefix: /opt/conda/envs/r2d
@@ -1,11 +1,11 @@
 # AUTO GENERATED FROM environment.yml, DO NOT MANUALLY MODIFY
-# Generated on 2019-03-23 18:02:15 UTC
+# Generated on 2019-04-26 14:12:19 UTC
 dependencies:
 - python=3.7.*
 - ipywidgets==7.4.2
 - jupyterlab==0.35.4
 - nbconvert==5.4.1
-- notebook==5.7.6
+- notebook==5.7.8
 - pip:
-  - nteract_on_jupyter==2.0.0
+  - nteract_on_jupyter==2.0.12
   - jupyterhub==0.9.4
@@ -3,7 +3,7 @@ dependencies:
 - ipywidgets==7.4.2
 - jupyterlab==0.35.4
 - nbconvert==5.4.1
-- notebook==5.7.6
+- notebook==5.7.8
 - pip:
-  - nteract_on_jupyter==2.0.0
+  - nteract_on_jupyter==2.0.12
   - jupyterhub==0.9.4
@@ -21,8 +21,8 @@ from ruamel.yaml import YAML

 # The Docker image version can be different from the conda version,
 # since miniconda3 docker images seem to lag conda releases.
-MINICONDA_DOCKER_VERSION = '4.5.11'
-CONDA_VERSION = '4.5.11'
+MINICONDA_DOCKER_VERSION = '4.5.12'
+CONDA_VERSION = '4.6.14'

 HERE = pathlib.Path(os.path.dirname(os.path.abspath(__file__)))
@@ -3,8 +3,12 @@
 set -ex

 cd $(dirname $0)
-MINICONDA_VERSION=4.5.11
-CONDA_VERSION=4.5.11
+MINICONDA_VERSION=4.6.14
+CONDA_VERSION=4.6.14
+# Only MD5 checksums are available for miniconda
+# Can be obtained from https://repo.continuum.io/miniconda/
+MD5SUM="718259965f234088d785cad1fbd7de03"

 URL="https://repo.continuum.io/miniconda/Miniconda3-${MINICONDA_VERSION}-Linux-x86_64.sh"
 INSTALLER_PATH=/tmp/miniconda-installer.sh
@@ -15,10 +19,7 @@ unset HOME
 wget --quiet $URL -O ${INSTALLER_PATH}
 chmod +x ${INSTALLER_PATH}

-# Only MD5 checksums are available for miniconda
-# Can be obtained from https://repo.continuum.io/miniconda/
-MD5SUM="e1045ee415162f944b6aebfe560b8fee"
-
 # check md5 checksum
 if ! echo "${MD5SUM}  ${INSTALLER_PATH}" | md5sum --quiet -c -; then
     echo "md5sum mismatch for ${INSTALLER_PATH}, exiting!"
     exit 1
@@ -34,22 +35,23 @@ conda config --system --add channels conda-forge
 conda config --system --set auto_update_conda false
 conda config --system --set show_channel_urls true

-# install conda itself
-conda install -yq conda==${CONDA_VERSION}
-
-# switch Python in its own step
-# since switching Python during an env update can
-# prevent pip installation.
-# we wouldn't have this issue if we did `conda env create`
-# instead of `conda env update` in these cases
-conda install -y $(cat /tmp/environment.yml | grep -o '\spython=.*') conda==${CONDA_VERSION}
-
-# bug in conda 4.3.>15 prevents --set update_dependencies
-echo 'update_dependencies: false' >> ${CONDA_DIR}/.condarc
-
-echo "installing root env:"
+# install conda itself
+if [[ "${CONDA_VERSION}" != "${MINICONDA_VERSION}" ]]; then
+    conda install -yq conda==${CONDA_VERSION}
+fi
+
+echo "installing notebook env:"
 cat /tmp/environment.yml
-conda env update -n root -f /tmp/environment.yml
+conda env create -p ${NB_PYTHON_PREFIX} -f /tmp/environment.yml
+
+# empty the conda history file,
+# which seems to result in some effective pinning of packages in the initial env,
+# which we don't intend.
+# this file must not be *removed*, however
+echo '' > ${NB_PYTHON_PREFIX}/conda-meta/history

 # enable nteract-on-jupyter, which was installed with pip
 jupyter serverextension enable nteract_on_jupyter --sys-prefix
@@ -59,22 +61,22 @@ if [[ -f /tmp/kernel-environment.yml ]]; then
     echo "installing kernel env:"
     cat /tmp/kernel-environment.yml

-    conda env create -n kernel -f /tmp/kernel-environment.yml
-    ${CONDA_DIR}/envs/kernel/bin/ipython kernel install --prefix "${CONDA_DIR}"
-    echo '' > ${CONDA_DIR}/envs/kernel/conda-meta/history
+    conda env create -p ${KERNEL_PYTHON_PREFIX} -f /tmp/kernel-environment.yml
+    ${KERNEL_PYTHON_PREFIX}/bin/ipython kernel install --prefix "${NB_PYTHON_PREFIX}"
+    echo '' > ${KERNEL_PYTHON_PREFIX}/conda-meta/history
+    conda list -p ${KERNEL_PYTHON_PREFIX}
 fi
-# empty the conda history file,
-# which seems to result in some effective pinning of packages in the initial env,
-# which we don't intend.
-# this file must not be *removed*, however
-echo '' > ${CONDA_DIR}/conda-meta/history

 # Clean things out!
-conda clean -tipsy
+conda clean --all -f -y

 # Remove the big installer so we don't increase docker image size too much
 rm ${INSTALLER_PATH}

 # Remove the pip cache created as part of installing miniconda
 rm -rf /root/.cache

 chown -R $NB_USER:$NB_USER ${CONDA_DIR}

 conda list
-conda list -n root
+conda list -p ${NB_PYTHON_PREFIX}
@@ -21,13 +21,21 @@ class DockerBuildPack(BuildPack):

     def build(self, client, image_spec, memory_limit, build_args, cache_from, extra_build_kwargs):
         """Build a Docker image based on the Dockerfile in the source repo."""
-        limits = {
-            # Always disable memory swap for building, since mostly
-            # nothing good can come of that.
-            'memswap': -1
-        }
+        # If you work on this bit of code, check the corresponding code in
+        # buildpacks/base.py where it is duplicated
+        if not isinstance(memory_limit, int):
+            raise ValueError("The memory limit has to be specified as an "
+                             "integer but is '{}'".format(type(memory_limit)))
+        limits = {}
+        if memory_limit:
+            # We always want to disable swap. Docker expects `memswap` to
+            # be the total allowable memory, *including* swap, while `memory`
+            # covers non-swap memory only. Setting both to the same value
+            # means no swap is used.
+            limits = {
+                'memory': memory_limit,
+                'memswap': memory_limit,
+            }

         build_kwargs = dict(
             path=os.getcwd(),
@@ -10,29 +10,36 @@ trap _term SIGTERM

-# if there is a binder/ sub-directory it takes precedence
-# files outside it are ignored
-if [ -e ./binder ]; then
-    nixpath="./binder/default.nix";
-    if [ -f ./binder/start ]; then
-        chmod u+x ./binder/start
-        # Using `$@` here, which is what the internet recommends, leads to
-        # errors when the command is something like `jupyter --ip=0.0.0.0 ...`,
-        # as nix-shell picks that up as an argument to itself instead of the command.
-        # There are several issues on the nix repos discussing this and adding support
-        # for `--` to indicate "all arguments after this are for the command, not nix-shell",
-        # but it seems they have stalled/not yet produced an implementation.
-        # So let's use `$*` for now.
-        nix-shell $nixpath --command "./binder/start $*" &
-    else
-        nix-shell $nixpath --command "$*" &
+# find the binder sub-directory (if present)
+binder_dir="./"
+for dir in "./binder" "./.binder" ; do
+    if [ -e $dir ]; then
+        binder_dir=$dir
+        break
+    fi
+done
+
+# raise an error if both binder and .binder are found
+if [[ -d "./binder" && -d "./.binder" ]]; then
+    echo "Error: Found both binder and .binder directories."
+    exit 1
+fi
+
+echo "binder_dir is: $binder_dir"
+
+nixpath="$binder_dir/default.nix";
+if [ -f $binder_dir/start ]; then
+    chmod u+x $binder_dir/start
+    # Using `$@` here, which is what the internet recommends, leads to
+    # errors when the command is something like `jupyter --ip=0.0.0.0 ...`,
+    # as nix-shell picks that up as an argument to itself instead of the command.
+    # There are several issues on the nix repos discussing this and adding support
+    # for `--` to indicate "all arguments after this are for the command, not nix-shell",
+    # but it seems they have stalled/not yet produced an implementation.
+    # So let's use `$*` for now.
+    nix-shell $nixpath --command "$binder_dir/start $*" &
 else
-    nixpath="./default.nix";
-    if [ -f ./start ]; then
-        chmod u+x ./start
-        nix-shell $nixpath --command "./start $*" &
-    else
-        nix-shell $nixpath --command "$*" &
-    fi
+    nix-shell $nixpath --command "$*" &
 fi

 PID=$!
@@ -84,7 +84,7 @@ class RBuildPack(PythonBuildPack):
             return True

         description_R = 'DESCRIPTION'
-        if ((not os.path.exists('binder') and os.path.exists(description_R))
+        if ((not self.binder_dir and os.path.exists(description_R))
                 or 'r' in self.stencila_contexts):
             if not self.checkpoint_date:
                 # no R snapshot date set through runtime.txt
@@ -300,7 +300,7 @@ class RBuildPack(PythonBuildPack):
         ]

         description_R = 'DESCRIPTION'
-        if not os.path.exists('binder') and os.path.exists(description_R):
+        if not self.binder_dir and os.path.exists(description_R):
             assemble_scripts += [
                 (
                     "${NB_USER}",
@@ -0,0 +1,9 @@
+#!/bin/bash -l
+# lightest possible entrypoint that ensures that
+# we use a login shell to get a fully configured shell environment
+# (e.g. sourcing /etc/profile.d, ~/.bashrc, and friends)
+if [[ ! -z "${R2D_ENTRYPOINT:-}" ]]; then
+    exec "$R2D_ENTRYPOINT" "$@"
+else
+    exec "$@"
+fi
@@ -1,8 +1,10 @@
 #!/usr/bin/env python
 import sys
+from subprocess import check_output

 assert sys.version_info[:2] == (3, 5), sys.version

+out = check_output(['conda', '--version']).decode('utf8').strip()
+assert out == 'conda 4.6.14', out
+
 import numpy
-import conda
-assert conda.__version__ == '4.5.11', conda.__version__
@@ -2,8 +2,10 @@
 import sys
 import os

-# Python should still be in /srv/conda
-assert sys.executable == '/srv/conda/bin/python'
+# conda should still be in /srv/conda
+# and Python should still be in $NB_PYTHON_PREFIX
+assert sys.executable == os.path.join(os.environ['NB_PYTHON_PREFIX'], 'bin', 'python'), sys.executable
+assert sys.executable.startswith("/srv/conda/"), sys.executable

 # Repo should be in /srv/repo
 assert os.path.exists('/srv/repo/verify')
@@ -13,7 +13,7 @@ assert sorted(specs) == ['python2', 'python3'], specs.keys()
 import json
 from subprocess import check_output
 envs = json.loads(check_output(['conda', 'env', 'list', '--json']).decode('utf8'))
-assert envs == {'envs': ['/srv/conda', '/srv/conda/envs/kernel']}, envs
+assert envs == {'envs': ['/srv/conda', '/srv/conda/envs/kernel', '/srv/conda/envs/notebook']}, envs

 pkgs = json.loads(check_output(['conda', 'list', '-n', 'kernel', '--json']).decode('utf8'))
 pkg_names = [pkg['name'] for pkg in pkgs]
@@ -1,2 +1,3 @@
 dependencies:
 - numpy
+- pytest
@@ -1,6 +1,2 @@
-#!/usr/bin/env python
-import sys
-
-assert sys.version_info[:2] == (3, 7), sys.version
-
-import numpy
+#!/bin/sh
+pytest -v ./verify.py
@@ -0,0 +1,12 @@
+#!/usr/bin/env python
+import os
+import sys
+
+def test_sys_version():
+    assert sys.version_info[:2] == (3, 7)
+
+def test_numpy():
+    import numpy
+
+def test_conda_activated():
+    assert os.environ.get("CONDA_PREFIX") == os.environ["NB_PYTHON_PREFIX"], dict(os.environ)
@@ -17,7 +17,7 @@ end
 # Verify that kernels are not installed in home directory (issue #620)
 try
     using IJulia
-    @assert IJulia.kerneldir() == "/srv/conda/share/jupyter/kernels"
+    @assert IJulia.kerneldir() == ENV["NB_PYTHON_PREFIX"] * "/share/jupyter/kernels"
 catch
     exit(1)
 end
@@ -17,7 +17,7 @@ end
 # Verify that kernels are not installed in home directory (issue #620)
 try
     using IJulia
-    @assert IJulia.kerneldir() == "/srv/conda/share/jupyter/kernels"
+    @assert IJulia.kerneldir() == ENV["NB_PYTHON_PREFIX"] * "/share/jupyter/kernels"
 catch
     exit(1)
 end
@@ -1,4 +1,4 @@
-FROM ubuntu:artful
+FROM ubuntu:bionic

 RUN apt-get update && apt-get install --yes python3
@@ -1,3 +1,5 @@
+import errno
+import pytest
 from tempfile import TemporaryDirectory
 from unittest.mock import patch
@@ -101,4 +103,26 @@ def test_run_kwargs(repo_with_content):
     containers.run.assert_called_once()
     args, kwargs = containers.run.call_args
     assert 'somekey' in kwargs
     assert kwargs['somekey'] == "somevalue"
+
+
+def test_root_not_allowed():
+    with TemporaryDirectory() as src, patch('os.geteuid') as geteuid:
+        geteuid.return_value = 0
+        argv = [src]
+        app = make_r2d(argv)
+        with pytest.raises(SystemExit) as exc:
+            app.build()
+        assert exc.value.code == errno.EPERM
+
+        app = Repo2Docker(
+            repo=src,
+            user_id=1000,
+            user_name='jovyan',
+            run=False,
+        )
+        app.initialize()
+        with patch.object(docker.APIClient, 'build') as builds:
+            builds.return_value = []
+            app.build()
+        builds.assert_called_once()
@@ -40,6 +40,17 @@ def test_dry_run():
     assert not r2d.run
     assert not r2d.push


+def test_mem_limit():
+    """
+    Test various ways of passing --build-memory-limit
+    """
+    r2d = make_r2d(['--build-memory-limit', '1024', '.'])
+    assert int(r2d.build_memory_limit) == 1024
+
+    r2d = make_r2d(['--build-memory-limit', '3K', '.'])
+    assert int(r2d.build_memory_limit) == 1024 * 3
+
+
 def test_run_required():
     """
     Test all the things that should fail if we pass in --no-run
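The test above relies on `--build-memory-limit` accepting byte suffixes like `3K`. repo2docker implements this with a traitlets byte-specification trait; the suffix arithmetic it exercises can be sketched as a standalone, purely illustrative helper (hypothetical name, not the repo2docker API):

```python
def parse_byte_limit(value):
    """Parse a memory-limit string such as '1024' (plain bytes)
    or '3K' / '2M' / '1G' / '1T' into an integer byte count."""
    suffixes = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3, "T": 1024 ** 4}
    value = value.strip()
    last = value[-1].upper()
    if last in suffixes:
        # numeric part times the binary multiplier for the suffix
        return int(value[:-1]) * suffixes[last]
    return int(value)
```

So `'3K'` parses to `1024 * 3`, matching the second assertion in `test_mem_limit`.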
@@ -0,0 +1,31 @@
+import os
+
+import pytest
+
+from repo2docker import buildpacks
+
+
+@pytest.mark.parametrize("binder_dir", ['.binder', 'binder'])
+def test_binder_dir_property(tmpdir, binder_dir):
+    tmpdir.chdir()
+    os.mkdir(binder_dir)
+
+    bp = buildpacks.BuildPack()
+    assert binder_dir in bp.binder_dir
+    assert bp.binder_path('foo.yaml') == os.path.join(binder_dir, 'foo.yaml')
+
+
+def test_root_binder_dir(tmpdir):
+    tmpdir.chdir()
+    bp = buildpacks.BuildPack()
+    assert bp.binder_dir == ''
+
+
+def test_exclusive_binder_dir(tmpdir):
+    tmpdir.chdir()
+    os.mkdir('./binder')
+    os.mkdir('./.binder')
+
+    bp = buildpacks.BuildPack()
+    with pytest.raises(RuntimeError):
+        _ = bp.binder_dir
@@ -10,7 +10,6 @@ from repo2docker.buildpacks import BaseImage, DockerBuildPack, LegacyBinderDockerBuildPack


 def test_cache_from_base(tmpdir):
     FakeDockerClient = MagicMock()
     cache_from = [
         'image-1:latest'
     ]
@@ -21,16 +20,14 @@ def test_cache_from_base(tmpdir):

     # Test base image build pack
     tmpdir.chdir()
-    for line in BaseImage().build(fake_client, 'image-2', '1Gi', {}, cache_from, extra_build_kwargs):
+    for line in BaseImage().build(fake_client, 'image-2', 100, {}, cache_from, extra_build_kwargs):
         assert line == fake_log_value
     called_args, called_kwargs = fake_client.build.call_args
     assert 'cache_from' in called_kwargs
     assert called_kwargs['cache_from'] == cache_from


-
-
 def test_cache_from_docker(tmpdir):
     FakeDockerClient = MagicMock()
     cache_from = [
         'image-1:latest'
     ]
|
|||
with tmpdir.join("Dockerfile").open('w') as f:
|
||||
f.write('FROM scratch\n')
|
||||
|
||||
for line in DockerBuildPack().build(fake_client, 'image-2', '1Gi', {}, cache_from, extra_build_kwargs):
|
||||
for line in DockerBuildPack().build(fake_client, 'image-2', 100, {}, cache_from, extra_build_kwargs):
|
||||
assert line == fake_log_value
|
||||
called_args, called_kwargs = fake_client.build.call_args
|
||||
assert 'cache_from' in called_kwargs
|
||||
|
@ -52,7 +49,6 @@ def test_cache_from_docker(tmpdir):


def test_cache_from_legacy(tmpdir):
    FakeDockerClient = MagicMock()
    cache_from = [
        'image-1:latest'
    ]
@ -65,9 +61,8 @@ def test_cache_from_legacy(tmpdir):
    with tmpdir.join("Dockerfile").open('w') as f:
        f.write('FROM andrewosh/binder-base\n')

-   for line in LegacyBinderDockerBuildPack().build(fake_client, 'image-2', '1Gi', {}, cache_from, extra_build_kwargs):
+   for line in LegacyBinderDockerBuildPack().build(fake_client, 'image-2', 100, {}, cache_from, extra_build_kwargs):
        assert line == fake_log_value
    called_args, called_kwargs = fake_client.build.call_args
    assert 'cache_from' in called_kwargs
    assert called_kwargs['cache_from'] == cache_from
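The `cache_from` tests above all follow the same pattern: drive `build()` with a mocked Docker client, then assert that `cache_from` was forwarded to the client's `build()` call. A minimal sketch of the forwarding being tested — `build_image` is an invented helper, and the real repo2docker `BuildPack.build()` signature takes more arguments:

```python
from unittest.mock import MagicMock

def build_image(client, image_spec, cache_from, extra_build_kwargs):
    """Hypothetical sketch: forward `cache_from` to the Docker client so
    existing image layers can be reused, yielding the build log lines."""
    build_kwargs = dict(
        tag=image_spec,
        cache_from=cache_from,  # reuse layers from these images if present
    )
    build_kwargs.update(extra_build_kwargs)
    yield from client.build(**build_kwargs)

# Exercise it the same way the tests do, with a MagicMock client:
fake_log_value = {'stream': 'fake'}
fake_client = MagicMock()
fake_client.build.return_value = iter([fake_log_value])
lines = list(build_image(fake_client, 'image-2', ['image-1:latest'], {}))
called_args, called_kwargs = fake_client.build.call_args
```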
@ -10,9 +10,14 @@ import os
import shutil
import time

from unittest.mock import MagicMock

import docker

import pytest

from repo2docker.app import Repo2Docker
from repo2docker.buildpacks import BaseImage, DockerBuildPack


basedir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
@ -82,3 +87,17 @@ def test_memlimit_same_postbuild():
            file_contents.append(f.read())
    # Make sure they're all the same
    assert len(set(file_contents)) == 1


@pytest.mark.parametrize('BuildPack', [BaseImage, DockerBuildPack])
def test_memlimit_argument_type(BuildPack):
    # check that an exception is raised when the memory limit isn't an int
    fake_log_value = {'stream': 'fake'}
    fake_client = MagicMock(spec=docker.APIClient)
    fake_client.build.return_value = iter([fake_log_value])

    with pytest.raises(ValueError) as exc_info:
        for line in BuildPack().build(fake_client, 'image-2', "10Gi", {}, [], {}):
            pass

    assert "The memory limit has to be specified as an" in str(exc_info.value)
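Taken together with the `'1Gi'` → `100` changes in the earlier hunks, `test_memlimit_argument_type` pins down that `build()` now expects the memory limit as an integer number of bytes and rejects human-readable strings like `"10Gi"`. A hedged sketch of that validation — `validate_memory_limit` is an invented name, not the actual repo2docker code path:

```python
def validate_memory_limit(memory_limit):
    """Hypothetical sketch of the check the test asserts: accept only an
    int (bytes); reject strings such as "10Gi" with a ValueError whose
    message starts with the text the test matches."""
    if memory_limit and not isinstance(memory_limit, int):
        raise ValueError(
            "The memory limit has to be specified as an integer number of "
            "bytes, got {!r}".format(memory_limit)
        )
    return memory_limit
```

Converting units up front (e.g. turning `10Gi` into bytes before calling `build()`) keeps the build packs themselves free of parsing logic.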