Compare commits

...

10 Commits

Author SHA1 Message Date
LinuxServer-CI cbc1cbbb94 Bot Updating Templated Files 2020-11-01 23:48:32 +00:00
LinuxServer-CI c7fc8f9b0f Bot Updating Templated Files 2020-11-01 23:47:24 +00:00
thelamer 93d6c36a82 update baseimages 2020-11-01 15:45:38 -08:00
LinuxServer-CI ad12f7687e Bot Updating Templated Files 2019-12-26 21:19:46 +00:00
LinuxServer-CI 10ac42c591 Bot Updating Templated Files 2019-12-26 21:19:00 +00:00
thelamer 31da0f50aa rebase to 3.11 2019-12-26 13:17:41 -08:00
thelamer 031e1b9852 bumping to alpine 3.10 2019-07-26 12:30:03 -07:00
thelamer 9ba61d3828 adding base files version for alpine containers 2019-06-20 23:42:22 -07:00
LinuxServer-CI 3ed5bbfbc5 Bot Updating Package Versions 2019-05-27 04:22:09 -04:00
thelamer 581fc60781 files branch for stashing compiled verison of cloud9 sdk 2019-05-26 23:13:34 -07:00
15 changed files with 900 additions and 428 deletions

122
.github/CONTRIBUTING.md vendored 100755
View file

@ -0,0 +1,122 @@
# Contributing to baseimage-cloud9
## Gotchas
* While contributing, make sure to make all your changes before creating a Pull Request, as our pipeline builds each commit once the PR is open.
* Read and fill in the Pull Request template.
* If this is a fix for a typo in code or documentation in the README, please file an issue instead.
* If the PR is addressing an existing issue, include `closes #<issue number>` in the body of the PR commit message (see the example after this list).
* If you want to discuss changes, you can also bring it up in [#dev-talk](https://discordapp.com/channels/354974912613449730/757585807061155840) in our [Discord server](https://discord.gg/YWrKVTn)
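
For example, a hypothetical PR commit message whose body closes an issue (the issue number is a placeholder):
```
Add libxml2-dev to all Dockerfiles

closes #123
```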
## Common files
| File | Use case |
| :----: | --- |
| `Dockerfile` | Dockerfile used to build amd64 images |
| `Dockerfile.aarch64` | Dockerfile used to build 64bit ARM architectures |
| `Dockerfile.armhf` | Dockerfile used to build 32bit ARM architectures |
| `Jenkinsfile` | This file is a product of our builder and should not be edited directly. This is used to build the image |
| `jenkins-vars.yml` | This file is used to generate the `Jenkinsfile` mentioned above; it only affects the build process |
| `package_versions.txt` | This file is generated as a part of the build-process and should not be edited directly. It lists all the installed packages and their versions |
| `README.md` | This file is a product of our builder and should not be edited directly. This displays the readme for the repository and image registries |
| `readme-vars.yml` | This file is used to generate the `README.md` |
## Readme
If you would like to change our readme, please __**do not**__ directly edit the readme, as it is auto-generated on each commit.
Instead, edit the [readme-vars.yml](https://github.com/linuxserver/docker-baseimage-cloud9/edit/master/readme-vars.yml).
These variables are used in a template for our [Jenkins Builder](https://github.com/linuxserver/docker-jenkins-builder) as part of an ansible play.
Most of these variables are also carried over to [docs.linuxserver.io](https://docs.linuxserver.io)
### Fixing typos or clarifying the text in the readme
There are variables for multiple parts of the readme; the most common ones are:
| Variable | Description |
| :----: | --- |
| `project_blurb` | This is the short excerpt shown above the project logo. |
| `app_setup_block` | This is the text that shows up under "Application Setup" if enabled |
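For illustration, a minimal sketch of how these might be set in `readme-vars.yml` (the values are placeholders, and the `app_setup_block_enabled` toggle is an assumption based on the "if enabled" note above):
```yml
project_blurb: "Short excerpt shown above the project logo."
app_setup_block_enabled: true
app_setup_block: |
  Instructions that appear under "Application Setup" in the readme.
```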
### Parameters
The compose and run examples are also generated from these variables.
We have a [reference file](https://github.com/linuxserver/docker-jenkins-builder/blob/master/vars/_container-vars-blank) in our Jenkins Builder.
These are prefixed with `param_` for required parameters, or `opt_param_` for optional parameters, except for `cap_add`.
Remember to enable the parameter if it is currently disabled; this differs between parameters and can be seen in the reference file.
Devices, environment variables, ports and volumes expect their variables in a certain way.
### Devices
```yml
param_devices:
- { device_path: "/dev/dri", device_host_path: "/dev/dri", desc: "For hardware transcoding" }
opt_param_devices:
- { device_path: "/dev/dri", device_host_path: "/dev/dri", desc: "For hardware transcoding" }
```
### Environment variables
```yml
param_env_vars:
- { env_var: "TZ", env_value: "Europe/London", desc: "Specify a timezone to use EG Europe/London." }
opt_param_env_vars:
- { env_var: "VERSION", env_value: "latest", desc: "Supported values are LATEST, PLEXPASS or a specific version number." }
```
### Ports
```yml
param_ports:
- { external_port: "80", internal_port: "80", port_desc: "Application WebUI" }
opt_param_ports:
- { external_port: "80", internal_port: "80", port_desc: "Application WebUI" }
```
### Volumes
```yml
param_volumes:
- { vol_path: "/config", vol_host_path: "</path/to/appdata/config>", desc: "Configuration files." }
opt_param_volumes:
- { vol_path: "/config", vol_host_path: "</path/to/appdata/config>", desc: "Configuration files." }
```
### Testing template changes
After you make any changes to the templates, you can use our [Jenkins Builder](https://github.com/linuxserver/docker-jenkins-builder) to have the files updated from the modified templates. Please use the command found under `Running Locally` [on this page](https://github.com/linuxserver/docker-jenkins-builder/blob/master/README.md) to generate them prior to submitting a PR.
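As a rough sketch, mirroring the invocation used in this repository's `Jenkinsfile` (the container name, branch, and output path below are assumptions; the `Running Locally` instructions linked above are authoritative):
```
TEMPDIR=$(mktemp -d)
docker pull ghcr.io/linuxserver/jenkins-builder:latest
docker run --rm \
  -e CONTAINER_NAME=baseimage-cloud9 \
  -e GITHUB_BRANCH=master \
  -v ${TEMPDIR}:/ansible/jenkins \
  ghcr.io/linuxserver/jenkins-builder:latest
# rendered templates land in ${TEMPDIR}/docker-baseimage-cloud9
```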
## Dockerfiles
We use multiple Dockerfiles in our repos because some CPU architectures need different packages to work.
If you are proposing additional packages to be added, ensure that you add the packages to all the Dockerfiles in alphabetical order.
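For example (illustrative only; `nano` below stands in for whatever package you are proposing), the build-package block used by the Dockerfiles in this change would become:
```
RUN \
  echo "**** install build packages ****" && \
  apk add --no-cache \
    g++ \
    gcc \
    git \
    libgcc \
    libxml2-dev \
    make \
    nano \
    nodejs \
    openssl-dev \
    python \
    tmux
```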
### Testing your changes
```
git clone https://github.com/linuxserver/docker-baseimage-cloud9.git
cd docker-baseimage-cloud9
docker build \
--no-cache \
--pull \
-t linuxserver/baseimage-cloud9:latest .
```
The ARM variants can be built on x86_64 hardware using `multiarch/qemu-user-static`:
```
docker run --rm --privileged multiarch/qemu-user-static:register --reset
```
Once registered, you can define the Dockerfile to use with `-f Dockerfile.aarch64`.
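For example, a cross-build of the 64-bit ARM variant might look like this (a sketch; the tag name is arbitrary):
```
docker build \
  --no-cache \
  --pull \
  -f Dockerfile.aarch64 \
  -t linuxserver/baseimage-cloud9:arm64v8-latest .
```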
## Update the changelog
If you are modifying the Dockerfiles or any of the startup scripts in [root](https://github.com/linuxserver/docker-baseimage-cloud9/tree/master/root), add an entry to the changelog
```yml
changelogs:
- { date: "DD.MM.YY:", desc: "Added some love to templates" }
```

2
.github/FUNDING.yml vendored 100755
View file

@ -0,0 +1,2 @@
github: linuxserver
open_collective: linuxserver

38
.github/ISSUE_TEMPLATE.md vendored 100755
View file

@ -0,0 +1,38 @@
[linuxserverurl]: https://linuxserver.io
[![linuxserver.io](https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver_medium.png)][linuxserverurl]
<!--- If you are new to Docker or this application our issue tracker is **ONLY** used for reporting bugs or requesting features. Please use [our discord server](https://discord.gg/YWrKVTn) for general support. --->
<!--- If this acts as a feature request please ask yourself if this modification is something the whole userbase will benefit from --->
<!--- If this is a specific change for corner case functionality or plugins please look at making a Docker Mod or local script https://blog.linuxserver.io/2019/09/14/customizing-our-containers/ -->
<!--- Provide a general summary of the issue in the Title above -->
------------------------------
## Expected Behavior
<!--- Tell us what should happen -->
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1.
2.
3.
4.
## Environment
**OS:**
**CPU architecture:** x86_64/arm32/arm64
**How docker service was installed:**
<!--- ie. from the official docker repo, from the distro repo, nas OS provided, etc. -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Command used to create docker container (run/create/compose/screenshot)
<!--- Provide your docker create/run command or compose yaml snippet, or a screenshot of settings if using a gui to create the container -->
## Docker logs
<!--- Provide a full docker log, output of "docker logs baseimage-cloud9" -->

.github/PULL_REQUEST_TEMPLATE.md
View file

@ -0,0 +1,43 @@
<!--- Provide a general summary of your changes in the Title above -->
[linuxserverurl]: https://linuxserver.io
[![linuxserver.io](https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver_medium.png)][linuxserverurl]
<!--- Before submitting a pull request please check the following -->
<!--- If this is a fix for a typo in code or documentation in the README, please file an issue and let us sort it out; we do not need a PR -->
<!--- Ask yourself if this modification is something the whole userbase will benefit from, if this is a specific change for corner case functionality or plugins please look at making a Docker Mod or local script https://blog.linuxserver.io/2019/09/14/customizing-our-containers/ -->
<!--- If the PR is addressing an existing issue, include closes #<issue number> in the body of the PR commit message -->
<!--- You have included links to any files / patches etc your PR may be using in the body of the PR commit message -->
<!--- We maintain a changelog of major revisions to the container at the end of readme-vars.yml in the root of this repository, please add your changes there if appropriate -->
<!--- Coding guidelines: -->
<!--- 1. Installed packages in the Dockerfiles should be in alphabetical order -->
<!--- 2. Changes to Dockerfile should be replicated in Dockerfile.armhf and Dockerfile.aarch64 if applicable -->
<!--- 3. Indentation style (tabs vs 4 spaces vs 1 space) should match the rest of the document -->
<!--- 4. Readme is auto generated from readme-vars.yml, make your changes there -->
------------------------------
- [ ] I have read the [contributing](https://github.com/linuxserver/docker-baseimage-cloud9/blob/master/.github/CONTRIBUTING.md) guideline and understand that I have made the correct modifications
------------------------------
<!--- We welcome all PRs, though this doesn't guarantee they will be accepted. -->
## Description:
<!--- Describe your changes in detail -->
## Benefits of this PR and context:
<!--- Please explain why we should accept this PR. If this fixes an outstanding bug, please reference the issue # -->
## How Has This Been Tested?
<!--- Please describe in detail how you tested your changes. -->
<!--- Include details of your testing environment, and the tests you ran to -->
<!--- see how your change affects other areas of the code, etc. -->
## Source / References:
<!--- Please include any forum posts/github links relevant to the PR -->

13
.github/workflows/greetings.yml vendored 100755
View file

@ -0,0 +1,13 @@
name: Greetings
on: [pull_request_target, issues]
jobs:
greeting:
runs-on: ubuntu-latest
steps:
- uses: actions/first-interaction@v1
with:
issue-message: 'Thanks for opening your first issue here! Be sure to follow the [issue template](https://github.com/linuxserver/docker-baseimage-cloud9/blob/master/.github/ISSUE_TEMPLATE.md)!'
pr-message: 'Thanks for opening this pull request! Be sure to follow the [pull request template](https://github.com/linuxserver/docker-baseimage-cloud9/blob/master/.github/PULL_REQUEST_TEMPLATE.md)!'
repo-token: ${{ secrets.GITHUB_TOKEN }}

23
.github/workflows/stale.yml vendored 100755
View file

@ -0,0 +1,23 @@
name: Mark stale issues and pull requests
on:
schedule:
- cron: "30 1 * * *"
jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v1
with:
stale-issue-message: "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions."
stale-pr-message: "This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions."
stale-issue-label: 'no-issue-activity'
stale-pr-label: 'no-pr-activity'
days-before-stale: 30
days-before-close: 365
exempt-issue-labels: 'awaiting-approval,work-in-progress'
exempt-pr-labels: 'awaiting-approval,work-in-progress'
repo-token: ${{ secrets.GITHUB_TOKEN }}

Dockerfile
View file

@ -1,23 +1,42 @@
FROM lsiobase/ubuntu:bionic as builder
FROM ghcr.io/linuxserver/baseimage-alpine:3.11
ARG DEBIAN_FRONTEND="noninteractive"
# set version label
ARG BUILD_DATE
ARG VERSION
LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
LABEL maintainer="thelamer"
ARG NPM_CONFIG_UNSAFE_PERM=true
# add local files
COPY /root /
RUN \
echo "**** install build packages ****" && \
apt-get update && \
apt-get install -y \
apk add --no-cache \
g++ \
gcc \
git \
libgcc \
libxml2-dev \
make \
python && \
nodejs \
openssl-dev \
python \
tmux
RUN \
echo "**** Compile Cloud9 from source ****" && \
git clone --depth 1 \
https://github.com/c9/core.git c9sdk && \
cd c9sdk && \
sed -i \
's/node-pty-prebuilt/node-pty/g' \
plugins/node_modules/vfs-local/localfs.js && \
mkdir -p /c9bins && \
HOME=/c9bins scripts/install-sdk.sh && \
sed -i \
'/$URL/c\bash /install.sh' \
scripts/install-sdk.sh && \
HOME=/c9bins scripts/install-sdk.sh
RUN \
echo "**** Restructure files for copy ****" && \
mkdir -p \
/buildout && \
@ -29,39 +48,3 @@ RUN \
mv \
/c9sdk \
/buildout/
# runtime stage
FROM lsiobase/ubuntu:bionic
# set version label
ARG BUILD_DATE
ARG VERSION
LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
LABEL maintainer="thelamer"
# add files from c9base
COPY --from=builder /buildout/ /
RUN \
echo "**** install base packages ****" && \
apt-get update && \
apt-get install -y --no-install-recommends \
curl \
git \
gnupg \
sudo && \
echo "**** Cleanup and user perms ****" && \
usermod -aG sudo \
abc && \
apt-get autoclean && \
rm -rf \
/var/lib/apt/lists/* \
/var/tmp/* \
/tmp/*
# add local files
COPY root/ /
# ports and volumes
EXPOSE 8000
VOLUME /code

Dockerfile.aarch64
View file

@ -1,23 +1,42 @@
FROM lsiobase/ubuntu:arm64v8-bionic as builder
FROM ghcr.io/linuxserver/baseimage-alpine:arm64v8-3.11
ARG DEBIAN_FRONTEND="noninteractive"
# set version label
ARG BUILD_DATE
ARG VERSION
LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
LABEL maintainer="thelamer"
ARG NPM_CONFIG_UNSAFE_PERM=true
# add local files
COPY /root /
RUN \
echo "**** install build packages ****" && \
apt-get update && \
apt-get install -y \
apk add --no-cache \
g++ \
gcc \
git \
libgcc \
libxml2-dev \
make \
python && \
nodejs \
openssl-dev \
python \
tmux
RUN \
echo "**** Compile Cloud9 from source ****" && \
git clone --depth 1 \
https://github.com/c9/core.git c9sdk && \
cd c9sdk && \
sed -i \
's/node-pty-prebuilt/node-pty/g' \
plugins/node_modules/vfs-local/localfs.js && \
mkdir -p /c9bins && \
HOME=/c9bins scripts/install-sdk.sh && \
sed -i \
'/$URL/c\bash /install.sh' \
scripts/install-sdk.sh && \
HOME=/c9bins scripts/install-sdk.sh
RUN \
echo "**** Restructure files for copy ****" && \
mkdir -p \
/buildout && \
@ -29,39 +48,3 @@ RUN \
mv \
/c9sdk \
/buildout/
# runtime stage
FROM lsiobase/ubuntu:arm64v8-bionic
# set version label
ARG BUILD_DATE
ARG VERSION
LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
LABEL maintainer="thelamer"
# add files from c9base
COPY --from=builder /buildout/ /
RUN \
echo "**** install base packages ****" && \
apt-get update && \
apt-get install -y --no-install-recommends \
curl \
git \
gnupg \
sudo && \
echo "**** Cleanup and user perms ****" && \
usermod -aG sudo \
abc && \
apt-get autoclean && \
rm -rf \
/var/lib/apt/lists/* \
/var/tmp/* \
/tmp/*
# add local files
COPY root/ /
# ports and volumes
EXPOSE 8000
VOLUME /code

Dockerfile.armhf
View file

@ -1,23 +1,42 @@
FROM lsiobase/ubuntu:arm32v7-bionic as builder
FROM ghcr.io/linuxserver/baseimage-alpine:arm32v7-3.11
ARG DEBIAN_FRONTEND="noninteractive"
# set version label
ARG BUILD_DATE
ARG VERSION
LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
LABEL maintainer="thelamer"
ARG NPM_CONFIG_UNSAFE_PERM=true
# add local files
COPY /root /
RUN \
echo "**** install build packages ****" && \
apt-get update && \
apt-get install -y \
apk add --no-cache \
g++ \
gcc \
git \
libgcc \
libxml2-dev \
make \
python && \
nodejs \
openssl-dev \
python \
tmux
RUN \
echo "**** Compile Cloud9 from source ****" && \
git clone --depth 1 \
https://github.com/c9/core.git c9sdk && \
cd c9sdk && \
sed -i \
's/node-pty-prebuilt/node-pty/g' \
plugins/node_modules/vfs-local/localfs.js && \
mkdir -p /c9bins && \
HOME=/c9bins scripts/install-sdk.sh && \
sed -i \
'/$URL/c\bash /install.sh' \
scripts/install-sdk.sh && \
HOME=/c9bins scripts/install-sdk.sh
RUN \
echo "**** Restructure files for copy ****" && \
mkdir -p \
/buildout && \
@ -29,39 +48,3 @@ RUN \
mv \
/c9sdk \
/buildout/
# runtime stage
FROM lsiobase/ubuntu:arm32v7-bionic
# set version label
ARG BUILD_DATE
ARG VERSION
LABEL build_version="Linuxserver.io version:- ${VERSION} Build-date:- ${BUILD_DATE}"
LABEL maintainer="thelamer"
# add files from c9base
COPY --from=builder /buildout/ /
RUN \
echo "**** install base packages ****" && \
apt-get update && \
apt-get install -y --no-install-recommends \
curl \
git \
gnupg \
sudo && \
echo "**** Cleanup and user perms ****" && \
usermod -aG sudo \
abc && \
apt-get autoclean && \
rm -rf \
/var/lib/apt/lists/* \
/var/tmp/* \
/tmp/*
# add local files
COPY root/ /
# ports and volumes
EXPOSE 8000
VOLUME /code

454
Jenkinsfile vendored
View file

@ -2,6 +2,10 @@ pipeline {
agent {
label 'X86-64-MULTI'
}
options {
buildDiscarder(logRotator(numToKeepStr: '10', daysToKeepStr: '60'))
parallelsAlwaysFailFast()
}
// Input to determine if this is a package check
parameters {
string(defaultValue: 'false', description: 'package check run', name: 'PACKAGE_CHECK')
@ -10,9 +14,8 @@ pipeline {
environment {
BUILDS_DISCORD=credentials('build_webhook_url')
GITHUB_TOKEN=credentials('498b4638-2d02-4ce5-832d-8a57d01d97ab')
EXT_GIT_BRANCH = 'master'
EXT_USER = 'c9'
EXT_REPO = 'core'
GITLAB_TOKEN=credentials('b6f0f1dd-6952-4cf6-95d1-9c06380283f0')
GITLAB_NAMESPACE=credentials('gitlab-namespace-id')
BUILD_VERSION_ARG = 'CLOUD9_VERSION'
LS_USER = 'linuxserver'
LS_REPO = 'docker-baseimage-cloud9'
@ -20,16 +23,9 @@ pipeline {
DOCKERHUB_IMAGE = 'lsiobase/cloud9'
DEV_DOCKERHUB_IMAGE = 'lsiodev/cloud9-base'
PR_DOCKERHUB_IMAGE = 'lspipepr/cloud9-base'
DIST_IMAGE = 'ubuntu'
DIST_IMAGE = 'alpine'
MULTIARCH='true'
CI='true'
CI_WEB='true'
CI_PORT='8000'
CI_SSL='false'
CI_DELAY='120'
CI_DOCKERENV='TZ=US/Pacific'
CI_AUTH='user:password'
CI_WEBPATH=''
CI='false'
}
stages {
// Setup all the basic environment variables needed for the build
@ -38,7 +34,7 @@ pipeline {
script{
env.EXIT_STATUS = ''
env.LS_RELEASE = sh(
script: '''docker run --rm alexeiled/skopeo sh -c 'skopeo inspect docker://docker.io/'${DOCKERHUB_IMAGE}':latest 2>/dev/null' | jq -r '.Labels.build_version' | awk '{print $3}' | grep '\\-ls' || : ''',
script: '''docker run --rm ghcr.io/linuxserver/alexeiled-skopeo sh -c 'skopeo inspect docker://docker.io/'${DOCKERHUB_IMAGE}':files-alpine 2>/dev/null' | jq -r '.Labels.build_version' | awk '{print $3}' | grep '\\-ls' || : ''',
returnStdout: true).trim()
env.LS_RELEASE_NOTES = sh(
script: '''cat readme-vars.yml | awk -F \\" '/date: "[0-9][0-9].[0-9][0-9].[0-9][0-9]:/ {print $4;exit;}' | sed -E ':a;N;$!ba;s/\\r{0,1}\\n/\\\\n/g' ''',
@ -52,14 +48,7 @@ pipeline {
env.CODE_URL = 'https://github.com/' + env.LS_USER + '/' + env.LS_REPO + '/commit/' + env.GIT_COMMIT
env.DOCKERHUB_LINK = 'https://hub.docker.com/r/' + env.DOCKERHUB_IMAGE + '/tags/'
env.PULL_REQUEST = env.CHANGE_ID
env.LICENSE_TAG = sh(
script: '''#!/bin/bash
if [ -e LICENSE ] ; then
cat LICENSE | md5sum | cut -c1-8
else
echo none
fi''',
returnStdout: true).trim()
env.TEMPLATED_FILES = 'Jenkinsfile README.md LICENSE ./.github/CONTRIBUTING.md ./.github/FUNDING.yml ./.github/ISSUE_TEMPLATE.md ./.github/PULL_REQUEST_TEMPLATE.md ./.github/workflows/greetings.yml ./.github/workflows/stale.yml'
}
script{
env.LS_RELEASE_NUMBER = sh(
@ -102,23 +91,16 @@ pipeline {
/* ########################
External Release Tagging
######################## */
// If this is a github commit trigger determine the current commit at head
stage("Set ENV github_commit"){
steps{
script{
env.EXT_RELEASE = sh(
script: '''curl -s https://api.github.com/repos/${EXT_USER}/${EXT_REPO}/commits/${EXT_GIT_BRANCH} | jq -r '. | .sha' | cut -c1-8 ''',
returnStdout: true).trim()
}
}
}
// If this is a github commit trigger Set the external release link
stage("Set ENV commit_link"){
steps{
script{
env.RELEASE_LINK = 'https://github.com/' + env.EXT_USER + '/' + env.EXT_REPO + '/commit/' + env.EXT_RELEASE
}
}
// If this is a custom command to determine version use that command
stage("Set tag custom bash"){
steps{
script{
env.EXT_RELEASE = sh(
script: ''' echo c4d1c59d-alpine ''',
returnStdout: true).trim()
env.RELEASE_LINK = 'custom_command'
}
}
}
// Sanitize the release tag and strip illegal docker or github characters
stage("Sanitize tag"){
@ -130,39 +112,45 @@ pipeline {
}
}
}
// If this is a master build use live docker endpoints
// If this is a files-alpine build use live docker endpoints
stage("Set ENV live build"){
when {
branch "master"
branch "files-alpine"
environment name: 'CHANGE_ID', value: ''
}
steps {
script{
env.IMAGE = env.DOCKERHUB_IMAGE
env.GITHUBIMAGE = 'ghcr.io/' + env.LS_USER + '/' + env.CONTAINER_NAME
env.GITLABIMAGE = 'registry.gitlab.com/linuxserver.io/' + env.LS_REPO + '/' + env.CONTAINER_NAME
if (env.MULTIARCH == 'true') {
env.CI_TAGS = 'amd64-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER + '|arm32v7-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER + '|arm64v8-' + env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
} else {
env.CI_TAGS = env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
}
env.META_TAG = env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
env.EXT_RELEASE_TAG = 'version-' + env.EXT_RELEASE_CLEAN
}
}
}
// If this is a dev build use dev docker endpoints
stage("Set ENV dev build"){
when {
not {branch "master"}
not {branch "files-alpine"}
environment name: 'CHANGE_ID', value: ''
}
steps {
script{
env.IMAGE = env.DEV_DOCKERHUB_IMAGE
env.GITHUBIMAGE = 'ghcr.io/' + env.LS_USER + '/lsiodev-' + env.CONTAINER_NAME
env.GITLABIMAGE = 'registry.gitlab.com/linuxserver.io/' + env.LS_REPO + '/lsiodev-' + env.CONTAINER_NAME
if (env.MULTIARCH == 'true') {
env.CI_TAGS = 'amd64-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '|arm32v7-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA + '|arm64v8-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
} else {
env.CI_TAGS = env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
}
env.META_TAG = env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-dev-' + env.COMMIT_SHA
env.EXT_RELEASE_TAG = 'version-' + env.EXT_RELEASE_CLEAN
env.DOCKERHUB_LINK = 'https://hub.docker.com/r/' + env.DEV_DOCKERHUB_IMAGE + '/tags/'
}
}
@ -175,12 +163,15 @@ pipeline {
steps {
script{
env.IMAGE = env.PR_DOCKERHUB_IMAGE
env.GITHUBIMAGE = 'ghcr.io/' + env.LS_USER + '/lspipepr-' + env.CONTAINER_NAME
env.GITLABIMAGE = 'registry.gitlab.com/linuxserver.io/' + env.LS_REPO + '/lspipepr-' + env.CONTAINER_NAME
if (env.MULTIARCH == 'true') {
env.CI_TAGS = 'amd64-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-pr-' + env.PULL_REQUEST + '|arm32v7-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-pr-' + env.PULL_REQUEST + '|arm64v8-' + env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-pr-' + env.PULL_REQUEST
} else {
env.CI_TAGS = env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-pr-' + env.PULL_REQUEST
}
env.META_TAG = env.EXT_RELEASE_CLEAN + '-pkg-' + env.PACKAGE_TAG + '-pr-' + env.PULL_REQUEST
env.EXT_RELEASE_TAG = 'version-' + env.EXT_RELEASE_CLEAN
env.CODE_URL = 'https://github.com/' + env.LS_USER + '/' + env.LS_REPO + '/pull/' + env.PULL_REQUEST
env.DOCKERHUB_LINK = 'https://hub.docker.com/r/' + env.PR_DOCKERHUB_IMAGE + '/tags/'
}
@ -193,24 +184,24 @@ pipeline {
}
steps {
withCredentials([
string(credentialsId: 'spaces-key', variable: 'DO_KEY'),
string(credentialsId: 'spaces-secret', variable: 'DO_SECRET')
string(credentialsId: 'ci-tests-s3-key-id', variable: 'S3_KEY'),
string(credentialsId: 'ci-tests-s3-secret-access-key', variable: 'S3_SECRET')
]) {
script{
env.SHELLCHECK_URL = 'https://lsio-ci.ams3.digitaloceanspaces.com/' + env.IMAGE + '/' + env.META_TAG + '/shellcheck-result.xml'
env.SHELLCHECK_URL = 'https://ci-tests.linuxserver.io/' + env.IMAGE + '/' + env.META_TAG + '/shellcheck-result.xml'
}
sh '''curl -sL https://raw.githubusercontent.com/linuxserver/docker-shellcheck/master/checkrun.sh | /bin/bash'''
sh '''#! /bin/bash
set -e
docker pull lsiodev/spaces-file-upload:latest
docker pull ghcr.io/linuxserver/lsiodev-spaces-file-upload:latest
docker run --rm \
-e DESTINATION=\"${IMAGE}/${META_TAG}/shellcheck-result.xml\" \
-e FILE_NAME="shellcheck-result.xml" \
-e MIMETYPE="text/xml" \
-v ${WORKSPACE}:/mnt \
-e SECRET_KEY=\"${DO_SECRET}\" \
-e ACCESS_KEY=\"${DO_KEY}\" \
-t lsiodev/spaces-file-upload:latest \
-e SECRET_KEY=\"${S3_SECRET}\" \
-e ACCESS_KEY=\"${S3_KEY}\" \
-t ghcr.io/linuxserver/lsiodev-spaces-file-upload:latest \
python /upload.py'''
}
}
@ -218,7 +209,7 @@ pipeline {
// Use helper containers to render templated files
stage('Update-Templates') {
when {
branch "master"
branch "files-alpine"
environment name: 'CHANGE_ID', value: ''
expression {
env.CONTAINER_NAME != null
@ -228,34 +219,34 @@ pipeline {
sh '''#! /bin/bash
set -e
TEMPDIR=$(mktemp -d)
docker pull linuxserver/jenkins-builder:latest
docker run --rm -e CONTAINER_NAME=${CONTAINER_NAME} -e GITHUB_BRANCH=master -v ${TEMPDIR}:/ansible/jenkins linuxserver/jenkins-builder:latest
docker pull linuxserver/doc-builder:latest
docker run --rm -e CONTAINER_NAME=${CONTAINER_NAME} -e GITHUB_BRANCH=master -v ${TEMPDIR}:/ansible/readme linuxserver/doc-builder:latest
if [ "$(md5sum ${TEMPDIR}/${LS_REPO}/Jenkinsfile | awk '{ print $1 }')" != "$(md5sum Jenkinsfile | awk '{ print $1 }')" ] || \
[ "$(md5sum ${TEMPDIR}/${CONTAINER_NAME}/README.md | awk '{ print $1 }')" != "$(md5sum README.md | awk '{ print $1 }')" ] || \
[ "$(cat ${TEMPDIR}/${LS_REPO}/LICENSE | md5sum | cut -c1-8)" != "${LICENSE_TAG}" ]; then
docker pull ghcr.io/linuxserver/jenkins-builder:latest
docker run --rm -e CONTAINER_NAME=${CONTAINER_NAME} -e GITHUB_BRANCH=files-alpine -v ${TEMPDIR}:/ansible/jenkins ghcr.io/linuxserver/jenkins-builder:latest
CURRENTHASH=$(grep -hs ^ ${TEMPLATED_FILES} | md5sum | cut -c1-8)
cd ${TEMPDIR}/docker-${CONTAINER_NAME}
NEWHASH=$(grep -hs ^ ${TEMPLATED_FILES} | md5sum | cut -c1-8)
if [[ "${CURRENTHASH}" != "${NEWHASH}" ]]; then
mkdir -p ${TEMPDIR}/repo
git clone https://github.com/${LS_USER}/${LS_REPO}.git ${TEMPDIR}/repo/${LS_REPO}
git --git-dir ${TEMPDIR}/repo/${LS_REPO}/.git checkout -f master
cp ${TEMPDIR}/${CONTAINER_NAME}/README.md ${TEMPDIR}/repo/${LS_REPO}/
cp ${TEMPDIR}/docker-${CONTAINER_NAME}/Jenkinsfile ${TEMPDIR}/repo/${LS_REPO}/
cp ${TEMPDIR}/docker-${CONTAINER_NAME}/LICENSE ${TEMPDIR}/repo/${LS_REPO}/
cd ${TEMPDIR}/repo/${LS_REPO}
git checkout -f files-alpine
cd ${TEMPDIR}/docker-${CONTAINER_NAME}
mkdir -p ${TEMPDIR}/repo/${LS_REPO}/.github/workflows
cp --parents ${TEMPLATED_FILES} ${TEMPDIR}/repo/${LS_REPO}/
cd ${TEMPDIR}/repo/${LS_REPO}/
git --git-dir ${TEMPDIR}/repo/${LS_REPO}/.git add Jenkinsfile README.md LICENSE
git --git-dir ${TEMPDIR}/repo/${LS_REPO}/.git commit -m 'Bot Updating Templated Files'
git --git-dir ${TEMPDIR}/repo/${LS_REPO}/.git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git --all
git add ${TEMPLATED_FILES}
git commit -m 'Bot Updating Templated Files'
git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git --all
echo "true" > /tmp/${COMMIT_SHA}-${BUILD_NUMBER}
else
echo "false" > /tmp/${COMMIT_SHA}-${BUILD_NUMBER}
fi
mkdir -p ${TEMPDIR}/gitbook
git clone https://github.com/linuxserver/docker-documentation.git ${TEMPDIR}/gitbook/docker-documentation
if [[ "${BRANCH_NAME}" == "master" ]] && [[ (! -f ${TEMPDIR}/gitbook/docker-documentation/images/docker-${CONTAINER_NAME}.md) || ("$(md5sum ${TEMPDIR}/gitbook/docker-documentation/images/docker-${CONTAINER_NAME}.md | awk '{ print $1 }')" != "$(md5sum ${TEMPDIR}/${CONTAINER_NAME}/docker-${CONTAINER_NAME}.md | awk '{ print $1 }')") ]]; then
cp ${TEMPDIR}/${CONTAINER_NAME}/docker-${CONTAINER_NAME}.md ${TEMPDIR}/gitbook/docker-documentation/images/
if [[ "${BRANCH_NAME}" == "master" ]] && [[ (! -f ${TEMPDIR}/gitbook/docker-documentation/images/docker-${CONTAINER_NAME}.md) || ("$(md5sum ${TEMPDIR}/gitbook/docker-documentation/images/docker-${CONTAINER_NAME}.md | awk '{ print $1 }')" != "$(md5sum ${TEMPDIR}/docker-${CONTAINER_NAME}/docker-${CONTAINER_NAME}.md | awk '{ print $1 }')") ]]; then
cp ${TEMPDIR}/docker-${CONTAINER_NAME}/docker-${CONTAINER_NAME}.md ${TEMPDIR}/gitbook/docker-documentation/images/
cd ${TEMPDIR}/gitbook/docker-documentation/
git add images/docker-${CONTAINER_NAME}.md
git commit -m 'Bot Updating Templated Files'
git commit -m 'Bot Updating Documentation'
git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/linuxserver/docker-documentation.git --all
fi
rm -Rf ${TEMPDIR}'''
@ -269,7 +260,7 @@ pipeline {
// Exit the build if the Templated files were just updated
stage('Template-exit') {
when {
branch "master"
branch "files-alpine"
environment name: 'CHANGE_ID', value: ''
environment name: 'FILES_UPDATED', value: 'true'
expression {
@ -282,6 +273,26 @@ pipeline {
}
}
}
/* #######################
GitLab Mirroring
####################### */
// Ping into Gitlab to mirror this repo and have a registry endpoint
stage("GitLab Mirror"){
when {
environment name: 'EXIT_STATUS', value: ''
}
steps{
sh '''curl -H "Content-Type: application/json" -H "Private-Token: ${GITLAB_TOKEN}" -X POST https://gitlab.com/api/v4/projects \
-d '{"namespace_id":'${GITLAB_NAMESPACE}',\
"name":"'${LS_REPO}'",
"mirror":true,\
"import_url":"https://github.com/linuxserver/'${LS_REPO}'.git",\
"issues_access_level":"disabled",\
"merge_requests_access_level":"disabled",\
"repository_access_level":"enabled",\
"visibility":"public"}' '''
}
}
/* ###############
Build Container
############### */
@ -311,143 +322,46 @@ pipeline {
}
stage('Build ARMHF') {
agent {
label 'ARMHF'
label 'X86-64-MULTI'
}
steps {
withCredentials([
[
$class: 'UsernamePasswordMultiBinding',
credentialsId: '3f9ba4d5-100d-45b0-a3c4-633fd6061207',
usernameVariable: 'DOCKERUSER',
passwordVariable: 'DOCKERPASS'
]
]) {
echo 'Logging into DockerHub'
sh '''#! /bin/bash
echo $DOCKERPASS | docker login -u $DOCKERUSER --password-stdin
'''
sh "docker build --no-cache --pull -f Dockerfile.armhf -t ${IMAGE}:arm32v7-${META_TAG} \
--build-arg ${BUILD_VERSION_ARG}=${EXT_RELEASE} --build-arg VERSION=\"${META_TAG}\" --build-arg BUILD_DATE=${GITHUB_DATE} ."
sh "docker tag ${IMAGE}:arm32v7-${META_TAG} lsiodev/buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}"
sh "docker push lsiodev/buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}"
sh '''docker rmi \
${IMAGE}:arm32v7-${META_TAG} \
lsiodev/buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER} || :'''
echo 'Logging into Github'
sh '''#! /bin/bash
echo $GITHUB_TOKEN | docker login ghcr.io -u LinuxServer-CI --password-stdin
'''
sh "docker build --no-cache --pull -f Dockerfile.armhf -t ${IMAGE}:arm32v7-${META_TAG} \
--build-arg ${BUILD_VERSION_ARG}=${EXT_RELEASE} --build-arg VERSION=\"${META_TAG}\" --build-arg BUILD_DATE=${GITHUB_DATE} ."
sh "docker tag ${IMAGE}:arm32v7-${META_TAG} ghcr.io/linuxserver/lsiodev-buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}"
retry(5) {
sh "docker push ghcr.io/linuxserver/lsiodev-buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}"
}
sh '''docker rmi \
${IMAGE}:arm32v7-${META_TAG} \
ghcr.io/linuxserver/lsiodev-buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER} || :'''
}
}
stage('Build ARM64') {
agent {
label 'ARM64'
label 'X86-64-MULTI'
}
steps {
withCredentials([
[
$class: 'UsernamePasswordMultiBinding',
credentialsId: '3f9ba4d5-100d-45b0-a3c4-633fd6061207',
usernameVariable: 'DOCKERUSER',
passwordVariable: 'DOCKERPASS'
]
]) {
echo 'Logging into DockerHub'
sh '''#! /bin/bash
echo $DOCKERPASS | docker login -u $DOCKERUSER --password-stdin
'''
sh "docker build --no-cache --pull -f Dockerfile.aarch64 -t ${IMAGE}:arm64v8-${META_TAG} \
--build-arg ${BUILD_VERSION_ARG}=${EXT_RELEASE} --build-arg VERSION=\"${META_TAG}\" --build-arg BUILD_DATE=${GITHUB_DATE} ."
sh "docker tag ${IMAGE}:arm64v8-${META_TAG} lsiodev/buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}"
sh "docker push lsiodev/buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}"
sh '''docker rmi \
${IMAGE}:arm64v8-${META_TAG} \
lsiodev/buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} || :'''
echo 'Logging into Github'
sh '''#! /bin/bash
echo $GITHUB_TOKEN | docker login ghcr.io -u LinuxServer-CI --password-stdin
'''
sh "docker build --no-cache --pull -f Dockerfile.aarch64 -t ${IMAGE}:arm64v8-${META_TAG} \
--build-arg ${BUILD_VERSION_ARG}=${EXT_RELEASE} --build-arg VERSION=\"${META_TAG}\" --build-arg BUILD_DATE=${GITHUB_DATE} ."
sh "docker tag ${IMAGE}:arm64v8-${META_TAG} ghcr.io/linuxserver/lsiodev-buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}"
retry(5) {
sh "docker push ghcr.io/linuxserver/lsiodev-buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}"
}
sh '''docker rmi \
${IMAGE}:arm64v8-${META_TAG} \
ghcr.io/linuxserver/lsiodev-buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} || :'''
}
}
}
}
// Take the image we just built and dump package versions for comparison
stage('Update-packages') {
when {
branch "master"
environment name: 'CHANGE_ID', value: ''
environment name: 'EXIT_STATUS', value: ''
}
steps {
sh '''#! /bin/bash
set -e
TEMPDIR=$(mktemp -d)
if [ "${MULTIARCH}" == "true" ]; then
LOCAL_CONTAINER=${IMAGE}:amd64-${META_TAG}
else
LOCAL_CONTAINER=${IMAGE}:${META_TAG}
fi
if [ "${DIST_IMAGE}" == "alpine" ]; then
docker run --rm --entrypoint '/bin/sh' -v ${TEMPDIR}:/tmp ${LOCAL_CONTAINER} -c '\
apk info -v > /tmp/package_versions.txt && \
sort -o /tmp/package_versions.txt /tmp/package_versions.txt && \
chmod 777 /tmp/package_versions.txt'
elif [ "${DIST_IMAGE}" == "ubuntu" ]; then
docker run --rm --entrypoint '/bin/sh' -v ${TEMPDIR}:/tmp ${LOCAL_CONTAINER} -c '\
apt list -qq --installed | sed "s#/.*now ##g" | cut -d" " -f1 > /tmp/package_versions.txt && \
sort -o /tmp/package_versions.txt /tmp/package_versions.txt && \
chmod 777 /tmp/package_versions.txt'
fi
NEW_PACKAGE_TAG=$(md5sum ${TEMPDIR}/package_versions.txt | cut -c1-8 )
echo "Package tag sha from current packages in buit container is ${NEW_PACKAGE_TAG} comparing to old ${PACKAGE_TAG} from github"
if [ "${NEW_PACKAGE_TAG}" != "${PACKAGE_TAG}" ]; then
git clone https://github.com/${LS_USER}/${LS_REPO}.git ${TEMPDIR}/${LS_REPO}
git --git-dir ${TEMPDIR}/${LS_REPO}/.git checkout -f master
cp ${TEMPDIR}/package_versions.txt ${TEMPDIR}/${LS_REPO}/
cd ${TEMPDIR}/${LS_REPO}/
wait
git add package_versions.txt
git commit -m 'Bot Updating Package Versions'
git push https://LinuxServer-CI:${GITHUB_TOKEN}@github.com/${LS_USER}/${LS_REPO}.git --all
echo "true" > /tmp/packages-${COMMIT_SHA}-${BUILD_NUMBER}
echo "Package tag updated, stopping build process"
else
echo "false" > /tmp/packages-${COMMIT_SHA}-${BUILD_NUMBER}
echo "Package tag is same as previous continue with build process"
fi
rm -Rf ${TEMPDIR}'''
script{
env.PACKAGE_UPDATED = sh(
script: '''cat /tmp/packages-${COMMIT_SHA}-${BUILD_NUMBER}''',
returnStdout: true).trim()
}
}
}
// Exit the build if the package file was just updated
stage('PACKAGE-exit') {
when {
branch "master"
environment name: 'CHANGE_ID', value: ''
environment name: 'PACKAGE_UPDATED', value: 'true'
environment name: 'EXIT_STATUS', value: ''
}
steps {
script{
env.EXIT_STATUS = 'ABORTED'
}
}
}
// Exit the build if this is just a package check and there are no changes to push
stage('PACKAGECHECK-exit') {
when {
branch "master"
environment name: 'CHANGE_ID', value: ''
environment name: 'PACKAGE_UPDATED', value: 'false'
environment name: 'EXIT_STATUS', value: ''
expression {
params.PACKAGE_CHECK == 'true'
}
}
steps {
script{
env.EXIT_STATUS = 'ABORTED'
}
}
}
/* #######
Testing
####### */
@ -459,22 +373,23 @@ pipeline {
}
steps {
withCredentials([
string(credentialsId: 'spaces-key', variable: 'DO_KEY'),
string(credentialsId: 'spaces-secret', variable: 'DO_SECRET')
string(credentialsId: 'ci-tests-s3-key-id', variable: 'S3_KEY'),
string(credentialsId: 'ci-tests-s3-secret-access-key ', variable: 'S3_SECRET')
]) {
script{
env.CI_URL = 'https://lsio-ci.ams3.digitaloceanspaces.com/' + env.IMAGE + '/' + env.META_TAG + '/index.html'
env.CI_URL = 'https://ci-tests.linuxserver.io/' + env.IMAGE + '/' + env.META_TAG + '/index.html'
}
sh '''#! /bin/bash
set -e
docker pull lsiodev/ci:latest
docker pull ghcr.io/linuxserver/lsiodev-ci:latest
if [ "${MULTIARCH}" == "true" ]; then
docker pull lsiodev/buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}
docker pull lsiodev/buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}
docker tag lsiodev/buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm32v7-${META_TAG}
docker tag lsiodev/buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm64v8-${META_TAG}
docker pull ghcr.io/linuxserver/lsiodev-buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}
docker pull ghcr.io/linuxserver/lsiodev-buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}
docker tag ghcr.io/linuxserver/lsiodev-buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm32v7-${META_TAG}
docker tag ghcr.io/linuxserver/lsiodev-buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm64v8-${META_TAG}
fi
docker run --rm \
--shm-size=1gb \
-v /var/run/docker.sock:/var/run/docker.sock \
-e IMAGE=\"${IMAGE}\" \
-e DELAY_START=\"${CI_DELAY}\" \
@ -483,15 +398,15 @@ pipeline {
-e PORT=\"${CI_PORT}\" \
-e SSL=\"${CI_SSL}\" \
-e BASE=\"${DIST_IMAGE}\" \
-e SECRET_KEY=\"${DO_SECRET}\" \
-e ACCESS_KEY=\"${DO_KEY}\" \
-e SECRET_KEY=\"${S3_SECRET}\" \
-e ACCESS_KEY=\"${S3_KEY}\" \
-e DOCKER_ENV=\"${CI_DOCKERENV}\" \
-e WEB_SCREENSHOT=\"${CI_WEB}\" \
-e WEB_AUTH=\"${CI_AUTH}\" \
-e WEB_PATH=\"${CI_WEBPATH}\" \
-e DO_REGION="ams3" \
-e DO_BUCKET="lsio-ci" \
-t lsiodev/ci:latest \
-t ghcr.io/linuxserver/lsiodev-ci:latest \
python /ci/ci.py'''
}
}
@ -514,17 +429,30 @@ pipeline {
passwordVariable: 'DOCKERPASS'
]
]) {
echo 'Logging into DockerHub'
retry(5) {
sh '''#! /bin/bash
set -e
echo $DOCKERPASS | docker login -u $DOCKERUSER --password-stdin
echo $GITHUB_TOKEN | docker login ghcr.io -u LinuxServer-CI --password-stdin
echo $GITLAB_TOKEN | docker login registry.gitlab.com -u LinuxServer.io --password-stdin
for PUSHIMAGE in "${GITHUBIMAGE}" "${GITLABIMAGE}" "${IMAGE}"; do
docker tag ${IMAGE}:${META_TAG} ${PUSHIMAGE}:${META_TAG}
docker tag ${PUSHIMAGE}:${META_TAG} ${PUSHIMAGE}:files-alpine
docker tag ${PUSHIMAGE}:${META_TAG} ${PUSHIMAGE}:${EXT_RELEASE_TAG}
docker push ${PUSHIMAGE}:files-alpine
docker push ${PUSHIMAGE}:${META_TAG}
docker push ${PUSHIMAGE}:${EXT_RELEASE_TAG}
done
'''
}
sh '''#! /bin/bash
echo $DOCKERPASS | docker login -u $DOCKERUSER --password-stdin
for DELETEIMAGE in "${GITHUBIMAGE}" "{GITLABIMAGE}" "${IMAGE}"; do
docker rmi \
${DELETEIMAGE}:${META_TAG} \
${DELETEIMAGE}:${EXT_RELEASE_TAG} \
${DELETEIMAGE}:files-alpine || :
done
'''
sh "docker tag ${IMAGE}:${META_TAG} ${IMAGE}:latest"
sh "docker push ${IMAGE}:latest"
sh "docker push ${IMAGE}:${META_TAG}"
sh '''docker rmi \
${IMAGE}:${META_TAG} \
${IMAGE}:latest || :'''
}
}
}
@ -543,42 +471,61 @@ pipeline {
passwordVariable: 'DOCKERPASS'
]
]) {
sh '''#! /bin/bash
echo $DOCKERPASS | docker login -u $DOCKERUSER --password-stdin
'''
sh '''#! /bin/bash
if [ "${CI}" == "false" ]; then
docker pull lsiodev/buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}
docker pull lsiodev/buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}
docker tag lsiodev/buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm32v7-${META_TAG}
docker tag lsiodev/buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm64v8-${META_TAG}
fi'''
sh "docker tag ${IMAGE}:amd64-${META_TAG} ${IMAGE}:amd64-latest"
sh "docker tag ${IMAGE}:arm32v7-${META_TAG} ${IMAGE}:arm32v7-latest"
sh "docker tag ${IMAGE}:arm64v8-${META_TAG} ${IMAGE}:arm64v8-latest"
sh "docker push ${IMAGE}:amd64-${META_TAG}"
sh "docker push ${IMAGE}:arm32v7-${META_TAG}"
sh "docker push ${IMAGE}:arm64v8-${META_TAG}"
sh "docker push ${IMAGE}:amd64-latest"
sh "docker push ${IMAGE}:arm32v7-latest"
sh "docker push ${IMAGE}:arm64v8-latest"
sh "docker manifest push --purge ${IMAGE}:latest || :"
sh "docker manifest create ${IMAGE}:latest ${IMAGE}:amd64-latest ${IMAGE}:arm32v7-latest ${IMAGE}:arm64v8-latest"
sh "docker manifest annotate ${IMAGE}:latest ${IMAGE}:arm32v7-latest --os linux --arch arm"
sh "docker manifest annotate ${IMAGE}:latest ${IMAGE}:arm64v8-latest --os linux --arch arm64 --variant v8"
sh "docker manifest push --purge ${IMAGE}:${META_TAG} || :"
sh "docker manifest create ${IMAGE}:${META_TAG} ${IMAGE}:amd64-${META_TAG} ${IMAGE}:arm32v7-${META_TAG} ${IMAGE}:arm64v8-${META_TAG}"
sh "docker manifest annotate ${IMAGE}:${META_TAG} ${IMAGE}:arm32v7-${META_TAG} --os linux --arch arm"
sh "docker manifest annotate ${IMAGE}:${META_TAG} ${IMAGE}:arm64v8-${META_TAG} --os linux --arch arm64 --variant v8"
sh "docker manifest push --purge ${IMAGE}:latest"
sh "docker manifest push --purge ${IMAGE}:${META_TAG}"
retry(5) {
sh '''#! /bin/bash
set -e
echo $DOCKERPASS | docker login -u $DOCKERUSER --password-stdin
echo $GITHUB_TOKEN | docker login ghcr.io -u LinuxServer-CI --password-stdin
echo $GITLAB_TOKEN | docker login registry.gitlab.com -u LinuxServer.io --password-stdin
if [ "${CI}" == "false" ]; then
docker pull ghcr.io/linuxserver/lsiodev-buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER}
docker pull ghcr.io/linuxserver/lsiodev-buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER}
docker tag ghcr.io/linuxserver/lsiodev-buildcache:arm32v7-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm32v7-${META_TAG}
docker tag ghcr.io/linuxserver/lsiodev-buildcache:arm64v8-${COMMIT_SHA}-${BUILD_NUMBER} ${IMAGE}:arm64v8-${META_TAG}
fi
for MANIFESTIMAGE in "${IMAGE}" "${GITLABIMAGE}" "${GITHUBIMAGE}"; do
docker tag ${IMAGE}:amd64-${META_TAG} ${MANIFESTIMAGE}:amd64-${META_TAG}
docker tag ${IMAGE}:arm32v7-${META_TAG} ${MANIFESTIMAGE}:arm32v7-${META_TAG}
docker tag ${IMAGE}:arm64v8-${META_TAG} ${MANIFESTIMAGE}:arm64v8-${META_TAG}
docker tag ${MANIFESTIMAGE}:amd64-${META_TAG} ${MANIFESTIMAGE}:amd64-files-alpine
docker tag ${MANIFESTIMAGE}:arm32v7-${META_TAG} ${MANIFESTIMAGE}:arm32v7-files-alpine
docker tag ${MANIFESTIMAGE}:arm64v8-${META_TAG} ${MANIFESTIMAGE}:arm64v8-files-alpine
docker tag ${MANIFESTIMAGE}:amd64-${META_TAG} ${MANIFESTIMAGE}:amd64-${EXT_RELEASE_TAG}
docker tag ${MANIFESTIMAGE}:arm32v7-${META_TAG} ${MANIFESTIMAGE}:arm32v7-${EXT_RELEASE_TAG}
docker tag ${MANIFESTIMAGE}:arm64v8-${META_TAG} ${MANIFESTIMAGE}:arm64v8-${EXT_RELEASE_TAG}
docker push ${MANIFESTIMAGE}:amd64-${META_TAG}
docker push ${MANIFESTIMAGE}:arm32v7-${META_TAG}
docker push ${MANIFESTIMAGE}:arm64v8-${META_TAG}
docker push ${MANIFESTIMAGE}:amd64-files-alpine
docker push ${MANIFESTIMAGE}:arm32v7-files-alpine
docker push ${MANIFESTIMAGE}:arm64v8-files-alpine
docker push ${MANIFESTIMAGE}:amd64-${EXT_RELEASE_TAG}
docker push ${MANIFESTIMAGE}:arm32v7-${EXT_RELEASE_TAG}
docker push ${MANIFESTIMAGE}:arm64v8-${EXT_RELEASE_TAG}
docker manifest push --purge ${MANIFESTIMAGE}:files-alpine || :
docker manifest create ${MANIFESTIMAGE}:files-alpine ${MANIFESTIMAGE}:amd64-files-alpine ${MANIFESTIMAGE}:arm32v7-files-alpine ${MANIFESTIMAGE}:arm64v8-files-alpine
docker manifest annotate ${MANIFESTIMAGE}:files-alpine ${MANIFESTIMAGE}:arm32v7-files-alpine --os linux --arch arm
docker manifest annotate ${MANIFESTIMAGE}:files-alpine ${MANIFESTIMAGE}:arm64v8-files-alpine --os linux --arch arm64 --variant v8
docker manifest push --purge ${MANIFESTIMAGE}:${META_TAG} || :
docker manifest create ${MANIFESTIMAGE}:${META_TAG} ${MANIFESTIMAGE}:amd64-${META_TAG} ${MANIFESTIMAGE}:arm32v7-${META_TAG} ${MANIFESTIMAGE}:arm64v8-${META_TAG}
docker manifest annotate ${MANIFESTIMAGE}:${META_TAG} ${MANIFESTIMAGE}:arm32v7-${META_TAG} --os linux --arch arm
docker manifest annotate ${MANIFESTIMAGE}:${META_TAG} ${MANIFESTIMAGE}:arm64v8-${META_TAG} --os linux --arch arm64 --variant v8
docker manifest create ${MANIFESTIMAGE}:${EXT_RELEASE_TAG} ${MANIFESTIMAGE}:amd64-${EXT_RELEASE_TAG} ${MANIFESTIMAGE}:arm32v7-${EXT_RELEASE_TAG} ${MANIFESTIMAGE}:arm64v8-${EXT_RELEASE_TAG}
docker manifest annotate ${MANIFESTIMAGE}:${EXT_RELEASE_TAG} ${MANIFESTIMAGE}:arm32v7-${EXT_RELEASE_TAG} --os linux --arch arm
docker manifest annotate ${MANIFESTIMAGE}:${EXT_RELEASE_TAG} ${MANIFESTIMAGE}:arm64v8-${EXT_RELEASE_TAG} --os linux --arch arm64 --variant v8
docker manifest push --purge ${MANIFESTIMAGE}:files-alpine
docker manifest push --purge ${MANIFESTIMAGE}:${META_TAG}
docker manifest push --purge ${MANIFESTIMAGE}:${EXT_RELEASE_TAG}
done
'''
}
}
}
}
// If this is a public release tag it in the LS Github
stage('Github-Tag-Push-Release') {
when {
branch "master"
branch "files-alpine"
expression {
env.LS_RELEASE != env.EXT_RELEASE_CLEAN + '-ls' + env.LS_TAG_NUMBER
}
@ -590,17 +537,17 @@ pipeline {
sh '''curl -H "Authorization: token ${GITHUB_TOKEN}" -X POST https://api.github.com/repos/${LS_USER}/${LS_REPO}/git/tags \
-d '{"tag":"'${EXT_RELEASE_CLEAN}'-ls'${LS_TAG_NUMBER}'",\
"object": "'${COMMIT_SHA}'",\
"message": "Tagging Release '${EXT_RELEASE_CLEAN}'-ls'${LS_TAG_NUMBER}' to master",\
"message": "Tagging Release '${EXT_RELEASE_CLEAN}'-ls'${LS_TAG_NUMBER}' to files-alpine",\
"type": "commit",\
"tagger": {"name": "LinuxServer Jenkins","email": "jenkins@linuxserver.io","date": "'${GITHUB_DATE}'"}}' '''
echo "Pushing New release for Tag"
sh '''#! /bin/bash
curl -s https://api.github.com/repos/${EXT_USER}/${EXT_REPO}/commits/${EXT_GIT_BRANCH} | jq '. | .commit.message' | sed 's:^.\\(.*\\).$:\\1:' > releasebody.json
echo "Updating to ${EXT_RELEASE_CLEAN}" > releasebody.json
echo '{"tag_name":"'${EXT_RELEASE_CLEAN}'-ls'${LS_TAG_NUMBER}'",\
"target_commitish": "master",\
"target_commitish": "files-alpine",\
"name": "'${EXT_RELEASE_CLEAN}'-ls'${LS_TAG_NUMBER}'",\
"body": "**LinuxServer Changes:**\\n\\n'${LS_RELEASE_NOTES}'\\n**'${EXT_REPO}' Changes:**\\n\\n' > start
printf '","draft": false,"prerelease": false}' >> releasebody.json
"body": "**LinuxServer Changes:**\\n\\n'${LS_RELEASE_NOTES}'\\n**Remote Changes:**\\n\\n' > start
printf '","draft": false,"prerelease": true}' >> releasebody.json
paste -d'\\0' start releasebody.json > releasebody.json.done
curl -H "Authorization: token ${GITHUB_TOKEN}" -X POST https://api.github.com/repos/${LS_USER}/${LS_REPO}/releases -d @releasebody.json.done'''
}
@ -621,14 +568,20 @@ pipeline {
]
]) {
sh '''#! /bin/bash
docker pull lsiodev/readme-sync
set -e
TEMPDIR=$(mktemp -d)
docker pull ghcr.io/linuxserver/jenkins-builder:latest
docker run --rm -e CONTAINER_NAME=${CONTAINER_NAME} -e GITHUB_BRANCH="${BRANCH_NAME}" -v ${TEMPDIR}:/ansible/jenkins ghcr.io/linuxserver/jenkins-builder:latest
docker pull ghcr.io/linuxserver/lsiodev-readme-sync
docker run --rm=true \
-e DOCKERHUB_USERNAME=$DOCKERUSER \
-e DOCKERHUB_PASSWORD=$DOCKERPASS \
-e GIT_REPOSITORY=${LS_USER}/${LS_REPO} \
-e DOCKER_REPOSITORY=${IMAGE} \
-e GIT_BRANCH=master \
lsiodev/readme-sync bash -c 'node sync' '''
-v ${TEMPDIR}/docker-${CONTAINER_NAME}:/mnt \
ghcr.io/linuxserver/lsiodev-readme-sync bash -c 'node sync'
rm -Rf ${TEMPDIR} '''
}
}
}
@ -655,16 +608,19 @@ pipeline {
sh 'echo "build aborted"'
}
else if (currentBuild.currentResult == "SUCCESS"){
sh ''' curl -X POST --data '{"avatar_url": "https://wiki.jenkins-ci.org/download/attachments/2916393/headshot.png","embeds": [{"color": 1681177,\
sh ''' curl -X POST -H "Content-Type: application/json" --data '{"avatar_url": "https://wiki.jenkins-ci.org/download/attachments/2916393/headshot.png","embeds": [{"color": 1681177,\
"description": "**Build:** '${BUILD_NUMBER}'\\n**CI Results:** '${CI_URL}'\\n**ShellCheck Results:** '${SHELLCHECK_URL}'\\n**Status:** Success\\n**Job:** '${RUN_DISPLAY_URL}'\\n**Change:** '${CODE_URL}'\\n**External Release:**: '${RELEASE_LINK}'\\n**DockerHub:** '${DOCKERHUB_LINK}'\\n"}],\
"username": "Jenkins"}' ${BUILDS_DISCORD} '''
}
else {
sh ''' curl -X POST --data '{"avatar_url": "https://wiki.jenkins-ci.org/download/attachments/2916393/headshot.png","embeds": [{"color": 16711680,\
sh ''' curl -X POST -H "Content-Type: application/json" --data '{"avatar_url": "https://wiki.jenkins-ci.org/download/attachments/2916393/headshot.png","embeds": [{"color": 16711680,\
"description": "**Build:** '${BUILD_NUMBER}'\\n**CI Results:** '${CI_URL}'\\n**ShellCheck Results:** '${SHELLCHECK_URL}'\\n**Status:** failure\\n**Job:** '${RUN_DISPLAY_URL}'\\n**Change:** '${CODE_URL}'\\n**External Release:**: '${RELEASE_LINK}'\\n**DockerHub:** '${DOCKERHUB_LINK}'\\n"}],\
"username": "Jenkins"}' ${BUILDS_DISCORD} '''
}
}
}
cleanup {
cleanWs()
}
}
}

README.md
View file

@ -1,3 +1,6 @@
<!-- DO NOT EDIT THIS FILE MANUALLY -->
<!-- Please read the CONTRIBUTING.md -->
[linuxserverurl]: https://linuxserver.io
[forumurl]: https://forum.linuxserver.io
[ircurl]: https://www.linuxserver.io/irc/

jenkins-vars.yml
View file

@ -2,14 +2,14 @@
# jenkins variables
project_name: docker-baseimage-cloud9
external_type: github_commit
release_type: stable
release_tag: latest
ls_branch: master
external_type: na
custom_version_command: "echo c4d1c59d-alpine"
release_type: prerelease
release_tag: files-alpine
ls_branch: files-alpine
skip_package_check: true
use_qemu: true
repo_vars:
- EXT_GIT_BRANCH = 'master'
- EXT_USER = 'c9'
- EXT_REPO = 'core'
- BUILD_VERSION_ARG = 'CLOUD9_VERSION'
- LS_USER = 'linuxserver'
- LS_REPO = 'docker-baseimage-cloud9'
@ -17,13 +17,6 @@ repo_vars:
- DOCKERHUB_IMAGE = 'lsiobase/cloud9'
- DEV_DOCKERHUB_IMAGE = 'lsiodev/cloud9-base'
- PR_DOCKERHUB_IMAGE = 'lspipepr/cloud9-base'
- DIST_IMAGE = 'ubuntu'
- DIST_IMAGE = 'alpine'
- MULTIARCH='true'
- CI='true'
- CI_WEB='true'
- CI_PORT='8000'
- CI_SSL='false'
- CI_DELAY='120'
- CI_DOCKERENV='TZ=US/Pacific'
- CI_AUTH='user:password'
- CI_WEBPATH=''
- CI='false'

View file

@ -1,31 +0,0 @@
#!/usr/bin/with-contenv bash
# check for lock file to only run git operations once
if [ ! -e /lock.file ]; then
# give abc a sudo shell for development
chsh abc -s /bin/bash
sed -e 's/%sudo ALL=(ALL:ALL) ALL/%sudo ALL=(ALL:ALL) NOPASSWD: ALL/g' \
-i /etc/sudoers
sed -e 's/^wheel:\(.*\)/wheel:\1,abc/g' -i /etc/group
# create directory for project
mkdir -p /code
# make sure URL is set and folder is empty to clone code
if [ ${GITURL+x} ] && [ ! "$(/bin/ls -A /code 2>/dev/null)" ] ; then \
# clone the url the user passed to this directory
git clone "${GITURL}" /code
fi
else
# lock exists not importing project this is a restart
echo "Lock exists just starting cloud9"
fi
# create lock file after first run
touch /lock.file
# permissions
mkdir -p /c9sdk/build/standalone
echo "[cont-init.d] Setting permissions this may take some time"
chown -R abc:abc \
/c9sdk/build/standalone \
/code \
/c9bins/.c9

View file

@ -1,6 +0,0 @@
#!/usr/bin/with-contenv bash
cd /c9sdk
HOME=/c9bins exec \
s6-setuidgid abc \
/c9bins/.c9/node/bin/node server.js --listen 0.0.0.0 -p 8000 -w /code -a :

367
root/install.sh 100644
View file

@ -0,0 +1,367 @@
#!/bin/bash -e
set -e
has() {
type "$1" > /dev/null 2>&1
}
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
# exec > >(tee /tmp/installlog.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
exec 2>&1
if has "wget"; then
DOWNLOAD() {
wget --no-check-certificate -nc -O "$2" "$1"
}
elif has "curl"; then
DOWNLOAD() {
curl -sSL -o "$2" "$1"
}
else
echo "Error: you need curl or wget to proceed" >&2;
exit 1
fi
C9_DIR=$HOME/.c9
if [[ ${1-} == -d ]]; then
C9_DIR=$2
shift 2
fi
# Check if C9_DIR exists
if [ ! -d "$C9_DIR" ]; then
mkdir -p $C9_DIR
fi
VERSION=1
NODE_VERSION=v6.3.1
NODE_VERSION_ARM_PI=v0.10.28
NPM=$C9_DIR/node/bin/npm
NODE=$C9_DIR/node/bin/node
export TMP=$C9_DIR/tmp
export TMPDIR=$TMP
PYTHON=python
# node-gyp uses sytem node or fails with command not found if
# we don't bump this node up in the path
PATH="$C9_DIR/node/bin/:$C9_DIR/node_modules/.bin:$PATH"
start() {
if [ $# -lt 1 ]; then
start base
return
fi
check_deps
# Try to figure out the os and arch for binary fetching
local uname="$(uname -s)"
local os=
local arch="$(uname -m)"
case "$uname" in
Linux*) os=linux ;;
Darwin*) os=darwin ;;
SunOS*) os=sunos ;;
FreeBSD*) os=freebsd ;;
CYGWIN*) os=windows ;;
MINGW*) os=windows ;;
esac
case "$arch" in
*arm64*) arch=arm64 ;;
*aarch64*) arch=arm64 ;;
*armv6l*) arch=armv6l ;;
*armv7l*) arch=armv7l ;;
*x86_64*) arch=x64 ;;
*i*86*) arch=x86 ;;
*)
echo "Unsupported Architecture: $os $arch" 1>&2
exit 1
;;
esac
if [ "$arch" == "x64" ] && [[ $HOSTTYPE == i*86 ]]; then
arch=x86 # check if 32 bit bash is installed on 64 bit kernel
fi
if [ "$os" != "linux" ] && [ "$os" != "darwin" ]; then
echo "Unsupported Platform: $os $arch" 1>&2
exit 1
fi
case $1 in
"help" )
echo
echo "Cloud9 Installer"
echo
echo "Usage:"
echo " install help Show this message"
echo " install install [name [name ...]] Download and install a set of packages"
echo " install ls List available packages"
echo
;;
"ls" )
echo "!node - Node.js"
echo "!tmux - TMUX"
echo "!nak - NAK"
echo "!ptyjs - pty.js"
echo "!collab - collab"
echo "coffee - Coffee Script"
echo "less - Less"
echo "sass - Sass"
echo "typescript - TypeScript"
echo "stylus - Stylus"
# echo "go - Go"
# echo "heroku - Heroku"
# echo "rhc - RedHat OpenShift"
# echo "gae - Google AppEngine"
;;
"install" )
shift
# make sure dirs are around
mkdir -p "$C9_DIR"/bin
mkdir -p "$C9_DIR"/tmp
mkdir -p "$C9_DIR"/node_modules
# install packages
while [ $# -ne 0 ]
do
if [ "$1" == "tmux" ]; then
cd "$C9_DIR"
time tmux_install $os $arch
shift
continue
fi
cd "$C9_DIR"
time eval ${1} $os $arch
shift
done
# finalize
pushd "$C9_DIR"/node_modules/.bin
for FILE in "$C9_DIR"/node_modules/.bin/*; do
FILE=$(readlink "$FILE")
# can't use the -i flag since it is not compatible between bsd and gnu versions
sed -e's/#!\/usr\/bin\/env node/#!'"${NODE//\//\\/}/" "$FILE" > "$FILE.tmp-sed"
mv "$FILE.tmp-sed" "$FILE"
done
popd
echo $VERSION > "$C9_DIR"/installed
cd "$C9_DIR"
DOWNLOAD https://raw.githubusercontent.com/c9/install/master/packages/license-notice.md "Third-Party Licensing Notices.md"
echo :Done.
;;
"base" )
echo "Installing base packages. Use --help for more options"
start install node tmux_install nak ptyjs collab
;;
* )
start base
;;
esac
}
check_deps() {
local ERR
local OS
if [[ `cat /etc/os-release 2>/dev/null` =~ CentOS ]]; then
OS="CentOS"
elif [[ `cat /proc/version 2>/dev/null` =~ Ubuntu|Debian ]]; then
OS="DEBIAN"
fi
for DEP in make gcc; do
if ! has $DEP; then
echo "Error: please install $DEP to proceed" >&2
if [ "$OS" == "CentOS" ]; then
echo "To do so, log into your machine and type 'yum groupinstall -y development'" >&2
elif [ "$OS" == "DEBIAN" ]; then
echo "To do so, log into your machine and type 'sudo apt-get install build-essential'" >&2
fi
ERR=1
fi
done
# CentOS
if [ "$OS" == "CentOS" ]; then
if ! yum list installed glibc-static >/dev/null 2>&1; then
echo "Error: please install glibc-static to proceed" >&2
echo "To do so, log into your machine and type 'yum install glibc-static'" >&2
ERR=1
fi
fi
check_python
if [ "$ERR" ]; then exit 1; fi
}
check_python() {
if type -P python2.7 &> /dev/null; then
PYTHONVERSION="2.7"
PYTHON="python2.7"
elif type -P python &> /dev/null; then
PYTHONVERSION=`python -c 'import sys; print(".".join(map(str, sys.version_info[:2])))'`
PYTHON="python"
fi
if [[ $PYTHONVERSION != "2.7" ]]; then
echo "Python version 2.7 is required to install pty.js. Please install python 2.7 and try again. You can find more information on how to install Python in the docs: https://docs.c9.io/ssh_workspaces.html"
exit 100
fi
}
# NodeJS
download_virtualenv() {
VIRTUALENV_VERSION="virtualenv-12.0.7"
DOWNLOAD "https://pypi.python.org/packages/source/v/virtualenv/$VIRTUALENV_VERSION.tar.gz" $VIRTUALENV_VERSION.tar.gz
tar xzf $VIRTUALENV_VERSION.tar.gz
rm $VIRTUALENV_VERSION.tar.gz
mv $VIRTUALENV_VERSION virtualenv
}
ensure_local_gyp() {
# when gyp is installed globally npm install pty.js won't work
# to test this use `sudo apt-get install gyp`
if [ `"$PYTHON" -c 'import gyp; print gyp.__file__' 2> /dev/null` ]; then
echo "You have a global gyp installed. Setting up VirtualEnv without global pakages"
rm -rf virtualenv
rm -rf python
if has virtualenv; then
virtualenv -p python2 "$C9_DIR/python"
else
download_virtualenv
"$PYTHON" virtualenv/virtualenv.py "$C9_DIR/python"
fi
if [[ -f "$C9_DIR/python/bin/python2" ]]; then
PYTHON="$C9_DIR/python/bin/python2"
else
echo "Unable to setup virtualenv"
exit 1
fi
fi
"$NPM" config -g set python "$PYTHON"
"$NPM" config -g set unsafe-perm true
local GYP_PATH=$C9_DIR/node/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js
if [ -f "$GYP_PATH" ]; then
ln -s "$GYP_PATH" "$C9_DIR"/node/bin/node-gyp &> /dev/null || :
fi
}
node(){
echo :Installing Node $NODE_VERSION
DOWNLOAD https://nodejs.org/dist/"$NODE_VERSION/node-$NODE_VERSION-$1-$2.tar.gz" node.tar.gz
tar xzf node.tar.gz
mv "node-$NODE_VERSION-$1-$2" node
rm -f node.tar.gz
cp /usr/bin/node /c9bins/.c9/node/bin/
# use local npm cache
"$NPM" config -g set cache "$C9_DIR/tmp/.npm"
ensure_local_gyp
}
tmux_install(){
mkdir -p /c9bins/.c9/bin/
cp /usr/bin/tmux /c9bins/.c9/bin/tmux
}
collab(){
echo :Installing Collab Dependencies
"$NPM" cache clean
"$NPM" install sqlite3
"$NPM" install sequelize@2.0.0-beta.0
mkdir -p "$C9_DIR"/lib
cd "$C9_DIR"/lib
DOWNLOAD https://raw.githubusercontent.com/c9/install/master/packages/sqlite3/linux/sqlite3.tar.gz sqlite3.tar.gz
tar xzf sqlite3.tar.gz
rm sqlite3.tar.gz
ln -sf "$C9_DIR"/lib/sqlite3/sqlite3 "$C9_DIR"/bin/sqlite3
}
nak(){
echo :Installing Nak
"$NPM" install https://github.com/c9/nak/tarball/c9
}
ptyjs(){
echo :Installing pty.js
"$NPM" install node-pty
if ! hasPty; then
echo "Unknown exception installing pty.js"
"$C9_DIR/node/bin/node" -e "console.log(require('node-pty'))"
exit 100
fi
}
hasPty() {
local HASPTY=$("$C9_DIR/node/bin/node" -p "typeof require('node-pty').createTerminal=='function'" 2> /dev/null)
if [ "$HASPTY" != true ]; then
return 1
fi
}
coffee(){
echo :Installing Coffee Script
"$NPM" install coffee
}
less(){
echo :Installing Less
"$NPM" install less
}
sass(){
echo :Installing Sass
"$NPM" install sass
}
typescript(){
echo :Installing TypeScript
"$NPM" install typescript
}
stylus(){
echo :Installing Stylus
"$NPM" install stylus
}
# go(){
# }
# heroku(){
# }
# rhc(){
# }
# gae(){
# }
start "$@"
# cleanup tmp files
rm -rf "$C9_DIR/tmp"