Support for transparent encryption

Co-authored-by: Florin Peter <Florin-Alexandru.Peter@t-systems.com>
pull/413/head
Florin Peter 2022-03-26 14:05:08 +01:00 committed by GitHub
parent f536835aa8
commit 217308abd7
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
19 changed files with 3066 additions and 4 deletions


@@ -31,6 +31,9 @@ ENV \
S3PROXY_CORS_ALLOW_METHODS="" \
S3PROXY_CORS_ALLOW_HEADERS="" \
S3PROXY_IGNORE_UNKNOWN_HEADERS="false" \
S3PROXY_ENCRYPTED_BLOBSTORE="" \
S3PROXY_ENCRYPTED_BLOBSTORE_PASSWORD="" \
S3PROXY_ENCRYPTED_BLOBSTORE_SALT="" \
JCLOUDS_PROVIDER="filesystem" \
JCLOUDS_ENDPOINT="" \
JCLOUDS_REGION="" \

docs/Encryption.md 100644

@@ -0,0 +1,76 @@
# Encryption
## Motivation
The motivation behind this implementation is to provide fully transparent and secure encryption to the S3 client while retaining the ability to write into different clouds.
## Cipher mode
The chosen cipher is ```AES/CFB/NoPadding``` because it allows reading from an arbitrary offset, for example from the middle of a ```Blob```.
When reading from an offset, the decryption process needs to consider the previous 16 bytes of the AES block.
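This offset behavior can be demonstrated with the JCE directly. The following minimal sketch (not the commit's ```Decryption``` class) shows that re-initializing a CFB cipher with the previous 16 ciphertext bytes as the IV decrypts correctly from a block-aligned offset:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class CfbOffsetDemo {
    // Decrypts ciphertext starting at a block-aligned offset by seeding a
    // fresh cipher with the 16 ciphertext bytes preceding the offset.
    public static boolean decryptFromOffsetMatches() throws Exception {
        byte[] key = new byte[16];
        byte[] iv = new byte[16];
        SecureRandom random = new SecureRandom();
        random.nextBytes(key);
        random.nextBytes(iv);
        SecretKeySpec secretKey = new SecretKeySpec(key, "AES");

        byte[] plaintext = new byte[64];
        for (int i = 0; i < plaintext.length; i++) {
            plaintext[i] = (byte) i;
        }

        Cipher enc = Cipher.getInstance("AES/CFB/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, secretKey, new IvParameterSpec(iv));
        byte[] ciphertext = enc.doFinal(plaintext);

        // To start reading at byte 32, use ciphertext bytes 16..31
        // (the previous AES block) as the IV.
        int offset = 32;
        byte[] prevBlock = Arrays.copyOfRange(ciphertext, offset - 16, offset);
        Cipher dec = Cipher.getInstance("AES/CFB/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, secretKey,
            new IvParameterSpec(prevBlock));
        byte[] tail = dec.doFinal(
            Arrays.copyOfRange(ciphertext, offset, ciphertext.length));

        return Arrays.equals(tail,
            Arrays.copyOfRange(plaintext, offset, plaintext.length));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(decryptFromOffsetMatches());
    }
}
```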
### Key generation
The encryption uses a 128-bit key that is derived from a given password and salt, in combination with a random initialization vector (IV) that is stored in each part padding.
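The derivation matches ```EncryptedBlobStore#initStore``` later in this commit: PBKDF2 with HMAC-SHA256 and 65536 iterations, producing a 128-bit AES key:

```java
import java.security.spec.KeySpec;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public final class KeyDerivation {
    // Same parameters as EncryptedBlobStore#initStore in this commit:
    // PBKDF2WithHmacSHA256, 65536 iterations, 128-bit output.
    public static SecretKeySpec deriveKey(String password, String salt)
            throws Exception {
        SecretKeyFactory factory =
            SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        KeySpec spec = new PBEKeySpec(password.toCharArray(),
            salt.getBytes(), 65536, 128);
        return new SecretKeySpec(
            factory.generateSecret(spec).getEncoded(), "AES");
    }

    public static void main(String[] args) throws Exception {
        // a 128-bit key is 16 bytes long
        System.out.println(deriveKey("secret", "salt").getEncoded().length);
    }
}
```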
## How a blob is encrypted
Every uploaded part gets a padding of 64 bytes that includes the information necessary for decryption. The input stream from an S3 client is passed through a ```CipherInputStream``` and piped to append the 64-byte part padding at the end of the encrypted stream. The encrypted input stream is then processed by the ```BlobStore``` to save the ```Blob```.
| Name      | Byte size | Description                                                    |
|-----------|-----------|----------------------------------------------------------------|
| Delimiter | 8 bytes   | The delimiter is used to detect if the ```Blob``` is encrypted |
| IV        | 16 bytes  | AES initialization vector                                      |
| Part      | 4 bytes   | The part number                                                |
| Size      | 8 bytes   | The unencrypted size of the ```Blob```                         |
| Version   | 2 bytes   | Version can be used in the future if changes are necessary     |
| Reserved  | 26 bytes  | Reserved for future use                                        |
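The table above can be sketched as a serializer. The field order used here is an assumption for illustration only; the authoritative layout is in the commit's padding implementation. The ```-S3-ENC-``` delimiter and the 64-byte block size come from the ```Constants``` class in this commit:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public final class PartPaddingSketch {
    static final byte[] DELIMITER =
        "-S3-ENC-".getBytes(StandardCharsets.UTF_8);
    static final int PADDING_BLOCK_SIZE = 64;

    // Serializes the fields from the table into a 64-byte block.
    // Field order is assumed, not taken from the real implementation.
    public static byte[] build(byte[] iv, int part, long size,
            short version) {
        ByteBuffer buf = ByteBuffer.allocate(PADDING_BLOCK_SIZE);
        buf.put(DELIMITER);    // 8 bytes: marks the blob as encrypted
        buf.put(iv);           // 16 bytes: AES initialization vector
        buf.putInt(part);      // 4 bytes: part number
        buf.putLong(size);     // 8 bytes: unencrypted size
        buf.putShort(version); // 2 bytes: format version
        // the remaining 26 reserved bytes stay zeroed
        return buf.array();
    }

    public static void main(String[] args) {
        System.out.println(build(new byte[16], 1, 1024L, (short) 1).length);
    }
}
```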
### Multipart handling
A single ```Blob``` can be uploaded by the client in multiple parts. After completion, all parts are concatenated into a single ```Blob```.
This procedure results in multiple parts and paddings being held by a single ```Blob```.
### Single blob example
```
-------------------------------------
| ENCRYPTED BYTES | PADDING |
-------------------------------------
```
### Multipart blob example
```
-------------------------------------------------------------------------------------
| ENCRYPTED BYTES | PADDING | ENCRYPTED BYTES | PADDING | ENCRYPTED BYTES | PADDING |
-------------------------------------------------------------------------------------
```
## How a blob is decrypted
Decryption is considerably more complex than encryption. The decryption process needs to handle the following cases:
- decryption of the entire ```Blob```
- decryption from a specific offset by skipping initial bytes
- decryption of bytes read from the end (tail)
- decryption of a specific byte range, for example from the middle of the ```Blob```
- all of the above while taking an underlying multipart ```Blob``` into account
### Single blob decryption
First the ```BlobMetadata``` is requested to get the encrypted ```Blob``` size. The last 64 bytes (the ```PartPadding```) are fetched and inspected to detect whether decryption is necessary.
The cipher is then initialized with the IV and the key.
### Multipart blob decryption
The process is similar to single ```Blob``` decryption, with the difference that a list of parts is computed by fetching all ```PartPadding``` blocks from the end to the beginning.
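The backward walk can be sketched as follows. This is a hypothetical illustration, not the commit's code: ```readSizeFromPadding``` stands in for fetching the real ```PartPadding```, and because CFB ciphertext has the same length as the plaintext, each part occupies its unencrypted size plus 64 padding bytes:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class PartWalkSketch {
    static final int PADDING_BLOCK_SIZE = 64;

    // Walks from the end of the stored blob to the beginning, reading the
    // unencrypted size out of each part's padding to find part boundaries.
    public static Deque<Long> partSizes(long blobSize,
            SizeReader readSizeFromPadding) {
        Deque<Long> sizes = new ArrayDeque<>();
        long pos = blobSize;
        while (pos > 0) {
            long size = readSizeFromPadding.sizeAt(pos - PADDING_BLOCK_SIZE);
            sizes.addFirst(size);
            // skip over this part's ciphertext and its padding
            pos -= size + PADDING_BLOCK_SIZE;
        }
        return sizes;
    }

    // stand-in for reading the Size field of a PartPadding at an offset
    public interface SizeReader {
        long sizeAt(long paddingOffset);
    }

    public static void main(String[] args) {
        // two parts of 100 and 50 plaintext bytes -> 278 stored bytes;
        // paddings sit at offsets 100 and 214
        SizeReader reader = off -> off == 214 ? 50L : 100L;
        System.out.println(partSizes(278, reader));
    }
}
```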
## Blob suffix
Each stored ```Blob``` gets a suffix named ```.s3enc```; this helps to determine whether a ```Blob``` is encrypted. The ```.s3enc``` suffix is not visible to the S3 client, and the ```Blob``` size always shows the unencrypted size.
## Tested jclouds providers
- S3
- Minio
- OBS from OpenTelekomCloud
- AWS S3
- Azure
- GCP
- Local
## Limitations
- All blobs are encrypted with the same key, derived from a given password
- No support for re-encryption
- The returned eTag always differs; therefore clients should not verify it
- Decryption of a ```Blob``` always results in multiple calls against the backend; for instance a GET results in a HEAD + GET because the size of the blob needs to be determined


@@ -461,6 +461,11 @@
<artifactId>commons-fileupload</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>commons-codec</groupId>
<artifactId>commons-codec</artifactId>
<version>1.15</version>
</dependency>
<dependency>
<groupId>org.apache.jclouds</groupId>
<artifactId>jclouds-allblobstore</artifactId>


@@ -0,0 +1,773 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy;
import static com.google.common.base.Preconditions.checkArgument;
import java.io.IOException;
import java.io.InputStream;
import java.security.spec.KeySpec;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.regex.Matcher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import com.google.common.base.Strings;
import com.google.common.collect.ImmutableSet;
import com.google.common.hash.HashCode;
import org.apache.commons.codec.digest.DigestUtils;
import org.gaul.s3proxy.crypto.Constants;
import org.gaul.s3proxy.crypto.Decryption;
import org.gaul.s3proxy.crypto.Encryption;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.domain.BlobAccess;
import org.jclouds.blobstore.domain.BlobBuilder;
import org.jclouds.blobstore.domain.BlobMetadata;
import org.jclouds.blobstore.domain.MultipartPart;
import org.jclouds.blobstore.domain.MultipartUpload;
import org.jclouds.blobstore.domain.MutableBlobMetadata;
import org.jclouds.blobstore.domain.PageSet;
import org.jclouds.blobstore.domain.StorageMetadata;
import org.jclouds.blobstore.domain.internal.MutableBlobMetadataImpl;
import org.jclouds.blobstore.domain.internal.PageSetImpl;
import org.jclouds.blobstore.options.CopyOptions;
import org.jclouds.blobstore.options.GetOptions;
import org.jclouds.blobstore.options.ListContainerOptions;
import org.jclouds.blobstore.options.PutOptions;
import org.jclouds.blobstore.util.ForwardingBlobStore;
import org.jclouds.io.ContentMetadata;
import org.jclouds.io.MutableContentMetadata;
import org.jclouds.io.Payload;
import org.jclouds.io.Payloads;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@SuppressWarnings("UnstableApiUsage")
public final class EncryptedBlobStore extends ForwardingBlobStore {
private final Logger logger =
LoggerFactory.getLogger(EncryptedBlobStore.class);
private SecretKeySpec secretKey;
private EncryptedBlobStore(BlobStore blobStore, Properties properties)
throws IllegalArgumentException {
super(blobStore);
String password = properties.getProperty(
S3ProxyConstants.PROPERTY_ENCRYPTED_BLOBSTORE_PASSWORD);
checkArgument(!Strings.isNullOrEmpty(password),
"Password for encrypted blobstore is not set");
String salt = properties.getProperty(
S3ProxyConstants.PROPERTY_ENCRYPTED_BLOBSTORE_SALT);
checkArgument(!Strings.isNullOrEmpty(salt),
"Salt for encrypted blobstore is not set");
initStore(password, salt);
}
static BlobStore newEncryptedBlobStore(BlobStore blobStore,
Properties properties) throws IOException {
return new EncryptedBlobStore(blobStore, properties);
}
private void initStore(String password, String salt)
throws IllegalArgumentException {
try {
SecretKeyFactory factory =
SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
KeySpec spec =
new PBEKeySpec(password.toCharArray(), salt.getBytes(), 65536,
128);
SecretKey tmp = factory.generateSecret(spec);
secretKey = new SecretKeySpec(tmp.getEncoded(), "AES");
} catch (Exception e) {
throw new IllegalArgumentException(e);
}
}
private Blob cipheredBlob(String container, Blob blob, InputStream payload,
long contentLength,
boolean addEncryptedMetadata) {
// make a copy of the blob with the new payload stream
BlobMetadata blobMeta = blob.getMetadata();
ContentMetadata contentMeta = blob.getMetadata().getContentMetadata();
Map<String, String> userMetadata = blobMeta.getUserMetadata();
String contentType = contentMeta.getContentType();
// suffix the content type with -s3enc if we need to encrypt
if (addEncryptedMetadata) {
blobMeta = setEncryptedSuffix(blobMeta);
} else {
// remove the -s3enc suffix while decrypting
// but not if it contains a multipart meta
if (!blobMeta.getUserMetadata()
.containsKey(Constants.METADATA_IS_ENCRYPTED_MULTIPART)) {
blobMeta = removeEncryptedSuffix(blobMeta);
}
}
// we do not set contentMD5 as it will not match due to the encryption
Blob cipheredBlob = blobBuilder(container)
.name(blobMeta.getName())
.type(blobMeta.getType())
.tier(blobMeta.getTier())
.userMetadata(userMetadata)
.payload(payload)
.cacheControl(contentMeta.getCacheControl())
.contentDisposition(contentMeta.getContentDisposition())
.contentEncoding(contentMeta.getContentEncoding())
.contentLanguage(contentMeta.getContentLanguage())
.contentLength(contentLength)
.contentType(contentType)
.build();
cipheredBlob.getMetadata().setUri(blobMeta.getUri());
cipheredBlob.getMetadata().setETag(blobMeta.getETag());
cipheredBlob.getMetadata().setLastModified(blobMeta.getLastModified());
cipheredBlob.getMetadata().setSize(blobMeta.getSize());
cipheredBlob.getMetadata().setPublicUri(blobMeta.getPublicUri());
cipheredBlob.getMetadata().setContainer(blobMeta.getContainer());
return cipheredBlob;
}
private Blob encryptBlob(String container, Blob blob) {
try {
// open the streams and pass them through the encryption
InputStream isRaw = blob.getPayload().openStream();
Encryption encryption =
new Encryption(secretKey, isRaw, 1);
InputStream is = encryption.openStream();
// adjust the encrypted content length by
// adding the padding block size
long contentLength =
blob.getMetadata().getContentMetadata().getContentLength() +
Constants.PADDING_BLOCK_SIZE;
return cipheredBlob(container, blob, is, contentLength, true);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
private Payload encryptPayload(Payload payload, int partNumber) {
try {
// open the streams and pass them through the encryption
InputStream isRaw = payload.openStream();
Encryption encryption =
new Encryption(secretKey, isRaw, partNumber);
InputStream is = encryption.openStream();
Payload cipheredPayload = Payloads.newInputStreamPayload(is);
MutableContentMetadata contentMetadata =
payload.getContentMetadata();
HashCode md5 = null;
contentMetadata.setContentMD5(md5);
cipheredPayload.setContentMetadata(payload.getContentMetadata());
cipheredPayload.setSensitive(payload.isSensitive());
// adjust the encrypted content length by
// adding the padding block size
long contentLength =
payload.getContentMetadata().getContentLength() +
Constants.PADDING_BLOCK_SIZE;
cipheredPayload.getContentMetadata()
.setContentLength(contentLength);
return cipheredPayload;
} catch (Exception e) {
throw new RuntimeException(e);
}
}
private Blob decryptBlob(Decryption decryption, String container,
Blob blob) {
try {
// handle blob does not exist
if (blob == null) {
return null;
}
// open the streams and pass them through the decryption
InputStream isRaw = blob.getPayload().openStream();
InputStream is = decryption.openStream(isRaw);
// adjust the content length if the blob is encrypted
long contentLength =
blob.getMetadata().getContentMetadata().getContentLength();
if (decryption.isEncrypted()) {
contentLength = decryption.getContentLength();
}
return cipheredBlob(container, blob, is, contentLength, false);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
// filter the list by showing the unencrypted blob size
private PageSet<? extends StorageMetadata> filteredList(
PageSet<? extends StorageMetadata> pageSet) {
ImmutableSet.Builder<StorageMetadata> builder = ImmutableSet.builder();
for (StorageMetadata sm : pageSet) {
if (sm instanceof BlobMetadata) {
MutableBlobMetadata mbm =
new MutableBlobMetadataImpl((BlobMetadata) sm);
// if blob is encrypted remove the -s3enc suffix
// from content type
if (isEncrypted(mbm)) {
mbm = removeEncryptedSuffix((BlobMetadata) sm);
mbm = calculateBlobSize(mbm);
}
builder.add(mbm);
} else {
builder.add(sm);
}
}
// make sure the marker does not show a blob with the .s3enc suffix
String marker = pageSet.getNextMarker();
if (marker != null && isEncrypted(marker)) {
marker = removeEncryptedSuffix(marker);
}
return new PageSetImpl<>(builder.build(), marker);
}
private boolean isEncrypted(BlobMetadata blobMeta) {
return isEncrypted(blobMeta.getName());
}
private boolean isEncrypted(String blobName) {
return blobName.endsWith(Constants.S3_ENC_SUFFIX);
}
private MutableBlobMetadata setEncryptedSuffix(BlobMetadata blobMeta) {
MutableBlobMetadata bm = new MutableBlobMetadataImpl(blobMeta);
if (blobMeta.getName() != null && !isEncrypted(blobMeta.getName())) {
bm.setName(blobNameWithSuffix(blobMeta.getName()));
}
return bm;
}
private String removeEncryptedSuffix(String blobName) {
return blobName.substring(0,
blobName.length() - Constants.S3_ENC_SUFFIX.length());
}
private MutableBlobMetadata removeEncryptedSuffix(BlobMetadata blobMeta) {
MutableBlobMetadata bm = new MutableBlobMetadataImpl(blobMeta);
if (isEncrypted(bm.getName())) {
String blobName = bm.getName();
bm.setName(removeEncryptedSuffix(blobName));
}
return bm;
}
private MutableBlobMetadata calculateBlobSize(BlobMetadata blobMeta) {
MutableBlobMetadata mbm = removeEncryptedSuffix(blobMeta);
// on non-s3 backends like azure or gcp we use a metadata key to
// calculate the part padding sizes that need to be removed
if (mbm.getUserMetadata()
.containsKey(Constants.METADATA_ENCRYPTION_PARTS)) {
int parts = Integer.parseInt(
mbm.getUserMetadata().get(Constants.METADATA_ENCRYPTION_PARTS));
int partPaddingSizes = Constants.PADDING_BLOCK_SIZE * parts;
long size = blobMeta.getSize() - partPaddingSizes;
mbm.setSize(size);
mbm.getContentMetadata().setContentLength(size);
} else {
// on s3 backends like aws or minio we rely on the eTag suffix
Matcher matcher =
Constants.MPU_ETAG_SUFFIX_PATTERN.matcher(blobMeta.getETag());
if (matcher.find()) {
int parts = Integer.parseInt(matcher.group(1));
int partPaddingSizes = Constants.PADDING_BLOCK_SIZE * parts;
long size = blobMeta.getSize() - partPaddingSizes;
mbm.setSize(size);
mbm.getContentMetadata().setContentLength(size);
} else {
long size = blobMeta.getSize() - Constants.PADDING_BLOCK_SIZE;
mbm.setSize(size);
mbm.getContentMetadata().setContentLength(size);
}
}
return mbm;
}
private boolean multipartRequiresStub() {
String blobStoreType = getBlobStoreType();
return Quirks.MULTIPART_REQUIRES_STUB.contains(blobStoreType);
}
private String blobNameWithSuffix(String container, String name) {
String nameWithSuffix = blobNameWithSuffix(name);
if (delegate().blobExists(container, nameWithSuffix)) {
name = nameWithSuffix;
}
return name;
}
private String blobNameWithSuffix(String name) {
return name + Constants.S3_ENC_SUFFIX;
}
private String getBlobStoreType() {
return delegate().getContext().unwrap().getProviderMetadata().getId();
}
private String generateUploadId(String container, String blobName) {
String path = container + "/" + blobName;
return DigestUtils.sha256Hex(path);
}
@Override
public Blob getBlob(String containerName, String blobName) {
return getBlob(containerName, blobName, new GetOptions());
}
@Override
public Blob getBlob(String containerName, String blobName,
GetOptions getOptions) {
// adjust the blob name
blobName = blobNameWithSuffix(blobName);
// get the metadata to determine the blob size
BlobMetadata meta = delegate().blobMetadata(containerName, blobName);
try {
// we have a blob that ends with .s3enc
if (meta != null) {
// init defaults
long offset = 0;
long end = 0;
long length = -1;
if (getOptions.getRanges().size() > 0) {
// S3 doesn't allow multiple ranges
String range = getOptions.getRanges().get(0);
String[] ranges = range.split("-", 2);
if (ranges[0].isEmpty()) {
// handle to read from the end
end = Long.parseLong(ranges[1]);
length = end;
} else if (ranges[1].isEmpty()) {
// handle to read from an offset till the end
offset = Long.parseLong(ranges[0]);
} else {
// handle to read from an offset
offset = Long.parseLong(ranges[0]);
end = Long.parseLong(ranges[1]);
length = end - offset + 1;
}
}
// init decryption
Decryption decryption =
new Decryption(secretKey, delegate(), meta, offset, length);
if (decryption.isEncrypted() &&
getOptions.getRanges().size() > 0) {
// clear current ranges to avoid multiple ranges
getOptions.getRanges().clear();
long startAt = decryption.getStartAt();
long endAt = decryption.getEncryptedSize();
if (offset == 0 && end > 0 && length == end) {
// handle to read from the end
startAt = decryption.calculateTail();
} else if (offset > 0 && end > 0) {
// handle to read from an offset
endAt = decryption.calculateEndAt(end);
}
getOptions.range(startAt, endAt);
}
Blob blob =
delegate().getBlob(containerName, blobName, getOptions);
return decryptBlob(decryption, containerName, blob);
} else {
// we are supposed to return an unencrypted blob
// since no metadata was found
blobName = removeEncryptedSuffix(blobName);
return delegate().getBlob(containerName, blobName, getOptions);
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
@Override
public String putBlob(String containerName, Blob blob) {
return delegate().putBlob(containerName,
encryptBlob(containerName, blob));
}
@Override
public String putBlob(String containerName, Blob blob,
PutOptions putOptions) {
return delegate().putBlob(containerName,
encryptBlob(containerName, blob), putOptions);
}
@Override
public String copyBlob(String fromContainer, String fromName,
String toContainer, String toName, CopyOptions options) {
// if we copy an encrypted blob
// make sure to add suffix to the destination blob name
String blobName = blobNameWithSuffix(fromName);
if (delegate().blobExists(fromContainer, blobName)) {
fromName = blobName;
toName = blobNameWithSuffix(toName);
}
return delegate().copyBlob(fromContainer, fromName, toContainer, toName,
options);
}
@Override
public void removeBlob(String container, String name) {
name = blobNameWithSuffix(container, name);
delegate().removeBlob(container, name);
}
@Override
public void removeBlobs(String container, Iterable<String> names) {
List<String> filteredNames = new ArrayList<>();
// filter the list of blobs to determine
// if we need to delete encrypted blobs
for (String name : names) {
name = blobNameWithSuffix(container, name);
filteredNames.add(name);
}
delegate().removeBlobs(container, filteredNames);
}
@Override
public BlobAccess getBlobAccess(String container, String name) {
name = blobNameWithSuffix(container, name);
return delegate().getBlobAccess(container, name);
}
@Override
public boolean blobExists(String container, String name) {
name = blobNameWithSuffix(container, name);
return delegate().blobExists(container, name);
}
@Override
public void setBlobAccess(String container, String name,
BlobAccess access) {
name = blobNameWithSuffix(container, name);
delegate().setBlobAccess(container, name, access);
}
@Override
public PageSet<? extends StorageMetadata> list() {
PageSet<? extends StorageMetadata> pageSet = delegate().list();
return filteredList(pageSet);
}
@Override
public PageSet<? extends StorageMetadata> list(String container) {
PageSet<? extends StorageMetadata> pageSet = delegate().list(container);
return filteredList(pageSet);
}
@Override
public PageSet<? extends StorageMetadata> list(String container,
ListContainerOptions options) {
PageSet<? extends StorageMetadata> pageSet =
delegate().list(container, options);
return filteredList(pageSet);
}
@Override
public MultipartUpload initiateMultipartUpload(String container,
BlobMetadata blobMetadata, PutOptions options) {
MutableBlobMetadata mbm = new MutableBlobMetadataImpl(blobMetadata);
mbm = setEncryptedSuffix(mbm);
MultipartUpload mpu =
delegate().initiateMultipartUpload(container, mbm, options);
// handle non-s3 backends
// by setting a metadata key for multipart stubs
if (multipartRequiresStub()) {
mbm.getUserMetadata()
.put(Constants.METADATA_IS_ENCRYPTED_MULTIPART, "true");
if (getBlobStoreType().equals("azureblob")) {
// use part 0 as a placeholder
delegate().uploadMultipartPart(mpu, 0,
Payloads.newStringPayload("dummy"));
// since azure does not have a uploadId
// we use the sha256 of the path
String uploadId = generateUploadId(container, mbm.getName());
mpu = MultipartUpload.create(mpu.containerName(),
mpu.blobName(), uploadId, mpu.blobMetadata(), options);
} else if (getBlobStoreType().equals("google-cloud-storage")) {
mbm.getUserMetadata()
.put(Constants.METADATA_MULTIPART_KEY, mbm.getName());
// since gcp does not have a uploadId
// we use the sha256 of the path
String uploadId = generateUploadId(container, mbm.getName());
// to emulate the list of multipart uploads later
// we create a placeholder
BlobBuilder builder =
blobBuilder(Constants.MPU_FOLDER + uploadId)
.payload("")
.userMetadata(mbm.getUserMetadata());
delegate().putBlob(container, builder.build(), options);
// final mpu on gcp
mpu = MultipartUpload.create(mpu.containerName(),
mpu.blobName(), uploadId, mpu.blobMetadata(), options);
}
}
return mpu;
}
@Override
public List<MultipartUpload> listMultipartUploads(String container) {
List<MultipartUpload> mpus = new ArrayList<>();
// emulate list of multipart uploads on gcp
if (getBlobStoreType().equals("google-cloud-storage")) {
ListContainerOptions options = new ListContainerOptions();
PageSet<? extends StorageMetadata> mpuList =
delegate().list(container,
options.prefix(Constants.MPU_FOLDER));
// find all blobs in .mpu folder and build the list
for (StorageMetadata blob : mpuList) {
Map<String, String> meta = blob.getUserMetadata();
if (meta.containsKey(Constants.METADATA_MULTIPART_KEY)) {
String blobName =
meta.get(Constants.METADATA_MULTIPART_KEY);
String uploadId =
blob.getName()
.substring(blob.getName().lastIndexOf("/") + 1);
MultipartUpload mpu =
MultipartUpload.create(container,
blobName, uploadId, null, null);
mpus.add(mpu);
}
}
} else {
mpus = delegate().listMultipartUploads(container);
}
List<MultipartUpload> filtered = new ArrayList<>();
// filter the list uploads by removing the .s3enc suffix
for (MultipartUpload mpu : mpus) {
String blobName = mpu.blobName();
if (isEncrypted(blobName)) {
blobName = removeEncryptedSuffix(mpu.blobName());
String uploadId = mpu.id();
// since azure does not have an uploadId
// we use the sha256 of the path
if (getBlobStoreType().equals("azureblob")) {
uploadId = generateUploadId(container, mpu.blobName());
}
MultipartUpload mpuWithoutSuffix =
MultipartUpload.create(mpu.containerName(),
blobName, uploadId, mpu.blobMetadata(),
mpu.putOptions());
filtered.add(mpuWithoutSuffix);
} else {
filtered.add(mpu);
}
}
return filtered;
}
@Override
public List<MultipartPart> listMultipartUpload(MultipartUpload mpu) {
mpu = filterMultipartUpload(mpu);
List<MultipartPart> parts = delegate().listMultipartUpload(mpu);
List<MultipartPart> filteredParts = new ArrayList<>();
// fix wrong multipart size due to the part padding
for (MultipartPart part : parts) {
// we use part 0 as a placeholder and hide it on azure
if (getBlobStoreType().equals("azureblob") &&
part.partNumber() == 0) {
continue;
}
MultipartPart newPart = MultipartPart.create(
part.partNumber(),
part.partSize() - Constants.PADDING_BLOCK_SIZE,
part.partETag(),
part.lastModified()
);
filteredParts.add(newPart);
}
return filteredParts;
}
@Override
public MultipartPart uploadMultipartPart(MultipartUpload mpu,
int partNumber, Payload payload) {
mpu = filterMultipartUpload(mpu);
return delegate().uploadMultipartPart(mpu, partNumber,
encryptPayload(payload, partNumber));
}
private MultipartUpload filterMultipartUpload(MultipartUpload mpu) {
MutableBlobMetadata mbm = null;
if (mpu.blobMetadata() != null) {
mbm = new MutableBlobMetadataImpl(mpu.blobMetadata());
mbm = setEncryptedSuffix(mbm);
}
String blobName = mpu.blobName();
if (!isEncrypted(blobName)) {
blobName = blobNameWithSuffix(blobName);
}
return MultipartUpload.create(mpu.containerName(), blobName, mpu.id(),
mbm, mpu.putOptions());
}
@Override
public String completeMultipartUpload(MultipartUpload mpu,
List<MultipartPart> parts) {
MutableBlobMetadata mbm =
new MutableBlobMetadataImpl(mpu.blobMetadata());
String blobName = mpu.blobName();
// always set the .s3enc suffix except on gcp
// when the blob name starts with the multipart upload id
if (getBlobStoreType().equals("google-cloud-storage") &&
mpu.blobName().startsWith(mpu.id())) {
logger.debug("skip suffix on gcp");
} else {
mbm = setEncryptedSuffix(mbm);
if (!isEncrypted(mpu.blobName())) {
blobName = blobNameWithSuffix(blobName);
}
}
MultipartUpload mpuWithSuffix =
MultipartUpload.create(mpu.containerName(),
blobName, mpu.id(), mbm, mpu.putOptions());
// this will only work for non s3 backends like azure and gcp
if (multipartRequiresStub()) {
long partCount = parts.size();
// special handling for GCP to sum up all parts
if (getBlobStoreType().equals("google-cloud-storage")) {
partCount = 0;
for (MultipartPart part : parts) {
blobName =
String.format("%s_%08d",
mpu.id(),
part.partNumber());
BlobMetadata metadata =
delegate().blobMetadata(mpu.containerName(), blobName);
if (metadata != null && metadata.getUserMetadata()
.containsKey(Constants.METADATA_ENCRYPTION_PARTS)) {
String partMetaCount = metadata.getUserMetadata()
.get(Constants.METADATA_ENCRYPTION_PARTS);
partCount = partCount + Long.parseLong(partMetaCount);
} else {
partCount++;
}
}
}
mpuWithSuffix.blobMetadata().getUserMetadata()
.put(Constants.METADATA_ENCRYPTION_PARTS,
String.valueOf(partCount));
mpuWithSuffix.blobMetadata().getUserMetadata()
.remove(Constants.METADATA_IS_ENCRYPTED_MULTIPART);
}
String eTag = delegate().completeMultipartUpload(mpuWithSuffix, parts);
// cleanup mpu placeholder on gcp
if (getBlobStoreType().equals("google-cloud-storage")) {
delegate().removeBlob(mpu.containerName(),
Constants.MPU_FOLDER + mpu.id());
}
return eTag;
}
@Override
public BlobMetadata blobMetadata(String container, String name) {
name = blobNameWithSuffix(container, name);
BlobMetadata blobMetadata = delegate().blobMetadata(container, name);
if (blobMetadata != null) {
// only remove the -s3enc suffix
// if the blob is encrypted and not a multipart stub
if (isEncrypted(blobMetadata) &&
!blobMetadata.getUserMetadata()
.containsKey(Constants.METADATA_IS_ENCRYPTED_MULTIPART)) {
blobMetadata = removeEncryptedSuffix(blobMetadata);
blobMetadata = calculateBlobSize(blobMetadata);
}
}
return blobMetadata;
}
@Override
public long getMaximumMultipartPartSize() {
long max = delegate().getMaximumMultipartPartSize();
return max - Constants.PADDING_BLOCK_SIZE;
}
}


@@ -257,6 +257,14 @@ public final class Main {
shards, prefixes);
}
String encryptedBlobStore = properties.getProperty(
S3ProxyConstants.PROPERTY_ENCRYPTED_BLOBSTORE);
if ("true".equalsIgnoreCase(encryptedBlobStore)) {
System.err.println("Using encrypted storage backend");
blobStore = EncryptedBlobStore.newEncryptedBlobStore(blobStore,
properties);
}
return blobStore;
}


@@ -99,7 +99,8 @@ public final class S3Proxy {
}
if (builder.secureEndpoint != null) {
SslContextFactory sslContextFactory =
new SslContextFactory.Server();
sslContextFactory.setKeyStorePath(builder.keyStorePath);
sslContextFactory.setKeyStorePassword(builder.keyStorePassword);
connector = new ServerConnector(server, sslContextFactory,


@@ -107,6 +107,13 @@ public final class S3ProxyConstants {
public static final String PROPERTY_MAXIMUM_TIME_SKEW =
"s3proxy.maximum-timeskew";
public static final String PROPERTY_ENCRYPTED_BLOBSTORE =
"s3proxy.encrypted-blobstore";
public static final String PROPERTY_ENCRYPTED_BLOBSTORE_PASSWORD =
"s3proxy.encrypted-blobstore-password";
public static final String PROPERTY_ENCRYPTED_BLOBSTORE_SALT =
"s3proxy.encrypted-blobstore-salt";
static final String PROPERTY_ALT_JCLOUDS_PREFIX = "alt.";
private S3ProxyConstants() {


@@ -1176,13 +1176,15 @@ public class S3ProxyHandler {
HttpServletResponse response, BlobStore blobStore,
String container) throws IOException, S3Exception {
if (request.getParameter("delimiter") != null ||
request.getParameter("prefix") != null ||
request.getParameter("max-uploads") != null ||
request.getParameter("key-marker") != null ||
request.getParameter("upload-id-marker") != null) {
throw new UnsupportedOperationException();
}
String encodingType = request.getParameter("encoding-type");
String prefix = request.getParameter("prefix");
List<MultipartUpload> uploads = blobStore.listMultipartUploads(
container);
@@ -1203,11 +1205,23 @@
xml.writeEmptyElement("NextKeyMarker");
xml.writeEmptyElement("NextUploadIdMarker");
xml.writeEmptyElement("Delimiter");
xml.writeEmptyElement("Prefix");
if (Strings.isNullOrEmpty(prefix)) {
xml.writeEmptyElement("Prefix");
} else {
writeSimpleElement(xml, "Prefix", encodeBlob(
encodingType, prefix));
}
writeSimpleElement(xml, "MaxUploads", "1000");
writeSimpleElement(xml, "IsTruncated", "false");
for (MultipartUpload upload : uploads) {
if (prefix != null &&
!upload.blobName().startsWith(prefix)) {
continue;
}
xml.writeStartElement("Upload");
writeSimpleElement(xml, "Key", upload.blobName());
@@ -2578,6 +2592,15 @@
"ArgumentValue", partNumberString));
}
// GCS only supports 32 parts so partition MPU into 32-part chunks.
String blobStoreType = getBlobStoreType(blobStore);
if (blobStoreType.equals("google-cloud-storage")) {
// fix up 1-based part numbers
uploadId = String.format(
"%s_%08d", uploadId, ((partNumber - 1) / 32) + 1);
partNumber = ((partNumber - 1) % 32) + 1;
}
// TODO: how to reconstruct original mpu?
MultipartUpload mpu = MultipartUpload.create(containerName,
blobName, uploadId, createFakeBlobMetadata(blobStore),
@@ -2629,7 +2652,6 @@
long contentLength =
blobMetadata.getContentMetadata().getContentLength();
String blobStoreType = getBlobStoreType(blobStore);
try (InputStream is = blob.getPayload().openStream()) {
if (blobStoreType.equals("azureblob")) {
// Azure has a smaller maximum part size than S3. Split a


@@ -0,0 +1,48 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy.crypto;
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;
public final class Constants {
public static final short VERSION = 1;
public static final String AES_CIPHER = "AES/CFB/NoPadding";
public static final String S3_ENC_SUFFIX = ".s3enc";
public static final String MPU_FOLDER = ".mpu/";
public static final Pattern MPU_ETAG_SUFFIX_PATTERN =
Pattern.compile(".*-([0-9]+)");
public static final String METADATA_ENCRYPTION_PARTS =
"s3proxy_encryption_parts";
public static final String METADATA_IS_ENCRYPTED_MULTIPART =
"s3proxy_encryption_multipart";
public static final String METADATA_MULTIPART_KEY =
"s3proxy_mpu_key";
public static final int AES_BLOCK_SIZE = 16;
public static final int PADDING_BLOCK_SIZE = 64;
public static final byte[] DELIMITER =
"-S3-ENC-".getBytes(StandardCharsets.UTF_8);
public static final int PADDING_DELIMITER_LENGTH = DELIMITER.length;
public static final int PADDING_IV_LENGTH = 16;
public static final int PADDING_PART_LENGTH = 4;
public static final int PADDING_SIZE_LENGTH = 8;
public static final int PADDING_VERSION_LENGTH = 2;
private Constants() {
throw new AssertionError("Cannot instantiate utility class");
}
}
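The constants above describe a fixed 64-byte part padding: delimiter (8) + IV (16) + part (4) + size (8) + version (2), leaving 26 reserved bytes. A minimal, hypothetical sketch (class and method names are illustrative, not part of the project) of assembling and re-reading that layout with `ByteBuffer`:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch of the 64-byte part padding layout defined by the constants:
// delimiter (8) + IV (16) + part (4) + size (8) + version (2) = 38 bytes,
// so 26 bytes remain reserved inside the 64-byte block.
public final class PaddingLayoutSketch {
    static final int PADDING_BLOCK_SIZE = 64;
    static final byte[] DELIMITER =
        "-S3-ENC-".getBytes(StandardCharsets.UTF_8);

    static byte[] writePadding(byte[] iv, int part, long size,
            short version) {
        ByteBuffer bb = ByteBuffer.allocate(PADDING_BLOCK_SIZE);
        bb.put(DELIMITER);    // 8 bytes
        bb.put(iv);           // 16 bytes
        bb.putInt(part);      // 4 bytes
        bb.putLong(size);     // 8 bytes
        bb.putShort(version); // 2 bytes
        return bb.array();    // remaining 26 bytes stay zeroed (reserved)
    }

    public static void main(String[] args) {
        byte[] padding = writePadding(new byte[16], 1, 1024L, (short) 1);
        ByteBuffer bb = ByteBuffer.wrap(padding);
        byte[] delimiter = new byte[8];
        bb.get(delimiter);
        bb.position(bb.position() + 16); // skip the IV
        System.out.println(new String(delimiter, StandardCharsets.UTF_8)
            + " part=" + bb.getInt()
            + " size=" + bb.getLong()
            + " version=" + bb.getShort());
    }
}
```

This mirrors the write path in `EncryptionInputStream.padding()` and the read path in `PartPadding.readPartPaddingFromBlob()`.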


@@ -0,0 +1,319 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy.crypto;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;
import javax.annotation.concurrent.ThreadSafe;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.io.IOUtils;
import org.apache.commons.io.input.BoundedInputStream;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.domain.BlobMetadata;
import org.jclouds.blobstore.options.GetOptions;
@ThreadSafe
public class Decryption {
private final SecretKey encryptionKey;
private TreeMap<Integer, PartPadding> partList;
private long outputOffset;
private long outputLength;
private boolean skipFirstBlock;
private long unencryptedSize;
private long encryptedSize;
private long startAt;
private int skipParts;
private long skipPartBytes;
private boolean isEncrypted;
public Decryption(SecretKeySpec key, BlobStore blobStore,
BlobMetadata meta,
long offset, long length) throws IOException {
encryptionKey = key;
outputLength = length;
isEncrypted = true;
// if the blob does not exist, or is not larger than the part padding,
// then the blob is considered not encrypted
if (meta == null || meta.getSize() <= Constants.PADDING_BLOCK_SIZE) {
blobIsNotEncrypted(offset);
return;
}
// get the 64 bytes of part padding from the end of the blob
GetOptions options = new GetOptions();
options.range(meta.getSize() - Constants.PADDING_BLOCK_SIZE,
meta.getSize());
Blob blob =
blobStore.getBlob(meta.getContainer(), meta.getName(), options);
// read the padding structure
PartPadding lastPartPadding = PartPadding.readPartPaddingFromBlob(blob);
if (!Arrays.equals(
lastPartPadding.getDelimiter().getBytes(StandardCharsets.UTF_8),
Constants.DELIMITER)) {
blobIsNotEncrypted(offset);
return;
}
partList = new TreeMap<>();
// detect multipart
if (lastPartPadding.getPart() > 1 &&
meta.getSize() >
(lastPartPadding.getSize() + Constants.PADDING_BLOCK_SIZE)) {
unencryptedSize = lastPartPadding.getSize();
encryptedSize =
lastPartPadding.getSize() + Constants.PADDING_BLOCK_SIZE;
// note that parts are in reversed order
int part = 1;
// add the last part to the list
partList.put(part, lastPartPadding);
// loop part by part from end to the beginning
// to build a list of all blocks
while (encryptedSize < meta.getSize()) {
// get the next block
// rewind by the current encrypted block size
// minus the encryption padding
options = new GetOptions();
long startAt = (meta.getSize() - encryptedSize) -
Constants.PADDING_BLOCK_SIZE;
long endAt = meta.getSize() - encryptedSize - 1;
options.range(startAt, endAt);
blob = blobStore.getBlob(meta.getContainer(), meta.getName(),
options);
part++;
// read the padding structure
PartPadding partPadding =
PartPadding.readPartPaddingFromBlob(blob);
// add the part to the list
this.partList.put(part, partPadding);
// update the encrypted size
encryptedSize = encryptedSize +
(partPadding.getSize() + Constants.PADDING_BLOCK_SIZE);
unencryptedSize = this.unencryptedSize + partPadding.getSize();
}
} else {
// add the single part to the list
partList.put(1, lastPartPadding);
// update the unencrypted size
unencryptedSize = meta.getSize() - Constants.PADDING_BLOCK_SIZE;
// update the encrypted size
encryptedSize = meta.getSize();
}
// calculate the offset
calculateOffset(offset);
// if an offset is given without an explicit length, return the rest
if (offset > 0 && length == 0) {
outputLength = unencryptedSize - offset;
}
}
private void blobIsNotEncrypted(long offset) {
isEncrypted = false;
startAt = offset;
}
// calculate the tail bytes we need to read;
// since unencryptedSize is known we can derive and return the startAt offset
public final long calculateTail() {
long offset = unencryptedSize - outputLength;
calculateOffset(offset);
return startAt;
}
public final long getEncryptedSize() {
return encryptedSize;
}
public final long calculateEndAt(long endAt) {
// the end index is inclusive, so we always need one more byte
endAt++;
// handle multipart
if (partList.size() > 1) {
long plaintextSize = 0;
// always skip 1 part at the end
int partCounter = 1;
// we need the map in reversed order
for (Map.Entry<Integer, PartPadding> part : partList.descendingMap()
.entrySet()) {
// check the parts that are between offset and end
plaintextSize = plaintextSize + part.getValue().getSize();
if (endAt > plaintextSize) {
partCounter++;
} else {
break;
}
}
// add the paddings of all parts
endAt = endAt + ((long) Constants.PADDING_BLOCK_SIZE * partCounter);
} else {
// we need to read one more AES block in AES CFB mode
long rest = endAt % Constants.AES_BLOCK_SIZE;
if (rest > 0) {
endAt = endAt + Constants.AES_BLOCK_SIZE;
}
}
return endAt;
}
// open the streams and pipes
public final InputStream openStream(InputStream is) throws IOException {
// if the blob is not encrypted return the unencrypted stream
if (!isEncrypted) {
return is;
}
// pass input stream through decryption
InputStream dis = new DecryptionInputStream(is, encryptionKey, partList,
skipParts, skipPartBytes);
// skip some bytes if necessary
long offset = outputOffset;
if (this.skipFirstBlock) {
offset = offset + Constants.AES_BLOCK_SIZE;
}
IOUtils.skipFully(dis, offset);
// trim the stream to a specific length if needed
return new BoundedInputStream(dis, outputLength);
}
private void calculateOffset(long offset) {
startAt = 0;
skipParts = 0;
// handle multipart
if (partList.size() > 1) {
// init counters
long plaintextSize = 0;
long encryptedSize = 0;
long partOffset;
long partStartAt = 0;
// we need the map in reversed order
for (Map.Entry<Integer, PartPadding> part : partList.descendingMap()
.entrySet()) {
// compute the plaintext size of the current part
plaintextSize = plaintextSize + part.getValue().getSize();
// check if the offset is located in another part
if (offset > plaintextSize) {
// compute the encrypted size of the skipped part
encryptedSize = encryptedSize + part.getValue().getSize() +
Constants.PADDING_BLOCK_SIZE;
// compute offset in this part
partOffset = offset - plaintextSize;
// skip the first block in CFB mode
skipFirstBlock = partOffset >= 16;
// compute the offset of the output
outputOffset = partOffset % Constants.AES_BLOCK_SIZE;
// skip this part
skipParts++;
// we always need to read one previous AES block in CFB mode
// if we read from offset
if (partOffset > Constants.AES_BLOCK_SIZE) {
long rest = partOffset % Constants.AES_BLOCK_SIZE;
partStartAt =
(partOffset - Constants.AES_BLOCK_SIZE) - rest;
} else {
partStartAt = 0;
}
} else {
// start at a specific byte position
// while respecting other parts
startAt = encryptedSize + partStartAt;
// skip part bytes if we are not starting
// from the beginning of a part
skipPartBytes = partStartAt;
break;
}
}
}
// handle single part
if (skipParts == 0) {
// skip the first block in CFB mode
skipFirstBlock = offset >= 16;
// compute the offset of the output
outputOffset = offset % Constants.AES_BLOCK_SIZE;
// we always need to read one previous AES block in CFB mode
// if we read from offset
if (offset > Constants.AES_BLOCK_SIZE) {
long rest = offset % Constants.AES_BLOCK_SIZE;
startAt = (offset - Constants.AES_BLOCK_SIZE) - rest;
}
// skip part bytes if we are not starting
// from the beginning of a part
skipPartBytes = startAt;
}
}
public final long getStartAt() {
return startAt;
}
public final boolean isEncrypted() {
return isEncrypted;
}
public final long getContentLength() {
if (outputLength > 0) {
return outputLength;
} else {
return unencryptedSize;
}
}
}
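The single-part branch of `calculateOffset()` above implements the CFB alignment rule: a read from plaintext offset `offset` must start one full AES block earlier, aligned to the 16-byte grid, and the decrypted bytes before `offset` are skipped. A hypothetical standalone sketch of just that arithmetic (names are illustrative):

```java
// Sketch of the single-part offset math in Decryption.calculateOffset():
// decrypting a CFB block requires the previous ciphertext block, so a
// ranged read starts one aligned block early and discards the surplus.
public final class CfbOffsetSketch {
    static final int AES_BLOCK_SIZE = 16;

    // ciphertext byte position where the ranged read must start
    static long startAt(long offset) {
        if (offset > AES_BLOCK_SIZE) {
            long rest = offset % AES_BLOCK_SIZE;
            return (offset - AES_BLOCK_SIZE) - rest;
        }
        return 0;
    }

    // decrypted bytes to discard before reaching the requested offset
    // (outputOffset plus the extra block when skipFirstBlock is set)
    static long outputOffset(long offset) {
        long withinBlock = offset % AES_BLOCK_SIZE;
        return offset >= AES_BLOCK_SIZE
            ? withinBlock + AES_BLOCK_SIZE : withinBlock;
    }

    public static void main(String[] args) {
        // offset 100: start reading ciphertext at byte 80,
        // then discard one AES block plus 4 bytes of plaintext
        System.out.println(startAt(100));      // 80
        System.out.println(outputOffset(100)); // 20
    }
}
```

For any offset, `startAt(offset) + outputOffset(offset) == offset`, which is what makes the ranged GET land on the requested plaintext byte.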


@@ -0,0 +1,382 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy.crypto;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.TreeMap;
import javax.annotation.concurrent.ThreadSafe;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.ShortBufferException;
import org.apache.commons.io.IOUtils;
@ThreadSafe
public class DecryptionInputStream extends FilterInputStream {
// the cipher engine to use to process stream data
private final Cipher cipher;
// the secret key
private final SecretKey key;
// the list of parts we expect in the stream
private final TreeMap<Integer, PartPadding> parts;
/* the buffer holding data that have been read in from the
underlying stream, but have not been processed by the cipher
engine. */
private final byte[] ibuffer = new byte[4096];
// having reached the end of the underlying input stream
private boolean done;
/* the buffer holding data that have been processed by the cipher
engine, but have not been read out */
private byte[] obuffer;
// the offset pointing to the next "new" byte
private int ostart;
// the offset pointing to the last "new" byte
private int ofinish;
// stream status
private boolean closed;
// the current part
private int part;
// the remaining bytes of the current part
private long partBytesRemain;
/**
* Constructs a DecryptionInputStream from an InputStream and a
* SecretKey.
* <br>Note: if the specified input stream or key is
* null, a NullPointerException may be thrown later when
* they are used.
*
* @param is the to-be-processed input stream
* @param key the decryption key
* @param parts the list of parts
* @param skipParts the amount of parts to skip
* @param skipPartBytes the amount of part bytes to skip
* @throws IOException if cipher fails
*/
public DecryptionInputStream(InputStream is, SecretKey key,
TreeMap<Integer, PartPadding> parts, int skipParts,
long skipPartBytes)
throws IOException {
super(is);
in = is;
this.parts = parts;
this.key = key;
PartPadding partPadding = parts.get(parts.size() - skipParts);
try {
// init the cipher
cipher = Cipher.getInstance(Constants.AES_CIPHER);
cipher.init(Cipher.DECRYPT_MODE, key, partPadding.getIv());
} catch (Exception e) {
throw new IOException(e);
}
// set the part to begin with
part = parts.size() - skipParts;
// adjust part size due to offset
partBytesRemain = parts.get(part).getSize() - skipPartBytes;
}
/**
* Ensure obuffer is big enough for the next update or doFinal
* operation, given the input length <code>inLen</code> (in bytes)
* The ostart and ofinish indices are reset to 0.
*
* @param inLen the input length (in bytes)
*/
private void ensureCapacity(int inLen) {
int minLen = cipher.getOutputSize(inLen);
if (obuffer == null || obuffer.length < minLen) {
obuffer = new byte[minLen];
}
ostart = 0;
ofinish = 0;
}
/**
* Private convenience function, read in data from the underlying
* input stream and process them with cipher. This method is called
* when the processed bytes inside obuffer have been exhausted.
* <p>
* Entry condition: ostart = ofinish
* <p>
* Exit condition: ostart = 0 AND ostart <= ofinish
* <p>
* return (ofinish-ostart) (we have this many bytes for you)
* return 0 (no data now, but could have more later)
* return -1 (absolutely no more data)
* <p>
* Note: Exceptions are only thrown after the stream is completely read.
* For AEAD ciphers a read() of any length will internally cause the
* whole stream to be read fully and verify the authentication tag before
* returning decrypted data or exceptions.
*/
private int getMoreData() throws IOException {
if (done) {
return -1;
}
int readLimit = ibuffer.length;
if (partBytesRemain < ibuffer.length) {
readLimit = (int) partBytesRemain;
}
int readin;
if (partBytesRemain == 0) {
readin = -1;
} else {
readin = in.read(ibuffer, 0, readLimit);
}
if (readin == -1) {
ensureCapacity(0);
try {
ofinish = cipher.doFinal(obuffer, 0);
} catch (Exception e) {
throw new IOException(e);
}
int nextPart = part - 1;
if (parts.containsKey(nextPart)) {
// reset cipher
PartPadding partPadding = parts.get(nextPart);
try {
cipher.init(Cipher.DECRYPT_MODE, key, partPadding.getIv());
} catch (Exception e) {
throw new IOException(e);
}
// update to the next part
part = nextPart;
// update the remaining bytes of the next part
partBytesRemain = parts.get(nextPart).getSize();
IOUtils.skip(in, Constants.PADDING_BLOCK_SIZE);
return ofinish;
} else {
done = true;
if (ofinish == 0) {
return -1;
} else {
return ofinish;
}
}
}
ensureCapacity(readin);
try {
ofinish = cipher.update(ibuffer, 0, readin, obuffer, ostart);
} catch (ShortBufferException e) {
throw new IOException(e);
}
partBytesRemain = partBytesRemain - readin;
return ofinish;
}
/**
* Reads the next byte of data from this input stream. The value
* byte is returned as an <code>int</code> in the range
* <code>0</code> to <code>255</code>. If no byte is available
* because the end of the stream has been reached, the value
* <code>-1</code> is returned. This method blocks until input data
* is available, the end of the stream is detected, or an exception
* is thrown.
*
* @return the next byte of data, or <code>-1</code> if the end of the
* stream is reached.
* @throws IOException if an I/O error occurs.
*/
@Override
public final int read() throws IOException {
if (ostart >= ofinish) {
// we loop for new data as the spec says we are blocking
int i = 0;
while (i == 0) {
i = getMoreData();
}
if (i == -1) {
return -1;
}
}
return (int) obuffer[ostart++] & 0xff;
}
/**
* Reads up to <code>b.length</code> bytes of data from this input
* stream into an array of bytes.
* <p>
* The <code>read</code> method of <code>InputStream</code> calls
* the <code>read</code> method of three arguments with the arguments
* <code>b</code>, <code>0</code>, and <code>b.length</code>.
*
* @param b the buffer into which the data is read.
* @return the total number of bytes read into the buffer, or
* <code>-1</code> if there is no more data because the end of
* the stream has been reached.
* @throws IOException if an I/O error occurs.
* @see java.io.InputStream#read(byte[], int, int)
*/
@Override
public final int read(byte[] b) throws IOException {
return read(b, 0, b.length);
}
/**
* Reads up to <code>len</code> bytes of data from this input stream
* into an array of bytes. This method blocks until some input is
* available. If the first argument is <code>null</code>, up to
* <code>len</code> bytes are read and discarded.
*
* @param b the buffer into which the data is read.
* @param off the start offset in the destination array
* <code>buf</code>
* @param len the maximum number of bytes read.
* @return the total number of bytes read into the buffer, or
* <code>-1</code> if there is no more data because the end of
* the stream has been reached.
* @throws IOException if an I/O error occurs.
* @see java.io.InputStream#read()
*/
@Override
public final int read(byte[] b, int off, int len) throws IOException {
if (ostart >= ofinish) {
// we loop for new data as the spec says we are blocking
int i = 0;
while (i == 0) {
i = getMoreData();
}
if (i == -1) {
return -1;
}
}
if (len <= 0) {
return 0;
}
int available = ofinish - ostart;
if (len < available) {
available = len;
}
if (b != null) {
System.arraycopy(obuffer, ostart, b, off, available);
}
ostart = ostart + available;
return available;
}
/**
* Skips <code>n</code> bytes of input from the bytes that can be read
* from this input stream without blocking.
*
* <p>Fewer bytes than requested might be skipped.
* The actual number of bytes skipped is equal to <code>n</code> or
* the result of a call to
* {@link #available() available},
* whichever is smaller.
* If <code>n</code> is less than zero, no bytes are skipped.
*
* <p>The actual number of bytes skipped is returned.
*
* @param n the number of bytes to be skipped.
* @return the actual number of bytes skipped.
* @throws IOException if an I/O error occurs.
*/
@Override
public final long skip(long n) throws IOException {
int available = ofinish - ostart;
if (n > available) {
n = available;
}
if (n < 0) {
return 0;
}
ostart += n;
return n;
}
/**
* Returns the number of bytes that can be read from this input
* stream without blocking. The <code>available</code> method of
* <code>InputStream</code> returns <code>0</code>. This method
* <B>should</B> be overridden by subclasses.
*
* @return the number of bytes that can be read from this input stream
* without blocking.
*/
@Override
public final int available() {
return ofinish - ostart;
}
/**
* Closes this input stream and releases any system resources
* associated with the stream.
* <p>
* The <code>close</code> method of <code>DecryptionInputStream</code>
* calls the <code>close</code> method of its underlying input
* stream.
*
* @throws IOException if an I/O error occurs.
*/
@Override
public final void close() throws IOException {
if (closed) {
return;
}
closed = true;
in.close();
// Throw away the unprocessed data and throw no crypto exceptions.
// AEAD ciphers are fully read before closing. Any authentication
// exceptions would occur while reading.
if (!done) {
ensureCapacity(0);
try {
cipher.doFinal(obuffer, 0);
} catch (Exception e) {
// Catch exceptions as the rest of the stream is unused.
}
}
obuffer = null;
}
/**
* Tests if this input stream supports the <code>mark</code>
* and <code>reset</code> methods, which it does not.
*
* @return <code>false</code>, since this class does not support the
* <code>mark</code> and <code>reset</code> methods.
* @see java.io.InputStream#mark(int)
* @see java.io.InputStream#reset()
*/
@Override
public final boolean markSupported() {
return false;
}
}


@@ -0,0 +1,56 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy.crypto;
import java.io.IOException;
import java.io.InputStream;
import java.security.SecureRandom;
import javax.annotation.concurrent.ThreadSafe;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
@ThreadSafe
public class Encryption {
private final InputStream cis;
private final IvParameterSpec iv;
private final int part;
public Encryption(SecretKeySpec key, InputStream isRaw, int partNumber)
throws Exception {
iv = generateIV();
Cipher cipher = Cipher.getInstance(Constants.AES_CIPHER);
cipher.init(Cipher.ENCRYPT_MODE, key, iv);
cis = new CipherInputStream(isRaw, cipher);
part = partNumber;
}
public final InputStream openStream() throws IOException {
return new EncryptionInputStream(cis, part, iv);
}
private IvParameterSpec generateIV() {
byte[] iv = new byte[Constants.AES_BLOCK_SIZE];
SecureRandom randomSecureRandom = new SecureRandom();
randomSecureRandom.nextBytes(iv);
return new IvParameterSpec(iv);
}
}
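The `Encryption` class above pulls plaintext through a `CipherInputStream` rather than buffering it. A hypothetical standalone sketch (names are illustrative, not part of the project) of that pipeline, also showing why `AES/CFB/NoPadding` is convenient here: the cipher itself adds no padding, so ciphertext length equals plaintext length and only the trailing 64-byte part padding grows a blob:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the Encryption/Decryption cipher pipeline: encrypt by
// reading through a CipherInputStream, decrypt with the same key/IV.
public final class CfbRoundTripSketch {
    static String roundTrip(String plaintext) {
        try {
            byte[] keyBytes = new byte[16];
            byte[] ivBytes = new byte[16];
            SecureRandom random = new SecureRandom();
            random.nextBytes(keyBytes);
            random.nextBytes(ivBytes);
            SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");
            IvParameterSpec iv = new IvParameterSpec(ivBytes);

            // encrypt by pulling the stream, as Encryption does
            Cipher enc = Cipher.getInstance("AES/CFB/NoPadding");
            enc.init(Cipher.ENCRYPT_MODE, key, iv);
            InputStream cis = new CipherInputStream(
                new ByteArrayInputStream(
                    plaintext.getBytes(StandardCharsets.UTF_8)), enc);
            byte[] ciphertext = cis.readAllBytes();

            // NoPadding: ciphertext is exactly as long as the plaintext
            Cipher dec = Cipher.getInstance("AES/CFB/NoPadding");
            dec.init(Cipher.DECRYPT_MODE, key, iv);
            return new String(dec.doFinal(ciphertext),
                StandardCharsets.UTF_8);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello transparent encryption"));
    }
}
```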


@@ -0,0 +1,126 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy.crypto;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import javax.crypto.spec.IvParameterSpec;
public class EncryptionInputStream extends InputStream {
private final int part;
private final IvParameterSpec iv;
private boolean hasPadding;
private long size;
private InputStream in;
public EncryptionInputStream(InputStream in, int part,
IvParameterSpec iv) {
this.part = part;
this.iv = iv;
this.in = in;
}
// Padding layout (64 bytes):
// Delimiter (8 bytes)
// IV (16 bytes)
// Part (4 bytes)
// Size (8 bytes)
// Version (2 bytes)
// Reserved (26 bytes)
final void padding() throws IOException {
if (in != null) {
in.close();
}
if (!hasPadding) {
ByteBuffer bb = ByteBuffer.allocate(Constants.PADDING_BLOCK_SIZE);
bb.put(Constants.DELIMITER);
bb.put(iv.getIV());
bb.putInt(part);
bb.putLong(size);
bb.putShort(Constants.VERSION);
in = new ByteArrayInputStream(bb.array());
hasPadding = true;
} else {
in = null;
}
}
public final int available() throws IOException {
if (in == null) {
return 0; // no way to signal EOF from available()
}
return in.available();
}
public final int read() throws IOException {
while (in != null) {
int c = in.read();
if (c != -1) {
size++;
return c;
}
padding();
}
return -1;
}
public final int read(byte[] b, int off, int len) throws IOException {
if (in == null) {
return -1;
} else if (b == null) {
throw new NullPointerException();
} else if (off < 0 || len < 0 || len > b.length - off) {
throw new IndexOutOfBoundsException();
} else if (len == 0) {
return 0;
}
do {
int n = in.read(b, off, len);
if (n > 0) {
size = size + n;
return n;
}
padding();
} while (in != null);
return -1;
}
public final void close() throws IOException {
IOException ioe = null;
while (in != null) {
try {
in.close();
} catch (IOException e) {
if (ioe == null) {
ioe = e;
} else {
ioe.addSuppressed(e);
}
}
padding();
}
if (ioe != null) {
throw ioe;
}
}
}


@@ -0,0 +1,88 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy.crypto;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import javax.crypto.spec.IvParameterSpec;
import org.apache.commons.io.IOUtils;
import org.jclouds.blobstore.domain.Blob;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PartPadding {
private static final Logger logger =
LoggerFactory.getLogger(PartPadding.class);
private String delimiter;
private IvParameterSpec iv;
private int part;
private long size;
private short version;
public static PartPadding readPartPaddingFromBlob(Blob blob)
throws IOException {
PartPadding partPadding = new PartPadding();
InputStream is = blob.getPayload().openStream();
byte[] paddingBytes = IOUtils.toByteArray(is);
ByteBuffer bb = ByteBuffer.wrap(paddingBytes);
byte[] delimiterBytes = new byte[Constants.PADDING_DELIMITER_LENGTH];
bb.get(delimiterBytes);
partPadding.delimiter =
new String(delimiterBytes, StandardCharsets.UTF_8);
byte[] ivBytes = new byte[Constants.PADDING_IV_LENGTH];
bb.get(ivBytes);
partPadding.iv = new IvParameterSpec(ivBytes);
partPadding.part = bb.getInt();
partPadding.size = bb.getLong();
partPadding.version = bb.getShort();
logger.debug("delimiter {}", partPadding.delimiter);
logger.debug("iv {}", Arrays.toString(ivBytes));
logger.debug("part {}", partPadding.part);
logger.debug("size {}", partPadding.size);
logger.debug("version {}", partPadding.version);
return partPadding;
}
public final String getDelimiter() {
return delimiter;
}
public final IvParameterSpec getIv() {
return iv;
}
public final int getPart() {
return part;
}
public final long getSize() {
return size;
}
}


@@ -12,6 +12,9 @@ exec java \
-Ds3proxy.cors-allow-methods="${S3PROXY_CORS_ALLOW_METHODS}" \
-Ds3proxy.cors-allow-headers="${S3PROXY_CORS_ALLOW_HEADERS}" \
-Ds3proxy.ignore-unknown-headers="${S3PROXY_IGNORE_UNKNOWN_HEADERS}" \
-Ds3proxy.encrypted-blobstore="${S3PROXY_ENCRYPTED_BLOBSTORE}" \
-Ds3proxy.encrypted-blobstore-password="${S3PROXY_ENCRYPTED_BLOBSTORE_PASSWORD}" \
-Ds3proxy.encrypted-blobstore-salt="${S3PROXY_ENCRYPTED_BLOBSTORE_SALT}" \
-Djclouds.provider="${JCLOUDS_PROVIDER}" \
-Djclouds.identity="${JCLOUDS_IDENTITY}" \
-Djclouds.credential="${JCLOUDS_CREDENTIAL}" \


@@ -0,0 +1,282 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy;
import static org.assertj.core.api.Assertions.assertThat;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;
import com.google.common.collect.ImmutableMap;
import com.google.common.io.ByteSource;
import com.google.common.util.concurrent.Uninterruptibles;
import org.assertj.core.api.Fail;
import org.gaul.s3proxy.crypto.Constants;
import org.jclouds.aws.AWSResponseException;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.blobstore.domain.PageSet;
import org.jclouds.blobstore.domain.StorageMetadata;
import org.jclouds.blobstore.options.ListContainerOptions;
import org.jclouds.http.options.GetOptions;
import org.jclouds.io.Payload;
import org.jclouds.io.Payloads;
import org.jclouds.s3.S3ClientLiveTest;
import org.jclouds.s3.domain.ListMultipartUploadsResponse;
import org.jclouds.s3.domain.ObjectMetadataBuilder;
import org.jclouds.s3.domain.S3Object;
import org.jclouds.s3.reference.S3Constants;
import org.testng.SkipException;
import org.testng.annotations.AfterClass;
import org.testng.annotations.Test;
@SuppressWarnings("UnstableApiUsage")
@Test(testName = "EncryptedBlobStoreLiveTest")
public final class EncryptedBlobStoreLiveTest extends S3ClientLiveTest {
private static final int AWAIT_CONSISTENCY_TIMEOUT_SECONDS =
Integer.parseInt(
System.getProperty(
"test.blobstore.await-consistency-timeout-seconds",
"0"));
private static final long MINIMUM_MULTIPART_SIZE = 5 * 1024 * 1024;
private S3Proxy s3Proxy;
private BlobStoreContext context;
@AfterClass
public void tearDown() throws Exception {
s3Proxy.stop();
context.close();
}
@Override
protected void awaitConsistency() {
Uninterruptibles.sleepUninterruptibly(
AWAIT_CONSISTENCY_TIMEOUT_SECONDS, TimeUnit.SECONDS);
}
@Override
protected Properties setupProperties() {
TestUtils.S3ProxyLaunchInfo info;
try {
info = TestUtils.startS3Proxy("s3proxy-encryption.conf");
s3Proxy = info.getS3Proxy();
context = info.getBlobStore().getContext();
} catch (Exception e) {
throw new RuntimeException(e);
}
Properties props = super.setupProperties();
props.setProperty(org.jclouds.Constants.PROPERTY_IDENTITY,
info.getS3Identity());
props.setProperty(org.jclouds.Constants.PROPERTY_CREDENTIAL,
info.getS3Credential());
props.setProperty(org.jclouds.Constants.PROPERTY_ENDPOINT,
info.getEndpoint().toString() + info.getServicePath());
props.setProperty(org.jclouds.Constants.PROPERTY_STRIP_EXPECT_HEADER,
"true");
props.setProperty(S3Constants.PROPERTY_S3_SERVICE_PATH,
info.getServicePath());
endpoint = info.getEndpoint().toString() + info.getServicePath();
return props;
}
@Test
public void testOneCharAndCopy() throws InterruptedException {
String blobName = TestUtils.createRandomBlobName();
String containerName = this.getContainerName();
S3Object object = this.getApi().newS3Object();
object.getMetadata().setKey(blobName);
object.setPayload("1");
this.getApi().putObject(containerName, object);
object = this.getApi().getObject(containerName, blobName);
assertThat(object.getMetadata().getContentMetadata()
.getContentLength()).isEqualTo(1L);
PageSet<? extends StorageMetadata>
list = view.getBlobStore().list(containerName);
assertThat(list).hasSize(1);
StorageMetadata md = list.iterator().next();
assertThat(md.getName()).isEqualTo(blobName);
assertThat(md.getSize()).isEqualTo(1L);
this.getApi().copyObject(containerName, blobName, containerName,
blobName + "-copy");
list = view.getBlobStore().list(containerName);
assertThat(list).hasSize(2);
for (StorageMetadata sm : list) {
assertThat(sm.getSize()).isEqualTo(1L);
assertThat(sm.getName()).doesNotContain(
Constants.S3_ENC_SUFFIX);
}
ListContainerOptions lco = new ListContainerOptions();
lco.maxResults(1);
list = view.getBlobStore().list(containerName, lco);
assertThat(list).hasSize(1);
assertThat(list.getNextMarker()).doesNotContain(
Constants.S3_ENC_SUFFIX);
}
@Test
public void testPartialContent() throws InterruptedException, IOException {
String blobName = TestUtils.createRandomBlobName();
String containerName = this.getContainerName();
String content = "123456789A123456789B123456";
S3Object object = this.getApi().newS3Object();
object.getMetadata().setKey(blobName);
object.setPayload(content);
this.getApi().putObject(containerName, object);
// get only 20 bytes
GetOptions options = new GetOptions();
options.range(0, 19);
object = this.getApi().getObject(containerName, blobName, options);
InputStreamReader r =
new InputStreamReader(object.getPayload().openStream());
BufferedReader reader = new BufferedReader(r);
String partialContent = reader.lines().collect(Collectors.joining());
assertThat(partialContent).isEqualTo(content.substring(0, 20));
}
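The `options.range(0, 19)` in the test above uses HTTP Range semantics, which are inclusive on both ends, so 20 bytes come back even though Java's `substring(0, 20)` uses an exclusive end index. A quick sanity check of that arithmetic in plain Python (illustration only, independent of the test harness):

```python
# HTTP Range bounds are inclusive on both ends; Python slices (like
# Java's substring) use an exclusive end index, hence the +1.
content = "123456789A123456789B123456"
start, end = 0, 19                 # mirrors options.range(0, 19) in the test
partial = content[start:end + 1]
assert len(partial) == end - start + 1 == 20
print(partial)                     # prints "123456789A123456789B"
```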
@Test
public void testMultipart() throws InterruptedException, IOException {
String blobName = TestUtils.createRandomBlobName();
String containerName = this.getContainerName();
// 15mb of data
ByteSource byteSource = TestUtils.randomByteSource().slice(
0, MINIMUM_MULTIPART_SIZE * 3);
// first 2 parts with 6mb and last part with 3mb
long partSize = 6 * 1024 * 1024;
long lastPartSize = 3 * 1024 * 1024;
ByteSource byteSource1 = byteSource.slice(0, partSize);
ByteSource byteSource2 = byteSource.slice(partSize, partSize);
ByteSource byteSource3 = byteSource.slice(partSize * 2,
lastPartSize);
String uploadId = this.getApi().initiateMultipartUpload(containerName,
ObjectMetadataBuilder.create().key(blobName).build());
assertThat(this.getApi().listMultipartPartsFull(containerName,
blobName, uploadId)).isEmpty();
ListMultipartUploadsResponse
response = this.getApi()
.listMultipartUploads(containerName, null, null, null, blobName,
null);
assertThat(response.uploads()).hasSize(1);
Payload part1 =
Payloads.newInputStreamPayload(byteSource1.openStream());
part1.getContentMetadata().setContentLength(byteSource1.size());
Payload part2 =
Payloads.newInputStreamPayload(byteSource2.openStream());
part2.getContentMetadata().setContentLength(byteSource2.size());
Payload part3 =
Payloads.newInputStreamPayload(byteSource3.openStream());
part3.getContentMetadata().setContentLength(byteSource3.size());
String eTagOf1 = this.getApi()
.uploadPart(containerName, blobName, 1, uploadId, part1);
String eTagOf2 = this.getApi()
.uploadPart(containerName, blobName, 2, uploadId, part2);
String eTagOf3 = this.getApi()
.uploadPart(containerName, blobName, 3, uploadId, part3);
this.getApi().completeMultipartUpload(containerName, blobName, uploadId,
ImmutableMap.of(1, eTagOf1, 2, eTagOf2, 3, eTagOf3));
S3Object object = this.getApi().getObject(containerName, blobName);
try (InputStream actual = object.getPayload().openStream();
InputStream expected = byteSource.openStream()) {
assertThat(actual).hasContentEqualTo(expected);
}
        // get a 5mb slice that overlaps two parts
long partialStart = 5 * 1024 * 1024;
ByteSource partialContent =
byteSource.slice(partialStart, partialStart);
GetOptions options = new GetOptions();
options.range(partialStart, (partialStart * 2) - 1);
object = this.getApi().getObject(containerName, blobName, options);
try (InputStream actual = object.getPayload().openStream();
InputStream expected = partialContent.openStream()) {
assertThat(actual).hasContentEqualTo(expected);
}
}
@Override
public void testMultipartSynchronously() {
throw new SkipException("list multipart synchronously not supported");
}
@Override
@Test
public void testUpdateObjectACL() throws InterruptedException,
ExecutionException, TimeoutException, IOException {
try {
super.testUpdateObjectACL();
Fail.failBecauseExceptionWasNotThrown(AWSResponseException.class);
} catch (AWSResponseException are) {
assertThat(are.getError().getCode()).isEqualTo("NotImplemented");
throw new SkipException("XML ACLs not supported", are);
}
}
@Override
@Test
public void testPublicWriteOnObject() throws InterruptedException,
ExecutionException, TimeoutException, IOException {
try {
super.testPublicWriteOnObject();
Fail.failBecauseExceptionWasNotThrown(AWSResponseException.class);
} catch (AWSResponseException are) {
assertThat(are.getError().getCode()).isEqualTo("NotImplemented");
throw new SkipException("public-read-write-acl not supported", are);
}
}
@Override
public void testCopyCannedAccessPolicyPublic() {
throw new SkipException("blob access control not supported");
}
@Override
public void testPutCannedAccessPolicyPublic() {
throw new SkipException("blob access control not supported");
}
@Override
public void testUpdateObjectCannedACL() {
throw new SkipException("blob access control not supported");
}
}


@ -0,0 +1,835 @@
/*
* Copyright 2014-2021 Andrew Gaul <andrew@gaul.org>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.gaul.s3proxy;
import static org.assertj.core.api.Assertions.assertThat;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Random;
import java.util.stream.Collectors;
import com.google.common.collect.ImmutableList;
import com.google.inject.Module;
import org.gaul.s3proxy.crypto.Constants;
import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.domain.BlobAccess;
import org.jclouds.blobstore.domain.BlobMetadata;
import org.jclouds.blobstore.domain.MultipartPart;
import org.jclouds.blobstore.domain.MultipartUpload;
import org.jclouds.blobstore.domain.PageSet;
import org.jclouds.blobstore.domain.StorageMetadata;
import org.jclouds.blobstore.domain.StorageType;
import org.jclouds.blobstore.options.CopyOptions;
import org.jclouds.blobstore.options.GetOptions;
import org.jclouds.blobstore.options.ListContainerOptions;
import org.jclouds.blobstore.options.PutOptions;
import org.jclouds.io.Payload;
import org.jclouds.io.Payloads;
import org.jclouds.logging.slf4j.config.SLF4JLoggingModule;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@SuppressWarnings("UnstableApiUsage")
public final class EncryptedBlobStoreTest {
private static final Logger logger =
LoggerFactory.getLogger(EncryptedBlobStoreTest.class);
private BlobStoreContext context;
private BlobStore blobStore;
private String containerName;
private BlobStore encryptedBlobStore;
private static Blob makeBlob(BlobStore blobStore, String blobName,
InputStream is, long contentLength) {
return blobStore.blobBuilder(blobName)
.payload(is)
.contentLength(contentLength)
.build();
}
private static Blob makeBlob(BlobStore blobStore, String blobName,
byte[] payload, long contentLength) {
return blobStore.blobBuilder(blobName)
.payload(payload)
.contentLength(contentLength)
.build();
}
private static Blob makeBlobWithContentType(BlobStore blobStore,
String blobName,
long contentLength,
InputStream is,
String contentType) {
return blobStore.blobBuilder(blobName)
.payload(is)
.contentLength(contentLength)
.contentType(contentType)
.build();
}
@Before
public void setUp() throws Exception {
String password = "Password1234567!";
String salt = "12345678";
containerName = TestUtils.createRandomContainerName();
//noinspection UnstableApiUsage
context = ContextBuilder
.newBuilder("transient")
.credentials("identity", "credential")
.modules(ImmutableList.<Module>of(new SLF4JLoggingModule()))
.build(BlobStoreContext.class);
blobStore = context.getBlobStore();
blobStore.createContainerInLocation(null, containerName);
Properties properties = new Properties();
properties.put(S3ProxyConstants.PROPERTY_ENCRYPTED_BLOBSTORE, "true");
properties.put(S3ProxyConstants.PROPERTY_ENCRYPTED_BLOBSTORE_PASSWORD,
password);
properties.put(S3ProxyConstants.PROPERTY_ENCRYPTED_BLOBSTORE_SALT,
salt);
encryptedBlobStore =
EncryptedBlobStore.newEncryptedBlobStore(blobStore, properties);
}
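The `setUp` above only hands the password and salt to `EncryptedBlobStore`; per the design notes, a 128-bit key is derived from them, but the exact KDF lives inside the crypto package. As a hypothetical illustration of such a derivation (PBKDF2 and the iteration count are assumptions here, not necessarily what S3Proxy actually uses):

```python
import hashlib

# Hypothetical sketch: derive a 128-bit key from the same password and
# salt the test passes in. PBKDF2-HMAC-SHA256 and 100_000 iterations are
# illustrative assumptions, not S3Proxy's actual implementation.
password = b"Password1234567!"
salt = b"12345678"
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=16)
assert len(key) == 16  # 128-bit key
print(key.hex())
```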
@After
public void tearDown() throws Exception {
if (context != null) {
blobStore.deleteContainer(containerName);
context.close();
}
}
@Test
public void testBlobNotExists() {
String blobName = TestUtils.createRandomBlobName();
Blob blob = encryptedBlobStore.getBlob(containerName, blobName);
assertThat(blob).isNull();
blob = encryptedBlobStore.getBlob(containerName, blobName,
new GetOptions());
assertThat(blob).isNull();
}
@Test
public void testBlobNotEncrypted() throws Exception {
String[] tests = new String[] {
"1", // only 1 char
            "123456789A12345", // shorter than the AES block
            "123456789A1234567", // one byte longer than the AES block
"123456789A123456123456789B123456123456789C" +
"1234123456789A123456123456789B123456123456789C1234"
};
Map<String, Long> contentLengths = new HashMap<>();
for (String content : tests) {
String blobName = TestUtils.createRandomBlobName();
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
contentLengths.put(blobName, (long) content.length());
Blob blob = makeBlob(blobStore, blobName, is, content.length());
blobStore.putBlob(containerName, blob);
blob = encryptedBlobStore.getBlob(containerName, blobName);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(content).isEqualTo(plaintext);
GetOptions options = new GetOptions();
blob = encryptedBlobStore.getBlob(containerName, blobName, options);
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {} with empty options ", plaintext);
assertThat(content).isEqualTo(plaintext);
}
PageSet<? extends StorageMetadata> blobs =
encryptedBlobStore.list(containerName, new ListContainerOptions());
for (StorageMetadata blob : blobs) {
assertThat(blob.getSize()).isEqualTo(
contentLengths.get(blob.getName()));
}
blobs = encryptedBlobStore.list();
StorageMetadata metadata = blobs.iterator().next();
assertThat(StorageType.CONTAINER).isEqualTo(metadata.getType());
}
@Test
public void testListEncrypted() {
String[] contents = new String[] {
"1", // only 1 char
            "123456789A12345", // shorter than the AES block
            "123456789A1234567", // one byte longer than the AES block
"123456789A123456123456789B123456123456789C1234"
};
Map<String, Long> contentLengths = new HashMap<>();
for (String content : contents) {
String blobName = TestUtils.createRandomBlobName();
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
contentLengths.put(blobName, (long) content.length());
Blob blob =
makeBlob(encryptedBlobStore, blobName, is, content.length());
encryptedBlobStore.putBlob(containerName, blob);
}
PageSet<? extends StorageMetadata> blobs =
encryptedBlobStore.list(containerName);
for (StorageMetadata blob : blobs) {
assertThat(blob.getSize()).isEqualTo(
contentLengths.get(blob.getName()));
}
blobs =
encryptedBlobStore.list(containerName, new ListContainerOptions());
for (StorageMetadata blob : blobs) {
assertThat(blob.getSize()).isEqualTo(
contentLengths.get(blob.getName()));
encryptedBlobStore.removeBlob(containerName, blob.getName());
}
blobs =
encryptedBlobStore.list(containerName, new ListContainerOptions());
assertThat(blobs.size()).isEqualTo(0);
}
@Test
public void testListEncryptedMultipart() {
String blobName = TestUtils.createRandomBlobName();
String[] contentParts = new String[] {
"123456789A123456123456789B123456123456789C1234",
"123456789D123456123456789E123456123456789F123456",
"123456789G123456123456789H123456123456789I123"
};
String content = contentParts[0] + contentParts[1] + contentParts[2];
BlobMetadata blobMetadata = makeBlob(encryptedBlobStore, blobName,
content.getBytes(StandardCharsets.UTF_8),
content.length()).getMetadata();
MultipartUpload mpu =
encryptedBlobStore.initiateMultipartUpload(containerName,
blobMetadata, new PutOptions());
Payload payload1 = Payloads.newByteArrayPayload(
contentParts[0].getBytes(StandardCharsets.UTF_8));
Payload payload2 = Payloads.newByteArrayPayload(
contentParts[1].getBytes(StandardCharsets.UTF_8));
Payload payload3 = Payloads.newByteArrayPayload(
contentParts[2].getBytes(StandardCharsets.UTF_8));
encryptedBlobStore.uploadMultipartPart(mpu, 1, payload1);
encryptedBlobStore.uploadMultipartPart(mpu, 2, payload2);
encryptedBlobStore.uploadMultipartPart(mpu, 3, payload3);
List<MultipartPart> parts = encryptedBlobStore.listMultipartUpload(mpu);
int index = 0;
for (MultipartPart part : parts) {
assertThat((long) contentParts[index].length()).isEqualTo(
part.partSize());
index++;
}
encryptedBlobStore.completeMultipartUpload(mpu, parts);
PageSet<? extends StorageMetadata> blobs =
encryptedBlobStore.list(containerName);
StorageMetadata metadata = blobs.iterator().next();
assertThat((long) content.length()).isEqualTo(metadata.getSize());
ListContainerOptions options = new ListContainerOptions();
blobs = encryptedBlobStore.list(containerName, options.withDetails());
metadata = blobs.iterator().next();
assertThat((long) content.length()).isEqualTo(metadata.getSize());
blobs = encryptedBlobStore.list();
metadata = blobs.iterator().next();
assertThat(StorageType.CONTAINER).isEqualTo(metadata.getType());
List<String> singleList = new ArrayList<>();
singleList.add(blobName);
encryptedBlobStore.removeBlobs(containerName, singleList);
blobs = encryptedBlobStore.list(containerName);
assertThat(blobs.size()).isEqualTo(0);
}
@Test
public void testBlobNotEncryptedRanges() throws Exception {
for (int run = 0; run < 100; run++) {
String[] tests = new String[] {
                "123456789A12345", // shorter than the AES block
                "123456789A1234567", // one byte longer than the AES block
"123456789A123456123456789B123456123456789C" +
"1234123456789A123456123456789B123456123456789C1234"
};
for (String content : tests) {
String blobName = TestUtils.createRandomBlobName();
Random rand = new Random();
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
Blob blob = makeBlob(blobStore, blobName, is, content.length());
blobStore.putBlob(containerName, blob);
GetOptions options = new GetOptions();
int offset = rand.nextInt(content.length() - 1);
logger.debug("content {} with offset {}", content, offset);
options.startAt(offset);
blob = encryptedBlobStore.getBlob(containerName, blobName,
options);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {} with offset {}", plaintext, offset);
assertThat(plaintext).isEqualTo(content.substring(offset));
options = new GetOptions();
int tail = rand.nextInt(content.length());
if (tail == 0) {
tail++;
}
logger.debug("content {} with tail {}", content, tail);
options.tail(tail);
blob = encryptedBlobStore.getBlob(containerName, blobName,
options);
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {} with tail {}", plaintext, tail);
assertThat(plaintext).isEqualTo(
content.substring(content.length() - tail));
options = new GetOptions();
offset = 1;
int end = content.length() - 2;
logger.debug("content {} with range {}-{}", content, offset,
end);
options.range(offset, end);
blob = encryptedBlobStore.getBlob(containerName, blobName,
options);
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {} with range {}-{}", plaintext, offset,
end);
assertThat(plaintext).isEqualTo(
content.substring(offset, end + 1));
}
}
}
@Test
public void testEncryptContent() throws Exception {
String[] tests = new String[] {
"1", // only 1 char
            "123456789A12345", // shorter than the AES block
            "123456789A1234567", // one byte longer than the AES block
"123456789A123456123456789B123456123456789C1234"
};
for (String content : tests) {
String blobName = TestUtils.createRandomBlobName();
String contentType = "plain/text";
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
Blob blob = makeBlobWithContentType(encryptedBlobStore, blobName,
content.length(), is, contentType);
encryptedBlobStore.putBlob(containerName, blob);
blob = encryptedBlobStore.getBlob(containerName, blobName);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).isEqualTo(content);
blob = blobStore.getBlob(containerName,
blobName + Constants.S3_ENC_SUFFIX);
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
String encrypted = reader.lines().collect(Collectors.joining());
logger.debug("encrypted {}", encrypted);
assertThat(content).isNotEqualTo(encrypted);
assertThat(encryptedBlobStore.blobExists(containerName,
blobName)).isTrue();
BlobAccess access =
encryptedBlobStore.getBlobAccess(containerName, blobName);
assertThat(access).isEqualTo(BlobAccess.PRIVATE);
encryptedBlobStore.setBlobAccess(containerName, blobName,
BlobAccess.PUBLIC_READ);
access = encryptedBlobStore.getBlobAccess(containerName, blobName);
assertThat(access).isEqualTo(BlobAccess.PUBLIC_READ);
}
}
@Test
public void testEncryptContentWithOptions() throws Exception {
String[] tests = new String[] {
"1", // only 1 char
            "123456789A12345", // shorter than the AES block
            "123456789A1234567", // one byte longer than the AES block
"123456789A123456123456789B123456123456789C1234"
};
for (String content : tests) {
String blobName = TestUtils.createRandomBlobName();
String contentType = "plain/text; charset=utf-8";
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
Blob blob = makeBlobWithContentType(encryptedBlobStore, blobName,
content.length(), is, contentType);
PutOptions options = new PutOptions();
encryptedBlobStore.putBlob(containerName, blob, options);
blob = encryptedBlobStore.getBlob(containerName, blobName);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(content).isEqualTo(plaintext);
blob = blobStore.getBlob(containerName,
blobName + Constants.S3_ENC_SUFFIX);
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
String encrypted = reader.lines().collect(Collectors.joining());
logger.debug("encrypted {}", encrypted);
assertThat(content).isNotEqualTo(encrypted);
BlobMetadata metadata =
encryptedBlobStore.blobMetadata(containerName,
blobName + Constants.S3_ENC_SUFFIX);
assertThat(contentType).isEqualTo(
metadata.getContentMetadata().getContentType());
encryptedBlobStore.copyBlob(containerName, blobName,
containerName, blobName + "-copy", CopyOptions.NONE);
blob = blobStore.getBlob(containerName,
blobName + Constants.S3_ENC_SUFFIX);
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
encrypted = reader.lines().collect(Collectors.joining());
logger.debug("encrypted {}", encrypted);
assertThat(content).isNotEqualTo(encrypted);
blob =
encryptedBlobStore.getBlob(containerName, blobName + "-copy");
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(content).isEqualTo(plaintext);
}
}
@Test
public void testEncryptMultipartContent() throws Exception {
String blobName = TestUtils.createRandomBlobName();
String content1 = "123456789A123456123456789B123456123456789C1234";
String content2 = "123456789D123456123456789E123456123456789F123456";
String content3 = "123456789G123456123456789H123456123456789I123";
String content = content1 + content2 + content3;
BlobMetadata blobMetadata = makeBlob(encryptedBlobStore, blobName,
content.getBytes(StandardCharsets.UTF_8),
content.length()).getMetadata();
MultipartUpload mpu =
encryptedBlobStore.initiateMultipartUpload(containerName,
blobMetadata, new PutOptions());
Payload payload1 = Payloads.newByteArrayPayload(
content1.getBytes(StandardCharsets.UTF_8));
Payload payload2 = Payloads.newByteArrayPayload(
content2.getBytes(StandardCharsets.UTF_8));
Payload payload3 = Payloads.newByteArrayPayload(
content3.getBytes(StandardCharsets.UTF_8));
encryptedBlobStore.uploadMultipartPart(mpu, 1, payload1);
encryptedBlobStore.uploadMultipartPart(mpu, 2, payload2);
encryptedBlobStore.uploadMultipartPart(mpu, 3, payload3);
List<MultipartUpload> mpus =
encryptedBlobStore.listMultipartUploads(containerName);
assertThat(mpus.size()).isEqualTo(1);
List<MultipartPart> parts = encryptedBlobStore.listMultipartUpload(mpu);
assertThat(mpus.get(0).id()).isEqualTo(mpu.id());
encryptedBlobStore.completeMultipartUpload(mpu, parts);
Blob blob = encryptedBlobStore.getBlob(containerName, blobName);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).isEqualTo(content);
blob = blobStore.getBlob(containerName,
blobName + Constants.S3_ENC_SUFFIX);
blobIs = blob.getPayload().openStream();
r = new InputStreamReader(blobIs);
reader = new BufferedReader(r);
String encrypted = reader.lines().collect(Collectors.joining());
logger.debug("encrypted {}", encrypted);
assertThat(content).isNotEqualTo(encrypted);
}
@Test
public void testReadPartial() throws Exception {
for (int offset = 0; offset < 60; offset++) {
logger.debug("Test with offset {}", offset);
String blobName = TestUtils.createRandomBlobName();
String content =
"123456789A123456123456789B123456123456789" +
"C123456789D123456789E12345";
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
Blob blob =
makeBlob(encryptedBlobStore, blobName, is, content.length());
encryptedBlobStore.putBlob(containerName, blob);
GetOptions options = new GetOptions();
options.startAt(offset);
blob = encryptedBlobStore.getBlob(containerName, blobName, options);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).isEqualTo(content.substring(offset));
}
}
@Test
public void testReadTail() throws Exception {
for (int length = 1; length < 60; length++) {
logger.debug("Test with length {}", length);
String blobName = TestUtils.createRandomBlobName();
String content =
"123456789A123456123456789B123456123456789C" +
"123456789D123456789E12345";
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
Blob blob =
makeBlob(encryptedBlobStore, blobName, is, content.length());
encryptedBlobStore.putBlob(containerName, blob);
GetOptions options = new GetOptions();
options.tail(length);
blob = encryptedBlobStore.getBlob(containerName, blobName, options);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).isEqualTo(
content.substring(content.length() - length));
}
}
@Test
public void testReadPartialWithRandomEnd() throws Exception {
for (int run = 0; run < 100; run++) {
for (int offset = 0; offset < 50; offset++) {
Random rand = new Random();
int end = offset + rand.nextInt(20) + 2;
int size = end - offset + 1;
logger.debug("Test with offset {} and end {} size {}",
offset, end, size);
String blobName = TestUtils.createRandomBlobName();
String content =
"123456789A123456-123456789B123456-123456789C123456-" +
"123456789D123456-123456789E123456";
InputStream is = new ByteArrayInputStream(
content.getBytes(StandardCharsets.UTF_8));
Blob blob = makeBlob(encryptedBlobStore, blobName, is,
content.length());
encryptedBlobStore.putBlob(containerName, blob);
GetOptions options = new GetOptions();
options.range(offset, end);
blob = encryptedBlobStore.getBlob(containerName, blobName,
options);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).hasSize(size);
assertThat(plaintext).isEqualTo(
content.substring(offset, end + 1));
}
}
}
@Test
public void testMultipartReadPartial() throws Exception {
for (int offset = 0; offset < 130; offset++) {
logger.debug("Test with offset {}", offset);
String blobName = TestUtils.createRandomBlobName();
String content1 = "PART1-789A123456123456789B123456123456789C1234";
String content2 =
"PART2-789D123456123456789E123456123456789F123456";
String content3 = "PART3-789G123456123456789H123456123456789I123";
String content = content1 + content2 + content3;
BlobMetadata blobMetadata = makeBlob(encryptedBlobStore, blobName,
content.getBytes(StandardCharsets.UTF_8),
content.length()).getMetadata();
MultipartUpload mpu =
encryptedBlobStore.initiateMultipartUpload(containerName,
blobMetadata, new PutOptions());
Payload payload1 = Payloads.newByteArrayPayload(
content1.getBytes(StandardCharsets.UTF_8));
Payload payload2 = Payloads.newByteArrayPayload(
content2.getBytes(StandardCharsets.UTF_8));
Payload payload3 = Payloads.newByteArrayPayload(
content3.getBytes(StandardCharsets.UTF_8));
encryptedBlobStore.uploadMultipartPart(mpu, 1, payload1);
encryptedBlobStore.uploadMultipartPart(mpu, 2, payload2);
encryptedBlobStore.uploadMultipartPart(mpu, 3, payload3);
List<MultipartPart> parts =
encryptedBlobStore.listMultipartUpload(mpu);
encryptedBlobStore.completeMultipartUpload(mpu, parts);
GetOptions options = new GetOptions();
options.startAt(offset);
Blob blob =
encryptedBlobStore.getBlob(containerName, blobName, options);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).isEqualTo(content.substring(offset));
}
}
@Test
public void testMultipartReadTail() throws Exception {
for (int length = 1; length < 130; length++) {
logger.debug("Test with length {}", length);
String blobName = TestUtils.createRandomBlobName();
String content1 = "PART1-789A123456123456789B123456123456789C1234";
String content2 =
"PART2-789D123456123456789E123456123456789F123456";
String content3 = "PART3-789G123456123456789H123456123456789I123";
String content = content1 + content2 + content3;
BlobMetadata blobMetadata = makeBlob(encryptedBlobStore, blobName,
content.getBytes(StandardCharsets.UTF_8),
content.length()).getMetadata();
MultipartUpload mpu =
encryptedBlobStore.initiateMultipartUpload(containerName,
blobMetadata, new PutOptions());
Payload payload1 = Payloads.newByteArrayPayload(
content1.getBytes(StandardCharsets.UTF_8));
Payload payload2 = Payloads.newByteArrayPayload(
content2.getBytes(StandardCharsets.UTF_8));
Payload payload3 = Payloads.newByteArrayPayload(
content3.getBytes(StandardCharsets.UTF_8));
encryptedBlobStore.uploadMultipartPart(mpu, 1, payload1);
encryptedBlobStore.uploadMultipartPart(mpu, 2, payload2);
encryptedBlobStore.uploadMultipartPart(mpu, 3, payload3);
List<MultipartPart> parts =
encryptedBlobStore.listMultipartUpload(mpu);
encryptedBlobStore.completeMultipartUpload(mpu, parts);
GetOptions options = new GetOptions();
options.tail(length);
Blob blob =
encryptedBlobStore.getBlob(containerName, blobName, options);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).isEqualTo(
content.substring(content.length() - length));
}
}
@Test
public void testMultipartReadPartialWithRandomEnd() throws Exception {
for (int run = 0; run < 100; run++) {
// total len = 139
for (int offset = 0; offset < 70; offset++) {
Random rand = new Random();
int end = offset + rand.nextInt(60) + 2;
int size = end - offset + 1;
logger.debug("Test with offset {} and end {} size {}",
offset, end, size);
String blobName = TestUtils.createRandomBlobName();
String content1 =
"PART1-789A123456123456789B123456123456789C1234";
String content2 =
"PART2-789D123456123456789E123456123456789F123456";
String content3 =
"PART3-789G123456123456789H123456123456789I123";
String content = content1 + content2 + content3;
BlobMetadata blobMetadata =
makeBlob(encryptedBlobStore, blobName,
content.getBytes(StandardCharsets.UTF_8),
content.length()).getMetadata();
MultipartUpload mpu =
encryptedBlobStore.initiateMultipartUpload(containerName,
blobMetadata, new PutOptions());
Payload payload1 = Payloads.newByteArrayPayload(
content1.getBytes(StandardCharsets.UTF_8));
Payload payload2 = Payloads.newByteArrayPayload(
content2.getBytes(StandardCharsets.UTF_8));
Payload payload3 = Payloads.newByteArrayPayload(
content3.getBytes(StandardCharsets.UTF_8));
encryptedBlobStore.uploadMultipartPart(mpu, 1, payload1);
encryptedBlobStore.uploadMultipartPart(mpu, 2, payload2);
encryptedBlobStore.uploadMultipartPart(mpu, 3, payload3);
List<MultipartPart> parts =
encryptedBlobStore.listMultipartUpload(mpu);
encryptedBlobStore.completeMultipartUpload(mpu, parts);
GetOptions options = new GetOptions();
options.range(offset, end);
Blob blob = encryptedBlobStore.getBlob(containerName, blobName,
options);
InputStream blobIs = blob.getPayload().openStream();
InputStreamReader r = new InputStreamReader(blobIs);
BufferedReader reader = new BufferedReader(r);
String plaintext = reader.lines().collect(Collectors.joining());
logger.debug("plaintext {}", plaintext);
assertThat(plaintext).isEqualTo(
content.substring(offset, end + 1));
}
}
}
}
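The `testReadPartial`, `testReadTail`, and range tests above exercise the property that makes `AES/CFB/NoPadding` suitable for offset reads: to decrypt from ciphertext block `i`, only block `i-1` (or the IV) is needed as the feedback value, so the store can seek without decrypting from the start. A minimal pure-Python sketch of that feedback structure (a toy hash-based block function stands in for AES; this illustrates the math only and is not secure):

```python
import hashlib

BLOCK = 16

def ek(key: bytes, block: bytes) -> bytes:
    # Toy stand-in for the AES block encryption E_K -- NOT secure,
    # only here to show the CFB feedback structure.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cfb_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    out, prev = bytearray(), iv
    for i in range(0, len(plaintext), BLOCK):
        ct = xor(plaintext[i:i + BLOCK], ek(key, prev))  # C_i = P_i XOR E_K(C_{i-1})
        out += ct
        if len(ct) == BLOCK:          # a trailing short block produces no feedback
            prev = ct
    return bytes(out)

def cfb_decrypt_from(key: bytes, iv: bytes, ciphertext: bytes,
                     offset: int) -> bytes:
    # Seed the feedback register with the ciphertext block *before* the
    # one containing `offset` (or the IV for block 0) -- no earlier
    # ciphertext needs to be decrypted.
    start = offset // BLOCK
    prev = iv if start == 0 else ciphertext[(start - 1) * BLOCK:start * BLOCK]
    out = bytearray()
    for i in range(start * BLOCK, len(ciphertext), BLOCK):
        ct = ciphertext[i:i + BLOCK]
        out += xor(ct, ek(key, prev))
        if len(ct) == BLOCK:
            prev = ct
    return bytes(out)[offset - start * BLOCK:]

key, iv = b"0123456789abcdef", b"fedcba9876543210"
pt = bytes(range(100))
ct = cfb_encrypt(key, iv, pt)
assert cfb_decrypt_from(key, iv, ct, 0) == pt
assert cfb_decrypt_from(key, iv, ct, 37) == pt[37:]  # mid-block offset read
```

This also shows why, as the design notes say, decrypting from an offset must consider the preceding 16 ciphertext bytes.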


@ -188,6 +188,14 @@ final class TestUtils {
BlobStoreContext context = builder.build(BlobStoreContext.class);
info.blobStore = context.getBlobStore();
String encrypted = info.getProperties().getProperty(
S3ProxyConstants.PROPERTY_ENCRYPTED_BLOBSTORE);
if (encrypted != null && encrypted.equals("true")) {
info.blobStore =
EncryptedBlobStore.newEncryptedBlobStore(info.blobStore,
info.getProperties());
}
S3Proxy.Builder s3ProxyBuilder = S3Proxy.Builder.fromProperties(
info.getProperties());
s3ProxyBuilder.blobStore(info.blobStore);


@ -0,0 +1,20 @@
s3proxy.endpoint=http://127.0.0.1:0
s3proxy.secure-endpoint=https://127.0.0.1:0
#s3proxy.service-path=s3proxy
# authorization must be aws-v2, aws-v4, aws-v2-or-v4, or none
s3proxy.authorization=aws-v2-or-v4
s3proxy.identity=local-identity
s3proxy.credential=local-credential
s3proxy.keystore-path=keystore.jks
s3proxy.keystore-password=password
jclouds.provider=transient
jclouds.identity=remote-identity
jclouds.credential=remote-credential
# endpoint is optional for some providers
#jclouds.endpoint=http://127.0.0.1:8081
jclouds.filesystem.basedir=/tmp/blobstore
s3proxy.encrypted-blobstore=true
s3proxy.encrypted-blobstore-password=1234567890123456
s3proxy.encrypted-blobstore-salt=12345678