Bash syntax highlighting

main
Simon Willison 2024-12-18 21:43:56 -08:00
parent 15922d2dfc
commit e896f46f65
1 changed file with 150 additions and 127 deletions


@@ -18,8 +18,10 @@ You can use the `s3-credentials policy` command to generate the JSON policy docu

- `--public-bucket` - generate a bucket policy for a public bucket

With none of these options it defaults to a read-write policy.
```bash
s3-credentials policy my-bucket --read-only
```
```json
{
    "Version": "2012-10-17",
    ...
@@ -28,15 +30,19 @@ With none of these options it defaults to a read-write policy.

## whoami

To see which user you are authenticated as:
```bash
s3-credentials whoami
```
This will output JSON representing the currently authenticated user.

Using this with the `--auth` option is useful for verifying created credentials:
```bash
s3-credentials create static.niche-museums.com --read-only > auth.json
```
```bash
s3-credentials whoami --auth auth.json
```
```json
{
    "UserId": "AIDAWXFXAIOZPIZC6MHAG",
    "Account": "462092780466",
@@ -46,9 +52,9 @@ s3-credentials whoami --auth auth.json

## list-users

To see a list of all users that exist for your AWS account:
```bash
s3-credentials list-users
```
This will return a pretty-printed array of JSON objects by default.

Add `--nl` to collapse these to single lines as valid newline-delimited JSON.
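As an illustration of what `--nl` produces, any pretty-printed JSON array can be collapsed to newline-delimited JSON. A minimal Python sketch (the user records here are made-up placeholders, not real output):

```python
import json

# A pretty-printed JSON array, as list-users returns by default
# (hypothetical placeholder records)
pretty = """[
    {
        "UserName": "s3.read-write.example-bucket",
        "UserId": "AIDAEXAMPLEONE"
    },
    {
        "UserName": "s3.read-only.example-bucket",
        "UserId": "AIDAEXAMPLETWO"
    }
]"""

# Newline-delimited JSON: one compact object per line
users = json.loads(pretty)
ndjson = "\n".join(json.dumps(user) for user in users)
print(ndjson)
```

Each output line is itself valid JSON, which makes the format easy to stream through tools like `jq` or to append to log files.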
@@ -59,35 +65,43 @@ Add `--csv` or `--tsv` to get back CSV or TSV data.

Shows a list of all buckets in your AWS account.
```bash
s3-credentials list-buckets
```
```json
[
    {
        "Name": "aws-cloudtrail-logs-462092780466-f2c900d3",
        "CreationDate": "2021-03-25 22:19:54+00:00"
    },
    {
        "Name": "simonw-test-bucket-for-s3-credentials",
        "CreationDate": "2021-11-03 21:46:12+00:00"
    }
]
```
With no extra arguments this will show all available buckets - you can also add one or more explicit bucket names to see just those buckets:
```bash
s3-credentials list-buckets simonw-test-bucket-for-s3-credentials
```
```json
[
    {
        "Name": "simonw-test-bucket-for-s3-credentials",
        "CreationDate": "2021-11-03 21:46:12+00:00"
    }
]
```
This accepts the same `--nl`, `--csv` and `--tsv` options as `list-users`.

Add `--details` to include details of the bucket ACL, website configuration and public access block settings. This is useful for running a security audit of your buckets.

Using `--details` adds several additional API calls for each bucket, so it is advisable to use it with one or more explicit bucket names.
```bash
s3-credentials list-buckets simonw-test-public-website-bucket --details
```
```json
[
    {
        "Name": "simonw-test-public-website-bucket",
@@ -157,8 +171,10 @@ A bucket with `public_access_block` might look like this:

To list the contents of a bucket, use `list-bucket`:
```bash
s3-credentials list-bucket static.niche-museums.com
```
```json
[
    {
        "Key": "Griffith-Observatory.jpg",
@@ -186,9 +202,10 @@ Add `--urls` to include a `URL` field in the output providing the full URL to ea

To see a list of inline policies belonging to users:
```bash
s3-credentials list-user-policies s3.read-write.static.niche-museums.com
```
```
User: s3.read-write.static.niche-museums.com
PolicyName: s3.read-write.static.niche-museums.com
{
@@ -214,9 +231,9 @@ PolicyName: s3.read-write.static.niche-museums.com
}
```
You can pass any number of usernames here. If you don't specify a username the tool will loop through every user belonging to your account:
```bash
s3-credentials list-user-policies
```
## list-roles

The `list-roles` command lists all of the roles available for the authenticated account.
@@ -227,8 +244,10 @@ You can optionally add one or more role names to the command to display and fetc

Example usage:
```bash
s3-credentials list-roles AWSServiceRoleForLightsail --details
```
```json
[
    {
        "Path": "/aws-service-role/lightsail.amazonaws.com/",
@@ -342,8 +361,10 @@ Deleting AWS users is a little fiddly: you first need to delete their access key

The `s3-credentials delete-user` handles this for you:
```bash
s3-credentials delete-user s3.read-write.simonw-test-bucket-10
```
```
User: s3.read-write.simonw-test-bucket-10
Deleted policy: s3.read-write.simonw-test-bucket-10
Deleted access key: AKIAWXFXAIOZK3GPEIWR
@@ -354,54 +375,54 @@ You can pass it multiple usernames to delete multiple users at a time.

## put-object

You can upload a file to a key in an S3 bucket using `s3-credentials put-object`:
```bash
s3-credentials put-object my-bucket my-key.txt /path/to/file.txt
```
Use `-` as the file name to upload from standard input:
```bash
echo "Hello" | s3-credentials put-object my-bucket hello.txt -
```
This command shows a progress bar by default. Use `-s` or `--silent` to hide the progress bar.

The `Content-Type` on the uploaded object will be automatically set based on the file extension. If you are using standard input, or you want to override the detected type, you can do so using the `--content-type` option:
```bash
echo "<h1>Hello World</h1>" | \
  s3-credentials put-object my-bucket hello.html - --content-type "text/html"
```
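The extension-based detection can be pictured with Python's standard `mimetypes` module. This is a sketch of the general mechanism, not necessarily the tool's exact implementation:

```python
import mimetypes

# Guess a Content-Type from the file extension
content_type, _encoding = mimetypes.guess_type("hello.html")
print(content_type)  # text/html

# Standard input has no file extension, so nothing can be guessed -
# this is why an explicit --content-type is needed there
print(mimetypes.guess_type("-"))  # (None, None)
```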
## put-objects

`s3-credentials put-objects` can be used to upload more than one file at once.

Pass one or more filenames to upload them to the root of your bucket:
```bash
s3-credentials put-objects my-bucket one.txt two.txt three.txt
```
Use `--prefix my-prefix` to upload them to the specified prefix:
```bash
s3-credentials put-objects my-bucket one.txt --prefix my-prefix
```
This will upload the file to `my-prefix/one.txt`.
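The resulting key is the prefix joined to the filename with a `/`. Since S3 keys always use forward slashes regardless of platform, `posixpath` is a safe way to sketch this:

```python
import posixpath

# Combine a --prefix with an uploaded filename to form the S3 key
prefix = "my-prefix"
filename = "one.txt"
key = posixpath.join(prefix, filename)
print(key)  # my-prefix/one.txt
```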
Pass one or more directories to upload the contents of those directories.

`.` uploads everything in your current directory:
```bash
s3-credentials put-objects my-bucket .
```
Passing directory names will upload the directory and all of its contents:
```bash
s3-credentials put-objects my-bucket my-directory
```
If `my-directory` had files `one.txt` and `two.txt` in it, the result would be:
```
my-directory/one.txt
my-directory/two.txt
```
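Assuming keys mirror the file paths relative to where the command is run, the key names for a directory upload can be sketched with `pathlib`:

```python
from pathlib import Path
import tempfile

# Build a throwaway my-directory containing two files
root = Path(tempfile.mkdtemp())
directory = root / "my-directory"
directory.mkdir()
(directory / "one.txt").write_text("one")
(directory / "two.txt").write_text("two")

# Keys preserve the directory structure, with "/" separators
keys = sorted(
    path.relative_to(root).as_posix()
    for path in directory.rglob("*")
    if path.is_file()
)
print(keys)  # ['my-directory/one.txt', 'my-directory/two.txt']
```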
A progress bar will be shown by default. Use `-s` or `--silent` to hide it.

Add `--dry-run` to get a preview of what would be uploaded without uploading anything:
```bash
s3-credentials put-objects my-bucket . --dry-run
```
```
out/IMG_1254.jpeg => s3://my-bucket/out/IMG_1254.jpeg
out/alverstone-mead-2.jpg => s3://my-bucket/out/alverstone-mead-2.jpg
@@ -413,49 +434,49 @@ out/alverstone-mead-1.jpg => s3://my-bucket/out/alverstone-mead-1.jpg

`s3-credentials delete-objects` can be used to delete one or more keys from the bucket.

Pass one or more keys to delete them:
```bash
s3-credentials delete-objects my-bucket one.txt two.txt three.txt
```
Use `--prefix my-prefix` to delete all keys with the specified prefix:
```bash
s3-credentials delete-objects my-bucket --prefix my-prefix
```
Pass `-d` or `--dry-run` to perform a dry-run of the deletion, which will list the keys that would be deleted without actually deleting them.
```bash
s3-credentials delete-objects my-bucket --prefix my-prefix --dry-run
```
## get-object

To download a file from a bucket use `s3-credentials get-object`:
```bash
s3-credentials get-object my-bucket hello.txt
```
This defaults to outputting the downloaded file to the terminal. You can instead direct it to save to a file on disk using the `-o` or `--output` option:
```bash
s3-credentials get-object my-bucket hello.txt -o /path/to/hello.txt
```
## get-objects

`s3-credentials get-objects` can be used to download multiple files from a bucket at once.

Without extra arguments, this downloads everything:
```bash
s3-credentials get-objects my-bucket
```
Files will be written to the current directory by default, preserving their directory structure from the bucket.

To write to a different directory use `--output` or `-o`:
```bash
s3-credentials get-objects my-bucket -o /path/to/output
```
To download multiple specific files, add them as arguments to the command:
```bash
s3-credentials get-objects my-bucket one.txt two.txt path/to/three.txt
```
You can pass one or more `--pattern` or `-p` options to download files matching a specific pattern:
```bash
s3-credentials get-objects my-bucket -p "*.txt" -p "static/*.css"
```
Here the `*` wildcard will match any sequence of characters, including `/`. `?` will match a single character.
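These semantics line up with Python's `fnmatch`-style patterns, where `*` is not stopped by `/`. A sketch of the matching behaviour, under that assumption:

```python
from fnmatch import fnmatch

# "*" matches any sequence of characters, including "/"
print(fnmatch("static/css/site.css", "static/*.css"))  # True
print(fnmatch("notes/2021/readme.txt", "*.txt"))       # True

# "?" matches exactly one character
print(fnmatch("a.txt", "?.txt"))   # True
print(fnmatch("ab.txt", "?.txt"))  # False
```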
A progress bar will be shown by default. Use `-s` or `--silent` to hide it.
@@ -465,28 +486,30 @@ A progress bar will be shown by default. Use `-s` or `--silent` to hide it.

You can set the [CORS policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html) for a bucket using the `set-cors-policy` command. S3 CORS policies are set at the bucket level - they cannot be set for individual items.

First, create the bucket. Make sure to make it `--public`:
```bash
s3-credentials create my-cors-bucket --public -c
```
You can set a default CORS policy - allowing `GET` requests from any origin - like this:
```bash
s3-credentials set-cors-policy my-cors-bucket
```
You can use the `get-cors-policy` command to confirm the policy you have set:
```bash
s3-credentials get-cors-policy my-cors-bucket
```
```json
[
    {
        "ID": "set-by-s3-credentials",
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "*"
        ]
    }
]
```
To customize the CORS policy, use the following options:

- `-m/--allowed-method` - Allowed method e.g. `GET`
@@ -497,21 +520,21 @@ To customize the CORS policy, use the following options:

Each of these can be passed multiple times with the exception of `--max-age-seconds`.

The following example allows GET and PUT methods from code running on `https://www.example.com/`, allows the incoming `Authorization` header and exposes the `ETag` header. It also sets the client to cache preflight requests for 60 seconds:
```bash
s3-credentials set-cors-policy my-cors-bucket2 \
  --allowed-method GET \
  --allowed-method PUT \
  --allowed-origin https://www.example.com/ \
  --expose-header ETag \
  --max-age-seconds 60
```
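For reference, these options correspond roughly to a single rule in the bucket's CORS configuration. A hedged sketch of that JSON using the S3 `CORSRule` field names (the `AllowedHeaders` entry reflects the `Authorization` header mentioned in the prose, and is an assumption about how the tool maps its flags):

```python
import json

# One CORS rule mirroring the flags in the example command above
rule = {
    "AllowedMethods": ["GET", "PUT"],
    "AllowedOrigins": ["https://www.example.com/"],
    "AllowedHeaders": ["Authorization"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 60,
}
print(json.dumps({"CORSRules": [rule]}, indent=4))
```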
## debug-bucket

The `debug-bucket` command is useful for diagnosing issues with a bucket:
```bash
s3-credentials debug-bucket my-bucket
```
Example output:
```
Bucket ACL:

@@ -546,4 +569,4 @@ Bucket public access block:

        "RestrictPublicBuckets": false
    }
}
```