AWS CLI is a common tool that allows you to control the S3 service. The AWS CLI tool is written in Python.
To install AWS CLI, we recommend following the [official AWS documentation](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). There you can find guides on how to install AWS CLI on both Linux and Windows.
???+ note "AWS-CLI in virtual environment"
    If you need to install AWS CLI in a virtual environment, you can use [this guide](https://docs.aws.amazon.com/cli/latest/userguide/install-virtualenv.html).
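The linked guide boils down to creating a virtual environment and installing the `awscli` package into it. A minimal sketch (the directory name `aws-env` is just an example):

```shell
# Create and activate a virtual environment (directory name is arbitrary)
python3 -m venv aws-env
source aws-env/bin/activate

# Install AWS CLI into the virtual environment
pip install awscli

# Verify the installation
aws --version
```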
## Configuration of AWS CLI
...
...
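Configuration typically consists of storing your credentials with `aws configure` and pointing the client at the endpoint you received by email. A sketch (the profile name `cesnet` is illustrative; replace `clX` with your actual cluster):

```shell
# Interactively store the access key, secret key, region and output format
aws configure --profile cesnet

# S3 commands then need the endpoint URL of your object storage
aws s3 ls --profile cesnet --endpoint-url https://s3.clX.du.cesnet.cz
```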
To show the help (available commands) for the **aws s3** tool, run:
```shell
aws s3 help
```
### Operations with buckets
???+ note "Unique name of the bucket"
    The bucket name has to be unique within the tenant. It should contain only lowercase letters, numbers, dashes, and dots. The bucket name must begin with a letter or a number and must not contain dots adjacent to dashes or multiple consecutive dots. We also recommend not using a slash in the bucket name, as a slash prevents the bucket from being used via the API.
**Bucket creation**
...
...
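With AWS CLI, creating a bucket is a single `mb` (make bucket) command. A sketch (the bucket name is illustrative; replace `clX` with your actual cluster):

```shell
# Create a bucket; the endpoint URL is the one you received by email
aws s3 mb s3://newbucket --endpoint-url https://s3.clX.du.cesnet.cz
```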
[S3cmd](https://s3tools.org/download) is a free command line tool that allows you to upload and download your data to the S3 object storage. S3cmd is written in Python. It is an open-source project available under the GNU Public License v2 (GPLv2) and is free for both personal and commercial use.
!!! warning
    We recommend that you **preferably use [AWS CLI](s3cmd.md)**. We have encountered some issues while using s3cmd; for instance, bucket names cannot begin with numbers or capital letters.
## Installation of s3cmd tool
S3cmd is available in the system repositories of CentOS, RHEL, and Ubuntu. You can install it as follows.
**On CentOS/RHEL**

```shell
sudo yum install s3cmd
```

**On Ubuntu/Debian**

```shell
sudo apt install s3cmd
```
## Configuration of s3cmd tool
Please insert the following lines into the config file located at **/home/user/.s3cfg**.
```ini
[default]
host_base = https://s3.clX.du.cesnet.cz
use_https = True
access_key = xxxxxxxxxxxxxxxxxxxxxx
secret_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
host_bucket = s3.clX.du.cesnet.cz
```
**host_base** and **host_bucket** specify the S3 endpoint URL, which you received by email together with your **access_key** and **secret_key** during the creation of your S3 account.
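Once the config file is saved, you can check that the credentials and endpoint work by listing your buckets; any read-only command will do as a sanity check:

```shell
# Lists all buckets in your tenant; an empty output is also a valid result
s3cmd ls
```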
S3cmd supports elementary operations with buckets: creation, listing, and deletion.
### Bucket operations
???+ note "Bucket name"
    The bucket name has to be unique within the tenant and should contain only lowercase letters, numbers, dashes, and dots. The bucket name must begin with a letter or a number and must not contain dots adjacent to dashes or multiple consecutive dots.
**Listing all s3 buckets**

```shell
s3cmd ls
```

**Creation of a new s3 bucket**

```shell
s3cmd mb s3://newbucket
```

**Removing an s3 bucket**

```shell
s3cmd rb s3://newbucket
```

_Only an empty bucket can be removed!_

**Listing the size of an s3 bucket**

```shell
s3cmd du s3://newbucket/
```
### File and directory operations
**Listing the contents of an s3 bucket**

```shell
s3cmd ls s3://newbucket/
```

**File upload**

```shell
s3cmd put file.txt s3://newbucket/
```

**Upload of encrypted files**

```shell
s3cmd put -e file.txt s3://newbucket/
```

**Directory upload**

```shell
s3cmd put -r directory s3://newbucket/
```
_Please make sure you remove the trailing slash (e.g. `directory/`): a trailing slash means that only the content of the directory is uploaded, not the directory itself._
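The difference can be sketched as follows (bucket name and paths are illustrative):

```shell
# Uploads the directory itself, so objects end up under s3://newbucket/directory/...
s3cmd put -r directory s3://newbucket/

# Uploads only the contents of the directory, directly into the bucket root
s3cmd put -r directory/ s3://newbucket/
```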
In case you have a fast connection of about 1-2 Gbps and you want to utilize it for data transfers, you can use the s5cmd tool, which allows you to fully utilize the available bandwidth. The tool is available in the form of compiled binaries for Windows, Linux, and macOS, as well as source code and a Docker image. Detailed information can be found on [the project GitHub page](https://github.com/peak/s5cmd).
Please insert the following options into **.aws/credentials**.
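A typical credentials file looks like this (the keys are placeholders; use the ones you received by email):

```ini
[default]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```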
To achieve higher speed for data transfers, it is necessary to adjust the following parameters, in particular to utilize your CPU cores and more workers, see below.
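For example, the number of parallel workers can be raised with the global `--numworkers` option. A sketch of an upload against the CESNET endpoint (paths, bucket name, and the worker count are illustrative; replace `clX` with your actual cluster):

```shell
# Run the copy with 128 parallel workers against the CESNET S3 endpoint
s5cmd --endpoint-url https://s3.clX.du.cesnet.cz --numworkers 128 cp directory/ s3://newbucket/
```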