Getting Started¶
Dependencies¶
Building and running the Scality Zenko CloudServer requires Node.js 10.x and Yarn v1.17.x. Up-to-date versions can be found at Nodesource.
Installation¶
Clone the source code
$ git clone https://github.com/scality/cloudserver.git
Go to the cloudserver directory and use yarn to install the js dependencies.
$ cd cloudserver
$ yarn install
Running CloudServer with a File Backend¶
$ yarn start
This starts a Zenko CloudServer on port 8000. Two additional ports, 9990 and 9991, are also open locally for internal transfer of metadata and data, respectively.
The default access key is accessKey1. The secret key is verySecretKey1.
By default, metadata files are saved in the localMetadata directory and data files are saved in the localData directory in the local ./cloudserver directory. These directories are pre-created within the repository. To save data or metadata in different locations, you must specify them using absolute paths. Thus, when starting the server:
$ mkdir -m 700 $(pwd)/myFavoriteDataPath
$ mkdir -m 700 $(pwd)/myFavoriteMetadataPath
$ export S3DATAPATH="$(pwd)/myFavoriteDataPath"
$ export S3METADATAPATH="$(pwd)/myFavoriteMetadataPath"
$ yarn start
Running CloudServer with Multiple Data Backends¶
$ export S3DATA='multiple'
$ yarn start
This starts a Zenko CloudServer on port 8000.
The default access key is accessKey1. The secret key is verySecretKey1.
With multiple backends, you can choose where each object is saved by setting the following header with a location constraint in a PUT request:
'x-amz-meta-scal-location-constraint':'myLocationConstraint'
If no header is sent with a PUT object request, the bucket’s location constraint determines where the data is saved. If the bucket has no location constraint, the endpoint of the PUT request determines location.
See the Configuration section to set location constraints.
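As a sketch of how this header can be sent from the AWS SDK for JavaScript (v2): keys placed in a putObject call's Metadata map are transmitted as x-amz-meta-* headers, so a 'scal-location-constraint' entry becomes the header above. The putParamsWithLocation helper and the bucket/key names are illustrative, not part of CloudServer:

```javascript
// Builds putObject parameters that pin an object to a specific
// CloudServer location constraint. The aws-sdk sends every key in
// `Metadata` as an `x-amz-meta-<key>` header, so this entry reaches
// the server as `x-amz-meta-scal-location-constraint`.
function putParamsWithLocation(bucket, key, body, locationConstraint) {
  return {
    Bucket: bucket,
    Key: key,
    Body: body,
    Metadata: { 'scal-location-constraint': locationConstraint },
  };
}

const params = putParamsWithLocation(
  'mybucket', 'hello.txt', 'hello', 'myLocationConstraint');
console.log(params.Metadata['scal-location-constraint']); // myLocationConstraint
// With a configured client: s3.putObject(params, err => { /* ... */ });
```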
Run CloudServer with an In-Memory Backend¶
$ yarn run mem_backend
This starts a Zenko CloudServer on port 8000.
The default access key is accessKey1. The secret key is verySecretKey1.
Run CloudServer with Vault User Management¶
$ export S3VAULT=vault
$ yarn start
This starts a Zenko CloudServer using Vault for user management. Note: Vault is proprietary and must be accessed separately.
Run CloudServer for Continuous Integration Testing or in Production with Docker¶
See Run CloudServer with Docker.
Testing¶
Run unit tests with the command:
$ yarn test
Run multiple-backend unit tests with:
$ CI=true S3DATA=multiple yarn start
$ yarn run multiple_backend_test
Run the linter with:
$ yarn run lint
Running Functional Tests Locally¶
To pass AWS and Azure backend tests locally, modify tests/locationConfig/locationConfigTests.json so that awsbackend specifies the name of a bucket you have access to with your credentials, and modify azurebackend with details for your Azure account.
The test suite requires additional tools: s3cmd and Redis must be installed in the environment the tests run in.

- Install s3cmd.
- Install Redis and start it.
- Add a localCache section to config.json:

  "localCache": { "host": REDIS_HOST, "port": REDIS_PORT }

  where REDIS_HOST is the Redis instance IP address ("127.0.0.1" if Redis is running locally) and REDIS_PORT is the Redis instance port (6379 by default).
- Add the following to the local etc/hosts file:

  127.0.0.1 bucketwebsitetester.s3-website-us-east-1.amazonaws.com
Start Zenko CloudServer in memory and run the functional tests:
$ CI=true yarn run mem_backend
$ CI=true yarn run ft_test
Configuration¶
There are three configuration files for Zenko CloudServer:

- conf/authdata.json, for authentication.
- locationConfig.json, to configure where data is saved.
- config.json, for general configuration options.
Location Configuration¶
You must specify at least one locationConstraint in locationConfig.json (or leave it as pre-configured).
You must also specify ‘us-east-1’ as a locationConstraint. If you send a PUT bucket request to an unknown endpoint and do not specify a locationConstraint in the call, us-east-1 is used.
For instance, the following locationConstraint saves data sent to myLocationConstraint to the file backend:
"myLocationConstraint": {
"type": "file",
"legacyAwsBehavior": false,
"details": {}
},
Each locationConstraint must include the type, legacyAwsBehavior, and details keys. type indicates which backend is used for that region; supported backends are mem, file, and scality. legacyAwsBehavior indicates whether the region behaves the same as the AWS S3 ‘us-east-1’ region. If the locationConstraint type is scality, details must contain connector information for sproxyd. If the locationConstraint type is mem or file, details must be empty.
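The rules above can be sketched as a small validation helper for entries read from locationConfig.json. The validateLocationConstraint function is hypothetical, not part of CloudServer:

```javascript
// Checks one locationConstraint entry against the rules described above:
// the three required keys, a supported backend type, and empty `details`
// for the mem and file backends. (scality-type entries additionally need
// sproxyd connector information in `details`, which is not checked here.)
function validateLocationConstraint(entry) {
  const required = ['type', 'legacyAwsBehavior', 'details'];
  if (!required.every(key => key in entry)) return false;
  if (!['mem', 'file', 'scality'].includes(entry.type)) return false;
  if (['mem', 'file'].includes(entry.type)
      && Object.keys(entry.details).length !== 0) return false;
  return true;
}

console.log(validateLocationConstraint({
  type: 'file', legacyAwsBehavior: false, details: {},
})); // true
console.log(validateLocationConstraint({
  type: 'file', legacyAwsBehavior: false, details: { leftover: true },
})); // false (mem and file require empty details)
```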
Once locationConstraints is set in locationConfig.json, specify a default locationConstraint for each endpoint.
For instance, the following sets the localhost endpoint to the myLocationConstraint data backend defined above:
"restEndpoints": {
"localhost": "myLocationConstraint"
},
To use an endpoint other than localhost for Zenko CloudServer, the endpoint must be listed in restEndpoints. Otherwise, if the server is running with a:

- file backend: the default location constraint is file
- memory backend: the default location constraint is mem
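Putting the pieces together, a minimal set of locationConstraints consistent with the snippets above might look like the following sketch. The us-east-1 entry is required as noted above; setting its legacyAwsBehavior to true is an assumption for illustration, not taken from this guide:

```json
{
  "us-east-1": {
    "type": "file",
    "legacyAwsBehavior": true,
    "details": {}
  },
  "myLocationConstraint": {
    "type": "file",
    "legacyAwsBehavior": false,
    "details": {}
  }
}
```

The restEndpoints mapping shown above then points the localhost endpoint at myLocationConstraint.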
Endpoints¶
The Zenko CloudServer supports endpoints that are rendered in either:
- path style: http://myhostname.com/mybucket or
- hosted style: http://mybucket.myhostname.com
However, if an IP address is specified for the host, hosted-style requests cannot reach the server. Use path-style requests in that case. For example, if you are using the AWS SDK for JavaScript, instantiate your client like this:
const s3 = new aws.S3({
endpoint: 'http://127.0.0.1:8000',
s3ForcePathStyle: true,
});
Setting Your Own Access and Secret Key Pairs¶
Credentials can be set for many accounts by editing conf/authdata.json, but use the SCALITY_ACCESS_KEY_ID and SCALITY_SECRET_ACCESS_KEY environment variables to specify your own credentials.
SCALITY_ACCESS_KEY_ID and SCALITY_SECRET_ACCESS_KEY¶
These variables specify authentication credentials for an account named “CustomAccount”.
Note: If these environment variables are set, anything in the authdata.json file is ignored.
$ SCALITY_ACCESS_KEY_ID=newAccessKey SCALITY_SECRET_ACCESS_KEY=newSecretKey yarn start
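A client script run against this setup can pick up the same environment variables. The fallback to this guide's default keys is an illustrative pattern, not CloudServer code:

```javascript
// Use the same environment variables CloudServer reads, falling back to
// the default credentials from this guide when they are unset.
const accessKeyId = process.env.SCALITY_ACCESS_KEY_ID || 'accessKey1';
const secretAccessKey = process.env.SCALITY_SECRET_ACCESS_KEY || 'verySecretKey1';

console.log(`connecting as ${accessKeyId}`);
// These values can then be passed to the client, e.g.
// new AWS.S3({ accessKeyId, secretAccessKey, endpoint: 'http://127.0.0.1:8000', s3ForcePathStyle: true })
```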
Using SSL¶
To use https with your local CloudServer, you must set up SSL certificates.
Deploy CloudServer using our DockerHub page (run it with a file backend).
Note: If Docker is not installed locally, follow the instructions to install it for your distribution.
Update the CloudServer container’s config
Add your certificates to your container. To do this, exec inside the CloudServer container:

- Run $> docker ps to find the container’s ID (the corresponding image name is scality/cloudserver).
- Copy the corresponding container ID (894aee038c5e in the present example), and run:

  $> docker exec -it 894aee038c5e bash

This puts you inside your container, using an interactive terminal.
Generate the SSL key and certificates. The paths where the different files are stored are defined after the -out option in each of the following commands.

Generate a private key for your certificate signing request (CSR):
$> openssl genrsa -out ca.key 2048
Generate a self-signed certificate for your local certificate authority (CA):
$> openssl req -new -x509 -extensions v3_ca -key ca.key -out ca.crt -days 99999 -subj "/C=US/ST=Country/L=City/O=Organization/CN=scality.test"
Generate a key for the CloudServer:
$> openssl genrsa -out test.key 2048
Generate a CSR for CloudServer:
$> openssl req -new -key test.key -out test.csr -subj "/C=US/ST=Country/L=City/O=Organization/CN=*.scality.test"
Generate a certificate for CloudServer signed by the local CA:
$> openssl x509 -req -in test.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out test.crt -days 99999 -sha256
Update the Zenko CloudServer config.json. Add a certFilePaths section to ./config.json with appropriate paths:

"certFilePaths": { "key": "./test.key", "cert": "./test.crt", "ca": "./ca.crt" }
Run your container with the new config.
- Exit the container by running $> exit.
- Restart the container with $> docker restart cloudserver.
Update the host configuration by adding s3.scality.test to /etc/hosts:
127.0.0.1 localhost s3.scality.test
Copy the local certificate authority (ca.crt in step 4) from your container. Choose the path to save this file to (in the present example, /root/ca.crt), and run:

$> docker cp 894aee038c5e:/usr/src/app/ca.crt /root/ca.crt
Note
Your container ID will be different, and your path to ca.crt may be different.
Test the Config¶
If aws-sdk is not installed, run $> yarn add aws-sdk.
Paste the following script into a file named “test.js”:
const AWS = require('aws-sdk');
const fs = require('fs');
const https = require('https');
const httpOptions = {
agent: new https.Agent({
// path on your host of the self-signed certificate
ca: fs.readFileSync('./ca.crt', 'ascii'),
}),
};
const s3 = new AWS.S3({
httpOptions,
accessKeyId: 'accessKey1',
secretAccessKey: 'verySecretKey1',
// The endpoint must be s3.scality.test, else SSL will not work
endpoint: 'https://s3.scality.test:8000',
sslEnabled: true,
// With this setup, you must use path-style bucket access
s3ForcePathStyle: true,
});
const bucket = 'cocoriko';
s3.createBucket({ Bucket: bucket }, err => {
if (err) {
return console.log('err createBucket', err);
}
return s3.deleteBucket({ Bucket: bucket }, err => {
if (err) {
return console.log('err deleteBucket', err);
}
return console.log('SSL is cool!');
});
});
Now run this script with:
$> node test.js
On success, the script outputs SSL is cool!