Commit d04d1a17 authored by František Dvořák's avatar František Dvořák

Update documentation

parent 34455684
Pipeline #741 passed
@@ -9,18 +9,17 @@ Primary goal of this project is to build Hadoop cluster. But the most part is ge
Locally installed:
* [Terraform](https://www.terraform.io/)
* [Ansible](https://www.ansible.com/)
Configuration:
* public part of ssh key uploaded to OpenStack
* ssh-agent with ssh key
-* configured access to OpenStack (for example downloaded *cloud.yaml* file or the environment set)
+* configured access to OpenStack, see [Cloud Documentation](https://docs.cloud.muni.cz/cloud/cli/#getting-credentials) (either downloaded *cloud.yaml* file or the environment set)
* floating IP created
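The prerequisites above can be sanity-checked with a short script. This is a sketch, not part of the original setup: the tool names are the real CLIs, while the *clouds.yaml* path and the `OS_CLOUD` variable are common OpenStack conventions rather than something this README mandates.

```shell
#!/bin/sh
# Pre-flight check for the prerequisites listed above (illustrative sketch).
# Prints one OK/MISSING line per item.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1 found"
  else
    echo "MISSING: $1"
  fi
}
check_cmd terraform
check_cmd ansible

# the ssh-agent must hold the key whose public part is uploaded to OpenStack
if ssh-add -l >/dev/null 2>&1; then
  echo "OK: ssh-agent holds a key"
else
  echo "MISSING: no key in ssh-agent"
fi

# either a clouds.yaml file or OS_* environment variables must be present
# (conventional locations; adjust to however you downloaded your credentials)
if [ -n "$OS_CLOUD" ] || [ -f "$HOME/.config/openstack/clouds.yaml" ]; then
  echo "OK: OpenStack access configured"
else
  echo "MISSING: no clouds.yaml and no OS_CLOUD in the environment"
fi
```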
# Hadoop image
-To setup Hadoop on single machine, launch:
+To set up Hadoop on a single machine using the Hadoop image, launch:
/usr/local/sbin/hadoop-setup.sh
@@ -40,13 +39,14 @@ For example (check also the other values used in *variables.tf*):
flavor = "standard.large" # >4GB memory needed
EOF
terraform init
terraform apply
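The hunk above shows only the tail of the snippet. A self-contained sketch of the same pattern follows; only the *flavor* line is from the original, and the file name is illustrative (any file matching *\*.auto.tfvars* is picked up by terraform automatically).

```shell
# Write overrides for the defaults in variables.tf into an *.auto.tfvars
# file. Only the "flavor" line appears in the original README; the file
# name here is illustrative.
cat <<EOF > myhadoop.auto.tfvars
flavor = "standard.large" # >4GB memory needed
EOF

# terraform is then run in the same directory:
#   terraform init
#   terraform apply
```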
# Build cluster
#
# 1. check *variables.tf*
#
#
# It is possible to override default values using *\*.auto.tfvars* files.
#
cat <<EOF > mycluster.auto.tfvars
@@ -62,6 +62,7 @@ For example (check also the other values used in *variables.tf*):
#
# 2. launch the setup
#
terraform init
terraform apply
# Destroy cluster
@@ -94,7 +95,7 @@ The public IP is in the *public_hosts* file or *inventory* file.
On the terraform client machine:
-# decrease number of nodes in terraform
+# increase number of nodes in terraform
vim *.auto.tfvars
# check the output
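Instead of opening the *\*.auto.tfvars* file in vim, the node count can also be changed non-interactively. The variable name `n` below is purely a placeholder, not taken from *variables.tf*; check that file for the real name.

```shell
# Hypothetical example: bump a node-count variable in an *.auto.tfvars
# file without an editor. "n" is a placeholder name, not from variables.tf.
cat <<EOF > mycluster.auto.tfvars
n = 3
EOF
sed -i 's/^n = .*/n = 4/' mycluster.auto.tfvars
cat mycluster.auto.tfvars   # n = 4
# then review the change with "terraform plan" before "terraform apply"
```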
@@ -115,7 +116,7 @@ Data must be migrated from the removed nodes first in the Hadoop cluster. Theore
On the master machine:
-# add nodes to remove (it must be the nodes with the highest numbers), for example:
+# nodes to remove (these must be the nodes with the highest numbers), for example:
echo node3.terra >> /etc/hadoop/conf/excludes
# refresh configuration
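The refresh step after extending the excludes file is cut off in this diff. On a stock Hadoop install it is done with the standard `hdfs dfsadmin -refreshNodes` and `yarn rmadmin -refreshNodes` commands; that is an assumption here, since the README's own command is not visible. A guarded sketch, using a local stand-in file so it is safe to run anywhere:

```shell
# Sketch of node decommissioning. EXCLUDES points at a local stand-in
# file; on the real master it would be /etc/hadoop/conf/excludes.
EXCLUDES=./excludes
echo node3.terra >> "$EXCLUDES"

# Standard Hadoop refresh commands (assumed -- the README's own refresh
# step is truncated in this diff); guarded so the sketch runs anywhere.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfsadmin -refreshNodes
  yarn rmadmin -refreshNodes
else
  echo "hdfs/yarn CLIs not installed; refresh skipped in this sketch"
fi
```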
@@ -151,8 +152,10 @@ On the master machine:
Launch */usr/local/sbin/hadoop-adduser.sh USER_NAME* across the whole cluster.
-For example using Ansible (replace *$USER\_NAME* by the user name):
+For example, using Ansible from the master machine (replace *$USER\_NAME* with the new user name):
sudo su -l deployadm
cd ~/terraform
ansible -i ./inventory -m command -a "/usr/local/sbin/hadoop-adduser.sh $USER_NAME" all
The generated password is written to the output and stored in the user's home directory.
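The steps above can be wrapped into a guarded one-liner. The user name *alice* is a placeholder for *$USER\_NAME*, and the guard merely keeps the sketch harmless when run outside the master machine:

```shell
# Placeholder user name; the original README uses $USER_NAME
NEW_USER=alice
# Only run the real ansible command on a machine that has ansible and
# the cluster inventory (i.e. the master machine, as deployadm).
if command -v ansible >/dev/null 2>&1 && [ -f ./inventory ]; then
  ansible -i ./inventory -m command \
    -a "/usr/local/sbin/hadoop-adduser.sh $NEW_USER" all
else
  echo "not on the master machine; would add user $NEW_USER on all nodes"
fi
```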