Cluster Installation
In this installation guide we presume the following; replace the configuration elements as needed to match your environment.
These instructions have been verified on Ubuntu 20.04. They should generally work on any *nix-like OS if you know how to translate the commands to install the equivalent software packages.
Hard disk, CPU core, and RAM sizing are not discussed here; it is highly recommended that you set up a test cluster first and benchmark the application you are developing. Production cluster sizing is a separate topic that involves many parameters. All of these software components can be fine-tuned into a very high-volume, high-throughput solution.
Network & Machines
An Example Network 10.1.1.0/24 Cluster (7 Machines)
- CIDR: 10.1.1.0/24
- 10.1.1.4: Gateway machine (Nginx, Dashboard & UI)
- 10.1.1.5, 10.1.1.6, 10.1.1.7: Service machines
- The above machines will run the below services
  - Cassandra
  - Elasticsearch
  - EMQX (MQTT)
- 10.1.1.8, 10.1.1.9, 10.1.1.10: Platform machines
- The above machines will run the Boodskap platform
Setup that needs to be performed on all machines in the cluster
sudo apt-get -y update
sudo apt-get install -y sudo nodejs npm git software-properties-common libtinfo5 netcat tar curl net-tools nano wget unzip rsyslog psmisc
Limits Tuning
Create a new file /etc/security/limits.d/99-limits.conf with the below content
* soft nofile 1048576
* hard nofile 1048576
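The new limit applies at the next login. After logging out and back in, a quick sanity check:
ulimit -n
The command should print 1048576.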
System Parameters Tuning
Create a new file /etc/sysctl.d/101-sysctl.conf with the below content
fs.file-max=2097152
fs.nr_open=2097152
net.core.somaxconn=32768
net.ipv4.tcp_max_syn_backlog=16384
net.core.netdev_max_backlog=16384
net.ipv4.ip_local_port_range=1024 65535
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.optmem_max=16777216
net.ipv4.tcp_mem=16777216 16777216 16777216
# net.ipv4.tcp_rmem=1024 4096 16777216
# net.ipv4.tcp_wmem=1024 4096 16777216
net.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
net.ipv4.tcp_max_tw_buckets=1048576
# Enable fast recycling of TIME_WAIT sockets. Enabling this
# option is not recommended for devices communicating with the
# general Internet or using NAT (Network Address Translation).
# Since some NAT gateways pass through IP timestamp values, one
# IP can appear to have non-increasing timestamps.
# net.ipv4.tcp_tw_recycle = 1
# net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
vm.dirty_writeback_centisecs=500
vm.swappiness=10
vm.zone_reclaim_mode=0
vm.extra_free_kbytes=1240000
vm.max_map_count = 262144
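The sysctl values can be loaded without a reboot. A minimal way to apply and spot-check them (note: some keys, such as vm.extra_free_kbytes, do not exist on all stock Ubuntu kernels, so sysctl may warn about unknown keys; those warnings are harmless):
sudo sysctl --system
sysctl net.core.somaxconn
The second command should print net.core.somaxconn = 32768.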
Service machine setup
Perform the below operations on machines 10.1.1.5, 10.1.1.6 & 10.1.1.7
sudo adduser --disabled-password --gecos "" cassandra
sudo adduser --disabled-password --gecos "" elastic
sudo adduser --disabled-password --gecos "" emqtt
sudo apt-get install -y openjdk-8-jdk   # used by Cassandra (see the JAVA_HOME setting below)
sudo apt-get install -y openjdk-13-jdk  # used by Elasticsearch
Cassandra Installation (Machines 10.1.1.5, 10.1.1.6 & 10.1.1.7)
sudo su - cassandra
Edit the $HOME/.bash_profile file and enter the below contents
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
Log out and log in again
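As a quick sanity check after logging back in, confirm the cassandra user resolves the OpenJDK 8 binaries:
java -version
which java
java -version should report openjdk version "1.8.x", and which java should print /usr/lib/jvm/java-8-openjdk-amd64/bin/java.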
cd $HOME
wget https://archive.apache.org/dist/cassandra/3.11.5/apache-cassandra-3.11.5-bin.tar.gz
tar -xzf apache-cassandra-3.11.5-bin.tar.gz
mv $HOME/apache-cassandra-3.11.5/* .
rm -rf $HOME/apache-cassandra-3.11.5*
Edit the $HOME/conf/cassandra.yaml file, then find and replace the below settings
------------------------------------------------------------------
cluster_name: 'Boodskap Cluster'
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.1.1.5, 10.1.1.6, 10.1.1.7"
endpoint_snitch: GossipingPropertyFileSnitch
On machine 10.1.1.5
listen_address: 10.1.1.5
rpc_address: 10.1.1.5
On machine 10.1.1.6
listen_address: 10.1.1.6
rpc_address: 10.1.1.6
On machine 10.1.1.7
listen_address: 10.1.1.7
rpc_address: 10.1.1.7
------------------------------------------------------------------
Start Cassandra
$HOME/bin/cassandra
Check Cassandra
$HOME/bin/nodetool status
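Once all three nodes have joined, each should show as UN (Up/Normal). The output looks roughly like the below (Load, Owns and Host ID values will differ in your environment):
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load   Tokens  Owns  Host ID  Rack
UN  10.1.1.5   ...    256     ...   ...      rack1
UN  10.1.1.6   ...    256     ...   ...      rack1
UN  10.1.1.7   ...    256     ...   ...      rack1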
Create the keyspace from one of the Cassandra nodes
$HOME/bin/cqlsh 10.1.1.5
CREATE KEYSPACE IF NOT EXISTS boodskapks WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': '2'} AND durable_writes = true;
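To verify the keyspace exists, you can describe it from any node; cqlsh's -e flag runs a single statement and exits:
$HOME/bin/cqlsh 10.1.1.5 -e "DESCRIBE KEYSPACE boodskapks;"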
Elasticsearch Installation (Machines 10.1.1.5, 10.1.1.6 & 10.1.1.7)
sudo su - elastic
Edit the $HOME/.bash_profile file and enter the below contents
JAVA_HOME=/usr/lib/jvm/java-13-openjdk-amd64
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
Log out and log in again
Download and extract Elasticsearch
cd $HOME
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.5.1-linux-x86_64.tar.gz
mv elasticsearch-7.5.1/* $HOME/
rm -rf $HOME/elasticsearch-7.5.1*
Edit the config file $HOME/config/elasticsearch.yml, then find and replace the below settings
------------------------------------------------------------------
cluster.name: boodskap-cluster
discovery.seed_hosts: ["10.1.1.5", "10.1.1.6", "10.1.1.7"]
cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
On machine 10.1.1.5
node.name: node-1
network.host: 10.1.1.5
On machine 10.1.1.6
node.name: node-2
network.host: 10.1.1.6
On machine 10.1.1.7
node.name: node-3
network.host: 10.1.1.7
------------------------------------------------------------------
Start Elasticsearch
$HOME/bin/elasticsearch &
Check Elasticsearch
curl -X GET http://10.1.1.5:9200/_cluster/health?pretty=true
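If the cluster formed correctly, the response should report all three nodes and a green status; abridged, it looks like:
{
  "cluster_name" : "boodskap-cluster",
  "status" : "green",
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  ...
}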
EMQX Installation (Machines 10.1.1.5, 10.1.1.6 & 10.1.1.7)
Log in as the emqtt user and perform these tasks
sudo su - emqtt
cd $HOME
wget --no-check-certificate https://www.emqx.io/downloads/broker/v3.2.7/emqx-ubuntu18.04-v3.2.7.zip
unzip emqx-ubuntu18.04-v3.2.7.zip
mv emqx/* $HOME/
rm -rf $HOME/emqx
echo "{emqx_auth_http, true}." >> $HOME/data/loaded_plugins
Edit the $HOME/etc/plugins/emqx_auth_http.conf file, then find and replace the below settings
auth.http.auth_req = http://platform.host:18080/api/emqtt/get/auth
auth.http.auth_req.method = get
auth.http.auth_req.params = clientid=%c,username=%u,password=%P,ipaddr=%a
auth.http.super_req = http://platform.host:18080/api/emqtt/get/superuser
auth.http.super_req.method = get
auth.http.super_req.params = clientid=%c,username=%u,ipaddr=%a
auth.http.acl_req = http://platform.host:18080/api/emqtt/acl
auth.http.acl_req.method = get
auth.http.acl_req.params = access=%A,username=%u,clientid=%c,ipaddr=%a,topic=%t
Edit the $HOME/etc/emqx.conf file, then find and replace the below settings
cluster.name = bskp-emqxcl
cluster.discovery = static
cluster.static.seeds = [email protected],[email protected],[email protected]
On machine 10.1.1.5
node.name = [email protected]
On machine 10.1.1.6
node.name = [email protected]
On machine 10.1.1.7
node.name = [email protected]
Edit the /etc/hosts file (this requires sudo, so do it from a sudo-capable account) and add the below line on each corresponding machine
On machine 10.1.1.5
10.1.1.8 platform.host
On machine 10.1.1.6
10.1.1.9 platform.host
On machine 10.1.1.7
10.1.1.10 platform.host
Start the MQTT service
$HOME/bin/emqx start
Check the MQTT cluster
$HOME/bin/emqx_ctl cluster status
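With static discovery configured as above, all three nodes should appear under running_nodes, along the lines of:
Cluster status: [{running_nodes,['[email protected]',
                                 '[email protected]',
                                 '[email protected]']}]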
Platform machine setup
Perform the below operations on machines 10.1.1.8, 10.1.1.9 & 10.1.1.10
sudo apt-get install -y openjdk-13-jdk
sudo adduser --disabled-password --gecos "" boodskap
sudo su - boodskap
Edit the $HOME/.bash_profile file and enter the below contents.
JAVA_HOME=/usr/lib/jvm/java-13-openjdk-amd64
BOODSKAP_HOME=$HOME
PATH=$JAVA_HOME/bin:$BOODSKAP_HOME/bin:$PATH
export JAVA_HOME BOODSKAP_HOME PATH
Now log out and log in again, then perform the below.
cd $HOME
wget --no-check-certificate https://github.com/BoodskapPlatform/boodskap-platform/releases/download/v3.0.2/boodskap-3.0.2.tar.gz
tar -xzf boodskap-3.0.2.tar.gz
chmod +x $HOME/bin/*.sh
wget --no-check-certificate https://github.com/BoodskapPlatform/boodskap-platform/releases/download/v3.0.2/boodskap-patch-3.0.2-0019.tar.gz
tar -xzf boodskap-patch-3.0.2-0019.tar.gz
cd $HOME/libs
rm -rf patches
ln -s $HOME/patches/0019 patches
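A quick check that the patch symlink is in place:
ls -l $HOME/libs/patches
It should point to $HOME/patches/0019.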
Edit the config file $HOME/config/cluster.xml and change the below contents
<property name="ipFinder">
  <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
    <property name="addresses">
      <list>
        <value>10.1.1.8:48500</value>
        <value>10.1.1.9:48500</value>
        <value>10.1.1.10:48500</value>
      </list>
    </property>
  </bean>
</property>
Edit the config file $HOME/config/database.properties and change the below contents
cassandra.seeds=cassandra.host:9042
Edit the config file $HOME/config/boodskap.properties and change the below contents
boodskap.elastic_search.urls=elastic.host:9200:http
boodskap.mqtt.endpoint=tcp://mqtt.host:1883
boodskap.mqtt.builtin=false
Edit the /etc/hosts file and add the below line on each corresponding machine
On machine 10.1.1.8
10.1.1.5 cassandra.host
10.1.1.5 elastic.host
10.1.1.5 mqtt.host
On machine 10.1.1.9
10.1.1.6 cassandra.host
10.1.1.6 elastic.host
10.1.1.6 mqtt.host
On machine 10.1.1.10
10.1.1.7 cassandra.host
10.1.1.7 elastic.host
10.1.1.7 mqtt.host
Start the platform as a background service
$HOME/bin/ignite.sh -f $HOME/config/cluster.xml &
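To confirm a node came up, check that it is listening on the platform's API port (18080, the same port the Nginx upstream below proxies to); net-tools was installed earlier, so netstat is available:
netstat -tln | grep 18080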
Once the platform is started on all three machines, perform the below activation task on any one of them, say 10.1.1.8.
sudo su - boodskap
$HOME/bin/control.sh --activate
On success, you should see the Boodskap banner, similar to the below:
) ( )
( /( )\ ) ( /( )
)\()) ( ( (()/( ( )\()) ( /( ` )
((_)\ )\ )\ ((_)))\ ((_)\ )(_)) /(/(
| |(_) ((_) ((_) _| |((_)| |(_)((_)_ ((_)_\
| '_ \/ _ \/ _ \/ _` |(_-<| / / / _` || '_ \)
|_.__/\___/\___/\__,_|/__/|_\_\ \__,_|| .__/
|_| IoT Platform
>>> ver. 3.0.2 - build(0019)
>>> 2020 Copyright(C) Boodskap Inc
>>>
>>> Boodskap documentation: http://developer.boodskap.io
Adding nodes to an existing platform cluster
Configure the new node with one or two existing nodes' IP addresses in the ipFinder section below.
Edit the file $HOME/config/cluster.xml
<property name="ipFinder">
  <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
    <property name="addresses">
      <list>
        <value>10.1.1.9:48500</value>
        <value>10.1.1.10:48500</value>
        <value>your_new_ip_address:48500</value>
      </list>
    </property>
  </bean>
</property>
After configuring the new nodes, start them as usual. Newly started nodes won't be initialized until you add them to the existing cluster topology. To do this, perform the below operations on any one of the existing cluster nodes.
Step 1, List the Nodes
sudo su - boodskap
$HOME/bin/control.sh --baseline
The above command will produce something like the below
Cluster state: active
Current topology version: 4
Baseline nodes:
ConsistentID=cde228a8-da90-4b74-87c0-882d9c5729b7, STATE=ONLINE
ConsistentID=dea120d9-3157-464f-a1ea-c8702de45c7b, STATE=ONLINE
ConsistentID=fdf68f13-8f1c-4524-9102-ac2f5937c62c, STATE=ONLINE
--------------------------------------------------------------------------------
Number of baseline nodes: 3
Other nodes:
ConsistentID=5d782f5e-0d47-4f42-aed9-3b7edeb527c0
- Baseline nodes are the ones already activated in your cluster
- Other nodes are the ones waiting for activation
Step 2, Add the Nodes
$HOME/bin/control.sh --baseline add consistentId1[,consistentId2,....,consistentIdN]
The above is the command syntax for adding one or more new nodes to the cluster. In our case, say the new node's consistent ID is 5d782f5e-0d47-4f42-aed9-3b7edeb527c0; the command would then be
$HOME/bin/control.sh --baseline add 5d782f5e-0d47-4f42-aed9-3b7edeb527c0
If you are adding multiple nodes to the cluster, add them all in one go by comma-separating the new consistent IDs. This saves a lot of time when the cluster already holds a high volume of data, since the data is rebalanced once rather than once per node.
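To confirm the addition, list the baseline again; the new consistent ID should now appear under the baseline nodes and no longer under other nodes:
$HOME/bin/control.sh --baseline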
Gateway machine setup
Perform the below operations on machine 10.1.1.4
sudo npm install pm2 -g
sudo adduser --disabled-password --gecos "" boodskapui
Setup Developer Console UI
Log in as the boodskapui user and perform these tasks
sudo su - boodskapui
mkdir $HOME/webapps
cd $HOME/webapps
git clone https://github.com/BoodskapPlatform/boodskap-ui.git
cd $HOME/webapps/boodskap-ui
npm install
Edit the $HOME/webapps/boodskap-ui/boodskapui.properties file and set:
basepath=/platform
node build.js
pm2 start boodskap-platform-node.js
Setup Dashboard UI
cd $HOME/webapps
git clone https://github.com/BoodskapPlatform/platform-dashboard.git
cd $HOME/webapps/platform-dashboard
npm install
node build.js
pm2 start platform-dashboard-node.js
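Optionally, persist the pm2 process list so both UIs come back after a reboot (pm2 startup prints a command to run with root privileges; follow its instructions):
pm2 save
pm2 startup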
There are two major parts to the platform: the core platform developer console and the Dashboard UI application. Most of our customers keep the platform within their intranet and expose only their dashboard solutions to the outside world.
If your IoT devices are going to use the REST services to push messages to the platform, you may need to expose only the HTTP push API.
Nginx installation
sudo apt-get install -y nginx
Replace /etc/nginx/sites-enabled/default with the below contents
proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream api_cluster {
    server 10.1.1.8:18080;
    server 10.1.1.9:18080;
    server 10.1.1.10:18080;
}

upstream micro_api_cluster {
    server 10.1.1.8:19080;
    server 10.1.1.9:19080;
    server 10.1.1.10:19080;
}

upstream mqttws_cluster {
    server 10.1.1.5:8083;
    server 10.1.1.6:8083;
    server 10.1.1.7:8083;
}

upstream platform_cluster {
    server localhost:4201;
}

upstream dashboard_cluster {
    server localhost:10000;
}

server {
    listen 80;
    client_max_body_size 20M;

    location /api {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
            add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain charset=UTF-8';
            add_header 'Content-Length' 0;
            return 204;
        }

        if ($request_method !~ ^(HEAD|GET|POST|PUT|DELETE)$ ) {
            return 405;
        }

        proxy_pass http://api_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
        proxy_buffer_size 256M;
        proxy_buffers 4 512M;
        proxy_busy_buffers_size 512M;
    }

    location /mservice {
        proxy_pass http://micro_api_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
    }

    location /mqtt {
        proxy_pass http://mqttws_cluster;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
    }

    location /platform {
        proxy_pass http://platform_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location / {
        proxy_pass http://dashboard_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
Test and restart the nginx service
sudo nginx -t
sudo service nginx restart
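A quick smoke test from any machine that can reach the gateway; these paths are served by the dashboard_cluster and platform_cluster upstreams configured above:
curl -I http://10.1.1.4/
curl -I http://10.1.1.4/platform/
Both should return an HTTP status header rather than a connection error.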
HAProxy installation
Perform the below operations on machine 10.1.1.4
sudo apt-get install -y haproxy
Edit /etc/haproxy/haproxy.cfg and make sure the following settings are changed/created
global
ulimit-n 8000016
maxconn 2000000
maxpipes 2000000
tune.maxaccept 500
listen mqtt
bind *:1883
mode tcp
option clitcpka # For TCP keep-alive
timeout client 3h # by default the TCP keep-alive interval is 2 hours in the OS kernel
timeout server 3h # by default the TCP keep-alive interval is 2 hours in the OS kernel
option tcplog
balance roundrobin
server node1 10.1.1.5:1883 check
server node2 10.1.1.6:1883 check
server node3 10.1.1.7:1883 check
Restart HAProxy and check its status
sudo service haproxy restart
sudo service haproxy status
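netcat (installed during the common setup) gives a quick check that the MQTT listener is reachable through HAProxy:
nc -vz 10.1.1.4 1883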