Boodskap IoT Platform

An enterprise IoT platform that helps you quickly connect hardware devices, systems and backends to build solutions rapidly and robustly.

Cluster Installation

This installation guide assumes the network layout described below; replace the configuration values as necessary to match your environment.

These instructions have been verified on Ubuntu 18.04. They should generally work on other *nix-like operating systems if you know the equivalent commands for installing the software packages.

🚧

Hard disk, CPU core and RAM sizing are not discussed here. It is highly recommended that you set up a test cluster first and benchmark the application you are developing. Production cluster sizing is a separate topic involving many parameters; all of the software components below can be tuned to support very high volume and throughput.

πŸ“˜

Network & Machines

An Example Network 10.1.1.0/24 Cluster

  • CIDR: 10.1.1.0/24
  • 10.1.1.4: Load Balancer
In a real-world scenario, this load balancer would be replaced by a hardware or cloud-provided load balancer
  • 10.1.1.5 & 10.1.1.6: Boodskap UI Machines
We are setting up two gateways to load balance UI and API requests
  • 10.1.1.7 & 10.1.1.8: Service Machines
We are going to install and configure all of the following services on both machines
      • Cassandra
      • ElasticSearch
      • EMQX (MQTT)
Instead of installing them all on the same machines, you can isolate each service into its own cluster group. You can have dedicated nodes for each type of service and group them like below
      • 10.1.1.7 & 10.1.1.8: Cassandra Machines
      • 10.1.1.9 & 10.1.1.10: ElasticSearch Machines
      • 10.1.1.11 & 10.1.1.12: EMQX Machines
  • 10.1.1.9 & 10.1.1.10: Platform Machines

🚧

We picked two nodes for each type of service as an example; choose an appropriate cluster size based on the volume, variety and velocity of your messages and the computational throughput you require.

πŸ“˜

Setup that needs to be performed on all machines in the cluster

sudo apt-get -y update
sudo apt-get install -y git software-properties-common netcat tar curl net-tools nano wget unzip rsyslog psmisc

🚧

Limits Tuning

Create a new file /etc/security/limits.d/99-limits.conf with the below content

*      soft   nofile      1048576
*      hard   nofile      1048576
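Rather than opening an editor, the file can also be created non-interactively; a minimal sketch, assuming you have sudo access:

```shell
# Write the limits file in one step
sudo tee /etc/security/limits.d/99-limits.conf > /dev/null <<'EOF'
*      soft   nofile      1048576
*      hard   nofile      1048576
EOF

# The new limit applies to fresh login sessions; log out, log back in, then verify
ulimit -n
```

`ulimit -n` should report 1048576 once the change has taken effect in a new session.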

🚧

System Parameters Tuning

Create a new file /etc/sysctl.d/99-sysctl.conf with the below content

fs.file-max=2097152
fs.nr_open=2097152
net.core.somaxconn=32768
net.ipv4.tcp_max_syn_backlog=16384
net.core.netdev_max_backlog=16384
net.ipv4.ip_local_port_range=1024 65535
net.core.rmem_default=262144
net.core.wmem_default=262144
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.optmem_max=16777216

net.ipv4.tcp_mem=16777216 16777216 16777216
# net.ipv4.tcp_rmem=1024 4096 16777216
# net.ipv4.tcp_wmem=1024 4096 16777216
net.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_max=1000000
net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
net.ipv4.tcp_max_tw_buckets=1048576

# Enable fast recycling of TIME_WAIT sockets.  Enabling this
# option is not recommended for devices communicating with the
# general Internet or using NAT (Network Address Translation).
# Since some NAT gateways pass through IP timestamp values, one
# IP can appear to have non-increasing timestamps.
# net.ipv4.tcp_tw_recycle = 1
# net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout=15

vm.dirty_writeback_centisecs=500
vm.swappiness=10
vm.zone_reclaim_mode=0
vm.extra_free_kbytes=1240000

vm.max_map_count=262144
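Files under /etc/sysctl.d/ are only read at boot, so reload them by hand to apply the settings immediately. Note that vm.extra_free_kbytes is a non-mainline kernel parameter; on stock kernels sysctl may warn about an unknown key, which is safe to ignore there.

```shell
# Reload all sysctl configuration fragments, including /etc/sysctl.d/99-sysctl.conf
sudo sysctl --system

# Spot-check a couple of the values
sysctl -n fs.file-max
sysctl -n net.core.somaxconn
```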

πŸ“˜

Cassandra, Elastic Search & EMQX

Perform the below operations on machines 10.1.1.7 & 10.1.1.8

sudo adduser --disabled-password --gecos ""  cassandra
sudo adduser --disabled-password --gecos ""  elastic
sudo adduser --disabled-password --gecos ""  emqtt
sudo add-apt-repository -y ppa:openjdk-r/ppa
sudo apt-get update -y
sudo apt-get install -y openjdk-8-jdk
sudo apt-get install -y openjdk-13-jdk

Cassandra Installation

Perform the below operations on machines 10.1.1.7 & 10.1.1.8

sudo su - cassandra

Edit the $HOME/.bash_profile file and enter the below contents

JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH

Download and extract Cassandra

cd $HOME
wget https://archive.apache.org/dist/cassandra/3.11.5/apache-cassandra-3.11.5-bin.tar.gz
tar -xzf apache-cassandra-3.11.5-bin.tar.gz
mv $HOME/apache-cassandra-3.11.5/* .
rm -rf $HOME/apache-cassandra-3.11.5*

Edit the $HOME/conf/cassandra.yaml file, find and replace the below contents

------------------------------------------------------------------

cluster_name: 'Boodskap Cluster'
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.1.1.7, 10.1.1.8"
listen_address: 10.1.1.7
rpc_address: 10.1.1.7

listen_address and rpc_address must be the IP address of that particular machine; on the 10.1.1.8 machine, they must be 10.1.1.8

endpoint_snitch: GossipingPropertyFileSnitch

------------------------------------------------------------------

Log out, log back in, and start Cassandra on both machines

logout
sudo su - cassandra
$HOME/bin/cassandra
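Before creating the keyspace, it is worth confirming that both nodes have joined the ring; nodetool ships with the Cassandra distribution:

```shell
# Run from either machine; both 10.1.1.7 and 10.1.1.8 should be listed
# with status "UN" (Up/Normal) once the cluster has formed
$HOME/bin/nodetool status
```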

Perform the below steps on any one of the machines, let's say 10.1.1.7

Create the keyspace

$HOME/bin/cqlsh
CREATE KEYSPACE IF NOT EXISTS boodskapks WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': '2'}  AND durable_writes = true;

Initialize the keyspace

curl -sL https://raw.githubusercontent.com/boodskap/cassandra/3.11.5/initdb.cql > $HOME/initdb.cql
$HOME/bin/cqlsh -f $HOME/initdb.cql

🚧

You can visit the Cassandra website for more information

Elastic Search Installation

Perform the below operations on machines 10.1.1.7 & 10.1.1.8

sudo su - elastic

Edit the $HOME/.bash_profile file and enter the below contents

JAVA_HOME=/usr/lib/jvm/java-13-openjdk-amd64
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH

Download and extract Elastic Search

cd $HOME
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.5.1-linux-x86_64.tar.gz
mv elasticsearch-7.5.1/* $HOME/
rm -rf $HOME/elasticsearch-7.5.1*

Edit the config file $HOME/config/elasticsearch.yml, then find and replace the below content

cluster.name: boodskap-cluster
node.name: node-1
network.host: your_ip_address
discovery.seed_hosts: ["10.1.1.7", "10.1.1.8"]
cluster.initial_master_nodes: ["node-1", "node-2"]
  • node.name must be unique within your cluster; if you are configuring 3 nodes, name them node-1, node-2, node-3, and so on.
  • network.host must be the machine's own IP address; in our case it is 10.1.1.7 on one machine and 10.1.1.8 on the other

Relogin and start the Elastic Search service in the background

logout
sudo su - elastic
$HOME/bin/elasticsearch &
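Once both nodes are up, the cluster health endpoint is a quick way to confirm they have found each other:

```shell
# Query either node; expect "number_of_nodes" : 2 and a green (or at least
# yellow) status once both machines have joined the cluster
curl -s http://10.1.1.7:9200/_cluster/health?pretty
```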

🚧

You can visit the Elastic Search website for more information

EMQX Installation

Perform the below operations on machines 10.1.1.7 & 10.1.1.8

Login as emqtt user and perform these tasks

sudo su - emqtt
cd $HOME
wget --no-check-certificate  https://www.emqx.io/downloads/broker/v3.2.7/emqx-ubuntu18.04-v3.2.7.zip
unzip emqx-ubuntu18.04-v3.2.7.zip
mv emqx/* $HOME/
rm -rf $HOME/emqx
echo "{emqx_auth_http, true}." >> $HOME/data/loaded_plugins

Edit $HOME/etc/plugins/emqx_auth_http.conf file, find and replace with the below contents

auth.http.auth_req = http://10.1.1.4/api/emqtt/get/auth
auth.http.auth_req.method = get
auth.http.auth_req.params = clientid=%c,username=%u,password=%P,ipaddr=%a
auth.http.super_req = http://10.1.1.4/api/emqtt/get/superuser
auth.http.super_req.method = get
auth.http.super_req.params = clientid=%c,username=%u,ipaddr=%a
auth.http.acl_req = http://10.1.1.4/api/emqtt/acl
auth.http.acl_req.method = get
auth.http.acl_req.params = access=%A,username=%u,clientid=%c,ipaddr=%a,topic=%t

Start the MQTT service

$HOME/bin/emqx start
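The steps above start two independent brokers. If you want the two EMQX nodes to form a cluster, one node has to join the other; a sketch, assuming the node names in $HOME/etc/emqx.conf have been set to emqx@10.1.1.7 and emqx@10.1.1.8 respectively:

```shell
# On 10.1.1.8, join the node running on 10.1.1.7
$HOME/bin/emqx_ctl cluster join emqx@10.1.1.7

# Verify the cluster membership from either node
$HOME/bin/emqx_ctl cluster status
```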

🚧

You can visit the EMQX website for more information

πŸ“˜

Boodskap IoT Platform Installation

Perform the below operations on machines 10.1.1.9 & 10.1.1.10

Create the boodskap user, install OpenJDK 13 and switch to the new user

sudo adduser --disabled-password --gecos ""  boodskap
sudo add-apt-repository -y ppa:openjdk-r/ppa
sudo apt-get update -y
sudo apt-get install -y openjdk-13-jdk
sudo su - boodskap

Edit the $HOME/.bash_profile file and enter the below contents.

JAVA_HOME=/usr/lib/jvm/java-13-openjdk-amd64
BOODSKAP_HOME=$HOME
PATH=$JAVA_HOME/bin:$BOODSKAP_HOME/bin:$PATH
export JAVA_HOME BOODSKAP_HOME PATH

Download and extract Apache Ignite

cd $HOME
wget --no-check-certificate https://archive.apache.org/dist/ignite/2.8.0/apache-ignite-2.8.0-bin.zip
unzip apache-ignite-2.8.0-bin.zip
mv apache-ignite-2.8.0-bin/* $HOME/
rm -rf $HOME/apache-ignite-2.8.0-bin*
mv $HOME/config $HOME/config-old

Download and extract Boodskap Platform archives

wget --no-check-certificate https://github.com/BoodskapPlatform/boodskap-platform/releases/download/3.0.1/boodskap-all-libs-3.0.1.tar.gz
tar -xzf boodskap-all-libs-3.0.1.tar.gz

wget --no-check-certificate https://github.com/BoodskapPlatform/boodskap-platform/releases/download/3.0.1/boodskap-patch-3.0.1-10004.tar.gz
tar -xzf boodskap-patch-3.0.1-10004.tar.gz

Edit the config file $HOME/config/cluster.xml and change the below contents

<property name="ipFinder">
   <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
      <property name="addresses">
         <list>
            <value>10.1.1.9:48500</value>
            <value>10.1.1.10:48500</value>
         </list>
      </property>
   </bean>
</property>

Edit the config file $HOME/config/database.properties and change the below contents

cassandra.seeds=10.1.1.7:9042, 10.1.1.8:9042

Edit the config file $HOME/config/boodskap.properties and change the below contents

boodskap.elastic_search.urls=10.1.1.7:9200:http, 10.1.1.8:9200:http
boodskap.mqtt.endpoint=tcp://10.1.1.4:1883
boodskap.api_base_path=http://10.1.1.4/api

Start the platform as background service

$HOME/bin/ignite.sh -f $HOME/config/cluster.xml &

Once the platform is started on both machines, perform the below activation task on any one of them, let's say 10.1.1.9.

$HOME/bin/control.sh --activate

Sometimes the initialization may get stuck; if the console logs stop progressing, restart the platform and you should be good.

killall java && sleep 10 && killall -KILL java
$HOME/bin/ignite.sh -f $HOME/config/cluster.xml &

Once the initialization is over, you should see something like this in the console of both machines.

2020-04-18 09:11:59.620 INFO  BootstrapService:195 -


    )             (           )
 ( /(             )\ )     ( /(     )
 )\())  (    (   (()/( (   )\()) ( /(  `  )
((_)\   )\   )\   ((_)))\ ((_)\  )(_)) /(/(
| |(_) ((_) ((_)  _| |((_)| |(_)((_)_ ((_)_\
| '_ \/ _ \/ _ \/ _` |(_-<| / / / _` || '_ \)
|_.__/\___/\___/\__,_|/__/|_\_\ \__,_|| .__/
                                      |_| IoT Platform


>>> ver. 3.0.1 - build(10004)
>>> 2020 Copyright(C) Boodskap Inc
>>>
>>> Boodskap documentation: http://developer.boodskap.io
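You can also confirm the cluster state from the command line instead of watching the console:

```shell
# Should report that the cluster is active once activation has completed
$HOME/bin/control.sh --state
```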

Adding nodes to an existing platform cluster

Configure the new node with one or two existing nodes' IP addresses in the ipFinder section below.

Edit the file $HOME/config/cluster.xml

<property name="ipFinder">
   <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
      <property name="addresses">
         <list>
            <value>10.1.1.9:48500</value>
            <value>10.1.1.10:48500</value>
            <value>your_new_ip_address:48500</value>
         </list>
      </property>
   </bean>
</property>

After configuring the new nodes, start them as usual. Newly started nodes won't be initialized until you add them to the existing cluster topology; to do this, perform the below operations on any one of the existing cluster nodes.

Step 1, List the Nodes

sudo su - boodskap
$HOME/bin/control.sh --baseline

The above command will produce output like the below

Cluster state: active
Current topology version: 4

Baseline nodes:
    ConsistentID=cde228a8-da90-4b74-87c0-882d9c5729b7, STATE=ONLINE
    ConsistentID=dea120d9-3157-464f-a1ea-c8702de45c7b, STATE=ONLINE
    ConsistentID=fdf68f13-8f1c-4524-9102-ac2f5937c62c, STATE=ONLINE
--------------------------------------------------------------------------------
Number of baseline nodes: 3

Other nodes:
    ConsistentID=5d782f5e-0d47-4f42-aed9-3b7edeb527c0
  • Baseline nodes are the ones already activated in your cluster
  • Other nodes are the ones waiting for activation

Step 2, Add the Nodes

$HOME/bin/control.sh --baseline add consistentId1[,consistentId2,....,consistentIdN]

Above is the command syntax to add one or more new nodes to the cluster. In our case, if the new node's consistent ID is 5d782f5e-0d47-4f42-aed9-3b7edeb527c0, the command would be

$HOME/bin/control.sh --baseline add 5d782f5e-0d47-4f42-aed9-3b7edeb527c0

πŸ‘

If you are adding multiple nodes to the cluster, add them in one go by comma-separating the new consistent IDs. This will save you a lot of time if a high volume of data already exists in the cluster.
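After adding the nodes, you can re-run the baseline listing to confirm the topology grew:

```shell
# The added node should now appear under "Baseline nodes" with STATE=ONLINE
$HOME/bin/control.sh --baseline
```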

πŸ“˜

Boodskap UI Admin Console Installation

Perform the below operations on machines 10.1.1.5 & 10.1.1.6. This assumes Node.js, npm and pm2 are already installed; create the boodskapui user first if it does not exist.

sudo adduser --disabled-password --gecos ""  boodskapui
sudo su - boodskapui
mkdir $HOME/webapps
cd $HOME/webapps
git clone https://github.com/BoodskapPlatform/boodskap-ui.git
cd $HOME/webapps/boodskap-ui
npm install
node build.js

Start the platform UI

cd $HOME/webapps/
pm2 start boodskap-platform-node.js
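pm2 keeps the process alive, but by default it will not restart it after a reboot; its persistence commands can help:

```shell
# Confirm the UI process is online
pm2 status

# Save the current process list and generate a boot-time startup script;
# pm2 startup prints a command that must then be run with sudo
pm2 save
pm2 startup
```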

πŸ“˜

Load Balancer Installation

Nginx installation

Perform the below operations on machine 10.1.1.4

sudo apt-get install nginx

Replace or create a file /etc/nginx/sites-enabled/default with the below content

proxy_cache_path /tmp/NGINX_cache/ keys_zone=backcache:10m;

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream api_cluster {
    server 10.1.1.9:18080;
    server 10.1.1.10:18080;
}

upstream micro_api_cluster {
    server 10.1.1.9:19090;
    server 10.1.1.10:19090;
}

upstream mqttws_cluster {
    server 10.1.1.7:8083;
    server 10.1.1.8:8083;
}

Create a file /etc/nginx/sites-enabled/platform.conf with the below contents

upstream platform_cluster {
    server 10.1.1.5:4201;
    server 10.1.1.6:4201;
}

server {
    listen 80;
    client_max_body_size 1024M;

    location /api {
        proxy_pass http://api_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location /mservice {
        proxy_pass http://micro_api_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location /mqtt {
        proxy_pass http://mqttws_cluster;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect  off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

    location / {

        proxy_pass http://platform_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
        proxy_cache backcache;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }

}
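Before relying on the load balancer, validate the configuration and reload nginx:

```shell
# Validate the nginx configuration, then reload it without dropping connections
sudo nginx -t && sudo systemctl reload nginx
```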

HAProxy installation

Perform the below operations on machine 10.1.1.4

sudo apt-get install haproxy

Edit /etc/haproxy/haproxy.cfg and make sure the following settings are changed/created

global
  ulimit-n 8000016
  maxconn 2000000
  maxpipes 2000000
  tune.maxaccept 500
        
listen mqtt
  bind *:1883
  mode tcp
  option clitcpka # for TCP keep-alive
  timeout client 3h # by default the TCP keep-alive interval is 2 hours in the OS kernel
  timeout server 3h # by default the TCP keep-alive interval is 2 hours in the OS kernel
  option tcplog
  balance roundrobin
  server node1 10.1.1.7:1883 check
  server node2 10.1.1.8:1883 check
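As with nginx, validate the configuration before putting the proxy into service:

```shell
# Validate the configuration file, then restart HAProxy to pick it up
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy
```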
