Distributed Job
Evenly scalable job
-
Distributed Job gives you the flexibility to run multiple instances of a Job within the cluster. The platform makes sure to allocate the instances across the cluster nodes as evenly as possible.
-
Distributed jobs are generally used to distribute your workload and utilize the cluster's CPU and memory efficiently. They are also typically designed to run forever, until explicitly stopped.
-
Context Variable: job
-
Running Job Handle Variable: THIS
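Because a distributed job is designed to run until it is stopped, the job rule is typically written as a loop guarded by the THIS handle. Below is a minimal sketch of that pattern; it assumes only the THIS handle and the log variable shown in the examples on this page, and the sleep is a hypothetical placeholder for real work.
log.info("job starting on node %s", THIS.getNodeId());
while(THIS.isRunning()){ //loop until the platform stops the job
    //do one unit of work here, e.g. take an item from a queue
    Thread.sleep(1000); //hypothetical placeholder work
}
log.info("job stopped.");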
Example
1) Binary Rule (SortFeeder)
- This rule takes comma-separated numbers and pushes them as lists of numbers into a queue
- Example
def queue = grid.objQueue("SORT_QUEUE"); //create or get the distributed queue
def lines = msg.lines(); //incoming message, one CSV record per line
int count = 0;
for(line in lines){
    def csv = line.trim();
    if("".equals(csv)) continue; //skip blank lines
    def vals = csv.split(",");
    def iarray = [];
    for(val in vals){
        iarray.add(Integer.valueOf(val.trim())); //parse each number
    }
    queue.offer(iarray); //push the list of numbers into the queue
    ++count;
}
log.info("%d arrays fed", count);
Sample data
10,34,27,98,201,11
890,67,25,78,165,908
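With the sample data above, the feeder offers two lists to SORT_QUEUE ([10, 34, 27, 98, 201, 11] and [890, 67, 25, 78, 165, 908]) and logs "2 arrays fed". As a quick sanity check, and assuming the distributed queue exposes the usual java.util.Queue size() method (an assumption, not confirmed by this page), another rule could report the backlog:
def queue = grid.objQueue("SORT_QUEUE");
log.info("pending arrays: %d", queue.size()); //size() assumed from java.util.Queue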
2) Distributed Job Rule (ListSorter)
- This Job rule takes lists of numbers from the queue and performs the sort operation
log.info("ListSorter running...");
def queue = grid.objQueue("SORT_QUEUE");
def al = grid.atomicLong("SORT_QUEUE_JOB"); //live-instance counter
def JOBID = al.incrementAndGet(); //this instance's number while running
try{
    while(THIS.isRunning()){ //loop until the job is stopped
        def narray = queue.take(); //wait for data
        try{
            narray.sort(); //Groovy list sort
            log.info("Node[%d][%s] Sorted: %s", JOBID, THIS.getNodeId(), narray);
        }catch(Exception ex){
            log.error(ex);
        }
    }
}finally{
    al.decrementAndGet(); //this instance is no longer running
}
log.info("ListSorter stopped.");
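Every ListSorter instance increments SORT_QUEUE_JOB on start and decrements it on exit, so the counter always reflects the number of instances currently running. Assuming the platform's atomicLong mirrors java.util.concurrent.atomic.AtomicLong and exposes get() (an assumption), a monitoring rule could report it:
def al = grid.atomicLong("SORT_QUEUE_JOB");
log.info("ListSorter instances running: %d", al.get()); //get() assumed from AtomicLong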
See Also
Related REST APIs
[Create / Update Job](ref:upsertjob)
Get Job
Delete Job
Count Jobs
List Jobs
Set Job State
Count Running Jobs
List Running Jobs
Start Job
Restart Job
Count All Running Jobs