# Why use a task queue, such as DADI Queue?
Chaos, uncertainty and disorder. These are the building blocks of our world, out of which we evolved, got smart and became capable of organizing the complex network of countless variables around us.
As civilization continued to emerge, we built dens and dwellings and then pyramids and pantheons. Our projects became increasingly complex and we needed other ways of managing the chaos.
So we planned. We wrote task lists and then delegated those jobs to our minions.
And this, if you’ll excuse the rhetoric, is an example of the asynchronous task queue in action.
## Okay, I’m sold, lay on the details
So having established that a task queue is, ahem, the cornerstone of human civilisation, let’s talk about DADI Queue and what it can do for your app or service.
DADI Queue is one of the microservices from our open-source box of web service tricks. It’s currently available from the repo on GitHub and is soon to be available via our decentralized cloud services platform.
Once installed, following the instructions from the repo, you’ll be the proud admin of a lightweight, high-performance task queue running on Node.js and Redis.
Integrating DADI Queue into your app allows you to queue up work asynchronously so that it can be done outside the context of the initiating user request.
For example, let’s say your social networking app sends ‘new message’ emails to all your friends when you post a message. This could be handled in one of three ways:

- Synchronously, while-u-wait
- Asynchronously, now, using a background client-side request (e.g. AJAX)
- Asynchronously, later, by storing the details and running a cron job
If you choose the first option your users had better get a brew on ☕️. The other two options are feasible, but neither is inherently scalable or fault-tolerant, and both qualities are essential in cultivating positive user sentiment.
The ideal response to the above scenario is, of course, to use a task queue.
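To make that concrete, here’s a minimal sketch of the queued approach. It uses a plain array to stand in for the Redis-backed queue, and the function and field names are illustrative, not the actual DADI Queue API:

```javascript
// Hypothetical sketch: instead of emailing friends synchronously, the
// request handler only records a task string and returns immediately.
// The array below stands in for the Redis-backed queue.
const queue = [];

function postMessage(userId, messageId) {
  // ... save the message to the database ...

  // Enqueue the notification work; a worker will pick it up later.
  queue.push(`email:new-message:${userId}:${messageId}`);

  // The user never waits for email delivery.
  return { status: 'posted' };
}

const response = postMessage(12322, 432);
```

The request handler’s only queue-related cost is a single push, so response time stays flat no matter how many friends need emailing.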
## So how does it all work?
DADI Queue consists of three parts: the queue, the broker and the workers.
In the scenario above, posting a message would add a task to the queue via the API. A task is usually just a text string containing the relevant data, e.g. `email:new-message:12322:432`, where the numbers are the user and message IDs.
The broker is the middleman between the queue and the workers. It polls the queue for new tasks and routes each one to a matching worker based on the content of the task string (a bit like routing to an endpoint in a REST API).
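The routing idea can be sketched like this. The worker registry and the `domain/action` lookup scheme are assumptions for illustration, not DADI Queue’s internals:

```javascript
// Hypothetical sketch of broker-style routing: the task string is split
// on ':' and matched to a worker, much as a REST router matches a path.
const workers = {
  'email/new-message': (userId, messageId) =>
    `emailing friends of user ${userId} about message ${messageId}`,
};

function route(task) {
  const [domain, action, ...args] = task.split(':');
  const worker = workers[`${domain}/${action}`];
  if (!worker) throw new Error(`no worker for task: ${task}`);
  return worker(...args);
}

const routed = route('email:new-message:12322:432');
```

The broker stays generic: adding a new task type is just a matter of registering another worker, with no change to the routing code.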
## And what’s the payback?
Fair question. Since you ask, here’s a summary:
Tasks are ‘leased’ to a worker with a deadline by which the worker must finish. If a worker fails (e.g. the SMTP service is down while sending an email), the broker puts the task back on the queue for a specified number of retries, thus providing fault-tolerance. On the last retry the worker can perform a final remedial action.
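The lease-and-retry behaviour can be mimicked in a few lines. This is a simplified stand-in for what the broker does, not DADI Queue’s own code; in the real system each retry goes back through the queue rather than a local loop:

```javascript
// Hypothetical sketch of retry semantics: a task is attempted up to
// maxRetries times, with a remedial action on the final failure.
function processWithRetries(task, worker, maxRetries, onFinalFailure) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return worker(task);
    } catch (err) {
      if (attempt === maxRetries) {
        onFinalFailure(task, err); // last-chance remedial action
        throw err;
      }
      // otherwise the task goes back on the queue for another lease
    }
  }
}

// Simulate an SMTP service that is down for the first two attempts.
let attempts = 0;
const flakySmtp = () => {
  attempts++;
  if (attempts < 3) throw new Error('SMTP down');
  return 'sent';
};

const result = processWithRetries(
  'email:new-message:12322:432', flakySmtp, 5, () => {}
);
```

The transient failure is absorbed by the retries, so the email still goes out without any involvement from the code that queued the task.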
The queue part of the system can run on a separate server to the workers, thus providing scalability. Throughput can be monitored and extra worker instances added as necessary. It is also possible to schedule specified tasks to be processed at times when system resources are optimal. Conversely, the processing of tasks can be throttled if required.
Workers can be grouped in folders according to the task domain (e.g. image processing or push notifications), so each folder forms a neat abstraction of that domain’s functions. Workers can also chain one another or create new tasks, removing complexity from the calling code.
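Chaining can be sketched as one worker enqueuing a follow-up task as part of its job. The worker names and folder-style keys here are hypothetical:

```javascript
// Hypothetical sketch of worker chaining: a worker finishes its own job,
// then enqueues a follow-up task, so the calling code never needs to know
// about the second step. The array stands in for the Redis-backed queue.
const queue = [];

// Workers keyed by domain, mirroring a folder layout like workers/image/.
const workers = {
  'image/resize': (imageId) => {
    // ... resize image <imageId> ...

    // Chain: schedule a push notification about the resized image.
    queue.push(`push:notify:${imageId}`);
    return `resized ${imageId}`;
  },
};

const outcome = workers['image/resize']('42');
```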
## A killer example, please
Naturally. Consider an online shop.
The checkout process may interact with a number of external APIs when a customer places an order: CRM system, payment gateway, fraud prevention, email provider, etc.
Performing these interactions synchronously makes the checkout process slow, tightly coupled to the external APIs and error-prone, given the number of failure points.
Using a task queue, each API interaction can become a worker module. On order confirmation the checkout process simply saves the order data, sends the relevant tasks to the queue (e.g. `create-transaction:14388`, etc.), then shows the confirmation page without waiting for those tasks to complete.
The queue then processes the tasks asynchronously, retrying any that fail. In this example, the workers would likely be chained to send a final notification on success or failure.
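The checkout handler’s fan-out can be sketched like this. The task names follow the `create-transaction:14388` style above, but the specific job list is an assumption for illustration:

```javascript
// Hypothetical sketch of the checkout flow: the handler saves the order,
// enqueues one task per external API, and returns without waiting for any
// of them. The array stands in for the Redis-backed queue.
const queue = [];

function confirmOrder(orderId) {
  // ... persist the order data ...

  // One task per external interaction: payment, fraud check, CRM, email.
  ['create-transaction', 'check-fraud', 'update-crm', 'send-receipt']
    .forEach((job) => queue.push(`${job}:${orderId}`));

  // The confirmation page is shown immediately.
  return { page: 'confirmation', orderId };
}

const page = confirmOrder(14388);
```

Each external API now fails (and retries) independently, and on a Black Friday spike the queue simply grows while extra workers drain it.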
The result is a fast user experience, API code decoupled from the checkout, fault-tolerance, plus the ability to scale up on Black Friday. Spot on!
Hopefully I’ve shown that using a task queue can significantly increase the resilience and availability of your app or service.
It’s worth examining your use case carefully to avoid over-engineering or prematurely optimising, but in the right place a task queue can bring many benefits, especially when building large interconnected systems.
Simply put, DADI Queue is a great way to manage complexity in your app.
Happy queuing and don’t forget to register for the crowdsale to get involved.