Each project contains microflows that require significant processing power and are executed frequently. During peak usage, these microflows can degrade your application's performance. This module lets you control how many of these microflows run at once by assigning them to queues.
Each queue can be configured to handle a subset of these microflows, and each queue can be given a limit on the number of microflows it executes simultaneously. This lets you cap the load these microflows place on your application during peak usage, while still ensuring that every microflow is eventually executed.
The queues use a FIFO (first-in, first-out) approach and automatically restart themselves (including any microflows still waiting to execute) after a server restart.
IMPORTANT: This module currently does NOT support a multi-instance setup and as a result should NOT be used in a horizontally scaled environment.
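The core mechanism described above (a FIFO queue with a cap on concurrent execution) can be illustrated with a short sketch. Note this is a hypothetical illustration in plain Java, not the module's actual API: the class name `ProcessQueueSketch` and its methods are invented here, and real "microflows" are stood in for by `Runnable` tasks.

```java
import java.util.concurrent.*;

// Hypothetical sketch (not the module's real API): a FIFO queue that runs
// at most maxParallel tasks concurrently, modeled with a fixed-size thread
// pool backed by an unbounded FIFO work queue.
public class ProcessQueueSketch {
    private final ExecutorService pool;

    public ProcessQueueSketch(int maxParallel) {
        // newFixedThreadPool uses a LinkedBlockingQueue internally, so tasks
        // beyond maxParallel wait in FIFO order until a worker frees up.
        this.pool = Executors.newFixedThreadPool(maxParallel);
    }

    // Stand-in for assigning a microflow to this queue.
    public Future<?> enqueue(Runnable task) {
        return pool.submit(task);
    }

    // Stop accepting new tasks and wait for queued ones to finish.
    public void shutdown() throws InterruptedException {
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        // At most 2 tasks execute at once; the other 3 wait in FIFO order.
        ProcessQueueSketch queue = new ProcessQueueSketch(2);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            queue.enqueue(() -> System.out.println("executing task " + id));
        }
        queue.shutdown();
    }
}
```

The design point this sketch captures is the trade-off from the description: peak load is bounded by the pool size, while the unbounded FIFO queue guarantees that every submitted task eventually runs.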
5 stars, based on 20 votes
5/5 stars — Useful, but at the same time dangerous when developers are unaware of the pitfalls this module comes with. Definitely a module that should be implemented by experienced developers only.
Leon de Kuiper
5/5 stars — Very useful for keeping heavy operations away from the client side and managing performance spikes. It's quite technical, but I've seen the difference between projects that used a process queue and projects that worked without one. If you plan to build a decent-sized application, try to get this one going from the start. It saves time and production issues down the line.
Jitze de Groote
5/5 stars — Very useful module for executing long-running parallel tasks.
5/5 stars — Nice module! Just implemented it. Works fast and well.