When working with a queue in TypeScript (ES6) that is processed on a 1 ms interval, I want to understand which of the following two approaches is more efficient.
1.

    setInterval(() => {
        if (this._queue.filter(a => !a.running && a.cbs.length).length) {
            for (let i = 0; i < this._queue.filter(a => !a.running && a.cbs.length).length; i++) {
                ...
            }
        }
    }, 1);
2.

    setInterval(() => {
        for (let i = 0; i < this._queue.filter(a => !a.running && a.cbs.length).length; i++) {
            ...
        }
    }, 1);
Approach #1 includes an additional line of code, but my assumption is that it requires less CPU work on each tick of the interval, because the guard lets it skip the loop entirely when nothing is ready.
Approach #2, on the other hand, always defines `i`, filters the queue, and then iterates through the result, even on ticks where there is nothing to do.
The performance difference between the two approaches may be minimal, but I'm curious to understand the potential impact nonetheless. Is my assumption about Approach #1 accurate?
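One observation that may matter more than the guard: in both snippets, the `.filter()` call in the for-loop condition is re-evaluated on every iteration, so the queue is re-scanned once per item processed. A minimal sketch of hoisting the filtered array into a local variable instead; the `Scheduler` class, `start` method, and `QueueItem` shape are hypothetical names inferred from the snippets above:

    // Hypothetical item shape, inferred from the fields used in the snippets.
    interface QueueItem {
        running: boolean;
        cbs: Array<() => void>;
    }

    // Hypothetical wrapper class, so `this._queue` has a home.
    class Scheduler {
        private _queue: QueueItem[] = [];

        start(): void {
            setInterval(() => {
                // Filter once per tick instead of once per loop iteration.
                const ready = this._queue.filter(a => !a.running && a.cbs.length > 0);
                for (let i = 0; i < ready.length; i++) {
                    // ... process ready[i] (placeholder, as in the original snippets)
                }
            }, 1);
        }
    }

With the filtered array cached, the empty-queue guard from Approach #1 also becomes redundant, since a zero-length `ready` array skips the loop body anyway.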