We need to load 200 images, 10 at a time. Using mergeMap with a concurrency limit in RxJS seems like the ideal approach for issuing the 200 HTTP requests and executing them in batches of 10. However, some images are allowed to fail without affecting the rest of the system, and handling those errors has turned out to be the hard part.
The problem is that with mergeMap, a failure in any one inner observable causes the remaining requests to be cancelled or blocked. I have added catchError at different levels of the chain to catch the errors and return non-erroring observables, but without success.
Is there a way to keep mergeMap's concurrency limit while tolerating individual errors, so that every request still executes?
Here is a snippet of the code I am working with:
import { from, of, Observable } from 'rxjs';
import { mergeMap, toArray } from 'rxjs/operators';

const limitedParallelObservableExecution = <T>(
  listOfItems: T[],
  observableMethod: (item: T) => Observable<unknown>,
  maxConcurrency: number = maxParallelUploads
): Observable<unknown> => {
  if (listOfItems && listOfItems.length > 0) {
    // Run observableMethod over the items, at most maxConcurrency at a time,
    // and collect all results into a single array emission.
    const observableListOfItems: Observable<T> = from(listOfItems);
    return observableListOfItems.pipe(mergeMap(observableMethod, maxConcurrency), toArray());
  } else {
    return of({});
  }
};
Here's how I execute it:
limitedParallelObservableExecution(queueImages, (item) =>
  methodReturnsObservable().pipe(
    catchError((error) => {
      // Catch the error and return a harmless fallback observable instead
      return of([]);
    })
  )
).subscribe((value) => console.log('value: ', value));
Unfortunately, the final console.log statement within the subscribe block never gets printed.
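For reference, here is a minimal, self-contained sketch of the behaviour I am expecting (no HTTP, hypothetical data and names, assuming RxJS 7): every third "request" fails, but because catchError is applied to each inner observable, mergeMap with a concurrency of 10 keeps running and toArray still emits once everything settles:
import { from, of, throwError, Observable } from 'rxjs';
import { mergeMap, toArray, catchError, delay } from 'rxjs/operators';

// Hypothetical stand-in for the real image request: fails for every third id.
const fakeImageRequest = (id: number): Observable<number[]> =>
  id % 3 === 0
    ? throwError(() => new Error(`image ${id} failed`))
    : of([id]).pipe(delay(10));

from(Array.from({ length: 200 }, (_, i) => i))
  .pipe(
    // At most 10 requests in flight; each failure is mapped to an empty array
    // so the outer stream never errors and toArray can complete.
    mergeMap((id) => fakeImageRequest(id).pipe(catchError(() => of([]))), 10),
    toArray()
  )
  .subscribe((results) => console.log('completed with', results.length, 'results'));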
EDIT: Upon further investigation, it appears that the issue lies within the 'methodReturnsObservable()' function:
downloadVehicleImage(imageId: string, width: number): Observable<any> {
  const params = objectToActualHttpParams({
    width,
    noSpinner: true
  });
  // ... rest of the method unchanged (the http.get call and its pipe)
}
Removing the pipe operation from the http.get call resolved the issue. I will keep debugging to pin down the root cause. Thank you for the insight!
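For what it's worth, the symptom of subscribe never firing is also exactly what happens when one of the inner observables never completes, since toArray only emits after the whole source completes, so that may be worth checking inside that pipe. A minimal sketch of that failure mode, using hypothetical stand-ins rather than my real code:
import { from, of, NEVER, Observable } from 'rxjs';
import { mergeMap, toArray } from 'rxjs/operators';

// No error is thrown anywhere, but one inner observable never completes,
// so toArray never emits and the subscribe callback is never invoked.
const innerObservable = (id: number): Observable<number> =>
  id === 2 ? NEVER : of(id);

from([1, 2, 3])
  .pipe(mergeMap(innerObservable, 2), toArray())
  .subscribe((values) => console.log('this line is never reached:', values));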