Wait Until All Promises Complete Even If Some Rejected


Benjamin's answer offers a great abstraction for solving this issue, but I was hoping for a less abstracted solution. The explicit way to resolve this issue is to simply call .catch on the internal promises and return the error from their callback.

let a = new Promise((res, rej) => res('Resolved!')),
    b = new Promise((res, rej) => rej('Rejected!')),
    c = a.catch(e => { console.log('"a" failed.'); return e; }),
    d = b.catch(e => { console.log('"b" failed.'); return e; });

Promise.all([c, d])
  .then(result => console.log('Then', result)) // Then ["Resolved!", "Rejected!"]
  .catch(err => console.log('Catch', err));

Promise.all([a.catch(e => e), b.catch(e => e)])
  .then(result => console.log('Then', result)) // Then ["Resolved!", "Rejected!"]
  .catch(err => console.log('Catch', err));

Taking this one step further, you could write a generic catch handler that looks like this:

const catchHandler = error => ({ payload: error, resolved: false });

then you can do

> Promise.all([a, b].map(promise => promise.catch(catchHandler)))
  .then(results => console.log(results))
  .catch(() => console.log('Promise.all failed'))
< [ 'Resolved!', { payload: 'Rejected!', resolved: false } ]

The problem with this is that the caught values will have a different interface than the non-caught values, so to clean this up you might do something like:

const successHandler = result => ({ payload: result, resolved: true });

So now you can do this:

> Promise.all([a, b].map(promise => promise.then(successHandler).catch(catchHandler)))
  .then(results => console.log(results.filter(result => result.resolved)))
  .catch(() => console.log('Promise.all failed'))
< [ { payload: 'Resolved!', resolved: true } ]

Then to keep it DRY, you get to Benjamin's answer:

const reflect = promise => promise
  .then(successHandler)
  .catch(catchHandler);

where it now looks like

> Promise.all([a, b].map(reflect))
  .then(results => console.log(results.filter(result => result.resolved)))
  .catch(() => console.log('Promise.all failed'))
< [ { payload: 'Resolved!', resolved: true } ]

The benefits of the second solution are that it's abstracted and DRY. The downside is that you have more code, and you have to remember to reflect all your promises to keep things consistent.

I would characterize my solution as explicit and KISS, but indeed less robust. The interface doesn't guarantee that you know exactly whether the promise succeeded or failed.

For example you might have this:

const a = Promise.resolve(new Error('Not breaking, just bad'));
const b = Promise.reject(new Error('This actually didnt work'));

This won't get caught by a.catch, so

> Promise.all([a, b].map(promise => promise.catch(e => e)))
  .then(results => console.log(results))
< [ Error, Error ]

There's no way to tell which one was fatal and which wasn't. If that's important, then you're going to want to enforce an interface that tracks whether it was successful or not (which reflect does).
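For example, reusing reflect and the handlers defined above, the two cases become distinguishable. A rough sketch:

Promise.all([a, b].map(reflect))
  .then(results => {
    // results[0] -> { payload: Error('Not breaking, just bad'), resolved: true }
    // results[1] -> { payload: Error('This actually didnt work'), resolved: false }
    console.log(results.map(r => r.resolved)); // [ true, false ]
  });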

If you just want to handle errors gracefully, then you can just treat errors as undefined values:

> Promise.all([a.catch(() => undefined), b.catch(() => undefined)])
  .then((results) => console.log('Known values: ', results.filter(x => typeof x !== 'undefined')))
< [ 'Resolved!' ]

In my case, I don't need to know the error or how it failed--I just care whether I have the value or not. I'll let the function that generates the promise worry about logging the specific error.

const apiMethod = () => fetch()
  .catch(error => {
    console.log(error.message);
    throw error;
  });

That way, the rest of the application can ignore its error if it wants, and treat it as an undefined value if it wants.
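For instance, a caller that only cares about whether it got a value might look something like this (a sketch building on the apiMethod above, which is itself only illustrative):

const getData = () => apiMethod().catch(() => undefined);

getData().then(value => {
  if (typeof value !== 'undefined') {
    console.log('Got a value:', value);
  }
  // otherwise just carry on -- apiMethod already logged the specific error
});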

I want my high level functions to fail safely and not worry about the details on why its dependencies failed, and I also prefer KISS to DRY when I have to make that tradeoff--which is ultimately why I opted to not use reflect.

How to wait for all Promises to be finished

First, you have to fix both functionOne() and functionTwo() so that they resolve() the promise they create when their timer fires. Without that, they just create a promise that never resolves which isn't very useful and does not notify the caller when they are done.

Then, to run them in parallel, use Promise.all(). It lets you call both functions, tracks the two returned promises together, and the promise it returns resolves when both functions have completed, or rejects if either function rejects its promise.

In your example here, none of your promises rejects, but if you want to know when all promises have finished even if some reject, then you would use Promise.allSettled() instead of Promise.all(). The main difference is that Promise.all() will short-circuit and reject its promise as soon as any promise you pass it rejects, whereas Promise.allSettled() will wait until all promises are done, regardless of resolve/reject. Though you aren't using it here, the resolved value from Promise.allSettled() is also different, so that you can tell which promises rejected and which resolved.

Here's a runnable (in the snippet) example that uses Promise.all().
You can swap in Promise.allSettled() if that's the behavior you'd rather see:

const logger = console;

const functionOne = function () {
    logger.info("Starting functionOne");
    return new Promise(resolve => {
        setTimeout(function () {
            logger.info("Finished functionOne after 20 sec.");
            resolve();
        }, 20000);
    });
};

const functionTwo = function () {
    logger.info("Starting functionTwo");
    return new Promise(resolve => {
        setTimeout(function () {
            logger.info("Finished functionTwo after 10 sec.");
            resolve();
        }, 10000);
    });
};

const runningFunctions = function () {
    logger.info('Start jobs');
    return Promise.all([functionOne(), functionTwo()]);
};

runningFunctions().then(() => {
    logger.info(`All done after 20 sec.`);
}).catch(err => {
    console.log(err);
});
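For comparison, swapping in Promise.allSettled() only changes the combinator and the shape of the resolved value. A sketch of that variant (the function name is just illustrative):

const runningFunctionsSettled = function () {
    logger.info('Start jobs');
    return Promise.allSettled([functionOne(), functionTwo()]);
};

runningFunctionsSettled().then(results => {
    // results looks like:
    // [ { status: 'fulfilled', value: undefined },
    //   { status: 'fulfilled', value: undefined } ]
    // and the promise resolves even if one of the functions had rejected.
    logger.info(`All settled after 20 sec.`);
    logger.info(results);
});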

Limit concurrency and wait until all promises complete even if some reject

It's simple enough to implement yourself: make an array of functions that, when called, return a Promise. Then implement a limiter function that takes functions from that array and calls them, and once one finishes, recursively calls the limiter again until the array is empty:

const request = (file) => new Promise((res, rej) => {
    console.log('requesting', file);
    setTimeout(() => {
        if (Math.random() < 0.5) {
            console.log('resolving', file);
            res(file);
        } else {
            console.log('rejecting', file);
            rej(file);
        }
    }, 1000 + Math.random() * 1000);
});
const files = [1, 2, 3, 4, 5, 6];

const makeRequests = files.map(file => () => request(file));
const results = [];
let started = 0;
const recurse = () => {
    const i = started++;
    const makeRequest = makeRequests.shift();
    return !makeRequest ? null : Promise.allSettled([makeRequest()])
        .then(result => {
            results[i] = result[0];
            return recurse();
        });
};
const limit = 2;
Promise.all(Array.from({ length: limit }, recurse))
    .then(() => {
        console.log(results);
    });

How to wait until one promise completes even if some are rejected or still in progress?

It seems you need Promise.any():

https://v8.dev/features/promise-combinators#promise.any

https://v8.dev/blog/v8-release-85#javascript

However, you can use it unflagged only with the latest Node.js canary versions, which use V8 8.5.

With Node.js v14.6.0 or nightly, you need the --harmony-promise-any flag.
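Once available, usage looks roughly like this: Promise.any() resolves with the first fulfilled value and only rejects (with an AggregateError) if every promise rejects. A minimal sketch:

const a = Promise.reject(new Error('failed fast'));
const b = new Promise(resolve => setTimeout(() => resolve('slow but fine'), 1000));

Promise.any([a, b])
  .then(value => console.log(value))       // 'slow but fine'
  .catch(err => console.log(err.errors));  // only reached if *all* promises reject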

Is there any functions like Promise.all() but that runs all promises even after one fails?

You can use Promise.allSettled(). This newer Promise API resolves when all the promise objects in the supplied array are settled (i.e. either fulfilled or rejected).

The value in the then callback will be an array of objects, each with a status key plus either a value key (for fulfilled promises) or a reason key (for rejected ones), describing the result of each individual promise in the given array:

Promise.allSettled([
    Promise.resolve("Resolved Immediately"),
    new Promise((res, rej) => {
        setTimeout(() => res("Resolved after 3 secs"), 3000)
    }),
    Promise.reject(new Error("Rejected Immediately"))
]).then(arr => console.log(arr));
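For the snippet above, the logged array would look roughly like this (note the rejected entry carries reason instead of value):

[
    { status: 'fulfilled', value: 'Resolved Immediately' },
    { status: 'fulfilled', value: 'Resolved after 3 secs' },
    { status: 'rejected', reason: Error: Rejected Immediately }
]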

