When working with Firestore, keep its soft write limits in mind: sustained writes are capped at roughly 500 per second. To handle a large volume of updates efficiently, chunk your data into batches, either with custom code or with a helper such as lodash's chunk function. Batch writing also buys you atomic updates: either every write in the batch succeeds or none of them do. If speed is what matters most, issue individual writes in parallel instead; a library such as p-map can send many requests concurrently while capping how many are in flight. Avoid firing off numerous serial requests, as waiting on one round trip at a time will slow the process down significantly.
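To make the batched approach concrete, here is a minimal sketch using the Firebase Admin SDK in Node.js with TypeScript. The users collection, the User shape, and the chunk size are placeholders for your own data and tuning:

```ts
import { initializeApp } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';
import chunk from 'lodash/chunk';

initializeApp();
const db = getFirestore();

// Placeholder shape for the documents being written.
interface User {
  id: string;
  name: string;
}

const BATCH_SIZE = 500; // illustrative chunk size; adjust for your payloads

async function batchWriteUsers(users: User[]): Promise<void> {
  // Split the updates into chunks and commit each chunk as one batch.
  const groups = chunk(users, BATCH_SIZE);

  for (const group of groups) {
    const batch = db.batch();
    for (const user of group) {
      const ref = db.collection('users').doc(user.id);
      batch.set(ref, { name: user.name }, { merge: true });
    }
    // Every write in this batch succeeds or fails together.
    await batch.commit();
  }
}
```

Note that each commit is atomic on its own, but separate chunks are independent, so a later chunk can still fail after an earlier one has already been committed.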
If you're working in a language other than JavaScript, feel free to disregard the specific library suggestions; the batching and parallelism patterns carry over.
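Sticking with JavaScript for illustration, the parallel approach might look like the following sketch with p-map. The concurrency value of 50 is an assumption you would tune against your observed throughput, and the collection and fields are again placeholders:

```ts
import { getFirestore } from 'firebase-admin/firestore';
import pMap from 'p-map';

const db = getFirestore();

interface User {
  id: string;
  name: string;
}

async function parallelWriteUsers(users: User[]): Promise<void> {
  // Each document write is an independent request; p-map limits how many
  // are in flight at once so you stay under the soft write limits instead
  // of waiting on one slow round trip at a time.
  await pMap(
    users,
    (user) =>
      db
        .collection('users')
        .doc(user.id)
        .set({ name: user.name }, { merge: true }),
    { concurrency: 50 }
  );
}
```

There is no atomicity across documents here, but the work fans out, which is the point when throughput is the priority.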
In summary, batched writes are essential when your business logic needs atomicity, while individual parallel writes, with a sensible concurrency cap, are the better fit for pushing a high volume of requests through quickly without running into throttling.