Let's engage in some quick calculations:
A latitude or longitude coordinate typically consists of 1 to 3 digits for the integer part (latitude runs to 90, longitude to 180), an optional minus sign, and around 3 to 5 digits for the fractional part, depending on precision. Call it 7 characters per coordinate, with two coordinates per point plus a separator character. With approximately 500 points, that totals (7 + 7 + 1) * 500 = 7500
characters. That may fit within the limit for certain browsers (the commonly cited conservative limit is around 2,000 characters, though many modern browsers allow far more), but assuming your domain and path also take up space, there's a real risk of exceeding it, especially if my assumptions are slightly optimistic.
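The estimate above can be sketched in a few lines; the per-coordinate figure is the assumption from the paragraph, not a measured value:

```javascript
// Back-of-the-envelope URL length estimate.
// Assumptions (from the discussion above): ~7 characters per
// coordinate, 2 coordinates per point, 1 separator per point.
const charsPerCoord = 7;
const separatorChars = 1;
const points = 500;

const estimatedLength = (charsPerCoord * 2 + separatorChars) * points;
console.log(estimatedLength); // 7500
```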
Moreover, if you intend for people to share these links, having URLs as lengthy as multi-page articles might not be ideal.
Given these constraints, this approach seems unlikely to work as-is. Your best bet is to reduce the number of latitude/longitude pairs (500 is quite a lot) and/or decrease the precision, as illustrated by the XKCD comic linked here: [link].
For instance, keeping only one fractional digit via Number.prototype.toFixed(1)
(enough for, say, city-to-city routes) and eliminating the decimal point (which can be undone during deserialization by dividing by 10) could bring the average down to 3 to 4 characters per coordinate, or roughly 7 characters per pair. If we also halve the number of required pairs to 250, we end up with 250 * (7 + 1) = 2000
characters. This may still exceed limits, but it's a step in the right direction.
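A minimal sketch of that compression idea, assuming one fractional digit of precision; the function names are illustrative, not an established API:

```javascript
// Encode a [lat, lng] pair compactly: keep one fractional digit,
// then drop the decimal point (equivalent to multiplying by 10).
function encodePoint([lat, lng]) {
  const enc = (n) => n.toFixed(1).replace(".", "");
  return `${enc(lat)}_${enc(lng)}`; // "_" as a URL-safe separator
}

// Decode by undoing the scaling: divide by 10 to restore
// the single fractional digit that was folded into the integer.
function decodePoint(s) {
  return s.split("_").map((part) => Number(part) / 10);
}

console.log(encodePoint([48.9, -2.4])); // "489_-24"
console.log(decodePoint("489_-24"));    // [48.9, -2.4]
```

Note that the optional minus sign survives the round trip, since it simply becomes part of the integer string.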
Lastly, make sure you choose a URL-friendly separator, so it isn't inflated to three characters by percent-encoding.
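To see why the separator choice matters: a reserved character triples in size under percent-encoding, while an unreserved one (letters, digits, `-`, `_`, `.`, `~`) passes through unchanged:

```javascript
// A reserved character gets percent-encoded: 1 char becomes 3.
console.log(encodeURIComponent("|")); // "%7C"

// An unreserved character is left alone.
console.log(encodeURIComponent("_")); // "_"
```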
In conclusion, encoding such vast amounts of data into a URL may prove impractical. Nonetheless, this analysis should help in exploring ways to compress the information effectively.