Deciding on a distribution is the sticky part.

There's some sort of proof of concept here.

## Distribution Idea

Targeting two goals:

The distribution should **not** be unbounded for normal use; I believe most users who unwittingly encountered a 50 MB bigint would consider it a usability problem in several ways. Therefore the distribution for size=200 should be bounded at a point where most people wouldn't consider it a usability problem.

Given that I've seen bugs that only manifest for `(< Double/MAX_VALUE n)` (i.e., n greater than roughly 2^1024, since the largest double is just under that), it might be good to have an upper bound somewhere in that area.
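To pin down where that bound sits: a quick check (not part of the proposal) confirms that the largest double is just under 2^1024. Python floats are the same IEEE-754 doubles as Java's `Double`, so `sys.float_info.max` equals `Double/MAX_VALUE`:

```python
import math
import sys

# Python floats are IEEE-754 doubles, the same representation as
# Java's Double, so sys.float_info.max equals Double/MAX_VALUE.
max_double = sys.float_info.max

# The largest double is (2 - 2^-52) * 2^1023, i.e. just under 2^1024.
print(math.log2(max_double))   # ~1024
print(2**1024 > max_double)    # True: 2^1024 already exceeds it
```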

Numbers near 2^1024 are fairly large to generate on a frequent basis, though, so maybe we should target something like ~1% of generated numbers being larger than 2^1024 at size=200. I wouldn't even mind a technically unbounded distribution if the probabilities dropped off dramatically (e.g., numbers larger than 2^2048 would not be generated before the end of the universe).
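One way to get that shape (a sketch under stated assumptions, not test.check's actual generator; all names and parameters are hypothetical): pick the bit-length from a geometric distribution tuned so that the chance of exceeding 1024 bits is the target tail probability. The tail then decays exponentially, so the chance of exceeding 2048 bits is the *square* of that: a 1% tail at 2^1024 becomes 0.01% at 2^2048, and vanishingly small beyond.

```python
import random

def sample_bigint(rng, bound_bits=1024, tail_prob=0.01):
    """Hypothetical sketch: sample a nonnegative bigint whose
    bit-length is geometrically distributed, tuned so that
    P(bit-length exceeds bound_bits) is about tail_prob."""
    # Per-bit continuation probability q, chosen so q**bound_bits == tail_prob.
    q = tail_prob ** (1.0 / bound_bits)
    bits = 1
    while rng.random() < q:
        bits += 1
    return rng.getrandbits(bits)

rng = random.Random(42)
samples = [sample_bigint(rng) for _ in range(1000)]
big = sum(1 for n in samples if n.bit_length() > 1024)
# Roughly 1% of samples exceed 2^1024, while exceeding 2^2048 has
# probability tail_prob**2 = 0.0001 -- the "technically unbounded
# but drops off dramatically" shape described above.
```

A size parameter could then scale `bound_bits` (or `tail_prob`) so that small sizes stay tiny and only size=200 reaches this distribution.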