Now the downside to this is that you need parameter ranges for the model to simulate, and you don't necessarily know the probability distribution for each variable in the model up front. That means you have to estimate (or guess at) them, which makes the exercise somewhat error-prone. There is, however, a technique you can use to teach yourself (or others) to do a better job of estimation: "calibrated probability assessment".
The book How To Measure Anything by Douglas Hubbard does a really nice job of laying out how to use calibrated probability assessments, mathematical models, and Monte Carlo simulation to build a probability distribution for things that look hard or impossible to measure.
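One practical trick from that approach: a calibrated estimator gives you a 90% confidence interval for a quantity, and you can turn that interval into a distribution you can actually sample from. A minimal sketch, assuming you model the quantity as normal (90% of a normal distribution lies within about ±1.645 standard deviations of the mean, so the interval width divided by 3.29 gives the standard deviation):

```python
import random
import statistics

def normal_from_90ci(lo, hi):
    """Convert a calibrated 90% confidence interval into the
    parameters of a normal distribution: the mean is the midpoint,
    and the standard deviation is the interval width / 3.29
    (since 90% of a normal lies within +/-1.645 sd of the mean)."""
    mean = (lo + hi) / 2
    sd = (hi - lo) / 3.29
    return mean, sd

# Hypothetical example: an estimator is 90% confident that units
# sold next year will fall between 1,000 and 5,000.
random.seed(0)
mean, sd = normal_from_90ci(1_000, 5_000)
samples = [random.gauss(mean, sd) for _ in range(100_000)]
```

The `normal_from_90ci` name and the numbers are mine, not Hubbard's; the normality assumption is also a modeling choice (a lognormal is often better for strictly positive quantities).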
Anyway, if you build a model for each of your ideas and run a Monte Carlo simulation over all of them to get a probability distribution for the return, then you at least have something somewhat objective to base a decision on.
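To make that concrete, here is a minimal sketch of simulating one idea's return. All parameters and their distributions are hypothetical stand-ins for what a calibrated assessor would give you:

```python
import random

random.seed(42)

def simulate_return(n_trials=100_000):
    """Monte Carlo simulation of a hypothetical project's first-year
    return. Each trial draws every uncertain parameter from its
    estimated distribution and computes the resulting profit."""
    returns = []
    for _ in range(n_trials):
        units = random.gauss(3_000, 1_200)                # from a 90% CI of ~1,000-5,000
        price = random.uniform(40, 60)                    # selling price per unit, $
        fixed_cost = random.triangular(80_000, 200_000, 120_000)  # low, high, mode
        returns.append(units * price - fixed_cost)
    return returns

returns = sorted(simulate_return())
# Summarize the resulting distribution instead of a single point estimate.
p5, p50, p95 = (returns[int(len(returns) * q)] for q in (0.05, 0.50, 0.95))
prob_loss = sum(r < 0 for r in returns) / len(returns)
```

Running the same kind of simulation for every candidate idea lets you compare them on percentiles and probability of loss rather than on a single guessed number.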
One last note, though: when doing this kind of simulation, one big risk (aside from mis-estimating a parameter) is leaving a relevant parameter out completely. I don't know of any deterministic way to make sure you include all the relevant features in a model. The best way I know of to address that is to crowd-source some help: get as many people as you can (people with relevant knowledge and experience) to evaluate and critique your model.