Good questions. For now, this is the best place, yes. At some point we may have a dedicated place for this sort of initial discussion. But this seems like as good a place as any, as there are many new users.
The OpenEEmeter currently implements primarily the CalTRACK methods (https://www.energymarketmethods.org/). A quote from the CalTRACK methods intro may help:
> CalTRACK methods yield whole building, site-level savings outputs. Portfolio-level savings confidence is measured by aggregating the performance of a number of individual sites and calculating portfolio fractional savings uncertainty.
Essentially, you can use CalTRACK and the OpenEEmeter to create baseline models and measure the level of uncertainty associated with those models, and CalTRACK also gives you a way of aggregating uncertainty across multiple buildings to increase overall confidence. In short, the answer to your question depends both on how accurate you need your results to be and on what questions you are trying to answer. You'll find some helpful metrics in the eemeter.metrics module, such as R^2 and CVRMSE. The CVRMSE can be aggregated into a fractional savings uncertainty value, which gives you a sense of the percent uncertainty relative to the size of your measured energy savings or usage differences. As you might expect, we generally find that savings become more significant with larger sets of buildings and deeper retrofits. Whereas any particular building (especially commercial) may be affected by "non-routine events" of large enough magnitude to mask savings, the savings measured across groups of buildings are less easily masked.
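To make that concrete, here is an illustrative sketch (plain NumPy, not the eemeter.metrics API) of how CVRMSE is typically computed for a single baseline model, and why portfolio-level fractional savings uncertainty shrinks as buildings are added. The aggregation assumes independent errors between buildings, and the function names are hypothetical:

```python
import numpy as np

def cvrmse(observed, predicted, n_params=1):
    """Coefficient of variation of the RMSE, relative to mean observed usage."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(observed)
    rmse = np.sqrt(np.sum((observed - predicted) ** 2) / (n - n_params))
    return rmse / observed.mean()

def portfolio_fsu(savings, uncertainties):
    """Portfolio fractional savings uncertainty: absolute uncertainties add
    in quadrature (independence assumed), while savings add linearly."""
    savings = np.asarray(savings, dtype=float)
    uncertainties = np.asarray(uncertainties, dtype=float)
    return np.sqrt(np.sum(uncertainties ** 2)) / np.sum(savings)

# A single building with 10 units of savings and +/-8 units of uncertainty
# is individually quite noisy (fractional uncertainty 0.8)...
single = portfolio_fsu([10.0], [8.0])
# ...but 100 such buildings give a much tighter portfolio estimate (~0.08).
portfolio = portfolio_fsu([10.0] * 100, [8.0] * 100)
```

This is why aggregating across a portfolio helps: the relative uncertainty falls roughly with the square root of the number of (independent) buildings.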
The CalTRACK methods and the OpenEEmeter implementation of those methods have been extensively vetted, but they are still being refined. The CalTRACK Technical Appendix contains a sampling of the model testing, which is active and ongoing as part of EM2, linked above. You may also be interested in perusing some of the known issues currently being discussed as part of the CalTRACK working group. In addition to the model metrics mentioned above, you can also learn a great deal about whether a model is "reasonable" in practice on small datasets by using the eemeter.visualization module. There are some examples of this in the tutorial.
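For a sense of the kind of visual check this enables, here is a minimal sketch using plain matplotlib (not the eemeter.visualization API) of an "energy signature" plot: daily usage against outdoor temperature, which makes heating/cooling balance points and outliers easy to eyeball. The data here is synthetic, purely for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Synthetic year of daily data: flat baseload above a ~65 degF balance
# point, heating load below it, plus noise. Purely hypothetical numbers.
rng = np.random.default_rng(0)
temps = rng.uniform(20, 90, 365)
usage = 10 + 0.5 * np.clip(65 - temps, 0, None) + rng.normal(0, 1, 365)

fig, ax = plt.subplots()
ax.scatter(temps, usage, s=8, alpha=0.5)
ax.set_xlabel("Daily mean temperature (degF)")
ax.set_ylabel("Daily usage (kWh)")
ax.set_title("Energy signature")
fig.savefig("energy_signature.png")
```

Even without formal metrics, a plot like this quickly reveals whether a fitted temperature response is plausible for a given building.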