
A few initial questions about OpenEEMeter

sichen@...
 

Hello everybody,

I hope this is the right place for discussions.  I have a few initial questions about OpenEEMeter:

1.  Do you recommend using it on a single commercial building or a small number of them? Or have you found that a large number of buildings is needed for the results to be significant?

2.  Do you recommend comparing the fitted model to standard benchmarks or reference models, such as one built with EnergyPlus, to make sure that the model is reasonable?


Re: A few initial questions about OpenEEMeter

ngo.phil@...
 

Good questions. For now, yes, this is the best place. At some point we may have a dedicated venue for this sort of initial discussion, but since there are many new users, this seems like as good a place as any.

The OpenEEmeter primarily implements the CalTRACK methods (https://www.energymarketmethods.org/, https://lists.lfenergy.org/g/em2). A quote from the CalTRACK methods introduction may help:


> CalTRACK methods yield whole building, site-level savings outputs. Portfolio-level savings confidence is measured by aggregating the performance of a number of individual sites and calculating portfolio fractional savings uncertainty.

Essentially, you can use CalTRACK and the OpenEEmeter to create baseline models and measure the level of uncertainty associated with those models, and CalTRACK also gives you a way of aggregating uncertainty across multiple buildings to increase overall confidence. In short, the answer to your question depends both on how accurate you need your results to be and on what questions you are trying to answer. You'll find some helpful metrics in the eemeter.metrics module, such as R^2 and CVRMSE. The CVRMSE can be aggregated into a fractional savings uncertainty value, which gives you a sense of the percent uncertainty relative to the size of your measured energy savings or usage differences. As you might expect, we generally find that savings become more significant with larger sets of buildings and deeper retrofits. Whereas any particular building (especially a commercial one) may be affected by "non-routine events" of large enough magnitude to mask savings, the savings measured across groups of buildings are less easily masked.
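For concreteness, a minimal sketch of computing those metrics on an observed/predicted pair might look like the following (treat the ModelMetrics class and its attribute names as assumptions to check against the eemeter documentation):

    # Minimal sketch: compute R^2 and CVRMSE for observed vs. modeled usage
    # using the eemeter.metrics module mentioned above. ModelMetrics and its
    # attributes (r_squared, cvrmse) are assumptions to verify against the docs.
    import pandas as pd
    import eemeter

    index = pd.date_range("2017-01-01", periods=5, freq="D", tz="UTC")
    observed = pd.Series([12.0, 15.0, 14.0, 18.0, 16.0], index=index)
    predicted = pd.Series([11.5, 15.5, 13.0, 17.0, 16.5], index=index)

    metrics = eemeter.ModelMetrics(observed, predicted, num_parameters=2)
    print(metrics.r_squared, metrics.cvrmse)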

The CalTRACK methods and the OpenEEmeter implementation of those methods have been extensively vetted, but they are still being refined. The CalTRACK Technical Appendix contains a sampling of the model testing, which is active and ongoing as part of EM2, linked above. You may also be interested in perusing some of the known issues currently being discussed as part of the CalTRACK working group. In addition to the model metrics mentioned above, you can also learn a lot about whether a model is "reasonable" in practice on small datasets by using the eemeter.visualization module. There are some examples of this in the tutorial.
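For a quick visual check on the bundled sample data, a sketch along these lines may help (plot_energy_signature and the results object's plot() method are part of the visualization tooling referenced above; the exact signatures are assumptions, so cross-check them with the tutorial):

    # Minimal sketch: fit a daily CalTRACK baseline model on the sample data
    # and eyeball it against the energy signature. The keyword arguments to
    # plot() and plot_energy_signature are assumptions; see the tutorial.
    import matplotlib.pyplot as plt
    import eemeter

    meter_data, temperature_data, metadata = eemeter.load_sample(
        "il-electricity-cdd-hdd-daily"
    )
    baseline_meter_data, _ = eemeter.get_baseline_data(
        meter_data, end=metadata["blackout_start_date"], max_days=365
    )
    design_matrix = eemeter.create_caltrack_daily_design_matrix(
        baseline_meter_data, temperature_data
    )
    model_results = eemeter.fit_caltrack_usage_per_day_model(design_matrix)

    # Scatter of usage vs. temperature with the fitted model overlaid.
    ax = eemeter.plot_energy_signature(baseline_meter_data, temperature_data)
    model_results.plot(ax=ax, with_candidates=True)
    plt.show()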


Re: A few initial questions about OpenEEMeter

sichen@...
 

Thanks so much.  This makes a lot of sense.  I will spend some more time looking through your links.


--
-----
Si Chen
Open Source Strategies, Inc.


Webinar: LF Energy - OpenEEmeter and EEweather: June 20, 2019, 10am PT

ngo.phil@...
 

Hey folks - I just wanted to send a "save the date" for the upcoming webinar introducing the OpenEEmeter as part of the LF Energy projects. We'll post updated webinar access information as the date gets closer.

The webinar is planned for June 20, 2019, from 10-11am Pacific.

---

LF Energy brings open source software, methods, and methodologies to the energy sector. It is a global open source project with 23 members in 10 countries, hosting projects and working groups that provide software, methods, and open data.

This webinar features project leader Phil Ngo describing OpenEEmeter and EEweather. Using the OpenEEmeter, private companies, utilities, and regulators can consistently calculate changes in energy consumption for building efficiency projects and portfolios with confidence in the methods and replicability of results.

The OpenEEmeter generates consistent and replicable results by always using the same methods to determine changes in energy consumption; there are no discretionary independent variables that change from calculation to calculation. Site-level changes in consumption will reflect the same underlying methods across programs and implementations.

OpenEEmeter features:

- Contains reference implementations of standard CalTRACK methods
- Enforces standards compliance by incorporating data sufficiency checking and first-class warnings reporting
- Facilitates integration with external systems and testing of methodological variations with modular design
- Uses public weather sources by default, but allows flexibility
- Is built on top of the popular Python scientific stack (scipy/pandas)
- Includes visualization and debugging tools

This project was contributed by Recurve, formerly Open Energy Efficiency.

[Webinar access information to follow]


Re: Webinar: LF Energy - OpenEEmeter and EEweather: June 20, 2019, 10am PT

ngo.phil@...
 

Update: the webinar recording has been posted on https://www.lfenergy.org/projects and on YouTube (https://youtu.be/ucVH130_V6g). Thanks again to all who attended, and I apologize for the confusion with the time!


Technical Steering Committee Meeting - August 7, 2019

ngo.phil@...
 

The first OpenEEmeter Technical Steering Committee (TSC) meeting will be held on August 7, 2019 from 8:30-9:00am Pacific. This and all following TSC meetings will be open to the public and recorded.

Agenda:

  • Intros
  • Should we institute a contributor survey?
  • Address open issues and development plans

 

The meeting can be accessed at the following zoom link:

LF Energy is inviting you to a scheduled Zoom meeting.

Topic: LF Energy: OpenEEmeter
Time: This is a recurring meeting. Meet anytime.

Join Zoom Meeting
https://zoom.us/j/560638485

One tap mobile
+16699006833,,560638485# US (San Jose)
+16465588656,,560638485# US (New York)

Dial by your location
        +1 669 900 6833 US (San Jose)
        +1 646 558 8656 US (New York)
        877 369 0926 US Toll-free
        855 880 1246 US Toll-free
        +1 647 558 0588 Canada
        855 703 8985 Canada Toll-free
Meeting ID: 560 638 485
Find your local number: https://zoom.us/u/adVlPzQyCb

 
- Phil


OpenEEmeter documentation and tutorials feedback tomorrow

ngo.phil@...
 

Dear OpenEEmeter community,

 

We are holding a special meeting tomorrow (Wednesday, September 4, 8:30-9:00am Pacific) to gather feedback and suggestions about the OpenEEmeter tutorials and documentation. We encourage all new users (and established users) who are available to attend this meeting so that we can hear your voice and build out the support you need to use the OpenEEmeter effectively.

This meeting will be held in place of the usual Technical Steering Committee meeting normally scheduled for first and third Wednesdays at 8:30 am Pacific.

We look forward to seeing you. The credentials for joining the meeting using Zoom are listed below.

All the best,

Phil

Join Zoom Meeting: https://zoom.us/j/560638485

One tap mobile
+16699006833,,560638485# US (San Jose)
+16465588656,,560638485# US (New York)

Dial by your location
        +1 669 900 6833 US (San Jose)
        +1 646 558 8656 US (New York)
        877 369 0926 US Toll-free
        855 880 1246 US Toll-free
        +1 647 558 0588 Canada
        855 703 8985 Canada Toll-free
Meeting ID: 560 638 485
Find your local number: https://zoom.us/u/adVlPzQyCb


Re: OpenEEmeter documentation and tutorials feedback tomorrow

ngo.phil@...
 

Dear OpenEEmeter users:

I wanted to follow up on this feedback meeting. Special thanks to those who showed up to give feedback and contribute to the discussion. The minutes from the meeting are copied below, along with a note on the improvements we are releasing today in response to that discussion.

We've added a new tutorial that shows how all the pieces of the library fit together, available here: http://eemeter.openee.io/tutorial.html. We'd love for folks to check out this tutorial and offer feedback or improvements. If anyone finds issues in the new tutorial, please feel free to 1) respond to this message, 2) create a GitHub issue, or 3) create a pull request to fix it.

We invite other users to take a look at this list of suggestions and add their votes, voices, and contributions.

All the best,

Phil

Feedback from the Sept 4 documentation feedback meeting (also recorded in the wiki):

  • Hourly methods documentation needs to be bulked up.
    • For instance, how do all the pieces fit together?
  • There seemed to be some specific trouble with an eeweather string method, as well as inconsistency between functions.
  • How do users report issues and share feedback about consistency of library functions?
  • Would be great to have more examples of code usage inline.
  • One user needed to figure out the outputs manually by running the code.
  • There are lots of functions in eeweather that each do just one thing; can they be bundled into higher-level functions, or at least grouped into sections in the documentation? (A short eeweather usage sketch follows this list.)
  • It would be great to have links to the raw public sources (such as TMY3); these are a little obscure and difficult to find from the existing methods.
  • One user developed a bridge from WBAN codes to other identifiers, like NCDC codes, USAF_ID, and ICAO.
    • Could this be a possible contribution?
  • Along these lines: having “Quick steps” for making contributions would be helpful to potential contributors. We need to make contributing as easy as possible if we want it to happen.
  • Could be helpful to add more context and background on actual/performance vs normal year savings, including definitions of terminology and internal/external consistency in using these definitions.
  • How do we know how users are using the library and what they are struggling with?
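As a concrete starting point for the eeweather items above, a minimal station-lookup sketch might look like this (rank_stations and select_station follow the eeweather quickstart; the attribute names on the returned station object are assumptions to verify against the eeweather documentation):

    # Minimal sketch: find a suitable ISD weather station near a site using
    # eeweather. Attribute names on the station object (usaf_id, name,
    # latitude, longitude) are assumptions; check the eeweather docs.
    import eeweather

    # Rank candidate stations for a site at (latitude, longitude),
    # then pick the best match.
    ranked_stations = eeweather.rank_stations(35.0, -95.0)
    station, warnings = eeweather.select_station(ranked_stations)

    # Stations are identified by USAF ID; other identifiers (WBAN, ICAO)
    # come from the underlying ISD station metadata.
    print(station)
    print(station.usaf_id, station.name, station.latitude, station.longitude)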

Major takeaways:

  • Integrate the tutorial into the documentation and make it easily available inline
  • Schedule specific follow ups or working sessions to work through specific issues
  • Write definitions of flavors of savings or include these in the documentation
  • Be more clear about how users report issues, share feedback, and offer contributions.
  • Have a place to post a "show and tell" of things users have accomplished using the library, or features they have developed for their own use. This would give us a way to tell how people are using the library and where the gaps are in the current library functions.


Suggestions on integrating OpenEEMeter

Si Chen <sichen@...>
 

Hello,

I'd like to get your suggestions on integrating OpenEEMeter with our open source opentaps energy application (opentaps.org).

opentaps is an open source application built in Django.  It allows the user to set up multiple buildings, each with a list of its equipment.  It then uses VOLTTRON to get building-level energy data over BACnet and Modbus.  Once the data is there, users can build dashboards and visualizations with Grafana and develop and run applications on top of it.

Where I see a potentially very cool integration is using OpenEEMeter to build a baseline energy consumption model for any meter, physical or virtual, for which we have time series data.  This baseline could then be used for measurement and verification (M&V) and financing purposes.  The meter data would be stored in a time series database like Crate or Timescale, both of which provide a SQL-compatible interface.

Does this make sense?  Does it sound like what OpenEEMeter is designed for?

If so, do you have suggestions on how to do the integration with Django and SQL databases?


Re: Suggestions on integrating OpenEEMeter

ngo.phil@...
 

Great idea! The OpenEEmeter can certainly be integrated into a Django application, and it sounds like you have the right idea of what the library can be used to do. It's also Python, of course, so just pip install eemeter and you'll be able to call the library directly from Django code. What you'll need to do is get your time series data into pandas DataFrames with the right columns and indexes.

If you haven't used the library before, I'd suggest checking out our new tutorial and paying special attention to the data formats section, where we call out the three main required inputs: 1) meter data, 2) temperature data, and 3) project or intervention dates, and demonstrate how to create datasets in the necessary format.
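To make those formats concrete, here is a minimal sketch using the sample dataset that ships with the library; in a Django integration, the same-shaped DataFrames would be built from your own time series store (the function and key names follow the tutorial, but double-check them there):

    # Minimal sketch: the three inputs and a daily CalTRACK baseline fit,
    # using the bundled sample data. meter_data is a DataFrame with a
    # timezone-aware DatetimeIndex and a single "value" column;
    # temperature_data is an hourly Series (degrees F) with a timezone-aware
    # DatetimeIndex; the intervention date bounds the baseline period.
    import eemeter

    meter_data, temperature_data, metadata = eemeter.load_sample(
        "il-electricity-cdd-hdd-daily"
    )
    intervention_date = metadata["blackout_start_date"]

    # Baseline period: up to a year of data ending at the intervention date.
    baseline_meter_data, warnings = eemeter.get_baseline_data(
        meter_data, end=intervention_date, max_days=365
    )

    # Build the design matrix and fit the usage-per-day (daily) model.
    design_matrix = eemeter.create_caltrack_daily_design_matrix(
        baseline_meter_data, temperature_data
    )
    baseline_model_results = eemeter.fit_caltrack_usage_per_day_model(design_matrix)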

Phil


Re: Suggestions on integrating OpenEEMeter

Si Chen <sichen@...>
 

Thanks!  The tutorial was very helpful.

We’ll look into the integration and let you know how it goes.


--
-----
Si Chen
Open Source Strategies, Inc.

opentaps and open source business models
at the VOLTTRON conference: https://youtu.be/2jnyIOBHrkU



OpenEEmeter public meeting

ngo.phil@...
 

The OpenEEmeter public meeting is scheduled for 8:30am PT. (Meetings are held at this time on first and third Wednesdays.) All are welcome to attend to ask questions or offer feedback, including both new and experienced users. Please use the following link to join: https://zoom.us/j/4709973201

Phil


how to persist and instantiate models?

Si Chen <sichen@...>
 

Hello,

I'm looking at integrating OpenEEMeter with opentaps some more, and I have a couple of questions about how to save and re-use the models from OpenEEMeter.

Are these the classes with the hourly and daily models?
eemeter.CalTRACKHourlyModel
eemeter.CalTRACKUsagePerDayCandidateModel

They both have .json() methods to return the model as JSON.  Should we save those JSON strings in our database?

Is there a method for bringing those models back from a JSON string?

Thanks.

-----
Si Chen
Open Source Strategies, Inc.

opentaps and open source business models
at the VOLTTRON conference: https://youtu.be/2jnyIOBHrkU



Re: how to persist and instantiate models?

ngo.phil@...
 

For hourly, you are correct. For billing/daily, you'll want to make sure you are using the correct candidate model: the one chosen by the balance point grid search. See the model attribute of eemeter.CalTRACKUsagePerDayModelResults, returned by eemeter.fit_caltrack_usage_per_day_model.

The .json() methods are indeed intended for model serialization if you'd like to store the models and use them for prediction at a later point. The .json() method should give you all the data you need to be able to recreate the model. I don't think we currently have a method in the library for de-serializing the model back into its original state, but we certainly should! So if you find it's not readily doable in the context of your code, would you mind making an issue on GitHub openeemeter/eemeter and we can work together to add a de-serialization method to the library?
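As a sketch of the persistence side (using the sample data, and assuming .json() returns a JSON-serializable dict as described above):

    # Minimal sketch: fit a daily model, pull out the selected candidate
    # model, and serialize it for storage in a text/JSON database column.
    # Assumes .json() returns a JSON-serializable dict, per the note above.
    import json
    import eemeter

    meter_data, temperature_data, metadata = eemeter.load_sample(
        "il-electricity-cdd-hdd-daily"
    )
    baseline_meter_data, _ = eemeter.get_baseline_data(
        meter_data, end=metadata["blackout_start_date"], max_days=365
    )
    design_matrix = eemeter.create_caltrack_daily_design_matrix(
        baseline_meter_data, temperature_data
    )
    model_results = eemeter.fit_caltrack_usage_per_day_model(design_matrix)

    # The winning candidate (chosen by the balance point grid search) is the
    # .model attribute of the results object.
    selected_model = model_results.model

    # Store this string; it holds what is needed to recreate the model later.
    serialized = json.dumps(selected_model.json())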

Phil




Re: how to persist and instantiate models?

Si Chen <sichen@...>
 

Hello Phil,

Thanks for getting back to me.

It seems that eemeter.fit_caltrack_usage_per_day_model returns eemeter.CalTRACKUsagePerDayModelResults, which then has eemeter.CalTRACKUsagePerDayCandidateModel  as the model.  Similarly, eemeter.fit_caltrack_hourly_model returns CalTRACKHourlyModelResults, which has eemeter.CalTRACKHourlyModel.  

So in both cases, we should look in the Results and look for the model.  Is that correct?

Sure, we'd be happy to make methods to de-serialize the models and contribute them back.  Do you have any suggestions on how you'd like it done?

BTW if you're interested, this Google doc is a draft of how we're planning to integrate OpenEEMeter into an overall system for M&V and financing.  If anybody else is interested, just let me know -- I'm happy to share.

-----
Si Chen
Open Source Strategies, Inc.

opentaps and open source business models
at the VOLTTRON conference: https://youtu.be/2jnyIOBHrkU




Re: how to persist and instantiate models?

Si Chen <sichen@...>
 

Here's the link to the Google doc: https://docs.google.com/document/d/1nRd-SV8Ws-2_ufXaNZy0XoNngLhEYMQ4pXrTfDny7CU/edit#heading=h.fqluq97j06qm
-----
Si Chen
Open Source Strategies, Inc.

opentaps and open source business models
at the VOLTTRON conference: https://youtu.be/2jnyIOBHrkU




Re: how to persist and instantiate models?

ngo.phil@...
 

Yes, that's the most straightforward way to use the models. Also keep in mind that both the Results objects and the models they reference have json/predict methods, so you could consider serializing either, although the Results objects have some status fields that may or may not be relevant for you.

Thanks for taking a pass at it. The de-serialization methods could stand on their own, probably taking in JSON and returning a model, or, if there's a nice way to integrate them into the model classes, we can consider that as well. When you're ready, the best way to do a review and continue the discussion will be to make a GitHub pull request.
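Purely as a hypothetical sketch of the standalone shape described here (nothing like this exists in the library yet, and the constructor keywords and JSON field names below are guesses that would need to match what .json() actually emits):

    # Hypothetical sketch only: a standalone de-serializer that takes stored
    # JSON and returns a candidate model, as discussed above. This is NOT an
    # existing eemeter API; the constructor keywords and JSON keys are guesses.
    import json
    from eemeter import CalTRACKUsagePerDayCandidateModel

    def candidate_model_from_json(serialized):
        data = json.loads(serialized)
        return CalTRACKUsagePerDayCandidateModel(
            model_type=data.get("model_type"),
            formula=data.get("formula"),
            status=data.get("status"),
            model_params=data.get("model_params"),
        )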

Thanks for sharing your plans! Looks like a good fit for the OpenEEmeter, and I'm glad to see you are also making use of EEweather.

Phil



Re: how to persist and instantiate models?

Si Chen <sichen@...>
 

Great, we'll work on it and let you know!
-----
Si Chen
Open Source Strategies, Inc.

opentaps and open source business models
at the VOLTTRON conference: https://youtu.be/2jnyIOBHrkU




Live version of OpenEEmeter available?

Steve Schmidt
 

Just joined the group! (Didn't know I was missing out.)

I recall that Recurve used to host a version of the latest eeMeter code that could be run on test data for individual buildings. Is this still available somewhere? I can't seem to find it.

Thanks.

  -Steve


Re: how to persist and instantiate models?

Si Chen <sichen@...>
 

Hello Phil,

Please take a look at our pull request
https://github.com/openeemeter/eemeter/pull/383

and let me know what you think.  Thanks!