
OpenEEmeter Meetup - Wed, 01/27/2021 12:30pm-2:00pm, Please RSVP #cal-reminder

openeemeter@lists.lfenergy.org Calendar <openeemeter@...>
 

Reminder: OpenEEmeter Meetup

When: Wednesday, 27 January 2021, 12:30pm to 2:00pm, (GMT-08:00) America/Los Angeles

Where: https://zoom.us/meeting/register/tJ0vcOyqqD4vGdSFUCNpKfCbeOWUpGaO0vA9

An RSVP is requested. Click here to RSVP

Organizer: Phil Ngo phil@...

Description: OpenEEmeter users are getting together quarterly in a series of recurring meetups to share ideas, make new partnerships, participate in workshops, and be inspired by guest speakers. Users new and old are invited to participate in this event.

Free to attend, register now!


Re: Instantaneous current and voltage measurements

Phil Ngo
 

Hi Ben,

If I'm understanding correctly, it doesn't sound like the OpenEEmeter will be a good fit for that application. The OpenEEmeter is not generally used directly on IoT devices, and it works with hourly energy usage data (e.g., kWh, therms); it does not yet work with current or voltage data, or with data sampled at rates higher than hourly.

If in the future you would like to build models of building energy usage based on temperature and/or time of day, the OpenEEmeter will definitely be able to help!

Thanks for reaching out and best wishes with your project.

Phil

On Wed, Dec 30, 2020 at 1:48 PM <benpayeur@...> wrote:

[Edited Message Follows]

Hello All,

I'm new here and trying to get some background about OpenEEmeter.  I am looking for something that will facilitate the collection of instantaneous current and voltage measurements at a reasonably high sampling rate.  The purpose of doing this is for microgrid research at a university.  Is OpenEEmeter appropriate for this application?  Are there examples of common hardware platforms that OpenEEmeter is run on?  Any advice or direction that the group is willing to give is greatly appreciated!

-Ben



--

Phil Ngo

Director of Engineering

801.244.9860 | LinkedIn

 

Recurve.com | Newsletter | LinkedIn | AngelList | Twitter


Instantaneous current and voltage measurements

benpayeur@...
 
Edited

Hello All,

I'm new here and trying to get some background about OpenEEmeter.  I am looking for something that will facilitate the collection of instantaneous current and voltage measurements at a reasonably high sampling rate.  The purpose of doing this is for microgrid research at a university.  Is OpenEEmeter appropriate for this application?  Are there examples of common hardware platforms that OpenEEmeter is run on?  Any advice or direction that the group is willing to give is greatly appreciated!

-Ben


OpenEEmeter Meetup - Wed, 10/28/2020 12:30pm-2:00pm, Please RSVP #cal-reminder

openeemeter@lists.lfenergy.org Calendar <openeemeter@...>
 

Reminder: OpenEEmeter Meetup

When: Wednesday, 28 October 2020, 12:30pm to 2:00pm, (GMT-07:00) America/Los Angeles

Where: https://zoom.us/meeting/register/tJ0vcOyqqD4vGdSFUCNpKfCbeOWUpGaO0vA9

An RSVP is requested. Click here to RSVP

Organizer: Phil Ngo phil@...

Description: OpenEEmeter users are getting together quarterly in a series of recurring meetups to share ideas, make new partnerships, participate in workshops, and be inspired by guest speakers. Users new and old are invited to participate in this event.

Free to attend, register now!


Re: OpenEEmeter Meetup

Phil Ngo
 

Hi everyone!

The OpenEEmeter meetup is just under a week away, on October 28 @ 12:30-2pm Pacific. We'll be having a show and tell, with one or two spots left for showing off a project either using or adjacent to the OpenEEmeter. As we did last time, we'll also have time for Q&A about using or contributing to the OpenEEmeter python packages, as well as a quick intro to the project if you're joining for the first time.

Si - we'd love to have you present on what you're doing at the Hyperledger Energy Working Group if you are still available. Any other takers?

Webinar signup link: https://zoom.us/meeting/register/tJ0vcOyqqD4vGdSFUCNpKfCbeOWUpGaO0vA9

Looking forward to it!
Phil

On Mon, Sep 21, 2020 at 12:51 PM Si Chen <sichen@...> wrote:
Hi Phil,

Thanks for setting this up.

I'd like to share a little bit about what we're doing at the Hyperledger Energy Working Group.  Hyperledger is the open source blockchain project of the Linux Foundation.  I think it will have a lot of potential synergies with what you're doing.

Would you be able to give me 15 minutes to talk about this?

-----
Si Chen
Open Source Strategies, Inc.

Join our Hyperledger Open Source Carbon Accounting & Certification Working Group - Video



On Sat, Sep 19, 2020 at 10:22 AM Phil Ngo <phil@...> wrote:
Hi OpenEEmeter users!

The second OpenEEmeter meetup is scheduled for October 28, 12:30-2pm Pacific, so be sure to get it on your calendar if you haven't yet! As requested at the kickoff event, this meetup will be centered around a "show and tell", for which we will be offering a few 15-30 minute slots to discuss a project either using or adjacent to the OpenEEmeter. Please send me proposals with a short description of what you would like to show off and how much time you think you'd need, and we'll get you in if we still have room. We will send out a sign-up link for attendees and presenters when we get closer to the event.

I'm looking forward to this!

Phil






Re: OpenEEmeter Meetup

Si Chen
 

Hi Phil,

Thanks for setting this up.

I'd like to share a little bit about what we're doing at the Hyperledger Energy Working Group.  Hyperledger is the open source blockchain project of the Linux Foundation.  I think it will have a lot of potential synergies with what you're doing.

Would you be able to give me 15 minutes to talk about this?

-----
Si Chen
Open Source Strategies, Inc.

Join our Hyperledger Open Source Carbon Accounting & Certification Working Group - Video



On Sat, Sep 19, 2020 at 10:22 AM Phil Ngo <phil@...> wrote:
Hi OpenEEmeter users!

The second OpenEEmeter meetup is scheduled for October 28, 12:30-2pm Pacific, so be sure to get it on your calendar if you haven't yet! As requested at the kickoff event, this meetup will be centered around a "show and tell", for which we will be offering a few 15-30 minute slots to discuss a project either using or adjacent to the OpenEEmeter. Please send me proposals with a short description of what you would like to show off and how much time you think you'd need, and we'll get you in if we still have room. We will send out a sign-up link for attendees and presenters when we get closer to the event.

I'm looking forward to this!

Phil



OpenEEmeter Meetup

Phil Ngo
 

Hi OpenEEmeter users!

The second OpenEEmeter meetup is scheduled for October 28, 12:30-2pm Pacific, so be sure to get it on your calendar if you haven't yet! As requested at the kickoff event, this meetup will be centered around a "show and tell", for which we will be offering a few 15-30 minute slots to discuss a project either using or adjacent to the OpenEEmeter. Please send me proposals with a short description of what you would like to show off and how much time you think you'd need, and we'll get you in if we still have room. We will send out a sign-up link for attendees and presenters when we get closer to the event.

I'm looking forward to this!

Phil



Event: OpenEEmeter Meetup #cal-invite

openeemeter@lists.lfenergy.org Calendar <openeemeter@...>
 

OpenEEmeter Meetup

When:
Wednesday, 28 October 2020
12:30pm to 2:00pm
(UTC-07:00) America/Los Angeles
Repeats: Every 3 months on the fourth Wednesday

Where:
Zoom

Organizer: Phil Ngo phil@...

An RSVP is requested. Click here to RSVP

Description:
OpenEEmeter users are getting together quarterly in a series of recurring meetups to share ideas, make new partnerships, participate in workshops, and be inspired by guest speakers. Users new and old are invited to participate in this event.


LF energy data architecture working group presentation

Sander
 

Hi openEEmeter community,

With the LF Energy Data Architecture Working Group, we would like to gain more insight into the current LF Energy projects and their data architectures. The goal of the data architecture is to improve interoperability across the LF Energy projects.

We would like to gain insight into the following topics. Could you give a 30-minute presentation on these topics during one of the office hours?
- Project focus and introduction
- Data input
- Data output
- Semantics used (e.g., which information standards are used?)

Please select a date and I will send an invite.
https://wiki.lfenergy.org/display/HOME/Data+Architecture+Working+Group

Data architecture working document:
https://docs.google.com/document/d/1QcHqPRSmUUJQlJnfygGDkOpDPlId6U1V22pBuvZvDYk/edit#heading=h.g0v5yhj0kiyj

 

Kind regards,

Sander Jansen

Data Architect |  Alliander IT Data & Insight

E    sander.jansen@...

Alliander N.V. | Postbus 50, 6920 AB Duiven, Nederland | Location code: 2PB2100 | Utrechtseweg 68, 6812 AH Arnhem | KvK 09104351 Arnhem | www.alliander.com

 


 


Reminder: OpenEEmeter Meetup Kickoff tomorrow at 12:30pm Pacific

Phil Ngo
 

I'm looking forward to getting together with some of you at the OpenEEmeter Meetup Kickoff tomorrow!

Phil


Re: Implementing Fractional Savings Uncertainty (FSU)

nhi.ngo@...
 

Hi Phil and Steve,

Thank you very much for your responses. I really appreciate you spending time sharing your insights.

Steve, your experiment seems very interesting. I would have to think more about the impact of these measurements on the portfolio FSU.

Phil, your explanation is very clear and helpful. To clarify, yes, my intent is to know the expected FSU for a portfolio of 100 sites. With your explanation above, I think I understand and now know what to do for my analysis, so thank you very much.

Though there is one more issue I would like to raise. Ideally, my team is hoping to eventually move toward using hourly data. However, since uncertainty for the hourly method is a tricky subject, we are planning to use the daily method FSU to gauge uncertainty. My question is: from your experience, do you expect FSU to decrease as we move from the billing to the daily method for the same site? That has been my assumption since the beginning, as we have more granular data with the daily method, so I would expect the non-fractional uncertainty to decrease. However, when I tested one site using the hourly, daily, and billing methods, the metered savings results seem to be within an acceptable range, but the FSU results seem to contradict my assumption: I get significantly larger non-fractional and fractional savings uncertainty with the daily method. Do you have any suggestions or thoughts on this matter?

Thank you very much for your help!

-Nhi


Re: Implementing Fractional Savings Uncertainty (FSU)

Steve Schmidt
 

For my own edification I duplicated the FSU calculation Nhi provided in the attached spreadsheet. Hopefully I got it right.

Then for fun I "swapped" savings and FSU values between the two buildings in two different scenarios to see the impact on Portfolio FSU. Again, I hope someone will check these thought experiments to see if they make sense.

If they do, they show that the individual project savings rates, absolute savings amounts, and FSU percentages have big impacts on the portfolio FSU.

  -Steve


Re: Implementing Fractional Savings Uncertainty (FSU)

ngo.phil@...
 

Hi Nhi,

Thanks for reaching out. I'll do my best to answer your questions, although I will ask for some clarification on the second one.

1) It looks to me like you are interpreting the equation correctly. The intuition for why the combined FSU is lower than either of the individual FSUs is that more data generally leads to lower uncertainty. It may be helpful to remember that the FSU is a fractional/normalized value, quantifying the uncertainty relative to the level of savings. The non-fractional (non-normalized) total savings uncertainty adds in quadrature, that is, as the square root of the sum of the squares of the individual uncertainties. So the total uncertainty increases, but more slowly than the total savings, and thus the ratio of the two decreases.
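As a quick illustration of that aggregation, the quadrature rule can be written out in a few lines of Python. The per-site numbers here are hypothetical (the actual savings values behind the 46% and 13.6% FSUs were in the attached spreadsheet, not in this thread):

```python
import math

# Hypothetical site-level results: (total savings, absolute savings uncertainty).
# Each site's FSU is uncertainty / savings.
sites = [
    (100.0, 46.0),    # small site, FSU = 46%
    (5000.0, 680.0),  # large site, FSU = 13.6%
]

# Savings add linearly; absolute uncertainties add in quadrature
# (square root of the sum of squares), like standard deviations.
total_savings = sum(savings for savings, _ in sites)
total_uncertainty = math.sqrt(sum(unc ** 2 for _, unc in sites))

portfolio_fsu = total_uncertainty / total_savings
print(round(portfolio_fsu, 4))  # 0.1336, below both individual FSUs
```

Note that the portfolio FSU is not guaranteed to fall below every individual FSU; that depends on the relative sizes of the savings and uncertainties. It is guaranteed only that the total uncertainty grows more slowly than a straight sum would.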

2) I'm not sure I completely understand the intent - do you want to know the expected FSU for a portfolio of 100 sites? or something else?

2a) I can point you to this document from the CPUC that describes "Normalized Metered Energy Consumption Working Group Recommendations for Population-Level Approaches", which lists the 25% threshold. I will also ask around at Recurve whether there is a publication or public dataset that can be shared to back up the statement on the website, but I can confirm from personal experience that it is a reasonable expectation for programs with either deep savings or a very large number of projects (or both). Because the FSU is divided by the savings value, you can expect higher FSU values if you're expecting lower percent savings (this would be the case for 1% savings; you will likely find in your bootstrapping analysis that you need many more projects to hit this threshold than you would with deeper savings).


On Wed, Mar 11, 2020 at 1:33 PM <nhi.ngo@...> wrote:
Hi all,

My name is Nhi Ngo from Arcadia and I am working on implementing CalTRACK to evaluate one of our products. I am specifically looking at the Billing Method and want to better understand the interpretation and implementation of FSU at the residential portfolio level. I have a few questions if you don't mind.

1. Before asking other questions, I want to make sure we interpret the FSU aggregation method correctly. The "FSU error band at 80%" column is the direct output of the model from error_bands['FSU Error Band'], and we use it to demonstrate the aggregation method for 2 sites. As this example suggests, the portfolio FSU is smaller than each individual site's FSU (46% and 13.6%). This seems counter-intuitive; can you help confirm whether our calculation is correct?



2. Based on the statement on Recurve's website: "CalTRACK tests have shown that aggregation of individual projects, can, in most cases, deliver less than 25 percent portfolio-level savings uncertainty (at the 90 percent confidence level) with very reasonably sized portfolios", we are working on deriving the expected FSU for different portfolio sizes to set confidence expectations for our pilot program. We are aiming for 1000 sites, but we might have far fewer, so it is important to set expectations if we don't reach the portfolio size we want.

We currently have a sample of 1700 residential sites with average savings of ~1% (some have negative savings). We are planning to use a bootstrapping method to resample different portfolio sizes (100, 200, ...) from the 1700 and derive the expected FSU. For example, we will resample 10,000 100-site portfolios and derive FSU error bands and FSU for each 100-site portfolio using the #1 calculation. However, we are unsure how to derive the FSU across the 10,000 portfolios. Should we take mean(FSU)? Or should we take mean(FSU error bands)/mean(sum(metered_savings)), where sum(metered_savings) is the total savings for each 100-site portfolio? If we do the latter, the results seem more reasonable, but we want to ask for your comments and suggestions on whether the method is statistically correct.

Also, if you can share with us the test or methodology you performed to arrive at the bolded statement above, that would be great. I think we could simulate a similar test to achieve our goal.

I hope the questions are clear enough; please feel free to ask for clarification. Thank you very much for your time, and I apologize if the questions seem trivial.

Sincerely,
Nhi Ngo



Implementing Fractional Savings Uncertainty (FSU)

nhi.ngo@...
 

Hi all,

My name is Nhi Ngo from Arcadia and I am working on implementing CalTRACK to evaluate one of our products. I am specifically looking at the Billing Method and want to better understand the interpretation and implementation of FSU at the residential portfolio level. I have a few questions if you don't mind.

1. Before asking other questions, I want to make sure we interpret the FSU aggregation method correctly. The "FSU error band at 80%" column is the direct output of the model from error_bands['FSU Error Band'], and we use it to demonstrate the aggregation method for 2 sites. As this example suggests, the portfolio FSU is smaller than each individual site's FSU (46% and 13.6%). This seems counter-intuitive; can you help confirm whether our calculation is correct?



2. Based on the statement on Recurve's website: "CalTRACK tests have shown that aggregation of individual projects, can, in most cases, deliver less than 25 percent portfolio-level savings uncertainty (at the 90 percent confidence level) with very reasonably sized portfolios", we are working on deriving the expected FSU for different portfolio sizes to set confidence expectations for our pilot program. We are aiming for 1000 sites, but we might have far fewer, so it is important to set expectations if we don't reach the portfolio size we want.

We currently have a sample of 1700 residential sites with average savings of ~1% (some have negative savings). We are planning to use a bootstrapping method to resample different portfolio sizes (100, 200, ...) from the 1700 and derive the expected FSU. For example, we will resample 10,000 100-site portfolios and derive FSU error bands and FSU for each 100-site portfolio using the #1 calculation. However, we are unsure how to derive the FSU across the 10,000 portfolios. Should we take mean(FSU)? Or should we take mean(FSU error bands)/mean(sum(metered_savings)), where sum(metered_savings) is the total savings for each 100-site portfolio? If we do the latter, the results seem more reasonable, but we want to ask for your comments and suggestions on whether the method is statistically correct.
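A minimal sketch of that resampling procedure, using made-up site-level numbers (the real analysis would use the per-site metered savings and FSU error bands from the model output), might look like:

```python
import math
import random

random.seed(0)

# Made-up site-level results: (metered savings, absolute savings uncertainty).
# In the real analysis these would come from each site's model output.
sites = [(random.gauss(100.0, 30.0), abs(random.gauss(50.0, 10.0)))
         for _ in range(1700)]

def aggregate(portfolio):
    """Portfolio totals: savings add linearly, uncertainties in quadrature."""
    total_savings = sum(s for s, _ in portfolio)
    total_unc = math.sqrt(sum(u ** 2 for _, u in portfolio))
    return total_savings, total_unc

# Bootstrap: resample many equally sized portfolios with replacement.
n_resamples, portfolio_size = 2000, 100
totals = [aggregate(random.choices(sites, k=portfolio_size))
          for _ in range(n_resamples)]

# Candidate 1: mean of the per-portfolio FSUs.
mean_fsu = sum(u / s for s, u in totals) / n_resamples

# Candidate 2: mean error band divided by mean total savings.
ratio_of_means = (sum(u for _, u in totals) / n_resamples) / (
    sum(s for s, _ in totals) / n_resamples)
```

With well-behaved savings the two candidates come out close; they diverge when some portfolios have total savings near zero (as can happen with ~1% average savings), which inflates the mean of the ratios. That instability is one reason the ratio-of-means can look more reasonable; whether it is the statistically correct summary is exactly the question for the group.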

Also, if you can share with us the test or methodology you performed to arrive at the bolded statement above, that would be great. I think we could simulate a similar test to achieve our goal.

I hope the questions are clear enough; please feel free to ask for clarification. Thank you very much for your time, and I apologize if the questions seem trivial.

Sincerely,
Nhi Ngo



Re: How to Use Model Metrics to Gauge Uncertainty

Si Chen
 

Thanks for pointing that out, Phil. It seems that CalTRACK 4.3.2.4 has replaced ASHRAE's 1.26 "empirical coefficient" with a formula, and for M=12 (12 reporting periods) it comes out to 1.30 for billing (monthly) data and 1.39 for daily data.

Is P' calculated from P the same way that n' is calculated from n in the ASHRAE formula, using the autocorrelation coefficient rho?

Finally, how do we get the number of model parameters, or the "number of explanatory variables in the baseline model"?

-----
Si Chen
Open Source Strategies, Inc.

Our Mission: https://www.youtube.com/watch?v=Uc7lmvnuJHY




On Wed, Mar 4, 2020 at 4:30 PM <ngo.phil@...> wrote:
1. Correct - autocorr_resid is rho
2. The value of n should be 365, that is correct. It sounds like you have the right idea for m as well (i.e., if you have 30 daily predictions and want to know the uncertainty of the sum of those thirty predictions, m should be 30), with a slight caveat: CalTRACK suggests handling these calculations with a polynomial correction using experimentally derived coefficients. See section 4.3, http://docs.caltrack.org/en/latest/methods.html#section-4-aggregation. In that case, there is also an M (capitalized) to keep track of, which is the number of months (regardless of data frequency, which is taken into account by using different coefficients for daily and monthly billing data).

On Wed, Mar 4, 2020 at 3:01 PM Si Chen <sichen@...> wrote:

[Edited Message Follows]

We've fitted some models and would like to know how to use the model metrics to really understand their quality. The metrics look like this:



and comparing them to the ASHRAE 14 guidelines, which give us these formulas:



My questions are:

1. Is autocorr_resid the rho (ρ) in B-14?
2. What are the right parameters for n and m? According to an early page in ASHRAE 14, n and m are the "number of observations in the baseline (or pre-retrofit) and the post-ECM periods, respectively". If the model is daily, should n be 365, so in this case n' = 365 * (1 - 0.4792) / (1 + 0.4792) = 128.5? If the model is used to compare energy savings over a year, should m be 365? Or should m be 30 if we're comparing the energy savings on a monthly basis?
3. How many model parameters are there? In a combined heating and cooling model, should it be 5 (2 betas, 2 balance points, and an intercept) or 3?

Calculating all this from my example model, I get a 25.8% uncertainty for F (energy savings) of 20% at 68% confidence (t = 1). Does that seem reasonable for a daily model with this much CVRMSE?

Thanks.


Re: How to Use Model Metrics to Gauge Uncertainty

Steve Schmidt
 

My belief: if the building is "well behaved" with respect to outdoor temperatures and heating and cooling loads, then other non-HVAC loads should have no impact on model fit. But I'm not an OEE expert so I'll let Phil correct this.


Re: How to Use Model Metrics to Gauge Uncertainty

Michael S Uhl
 

Is it possible for energy loads that occur at specific times of day (unrelated to CDD or HDD), due to time-of-use pricing, to negatively impact the model accuracy? If so, how can these other variables be addressed? 


On Thu, Mar 5, 2020 at 4:53 PM Steve Schmidt <steve@...> wrote:
A few additional comments --
  1. I'd call this a "bad building". Based on the CalTRACK model fit results, energy use is not very predictable. The ASHRAE Guideline 14 requirement for a good model fit is CVRMSE < 0.25; this value of 0.46 is far above that target. Perhaps you can note this to users of your system, so they don't rely too heavily on the model.
  2. Savings calculations using such a [poor] model will be inaccurate. I'm no statistician, but I believe an R-squared value of 0.4 indicates some correlation, but is not considered useful for prediction. Current CalTRACK methods use any model with CVRMSE values below 1.0 to predict the counterfactual, so it's up to users to recognize when a fit is good and when it's not.
  3. It's odd that the cooling and heating balance points are the same; normally there are several degrees separation between the two. Maybe it's a strange building, or maybe Phil can explain this.

--
All the Best, M.
System Smart
484.553.4570
Sent on the go. Pardon grammar/spelling.


Re: How to Use Model Metrics to Gauge Uncertainty

Steve Schmidt
 

A few additional comments --
  1. I'd call this a "bad building". Based on the CalTRACK model fit results, energy use is not very predictable. The ASHRAE Guideline 14 requirement for a good model fit is CVRMSE < 0.25; this value of 0.46 is far above that target. Perhaps you can note this to users of your system, so they don't rely too heavily on the model.
  2. Savings calculations using such a [poor] model will be inaccurate. I'm no statistician, but I believe an R-squared value of 0.4 indicates some correlation, but is not considered useful for prediction. Current CalTRACK methods use any model with CVRMSE values below 1.0 to predict the counterfactual, so it's up to users to recognize when a fit is good and when it's not.
  3. It's odd that the cooling and heating balance points are the same; normally there are several degrees separation between the two. Maybe it's a strange building, or maybe Phil can explain this.


Re: How to Use Model Metrics to Gauge Uncertainty

ngo.phil@...
 

1. Correct - autocorr_resid is rho
2. The value of n should be 365, that is correct. It sounds like you have the right idea for m as well (i.e., if you have 30 daily predictions and want to know the uncertainty of the sum of those thirty predictions, m should be 30), with a slight caveat: CalTRACK suggests handling these calculations with a polynomial correction using experimentally derived coefficients. See section 4.3, http://docs.caltrack.org/en/latest/methods.html#section-4-aggregation. In that case, there is also an M (capitalized) to keep track of, which is the number of months (regardless of data frequency, which is taken into account by using different coefficients for daily and monthly billing data).
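For reference, the ASHRAE-style calculation behind the numbers in this thread can be reproduced directly. This is just the textbook Guideline 14 formula with the values quoted here (rho = 0.4792, CVRMSE = 0.46, one year of daily data), not eemeter code:

```python
import math

rho = 0.4792   # lag-1 autocorrelation of residuals (autocorr_resid)
n = 365        # daily observations in the baseline period
m = 365        # daily predictions summed over the reporting period
cvrmse = 0.46  # model CVRMSE quoted in the thread
F = 0.20       # fractional savings
t = 1.0        # roughly 68% confidence

# Effective number of independent observations after the
# autocorrelation correction: n' = n * (1 - rho) / (1 + rho).
n_prime = n * (1 - rho) / (1 + rho)
print(round(n_prime, 1))  # 128.5, matching Si's calculation

# ASHRAE Guideline 14 fractional savings uncertainty,
# with the 1.26 empirical coefficient.
U = t * 1.26 * cvrmse * math.sqrt((n / n_prime) * (1 + 2 / n_prime) * (1 / m)) / F
print(round(U, 3))  # 0.258, i.e. the 25.8% reported in the thread
```

CalTRACK 4.3 replaces the 1.26 coefficient with the fitted polynomial mentioned in Si's follow-up (1.39 rather than 1.26 for daily data at M = 12), so the official daily-method value would come out somewhat higher.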


On Wed, Mar 4, 2020 at 3:01 PM Si Chen <sichen@...> wrote:

[Edited Message Follows]

We've fitted some models and would like to know how to use the model metrics to really understand their quality. The metrics look like this:



and comparing them to the ASHRAE 14 guidelines, which give us these formulas:



My questions are:

1. Is autocorr_resid the rho (ρ) in B-14?
2. What are the right parameters for n and m? According to an early page in ASHRAE 14, n and m are the "number of observations in the baseline (or pre-retrofit) and the post-ECM periods, respectively". If the model is daily, should n be 365, so in this case n' = 365 * (1 - 0.4792) / (1 + 0.4792) = 128.5? If the model is used to compare energy savings over a year, should m be 365? Or should m be 30 if we're comparing the energy savings on a monthly basis?
3. How many model parameters are there? In a combined heating and cooling model, should it be 5 (2 betas, 2 balance points, and an intercept) or 3?

Calculating all this from my example model, I get a 25.8% uncertainty for F (energy savings) of 20% at 68% confidence (t = 1). Does that seem reasonable for a daily model with this much CVRMSE?

Thanks.


How to Use Model Metrics to Gauge Uncertainty

Si Chen
 
Edited

We've fitted some models and would like to know how to use the model metrics to really understand their quality. The metrics look like this:



and comparing them to the ASHRAE 14 guidelines, which give us these formulas:



My questions are:

1. Is the autocorr_resid the rho (p) is B-14?
2.  What are the right parameters for n and m?  According to an early page in ASHRAE 14, n and m are "number of observations in the baseline (or pre- retrofit) and the post-ECM periods, respectively"   If the model is a daily, should n be 365, so in this case, n' = 365 * (1-0.4792) / (1+0.4792) = 128.5?  If the model is used to compare energy savings over a year, should m be 365?  Or should m be 30 if we're comparing the energy savings on a monthly basis?
3.  How many model parameters are there?  In a combined heating and cooling model, should it be 5 -- 2 betas, 2 balance points, and an intercept -- or 3?

Calculating all this from my example model, I get a 25.8% uncertainty for F (energy savings) of 20% at 68% confidence (t = 1)  Does that seem reasonable for a daily model with this much CVRMSE?

Thanks.
