
LF Energy Data Architecture Working Group presentation

Sander
 

Hi openEEmeter community,

Within the LF Energy Data Architecture Working Group, we would like to gain more insight into the current LF Energy projects and their data architectures. The goal of the data architecture work is to improve the interoperability of the LF Energy projects.

We would like to get insight into the following topics. Could you give a 30-minute presentation on these topics during one of the office hours?
- Project focus and introduction
- Data input
- Data output
- Semantics used (e.g., which information standards are used?)

Please select a date and I will send an invite.
https://wiki.lfenergy.org/display/HOME/Data+Architecture+Working+Group

Data architecture working document:
https://docs.google.com/document/d/1QcHqPRSmUUJQlJnfygGDkOpDPlId6U1V22pBuvZvDYk/edit#heading=h.g0v5yhj0kiyj

 

Kind regards,

Sander Jansen

Data Architect |  Alliander IT Data & Insight

E    sander.jansen@...

Alliander N.V. | Postbus 50, 6920 AB Duiven, Nederland | Location code: 2PB2100 | Utrechtseweg 68, 6812 AH Arnhem | KvK 09104351 Arnhem | www.alliander.com

 


 


Reminder: OpenEEmeter Meetup Kickoff tomorrow at 12:30pm Pacific

Phil Ngo
 

I'm looking forward to getting together with some of you at the OpenEEmeter Meetup Kickoff tomorrow!

Phil


Re: Implementing Fractional Savings Uncertainty (FSU)

nhi.ngo@...
 

Hi Phil and Steve,

Thank you very much for your responses. I really appreciate you spending time sharing your insights.

Steve, your experiment seems very interesting. I would have to think more about the impact of these measurements on the portfolio FSU.

Phil, your explanation is very clear and helpful. To clarify, yes, my intent is to know the expected FSU for a portfolio of 100 sites. However, with your explanation above, I think I understand and now know what to do for my analysis. So thank you very much.

There is one more issue I would like to raise, though. Ideally, my team hopes to eventually move toward using hourly data. However, since uncertainty for the hourly method is a tricky subject, we are planning to use the daily method's FSU to gauge uncertainty. My question is: from your experience, do you expect FSU to decrease as we move from the billing method to the daily method for the same site? That has been my assumption from the beginning, since we have more granular data with the daily method, so I would expect the non-fractional uncertainty to decrease. However, when I tested one site using the hourly, daily, and billing methods, the metered savings results were within an acceptable range, but the FSU results seem to contradict my assumption: both the non-fractional and the fractional savings uncertainty are significantly larger with the daily method. Do you have any suggestions or thoughts on this matter?

Thank you very much for your help!

-Nhi


Re: Implementing Fractional Savings Uncertainty (FSU)

Steve Schmidt
 

For my own edification I duplicated the FSU calculation Nhi provided in the attached spreadsheet. Hopefully I got it right.

Then for fun I "swapped" savings and FSU values between the two buildings in two different scenarios to see the impact on Portfolio FSU. Again, I hope someone will check these thought experiments to see if they make sense.

If they do, they show that the individual project savings rates, absolute savings amounts, and FSU percentages have big impacts on the portfolio FSU.

  -Steve


Re: Implementing Fractional Savings Uncertainty (FSU)

ngo.phil@...
 

Hi Nhi,

Thanks for reaching out. I'll do my best to answer your questions, although I will ask for some clarification on the second one.

1) It looks to me like you are interpreting the equation correctly. The intuition for why the combined FSU is lower than either of the individual FSUs is that more data generally leads to lower uncertainty. It may be helpful to remember that the FSU is a fractional/normalized value, quantifying the uncertainty relative to the level of savings. The non-fractional or non-normalized total savings uncertainty adds like variances do, that is, by taking the root of the sum of the squares. So the total uncertainty grows, but more slowly than the total savings, and thus the ratio of the two decreases (see the sketch at the end of this message).

2) I'm not sure I completely understand the intent - do you want to know the expected FSU for a portfolio of 100 sites? or something else?

2a) I can point you to this document from the CPUC, "Normalized Metered Energy Consumption Working Group Recommendations for Population-Level Approaches", which lists the 25% threshold. I will also ask around at Recurve whether there is a publication or public dataset that can be shared to back up the statement you read on the website, but I can confirm from personal experience that it is a reasonable expectation for programs with either deep savings or a very large number of projects (or both). Because the FSU is divided by the savings value, you can expect higher FSU values if you're expecting lower percent savings (this would be the case for 1% savings; you will likely find in your bootstrapping analysis that you need many more projects to hit this threshold than you would with deeper savings).
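[Editor's note] To make the aggregation in answer 1 concrete, here is a minimal sketch in Python. The numbers are hypothetical (Nhi's worksheet is not reproduced in this archive), chosen so the site FSUs match the 46% and 13.6% quoted in the thread; only the root-sum-of-squares rule itself comes from the discussion above.

    import math

    # Hypothetical per-site metered savings and absolute savings
    # uncertainty (error bands) at a common confidence level.
    savings = [2000.0, 45000.0]
    error_bands = [920.0, 6120.0]

    # Site-level FSU is uncertainty divided by savings.
    site_fsus = [e / s for e, s in zip(error_bands, savings)]
    print(site_fsus)  # [0.46, 0.136]

    # Absolute uncertainties combine in quadrature (root of the sum
    # of the squares), like independent variances.
    portfolio_error_band = math.sqrt(sum(e ** 2 for e in error_bands))

    # Portfolio FSU: combined uncertainty over combined savings.
    portfolio_fsu = portfolio_error_band / sum(savings)
    print(round(portfolio_fsu, 3))  # 0.132 -- below both site FSUs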


On Wed, Mar 11, 2020 at 1:33 PM <nhi.ngo@...> wrote:
Hi all,

My name is Nhi Ngo from Arcadia and I am working on implementing CalTRACK to evaluate one of our products. I am specifically looking at the Billing Method and want to better understand the interpretation and implementation of FSU at the residential portfolio level. I have a few questions if you don't mind.

1. Before asking other questions, I want to make sure we interpret the FSU aggregation method correctly. The "FSU error band at 80%" column is the direct output of the model from error_bands['FSU Error Band'], and we try to demonstrate the aggregation method for 2 sites. As this example suggests, the portfolio FSU is smaller than each individual site's FSU (46% and 13.6%). This seems counter-intuitive; can you help confirm whether our calculation is correct?

[image: two-site FSU aggregation worksheet]

2. Based on the statement on Recurve's website: "CalTRACK tests have shown that aggregation of individual projects, can, in most cases, deliver less than 25 percent portfolio-level savings uncertainty (at the 90 percent confidence level) with very reasonably sized portfolios", we are working on deriving the expected FSU for different portfolio sizes to set confidence expectations for our pilot program. We are aiming for 1,000 sites but might end up with many fewer, so it is important to set expectations in case we don't reach the portfolio size we want.

We currently have a sample of 1,700 residential sites with average savings of ~1% (some have negative savings). We plan to use a bootstrapping method to resample different portfolio sizes (100, 200...) from the 1,700 and derive the expected FSU. For example, we will resample 10,000 100-site portfolios and derive FSU error bands and an FSU for each 100-site portfolio using the calculation in #1. However, we are unsure how to derive a single FSU across the 10,000 portfolios. Should we take mean(FSU)? Or should we take mean(FSU error bands)/mean(sum(metered_savings)), where sum(metered_savings) is the total savings of each 100-site portfolio? If we do the latter, the results seem more reasonable, but we want to ask for your comments and suggestions on whether the method is statistically correct.

Also, if you can share with us the test or methodology you performed to arrive at the statement quoted above, that would be great. I think we could simulate a similar test to achieve our goal.

I hope the questions are clear enough; please feel free to ask if anything needs clarification. Thank you very much for your time, and I apologize if the questions seem trivial.

Sincerely,
Nhi Ngo
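[Editor's note] As a sketch of the bootstrap Nhi describes, under the assumption (not confirmed in the thread) that the statistic reported per resampled portfolio should be the aggregate FSU from answer 1 rather than a mean of site FSUs. Each site is an (error_band, savings) pair.

    import math
    import random

    def portfolio_fsu(error_bands, savings):
        # Aggregate FSU: quadrature sum of absolute error bands
        # divided by total metered savings.
        return math.sqrt(sum(e ** 2 for e in error_bands)) / sum(savings)

    def expected_fsu(sites, portfolio_size, n_resamples=10_000, seed=0):
        rng = random.Random(seed)
        fsus = []
        for _ in range(n_resamples):
            sample = [rng.choice(sites) for _ in range(portfolio_size)]
            fsus.append(portfolio_fsu([e for e, _ in sample],
                                      [s for _, s in sample]))
        fsus.sort()
        # With ~1% average savings and some negative savers, a resampled
        # portfolio's total savings can sit near zero, which blows up its
        # FSU, so report the median and an 80% interval instead of a mean.
        return fsus[len(fsus) // 2], (fsus[len(fsus) // 10],
                                      fsus[-(len(fsus) // 10)])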



Implementing Fractional Savings Uncertainty (FSU)

nhi.ngo@...
 

Hi all,

My name is Nhi Ngo from Arcadia and I am working on implementing CalTRACK to evaluate one of our products. I am specifically looking at the Billing Method and want to better understand the interpretation and implementation of FSU at the residential portfolio level. I have a few questions if you don't mind.

1. Before asking other questions, I want to make sure we interpret the FSU aggregation method correctly. The "FSU error band at 80%" column is the direct output of the model from error_bands['FSU Error Band'], and we try to demonstrate the aggregation method for 2 sites. As this example suggests, the portfolio FSU is smaller than each individual site's FSU (46% and 13.6%). This seems counter-intuitive; can you help confirm whether our calculation is correct?

[image: two-site FSU aggregation worksheet]

2. Based on the statement on Recurve's website: "CalTRACK tests have shown that aggregation of individual projects, can, in most cases, deliver less than 25 percent portfolio-level savings uncertainty (at the 90 percent confidence level) with very reasonably sized portfolios", we are working on deriving the expected FSU for different portfolio sizes to set confidence expectations for our pilot program. We are aiming for 1,000 sites but might end up with many fewer, so it is important to set expectations in case we don't reach the portfolio size we want.

We currently have a sample of 1,700 residential sites with average savings of ~1% (some have negative savings). We plan to use a bootstrapping method to resample different portfolio sizes (100, 200...) from the 1,700 and derive the expected FSU. For example, we will resample 10,000 100-site portfolios and derive FSU error bands and an FSU for each 100-site portfolio using the calculation in #1. However, we are unsure how to derive a single FSU across the 10,000 portfolios. Should we take mean(FSU)? Or should we take mean(FSU error bands)/mean(sum(metered_savings)), where sum(metered_savings) is the total savings of each 100-site portfolio? If we do the latter, the results seem more reasonable, but we want to ask for your comments and suggestions on whether the method is statistically correct.

Also, if you can share with us the test or methodology you performed to arrive at the statement quoted above, that would be great. I think we could simulate a similar test to achieve our goal.

I hope the questions are clear enough; please feel free to ask if anything needs clarification. Thank you very much for your time, and I apologize if the questions seem trivial.

Sincerely,
Nhi Ngo



Re: How to Use Model Metrics to Gauge Uncertainty

Si Chen <sichen@...>
 

Thanks for pointing that out, Phil. It seems that CalTRACK 4.3.2.4 has replaced ASHRAE's 1.26 "empirical coefficient" with a formula, and for M=12 (12 reporting periods) it comes out to 1.30 for billing (monthly) data and 1.39 for daily data.

Is P' calculated from P the same way here that n' is calculated from n from the ASHRAE formula, using the autocorrelation coefficient rho?

Finally, how do we get the number of model parameters, or the "number of explanatory variables in the baseline model"?

-----
Si Chen
Open Source Strategies, Inc.

Our Mission: https://www.youtube.com/watch?v=Uc7lmvnuJHY




On Wed, Mar 4, 2020 at 4:30 PM <ngo.phil@...> wrote:
1. Correct - autocorr_resid is rho
2. The value of n should be 365, that is correct. It sounds like you have the right idea for m as well (i.e., if you have 30 daily predictions and want to know the uncertainty of the sum of those 30 predictions, m should be 30), with the slight caveat that CalTRACK suggests handling these calculations with a polynomial correction based on experimentally derived coefficients. See section 4.3, http://docs.caltrack.org/en/latest/methods.html#section-4-aggregation. In that case, there is also an M (capitalized) to keep track of, which is the number of months (regardless of data frequency, which is taken into account by using different coefficients for daily and monthly billing data).

On Wed, Mar 4, 2020 at 3:01 PM Si Chen <sichen@...> wrote:

[Edited Message Follows]

We've fitted some models and would like to know how to use them to really understand the quality of the models. The model metrics look like this:

[image: model metrics output]

and comparing them to the ASHRAE 14 guidelines, which give us these formulas:

[image: ASHRAE Guideline 14 uncertainty formulas]

My questions are:

1. Is the autocorr_resid the rho (ρ) in B-14?
2. What are the right parameters for n and m? According to an early page in ASHRAE 14, n and m are the "number of observations in the baseline (or pre-retrofit) and the post-ECM periods, respectively". If the model is daily, should n be 365, so in this case n' = 365 * (1-0.4792) / (1+0.4792) = 128.5? If the model is used to compare energy savings over a year, should m be 365? Or should m be 30 if we're comparing the energy savings on a monthly basis?
3. How many model parameters are there? In a combined heating and cooling model, should it be 5 -- 2 betas, 2 balance points, and an intercept -- or 3?

Calculating all this from my example model, I get a 25.8% uncertainty for F (energy savings) of 20% at 68% confidence (t = 1). Does that seem reasonable for a daily model with this much CVRMSE?

Thanks.
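[Editor's note] As a numerical cross-check of the exchange above (a sketch, not a reference implementation): assuming the standard ASHRAE Guideline 14 Annex B form of the fractional savings uncertainty, U = t * 1.26 * (CVRMSE / F) * sqrt((n / n') * (1 + 2 / n') / m), with the autocorrelation correction n' = n * (1 - rho) / (1 + rho), Si's inputs reproduce the 25.8% figure. Phil notes above that CalTRACK replaces the 1.26 coefficient with experimentally derived coefficients; that refinement is omitted here.

    import math

    def fsu_ashrae(cvrmse, f, n, m, rho, t=1.0):
        # cvrmse: model CV(RMSE); f: fractional savings; n: baseline
        # observations; m: reporting-period observations; rho: lag-1
        # autocorrelation of the residuals; t: t-statistic.
        n_prime = n * (1 - rho) / (1 + rho)  # effective sample size (128.5 here)
        return t * 1.26 * (cvrmse / f) * math.sqrt(
            (n / n_prime) * (1 + 2 / n_prime) / m)

    # Daily model: CVRMSE = 0.46, rho = 0.4792, one year of baseline
    # data, a one-year reporting period, 20% savings, t = 1.
    print(fsu_ashrae(cvrmse=0.46, f=0.20, n=365, m=365, rho=0.4792))
    # -> about 0.258, i.e. the 25.8% quoted above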


Re: How to Use Model Metrics to Gauge Uncertainty

Steve Schmidt
 

My belief: if the building is "well behaved" with respect to outdoor temperatures and heating and cooling loads, then other non-HVAC loads should have no impact on model fit. But I'm not an OEE expert so I'll let Phil correct this.


Re: How to Use Model Metrics to Gauge Uncertainty

Michael S Uhl
 

Is it possible for energy loads that occur at specific times of day (unrelated to CDD or HDD), due to time-of-use pricing, to negatively impact the model accuracy? If so, how can these other variables be addressed? 


On Thu, Mar 5, 2020 at 4:53 PM Steve Schmidt <steve@...> wrote:
A few additional comments --
  1. I'd call this a "bad building". Based on the CalTRACK model fit results, energy use is not very predictable. The ASHRAE Guideline 14 requirement for a good model fit is CVRMSE < 0.25; this value of 0.46 is far above that target. Perhaps you can note this to users of your system, so they don't rely too heavily on the model.
  2. Savings calculations using such a [poor] model will be inaccurate. I'm no statistician, but I believe an R-squared value of 0.4 indicates some correlation, but is not considered useful for prediction. Current CalTRACK methods use any model with CVRMSE values below 1.0 to predict the counterfactual, so it's up to users to recognize when a fit is good and when it's not.
  3. It's odd that the cooling and heating balance points are the same; normally there are several degrees separation between the two. Maybe it's a strange building, or maybe Phil can explain this.

--
All the Best, M.
System Smart
484.553.4570
Sent on the go. Pardon grammar/spelling.


Re: How to Use Model Metrics to Gauge Uncertainty

Steve Schmidt
 

A few additional comments --
  1. I'd call this a "bad building". Based on the CalTRACK model fit results, energy use is not very predictable. The ASHRAE Guideline 14 requirement for a good model fit is CVRMSE < 0.25; this value of 0.46 is far above that target. Perhaps you can note this to users of your system, so they don't rely too heavily on the model.
  2. Savings calculations using such a [poor] model will be inaccurate. I'm no statistician, but I believe an R-squared value of 0.4 indicates some correlation, but is not considered useful for prediction. Current CalTRACK methods use any model with CVRMSE values below 1.0 to predict the counterfactual, so it's up to users to recognize when a fit is good and when it's not.
  3. It's odd that the cooling and heating balance points are the same; normally there are several degrees separation between the two. Maybe it's a strange building, or maybe Phil can explain this.


Re: How to Use Model Metrics to Gauge Uncertainty

ngo.phil@...
 

1. Correct - autocorr_resid is rho
2. The value of n should be 365, that is correct. It sounds like you have the right idea for m as well (i.e., if you have 30 daily predictions and want to know the uncertainty of the sum of those 30 predictions, m should be 30), with the slight caveat that CalTRACK suggests handling these calculations with a polynomial correction based on experimentally derived coefficients. See section 4.3, http://docs.caltrack.org/en/latest/methods.html#section-4-aggregation. In that case, there is also an M (capitalized) to keep track of, which is the number of months (regardless of data frequency, which is taken into account by using different coefficients for daily and monthly billing data).


On Wed, Mar 4, 2020 at 3:01 PM Si Chen <sichen@...> wrote:

[Edited Message Follows]

We've fitted some models and would like to know how to use them to really understand the quality of the models. The model metrics look like this:

[image: model metrics output]

and comparing them to the ASHRAE 14 guidelines, which give us these formulas:

[image: ASHRAE Guideline 14 uncertainty formulas]

My questions are:

1. Is the autocorr_resid the rho (ρ) in B-14?
2. What are the right parameters for n and m? According to an early page in ASHRAE 14, n and m are the "number of observations in the baseline (or pre-retrofit) and the post-ECM periods, respectively". If the model is daily, should n be 365, so in this case n' = 365 * (1-0.4792) / (1+0.4792) = 128.5? If the model is used to compare energy savings over a year, should m be 365? Or should m be 30 if we're comparing the energy savings on a monthly basis?
3. How many model parameters are there? In a combined heating and cooling model, should it be 5 -- 2 betas, 2 balance points, and an intercept -- or 3?

Calculating all this from my example model, I get a 25.8% uncertainty for F (energy savings) of 20% at 68% confidence (t = 1). Does that seem reasonable for a daily model with this much CVRMSE?

Thanks.


How to Use Model Metrics to Gauge Uncertainty

Si Chen <sichen@...>
 

We've fitted some models and would like to know how to use them to really understand the quality of the models. The model metrics look like this:

[image: model metrics output]

and comparing them to the ASHRAE 14 guidelines, which give us these formulas:

[image: ASHRAE Guideline 14 uncertainty formulas]

My questions are:

1. Is the autocorr_resid the rho (ρ) in B-14?
2. What are the right parameters for n and m? According to an early page in ASHRAE 14, n and m are the "number of observations in the baseline (or pre-retrofit) and the post-ECM periods, respectively". If the model is daily, should n be 365, so in this case n' = 365 * (1-0.4792) / (1+0.4792) = 128.5? If the model is used to compare energy savings over a year, should m be 365? Or should m be 30 if we're comparing the energy savings on a monthly basis?
3. How many model parameters are there? In a combined heating and cooling model, should it be 5 -- 2 betas, 2 balance points, and an intercept -- or 3?

Calculating all this from my example model, I get a 25.8% uncertainty for F (energy savings) of 20% at 68% confidence (t = 1). Does that seem reasonable for a daily model with this much CVRMSE?

Thanks.


Re: What do the red vs green lines in the model graph mean?

Si Chen <sichen@...>
 

OK, thanks for the clarification!

-----
Si Chen
Open Source Strategies, Inc.

Our Mission: https://www.youtube.com/watch?v=Uc7lmvnuJHY



On Thu, Feb 27, 2020 at 9:58 AM <ngo.phil@...> wrote:
Great question - green are qualified model candidates, red are disqualified model candidates, orange is the selected model candidate. If you look carefully, you'll notice all the red ones have negative model parameters, which disqualify the models. This behavior is described in CalTRACK 3.4.3.2. (http://docs.caltrack.org/en/latest/methods.html#section-3-a-modeling-billing-and-daily-methods). It's not well documented, as you noticed, but here's the code if you're interested: https://github.com/openeemeter/eemeter/blob/c10c384e0ceb39a4eb79d4532d5516d40f9bf3be/eemeter/caltrack/usage_per_day.py#L2273-L2282.

On Thu, Feb 27, 2020 at 9:28 AM Si Chen <sichen@...> wrote:
Hello,

We just fitted a daily model, and the graph shows both red and green lines.  What do these different colored lines mean?


Re: What do the red vs green lines in the model graph mean?

ngo.phil@...
 

Great question - green are qualified model candidates, red are disqualified model candidates, orange is the selected model candidate. If you look carefully, you'll notice all the red ones have negative model parameters, which disqualify the models. This behavior is described in CalTRACK 3.4.3.2. (http://docs.caltrack.org/en/latest/methods.html#section-3-a-modeling-billing-and-daily-methods). It's not well documented, as you noticed, but here's the code if you're interested: https://github.com/openeemeter/eemeter/blob/c10c384e0ceb39a4eb79d4532d5516d40f9bf3be/eemeter/caltrack/usage_per_day.py#L2273-L2282.
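[Editor's note] To illustrate the disqualification rule Phil describes, here is a toy sketch; the real logic lives at the usage_per_day.py link above, and the selection metric used here (r-squared) is an assumption for illustration only.

    # Hypothetical candidates: (name, fitted parameters, r_squared).
    candidates = [
        ("hdd_only_bp60", {"intercept": 2.1, "beta_hdd": 0.8}, 0.61),
        ("cdd_hdd_bp55_70", {"intercept": 3.0, "beta_cdd": -0.2,
                             "beta_hdd": 0.7}, 0.64),
        ("intercept_only", {"intercept": 4.2}, 0.10),
    ]

    def qualified(params):
        # Any negative fitted parameter disqualifies the candidate --
        # these are the red lines in the plot.
        return all(value >= 0 for value in params.values())

    green = [c for c in candidates if qualified(c[1])]
    best = max(green, key=lambda c: c[2])  # the orange (selected) model
    print(best[0])  # -> hdd_only_bp60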


On Thu, Feb 27, 2020 at 9:28 AM Si Chen <sichen@...> wrote:
Hello,

We just fitted a daily model, and the graph shows both red and green lines.  What do these different colored lines mean?


What do the red vs green lines in the model graph mean?

Si Chen <sichen@...>
 

Hello,

We just fitted a daily model, and the graph shows both red and green lines.  What do these different colored lines mean?


Re: gas and electric meters at the same building

Si Chen <sichen@...>
 

OK, thank you very much for clearing that up.

-----
Si Chen
Open Source Strategies, Inc.

Our Mission: https://www.youtube.com/watch?v=Uc7lmvnuJHY



On Tue, Feb 25, 2020 at 8:25 AM <ngo.phil@...> wrote:
Definitely, for buildings with multiple meters, there should be a separate model for each meter. It is important to use different parameters for gas and electricity, as you read in the CalTRACK compliance document.

On Mon, Feb 24, 2020 at 2:55 PM Si Chen <sichen@...> wrote:
Hello,

What do you recommend when a building has gas and electric meters?  I read on https://github.com/openeemeter/eemeter/blob/fc91df2b5fa69125a85b1235d24783c350d5b99a/docs/caltrack_compliance.rst:
 
For natural gas meter use data, the function :any:`eemeter.fit_caltrack_usage_per_day_model` must set fit_cdd=False and cooling_balance_points=None so that models using cooling degree days are not considered.
 
3.4.3.1: :any:`eemeter.fit_caltrack_usage_per_day_model` must set fit_cdd=True, fit_intercept_only=True, fit_cdd_only=True, fit_hdd_only=True, fit_cdd_hdd=True for electricity data, and fit_cdd=False, fit_intercept_only=True, fit_cdd_only=False, fit_hdd_only=True, fit_cdd_hdd=False for gas data.

So do you recommend building separate models for gas and electric with the parameters changed?


Re: gas and electric meters at the same building

ngo.phil@...
 

Definitely, for buildings with multiple meters, there should be a separate model for each meter. It is important to use different parameters for gas and electricity, as you read in the CalTRACK compliance document.

On Mon, Feb 24, 2020 at 2:55 PM Si Chen <sichen@...> wrote:
Hello,

What do you recommend when a building has gas and electric meters?  I read on https://github.com/openeemeter/eemeter/blob/fc91df2b5fa69125a85b1235d24783c350d5b99a/docs/caltrack_compliance.rst:
 
For natural gas meter use data, the function :any:`eemeter.fit_caltrack_usage_per_day_model` must set fit_cdd=False and cooling_balance_points=None so that models using cooling degree days are not considered.
 
3.4.3.1: :any:`eemeter.fit_caltrack_usage_per_day_model` must set fit_cdd=True, fit_intercept_only=True, fit_cdd_only=True, fit_hdd_only=True, fit_cdd_hdd=True for electricity data, and fit_cdd=False, fit_intercept_only=True, fit_cdd_only=False, fit_hdd_only=True, fit_cdd_hdd=False for gas data.

So do you recommend building separate models for gas and electric with the parameters changed?
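[Editor's note] A minimal sketch of the two separate fits discussed above. The fit_* flags are taken from the compliance document quoted in the question; the design-matrix step and the helper function around it are assumptions about the surrounding eemeter workflow, not verified API details.

    import eemeter

    def fit_meter_models(elec_meter_data, gas_meter_data, temperature_data):
        # One model per meter, each with fuel-appropriate settings.
        elec_design = eemeter.create_caltrack_daily_design_matrix(
            elec_meter_data, temperature_data)
        gas_design = eemeter.create_caltrack_daily_design_matrix(
            gas_meter_data, temperature_data)

        # Electricity: intercept-only, CDD-only, HDD-only, and CDD+HDD
        # candidates are all considered.
        elec_model = eemeter.fit_caltrack_usage_per_day_model(
            elec_design, fit_cdd=True, fit_intercept_only=True,
            fit_cdd_only=True, fit_hdd_only=True, fit_cdd_hdd=True)

        # Natural gas: no cooling-degree-day candidates.
        gas_model = eemeter.fit_caltrack_usage_per_day_model(
            gas_design, fit_cdd=False, fit_intercept_only=True,
            fit_cdd_only=False, fit_hdd_only=True, fit_cdd_hdd=False)

        return elec_model, gas_model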


gas and electric meters at the same building

Si Chen <sichen@...>
 

Hello,

What do you recommend when a building has gas and electric meters?  I read on https://github.com/openeemeter/eemeter/blob/fc91df2b5fa69125a85b1235d24783c350d5b99a/docs/caltrack_compliance.rst:
 
For natural gas meter use data, the function :any:`eemeter.fit_caltrack_usage_per_day_model` must set fit_cdd=False and cooling_balance_points=None so that models using cooling degree days are not considered.
 
3.4.3.1: :any:`eemeter.fit_caltrack_usage_per_day_model` must set fit_cdd=True, fit_intercept_only=True, fit_cdd_only=True, fit_hdd_only=True, fit_cdd_hdd=True for electricity data, and fit_cdd=False, fit_intercept_only=True, fit_cdd_only=False, fit_hdd_only=True, fit_cdd_hdd=False for gas data.

So do you recommend building separate models for gas and electric with the parameters changed?


Re: OpenEEMeter integrated in opentaps

Si Chen <sichen@...>
 

Sure. But maybe use the opentaps.org/forum -- it might be off topic for OpenEEmeter?
-----
Si Chen
Open Source Strategies, Inc.

opentaps in 1 minute: https://youtu.be/r0AY2P738QY



On Fri, Jan 31, 2020 at 7:51 AM Michael S Uhl <system.smart.llc@...> wrote:
So glad to see you've built this (and that I won't need to duplicate the work). Can I share a few use cases and see if you think opentaps fills the needs described (or will shortly)?

On Thu, Jan 30, 2020 at 6:57 PM Si Chen <sichen@...> wrote:
That would be great!  Please let me know if you need anything else from me for that.

-----
Si Chen
Open Source Strategies, Inc.

opentaps in 1 minute: https://youtu.be/r0AY2P738QY



On Thu, Jan 30, 2020 at 3:39 PM <ngo.phil@...> wrote:
Great work Si and OpenTaps team! This is very exciting. A hearty thank you to you and your team for the contributions back to the OpenEEmeter library along the way. I am looking forward to seeing what comes next from the OpenTaps team and your OpenTaps/OpenEEmeter integration.

If you are open to it, I would be more than happy to feature your integration in the eemeter docs, with the idea that someone wanting to try out the OpenEEMeter or make an integration themselves could head over to OpenTaps to see how it can be done.

On Thu, Jan 30, 2020 at 2:38 PM Si Chen <sichen@...> wrote:
Hello everybody,

It's done!

Please take a look at https://opentaps.org/2020/01/30/green-button-xml-openeemeter-added-opentaps-mv/

and let us know your thoughts and suggestions.



--
System Smart LLC

Imagination is the beginning of creation...  You imagine what you desire, you will what you imagine, and at last you create what you will.  ~George Bernard Shaw


Re: OpenEEMeter integrated in opentaps

Michael S Uhl
 

So glad to see you've built this (and that I won't need to duplicate the work). Can I share a few use cases and see if you think opentaps fills the needs described (or will shortly)?

On Thu, Jan 30, 2020 at 6:57 PM Si Chen <sichen@...> wrote:
That would be great!  Please let me know if you need anything else from me for that.

-----
Si Chen
Open Source Strategies, Inc.

opentaps in 1 minute: https://youtu.be/r0AY2P738QY



On Thu, Jan 30, 2020 at 3:39 PM <ngo.phil@...> wrote:
Great work Si and OpenTaps team! This is very exciting. A hearty thank you to you and your team for the contributions back to the OpenEEmeter library along the way. I am looking forward to seeing what comes next from the OpenTaps team and your OpenTaps/OpenEEmeter integration.

If you are open to it, I would be more than happy to feature your integration in the eemeter docs, with the idea that someone wanting to try out the OpenEEMeter or make an integration themselves could head over to OpenTaps to see how it can be done.

On Thu, Jan 30, 2020 at 2:38 PM Si Chen <sichen@...> wrote:
Hello everybody,

It's done!

Please take a look at https://opentaps.org/2020/01/30/green-button-xml-openeemeter-added-opentaps-mv/

and let us know your thoughts and suggestions.



--
System Smart LLC

Imagination is the beginning of creation...  You imagine what you desire, you will what you imagine, and at last you create what you will.  ~George Bernard Shaw
