I am trying to learn how to properly model storage processes in TIMES, and thus the following questions relate to a specific case I am working on.
Hope you can provide me with some answers and clarifications.
At the moment my goal is to properly model a thermal storage which is currently available in the district heating network of a city I am modelling. The information I know/assume is the following (also in the attached file):
1) Installed capacity, in terms of the maximum amount of heat that can be in the storage at any given moment = 900 MWh
2) Minimum capacity = 200 MWh; this is an assumption that limits the lowest level of stored heat at any given moment
3) Charge and discharge rate = 60 MW
4) STG_EFF and STG_LOSS are assumed values for efficiency and losses.
My main questions to you are related to the attributes that regulate installed capacity (in MWh) and the charge and discharge rates (in MW).
Could you please take a look at the attached file and check if my logic in defining the
- STGIN_BND,
- STGOUT_BND,
- NCAP_AF~LO,
- NCAP_AF~UP,
- STOCK~2015
attributes is correct and reflects the input information above (also in the file)?
I believe there is another way of defining the indicated attributes - via the NCAP_AFC attribute - but I do not think I fully understand how that would work.
If possible, could you please provide me with an alternative that defines the capacity and the charge/discharge rates using the NCAP_AFC attribute.
Of course, it would be great if you could check the other parameters in the file as well, since I have very limited experience in modelling storage processes in TIMES and am still finding my way around the attributes and other modelling features. E.g., if I want my storage to be able to operate not only at the Day/Night level but at all the available levels (Day/Night, Weekly, Seasonal - please see my time-slices in the file), is it correct to identify the process with the "STS" storage type qualifier?
Looking forward to hearing back from you.
Best regards,
Dmytro
• Your Stock is ok, if you want to model the capacity in terms of maximum amount stored, and if the flows are really in TJ;
• Your NCAP_AF(UP/LO) are ok, if you want to model the capacity in terms of maximum amount stored;
• STG_LOSS is ok;
• STGIN_BND / STGOUT_BND are not ok for the purpose intended.
But are the flows really in TJ (it is a rather small unit)?
Concerning the STGIN_BND / STGOUT_BND: as they are defined now, they are ANNUAL bounds, and so they are not actually limiting the charge/discharge rates but the total ANNUAL flows. You could define bounds for each and every timeslice to set those max charge/discharge rates, but in my opinion that would get rather cumbersome, and so I would suggest using NCAP_AFC instead. Then your capacity unit could be MW, and thus Stock=60, NCAP_AFC(NRG,DAYNITE)=1, NCAP_AFC(ACT,DAYNITE)=0.625, NCAP_AF(ANNUAL,LO)=0.1388889 and PRC_CAPACT=31.536 (assuming TJ is indeed your flow unit). (900/(60×24)=0.625; 200/(60×24)=0.1388889.)
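The arithmetic behind these suggested values can be sketched as follows (a minimal check using the 60 MW capacity and the 900/200 MWh storage levels from this thread; all variable names are illustrative):

```python
# Deriving the suggested TIMES parameter values from the storage data
# given in this thread (names below are mine, for illustration only).
cap_mw = 60.0          # Stock: charge/discharge capacity in MW
max_level_mwh = 900.0  # maximum storage level
min_level_mwh = 200.0  # minimum (assumed) storage level

# On the DAYNITE level, an activity availability factor of 1 corresponds
# to a storage level of CAP x 24 h (here 60 MW x 24 h = 1440 MWh).
afc_act = max_level_mwh / (cap_mw * 24)  # NCAP_AFC(ACT,DAYNITE)
af_lo = min_level_mwh / (cap_mw * 24)    # NCAP_AF(ANNUAL,LO)

# PRC_CAPACT converts 1 MW of capacity to annual activity in TJ:
# 1 MW x 8760 h = 8760 MWh = 31.536 TJ.
prc_capact = 8760 * 3600 / 1e6

print(afc_act)          # 0.625
print(round(af_lo, 7))  # 0.1388889
print(prc_capact)       # 31.536
```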
Yes it is correct to identify the process with the "STS" storage type qualifier.
1) The flows are in TJ and, having checked again, I believe they are correct. It is a district heating system with yearly heat deliveries of around 600 GWh (2160 TJ), which has a thermal storage with 900 MWh (3.24 TJ) of capacity.
2) Ok, then I will try to apply the approach of expressing the storage parameters via the NCAP_AFC attribute. Thanks a lot for providing me with the instructions on how to do so!
But since this approach is new to me, could you please elaborate a little more on how to implement your suggestions in the actual file?
Because of my limited TIMES knowledge, I am having difficulties understanding how exactly to implement, e.g.,
- NCAP_AFC(NRG,DAYNITE)=1,
- NCAP_AFC(ACT,DAYNITE)=0.625.
It is the same attribute, but one has "NRG" and the other has "ACT" in parentheses. How do I include this in my Excel table(s)?
3) Also, could you please briefly elaborate on what the proposed entries mean?
- NCAP_AFC(NRG,DAYNITE)=1 - means that the capacity of the storage is available at all times, correct? All year long and in each time-slice, right?
- NCAP_AFC(ACT,DAYNITE)=0.625 - means that each hour the storage can be charged/discharged with 60 MW × 0.625 = 37.5 MW of heat? This also means that over a day (24 h) up to 37.5 × 24 = 900 MWh can be charged or discharged? Am I understanding this correctly?
- NCAP_AF(ANNUAL,LO)=0.1388889. Could you please explain this in more detail? I mean, why do we indicate ANNUAL instead of DAYNITE? And how do we get mathematically to 200 MWh as the lowest allowable storage level with this attribute value? I am having difficulties here.
Hope to hear back from you! I think I am learning but still need a bit more understanding.
Best regards,
Dmytro
01-09-2021, 02:48 PM (This post was last modified: 01-09-2021, 03:33 PM by Antti-L.)
Please find my answers below:
1) Ok, fine.
2) NCAP_AFC has the indexes NCAP_AFC(r,y,p,cg,tsl). Here r=region, y=year, p=process, cg=commgrp and tsl=timeslice level. The shorthand notation NCAP_AFC(NRG,DAYNITE) just tells that cg='NRG' and tsl='DAYNITE'. Likewise, NCAP_AFC(ACT,DAYNITE) tells that cg='ACT' and tsl='DAYNITE'. You can specify cg in the CommGrp column, and tsl in the Timeslice column. You could also use the header NCAP_AFC~DAYNITE. So, in one row you could write 'NRG' in the CommGrp column and put the value 1 under the column NCAP_AFC~DAYNITE. And then in a second row you could write 'ACT' in the CommGrp column and put the value 0.625 under the column NCAP_AFC~DAYNITE. It is easy, but as always, check the parameters in Browse.
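For illustration, those two rows could look roughly like this in a ~FI_T table (the process name STGTANK and the exact column layout are placeholders; your own table will differ):

```
~FI_T
Process   CommGrp   NCAP_AFC~DAYNITE
STGTANK   NRG       1
STGTANK   ACT       0.625
```

As noted above, check in Browse that the resulting NCAP_AFC parameters end up with the intended cg and tsl indexes.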
3) NCAP_AFC(NRG,DAYNITE)=1 defines a DAYNITE level availability factor for the NRG flows (NRG means energy). An availability factor of 1 (=100%) means that the full capacity can be used. The full capacity is 60 MW. So, with an availability factor of 1 (=100%) you can discharge max. 60 MW of energy output in any timeslice, and charge energy into the storage with max. 60 MW power in any timeslice.
NCAP_AFC(ACT,DAYNITE)= 0.625 defines a DAYNITE level availability factor for the activity (ACT refers to the activity). The activity represents the storage level. By convention, in a DAYNITE level storage an availability factor of 1 (100%) would correspond to a maximum storage level of CAP × 24 h = 60 MW × 24 h = 1440 MWh. Therefore (for a storage with maximum storage level of 900 MWh), an availability factor of 0.625 would correspond to a maximum storage level of 60 MW × 24 h × 0.625 = 900 MWh. See the documentation, section 4.3.7 Availability factors for storage processes.
NCAP_AFC can only be used for UP/FX availability factors. Therefore, for defining the minimum storage level of 200 MWh, NCAP_AF(ANNUAL,LO)=0.1388889 is needed, because an availability factor of 0.1388889 for the storage level corresponds to a storage level of 60 MW × 24 h × 0.1388889 ≈ 200 MWh. NCAP_AF is always levelized, and this can therefore be defined for the ANNUAL timeslice only.
I forgot to give an explicit answer to this question:
>NCAP_AFC(ACT,DAYNITE)=0.625 - means that each hour the storage can be charged/discharged with 60 MW × 0.625 = 37.5 MW of heat? This also means that over a day (24 h) up to 37.5 × 24 = 900 MWh can be charged or discharged? Am I understanding this correctly?
NO, that is not correct at all. NCAP_AFC(ACT,DAYNITE) defines an availability factor for the activity (ACT). The activity represents the storage level, and so this parameter defines the maximum amount of energy that can be stored at any timeslice. It does NOT define a limit for the amount charged or discharged. The bounds for the amount charged or discharged are defined by the availability factors for the input / output flows (NCAP_AFC(NRG,DAYNITE)).
(01-09-2021, 02:48 PM)Antti-L Wrote: 2) NCAP_AFC has the indexes NCAP_AFC(r,y,p,cg,tsl). [...]
Dear Antti,
sorry for being back with yet another question so quickly.
I am trying to follow your instructions (above) on how to indicate the indexes for the NCAP_AFC attribute, but I keep getting the "VI: Commodity expected for parameter" error when synchronizing the file. Cell J12 in the attached Excel file is highlighted when I double-click on the error.
Could you please check the file and see if I made any entries in the wrong rows or columns. Maybe I misunderstood some instructions.
01-09-2021, 08:46 PM (This post was last modified: 01-09-2021, 08:58 PM by Antti-L.)
That spurious error is a VEDA issue. It seems you are using some old version of VEDA, is that correct?
Old versions of VEDA may have some bugs. I remember having seen a similar issue myself years ago, when using an old version of VEDA. But there is an easy work-around: just put HETHTHP also into cell F12. Putting it there is correct (it is an output), so it will do no harm, but by doing so you should get rid of the buggy error.
Of course, you should also consider upgrading your VEDA... But anyway, this is not the VEDA Forum.
Thanks, adding the HETHTHP commodity helped! We are doing our best to move our models from the old VEDA to the new one. Hope that VEDA-related issues will soon no longer be a problem.
Ah, I have changed it back to STG in order to compare model results with the storage process described as STG and as STS. This is simply a learning exercise. But thanks for the attention to detail and for bringing this up.
I have been trying to use the knowledge you shared with me (in your replies above) to model more storage options in my TIMES model.
At this point I would love to use a bit of your time to check if my understanding of the storage modelling has improved.
I here attach two files in which I try to model existing thermal storages (VT_CITY_STG_ESK_72ts.xlsx file) and storage technologies in which the model can invest (SubRES_b-NewTechs_STG.xlsx file).
In the first file, I try to model the piping network of a district heating system as a thermal energy storage. I try to represent this type of storage in exactly the same way as you helped me represent the existing accumulator tank, the only differences being that the storage in the network
1) has no lower bound on the level of stored heat
2) has no fixed O&M cost
3) is identified as STG and not as STS type
4) has a much higher (assumed for now) storage loss.
Do you think the network storage is represented correctly in the attached file, given the points above and the parameters I assumed for it (table E29:F37)?
In the second file, I try to model storage options available for investments. I hope that the "conventional" storage options, i.e., accumulator tanks and the underground storage (rows 8-10) are represented correctly. Would you agree?
The interesting part of this file is the representation of buildings as thermal energy storage. The idea is that we can store thermal energy using the thermal mass of buildings (overheating and underheating buildings). This is what I have been trying to represent in the model. The main characteristics of this storage type are:
1) input and output commodity is NOT the heat at the district heating system side but the heat at the building side (for each building type!)
2) high storage losses (STG_LOSS = 43.8 based on the input data). Does that make sense?
3) low investment cost associated with the installation of additional smart metering equipment.
The input data and assumptions (L52:S67) are still being verified, but I am wondering whether this way of representing building storage is correct. Could you please give your opinion on this?
I am not sure if I missed any details in this message, but I would be glad to answer any additional questions.
Hope to hear back from you and wish you a nice day!
/Dmytro
>Do you think the network storage is represented correctly in the attached file given the points above and the parameters I assumed for the storage in the file, table E29:F37?
Well yes, it looks correct to me, given the points you mentioned.
>In the second file, I try to model storage options available for investments. I hope that the "conventional" storage options, i.e., accumulator tanks and the underground storage (rows 8-10) are represented correctly. Would you agree?
For these, you have not defined the capacity bounding the input/output flows in any way. The capacity is thus bounding only the amount of energy stored. If that's as intended, I agree.
>The input data and assumptions (L52:S67) are still being verified but I am wondering if this way of representing building storage is correct?
I am not able to see the commodity characterization, and so cannot see the full picture. The RHAPA, RHAPB, RHAPC, RHAPE etc. commodities are apparently demand commodities? And you say that they represent "heat at the building side". So, is your demand actually space heat, and not the volume of heated space by building class? Anyway, obviously they should be on the DAYNITE level for the storage processes to work.

As far as I can see, many of these "Buildings Thermal Energy Storage (BiTES)" processes have neither capacity-related costs nor capacity bounds in the SubRES, and I think that might be problematic, especially as most of them have zero losses. For such cases (e.g. STGRHABiTES101), is there anything bounding the technology operation? If there are no capacity-related costs or bounds, there would be no capacity variable, and so the activity would not be bounded by any capacity, and it seems there are no activity costs or losses either. So it looks like the storage activity could take arbitrary values? Likewise, the input and output flows are not bounded by anything, and STG_EFF=1, meaning that also the input and output flows (in the same commodity) could seemingly take arbitrary values without affecting the objective function?
However, those that have investment costs defined look ok to me.
thank you very much for the prompt and detailed answer!
1. Great, I will keep the description of the network storage as it is then.
2. Yes, no bounds on the input/output capacity were set. The thing is that for the network or buildings storage I do know their limitations: e.g., knowing the number and thermal masses of the buildings, their total maximum energy capacity (maximum amount of heat stored) as well as their in/out capacity can be estimated (as is now done in the SubRES file). But for the conventional storage investment options (e.g. a tank) I do not know
- the maximum invested energy capacity (it is the model's variable, right?)
- the ratio between the invested energy capacity (maximum amount of energy stored) and the max input/output flows.
However, I totally agree that values for the maximum input/output flows should be indicated in some way to prevent the invested storage(s) from going "crazy".
Are there any TIMES attributes that can regulate the input/output flows for a storage which has no size estimate yet?
P.S. I do not set any constraint on the maximum storage investment, i.e., maximum amount of energy stored or max in/out capacity, for the conventional storage techs.
3. Yes, the RHAPA, RHAPB ... commodities are demand commodities that have the DAYNITE timeslice and TJ as a unit.
Indeed, some of the storage processes (e.g. STGRHABiTES101) have no parameters, since some of the building types (e.g. APA, APC, APD) are available in the model structure but have no data (it is assumed that there are no buildings of the types APA, APC, APD in the modelled city). With this in mind, I was trying to make a SubRES_STG file with a structure that would allow all possible building types to be used as storage, if all the building types were available. However, for now, only the building types that do have data in the ASSUMPTIONS table (e.g. type APB), and hence in the ~FI_T table, are assumed to be available for storing heat. I hope this is clearer now, and sorry for the confusion.
As I can see from your last sentence, the building storages that do have input data (max capacity, losses, inv. cost) seem to be defined correctly. This is good news, thanks!
18-10-2021, 08:10 PM (This post was last modified: 18-10-2021, 09:21 PM by Antti-L.)
>Are there any TIMES attributes that can regulate input/output flows for a storage, which has no size estimation yet?
Well, you could use e.g. FLO_SHAR(r,y,p,com,'ACT','ANNUAL','UP') = 1, to have the output of com bounded by the activity (1×VAR_ACT) at the beginning of each timeslice. And with aux flows and/or UCs you can define almost any kind of (linear) relations you might consider for regulating the flows. In general, having storage processes with no capacity variable, or the same commodity IN and OUT, may cause numerical problems for the solver, unless the activity / flow levels are bounded in some way or have some impact on the OBJ (e.g. via losses defined by STG_LOSS / STG_EFF).
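As a rough sketch, that FLO_SHAR parameter would generate, in each timeslice, a constraint of the following form (notation simplified; recall that for a storage the activity is the storage level at the beginning of the timeslice):

```
VAR_FLO(p, com, s)  <=  1 × VAR_ACT(p, s)     for each timeslice s
```

In other words, the output of com in a timeslice cannot exceed the amount stored at the start of that timeslice, which gives the flows an upper bound even without a capacity variable.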
BTW, you have defined NCAP_BND with the description "Max capacity bound". Note that it defines a bound for the new capacity installed in the period t specified in NCAP_BND(r,t,p,bd). So, your NCAP_BND bounds in the SubRES seem to be defined on the new capacity in the Base year only.
thanks again for your answer and for the additional information.
Let me start from the second point you made. I think I understood what you mean and hence made a change to the file (attached). Basically, I renamed the column previously named NCAP_BND to NCAP_BND~2025 and added another column named NCAP_BND~2050 with the same values. Will this change make sure that the model cannot invest in more than 50 MW (as in the example) of storage in any year between the start year 2025 and the end year 2050? Or is there another way to do this?
Regarding your first remark: you mentioned "unless the activity / flow levels are bounded in some way or have some impact on the OBJ (e.g. via losses defined by STG_LOSS / STG_EFF)". In this case I did define STG_LOSS / STG_EFF for the "conventional" storage investment options. Do you think this will not suffice? I mean, can there be numerical problems with this particular formulation?
I will try to learn more about TIMES and play with some attributes (as you indicated, "FLO_SHAR(r,y,p,com,'ACT','ANNUAL','UP') = 1, to have the output of com bounded by the activity (1×VAR_ACT) at the beginning of each timeslice"), but only if I get more time in the project. For now, I would like to be more certain that the representations are good enough.