Model not solving
#1
Hello Antti,

My model is not solving: the run keeps going for a long time and never finishes. I tried improving the scaling of the model, but the problem persists.

Could you please have a look? I have attached the .DD and .RUN files, and the whole model is attached with this post as well. Due to file-size restrictions, the scenario and subRES files are attached separately. I suspect the problem is with the subRES files (file names starting with 'Decentralized' and 'Centralized'), because the problem started after I made changes in those files.

Thanks & Regards
Abi Afthab


Attached Files
.zip   SuppXLS.zip (Size: 1.89 MB / Downloads: 2)
.zip   subRES files.zip (Size: 3.35 MB / Downloads: 2)
.zip   BE_POWER_V9 - Copy.zip (Size: 6.6 MB / Downloads: 1)
.zip   DD and run files.zip (Size: 468.36 KB / Downloads: 1)
#2
So, you have a big model which is not solving, and you ask me to find the reason for you, by reading through all 81 Excel templates (with a total size of 16 MB) for a model that I know hardly anything about? That does not feel like a very reasonable request...

Anyway, I looked at your SysSettings, and it seems that you have not disabled the dummy imports, and you are still defining huge costs for those dummy import processes:

ACTCOST    222200000   IRE IMP*Z
ACTCOST     88880000   IRE IMPDEMZ

So, I can only repeat my former advice: Get rid of the dummy imports by disabling all the IMP*Z processes, e.g. by defining START=2200 for all of them.
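For intuition on why those penalty costs matter for solvability: they stretch the range of objective coefficients far beyond what LP solvers scale comfortably. A quick back-of-the-envelope check (plain Python, illustrative only; the 0.1 floor is an assumed smallest coefficient, not a value from the model):

```python
import math

# Spread between the dummy-import penalty and an assumed smallest
# cost coefficient in the matrix (0.1 is a placeholder, not model data).
dummy_cost = 222_200_000   # ACTCOST on the IRE IMP*Z processes
smallest_cost = 0.1        # assumed smallest coefficient after rescaling

spread = math.log10(dummy_cost / smallest_cost)
print(f"coefficient spread: about 10^{spread:.1f}")
```

A spread of nine to ten orders of magnitude is a classic cause of slow barrier convergence and crossover trouble, which is why disabling the dummy imports (rather than merely pricing them high) tends to help.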
#3
(26-06-2019, 08:05 PM)Antti-L Wrote: So, you have a big model which is not solving, and you ask me to find the reason for you, by reading through all 81 Excel templates (with a total size of 16 MB) for a model that I know hardly anything about? That does not feel like a very reasonable request...

Anyway, I looked at your SysSettings, and it seems that you have not disabled the dummy imports, and you are still defining huge costs for those dummy import processes:

ACTCOST    222200000   IRE IMP*Z
ACTCOST     88880000   IRE IMPDEMZ

So, I can only repeat my former advice: Get rid of the dummy imports by disabling all the IMP*Z processes, e.g. by defining START=2200 for all of them.

Thanks for the reply Antti. 
I can get rid of the big cost numbers on the dummy imports. But if I disable the dummy imports entirely, won't that be risky? If something is wrong with the modelling, I may fail to notice that some of my demand is not being met. Am I right?
#4
Dear Antti,

This problem occurred even before including all the subRES files. Just now, I tested the model with three cases.

Along with all the scenario files:
1) Including only the following subRES files
SubRES_BackupBoilers
SubRES_ELCConventionalTechs
SubRES_ELCRenewableTechs
SubRES_DECENTRALISED_ALK_MOBILITY
SubRES_DECENTRALISED_ALK_INDUSTRY

This took 4 mins to solve.
Reducing the dummy import cost values (as you suggested) has definitely improved the solving time; last time this case took 50 mins to solve.

2) Including the following subRES files
SubRES_BackupBoilers
SubRES_ELCConventionalTechs
SubRES_ELCRenewableTechs
SubRES_DECENTRALISED_ALK_MOBILITY
SubRES_DECENTRALISED_ALK_INDUSTRY

This took 25 mins to solve.

3) Including all the subRES files
The model has now been running for 3.3 hours. I can see the solver has calculated the objective value, but the run just won't finish.


I have no clue what to do about this. I have been trying many options for the last three days.

Could you please look at the second case I mentioned? Why does including just one additional subRES file increase the solving time so much? Maybe if you look at the parameters in the SubRES_DECENTRALISED_ALK_INDUSTRY file, you can suggest a solution. I have attached the .DD and .RUN files of this second case.

I could try disabling the dummy imports, but I am still wary of the risk of not detecting errors where demand goes unmet.

Thanks & Regards
Abi Afthab


Attached Files
.zip   model.zip (Size: 453.73 KB / Downloads: 1)
#5
Dear Antti,

I tried solving the model after disabling the dummy imports, and it did solve. However, it took 2 hours and 43 minutes. I am a little worried, because that will not let me assess the results smoothly in good time, since I will have to assess many scenarios for my master's thesis. Is this common with models this big? What could I do to improve it? Could you list a few typical problems that increase the solution time, like the poor model scaling you mentioned?

Regards
Abi Afthab
#6
Big models tend to call for some additional meticulousness in the modelling process.
I tested the run with all the subRES files listed below, following my earlier advice plus some small additional fine-tuning:
$BATINCLUDE elcconventionaltechs.dd
$BATINCLUDE elcrenewabletechs.dd
$BATINCLUDE backupboilers.dd
$BATINCLUDE centralised_alk_industry.dd
$BATINCLUDE centralised_alk_mobility.dd
$BATINCLUDE centralised_pem_industry.dd
$BATINCLUDE centralised_pem_mobility.dd
$BATINCLUDE centralised_pipeline_industry.dd
$BATINCLUDE centralised_pipeline_mobility.dd
$BATINCLUDE centralised_smr_industry.dd
$BATINCLUDE centralised_smr_mobility.dd
$BATINCLUDE centralised_tubetrailer_mobility.dd
$BATINCLUDE decentralised_alk_industry.dd
$BATINCLUDE decentralised_alk_mobility.dd
$BATINCLUDE decentralised_pem_industry.dd
$BATINCLUDE decentralised_pem_mobility.dd
$BATINCLUDE decentralised_smr_industry.dd
$BATINCLUDE decentralised_smr_mobility.dd

The resulting model had 1,595,487 equations, and the run solved quite nicely, in less than 500 seconds (about 8 minutes), see below the console output after crossover:


I see you have abandoned my suggestion of bounding the outflow of the storage in each timeslice using FLO_SHAR, which you said you would incorporate. May I ask why you did not incorporate it, but instead changed all storage to be AFC-based?


Attached Files Thumbnail(s)
   
#7
(27-06-2019, 06:17 PM)Antti-L Wrote: Big models tend to call for some additional meticulousness in the modelling process.
I tested the run with all the subRES files listed below, following my earlier advice plus some small additional fine-tuning:
$BATINCLUDE elcconventionaltechs.dd
$BATINCLUDE elcrenewabletechs.dd
$BATINCLUDE backupboilers.dd
$BATINCLUDE centralised_alk_industry.dd
$BATINCLUDE centralised_alk_mobility.dd
$BATINCLUDE centralised_pem_industry.dd
$BATINCLUDE centralised_pem_mobility.dd
$BATINCLUDE centralised_pipeline_industry.dd
$BATINCLUDE centralised_pipeline_mobility.dd
$BATINCLUDE centralised_smr_industry.dd
$BATINCLUDE centralised_smr_mobility.dd
$BATINCLUDE centralised_tubetrailer_mobility.dd
$BATINCLUDE decentralised_alk_industry.dd
$BATINCLUDE decentralised_alk_mobility.dd
$BATINCLUDE decentralised_pem_industry.dd
$BATINCLUDE decentralised_pem_mobility.dd
$BATINCLUDE decentralised_smr_industry.dd
$BATINCLUDE decentralised_smr_mobility.dd

The resulting model had 1,595,487 equations, and the run solved quite nicely, in less than 500 seconds (about 8 minutes), see below the console output after crossover:


I see you have abandoned my suggestion of bounding the outflow of the storage in each timeslice using FLO_SHAR, which you said you would incorporate. May I ask why you did not incorporate it, but instead changed all storage to be AFC-based?

Dear Antti,

Regarding storage: I later decided to model it with a predefined storage size taken from other simulation models. Since TIMES is not a simulation model, I thought this would be better. So what I have done is calculate the investment cost and fixed O&M cost based on a fixed storage size.

I will give an example of how I did it; it would be helpful if you could comment on it. For my hydrogen refuelling stations, I am modelling based on the costs and parameters of a 1000 kg/day refuelling station. For such a station I know an optimised storage size (say 500 kg of hydrogen capacity), and I know the corresponding investment cost and fixed O&M cost. Based on this I calculated the cost per year (MEuro/PJa): a demand of 1000*365 kg/year carries an investment cost equal to the capital cost of 500 kg of storage capacity, and that capital cost divided by (1000*365*1.19*10^-7) gives me the cost per PJa. Since my storage modelling is AFC-based, my capacity is the annual flow out of the storage, which is a little higher than my annual demand (I have modelled it so that the hydrogen flows only through the storage). Therefore the storage cost should be correctly reflected. I think this approach is right; what do you think?
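For readers following the unit conversion above: the factor 1.19*10^-7 is roughly the PJ content of one kg of hydrogen (about 119 MJ/kg, close to its lower heating value). A minimal sketch of the calculation, with a placeholder capital cost (the real figure is not given in the thread):

```python
# Convert a refuelling-station storage capital cost to MEuro/PJa,
# following the calculation described above.
kg_per_day = 1000          # station throughput, kg H2 per day
pj_per_kg = 1.19e-7        # energy content of hydrogen, PJ per kg
capital_cost_meur = 1.0    # placeholder capital cost of the 500 kg storage, MEuro

annual_flow_pja = kg_per_day * 365 * pj_per_kg   # annual demand in PJa
cost_per_pja = capital_cost_meur / annual_flow_pja
print(f"annual flow: {annual_flow_pja:.4f} PJa")
print(f"investment cost: {cost_per_pja:.1f} MEuro/PJa")
```

With AFC-based capacity equal to the annual outflow of the storage, the per-PJa figure then applies directly to that capacity.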


Regards
Abi Afthab
#8
Dear Antti,

One more reason why I did the storage modelling this way is that storage costs are available in this format in the literature. I was referring to a similar hydrogen-modelling study done with a TIMES model for the storage cost parameters, so it was easy to compare my costs with those reported in that paper.

It is happy news for me that the model solves in 8 minutes. May I ask what slight modifications you made? I had incorporated the following changes based on your earlier advice, but the model was still taking a long time to solve.

1) Disabled the dummy imports by setting START=2200 for them.

2) Made all cost values at least of the order of 10^-1, and none smaller.

3) Kept my transport demands in million vehicle-km so that the efficiency values are not too large, as you advised earlier.

Regards
Abi Afthab
#9
(27-06-2019, 06:17 PM)Antti-L Wrote: Big models tend to call for some additional meticulousness in the modelling process.
I tested the run with all the subRES files listed below, following my earlier advice plus some small additional fine-tuning:
$BATINCLUDE elcconventionaltechs.dd
$BATINCLUDE elcrenewabletechs.dd
$BATINCLUDE backupboilers.dd
$BATINCLUDE centralised_alk_industry.dd
$BATINCLUDE centralised_alk_mobility.dd
$BATINCLUDE centralised_pem_industry.dd
$BATINCLUDE centralised_pem_mobility.dd
$BATINCLUDE centralised_pipeline_industry.dd
$BATINCLUDE centralised_pipeline_mobility.dd
$BATINCLUDE centralised_smr_industry.dd
$BATINCLUDE centralised_smr_mobility.dd
$BATINCLUDE centralised_tubetrailer_mobility.dd
$BATINCLUDE decentralised_alk_industry.dd
$BATINCLUDE decentralised_alk_mobility.dd
$BATINCLUDE decentralised_pem_industry.dd
$BATINCLUDE decentralised_pem_mobility.dd
$BATINCLUDE decentralised_smr_industry.dd
$BATINCLUDE decentralised_smr_mobility.dd

The resulting model had 1,595,487 equations, and the run solved quite nicely, in less than 500 seconds (about 8 minutes), see below the console output after crossover:


I see you have abandoned my suggestion of bounding the outflow of the storage in each timeslice using FLO_SHAR, which you said you would incorporate. May I ask why you did not incorporate it, but instead changed all storage to be AFC-based?

Dear Antti,

In this file, did you see dummy imports in 2017 and 2018 for the commodities HYGNINDfscap and HYGNINDfsmer? I get dummy imports despite having enough capacity, and I do not understand why. In my model, the capacities of existing technologies are defined as 'STOCK' (I mentioned this in another post), with the same STOCK value kept until the end of the lifetime. In my current model, the stock values I kept are actually sufficient to meet the demand in 2017 and 2018, yet the model is still importing. When I tried NCAP_PASTI instead of STOCK, the model does take up the capacity. Unfortunately, changing from STOCK to NCAP_PASTI would mean a lot of changes in my subRES files. Could you tell me why this could be happening?

Regards
Abi Afthab
#10
No, the existing capacity defined for HYGNINDtscap and HYGNINDtsmer is not at all sufficient for the demands in 2017 and 2018, because of the following:

  – Your hydrogen demand profile for DEMHYGNINDcap and DEMHYGNINDmer is not even, but relatively steep. The peak occurs in timeslice S19Q04, where the demand fraction is 0.00208333 and the year fraction is 0.000494693, giving a peak load 4.21 times the annual average. Your peak duration time is thus only 2080 hours. More importantly, the total load in season S19 is 3.37287 times the average, which means that, for example, the production capacity of METHANEREFmer should be 3.74 times the annual demand (taking the storage efficiencies into account), unless you have seasonal storage. That would require a capacity of 106.4, but the existing capacity (defined by PRC_RESID) was only 43.5 in your model. So it is far from sufficient.
  – You have no seasonal storage for HYGNINDtscap and HYGNINDtsmer (earlier you had STS storage in the model, but for some reason not any longer), so the calculation above holds. I have also verified this by testing with your model, and I therefore increased the capacities of METHANEREFmer and METHANEREFcap accordingly, to eliminate the dummy imports. In reality, existing supply capacities must of course satisfy the historical demand.
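The peak-load arithmetic quoted above can be checked directly from the two fractions (a quick sanity check in plain Python, not TIMES code):

```python
# Peak-load factor for timeslice S19Q04, from the fractions quoted above.
demand_fraction = 0.00208333   # share of annual demand falling in S19Q04
year_fraction = 0.000494693    # S19Q04 duration as a fraction of the year

peak_factor = demand_fraction / year_fraction   # load relative to annual average
peak_duration_h = 8760 / peak_factor            # equivalent peak duration in hours
print(f"peak factor: {peak_factor:.2f}")
print(f"peak duration: {peak_duration_h:.0f} hours")
```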
#11
(28-06-2019, 07:32 PM)MohammedAbiAfthab Wrote: In my model, the capacities of existing technologies are defined as 'STOCK' (I mentioned this in another post), with the same STOCK value kept until the end of the lifetime. In my current model, the stock values I kept are actually sufficient to meet the demand in 2017 and 2018, yet the model is still importing. When I tried NCAP_PASTI instead of STOCK, the model does take up the capacity. Unfortunately, changing from STOCK to NCAP_PASTI would mean a lot of changes in my subRES files. Could you tell me why this could be happening?

I don't understand the issue. I can see that you have defined the existing capacity for SMR hydrogen production as follows:
PARAMETER PRC_RESID ' '/
BE.2017.METHANEREFcap 32.475
BE.2017.METHANEREFmer 43.5
BE.2037.METHANEREFcap 0
BE.2037.METHANEREFmer 0
/;
Thus, you are defining a gradual phase-out of the existing capacities, linearly interpolated from the 2017 values down to zero in 2037. If you instead want the capacity to stay at a constant value CA from 2017 until some year, say 2040, just define it so: PRC_RESID(2017) = CA, PRC_RESID(2040) = CA. Then the capacity remains at the constant value CA between 2017 and 2040. Or, if you had defined NCAP_TLIFE=25, you could also use interpolation option 5 to accomplish the same.
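To make the interpolation behaviour concrete, here is a small sketch (plain Python, not TIMES syntax) of how PRC_RESID values are interpolated between the defined data points, using the METHANEREFmer figures from above:

```python
def prc_resid(year, y0=2017, v0=43.5, y1=2037, v1=0.0):
    """Residual capacity, linearly interpolated between two data points."""
    if year <= y0:
        return v0
    if year >= y1:
        return v1
    return v0 + (v1 - v0) * (year - y0) / (y1 - y0)

# Phase-out as currently defined: capacity is already halved by 2027.
print(prc_resid(2027))                      # 21.75
# Constant capacity instead: define both points with the same value.
print(prc_resid(2027, y1=2040, v1=43.5))    # 43.5
```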

[EDIT:] I also fail to understand why you say that you would have to make a lot of changes in your subRES files if you used NCAP_PASTI instead of STOCK.
#12
(29-06-2019, 11:46 PM)Antti-L Wrote:
(28-06-2019, 07:32 PM)MohammedAbiAfthab Wrote: In my model, the capacities of existing technologies are defined as 'STOCK' (I mentioned this in another post), with the same STOCK value kept until the end of the lifetime. In my current model, the stock values I kept are actually sufficient to meet the demand in 2017 and 2018, yet the model is still importing. When I tried NCAP_PASTI instead of STOCK, the model does take up the capacity. Unfortunately, changing from STOCK to NCAP_PASTI would mean a lot of changes in my subRES files. Could you tell me why this could be happening?

I don't understand the issue. I can see that you have defined the existing capacity for SMR hydrogen production as follows:
PARAMETER PRC_RESID ' '/
BE.2017.METHANEREFcap 32.475
BE.2017.METHANEREFmer 43.5
BE.2037.METHANEREFcap 0
BE.2037.METHANEREFmer 0
/;
Thus, you are defining a gradual phase-out of the existing capacities, linearly interpolated from the 2017 values down to zero in 2037. If you instead want the capacity to stay at a constant value CA from 2017 until some year, say 2040, just define it so: PRC_RESID(2017) = CA, PRC_RESID(2040) = CA. Then the capacity remains at the constant value CA between 2017 and 2040. Or, if you had defined NCAP_TLIFE=25, you could also use interpolation option 5 to accomplish the same.

[EDIT:] I also fail to understand why you say that you would have to make a lot of changes in your subRES files if you used NCAP_PASTI instead of STOCK.

Dear Antti,

I changed everything to NCAP_PASTI and adjusted the model by removing some subRES files, but it has still been running for hours. Please see the snapshot; I have also attached the .DD and .RUN files.

Now, following your other post, I will try to correct the problem of the demand not being met. Maybe that will solve the issue?

Regards
Abi Afthab


Attached Files Thumbnail(s)
   

.zip   model (2).zip (Size: 523.56 KB / Downloads: 1)
#13
(30-06-2019, 01:11 AM)MohammedAbiAfthab Wrote: I changed everything to NCAP_PASTI and adjusted the model by removing some subRES files, but it has still been running for hours. Please see the snapshot; I have also attached the .DD and .RUN files.

I don't understand why you think that "changing everything to NCAP_PASTI" would help the model solve faster. Can you explain why you expected it to be useful in that respect (or otherwise)? As far as I can remember, I have never suggested that you use NCAP_PASTI instead of PRC_RESID.
#14
(30-06-2019, 05:03 AM)Antti-L Wrote:
(30-06-2019, 01:11 AM)MohammedAbiAfthab Wrote: I changed everything to NCAP_PASTI and adjusted the model by removing some subRES files, but it has still been running for hours. Please see the snapshot; I have also attached the .DD and .RUN files.

I don't understand why you think that "changing everything to NCAP_PASTI" would help the model solve faster. Can you explain why you expected it to be useful in that respect (or otherwise)? As far as I can remember, I have never suggested that you use NCAP_PASTI instead of PRC_RESID.


Yes, you are right: you never told me that using NCAP_PASTI would solve the problem. In one of my earlier runs, the model was solving when I tried NCAP_PASTI (with changes made only in the base-year file, just to see how it works). That is why I changed my subRES files accordingly as well (with NCAP_PASTI I could get rid of three subRES files whose technologies are already represented in the base-year template). But the problem is still not solved. Sometimes, when logic is not working, we just try different things by trial and error; that is why I tried this.


Now I have increased the existing capacity of my base-year technologies to fully meet the demand. I thought at least this would solve the issue, but the model is still not solving; it has been running for more than 2 hours now. I am not able to track down the problem.
#15
(30-06-2019, 05:13 AM)MohammedAbiAfthab Wrote:
(30-06-2019, 05:03 AM)Antti-L Wrote:
(30-06-2019, 01:11 AM)MohammedAbiAfthab Wrote: I changed everything to NCAP_PASTI and adjusted the model by removing some subRES files, but it has still been running for hours. Please see the snapshot; I have also attached the .DD and .RUN files.

I don't understand why you think that "changing everything to NCAP_PASTI" would help the model solve faster. Can you explain why you expected it to be useful in that respect (or otherwise)? As far as I can remember, I have never suggested that you use NCAP_PASTI instead of PRC_RESID.


Yes, you are right: you never told me that using NCAP_PASTI would solve the problem. In one of my earlier runs, the model was solving when I tried NCAP_PASTI (with changes made only in the base-year file, just to see how it works). That is why I changed my subRES files accordingly as well (with NCAP_PASTI I could get rid of three subRES files whose technologies are already represented in the base-year template). But the problem is still not solved. Sometimes, when logic is not working, we just try different things by trial and error; that is why I tried this.


Now I have increased the existing capacity of my base-year technologies to fully meet the demand. I thought at least this would solve the issue, but the model is still not solving; it has been running for more than 2 hours now. I am not able to track down the problem.

I made a small modification in the model related to my storage. Fortunately, the model now solves, although it took 54 minutes.

