Just a question while I am waiting for the compiler...
The mhl patch currently only consists of changes to some .h and .cpp files, settings.ini, settings_type.h and english.txt.
I am changing things in a fairly early patch of the queue, so for a full test I have to apply the other patches on top; if that full test shows something to be fixed, I have to go back again.
By far the most time in this workflow is spent compiling, simply because english.txt and settings.* have dependencies nearly everywhere.
Did you ever think about splitting those files up into module-oriented ones, e.g. english_vehicles, english_mapgen, settings_vehicles, settings_mapgen, and so on, such that a change in mapgen wouldn't trigger recompiling everything from AI to vehicles? Would that be feasible?
Or is there another way to circumvent that compile-time problem?
Compile dependencies
Moderator: OpenTTD Developers
Re: Compile dependencies
Splitting the settings up will mean that there won't be a simple check for _settings_game and _settings_newgame, but there would need to be many different _settings_game_XXX and _settings_newgame_XXX variables.
For strings it's even nastier; their IDs are almost always derived from their order. As a result, adding a string at the beginning will change the IDs of all following strings, and as such you'd still need to recompile everything.
You might want to consider taking a look at ccache to avoid actually recompiling files whose content has not changed but whose modification time was bumped by popping and pushing patches (--with-ccache).
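A tiny shell sketch of the problem ccache solves (file names are made up): make-style builds decide on timestamps alone, so a qpop/qpush cycle that re-touches an unchanged file still forces a rebuild, whereas ccache hashes the preprocessed source and can return the cached object instead.

```shell
# The mtime problem in miniature: freshness checks compare timestamps,
# so bumping a file's mtime without changing its content still looks
# like "needs rebuild" to make. (All file names here are invented.)
cd "$(mktemp -d)"
echo 'int main(void){return 0;}' > demo.c
touch demo.o                 # pretend demo.o was built from demo.c
sleep 1
touch demo.c                 # qpop/qpush-style mtime bump, same content
[ demo.c -nt demo.o ] && echo "rebuild triggered"   # prints "rebuild triggered"
```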
Re: Compile dependencies
Rubidium wrote:Splitting the settings up will mean that there won't be a simple check for _settings_game and _settings_newgame, but there would need to be many different _settings_game_XXX and _settings_newgame_XXX variables.
Yes, it would be more global variables, but they would follow a common namespace convention.
Rubidium wrote:For strings it's even nastier; their IDs are almost always derived from their order. As a result adding a string at the beginning will change all IDs of the following strings, and as such you'd still need to do a compile of everything.
Hm, what data type do their IDs have? One might define a range for each english_foo.txt, e.g. from_id = 10000, to_id = 19999; if the underlying data type were a 32-bit integer, one could probably make the blocks large enough for all practical purposes. A check for non-overlapping blocks could also be implemented statically: count the non-commented-out STR_foo entries in a file and check whether that count is smaller than the block size.
Rubidium wrote:You might want to consider taking a look at ccache to prevent having to actually compile stuff that has not changed, but where the modification time changed by popping and pushing (--with-ccache).
Thank you for that hint, I will have a look at it. Maybe it already solves my problem.
Re: Compile dependencies
Running 'make' in parallel also helps a lot: use one job for each CPU core you want to use; often throwing in one or two extra jobs pushes the limit even further.
For example, to have 5 jobs running in parallel, do 'make -j5'. (You don't want to use '-j' without a number, as that starts ALL compile jobs at the same time, which is murder for any form of throughput.)
If you have 'time', you can easily measure the time it takes to compile ('time make -j5').
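A small sketch of picking the job count automatically (this assumes GNU coreutils' nproc is available; the "+ 1" is the rule of thumb from above, adjust to taste):

```shell
# One or two jobs more than the number of cores often helps hide I/O waits.
jobs=$(( $(nproc) + 1 ))
echo "time make -j${jobs}"   # the command you would then run in the source tree
```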
EDIT: Another trick is to make a new patch 'stuff_to_merge' at the end of the queue, where you collect things that belong early in the patch queue. Once you have collected some things, you can go back and 'hg qfold stuff_to_merge' the stuff. There is a bigger risk of merge conflicts, though.
Being a retired OpenTTD developer does not mean I know what I am doing.