The math efforts mentioned last time are still ticking along, but keeping my own passion for the project alive doesn’t fit well with letting it languish completely until I crack that nut. Consequently I’ve gotten a number of important but sometimes low-visibility things done since the last post.
Possibly the oldest part of the racing UI at this stage was an element that showed some properties of the relationship between the player’s ship and a given object: relative speed, distance, heading information and the like. It’s been of little use and partially broken for quite a while now, and I’ve finally replaced it with something more useful.
The old panel originally required you to type in the name of the object you wanted information on, and for a while had simply been fixed to the largest object in the system. The new data panel makes use of the recently implemented improved system tree querying to detect which object currently dominates your motion (in other words, whatever you are orbiting or flying by) and uses that as its default, with buttons in the system summary to override it. The panel also shows directions you’ll probably need to know if you are hand-editing common orders, and some basic orbital information.
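I haven’t shown the actual query code, but the idea of “which object dominates your motion” can be sketched by comparing gravitational acceleration from each body and picking the strongest. Everything here (the `Body` type, the `mu` field, the 2D positions) is a hypothetical stand-in for the game’s real system tree, not its implementation:

```python
from dataclasses import dataclass

@dataclass
class Body:
    name: str
    mu: float              # gravitational parameter GM (m^3/s^2)
    position: tuple        # (x, y) in a shared frame, metres

def dominant_body(ship_pos, bodies):
    """Return the body exerting the strongest gravitational
    acceleration on the ship: a = mu / r^2."""
    def accel(body):
        dx = body.position[0] - ship_pos[0]
        dy = body.position[1] - ship_pos[1]
        r2 = dx * dx + dy * dy
        return body.mu / r2 if r2 > 0 else float("inf")
    return max(bodies, key=accel)
```

With real-ish numbers this picks Earth over the Sun for a ship in low orbit, even though the Sun is vastly more massive, which matches the intuition of “whatever you are orbiting or flying by”.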
Working on this has also led to tightening up some of the panel layout and text rendering settings generally, visible in a full screenshot. There’s still plenty of room for improvement here, especially for higher resolution displays, but every step is useful.
Under the hood, a number of edge cases in objective evaluation have been cleaned up. Due to the nature of the objectives, the timeline as a whole needs to be examined to validate the objective state, which can take up some runtime. This process is given a lower priority than simulating motion, so after a course change the objectives aren’t rechecked until the ship’s path is fully simulated again. This meant that if you met the objectives and then made a course change, the level could now be failed, but the objectives wouldn’t notice for potentially several seconds. That’s undesirable for several reasons, so the objectives now get a single evaluation cycle at the end of every simulation frame, letting them always notice and react to course changes. A single check is cheap enough not to be a significant cost.
At the same time, objective evaluation has been made aware of how much of the timeline has been checked: it reports a preliminary state immediately, but the level isn’t considered passed until that coverage is complete.
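The combination of the two changes above (a bounded evaluation cycle each frame, plus coverage tracking with a preliminary state) can be sketched as follows. The class name, the objective signature, and the time-based coverage budget are all my own invention for illustration, assuming objectives are predicates over a slice of the simulated timeline:

```python
class ObjectiveTracker:
    """Incrementally checks objectives against a simulated timeline.
    The state is only 'preliminary' until the whole timeline has been
    covered; a course change invalidates coverage past that point."""

    def __init__(self, objectives, timeline_length):
        self.objectives = objectives        # predicates: obj(timeline, window) -> bool
        self.timeline_length = timeline_length
        self.checked_up_to = 0.0            # seconds of timeline already validated
        self.failed = False

    def invalidate(self, from_time):
        # A course change rewrites the timeline from `from_time` onward,
        # so any coverage past that point is stale.
        self.checked_up_to = min(self.checked_up_to, from_time)
        self.failed = False

    def evaluate_once(self, timeline, budget):
        # One bounded evaluation cycle per simulation frame:
        # advance coverage by at most `budget` seconds of timeline.
        end = min(self.timeline_length, self.checked_up_to + budget)
        window = (self.checked_up_to, end)
        if not all(obj(timeline, window) for obj in self.objectives):
            self.failed = True
        self.checked_up_to = end

    @property
    def state(self):
        if self.failed:
            return "failed"
        if self.checked_up_to >= self.timeline_length:
            return "passed"
        return "preliminary-pass"
```

The design point is that `evaluate_once` is cheap and runs every frame, so a course change is noticed immediately, while the expensive full-timeline confirmation accumulates in the background via `checked_up_to`.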
With that done, and a few other legacy bad decisions cleaned up along the way, victory detection is now in a much better place than it was. I next turned my attention to level selection, score saving and save loading. The level select screen isn’t pretty yet, but it now shows most of the information I want it to.
When you complete a level’s objectives, it scores your solution in three categories: speed, simplicity, and efficiency. If a solution beats your existing score in a category, it is saved, and you can load it again at any time. There’s also a single save slot you can use to keep one other solution. The level select screen now shows a basic summary of the level’s contents and of your scores and saved solutions. In the future it should also show information about the general playerbase’s solutions so you can compare.
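The per-category “save if it beats your best” logic might look something like the sketch below. The function and its arguments are hypothetical, and which direction counts as better in each category is my assumption (here, lower is better for speed, higher for the rest):

```python
CATEGORIES = ("speed", "simplicity", "efficiency")

def record_solution(best, solution, scores, lower_is_better=("speed",)):
    """Save `solution` under each category where `scores` beats the
    existing best. `best` maps category -> (score, solution).
    Returns the list of categories where the solution was saved."""
    saved = []
    for cat in CATEGORIES:
        score = scores[cat]
        current = best.get(cat)
        better = (current is None or
                  (score < current[0] if cat in lower_is_better
                   else score > current[0]))
        if better:
            best[cat] = (score, solution)
            saved.append(cat)
    return saved
```

One consequence of keeping a best per category is that a single attempt can displace up to three stored solutions at once, or none at all.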
Finally, all the tunable values of the bloom settings are now exposed to the user as profile options, along with an option to scale line width. This should let people who find my preferred bloom settings overdone or insufficient dial in their own comfort zone.