Saturday, July 30, 2011

50 – Nostalgia trip

Well, here we are at milestone post number 50. I am only counting posts that are clearly labeled as bringing new features or progress in development. Unfortunately, post 50 kind of crept up on me and I did not prepare something well in advance.

So something from the backlog will have to do. Something that I have been foreshadowing in the past. I did not get to finish it back then, mainly because of the blasted ramps. But now that the engine is rampless, I managed to get in the final touches. So, without further ado, here it is:

Alternative topdown view
Wait a minute. Topdown again? Are we back to that? Now don't worry. This mode is an alternative view point on the action; it will never be the default and will not take up any development resources. With the press of a button you can switch on the fly from isometric to this:

I'll go right ahead and list all the advantages of having an alternative "2D" mode:

  • It is basically free. I had to write it once, but it was very easy. Unlike isometric, rendering in topdown mode is trivial: just two loops. Since this mode is only an alternative view, it renders the map and lets you issue commands just like in isometric mode, but the rendering code is a separate, stand-alone entity. What works in one works in the other. Resources and content are only created once.
  • I do not need a second set of graphics. As you can see, the game looks pretty much the same in both modes. Isometric tiles are converted into square tiles automatically by the editor. This conversion is not perfect, and an artist could do so much more than an automatic conversion, but since I do not want to invest greatly in this alternative mode, it will have to do.
  • It is almost impossible to break. Since rendering like this is so simple, there is very little that can go wrong. When I suspect a bug, I can test in this mode, rule out the rendering engine as the cause and concentrate on the real source of the problem. Also, developing new features can be easier to do in this mode first.
  • It offers a good view point of the action. Maybe not at this zoom level, but once we zoom out a little: 

At this zoom level the contrast could be a lot better. Maybe a schematic strategic overview rendering mode should be added in the future.
  • Free-form zoom. This is not implemented yet, but since topdown does not have strict tile size and proportion rules, you can create any zoom level. In the future I'll add this possibility and make this rendering mode play a double role, the second being a strategic whole-map display. Maybe a minimap too.
  • It is the perfect point to relaunch the 3D investigation. The previous 3D engine was a failure, but the new one might take the 2D mode as a starting point. The first phase would be to draw the same thing, but this time using polygons. Then add a little zoom and rotate. And then a tiny bit of perspective.
  • More performance. Rendering topdown is faster. While I do try to make the isometric engine as fast as possible, topdown mode might enable a few extra people with relatively old hardware to run the game.
  • Good starting point for a portable port. Imagine that: DwarvesH on an iPad or an Android tablet (on a phone it is harder because of the small screen, but maybe doable). The added performance of the 2D engine, together with some platform-specific optimizations (there is one special optimization I can do in topdown mode that would be a lot of effort on PC but could greatly improve framerate on weaker devices), could enable the game to work on these portable devices.
There are a few other advantages that do not come to mind right now, but you get the idea: having this secondary render mode is an advantage, especially if it only took 80 lines of code to implement.
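The "just two loops" claim can be sketched in a few lines. Everything here (Map, tileAt, drawTile) is an illustrative stand-in, not the actual engine code:

```cpp
#include <cstdio>

// Illustrative map: tileAt would normally index real tile data.
struct Map {
    int w, h;
    int tileAt(int x, int y) const { return (x + y) % 256; } // fake tiles
};

// In the real engine this would blit a square tile from a 256-entry sheet.
void drawTile(int tileId, int px, int py) {
    std::printf("tile %3d at (%d, %d)\n", tileId, px, py);
}

// The whole top-down pass: one loop over rows, one over columns.
void renderTopDown(const Map& m, int tileSize) {
    for (int y = 0; y < m.h; ++y)
        for (int x = 0; x < m.w; ++x)
            drawTile(m.tileAt(x, y), x * tileSize, y * tileSize);
}
```

Contrast this with an isometric pass, which also has to worry about draw order, tile overlap and diamond-shaped screen coordinates.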

One additional feature this mode has is current level highlighting. Let's start with the above map, but viewed from another point:

Here we only see a single Z level. But if we go to a higher level, where more than one Z level is visible, we get this effect:

I'll add here a few more images, each taken from a consecutively higher Z level:

One thing I could change is making the effect vary in intensity according to the height difference, to give a better overview of elevation. Not a priority right now.
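That intensity tweak could be as simple as a per-tile brightness factor. This is only a sketch of the idea; the function name, the 15% fade step and the clamp are made-up values, not engine numbers:

```cpp
#include <algorithm>
#include <cmath>

// Returns a brightness multiplier in [minBright, 1.0]; currentZ is the
// level being viewed, tileZ the level the tile sits on.
double levelBrightness(int currentZ, int tileZ, double minBright = 0.4) {
    int drop = currentZ - tileZ;       // how many levels below the camera
    if (drop <= 0) return 1.0;         // current level: full brightness
    double b = 1.0 - 0.15 * drop;      // fade 15% per level (tunable weight)
    return std::max(b, minBright);     // clamp so deep tiles stay visible
}
```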

This dual rendering mode caused me to reorganize the cache folder's structure, streamline tile loading and ordering, and make all tile sheets use 256 tiles (in the past some had 400). This keeps perfect uniformity between isometric and topdown, and makes both my life and the lives of future modders easier.

Very precise mouse cursor
You may have noticed in my past videos that mouse movement is kind of awkward. I had to maintain a high level of concentration to partially hide this fact while recording them. The fix was easy but tedious, so I did not bother with it until now. But the new precision is a godsend! I can now do very precise selections and it is near perfect. There is a small accommodation phase if you turn off half-height walls (the default is on), because the mouse tracks the surface of floors, not the top of walls, but after you get used to this you should have no trouble in either mode.

With this last change I consider the rendering and input engine done! From now on it is only content and new mechanics.

I still have a few posts to do, but after that a new pre-alpha phase will start. This phase will also include third-party (but probably not open) testing, and once it is finished, version 0.1 will be complete. I looked over the feature overview I did a few months ago, and things are looking pretty good. A few things were left out, a few things were added, and I am roughly on schedule.

But with the newly added things, "Stoneage" will probably not make the deadline. If this happens, I'll try to think of something to compensate. Maybe a public tech demo. I don't know.

Wednesday, July 27, 2011

Early concept art and announcements

It's time for some announcements! First, let me get the ones I am not going to go into in detail out of the way.

DwarvesH (under a new name) should be coming to IndieDB soon. I've done some initial research and it seems to be a fine platform for getting some exposure. So I can secure a huge fan base that I can continue to endlessly tease with release dates that keep getting pushed back. OK, I won't do that, I am just making fun of the release date dynamics of such ambitious yet very small team projects. So if you have some quick and dirty gossip related to IndieDB that I am not aware of, things like "they charge bandwidth after 1GiB of total traffic", "there is a strong but silent anti-dwarf movement on that site" or "imma let you finish, but BringieGM is the best indie site of all time!", then let me know now before it is too late! :) I'll keep this blog as the main communication channel, where I go into my usual detail. I'll write more on this topic when the game is up on IndieDB. Of all time!

The second announcement is that soon I am going to be accepting purely optional donations. A pretty run-of-the-mill process, so you should be familiar with it from other games or open source projects. I'll do a lengthy post about donations, the motivation behind them, what the conditions are and the future business model when the time is right.

But what I am going to talk about more is my new collaborator: Lucifielle. Say "Hi Lucifielle!"! Or more precisely, what we are going to be collaborating on. As you may know, my previous attempt at getting some graphics into DwarvesH did not go over that smoothly. I started with placeholder art taken from other compatible sources, like Stonesense, and had absolutely zero luck finding a local talented artist interested in working with me. That is very strange, seeing that I am virtually surrounded by people in the IT and design fields. And I also abide by the highest dwarven hygiene standards. I had a short and fruitful collaboration with someone, but said person decided to ignore all my attempts at communication after a point, without explanation. Strange again, since I also abide by the highest dwarven personality and pleasant-company standards. Right now I am collaborating with another person, but it is not working out too well.

So third time's a charm! Right? Let's hope so. If not, there is always number four. Like in that movie.

With the help of Lucifielle I'll try to give a distinctive look to the game, starting with the dwarves. Right now we are at a very early stage of creating concept art. I call this exploratory concept art. The general look, shape, size, facial features, clothing style and everything else are not set in stone yet, and we are trying a lot of things. First, body shape. Here is one of the first early sketches:

These are really "top of the head" sketches. Soon the look will become more stable, but the first experiments are going to be based on these two bodies:

Dwarven women with beards and chest hair FTW!

Thursday, July 21, 2011

Z2 – 02 – How to start? (part 2)

In order to break the trend of huge Z2 posts I’ll dive right into it, going for the benchmark explanation and skipping most theory and other things I wanted to mention in part two.

For this test, I’ll consider 26 master sets of classes, each contained in its own file. Each master set is named after a capital letter of the English alphabet: so we’ll have master set “A” in the file “A.zsr”, master set “B” in the file “B.zsr” and so on, up to “Z” in “Z.zsr”. A master set file will be using the file that comes next in alphabetical order, so “D” will be using “E”, except for “Z”, which won’t be using any other file. Taking the master set’s name, we append three other characters, again from “a” to “z” (lower case), in order to get every possible permutation. So we’ll get the names: “Aaaa”, “Aaab”, … “Aaaz”, “Aaba”, …, “Aabz”, …, “Zaaa”, …, “Zzzz”. All names that start with the capital letter corresponding to a master set will be in the same file. Each name is the name of a class. Each class contains 26 constants, named from “A” to “Z”. The constants from a master set are initialized with the same constants from the similarly named class in the very next master set, plus one, so Aaaa.A = Baaa.A + 1, Fghj.O = Gghj.O + 1 and so on, except for the constants in the “Z” master set, which are initialized with values from 0 to 25.

So we have 26 files, with 17,576 classes each (26 * 26 * 26). Every class has 26 constants, so every file has 456,976 constants. In order to initialize this number of constants, we need the next 456,976 constants from the next master set/file. So the total number of constants is 11,881,376. Now clearly, this test is completely ludicrous and no compiler on Earth is expected to be able to compile this, and if it is somehow capable of actually compiling it, it will take a lot of time and use an astronomic amount of resources.
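For the curious, a generator for one such master-set file fits on a page. The Z2 syntax below (class/const/using) is guessed from the screenshots and may not match the real grammar, but the counts follow the description: 26^3 = 17,576 classes per file, 26 constants each, 26^5 = 11,881,376 constants in total.

```cpp
#include <cstdio>
#include <string>

// Hypothetical generator for one master-set file of the benchmark.
std::string masterSet(char set) {
    std::string out;
    char buf[64];
    if (set != 'Z') {
        std::snprintf(buf, sizeof buf, "using %c\n\n", set + 1);
        out += buf;  // every set pulls in the alphabetically next one
    }
    for (char a = 'a'; a <= 'z'; ++a)
        for (char b = 'a'; b <= 'z'; ++b)
            for (char c = 'a'; c <= 'z'; ++c) {
                std::snprintf(buf, sizeof buf, "class %c%c%c%c {\n", set, a, b, c);
                out += buf;
                for (char k = 'A'; k <= 'Z'; ++k) {
                    if (set == 'Z')  // the last set holds plain values 0..25
                        std::snprintf(buf, sizeof buf, "\tconst %c = %d\n", k, k - 'A');
                    else             // everyone else: next set's constant + 1
                        std::snprintf(buf, sizeof buf, "\tconst %c = %c%c%c%c.%c + 1\n",
                                      k, set + 1, a, b, c, k);
                    out += buf;
                }
                out += "}\n";
            }
    return out;
}
```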

Each file corresponding to a master set has a size of 10.3 MiBs, except for the last one, which is 6.6 MiBs. The main program file, the one that prints a subset of these constants, is 7.4 MiBs and tries to print only the “Z” subset. The total amount of disk space used by the test suite is 272.6 MiBs. A ludicrous amount of constants. Let me add some screenshots, first from “A.zsr”:

Then “Z.zsr”:

And finally “main.zsr”:

But when firing up the Z2 compiler, with a “main.zsr” that only uses the “Z” master set, we get very interesting results: the first time it takes roughly two seconds, and on successive tries it goes down to 1.5 seconds (caching must be kicking in). The execution time is very good, but then again we are not asking the compiler to do complicated stuff, only a lot of simple tasks. Memory-wise, during compilation it eats up between 150 and 180 MiBs. When I set out to experiment with this project, I knew that I wanted to achieve the paradoxical goal of getting better compilation times with Z2 than with C++, and while I am not there yet, at this early stage at least the Z2 compiler is not hindering me. And Z2 compiles the constants without any requirements on their order, all while checking for circular dependencies.

But the interesting part is related to the resulting C++ file. And here is where I encounter my first huge roadblock: the resulting C++ file is 23.1 MiBs. Notepad needs a good deal of seconds to open it and my C++ IDE has some troubles with editing the file with syntax highlighting. And when I tried to compile it the compiler gave me this message:

fatal error C1128: number of sections exceeded object file format limit : compile with /bigobj
test2: 1 file(s) built in (7:31.77), 451779 msecs / file, duration = 454057 msecs, parallelization 0%

After working for over 7 minutes, it gave up, reporting that the generated code exceeds an internal limit on the number of sections in an object file. This was in debug mode. Let me try in optimized mode:

fatal error C1128: number of sections exceeded object file format limit : compile with /bigobj
test2: 1 file(s) built in (5:22.68), 322688 msecs / file, duration = 322782 msecs, parallelization 0%

Now it takes less time, probably because it does not have to generate debug information, but it still fails. I am going to have to think seriously about this problem. Is it worth fixing? Can it be done? Does using the “/bigobj” option help? I could try to inline the constants into the statements where they are needed, so the backend compiler does not need to parse the constant definitions, but the resulting C++ code would be less readable. Using header files goes against the principles of this project. I could also try to break up the resulting C++ file into a lot of small ones. I would probably need to use a combination of methods.

Anyway, I can’t continue the official testing phase until I can get the resulting C++ code to compile.

But I can test Z2 a little bit more. I’ll change “main.zsr” to use “Y.zsr” and print the constants from that file. And the results are quite predictable: double the number of constants, double the time and memory use in the worst case. After caching sets in, time goes down to as low as 2.3 seconds. Memory consumption does not go down. It is important to note that the “Y” set uses more RAM than the “Z” set, because “Y” constants are “Z” constants plus 1, while “Z” constants are just plain integers.

Repeating the experiment with the “X” set is not possible on this computer. Either I do not have enough RAM or there is a problem with the memory allocator. Anyway, the compiler handled 913,952 constants and choked before reaching the next milestone, 1,370,928 constants. Not that bad for some fresh code without any actual optimizations or a lot of development time sunk into it.

And such a short Z2 post! After more tests I’ll put up the test suite and the compiler somewhere on the Intewebz. See you next time!

Tuesday, July 19, 2011

49 – Not Skynet

Today I will be doing the final steps necessary to finish this iteration of the grass system. A little cosmetic touch, without any serious influence on the gameplay.

But in order to demo this, I need a way to make dwarves walk a lot on the same path.

So I decided to not cut any corners and get started prematurely on a basic A.I. system. I want an A.I. capable of taking high-level objectives like "build a successful settlement", "create a huge dam" or "become a feared military power". It will break down these objectives into a lot of small tasks and try to solve them one by one while reacting to the environment. It will take a lot of time before the A.I. becomes more than a bumbling jester tripping over a rock and destroying everything it has built, but I need to start somewhere:

Basic A.I. movement system
The first micro task the A.I. should be able to do is move dwarves around. This may not seem like much, but this movement is different from the normal movement triggered when the user issues a command. Three actions are implemented.

The first one is the "Group Move" action. This will cause all dwarves to move to a desired area. There is support in the GUI for designating said area for testing purposes. Not that interesting and I do not like the feel of the group movement so I won't be demoing this today.

The second action is the "Random Move". When given this order, the dwarves will continually select destination squares from the designated area and run to them, doing this until they get exhausted. Selection of destinations is random but uniformly distributed, so in the long run every square will be visited roughly the same number of times. This is also true for a normal "Group Move", even though there a destination is chosen only once.

The third action allows you to cancel the move commands.

Right now these commands are issued by the user. After I add a few more commands, like "Place Stockpile in a Strategic Location" and "Harvest Everything", I will demo a game session completely devoid of user input.
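Of the three actions, the "Random Move" destination picker is the easiest to sketch. A minimal version, assuming the designated area is an axis-aligned rectangle of squares (Area and Square are illustrative names, not the real DwarvesH types):

```cpp
#include <random>

struct Square { int x, y; };

// Inclusive rectangle of squares; pickDestination is uniform over the
// area, so in the long run every square gets visited roughly as often.
struct Area {
    int x0, y0, x1, y1;
    Square pickDestination(std::mt19937& rng) const {
        std::uniform_int_distribution<int> dx(x0, x1), dy(y0, y1);
        return { dx(rng), dy(rng) };
    }
};
```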

As an implementation detail, I started using lists for the A.I. task lists. This was slightly harder than expected, since U++ does not provide a classical list container. Now, lists are the prime choice for a lot of tasks, and when their use is justified they usually destroy the competition performance-wise, but they are not a particularly good general-purpose container. With emphasis on container. Even in cases where lists are recommended, they usually shine because the accent is on the node, not the container itself, which plays second fiddle in such cases. I don't want to bore you with the theory of lists. If only I had a place where I could rant on for pages about programming languages. *Wink Wink*. Expect a comprehensive post about lists in the Z2 section one day.
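Since U++ has no classical list container, the node handling has to be rolled by hand. A bare-bones sketch of what such a task list might look like (Task and TaskList are illustrative names, not the real DwarvesH types); the payoff is O(1) unlinking of a finished task from the middle of the queue:

```cpp
#include <string>

// Intrusive doubly linked node: the task carries its own links.
struct Task {
    std::string name;
    Task* prev = nullptr;
    Task* next = nullptr;
};

struct TaskList {
    Task* head = nullptr;
    Task* tail = nullptr;

    void pushBack(Task* t) {
        t->prev = tail;
        t->next = nullptr;
        if (tail) tail->next = t; else head = t;
        tail = t;
    }

    // O(1) removal given the node, no search needed.
    void remove(Task* t) {
        (t->prev ? t->prev->next : head) = t->next;
        (t->next ? t->next->prev : tail) = t->prev;
        t->prev = t->next = nullptr;
    }
};
```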

If everything works out as planned, I'll transition all tasks to lists and greatly increase the number of tasks that can be scheduled simultaneously; this is something that needs to be implemented and tested before version 0.1.

Grass erosion due to heavy trampling
This is why I needed a way to make dwarves walk in a controllable fashion all over the place: grass gets trampled if you walk over it. I may need to adjust the weights for the erosion, but I like the effect. It is one of those small touches that can improve the experience and its perceived depth without really affecting gameplay. Once I have animals in game they will also eat grass if they are herbivores. Combine the two sources of grass erosion and the landscape will seem a little bit more dynamic.

I consider the grass system finished for now.

Here is a short video demoing the A.I. control panel and grass erosion.

Monday, July 18, 2011

Screens of the day 12 - Not that big

In the above video you can see a map with a playable area of 4500 square meters.

I wanted to create a 10 square kilometer area as a first test, followed by a 20 one. I reckoned that there would be some problems when a large area needs to be updated, like when grass needs to grow. And I was right. I do encounter a big snag every time there is a seasonal or daily change. This can be fixed by breaking up the update operation into smaller chunks and executing them in order on consecutive cycles. I won't implement this since I am not planning on allowing such massive maps; I am only doing some tests.
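The chunk-splitting fix I decided not to implement would look roughly like this: spend a fixed budget of tiles per cycle and remember where the pass stopped. A sketch under assumed names, not actual engine code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Spreads one full-map "grow" pass across several update cycles.
struct ChunkedGrower {
    std::vector<int>& grass;   // per-tile grass level
    size_t cursor = 0;         // where the last cycle stopped

    explicit ChunkedGrower(std::vector<int>& g) : grass(g) {}

    // Advance at most `budget` tiles; returns true when a pass completes.
    bool tick(size_t budget) {
        size_t end = std::min(cursor + budget, grass.size());
        for (; cursor < end; ++cursor)
            ++grass[cursor];               // "grow" this tile
        if (cursor == grass.size()) { cursor = 0; return true; }
        return false;
    }
};
```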

But small snags were just the beginning of the problem. After a certain threshold maps start to randomly segfault on creation. This does not happen every time. Figuring out why everything works fine on let's say 4800 size maps but breaks on 4801 is no easy task.

And there is a secondary threshold after which hardware acceleration fails to kick in. Luckily, Irrlicht is graceful about this failure and falls back to software rendering.

I will be slowly fine-tuning the process to make sure that 10 square km maps work fine, but for now I'll abandon the more ambitious 20 square km ones. After my investigation I can say with certainty that the 20 square km maps would be pushing the limits of a 32-bit build. Probably even exceeding them.

On the other hand, I no longer have guests :(. So back to regular schedule. I have two important posts planned for the week and I'll keep you updated with "Screens" posts in the meantime.

Wednesday, July 13, 2011

Z2 – 01 – How to start? (part 1)

A very good question! How to start the series, especially since I want to avoid major rants like the introduction was? My first idea was to really dive into the belly of the beast and tackle one of the major failings of C when viewed from a modern perspective. Strings are a perfect candidate, strings in C being one of its most horrible misfeatures. Strings are bad for a lot of reasons that I will get into at a later date, but one of them is that they inherit everything that is bad about C arrays. So tackling arrays first would make more sense. But arrays are bad due to their design plus the bad API used to handle common tasks on them. So I would again get a huge post that tries to tackle too much at once. Plus doing all the coding would take too much time.

Diving into advanced topics without presenting syntax may seem strange, but it is a widely used approach. A lot of programming books have a first chapter that walks you through a large set of features as an introduction. But since I’ll skip arrays and memset for now, maybe I could do a post about syntax instead. But I found that boring to write and to read: in my introduction I made a huge case out of how important syntax is, yet getting hung up on small details is just silly, and this potential post would be all about the small details.

So I’ll talk about definitions and dependencies, touching just a little bit on syntax and going into circular dependencies. The way a language handles modules is a vital point, and C handles it the same way it handles most of its “features”: it has zero built-in support and misuses another feature to give a workable but often problematic solution. C has no module support (and passes this lack on to C++); it uses the preprocessor to stitch together a forward-declaration system and the linker to handle the final processing and merging of your object files. You compile every single source file separately and then the linker puts them together, often causing link errors when something is not found or is repeated. This would never happen with a module system. A lot of programmers are not aware how rudimentary this system is, how little C does when compared to any modern (and a few old) alternatives, because by having a good and consistent convention you can mask a lot of the problems. But still: how many times have you had problems including headers, especially from third-party components? If your answer is “never”, then either you are part of a very lucky minority or you have never had to work with huge code bases.

So having good module support is essential for a language and thus for Z2. Another facet of a module system is how you handle modified sources, incremental builds and the search for other modules. For C on Linux, one usually uses “make” or a more advanced IDE. Make is a simple but generally good tool for automating some tasks. But it is particularly poorly suited to the needs of C, so a more powerful tool is needed. This is where “autotools” and friends come in. A.K.A. the Antichrists! Yes, plural! I will not talk about autotools for fear of my head exploding due to sheer cranial pressure induced by massive rage. Maybe I’ll write a post in which I systematically analyze and give arguments why autotools and friends, and any other tool built on the same principles, are not the right tools for the job.

But the conclusion is that the Z2 compiler will handle these tasks for you. Sure, you will still need to tell it where to search your file system for modules, and you will be able to use shell scripts or autotools or whatever to do that, but the actual building is handled by Z2. You will be able to tell it only the location of your main source file, and the compiler will handle everything, automatically locating every module that is in the “object search path” and compiling only what is needed based on timestamps. As an added bonus, it will pull in definitions only once per module per compiler session. Look at any compile-time breakdown and you will see that the preprocessor ends up taking a disproportionately large chunk of the total compilation time. Headers get pulled in and preprocessed in every compilation unit. Add C++ template instantiation to that and you will see why compiling C++ is so slow. There is this myth that C++ compilers are extremely fast. And it is true, but they have to do so much more than compilers for other, better designed languages that the end result is a lot worse.
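The timestamp-based rebuild decision boils down to a small recursive check. A toy model of the idea (Module and its fields are illustrative, not the actual Z2 compiler's structures; it assumes the module graph is acyclic):

```cpp
#include <string>
#include <vector>

// A module is recompiled if its source is newer than its cached output,
// or if any module it uses needs a rebuild or was rebuilt more recently.
struct Module {
    std::string name;
    long srcTime = 0;             // last-modified time of the .zsr file
    long outTime = 0;             // last-modified time of the cached output
    std::vector<Module*> uses;

    bool needsRebuild() const {
        if (srcTime > outTime) return true;        // source was edited
        for (const Module* m : uses)
            if (m->needsRebuild() || m->outTime > outTime)
                return true;                       // a dependency is newer
        return false;
    }
};
```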

So C has zero module support, it relies on clunky external tools that have (and need to have) a huge pile of flaming ancient wisdom in them in order to be portable, and it uses the preprocessor to handle definitions while not having any built-in mechanism to do this or even to prevent multiple definitions. And we have not even gotten to the capabilities of the compiler itself. Which are lacking again. There are multiple ways to handle locating the definitions of items in different parts of the code, and C does the most basic of them: take them in order. And when this is not possible, it uses forward/extern declarations and header files. Based on moving around blocks of text. The compiler can only refer to entities that it has encountered before in the linear process of compiling a single file.

Finally, here we are at the topic of this post! After two pages of ranting! Z2 being a research compiler, I will be doing the very opposite of what C does: full circular reference resolution. There are other methods that make some slight compromises to get better performance, but that is not essential for our needs. The compiler will be able to reference any object that is included in the module, or in other modules used by the current one, without the programmer having to think about how to assure visibility by manually placing items at key locations. This feature can be abused by programmers, making things unreadable, but I am going to assume that you are working with people with good intentions who will structure programs in a readable fashion. And since this post is already too long, I am only going to talk about the resolution of constants, leaving variables for another time. Even so, the topic of constants is going to be a two-part post.

Let’s get to the first snippet of code. Since doing text formatting on Blogger is not that easy, I’ll use pictures to allow for better syntax highlighting and indentation:

Do not worry about the syntax. Z2 supports multiple levels of detail when expressing what you want the compiler to do, and I will generally be using the most spartan one available. Still, it should be quite readable to anyone. On the first line we are pulling in a module. This is actually a normal C header, not a Z2 module, which is why it ends with “.h”. This is temporary, used only until we can get a minimal standard I/O module rolling. Then we declare a class called “Foo”, which has a single constant called “Bar”. I am intentionally using ambiguous naming conventions. More on this later. And finally, an empty main method. You will notice that there are no semicolons at the end of statements. This should be familiar to people who use modern scripting languages. A lot of people use them for rapid prototyping, automation and other small tools. I tend to use C++ for these tasks with the aid of powerful libraries, but I do sometimes use Python or Ruby. Whatever the case, there is one thing I do not miss: semicolons. When designing something, generally speaking, it is good to cater to the most frequent use scenario. The overwhelming majority of statements in most programming languages are one-liners. Sometimes you need to extend to more lines, but there are a lot of good alternatives to semicolons that do not cause ambiguity, and even more that do. There is a huge class of languages out there that get by perfectly without semicolons as statement terminators, and including them in Z2 feels like an anachronism.

Now let’s see what the equivalent generated C++ code looks like:

Hmmm, a lot shorter. I put great value on compilation speed, and I try to avoid making both compilers do the same job. Z2 has already handled the entire source code and determined that it can eliminate both the constant and the class. There is no need for the backend, in this case C++ (and in all other cases for the foreseeable future), to parse the class only to decide that it is not needed. Now, let us actually use the constant. I will also use this opportunity to show you a more verbose syntax that is semantically identical to the first one, but more explicit, giving information that the compiler can figure out on its own:

The constant “Bar” now has an explicit type. In the first sample I let the compiler figure out the type of the expression, but this time I have chosen to give it explicitly. I also gave the return type of the “main” method. One thing you will notice, both from the naming conventions and the syntax highlighting, is that types, including “built in” types like integers, start with a capital letter. Z2 is class, value, reference, copy and move centric (I’ll explain all these keywords in the future), thus everything is an object. Like in most dynamic languages. But the difference is that the objects map to true hardware types when possible, so there is no performance penalty involved. Even though normal integers are called “Int”, the declaration of this class is available in text form to the compiler in the same manner as “custom” classes are, and Int has a bunch of constants and methods, after compilation Int is mapped to a 32-bit signed integer and is no different from “int” in C. “printf” is also not the normal printf: it is enhanced so it understands the types of the parameters it is getting, and you will be able to get the same behavior for any function without any hardcoding, and without the compiler understanding or treating I/O or varargs specially.

And C++:

This time the definition of “Foo” has been pulled in. We have a forward class declaration section after the include directive. I could avoid this by adding only the classes that are actually needed, but right now it does not seem worth the effort.

Now let’s do some circular constant initialization:

Yikes! What is that? A = A (= is assignment)? This makes sense for variables, but not for constants. This is obviously a compilation error and should be signaled as such. I could signal it as an “undefined identifier error”, but instead you get this:

The two numbers after “error” give us the line and column of the error: 4 and 11. At these coordinates we have exactly the beginning of the constant “A”. Then the compiler informs us again that something is wrong with “Foo.A”: a circular constant initialization. It also informs us that I make spelling errors. I noticed too late that I spelled the error message wrong and I am not redoing the screenshot. Then we get the breakdown of the cycle: the constant “Foo.A” from the file “0103.zsr” at coordinates 4, 11. So the value of A from the first coordinates is dependent on the value of A from the same coordinates. This makes a lot more sense if we consider a more complex example:

The first constant, “A” is initialized properly. But when initializing the rest of the constant chain, the programmer made a mistake: instead of initializing E with “F % 4”, it was initialized with “C % 4”, thus creating a circular reference. And this is what the compiler tells us, but in its own words:

I couldn’t go on without correcting Foo:

 And let’s check out the resulting C++ code:

You will notice that the constants have been evaluated and we only get the final results in the C++ file. As said, I do not want both the Z2 compiler and the backend compiler to do the same computations. But the main reason for this is that there is no way C/C++ can handle such constant initializations, because they expect a linear progression of value dependencies and Z2 does not have such a progression. There are multiple cases where one cannot reorder the constants when dealing with multiple classes; one would need to break up the classes and/or insert dummy constants to make C happy. Using evaluated values kills two birds with one stone. And the results are equivalent in both cases.

I hope that the advantages of this constant system are clearly visible. The word “class” may make you think about OOP, but here we are actually creating a named constant repository. An absolute repository that grants its values to everyone to use, including other constants. And it has no trouble initializing values based on values that were not encountered before, as long as they are initialized somewhere else. This is similar to the way human minds work. Let’s say you are using C and the constant M_PI a lot. After using it for an extended time, you notice that you use the expression “2 * M_PI” a lot, so you decide to create a new constant “M_2PI”. If someone asks you what the value is, you answer “two times M_PI”, without actually stopping to think what value Pi has. And if you use M_2PI exclusively for extended periods of time, you (or a programmer new to the project) may actually write “M_2PI / 2” when Pi is needed, associating the desired value with one you are overly familiar with. The human mind is not as ordered as a compiler that can only see the statements it has encountered up to the current line.

Here is the problem in other words: I care about giving constants a symbolic name. Only the name is important. The value can change. I am in charge of naming and giving straightforward values to them. The compiler is in charge of figuring out the values when computations are needed. Classes as abstract constant pools do not care about order, since you cannot reinitialize a constant.

The only problem here is that I have intentionally left the scoping rules ambiguous and I did not impose a coding convention, and thus it is easy to imagine a scenario that might cause problems, like when “B” is both a constant in the current class and the name of a different class. I will give clear scoping rules in due time. Right now I want to focus on the little things first.

I would also like to point out that Z2 outputs proper C++ code only for convenience reasons. It would be trivial to make it output other types of code, but some languages are more problematic. Without classes or namespaces, Foo::A would look something like Z2_C__Foo__A or some other mangled name in C. But with time I’ll add this option too. As stated in the first post, a LLVM backend would make the most sense and translating to C++ is just the fastest solution right now, not the ideal one.

This concludes the first part of this topic. In the second one we’ll have more fun, this time initializing constants across classes, checking out circular references again and doing an extensive benchmark. For the benchmark I am thinking about using several MiBs of constant definitions in multiple classes/files and seeing how fast we can compile, the size of the resulting translated file and memory consumption. In the next post I’ll try to be more structured and slowly migrate away from the “huge block of ranting” model, but the plan is for Z2 posts to be around five pages apiece, so do not expect short posts like for DwarvesH.

Sunday, July 10, 2011

Screens of the day 11 - The future is in the shadows

Let me start by mentioning that the video for the last post is up! So please check it out and I apologize again for the delay.

While working on grass growth I discovered an interesting effect if I greatly speed up the rate of said growth:

Sure, this is completely useless as is, but I find the visual effect of the grass growing and reaching out to the dwarf, nearly drowning him to be quite interesting.

This gave me an idea for an end game creature for your dwarves to fight: a shadow fiend. A bodiless entity that stretches across the floors and can harm any living creature it touches. It would move with the same pattern and visual effect as the grass growth in the video, but with shadow replacing the grass. The only way to kill it would be to lure it into the light, and it would be smart enough to make it near impossible to lure it out into the open during daytime. So you would need to trap it using more creative methods. I do not have combat and traps yet, but once I do, I'll see to it that the shadow fiend becomes reality.

Wednesday, July 6, 2011

48 – Grass pokies

I am having guests right now, so do not expect July to be such a fruitful month as June was. And June was a great month! I had both the highest number of posts per month and the highest number of views. Maybe the two are correlated? Hmmm?

Anyway, I am not able to spend a lot of time on coding and testing, so I think I’ll cap it at 1-2 posts per week.

Procedurally generated grass levels

I am using my procedurally generated grass transitions in game now. For now, I’ll leave the levels of grass random, even if it is not that realistic, since an untouched landscape would have an abundance of grass:

It looks pretty good, though I do not like the contrast, either in the above image or in others:

The algorithm is not that smart and cell edges are a little bit pronounced. This would not happen if the transitions were handmade or if the algorithm could compensate for this:

But still, the entire transition is procedurally generated and cached, giving acceptable results:

This is a pretty big thing! I am caching the results of the generation and moved the actual generation process to a manual step in the editor so that I do not have to wait for the generation each time I start the game. But I could easily make it so that every time you start the game or even load a map, you get slightly different transitions. Think about it. Loading the same map over and over and getting slightly different visuals. Not very useful, but definitely a cute idea.

Real time grass growing

The grass now grows in real time. The way coverage percentage works should be explained. 0% means that there is not a single grass leaf on that portion of floor, and 100% means that the entire volume that could be occupied by that particular grass type is actually occupied, taking into account maximum leaf length and density. So it is not a straight leaf count or anything. 1% could mean that you have 10 very short leaves in the area, while 3% could mean that you still have 10 leaves, but the leaves are longer, and 5% could mean that the previous 10 leaves are the same length, but three more medium leaves have grown there since. This is just a random example. But using this meaning, it makes sense that grass can increase 1% per day. This is the actual rate of grass increase. Grass grows all year long, but I’m thinking of slowing it down for autumn and even adding a slight decline when things get cold.

I won’t post a screenshot with grass growing. I’ll post a video at the end of this post that will also demo the next feature.

Grass cutting

A new floor related action has been added. Using this action, you can remove all grass from the selected area, lowering the grass percentage to zero. But the grass will continue to grow there. This is why in the feature list for 0.1 I labeled this action as cosmetic.

And speaking of feature list, I’ll have to update it soon. And I am getting very close to my 50th post (I have more than 50, but I am only counting the main ones) and I don’t have anything special planned for it. Maybe I can figure out something.

One thing that you will notice in the screenshots and the video is the lack of ramps. Ramps are very problematic. They have a ton of angles and, combined with all the states, generate thousands of tiles. Figuring out when and where to use each ramp is easy, but time consuming and tedious. Each time I touch ramps, floors, the item system or any material that can be used in ramps, I break the ramp system. These things are a pain in the butt. So I decided to temporarily remove them. Once the underlying systems have proven to be stable, I’ll re-enable ramps. Until then, good riddance. I hate to make my world more Minecrafty, but for a while there is no way around it:

In the video I am showing grass cutting, followed by grass growing for every soil type: