Programming and other useless stuff

Friday, January 16, 2009

Visual Assist review

Hi,

As I stated yesterday, I'll start to review my current development environment. If you have read the list of tools I'll review, you know that some, if not most, of them are used in combination. For example: I use Visual Assist extensively together with Visual Sidekick.
Today it’s up to:

Visual Assist by Whole Tomato Software

Visual Assist is what I call a productivity improvement tool. It integrates seamlessly into Visual Studio and replaces or enhances some of its features. It focuses on enhancing the existing IntelliSense, improving code readability, helping you write code faster and, last but not least, refactoring your code. There are some more features such as code navigation, but there are better solutions for that (see my upcoming Visual Sidekick review).
Integration and functions

Visual Assist integrates into Visual Studio in 7 ways: the VAssist X menu, the Visual Assist X toolbar, two tool windows (Outline and View), the right-click context menu, a small VA button within the source code, the IntelliSense enhancement during typing, and the down arrow in the context field of the Visual Studio code editor.
The toolbar (see Image 1) gives access to the most-used features such as "Find reference", "Find next" and "Find previous" by context, or the cpp/h switch button.


Image 1: The toolbar


The VAssist X menu (see Image 2) basically has the same options but contains additional functions such as the code snippets or the refactoring commands (which will reappear when I talk about the context menu and the IntelliSense enhancements).
The menu itself is the integration feature of Visual Assist I use least. If I could decide, it would disappear from the menu bar entirely, because it's just one more menu at the top. And if, like me, you have multiple tools that each integrate their own menu at the top, finding a menu you really need, such as the Debug menu or the Tools menu, means re-reading the menu titles every time you want to access it... The tools just place themselves wherever they want in the menu bar, and the Visual Assist menu sits right between the View menu and the Project menu.


Image 2: The menu

I almost never use the two tool windows. They have their place if you don't have a better alternative to them.

The first one is the Outline window (see Image 3). Basically, it displays the outline of the current file. As you can see in the image, it groups the functions by class (similar to the Visual Studio class view), but only for the current file. You can easily access the class functions by double-clicking on them... and that's about it. You cannot, for example, add a new class member or a new function to the class as you can with the Visual Studio class view.
What you can do is re-order your header files or your functions via drag and drop. Simply grab a function, a class member or an include and drop it anywhere you want. The code is then inserted between the element before and the element after the drop position. This comes in handy if you want to group your functions in a particular way. It's a lot faster than cut and paste.


Image 3: The outline window
The VA View window (see Image 4) is more or less a search window. There are two combo boxes: one to find files in your solution, the other to find symbols. Just type a few letters into the combo box edit field and a list with the corresponding files, symbols or functions will appear beneath it.

Now, this sounds like a cool feature. But it's only useful if you're searching for a function or class member and you're not completely sure about the name. Chances are you're a little faster if you use your keyboard wisely and assign one or another Visual Assist function to a key. I'll talk about this later.


Image 4: The view window
The view window contains another function: if you hover your mouse over a class name, a function name or a class member, the lower part of the view window displays the context of that element. For example, if you hover over a class member name, the class outline is shown with the class member visible (see Image 5, where I hover over m_onExit; m_onExit is highlighted in the VA View window).


Image 5: The cursor hovering above a class member variable.

The next feature I would like to talk about is the context menu integration (when you press the right mouse button; see Image 6). In the example in Image 6, I put my cursor on load and clicked the right mouse button. As usual, the context menu opens. Visual Assist integrates into this menu through two new sub-menus: "Refactor (VA X)" and "Surround With (VA X)".


Image 6: The refactoring context menu
I imagine you know what this is about. While "Refactor" gives you access to the refactoring functions of Visual Assist, "Surround" lets you easily alter your source code by adding code snippets (see Image 7).


Image 7: The surround with source code context menu.
While I make extensive use of the refactoring functions, I almost never use the surround functions. But that's me; I know a lot of people who are keen on this feature. In fact, you can add your own code snippets to Visual Assist, which can then be inserted into the source code through this menu. As you can see in the screenshot, there's already a nice selection of snippets which, I must note, depend on the programming language you use.
The refactoring menu contains functions such as symbol renaming, method extraction, field encapsulation, etc. It would take too much time to explain them all, so please refer to the website (link at the bottom of the review).
If you hover your mouse over a symbol (here a function name), a small button containing a triangle appears (see Image 8).



Image 8: The cursor hovering above a class member function.

Once you press this button, a list of possible Visual Assist operations appears (see Image 9). These contain all functions related to the symbol. In my example, you can add a similar member to the class, change the signature, etc. Had I hovered over the class name, I would have had the option to add a member.


Image 9: Once we click on the triangle button, we see the possible Visual Assist operations.

While hovering over the symbol, further information appears, such as the function declaration or the variable declaration. Any comment directly above the declaration is displayed as well.
It's difficult to make good screenshots of the next features I'll talk about, so please refer to the website to see them "in action".
The IntelliSense enhancement is what everybody loves most. It's one of those features you see in the beginning and get so accustomed to that you don't even notice it anymore, because it just feels so natural. But once you have to work with a Visual Studio installation that's missing the enhanced IntelliSense, you're totally lost... at least, that's my case.
The enhanced IntelliSense has an improved suggestion list which will not only display the possible symbols (variables, defines, enumerations and functions) but also rank highest those that fit the current situation best. For example, say the function you're currently developing contains calls to other functions which need specific variables from your context. The second or third time you call such a function, it will suggest the symbols you previously used to make that call. If you have passed the variable "a" to that function twice, the third time you call it, the variable "a" will be the top suggestion.
And this feature can become really scary... Once I had to hard-code (bad, bad) a table of 256 values. These 256 values were based upon 8 or 9 different enum values. When I started coding the table, the suggestions seemed quite random. But the more values I coded into the table, the more precise the suggestions became, up to the point where I had the feeling that VA actually knew in which order I had to encode them, because after I had entered the first 50 values I almost never selected anything other than the top suggestion.
Another feature you stop noticing is the automatic insertion of parentheses and braces. Whenever you type a "(", a ")" is inserted too. If you type a "{", a "}" is added as well.
Another cool feature is the "." to "->" option. Depending on the source code and the variable declaration, Visual Assist recognizes whether a "." or a "->" is needed and adjusts the source code while you're typing. I almost never type anything other than ".".
Acronyms and shorthand typing are another very cool feature. Just type a few letters, select the right symbol, function or code from the list box that appears, and you're done. In most cases you only need to type 4-5 letters to enter a function name consisting of 10, 15 or 20 letters.
One of the last features I would like to mention is the improved method list in the context field on top of your source code window. If you click on the down arrow, all classes and methods of the current file are listed and you simply select one to instantly jump to its position.
On the very right is the "go to" button. Simply put your cursor on a function name or a variable, click the "go to" button, and it will either jump directly to the implementation or the declaration, or display a list of files in which the function or the variable has been declared. You don't need to activate browse information within Visual Studio to use this feature.
This is all for the feature tour...

To improve your workflow

One of the features I use most is the "go to" functionality. While it's OK to press the "go to" button at the top right of the source code window, I'm more comfortable with simple keystrokes. So, if I can give you one piece of advice: make your life easier and assign the most often used functions to keyboard shortcuts. The "go to" feature, for example, is bound to F3 on my keyboard. I put the cursor on a function, a variable or any other symbol and press F3 to go to its definition. If my cursor is on a function call, I get the choice between the header file in which the function is declared and the source file where it is implemented.
To assign keys to functions, go to "Tools"->"Options"->"Keyboard", find the "VAssistX.GotoImplementation" command in the command list and assign it to whatever key you want. If you enter "VAssist" in the command field, you'll get all the Visual Assist commands you might want to bind to a key. The most useful are "GotoImplementation", "FindReferences", "FindNextByContext" and "FindPreviousByContext". Another one, which is not part of Visual Assist, is "Edit.GotoBrace" to quickly jump between enclosing braces.
Options

A lot of features can be adjusted to your needs. If you want to change the syntax coloring, you can do it either through the quick config (which lets you switch between max, default and min) or through the advanced font and color settings.
The list box content and the suggestions can be adjusted and the code snippets can be edited.
You can even adjust the number of spaces between the method name and the braces.
Pricing

Visual Assist is not a free tool. In fact, I discovered that it has become more expensive since I initially bought it. And to top it all, they have changed their licensing model. Previously, you paid $149 for your license and got a full year of software updates and technical support.
Since mid-2008 (?), you have the choice between:

- $249 including 1 year of support and updates, which qualifies you for maintenance renewal (at $49 per year), or
- $99 for a personal license, which is not renewable and only includes 6 months of updates and technical support.
Conclusion

Visual Assist is a great tool with a lot of features that help you work more efficiently and more precisely.
The suggestion list of the enhanced IntelliSense helps you quickly find and enter any symbol you might need. Once you have started using it, you don't want to miss it. The same goes for the "go to" function: a precious help you never want to give up again.
The refactoring functions help a lot when you have to extract parts of your code into functions or when you want to rename symbols. Even changing a function signature is possible.
Tool windows such as the outline window might be handy if you shuffle your functions around to create a clearer structure in your source files.
All in all Visual Assist integrates greatly into Visual Studio.

Nevertheless, I'm missing quite a bit in Visual Assist: I would like better control when changing function signatures or when I want to create functions; I would like improved controls to create getters and setters for member variables; and a lot more... If you have ever worked with Eclipse, you might understand what I mean: the amount of available refactoring functions there is incredible. The Visual Assist developers might want to take a look at Eclipse; a lot of developers have suggested they do so...
Pros:
- Great feature list
- Seamless integration into Visual Studio
- Options let you configure almost everything
- Create your own code snippets
- Improved IntelliSense
- Great “go to” functionality
- Great community through their forums
- Great support
- Multiple updates per year containing new features and bug-fixes

Cons:
- Could use more and better refactoring functions
- Although requested bugs and features get "case" IDs, users (registered or not) cannot see the cases, so bugs and features get discussed multiple times.
- The outline and view windows are almost useless; they are no real improvement over the already available class view.
- The changed license model is more expensive than before and could deter small developers from purchasing it.

Links:
http://www.wholetomato.com/


Thursday, January 15, 2009

Development tools...

Hi,

Since 1996, when I entered the gaming industry as a programmer, I have seen a lot of tools come and go that helped me improve my code and facilitate my work. While I had to let go of some of them because they no longer fit my needs, there are some I have kept all along.

Last year I signed a contract involving huge amounts of code to maintain. I'm not talking about 300,000, 400,000 or 600,000 lines of code. The code base contains no fewer than 2,340 files with 886,000 lines of code (comment lines not included).

Since maintaining such a huge code base is a real pain, I had to find a way to handle code navigation, code maintenance and refactoring. So, over several blog entries, I'm going to talk about the tools I currently use, citing their pros and cons and their pricing.

Before I start, some additional information:

  • Although I also do Java development, I focus on Visual Studio (2005/2008) since this is my main development tool. For Java development I use Eclipse.
  • Whenever I search for a tool, I try to find those which integrate into Visual Studio. This is not because I don't like other tools but because I don't want to switch back and forth between different environments. Nevertheless, for some tasks it's not possible or useful to stay within Visual Studio. You'll understand when I talk about those tools.
  • I mainly do C++ development, and so do almost all of my clients. Some of them use Java, others C#. I can develop in all of them (and more, since I also have experience with Delphi, FORTRAN, and Assembler...), but I excel in C++. So all the tools I discuss support at least C++.
  • If a tool is useful to me, I spend money to buy it. While there are a lot of freeware tools out there, those which might be useful sometimes lack support in case of problems. I do not buy tools at any price, though. My ROI estimation must meet some criteria: How useful is it to me? How is the support? Who else is using it? What does it achieve? Is there another tool which does almost the same for less money?

Ok... that said, here’s the list of tools I’ll discuss:

  • Visual Assist by Whole Tomato Software, Inc.
  • Visual Sidekick by Syntaxia Technologies
  • PC Lint by Gimpel Software
  • Visual Lint by Riverblade
  • UltraEdit by IDM Computer Solutions, Inc.
  • UltraCompare by IDM Computer Solutions, Inc.
  • WinMerge (Open Source)
  • Enterprise Architect by Sparx Systems
  • Team Foundation Server and Team Foundation Explorer by Microsoft
  • AnkhSVN (Open Source)
  • TortoiseSVN (Open Source)

This is already quite a list of tools to discuss. Tomorrow I’ll start with Visual Assist...

Have fun,
Stefan


Tuesday, January 06, 2009

Enhancing SSCXML and its future

Ok,

as promised, I'll talk about the missing stuff in the new data module. But I have a second topic that I'd like to mention... more on that later.

So, if you take a look at Saturday's blog entry, you'll have a rough idea of what the new data module interfaces will look like. They're neat, they're simple, they're incomplete...

In fact, an entire interface is missing. Here's why: when you're working with data values whose type is non-specific, as expressed through the IDataValue interface, you cannot simply make a "new something" call. You need a factory that is able to create the data value of the type you need. Therefore an IDataValueFactory interface is needed.

My current definition looks like this:

class IDataValueFactory
{
public:
virtual ~IDataValueFactory() {}

/** \brief Access the number of types this factory can create.
*
* \return value containing the number of types.
*/
virtual size_t getNrTypes() const = 0;

/** \brief Identifies the indexed type of the data value that can be created using this factory.
*
* \param _index Index of the type identifier string to retrieve.
* \return a string containing the identifier.
*/
virtual const ScxmlString& type(unsigned int _index) const = 0;

/** \brief Checks if the type of the data value can be created using this factory.
*
* \param _type a string containing the data value type.
* \return true, if the type can be created using this factory.
*/
virtual bool canCreate(const ScxmlString& _type) const = 0;

/** \brief Create a new data value of the given type.
*
* \param _type a string containing the data value type.
* \return Pointer to the data value created using this factory.
*/
virtual IDataValue* create(const ScxmlString& _type) = 0;

/** \brief Destroy the data value.
*
* \param _value Pointer to the data value to destroy.
* \return true, if the data value has been destroyed by the factory.
*/
virtual bool destroy(IDataValue* _value) = 0;
};


In my opinion, any factory should be able to create more than one data value type. This makes sense because you don't want to implement one factory for every data value type you want to support. So, the first three functions enable the user of the factory to check which data types the factory supports. Again, this is entirely based upon strings to facilitate usage.

The last two functions create or destroy a data value. I think that for a clean structure, whoever created an object is responsible for its deletion. Simply invoke the factory's destroy function and that's it.

Now, using the factories themselves might be a little painful, because you first have to get the right one and then make the create/destroy call.

So, I enhanced the interface of the data module by some functions: addFactory, removeFactory, createValue and destroyValue. Basically the developer is able to create his own factories, add them to the data module and create or destroy data values through the data module... hence everything data related "flows" through the data module.
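
To make this concrete, here is a minimal sketch of how a concrete factory could look. The IntegerValue and IntegerFactory names are purely illustrative and not part of SSCXML, and the two interfaces are restated so the sketch stands on its own; a data module's createValue/destroyValue would simply forward to such a factory:

```cpp
#include <cassert>
#include <cstdlib>
#include <sstream>
#include <string>

typedef std::string ScxmlString;

// Restated interfaces so the sketch compiles on its own.
class IDataValue
{
public:
    virtual ~IDataValue() {}
    virtual const ScxmlString& type() = 0;
    virtual const ScxmlString& get() = 0;
    virtual void set(const ScxmlString& _value) = 0;
};

class IDataValueFactory
{
public:
    virtual ~IDataValueFactory() {}
    virtual size_t getNrTypes() const = 0;
    virtual const ScxmlString& type(unsigned int _index) const = 0;
    virtual bool canCreate(const ScxmlString& _type) const = 0;
    virtual IDataValue* create(const ScxmlString& _type) = 0;
    virtual bool destroy(IDataValue* _value) = 0;
};

// Illustrative value type handled by the factory below.
class IntegerValue : public IDataValue
{
public:
    IntegerValue() : m_value(0) {}
    virtual const ScxmlString& type()
    { static const ScxmlString t("integer"); return t; }
    virtual const ScxmlString& get()
    { std::ostringstream os; os << m_value; m_cache = os.str(); return m_cache; }
    virtual void set(const ScxmlString& _value)
    { m_value = std::atoi(_value.c_str()); }
private:
    int m_value;
    ScxmlString m_cache; // backing storage for get()'s string reference
};

// Illustrative factory supporting a single type; a real factory
// would typically cover several related types.
class IntegerFactory : public IDataValueFactory
{
public:
    virtual size_t getNrTypes() const { return 1; }
    virtual const ScxmlString& type(unsigned int /*_index*/) const
    { static const ScxmlString t("integer"); return t; }
    virtual bool canCreate(const ScxmlString& _type) const
    { return _type == "integer"; }
    virtual IDataValue* create(const ScxmlString& _type)
    { return canCreate(_type) ? new IntegerValue() : 0; }
    virtual bool destroy(IDataValue* _value)
    {
        // Only destroy values this factory is responsible for.
        if (_value && canCreate(_value->type())) { delete _value; return true; }
        return false;
    }
};
```

Note how destroy() refuses values of foreign types: whoever created the object stays responsible for its deletion, as argued above.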

One last thing I did was add a clone function to the data value, which might come in handy when managing variables, creating copies, etc.

I still have to verify some of the interface functions to see if and how well they work together with a separate script implementation (remember: data management and scripting language linked together).


So far so good.

Now the other thing: The future of SSCXML.

I admit that, although I hoped otherwise, I haven't sold a single license of SSCXML. Some companies showed interest, some people asked questions, but it didn't take off as I had hoped. Although the possible application fields are wide-ranging, the interest itself was (and is) quite low.

SSCXML was the first publicly available SCXML implementation in C++, with a very easy-to-understand and easy-to-use interface structure. It doesn't cost anything as long as you don't develop a commercial application with it, and if you do, the price seems fair to me for the work that has been put into it (and the time you save when using it). Despite these advantages, it just didn't take off.

Now, almost nine months after my beta testing session and the launch, I've seen Qt (aka Trolltech/NOKIA!!) implement an SCXML lib in their Qt library. As far as I have learned, several people have been working on it (while I was working alone), they're talking about some of the stuff I have on my road-map (i.e. a visual editor), and their Google hits are higher (I suppose due to Qt's overall higher presence and the tags on their webpages).

While my implementation is quite free of dependencies on other libraries, the Qt SCXML implementation is tightly bound to the Qt library itself.

So, seeing all this, I'm wondering what I should do. The product doesn't sell, the user base isn't big (a handful of people coming and going), and neither is the feedback I get.

I have several options:

1. I keep everything as is and I change nothing. I keep the business model, I continue development at my own speed (taking user requests into account as fast as possible) and hope that both, the user base and the licenses, will rise.

2. I make it completely open source, handing out the source code, scrapping any idea of it being "my baby", and letting an eventual user base take over control.

3. Do it the Qt way: dual licensing, handing out the complete source code, but people have to pay for commercial usage. This is actually almost the same as what I do now, except that I currently keep the source code closed.

4. Keep the source code closed and hand it out only to those who pay for it. Non-paying people can use it for free.

5. Make it "donation" soft or "donation" open source... however you might want to call it. Hand out the source code, taking some people into the team who would like to contribute and ask for some money which is spent on material (books, hardware, etc).

So my question to you is: Which option would you use? Or do you have another suggestion?

Thanks,
Stefan

Saturday, January 03, 2009

Interfaces of the data value and data module

Hi,

as promised yesterday, today I'll show you the interfaces of the data value and the data module. As I stated yesterday, the data value interface is quite small, since it only has to identify the type and provide basic getter and setter functions.

I was thinking about the most flexible way to use the interface. Since it is this tidy and tiny, the actual data getters and setters must be simple, too. I finally decided to use strings (note: ScxmlString is actually a typedef of std::string). The reason is simple: everyone can handle strings without too much hassle, and almost every type of data can be described using a string. An integer is as easy to put into a string as a float. An array of numbers can be put into a string using a separator.

Furthermore, as I said yesterday, the data model itself is 90% linked to the script module (since the script module is responsible for the expression parsing). This implies that the developer who implements the script module interface would also make sure that the implementations of the different data value types (integers, strings, floats, even objects) can easily be handled by the script module. This means that a derived class may include access functions that bypass the string-based get/set functions. A derived integer class can easily implement a get and set function that uses an integer. The script module only has to determine the data value type and cast the interface accordingly.

Thus, the interface of the data value in its basic form looks like this:


class IDataValue
{
public:
virtual ~IDataValue() {}

/** \brief Identifies the type of the data.
*
* \return a string containing the identifier.
*/
virtual const ScxmlString& type() = 0;

/** \brief Access the data.
*
* \return a string containing the data value.
*/
virtual const ScxmlString& get() = 0;

/** \brief Set the data.
*
* \param _value a string containing the data value.
*/
virtual void set(const ScxmlString& _value) = 0;
};
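
To illustrate the typed bypass accessors described above, here is a minimal sketch of such a derived integer class. The class name IntegerValue and its getInt/setInt helpers are illustrative, not part of SSCXML; the interface is restated so the sketch is self-contained:

```cpp
#include <cassert>
#include <cstdlib>
#include <sstream>
#include <string>

typedef std::string ScxmlString; // as noted above, a typedef of std::string

// The minimal data value interface from this post, restated.
class IDataValue
{
public:
    virtual ~IDataValue() {}
    virtual const ScxmlString& type() = 0;
    virtual const ScxmlString& get() = 0;
    virtual void set(const ScxmlString& _value) = 0;
};

// Hypothetical derived class: stores an int internally but still honors
// the string-based interface; the typed accessors bypass the conversion.
class IntegerValue : public IDataValue
{
public:
    IntegerValue() : m_value(0) {}

    virtual const ScxmlString& type()
    {
        static const ScxmlString t("integer");
        return t;
    }

    virtual const ScxmlString& get()
    {
        std::ostringstream os;
        os << m_value;
        m_cache = os.str(); // keep a string copy alive to return by reference
        return m_cache;
    }

    virtual void set(const ScxmlString& _value)
    {
        m_value = std::atoi(_value.c_str());
    }

    // Typed bypass accessors the script module can use after a cast.
    int getInt() const { return m_value; }
    void setInt(int _value) { m_value = _value; }

private:
    int m_value;
    ScxmlString m_cache;
};
```

A script module that has determined the type to be "integer" can cast the IDataValue pointer to IntegerValue and work with getInt/setInt directly, skipping the string conversion entirely.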


My first draft of the data module interface looks as simple as the interface for the data value. Currently only 3 functions are defined: a function to set data, a function to get data and a function to remove data.

Here it is:

class IDataModule
{
public:
virtual ~IDataModule() {}

/** \brief Get data from the data module.
*
* \return Pointer to the named data value, or NULL if no data is stored under that name.
* \param _name Name of the data to fetch.
*/
virtual IDataValue* getData(const ScxmlString &_name) = 0;

/** \brief Set data.
*
* Data that is set in this way will be stored in the data module.
*
* \return true, if the data could be set; false otherwise.
* \param _name Name of the data. This name must be unique. If the name is not unique, the new value will replace the data value that was previously stored under that name.
* \param _val Content of the data.
*/
virtual bool setData(const ScxmlString &_name, const IDataValue* _val) = 0;

/** \brief Remove a data entry.
*
* The named data will be removed from the data module.
*
* \return true, if the data could be removed; false otherwise.
* \param _name Name of the data.
*/
virtual bool removeData(const ScxmlString &_name) = 0;
};


These three functions enable the developer to manage a set of data values within the data module. The identifiers (names) must be unique, but enforcing that is up to the script module (or whichever implementation uses the data model interface).
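
To show how the three functions interact, here is a simplified std::map-backed sketch. It is hypothetical, not the SSCXML implementation: for brevity it does not derive from the interface, drops the const on setData's value parameter, and uses a trivial StringValue:

```cpp
#include <cassert>
#include <map>
#include <string>

typedef std::string ScxmlString;

// Restated data value interface plus a trivial string-backed implementation.
class IDataValue
{
public:
    virtual ~IDataValue() {}
    virtual const ScxmlString& type() = 0;
    virtual const ScxmlString& get() = 0;
    virtual void set(const ScxmlString& _value) = 0;
};

class StringValue : public IDataValue
{
public:
    explicit StringValue(const ScxmlString& _v) : m_value(_v) {}
    virtual const ScxmlString& type()
    { static const ScxmlString t("string"); return t; }
    virtual const ScxmlString& get() { return m_value; }
    virtual void set(const ScxmlString& _value) { m_value = _value; }
private:
    ScxmlString m_value;
};

// Hypothetical map-backed data module sketch.
class MapDataModule
{
public:
    // A duplicate name simply replaces the previously stored value,
    // as the interface documentation demands.
    bool setData(const ScxmlString& _name, IDataValue* _val)
    {
        m_data[_name] = _val;
        return _val != 0;
    }

    IDataValue* getData(const ScxmlString& _name)
    {
        std::map<ScxmlString, IDataValue*>::iterator it = m_data.find(_name);
        return it != m_data.end() ? it->second : 0;
    }

    bool removeData(const ScxmlString& _name)
    {
        return m_data.erase(_name) == 1;
    }

private:
    std::map<ScxmlString, IDataValue*> m_data; // the module does not own the values
};
```

The sketch stores raw pointers and does not own the values; in the factory-based design, creation and destruction would go through the data value factories.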

Nevertheless, the interface for the data module is not complete. At least, that is my opinion ;) On Monday, or Tuesday at the latest, I'll tell you what is still missing and what that stuff will look like.

Until then: Have fun,
Stefan


Friday, January 02, 2009

Enhancing SSCXML

Ok... so as you might have read I plan to improve SSCXML (Simple State Chart XML) to mirror a little bit better what is actually written in the W3C draft. One thing that is mentioned in the latest version of the draft is the split into different parts such as the data module, the script module etc.

Although they describe them as two different parts, they mention at the same time that the data module must provide a data access language which allows access to the data stored in the data module.

Now, this sounds like a great idea, but IMHO it also implies that a complete expression parser must be included in the data module. Why? Because once you define a "language" that enables you to access the data, there are also operations you want to handle: using a variable to index into an array of variables ("a[b]" or "a[10]"), or combining variables to create a new one (i.e. assigning a value to another variable). Those are two examples of what ideally should be possible with the data module.

Unfortunately this implies a lot more: implementing a data access language in the data module ultimately means implementing a complete expression parser to access, assign, handle, alter and delete the data.

And then there's the notion of the script module. According to the draft, the script module adds scripting capability to the state machine. They give the example of ECMAScript (aka JavaScript).

Now, think for a second about what I've written above... complete expression parser... script module... You see it? Yes... for a developer this basically means the same thing: when you want to include scripting in your state machine, you automatically also get the data module, since you cannot script without holding a basic set of data. So for a programmer, the data module almost always goes hand in hand with a script language, and vice versa.

Now, my idea is this: the data module is only there to store data, using a very basic scheme: values put in relation to an identifier. For every value pushed into the data module, an identifier must be given. If an identifier is used twice, the previously stored data is replaced by the new value.

This notion of the data module has some advantages and some drawbacks:

Pros:
- no expression parser needs to be included.
- replacing data is easy.
- the data module is not responsible for solving the value addressing.

Cons:
- Accessing large datasets might be more difficult.
Example: a is an array of 1000 values. Normally you access a value by directly indexing it. Since the data module only has simple identifiers, an array must be mapped to a flat scheme such as "a[10] -> a.10" (access the 10th element of the array).
- The script module/expression parser is responsible for creating unique ids and verifying that data is handled as expected (i.e. temporary data storage).
- The data module would have no garbage collection.
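
The array mapping in the first con could be handled by the script module with a tiny helper along these lines (a sketch; flattenIndex is a hypothetical name, not part of SSCXML):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Illustrative helper: flatten an indexed access like a[10] into the
// flat "a.10" identifier scheme the data module would store.
std::string flattenIndex(const std::string& _array, unsigned int _index)
{
    std::ostringstream os;
    os << _array << "." << _index;
    return os.str();
}
```

Each array element would then be stored in the data module under its own flat identifier, keeping the module itself free of any indexing logic.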

To solve some of these issues, it is important that the value actually stored within the data module is as flexible as possible. Also, one might want to enable the developer to implement his own data value types (i.e. an object, a specific data type, etc.). The data module must not know how to handle these new data value types; it only has to store them and provide access to them.

I'll write about the interfaces themselves tomorrow. So... keep coming back to take a look at it :)

Have fun and a happy New Year,
Stefan


Sunday, December 28, 2008

The end of the year is nearby...

First of all I wish you a Merry Christmas (a bit late, I know) :)

It has been a long time since my last blog entry. The primary reason is that I'm currently submerged in work. I have several clients who were waiting for stuff. For at least one client, the situation will remain stressful until the end of January.

Nevertheless from time to time you have to take some days off for a special occasion (well... Christmas is special, isn't it?), get some clear ideas about the future and think of some New Year's resolutions.

I've spent a couple of hours thinking of mine... here's a work related list:

  1. Simple SCXML:

    - Improve the underlying code.
    - Modularize the code base (data module, parser module, script module, ...)
    - Make parallel states run in parallel threads.
    - Make a C# version of SSCXML.
    - Create a simple editor for state machines.
    - Make a Linux compatible version (I might need some help here).
    - Think of how to integrate behavior trees into SSCXML (I'm still convinced that this is more or less easily possible).
    - Enable the user to write his own executable.
    - And finally: think of how to promote SSCXML. While I got some companies interested in SCXML and my implementation, sales are quite low...
  2. Serve my clients even better by improving my knowledge (AKA read some books and write some test code).
  3. Write at least 2 articles that get published somewhere. (Note: I wrote some comments in an online games magazine forum, and another games magazine took my comments and patched them together into a ten-page article. It's about the working conditions in the games industry in Germany and the overall situation for developers. The main intention of my comments was to sensitise gamers to the hard working conditions we encounter and that, although we must have a masochistic streak, we are not into getting insulted. It's all in German, but for those who are interested: Offen, ehrlich, lesenwert - ein Programmierer spricht Klartext ("Open, honest, worth reading - a programmer speaks plain language").)
  4. Continue working on Prohibition. The game is currently on hold due to my workload. Nevertheless, the idea is still there. Unfortunately, some of my ideas must change, because either the technical part is not good enough (e.g. Navi as the GUI, or etwork for the network part), or my game ideas are too complex (e.g. mixing role playing and strategy simply creates too much work, and enabling the in-game characters to make use of the environment (chairs, tables, etc.) creates too much work on the AI implementation).

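Point 1 above is easier to picture with a concrete machine. Here is a minimal SCXML document (following the W3C SCXML working draft's `<parallel>` element) with two regions active at the same time; the state and event names are made up for illustration and are not taken from SSCXML itself. A region like each of these could plausibly be driven by its own thread:

```xml
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="active">
  <parallel id="active">
    <!-- Region 1: movement runs independently of region 2 -->
    <state id="movement" initial="idle">
      <state id="idle">
        <transition event="move" target="walking"/>
      </state>
      <state id="walking">
        <transition event="stop" target="idle"/>
      </state>
    </state>
    <!-- Region 2: combat status, active concurrently with movement -->
    <state id="combat" initial="peaceful">
      <state id="peaceful">
        <transition event="attack" target="fighting"/>
      </state>
      <state id="fighting">
        <transition event="ceasefire" target="peaceful"/>
      </state>
    </state>
  </parallel>
</scxml>
```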
This is already quite a lot for an entire year, and I'm not sure that I can realize it or progress the way I would like to. Especially the Prohibition stuff gives me a headache. The idea has been spinning around for almost 10 years, and I'm no further than I was 10 years ago. I wanted to focus more on the game development part, yet I keep falling back into the technical part, which is not how I wanted it to be.

SSCXML has not made the progress I wanted. The idea of a state machine entirely defined by an XML document is very good, but then you encounter problems on the developer side: while game AI developers are currently more into behavior trees (which in my opinion are more or less a special case of a state machine), application developers either have their own state machine (if applicable) or don't see the need for a) state machine middleware or b) external expertise on that matter. A small part of the developers are waiting for a Linux version of SSCXML, which is a whole story of its own.

I'll try to write more often in the future. I'll refocus on SSCXML, start my own network library implementation (based upon etwork), and talk about the progress made on Prohibition.

Have fun,
Stefan

Monday, November 24, 2008

The PS3 can be...

a real bi*** when you want to copy your DVD collection onto an HDD. I have about 200 DVDs plus all the Star Trek series waiting to be pulled off the disc and transformed into a PS3-readable format.

Although one might think that this is an easy task, I went from failure to failure over the last couple of weeks. I tested ~20 different programs, combining several video and audio codecs, only to see almost all of them fail (or I am too dumb to succeed).

My objectives were:

1. PS3 readable
2. Small data footprint (aka "Please... no big files...")
3. Good quality

Points 2 and 3 are actually a question of fine-tuning. I don't want a 1 GB file for a 40-minute Star Trek episode; 500 MB is my limit. Normally, one would rip the VOBs off the DVD and encode them with one tool or another using DivX, Xvid or whatever other codec to reduce the data footprint.

Point 1 is actually the difficult one, and the number of discussions you find if you google for it is quite impressive. While different video and audio codecs are available, they are not "mixable" as you please. I could not manage to mux a DivX video stream with an MP3 audio stream: while the result plays fine on my computer, the PS3 reports "Corrupted data" or "Incompatible data".

I messed around with the VOBs directly, but they are just too huge. The only "good" combination I have found is h263 video combined with AC3 audio. Unfortunately, not much software can combine the two. And the software that can is not fast (waiting ~2 hours on a quad-core machine to rip and encode a 1:30 h film isn't my idea of fast).

Finally, I stumbled upon CloneDVD mobile (by Slysoft), which combines everything I need: PS3-readable without a glitch, and fast (1:30 h to rip and encode one film (1:23 h in length) plus 2 episodes (42 minutes each)). That is the speed I wanted. I set the video encoding quality to 18 (whatever that means), which results in good quality (I can't see the difference between the DVD and the encoded file).

The file sizes are 905 MB for the 1:23 h film and 459 MB for each 42-minute episode.

And these files (in contrast to the files generated by much of the other software) play without a glitch.

Ah yes, I combined CloneDVD mobile with AnyDVD (also by Slysoft), which removes the copy protection from any DVD.

Another remark or two:

1. No, I won't hand you any of my encoded files if you ask for them. I encode them for my personal use; I don't spread them...

2. Please refer to your country's laws to know whether you may or may not copy your DVDs (and bypass any copy protection). Don't do anything illegal.

Have fun,
Stefan