Thursday, December 16, 2010

NHibernate performance issues #4: slow query compilation - named queries

Martin Podval
NHibernate provides many approaches to querying the database. Most of them define the query body in something other than native SQL, which implies that NHibernate must transform these queries into native SQL according to the configured dialect, e.g. into a native MS SQL query.

If you really want to develop an application that is fast at all times, the described process can be an unpleasant surprise. How can you avoid query compilation?

Compiled named queries

It may sound ridiculous, but everyone has already met a compiled (and named) query. If you have ever browsed a log produced by NHibernate, you must have seen a set of lines similar to this:

2010-12-13 21:26:42,056 DEBUG [7] NHibernate.Loader.Entity.AbstractEntityLoader .ctor:0 Static select for entity eu.podval.NHibernatePerformanceIssues.Model.Library: SELECT library0_.Id as Id1_0_, library0_.version as version1_0_, library0_.Address as Address1_0_, library0_.Director as Director1_0_, library0_.Name as Name1_0_ FROM [Library] library0_ WHERE library0_.Id=?

When NHibernate starts (a SessionFactory exposed as a spring.net bean in my case), it compiles the queries it will certainly use. In the displayed case, NHibernate knows that it will probably search for a library entity in the database by its id, so it generates the native SQL query into its internal cache; when the developer's code calls Find for the Library entity, it uses exactly this pre-compiled native SQL select.

What's the main benefit? If you called the library's Find a thousand times, NHibernate would otherwise have to compile the query every time. Instead of this inefficient behavior, NHibernate does it only once and uses the pre-compiled version later.
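For illustration, the Find mentioned above is presumably backed by ISession.Get, which hits exactly this static select (a hedged sketch; the session acquisition and the libraryId variable are illustrative):

```csharp
// Sketch: loading an entity by id uses the pre-compiled static select
// from the log above, not a freshly compiled query.
ISession session = SessionFactory.GetCurrentSession();
Library library = session.Get<Library>(libraryId); // "SELECT ... FROM [Library] ... WHERE library0_.Id=?"
```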

How to speed up your queries? Use pre-compiled named queries

As I've already written, NHibernate generates pre-compiled queries, stores them in its cache and uses them when necessary. NHibernate is a great framework, so it makes this functionality available to you as well :-)

You can simply declare a list of HQL (or SQL) named queries within any mapping (hbm.xml) file. NHibernate loads the file, parses the queries and keeps the pre-compiled versions in its internal cache, so queries called at runtime don't have to be laboriously parsed to native SQL; the already prepared ones are used instead.

How to use named queries?
  1. Define a new hbm.xml file which will contain your named queries, e.g. named-queries.hbm.xml
  2. Place this file in an assembly which NHibernate searches for hbm mapping files. You may have to update your Fluent NHibernate configuration to search for these files. The following code will do it for you; see also FluentNHibernateSessionFactory.
m.HbmMappings.AddFromAssembly(Assembly.Load(assemblyName));
Now it's time to write your named-queries.hbm.xml file. It has the following structure:
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">

  <query name="Book.with.any.Rental">
    <![CDATA[
    select b from Book b
      where exists elements(b.Rentals)
    ]]>
  </query>

</hibernate-mapping>
How to use it?
IList<Book> books = SessionFactory.GetCurrentSession().GetNamedQuery("Book.with.any.Rental").List<Book>();
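Named queries can also be parameterized; a hypothetical variant (the query name Book.by.Name and its parameter are illustrative, not from the mapping file above):

```csharp
// Sketch: assumes a named query "Book.by.Name" declared as
//   select b from Book b where b.Name = :name
IList<Book> books = SessionFactory.GetCurrentSession()
    .GetNamedQuery("Book.by.Name")
    .SetParameter("name", "NHibernate in Action")
    .List<Book>();
```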

How big is the speed-up of named queries?

Let's say I'll use the Book.with.any.Rental query for some measurements, so we'll see how the omitted query compilation improves the response time.

I've executed the test for both the named query and plain HQL. According to the debug labels, the plain HQL case spent 40ms parsing HQL to native SQL.

Note that everything written so far applies only to the first call. NHibernate is a tricky framework: it caches queries for you automatically when it compiles them for the first time. Let's call the method to get books with any rental two times (the first level cache is cleared between the calls):
  • first call took 190ms
  • second one only 26ms 
It's also necessary to admit that the database has its own query cache as well :-) The result is clear anyway.

What are real advantages of named queries?

It may not seem worth the toil of writing queries in an xml file. What are the real benefits?
  1. Speed (of the first call) - the described example saves 40ms per method call. That doesn't seem like much, but imagine you are developing a huge project with almost a hundred queries. It can save a lot of time! Notice also that the chosen query was very simple. In my experience, the compilation of a more complicated query takes at least 200ms, which is not a small amount of time when you are developing a very fast application
  2. HQL parsed for errors at startup - you'll find out whether your query is correct or wrong at application startup, because NHibernate does these things when it starts. You don't have to wait until the desired query is called
  3. Clean code - you aren't mixing C# code together with SQL (HQL) code
  4. Possibility to change query code after the application is compiled - consider that you can change your HQL or SQL even after the application has been compiled. You can simply expose the named query hbm.xml file as an ordinary xml file and tune your queries at runtime - i.e. without additional compilation
You can also see Series of .NET NHibernate performance issues to read all series articles.

Sunday, December 5, 2010

NHibernate performance issues #3: slow inserts (stateless session)

Martin Podval
The whole series of NHibernate performance issues isn't about simple use-cases. If you develop a small app, such as a simple website, you don't need to care about performance. But if you design and develop a huge application and you have decided to use NHibernate, you'll be solving various sorts of issues. Today's use-case is obvious: how to insert many entities into the database as fast as possible?

Why am I talking about this? There are a lot of articles about how NHibernate's original purpose wasn't to support batch operations, like inserts. Once you have decided on NHibernate, you have to solve this issue.

Slow insertion
The basic way how to insert mapped entity into database is:
SessionFactory.GetCurrentSession().Save(object);
But what happens when I try to insert many entities? Let's say I want to persist
  • 1000 libraries
  • each library has 100 books = 100k of books
  • each book has 5 rentals - there are 500k of rentals 
It's really slow! The insertion took exactly 276 seconds! What's the problem?
Each SQL insert is sent to the server within its own request.

Batch processing with using adonet.batch_size
You can set the property adonet.batch_size in your hibernate configuration to tell NHibernate that it can send more queries to the SQL server within one statement. I'm going to set this value to 100. What's the improvement? The insertion now takes 171 seconds. Better than 276! But isn't it still a lot of time? Yes it is!
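For reference, the property goes into the hibernate configuration; a sketch of the relevant fragment of hibernate.cfg.xml (the value 100 matches the measurement above):

```xml
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <!-- group up to 100 statements into one round-trip to the SQL server -->
    <property name="adonet.batch_size">100</property>
  </session-factory>
</hibernate-configuration>
```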
The major problem is that NHibernate's standard insertion via Session.Save is not intended for batch processing. NHibernate generates events, goes through the mapping, and doesn't group insert statements together properly by default. Obviously, this must take some time. Now it's time to introduce ...

Stateless session
NHibernate's developers are smart guys, so this significant functionality couldn't stay in a "not intended for batch processing" state. The stateless session is the tool intended for batch processing.

The stateless session is a lightweight version of the session: it doesn't raise as many events, it's fast, it just generates one insert for the given object according to the mapping. Because it's built for speed, it naturally has some drawbacks.
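A minimal sketch of how a stateless session is obtained and used (names are illustrative; unlike the standard session, it is opened directly from the session factory and you manage the transaction yourself):

```csharp
using (IStatelessSession session = SessionFactory.OpenStatelessSession())
using (ITransaction tx = session.BeginTransaction()) {
    foreach (Library library in libraries) {
        // one plain INSERT per call - no events, no first level cache, no cascades
        session.Insert(library);
    }
    tx.Commit();
}
```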

Stateless session's drawbacks
  • the stateless session isn't compatible with the standard NHibernate session! It is a different interface because it has a completely different purpose. Spring.net support is missing, so you can't use the transaction template. You must handle all that stuff by yourself.
  • because of the intended fast behavior, the stateless session doesn't handle any cascade operations on children. You must manually push all objects to the session: all children, their children, etc.
The last point seems to be a very unpleasant drawback, but if you look at the previous picture showing NHibernate profiler, you can see the major benefit of this approach.

Although I've set adonet.batch_size to 100, only 5 inserts are sent to the SQL server within one statement. NHibernate groups inserts only for the same type of entity. You aren't able to achieve an optimized query count using the standard way.

As I've said, you must call the Insert method for each entity, so your code can group all the inserts of each specific entity. Here are the results of the insertion:
  • 149 seconds - no advanced grouping of the inserts sent to the SQL server - the insertion of the first library is followed by the insertion of its books, then all the books' rentals, then another library - we still aren't fully utilizing the power of adonet.batch_size because only 5 inserts are sent in one statement
foreach (Library library in libraries) {
  session.Insert(library);
  foreach (Book book in library.Books) {
    session.Insert(book);
    foreach (Rental rental in book.Rentals) {
      session.Insert(rental);
    }
  }
}
  • 86 seconds - first all libraries are processed by the session's Insert, followed by all books and then all rentals - this approach uses the batch size efficiently, because for 100k books it sends only 1000 statements to the SQL server, each containing 100 inserts, followed by a set of 5k insert statements for the rentals
foreach (Library library in libraries) {
  session.Insert(library);    
}

foreach (Library library in libraries) {
  foreach (Book book in library.Books) {
      session.Insert(book);
  }
}

foreach (Library library in libraries) {
  foreach (Book book in library.Books) {
    foreach (Rental rental in book.Rentals) {
        session.Insert(rental);
    }
  }
}
  • 80 seconds - adonet.batch_size = 1000

The stateless session is efficient!
The best conclusion is a small summary of the measured times, because it shows exactly where the main benefit of the stateless session appears. The example persists (1k + 100k + 500k) 601k entities.

session type       | adonet.batch_size | additional grouping | time [s]
-------------------|-------------------|---------------------|---------
standard           | no                | no                  | 276
standard           | 100               | no                  | 171
stateless session  | 100               | no                  | 149
stateless session  | 100               | yes                 | 86
stateless session  | 1000              | yes                 | 80

If you need to improve your application's insertion time, just use the stateless session.

You can also see Series of .NET NHibernate performance issues to read all series articles.

Monday, November 29, 2010

HD2 + Energy rom - improve battery life

Martin Podval
Some time ago I criticized the battery life of my HD2 flashed with energy rom. Last week I installed a new version and it seems that the HD2 can easily survive almost four days on one charge! I usually browse the internet at least 30 minutes per day, write a few messages, sync to google exchange, and make a couple of calls.

How to improve HD2 battery life?

  • install energy rom :-)
  • definitely switch off the CHT (Cookie Home Tab) Editor's lock screen. Apart from making your device really unstable, it drains the battery a lot - use WM's default, because the android-like version can be unlocked in a pocket too easily - a significant battery life improvement
  • use GPRS instead of 3G - a significant battery life improvement
  • lower the backlight to 20% - it's enough for almost all cases
  • switch off the vibration on virtual key presses
  • install all programs into main memory
  • switch off the weather animation widget

Wednesday, November 17, 2010

NHibernate performance issues #2: slow cascade save and update (flushing)

Martin Podval
What's the most powerful NHibernate feature, apart from object mapping? Cascade operations, like insert, update or delete. And what's the best-known NHibernate performance issue? Cascade saving.

Cascade insert, save and update

If you let NHibernate manage your entities (e.g. you load them from persistence), NHibernate can perform all persistence operations for you, including automatic:
  • insert
  • update
  • delete
Everything depends only on your cascade settings. What does the documentation say?
cascade (optional): Specifies which operations should be cascaded from the parent object to the associated object.
The attribute declares which kind of operation will be performed in a particular case. All this can be adjusted for the traditional parent - child pattern.

The following code declares what happens to children (books) when the parent entity (Library) is affected by any of the mentioned operations.
HasMany(l => l.Books).
  Access.CamelCaseField().
  AsBag().
  Inverse().
  KeyColumn("LIBRARY_ID").
  ForeignKeyConstraintName("FK_BOOK_LIBRARY_ID").
  ForeignKeyCascadeOnDelete().
  LazyLoad().
  Cascade.AllDeleteOrphan();
As you can see, each child will be affected by every kind of operation performed on the parent object. Obviously there are more options: it needn't do anything, it can only cascade updates, etc.

That's all about cascade operations; it's a pretty and fine NHibernate feature and it's well described in the manual.

Cascade update issue

We have defined cascade operations. What's the problem?

When NHibernate finds it appropriate, it goes through all the relations and objects stored in the first level cache - the session - checks all the dirty flags and performs the proper operations, which can be very expensive.

What does "finds it appropriate" mean? It means when NHibernate flushes the session to the database (to the opened transaction scope). That can happen in the following situations:
  • before execution of any query - HQL or criteria 
  • before commit 

You can see it in the log. It looks like the following snippet:
done cascade NHibernate.Engine.CascadingAction+SaveUpdateCascadingAction for collection: eu.podval.NHibernatePerformanceIssues.Model.Library.Books
NHibernate.Engine.Cascade CascadeCollectionElements:0 deleting orphans for collection: eu.podval.NHibernatePerformanceIssues.Model.Library.Books
Let's show the problem in an example. Assume that you let NHibernate manage a pretty huge number of objects, e.g. you fetched them from the database. NHibernate stores all of them in the first level cache - the session. Then you execute a few HQL queries and finish with a commit. NHibernate calls flush before each action. It's a serious performance hit!

NHibernate can spend a huge amount of time checking all the loaded data for changes, even if you only intend to read it.

Avoiding the described situation can make your application significantly faster.

Avoid needless cascade save or update

1. Read only transaction
First of all, you can tell NHibernate that it needn't perform the checking because you aren't going to write any changes. Simply make your transaction read-only. I'm using spring.net for transaction management.
[Transaction(TransactionPropagation.RequiresNew, ReadOnly = true)]
public void FindAllNewBooks(IEnumerable<Book> books);
NHibernate won't perform any cascade checking because you've told it that you aren't going to update the data. But what if you want to update the data?

You can use a write transaction in all the places where you need to write data - but then you lose the A, atomicity, from ACID for the whole business operation.
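When spring.net isn't in play, a similar effect can be achieved directly on the session via its FlushMode; a hedged sketch (FlushMode.Commit defers dirty checking to commit time instead of running it before every query):

```csharp
ISession session = SessionFactory.GetCurrentSession();
// flush (and therefore cascade dirty checking) happens only at commit,
// not before each HQL or criteria query
session.FlushMode = FlushMode.Commit;
```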

2. Evict entity from session
You can tell NHibernate not to care about an entity in the future, i.e. not to keep the entity in the session; NHibernate will forget about the entity.
SessionFactory.GetCurrentSession().Evict(library);
But be careful about evicting an entity from the NHibernate session. If NHibernate forgets the entity, there is no way to perform lazy loading, so you can't use this solution if children or properties are lazy loaded.

3. Set Status.ReadOnly on entity when it's loaded
There is a really tricky way to set the read-only attribute in NHibernate's internals; see the article Ensuring updates on Flush.

I'm not using this approach because it's really low-level.

4. Use IStatelessSession for bulk operation
NHibernate provides the IStatelessSession interface to perform bulk operations. I'd like to write a whole article about the stateless session later.

5. Isn't there a really simple way to fetch an entity as read-only?
No, there isn't. Although the basic Factory pattern is often declared with a method FindById({id}, bool forUpdate);, NHibernate is not able to provide this kind of functionality; you must use one of the described work-arounds.

Summary

If you want to develop a fast and scalable application, you need to deal with cascade save and update. It's the first issue you'll find in the log.

The most useful and safest way is to decide which transactions can be marked as read-only. Mark as many as possible, because most use-cases of an average application only read data.

If you really need to write data, you should lower the number of entities placed in the first level cache - the session. You'll avoid the situation where your code loads a thousand entities, you add one new one, and all thousand and one are checked for dirty data.

If you insert thousands of entities into the database, e.g. in an import use-case, just use the stateless session.

You can also see Series of .NET NHibernate performance issues to read all series articles.

Tuesday, November 16, 2010

HD2 Energy rom + CHT editor 2.0 = still unstable

Martin Podval
I was really curious about the new CHT (Cookie Home Tab) Editor 2.0. It's the UI heart of all work with this device.

Without undue hesitation, I yesterday installed the famous energy rom, including the final CHT 2.0, onto my HD2. My experience is really bad; I fear that it's still a really unstable piece of software.

The CHT Editor brings really great new graphics, no doubt about that; see the following screen-shots.




The downside of the new graphics is an unstable device; I was forced to restart my HD2 four times today alone. Why?

The first restart followed the execution of the google maps app: I wasn't able to return to the today screen. After an hour, I tried to make a call. I chose a person, clicked on the photo and the call started, but I was unable to get to the "call screen". The last case was again about the music player and the today screen; I was again unable to return back.

I'm silently waiting for the next release of the energy rom; I hope that it will work. You should wait too, because it's annoying to restart the HD2 every hour.

Tuesday, November 9, 2010

Series of .NET NHibernate performance issues articles

Martin Podval
I've spent the last four months implementing NHibernate persistence for our product. I'd like to provide a set of articles regarding performance issues of NHibernate usage.

The NHibernate team has released a huge manual of 189 pages. It contains the basic description allowing a developer to write persistence and not totally mess it up. But if you want to develop a fast application, you need to read discussions (such as those at stackoverflow.com) and solve the problems one by one. Based on my experience, I've decided to write a series of articles about NHibernate, focusing on performance.

Prerequisites
All the following articles will contain examples. I love spring.net, so it would be nonsense not to use it, as it integrates nicely, as does log4net.

All examples described or used throughout the series use three classes as the domain model. Following Domain Driven Design, there is a root aggregate Library, a child Book and its child Rental. Don't linger on various details, like whether the book's author should be a separate aggregate or whether a library is identified by its name. It's already certain that the domain model will need to change, because there are still no identifiers at the database level.

The following picture defines the relations between all the domain classes in standard UML:



Examples at github.com
All examples are placed at github.com, see: https://github.com/MartinPodval/eu.podval/tree/master/NHibernatePerformanceIssues

All series parts
  1. NHibernate performance issues #1: evil List (non-inverse relationship)
  2. NHibernate performance issues #2: slow cascade save and update (flushing)
  3. NHibernate performance issues #3: slow inserts (stateless session)
  4. NHibernate performance issues #4: slow query compilation - named queries

NHibernate performance issues #1: evil List (non-inverse relationship)

Martin Podval
Lists are evil, at least when using NHibernate. You should reconsider whether you really need a List implementation, because it has an unsuitable index column owned by the parent, not the children. A List can't be used in an inverse relationship, which implies a few (but major) consequences:
  • extra sql UPDATE to persist mentioned index value
  • unscalable addition to the list - NHibernate needs to fetch all items and add new item after
  • inability to use fast cascade deletion by foreign keys
  • inability to use IStatelessSession for fast data insertion

Basic theorem: you don't need Lists! We'll discuss each bullet in detail below.

What is inverse relation?

First of all, it's necessary to clarify what an inverse relation means: the reference between parent and child isn't held by the parent but by the child! See the following picture:


Here is the NHibernate mapping definition for an inverse relation using the excellent Fluent NHibernate:
HasMany(l => l.Books).
  Access.CamelCaseField().
  AsBag().
  Inverse().
  KeyColumn("LIBRARY_ID").
  ForeignKeyConstraintName("FK_BOOK_LIBRARY_ID").
  LazyLoad().
  Cascade.AllDeleteOrphan();

Or hbm mapping:
<bag access="field.camelcase" cascade="all-delete-orphan" inverse="true" lazy="true" name="Books" mutable="true">
  <key foreign-key="FK_BOOK_LIBRARY_ID">
    <column name="LIBRARY_ID" />
  </key>
  <one-to-many class="Book" />
</bag>

Unscalable addition to List

Each item stored in a standard (non-inverse) List holds an index in a special column defining the collection order. It's a simple smallint index.

Imagine the situation where you want to insert a new item into the middle of the list. The indexes of all items in the rest of the list need to be incremented. NHibernate doesn't execute an update query incrementing the indexes; it fetches all the items from the database instead.

Let's imagine the described process for a huge number of children. Let's say the parent has 50 thousand children. The object graph is stored in the database, no object is cached. The parent is loaded from the database and the children list is marked as lazy. If you need to insert (or add) a new item to the collection, all 50 thousand items are loaded too. Again, again and again, every time you perform the operation.

Addition to a List is totally unscalable.

Inability to use fast cascade deletion by foreign keys

The best unified approach to deleting a huge (varied) object graph is to use cascade deletion hung on foreign keys. I'll describe this approach in a separate article in the near future.

Cascade deletion is based on the automatic removal of orphaned items. If the relation between parent and child disappears, the orphaned child is removed too. As described, cascade deletion is intended to be used along with an inverse relation.

The following code snippet shows an example of a SQL foreign key with cascade deletion:
alter table [Book] 
  add constraint FK_BOOK_LIBRARY_ID 
  foreign key (LIBRARY_ID) 
  references [Library] 
  on delete cascade

Extra sql update to store item's index

In the case of a non-inverse relation, NHibernate inserts the parent first and then all the children. After these operations, an extra update is produced to set the index column. Using the already described situation with 50 thousand children, NHibernate produces 50 thousand SQL inserts and then sends an additional 50 thousand updates to the SQL server - twice as many SQL statements!

See the following two SQL statements extracted from the log:
INSERT INTO [Book] (Isbn, Name, Pages, LIBRARY_ID, Id) VALUES (...);

UPDATE [Book] SET LIBRARY_ID = ... WHERE Id = ...;

What to pick instead of List?

It's easy to write that lists are evil. But what now, what to pick instead? There are three options.

1. In almost all cases, you really don't need a List. See the recommended solution below to find out which collection to use.

A List's index declares and defines the list's order. In almost all cases, the list can be sorted according to some property of the items. Does it make any sense to sort the list according to when items were added? I think it doesn't. A list usually needs to be sorted according to the item's creation date, the item's size, the count of the item's replies, etc. - simply according to some property of the item.

As our example defines, the list of rentals should be sorted according to the rental's start date; see the mapping:
HasMany(b => b.Rentals).
  Access.CamelCaseField().
  AsBag().
  Inverse().
  KeyColumn("BOOK_ID").
  ForeignKeyConstraintName("FK_RENTAL_BOOK_ID").
  ForeignKeyCascadeOnDelete().
  LazyLoad().
  OrderBy("StartOfRental").
  Cascade.AllDeleteOrphan();

2. You need to sort the list according to an index and you can move the index to the child entity. Map it as a standard private property and then sort the whole collection according to that property - see the previous example.

See an example of an index property placed on the child:
private int OrderIndex {
    get { return Book.Rentals.IndexOf(this); }
    set { }
}

The drawback is that you need to expose the parent's children as a List (instead of ICollection or IEnumerable) because the child has to be able to find out its position within the list.

3. You need to sort the list according to an index, you can't change the domain model and it's impossible to sort the items according to any item property. There is no other choice than to simply live with the non-inverse collection along with all its disadvantages described above.

Recommended way to use collections with NHibernate persistence
  1. The parent entity contains Add and Remove methods for each collection
  2. Use ICollection as the collection interface
  3. If you need to expose the whole collection, use IEnumerable - it's a read-only view which supports sequential fetching, etc.
Here is a code snippet of the Book entity.
public class Book {

    private ICollection<Rental> rentals = new List<Rental>();
    ...
    public void AddRental(String rentee) {
        ...
        rentals.Add(new Rental(this, rentee));
    }
    ...
    public IEnumerable<Rental> Rentals {
        get { return rentals; }
    }
}
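Usage is then straightforward; a sketch built on the entities above (the Rental constructor signature follows the AddRental body):

```csharp
// Sketch: the collection is mutated only through the parent's Add method
// and exposed read-only through IEnumerable.
Book book = new Book();
book.AddRental("John Doe");

foreach (Rental rental in book.Rentals) {
    Console.WriteLine(rental); // enumerates the read-only view
}
```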

Summary
You should be wary of using Lists with NHibernate persistence. They can bring you serious performance issues. Try to re-design your collections to be index independent and provide sorting by something other than a simple index.

Examples at github.com
All examples are placed at github.com, see: https://github.com/MartinPodval/eu.podval/tree/master/NHibernatePerformanceIssues

You can also see Series of .NET NHibernate performance issues to read all series articles.

Friday, November 5, 2010

Git on Windows: MSysGit

Martin Podval
I started to use Git today. I had read a lot of discussions saying there is no good tool for the Windows platform. After some thought, I decided to use TortoiseGit. I also feared the difficult work related to Git, as a lot of articles list many instructions. As I already said, I decided on TortoiseGit because I'm used to working with TortoiseSvn, but for a start, MSysGit is enough. So this article is about MSysGit; the next will be about TortoiseGit.

How to start with MSysgit on local machine?
  1. Download and install Git for Windows
  2. Create source code directory for your git app
  3. Right-click the directory in your favorite file browser. The menu should contain the item "Git init here". It initializes the chosen directory to be git-abled :-)
That was your first usage of Git.

Commit data to local Git repository

Now you can add a file - your first source code - to the created directory. When you are ready to commit changes to your local git repository, follow these instructions.
  1. Right-click the directory.
  2. Choose "Git Add all files now". The command adds all files to git control. No commit so far; let's say it's similar to subversion's "Add file".
  3. Right-click again, now choose "Git gui" item to start graphical GIT client.
  4. You should see your changes prepared to be committed to the local git storage. You can write your comment and commit the changes.

Push data to remote server - github.com

As I said, your commits so far affected only the local repository. If you decide to share (or back up) your sources to a real server, that's another story :-)

To do that, I signed up with the free-for-public-projects hosting service github.com. It seems really good, with a very nice web UI. After your registration, you obviously need to create a new repository; just follow the website's instructions.

How to push data?
  1. First of all, you need to create an SSH public key. That's the simple way to prove your identity. MSysGit can generate the key for you - Help -> Show SSH Key -> Generate Key.
  2. Insert the key into your profile at github.com. Click "Account settings", choose "SSH Public Keys". Now your key is registered.
  3. Let's switch focus back to the MSysGit GUI to upload the repository to the server.
  4. Select the "Remote" menu item and choose "Add remote". Fill in the location you received when you created the repository. You can find the url on the repository overview page. The url has the following pattern: git@github.com:{your-user-name}/{project-name}.git. After filling it in successfully, the remote github.com repository is paired with your local git repository.
  5. Now you are able to push the data to the remote server. Choose "Remote" -> "Push" and confirm the dialog.
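For reference, the same workflow can be driven from the command line that MSysGit installs (a sketch; the remote url uses the placeholder pattern mentioned above, and the identity flags on commit are just to make the sketch self-contained):

```shell
mkdir MyApp && cd MyApp
git init                              # "Git init here"
echo "hello" > readme.txt             # your first source file
git add .                             # "Git Add all files now"
git -c user.name=you -c user.email=you@example.com commit -m "first commit"
# pairing and pushing to github.com ("Remote" -> "Add remote", "Remote" -> "Push"):
#   git remote add origin git@github.com:{your-user-name}/{project-name}.git
#   git push origin master
```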

I'm really interested in how much I will like working with Git; right now I really don't understand how potential conflicts are solved. I'll probably see later :-)

Thursday, October 28, 2010

HD2 Energy Rom 10/14 steals battery!

Martin Podval
I really like the energy rom for my HD2. After a month and a half I re-flashed my HD2 from the late-august energy rom to the current one - 14.10.2010. I chose the great looking .Sencity theme, see the attached screenshot. I love it.

But the worst surprise was waiting for me. After two weeks of use I can say that the current energy version consumes twice as much battery. The HD2 used to drain the whole battery within three days, but the current version is easily out in one or one and a half days. Consider whether you really need to move to the new version!

I suspected the lock screen of Cookie Home Tab beta 2, but switching to the normal built-in one didn't improve the battery consumption, and the only app I have installed is the great Twitter client MoTweets. Maybe it's caused by the whole beta version of CHT. NRG still supports CHT in the final 1.8.5 version. So, I'll try it :-)

Wednesday, October 27, 2010

C#: LINQ and Foreach performance comparison

Martin Podval
I'm currently solving performance issues in what we programmed within the last milestone. A lot of bugs have already been fixed, but one very interesting issue hasn't.

I have liked and preferred LINQ queries over a simple foreach loop. The mentioned performance issue was based on the following query.

Entity entity = entities.Where(e => e.Id.Equals(id)).FirstOrDefault();

Our list contains a few items, in the worst case only one. The search is necessary because there can obviously be more items. I was surprised by the results of a simple test, which is attached at the bottom of this article.

The test simply inserts 1, 50 and finally 100 unique entities into a list. Each entity holds only a string identifier (UUID). Each listed algorithm tries to find the entity placed at the end.

I have tried these approaches:

  1. LINQ query
  2. simple foreach
  3. map having entity id as key
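The three approaches can be sketched as follows (a self-contained reconstruction; the entity class and variable names are mine, not the original test code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Entity {
    public string Id { get; private set; }
    public Entity(string id) { Id = id; }
}

class Program {
    static void Main() {
        // 50 unique entities, each identified by a UUID string
        List<Entity> entities = Enumerable.Range(0, 50)
            .Select(i => new Entity(Guid.NewGuid().ToString()))
            .ToList();
        string id = entities[entities.Count - 1].Id; // search for the last one

        // 1. LINQ query
        Entity byLinq = entities.Where(e => e.Id.Equals(id)).FirstOrDefault();

        // 2. simple foreach
        Entity byForeach = null;
        foreach (Entity e in entities) {
            if (e.Id.Equals(id)) { byForeach = e; break; }
        }

        // 3. dictionary having the entity id as key (built once, O(1) lookups)
        Dictionary<string, Entity> map = entities.ToDictionary(e => e.Id);
        Entity byMap = map[id];

        Console.WriteLine(byLinq == byForeach && byForeach == byMap); // prints "True"
    }
}
```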

The result is very interesting, as the attached table and chart show. The map was included only for comparison; it obviously doesn't support any real querying.

Conclusions: LINQ, foreach or dictionary?



It seems that querying an IList using LINQ spends significant time initializing the whole machinery. A LINQ query can be almost 10x worse than a simple foreach for a small amount of data. Note that for the described issue, the map is the total winner because it has a constant search response.



Reject LINQ? No!

The results are not as bad for LINQ as they may seem. Consider that the roughly 7 seconds is for one million calls in the case of 50 items in the list. So LINQ takes 7 microseconds for one search, foreach six. There is also almost no difference in absolute terms when there is a small number of items in the list: LINQ 0.9 microseconds, foreach 0.1. I believe that LINQ should be rejected only for very performance-tuned solutions.

Sunday, October 17, 2010

JDownloader: hotfile, rapidshare best downloader

Martin Podval
I used Free Download Manager to download my favorite TV series (e.g. The IT Crowd or the interesting new BBC Sherlock) from well-known file hosts like Rapidshare or Hotfile. Then I tried JDownloader, and I'm really impressed by all the features the tool provides in comparison with standard lightweight managers.


Interesting features
  • you can simply copy links into the clipboard at your favorite forum and switch focus back to JDownloader; it automatically uses these links as a data source, adds them to the link grabber, and then checks their online status (whether they are downloadable)! You can see their availability in a few seconds
  • support for a huge number of paid accounts. You simply add your credentials into the system, add links, and it works and downloads the files (you obviously need to possess these accounts :-))
  • built-in extraction support: if you add links to data spread across multiple RAR archives, JDownloader's plugin can unrar them after a successful download and even delete the archives when extraction is finally done
  • using paid and free accounts together: assume you add two simultaneous downloads and try to download another file placed on the same host. JDownloader tries the free account, waits until the captcha succeeds, and downloads the last one via the free account
  • JDownloader also creates a container directory wrapping each download, so files are not all downloaded into one directory but spread into new ones according to the downloaded file names
  • great plugin support; automatic computer shutdown is supported too. JDownloader can also be kept up to date by a fine online updater that simply downloads all new system parts and plugins
Downsides? Are you still waiting even with a paid account?

I was about to uninstall JDownloader because it always waited even though I had correctly set up my paid Rapidshare account! What happened? Although I inserted my credentials, I didn't enable the account globally for the whole application, so JDownloader tried to download all files through a free account. Strange. The described global switch is that small check button in the bottom-left corner of the application.

Miniwelt - Mini World, Lichtenstein, Germany

Martin Podval
Do you want to see the best-known buildings in the world all together in one large garden? Visit Lichtenstein, a small town near Dresden in eastern Germany. The Miniwelt, meaning "mini world" in German, allows you to do that in a single afternoon.


The Eiffel Tower, the pyramids, the White House, the Statue of Liberty, the Leaning Tower of Pisa, and many more. See the image gallery.

What else in eastern Germany? Lichtenstein is about 80 kilometers from Dresden via highway E40. Dresden, totally destroyed in World War II, has a historical center that is now amazingly rebuilt and worth seeing. Note that shopping centers are closed on Sundays in Germany :-)

Saturday, September 11, 2010

How to improve HTC HD2? Use energy ROM!

Martin Podval
I'm really satisfied with my cell phone, the HTC HD2. I bought the device in February and I truly like the change, the innovation.

Logically, I started to consider flashing the HD2 with one of the many ROMs developed by the guys around xda-developers.

There were three main reasons why I did it.
  • the device suffers from childhood ailments like faulty delivery of SMS, and I have no time to watch forums and apply released patches
  • there are a lot of handy applications for the HD2
  • there are many kinds of settings, and it's hard to fine-tune the device
The most widely known ROM is the Energy ROM.

What do you need to do to flash your device? There are two simple steps described in detail in the article. Note that you should do all the flashing on a notebook to guard against a power blackout.

What does the Energy ROM bring me?

  • I have an up-to-date device all the time, as I flash it a couple of times per month. The ROM contains almost 40 applications; see the complete list
  • the HD2 stays alive almost three days on one charge! This is the best improvement of all
  • fine-tuned and fresh graphics for the Sense environment; see the screenshots, and note that there are seven different graphic designs!
  • a fine-tuned environment where all applications work together as a complete bundle
  • the whole device is faster, which is the main reason for the name "energy" :-)
NRGZ28 has done, and keeps doing, great work; I like it and I appreciate it! I also have to admit that the HD2 sometimes went down without any reason, but it happened only a few times.

Thursday, August 26, 2010

Hockey arena: Modanovi Hosi RIP

Martin Podval
As I declared in my previous article, I have sold all my players, and the 30-day deadline is currently running, so my team will be deleted in a while.

Hockeyarena is going downhill, as there are no new improvements, so I have started to play Power Play Manager.

Modanovi Hosi is a former first Czech league team; its best result was sixth place there. Here is my history (shortened when my paid account expired) as well as the latest financial state.

Sunday, August 8, 2010

HTC HD2 produces terrible photos

Martin Podval
I relied on my HTC HD2 as a secondary photo-compact device on vacation. I also took an old-school 4 Mpx Samsung, which later became my primary compact. I'm still really disappointed by how poor the photos produced by the HTC HD2 are.

Despite the automatic or default settings of the HD2's photo module, everything looks like an oil painting. See the attached images.

Thursday, August 5, 2010

Summary of the most-read news

Martin Podval
When you return from vacation, you would like to read somewhere a summary of the most-read news from the period of your absence. Does anything like that exist on the news sites?
  • Idnes.cz offers an archive where you can choose a section and a date. Unfortunately it supports neither date ranges nor sorting by the most read.
  • lidovky.cz is more promising: it has a "most read" box on its front page, but unfortunately you can only pick the last day, three days, or a week, and only the five most-read articles are shown.
  • aktualne.cz offers probably the best archive, unfortunately without the option to sort by readership.
If we browse foreign news sites, we find considerably more interesting pages.
So I'm afraid that catching up on the news you missed during vacation is not such an easy nut to crack. Once again you have to wade through a lot of unimportant news to get to what matters.

Wednesday, July 14, 2010

Sell out of all players at hockeyarena.net

Martin Podval
I'm selling off all my players right now. One player will go on the market each working day. See the following image for further information about them.



Here is their loyalty, too:

Friday, April 30, 2010

Limited "unlimited" plans available in Czech T-Mobile

Martin Podval
Ufff, Czech T-Mobile sucks! The Czech Republic has the worst set of operators among emerging markets. Almost two years ago I concluded an agreement with T-Mobile: unlimited anytime minutes to five numbers for 500 Czech crowns per month. Looks good.

From next month, T-Mobile is changing the agreement to 300 anytime minutes at the same price. Sounds almost illegal. I don't want to do T-Mobile wrong: you can quit your agreement, but you have to do it 20 days before the new policy takes effect. I'll bet that 90% of customers let it slide.

Honestly, the described policy change comes in useful for me. I had been considering leaving T-Mobile for Vodafone in four months, but I was still under contract; now I can legally quit my contract right away. Good for me, but I don't understand the arrogance of the T-Mobile guys.

Besides this issue, I'm really disappointed with their mobile internet. I'm paying 10 euro per month for mobile internet with a 100 MB FUP and port restrictions. That means I sometimes get an invoice with a variable amount for paid internet. Why? I run a Twitter or Facebook app which doesn't use the standard port 80. It sucks, it really sucks!

Czech T-Mobile is a really funny company overall. Their web portal mostly doesn't work; I got a 50% failure rate last month. I would like to know which company makes this terrible piece of **** for them.

The worst thing? I believe they couldn't afford such a poor approach to customers in T-Mobile's home country, Germany. But we still live in the Czech Republic.

Thursday, March 4, 2010

Google apps and blogger, blogspot

Martin Podval
I spent half an hour searching for how to use the Blogger service with my own website, which is powered by Google Apps. As I went through all the forum topics, it seemed that Google Apps was not supposed to work with Blogger at all. OK, I created a new blog on the Blogger site and wrote a first article. But I didn't give up and tried searching again this evening. I was surprised that the solution is so simple. Sometimes reading the manual is worthwhile; see How do I use a custom domain on my blog?

Monday, March 1, 2010

Great reading: study material for Sun's SCJP by Sybex

Martin Podval
I have started reading the book SCJP: Sun Certified Programmer for Java Platform Study Guide: SE6 (Exam CX-310-065). This reading is great for a huge spectrum of programmers; even a senior Java developer will find places where he can gain new knowledge. I kept reading the SCJP book despite currently working on a project with .NET behind it. It helped me understand the stuff "behind the wall": what the compiler allows, how inheritance works. The book covers all the material likely to be used in technical interviews :-) You won't be caught so easily :-)

Powered by Blogger.