Time to say goodbye to NHibernate #2 – performance

In the previous part I discussed a few shortcomings of NHibernate which, from my point of view, fundamentally influence the decision whether or not to deploy this otherwise excellent framework. In this part I'd like to look at performance.

Performance can be tuned in many ways, as I discussed a few years ago. There are plenty of options: caching, stateless sessions and so on. But once you hit the limits of the database itself, you have to start dealing with less usual things that normally aren't tuned.

Constructing entities

We have always honored DDD on our project, so we wanted rich entities, not naked ones. By that I mean that all attributes/fields are encapsulated by get/set methods. If you don't want to expose private fields, you have to start using reflection, and at a large scale, when it's used for most mapped entities, that becomes a problem in .NET. It's slow.

There are several frameworks (e.g. FastReflect) that can cache reflective access; in our case this sped up raw access to private fields about 20x. Of course, integrating a custom reflection framework into NHibernate isn't easy. You have to develop your own component that constructs the entities and wire it in.
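To illustrate the caching idea, here is a minimal sketch of the general technique (not FastReflect's actual API): reflection is paid once to build a compiled delegate, and every subsequent read of the private field costs roughly as much as a direct call.

using System;
using System.Linq.Expressions;
using System.Reflection;

// Minimal cached field accessor: one reflective lookup, then a compiled
// delegate that reads the private field at near-native speed.
public static class FieldAccessor
{
    public static Func<T, TField> BuildGetter<T, TField>(string fieldName)
    {
        FieldInfo field = typeof(T).GetField(
            fieldName, BindingFlags.Instance | BindingFlags.NonPublic);
        if (field == null)
            throw new ArgumentException("Unknown field: " + fieldName);

        ParameterExpression instance = Expression.Parameter(typeof(T), "instance");
        return Expression.Lambda<Func<T, TField>>(
            Expression.Field(instance, field), instance).Compile();
    }
}

// Usage: build the delegate once, cache it, call it for every materialized row.
// Func<Book, string> getName = FieldAccessor.BuildGetter<Book, string>("name");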

Constructing entity hierarchies

The slow-reflection problem flows straight into the next issue you have to solve. You usually don't fetch a single class type from persistence but a whole class hierarchy. That means I ask the persistence framework for one object which has further dependencies. In DDD this is typically an aggregate, separated from the other aggregates (there is no direct relation between them), which contains collections of further kinds of entities, and so on recursively. In our case one aggregate type swelled to twenty kinds of classes.

Naturally, NHibernate constructs the whole aggregate from the mapping, i.e. from maps of keys, entities and class types. In other words, NHibernate stores the materialized entities in the session and, for the given keys and types, links them together into the resulting aggregate until a single root instance falls out at the end. Once there are more than a few types and instances, you quickly find out that constructing such an aggregate is quite slow.

NHibernate is a generic framework that tries to construct at runtime something you could write yourself. The difference between the generic programmatic approach and your hand-written, compiled one is of course significant, on the order of tens of percent of time saved.

All the programmer has to do is write their own linker and their own session capable of absorbing entities and linking them together through their identifiers.
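What such a hand-written session might look like is sketched below; all the names are hypothetical, this is not an NHibernate API. At its core it's an identity map keyed by entity type and id, which a linker can resolve foreign keys against:

using System;
using System.Collections.Generic;

// Hypothetical sketch: materialized entities are absorbed by (type, id),
// then a linker resolves identifiers against the map to wire the aggregate.
public sealed class LinkingSession
{
    private readonly Dictionary<Type, Dictionary<object, object>> map =
        new Dictionary<Type, Dictionary<object, object>>();

    public void Absorb(Type type, object id, object entity)
    {
        Dictionary<object, object> byId;
        if (!map.TryGetValue(type, out byId))
            map[type] = byId = new Dictionary<object, object>();
        byId[id] = entity;
    }

    public T Resolve<T>(object id) where T : class
    {
        Dictionary<object, object> byId;
        object entity;
        return map.TryGetValue(typeof(T), out byId) && byId.TryGetValue(id, out entity)
            ? (T)entity
            : null;
    }
}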

Mapping

Even though it may not seem so, the mapping itself can also affect your application's performance. Although NHibernate is one of the most flexible frameworks I have ever met, the mapping can't do everything. If you have more complex structures, e.g. collections of collections or dictionaries of collections, you have to come up with a workaround, because these cannot be mapped out of the box. In our case we always created a new (otherwise unnecessary) entity that wrapped the record, and mapped that instead.

So instead of the natural solution, a subtle limitation forced us to create an extra entity. Such an intrusion of course has a performance cost: creating extra entities, garbage-collecting them afterwards, and so on. In a server application you sometimes have to watch every needless entity.
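A hypothetical example of such a wrapper, for the dictionary-of-collections case mentioned above (the names are illustrative, not our real model; Book is the sample entity used throughout this blog):

using System;
using System.Collections.Generic;

// NHibernate cannot map IDictionary<string, IList<Book>> directly, so each
// dictionary entry is promoted to a mapped entity of its own.
public class BookShelf
{
    public virtual Guid Id { get; protected set; }
    public virtual string Category { get; set; }    // the former dictionary key
    public virtual IList<Book> Books { get; set; }  // the inner collection
}

// The owning aggregate then maps a plain collection of wrappers instead of
// the dictionary of collections it really wants:
//   public virtual IList<BookShelf> Shelves { get; set; }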

Garbage collection

The last topic I'd like to touch on is the production of entities and the garbage collection that follows. A scalable server-side application should not waste resources; if it does, it will soon hit the wall during performance optimization. This metric is always terribly hard to measure, because every profiler and every kind of measurement gives different results.

One thing is certain, though. Any ORM will produce more garbage than your custom persistence logic, which is simply tailor-made, because the ORM will always be more generic: it produces wrapper entities and more generic code, as I described in the previous chapter.

Deploying NHibernate to production will always increase the time spent in GC.

The verdict

After several years of using NHibernate we reached the point where, to speed up the persistence layer, we had to either rewrite certain components or replace NHibernate altogether. We started with the former. How we ended up, we'll see in the next chapter.

Time to say goodbye to NHibernate #1 – low transparency

I like frameworks. Using the power, intelligence and work of others, offered for free, strikes me as a good idea: it saves man-power and it teaches you. That's why we use NHibernate, Spring.NET, NUnit and others in our .NET product HP Service Virtualization.

After all, the ORM principle is a good idea. The team taking care of persistence boots into the persistence problem quickly, doesn't reinvent the wheel with concepts like optimistic locking and session handling, gains relative portability between databases, and writes the persistence layer in a fraction of the time. In the following series of articles I'd like to describe a few observations on why, after almost three years, we want to get rid of NHibernate in our product. The order of the problems doesn't necessarily match their severity 🙂

Lately I keep hearing the opinion that things are complex, because otherwise everyone would do them. That claim is almost at odds with good programming manners 🙂 About a year ago I read The Innovation Secrets of Steve Jobs. Although I'm no Apple lover, the book fundamentally improved my view of the complexity of written code. Since then I try to keep everything I do simple.

NHibernate is highly configurable, lets you set up many things, and often very skillfully hides a lot of complex functionality behind simple concepts. Still, for more complex applications that may not be enough.

One example of complexity for all: the NHibernate Session. An excellent concept which, with its optimistic approach and caching, saves round-trips to the DB. In practice it even guarantees a higher isolation level than the standard Read Committed: once you load an entity, you get the very same instance for the whole lifetime of the session, with certain exceptions, of course. And those exceptions are exactly the complexities I'm talking about.
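A small illustration of that identity-map behavior, using the plain NHibernate API (Library stands for any mapped entity; it's the sample used later in this blog):

using NHibernate;

public static class SessionIdentityDemo
{
    // Within one session, repeated Get calls for the same id return the very
    // same instance; the second call is served from the session cache.
    public static bool IsSameInstance(ISessionFactory sessionFactory, object libraryId)
    {
        using (ISession session = sessionFactory.OpenSession())
        {
            Library first = session.Get<Library>(libraryId);
            Library second = session.Get<Library>(libraryId); // no second select
            return ReferenceEquals(first, second);            // true
        }
    }
}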

The problem is that you have to expose this concept outside the database layer, e.g. via the transaction API in Spring, in the components that deal with transactions. Once you do that, you no longer have things under control. From then on your whole team works with the session, at least in the background. Every member then has to know how to work with the session correctly: what they'll get and when, when to update what, when to refresh. Session handling starts growing beyond the database layer. You can hide it behind repositories, but it already shows.

Once the number of users of such an API grows, there will always be a few individuals who use it wrong. Later everything degenerates to the point where, instead of saving queries to the database, every access queries it.

Combined with the magic, we inevitably slide towards the feeling that we would sleep better if we wrote the session component ourselves, enforced a clear lifecycle of the managed entities and had everything under control, because the rest of the team doesn't understand the session logic in detail and has no time to read the hundred pages of the NHibernate manual.

Why magic? Spring's transaction API handles the session lifecycle. We would like to have the session act as a unit of work for the whole processing of one call to our server. Unfortunately, Spring's transaction API doesn't behave in a way that would let us tell, from the outside, when it creates a new session and when it reuses one, and that is exactly the behavior we want to achieve, because it would guarantee consistency across the whole processing.

More magic? For example, with the Second Level Cache it's sometimes not clear why an older entity is kept than we expect, and several days spent debugging the NHibernate libraries bear no fruit. You then have to work around the whole concept, clearing the session or the cache here and there, or flushing so that the right data appears in it. The whole thing starts turning into a dangerous pile of hacks.

NHibernate is a powerful tool, but as always it normalizes things. Simplifying a concept always has one big disadvantage: it hides certain things that the framework decides for you. If you want to change them, you have to rewrite a component here and there. The default behavior simply doesn't always suit you.

If that's your case, better avoid NHibernate.

What's next? Performance. When and why to avoid NHibernate when you want high performance.

Book: The Art of Unit Testing: With Examples in .NET

I finished reading The Art of Unit Testing: With Examples in .NET a week ago. The book is categorized as an agile book, so forget the idea that it is something like the bible of unit testing. It summarizes the agile, pragmatic point of view on unit testing rather than being a programmer's manual on how to write unit tests correctly.

The first chapters define the basics of unit testing and point out the differences between unit and integration testing. The problem is that the text doesn't highlight those differences enough, so the reader may not understand why isolation is so necessary. It's followed by an explanation of stubs and mocks and a list of mocking frameworks.

The rest of the content is a sort of novel about the importance of unit testing and its purpose within a company. The most valuable chapter is almost the last one, which describes how to introduce unit testing as a new practice in your company. That text is applicable to any new technology you want to bring into your business.

As I've already said, this book is a sort of novel about unit testing, one text that can help you form your point of view on why and how to unit test, and that's all. It lacks the facts needed to understand how and why to use isolation frameworks, what can happen when you write tests without them (which happened in my company anyway 🙂), how you would unit test a particular class, and so on.

Read the book to round out your view of unit testing, but it's not suitable for unit testing novices.

Using a GUID as a primary key in MS SQL is an antipattern

We decided to persist our domain model into a database about a year ago, so besides the XML persistence our product can use any database system as its primary storage. The next step was the key failure: we designed the database schema according to the current state of our domain model. Why not, when NHibernate is so powerful that you can map almost every relation type, every scope of accessibility, and so on?

Besides the slow performance, which I've already written articles about and which we had fixed, another major issue appeared a few months ago. The performance of an MS SQL database during continuous insertion of medium-large rows degrades until the system appears almost dead. Why?

As I've written above, we decided not to adjust the model to common database recommendations and just persisted it as it was. The lifecycle of our domain model usually needn't be coupled with database persistence, so the primary keys of all domain entities were absolutely unique GUIDs.

It doesn't sound so dramatic; we hoped it would only occupy more space than a simple integer and maybe slow performance slightly, but now I know that choosing a GUID as a primary key is a terrible decision.

GUID as primary key decreases select performance a lot

I made a PoC of how to increase the performance of a few complicated select statements. They were no "join hell", but they were slow according to our acceptance criteria. I discovered the clustered index as a very powerful feature which can improve performance a lot by itself. On further inquiry I found that a simple select over a sorted table using a clustered index is 35% faster with an int key than with the previous GUID. Very nice!

Note that the table first had to be ordered by the index; for one million rows that took almost four minutes, so it was useless for us anyway.

GUID as primary key kills insertion performance

I believe you can keep tuning performance forever, so 35% up was great, though expected once you start playing with all those nice profiler apps. The worst issue we met was the one already referenced above.

The primary key was a unique GUID generated during model creation. The consequent inserts just reuse those IDs, no big deal. The blocker was the heavily degrading performance of those inserts. Let's look at real numbers: the first thousand inserts took approximately one second; when the database table contained almost one million rows, a thousand additional inserts took almost 20 seconds!

The database looked dead: the processor wasn't doing anything, nor was the hard drive. No deadlock was found; it just slept. The first suspect was the clustered primary key, but cutting it off only postponed the problem. After a few tests we discovered that the culprit was the primary key itself: dropping the primary key constraint returned the performance to the original, requested numbers. The reason is that random GUIDs land at random positions in the index B-tree, so almost every insert causes page splits and random I/O, whereas a sequential integer key always appends at the end. Unfortunately our system also reads during these insertions, so we were unable to switch all the keys off.

The current solution is to rewrite the model to use auto-generated integer primary keys. It also clarified our view of domain-driven design's value objects, so we avoid the meaningless use of the entity kind of object everywhere possible.
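A minimal sketch of the switch, assuming Fluent NHibernate mappings (LibraryMap and the property names are illustrative):

using FluentNHibernate.Mapping;

public class LibraryMap : ClassMap<Library>
{
    public LibraryMap()
    {
        // Before: application-assigned GUID primary key
        //   Id(l => l.Id).GeneratedBy.Assigned();
        // After: database-generated integer primary key
        Id(l => l.Id).GeneratedBy.Identity();
        Map(l => l.Name);
    }
}

If some keys must remain GUIDs, NHibernate's guid.comb generator is worth a look: it produces sequentially ordered GUIDs precisely to reduce this kind of index fragmentation.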

Do not use GUID keys within the database

I know there are certainly places where the GUID approach is usable, but beware of using them as keys. A GUID key:

  • takes significantly more space (4 bytes for an int vs. 16 bytes for a uniqueidentifier), which can be a serious issue when you use a database with a space limit, e.g. MS SQL Express
  • kills the database during insertion
  • slows select performance, 35% down with respect to int
  • slows joins to other tables
  • almost excludes your application from using a clustered index

NHibernate performance issues #4: slow query compilation – named queries

NHibernate provides many approaches to querying the database:

  • HQL
  • the Criteria API
  • LINQ to NHibernate
  • native SQL

The first three of these querying methods define the body of the query in something other than native SQL. That implies NHibernate must transform such queries into native SQL, according to the configured dialect, e.g. into a native MS SQL query.

If you really want to develop an application that is fast at all times, the described process can bring unpleasant behavior. How to avoid query compilation?

Compiled named queries

It's ridiculous, but everyone has already met a compiled (and named) query. If you have at least once browsed the log NHibernate produces, you must have met a set of lines similar to:

2010-12-13 21:26:42,056 DEBUG [7] NHibernate.Loader.Entity.AbstractEntityLoader .ctor:0 Static select for entity eu.podval.NHibernatePerformanceIssues.Model.Library: SELECT library0_.Id as Id1_0_, library0_.version as version1_0_, library0_.Address as Address1_0_, library0_.Director as Director1_0_, library0_.Name as Name1_0_ FROM [Library] library0_ WHERE library0_.Id=?

When NHibernate starts (a SessionFactory as a Spring.NET bean in my case), it compiles the queries it considers certainly useful. In the displayed case, NHibernate knows it will probably search for the Library entity in the database by its id, so it generates the native SQL query into its internal cache, and when the developer's code calls a find for the Library entity, it uses exactly this pre-compiled native SQL select.

What's the main benefit? If you called the library's find a thousand times, NHibernate would otherwise always have to compile the query. Instead of this inefficient behavior, NHibernate does it only once and uses the pre-compiled version later.

How to speed up your queries? Use pre-compiled named queries

As I've already written, NHibernate generates pre-compiled queries, stores them in the cache and uses them when needed. NHibernate is a great framework, so it makes this functionality available to you as well 🙂

You simply declare a list of HQL (or SQL) named queries within any mapping (hbm.xml) file. NHibernate loads the file, parses the queries and preserves the pre-compiled versions in its internal cache, so queries called at runtime don't have to be intricately parsed into native SQL; the already prepared ones are used instead.

How to use named queries?

  1. Define a new hbm.xml file which will include your named queries, e.g. named-queries.hbm.xml
  2. Place this file within an assembly which NHibernate searches for hbm mapping files. You may have to update your Fluent NHibernate configuration to search for these files. The following code will do it for you, or see FluentNHibernateSessionFactory.
m.HbmMappings.AddFromAssembly(Assembly.Load(assemblyName));

Now it's time to write your named-queries.hbm.xml file. It has the following structure:




<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <query name="Book.with.any.Rental">
    <![CDATA[
      select b from Book b
      where exists elements(b.Rentals)
    ]]>
  </query>
</hibernate-mapping>


How to use it?

IList books = SessionFactory.GetCurrentSession().GetNamedQuery("Book.with.any.Rental").List();

How big is the speed-up of named queries?

Let's say I'll use the Book.with.any.Rental query for some measurements, so we'll see how the omitted query compilation improves the test response.

I executed the test for both the named query and plain HQL. According to the debug log, the plain HQL case spent 40 ms parsing HQL into native SQL.

Note that everything written so far applies only to the first call. NHibernate is a tricky framework: it caches queries for you automatically once it has compiled them for the first time. Let's call the method getting books with any rental two times (the first level cache is cleared between the calls):

  • the first call took 190 ms
  • the second one only 26 ms

It's also fair to admit that the database has its own query cache 🙂 The result is clear anyway.

What are the real advantages of named queries?

It doesn't seem such a brilliant thing to toil over writing the queries in an XML file. What are the real benefits?

  1. Speed (of the first call) – the described example saves 40 ms per method call. That doesn't seem like much, but imagine you are developing a huge project with almost a hundred queries. It can save a lot of time! You should also notice that the chosen query was very simple; in my experience, the compilation of a more complicated query takes at least 200 ms. That's not a small amount of time when you develop a very fast application
  2. HQL parsed for errors at startup – you'll find out whether your query is correct or wrong at application startup, because NHibernate does these things when it starts. You don't have to wait until the desired query is actually called
  3. Clean code – you aren't mixing C# code together with SQL (HQL) code
  4. The ability to change query code after the application is compiled – consider that you can change your HQL or SQL even if the application has already been compiled. You can simply expose the named query hbm.xml file as an ordinary XML file and tune your queries at runtime, i.e. without additional compilation

You can also see the Series of .NET NHibernate performance issues to read all the articles in the series.

NHibernate performance issues #3: slow inserts (stateless session)

The whole series of NHibernate performance issues isn't about simple use-cases. If you develop a small app, such as a simple website, you don't need to care about performance. But if you design and develop a huge application, and you have decided to use NHibernate, you'll be solving various sorts of issues. Today's use-case is obvious: how to insert many entities into the database as fast as possible?

Why am I talking about all this? There are a lot of articles about how NHibernate's original purpose wasn't to support batch operations such as inserts. Once you have decided on NHibernate, you have to solve this issue.

Slow insertion
The basic way to insert a mapped entity into the database is:

SessionFactory.GetCurrentSession().Save(object);

But what happens when I try to insert many entities? Let's say I want to persist

  • 1,000 libraries
  • each library has 100 books = 100k books
  • each book has 5 rentals = 500k rentals

It's really slow! The insertion took exactly 276 seconds. What's the problem?

Each SQL insert is sent to the server within its own server request.

Batch processing using adonet.batch_size
You can set the adonet.batch_size property in your hibernate configuration to tell NHibernate that it may send more inserts to the SQL server within one statement. I'm going to set this value to 100. What's the improvement? The insertion now takes 171 seconds. Better than 276! But isn't that still a lot of time? Yes, it is!
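For completeness, one way to set it, assuming a Fluent NHibernate configuration (the XML alternative is the adonet.batch_size property element in hibernate.cfg.xml; LibraryMap stands in for any of your mapping classes):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

// Builds a session factory with ADO.NET batching enabled.
ISessionFactory sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2008
        .ConnectionString(connectionString)   // assumed to be defined elsewhere
        .AdoNetBatchSize(100))                // up to 100 inserts per round-trip
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<LibraryMap>())
    .BuildSessionFactory();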

The major problem is that NHibernate's standard insertion via Session.Save is not intended for batch processing. NHibernate generates events, goes through the mapping, and doesn't group insert statements together properly by default. Obviously, that must take some time. Now it's time to introduce…

Stateless session
NHibernate's developers are smart guys, so this significant functionality couldn't stay in the "not intended for batch processing" state. The stateless session is the tool intended for batch processing.

The stateless session is a lightweight alternative to the standard session's Save: it doesn't fire so many events, it just generates one insert for the given object according to the mapping. It's fast, and being that fast, it naturally has some drawbacks.
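In code it looks like the following minimal sketch (sessionFactory and the entity classes come from the surrounding example):

using System.Collections.Generic;
using NHibernate;

public static class BulkInserter
{
    // Stateless insertion: no events, no first level cache, no cascades.
    public static void InsertAll(ISessionFactory sessionFactory, IEnumerable<Library> libraries)
    {
        using (IStatelessSession session = sessionFactory.OpenStatelessSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            foreach (Library library in libraries)
                session.Insert(library); // children must be inserted explicitly, see below
            tx.Commit();
        }
    }
}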

Stateless session’s drawbacks

  • the stateless session isn't compatible with the standard NHibernate session! It's a different interface, because it has a completely different purpose. Spring.NET support is missing, so you can't use the transaction template; you must handle all the stuff by yourself.
  • because of the intended fast behavior, the stateless session doesn't handle any cascade operations on children. You must manually push all objects to the session: all children, their children, etc.

The last point seems to be a very unpleasant drawback, but if you look at the previous picture from NHibernate Profiler, you can see the major benefit of this approach.

Although I had set adonet.batch_size to 100, only 5 inserts are sent to the SQL server within one statement. NHibernate groups inserts only for the same type of entity. You aren't able to achieve the optimal statement count using the standard way.

As I've said, you must call the Insert method for each entity, so you can group all inserts of each specific entity in your own code. Here are the results of the insertion:

  • 149 seconds – no advanced grouping of the inserts sent to the SQL server: the first library is inserted, followed by its books and all the books' rentals, then the next library, and so on. We still don't fully utilize the power of adonet.batch_size, because only 5 inserts are sent in one statement

foreach (Library library in libraries) {
    session.Insert(library);
    foreach (Book book in library.Books) {
        session.Insert(book);
        foreach (Rental rental in book.Rentals) {
            session.Insert(rental);
        }
    }
}
  • 86 seconds – first all libraries are pushed through the session's Insert, followed by all books and then all rentals. This approach uses the batch size efficiently, because for 100k books it sends only 1,000 statements to the SQL server, each carrying 100 inserts, followed by a set of 5k insert statements for the rentals

foreach (Library library in libraries) {
    session.Insert(library);
}

foreach (Library library in libraries) {
    foreach (Book book in library.Books) {
        session.Insert(book);
    }
}

foreach (Library library in libraries) {
    foreach (Book book in library.Books) {
        foreach (Rental rental in book.Rentals) {
            session.Insert(rental);
        }
    }
}
  • 80 seconds – adonet.batch_size = 1000

Stateless session is efficient!
The best conclusion is a small summary of the measured times, because it shows exactly where the main benefit of the stateless session appears. The example persists (1k + 100k + 500k) 601k entities.

session type      | adonet.batch_size | additional grouping | time [s]
standard          | not set           | no                  | 276
standard          | 100               | no                  | 171
stateless session | 100               | no                  | 149
stateless session | 100               | yes                 | 86
stateless session | 1000              | yes                 | 80

If you need to improve your application's insertion time, just use the stateless session.

You can also see the Series of .NET NHibernate performance issues to read all the articles in the series.

NHibernate performance issues #2: slow cascade save and update (flushing)

What's NHibernate's most powerful feature, apart from object mapping? Cascade operations, like insert, update and delete. And what's NHibernate's finest performance issue? Cascade saving.

Cascade insert, save and update

If you let NHibernate manage your entities (e.g. you load them from persistence), NHibernate can provide all the persistence operations for you, including automatic:

  • insert
  • update
  • delete

It all depends only on your cascade settings. What does the documentation say?

cascade (optional): Specifies which operations should be cascaded from the parent object to the associated object.

The attribute declares which kind of operation will be performed in the particular case. All of this can be adjusted for the traditional parent-child pattern.

The following code declares what happens to the children (books) when the parent entity (Library) is affected by any of the mentioned operations.

HasMany(l => l.Books)
    .Access.CamelCaseField()
    .AsBag()
    .Inverse()
    .KeyColumn("LIBRARY_ID")
    .ForeignKeyConstraintName("FK_BOOK_LIBRARY_ID")
    .ForeignKeyCascadeOnDelete()
    .LazyLoad()
    .Cascade.AllDeleteOrphan();

As you can see, every child will be affected by all kinds of operations performed on the parent object. Obviously there are more options: it needn't cascade anything, it can cascade only updates, etc.

That's all about cascade operations; it's a pretty and fine piece of NHibernate, and it's well described in the manual.

Cascade update issue

We have defined cascade operations. What’s the problem?

When NHibernate finds it appropriate, it goes through all the relations and objects stored in the first level cache (the session), checks all the dirty flags and performs the proper operations, which can be very expensive.

What does "finds it appropriate" mean? It means whenever NHibernate flushes the session to the database (to the opened transaction scope). That happens in the following situations:

  • before the execution of any query – HQL or criteria
  • before a commit

You can see it in the log. It looks like the following snippet:

done cascade NHibernate.Engine.CascadingAction+SaveUpdateCascadingAction for collection: eu.podval.NHibernatePerformanceIssues.Model.Library.Books
NHibernate.Engine.Cascade CascadeCollectionElements:0 deleting orphans for collection: eu.podval.NHibernatePerformanceIssues.Model.Library.Books

Let's show the problem with an example. Assume you let NHibernate manage a pretty huge number of objects, e.g. you fetched them from the database. NHibernate keeps all of them in the first level cache, the session. Then you execute a few HQL queries and finish with a commit. NHibernate calls flush before each of these actions. That's a serious performance hit!

NHibernate can spend a huge amount of time checking all the loaded data for changes, even if you intended only to read it.

Avoiding the described situation can make your application significantly faster.
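Before the individual work-arounds, note that flushing itself has a knob; a minimal sketch using the plain NHibernate API:

using NHibernate;

public static class FlushTuning
{
    // Defers flushing (and thus dirty checking) until commit instead of
    // flushing before every query, which is the default FlushMode.Auto
    // behavior described above.
    public static void UseCommitFlushing(ISession session)
    {
        session.FlushMode = FlushMode.Commit;
    }
}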

Avoid needless cascade save or update

1. Read-only transactions
First of all, you can tell NHibernate that it needn't perform the checking, because you don't intend to write any changes. Simply make your transaction read-only. I'm using Spring.NET for transaction management.

[Transaction(TransactionPropagation.RequiresNew, ReadOnly = true)]
public void FindAllNewBooks(IEnumerable books);

NHibernate won't perform any cascade checking, because you've told it you aren't going to update the data. But what if you do want to update the data?

You can use a write transaction in all the places where you need to write data, but then you lose the A, atomicity, of ACID for the whole business operation.

2. Evict the entity from the session
You can tell NHibernate not to care about an entity anymore, i.e. not to keep it in the session; NHibernate will forget about the entity.

SessionFactory.GetCurrentSession().Evict(library);

But be careful when evicting entities from the NHibernate session. If NHibernate forgets the entity, there is no way to perform lazy loading, so you can't use this solution if children or properties are lazy loaded.

3. Set Status.ReadOnly on the entity when it's loaded
There is a really tricky way: setting the read-only attribute kept in NHibernate's internals; see the article Ensuring updates on Flush.

I'm not using this approach because it's really low-level.

4. Use IStatelessSession for bulk operations
NHibernate provides the IStatelessSession interface for bulk operations. I'd like to write a whole article about the stateless session later.

5. Isn't there really a simple way to fetch an entity as read-only?
No, there isn't. Even though the basic factory pattern is often declared with a method like FindById({id}, bool forUpdate), NHibernate is not able to provide such functionality; you must use one of the described work-arounds.

Summary

If you want to develop a fast and scalable application, you need to deal with cascade save and update. It's the first issue you'll find in the log.

The most useful and safest way is to decide which transactions can be marked as read-only. Mark as many as possible, because most use-cases of an average application only read data.

If you really need to write data, you should lower the number of entities kept in the first level cache, the session. You'll avoid the situation where your code loads a thousand entities, adds one new one, and all thousand and one are checked for dirty data. One way to do that is sketched below.
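A blunt but common sketch of that idea, using the plain NHibernate API (mind lazy loading, as with Evict above):

using NHibernate;

public static class SessionHygiene
{
    // Pushes pending changes to the database, then drops all managed
    // entities so later flushes have nothing to dirty-check.
    public static void FlushAndClear(ISession session)
    {
        session.Flush();
        session.Clear();
    }
}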

If you insert thousands of entities into the database, e.g. in an import use-case, just use the stateless session.

You can also see the Series of .NET NHibernate performance issues to read all the articles in the series.