Corey Coogan

Python, .Net, C#, ASP.NET MVC, Architecture and Design

JQuery’s Dialog and Form Problems

Posted by coreycoogan on December 1, 2010


Sometimes you need to put form elements inside a jQuery UI dialog. This sounds simple enough, but when the dialog is displayed those elements can get lost: jQuery UI moves the dialog markup to the end of the document body, outside the form's DOM, so its fields are never submitted with the form.

The solution is simple: append the dialog element back to the form. Here's an example of how it's done.

<script>
$(document).ready(function() {

    //define the dialog for later use
    var dlg = $('#AddressVerification').dialog({
        autoOpen: false,
        closeOnEscape: false,
        modal: true,
        width: 550
    });

    //This is where we tie the dialog content back to the parent form
    $("#AddressVerification").parent().appendTo($("form:first"));

    //other code not pertinent to this
});
</script>
<form>
    <!-- form elements -->

    <div id="AddressVerification" style="display: none;">
        <!-- address form elements -->
        <p>
            <button id="btnSave">Save</button>
            <button id="btnCancel">Cancel</button>
        </p>
    </div>
</form>
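Why the fields vanish is easier to see with a toy model: a form only serializes inputs that are descendants of the form node, and jQuery UI re-parents the dialog markup to the end of the body. This DOM-free JavaScript sketch (plain objects standing in for real DOM nodes — not jQuery itself) shows the effect:

```javascript
// Toy model: a form only submits fields inside its own subtree.
// jQuery UI moves the dialog's markup to the end of <body>,
// taking its fields out of the form's subtree.
function serialize(formNode) {
  const fields = [];
  (function walk(node) {
    if (node.tag === 'input') fields.push(node.name);
    (node.children || []).forEach(walk);
  })(formNode);
  return fields;
}

const body = {
  tag: 'body',
  children: [
    { tag: 'form', children: [{ tag: 'input', name: 'email' }] },
    // the dialog div, re-parented here by .dialog() — outside the form:
    { tag: 'div', children: [{ tag: 'input', name: 'zip' }] },
  ],
};

console.log(serialize(body.children[0])); // [ 'email' ] — 'zip' is lost
```

The appendTo() call in the script above simply moves the dialog's wrapper back inside the form's subtree so its fields serialize again.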

Posted in jQuery, UI | 2 Comments »

Is Developing Software Hard Work?

Posted by coreycoogan on September 24, 2010


I got to thinking about this a couple months ago as I sat in my chair pondering what a pain it was to accomplish some task I was working on.  There was a lot to do and I thought briefly to myself, “this is hard work”.  It was brief because I immediately sat back and enjoyed a small chuckle.  Can I honestly say that developing software is “hard” work?

Working Hard for Real

Many of my friends from back home are in the trades – carpenters, electricians, plumbers, landscapers. If they heard me describe anything I do in my profession as hard work, I'd probably get punched in the teeth, perhaps deservedly so. When one of these guys has a day of hard work, it means breaking his back, burning thousands of calories, sweating by the gallon. Sometimes they work so hard they even hurt themselves, but they pick up the pieces and come back for more the next day.

My Kind of Working Hard

Now compare that to what I do.  When I am really working hard, would anyone be able to tell?  Could someone stop by my cube one day when I’m doing something easy and another day when I’m doing something hard and be able to tell the difference?  Would they be able to comment at the water cooler, “I just stopped by Corey’s cube and he is working really hard”.  I think not.

Oftentimes, working hard in the software profession is associated with the number of hours worked. I agree that this can be very difficult, and the signs of that kind of hard work are certainly visible to the naked eye. For the purposes of this post, though, I'm talking about a regular eight-hour workday.

Thinking is Work

Back to my friends in the trades for a moment.  They would find it hysterical to hear me come home and tell the wife, “I’m tired…I worked very hard today.  Will you rub my fingers?”. Worked hard doing what?  Sitting on my ass?  Pushing buttons?  Wiggling my mouse around?  The truth is, yes, all of the above.  Hard work is a relative term.

For starters, let’s look at the definition of work:

sustained physical or mental effort to overcome obstacles and achieve an objective or result

Notice the word mental? So we're covered from a semantics standpoint. Thinking actually is work – by definition! Going a little further, there's science showing that our brains consume quite a bit of energy: the more we think, the more energy the brain needs. It's no coincidence that stimulating our brains makes us hungry (which explains a lot about the typical IT break room and our physique as an industry). If we're consuming more energy and getting hungry, it must be hard work.

What does this Mean?

Great, so now I know: the harder I think, the harder I work. What does this mean? Do we as software developers ever take the easy way out because the alternative is so exhausting? Probably not. My gut tells me it's more likely laziness than anything else. Sure, sometimes the situation demands that we cut corners, but other times there's no good excuse.

It was Vince Lombardi who said:

The only place success comes before work is in the dictionary.

I say we embrace hard work.  If we are going to sit on our butts for eight hours pushing buttons, we might as well make them count.

Posted in Developer Life | Tagged: , | Comments Off on Is Developing Software Hard Work?

Good Developers Know When to Write Bad Code

Posted by coreycoogan on September 22, 2010


I read a blog post last week that stated plainly that if you "knowingly create a bad design or write bad code", you are a bad developer. The post went on to list four criteria that would save an offender who breaks this rule from earning the "Bad Developer" badge of shame. (UPDATE: I misquoted – the post actually said "one or more of the following".)

  1. Get upset because they know they are writing less than perfect code/design.
  2. Leave a comment stating why the code/design was done in this fashion.
  3. Add a ToDo for refactoring later.
  4. Write the imperfect code/design in such a way that refactoring it later will require less effort.

I read this post and had to disagree with such a dogmatic, black-and-white view. So here’s my take on the whole thing, breaking it down as simply as possible.

  • Time is finite.
  • Resources are finite.
  • Requirements are seemingly infinite.
  • Deadlines exist, sometimes arbitrarily and other times for real reasons like market pressures or legal diligence.
  • To meet deadlines when you are in the hole, there are two options:
    – Cut features and scope.
    – Cut corners.
  • Sometimes features and scope can’t be cut (see Deadlines).
  • Some tasks require commodity code – simple crud forms for updating contact information for example.
  • Some tasks require specialization code – getting to the meat of the domain to add real value.
  • Many clients, who are paying the bills, don’t care about the commodity code’s design principles as much as they care about their deadlines and commitments.
  • Refactoring takes time, and given that time is finite, must be prioritized from highest ROI to lowest ROI.
  • When commodity code is poorly written but works just fine and causes zero friction, choosing to refactor it over specialty code can come with a hefty opportunity cost.

Given this information, I say that a person can still be a good developer even if they knowingly write bad code. To me, a good developer knows when to write bad code and where to focus their time and energy.  In a perfect world, this would never happen – we would always do things the best way.  Obviously this isn’t reality and sometimes sacrifices have to be made. I believe that the good developer understands this and knows to sacrifice that which offers little value for that which offers great value.

Nobody is going to lose in the market because their “contact us” form is saving to the database in the code-behind. But choosing to devote an hour “doing it right” that could be spent refining the domain model or refactoring real friction points that cause actual pain can come at a high price. Like the old saying goes – Good, Fast or Cheap… pick any two.

Posted in Business, Developer Life | 5 Comments »

Using NHibernate from a Batch Program

Posted by coreycoogan on September 17, 2010


I use Batch Programs, aka Batch Jobs, all the time for tasks that aren't necessarily time-critical. In most organizations, there are many of these things running on servers all over the place. I recently had to write a batch job that did a bunch of database updates using NHibernate. The job updates two tables in a foreach loop. Within each loop, an NH transaction is started, committed or rolled back, and disposed of. I usually don't write all my code into the default program.cs file, but opt for writing some service to do the heavy lifting. I recently wrote one that ran into some problems.

I wrote the service as I usually would. I set up my constructor to take everything it would need and made a StructureMap registry to bootstrap all my dependencies so I could work with the existing components.

Here’s my original batch service constructor:

public ProfileUpdateService(
    ISession session,        //for managing transactions
    IProfileRepository repo  //for getting/saving
)
{
    //set private vars
}

My StructureMap registry was pretty basic. It takes an ISessionFactory instance in the constructor and uses it to set up the ISession resolution. The repository registration happens through a scanner's default conventions:

For<ISession>()
   .Use(() => _sessionFactory.OpenSession()); //use the lambda to ensure my factory is configured when this is actually accessed

Here’s what my original code looked like (sort of):

var records = _repo.GetAll();

if (!records.Any())
    return;

try
{

    foreach (var record in records)
    {

        try
        {
           
            _transaction = _session.BeginTransaction();
           
            var otherRecord = _repo.GetOtherThing(record.Something);
 
            //do some processing first then save
            _repo.DoFirstThing(record);
            
           //do some processing first, then save
            _repo.DoSecondThing(otherRecord);

            _transaction.Commit();
        }
        catch(Exception ex)
        {
            //log it
            //rollback
            _transaction.Rollback();
        }
        finally
        {
            if(_transaction != null)
                _transaction.Dispose();
        }
    }
    
}
finally
{
    _session.Dispose();
}

During testing, some edge cases emerged that caused an exception while committing my changes.  Since my use of NHibernate has followed the Session/Request pattern in web and WCF environments, I never really had to deal with exceptions and their impact on the NHibernate Session.  But this case was different.  There was no single Request to contain my Unit of Work and I soon found out that the exception being thrown was hosing up my Session.  RTFM, right Corey?

Here’s what the NH reference doc has to say about it:

If the ISession throws an exception you should immediately rollback the transaction, call ISession.Close()
and discard the ISession instance. Certain methods of ISession will not leave the session in a consistent state.

I had to make some obvious changes.  My batch service needs to be able to create an instance of an ISession and since my repository class takes an ISession in its constructor, I need to be able to create a new IProfileRepository instance as well.  Since I wanted to maintain testability, I used the mighty Func<> as a factory.

Here’s what my constructor looks like now:

public ProfileUpdateService(
    ISessionFactory sessionFactory,
    Func<ISession, IProfileRepository> repoFactory
)
{
    //set private vars
}

Now I had to change StructureMap to know how to resolve those dependencies.  Using the Lambda in the Use() method gives me deferred execution, so I don’t have to worry about premature invocation of my ISessionFactory.

For<ISessionFactory>()
    .Use(() => _factory);

For<Func<ISession, IProfileRepository>>()
    .Use((session) => new ProfileRepository(session));

Finally, I had to restructure the code to better handle the exceptions.  When one happens, I want to close and dispose of the ISession instance being used.  I would also want to use a new Repository object since the current one will be using the damaged ISession object.  To handle this, I relied on properties and lazy initialization.  (note: this code may not be the most efficient, but it’s a batch job after all, so give me a break)

//use the lazy initialized repo
var records = ProfileRepository.GetAll();

if (!records.Any())
    return;

try
{

    foreach (var record in records)
    {

        try
        {
          
            _transaction = Session.BeginTransaction();
           
            var otherRecord = ProfileRepository.GetOtherThing(record.Something);
 
            //do some processing first then save
            ProfileRepository.DoFirstThing(record);
            
           //do some processing first, then save
            ProfileRepository.DoSecondThing(otherRecord);

            _transaction.Commit();

        }
        catch(Exception ex)
        {
            //log it
            //rollback
            _transaction.Rollback();
            
            _session.Dispose();
            _session = null;
            _profileRepository = null;
        }
        finally
        {
            if(_transaction != null)
                _transaction.Dispose();
        }
    }
   
}
finally
{
    //the session may already have been disposed and nulled in the catch
    if (_session != null)
        _session.Dispose();
}


//Using lazy initialization so I don't have to worry about these being destroyed
IProfileRepository ProfileRepository 
{
    get
    {
        if (_profileRepository == null)
            _profileRepository = _profileRepositoryFactory(Session);

        return _profileRepository;
    }
}

ISession Session
{
    get
    {
        if (_session == null)
            _session = _sessionFactory.OpenSession();

        return _session;
    }
}   
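Stripped of the NHibernate specifics, the pattern above is just "cache lazily, clear the cache on failure." Here's a small JavaScript sketch of the same recover-by-recreate idea (toy code with hypothetical names, not the actual service):

```javascript
// Recover-by-recreate: a lazy getter caches the expensive resource;
// when a commit fails, reset() clears the cache so the next access
// opens a fresh one instead of reusing the damaged instance.
function makeService(openSession) {
  let session = null;
  return {
    get session() {
      if (session === null) session = openSession();
      return session;
    },
    reset() { session = null; },  // call after a failed transaction
  };
}

let opened = 0;
const svc = makeService(() => ({ id: ++opened }));
svc.session;         // opens session #1
svc.session;         // cached — still #1
svc.reset();         // simulate "session is hosed, discard it"
svc.session;         // opens session #2
console.log(opened); // 2
```

The lazy getters in the C# version play exactly this role: nulling out `_session` and `_profileRepository` in the catch block is the `reset()`, and the next property access transparently rebuilds both.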

This really isn’t rocket science, but it’s a different way of thinking for those of us using NHibernate from the Web.

Posted in C#, IoC, NHibernate, StructureMap | Tagged: , , , , | Comments Off on Using NHibernate from a Batch Program

How to Give Advice Online

Posted by coreycoogan on September 16, 2010


We’ve all been there – stuck on some problem whose solution won’t reveal itself no matter how much we search. It’s at this point that I turn my sights toward forums and user groups to try to get advice from the online community. This might be Stack Overflow, a topic-specific Google Group, or a well-known blog or forum site like Microsoft Social.

Side Note

It’s laughable to see how many people actually turn to the community for help before even attempting to figure things out themselves.  Ayende wrote about his frustrations on the topic, so please don’t be that person.  I digress…

I try to write my question to be as complete as possible.  I want to include everything I think someone would need to help me, fearing that if I leave something out the person who has the answer and was just willing to share it with me is now gone forever, helping someone else who wrote a more complete question.  The key is not to put too much information in the question.  I don’t want to scare anyone away by showing them 5000 lines of code when they open that sucker up.  So I try and find just the right balance and give disclaimers that some things are left out.

Enough setup, now for the crux of this post – "How to Give Advice Online", which could have also been titled "Don’t be a Dick when Giving Advice Online".  So often, the people that choose to answer questions commit one or more of the following dick behaviors:

  • Nitpick at unimportant details, like “you should close your connection” or “you should catch that exception”.
  • Make assumptions about the code that’s left out or the entire context of the problem, usually supporting the preconceived notion that they are smart and you are stupid.
  • Belittle, make fun and/or talk down to you during every exchange.
  • Give short, one-line solutions with no detail that are of little or no use and most certainly leave many a solution seeker more confused than they were when they started.  If you are going to throw someone a bone, spend the extra 60 seconds and at least make it complete enough to get the reader started in the right direction.

When I get these types of responses, I really try to remain patient.  It’s hard to resist the urge to stoop to their level and let things turn into a cat fight.  That’s just a waste of time and it’s unprofessional.  Not to mention there will be a permanent record of your childish behavior available for anyone who searches your name for all eternity.

I’ve read that people don’t act this way in the Ruby community, but it runs rampant in the .NET developer community.  Are .NET developers so arrogant and full of themselves that they believe they can treat people like crap?  What created this culture?  Very perplexing indeed.  The best I can do is try not to perpetuate this behavior or the stereotype.

If you wish to donate some of your time and give free advice online, do it for the right reasons.  Don’t use it as an opportunity to make people feel stupid so you can feel superior.  Just be patient and share what you know.  Everybody needs help at some point, so think about how you’d feel if you were given the response you’re writing.  If a question seems exceedingly stupid or you have a beef with the person asking it, move on. Like an offensive television or radio program – you can always change the channel.  Put simply, don’t be a dick.

Posted in Developer Life | Tagged: , , | 2 Comments »

Reviewing “Getting Real” and “REWORK” by 37 Signals

Posted by coreycoogan on September 15, 2010


I’ve recently been in a funk.  I felt as though a big part of my Mojo was missing.  I’ve been in a bit of a dead-end gig lately where the work brings me little to no joy, so that could have something to do with it.  On the other hand, I have a fun side project that I’m trying to get off the ground, but it’s been almost impossible to get motivated in the last several months.  Part of this is due to the side work I do at night, but mostly it’s as simple as a loss of Mojo.

I’m fascinated with 37 Signals (I highly recommend their blog) – not only from a technical perspective, but from a UI design and business standpoint as well.  I had been poking around the free eBook version of Getting Real for a while.  One day, while wallowing in my funk, I decided to buy Getting Real and REWORK from Amazon and search for some inspiration.

Inspiration

I found my lost Mojo!  Reading these books helped revive my passion for software development.  They got me excited about the side project again – so much so that I removed the half completed features and deployed what I have so I could launch the blog to start building relevance and begin driving traffic.

Getting Real

Getting Real says it’s about building “a successful web application”, but it’s about so much more.  It covers a wide range of business concepts as well and was really fun to read.  This book emphasizes simplicity.  Keep things simple and deploy.  Don’t get bogged down in analysis paralysis.  That’s what happened to me with RoomParentsOnline.com.  I got so consumed with the “simplest” way of making the classroom portion self-governing, yet safe and secure for the students, that I ended up conceiving a solution so complex and difficult to start that my brain just shut down and found excuses not to work on the project.

After reading Getting Real, my whole thought process of handling features changed.  I reevaluated what I was doing and found a solution that is 1000 times simpler and probably just as effective.  It also helped get things in perspective about how cheap and easy it is to try something and deploy.  Get people using your product.  If it sucks, change it.  In many cases, the details we obsess over are meaningless to the end user.  I’ve always tried to follow the Agile mantra of “Simplest Solution that could Possibly Work”, but I somehow drifted.

I highly recommend this book for anyone doing web development or involved in web development.  It’s an eye opener and a breath of fresh air.

REWORK

This is another amazing book.  Unlike Getting Real, REWORK is not specifically about developing web applications, but about business and succeeding in it.  The back cover says a lot of what this book is all about:

ASAP is poison
Underdo the competition
Meetings are toxic
Fire the workaholics / emulate drug dealers
Pick a fight
Planning is guessing
Inspiration is perishable

Before reading the book, I subscribed to most of these ideas.  I hate wasting time in meetings and nothing drives me more insane than documentation for the sake of documentation.  BDUF is a proven failure, yet it continues to dominate big Corporate America.  After reading REWORK, I was so happy to hear that breaking away from this mentality can work – and there’s proof.

A fair amount of content from Getting Real is repeated in REWORK, but not enough for me to recommend one book over the other.  Since reading it, I’ve been recommending it all over the place.  This book is a must-read and I wish I could send every client, every partner and every consultant I work with a fresh new copy.

The theme is the same as Getting Real.  Do things simple and do them fast.  Say NO to new features until they come up again and again – then you know they have real value.  The authors do a great job of citing their own successes, as well as those of other companies.  They also use simple metaphors that are easy to understand and fun to read.  I strongly encourage anyone reading this blog to get your copy immediately and read it.

Conclusion

Getting Real and REWORK are fast, easy reads, and the casual writing style felt like a conversation with the authors.  Both books are phenomenal and capable of being real game changers when read with an open mind.  So please, when you do read them, be open.  Don’t hold on to your old way of thinking.  Do yourself the favor of entertaining the ideas in these books so you can appreciate their true merit.

Posted in Business, Review | Tagged: , , | Comments Off on Reviewing “Getting Real” and “REWORK” by 37 Signals

Initializing NHProf in MSTest

Posted by coreycoogan on September 4, 2010


Alternate title for this post could be Running initialization code before an MSTest Test Run

I’m not at all a fan of MSTest for various reasons. My preference is NUnit, though xUnit, MbUnit or any other framework would do just as well. There are often cases where a client wants to use MSTest, usually for its Visual Studio integration. This is a compelling argument for a shop new to TDD or unit testing that wants the least amount of friction and doesn’t want to invest in TestDriven.Net or R#.

Integration tests written with MSTest are a great place to profile your NHibernate data access and catch any gotchas early in the development cycle. When the client is willing to pay for it, there’s no better tool for this than Ayende’s NHProf. NHProf can easily be initialized in code, but this should be done only once per test run, just as you would bootstrap NHibernate itself only once. This led me on a search for how to run initialization code once per test run in MSTest. There are widely known attributes for executing setup code when a test class is initialized and before each test, but the answer to my requirement was the lesser-known [AssemblyInitialize] attribute.

Just throw that attribute on a static void method that takes a TestContext argument and it will execute once per test run, before any tests run. The class that houses the initialization method must also be decorated with the [TestClass] attribute.

[TestClass]
public class AssemblyInit
{
    [AssemblyInitialize]
    public static void Init(TestContext testContext)
    {
        HibernatingRhinos.Profiler.Appender.NHibernate.NHibernateProfiler.Initialize();
    }
}

Posted in Architecture and Design, C#, NHibernate, TDD | Tagged: , , | Comments Off on Initializing NHProf in MSTest

The Singleton and Composition/Testability

Posted by coreycoogan on September 3, 2010


I read this post from Kellabyte which got me thinking about solutions I’ve used in the past.  There are ways to live with Singletons, at least from a composition and testability standpoint.  This can come in especially handy when working with a legacy code base.

The first thing is that the property or method that returns the instance of the Singleton should return an interface, not a concrete type.

public static ILog Instance
{
    get { return _logger; }
}

I assume when talking about composition, we are all using an IoC container of some kind.  My favorite is StructureMap.  Most containers have the ability to configure a type resolution through a factory method.

In SM, it might look like this:

ForSingletonOf<ILog>()
    .Use(() => Logger.Instance);

Now any requests from your container that require an ILog instance can come from the Singleton in production, but resolve to a mock or anything else in test scenarios.

//example of Dependency Injection via constructor
public class SomethingService
{
    public SomethingService(ISomethingRepository repo, ILog logger)
    {
        //set to instance vars
    }
}
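The same seam works outside .NET, too. Here's a toy JavaScript sketch of the idea (a hypothetical container and names, not StructureMap): register the singleton behind a factory keyed by the "interface", and let tests re-register a spy.

```javascript
// A toy container: map an "interface" name to a factory. Production
// wiring hands out the singleton; a test re-registers a spy.
const realLogger = { log: function (msg) { /* write somewhere */ } };
const container = new Map();
container.set('ILog', () => realLogger);        // production wiring
const resolve = (name) => container.get(name)();

function doSomething(logger) { logger.log('saved'); }

// In a test, substitute a spy without touching the singleton class:
const calls = [];
container.set('ILog', () => ({ log: (msg) => calls.push(msg) }));
doSomething(resolve('ILog'));
console.log(calls); // [ 'saved' ]
```

The consumer never knows whether it got the production singleton or a fake — which is exactly what the interface-returning `Instance` property buys you.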

Posted in C#, Design Patterns, IoC, StructureMap | Tagged: , | 1 Comment »

The Composite Key Conundrum

Posted by coreycoogan on June 2, 2010


Preface

This is an edited version of an argument I wrote in hopes of convincing some DBAs to start adopting surrogate keys.  It was for an Oracle shop, hence the heavy use of Sequence speak, but the arguments are much the same for any database. We were also using NHibernate, so that tool is discussed here as well; however, other popular ORM frameworks benefit from the same arguments.

Overview

There’s long been a debate amongst practitioners as to what is better – a natural key, which is often a composite, or a surrogate key. Application and Database Developers tend to favor surrogate keys for their simplicity and ease to work with while DBA’s often favor natural keys for the same reasons. There are many arguments from both sides of the debate, each having validity. As with any decision, the “right” choice depends on the level of risk, cost and ROI.

Surrogate Key

A surrogate key, or artificial key, is a database field that acts as the primary key for a table but has no real meaning in the problem domain. Surrogate keys are typically in the form of an auto-incrementing integer (Sequence/Identity) or a UUID/GUID.

Pros:

  • The primary key values never change, so foreign key values are stable.
  • All of the primary key and foreign key indexes are compact, which is very good for join performance.
  • The SQL code expression of a relationship between any two tables is simple and very consistent.
  • Data retrieval code can be written faster and with fewer bugs due to the consistency and simplicity provided by a surrogate key.
  • With surrogate keys there is only one column from each table involved in most joins, and that column is obvious.
  • Object Relational Mapper (ORM) frameworks, such as NHibernate, SubSonic, LLBLGEN and others are designed to work optimally with surrogate keys, offering much simpler implementations over composite keys.
  • Allows for a higher degree of normalization since key data doesn’t need to be repeated.

Cons:

  • In order to guarantee data integrity, a unique index must be created for the fields that would have made up the composite key. This can increase administrative overhead.
  • If a sequence/identity is used for the surrogate key, it must be created, which can increase administrative overhead.
  • In Oracle, Sequences can have slight performance penalties, typically realized only under very heavy load, when a proper caching strategy is not utilized.
  • Tables can be perceived by some as “cluttered” when an extra column of meaningless data is added.
  • The primary key of the table has no real business meaning.
  • The natural key values are often included in the WHERE clause, although not part of the join semantics.
  • Extra disk space is needed to store the key values, although a sequence will account for only 8 bytes.

Natural Key

A natural key is the true unique identifier of a database record. It is this value, or combination of values, that has business meaning and allows applications to distinguish one row from another. Unlike the surrogate key, the natural key can be one or more columns of any type. Examples include [social security number] and [last name, date of birth, phone number].

Pros:

  • Natural keys have real business meaning and identify unique records by nature.
  • When there is no surrogate key, there is no need to create unique indexes or sequences, thereby reducing administrative overhead.
  • Fewer sequences and database objects give DBA’s less to worry about.
  • Reduced performance concerns that could result from mismanaged sequences.
  • Reduced disk space usage.

Cons:

  • Querying with joins can become more complicated as multiple columns are involved.
  • The use of date fields in keys often requires error prone casting to write queries.
  • The keys can change which can cause a ripple effect of breaking queries and require participating tables to be updated.
  • Reduced form of normalization since key values will be duplicated throughout tables.
  • Key names and types are inconsistent, which may require developers to visually inspect table definitions to understand how to query.
  • Makes application development that interacts with a database more complex and time consuming due to the semantics of the keys and how they join to other tables.
  • Makes using ORM frameworks very difficult and time consuming because they are designed to work best with surrogate keys.

Common Arguments against Surrogate Keys

  • Using a sequence in Oracle will decrease performance.

    It is true that using a sequence for a surrogate key comes with some overhead. This overhead is very minimal and apparent only under heavy load. Using a sequence caching strategy, however, can greatly improve performance issues associated with sequence generation.
  • Using a surrogate key means more data has to be stored, which will require more disk space.

    Disk space is cheap these days, but the cost of a software developer is not. Hardware costs will continue to decrease over time while the cost of developer staff will continue to rise.
  • The record already has a meaningful, natural key.

    The natural key will still be maintained in the form of a unique index, which provides the benefits of the natural key alongside the benefits of a surrogate key.
  • Software development tools, such as ORMs, shouldn’t dictate database architecture.

    Although using surrogate keys opens many doors for the use of an ORM, there are also significant benefits beyond it (see the list of Pros). A database is a place to store data used by applications. Without applications, the database offers little value. Because the database exists solely to support applications, it stands to reason that there is great benefit in optimizing the database to work with the applications that offer the real value to the business.
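The join-simplicity argument can be made concrete with a toy in-memory join (JavaScript here purely for illustration; the shape is the same in SQL). With a surrogate key the join predicate is a single, obvious comparison; with a composite natural key every key column must participate, and those values are duplicated in the child table:

```javascript
// Toy tables: customers keyed both ways, orders referencing them.
const customers = [{ id: 1, ssn: '123-45-6789', dob: '1980-01-01', name: 'Ann' }];
const orders = [{ customerId: 1, ssn: '123-45-6789', dob: '1980-01-01', total: 50 }];

// Surrogate-key join: one compact, stable column.
const a = orders.map(o => customers.find(c => c.id === o.customerId));

// Natural-key join: every key column must match, and any change to
// the key ripples into every referencing table.
const b = orders.map(o =>
  customers.find(c => c.ssn === o.ssn && c.dob === o.dob));

console.log(a[0].name, b[0].name); // Ann Ann
```

Both joins return the same row, but only one of them survives a customer's SSN correction without an update to the orders table.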

Object Relational Mapping

Why use an ORM

Using an ORM, such as NHibernate, provides significant value to applications and application developers. It removes the need to write tedious and time consuming database access code. It also optimizes queries and makes database query code more consistent and easy to read and write. In addition, an ORM manages database sessions and provides a consistent way for reusing connections while employing data caching and lazy loading to reduce unnecessary traffic. When implemented correctly, an ORM can save a tremendous amount of developer time and remove countless database related concerns during application development.

ORM’s with a Natural Key

Because ORM’s are designed to work with surrogate keys, it takes substantial effort and testing to “fit the square peg in the round hole”. Below is an outline of the ramifications of using natural keys in NHibernate (NH) and how to get around them.

Natural Key Issue #1: Because natural keys are assigned by the application, NH has no way to know whether a record is new or existing (insert vs. update).
Impact: Upon saving an object or group of objects, NH must query the database first to determine how the record should be persisted.
Work Around: Adding a version column to each table, such as a timestamp that changes automatically with each update, eliminates the extra query. Without a version column, the extra trip to the database can’t be avoided.

Natural Key Issue #2: NH defaults its queries to use a parent’s primary key as the foreign key into related records, so loading related tables doesn’t “just work” out of the box.
Impact: Lazy loading is lost and the developer must explicitly retrieve child objects with hand-written code.
Work Around: It may be possible to force developers to use hand-written XML configuration files and write queries that take the expected parameters but use them in an unexpected way to get the desired result. For example, when a needless date parameter is passed, the clause may contain a statement like “where ‘1/1/1800’ NOT EQUAL :PolicyDate”. Where this isn’t an option, hand-written code will have to be written and called.
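For the version-column work around above, the mapping might look something like the following hbm.xml fragment. This is a hedged sketch, not from the original post: Policy, PolicyNumber and Version are hypothetical names, and the unsaved-value attribute (which requires the Version property to be a nullable type on the class) is how NH decides insert vs. update without a select.

```xml
<!-- Hypothetical mapping fragment for an entity with an assigned (natural) key.
     A null version tells NH the object has never been saved. -->
<class name="Policy" table="Policy">
  <id name="PolicyNumber" column="PolicyNumber">
    <generator class="assigned" />
  </id>
  <version name="Version" column="Version" type="Int32" unsaved-value="null" />
  <property name="EffectiveDate" />
</class>
```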

Conclusion

Surrogate keys offer many benefits when a database is used for software development. Aside from simplicity, consistency and stability, they also make the use of an ORM extremely viable. That’s not to say they come without a price, but that price falls mainly in the area of database administration and is relatively low, especially when weighed against the cost benefits of using them. Since the database exists to support applications, optimizing it for that purpose seems like a sensible choice. Using an ORM in software development can have an extremely positive impact on not only development time but also quality. Both of these reasons lead to an increased bottom line in the form of lower development costs and decreased cost of ownership.

“Humans should use natural keys, and computers should use machine-generated surrogate keys”
– Jeffrey Palermo, CIO HeadSpring Systems, Austin TX

Posted in Architecture and Design, NHibernate | Tagged: , , , , , | 5 Comments »

StructureMap + WCF + NHibernate Part 2

Posted by coreycoogan on May 27, 2010


Introduction

In Part 1 of this two part series, I showed what it takes to create a library that will enable Dependency Injection in WCF services by leveraging the extension points in the WCF pipeline. My solution made very slight modifications to what was provided by Jimmy Bogard, with plans to extend it to allow an NHibernate (NH) Session/Call pattern in WCF utilizing StructureMap (SM).

In this post I’ll show my implementation, derived from the solution posted on Real Fiction, which is built on top of my Wcf.IoC library. This library will be responsible for opening a single ISession at the start of each call, injecting it into all dependent classes, and then flushing and disposing it at the end of the call.

Why Session per Call?

Session per Call, also known as Session per Request, is the pattern where the application is given the responsibility of opening an ISession (DataContext or other Unit of Work) and then disposing of it at the end of the call. Some variations of this pattern also open an ITransaction at the beginning of the call and Commit or Rollback at the end. I prefer to return a status in my service response when something goes wrong, so I give the service the responsibility of creating and managing transactions. This way I can catch exceptions and react accordingly.
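As a sketch of what that preference looks like, here’s a hypothetical save operation (the response type, repository and its members are illustrative names, not part of the library): the service owns the transaction and converts failures into a status on the response instead of letting them propagate.

```csharp
// Hypothetical sketch: service-managed transaction with a status response.
public SaveProfileResponse SaveProfile(ProfileDto dto)
{
	using (var trans = _session.BeginTransaction())
	{
		try
		{
			_repository.Save(dto);
			trans.Commit();
			return new SaveProfileResponse { Success = true };
		}
		catch (Exception ex)
		{
			trans.Rollback();
			//react to the failure rather than faulting the channel
			return new SaveProfileResponse { Success = false, Message = ex.Message };
		}
	}
}
```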

Employing this technique has several advantages.

  • Once set up, developers don’t have to worry about the ISession and where it comes from.
  • Once set up, developers don’t have to worry about properly disposing of the ISession when they are done with it.
  • If there are unhandled exceptions, the ISession is always disposed.
  • Ensures a single ISession is created and used by the service and component dependencies, thereby reducing resource consumption.

First, a special point of emphasis on the last bullet above, which is essential to get the full benefit of the Session/Call pattern we’re implementing here. For this to work, any class that depends on an ISession must be configured to have one injected into it (I prefer constructor injection for this). It is important that your components don’t configure ISession resolution themselves, but rather let the application handle this detail. To learn more about configuring components from your application using StructureMap, have a look at this post on the topic.
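A sketch of what such a component looks like (ProfileRepository, IProfileRepository and Profile are hypothetical names used for illustration): the component simply declares its need for an ISession in the constructor and never touches the SessionFactory.

```csharp
// Hypothetical sketch of a component that receives the per-call ISession
// via constructor injection; the container supplies the call-scoped session.
public class ProfileRepository : IProfileRepository
{
	private readonly ISession _session;

	public ProfileRepository(ISession session)
	{
		_session = session;
	}

	public Profile FindById(int id)
	{
		// Uses whatever session the application wired up for this call.
		return _session.Get<Profile>(id);
	}
}
```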

NHibernate Session Context

To ensure the same NHibernate session is used throughout the WCF service call we’ll need to store the session somewhere. In a web application, this place is typically the HttpContext. In WCF, this is done by creating and adding an extension to the InstanceContext, which is basically a storage area scoped to the instance of the service object.

To do this, we simply implement the IExtension<> interface with a class that is responsible for holding a reference to the call’s ISession object and handles Flush and Disposal when the call is over. My implementation here is just like that of the inspiring post with the exception of my try/catch around the Session disposal.

/// <summary>
/// Holds a reference to an NH ISession and gets cached in the InstanceContext of a
/// WCF call.
/// </summary>
public class NhContextManager : IExtension<InstanceContext>
{
	public ISession Session { get; set; }

	public void Attach(InstanceContext owner)
	{
	}

	public void Detach(InstanceContext owner)
	{
		if (Session != null)
		{
			try
			{
				Session.Flush();
				Session.Dispose();
			}
			catch (Exception)
			{
				//log the error but don't throw
			}
		}
	}
}

IInstanceProvider

We’ll use a custom IInstanceProvider to add our Extension into the InstanceContext. This is a very straightforward process, as you’ll see. In Part 1 I showed the custom IInstanceProvider implementation that used SM to create an instance of the requested service. We’ll derive our NH Session-aware InstanceProvider from that one so we can add our extension before resolving the type from SM.

public class NhInstanceProvider : IocInstanceProvider
{
	public NhInstanceProvider(Type serviceType) : base(serviceType)
	{

	}

	public override object GetInstance(InstanceContext instanceContext, System.ServiceModel.Channels.Message message)
	{
		//add the NH Session manager to the context of this WCF request
		var nhSessMgrExtension = instanceContext.Extensions.Find<NhContextManager>();
		if (nhSessMgrExtension == null)
			instanceContext.Extensions.Add(new NhContextManager());

		//let the base handle the IoC resolution
		return base.GetInstance(instanceContext, message);
	}

	public override void ReleaseInstance(InstanceContext instanceContext, object instance)
	{
		var nhSessMgrExtension = instanceContext.Extensions.Find<NhContextManager>();
		if (nhSessMgrExtension != null)
			instanceContext.Extensions.Remove(nhSessMgrExtension);

		base.ReleaseInstance(instanceContext, instance);
	}

}

Setting the NhContextManager.Session Property

The NhContextManager’s Session property is what will be used to access the current call’s open ISession object. Remember that the NhContextManager is just our way of storing the ISession in the context of the call. This is where we’ll utilize the ISessionFactory.GetCurrentSession() extension point that NHibernate graciously provides for situations just like this.

NHibernate’s ISessionFactory interface has a method, GetCurrentSession(), intended for getting a contextual instance of a session. Getting this working requires two steps.

First, we must create an implementation of the ICurrentSessionContext interface. This is what NHibernate will use to get an open session when one is requested from the GetCurrentSession() method. The other step is to tell the NHibernate configuration what our implementation of ICurrentSessionContext is so it knows how to respond to GetCurrentSession() calls.

ICurrentSessionContext Implementation

The first thing to notice is that the constructor takes an ISessionFactory instance, which is required for opening a Session the first time. The CurrentSession() method is where all the work is done. First, search the extensions on the InstanceContext and find the NhContextManager instance, then get the Session from its Session property. If it’s there, we return it. If it’s not, we open a new one, set it on the NhContextManager object, then return it. This is the nuts and bolts of getting the whole thing working. By delegating ICurrentSessionContext.CurrentSession() to the WCF instance context, we ensure a single open ISession is used for each WCF service instance, which is created once per call.

public class WcfSessionContext : ICurrentSessionContext
{
	private readonly ISessionFactory _factory;
	public WcfSessionContext(ISessionFactory factory)
	{
		_factory = factory;
	}

	public ISession CurrentSession()
	{
		// Get the WCF InstanceContext:
		var contextManager = OperationContext.Current
			.InstanceContext.Extensions.Find<NhContextManager>();

		if (contextManager == null)
		{
			throw new InvalidOperationException(
				@"There is no context manager available.
				Check whether the NHibernateContextManager is added as InstanceContext extension.
				Make sure the service is being created with the NhServiceHostFactory.
				This Session Provider is intended only for WCF services.");
		}

		var session = contextManager.Session;
		if (session == null)
		{
			session = _factory.OpenSession();
			contextManager.Session = session;
		}

		return session;
	}
}

Setting up NHibernate with our ICurrentSessionContext Implementation

Now we have to tell NHibernate what we want to use to extend the ISessionFactory.GetCurrentSession() method. This may seem a bit goofy, but it works. Before building the SessionFactory, we add a property to the NHibernate.Cfg.Configuration using the necessary magic string and the assembly-qualified type name of our ICurrentSessionContext class.

NOTE: this step would be done by the application in the BootStrapper, where NHibernate initialization should be taking place.

void AddCurrentSessionImpl(NHibernate.Cfg.Configuration config)
{
	var sessionContextType = typeof(WcfSessionContext);

	//assembly-qualified name: "Namespace.WcfSessionContext, AssemblyName"
	var currentSessionContextImplTypeName = sessionContextType.FullName + ", " +
		sessionContextType.Assembly.FullName;

	var props = config.Properties;
	if (props == null)
	{
		props = new Dictionary<string, string>();
		config.AddProperties(props);
	}

	props.Add("current_session_context_class", currentSessionContextImplTypeName);
}

Configuring StructureMap for ISession Resolution

Since we are relying on StructureMap to handle the Dependency Injection of our services, it must be configured to get our ISession from the ISessionFactory.GetCurrentSession() method. Even though StructureMap is returning a transient instance, which is the default behavior, we’re trusting our WcfSessionContext to give us the same instance per WCF service call.

The best way to do this is to create an SM Registry in our Wcf.NHib library. Our Registry will take an instance of an ISessionFactory. This is required so that our application can configure and initialize NHibernate however it requires and then let our Wcf.NHib library handle the specific registration. In this way, the library can be leveraged across any number of WCF services that have their own distinct NHibernate configurations.

/// <summary>
/// A StructureMap registry for telling the container how to resolve an ISession request.
/// This must be instantiated and added to the SM configuration so it has an instance of the
/// SessionFactory to use.
/// </summary>
public class WcfNHibernateRegistry : Registry
{
	public WcfNHibernateRegistry(ISessionFactory sessionFactory)
	{

		For<NHibernate.ISession>()
			.Use(() => sessionFactory.GetCurrentSession());
	}
}

Ensuring the Proper IInstanceProvider Implementation

The lowest-level object in our WCF pipeline is the InstanceProvider, which is responsible for creating the instance of our service, fully injected with Repositories, ISession, and whatever else is necessary. To make sure my NhInstanceProvider is used throughout the WCF call pipeline, I’ll want to implement a custom ServiceBehavior, ServiceHost and ServiceHostFactory. Luckily, I can just derive from the classes created in my Wcf.IoC library and override the necessary pieces. This is pretty simple and ensures that my service is both DI-enabled and Session/Call aware.

ServiceBehavior

All I have to do here is construct my NhInstanceProvider.

public class NhServiceBehavior : IocServiceBehavior
{
	/// <summary>
	/// A Func that takes the ServiceType in the constructor and instantiates a new IInstanceProvider.
	/// Defaults to an IocInstanceProvider
	/// </summary>
	public override Func<Type, IocInstanceProvider> InstanceProviderCreator
	{
		get
		{
			return (type) => new NhInstanceProvider(type);
		}
	}

}

ServiceHost

The custom ServiceHost has a little more custom code. We’re going to derive from the IocServiceHost to meet our IoC requirement. We’ll override the ServiceBehavior method where we’ll create an instance of the NhServiceHost. Before we return our instance though, we’ll make one modification.

Since our Session/Call implementation requires that WCF is operating in PerCall mode, let’s make sure that’s always the case by forcing that setting. If we use a stateful mode, we may run into the danger of the same ISession being shared between calls which could lead to all sorts of obvious problems.

To do this, we simply find the ServiceBehaviorAttribute of the service instance. If it’s not there, we’ll add it. Then it’s just a matter of setting the InstanceContextMode property to InstanceContextMode.PerCall.

public class NhServiceHost : IocServiceHost
{
	public NhServiceHost(Type serviceType, params Uri[] baseAddresses)
		: base(serviceType, baseAddresses)
	{
	}

	public override IocServiceBehavior ServiceBehavior
	{
		get
		{
			var behavior = Description.Behaviors.Find<ServiceBehaviorAttribute>();
			if (behavior == null)
			{
				behavior = new ServiceBehaviorAttribute();
				Description.Behaviors.Add(behavior);
			}
			//force PerCall to ensure a single session is not shared
			behavior.InstanceContextMode = InstanceContextMode.PerCall;

			return new NhServiceBehavior();
		}
	}
}

ServiceHostFactory

The ServiceHostFactory, like the ServiceBehavior, will do nothing more than override the SvcHost property to ensure that the NhServiceHost is used when the IocServiceHostFactory asks for a ServiceHost instance.

public class NhServiceHostFactory : IocServiceHostFactory
{
	protected override Func<Type, Uri[], IocServiceHost> SvcHost
	{
		get
		{
			return (type, uri) => new NhServiceHost(type, uri);
		}
	}
}

Now all we need to do to get our NH Session/Call wired up is edit the markup of our .SVC files and tell it what factory to use.

<%@ ServiceHost
Language="C#"
Service="Service.ProfileService"
CodeBehind="ProfileService.svc.cs"
Factory="Wcf.NHib.NhServiceHostFactory,Wcf.NHibernate"
 %>

What This Gets Us

Now that we’ve jumped through all these hoops, what’s the payoff? We now have the ability to write a service that looks like this.

public class ProfileService : IProfileService
{
	readonly IProfileRepository _repository;
	readonly ISession _session;
	public ProfileService(IProfileRepository repository, ISession session)
	{
		_session = session;
		_repository = repository;
	}

	public ProfileDto GetProfile(int id)
	{
		using (var trans = _session.BeginTransaction())
		{
			var result = _repository.FindById(id);
			//commit even for a read so the transaction isn't rolled back on dispose
			trans.Commit();
			return result;
		}
	}
}

Conclusion

In Part 1 we saw how to build off a previous example to enable an extensible, IoC-enabled WCF Service solution. This gives us testable services without relying on Poor Man’s DI. In Part 2, we extended the library so we not only get DI provided by StructureMap but can also leverage NHibernate’s extension points to achieve a Session/Call pattern. This solution involved several moving pieces, but in the end they fit together rather nicely and quite cohesively. As you play with the solution, you may be surprised first that it actually works, and more importantly by how much more time you’ll have to devote to the important stuff once you quit worrying about Data Access plumbing.

Posted in Uncategorized | 14 Comments »