Archive for November, 2010

Mock that Database Layer (or not)

November 29, 2010

Like many architects and developers, I believe that test-driven development (TDD) can have its place and be a worthwhile endeavor for certain projects and skill sets. The advantages and disadvantages of the methodology are well chronicled. Whether or not you are a practitioner of TDD, I hope that, at a minimum, you’re writing unit tests using a framework such as NUnit or the built-in Visual Studio Unit Test Framework.

If you’re working in an ideal environment, you probably have a system designed with testing in mind: unit tests are well written and categorized, external dependencies such as the database are mocked and stubbed, and a build server runs the tests and notifies the developer when they fail. While some practice this application life-cycle discipline, I would venture to guess that many more do not. If you fit into the latter category, you may not yet have the resources or know-how to be stubbing and mocking. If that is the case, I believe all is not lost.

I have spent considerable time coding a domain-specific language (DSL) that I call Frameworks. It has the ability to create many different types of artifacts, including those for the data access layer. If you have a table called Shape, it will create the Shape entity object as well as the Shape data access object (DAO). The DAO then uses an ORM to communicate with the database. If desired, artifacts will also be constructed for their interface counterparts, that is, IShape and IShapeDao. Dependency-injection configuration is constructed, allowing natural integration with an Inversion of Control container.

These interfaces and separation of dependencies allow for the greatest unit-test flexibility. Having achieved all this flexibility, one must decide how to use it. I am completely aware that removing dependencies from a unit test is highly desirable; hence all the work to make this happen automatically. One formidable dependency in most business applications is data access. Mocking is a very effective and popular way to remove this dependency from your tests, and it can be fairly straightforward if you have designed your system appropriately.
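To make the idea concrete, here is a minimal, self-contained sketch of the kind of hand-rolled stubbing this separation enables. The `Shape` and `IShapeDao` names follow the example above, but every member shown is hypothetical; a real generated DAO would expose a fuller CRUD surface.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical entity and DAO interface, modeled on the generated
// Shape / IShapeDao artifacts described above.
public class Shape
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Sides { get; set; }
}

public interface IShapeDao
{
    Shape GetById(int id);
    IList<Shape> GetAll();
}

// Business logic that depends only on the interface, so any
// IShapeDao implementation (real or fake) can be injected.
public class ShapeService
{
    private readonly IShapeDao _dao;
    public ShapeService(IShapeDao dao) { _dao = dao; }

    public int CountPolygons()
    {
        // A polygon has at least three sides.
        return _dao.GetAll().Count(s => s.Sides >= 3);
    }
}

// A hand-rolled stub: no database, no isolation framework required.
public class StubShapeDao : IShapeDao
{
    private readonly Dictionary<int, Shape> _shapes;

    public StubShapeDao(IEnumerable<Shape> shapes)
    {
        _shapes = shapes.ToDictionary(s => s.Id);
    }

    public Shape GetById(int id) { return _shapes[id]; }
    public IList<Shape> GetAll() { return _shapes.Values.ToList(); }
}
```

A unit test can now construct `ShapeService` around a `StubShapeDao` seeded with canned data; an isolation framework such as Moq essentially automates writing classes like `StubShapeDao` for you.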

There are plenty of compelling reasons to ‘fake’ the database and they include but are not limited to:

  1. The database or data access layer could have defects which could give you false positives or false negatives in your test results.
  2. The database might not be ready for testing as it could be lagging behind the code.
  3. The database could be sitting on a server that is not readily accessible to duplicate for the purposes of testing.
  4. The work that needs to be performed in order to ready a database for testing could be beyond scope.
  5. Database configuration issues get in the way.
  6. There is a dependency on not only the database, but the type of data that is needed.
  7. There are thousands of unit tests and all the reading and writing to the DB slows testing down to a crawl.

As compelling as this list is, it doesn’t mean that you can’t build useful tests that depend on the database. They are just no longer called unit tests; rather, they are integration tests. I find myself asking what a dependency is in the first place. I would argue that every unit test incurs some dependency; for example, we have to depend on the compiler, the CPU and other ‘moving’ parts.

To this end, what if a code generator created everything from the DDL to create the database, to the data access objects to service CRUD? Once the code generator is tested, what is the difference between this code generation and that of the compiler to create MSIL code?

In summary, I think bypassing the classic definition of a unit test in favor of an integration test accomplishes the goal in a smaller-scale project. If the project is mission-critical, scalable, functionally complicated or expected to have a long life, use an isolation framework such as Moq. Deciding how to proceed is what architecture is all about; one size does not fit all!

C++ vs Managed Code

November 27, 2010

This blog discusses the virtues of using a lower-level language such as C++ vs. higher-level, managed code such as Microsoft’s C# or Java in a business-application setting.

Disclaimer: Although I have roots in C++ and have used Java, I have been using the .NET stack from its inception and tend to gravitate towards this technology. For this reason, it is written with C# in mind. One could make many of the same observations from a Java point of view.

Perhaps the key concern for most is how the performance of managed code such as C# compares with that of C++. Studies have shown that, given the same algorithms, C++ has an advantage in this category. It would not be out of line to say that C++ can provide a 10-20% increase in performance.

Java and .NET provide memory management, hence ‘managed code’, whereas the C++ developer must provide their own memory-management facility. As with most levels of abstraction, managed code must handle the general case rather than the specific problem at hand and, as such, loses some optimizations that could otherwise be enjoyed.

I would be remiss if I did not state the obvious: C++ needs a compiler, which in turn is its own level of abstraction. If one were a purist, one would have to consider writing native machine code! Therefore, reasonable people can begin to discuss what level of abstraction is the right one for the job at hand, not the virtues of abstraction itself.

Portability is another issue that should be considered. I don’t know of a computing platform that does not have a C++ compiler. Similarly, Java is easily ported to many operating systems and through the Mono Project, so is .NET.

Proponents of managed code would say that it is more scalable given this portability. Java and .NET are, for the most part, relieved of worrying about the specifics of Linux vs. Windows, as their code will just ‘run’.

I sometimes hear from industry experts that they need absolute speed. What exactly does that mean? How fast do they need to be, and at what cost? There are many ways to achieve speed, especially with current-day hardware and software advancements, parallel programming and, perhaps the ultimate in scalability, cloud computing.

I would argue that in almost every case, there is so much more to the argument than which platform will provide a faster sorting algorithm! Performance optimization always comes at a price and it is argued on almost every level of computing.

Architects have to consider many factors, including application needs, platform cost, development time and cost, and code portability and scalability, before committing to a particular path.

I believe that using unmanaged code is a form of optimization and, as such, should be scrutinized carefully before implementation. One can theorize that there is a direct relationship between optimization, abstraction and resources (the time and money to develop, test and deploy). In short, the further we move away from abstraction toward optimization, performance increases but so do resource costs. Conversely and generally, performance and resource costs decrease when tipping the scales in the other direction, toward abstraction.

If one subscribes to this theory, the argument can then be reduced to the level of performance gain per unit of resource expended.

It is further my belief that well architected abstraction such as memory management decreases performance minimally while achieving great benefits in resource expenditure.

Moreover, one could argue that if developers spend their time managing memory, as is the case with C++, they spend fewer precious brain cycles on the solution they are attempting to implement. Managed code can have the effect of freeing the mind to worry about the poetry of the code rather than the perils of stepping on one’s pointers. Whatever raw performance one might gain could easily dissipate if the algorithm is not elegantly constructed in the first place.

I have always been under the belief that if you’re writing an application such as a compiler, C++ is the place to be, but if you’re writing business applications, C# or Java should be considered. I have recently begun to question my theory after I heard Anders Hejlsberg, the architect of C# and the original author of Delphi, announce that the next version of the C# compiler will be written in managed code.

In summary, C++ is alive and well and will be for years, most likely decades, to come. The usage window, however, is shrinking as managed code becomes more popular and more efficient than ever before.


Aspect Oriented Programming

November 27, 2010

Aspect Oriented Programming, or AOP, is an interesting technique for weaving cross-cutting concerns such as logging, exception handling and security into your applications with hardly any effort at all.

AOP vendors such as SharpCrafters use an interesting technique at build time to ‘weave’ aspect code in with your methods. The result is that your methods are in a sense altered to incorporate these ‘Aspects’.

For example consider the proverbial ‘Foo’ method…

public void Foo()
{
     // Do something here
}

Now consider that you have the need to log when you are entering and exiting from the method. You could add the following code…

public void Foo()
{
     Trace.TraceInformation("Entering the Foo method.");
     // Do something here
     Trace.TraceInformation("Exiting the Foo method.");
}

This code works well if you need information about this one specific method. But what if you need it for all the methods of a class? Writing this code is not only tedious, it detracts from readability. Now imagine that you want this level of information on several classes, or perhaps the entire project.

Enter Aspect Oriented Programming and a product such as PostSharp. To accomplish the same logging at the method level, you would simply add an attribute, like so…

[Trace]
public void Foo()
{
     // Do something here
}

[Trace] is an Aspect. Aspects are classes that you write in order to execute code within the boundaries of the method. In this example, code was written to perform some action on entry and another prior to exiting the method.

As mentioned earlier, you could apply this Aspect to the entire class or even an assembly with a single line of code…

[assembly: Trace( AttributeTargetTypes = "SomeApp.YourBusinessRules.*",
  AttributeTargetMemberAttributes = MulticastAttributes.Public )]

This works because a product like PostSharp uses a post-compilation step to inject code into the IL. The injected code can be seen with a decompiler such as Red Gate’s .NET Reflector.
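To illustrate what the weaving amounts to, here is a hand-written, self-contained sketch of how `Foo` effectively behaves after the post-compilation step. The try/finally shape is an illustration of the concept only, not PostSharp’s actual generated IL.

```csharp
using System.Diagnostics;

public class TracedExample
{
    // Conceptually, the woven method behaves as if the aspect's
    // entry/exit code had been written inline around the body.
    public void Foo()
    {
        Trace.TraceInformation("Entering the Foo method.");
        try
        {
            // Do something here (the original method body).
        }
        finally
        {
            // Runs even if the body throws, so the exit is always logged.
            Trace.TraceInformation("Exiting the Foo method.");
        }
    }
}
```

The point of the aspect is that you never write this boilerplate yourself; the weaver emits the equivalent for every method the attribute targets.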

If you’re interested in serious logging and performance monitoring you can combine the power of PostSharp with a product like Gibraltar for amazing results with almost no effort at all.

AOP – The Other Side of the Coin

If you’re thinking that you don’t want a post-compile step modifying the compiler-generated IL, you’re not alone. It is probably the single biggest argument against utilizing this technology. My personal opinion is to consider variables such as the application’s life cycle: is it a mature piece of code, is it legacy, or are we beginning a new project?

Certainly it is a compelling story to be able to add much-needed cross-cutting concerns that were left out of a legacy code base with so little effort. In fact, it could even extend the life of the application dramatically.

However, if I’m starting a project from scratch, I tend to look at other alternatives such as code generation or policy injection with Microsoft Unity.


AOP is a very interesting technology that can yield almost immediate benefits with very little effort. Like any other pattern or technology, it has a time and place and you as the architect should weigh the risks and benefits before a single line of code is written.

The Mythical IT Budget – Part 2: Leadership, Attitude and Culture

November 25, 2010


This could be the single most important aspect of your IT efforts. As we all know, engineers are a breed of their own. For better or worse, they think differently and often act differently than most ‘normal folk’. In my experience they are super-sensitive to culture. A good culture can transform an average engineer into a very good one. Conversely, I have seen the best of developers wallow in a dictatorship-like atmosphere.

Managers – Lose the Attitude!

If you think that you know it all, perhaps you should try your hand at another industry. As is the case with many in my profession, technology is my passion and, as such, I study it every single day, night and weekend. I know enough to know that my knowledge pales in comparison to what I wish I knew.

Engineers and managers at all levels need to understand that even the most junior person has knowledge that you can learn from. If you truly believe that, it’s hard not to build a culture where everyone, top to bottom, feels empowered and enthusiastic about contributing in a meaningful fashion.

Lose the Ego!

Whether you’re architecting network infrastructure or software framework, do not marry your design – in fact don’t treat it like it is your design at all.

If you’re whiteboarding, step back, let others talk and build a relationship with subordinates that allows them to be critical. Your designs will improve and everyone will feel like they had a hand in the creation. Even better, everyone will have a stake in ensuring success.

Many people feel that if they don’t have a better suggestion, they shouldn’t raise their concerns. I believe this is detrimental to the organization. Simply by raising an issue, others might see it and solve it. None of this takes place, however, if leadership is married to every word that comes out of their mouths.

Don’t Be a Clock Watcher

Do your best to hire motivated people and then let them work. Engineers like to solve problems or, in the case of QA, find problems! Their minds are working in the morning before they come to work and at night when they leave. Managers should build an atmosphere that recognizes and rewards this type of attitude.

Many engineers do their best work on their own time, when they feel free to create and solve. Within reason, allow for flexibility and recognize their efforts. Your organization will be rewarded several fold and will be far more productive than one with a clock-punching mentality.

Promote Research of New Technologies

Engineers recognize the pace at which technology moves, and no one wants their skills to grow stale. Managers who don’t recognize this allow their products and services to fall behind the curve. Find ways to incorporate new technologies and allow your people to learn and practice them.

Ask your engineers to periodically research a new technology and present it in an informal way – perhaps as a brown bag session. You will reap the benefits many times over.

Have a killer snack cabinet!

Who doesn’t like to eat? A small investment in snacks, ranging from fresh fruit to hard-core junk food, says something about the workplace.

My personal favorite is the monthly birthday cake. It doesn’t have to be anybody’s birthday. In fact, people who work with me know that a fictitious worker named ‘Jebidiah’ has a birthday every month.

In Summary

Build a cozy and comfortable atmosphere where creativity wins over attitude and watch productivity sky-rocket.

Frameworks – A Domain Specific Language

November 24, 2010

A domain-specific language (DSL) can be a tremendous productivity boost for many software development environments. When used properly, it can dramatically reduce tedious hand-written code, decrease defect rates and increase code readability. It can also help bridge the gap between developers and business personnel.

I have been working with a DSL named Frameworks that brings an entire architectural framework into focus. It has been used in several enterprise-grade products and development is ongoing.

Frameworks, while easy to use, deserves an in-depth discussion on its philosophies, artifacts, coding techniques and limitations. For these reasons, this blog will be multi-part, addressing a different topic with each post.

What is Frameworks?

It is a DSL that takes a model-first approach to development. There are some excellent writings on this subject and I encourage you to explore them if you have not already done so. The domain model provides a soft transition of knowledge from domain-level expertise to developer ‘speak’. It is a model that business analysts digest quite easily and one that the developer can use in nearly every phase of a project. Domain models look similar to class diagrams.

Frameworks loosely uses the Domain Model as the hub for many artifacts. Amongst them are business objects and their interfaces, data-access objects and their interfaces, ORM and dependency injection configuration and others.
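As a sketch of the wiring this configuration enables, here is a toy, hand-rolled stand-in for an IoC container, together with hypothetical generated artifacts for a `Widget` entity. A real project would register the same interface-to-implementation pairs with Unity or another container; every name here is illustrative only, not Frameworks’ actual output.

```csharp
using System;
using System.Collections.Generic;

// A toy inversion-of-control container, just to show the shape of the
// generated DI configuration: map each generated interface to a factory.
public class TinyContainer
{
    private readonly Dictionary<Type, Func<object>> _registrations =
        new Dictionary<Type, Func<object>>();

    public void Register<TInterface>(Func<TInterface> factory)
    {
        _registrations[typeof(TInterface)] = () => factory();
    }

    public TInterface Resolve<TInterface>()
    {
        return (TInterface)_registrations[typeof(TInterface)]();
    }
}

// Hypothetical generated artifacts for a Widget entity.
public interface IWidget { string Name { get; } }
public class Widget : IWidget { public string Name { get; set; } }

public interface IWidgetDao { IWidget GetById(int id); }
public class WidgetDao : IWidgetDao
{
    // A real generated DAO would delegate to the ORM here; this one
    // fabricates an entity so the sketch stays self-contained.
    public IWidget GetById(int id) { return new Widget { Name = "Widget" + id }; }
}
```

Consumers ask the container for `IWidgetDao` and never reference the concrete `WidgetDao`, which is what makes the generated interfaces swappable under test.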

The goals of Frameworks are therefore several-fold:

  1. Help to bridge the gap between business analyst and developer roles using the domain model.
  2. Use the domain model to generate artifacts so that the developer does not have to write tedious, error-prone code.
  3. Generate the code utilizing best practices that build on the work of others.
  4. Allow the developer to choose between NHibernate and Microsoft’s Entity Framework to achieve object-relational-mapping (ORM).
  5. Integrate dependency injection (DI) systems such as Microsoft Unity into the generated code.
  6. Allow for internal caching of ‘category’ type data upon simple declaration.
  7. Allow the developer to declare database partitioning (‘sharding’) without having to write the code to do so.

As you can see, the project is quite ambitious. This series of blogs will attempt to describe, in various levels of detail, how these goals have been realized.

The next blog in the series will discuss the domain model in detail and set up some of the philosophies around the architecture.

The Mythical IT Budget – Part 1: Introduction

November 24, 2010

If you’re a CEO, CFO or private business owner, chances are that you have experienced software that is late, over-budget and under-delivered. Or perhaps even worse, one or more of these has taken place without you knowing it! How do you know whether that $4 million budget is cost-effective? What is the metric by which you determine effectiveness?

I would submit that most organizations are not enjoying a very high rate of return on their development dollar. I would further speculate that many decision makers don’t even know that this is the case. After all, how is an executive supposed to fully understand the ramifications of their budgets when those who are in charge of the budgets aren’t making the right decisions in the first place?

The subject is a complex one and, as is often the case, there is no simple answer or magic bullet. That being said, there must be a methodology by which one can measure effectiveness. Many have attempted to create business models to verify the effectiveness of this often costly line item. Some have had limited success for a time, and then, as technology changes, the model itself fails to deliver on its promise.

Surely there must be something that an organization can do to ensure that it is not wasting money. In short, there are many steps that you can take. This series of blogs will attempt to outline, in various levels of detail, the best practices of software development.

Part 2: Leadership, Attitude and Culture

As is the case with most of the subjects in this series, the issues are quite involved. Each of them could be, and has been, the subject of an entire book. Part 2 of this series will attempt to push your thought process in the right direction.

Part 3: Software Methodologies – Beyond Agile and Waterfall

Certainly choosing the right methodology (Waterfall, Agile, Scrum, Extreme or others) is very important and Part 3 won’t attempt to delve into each of these. Rather, it looks at methodology through a broader lens from the perceived pain point all the way through solution and deployment.

Part 4: Software languages and databases, religion or engineering?

Which is better, Java or Microsoft’s .NET? Should you use Oracle or SQL Server? While we won’t delve into this hornet’s nest, I will discuss the attitude that your technical leaders should have in choosing tools. Disclaimer: I currently have a higher level of expertise in .NET than in Java.

Part 5: QA vs. Development – the constant battle

QA and development have a relationship similar to that of architects and contractors; in a word, it is typically contentious. I will discuss the role of each department, and who should have the final word in each phase and why.

Part 6: Architectural Standards and Frameworks

Hopefully, your organization has these and pays serious attention to them. If not, stop reading this blog and turn your attention to them immediately, as it is money going out the window. I will explain why they are so vitally important to a healthy organization.

Please stay tuned to this blog for Part 2 of this series.
