Frameworks 2013 and IDesign’s Method

April 28, 2013

Architects and developers alike have embraced the SOA revolution and, with it, the special challenges the paradigm brings. Some have successfully navigated the issues, but many bear the battle scars of failed attempts. The SOA movement can benefit from tried-and-true methodologies and tools. Seasoned architects who have practiced SOA for years have greatly improved every aspect of the software process, from design to implementation, testing and Project Design, by utilizing IDesign’s “Method”. If you are not currently using the Method, I suggest you read about it on the IDesign web site.

I have been using a tool that helps both the architect’s and the developer’s SOA efforts. The tool has been in use for a number of years, and the latest release now supports the Method. The combination of the two tells a compelling story, and that story is the subject of this blog.

Method practitioners know that modeling starts with a “Static Diagram” depicting the various services the architect has designed. Instead of creating the diagram in PowerPoint, we use Frameworks 2013, a Visual Studio add-in, to develop the model.

Static Diagram

In this rather small static diagram, there are five services. If you’re like me, you’ll want to keep your solutions small so they compile fast and, as a result, form a perfect environment in which to write and test code quickly. A one-to-one relationship between solution and service has many benefits, which I will delve into in a subsequent blog in this series. For this static diagram, you would therefore have five solutions to create, each containing a number of projects. Typically, the architect or developers would have to create these solutions and the projects within them by hand. Frameworks will do that work for you at the click of a button. But the story only begins there; let’s dig in a little further.

The next set of models to create is the Call Chains. As with the Static Diagram, services drawn in PowerPoint are just colored rectangles with lines and, as such, are only one step up from a paper napkin. Frameworks, conversely, treats them as first-class citizens; you may be surprised at what becomes possible when the model truly is a living, breathing part of your work.

Call Chain

Only the services shown on the Static Diagram are allowed on the call chains, as dictated by the Method. Logically, the culmination of all call chains dictates the dependency diagram, and Frameworks can use that information to produce it without further work by the architect.

Dependency

The WCF Services diagram is also supported. A host is represented by a red box that you drag onto the “design surface”. You then drag Managers, Engines and Resource Access services into the host according to your design. Once again, the model carries valuable information that can be put to great use; in this case, a host solution can be created and even deployed. As you will see in a future article, Frameworks can produce a complete vertical slice of the application without a single line of code being written by a human being.

Host

As stated earlier, the static diagram can create a solution for each service. Frameworks will use any number of Visual Studio project templates to set up the solutions and projects in the manner your organization requires – you have complete control.

Once the solutions have been created, it is time to create the detailed design. That is to say, you create the service contracts and the data contracts they will consume and produce. This is where the power of the tool comes to life.

Detail Design 1.2

Optionally, each operation can have a DataContract as input, a parameter list, or both. The tool imposes no limits on what you can currently do with WCF.
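
To make that concrete, here is a rough sketch of the kind of contract the detail design produces. The names below (IOrderManager, OrderData) are purely illustrative and not the tool’s actual output; the point is simply that an operation may take a data contract, plain parameters, or both.

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical data contract, for illustration only.
[DataContract]
public class OrderData
{
     [DataMember]
     public int Id { get; set; }

     [DataMember]
     public decimal Total { get; set; }
}

// Hypothetical service contract showing the three shapes an operation can take.
[ServiceContract]
public interface IOrderManager
{
     [OperationContract]
     OrderData GetOrder(int orderId);                  // parameter list only

     [OperationContract]
     void SubmitOrder(OrderData order);                // data contract only

     [OperationContract]
     OrderData Reprice(OrderData order, bool persist); // both
}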

Suppose the service is of type Resource Access; you will want to model the entities, à la EF, in the same model. You might also want to reverse engineer an existing database. If you use NHibernate, you would normally have to write all of those ugly XML “hbm” mapping files. Why do that by hand when it doesn’t add any real business value? The answer is that you shouldn’t; you want the tool to write them for you.

On many occasions you will want to map some or all of a data contract to an entity and back. That mapping code is also created by the tool. Moreover, you might not want to share your data contracts from the Access layer through the engine to the manager. At each layer, you can copy the data contracts from one model to another, make adjustments, and the mapping code will be generated.

Detail Design 2.1
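
As a rough, hand-written illustration of what that generated mapping amounts to (the CustomerData contract and Customer entity here are hypothetical, not the tool’s output), the tool effectively writes code of this shape for you:

// Hypothetical data contract and entity, shown only to illustrate the mapping.
public class CustomerData
{
     public int Id { get; set; }
     public string Name { get; set; }
}

public class Customer
{
     public virtual int Id { get; set; }
     public virtual string Name { get; set; }
}

// Hand-written equivalent of the kind of mapping code the tool generates:
// copy a data contract to an entity and back at each layer boundary.
public static class CustomerMapper
{
     public static Customer ToEntity(CustomerData contract)
     {
          return new Customer { Id = contract.Id, Name = contract.Name };
     }

     public static CustomerData ToContract(Customer entity)
     {
          return new CustomerData { Id = entity.Id, Name = entity.Name };
     }
}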

It is important to note that all code artifacts are optional, and when you do use them, there are plenty of places to inject your own code. Further, all code that you write to augment the generated code is placed in partial classes, so your code is never modified.
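
For instance (with hypothetical file and class names), the generated half and your half of a class might sit in separate files like this:

// CustomerManager.Generated.cs - owned by the generator and safe to regenerate at any time.
public partial class CustomerManager
{
     public decimal GetLifetimeValue(int customerId)
     {
          // ...generated data access plumbing would live here...
          return 0m;
     }
}

// CustomerManager.cs - owned by the developer; the generator never touches this file.
public partial class CustomerManager
{
     public bool IsKeyAccount(int customerId)
     {
          return GetLifetimeValue(customerId) > 100000m;
     }
}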

The roadmap includes plugging the modeling tools into project management tools such as JIRA and TFS, creating NuSpec files for each service, creating build definitions for TeamCity and TFS, and hooking in IDesign’s Project Templates.

The goal of this blog was to merely give you a flavor of what is possible with Frameworks 2013. There are many more features to cover and a fair amount of detail to investigate. We will do that in subsequent entries of this multi-part blog series.

Database Sharding

December 2, 2010

Database sharding is not a new concept, but it has historically been difficult to implement. Society and business alike have an insatiable appetite for ever-growing amounts of data and, as such, there is a need for larger databases than ever before.

The advent of Microsoft’s Azure presents interesting possibilities for scaling in massive proportion. Utilizing Azure Tables, massive scale is possible. Azure Tables, however, are not SQL-based and present a different paradigm than most development shops are familiar with. SQL Azure essentially gives us the option of having Microsoft SQL Server in the cloud, but it is currently limited in database size.

How, then, would one implement a solution that allows large amounts of relational data to scale in similar fashion to Azure Table Storage? One method would be to shard the database or, in other words, break the database up into smaller chunks and store them in as many databases as makes sense.

The details of the implementation can sound a bit daunting but, as with most well-architected patterns, it isn’t quite as complicated as it may first seem. This blog will introduce one such pattern that was used in a social networking application. The application was built for ultimate scalability utilizing Frameworks, Windows Azure Web Roles, Azure Worker Roles, Azure Table Storage and SQL Azure.

Aside: An interesting by-product of this approach, especially when using Frameworks, is that you can easily mix and match relational data with Azure Tables. Frameworks can provide persistence ignorance. Both storage mechanisms have their strengths, so why not take advantage of them both within the same application?

The basic premise is that each database schema serves a functional purpose, such as person information or blogging data. Within each functional, logical schema there can be any number of physical shards (or partitions, if you will).

When a user logs on, we ‘fan out’ to all Person shards to find their primary key and partition id. Once we have the keys, the PartitionId points us to the database in which that person’s data lives. We then perform normal SQL operations.
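
To give a feel for the fan-out, here is a simplified, hand-written sketch of what such a lookup across Person shards might resemble. The shard list, the ShardedPerson type and the per-shard query are placeholders for this sketch, not Frameworks’ actual generated output.

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Simplified sketch of a fan-out lookup across Person shards.
public class PersonShardLookup
{
     private readonly IList<string> _shardConnectionStrings;

     public PersonShardLookup(IList<string> shardConnectionStrings)
     {
          _shardConnectionStrings = shardConnectionStrings;
     }

     public ShardedPerson FindByUserNameAndPassword(string userName, string password)
     {
          // Query every shard in parallel; only the shard that owns the user returns a row.
          var tasks = _shardConnectionStrings
               .Select((connectionString, partitionId) =>
                    Task.Factory.StartNew(() => QueryShard(connectionString, partitionId, userName, password)))
               .ToArray();

          Task.WaitAll(tasks);

          // The PartitionId on the match tells us which shard to use for everything that follows.
          return tasks.Select(t => t.Result).FirstOrDefault(p => p != null);
     }

     private ShardedPerson QueryShard(string connectionString, int partitionId, string userName, string password)
     {
          // Real code would open a connection to this shard and run the query;
          // it is omitted here to keep the sketch focused on the fan-out itself.
          return null;
     }
}

// Placeholder for the Person entity returned from a shard.
public class ShardedPerson
{
     public int Id { get; set; }
     public int PartitionId { get; set; }
}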

Frameworks code-generates the data access layer and abstracts the sharding patterns. Since the details of data access are left to code generation, the developer just needs to call the DAOs in an intelligent manner. The code below shows a PersonManager which calls the data access layer. A front-end might first call the PersonManager with a user name and password to retrieve a Person object. The Person will then have a PartitionId that points us to the shard in which that person’s data lives. The pattern continues: blogs are kept in another shard, while the posts of a blog live in yet another.

public class PersonManager : IPersonManager
{
     // Store the Data Access objects injected by the IOC
     // so that we can use them for the life of the object.
     private IPersonDao _PersonDao;
     private IBlogDao _BlogDao;
     private IPostDao _PostDao;

     // Use IOC to inject the needed concrete implementations...
     public PersonManager(IPersonDao personDao, IBlogDao blogDao, IPostDao postDao)
     {
          _PersonDao = personDao;
          _BlogDao = blogDao;
          _PostDao = postDao;
     }

     // This is a pass-through method, as we only want the BusinessRules layer
     // to interact with the Data Access Layer.
     public Person GetByUserNameAndPassword(string username, string password)
     {
          // The generated code will 'fan out' to all the databases in
          // an asynchronous manner.
          return _PersonDao.GetByUserNameAndPassword(username, password);
     }

     // Now that we have the person, we can retrieve their blog
     public IBlog GetBlog(IPerson person)
     {
          // Fan out is not needed because we know the id of the
          // partition, in other words, the shard.
          return _BlogDao.GetByPersonId(person.PartitionId, person.Id);
     }

     // Posts are kept in yet another shard...
     public IList<Post> GetPostList(IBlog blog, string[] tagArray)
     {
          // Again, fanning out is not needed
          return _PostDao.GetByBlogAndTags(blog.PostPartitionId, blog.Id, tagArray);
     }

} 
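
For context, a front-end might consume the manager roughly as follows. This is an illustrative sketch, not production code: it assumes a Unity-style IOC container already configured with the concrete DAOs, that the generated Person class implements IPerson, and the credentials and tags are made up.

// Hypothetical front-end usage; the container is assumed to be configured elsewhere.
public IList<Post> ShowLatestPosts(IUnityContainer container)
{
     IPersonManager personManager = container.Resolve<IPersonManager>();

     // Fan out across the Person shards to locate the user and their PartitionId.
     Person person = personManager.GetByUserNameAndPassword("jsmith", "secret");

     // Every call from here on goes straight to the shard the PartitionId identifies.
     IBlog blog = personManager.GetBlog(person);
     return personManager.GetPostList(blog, new[] { "azure", "sharding" });
}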

This has been an introduction to database sharding using Frameworks. In the next part, we will take a deep dive into the data access layer and look at the generated code.

Mock that Database Layer (or not)

November 29, 2010

Like many architects and developers, I believe that test-driven development has its place and can be a worthwhile endeavor for certain projects and skill sets. The advantages and disadvantages of the methodology are well chronicled. Whether or not you are a practitioner of TDD, I hope that, at a minimum, you’re writing unit tests using a framework such as NUnit or the built-in Visual Studio unit test framework.

If you’re working in an ideal environment, you probably have a system that has been designed with testing in mind: unit tests are well written and categorized, external dependencies such as the database are mocked and stubbed, and you’re using a build server that runs the tests and notifies the developer when they fail. While some practice this application life-cycle discipline, I would venture to guess that many more do not. If you fit into the latter category, you may not yet have the resources or know-how to be stubbing and mocking. If that is the case, I believe all is not lost.

I have spent considerable time coding a domain-specific language (DSL) that I call Frameworks. It has the ability to create many different types of artifacts, including those for the data access layer. If you have a table called Shape, it will create the Shape entity object as well as the Shape data access object (DAO). The DAO then uses an ORM to communicate with the database. If desired, artifacts will also be constructed for their interface counterparts, that is, IShape and IShapeDao. Dependency injection configuration is also generated, allowing a natural integration with an Inversion of Control container.
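
As a simplified picture of that generated surface (the real artifacts contain considerably more), the Shape example boils down to something like this:

// Simplified sketch of the artifacts generated for a Shape table.
public interface IShape
{
     int Id { get; set; }
     string Name { get; set; }
}

public class Shape : IShape
{
     public virtual int Id { get; set; }      // virtual so the ORM can proxy the entity
     public virtual string Name { get; set; }
}

public interface IShapeDao
{
     IShape GetById(int id);
     void Save(IShape shape);
}

public class ShapeDao : IShapeDao
{
     public IShape GetById(int id)
     {
          // The real generated code delegates to the configured ORM here.
          throw new System.NotImplementedException();
     }

     public void Save(IShape shape)
     {
          throw new System.NotImplementedException();
     }
}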

These interfaces and this separation of dependencies allow for the greatest unit-test flexibility. Now that we have achieved all this flexibility, one must decide how to use it. I am completely aware that removing dependencies from a unit test is highly desirable; hence all the work to make this happen automatically. One formidable dependency in most business applications is data access. Mocking is a very effective and popular way to remove this dependency from your tests, and it can be fairly straightforward if you have designed your system appropriately.
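
With those interfaces and constructor injection in place, faking the database takes only a few lines. Here is a sketch using Moq and NUnit against the IShapeDao interface from above; the ShapeManager class is invented for the example and is not part of the generated code.

using Moq;
using NUnit.Framework;

// Minimal hypothetical business object that receives the DAO via constructor injection.
public class ShapeManager
{
     private readonly IShapeDao _shapeDao;

     public ShapeManager(IShapeDao shapeDao)
     {
          _shapeDao = shapeDao;
     }

     public IShape GetById(int id)
     {
          return _shapeDao.GetById(id);
     }
}

[TestFixture]
public class ShapeManagerTests
{
     [Test]
     public void GetById_ReturnsShapeFromDao()
     {
          // Fake the data access layer so the test never touches a real database.
          var daoMock = new Mock<IShapeDao>();
          daoMock.Setup(d => d.GetById(42)).Returns(new Shape { Id = 42, Name = "Circle" });

          var manager = new ShapeManager(daoMock.Object);
          var shape = manager.GetById(42);

          Assert.AreEqual("Circle", shape.Name);
          daoMock.Verify(d => d.GetById(42), Times.Once());
     }
}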

There are plenty of compelling reasons to ‘fake’ the database, including but not limited to:

  1. The database or data access layer could have defects which could give you false positives or false negatives in your test results.
  2. The database might not be ready for testing as it could be lagging behind the code.
  3. The database could be sitting on a server that is not readily accessible to duplicate for testing purposes.
  4. The work that needs to be performed in order to ready a database for testing could be beyond scope.
  5. Database configuration issues get in the way.
  6. There is a dependency on not only the database, but the type of data that is needed.
  7. There are thousands of unit tests and all the reading and writing to the DB slows testing down to a crawl.

As compelling as this list is, it doesn’t mean that you can’t build useful tests that depend on the database. They’re just no longer called unit tests; they are integration tests. I find myself asking what a dependency is in the first place. I would argue that every unit test incurs some dependency. For example, we have to depend on the compiler, the CPU and other ‘moving’ parts.

To this end, what if a code generator created everything from the DDL that creates the database to the data access objects that service CRUD? Once the code generator is tested, what is the difference between its code generation and the compiler’s generation of MSIL?

In summary, I think bypassing the classic definition of a unit test in favor of an integration test accomplishes the goal in a smaller-scale project. If the project is mission-critical, scalable, functionally complicated or expected to have a long life, use an isolation framework such as Moq. Deciding how to proceed is what architecture is all about; one size does not fit all!

C++ vs Managed Code

November 27, 2010

This blog discusses the virtues of using lower-level languages such as C++ versus higher-level, managed code such as Microsoft’s C# or Java in a business application setting.

Disclaimer: Although I have roots in C++ and have used Java, I have been using the .NET stack from its inception and tend to gravitate towards this technology. For this reason, it is written with C# in mind. One could make many of the same observations from a Java point of view.

Perhaps the key concern for most is how the speed of managed code such as C# compares with the performance of C++. Studies have shown that, given the same algorithms, C++ has an advantage in this category. It would not be out of line to say that C++ can provide a 10-20% increase in performance.

Java and .NET provide memory management, hence ‘managed code’, whereas the C++ developer must provide their own memory management facility. As with most levels of abstraction, managed code must address the general case rather than the problem at hand and, as such, loses some optimizations that could otherwise be enjoyed.

I would be remiss if I did not state the obvious: C++ needs a compiler, which in turn is its own level of abstraction. If one were a purist, one would have to consider writing native machine code! Therefore, reasonable people can discuss what level of abstraction is right for the job at hand, not the virtues of abstraction itself.

Portability is another issue that should be considered. I don’t know of a computing platform that does not have a C++ compiler. Similarly, Java is easily ported to many operating systems and through the Mono Project, so is .NET.

Proponents of managed code would say that it is more scalable given this portability. Java and .NET developers are, for the most part, relieved of worrying about the specifics of Linux vs. Windows, as their code will just ‘run’.

I often hear from industry experts that they need absolute speed. What exactly does that mean? How fast do they need to be, and at what cost? There are many ways to achieve speed, especially with current hardware and software advancements, parallel programming and, perhaps the ultimate in scalability, cloud computing.

I would argue that in almost every case, there is much more to the argument than which platform will provide a faster sorting algorithm! Performance optimization always comes at a price, and it is argued at almost every level of computing.

Architects have to consider many application needs, platform cost, development time and cost, code portability and scalability, and more before they embark on a road that will lead them down a particular path.

I believe that using unmanaged code is a form of optimization and, as such, should be scrutinized carefully before implementation. One can theorize that there is a direct relationship between optimization, abstraction and resources (the time and money to develop, test and deploy). In short, the further we move away from abstraction toward optimization, performance increases but so do the resources required. Conversely, and generally speaking, both performance and resource costs decrease when tipping the scales in the other direction, toward abstraction.

If one subscribes to this theory, the argument can then be reduced to the level of performance gain per unit of resource expended.

It is further my belief that well-architected abstraction, such as memory management, decreases performance minimally while achieving great savings in resource expenditure.

Moreover, one could argue that if developers spend their time managing memory, as is the case with C++, they spend fewer precious brain cycles on the solution they are attempting to implement. Managed code can have the effect of freeing the mind to worry about the poetry of the code rather than the perils of stepping on one’s pointers. Any performance increase one might gain can easily dissipate if the algorithm is not elegantly constructed in the first place.

I have always believed that if you’re writing an application such as a compiler, C++ is the place to be, but if you’re writing business applications, C# or Java should be considered. I have recently begun to question that theory after hearing Anders Hejlsberg, the architect of C# and the original author of Delphi, announce that the next version of the C# compiler will be written in managed code.

In summary, C++ is alive and well and will be for years, most likely decades, to come. The usage window, however, is shrinking as managed code becomes more popular and efficient than ever before.


Aspect Oriented Programming

November 27, 2010

Aspect Oriented Programming, or AOP as it is commonly called, is an interesting technique for weaving cross-cutting concerns such as logging, exception handling and security into your applications with hardly any effort at all.

AOP vendors such as SharpCrafters use an interesting technique at build time to ‘weave’ aspect code into your methods. The result is that your methods are, in a sense, altered to incorporate these ‘Aspects’.

For example, consider the proverbial ‘Foo’ method…

public void Foo()
{
     // Do something here
}

Now consider that you need to log when you are entering and exiting the method. You could add the following code…

public void Foo()
{
     Trace.TraceInformation("Entering the Foo method.")
     // Do something here
     Trace.TraceInformation("Exiting the Foo method.")
}

This code works well if you need information about this one specific method. But what if you need this information for all the methods of a class? Writing this code is not only tedious; it also detracts from code readability. Now imagine that you want this level of information in several classes, perhaps across the entire project.

Enter Aspect Oriented Programming and a product such as PostSharp. To accomplish the logging at the method level, you simply add an attribute, like so…

[Trace]
public void Foo()
{
     // Do something here
}

[Trace] is an Aspect. Aspects are classes that you write in order to execute code within the boundaries of the method. In this example, code was written to perform some action on entry and another action just prior to exiting the method.
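
A minimal version of such an aspect might look like the sketch below. It assumes PostSharp’s OnMethodBoundaryAspect base class; the exact namespaces and base types depend on the PostSharp version you are using.

using System;
using System.Diagnostics;
using PostSharp.Aspects;

// Minimal tracing aspect; PostSharp weaves OnEntry/OnExit around every method
// the attribute is applied to.
[Serializable]
public class TraceAttribute : OnMethodBoundaryAspect
{
     public override void OnEntry(MethodExecutionArgs args)
     {
          Trace.TraceInformation("Entering the " + args.Method.Name + " method.");
     }

     public override void OnExit(MethodExecutionArgs args)
     {
          Trace.TraceInformation("Exiting the " + args.Method.Name + " method.");
     }
}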

As mentioned earlier, you could apply this Aspect to an entire class or even an assembly with a single line of code…

[assembly: Trace( AttributeTargetTypes="SomeApp.YourBusinessRules.*",
  AttributeTargetMemberAttributes = AttributeTargetElements.Public )]

This works because a product like PostSharp uses a post-compilation step to inject code into the IL. The injected code can be seen with a decompiler such as Red Gate’s Reflector.

If you’re interested in serious logging and performance monitoring you can combine the power of PostSharp with a product like Gibraltar for amazing results with almost no effort at all.

AOP – The Other Side of the Coin

If you’re thinking that you don’t want a post-compile step modifying the compiler-generated IL, you’re not alone. It is probably the single biggest argument against utilizing this technology. Personally, I consider many variables, such as the application’s life cycle: is it a mature piece of code, is it legacy, or are we beginning a new project?

Certainly it is a compelling story to be able to add much-needed cross-cutting concerns that were left out of a legacy code base with so little effort. In fact, it could even extend the life of the application dramatically.

However, if I’m starting a project from scratch, I tend to look at other alternatives such as code generation or policy injection with Microsoft Unity.

Summary

AOP is a very interesting technology that can yield almost immediate benefits with very little effort. Like any other pattern or technology, it has a time and place and you as the architect should weigh the risks and benefits before a single line of code is written.

The Mythical IT Budget – Part 2: Leadership, Attitude and Culture

November 25, 2010

Leadership, Attitude and Culture

This could be the single most important aspect of your IT efforts. As we all know, engineers are a breed of their own. For better or worse, they think differently, and often act differently, than most ‘normal folk’. In my experience they are super-sensitive to culture. A good culture can transform an average engineer into a very good one. Conversely, I have seen the best of developers wallow in a dictatorship-like atmosphere.

Managers – Lose the Attitude!

If you think that you know it all, perhaps you should try your hand at another industry. As is the case with many in my profession, technology is my passion and, as such, I study it every single day, night and weekend. I know enough to know that my knowledge pales in comparison to what I wish I knew.

Engineers and managers at all levels need to understand that even the most junior person has knowledge you can learn from. If you truly believe that, it’s hard not to build a culture where everyone, top to bottom, feels empowered and enthusiastic about contributing in a meaningful fashion.

Lose the Ego!

Whether you’re architecting network infrastructure or a software framework, do not marry your design – in fact, don’t treat it as your design at all.

If you’re whiteboarding, step back, let others talk, and build a relationship with subordinates that allows them to be critical. Your designs will improve, and everyone will feel like they had a hand in the creation. Even better, everyone will have a stake in ensuring success.

Many people feel that if they don’t have a better suggestion, they shouldn’t raise their concerns. I believe this is detrimental to the organization. Simply by raising an issue, you allow others to see it and perhaps solve it. None of this takes place, however, if leadership is married to every word that comes out of their mouth.

Don’t Be a Clock Watcher

Do your best to hire motivated people and then let them work. Engineers like to solve problems or, in the case of QA, find problems! Their minds are working in the morning before they come to work and at night after they leave. Managers should build an atmosphere that recognizes and rewards this type of attitude.

Many engineers do their best work on their own time, when they feel free to create and solve. Within reason, allow for flexibility and recognize their efforts. Your organization will be rewarded several times over and will be much more productive than one with a clock-punching mentality.

Promote Research of New Technologies

Engineers recognize the pace at which technology moves, and no one wants their skills to grow stale. Managers who don’t recognize this allow their products and services to fall behind the curve. Find ways to incorporate new technologies and allow your people to learn and practice them.

Ask your engineers to periodically research a new technology and present it in an informal way – perhaps as a brown bag session. You will reap the benefits many times over.

Have a killer snack cabinet!

Who doesn’t like to eat? A small investment in snacks, ranging from fresh fruit to hard-core junk food, says something about the workplace.

My personal favorite is the monthly birthday cake. It doesn’t have to be anybody’s birthday. In fact, people who work with me know that a fictitious worker named ‘Jebidiah’ has a birthday every month.

In Summary

Build a cozy and comfortable atmosphere where creativity wins over attitude, and watch productivity skyrocket.

Frameworks – A Domain Specific Language

November 24, 2010

Domain-specific languages (DSLs) can be a tremendous productivity boost for many software development environments. When used properly, they can dramatically reduce tedious hand-written code, decrease defect rates and increase code readability. They can also help bridge the gap between developers and business personnel.

I have been working with a DSL named Frameworks that brings an entire architectural framework into focus. It has been used in several enterprise-grade products, and development is continuous and ongoing.

Frameworks, while easy to use, deserves an in-depth discussion of its philosophies, artifacts, coding techniques and limitations. For these reasons, this blog will be multi-part, addressing a different topic with each post.

What is Frameworks?

It is a DSL that utilizes a model-first approach to development. There are some excellent writings on this subject, and I encourage you to explore them if you have not already done so. The domain model provides a soft transition of knowledge from domain-level expertise to developer ‘speak’. It is a model that business analysts digest quite easily and one that the developer can use in nearly every phase of a project. Domain models look similar to class diagrams.

Frameworks loosely uses the domain model as the hub for many artifacts, among them business objects and their interfaces, data access objects and their interfaces, and ORM and dependency injection configuration.

The goals of Frameworks are therefore several-fold:

  1. Help to bridge the gap between business analyst and developer roles using the domain model.
  2. Use the domain model to generate artifacts so that the developer does not have to write tedious, error-prone code.
  3. Generate the code utilizing best practices that build on the work of others.
  4. Allow the developer to choose between NHibernate and Microsoft’s Entity Framework to achieve object-relational-mapping (ORM).
  5. Integrate dependency injection (DI) systems such as Microsoft Unity into the generated code.
  6. Allow for internal caching of ‘category’ type data upon simple declaration.
  7. Allow the developer to declare database partitioning (‘sharding’) without having to write the code to do so.

As you can see, the project is quite ambitious. This series of blogs will attempt to describe, in various levels of detail, how these goals have been realized.

The next blog in the series will discuss the domain model in detail and set up some of the philosophies around the architecture.
