Posts Tagged ‘DSL’

Frameworks 2013 and IDesign’s Method

April 28, 2013 2 comments

Architects and developers alike have firmly adopted the SOA revolution and, with it, the special challenges the paradigm brings. Some have successfully navigated the issues, but many bear the battle scars of failed attempts. The SOA movement can benefit from tried-and-true methodologies and tools. Seasoned architects who have practiced SOA for years have greatly improved all aspects of the software process, from design and implementation through testing and Project Design, by utilizing IDesign’s “Method”. If you are not currently using the Method, I suggest you read about it on the IDesign web site.

I have been using a tool that helps with both the architect’s and the developer’s SOA efforts. It has been in use for a number of years, and its latest release now supports the Method. The combination of the two tells a compelling story, and that story is the subject of this blog.

Method practitioners know that modeling starts with a “Static Diagram” depicting the various services the architect has designed. Instead of creating the diagram in PowerPoint, we use Frameworks 2013, a Visual Studio add-in, to develop the model.

Static Diagram

In this rather small static diagram, there are five services. If you’re like me, you’ll want to keep your solutions small so they compile fast and, as a result, form a perfect environment in which to write and test code quickly. A one-to-one relationship between solution and service has many benefits, which I will delve into in a subsequent post in this series. For this static diagram, you would therefore create five solutions, each containing a number of projects. Typically, the architect or developers would have to create these solutions and the projects within them by hand. Frameworks will do that work for you at the click of a button. But the story only begins there; let’s dig in a little further.

The next set of models to create is the call chains. Like the static diagram, the services you draw in PowerPoint are just colored rectangles with lines and, as such, are only one step up from a paper napkin. Conversely, Frameworks treats them as first-class citizens; you may be surprised at what becomes possible when the model truly is a living, breathing part of your work.

Call Chain

Only the services shown on the static diagram are allowed on the call chains, as dictated by the Method. Logically, the culmination of all call chains determines the dependency diagram, and Frameworks can use that information to produce it without further work by the architect.

Dependency

The WCF Services diagram is also supported. A host is represented by a red box that you drag onto the “design surface”. You then drag Manager, Engine and Resource Access services into the host as you have designed it. Once again, the model carries valuable information that can be put to great use; in this case, a host solution can be created and even deployed. As you will see in a future article, Frameworks can produce a complete vertical slice of the application without a single line of code being written by hand.

Host
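To give a feel for what such a generated host might contain, here is a minimal self-hosting sketch. The OrderManager service, its contract and the endpoint address are purely hypothetical stand-ins, and a generated host solution would typically pull its endpoints and bindings from app.config rather than hard-code them:

using System;
using System.ServiceModel;

// Hypothetical manager contract and service, standing in for whatever
// Managers, Engines and Resource Access services the host contains.
[ServiceContract]
public interface IOrderManager
{
    [OperationContract]
    string Ping();
}

public class OrderManager : IOrderManager
{
    public string Ping() { return "pong"; }
}

class Program
{
    static void Main()
    {
        // Self-host the manager service at an illustrative base address.
        using (var host = new ServiceHost(typeof(OrderManager),
            new Uri("http://localhost:8000/OrderManager")))
        {
            host.AddServiceEndpoint(typeof(IOrderManager),
                new BasicHttpBinding(), string.Empty);
            host.Open();

            Console.WriteLine("Host running. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}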

As stated earlier, the static diagram can create a solution for each service. Frameworks will use any number of Visual Studio project templates to set up the solutions and projects in whatever manner your organization requires – you have complete control.

Once the solutions have been created, it is time to create the detailed design. That is to say, you create the service contracts and the data contracts they will consume and produce. This is where the power of the tool comes to life.

Detail Design 1.2

Optionally, each operation can have a DataContract as input, a parameter list, or both. The tool imposes no limits on what you can currently do with WCF.
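To make that concrete, here is a rough sketch of what such a contract might look like in WCF. The IPersonService contract and its data contracts are hypothetical examples for illustration, not the tool’s actual output:

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical data contracts an operation might consume and produce.
[DataContract]
public class PersonRequest
{
    [DataMember] public int PersonId { get; set; }
    [DataMember] public bool IncludeBlogs { get; set; }
}

[DataContract]
public class PersonData
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string UserName { get; set; }
}

// Hypothetical service contract: one operation takes a data contract,
// another takes a plain parameter list, mirroring the options above.
[ServiceContract]
public interface IPersonService
{
    [OperationContract]
    PersonData GetPerson(PersonRequest request);

    [OperationContract]
    PersonData GetByUserNameAndPassword(string username, string password);
}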

Suppose the service is of type Resource Access; you will then want to model the entities, à la Entity Framework, in the same model. You might also want to reverse engineer an existing database. If you use NHibernate, you would have to write all of those ugly XML “hbm” mapping files. Why do that by hand when it doesn’t add any real business value? The answer is that you shouldn’t; you would want the tool to write them for you.
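For readers who have not had the pleasure, here is a minimal sketch of the kind of hbm mapping file in question; the Person entity, its table and its columns are hypothetical:

<?xml version="1.0" encoding="utf-8"?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="MyApp.Entities" namespace="MyApp.Entities">
  <!-- Hypothetical Person entity mapped to a Person table -->
  <class name="Person" table="Person">
    <id name="Id" column="PersonId">
      <generator class="identity" />
    </id>
    <property name="UserName" />
    <property name="PartitionId" />
  </class>
</hibernate-mapping>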

On many occasions you might want to map some or all of a data contract to an entity and back. That mapping code is also created by the tool. Moreover, you might not want to share your data contracts from the access layer through the engine to the manager. At each layer, you can copy the data contracts from one model to another, make adjustments, and the mapping code will be generated.

Detail Design 2.1
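As a rough sketch, the generated mapping code might look something along these lines; the PersonContract and PersonEntity types and the mapper shape are hypothetical, not the tool’s exact output:

// Hypothetical data contract and entity types used below.
public class PersonContract
{
    public int Id { get; set; }
    public string UserName { get; set; }
}

public class PersonEntity
{
    public int Id { get; set; }
    public string UserName { get; set; }
}

// Sketch of the kind of two-way mapping code the tool would generate.
public static class PersonMapper
{
    public static PersonEntity ToEntity(PersonContract contract)
    {
        return new PersonEntity { Id = contract.Id, UserName = contract.UserName };
    }

    public static PersonContract ToContract(PersonEntity entity)
    {
        return new PersonContract { Id = entity.Id, UserName = entity.UserName };
    }
}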

It is important to note that all code artifacts are optional, and when you do use them, there are plenty of places to inject your own code. Further, all code that you write to augment the generated code is placed in partial classes and, as such, your code is never modified.
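Here is a small sketch of that partial-class split; the file names, type and members are illustrative. The generated half and the hand-written half live in separate files, so regeneration never touches your code:

// Generated file (e.g. Person.generated.cs) - regenerated at will.
public partial class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Hand-written file (e.g. Person.cs) - never touched by the generator.
public partial class Person
{
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}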

The road map includes plugging the modeling tools into project management tools such as JIRA and TFS, creating NuSpec files for each service, creating build definitions for TeamCity and TFS, and hooking in IDesign’s Project Templates.

The goal of this post was merely to give you a flavor of what is possible with Frameworks 2013. There are many more features to cover and a fair amount of detail to investigate. We will do that in subsequent entries of this multi-part series.

Database Sharding

December 2, 2010 1 comment

Database sharding is not a new concept, but it has historically been difficult to implement. Society and business alike have an insatiable appetite for ever-growing amounts of data and, as such, there is a need for larger databases than ever before.

The advent of Microsoft’s Azure presents interesting possibilities for scaling in massive proportion, and Azure Table storage makes that kind of scale possible. Azure Tables, however, are not SQL-based and present a different paradigm than most development shops are familiar with. SQL Azure essentially gives us the option of having Microsoft SQL Server in the cloud, but it is currently limited in database size.

How, then, would one implement a solution that allows large amounts of relational data to scale in a fashion similar to Azure Table storage? One method is to shard the database: in other words, break it up into smaller chunks and store them across as many databases as makes sense.

The details of the implementation can sound a bit daunting but, as with most well-architected patterns, it isn’t quite as complicated as it may first seem. This post will introduce one such pattern that was used in a social networking application. The application was built for ultimate scalability utilizing Frameworks, Windows Azure Web Roles, Azure Worker Roles, Azure Table Storage and SQL Azure.

Aside: An interesting by-product of this approach, especially when using Frameworks, is that you can easily mix and match relational data with Azure Tables. Frameworks can provide persistence ignorance. Both storage mechanisms have their strengths, so why not take advantage of them both within the same application?

The basic premise is that each database schema serves a functional purpose, such as person information or blogging data. Within each functional, logical schema there can be any number of physical shards (or partitions, if you will).

When a user logs on, we ‘fan out’ to all Person shards to find their primary key and partition id. Once we have the keys, the PartitionId points us to the database in which that person’s data lives. We then perform normal SQL operations.
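Conceptually, the generated fan-out could look something like the sketch below. The PersonShardReader class, its connection-string list and the FindInShard helper are hypothetical illustrations rather than the actual generated code, and the Person type here is a minimal stand-in for the generated entity:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Minimal stand-in for the Person entity; the real one is generated.
public class Person
{
    public int Id { get; set; }
    public int PartitionId { get; set; }
    public string UserName { get; set; }
}

public class PersonShardReader
{
    // Hypothetical list of connection strings, one per Person shard.
    private readonly IList<string> _shardConnectionStrings;

    public PersonShardReader(IList<string> shardConnectionStrings)
    {
        _shardConnectionStrings = shardConnectionStrings;
    }

    // 'Fan out': query every shard in parallel and keep the first hit.
    public Person GetByUserNameAndPassword(string username, string password)
    {
        var tasks = _shardConnectionStrings
            .Select(cs => Task.Factory.StartNew(() => FindInShard(cs, username, password)))
            .ToArray();

        Task.WaitAll(tasks);
        return tasks.Select(t => t.Result).FirstOrDefault(p => p != null);
    }

    // Hypothetical per-shard lookup; the generated code would run the
    // actual SQL (or ORM query) against this shard's connection string.
    private Person FindInShard(string connectionString, string username, string password)
    {
        // ... query the shard; return null when the person is not found there.
        return null;
    }
}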

Frameworks code-generates the data access layer and abstracts the sharding pattern. Since the details of data access are left to code generation, the developer just needs to call the DAOs in an intelligent manner. The code below shows a PersonManager which calls the data access layer. A front end might first call the PersonManager with a user name and password to retrieve a Person object. The Person will then have a PartitionId that points us to the shard in which their data lives. The pattern continues: blogs are kept in another shard, while the posts of a blog are kept in yet another.

using System.Collections.Generic;

public class PersonManager : IPersonManager
{
     // Store the Data Access objects injected by the IOC
     // so that we can use them for the life of the object.
     private IPersonDao _PersonDao;
     private IBlogDao _BlogDao;
     private IPostDao _PostDao;

     // Use IOC to inject the needed concrete implementations...
     public PersonManager(IPersonDao personDao, IBlogDao blogDao, IPostDao postDao)
     {
          _PersonDao = personDao;
          _BlogDao = blogDao;
          _PostDao = postDao;
     }

     // This is a pass through method as we only want the BusinessRules layer
     // to interact with Data Access Layer.
     public Person GetByUserNameAndPassword(string username, string password)
     {
          // The generated code will 'fan out' to all the databases in
          // an asynchronous manner.
          return _PersonDao.GetByUserNameAndPassword(username, password);
     }

     // Now that we have the person, we can retrieve their blog
     public IBlog GetBlog(IPerson person)
     {
          // Fan out is not needed because we know the id of the
          // partition, in other words, the shard.
          return _BlogDao.GetByPersonId(person.PartitionId, person.Id);
     }

     // Posts are kept in yet another shard...
     public IList<Post> GetPostList(IBlog blog, string[] tagArray)
     {
          // Again, fanning out is not needed
          return _PostDao.GetByBlogAndTags(blog.PostPartitionId, blog.Id, tagArray);
     }

} 

This has been an introduction to database sharding using Frameworks. In the next part we will dive deep into the data access layer and look at the generated code.

Mock that Database Layer (or not)

November 29, 2010 Leave a comment

Like many architects and developers, I believe that test-driven development (TDD) can have its place and be a worthwhile endeavor for certain projects and skill sets. The advantages and disadvantages of the methodology are well chronicled. Whether or not you are a practitioner of TDD, I hope that, at a minimum, you’re writing unit tests using a framework such as NUnit or the built-in Visual Studio unit testing framework.

If you’re working in an ideal environment, you probably have a system that has been designed with testing in mind: unit tests are well written and categorized, external dependencies such as the database are mocked and stubbed, and a build server runs the tests and notifies the developer when they fail. While some practice this application life-cycle discipline, I would venture to guess many more do not. If you fit into the latter category, you may not yet have the resources or know-how to be stubbing and mocking. If that is the case, I believe that all is not lost.

I have spent considerable time coding a domain-specific language (DSL) that I call Frameworks. It can create many different types of artifacts, including those for the data access layer. If you have a table called Shape, it will create the Shape entity object as well as the Shape data access object (DAO). The DAO then uses an ORM to communicate with the database. If desired, artifacts will also be constructed for their interface counterparts, that is, IShape and IShapeDao. Dependency injection configuration is constructed as well, allowing natural integration with an Inversion of Control container.
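As a rough sketch under those assumptions, the generated artifacts for a hypothetical Shape table might look like this:

// Rough sketch of the artifacts described above for a hypothetical Shape table.
public interface IShape
{
    int Id { get; set; }
    string Name { get; set; }
}

public class Shape : IShape
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IShapeDao
{
    Shape GetById(int id);
    void Save(Shape shape);
}

// The generated ShapeDao would implement IShapeDao and delegate the
// actual persistence work to the configured ORM (NHibernate or EF).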

These interfaces and this separation of dependencies allow for the greatest unit-test flexibility. Now that we have achieved all this flexibility, one must decide how to use it. I am completely aware that removing dependencies from a unit test is highly desirable, hence all the work to make this happen automatically. One formidable dependency in most business applications is data access. Mocking is a very effective and popular way to remove this dependency from your tests, and it can be fairly straightforward if you have designed your system appropriately.
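For example, here is a minimal NUnit test that mocks the data access layer with Moq, reusing the Shape types sketched above; the ShapeManager class is a hypothetical consumer of the DAO, not part of the generated code:

using Moq;
using NUnit.Framework;

// Hypothetical manager under test; it simply delegates to the injected DAO.
public class ShapeManager
{
    private readonly IShapeDao _dao;
    public ShapeManager(IShapeDao dao) { _dao = dao; }
    public Shape GetById(int id) { return _dao.GetById(id); }
}

[TestFixture]
public class ShapeManagerTests
{
    [Test]
    public void GetById_ReturnsShapeFromDao()
    {
        // Fake the data access layer so the test never touches a database.
        var daoMock = new Mock<IShapeDao>();
        daoMock.Setup(d => d.GetById(42)).Returns(new Shape { Id = 42, Name = "Circle" });

        var manager = new ShapeManager(daoMock.Object);
        var shape = manager.GetById(42);

        Assert.AreEqual("Circle", shape.Name);
        daoMock.Verify(d => d.GetById(42), Times.Once());
    }
}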

There are plenty of compelling reasons to ‘fake’ the database and they include but are not limited to:

  1. The database or data access layer could have defects which could give you false positives or false negatives in your test results.
  2. The database might not be ready for testing as it could be lagging behind the code.
  3. The database could be sitting on a server that is not readily accessible to duplicate for testing purposes.
  4. The work that needs to be performed in order to ready a database for testing could be beyond scope.
  5. Database configuration issues get in the way.
  6. There is a dependency on not only the database, but the type of data that is needed.
  7. There are thousands of unit tests and all the reading and writing to the DB slows testing down to a crawl.

As compelling as this list is, it doesn’t mean that you can’t build useful tests that depend on the database. They’re just no longer called unit tests; rather, they are integration tests. I find myself asking what a dependency is in the first place. I would argue that every unit test incurs some dependency; for example, we have to depend on the compiler, the CPU and other ‘moving’ parts.

To this end, what if a code generator created everything from the DDL that creates the database to the data access objects that service CRUD? Once the code generator is tested, what is the difference between this code generation and the compiler generating MSIL?

In summary, I think bypassing the classic definition of a unit test in favor of an integration test accomplishes the goal on a smaller-scale project. If the project is mission-critical, scalable, functionally complicated or expected to have a long life, use an isolation framework such as Moq. Deciding how to proceed is what architecture is all about; one size does not fit all!

Frameworks – A Domain Specific Language

November 24, 2010 3 comments

Domain-specific languages (DSLs) can be a tremendous productivity boost for many software development environments. When used properly, they can dramatically reduce tedious hand-written code, decrease defect rates and increase code readability. They can also help bridge the gap between developers and business personnel.

I have been working with a DSL named Frameworks that brings an entire architectural framework into focus. It has been used in several enterprise-grade products, and development is ongoing.

Frameworks, while easy to use, deserves an in-depth discussion of its philosophies, artifacts, coding techniques and limitations. For these reasons, this blog will be multi-part, addressing a different topic with each post.

What is Frameworks?

It is a DSL that utilizes a model-first approach to development. There are some excellent writings on this subject, and I encourage you to explore them if you have not already done so. The domain model provides a soft transition of knowledge from domain-level expertise to developer ‘speak’. It is a model that business analysts digest quite easily and one that the developer can use in nearly every phase of a project. Domain models look similar to a class diagram, as shown below.
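Since the original diagram does not reproduce well here, the following is a small illustrative domain model expressed as classes; the Person, Blog and Post types and their members are made up for the example:

using System.Collections.Generic;

// Illustrative domain model: a Person owns Blogs, and each Blog has Posts.
public class Person
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public IList<Blog> Blogs { get; set; }
}

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }
    public IList<Post> Posts { get; set; }
}

public class Post
{
    public int Id { get; set; }
    public string Body { get; set; }
    public string[] Tags { get; set; }
}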

Frameworks loosely uses the domain model as the hub for many artifacts. Among them are business objects and their interfaces, data access objects and their interfaces, ORM and dependency injection configuration, and others.
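As an illustration of the dependency injection configuration piece, here is a sketch of container wiring with Microsoft Unity; the interfaces and implementations shown are hypothetical generated names, not the tool’s actual output:

using Microsoft.Practices.Unity;

// Hypothetical generated interfaces and implementations for a Person table.
public interface IPersonDao { }
public class PersonDao : IPersonDao { }

public interface IPersonManager { }
public class PersonManager : IPersonManager
{
    private readonly IPersonDao _personDao;
    public PersonManager(IPersonDao personDao) { _personDao = personDao; }
}

// Sketch of the generated container wiring; at resolve time Unity injects
// the DAO into the manager's constructor automatically.
public static class ContainerBootstrapper
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();
        container.RegisterType<IPersonDao, PersonDao>();
        container.RegisterType<IPersonManager, PersonManager>();
        return container;
    }
}

// Usage: var manager = ContainerBootstrapper.Build().Resolve<IPersonManager>();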

The goal of Frameworks is therefore several-fold:

  1. Help to bridge the gap between business analyst and developer roles using the domain model.
  2. Use the domain model to generate artifacts so that the developer does not have to write tedious, error-prone code.
  3. Generate the code utilizing best practices that build on the work of others.
  4. Allow the developer to choose between NHibernate and Microsoft’s Entity Framework to achieve object-relational-mapping (ORM).
  5. Integrate dependency injection (DI) systems such as Microsoft Unity into the generated code.
  6. Allow for internal caching of ‘category’ type data upon simple declaration.
  7. Allow the developer to declare database partitioning (‘sharding’) without having to write the code to do so.

As you can see, the project is quite ambitious. This series of posts will attempt to describe, in various levels of detail, how these goals have been realized.

The next post in the series will discuss the domain model in detail and set up some of the philosophies around the architecture.
