Thursday, June 22, 2017

Micro-services, common pitfalls and Lambda architectures as a potential solution to avoid them

There is a fundamental conflict between services and data systems: microservices are designed to encapsulate data within their inner workings, while data systems are designed to expose data.

In a microservices-based architecture, an application is split into fragments, or microservices, that are separated by a network and collaborate to achieve the same goals as a monolithic application.

Microservices encapsulate distinct code and logic and manage their own data. They allow for easier scaling in both performance and organizational terms, and their biggest benefit, when done correctly, is that they can be deployed independently of one another.

There are several anti-patterns that can couple microservices to each other and therefore cause them to lose their most important benefit: independent deployment.
Sharing a package amongst microservices is one such anti-pattern, since changing the shared package might require several microservices to change their inner workings and their interactions with the package, preventing the microservices from evolving and being deployed independently of each other. To counter this scenario, it is advisable to have well-defined responsibilities and clean separation between services, and to develop contracts that clearly define the role of each service; if a new requirement does not fit into any of these contracts, it is probably a good candidate for a new microservice.

In a microservices architecture each microservice owns and manages its own set of data. If there is a reporting need for a larger set of data, aggregated across all the microservices, the data from these services is gathered into a central location where the reporting needs are met. In such a setup each microservice potentially has a different way of representing the same data structure, and over time the structures that define the same concepts in the overall system can diverge. Copying and moving all this data around a distributed system in which there is essentially no single source of truth also does not come without problems.

We should always design systems with flexibility in mind, in a way that allows for extensibility and change, because in the real world the business evolves. With new requirements there is always the possibility that a shared set of data between these fragments becomes necessary, and this can lead to coupling services through a shared data source in which multiple services manipulate the data, as opposed to a store used only for querying and reporting, which does not affect the agility of the microservices.

The introduction of event stream analysis technology such as Apache Kafka and Microsoft Azure Stream Analytics allows for a new approach to designing microservices that helps us avoid these common pitfalls.

Using event sourcing, event stream analysis platforms, and a distributed log allows us to avoid the coupling caused by a shared store in which multiple services both manipulate and query the data, and lets the microservices grow independently.

In such systems all data is first treated as an event, and all events are stored in a distributed log managed by stream analytics services. Essentially this means that there is a single source of truth for all the data and events, which can be tapped into both for reporting purposes and for business requirements. The microservices that need this data have no control over it; they can only query it, react to it, or copy it into their own storage mechanisms.
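As a minimal illustration of the idea (not tied to Kafka or Azure Stream Analytics, and with type names that are assumptions made purely for this sketch), the log is append-only and consumers can only read from it and project into their own stores:

    public sealed class Event
    {
        public string Type { get; set; }        // e.g. "OrderPlaced"
        public string Payload { get; set; }     // serialized event body
        public DateTime OccurredOn { get; set; }
    }

    // The log is the single source of truth: producers can only append, consumers can only read.
    public interface IEventLog
    {
        void Append(Event @event);
        IEnumerable<Event> ReadFrom(long offset);
    }

    // A consuming microservice projects the events it cares about into its own store,
    // but it never mutates the log itself.
    public class ReportingProjection
    {
        private readonly IEventLog log;
        private long offset;

        public ReportingProjection(IEventLog log)
        {
            this.log = log;
        }

        public void CatchUp()
        {
            foreach (var e in log.ReadFrom(offset))
            {
                // copy or transform the event into this service's own storage
                offset++;
            }
        }
    }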

With this setup we can be sure that we can scale easily, as the events are stored in a distributed log. We also preserve the separation of concerns between the microservices, since none of them can modify the data in a way that would cause discrepancies, and we have a single source of truth that can be analysed at any time, whether for machine learning or for business reporting. Event sourcing does not come without downsides, however, and you are well advised to take these into consideration before building a system based on this pattern.

Tuesday, March 31, 2015

Using Strategy and Factory Design Patterns to write SOLID code

I was given the following user story to work on, and I immediately recognized the potential to apply the strategy pattern:
   As a user I would like to be able to search for candidates

        Given the user is signed-in
        Given the user has a subscription
        Then return Non-anonymized search results

        Given the user is signed-in
        Given the user does not have a subscription
        Then return anonymized search results

        Given the user is signed-out
        Then return signed-out search results

It is obvious that there are three different behaviours depending on whether or not the user is signed in and has a valid subscription.
If we were to write all this functionality into a single object we would be breaking the single responsibility principle and would end up with a number of if/else statements, increasing the cyclomatic and maintenance complexity of our code. So I set out to apply the single responsibility principle, and the first step I took was to define the responsibilities of my objects. From the user story above we can see that there are three different search behaviours, and therefore we can easily create three objects, one representing each search behaviour.
first I created a "role interface" for my strategies like so:
    public interface ISearchStrategy
    {
        Task<ResultModel> SearchFor(SearchCriteria criteria);
    }

Then I started implementing the search strategies for each user story scenario like so. This was all done using TDD, but I have omitted the tests and the method implementations as I am trying to demonstrate the use of the design patterns:
"then return Non-anonymized search results" :
    public class NonAnonymizedSearchStrategy : ISearchStrategy
    {
        private readonly IQuery query;

        public NonAnonymizedSearchStrategy(IQuery query)
        {
            this.query = query;
        }

        public async Task<ResultModel> SearchFor(SearchCriteria criteria)
        {
            // Implementation omitted in the original post.
            throw new NotImplementedException();
        }
    }

"Then return anonymized search results" :
    public class AnonymizedSearchStrategy : ISearchStrategy
    {
        private readonly IQuery query;

        public AnonymizedSearchStrategy(IQuery query)
        {
            this.query = query;
        }

        public async Task<ResultModel> SearchFor(SearchCriteria criteria)
        {
            // Implementation omitted in the original post.
            throw new NotImplementedException();
        }
    }

"Then return signed-out search results" :
    public class SignedOutSearchStrategy : ISearchStrategy
    {
        private readonly IQuery query;

        public SignedOutSearchStrategy(IQuery query)
        {
            this.query = query;
        }

        public async Task<ResultModel> SearchFor(SearchCriteria criteria)
        {
            // Implementation omitted in the original post.
            throw new NotImplementedException();
        }
    }

Now that we have defined our behaviours (or policies, or strategies, as they are more popularly called), we need a way of instantiating the right strategy at runtime. This is where the factory pattern is used.
    public class SearchStrategyFactory:ISearchStrategyFactory
    {
        private readonly ICurrentUserService currentUserService;
        private readonly IQuery query;

        public SearchStrategyFactory(ICurrentUserService currentUserService, IQuery query)
        {
            this.currentUserService = currentUserService;
            this.query = query;
        }

        public ISearchStrategy Create()
        {
            if (currentUserService.IsSignedIn == false)
            {
                return new SignedOutSearchStrategy(query);
            }

            if (currentUserService.HasSubscription)
            {
                return new NonAnonymizedSearchStrategy(query);
            }

            return new AnonymizedSearchStrategy(query);
        }
    }
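Because the factory is now the only place that knows about the concrete strategies, its branching logic can be covered by a handful of focused tests. The sketch below is only an illustration and assumes MSTest and Moq for the test doubles, neither of which is prescribed by this post:

    [TestMethod]
    public void Create_ReturnsSignedOutStrategy_WhenUserIsNotSignedIn()
    {
        // Arrange: a signed-out user; ICurrentUserService and IQuery are mocked.
        var currentUser = new Mock<ICurrentUserService>();
        currentUser.Setup(u => u.IsSignedIn).Returns(false);
        var query = new Mock<IQuery>();

        var factory = new SearchStrategyFactory(currentUser.Object, query.Object);

        // Act
        var strategy = factory.Create();

        // Assert: the signed-out scenario maps to the signed-out strategy.
        Assert.IsInstanceOfType(strategy, typeof(SignedOutSearchStrategy));
    }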

In the composition root I used convention over configuration to register all factories whose names end with the word "Factory", like so:
            container.Register(
                Classes
                .FromAssemblyContaining<CandidateService>()
                .Where(t => t.Name.EndsWith("Factory"))
                .WithService
                .AllInterfaces().LifestylePerWebRequest());

Now I am able to inject my factory into my application service and use it like so:
    public class SearchService:ISearchService
    {
        private readonly ISearchStrategyFactory searchStrategyFactory;

        public SearchService(ISearchStrategyFactory searchStrategyFactory)
        {
            this.searchStrategyFactory = searchStrategyFactory;
        }

        public Task<ResultModel> SearchFor(SearchCriteria criteria)
        {
            return searchStrategyFactory.Create().SearchFor(criteria);
        }
    }
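With this in place, a consumer of the service never needs to know which strategy handled the request. The controller below is a hypothetical example (SearchController is not part of the original post) assuming ASP.NET MVC:

    public class SearchController : Controller
    {
        private readonly ISearchService searchService;

        public SearchController(ISearchService searchService)
        {
            this.searchService = searchService;
        }

        public async Task<ActionResult> Index(SearchCriteria criteria)
        {
            // The controller stays unaware of which strategy produced the results.
            var results = await searchService.SearchFor(criteria);

            return View(results);
        }
    }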

Monday, March 30, 2015

Using Mapper and Abstract Factory Design Patterns to write SOLID code

Dapper provides a great way to map between SQL result sets and POCO classes, but even with Dapper we sometimes end up with code like the following example:
public class Query:QueryBase,IQuery
    {
        public Query(IConnectionFactory connectionFactory): base(connectionFactory, DatabaseName.Search){}

        public async Task<SearchResultDto> Search(ISearchCriteria criteria)
        {
            SearchResultDto SearchResult = null;

            await Fetch(conn => conn.QueryMultipleAsync(@"dbo.SearchGet",
                new
                {
                    criteria.Keywords,
                    criteria.LocationId,
                    criteria.GridEast,
                    criteria.GridNorth,
                    criteria.GridRadius,
                    criteria.RateFrom,
                    criteria.RateTo,
                    criteria.IsContract,
                    criteria.HoursUpdatedOffset,
                    SortById = criteria.OrderBy,
                    RowTake = criteria.DbTakeAmount,
                    RowSkip = criteria.Page <= 1 ? 0 : (criteria.Page - 1)*criteria.PageSize
                },
                commandType: CommandType.StoredProcedure)).ContinueWith(t =>
                {
                    var result = t.Result;

                    if (result != null)
                    {
                        SearchResult = new SearchResultDto();

                        var last30Days = DateTime.Today.AddDays(-30);

                        SearchResult.s = t.Result.Read().Select(x =>
                        {
                            SearchResult.MaxRowsCount = x.MaxRows;

                            return new Dto()
                            {
                                CandidateId = x.Candidate_Id,
                                Name = x.Forename + " " + x.Surname,
                                Surname = x.Surname,
                                Forename = x.Forename,
                                Location = x.Location_Name,
                                DistanceFromSearchedLocation = x.Distance > 0 ? Convert.ToDecimal(x.Distance) : null,
                                JobTitle = x.Job_Title,
                                Employer = x.Previous_Employer,
                                LastSignIn = x.Last_Login,
                                PayRate = x.Minimum__Rate > 0 ? Convert.ToDecimal(x.Minimum__Rate) : null,
                                CalculatedSalary = x.CalcSal > 0 ? Convert.ToDecimal(x.CalcSal) : null,
                                MinimumSalary = x.Minimum_Salary > 0 ? Convert.ToDecimal(x.Minimum_Salary) : null,
                                AdditionalInfo = x.AdditionalInfo,
                                IsAvailable = x.ShowLabel,
                                AvailableMornings = (x.DoesMornings ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                                AvailableAfternoons = (x.DoesAfternoons ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                                AvailableEvenings = (x.DoesEvenings ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                                AvailableWeekends = (x.DoesWeekends ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                                AvailableShiftWork = (x.DoesShiftWork ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                                AvailableNights = (x.DoesNights ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                                AvailabilityUpdatedOn = x.AvailabilityUpdatedOn
                            };

                        }).ToList();
                    }
                });

            return SearchResult;
        }
    }
There are a few concerning issues with the above code.
SRP states that "classes should have only one reason to change."
First, the code breaks the single responsibility principle. Let's determine the responsibilities in the class:
the class is responsible for querying the database and getting a result set of data, and it is also responsible for mapping that result set to the return type. In a small class like this that might not seem like a big issue, but as the project grows you can be sure that this class grows too, and it becomes harder to maintain.

It also breaks the open/closed principle.
OCP states that "classes should be open for extension but closed for modification."
If we need to map a new field from SQL to the POCO class, we have to open this class and modify it, even though that change is a side effect of the mapping responsibility living in this class.
The best way to avoid introducing new bugs into a unit of code that already works is to avoid making changes to that unit. We know that is not entirely possible, as new features are requested and we always have to change code in order to deliver new features to our customers or to fix the bugs we introduce along the way. One of the benefits of following the SOLID principles is that the resulting code is composed of smaller modules which are easier to understand, because they are smaller and have explicit boundaries thanks to the SRP. We can work faster because we don't have to read through unnecessary code unrelated to the unit we are working on, and another side effect of smaller units of code is that there is less risk of introducing new bugs. I broke this class into two distinct classes with explicit responsibilities. First I created the following interface for my mappers:
    public interface IMapper<out T>
    {
        T Map(Dapper.SqlMapper.GridReader reader);
    }
I want to follow a convention-based naming scheme for my mappers: any mapper that maps a certain query result to a DTO should be named [DTO name + Mapper], i.e. the mapper for SearchResultDto will be called SearchResultDtoMapper. If I follow this convention I can use a dependency injection framework and tell it to use the convention to wire up all my mappers for me; this is called convention over configuration. I created the following mapper for the SearchResultDto:
    public class SearchResultDtoMapper : IMapper<SearchResultDto>
    {
        public SearchResultDto Map(SqlMapper.GridReader reader)
        {
            var SearchResult = new SearchResultDto();
            
            var last30Days = DateTime.Today.AddDays(-30);

            SearchResult.s = reader.Read().Select(x =>
            {
                SearchResult.MaxRowsCount = x.MaxRows;

                return new Dto()
                {
                    CandidateId = x.Candidate_Id,
                    Name = x.Forename + " " + x.Surname,
                    Surname = x.Surname,
                    Forename = x.Forename,
                    Location = x.Location_Name,
                    DistanceFromSearchedLocation = x.Distance > 0 ? Convert.ToDecimal(x.Distance) : null,
                    JobTitle = x.Job_Title,
                    Employer = x.Previous_Employer,
                    LastSignIn = x.Last_Login,
                    PayRate = x.Minimum__Rate > 0 ? Convert.ToDecimal(x.Minimum__Rate) : null,
                    CalculatedSalary = x.CalcSal > 0 ? Convert.ToDecimal(x.CalcSal) : null,
                    MinimumSalary = x.Minimum_Salary > 0 ? Convert.ToDecimal(x.Minimum_Salary) : null,
                    AdditionalInfo = x.AdditionalInfo,
                    IsAvailable = x.ShowLabel,
                    AvailableMornings = (x.DoesMornings ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                    AvailableAfternoons = (x.DoesAfternoons ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                    AvailableEvenings = (x.DoesEvenings ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                    AvailableWeekends = (x.DoesWeekends ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                    AvailableShiftWork = (x.DoesShiftWork ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                    AvailableNights = (x.DoesNights ?? false) && (x.AvailabilityUpdatedOn > last30Days),
                    AvailabilityUpdatedOn = x.AvailabilityUpdatedOn
                };

            }).ToList();

            return SearchResult;
        }
    }
Now I can use my mapper to map the objects that I need, but first I have to wire up my DI container so that it can inject my implementation. As you can see, I am telling Castle Windsor to wire up all the classes whose names end with "Mapper" against their interfaces.
 
container.Register(
                Classes
                .FromAssemblyContaining<SearchResultDtoMapper>()
                .Where(t => t.Name.EndsWith("Mapper"))
                .WithService
                .AllInterfaces());
Now I can inject my IMapper interface into my query class like so and start using it:

 
    public class Query:QueryBase,IQuery
    {
        private readonly IMapper<SearchResultDto> mapper;

        public Query(IMapper<SearchResultDto> mapper)
            : base(Connection.DatabaseName.Search)
        {
            this.mapper = mapper;
        }

        public Task<SearchResultDto> Search(ISearchCriteria criteria)
        {
            return Fetch(async conn =>
            {

                var result = await conn.QueryMultipleAsync(@"dbo.SearchGet", new
                {
                    criteria.Keywords,
                    criteria.LocationId,
                    criteria.GridEast,
                    criteria.GridNorth,
                    criteria.GridRadius,
                    criteria.RateFrom,
                    criteria.RateTo,
                    criteria.IsContract,
                    criteria.HoursUpdatedOffset,
                    SortById = criteria.OrderBy,
                    RowTake = criteria.DbTakeAmount,
                    RowSkip = criteria.Page <= 1 ? 0 : (criteria.Page - 1)*criteria.PageSize
                },
                    commandType: CommandType.StoredProcedure);

                return mapper.Map(result);
            });
        }
   }
This is great: now that dependency injection is set up, we can inject as many mappers as we need. But that quickly leads to another problem known as the "bloated constructor", where we end up injecting too many dependencies into a single constructor; this is an indicator of poor object-oriented design and a sign that the class has too many responsibilities.
public class TempQuery:QueryBase,ITempQuery
    {
        private readonly IMapper<SearchResultDto> searchResultDtoMapper;
        private readonly IMapper<PreviewDto> previewDtoMapper;

        public TempQuery(IMapper<SearchResultDto> searchResultDtoMapper, IMapper<PreviewDto> previewDtoMapper)
            : base(Connection.DatabaseName.CandidateSearch)
        {
            this.searchResultDtoMapper = searchResultDtoMapper;
            this.previewDtoMapper = previewDtoMapper;

        }
        
        ...........
    }
One solution for dealing with bloated constructors is to separate the responsibilities, put them behind facades, and inject those facades into the constructor instead. In this scenario, however, since we are injecting multiple implementations of the same generic interface (IMapper<T>), we can use the "Abstract Factory" design pattern.

The first step is to create an interface for our mapper factory:

public interface IMapperFactory
    {
        IMapper<T> Create<T>();
    }
And the implementation of our mapper factory interface:
    public class MapperFactory:IMapperFactory
    {
        private readonly IWindsorContainer container;

        public MapperFactory(IWindsorContainer container)
        {
            this.container = container;
        }

        public IMapper<T> Create<T>()
        {
            return this.container.Resolve<IMapper<T>>();
        }
    }

Here we face another anti-pattern. Since this class references the inversion of control container directly, the assembly it lives in takes a hard dependency on the IoC container, and this implementation is effectively a "service locator".
Service locator is considered an anti-pattern because it hides a class's dependencies, turning what should be compile-time errors into run-time errors, and it makes the code harder to maintain because it becomes unclear when a change is a breaking change.
We can fix this problem and avoid the service locator anti-pattern by moving the factory implementation into the IoC composition root:
    public class QueriesInstaller:IWindsorInstaller
    {
        public void Install(Castle.Windsor.IWindsorContainer container, Castle.MicroKernel.SubSystems.Configuration.IConfigurationStore store)
        {
            container.Register(
                Classes
                .FromAssemblyContaining<SearchQuery>()
                .Where(t => t.Name.EndsWith("Query"))
                .WithService
                .AllInterfaces().LifestylePerWebRequest());

            container.Register(
                Classes
                .FromAssemblyContaining<SearchResultDtoMapper>()
                .Where(t => t.Name.EndsWith("Mapper"))
                .WithService
                .AllInterfaces());

            container.AddFacility<TypedFactoryFacility>();

            container.Register(Component.For<IMapperFactory>().AsFactory());
        }
    }

We can then inject our mapper factory instance into the classes that are dependent on it and create any of the mappers that we need at run time:
public class Query:QueryBase,IQuery
    {
        private readonly IMapperFactory mapperFactory;

        public Query(IMapperFactory mapperFactory) : base(Connection.DatabaseName.Search)
        {
            this.mapperFactory = mapperFactory;
        }

        public Task<SearchResultDto> Search(ISearchCriteria criteria, int? ecruiterId, int? ouId)
        {
            return Fetch(async conn =>
            {

                var result = await conn.QueryMultipleAsync(@"dbo.SearchGet", new
                {
                    criteria.Keywords,
                    criteria.LocationId,
                    criteria.GridEast,
                    criteria.GridNorth,
                    criteria.GridRadius,
                    criteria.RateFrom,
                    criteria.RateTo,
                    criteria.IsContract,
                    criteria.HoursUpdatedOffset,
                    SortById = criteria.OrderBy,
                    EcruiterId = ecruiterId,
                    OuId = ouId,
                    RowTake = criteria.DbTakeAmount,
                    RowSkip = criteria.Page <= 1 ? 0 : (criteria.Page - 1)*criteria.PageSize
                },
                    commandType: CommandType.StoredProcedure);

                return mapperFactory.Create<SearchResultDto>().Map(result);
            });
        }
    }
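With the factory registered, the earlier TempQuery can also be collapsed to a single dependency. The sketch below only shows the constructor change and is meant as an illustration, not the original code:

    public class TempQuery : QueryBase, ITempQuery
    {
        private readonly IMapperFactory mapperFactory;

        public TempQuery(IMapperFactory mapperFactory)
            : base(Connection.DatabaseName.CandidateSearch)
        {
            this.mapperFactory = mapperFactory;
        }

        // Each query method now creates the mapper it needs on demand, e.g.
        // mapperFactory.Create<SearchResultDto>() or mapperFactory.Create<PreviewDto>().
    }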

Monday, March 02, 2015

Debugging Knockout ViewModels the easy way!

I have found an easy way of debugging all the Knockout ViewModels that exist in a page.
Simply add a pre element to your page and position it somewhere you know will not cover any other elements.



The snippet is placed in a partial view: it simply creates a pre element and sets its text data binding to the JSON value of the root viewmodel.


In the layout page you check whether the application is running in debug mode, and if it is, you load the partial view containing the JSON-serialized root ViewModel and append it to the layout. This way it does not appear in the live version of the site, and because it lives in the layout, any page that has a Knockout ViewModel gets this debugging feature.



Monday, January 19, 2015

Enabling Knockout Components in IE6 to IE8

In order to enable Knockout components in Internet Explorer 8 and below, we need to register the components in the head section of the HTML; this way IE will not discard the component elements when the page is loaded.
If you are using ASP.NET MVC you can organize your components into subfolders and use a snippet that reads the name of the folder containing each component and creates an element from that folder name.
    
         

Monday, December 29, 2014

Abstract Factory Pattern

A common scenario when dependency injection is used is that we end up passing multiple dependencies through the constructor of the dependent class, and this sometimes causes the constructor to become bloated.
    public class Program
    {        
        ICustomerRepository _customerRepository;
        IItemRepository _itemRepository;

        public Program(ICustomerRepository customerRepository, IItemRepository itemRepository)
        {
            _customerRepository = customerRepository;
            _itemRepository = itemRepository;
        }

        public Customer GetCustomer()
        {
            return _customerRepository.Get(1);
        }

        public Item GetItem()
        {
            return _itemRepository.Get(123);
        }
    }

In the above code not all injected dependencies are used equally or in the same manner: _customerRepository might be called more often than _itemRepository, which raises the question of why we instantiate both and inject them through the constructor via an IoC container even though they are not needed immediately.

If we took a different approach and forced the repositories to implement a common interface like so:
    public interface IRepository
    {
        
    }

    public interface IRepository<T> : IRepository where T : class, new()
    {
        T Add(T entity);
        T Update(T entity);
        T Remove(int id);
        T Get(int id);
    }

    
    public interface IRepositoryFactory
    {
        T GetRepository<T>() where T : IRepository;
    }

We would then be able to use an IoC container such as StructureMap to get an instance of type T, where T is an IRepository, and return it from our repository factory:

public class RepositoryFactory : IRepositoryFactory
{
    public T GetRepository<T>() where T : IRepository
    {
       return ObjectFactory.GetInstance<T>();
    }
}

Now we can inject our repository factory into our dependent class and use it to get our dependencies when they are needed:
    public class Program
    {
        IRepositoryFactory _repositoryFactory;

        public Program(IRepositoryFactory repositoryFactory)
        {
            _repositoryFactory = repositoryFactory;
        }

        public Customer GetCustomer()
        {
            var customerRepo = _repositoryFactory.GetRepository<ICustomerRepository>();
            return customerRepo.Get(1);
        }

        public Item GetItem()
        {
            var itemRepo = _repositoryFactory.GetRepository<IItemRepository>();
            return itemRepo.Get(1);
        }

        static void Main(string[] args)
        {
            BootStrapper.Execute();

            var p = new Program(ObjectFactory.GetInstance<IRepositoryFactory>());
        }
    }
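The BootStrapper.Execute() call above is not shown in the post; a minimal sketch of what it might do with legacy StructureMap's ObjectFactory follows. The concrete CustomerRepository and ItemRepository classes are assumptions made for the sake of the example:

    public static class BootStrapper
    {
        public static void Execute()
        {
            // Register the factory and the repositories with StructureMap (2.x style API).
            ObjectFactory.Initialize(x =>
            {
                x.For<IRepositoryFactory>().Use<RepositoryFactory>();
                x.For<ICustomerRepository>().Use<CustomerRepository>();
                x.For<IItemRepository>().Use<ItemRepository>();
            });
        }
    }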

Wednesday, December 03, 2014

Using Unit Test to Enforce Separation of Concerns

Separation of Concerns is a design principle which states that distinct parts of a software system should be separated and should not overlap in terms of features and functionality.

In technical terms this can be achieved by grouping common functionality into separate projects, but that in itself is not enough: any hard reference to another project that violates the separation of concerns should be avoided, and this can be tested and enforced using an integration test.

As an example, imagine a sample solution with four projects. To truly have separated concerns in such a solution, the presentation layer should only reference the domain services layer and should not have any hard reference to the data access layer.



In order to test for hard references from the presentation layer to the data access layer, we can create a unit test project and write the following test:


[TestMethod]
public void ShouldNotAccessDataAccessLayerDirectly() {
    Type webRepresentative = typeof(HomeController);
    Type dataAccessRepresentative = typeof(BaseRepository);

    var references = webRepresentative.Assembly.GetReferencedAssemblies();

    AssemblyName webAssemblyName = webRepresentative.Assembly.GetName();
    AssemblyName unwantedReferenceAssemblyName = dataAccessRepresentative.Assembly.GetName();

    Assert.IsFalse(
        references.Any(a => AssemblyName.ReferenceMatchesDefinition(unwantedReferenceAssemblyName, a)),
        string.Format("{0} should not be referenced by {1}",
            unwantedReferenceAssemblyName, webAssemblyName));
}

Thursday, November 20, 2014

Command Prompt To Folder

Create a text file named commandPrompt.reg, enter the following text into it, and save it:

Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\Directory\shell\CommandPrompt]
@="Command Prompt:"
[HKEY_CLASSES_ROOT\Directory\shell\CommandPrompt\Command]
@="cmd.exe /k cd %1"


Double-click the file and a "Command Prompt" option will be added to the right-click menu for folders in Windows.

Monday, October 27, 2014

Dependency Inversion Principle and Dependency Injection


The dependency inversion principle is one of the five SOLID principles of object-oriented programming and design, which were popularized by Robert Cecil Martin, better known as Uncle Bob, through his books and articles.

Uncle Bob is also the person who initiated the 2001 meeting of the group that went on to write the Agile Manifesto and establish the agile software development methodology.

Why Dependency Inversion

If the dependency inversion principle is adhered to when designing classes that interact with each other, these classes end up with lower coupling. Lower coupling between classes makes them more welcoming to change and extension, as modifications to one class do not force changes upon the other classes it interacts with.


There are several ways of implementing dependency inversion in the context of software development, but I will explain it using the most common method: dependency injection.

Dependency injection itself can be implemented in three different ways: constructor injection, property injection, and method injection. The most common is constructor injection, and in my opinion it is the preferred way, as the class being constructed requests its dependency up front and cannot be instantiated without being provided with it.

Take the following example, a very common scenario in an ASP.NET MVC e-commerce site, where a controller retrieves a list of items through a repository in order to show them in a view.


As the code snippet below shows, in this scenario the ItemController depends on a concrete implementation of ItemRepository in order to be able to function.


    public class ItemController : Controller
    {
        private ItemRepository _itemRepository;
 
        public ItemController()
        {
            _itemRepository = new ItemRepository();          
        }
 
        public ActionResult Index()
        {
            IList<Item> items = _itemRepository.GetAllItems();
 
            return View(items);
        }
    }


The main problem in the above scenario is that if we wanted to create unit tests for our controller, we would have to provide it with a concrete implementation of the ItemRepository.
The ItemRepository accesses the database in order to retrieve the items, and that causes the test to take some time to complete. This might not be an issue if there are only one or two tests to run each time a change is made to your class, but if you have tens or hundreds of tests in a project you will be wasting development time.
It is generally accepted that tests should run fast and in isolation from other parts of the system; the dependency in the above controller would not allow us to achieve this.

In order to overcome the above problem we will use a technique called constructor injection, in which the dependency is passed into the constructor of the dependent class as an interface.


    public class ItemController : Controller
    {
        private IItemRepository _itemRepository;
 
        public ItemController(IItemRepository itemRepository)
        {
            _itemRepository = itemRepository;
        }
 
        public ActionResult Index()
        {
            IList<Item> items = _itemRepository.GetAllItems();
 
            return View(items);
        } 
    }

In our updated controller we now depend on an interface called IItemRepository instead of a concrete implementation of the ItemRepository.

The reason for introducing the interface is that we can implement a mock version of the item repository that does not access the database, pass it in during our tests, and our code will still work.

By introducing the interface and injecting the dependency we have decoupled our controller from the ItemRepository.
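To make the payoff concrete, here is a hedged sketch of what a unit test for the updated controller could look like; it assumes Moq and MSTest, which this post does not prescribe:

    [TestMethod]
    public void Index_ReturnsViewWithItemsFromRepository()
    {
        // Arrange: a fake repository that never touches the database.
        var itemRepository = new Mock<IItemRepository>();
        itemRepository.Setup(r => r.GetAllItems()).Returns(new List<Item>());

        var controller = new ItemController(itemRepository.Object);

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.IsNotNull(result);
        itemRepository.Verify(r => r.GetAllItems(), Times.Once());
    }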

Monday, August 25, 2014

Mocking HttpContext and its dependencies

If your controllers access HttpSession or any other dependencies within the HttpContext, you won't be able to write tests for them unless you're able to mock HttpContext.

ASP.NET MVC provides the abstract HttpContextBase class, which represents the dependencies within HttpContext as virtual properties that are mockable using a mocking framework such as Moq.

I have written the following helper class, which allows the mocked HttpContext to return a default set of mocked dependencies while also letting you override those dependencies and set them up according to the scenario being exercised in your tests.

using System.Security.Principal;
using System.Web;
using Moq;

namespace UnitTests.Helpers
{
    public class FakeHttpContext
    {
        private static Mock<HttpSessionStateBase> _session;
        public static Mock<HttpSessionStateBase> Session
        {
            get { return _session ?? (_session = new Mock<HttpSessionStateBase>()); }
            set { _session = value; }
        }

        private static Mock<HttpRequestBase> _request;
        public static Mock<HttpRequestBase> Request
        {
            get
            {
                if (_request != null) return _request;
                _request = new Mock<HttpRequestBase>();
                _request.Setup(req => req.ApplicationPath).Returns("/");
                _request.Setup(req => req.AppRelativeCurrentExecutionFilePath).Returns("~/");
                _request.Setup(req => req.PathInfo).Returns(string.Empty);
                return _request;
            }

            set { _request = value; }
        }

        private static Mock<HttpResponseBase> _response;
        public static Mock<HttpResponseBase> Response
        {
            get { return _response ?? (_response = new Mock<HttpResponseBase>()); }
            set { _response = value; }
        }
        
        private static Mock<HttpServerUtilityBase> _server;
        public static Mock<HttpServerUtilityBase> Server
        {
            get
            { return _server ?? (_server = new Mock<HttpServerUtilityBase>()); }
            set { _server = value; }
        }

        private static Mock<IPrincipal> _user;
        public static Mock<IPrincipal> User
        {
            get
            { return _user ?? (_user = new Mock<IPrincipal>()); }
            set { _user = value; }
        }

        private static Mock<IIdentity> _identity;
        public static Mock<IIdentity> Identity
        {
            get
            { return _identity ?? (_identity = new Mock<IIdentity>()); }
            set { _identity = value; }
        }


        public static HttpContextBase Context
        {
            get
            {
                var context = new Mock<HttpContextBase>();
                context.Setup(ctx => ctx.Request).Returns(Request.Object);
                context.Setup(ctx => ctx.Response).Returns(Response.Object);
                context.Setup(ctx => ctx.Session).Returns(Session.Object);
                context.Setup(ctx => ctx.Server).Returns(Server.Object);

                //bind identity to _user
                User.Setup(usr => usr.Identity).Returns(Identity.Object);

                context.Setup(ctx => ctx.User).Returns(User.Object);
                return context.Object;
            }
        }
            
    }
}
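As a usage sketch (the controller and session key below are illustrative assumptions, not part of the original post), a test can override one of the default mocks and then hand the fake context to a controller through its ControllerContext:

    [TestMethod]
    public void Index_ReadsUserIdFromSession()
    {
        // Arrange: override the default session mock for this scenario.
        FakeHttpContext.Session.Setup(s => s["UserId"]).Returns(42);

        var controller = new AccountController();
        controller.ControllerContext = new ControllerContext(
            FakeHttpContext.Context, new RouteData(), controller);

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.IsNotNull(result);
    }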

Monday, August 04, 2014

Angular Base Url Factory

Changes to the URLs and the structure of an MVC app are common and expected when you work on a new product.

However, the cost of making these changes can grow greatly depending on how entangled your application's code is.

If one of these changes involves updating the base URL used by your AJAX calls, you will have to go through every JavaScript or HTML file and update each instance individually.

Using AngularJS factories, you can get an instance of the base URL on page load and use it in a function that takes a string, for example the name of an action on a controller, and prepends the base URL to the action's name.


angular.module('AngularApp').factory('urlFactory', function () {
    return {
        format: function (url) {
            return '@string.Format("{0}://{1}{2}home/", Request.Url.Scheme, Request.Url.Authority, Url.Content("~"))' + url;
        }
    }
});

In your AJAX calls you then only need to pass in the name of the action being called.


$http.post(urlFactory.format('credits'), { token: $scope.cd.Token, Id: $scope.cd.Id }).
success(function (data, status, headers, config) {

    $scope.result = data;
    $scope.running = false;

}).
error(function (data, status, headers, config) {

    $scope.result = data;
    $scope.running = false;

});

Saturday, June 21, 2014

AngularJs: Unit Testing with Karma, Jasmine, Chutzpah

In this post I will explain how to set up JavaScript unit testing for an AngularJS project while utilizing the best Visual Studio has to offer.


First set up a web project and create folders to hold your AngularJS files and your JavaScript test files. These directories will later be set in the Karma config file to tell Karma where to find our tests and their associated script files.






AngularJs
If you install AngularJS through NuGet it will add all the AngularJS files, including the translation files, to your solution. Create another folder inside the Scripts folder called angular and move all the added files into it.
Add a reference to this folder in the bundle configs.

Jasmine
Jasmine is a JavaScript behavior-driven testing framework which you can install through NuGet. You don't necessarily need to use Jasmine for testing purposes, but it is the framework used in the documentation on the Angular site itself.
After installing it, add a reference to the Jasmine script file in the bundle configs of your app.

Chutzpah
Chutzpah is a test runner which allows you to run your JavaScript tests from inside Visual Studio; you can install Chutzpah through NuGet.
There is also an extension for Chutzpah, installable through the extension manager, that integrates with the test explorer in Visual Studio 2012/2013.

Node.Js and Karma
Karma is a test runner for JavaScript. In this post I will go through the steps required to set up Karma on Windows. Karma runs on Node.js and is available through the node package manager (npm), which you get access to after installing Node.js.

After installing Node.js, open the command prompt as an administrator, navigate to the project folder, and enter the following command to tell npm to install Karma.
The -g flag in the command below tells npm to install Karma globally; this means that all projects will use the same version of Karma. If you need a different version of Karma for different projects, you can install Karma into the current directory instead.
 npm install -g karma
Now enter the following command to install the module required for using Karma through the command-line interface.
 npm install -g karma-cli
Before proceeding with the next step, make sure that you have JavaScript files in both the source and test folders, otherwise you will get an error.
Now that Karma is installed we can begin to configure it by entering the following command:
 karma init
In the following steps you will configure Karma, setting things such as the location of the source and test files, the default browser, and so on.
After the configuration has finished, Karma creates a JavaScript file called karma.conf.js in the project directory which holds the configuration.
We then have to make sure that the needed AngularJS files are referenced in the Karma config file.
Now that we have finished configuring Karma we can start it with the following command:
 karma start
Karma will then run in the command line, returning the results of any tests that were run.


Thursday, June 19, 2014

AngularJs: Custom Directives

One of the benefits of using Angular is that it allows you to create small fragments of HTML which can be reused across multiple pages. This promotes code reuse and minimizes duplication.

ng-include 
The ng-include directive in angular can be used to fetch an external html fragment and compile it as part of the page.

Let's say that the description portion of each course is a snippet that has the potential to be reused across the application.
We can extract the markup that represents the description portion and insert it into a separate HTML file.
After we have extracted the reusable portion into its own HTML file,
we can then use ng-include to tell Angular to fetch it and include it in the index page. Because ng-include expects a string, we use single quotes to wrap the name of the HTML file.

Using this technique we can reuse the description portion across the web app as many times as needed.

Custom Directives
Custom directives allow us to write code that is expressive and makes it easier to understand what the application is trying to do.
One of the easiest types of custom directives that we can implement in Angular is the template-expanding directive.
Template-expanding directives can include controller logic, and they allow us to define an attribute with a meaningful name that can be replaced by the template.
In the code snippet we declare a custom directive called courseDescription.
We restrict the directive to match elements with the name "course-description"; we have to state the restriction type explicitly because, by default, Angular directives match attributes, not elements.
The element name is produced when Angular translates the camel-cased directive name "courseDescription" into the dash-separated form "course-description".
Now that we have our directive we can include it in the index page, or any other page, in the following manner.
Custom directives can also include controller functions. We are going to add a controller to the course-description directive so that we can implement a "read more"/"read less" link.
The user can show the full description by clicking on the "read more" link or revert it to its shortened version by clicking "read less".
I have added a controller function to the course-description directive and given it an alias of ctrl; this alias can be used in the markup to access the data in this controller.
I have also added a boolean value to the controller called showMore and set it to a default value of false; this is used to check whether the whole text should be shown or only a smaller portion.
When the user clicks on the "Read more" link the boolean value is negated and the linkText is changed to indicate the new state.
The updated markup for the course-description directive contains two span elements: one holds the full description text, and the other, using the limitTo filter, shows only the first 30 characters. Depending on the showMore boolean value in the ctrl controller, one of the spans is hidden and the other is made visible.
As you can see, it is very easy to create custom directives that both carry functionality through controllers and can be reused across a project.

Wednesday, June 18, 2014

AngularJs: Modules and Dependency Injection

AngularJS has a built-in dependency injection mechanism. Using AngularJS you can divide your application into multiple types of components and register them with Angular's injector; later, when you need these components, you can request them and the injector will inject them where they are needed.

Module
Modules can be thought of as containers that encapsulate the different components of an Angular application.
You can use modules to implement modularization, meaning the division of the application code into separate components, which allows for code reuse and easier configuration and testing.
var appModule = angular.module("appModule", []);
Dependencies Between Modules
Sometimes we need to use the components of one module in another. In order to do so, a module needs to declare a dependency on the module which contains the needed components.
Below we create a module called appModule and give it a value called numberVal.
We need the numberVal of appModule in our second module, called depModule. When we create depModule we pass the dependency on appModule into the constructor as an array; this array can contain all the dependencies that depModule has on other components of an Angular application.
var appModule = angular.module("appModule", []);

appModule.value("numberVal",123);


var depModule = angular.module("depModule", ['appModule']);

depModule.controller("depController", function($scope,numberVal) {
    
   this.val = numberVal;
});
Value
A value in AngularJS can be a simple object, a number, or a string.
Values are usually used for configuration and are injected into factories, services, or controllers.
A value has to belong to an AngularJS module. To define a value we call the value() function on the module: the first parameter is the name of the value, and the second parameter is the value itself.
var appModule = angular.module("appModule", []);

appModule.value("numberVal", 123);

appModule.value("stringVal", "xyz");

appModule.value("objectVal", { prop1 : 123, prop2: "abc"} );
Factory
Factories allow you to configure a function whose return value can then be injected into controllers. In the example below, the appFactory parameter of the controller is injected and matched against the appFactory registration, so the controller receives whatever the factory function returns.
var appModule = angular.module("appModule", []);

appModule.factory("appFactory", function() {
    return 123;
});


appModule.controller("appController", function($scope, appFactory) {

    //prints 123
    console.log(appFactory);

});
In the above example, it is not the factory function that is injected, but the value returned by the factory function.

Service
Angular services are objects that you can use to organize and share common functionality across your app.
Services are lazily instantiated, meaning that Angular only instantiates a service when an application component depends on it.
Angular services are also singletons, meaning that each component dependent on a service gets a reference to the single instance generated by the service factory.
function appService() {
    this.doIt = function() {
        console.log("done");
    }
}

var appModule = angular.module("appModule", []);

appModule.service("appService", appService);

Monday, June 16, 2014

Rhino Mocks Basics

Rhino Mocks is a mocking framework for the .NET platform which eases the process of creating mock objects for unit testing and test-driven development.


Creating mock objects

In the following example our aim is to test the ItemController, but the ItemController has a dependency on the ItemRepository. We need to inject this dependency into the constructor of the ItemController in order to be able to instantiate it and test it.


The first concern with this scenario is that the item repository probably accesses a database, which will make our tests run very slowly.

The second concern is that we only want to test the behavior of the ItemController, and with the above approach we are actually testing the behavior of the ItemRepository class too. The dependency on ItemRepository also complicates debugging, because the scope of the tests is widened to include both classes; if something went wrong we would have to look into both classes to find the source of the problem.

Rhino Mocks allows us to create a mock object of ItemRepository if we can provide it with the appropriate interface. Internally, Rhino Mocks uses the third-party library Castle DynamicProxy to create proxy classes from interfaces.


Now that we have created our mock object, the arrange portion of our test is complete.
We can now begin the act portion of our test: we would like to exercise the behavior inside the ItemController's Index method.

We write the act portion by calling the Index method of the ItemController and then, using Rhino Mocks' AssertWasCalled extension method, verify that the GetAllItems method of the ItemRepository was called by the Index method.
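A hedged sketch of such a test with Rhino Mocks (the ItemController constructor signature is an assumption based on the description above):

    [TestMethod]
    public void Index_CallsGetAllItemsOnTheRepository()
    {
        // Arrange: generate a mock from the repository interface.
        var itemRepository = MockRepository.GenerateMock<IItemRepository>();
        var controller = new ItemController(itemRepository);

        // Act
        controller.Index();

        // Assert: verify the interaction with the dependency.
        itemRepository.AssertWasCalled(r => r.GetAllItems());
    }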


Creating Stubs

Stubs are controllable replacements for dependencies. By creating stubs you can control the program flow and avoid dealing with a dependency directly.

In the following scenario, our method uses the ItemValidator to validate the item being passed in. If the item is valid it is passed to the ItemRepository to be saved; if it is not, an ArgumentException is thrown.
Our method depends on the ItemValidator, and if we were to create item instances (valid and invalid) just so we could write some unit tests for it, we would essentially be testing both the method and the ItemValidator class, which means our test would not be isolated.

The other option is to mock the ItemValidator; however, the ItemValidator class might be too complex to mock, and we would not want to be held back from writing production code by having to create hand-rolled mocks every time we encounter such dependencies.


Fortunately a stub can be created using Rhino Mocks to overcome this problem.
A stub is essentially a mocked object that we can set up to return an expected value. In the test, the mockItemValidator and mockItemRepository dependencies are injected into the ItemController constructor.

Since the itemValidator is a mock it does not have any implementation, and therefore we have to configure it so that it does what we want it to do.
By default a stub returns the default values of its methods. The ItemValidator's Validate method returns a boolean, and the default value of a boolean is false; however, we would like to write a test confirming that the Save method calls the save method of the ItemRepository, and that requires the Validate method to return true. So we create a stub from mockItemValidator and configure it so that, if the Validate method is called with any Item, it returns true.

The completed test case demonstrates how we can use stubs to control program flow and get rid of dependencies. We set up our stub to return true and therefore direct the flow to the ItemRepository's Save method; if we changed the stub to return false, or left it with the default value of false, we would direct the flow to the else branch and could test that the ArgumentException is thrown.
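A hedged sketch of such a test (the exact signatures of ItemController, IItemValidator, and IItemRepository are assumptions based on the description above):

    [TestMethod]
    public void Save_PassesValidItemToTheRepository()
    {
        // Arrange: a stub validator that reports any item as valid, and a mock repository.
        var mockItemValidator = MockRepository.GenerateStub<IItemValidator>();
        var mockItemRepository = MockRepository.GenerateMock<IItemRepository>();
        mockItemValidator.Stub(v => v.Validate(Arg<Item>.Is.Anything)).Return(true);

        var controller = new ItemController(mockItemRepository, mockItemValidator);
        var item = new Item();

        // Act
        controller.Save(item);

        // Assert: the valid item reached the repository.
        mockItemRepository.AssertWasCalled(r => r.Save(item));
    }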

Constraints

Constraints enable you to verify that the arguments passed into a method match certain criteria.
Imagine that the responsibility of creating an Item in the above scenario was given to the ItemController, and that we had to pass the primitive types that comprise an item into the Save method instead of an actual Item instance.

Now, to test this scenario we have to pass in the parameters, but in the assert section we encounter a problem: the ItemController's Save method takes the primitives, creates an instance of Item, and then passes that Item to the ItemRepository to be saved.

Because the assert section checks for reference equality, we can't create an instance of an Item with the same primitive values and use it in the assert section; that would cause the test to fail, since the references point to two different instances of Item.

Rhino Mocks provides the Is, Matches, List, and Text constraints for such scenarios.

In this scenario we can use the Matches constraint to check that each primitive matches the corresponding property value of the item being saved by the ItemRepository.
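A hedged sketch of how the Matches constraint can be used here (the Save signature and the Item property names are assumptions made for the example):

    [TestMethod]
    public void Save_CreatesAnItemFromThePrimitivesAndSavesIt()
    {
        var mockItemRepository = MockRepository.GenerateMock<IItemRepository>();
        var controller = new ItemController(mockItemRepository);

        controller.Save("Widget", 9.99m);

        // Instead of reference equality, match on the values of the saved item.
        mockItemRepository.AssertWasCalled(r => r.Save(
            Arg<Item>.Matches(i => i.Name == "Widget" && i.Price == 9.99m)));
    }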


Sunday, June 15, 2014

TDD : Test Driven Development

Test-driven development (TDD) is an advanced technique that evolved from the test-first programming concept in Extreme Programming.

In TDD the programmer first writes the test for a particular routine before actually writing the routine itself. Naturally the test fails, because there is no working code that satisfies the test conditions.
The programmer then writes just enough code to pass the test (adhering to the SOLID, YAGNI, and KISS principles).
After the test passes, the programmer returns to the production code to refactor it, changing the code to remove duplication and improve the design while ensuring that all tests still pass.

This cycle is called red, green, refactor, and it is repeated until the piece of software being developed is complete.
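As a minimal illustration of one pass through the cycle (the Calculator class and the MSTest attribute are assumptions made for the example, not code from this post):

    [TestMethod]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        // Red: this test is written first and fails while Calculator.Add does not exist yet.
        var calculator = new Calculator();

        var result = calculator.Add(2, 3);

        // Green: write just enough production code to make this assertion pass,
        // then refactor while keeping the test green.
        Assert.AreEqual(5, result);
    }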



Benefits of Test Driven Development
  • The suite of unit tests provides constant feedback that each component is still working.
  • The unit tests act as documentation that cannot go out-of-date, unlike separate documentation, which can and frequently does.
  • When the test passes and the production code is refactored to remove duplication, it is clear that the code is finished, and the developer can move on to a new test.
  • Test-driven development forces critical analysis and design because the developer cannot create the production code without truly understanding what the desired result should be and how to test it.
  • The software tends to be better designed, that is, loosely coupled and easily maintainable. The tests give the developer the confidence to make design decisions and refactor at any time. This confidence is gained by running the tests frequently and after each change.
  • The test suite acts as a regression safety net on bugs. If a bug is found, the developer should create a test to reveal the bug and then modify the production code so that the bug goes away and all other tests still pass. On each successive test run, all previous bug fixes are verified.
  • Reduced debugging time.


Characteristics of a Good Unit Test
  • Runs fast. If the tests are slow, they will not be run often.
  • Separates or simulates environmental dependencies such as databases, file systems, networks, queues, and so on. Tests that exercise these will not run fast, and a failure does not give meaningful feedback about what the problem actually is.
  • Is very limited in scope. If the test fails, it's obvious where to look for the problem. Use few Assert calls so that the offending code is obvious. It's important to only test one thing in a single test.
  • Runs and passes in isolation. If the tests require special environmental setup or fail unexpectedly, then they are not good unit tests. Change them for simplicity and reliability. Tests should run and pass on any machine. The "works on my box" excuse doesn't work.
  • Often uses stubs and mock objects. If the code being tested typically calls out to a database or file system, these dependencies must be simulated, or mocked. These dependencies will ordinarily be abstracted away by using interfaces.
  • Clearly reveals its intention. Another developer can look at the test and understand what is expected of the production code.


Tuesday, March 26, 2013

Report Builder

Report Builder has an intuitive and familiar user interface which allows you to start creating reports quickly. 
Report Builder can be downloaded from the Microsoft website and installed to be used by itself, to run reports against a SQL server database instance or in conjunction with a reporting services server.

To begin, launch Report Builder 3.0; by default the Getting Started dialog is displayed immediately:
Note: The Getting Started dialog is always displayed when you start Report Builder unless you click the "Don't show this dialog box at startup" checkbox in the bottom left corner of the dialog.  If the Getting Started dialog isn't displayed on startup, you can go to the Report Builder Options and choose to display it on startup (we'll cover Report Builder Options later in this tutorial).

The Getting Started dialog is only displayed right after you launch Report Builder. After you make a selection from the Getting Started dialog or close it, you cannot get back to it. You can display the New Report or Dataset dialog, which contains essentially the same options as the Getting Started dialog, by clicking on the main menu icon in the top left corner of the Report Builder menu bar and then clicking on the New option.

Creating New Reports

The New Report option in both the "Getting started" and "New report and dataset" dialogs allow you to choose a wizard or a blank report as your starting point.  The wizards walk you through creating a report in a sequence of steps. 
click on "Table or matrix wizard" option to start creating a report.

You will be prompted to choose a dataset. If you are already working on or editing a report and are trying to create another report in the same project, you can choose the dataset used in the project; but if this is a new report you will have to click on the "Create dataset" option and then click the Next button.

In the next screen you will be asked to provide a data source (a SQL Server instance or a Reporting Services server). If you already have an instance that you want to use, you can browse to its location and choose it; if not, you can create one by clicking on the "New" button.

In this screen you will have to create a connection to the database instance. Click on Build to create a new connection string.

Enter the details of the database that you would like to use as the data source for your reports into the appropriate boxes and click the OK button.
After clicking the OK button in the connection properties, the generated connection string will populate the connection string box in the data source properties window.
Now that you have set up the connection to the server you can click OK, and a data source for the report will be created and added to the data source connections window in the new Table or Matrix Wizard. Choose the new data source and click Next.