Thursday, July 29, 2010

Silverlight - Binding System Uses the Visual Tree

Recently I was working on a control that needed to re-arrange the layout of passed-in user controls, which alters the visual tree structure. While doing this I found that the binding system relies on the visual tree.

For example, this visual tree:


Page1
|
|--UserControl1
|--UserControl2
|--UserControl3

Transforms into

Custom Control
|
|--UserControl1
|--UserControl2
|--Page1
|  |--UserControl3


When the above page is passed to the custom control, the control strips out a few child controls and arranges them in a different layout. If bindings are resolved at the Page1 level, then once you move those user controls to a different place in the visual tree, the bindings will no longer resolve.
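A minimal sketch of the problem (page1, LayoutRoot, and customLayoutPanel are illustrative names, not from the actual control):

// Re-parenting a child breaks bindings that resolve through the
// visual tree, such as an inherited DataContext.
UserControl child = (UserControl)page1.LayoutRoot.Children[0];
page1.LayoutRoot.Children.Remove(child);

// The child now lives under a different parent...
customLayoutPanel.Children.Add(child);

// ...so bindings that relied on Page1's inherited DataContext no
// longer resolve. One workaround is to carry the context over explicitly:
child.DataContext = page1.DataContext;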

Friday, July 16, 2010

Object-Oriented JavaScript

In this post, I will explore how to do basic object-oriented development in JavaScript.

Let's explore with a sample shopping cart application, which will have:

  1. Item, which represents a shopping item with attributes such as item name, price per unit, and quantity, plus a TotalPrice method that returns the subtotal for that particular item.
  2. ShoppingCart, to which customers can add multiple items. It supports two behaviors: adding items to the cart, and a TotalPrice method that returns the total price of all items in the cart.

<script type="text/javascript">
// Class representing an Item.
function Item(productName, pricePerUnit, numberOfUnits) {
    this.ProductName = productName;
    this.PricePerUnit = pricePerUnit;
    this.NumberOfItems = numberOfUnits;
    // TotalPrice returns the computed total price of this item.
    this.TotalPrice = function () {
        return this.PricePerUnit * this.NumberOfItems;
    };
}

// Class representing the shopping cart, which holds all items.
function ShoppingCart() {
    // Internal array holding all items.
    var items = [];
    // Adds an item to the shopping cart.
    this.AddItem = function (shoppingItem) {
        items.push(shoppingItem);
    };
    // Returns the computed total price of all items in the cart.
    this.TotalPrice = function () {
        var totalPrice = 0;
        for (var i = 0; i < items.length; i++) {
            totalPrice += items[i].TotalPrice();
        }
        return totalPrice;
    };
}

var soapItems = new Item("Pears", 10, 10);
var foodItems = new Item("Lays", 20, 10);
var cart = new ShoppingCart();
cart.AddItem(soapItems);
cart.AddItem(foodItems);
alert("Total Price: " + cart.TotalPrice());
</script>

In JavaScript, classes can be defined using functions, and you can alter a class definition later at any point by accessing the function's prototype. We will see more of this in future posts.

Thursday, July 15, 2010

Policy Injection: WCF Instance Provider

Policy Injection is one of the Enterprise Library application blocks and can be used to perform AOP (Aspect-Oriented Programming). In this post I will discuss using Policy Injection with WCF services. For this to work, your WCF service objects must be created through Policy Injection. WCF provides behavior extension points to customize various aspects of its pipeline; we will see how to provide a custom instance provider that WCF will use to create service objects.

NOTE: For a class to be instantiable through Policy Injection, it must either implement an interface or derive from MarshalByRefObject.

In scenarios where you want to intercept WCF calls and apply policies defined in Policy Injection, you need to provide a custom WCF instance provider that uses the Policy Injection application block to create instances of your WCF services.

You need to write the custom instance provider and implement a custom behavior that plugs it in. Once you define the custom behavior, you can apply it to your WCF service endpoints.

A custom WCF instance provider must implement IInstanceProvider.

WCF Instance Provider:

using System;
using System.ServiceModel.Dispatcher;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.PolicyInjection.Configuration;
using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;

public class PolicyInjectionInstanceProvider : IInstanceProvider
{
    private Type serviceContractType { get; set; }
    private static readonly IUnityContainer container;
    private readonly object _sync = new object();
    private static readonly TransparentProxyInterceptor injector = new TransparentProxyInterceptor();

    static PolicyInjectionInstanceProvider()
    {
        // The Interception extension is what enables policy injection in Unity.
        container = new UnityContainer().AddNewExtension<Interception>();

        IConfigurationSource configSource = ConfigurationSourceFactory.Create();
        PolicyInjectionSettings settings = (PolicyInjectionSettings)configSource.GetSection(PolicyInjectionSettings.SectionName);
        if (settings != null)
        {
            settings.ConfigureContainer(container, configSource);
        }
    }

    public PolicyInjectionInstanceProvider(Type t)
    {
        if (t != null && !t.IsInterface)
        {
            throw new ArgumentException("Specified type must be an interface.");
        }
        this.serviceContractType = t;
    }

    #region IInstanceProvider Members

    public object GetInstance(System.ServiceModel.InstanceContext instanceContext, System.ServiceModel.Channels.Message message)
    {
        Type type = instanceContext.Host.Description.ServiceType;

        if (serviceContractType != null)
        {
            lock (_sync)
            {
                container.Configure<Interception>().SetDefaultInterceptorFor(serviceContractType, injector);
                container.RegisterType(serviceContractType, type);
                return container.Resolve(serviceContractType);
            }
        }
        else
        {
            if (!type.IsMarshalByRef)
            {
                throw new ArgumentException("Type must inherit from MarshalByRefObject if no service interface is specified.");
            }
            lock (_sync)
            {
                container.Configure<Interception>().SetDefaultInterceptorFor(type, injector);
                return container.Resolve(type);
            }
        }
    }

    public object GetInstance(System.ServiceModel.InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public void ReleaseInstance(System.ServiceModel.InstanceContext instanceContext, object instance)
    {
        IDisposable disposable = instance as IDisposable;
        if (disposable != null)
        {
            disposable.Dispose();
        }
    }

    #endregion
}

Custom Behavior Extension Element:

using System;
using System.ServiceModel.Configuration;
using System.ServiceModel.Description;

public class PolicyInjectionBehavior : BehaviorExtensionElement, IEndpointBehavior
{
    public override Type BehaviorType
    {
        get { return typeof(PolicyInjectionBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new PolicyInjectionBehavior();
    }

    #region IEndpointBehavior Members

    public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
    {
    }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime)
    {
    }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
    {
        // Plug in the custom instance provider so service instances are
        // created (and intercepted) through Policy Injection.
        Type contractType = endpoint.Contract.ContractType;
        endpointDispatcher.DispatchRuntime.InstanceProvider = new PolicyInjectionInstanceProvider(contractType);
    }

    public void Validate(ServiceEndpoint endpoint)
    {
    }

    #endregion
}

Import the behavior defined above in web.config using the behavior extensions provided by WCF. Note that the type attribute must be the fully qualified type name (and be careful with spaces), along with the assembly name and version number; the namespace below is a placeholder.

<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="policyInjectionInstanceProvider" type="YourNamespace.PolicyInjectionBehavior, assembly_name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="PolicyInjectionProviderBehavior">
        <policyInjectionInstanceProvider/>
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>

Once you have imported this custom behavior extension, you can reference it from an endpoint behavior and apply that behavior to your service endpoints.
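For example, a service endpoint can then reference the behavior like this (service and contract names are placeholders):

<services>
  <service name="YourNamespace.YourService">
    <endpoint address=""
              binding="basicHttpBinding"
              contract="YourNamespace.IYourService"
              behaviorConfiguration="PolicyInjectionProviderBehavior"/>
  </service>
</services>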

Whenever a client accesses your WCF service, the service object will be created through the PolicyInjectionInstanceProvider class.

Silverlight: Auto Notifying Delegate Commands

Prism provides DelegateCommand, an ICommand implementation for Silverlight equivalent to WPF's. Using DelegateCommand in Silverlight you can bind commands to your view.
Whenever the state of a command changes, you normally have to do something like this:

MyDelegateCmd.RaiseCanExecuteChanged()

which notifies the binding system to re-query the bound command's CanExecute. With this approach, calls to RaiseCanExecuteChanged end up scattered across your application. Instead, you can use the following class, which notifies the binding system automatically.

using System;
using System.ComponentModel;
using Microsoft.Practices.Composite.Presentation.Commands;

public class AutoDelegateCommand<T> : DelegateCommand<T>
{
    public INotifyPropertyChanged ViewModel { get; set; }

    public AutoDelegateCommand(INotifyPropertyChanged presentationModel, Action<T> executeMethod)
        : base(executeMethod)
    {
        this.ViewModel = presentationModel;
        ListenForPropertyChangedEventAndRequery(presentationModel);
    }

    public AutoDelegateCommand(INotifyPropertyChanged presentationModel, Action<T> executeMethod, Func<T, bool> canExecuteMethod)
        : base(executeMethod, canExecuteMethod)
    {
        this.ViewModel = presentationModel;
        ListenForPropertyChangedEventAndRequery(presentationModel);
    }

    private void ListenForPropertyChangedEventAndRequery(INotifyPropertyChanged presentationModel)
    {
        if (presentationModel != null)
        {
            // Re-query CanExecute whenever any property of the view model changes.
            presentationModel.PropertyChanged += (sender, args) => QueryCanExecute(args);
        }
    }

    public void QueryCanExecute(PropertyChangedEventArgs args)
    {
        this.RaiseCanExecuteChanged();
    }
}

With this delegate command there is no need to call re-query from your view model. AutoDelegateCommand subscribes to the PropertyChanged event of the view model passed in, and for each property change it calls RaiseCanExecuteChanged.
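A minimal usage sketch (the view model and its properties are illustrative):

using System.ComponentModel;

public class OrderViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private bool isDirty;
    public bool IsDirty
    {
        get { return isDirty; }
        set
        {
            isDirty = value;
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("IsDirty"));
        }
    }

    public AutoDelegateCommand<object> SaveCommand { get; private set; }

    public OrderViewModel()
    {
        // CanExecute is re-queried automatically on every property change,
        // so setting IsDirty = true enables a bound Save button without
        // any explicit RaiseCanExecuteChanged call.
        SaveCommand = new AutoDelegateCommand<object>(this, param => Save(), param => IsDirty);
    }

    private void Save() { /* persist changes */ }
}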

Wednesday, July 14, 2010

Silverlight Profiling

You can profile your Silverlight 4 applications, but not from inside VS 2010. You have to fire up a Visual Studio command prompt and run the following commands:
  1. VSPerfClrEnv /sampleon
  2. "C:\Program Files\Internet Explorer\iexplore.exe" http://yourhostname/pathtosilverlightapplication
  3. VSPerfCmd /start:sample /output:ProfileTraceFileName /attach:ProcessID (you can identify the process ID using Task Manager)
  4. Run your scenarios.
  5. VSPerfCmd /detach
  6. VSPerfCmd /shutdown
  7. VSPerfClrEnv /off

You can open the ProfileTraceFileName file in VS 2010 and see your code's execution paths. But before you open it, you need to provide the path to the Microsoft Symbol Server and the path to the profiled application's .pdb files so that your application's symbols can be loaded. You can do this from VS 2010's Debug -> Options and Settings menu: in the options window, select Symbols under the Debugging category, ensure Microsoft Symbol Servers is checked, and add the path to your .pdb files. After doing this, open the trace file and you will see your code's execution paths.

Continuous Integration using TFS 2008 and VS 2010 RC

What is Continuous Integration?
Continuous Integration is a practice where integration happens frequently, and each integration triggers an automated build along with verification of the code by automated unit tests.


Why should we use it?
A few of the issues you may face during frequent integrations are:
1. Uncompilable code – occurs mostly when somebody changes the public API of the code and checks in; dependent code elsewhere breaks due to these changes.
2. Bugs – a change in the internal logic of one module, made without checking its impact elsewhere, may cause bugs.
By introducing Continuous Integration into the development process, every integration triggers a build automatically, and the build report can be mailed to all team members. If somebody checks in code that makes the solution uncompilable, you will detect it as soon as the build fails and can act on it.
Once the build runs, unit tests can be configured to verify code correctness. This way bugs are detected early, which reduces the effort required to fix them: the sooner a bug is found, the easier it is to fix.
NOTE: Using a check-in policy, check-ins of uncompilable code can be avoided.

Implementing using TFS 2008 and VS 2010 RC

We will discuss how to implement Continuous Integration in a situation where VS 2010 RC is used for development but TFS 2008 is still the server.
Up to TFS 2005 there was no easy way to implement continuous integration, but starting with TFS 2008 Microsoft added built-in support for it, which makes it much easier.

Simple Architecture of TFS 2008


I won't discuss TFS 2008 installation here.

Steps to implement Continuous Integration using TFS 2008:
Build Server Setup

1. Designate a machine as the build server. Ensure VS 2008 is installed on this machine so that it can build successfully. Install the TFS Build service on this system (the setup for the TFS Build service can be found on your TFS server CD under the BUILD directory in the root).
2. During installation of the TFS Build service, you need to provide the credentials under which the service will run. Ensure this user is also a member of the project's Build Services group on the TFS server.
3. In Visual Studio, open Team Explorer. Right-click on Builds and select Manage Build Agents to add a build agent that uses the build server.


4. In the Manage Build Agents dialog box, click the New button to add the build server.

5. In the Build Agent Properties window, enter the build server details.

6. Click OK; you have now successfully added the build server to the TFS service, and you can define builds to run on that build server.

Create Build Definition

1. Open the Team Explorer window, right-click on Builds and select New Build Definition…


2. In the Build Definition window:
a. In the General tab, enter a name and description for the build definition.

b. In the Workspace tab, select the Source Control Folder from which the build files are to be retrieved, and the Local Folder on the build server into which those files will be downloaded for building.

c. In Project File, if TFSBuild.proj has not yet been created under the selected source control folder (as is the case for a new build definition), click Create… to create TFSBuild.proj.

d. In the MSBuild project file creation wizard, select the solution you want to build.

e. Select the configuration to build.

f. In Options, select the unit test and code analysis criteria and click Finish to finish creating the build file.

g. Select a Retention Policy, which lays out the criteria for build management.

h. In Build Defaults, specify the build agent that should execute this build, and the UNC path of the drop location to which all build output will be copied.

i. In Trigger, select the Build each check-in option for Continuous Integration. Each check-in will then trigger a build, and unit tests and code analysis, if specified, will also run.

j. Click OK; you have now successfully created a build definition, and each check-in will trigger a build.
3. Double-clicking a build definition in Team Explorer opens the Build Explorer window, where you can monitor builds and see build reports.
With TFS 2008 and VS 2010 RC

If you are using VS 2008 with TFS 2008, you are done at this point. But if you are using VS 2010 RC, you are not done yet: the TFS 2008 build agent uses the MSBuild 3.5 engine to compile your solutions, and that engine cannot compile VS 2010 solutions. To compile VS 2010 solutions as well, perform the steps below on the build server:
1. Install VS 2010 RC on the build server to make sure .NET 4.0, its SDKs, and MSBuild 4.0 are installed.
2. Configure Team Build 2008 to use MSBuild 4.0 instead of MSBuild 3.5. To do this, edit %Program Files%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\TFSBuildService.exe.config and set the MSBuildPath property to C:\Windows\Microsoft.Net\Framework\v4.0.30128\ (see the sketch below).
3. Restart the Team Foundation Build service.
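The relevant setting in TFSBuildService.exe.config looks roughly like this (the exact framework folder should match your installed .NET 4.0 build):

<configuration>
  <appSettings>
    <!-- Point Team Build 2008 at the MSBuild 4.0 tools folder. -->
    <add key="MSBuildPath" value="C:\Windows\Microsoft.Net\Framework\v4.0.30128\" />
  </appSettings>
</configuration>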

Send Mail with Build Report

So far, you have created a build agent, created a build definition, and specified check-in as the trigger for the Continuous Integration process. Now we want to go a step further and send a mail with the build status along with the build report.

For this there are two options available:
1. All team members subscribe to build-completion events using the Project Alerts window.

Using the "A build completes" alert you can specify multiple email IDs in Send To for everyone you want to alert. This needs to be done for all users.
2. Using custom tasks.
You can develop custom tasks which are executed during the build process. A custom task is defined in a class library by deriving from the Task abstract class or by implementing the ITask interface; a minimal sketch follows below.
The MSBuild Extension Pack already implements a number of custom tasks, one of which is a send-mail task.
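Here is a minimal sketch of a custom task (the task name and its property are illustrative; a real build-report task would compose and send the mail inside Execute):

using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

// A custom MSBuild task: derive from Task and override Execute.
public class ReportBuildStatusTask : Task
{
    // Input property set from the build script, e.g.
    // <ReportBuildStatusTask BuildStatus="Succeeded" />
    [Required]
    public string BuildStatus { get; set; }

    public override bool Execute()
    {
        // A real implementation would send the build report mail here.
        Log.LogMessage("Build finished with status: " + BuildStatus);
        return true; // returning false fails the build step
    }
}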

Conclusion
Though Continuous Integration doesn't prevent check-ins of uncompilable or buggy code, it enables you to identify them quickly and act on them.

Unit Testing

Introduction


In this article we will discuss unit testing, why we need it, and the various quirks in implementing it.


Why?


In today's world of ever-increasing software complexity, where requirements are always changing, we need to control the cost of delivery while keeping quality high. Anyone who has worked on medium- to long-term projects knows the complexity of bug fixing after release and during maintenance.
The cost of fixing a bug increases over the lifecycle of the software: during the requirements phase it is cheap, and it grows through development, testing, and maintenance. We need a mechanism to detect bugs early in the development cycle, and unit tests provide exactly that.



What is Unit Test?


In a unit test we take a unit of code, isolate it from its dependencies, and inspect it for defects. Usually the unit is a method, or a set of methods in case we are testing public APIs.



How to write Unit Tests?


Before discussing how to write unit tests, let's inspect the problem more thoroughly so that we can understand our approach better.
In development we mostly find the following quality problems:
1. Requirements not implemented.
2. Requirements not implemented properly.
3. Missed requirements (requirements that were never defined).
The developer might have missed implementing some requirements in production code; sometimes requirements are implemented, but not correctly; in other cases requirements are not defined at all, yet the developer implemented something for them during development.
Our unit tests should be able to detect the above problems. There are three unit testing approaches to tackle them.



Structural Unit Testing


In the structural approach we write unit test cases based on the production code we are trying to test. Here we use code coverage, the number of operators and operands in a statement, and the number of parameters as benchmarks for deciding how many test cases are required.
a. We try to achieve 100% code coverage.
b. As the number of parameters grows, the number of ways to invoke a method increases. To reduce complexity, keep the number of parameters minimal. We try to write unit test cases covering all parameters.
c. We need to write test cases using boundary values (see the sketch below).
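As an illustration, here is what boundary-value tests might look like in MSTest for a hypothetical Pricing.Discount method that accepts quantities from 1 to 100 (the Pricing class and its contract are invented for this example):

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PricingTests
{
    // Boundary values: at the edges of the valid range [1, 100] and just outside it.
    [TestMethod]
    public void Discount_AtLowerBoundary_Succeeds()
    {
        var pricing = new Pricing();
        Assert.AreEqual(0m, pricing.Discount(1));
    }

    [TestMethod]
    public void Discount_AtUpperBoundary_Succeeds()
    {
        var pricing = new Pricing();
        Assert.IsTrue(pricing.Discount(100) > 0m);
    }

    [TestMethod]
    [ExpectedException(typeof(ArgumentOutOfRangeException))]
    public void Discount_AboveUpperBoundary_Throws()
    {
        var pricing = new Pricing();
        pricing.Discount(101);
    }
}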



Functional Unit Testing


In functional testing we write unit test cases for each requirement. Here we take a requirement or piece of functionality as the unit and test that aspect, isolating the class from other dependencies.



No approach


The "no approach" approach has no specific method: tests are written by experienced developers who can think of the cases with which to exercise the class. It can be effective, but it is not a systematic way to assure that things are always the way we want.


Using structural testing we can detect missed requirements and bugs in the code. Using functional testing we can detect improper implementation of requirements, and bugs as well.



Does it solve the problem?


Unit tests ensure quality for the finite set of scenarios you have tested: you can reasonably assure that for those scenarios your code works. Unit tests also act as a safety net whenever code changes are required. There are still untested scenarios where bugs might be lurking, but over time you reduce the uncertainty in quality.



I can’t afford to write Unit Tests?


In this fast-paced world it may look obvious that there is no time to write unit tests. But by not writing unit tests you are increasing the uncertainty in your code quality. If you take a horizon of more than a year for your code base, investing in unit tests reduces the effort required for testing and maintenance. The cost of a bug is low during development but increases in maintenance; unless you plan to walk away and never maintain your code base, investing in unit tests will cut your maintenance costs, which are huge.



What about Integration Tests?


You can cover integration scenarios with unit tests as well. Integration scenarios are those where the class being tested relies on another class to perform some task. In those scenarios you typically mock the dependencies and set expectations on them. The dependent object is expected to fulfill those expectations as envisioned in your unit tests, so make those expectations unit tests of the dependent object itself. By ensuring this you are testing the integration scenarios as well. It requires careful planning, but it is never impossible.
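A sketch of the idea with a hand-rolled fake (all types here are invented for illustration; a mocking library would let you set the same expectation declaratively):

// The dependency is expressed as an interface so it can be faked.
public interface IPaymentGateway
{
    bool Charge(decimal amount);
}

// Class under test: relies on the gateway to complete checkout.
public class CheckoutService
{
    private readonly IPaymentGateway gateway;
    public CheckoutService(IPaymentGateway gateway) { this.gateway = gateway; }

    public bool Checkout(decimal total)
    {
        return gateway.Charge(total);
    }
}

// Fake used by the unit test; it records the expectation
// "Charge is called with the cart total".
public class FakePaymentGateway : IPaymentGateway
{
    public decimal ChargedAmount;
    public bool Charge(decimal amount)
    {
        ChargedAmount = amount;
        return true;
    }
}

The test asserts that ChargedAmount equals the cart total; the unit tests of the real gateway should then verify that Charge actually honors this expectation.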



Apart from that, make your classes less chatty: reduce the surface area of interaction between classes. By doing this you decrease the number of integration scenarios and need fewer expectations.



My class is Un-Testable?


During testing we often stumble upon code that is not testable. Most of the time you can refactor the code to make it testable. If you rely on static method invocations, which are not mockable, you can create a proxy and use it to interact with the static objects and methods. You can then mock the proxy, turning this untestable code into testable code.
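A sketch of the proxy idea using DateTime.Now as the static dependency (the interface and class names are made up for this example):

using System;

// Proxy interface over the static call.
public interface IClock
{
    DateTime Now { get; }
}

// Production implementation delegates to the static member.
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// Code that used to call DateTime.Now directly now depends on IClock,
// so a test can supply a fake that returns a fixed time.
public class InvoiceGenerator
{
    private readonly IClock clock;
    public InvoiceGenerator(IClock clock) { this.clock = clock; }

    public bool IsOverdue(DateTime dueDate)
    {
        return clock.Now > dueDate;
    }
}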



Reduce Maintenance of Unit Tests


As with any code, as the code changes over time, the corresponding unit tests also change. To reduce the maintenance cost of unit tests without compromising quality, test only public APIs, i.e. public methods, as these are the methods used by consumers of the code. Through them you should be able to exercise private and protected functionality as well, and thus test it. This adds features of structural testing into functional testing while achieving the same results. By testing only the public API, your unit tests are more resilient to refactoring of class internals.



Conclusion


If you look at the cost of software, maintenance costs are far higher than construction costs. To be properly equipped to reduce maintenance costs, unit tests are an indispensable tool: they reduce the uncertainty in your code.