Wednesday, 5 August 2009

Testing an Internet Banking Simulator using WatiN

I blogged about WatiN back in December. Here I will show how to use it to automate a simplified Internet Banking simulator. Most Internet banking sites do not yet use two-factor authentication schemes, such as Barclays' PINsentry card reader system. Instead the user is typically asked for a membership number and one or more pass codes. A pass code consists of a string of alphanumeric characters. When the user logs into the site they are not asked to enter the complete code. Instead they are asked to enter (usually three) characters from random positions within the code. The idea is to protect against keystroke logging programs.

Suppose a user's secret code is 373549. A typical login screen may ask the user to enter the 2nd, 4th and 5th digits. In a subsequent session it will ask for the 1st, 2nd and 6th. The three positions are randomly selected for each session.

On most sites these positions are presented as a strictly increasing sequence. In other words, they might ask you to enter the 2nd, 4th and 5th digits in that order. But some sites do not impose this ordering: they might ask you to enter the 5th, 2nd and 4th digits in that order. In my example I will stick to an increasing sequence, although it works just as well without.

To simulate the generation of random positions I first had to write some code to select a subset of unique random positions from all the positions in the given secret code. The standard random number generator in .NET could not be used as is because it can generate repeated positions. For the purposes of this post we can take it that this problem is solved and concentrate on hooking this up to ASP.NET and WatiN.
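
The sampling code itself isn't shown in the post, but the idea can be sketched as follows (the class and method names here are illustrative, not the original code): draw each position from a shrinking pool so repeats are impossible, then sort the result to get the monotonically increasing order.

```csharp
using System;
using System.Collections.Generic;

public static class PositionPicker
{
    private static readonly Random random = new Random();

    // Returns 'count' distinct 1-based positions within a code of the
    // given length, sorted ascending to match the monotonic convention.
    public static List<int> PickPositions(int codeLength, int count)
    {
        List<int> pool = new List<int>();
        for (int i = 1; i <= codeLength; i++)
            pool.Add(i);

        List<int> picked = new List<int>();
        for (int i = 0; i < count; i++)
        {
            int index = random.Next(pool.Count); // remove as we go: no repeats
            picked.Add(pool[index]);
            pool.RemoveAt(index);
        }

        picked.Sort();
        return picked;
    }
}
```

Dropping the final Sort() gives the non-monotonic variant some sites use.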

A typical Internet Banking entry screen looks like this:

image

The secret code is shown at the bottom for visual checking.

We wish to check that

  1. Entering valid entries and clicking the Next button navigates to the account page (just an empty finish page for this exercise).
  2. Entering invalid entries and clicking the Next button displays an error message.

To do so we create a pair of NUnit functional tests that invoke WatiN to type the entries and click the Next button for us.

Test 1 - Entering valid entries

/// <summary>
/// Enters valid entries that should display finish page.
/// </summary>
[Test]
public void EnterValidEntriesShouldDisplayFinishPage()
{
    // Extract entry positions
    int firstPosition;
    int secondPosition;
    int thirdPosition;
    ExtractEntryPositions(
        out firstPosition,
        out secondPosition,
        out thirdPosition
    );

    // Enter valid entries for those positions
    string validFirstEntry = SecretCode[firstPosition - 1].ToString();
    ie.TextField(Find.ByName("txtFirstEntry")).TypeText(validFirstEntry);

    string validSecondEntry = SecretCode[secondPosition - 1].ToString();
    ie.TextField(Find.ByName("txtSecondEntry")).TypeText(validSecondEntry);

    string validThirdEntry = SecretCode[thirdPosition - 1].ToString();
    ie.TextField(Find.ByName("txtThirdEntry")).TypeText(validThirdEntry);

    // Click Next
    ie.Button(Find.ByName("btnNext")).Click();

    // Assert Finished page displays
    string expectedPageName = "Finished.aspx";
    Assert.IsTrue(
        ie.Url.Contains(expectedPageName),
        String.Format("Url should contain {0}.", expectedPageName)
    );
}

image


Here we have just shown the first two entries. It is a snapshot of WatiN as it was typing the entries.


Test 2 - Entering invalid entries

/// <summary>
/// Enters invalid entries that should display home page and error message.
/// </summary>
[Test]
public void EnterInvalidEntriesShouldDisplayHomePageAndErrorMessage()
{
    // Extract entry positions
    int firstPosition;
    int secondPosition;
    int thirdPosition;
    ExtractEntryPositions(
        out firstPosition,
        out secondPosition,
        out thirdPosition
    );

    // Enter invalid entries for those positions. Adding 1 to the digit
    // (wrapping 9 round to 0) guarantees a wrong single-digit entry.
    string invalidFirstEntry =
        ((SecretCode[firstPosition - 1] - '0' + 1) % 10).ToString();
    ie.TextField(Find.ByName("txtFirstEntry")).TypeText(invalidFirstEntry);

    string invalidSecondEntry =
        ((SecretCode[secondPosition - 1] - '0' + 1) % 10).ToString();
    ie.TextField(Find.ByName("txtSecondEntry")).TypeText(invalidSecondEntry);

    string invalidThirdEntry =
        ((SecretCode[thirdPosition - 1] - '0' + 1) % 10).ToString();
    ie.TextField(Find.ByName("txtThirdEntry")).TypeText(invalidThirdEntry);

    // Click Next
    ie.Button(Find.ByName("btnNext")).Click();

    // Assert Home page displays
    string expectedPageName = "default.aspx";
    Assert.IsTrue(
        ie.Url.Contains(expectedPageName),
        String.Format("Url should contain {0}.", expectedPageName)
    );

    // Assert error message displays
    string expectedErrorText = "Invalid code. Please try again.";
    string actualErrorText = ie.Span("CustomValidator1").Text;
    Assert.AreEqual(expectedErrorText, actualErrorText);
}

image


 


In the above the relevant WatiN code is highlighted in bold. The ie variable represents the Internet Explorer object. In the NUnit test setup method we start it up and navigate to the start page like this:

ie = new IE();
ie.GoTo(startUrl);

As you can see, the WatiN API sports a fluent interface. The best way of getting up to speed quickly is to use the WatiN Test Recorder and also refer to the HTML Mapping Table, which lists the mappings between the HTML elements in a web page and the WatiN API.


At the moment both WatiN and WatiN Test Recorder are at version 1 for their official releases and support automation of Internet Explorer only. Both are in beta for version 2 and will support Firefox.

Thursday, 4 June 2009

NUnit 2.5 Dabblings

Version 2.5 of NUnit was released recently. As I often do when a new version of a tool is released, I looked to see what's new. This also often gives me the opportunity to take a peek at features that were there in the last version which I hadn't noticed or hadn't had a reason to make use of. Looking back at unit tests I've written to date I've not been that adventurous in my use of NUnit. Here are the assertions I've mostly used.

Assert.IsTrue
Assert.IsFalse
Assert.AreEqual

I have also occasionally used the alternative "fluent" syntax but found that it doesn't offer much for simple asserts, e.g.,


Assert.That(x, Is.EqualTo(y))


vs


Assert.AreEqual(x, y)


The former doesn't offer anything over the latter and is more unwieldy to write. However, the fluent form comes into its own in contexts like this:


Assert.That(x, Is.EqualTo(y).Within(0.000001))


This could also have been written


Assert.AreEqual(x, y, 0.000001)


but in this case it is clear that the fluent form is more readable.


Parameterised Tests

These allow you to supply data to a test case via parameters. The MbUnit framework added them some time ago via its RowTest attribute, and prior to NUnit 2.5 it was possible to get the same behaviour via an NUnit extension. Parameterised tests help reduce code duplication for tests that use the same algorithm with differing inputs. Consider a simple test of an email address validation routine.


Example Using Test Attribute
[Test]
public void ValidEmailInUsername()
{
    string email = "joe@abc.co.uk";
    Assert.IsTrue(ValidationTool.IsValidEmail(email));
}

[Test]
public void ValidEmailWithPeriodInUsername()
{
    string email = "joe.bloggs@abc.com";
    Assert.IsTrue(ValidationTool.IsValidEmail(email));
}

[Test]
public void ValidEmailWithUnderscoreInUsername()
{
    string email = "joe_bloggs@abc.com";
    Assert.IsTrue(ValidationTool.IsValidEmail(email));
}

[Test]
public void ValidEmailWithHyphenInUsername()
{
    string email = "joe-bloggs@abc.com";
    Assert.IsTrue(ValidationTool.IsValidEmail(email));
}

Of course, there is no real logic in the test cases in this simple example but we can see how we can easily start to get nasty code duplication.  As the tests develop we can factor out this code into helper routines but we're still faced with staring at a bunch of tests that look structurally the same.


When we load these in NUnit we get:


image


Example Using TestCase Attribute - First Version
[TestCase(
    "joe@abc.co.uk",
    Description = "Valid email in username"
)]
[TestCase(
    "joe.bloggs@abc.com",
    Description = "Valid email with period in username"
)]
[TestCase(
    "joe_bloggs@abc.com",
    Description = "Valid email with underscore in username"
)]
[TestCase(
    "joe-bloggs@abc.com",
    Description = "Valid email with hyphen in username"
)]
public void ValidEmail(string email)
{
    Assert.IsTrue(ValidationTool.IsValidEmail(email));
}

Loading this in the NUnit GUI produces the following:


image


However, there is a disadvantage in that you cannot easily tell what each case is testing. In the GUI you can see the different parameters being passed, but you don't get a descriptive name such as ValidEmailWithPeriodInUsername, ValidEmailWithUnderscoreInUsername, etc. This is described in a post at Vadim Kreynin's blog. In our case we have just a single parameter, but clearly it would be even worse with multiple parameters.


However, we can improve on this while still getting the benefit of the TestCase attribute: we can set the TestName property on each TestCase.


Example Using TestCase Attribute - Second Version
[TestCase(
    "joe@abc.co.uk",
    Description = "Valid email in username",
    TestName = "ValidEmailInUsername"
)]
[TestCase(
    "joe.bloggs@abc.com",
    Description = "Valid email with period in username",
    TestName = "ValidEmailWithPeriodInUsername"
)]
[TestCase(
    "joe_bloggs@abc.com",
    Description = "Valid email with underscore in username",
    TestName = "ValidEmailWithUnderscoreInUsername"
)]
[TestCase(
    "joe-bloggs@abc.com",
    Description = "Valid email with hyphen in username",
    TestName = "ValidEmailWithHyphenInUsername"
)]
public void ValidEmail(string email)
{
    Assert.IsTrue(ValidationTool.IsValidEmail(email));
}

Loading this in the NUnit GUI produces the following:


image


We also get the valid email tests nicely grouped as sub-nodes of ValidEmail. So we get both the visual readability of the Test attribute and the elimination of code duplication of the TestCase attribute. Nice.

Monday, 11 May 2009

Readability in Method Calls

I generally prefer to call methods using variables rather than literals or expressions as arguments. I don't follow this religiously, but it is especially helpful when calling API methods that have boolean or object parameters that can be true/false or null. For example,

AuthorizationRuleCollection rules =
    fileSecurity.GetAccessRules(true, true, typeof(NTAccount));

This is a fairly mild case but what does true mean? I've no idea. It's worse when we see this kind of thing,

DoSomething(width, height, null, null, name, false);

Here, I have no clue what null or false represents.


What I do in such cases is replace the null or boolean with a local variable. So, in the first case I write,

bool includeExplicit = true;
bool includeInherited = true;
Type type = typeof(NTAccount);
AuthorizationRuleCollection rules =
    fileSecurity.GetAccessRules(includeExplicit, includeInherited, type);

Suppose includeExplicit needs to be false? Then I write,

bool includeExplicit = true;
bool includeInherited = true;
Type type = typeof(NTAccount);
AuthorizationRuleCollection rules =
    fileSecurity.GetAccessRules(!includeExplicit, includeInherited, type);

(The only problem with the last case is that it is easy to miss the ! operator. This is one of the shortcomings of the C-family languages; a not keyword would have been preferable.)


Even outside the cases discussed it is generally more readable to use variables instead of literals or expressions as arguments.

Monday, 6 April 2009

Developer Productivity Tools

My primary developer tool is Microsoft Visual Studio. However, I use a number of Visual Studio add-ins and other complementary tools.  Here I describe what (other than Visual Studio) I use and why.

Visual Studio Add-ins

The MSDN articles, Ten Must-Have Tools Every Developer Should Download Now and Visual Studio Add-Ins Every Developer Should Download Now, are a useful point of reference.

Smart Paster

Smart Paster allows you to paste text on the clipboard into a Visual Studio code document as a comment, a string, a StringBuilder or a region. See here. The link mentioned there is broken; the Visual Studio 2008 version can be found here. I most often use "Paste as Comment," which is useful for inserting words from technical specs into your code as comments. Smart Paster works with both C# and VB. I like it for the reasons stated in the MSDN article.

CodeKeep

CodeKeep is an online repository for storing code snippets. You can make these snippets available to the public or keep them private. Snippets are available in multiple programming languages. You can either grab a snippet by browsing to it on the site and copying and pasting it into your code editor, or you can make use of a handy Visual Studio add-in. I find it most useful for being able to access my own code repository when I'm working at different client sites.

GhostDoc

This is one of the slickest add-ins I've used. Basically it reduces the tedium of writing XML documentation comments. Visual Studio allows you to type /// to generate an empty summary element for a class or member in C#. At that point, to complete it you must fill in your summary plus, for a method, documentation for any parameters and return value. What GhostDoc does is provide placeholders for these and also try to infer a "starter" description from the name of your class or method. Often, depending on how well you've named your method, it gets the descriptions exactly right. But even when it doesn't, simply taking away the tedium of the angle brackets is a godsend. See also here.
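
As a rough illustration (the method and wording below are invented, not GhostDoc's actual output), the kind of starter documentation such a tool can infer from a well-named method looks like this:

```csharp
class AccountService
{
    /// <summary>
    /// Gets the account balance for the specified customer id.
    /// </summary>
    /// <param name="customerId">The customer id.</param>
    /// <returns>The account balance.</returns>
    public decimal GetAccountBalance(int customerId)
    {
        // Body elided in this illustration; the point is the comment above.
        return 0m;
    }
}
```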

GhostDoc is also intelligent enough to update its generated documentation should you, say, add an extra parameter to a method. It can also carry over existing documentation from base class methods when documenting overrides in derived classes. Its description rules are customisable, though I've barely scratched the surface. Because GhostDoc reduces the tedium of documentation it actually encourages you to write more of it than you otherwise would. For example, I used to be fairly good at writing at least summary documentation, but now I pay more attention to documenting parameters as well, especially when combined with another excellent but more obscure plug-in I use called CR_Documentor, which I discuss next. GhostDoc works with both C# and VB, although VB support is described as "experimental." There are indeed one or two glitches with VB, though nothing too serious.

CR_Documentor

CR_Documentor is a plug-in for Developer Express's freely downloadable DXCore extensibility engine. If you are a user of CodeRush or Refactor!, either commercial or free, then DXCore is installed with them; it is the engine that makes those products work. Alternatively, DXCore can be installed by itself. There is a small community of plug-in developers who have provided a number of useful plug-ins, and CR_Documentor is one such. Here is a good overview.

Below is an example from my own code. To examine this in more detail see here. Click on the magnifying glass icon to zoom in.

CR_Documentor

The great thing about CR_Documentor is that it allows you to view "in-place" and in real time what your XML documentation comments will look like when rendered by tools such as NDoc or Sandcastle, without having to first build your solution and then run those tools. With CR_Documentor you can spot any errors there and then, rather than waiting for Sandcastle to generate the docs, identifying and correcting the errors, and re-running. Again, because this is such a fun product, it actually encourages you to write documentation so you can get instant gratification.

Refactor! Pro

Developer Express is a .NET components and tools vendor. One of their products is a code refactoring tool called Refactor! Pro. There is a companion tool called CodeRush that includes Refactor! Pro.  The two products together compete with the better known ReSharper from JetBrains.

I used ReSharper a few years ago at a client site. It was an excellent product and I daresay it must be even better today. But a while later I discovered Refactor! Pro, initially via the 2005 licensing agreement between Microsoft and Developer Express to include Refactor! for Visual Basic in Visual Basic 2005. I happened to be working on a Visual Basic contract and one of my assignments was a major refactoring exercise, so I thought I'd give Refactor! a spin. I was immediately hooked by its slick, highly visual and non-modal UI paradigm. Below is a picture of the Extract Method refactoring. Visual Studio already has this for C# but I prefer the Refactor! implementation. Besides, Visual Studio C# has only about half a dozen built-in refactorings. Refactor! Pro now has nearly 200, up from about 50 when I bought it in about 2006.

Extract Method

Not long after, I took the plunge and purchased the full version. Apart from preferring its UI paradigm, my other reason for choosing it over ReSharper was that Refactor! Pro offered support for both C# and VB, as well as C++ and, more recently, JavaScript. At the time of my decision ReSharper only offered C#. As I anticipated using all these languages, Refactor! Pro was a no-brainer. Thus far I've not taken the plunge and opted for the full CodeRush package. How do CodeRush and ReSharper compare today? As far as I can tell, CodeRush/Refactor! Pro may be a little stronger on refactoring while ReSharper is stronger on code analysis and unit testing. Beyond that, which is preferable seems largely to be a matter of taste.

CodeRush Xpress

Recently Developer Express made available a cut-down version of CodeRush for C# developers called CodeRush Xpress. I have started using this and its best features are its file and class navigation support. You could say "Solution Explorer kiss my ass."

File Navigation

image

Quick Navigation

 image

The idea here is that CodeRush dynamically displays a list of file names or code elements as you type additional letters. Moreover it searches for any fragment within a name, not just the starting characters. Especially useful is its Pascal/Camel Case feature. This is easiest to explain by examining the following picture.

image

Typing the letters BDA displays a list of all types whose Pascal-cased constituent words start with B, D and A respectively.

Code Metrics

Visual Studio Team System (VSTS) has a code metrics feature. This measures properties such as cyclomatic complexity and maintainability.  If you don't have VSTS then it is possible to obtain similar information via Reflector and its code metrics plug-in. See here for a close-up.

Code Metrics

CodeRush/Refactor! Pro also has a code metrics feature. The major difference and advantage it has over the other two is that the display is dynamic, i.e., the complexity graphs update themselves immediately after edits.

Cyclomatic Complexity

It's a useful way of driving your refactoring efforts. For example, Developer Express suggest that cyclomatic complexity should be <= 10 and maintenance complexity <= 200. The complexity measures also work with C++ code; I don't think the VSTS metrics do.
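
To make the <= 10 guideline concrete, here is a sketch of how cyclomatic complexity is counted (my own illustration; Developer Express's exact counting rules, and their separate maintenance complexity metric, may differ):

```csharp
class ComplexityExample
{
    // Cyclomatic complexity = number of decision points + 1.
    // This method has two decision points (the for loop and the if),
    // so its cyclomatic complexity is 3 - comfortably under 10.
    public static int CountPositives(int[] values)
    {
        int count = 0;
        for (int i = 0; i < values.Length; i++)  // decision point 1
        {
            if (values[i] > 0)                   // decision point 2
            {
                count++;
            }
        }
        return count;
    }
}
```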

Saturday, 31 January 2009

A Visual Basic Annoyance - Option Strict Off By Default

In Microsoft's .NET environment I normally program in C#. But occasionally I am asked to do Visual Basic development. Visual Basic has a project property known as Option Strict, which is off by default. Unfortunately, I usually forget to set it to on. When I eventually remember, I update the global environment setting to make on the default for new projects. However, when developing, I usually have one or two test projects on the side for trying things out before incorporating them in my main project. This was exactly the situation I found myself in recently. I had created some test projects but, unknown to me, some of them were created before I remembered to change the global environment setting, so they still had Option Strict off. So I merrily tried out my ideas and then copied and pasted them back into my main project, which had Option Strict on, only to find they didn't compile. Why didn't Microsoft make Option Strict on the default?

Wednesday, 17 December 2008

Web Application Testing In .NET

Recently I had the opportunity to use the open source software tool, WatiN (Web Application Testing in .NET), in a commercial environment. It is inspired by the similar Watir (Web Application Testing in Ruby). Initially (about a couple of years ago) I dabbled in Watir after being alerted to its existence by my friend, Mark Hudson. Also, it gave me an excuse to play around with Ruby a bit, Ruby being the sleek new kid on the block at the time.

As it happens, although WatiN and Watir are web testing frameworks, they can be used for automating browser operations - text entry, mouse clicks, navigation, etc. - independently of any testing. So you could use them to log into Internet banking sites, although that is becoming increasingly difficult with the rise of two-factor authentication schemes, such as Barclays' PINsentry system, that additionally use card readers.

What else do we need for testing our web applications with WatiN?

First, we should note that WatiN is for testing the UI as such, not for testing the application as a whole. Depending on how the web application is structured we can use standard test-driven development for the rest of the application.

The other tools I used in combination with WatiN were:

WatiN Test Recorder is helpful for generating rough starter code from your mouse clicks and keystrokes and giving you a feel for how to use the WatiN API.

The IE Developer Toolbar is useful for helping you identify the HTML control identifiers. (Currently WatiN requires Internet Explorer but Firefox support is in beta.)

NUnit is the framework for running the tests in C# or VB .NET.

WatiN itself has recently been upgraded to take advantage of .NET 3.5 and C# 3.0 language features - LINQ and lambda expressions. It has an excellent and active mailing list for technical support.

I will discuss WatiN in more detail in a subsequent post.

Friday, 12 December 2008

Taming the "Broken" IDisposable Implementation for WCF Clients

One of the early lessons we learn in Microsoft .NET development is to always use the using statement on objects that implement the IDisposable interface.  The effect is that the Dispose() method is called at the end of the using statement to ensure that resources are cleaned up.
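
To see why a throwing Dispose() matters, recall that the compiler expands a using statement into a try/finally that calls Dispose(). This sketch uses a StreamWriter purely as a stand-in disposable:

```csharp
using System;
using System.IO;

class UsingExpansion
{
    // The using statement form.
    public static void WithUsing(string path)
    {
        using (StreamWriter writer = new StreamWriter(path))
        {
            writer.WriteLine("hello");
        } // writer.Dispose() runs here, even if an exception was thrown above
    }

    // Roughly what the compiler expands it into. If Dispose() itself throws
    // (as a faulted WCF client's can), that exception escapes the finally
    // block - which is the crux of the problem this post addresses.
    public static void Expanded(string path)
    {
        StreamWriter writer = new StreamWriter(path);
        try
        {
            writer.WriteLine("hello");
        }
        finally
        {
            if (writer != null)
                ((IDisposable)writer).Dispose();
        }
    }
}
```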

Unfortunately, this pattern fails for Windows Communication Foundation (WCF) clients for the reason that is explained quite well in the posts, WCF Clients and the "Broken" IDisposable Implementation by David Barrett and Indisposable: WCF Gotcha #1 by Jesse Ezell. A Microsoft WCF development team member, Brian McNamara, explained the thinking behind this design decision in a post at the MSDN WCF forum Why does ClientBase Dispose need to throw on faulted state?

David and Jesse provide some quite elegant workarounds to this problem. I have adapted their solutions and applied them to a simple WCF service to illustrate their use. Let's take a step by step approach.

Imagine we have a simple WCF Calculator service.  This contains a single operation that adds two integers and returns the result.

Accessing the Calculator Service the Wrong Way

We can access the Calculator service using either a ChannelFactory<T> or a client proxy (a generated ClientBase<T> derived class). In that case, the client code might look like this.

private const string Uri =
    "net.pipe://localhost/CalculatorService/Calculator";

private static void RunUsingChannelFactoryWithUsingStatement()
{
    NetNamedPipeBinding binding = new NetNamedPipeBinding();
    EndpointAddress endpoint = new EndpointAddress(Uri);

    ICalculator channel =
        ChannelFactory<ICalculator>.CreateChannel(binding, endpoint);

    using (channel as IDisposable)
    {
        int result = channel.Add(1, 2);
        Console.WriteLine(result);
    }
}

private static void RunUsingProxyWithUsingStatement()
{
    NetNamedPipeBinding binding = new NetNamedPipeBinding();
    EndpointAddress endpoint = new EndpointAddress(Uri);

    using (
        CalculatorClient client =
            new CalculatorClient(binding, endpoint)
    )
    {
        int result = client.Add(1, 2);
        Console.WriteLine(result);
    }
}

However, we should not do this, since Dispose() calls Close() and Close() can throw an exception. Instead we have to be more long-winded.


Accessing the Calculator Service the Correct Way

private static void RunUsingChannelFactoryWithTryFinally()
{
    NetNamedPipeBinding binding = new NetNamedPipeBinding();
    EndpointAddress endpoint = new EndpointAddress(Uri);
    ChannelFactory<ICalculator> channelFactory =
        new ChannelFactory<ICalculator>(binding, endpoint);

    IClientChannel channel =
        channelFactory.CreateChannel() as IClientChannel;

    bool closed = false;
    try
    {
        int result = (channel as ICalculator).Add(1, 2);
        Console.WriteLine(result);
        channel.Close();
        closed = true;
    }
    finally
    {
        if (!closed)
        {
            channel.Abort();
        }
    }
}

private static void RunUsingProxyWithTryFinally()
{
    NetNamedPipeBinding binding = new NetNamedPipeBinding();
    EndpointAddress endpoint = new EndpointAddress(Uri);
    ClientBase<ICalculator> client =
        new CalculatorClient(binding, endpoint);

    bool closed = false;
    try
    {
        int result = (client as CalculatorClient).Add(1, 2);
        Console.WriteLine(result);
        client.Close();
        closed = true;
    }
    finally
    {
        if (!closed)
        {
            client.Abort();
        }
    }
}

Can we improve on this? Yes.


Improved Solution 1 - Implement IDisposable in Proxy Partial Class

public partial class CalculatorClient : IDisposable
{
    void IDisposable.Dispose()
    {
        try
        {
            Close();
        }
        catch (CommunicationException)
        {
            Abort();
        }
        catch (TimeoutException)
        {
            Abort();
        }
    }
}

With this in place we can force the using statement to do the right thing. This is now valid.

using (CalculatorClient client = new CalculatorClient(binding, endpoint))
{
    int result = client.Add(1, 2);
    Console.WriteLine(result);
}

This is an elegant solution but it has two disadvantages.



  1. It only works for proxy-based client access, not ChannelFactory-based access.
  2. The client must write the partial class for each new service they wish to access (though this could be automated with code generation).

Improved Solution 2 - Use Generics and Delegates


First we define these two delegates that represent code blocks for proxy-based and ChannelFactory-based client access respectively.

/// <summary>
/// Represents a code block containing WCF proxy client method calls.
/// </summary>
public delegate void
    UseProxyDelegate<TInterface, TClass>(TClass proxy);

/// <summary>
/// Represents a code block containing WCF client method calls
/// using a <see cref="ChannelFactory"/>.
/// </summary>
public delegate void
    UseChannelFactoryDelegate<TInterface>(TInterface channel);

Then we define a class that abstracts away the closing of the connection for us.

public static class Service
{
    /// <summary>
    /// Uses a WCF client via its proxy.
    /// </summary>
    public static void UseProxy<TInterface, TClass>(
        ClientBase<TInterface> proxy,
        UseProxyDelegate<TInterface, TClass> codeBlock
    )
        where TInterface : class
        where TClass : ClientBase<TInterface>, TInterface
    {
        if (proxy == null)
            throw new ArgumentNullException(
                "proxy", "proxy is null."
            );
        if (codeBlock == null)
            throw new ArgumentNullException(
                "codeBlock", "codeBlock is null."
            );

        bool closed = false;
        try
        {
            codeBlock(proxy as TClass);
            proxy.Close();
            closed = true;
        }
        finally
        {
            if (!closed)
            {
                proxy.Abort();
            }
        }
    }

    /// <summary>
    /// Uses a WCF client via its <see cref="ChannelFactory"/>.
    /// </summary>
    public static void UseChannelFactory<TInterface>(
        ChannelFactory<TInterface> channelFactory,
        UseChannelFactoryDelegate<TInterface> codeBlock
    )
        where TInterface : class
    {
        if (channelFactory == null)
            throw new ArgumentNullException(
                "channelFactory", "channelFactory is null."
            );
        if (codeBlock == null)
            throw new ArgumentNullException(
                "codeBlock", "codeBlock is null."
            );

        IClientChannel channel =
            channelFactory.CreateChannel() as IClientChannel;
        bool closed = false;
        try
        {
            codeBlock(channel as TInterface);
            channel.Close();
            closed = true;
        }
        finally
        {
            if (!closed)
            {
                channel.Abort();
            }
        }
    }
}

With this in place we can now proceed to the final solution.

private static void RunUsingChannelFactory()
{
    NetNamedPipeBinding binding = new NetNamedPipeBinding();
    EndpointAddress endpoint = new EndpointAddress(Uri);
    ChannelFactory<ICalculator> channelFactory =
        new ChannelFactory<ICalculator>(binding, endpoint);

    Service.UseChannelFactory<ICalculator>(
        channelFactory,
        delegate(ICalculator calculator)
        {
            int result = calculator.Add(1, 2);
            Console.WriteLine(result);
        }
    );
}

private static void RunUsingProxy()
{
    NetNamedPipeBinding binding = new NetNamedPipeBinding();
    EndpointAddress endpoint = new EndpointAddress(Uri);
    ClientBase<ICalculator> client =
        new CalculatorClient(binding, endpoint);

    Service.UseProxy<ICalculator, CalculatorClient>(
        client,
        delegate(CalculatorClient calculator)
        {
            int result = calculator.Add(1, 2);
            Console.WriteLine(result);
        }
    );
}

If we are using C# 3.0 we can replace the anonymous delegates with statement lambdas but there is no real gain in expressiveness in doing so in this case.
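
For illustration, here is the ChannelFactory call above rewritten with a statement lambda (a fragment mirroring the earlier code, not standalone; same behaviour, just C# 3.0 syntax):

```csharp
Service.UseChannelFactory<ICalculator>(
    channelFactory,
    calculator =>
    {
        int result = calculator.Add(1, 2);
        Console.WriteLine(result);
    }
);
```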


This final solution is more general purpose and still straightforward to use.