Attached Behaviors Memory Leaks

“Behavior is the base class for providing attachable state and commands to an object. The types the Behavior can be attached to can be controlled by the generic parameter. Override OnAttached() and OnDetaching() methods to hook and unhook any necessary handlers from the AssociatedObject.”
If you use behaviors or trigger actions that subscribe to events internally, you are in trouble: the memory they use is never released, and they can hold many objects, including views, in memory, causing memory leaks.

“When you subscribe to an event the event source ends up with a hard reference to the event handler. This creates a situation where the event handler cannot be cleaned up as long as the event source exists.”
To unhook events you probably write code in the OnDetaching method; however, a behavior might not detach when you expect it to (and vice versa), leaving the added event handlers on the control and keeping objects alive across GC. OnDetaching is only called when you explicitly remove the behavior.
The solution:
OnAttached is called when the XAML parser parses the XAML, creates an instance of the behavior, and adds it to the BehaviorCollection of the target control, which is exposed as an attached dependency property. However, when the view is disposed, the BehaviorCollection is disposed along with it, and OnDetaching is never triggered. If the behavior is not cleaned up properly it will not be collected by the GC, and it will also keep the BehaviorCollection, and the other behaviors in that collection, alive. Behaviors are designed to extend the AssociatedObject; as long as you subscribe only to AssociatedObject events you are fine, because when the AssociatedObject (the publisher) dies, your behavior becomes collectible by the garbage collector.

Use BehaviorBase (see the code below) to avoid memory leaks from behaviors; the same technique can also be used for triggers.
Derive all your behaviors from the BehaviorBase class and override the OnSetup and OnCleanup methods. OnSetup is called when the behavior is hooked up to its target (including when it is explicitly attached to an already-loaded object at runtime) and again each time the object is reloaded; OnCleanup is called on each unload.
BehaviorBase

 
public abstract class BehaviorBase<T> : Behavior<T> where T : FrameworkElement
{
    private bool _isSetup;
    private bool _isHookedUp;
    private WeakReference _weakTarget;

    protected virtual void OnSetup() { }
    protected virtual void OnCleanup() { }

    protected override void OnChanged()
    {
        var target = AssociatedObject;
        if (target != null)
        {
            HookupBehavior(target);
        }
        else
        {
            UnHookupBehavior();
        }
    }

    private void OnTarget_Loaded(object sender, RoutedEventArgs e) { SetupBehavior(); }

    private void OnTarget_Unloaded(object sender, RoutedEventArgs e) { CleanupBehavior(); }

    private void HookupBehavior(T target)
    {
        if (_isHookedUp) return;
        _weakTarget = new WeakReference(target);
        _isHookedUp = true;
        target.Unloaded += OnTarget_Unloaded;
        target.Loaded += OnTarget_Loaded;
        SetupBehavior();
    }

    private void UnHookupBehavior()
    {
        if (!_isHookedUp) return;
        _isHookedUp = false;
        var target = AssociatedObject ?? (T)_weakTarget.Target;
        if (target != null)
        {
            target.Unloaded -= OnTarget_Unloaded;
            target.Loaded -= OnTarget_Loaded;
        }
        CleanupBehavior();
    }

    private void SetupBehavior()
    {
        if (_isSetup) return;
        _isSetup = true;
        OnSetup();
    }

    private void CleanupBehavior()
    {
        if (!_isSetup) return;
        _isSetup = false;
        OnCleanup();
    }
}
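As a sketch of how a behavior derived from this base class might look (the behavior and its event wiring here are illustrative, not part of the original post), all subscriptions go through OnSetup/OnCleanup so they are always undone on unload:

```csharp
// Hypothetical example: a behavior that logs size changes of its target.
// Because the handler is removed in OnCleanup (driven by the Unloaded event),
// it is unhooked even when OnDetaching is never called.
public class SizeLoggingBehavior : BehaviorBase<FrameworkElement>
{
    protected override void OnSetup()
    {
        AssociatedObject.SizeChanged += OnSizeChanged;
    }

    protected override void OnCleanup()
    {
        AssociatedObject.SizeChanged -= OnSizeChanged;
    }

    private void OnSizeChanged(object sender, SizeChangedEventArgs e)
    {
        System.Diagnostics.Debug.WriteLine(
            "Size changed to {0}x{1}", e.NewSize.Width, e.NewSize.Height);
    }
}
```

Note that the derived class never touches Loaded/Unloaded itself; the base class owns that wiring.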

Windows 8: TopMost window

TopMost

I am working on my next ambitious project, “MouseTouch”, a multi-touch simulator application for the Windows 8 platform, intended to increase your productivity when running Windows 8 on a non-touch device.

It brings the touch features of Windows 8 to life even if you only have a mouse or touchpad.

The first challenge: how do you render something on top of the Metro Start screen items?

So if you want to create a true topmost window that can float even on top of Windows 8 Metro apps, here are the steps:

  1. Create a WPF application in Visual Studio (or any other windowed app).
  2. Set Topmost="True" on the MainWindow.
  3. Right-click on your project in the Solution Explorer.
  4. Select “Add New Item” from the context menu.
  5. Choose “Application Manifest File” from the list of options in the dialog box that appears.
  6. Right-click on your project in the Solution Explorer and click “Properties” (or double-click the “Properties” item under your project).
  7. Under the first tab (“Application”), select your app.manifest file from the drop-down box labeled “Manifest”.
  8. In the manifest, set <requestedExecutionLevel level="asInvoker" uiAccess="true" />

The next part is to create a certificate and install it in trusted root authorities.

  1. Create the certificate:
    1. makecert -r -pe -ss PrivateCertStore -n "CN=TopMost.com" topmost.cer
  2. Import certificate to (Local Machine) trusted root certification authorities using mmc.exe.

Now sign your executable with the certificate, either using the command below or from Visual Studio's signing options.

  1. Signtool sign /v /s PrivateCertStore /n TopMost.com /t http://timestamp.verisign.com/scripts/timestamp.dll TopMost.exe

Now copy TopMost.exe to a trusted location such as C:\Windows or Program Files, and run it.

Still struggling?

If the executable is not signed, or the certificate is not installed properly, you will see the following exception:

referralError
To avoid this exception, open mmc.exe, add the Certificates snap-in, and select Computer account > Local Computer.

Then go to Trusted Root Certification Authorities > Certificates, right-click > All Tasks > Import, and import the certificate.

CertInstall

.Net Cryptography (Encryption / Decryption)

There are two techniques for encrypting data: symmetric encryption (secret key encryption) and asymmetric encryption (public key encryption).

Symmetric Encryption

Symmetric encryption is the oldest and best-known technique. A secret key, which can be a number, a word, or just a string of random letters, is applied to the text of a message to change the content in a particular way. This might be as simple as shifting each letter by a number of places in the alphabet. As long as both sender and recipient know the secret key, they can encrypt and decrypt all messages that use this key.
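That letter-shifting idea (a Caesar cipher) can be sketched in a few lines of C#; this toy example is for illustration only and is of course not real cryptography:

```csharp
using System;
using System.Linq;

// Toy illustration of symmetric encryption: the "secret key" is simply the
// number of places each letter is shifted in the alphabet. The same key
// (negated) decrypts the message.
public static class CaesarExample
{
    public static string Shift(string text, int key)
    {
        return new string(text.Select(c =>
            char.IsUpper(c)
                ? (char)('A' + ((c - 'A' + key) % 26 + 26) % 26)
                : c).ToArray());
    }

    public static void Main()
    {
        string cipher = Shift("HELLO", 3);          // encrypt with key 3
        string plain  = Shift(cipher, -3);          // decrypt with the same key
        Console.WriteLine(cipher + " -> " + plain); // KHOOR -> HELLO
    }
}
```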

Asymmetric Encryption

The problem with secret keys is exchanging them over the Internet or a large network while preventing them from falling into the wrong hands. Anyone who knows the secret key can decrypt the message. One answer is asymmetric encryption, in which there are two related keys–a key pair. A public key is made freely available to anyone who might want to send you a message. A second, private key is kept secret, so that only you know it.

Any message (text, binary file, or document) that is encrypted by using the public key can be decrypted only by applying the same algorithm with the matching private key. Likewise, any message that is encrypted by using the private key can be decrypted only with the matching public key.

This means that you do not have to worry about passing public keys over the Internet (the keys are supposed to be public). A drawback of asymmetric encryption, however, is that it is slower than symmetric encryption: it requires far more processing power to both encrypt and decrypt the content of a message.

 

Let's see how both techniques can be implemented in C# (.NET 4.0).

The CryptoBase class below is the base class for both techniques:

 

public abstract class CryptoBase : IDisposable
{
    public CryptoBase(IDisposable provider)
    {
        this.Provider = provider;
    }

    protected IDisposable Provider { get; private set; }

    /// <summary>
    /// Encrypt the source stream and save the result to the target stream
    /// </summary>
    public abstract void Encrypt(System.IO.Stream source, System.IO.Stream target);

    /// <summary>
    /// Decrypt the source stream and save the result to the target stream
    /// </summary>
    public abstract void Decrypt(System.IO.Stream source, System.IO.Stream target);

    protected abstract void OnDisposeProvider();

    /// <summary>
    /// Copy the source stream to the target stream, transforming the bytes
    /// (via the function delegate) before they are written to the target.
    /// </summary>
    /// <param name="source">Source stream</param>
    /// <param name="target">Target stream</param>
    /// <param name="bytesProcessor">Function delegate that processes the data before it is copied to the target stream</param>
    protected void CopyStream(Stream source, Stream target, Func<byte[], byte[]> bytesProcessor)
    {
        const int bufSize = 1024;
        byte[] buf = new byte[bufSize];
        int bytesRead;
        while ((bytesRead = source.Read(buf, 0, bufSize)) > 0)
        {
            // Extract the bytes actually read into a right-sized buffer
            byte[] actual = new byte[bytesRead];
            Array.Copy(buf, actual, bytesRead);

            // Process the bytes before writing them to the target
            byte[] processed = bytesProcessor(actual);
            target.Write(processed, 0, processed.Length);
        }
    }

    /// <summary>
    /// Copy the source stream to the target stream.
    /// </summary>
    /// <param name="source">Source stream</param>
    /// <param name="target">Target stream</param>
    protected void CopyStream(Stream source, Stream target)
    {
        const int bufSize = 1024;
        byte[] buf = new byte[bufSize];
        int bytesRead;
        while ((bytesRead = source.Read(buf, 0, bufSize)) > 0)
            target.Write(buf, 0, bytesRead);
    }

    private bool isDisposed = false;

    ~CryptoBase()
    {
        this.Dispose();
    }

    private void Dispose()
    {
        if (!isDisposed && Provider != null)
        {
            isDisposed = true;
            OnDisposeProvider();
            Provider.Dispose();
            Provider = null;
            GC.SuppressFinalize(this);
        }
    }

    void IDisposable.Dispose()
    {
        this.Dispose();
    }
}

Now let's look at the symmetric implementation, based on the RijndaelManaged provider:

/// <summary>
/// Symmetric encryption/decryption class based on the RijndaelManaged provider
/// </summary>
public sealed class Symmetric : CryptoBase
{
    private readonly byte[] passcode;
    private readonly byte[] vector;

    /// <summary>
    /// Initialize the symmetric encryption from a vector, passcode and salt
    /// </summary>
    /// <param name="Vector">Randomly generated 16-byte array (the 128-bit AES block size)</param>
    /// <param name="PassCode">Random passcode bytes</param>
    /// <param name="Salt">Salt string used to derive the key</param>
    public Symmetric(byte[] Vector, byte[] PassCode, string Salt) : base(new RijndaelManaged())
    {
        MD5CryptoServiceProvider md5Crypt = new MD5CryptoServiceProvider();

        // Construct the derived password; the hash name should be SHA1 or MD5
        PasswordDeriveBytes password = new PasswordDeriveBytes(
            PassCode, md5Crypt.ComputeHash(Encoding.ASCII.GetBytes(Salt)), "SHA1", 2);

        // .NET provides 4 symmetric crypto providers: RijndaelManaged,
        // DESCryptoServiceProvider, RC2CryptoServiceProvider and TripleDESCryptoServiceProvider.
        // Rijndael is the algorithm behind AES (Advanced Encryption Standard),
        // but with more choice about the size of your key.
        SymmetricAlgorithm symmetricProvider = base.Provider as RijndaelManaged;
        symmetricProvider.Mode = CipherMode.CBC;

        // Set the passcode (a derived 256-bit key) and the vector
        passcode = password.GetBytes(32);
        vector = Vector;
    }

    public override void Encrypt(Stream source, Stream target)
    {
        SymmetricAlgorithm symmetricProvider = base.Provider as RijndaelManaged;

        // Create the encryptor from the passcode and vector
        using (ICryptoTransform transformer = symmetricProvider.CreateEncryptor(passcode, vector))
        using (CryptoStream cryptoStream = new CryptoStream(target, transformer, CryptoStreamMode.Write))
        {
            base.CopyStream(source, cryptoStream);
        }
    }

    public override void Decrypt(Stream source, Stream target)
    {
        SymmetricAlgorithm symmetricProvider = base.Provider as RijndaelManaged;

        // Create the decryptor from the passcode and vector
        using (ICryptoTransform transformer = symmetricProvider.CreateDecryptor(passcode, vector))
        using (CryptoStream cryptoStream = new CryptoStream(source, transformer, CryptoStreamMode.Read))
        {
            CopyStream(cryptoStream, target);
        }
    }

    protected override void OnDisposeProvider()
    {
        SymmetricAlgorithm symmetricProvider = base.Provider as RijndaelManaged;
        symmetricProvider.Clear();
    }
}

And the asymmetric implementation:

 
/// <summary>
/// Asymmetric (RSA) encryption/decryption class
/// </summary>
internal sealed class Asymmetric : CryptoBase
{
    /* Note: the Microsoft Cryptographic API (CAPI) reverses the order of the
     * encrypted bytes after encryption and before decryption. If you do not
     * require compatibility with CAPI and/or other vendors,
     * set CAPICompatibility = false.
     */
    private readonly bool CAPICompatibility = true;

    /// <summary>
    /// Create the wrapper from an RSACryptoServiceProvider
    /// </summary>
    /// <param name="provider"></param>
    public Asymmetric(RSACryptoServiceProvider provider) : base(provider)
    {
    }

    /// <summary>
    /// Encrypt a byte array using the RSACryptoServiceProvider
    /// </summary>
    /// <param name="bytes"></param>
    /// <returns></returns>
    private byte[] Encrypt(byte[] bytes)
    {
        RSACryptoServiceProvider provider = base.Provider as RSACryptoServiceProvider;

        int keySize = provider.KeySize / 8;

        // SHA1 produces a 20-byte hash; this gives the per-block plaintext limit
        int maxLength = keySize - 2 - (2 * SHA1.Create().HashSize / 8);

        int dataLength = bytes.Length;
        // Compute the number of blocks from the data length
        int iterations = dataLength / maxLength;
        List<byte> result = new List<byte>();
        // Loop through the data and encrypt it block by block
        for (int i = 0; i <= iterations; i++)
        {
            byte[] tempBytes = new byte[(dataLength - maxLength * i > maxLength)
                ? maxLength : dataLength - maxLength * i];
            if (tempBytes.Length > 0)
            {
                Buffer.BlockCopy(bytes, maxLength * i, tempBytes, 0, tempBytes.Length);
                byte[] encryptedBytes = provider.Encrypt(tempBytes, false);
                // CAPI reverses the bytes after encryption;
                // if CAPI compatibility is required, reverse the data
                if (CAPICompatibility)
                    Array.Reverse(encryptedBytes);
                result.AddRange(encryptedBytes);
            }
        }
        // Return the encrypted data
        return result.ToArray();
    }

    /// <summary>
    /// Decrypt a byte array using the RSACryptoServiceProvider
    /// </summary>
    /// <param name="bytes"></param>
    /// <returns></returns>
    private byte[] Decrypt(byte[] bytes)
    {
        RSACryptoServiceProvider provider = base.Provider as RSACryptoServiceProvider;

        // Each encrypted block is exactly one key size long
        int keySize = provider.KeySize / 8;
        int maxLength = keySize;

        int dataLength = bytes.Length;
        // Compute the number of blocks from the data length
        int iterations = dataLength / maxLength;

        List<byte> result = new List<byte>();
        for (int i = 0; i <= iterations; i++)
        {
            byte[] tempBytes = new byte[(dataLength - maxLength * i > maxLength)
                ? maxLength : dataLength - maxLength * i];
            if (tempBytes.Length > 0)
            {
                Buffer.BlockCopy(bytes, maxLength * i, tempBytes, 0, tempBytes.Length);
                // CAPI expects the bytes reversed before decryption;
                // if CAPICompatibility is set, reverse the data
                if (CAPICompatibility)
                    Array.Reverse(tempBytes);
                // Decrypt the block
                byte[] decryptedBytes = provider.Decrypt(tempBytes, false);
                result.AddRange(decryptedBytes);
            }
        }
        // Return the decrypted data
        return result.ToArray();
    }

    public override void Encrypt(Stream source, Stream target)
    {
        // Pass the Encrypt function as the byte processor; it is called
        // on each chunk before the data is written to the output stream
        CopyStream(source, target, Encrypt);
    }

    public override void Decrypt(Stream source, Stream target)
    {
        // Pass the Decrypt function as the byte processor; it is called
        // on each chunk before the data is written to the output stream
        CopyStream(source, target, Decrypt);
    }

    protected override void OnDisposeProvider()
    {
        RSACryptoServiceProvider provider = base.Provider as RSACryptoServiceProvider;
        provider.Clear();
    }
}

And finally, typical usage looks like this:

class Program
{
    static void Main(string[] args)
    {
        // Asymmetric encryption/decryption usage
        // Get the certificate
        X509Certificate2 certificate = null; // = Get the certificate from the store (left to the implementor)

        // Use the private key to decrypt the data
        using (CryptoBase asymmetric = new Asymmetric((RSACryptoServiceProvider)certificate.PrivateKey))
        {
            //asymmetric.Decrypt(source, target);
        }

        // Use the public key to encrypt the data
        using (CryptoBase asymmetric = new Asymmetric((RSACryptoServiceProvider)certificate.PublicKey.Key))
        {
            //asymmetric.Encrypt(source, target);
        }

        // Symmetric encryption/decryption usage
        byte[] vector = GenerateRandomBytes(16);
        byte[] passcode = GenerateRandomBytes(10);
        string salt = "ChooseSalt";

        // Decrypt the data with the symmetric key material
        using (CryptoBase symmetric = new Symmetric(vector, passcode, salt))
        {
            //symmetric.Decrypt(source, target);
        }

        // Encrypt the data with the same key material
        using (CryptoBase symmetric = new Symmetric(vector, passcode, salt))
        {
            //symmetric.Encrypt(source, target);
        }
    }

    public static byte[] GenerateRandomBytes(int size)
    {
        // Use a cryptographically strong RNG rather than System.Random
        byte[] buffer = new byte[size];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(buffer);
        return buffer;
    }
}

The initialization of the certificate from the certificate store is not implemented in the sample above.

If you have a large amount of data and asymmetric encryption doesn't meet your performance requirements, you can mix both techniques. For example, encrypt the symmetric key material with the asymmetric technique and write it into a header of the data, then encrypt the rest of the data with the fast symmetric technique. To make it stronger, you might develop a custom scheme that encrypts most of the data symmetrically but also encrypts some segments asymmetrically. Use certificates with a key size of 1024 bits or higher for better security.
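A minimal sketch of that hybrid scheme, reusing the Symmetric and Asymmetric classes and the GenerateRandomBytes helper from this post (the header layout and sizes here are illustrative assumptions; a real format would also record lengths so the reader can split the header from the body):

```csharp
// Hybrid encryption sketch: RSA-protect the symmetric key material (header),
// then bulk-encrypt the payload with the fast symmetric cipher (body).
public static void HybridEncrypt(Stream source, Stream target,
                                 RSACryptoServiceProvider publicKey,
                                 string salt)
{
    byte[] vector = GenerateRandomBytes(16);   // AES initialization vector
    byte[] passcode = GenerateRandomBytes(10); // material for key derivation

    // Header: the symmetric key material, encrypted with the public key
    byte[] keyMaterial = new byte[vector.Length + passcode.Length];
    Buffer.BlockCopy(vector, 0, keyMaterial, 0, vector.Length);
    Buffer.BlockCopy(passcode, 0, keyMaterial, vector.Length, passcode.Length);

    using (CryptoBase rsa = new Asymmetric(publicKey))
    using (MemoryStream header = new MemoryStream(keyMaterial))
    {
        rsa.Encrypt(header, target);
    }

    // Body: the bulk of the data, encrypted symmetrically
    using (CryptoBase aes = new Symmetric(vector, passcode, salt))
    {
        aes.Encrypt(source, target);
    }
}
```

Decryption mirrors this: read and RSA-decrypt the header to recover the vector and passcode, then decrypt the remainder symmetrically.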

ClickOnce Application Patching

Problem description: We have a desktop application that includes other components such as a Windows service, a local SQL Express database, etc. The first release of our application will be part of our gold build, and we will have full control over the target environment. Once the build is released to our remote customers, we lose control over that environment. We need to provide an ongoing application update mechanism for each client; at the same time, unauthorized users must not be allowed to access these updates. Each client might have a different application version installed and maintained. The system should allow us to control who receives each application update; let's assume we will charge our customers for 6 months of updates…

Solution Summary:

  • Use ClickOnce as the base deployment technology.
  • Provide a client-side ClickOnce proxy (ClickOnce wrapper) component that can be integrated into any client app.
  • Build an ASP.NET server-side HttpHandler with client authentication and security.
  • Build a trust mechanism between the client-side and server-side components.
  • Prepare your initial build, use a CD as the deployment method (fully trusted), and configure some parameters. Components that might not need any updates, such as the Windows host service or the SQL instance, could be deployed as an MSI installation.
  • The server-side component should record the client IP address so that, at some later stage, these locations can be plotted on a Bing or Google map. – (out of scope at this stage)

Theory:

How ClickOnce Deployment Works: The core ClickOnce deployment architecture is based on two XML manifest files: an application manifest and a deployment manifest.

The application manifest describes the application itself, including the assemblies, the dependencies and files that make up the application, the required permissions, and the location where updates will be available. The application developer authors the application manifest by using the Publish Wizard in Visual Studio or the manifest generation tool (Mage.exe or MageUI.exe) in the .NET Framework SDK.

The deployment manifest describes how the application is deployed, including the location of the application manifest, and the version of the application that clients should run. An administrator authors the deployment manifest using the manifest generation tool (Mage.exe) in the .NET Framework SDK or Visual studio publishing wizard.

How ClickOnce Performs Application Updates: ClickOnce uses the file version information specified in an application’s deployment manifest to decide whether to update the application’s files. After an update begins, ClickOnce uses a technique called file patching to avoid redundant downloading of application files.

File patching: When updating an application, ClickOnce does not download all of the files for the new version unless the files have changed. Instead, it compares the hash signatures of the files specified in the application manifest for the current application against the signatures in the manifest for the new version. If a file’s signatures differ, ClickOnce downloads the new version of that file. If the signatures match, the file has not changed from one version to the next, and ClickOnce copies the existing file and uses it in the new version of the application. This approach prevents ClickOnce from having to download the entire application again when only one or two files have changed.

Choosing a ClickOnce Deployment Strategy: There are three different strategies for deploying a ClickOnce application; the strategy that you choose depends primarily on the type of application that you are deploying. We will use “Install from a CD” in this scenario.

When you use this strategy, your application is deployed to removable media such as a CD-ROM or DVD. As with the previous option, when the user chooses to install the application, it is installed and started, and items are added to the Start menu and Add or Remove Programs in Control Panel.

This strategy works best for applications that will be deployed to users without persistent network connectivity or with low-bandwidth connections. Because the application is installed from removable media, no network connection is necessary for installation; however, network connectivity is still required for application updates.

To enable this deployment strategy in Visual Studio, click From a CD-ROM or DVD-ROM on the How Installed page of the Publish Wizard.

To enable this deployment strategy manually, change the deploymentProvider tag in the deployment manifest so that the value is blank. In Visual Studio, this property is exposed as Installation URL on the Publish page of the Project Designer. In Mage.exe, it is Start Location.

Choosing a ClickOnce Update Strategy: ClickOnce can provide automatic application updates. A ClickOnce application periodically reads its deployment manifest file to see whether updates to the application are available. If available, the new version of the application is downloaded and run. For efficiency, only those files that have changed are downloaded.

When designing a ClickOnce application, you have to determine which strategy the application will use to check for available updates. There are three basic strategies that you can use: checking for updates on application startup, checking for updates after application startup (running in a background thread), or providing a user interface for updates. When using this strategy (user interface for updates), the application developer provides a user interface that enables the user to choose when or how often the application will check for updates. For example, you might provide a “Check for Updates Now” command, or an “Update Settings” dialog box that has choices for different update intervals. The ClickOnce deployment APIs provide a framework for programming your own update user interface. For more information, see the System.Deployment.Application namespace.

If your application uses the deployment APIs to control its own update logic, you should block automatic update checking. To do so, clear the “The application should check for updates” check box in the Application Updates dialog box. You can also block update checking by removing the <Subscription> tag from the deployment manifest.

This strategy works best when you need different update strategies for different users.
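For the “Check for Updates Now” command mentioned above, a minimal sketch using the ClickOnce deployment APIs (System.Deployment.Application) might look like this; error handling is omitted:

```csharp
using System;
using System.Deployment.Application; // reference System.Deployment.dll

// Sketch of a user-driven update check using the ClickOnce deployment APIs.
public static class UpdateCommand
{
    public static void CheckForUpdateNow()
    {
        // The deployment APIs only work when the app was launched via ClickOnce
        if (!ApplicationDeployment.IsNetworkDeployed)
            return;

        ApplicationDeployment deployment = ApplicationDeployment.CurrentDeployment;

        if (deployment.CheckForUpdate())
        {
            // Download and apply the new version; it takes effect on restart
            deployment.Update();
            Console.WriteLine("Update installed - restart the application to use it.");
        }
        else
        {
            Console.WriteLine("No update available.");
        }
    }
}
```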

Publish Directory Structure

The folder and file structure created by Visual Studio 2005 includes a root folder for the application, a default deployment manifest, a version specific deployment manifest, and a subfolder for each version of the application. The subfolder contains the application files suffixed with a file extension of .deploy and the application manifest for that version. The .deploy extension is added to all of the application files by default when publishing from Visual Studio to simplify the configuration required on the deployment server. Doing so ensures that the only file extensions you need to set up mime-type mappings for are the .application, .manifest, and .deploy files. The runtime on the client will remove the .deploy file extension after download to restore the original file name.

Additionally, a Setup.msi file for the configured pre-requisites is placed in the root-publishing folder, along with a publish.htm file that can be used to test the deployment. The default deployment manifest (the one without a version number embedded in its name) in the root folder is updated each time you publish a new version so that it always refers to the application manifest in the most recently published version. Figure A-1 depicts this folder structure for an application published to the local machine instance of IIS.
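For example, a publish folder for a hypothetical MyApp version 1.0.0.1 might look like this (names illustrative):

```
publish/
  MyApp.application              default deployment manifest (always points at the newest version)
  MyApp_1_0_0_1.application      version-specific deployment manifest
  setup.exe                      bootstrapper for the configured prerequisites
  publish.htm                    test page for the deployment
  MyApp_1_0_0_1/
    MyApp.exe.deploy
    MyApp.exe.config.deploy
    MyApp.exe.manifest           application manifest for this version
```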

The structure used by Visual Studio is arbitrary and does not have to be followed explicitly if you manually publish your applications. The deployment manifest contains a path to the application manifest, and the application manifest contains relative paths to the application files. As a result, they can be placed in whatever folder structure you choose if you create or edit your manifests with Mage.

ClickOnce Reference: http://msdn.microsoft.com/en-us/library/6ae39a7c(v=vs.90)

Administering ClickOnce Deployments: http://msdn.microsoft.com/en-us/library/aa480721.aspx

ClickOnce HttpHandler: http://www.codeproject.com/Articles/176120/ClickOnce-Licensing-HTTPHandler

Check for updates : http://msdn.microsoft.com/en-us/library/ms404263(v=vs.90)

Implementation:

Creating the Scenario: We have to split our application into two components: one that requires updates, and one that doesn't change or require updates, such as the bootstrap program, the SQL Server installation, etc. Pack the first component using the ClickOnce publishing technique mentioned above and the second as an MSI, then pack the two into a single MSI. The setup is ready for rollout on CD.

We definitely need some sort of unique ID (say, a GUID) to identify each client; let's assume that on the server side we have this information available in an XML file. These IDs will be provided as part of the first installation bundle, and each update request will carry the ID in its query string. At the server end, an ASP.NET HttpHandler will log the source IP address, validate the client ID against its local XML file, and respond according to our business rules.
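The server-side piece could start out as a plain ASP.NET IHttpHandler along these lines; the "clientId" parameter name, the authorization check, and the logging are placeholder assumptions, not the final implementation:

```csharp
using System.Web;

// Sketch: validate a client ID from the query string before serving
// ClickOnce manifests/.deploy files, and record the source IP address.
public class ClickOnceUpdateHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string clientId = context.Request.QueryString["clientId"];
        string clientIp = context.Request.UserHostAddress;

        LogRequest(clientId, clientIp); // record the source IP for later reporting

        if (!IsAuthorized(clientId))
        {
            context.Response.StatusCode = 403; // Forbidden
            return;
        }

        // Serve the requested file from the publish folder; a full
        // implementation would set the content type per file extension
        context.Response.WriteFile(context.Server.MapPath(context.Request.Path));
    }

    private void LogRequest(string clientId, string clientIp)
    {
        // Placeholder: append to a log or XML file
    }

    private bool IsAuthorized(string clientId)
    {
        // Placeholder: validate the ID against the local XML file and apply
        // business rules (e.g. is this customer within their update window?)
        return !string.IsNullOrEmpty(clientId);
    }
}
```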

So the next important thing we need is the server-side component of ClickOnce, i.e. an ASP.NET HttpHandler. The problem with custom approaches (an ASP.NET HttpHandler) is that when a ClickOnce application is installed or updated, a series of separate file requests comes in to the deployment server from the client machine. There is no way for the deployment server to identify who those calls are coming from, or that they are all part of a single logical installation or update “session” from the perspective of the user and the client application. The only way to solve this is to introduce something onto the client that can associate the individual requests and inject information into the calls leaving the client, identifying the user to the deployment server.

This kind of client-side interception of requests is exactly what an HTTP proxy was designed for, even though proxies are usually used for other purposes related to network infrastructure and security. If a proxy can be introduced on the client that intercepts requests destined for one or a set of ClickOnce deployment servers, and adds user credential information to those requests before they leave the client machine, then those credentials can be used on the server not only for authenticating the deployment manifest requests, but for all file requests. The proxy adds a custom cookie containing the user credentials to the HTTP requests. You then need some code on the server side, such as another HTTP module, that checks for the presence of those cookies and only allows authenticated access to the server.

Using a Custom Proxy to Establish an Application Session: As described earlier, if you can introduce a custom proxy on the client side, you can ensure that a user token is included with every request for ClickOnce manifests or application files that leaves the client machine for a particular site or set of sites. As a result, you can use the user token to make authorization decisions on the server side and determine whether to return the requested file. This can again be done with a custom authorization HTTP module that extracts the user token (typically in cookie form) and uses it to decide whether to satisfy the request.

The downsides to this approach are the complexity in setting up the custom proxy as discussed earlier, as well as coming up with a way to store and associate user tokens on the client side that is secure. To protect the token in transit, SSL would work fine, and token lookup on the server side could be done using the ASP.NET Membership API or some other custom database lookup.

Content Type Mappings: When publishing over HTTP, the content type (also known as MIME type) for the .application file should be “application/x-ms-application.” If you have .NET Framework 2.0 installed on the server, this will be set for you automatically. If this is not installed, then you need to create a MIME type association for the ClickOnce application vroot (or entire server).

If you deploy using an IIS server, run inetmgr.exe and add a new content type of “application/x-ms-application” for the .application extension.

Coming soon with a reference implementation.

Reactive Extensions (Rx) Data Streaming

You have probably heard about Reactive Extensions, a library from Microsoft that greatly simplifies working with asynchronous data streams and lets you query them with LINQ operators. In my previous post I briefly discussed loading data asynchronously via Entity Framework; however, that approach lacked a way to receive the data in chunks. This post demonstrates how to use Reactive Extensions to load data from a database asynchronously in chunks, with a brief overview of the Reactive Extensions library.

Reactive extensions (Rx)
The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators. Using Rx, developers represent asynchronous data streams with Observables, query them using LINQ operators, and parameterize their concurrency using Schedulers. Simply put, Rx = Observables + LINQ + Schedulers.

Data sequences can take many forms, such as a stream of data from a file or web service, web service requests, system notifications, or a series of events such as user input. Reactive Extensions represents all of these data sequences as observable sequences. An application can subscribe to an observable sequence to receive asynchronous notifications as new data arrives. The Rx library is available for desktop application development in .NET, and has also been released for Silverlight, Windows Phone 7 and JavaScript.

Reactive programming allows you to turn those aspects of your code that are currently imperative into something much more event-driven and flexible. Reactive programming can be applied to a range of situations, from WPF applications to Windows Phone apps, to improve coding efficiency and boost performance.

Here is a code snippet that asynchronously loads data via Entity Framework in batches of 200 records.

public void LoadPostCodes()
{
    btnStatus.Content = "Started";
    listBox1.Items.Clear();

    (from p in cx.MAS_PostCode select p)
        .ToObservable(Scheduler.NewThread)          // run the query on a background thread
        .Buffer(200)                                // batch results into chunks of 200 records
        .ObserveOn(SynchronizationContext.Current)  // marshal each chunk back to the UI thread
        .Subscribe(ld =>
        {
            foreach (var item in ld)
            {
                ListBoxItem litem = new ListBoxItem();
                litem.Content = string.Format("{0} {1}", item.PC_PostCode, item.PC_Address1);
                listBox1.Items.Add(litem);
                listBox1.ScrollIntoView(litem);
            }
            button1.Content = listBox1.Items.Count.ToString();
        },
        () => { btnStatus.Content = "Finished"; });  // completion handler
}
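To see the Buffer-based chunking in isolation, here is a minimal, self-contained console sketch of the same pipeline shape; the EF query and UI controls are replaced with a plain enumerable and Console output, and all names here are assumptions rather than code from the original sample:

```csharp
using System;
using System.Linq;
using System.Reactive.Linq;

class BufferDemo
{
    static void Main()
    {
        Enumerable.Range(1, 10)   // stand-in for the database query
            .ToObservable()       // turn the enumerable into an observable sequence
            .Buffer(4)            // deliver the results in chunks of 4 records
            .Subscribe(
                chunk => Console.WriteLine("Got {0} items", chunk.Count),
                () => Console.WriteLine("Finished"));
        // prints: Got 4 items / Got 4 items / Got 2 items / Finished
    }
}
```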

I have finished writing the MVVM (nano view model) based scenario and Rx implementations …

I will write it up and publish the code soon.

Regards
Rajnish

N-tier Sync Framework – OCA (Occasional connected Application)

Sync Framework is a comprehensive synchronization platform that enables collaboration and offline access for applications, services, and devices. Sync Framework features technologies and tools that enable roaming, data sharing, and taking data offline. By using Sync Framework, developers can build synchronization ecosystems that integrate any application with data from any store, by using any protocol over any network.

This article shows how to synchronize efficiently with a remote server by using a proxy provider on the local computer over a secure WCF channel. The proxy provider uses the Remote Change Application pattern and Windows Communication Foundation (WCF) to send serialized metadata and data to the remote replica, so synchronization processing can be performed on the remote computer (server) with fewer round trips between the client and server computers. Microsoft Sync Framework synchronizes data between data stores. Typically, these data stores are on different computers or devices that are connected over a network. In our case we will synchronize between a local SQL Server 2008 and a central (remote) SQL Server 2008 (Express Edition). The following (Visual Studio 2010, .NET 4.0) projects are involved in the solution:

  1. Sync.WebServer: Web server (ASP.NET) project to host the WCF sync service, the authentication service, and a web portal to manage sync clients.
  2. Sync.Library: Sync library (class library) used by client and server, which provides the server proxy to the client and also provides the RelationalSyncService base class and the ISqlSyncContract interface for the WCF sync service.
  3. Sync.Client: Windows-based client which performs database sync between the local SyncLocal database and the central SyncCenter database via the WCF service (secured with the SyncServerCert certificate).

For synchronizing two databases, Sync Framework supports two-tier and N-tier architectures that use any server database for which an ADO.NET provider is available. For synchronizing between a client database and other types of data sources, Sync Framework supports a service-based architecture. This architecture requires more application code than two-tier and N-tier architectures; however, it does not require a developer to take a different approach to synchronization.

The following illustrations show the components that are involved in N-tier and service-based architectures. Each illustration shows a single client, but there are frequently multiple clients that synchronize with a single server. Sync Framework uses a hub-and-spoke model for client and server database synchronization. Synchronization is always initiated by the client. All changes from each client are synchronized with the server before the changes are sent from the server to other clients; clients do not exchange changes directly with one another.

N-tier architecture requires a proxy, a service, and a transport mechanism to communicate between the client database and the server database. This architecture is more common than a two-tier architecture, because an N-tier architecture does not require a direct connection between the client and server databases.

N-Tier Architecture

For demo purposes the server-side database is very simple, with just two tables used in the synchronization process. Create a blank database SyncCenter and execute the script SyncCenter_Script.sql (see the database script attached with the source code).

The class ServerProvisioning.cs (in project Sync.WebServer) is used to create the sync filter template and then create a filtered scope for each client based on this template.

We need to add our tables to the CreateTemplate function (filtered template):

//Add the tables which will participate in sync; the sequence matters
scopeDesc.Tables.Add(GetDescriptionForTable("Clients", ConnectionString));
scopeDesc.Tables.Add(GetDescriptionForTable("Products", ConnectionString));

//For each table we add an @Id filter parameter and filter records based on the client id

SqlSyncTableProvisioning Clients = serverTemplate.Provisioning.Tables[GetTableFullName("Clients")];
Clients.AddFilterColumn("Id");
Clients.FilterClause = "[side].[Id] = @Id";
Clients.FilterParameters.Add(new SqlParameter("@Id", SqlDbType.UniqueIdentifier));

SqlSyncTableProvisioning Products = serverTemplate.Provisioning.Tables[GetTableFullName("Products")];
Products.AddFilterColumn("ClientId");
Products.FilterClause = "[side].[ClientId] = @Id";
Products.FilterParameters.Add(new SqlParameter("@Id", SqlDbType.UniqueIdentifier));

Please note that for each table used in the synchronization process we have to add a filter clause to filter records based on the client id. You may use complex SQL queries, with joins etc., to specify which records should be synchronized between multiple clients. The central server (single) holds data for multiple clients (multi-tenancy). See the article Sync framework – choose your primary keys type carefully: http://www.codeproject.com/Articles/63275/Sync-framework-choose-your-primary-keys-type-caref
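For example, a join-style filter for a hypothetical child table might look like the untested sketch below; the Orders table, its columns and the subquery are purely illustrative and are not part of the demo schema, so check the Sync Framework filtering documentation before using a clause like this:

```csharp
//Hypothetical: sync only the Orders rows whose product belongs to the client
SqlSyncTableProvisioning Orders = serverTemplate.Provisioning.Tables[GetTableFullName("Orders")];
Orders.AddFilterColumn("ProductId");
Orders.FilterClause = "[side].[ProductId] IN (SELECT [Id] FROM [Products] WHERE [ClientId] = @Id)";
Orders.FilterParameters.Add(new SqlParameter("@Id", SqlDbType.UniqueIdentifier));
```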

The above-mentioned code is called when you click the create template button on the Central MS Sync web site (project: Sync.WebServer).

Before running the web server project (Sync.WebServer), you need to change the connection string SyncCenterConnectionString in the web.config file. The web server project Sync.WebServer is used to host the WCF sync service and also provides an admin panel from where you can set up and create the sync template and the sync scopes for each client. The sync template creates a filter-based template, specifies the tables used in the sync process, and defines the filter clauses (SQL queries), whereas the sync scope creates the scope of each client based on this template with the client id fixed. So whenever you set up a new sync client, you need to create a scope for that client before it can participate in the sync process.

OK, the web server project is now almost ready to run; however, we first need to install the membership provider and create a few certificates which will be used at a later stage. The ASP.NET membership provider is used to access the web portal.

Follow the steps below to install the membership provider on your SyncCenter database.

Run the aspnet_regsql.exe utility from the C:\Windows\Microsoft.NET\Framework\v2.0.50727 folder on your machine.

Choose your database, then click Next through the remaining pages and finish.

Now we need two X.509 certificates: “SyncServerCert” and “SyncClientCert”.

The certificate SyncServerCert will be used by the web server, whereas SyncClientCert will be distributed to its clients.

To create the certificates, follow the steps mentioned below.

Execute MakeCert.bat, available under the certificates folder (download). This batch file contains the following commands.

Makecert.exe -r -pe -n "CN=SyncServerCert" -b 01/01/2000 -e 01/01/2050 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12

Winhttpcertcfg.exe -g -c LOCAL_MACHINE\My -s "SyncServerCert" -a ASPNET

Winhttpcertcfg.exe -g -c LOCAL_MACHINE\My -s "SyncServerCert" -a "NETWORK SERVICE"

Winhttpcertcfg.exe -g -c LOCAL_MACHINE\My -s "SyncServerCert" -a "LOCAL SERVICE"

Makecert.exe -r -pe -n "CN=SyncClientCert" -b 01/01/2000 -e 01/01/2050 -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12

Makecert is available with the Visual Studio installation, and you can download Winhttpcertcfg.exe from http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=19801

Launch mmc and add the Certificates (Local Computer) snap-in. You will find the certificates SyncServerCert & SyncClientCert under Personal/Certificates.

Copy (right-click copy and paste) these certificates under Trusted People/Certificates and under Trusted Root Certification Authorities/Certificates.

Now export the SyncClientCert certificate (along with its private key) as a .pfx file. This can be deployed to the sync clients.

The server project is ready to run now. Launch the web server project, create a login (register) and navigate to the Sync Clients tab.

Click on Create template, then click on Setup sync for each client.

Setup sync executes the following code,

//For filter parameter name see template below
serverProv.Provisioning.PopulateFromTemplate(SyncConfigurations.ClientScopeName(ClientId), ServerProvisioning.TemplateName);
serverProv.Provisioning.Tables[GetTableFullName("Clients")].FilterParameters["@Id"].Value = ClientId;
serverProv.Provisioning.Tables[GetTableFullName("Products")].FilterParameters["@Id"].Value = ClientId;

which will create a sync scope for each client.

At this stage, our server is ready to sync with 3 clients. Please note that the sync service uses wsHttpBinding with certificate authentication (SyncServerCert).

Sync.Client

For the sync client setup, create a blank database “SyncLocal” on the local machine (or wherever your client application will run), edit the connection string in app.config, and also specify the clientId in the config file. Note: in a production system you may need to provide a service from which the user requests authentication; once authenticated, the server provides the clientId based on the logon details. In that case you don’t need to hard-code the client id; however, for simplicity I have just used a fixed value (read from app.config).
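A minimal sketch of reading that fixed value from app.config; the key name "ClientId" and the helper class are assumptions, not code from the attached solution:

```csharp
using System;
using System.Configuration;

class ClientConfig
{
    // Reads the fixed client id from app.config, e.g.
    //   <appSettings><add key="ClientId" value="11111111-2222-3333-4444-555555555555"/></appSettings>
    public static Guid GetClientId()
    {
        return ParseClientId(ConfigurationManager.AppSettings["ClientId"]);
    }

    public static Guid ParseClientId(string raw)
    {
        if (string.IsNullOrEmpty(raw))
            throw new ConfigurationErrorsException("ClientId is missing from app.config");
        return new Guid(raw);
    }
}
```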

Run this application and click “Sync With Client”; then change the Client Id in the text box and click Sync With Client again.

Please note that Sync Framework will only create the tables (and primary keys); relationships and constraints are not in the scope of sync. For more information check the Microsoft documentation.

Source Code