Azure Tip:- WCF WebRole Accessing Table Storage:- SetConfigurationSettingPublisher error

If you are using a WCF web role to access table storage, you might come across the following error:

SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used

The following steps can help you resolve it:

1. Ensure you have the cloud project set as the startup project.

2. Ensure you are using the service reference of the dev environment web role and not your local IIS port (another post is coming in case you are facing an Add Service Reference issue, but for now just replace the port in the web.config with http://127.0.0.1:81/yourservice.svc).

3. If you are using Azure SDK 1.2, add the following code to your WebRole.cs OnStart method:

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    // Read the setting from the role's service configuration (.cscfg)
    // and hand it to the storage client.
    var connectionString = RoleEnvironment.GetConfigurationSettingValue(configName);
    configSetter(connectionString);
});

4. If you are using Azure SDK 1.3, add a Global.asax to your web service web role and add the following code:

protected void Application_Start(object sender, EventArgs e)
{
    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        var connectionString = RoleEnvironment.GetConfigurationSettingValue(configName);
        configSetter(connectionString);
    });
}
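
With the publisher registered (on either SDK version), your table storage code should now resolve the account without the error. A minimal sketch, assuming your connection string setting is named "DataConnectionString" and using a made-up table name:

// This is the call that threw the error before the publisher was registered.
var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

// Any storage client created from the account now works as usual.
var tableClient = account.CreateCloudTableClient();
tableClient.CreateTableIfNotExist("MyEntities");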

5. Ensure you have WCF HTTP Activation turned on (Control Panel -> Windows Features).

Watch this space for lots more Azure tips!

Cennest!

 

Bulb Flash:- Azure Development Quick Tip!!

Recently we were stumped by the following error while working on an Azure project:

Error 1 The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters. C:\Program Files\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0\Microsoft.CloudService.targets 202 5 OrderManagement

After a lot of head-banging, a simple quick fix was to move the project to a root directory like C:\ or D:\ (basically, shorten the project's path).

Hope you get to this quick fix before you get to the head-banging. :)

Cennest!

Bulb Flash:- SQL Azure tip; Firewall rules: allow all incoming IPs during testing and development!

Before we even show you how, do note that this should be done only when testing with non-critical data which will cause no issues if accessed from unintended applications.

SQL Azure tracks the IP address of incoming connections for security. However, sometimes during development (and co-development) when you don't have a static IP, it becomes quite a pain to keep checking the firewall rules every time the app starts misbehaving… so during testing and development you can set the firewall rules to allow all connections by giving them the following settings:

 

(Screenshot: a firewall rule with start IP 0.0.0.0 and end IP 255.255.255.255, which allows every incoming address.)

Hope this saves you some time and headache… do remember to change these settings before going to production or when testing with critical data!

Until next time!

Cennest

Microsoft Azure AppFabric Caching Service Explained!

MIX always comes with a mix of feelings… excitement at the prospect of trying out the new releases, and the heartache that comes with trying to understand the new technologies "in depth"… and so starts the "googling… oops, binging"… blogs… videos… What does it mean? How does it impact me?

One such very important release at MIX 2011 is the AppFabric Caching Service. At Cennest we do a lot of Azure development and migration work, and this feature caught our immediate attention as something which will have a high impact on the architecture, cost and performance of new applications and migrations.

So we collated information from various sources, and here is an attempt to simplify the explanation for you!

What is the Caching service?

The Caching service is a distributed, in-memory application cache service that accelerates the performance of Windows Azure and SQL Azure applications by letting you keep data in memory, saving you the need to retrieve that data from storage or a database. (Implicit cost benefit? Well, that depends on the pricing of the Caching service… yet to be released.)

Basically it's a layer that sits between the database and the application and can be used to "store" data, preventing frequent trips to the database, thereby reducing latency and improving performance.


How does this work?

Think of the Caching service as Microsoft running a large set of cache clusters for you, heavily optimized for performance, uptime, resiliency and scale-out, exposed as a simple network service with an endpoint for you to call. The Caching service is a highly available, multitenant service with no management overhead for its users.

As a user, what you get is a secure Windows Communication Foundation (WCF) endpoint to talk to, the amount of usable memory you need for your application, and APIs for the cache client to call in to store and retrieve data.


The Caching service does the job of pooling in memory from the distributed cluster of machines it’s running and managing to provide the amount of usable memory you need. As a result, it also automatically provides the flexibility to scale up or down based on your cache needs with a simple change in the configuration.

Are there any variations in the types of caches available?

Yes. Apart from using the cache on the Caching service, there is also the ability to cache a subset of the data that resides in the distributed cache servers directly on the client—the web server running your website. This feature is popularly referred to as the local cache, and it's enabled with a simple configuration setting that allows you to specify the number of objects you wish to store and the timeout settings to invalidate the cache.
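
As a rough sketch of what enabling the local cache looks like in code (the client can also be configured in web.config; the numbers here are placeholders for illustration, not recommendations):

using System;
using Microsoft.ApplicationServer.Caching;

// Keep up to 10,000 objects on the web server itself, invalidating each
// one 5 minutes after it was fetched from the cache cluster.
var config = new DataCacheFactoryConfiguration();
config.LocalCacheProperties = new DataCacheLocalCacheProperties(
    10000,
    TimeSpan.FromMinutes(5),
    DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

var factory = new DataCacheFactory(config);
DataCache cache = factory.GetDefaultCache();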


What can I cache?

You can pretty much keep any object in the cache: text, data, blobs, CLR objects and so on. There's no restriction on the size of the object, either. Hence, whether you're storing explicit objects in the cache or storing session state, object size is not a consideration in choosing whether you can use the Caching service in your application.

However, the cache is not a database! A SQL database is optimized for a different set of patterns than the cache tier is designed for. In most cases both are needed, and they can be paired to provide the best performance and access patterns while keeping costs low.

How can I use it?

  • For explicit programming against the cache APIs, include the cache client assembly in your application from the SDK and you can start making GET/PUT calls to store and retrieve data from the cache (see the sketch after this list).
  • For higher-level scenarios that in turn use the cache, you need to include the ASP.NET session state provider for the Caching service and interact with the session state APIs instead of interacting with the caching APIs. The session state provider does the heavy lifting of calling the appropriate caching APIs to maintain the session state in the cache tier. This is a good way for you to store information like user preferences, shopping cart, game-browsing history and so on in the session state without writing a single line of cache code.
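
For the first option above, the calls really are as simple as they sound. A minimal sketch against the cache client; the key and value are made up for illustration, and the factory reads its endpoint and access token from the dataCacheClient section of your config:

using Microsoft.ApplicationServer.Caching;

DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetDefaultCache();

// PUT: store any serializable object under a string key.
cache.Put("greeting", "Hello from the cache!");

// GET: returns null if the key is absent (never added, expired or evicted).
string greeting = (string)cache.Get("greeting");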


When should I use it?

A common problem that application developers and architects have to deal with is the lack of guarantee that a client will always be routed to the same server that served the previous request.

When these sessions can’t be sticky, you’ll need to decide what to store in session state and how to bounce requests between servers to work around the lack of sticky sessions. The cache offers a compelling alternative to storing any shared state across multiple compute nodes. (These nodes would be Web servers in this example, but the same issues apply to any shared compute tier scenario.) The shared state is consistently maintained automatically by the cache tier for access by all clients, and at the same time there’s no overhead or latency of having to write it to a disk (database or files).

How long does the cache store content?

Both the Azure and the Windows Server AppFabric Caching services use two techniques to remove data from the cache automatically: expiration and eviction. A cache has a default timeout associated with it, after which an item expires and is removed automatically from the cache.

This default timeout may be overridden when items are added to the cache. The local cache similarly has an expiration timeout. 

Eviction refers to the process of removing items because the cache is running out of memory. A least-recently-used algorithm removes items when cache memory comes under pressure; this eviction is independent of timeouts.
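
Continuing the sketch above, overriding the default timeout is just an extra argument on Put; the key, value and 10-minute lifetime here are arbitrary:

// This item expires 10 minutes after being written, overriding the cache's
// default timeout. Eviction under memory pressure may still remove it earlier.
cache.Put("daily-headlines", "Cloud prices fall again", TimeSpan.FromMinutes(10));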

What does it mean to me as a Developer?

One thing to note about the Caching service is that it’s an explicit cache that you write to and have full control over. It’s not a transparent cache layer on top of your database or storage. This has the benefit of providing full control over what data gets stored and managed in the cache, but also means you have to program against the cache as a separate data store using the cache APIs.

This pattern is typically referred to as cache-aside: you check the cache first when retrieving data and, only when it's not available there, explicitly read the data from the data tier and then load it into the cache. So, as a developer, you need to learn the cache programming model, the APIs, and common tips and tricks to make your usage of the cache efficient.
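
Put together, a minimal cache-aside sketch might look like this (Product and GetProductFromDatabase are stand-ins for your own entity and data-tier call):

using System;
using Microsoft.ApplicationServer.Caching;

[Serializable]
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductCatalog
{
    private readonly DataCache cache = new DataCacheFactory().GetDefaultCache();

    public Product GetProduct(int id)
    {
        string key = "product-" + id;

        // 1. Check the cache first.
        var product = cache.Get(key) as Product;
        if (product == null)
        {
            // 2. Cache miss: go to the data tier...
            product = GetProductFromDatabase(id);

            // 3. ...and populate the cache for subsequent requests.
            cache.Put(key, product, TimeSpan.FromMinutes(30));
        }
        return product;
    }

    private Product GetProductFromDatabase(int id)
    {
        // Stand-in for your real SQL Azure / storage query.
        return new Product { Id = id, Name = "Sample product " + id };
    }
}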

What does it mean to me as an Architect?

What data should you put in the cache? The answer varies significantly with the overall design of your application. When we talk about data for caching scenarios, we usually break it down by data type and access pattern:

  • Reference Data (Shared Read):- Reference data is a great candidate for keeping in the local cache or co-located with the client.


  • Activity Data (Exclusive Write):- Data relevant to the current session between the user and the application.

Take, for example, a shopping cart! During the buying session, the shopping cart is cached and updated with selected products. The shopping cart is visible and available only to the buying transaction. Upon checkout, as soon as the payment is applied, the shopping cart is retired from the cache to a data source application for additional processing.

Such a collection of data is best stored in the cache tier, giving access to all the distributed servers, any of which can send updates to the shopping cart. If this cart were stored in the local cache, it would get lost.


 

  • Shared Data (Multiple Read and Write):- There is also data that is shared, concurrently read and written, and accessed by lots of transactions. Such data is known as resource data.

Depending upon the situation, caching shared data on a single computer can provide some performance improvement, but for large-scale auctions a single cache cannot provide the required scale or availability. For this purpose, some types of data can be partitioned and replicated in multiple caches across the distributed cache tier.

Be sure to spend enough time on capacity planning for your cache. The number of objects, the size of each object, the frequency of access and the pattern of access are all critical in determining not only how much cache you need for your application, but also which layers to optimize (local cache, network, cache tier, using regions and tags, and so on).

If you have a large number of small objects, and you don’t optimize for how frequently and how many objects you fetch, you can easily get your app to be network-bound.

Also, Microsoft will soon release pricing for the Caching service, so you obviously need to ensure your usage of it is "optimized"; and when it comes to the cloud, "optimized = performance + cost"!!

Hope this helps you understand this new Azure term better.

Until Next Time

Cennest! We can help you move to the cloud!


The First Few Steps to Building a Claims Based Application

If you’ve been through my earlier blog “An Introduction to the world of claims” you are now familiar with the most frequently used terms in the “Identity world”.

Next, let's take a generic walkthrough of the basic steps you will go through when you build a claims-aware/claims-based/relying-party application.

Step 1:- You will make your application Claims Aware

The Windows Identity Foundation (WIF) provides a common programming model for claims that can be used by both Windows Communication Foundation (WCF) and ASP.NET applications. You need to add Microsoft.IdentityModel.dll as a referenced assembly, which gives you another property called Identity.Claims.
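
For instance, once a request is authenticated, you can enumerate the claims WIF attached to the current principal. A small sketch for illustration only, assuming WIF is already configured for the site:

using System.Threading;
using Microsoft.IdentityModel.Claims;

// With WIF in the pipeline, the current identity is an IClaimsIdentity.
var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
if (identity != null)
{
    foreach (Claim claim in identity.Claims)
    {
        // Each claim carries a type (a URI), a value, and the issuer it came from.
        System.Diagnostics.Debug.WriteLine(
            claim.ClaimType + " = " + claim.Value +
            " (issuer: " + claim.OriginalIssuer + ")");
    }
}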

Making your application claims aware involves a couple of steps at the config level (we won't get into the details right now), but the good news is that your core code does not change. Most probably your code is of the form:

protected void Page_Load(object sender, EventArgs e)
{
    var user = (User)Session["LoggedUser"];
    var repository = new TimeSheetRepository();
    var expenses = repository.GetTimesheets(user.Id);
    this.MyTimesheetsGrid.DataSource = expenses;
    this.DataBind();
}

Even after making your application claims aware, this code remains unchanged. What changes is an addition to your Global.asax which reads the claims delivered by the STS and creates the session user out of them.

So you will have something like this in your Global.asax file

protected void Session_Start(object sender, EventArgs e)
{
    if (this.Context.User.Identity.IsAuthenticated)
    {
        // Pull the claims the STS delivered for the current user.
        string issuer = ClaimHelper.GetCurrentUserClaim(
            System.IdentityModel.Claims.ClaimTypes.Name).OriginalIssuer;
        string givenName = ClaimHelper.GetCurrentUserClaim(
            WSIdentityConstants.ClaimTypes.GivenName).Value;
        string surname = ClaimHelper.GetCurrentUserClaim(
            WSIdentityConstants.ClaimTypes.Surname).Value;
        string costCenter = ClaimHelper.GetCurrentUserClaim(
            Adatum.ClaimTypes.CostCenter).Value;

        // Map the federated identity onto the application's own user record.
        var repository = new UserRepository();
        string federatedUsername =
            GetFederatedUserName(issuer, this.Context.User.Identity.Name);
        var user = repository.GetUser(federatedUsername);
        user.CostCenter = costCenter;
        user.FullName = givenName + " " + surname;
        this.Context.Session["LoggedUser"] = user;
    }
}

Don't focus on the code details yet. The point to take home is that your code did not change. You just had to make some changes in your web.config, add a handler in your Global.asax, and your app is ready.

 

Step 2:- Decide on your Issuer (STS)

You accept claims from "issuers" you trust. Examples of standard off-the-shelf issuers are ADFS, Live ID, Kerberos etc. Based on your requirements you may end up using one of these ready-made issuers, or maybe, as an application developer with custom authentication requirements, you may be asked to build an issuer. Even if it's the latter, there are sample issuer templates (ASP.NET Security Token Service and WCF Security Token Service) provided in VS2010 to let you build custom issuers, and a lot of guidance on how to configure them, so don't worry about it.

(Screenshot: the STS templates in VS2010's "New Web Site" dialog.)

Step 3:- Make your application “Trust” the issuer.

Like I said earlier, your application needs to trust the token issuer; only then should it accept the claims it receives. When you configure an application to rely on a specific issuer, you are establishing a trust (or trust relationship) with that issuer.

There are several important things to know about an issuer when you establish trust with it:

• What claims does the issuer offer?
• What key should the application use to validate signatures on the issued tokens?
• What URL must users access in order to request a token from the issuer?

Just as you can read information about a web service from its WSDL, here you get all your answers by asking for a FederationMetadata document. This is an XML document that the issuer provides to the application. It includes a serialized copy of the issuer's certificate, which provides your application with the correct public key to verify incoming tokens. It also includes a list of claims the issuer offers, the URL where users can go to get a token, and other more technical details, such as the token formats it knows about.

VS2010 makes trusting an issuer easy by providing an "Add Service Reference"-like experience for adding an issuer.

Right-click on your application and select "Add STS Reference" to start the configuration wizard. More details on how to populate the wizard in another post. Point to take home:- it's easy to add a trusted issuer!


Step 4:- Configure your STS to know about the application!!

The issuer needs to know a few things about an application before it can issue it any tokens:

• What Uniform Resource Identifier (URI) identifies this application?
• Of the claims that the issuer offers, which ones does this application require and which are optional?
• Should the issuer encrypt the tokens? If so, what key should it use?
• What URL does the application expose in order to receive tokens?

Each application is different, and not all applications need the same claims. One application might need to know the user's groups or roles, while another might only need a first and last name. So when a client requests a token, part of that request includes an identifier for the application the user is trying to access.

This step might be different for each issuer. For ADFS, for example, the management console allows you to add a relying party using the "Add a trusted relying party" option, which lets you configure ADFS by telling it what claims you need, and so on. If you are using an existing STS you will need to study its behaviour. In the case of a custom STS built from the VS templates, you can customise it in code, and a lot of the work is also done by FedUtil (the Add STS Reference wizard).

Now that you know the overall steps in making an end-to-end claims-aware app, we will get into some more details next time!

Until Then

Cennest!

Introduction to the world of "Claims" (Windows Identity Foundation & Azure AppFabric Access Control!)

Have you been working on a website or an application where you maintain a user database for people coming from different companies/domains? Where someone has to be responsible for maintaining the consistency of this database? And where you, as the application developer, are required to write code to check authorization rights, allowing or disallowing visitors to visit only the authorized sections of your site?

All the issues I've mentioned so far are pretty common in multi-tenant applications (like SaaS) which cater to multiple companies. Usually you would have each user of your app create a new username/password and store it in your app database. Here are the disadvantages of such an approach:

1. Your users have to remember ANOTHER set of username/passwords

2. You end up storing not just their username/passwords but also other details like their role, reporting manager etc., which is, by the way, already present on their corporate network, so you are basically duplicating information

3. You are responsible for maintaining the duplicated information… if the person gets promoted to manager, your database needs to be updated too

4. What if the person leaves the company? He still has a login to your application until it is manually removed!

Even if your users don't really come from a domain or a company, aren't there enough "identity providers" like Live, Google, OpenID out there? Why do you need to "authenticate" these users? Why not just ask an existing identity provider to check the user's authenticity and let you know more about the user? Basically, "outsource" your "authentication" work and focus on your core capability, which is application logic. Sounds too good to be true?? Welcome to claims-based architectures!!

Microsoft's Windows Identity Foundation provides a framework for building such "claims-based applications". My next sequence of blogs will be an attempt to demystify the "claims-based identity model".

If you are still reading this you are probably saying demystify WHAT???

So let's start with what the claims-based identity model means…

When you build claims-aware applications, the user presents her identity to your application as a set of claims. One claim could be the user's name; another might be her email address. The idea here is that an external identity system is configured to give your application everything it needs to know about the user with each request she makes, along with cryptographic assurance that the identity data you receive comes from a trusted source.


Under this model, single sign-on is much easier to achieve, and your application is no longer responsible for:

• Authenticating users.
• Storing user accounts and passwords.
• Calling to enterprise directories to look up user identity details.
• Integrating with identity systems from other platforms or companies.

Under this model, your application makes identity-related decisions based on claims supplied by the user. This could be anything from simple application personalization with the user's first name, to authorizing the user to access higher-valued features and resources in your application.

Let's also define a few more key terms you will hear again and again…

Claims:- You can think of a claim as a bit of identity information, such as name, email address, age, membership in the sales role, and so on. The more claims your service receives, the more you'll know about the user who is making the request. Claims are signed by an issuer, and you trust a set of claims only as much as you trust that issuer. Part of accepting a claim is verifying that it came from an issuer that you trust. There are steps for establishing a trust relationship between your application and an issuer; I will elaborate on those in another post.

Security Token:- A set of claims serialized and digitally signed by the issuer.

This next one is confusing

Issuing Authority & Identity Provider

An issuing authority has two main features. The first and most obvious is that it issues security tokens. The second feature is the logic that determines which claims to issue. This is based on the user’s identity, the resource to which the request applies, and possibly other contextual data such as time of day.

Some authorities, such as Windows Live ID, know how to authenticate a user directly. Their job is to validate some credential from the user and issue a token with an identifier for the user's account and possibly other identity attributes. These types of authorities are called identity providers.

So basically not all issuing authorities are identity providers. Some of them just accept claims from identity providers and convert them into claims acceptable to your application (Azure AppFabric Access Control is one such example). Basically they don't have authentication logic, just mapping logic.

Security Token Service (STS):- This is another confusing term, as you will see people using "issuer" and "STS" interchangeably. Basically, a security token service (STS) is the technical term for the web interface in an issuing authority that allows clients to request and receive a security token according to certain interoperable protocols.

Relying Party:- When you build a service that relies on claims, technically you are building a relying party. Some synonyms you may have heard are "claims-aware application" and "claims-based application".

Pretty heavy definitions!! It took us some reading to finally find definitions that are easy to understand. Surprisingly, the easiest definitions were not in "A Guide to Claims-based Identity" or "WindowsIdentityFoundationWhitepaperForDevelopers-RTW" but in "A Developer's Guide to Access Control in Windows Azure platform AppFabric".

If you have reached this line then you are definitely on your way to building the next-gen identity-aware apps… so look forward to our next set of blogs!!

Till then!

Cennest

Optimizing your Azure Deployments for Cost and Performance!

This month's TechNet Magazine has quite a useful article on "cost architecting" your Azure deployments.

We had written something along the same lines here.

So be sure to check out both these resources before you think you are ready to get set, design and deploy!!

Happy Architecting!

Cennest