Bulb Flash:- SQL Azure tip: firewall rules to allow all incoming IPs during testing and development!

Before we even show you how, do note that this should be done only when testing with non-critical data that will cause no issues if accessed from unintended applications.

SQL Azure tracks the caller’s IP address for security. However, during development (and co-development), when you don’t have a static IP, it becomes quite a pain to keep checking the firewall rules every time the app starts misbehaving… so during testing and development you can set the firewall rules to allow all connections by giving them the following settings.

 

[Screenshot: SQL Azure firewall rule settings allowing all incoming IP addresses]
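
If you prefer scripting this over clicking through the portal, a rough sketch along these lines should do it; the server name, credentials and rule name are placeholders, and we’re assuming the sp_set_firewall_rule system procedure that SQL Azure exposes in the master database, so verify against your own setup.

using System.Data.SqlClient;

class FirewallHelper
{
    // DEV/TEST ONLY: opens the SQL Azure firewall to every IPv4 address.
    // Server name, credentials and the rule name below are placeholders.
    static void AllowAllIps(string server, string adminUser, string password)
    {
        string connectionString = string.Format(
            "Server=tcp:{0}.database.windows.net;Database=master;User ID={1};Password={2};Encrypt=True;",
            server, adminUser, password);

        using (var connection = new SqlConnection(connectionString))
        using (var command = connection.CreateCommand())
        {
            // sp_set_firewall_rule creates (or updates) a server-level rule;
            // 0.0.0.0 - 255.255.255.255 covers all incoming IPs.
            command.CommandText =
                "EXEC sp_set_firewall_rule N'AllowAllDevOnly', '0.0.0.0', '255.255.255.255'";
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}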

Hope this saves you some time and headache… just remember to tighten these rules again for production and whenever you test with critical data!

Until next time!

Cennest

We are Moving!

Cennest is finally moving to its new virtual home at www.cennest.com.

While we will be blogging at both places for some time, you will find more information about “What’s Up at Cennest” at our website.

Do visit!

Thinking of moving to the Cloud?

Cennest Technologies is a Microsoft Cloud Essentials Partner with a focus on developing new applications for the Cloud and migrating existing applications to it.

With in-depth knowledge of the framework and practical experience of custom development and live migrations, we are aware of the usual bottlenecks you would face while trying to achieve an optimized deployment on Azure.

Optimized on the Cloud = Performance + Cost Optimization, and Cennest has the knowledge you need to make the right decisions.

So get in touch with us at anshulee@cennest.com and let’s reach for the clouds together!

Anshulee Asthana

Founder:- Cennest Technologies

We will move you to the Cloud!

Microsoft Azure AppFabric Caching Service Explained!

MIX always comes with a mix of feelings: excitement at the prospect of trying out the new releases, and the heartache that comes with trying to understand the new technologies “in depth”… and so begins the googling (oops, binging), the blogs, the videos and so on. What does it mean? How does it impact me?

One very important release at MIX 2011 is the AppFabric Caching Service. At Cennest we do a lot of Azure development and migration work, and this feature caught our immediate attention as something that will have a high impact on the architecture, cost and performance of new applications and migrations.

So we collated information from various sources (references below) and here is an attempt to simplify the explanation for you!

What is the Caching service?

The Caching service is a distributed, in-memory application cache service that accelerates the performance of Windows Azure and SQL Azure applications by allowing you to keep data in memory, saving you the trip to storage or the database. (Implicit cost benefit? Well, that depends on the pricing of the Cache service, which is yet to be released.)

Basically, it’s a layer that sits between the database and the application, and it can be used to “store” data and prevent frequent trips to the database, thereby reducing latency and improving performance.

[Figure: the Caching service sits between the application and the database]

How does this work?

Think of the Caching service as Microsoft running a large set of cache clusters for you, heavily optimized for performance, uptime, resiliency and scale-out, and exposed simply as a network service with an endpoint for you to call. The Caching service is a highly available, multitenant service with no management overhead for its users.

As a user, what you get is a secure Windows Communication Foundation (WCF) endpoint to talk to, the amount of usable memory you need for your application, and APIs for the cache client to call to store and retrieve data.

[Figure: cache clients talking to the Caching service through its WCF endpoint]

The Caching service does the job of pooling memory from the distributed cluster of machines it runs and manages, providing the amount of usable memory you need. As a result, it also gives you the flexibility to scale up or down based on your cache needs with a simple configuration change.

Are there any variations in the types of caches available?

Yes. Apart from using the cache on the Caching service, there is also the ability to cache a subset of the data that resides in the distributed cache servers directly on the client, i.e. the web server running your website. This feature is popularly referred to as the local cache, and it’s enabled with a simple configuration setting that lets you specify the number of objects you wish to store and the timeout settings used to invalidate the cache.

[Figure: the local cache on the web server in front of the distributed cache]
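
For reference, here is a minimal sketch of what enabling the local cache looks like in code, using the DataCacheFactoryConfiguration and DataCacheLocalCacheProperties types from the caching client assembly; the object count and timeout values are purely illustrative, and the cache endpoint/authentication settings (normally read from the dataCacheClient config section) are omitted.

using System;
using Microsoft.ApplicationServer.Caching;

class LocalCacheSetup
{
    static DataCacheFactory CreateFactory()
    {
        // Endpoint and authentication settings are omitted here; they normally
        // come from the dataCacheClient section of app.config/web.config.
        var config = new DataCacheFactoryConfiguration();

        // Keep up to 10,000 objects on the web server itself and invalidate
        // each one 5 minutes after it was fetched from the cache cluster.
        config.LocalCacheProperties = new DataCacheLocalCacheProperties(
            10000,
            TimeSpan.FromMinutes(5),
            DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

        return new DataCacheFactory(config);
    }
}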

What can I cache?

You can keep pretty much any object in the cache: text, data, blobs, CLR objects and so on. There’s no restriction on the size of the object either, so whether you’re storing explicit objects or session state, object size is not a factor in deciding whether you can use the Caching service in your application.

However, the cache is not a database! A SQL database is optimized for a different set of patterns than the cache tier is designed for. In most cases both are needed, and they can be paired to provide the best performance and access patterns while keeping costs low.

How can I use it?

  • For explicit programming against the cache APIs, include the cache client assembly from the SDK in your application and you can start making GET/PUT calls to store and retrieve data from the cache (see the sketch after this list).
  • For higher-level scenarios that in turn use the cache, you need to include the ASP.NET session state provider for the Caching service and interact with the session state APIs instead of interacting with the caching APIs. The session state provider does the heavy lifting of calling the appropriate caching APIs to maintain the session state in the cache tier. This is a good way for you to store information like user preferences, shopping cart, game-browsing history and so on in the session state without writing a single line of cache code.
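
As promised above, here is a minimal sketch of the explicit GET/PUT style of usage with the DataCache client; the key and value are made up for illustration, and the endpoint settings are assumed to come from the dataCacheClient configuration section generated when you provision the cache.

using Microsoft.ApplicationServer.Caching;

class CacheClientSample
{
    static void Main()
    {
        // Reads the endpoint/security settings from the default
        // dataCacheClient configuration section.
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        // PUT: store a serializable object under a string key.
        cache.Put("greeting", "Hello from the AppFabric cache!");

        // GET: returns null if the key is absent (or the item has expired).
        string greeting = (string)cache.Get("greeting");
    }
}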

[Figure: the cache APIs and the ASP.NET session state provider on top of the Caching service]

When should I use it?

A common problem that application developers and architects have to deal with is the lack of guarantee that a client will always be routed to the same server that served the previous request.

When these sessions can’t be sticky, you’ll need to decide what to store in session state and how to bounce requests between servers to work around the lack of sticky sessions. The cache offers a compelling alternative to storing any shared state across multiple compute nodes. (These nodes would be Web servers in this example, but the same issues apply to any shared compute tier scenario.) The shared state is consistently maintained automatically by the cache tier for access by all clients, and at the same time there’s no overhead or latency of having to write it to a disk (database or files).

How long does the cache store content?

Both the Azure and the Windows Server AppFabric Caching Service use various techniques to remove data from the cache automatically: expiration and eviction. A cache has a default timeout associated with it after which an item expires and is removed automatically from the cache.

This default timeout may be overridden when items are added to the cache. The local cache similarly has an expiration timeout. 

Eviction refers to the process of removing items because the cache is running out of memory. A least-recently used algorithm is used to remove items when cache memory comes under pressure – this eviction is independent of timeout.
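
To make that concrete, here is a tiny sketch of overriding the default timeout when adding an item, assuming the Put overload that takes a TimeSpan; the key and lifetime are illustrative.

using System;
using Microsoft.ApplicationServer.Caching;

class ExpirationSample
{
    // Stores an item with an explicit ten-minute lifetime instead of
    // relying on the cache-wide default timeout.
    static void CacheForTenMinutes(DataCache cache, string key, object value)
    {
        cache.Put(key, value, TimeSpan.FromMinutes(10));
    }
}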

What does it mean to me as a Developer?

One thing to note about the Caching service is that it’s an explicit cache that you write to and have full control over. It’s not a transparent cache layer on top of your database or storage. This has the benefit of providing full control over what data gets stored and managed in the cache, but also means you have to program against the cache as a separate data store using the cache APIs.

This pattern is typically referred to as cache-aside: you check the cache first when retrieving data, and only when the item is not available there do you explicitly read it from the data tier and then load it into the cache. So, as a developer, you need to learn the cache programming model, the APIs, and the common tips and tricks that make your usage of the cache efficient.
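
A minimal cache-aside sketch might look like the following; Product, GetProductFromDatabase and the key format are hypothetical stand-ins for your own entities and data access code.

using Microsoft.ApplicationServer.Caching;

public class ProductRepository
{
    private readonly DataCache cache;

    public ProductRepository(DataCache cache)
    {
        this.cache = cache;
    }

    // Cache-aside: try the cache first, fall back to the data tier on a miss,
    // then populate the cache so the next caller gets the fast path.
    public Product GetProduct(int productId)
    {
        string key = "product:" + productId;

        var product = cache.Get(key) as Product;
        if (product == null)
        {
            product = GetProductFromDatabase(productId); // the expensive trip
            cache.Put(key, product);                     // keep it for next time
        }
        return product;
    }

    private Product GetProductFromDatabase(int productId)
    {
        // Your SQL Azure / Entity Framework query would go here.
        return new Product { Id = productId, Name = "Sample" };
    }
}

[System.Serializable]
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}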

What does it mean to me as an Architect?

What data should you put in the cache? The answer varies significantly with the overall design of your application. When we talk about data for caching scenarios, we usually break it down by data type and access pattern:

  • Reference Data (Shared Read):- Reference data is a great candidate for keeping in the local cache or co-located with the client.

[Figure: reference data kept in the local cache on each web server]

  • Activity Data (Exclusive Write):- Data relevant to the current session between the user and the application.

Take, for example, a shopping cart! During the buying session, the shopping cart is cached and updated with selected products. The shopping cart is visible and available only to the buying transaction. Upon checkout, as soon as the payment is applied, the shopping cart is retired from the cache to a data source application for additional processing.

Such a collection of data is best stored on the cache server, where it is accessible to all the distributed web servers that may send updates to the shopping cart. If this data were kept only in the local cache, it would get lost.

[Figure: a shopping cart kept as activity data in the cache cluster]

 

  • Shared Data (Multiple Read and Write):- There is also data that is shared, concurrently read and written, and accessed by lots of transactions. Such data is known as resource data.

Depending upon the situation, caching shared data on a single computer can provide some performance improvement, but for large-scale scenarios such as auctions, a single cache cannot provide the required scale or availability. For this purpose, some types of data can be partitioned and replicated across multiple caches in the distributed cache.

Be sure to spend enough time on capacity planning for your cache. The number of objects, the size of each object, the frequency of access and the pattern of access for these objects are all critical not only in determining how much cache you need for your application, but also in deciding which layers to optimize for (local cache, network, cache tier, using regions and tags, and so on).

If you have a large number of small objects, and you don’t optimize for how frequently and how many objects you fetch, you can easily get your app to be network-bound.
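
To put some purely illustrative numbers on that: 200,000 cached objects averaging 2 KB each is roughly 400 MB of raw data before serialization overhead, and if each page request fetches 50 of those objects at 100 requests per second, that is about 10 MB (roughly 80 Mbits) of cache traffic every second, which is exactly the point where the network rather than the cache becomes your bottleneck.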

Also, Microsoft will soon release the pricing for the Caching service, so you obviously need to ensure your usage of it is “optimized”, and when it comes to the cloud, Optimized = Performance + Cost!

Hope this helps you understand this new Azure offering better.

Until Next Time

Cennest! We can help you move to the cloud!

References:

Bulb Flash:- Firing a Complex SQL Query using Entity Framework 4.0

In one of our recent projects, we decided to swap SQL Express for SQL Compact to ensure easy installation on client machines. Though the move was smooth, there was one hitch: SQL Compact does not support stored procedures!

And here we were with lots of medium-complexity SPs with joins, group-bys, order-bys and min/max functions!

For example:

select top(@number) p.ProjectName as ProjectName,
       p.ClientProjectID as ProjectID,
       c.Client_Name as ClientName,
       Max(pl.EndTimeStamp) as LastTimeWorked
from Projects as p
join ProgramLog as pl on p.ClientProjectID = pl.fk_ClientCode
join Clients as c on pl.fk_ClientCode = c.ClientID
group by p.ClientProjectID, p.ProjectName, c.Client_Name
order by MAX(pl.EndTimeStamp) desc

We clearly had two options:

1. Replace the stored proc logic with LINQ to Entities queries. We gave that a shot but realized it’s not easy to write a LINQ query with all the joins, group-bys, max statements and so on.

2. Fire the SQL query from Entity Framework. We knew this could be done very easily using LINQ to SQL, but hadn’t tried it with EF yet. Searching online brought us to the “ObjectQuery” feature. You can read more about it here.

I’m not going to get into the details, because this didn’t work for us either. Although ObjectQuery works well for simple statements (even parameterized ones), it didn’t work for us with joins. We tried many options, including those given here, but to no avail.

What worked!!

What did work for us was a really simple and beautiful feature provided in EF 4.0 called ExecuteStoreQuery<>.

So the solution was as simple as:

string query = @"select top(@number ) p.ProjectName as ProjectName, p.ClientProjectID as ProjectID,c.Client_Name as ClientName,Max(pl.EndTimeStamp) as LastTimeWorked
           from Projects as p join ProgramLog as pl
           on
           p.ClientProjectID  =pl.fk_ClientCode
           join Clients c
           on pl.fk_ClientCode = c.ClientID

           Group By p.ClientProjectID, p.ProjectName ,c.Client_Name
           Order by MAX (pl.EndTimeStamp) desc";
           var args = new DbParameter[] { new SqlParameter { ParameterName = "number", Value = count } };
           var result = entityContext.ExecuteStoreQuery<MostRecentProjects>(query, args);
           List<MostRecentProjects> resultList = result.ToList();

If you’ve been following closely, you will notice I have the ExecuteStoreQuery method returning a list of MostRecentProjects. Since my query does not return a previously generated “entity” but a mix of values produced by the join, this is not an edmx-generated class but a simple custom class I created, with one property for each value returned by the query, and the ExecuteStoreQuery method was nice enough to fill it for me! (This was, of course, trial and error, and are we glad it worked!)

This is what the class looks like

public partial class MostRecentProjects
{
    public int ProjectID { get; set; }
    public string ProjectName { get; set; }
    public string ClientName { get; set; }
    public DateTime LastTimeWorked { get; set; }
}

Hope this helps!

Until Next Time!

Cennest!

Bulb Flash:- Some practical WPF MVVMLight tips!

One of our major projects recently used WPF 4.0 with the MVVM pattern. We used the MVVM Light Toolkit for implementing the pattern.

The MVVM Light Toolkit definitely has a lot to offer to make your life easier but the documentation is not exemplary!

A few things we learnt down the road as we came to a close on the project:

1. You are better off using the ViewModelLocator. We didn’t use it initially and realized that our ViewModels were not getting disposed and multiple view models were getting created, especially when using the “Commanding” feature!

2. When using commands, if you are not using the ViewModelLocator, try not to set the IsDataSource property, as it will change the DataContext of your screen to the new ViewModel and there will be inconsistency during commanding, because previously set variables will not be available in the new VM. If you do need to set the IsDataSource property, send all the data needed by the command handler as arguments (because previously set data will not be available to the new VM).

3. You need a default empty constructor in all your ViewModels or your XAML screens’ commands won’t work (they instantiate the ViewModel through the empty constructor).

4. You can pass event args as command parameters by setting PassEventArgsToCommand="True" on your EventToCommand in XAML.

5. When using messaging, use the method overload Send<TMessage>(TMessage message, object token). Using the token to register and send messages ensures the message is delivered only to valid subscribers!

6. Every time you register for a message using Messenger.Default.Register, ensure you also Unregister using Messenger.Default.Unregister, or else you might have memory leaks! (A short sketch combining tips 5 and 6 follows this list.)
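
Here is the sketch promised above, combining tips 5 and 6; the message type, token and view model are made up for illustration, while the Register/Send/Unregister overloads and the Cleanup override should match the MVVM Light Messenger API.

using GalaSoft.MvvmLight;
using GalaSoft.MvvmLight.Messaging;

// Hypothetical message type used only for this illustration.
public class ProjectSelectedMessage
{
    public int ProjectId { get; set; }
}

public class ProjectDetailsViewModel : ViewModelBase
{
    // The token scopes delivery: only subscribers registered with the
    // same token receive messages sent with it (tip 5).
    public static readonly object ProjectToken = new object();

    public ProjectDetailsViewModel()
    {
        Messenger.Default.Register<ProjectSelectedMessage>(
            this, ProjectToken, msg => LoadProject(msg.ProjectId));
    }

    private void LoadProject(int projectId)
    {
        // Load the selected project here.
    }

    // Called from the sending view model when the user picks a project.
    public static void AnnounceSelection(int projectId)
    {
        Messenger.Default.Send(
            new ProjectSelectedMessage { ProjectId = projectId }, ProjectToken);
    }

    // Unregister when the view model is cleaned up, or it will never be
    // collected and you get the memory leak described in tip 6.
    public override void Cleanup()
    {
        Messenger.Default.Unregister(this);
        base.Cleanup();
    }
}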

Will try to write more elaborate posts detailing these and other issues. Meanwhile, drop a note at anshulee@cennest.com if you have any queries or need more tips related to MVVM…

Until then!

Cennest!

EF 4.0 Bulb Flash!:- Load only what you need!!

A small tip for those working with Entity Framework 4.0:- We all know the concept of Lazy Loading in the Entity Framework: With lazy loading enabled, related objects are loaded when they are accessed through a navigation property.

A drawback with lazy loading is that an object retrieved from the database comes loaded with all its navigable objects, so you may be querying an “Order” class but it comes loaded with the Order.Customer object.

[Thanks to Julie Lerman for pointing out this inconsistency: lazy loading loads related entities on navigation and does not come “loaded” with them.]

While you may want this in some cases, it makes sense to disable this feature in performance oriented applications and load only what you need!

Contrary to what is written on MSDN, our experience is that when an entity context object gets created, its LazyLoadingEnabled property defaults to true! This is also an issue reported with Microsoft.

So the first step is to disable LazyLoadingEnabled:

ProgramEntities entityContext = new ProgramEntities();
entityContext.ContextOptions.LazyLoadingEnabled = false;

List<Order> orderList = entityContext.Orders.ToList();

Next, load what you need!

orderList.ForEach(p => entityContext.LoadProperty(p, "Customer"));

Now you can access the customer as Order.Customer while ensuring you did not load other related navigable properties like Order.Contents.

Hope this helps “lighten” up your code a bit!!

Cennest !!