Creating an Angular Single Page Application with Azure Active Directory and adal.js that uses an ASP.NET WebAPI

This sample shows how to create a single page application (SPA) that uses Azure Active Directory (AAD) authentication with adal.js and that consumes an ASP.NET WebAPI which is also secured with AAD.

The source code for this sample can be found in the angular2-adaljs-webapi GitHub repository.

Set up the applications

  1. Create an Angular application.
    I’ve started from the Angular QuickStart seed to bootstrap an easy to use SPA.
  2. Create a WebAPI application.
    I’ve started in Visual Studio by creating a new ASP.NET Web Application, using the Empty template with Web API folders and core references added to it.

Implementing the WebAPI

I’ll set up the WebAPI first to provide data to the SPA without any authentication.

  1. Create a model
    public class Message
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Body { get; set; }
        public string Author { get; set; }
        public DateTime PublishedAt { get; set; }
    }
  2. Create a WebAPI 2 controller
    public class MessageController : ApiController
    {
        private IList<Message> _messages = new List<Message>()
        {
            new Message { Id = 1, Title = "Lorem ipsum", Body = "Lorem ipsum dolor sit amet, consectetur adipiscing elit.", PublishedAt = DateTime.Now},
            new Message { Id = 2, Title = "Pellentesque convallis", Body = "Pellentesque convallis finibus erat, sed lacinia eros mattis quis.", PublishedAt = DateTime.Now},
            new Message { Id = 3, Title = "Maecenas scelerisque", Body = "Maecenas scelerisque pretium risus, eu gravida elit porttitor id.", PublishedAt = DateTime.Now}
        };
    
        public IHttpActionResult Get(int id)
        {
            var message = _messages.FirstOrDefault(m => m.Id == id);
    
            if (message == null)
            {
                return NotFound();
            }
    
            return Ok(message);
        }
    }

This results in a WebAPI that can be consumed like this:
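
For example, a GET request to http://localhost:50071/api/message/1 (the same base URL the Angular service uses later on) returns the message as JSON, roughly like this; Author is not set in the sample data and the PublishedAt value depends on when the request is made:

    {
        "Id": 1,
        "Title": "Lorem ipsum",
        "Body": "Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
        "Author": null,
        "PublishedAt": "2017-06-01T09:30:00.0000000+02:00"
    }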

Implementing the Angular single page application

The SPA will consume the WebAPI I’ve created before and show the data on the screen. I will try to follow at least a few of the standards and best practices of Angular development, but bear in mind that this application is not meant to serve as a reference application.

  1. Create a model to represent your data
    export class Message {
        Id: number;
        Title: string;
        Body: string;
        Author: string;
        PublishedAt: Date;
    }
  2. Create a service that consumes the WebAPI
    import { Injectable } from '@angular/core';
    import 'rxjs/add/operator/toPromise';
    import { Http } from '@angular/http';
    import { Message } from './message';
    
    @Injectable()
    export class MessageService {
        private messageUrl = 'http://localhost:50071/api/';
    
        constructor(private http: Http) {}
    
        getMessage(id: number): Promise<Message> { 
            return this.http.get(this.messageUrl + 'message/' + id)
                .toPromise()
                .then(response => response.json() as Message)
                .catch(this.handleError);
        }
    
        private handleError(error: any): Promise<any> {
            console.error('An error occurred', error); // for demo purposes only
            return Promise.reject(error.message || error);
        }
    }
  3. Create a component to call the service and display the data
    import { Component } from '@angular/core';
    import { Message } from './message'
    import { MessageService } from './message.service';
    
    @Component({
        selector: 'message',
        template: `<div *ngIf="message; else noMessage">
            <h2>{{message.Title}}</h2>
            <div>{{message.Body}}</div>
            <br />
            <div><label>Author: </label>{{message.Author}}</div>
            <div>{{message.PublishedAt | date:'fullDate'}}</div>
        </div>
        <ng-template #noMessage>No message loaded yet.</ng-template>
        <button (click)="getMessage()">Get message</button>`
    })
    export class MessageComponent {
        messageId: number;
        message: Message;
    
        constructor(private messageService: MessageService) {
            this.messageId = 0;
            this.message = null;
        }
    
        getMessage() {
        this.messageId = Math.floor((Math.random() * 3) + 1);
            this.messageService.getMessage(this.messageId).then(m => this.message = m);
        }
    }
  4. Add routing and declarations to the Angular app.
    This is what my app.module.ts looks like:

    import { NgModule }      from '@angular/core';
    import { BrowserModule } from '@angular/platform-browser';
    import { RouterModule } from '@angular/router';
    import { HttpModule } from '@angular/http';
    import { AppComponent }  from './app.component';
    import { MessageComponent } from './message.component';
    import { MessageService } from './message.service';
    
    var routeConfig = [
      {
        path: 'messages',
        component: MessageComponent
      }
    ];
    
    @NgModule({
      imports: [BrowserModule, RouterModule.forRoot(routeConfig), HttpModule ],
      declarations: [ AppComponent, MessageComponent ],
      providers: [ MessageService ],
      bootstrap:    [ AppComponent ]
    })
    export class AppModule { }

    And this is what my app.component.ts looks like:

    import { Component } from '@angular/core';
    
    @Component({
      selector: 'my-app',
      template: `<a routerLink="/">HOME</a> <a routerLink="messages">Messages</a>
        <h1>Hello {{name}}</h1>
        <router-outlet></router-outlet>`
    })
    export class AppComponent { 
      name = 'Angular'; 
    }

Enable CORS to allow cross origin web requests

As the Angular application and the WebAPI are served from different hosts, we need to explicitly allow the SPA to consume data from the WebAPI. We do this by enabling CORS in the WebAPI and allowing requests originating from the SPA.

  1. Add the CORS package to the WebAPI
    Install-Package Microsoft.AspNet.WebApi.Cors
  2. Enable CORS in WebApiConfig.cs
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            config.EnableCors();
    
            // Web API configuration and services
    
            // Web API routes
            config.MapHttpAttributeRoutes();
    
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
  3. Allow GET requests coming from the SPA to the controller
    [EnableCors(origins: "http://localhost:3000", headers: "*", methods: "get")]
    public class MessageController : ApiController
    {

Add Azure Active Directory Authentication to the Angular SPA

I’ve already described how to add AAD authentication to an existing Angular 2 application in a previous blog post, Add Azure Active Directory to an existing Angular 2 Single Page Application. I will follow the steps outlined there. In short, this is how it is done:

  1. Configure the app to use SSL
  2. Register the application in Azure Active Directory
  3. Configure it to use OAuth2
  4. Implement and configure adal.js
  5. Add login and (optionally) logout functionality to your app that logs in to AAD
  6. Add a route guard to protect routes from unauthorized access and force AAD authentication

I’ve now created an Angular2 SPA that requires Azure Active Directory authentication (in some parts of the application), and that consumes a WebAPI that does not yet require authentication.

Set up the WebAPI to require authentication

The next step is to set up the ASP.NET WebAPI to require authentication on the service the Angular2 SPA is consuming. This step is also described in a previous blog post. You can read it here: Add Azure Active Directory to an existing ASP.NET MVC web application.

I am using Visual Studio 2017, so the easiest way for me to add Azure Active Directory authentication is by right-clicking on the Connected Services item in the project:

  1. Right-click on the Connected Services item and select Add Connected Service
  2. From the list of connected services select Authentication with Azure Active Directory to configure single sign-on in your application
  3. On the introduction screen of the wizard click Next
  4. On the next screen enter your Domain (tenant) and an App ID URI
    If your WebAPI wasn’t already configured to use SSL, the wizard will do that for you.
  5. Optionally you can click Next to enable Directory access, so the application can read profile information from AAD
  6. Click Finish and the wizard will make the necessary changes to your code, like adding the OWIN middleware and packages, applying the Authorize attribute to the controllers and configuring authentication (a rough sketch of the result is shown below)
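
To give an idea of what the wizard produces, the OWIN startup configuration for a WebAPI protected with AAD bearer authentication typically ends up looking roughly like the sketch below. This is not the literal wizard output; setting names such as ida:Tenant and ida:Audience, and the exact namespaces, can vary between wizard and package versions.

    using System.Configuration;
    using System.IdentityModel.Tokens;
    using Microsoft.Owin.Security.ActiveDirectory;
    using Owin;

    public partial class Startup
    {
        public void ConfigureAuth(IAppBuilder app)
        {
            // Validate incoming JSON Web Tokens issued by the AAD tenant
            // for the App ID URI (audience) registered for this WebAPI.
            app.UseWindowsAzureActiveDirectoryBearerAuthentication(
                new WindowsAzureActiveDirectoryBearerAuthenticationOptions
                {
                    Tenant = ConfigurationManager.AppSettings["ida:Tenant"],
                    TokenValidationParameters = new TokenValidationParameters
                    {
                        ValidAudience = ConfigurationManager.AppSettings["ida:Audience"]
                    }
                });
        }
    }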

Tie the ends together

Now with both a WebAPI and a SPA configured to require Azure Active Directory, all I have to do is have them work together. I do this by telling the Angular2 SPA to send a JSON Web Token with every request sent to the WebAPI.

  1. Add the angular2-jwt libraries to the Angular2 SPA
    npm install angular2-jwt --save
  2. Have the route guard acquire the token for the logged-in user and store it in localStorage. To do this, it needs the App ID URI of the WebAPI service, which you can find in the Azure Portal.
    Add this App ID URI to your adal.js settings.

    import { Injectable } from '@angular/core';
    @Injectable()
    export class SecretService {
        public get adalConfig(): any {
            return {
                tenant: '[your tenant]',
                clientId: '[a GUID, the application ID]',
                redirectUri: window.location.origin + '/',
                postLogoutRedirectUri: window.location.origin + '/',
                resourceId: '[App ID URI]'
            };
        }
    }

    Use it to acquire the token in the route guard:

    if (this.adalService.userInfo.isAuthenticated) {
        this.adalService.acquireToken(this.secretService.adalConfig.resourceId)
            .subscribe(tokenOut => localStorage.setItem('id_token', tokenOut));
    
        return true;
    } else {
        this.router.navigate(['/login']);
        return false;
    }
  3. Change the root URL of the API to its https counterpart and replace the regular Http provider with the AuthHttp provider in the service
    import { AuthHttp, AuthConfig, AUTH_PROVIDERS, provideAuth } from 'angular2-jwt/angular2-jwt';
    constructor(private http: AuthHttp) {}
  4. Add the provideAuth configuration in app.module.ts so that the token acquired by the adal service (and stored in step 2) is added to each request to the WebAPI that requires authentication
    providers: [MessageService, SecretService, AdalService, RouteGuard, provideAuth({
      tokenGetter: (() => localStorage.getItem('id_token'))
    }) ],
  5. In the Azure Portal, grant the SPA application API access to the WebAPI application

Add Azure Active Directory to an existing Angular 2 Single Page Application

This article will guide you through the process of configuring your Single Page Application (SPA) in TypeScript (or JavaScript) to use Azure Active Directory (AAD) authentication.

We will use adal.js, the Active Directory Authentication Library (ADAL) for JavaScript, and ng2-adal (which is built upon adal-angular).

Prepare your application to use Azure Active Directory

  1. If your application doesn’t already use SSL, it is highly recommended to enable it now. AAD without SSL, i.e. running over an unsecured connection, is not advisable and a real hassle to set up.
    Note the SSL url of your application, as you will need it later to register the application in AAD.

Register your application in Azure Active Directory

If you haven’t already registered your application in the Azure Portal, follow the steps below:

  1. Sign in to the Azure portal
  2. Choose Azure Active Directory from your services (search using More Services if it isn’t shown yet)
  3. Choose App registrations and Add
  4. Enter a Name, choose Web app / API for Application Type and enter the URL of your web application under Sign-on URL (without the trailing slash)
    The URL is the SSL URL we got earlier when we enabled SSL for our web application
  5. Click Create
  6. Still in your application registrations, choose your application, choose All settings and Properties
  7. Copy the Application ID
  8. Enter the Logout URL as the Sign-on URL you entered earlier, followed by /Account/EndSession
    This will link to the single sign out URL of our application
  9. Also from the Settings menu, add a Key with a duration of 1 or 2 years
    Note down the key, as you will not be able to retrieve it afterwards.

Additional steps required for your SPA

Authentication happens using the OAuth2 protocol. Applications provisioned in AAD are not enabled to use OAuth2 by default, so you need to explicitly opt in:

  1. Still in the Azure Portal and in the page of the application you created before, click on Manifest to open the manifest editor.
    Alternatively, you can download, edit and upload the manifest afterwards, but the inline manifest editor is much easier to use.
  2. Look for the oauth2AllowImplicitFlow setting, which by default is set to false. Set it to true and save the manifest
    "oauth2AllowImplicitFlow": true,

Implementing and configuring adal.js in your Angular 2 SPA – Overview

In general you will need to follow these steps; I will explain them in detail further on:

  1. Acquire the adal.js resources
  2. Create a service that provides you with the AAD settings
  3. Create and use a routeguard
  4. Add the Adal services to your application and initialize them
  5. Create a component to login and logout

Acquire the adal.js resources

  1. If you’re using the Node Package Manager (npm) system, it’s as easy as executing 1 single command to pull in the ng2-adal package and all its dependencies
    npm install ng2-adal --save
    You can also pull in the ng2-adal package with another package manager or manually. Make sure to also pull in all the required dependencies.
  2. If you’re using a module loader like SystemJS, you will need to add the modules to its configuration file, as shown below for the systemjs.config.js file:
    (function (global) {
      System.config({
        paths: {
          // paths configuration
        },
        map: {
          // existing map configuration
    
          // adal libraries
          'ng2-adal': 'npm:ng2-adal',
          'adal': 'npm:adal-angular/lib',
          'adal-angular': 'npm:adal-angular/lib',
        },
        packages: {
          // existing package configuration
    
          // adal packages
          'ng2-adal': { main: 'core.js', defaultExtension: 'js' },
          'adal-angular': { main: 'adal-angular', defaultExtension: 'js' },
          'adal': { main: 'adal.js', defaultExtension: 'js' }
        }
      });
    })(this);

Create a service that provides you with the AAD settings

This is a simple Angular service that stores the AAD settings, so they are easily manageable and accessible.

  1. Create a file called secret.service.ts
    import {Injectable} from '@angular/core';
    
    @Injectable()
    export class SecretService {
        public get adalConfig(): any {
            return {
                tenant: '[your tenant]',
                clientId: '[a GUID, the application ID]',
                redirectUri: window.location.origin + '/',
                postLogoutRedirectUri: window.location.origin + '/'
            };
        }
    }

Create and use a routeguard

A route guard is used to control the router’s behavior and returns true or false to indicate whether the route can be followed or not.

  1. Create an authentication guard (LoggedInGuard.ts) and implement the canActivate() method, which checks whether the user is authenticated via ADAL, returning true if so and otherwise navigating to a login page
    import { Injectable } from '@angular/core';
    import { Router, CanActivate } from '@angular/router';
    import { AdalService } from 'ng2-adal/core';
    
    @Injectable()
    export class LoggedInGuard implements CanActivate {
        constructor(private adalService: AdalService,
            private router: Router) { }
    
        canActivate() {
            if (this.adalService.userInfo.isAuthenticated) {
                return true;
            } else {
                this.router.navigate(['/login']);
                return false;
            }
        }
    }
  2. Protect the route with the authentication guard in your routing configuration (fragmented code sample shown):
    import { SecretService } from "./secret.service"; 
    import { AdalService } from "ng2-adal/core"; 
    import { LoggedInGuard } from './LoggedInGuard';
    
    // ...
    
    { path: 'protected', component: protectedComponent, canActivate: [LoggedInGuard] },
    
    // ...
    
    providers: [AdalService, SecretService, LoggedInGuard],

Add the Adal services to your application and initialize them

  1. In app.component.ts add following code to import the services
    import { Component, OnInit } from '@angular/core';
    import { SecretService } from './secret.service';
    import { AdalService } from "ng2-adal/core";
  2. Still in app.component.ts initialize the Adal service in the constructor with the settings stored in the Secret service
    export class AppComponent implements OnInit {
        profile: any;
      
        constructor(
            private adalService: AdalService,
            private secretService: SecretService) {
            this.adalService.init(this.secretService.adalConfig);
        }
    }
  3. To prevent the user from having to log in again every time, the authentication token is stored in the browser cache. This allows us to retrieve this token and continue using the application without being redirected to the Azure login page again.
    Add following code to app.component.ts to get the user object from cache:

        ngOnInit(): void {
            this.adalService.handleWindowCallback();
            this.adalService.getUser();
        }

Create a component to login and logout

This is a very straightforward way to add a login and logout button to your application. The essence is to call the adalService.login() and adalService.logOut() functions. Integrate them in your application to meet your requirements:

  1. Create a login component (login.component.ts)
    import {Component, OnInit} from '@angular/core';
    import {Router} from "@angular/router";
    import {AdalService} from 'ng2-adal/core';
    
    @Component({
        selector: 'welcome',
        template: '<h1>You need to login first</h1><button (click)="logIn()">Login</button>'
    })
    export class LoginComponent {
    
        constructor(
            private router: Router,
            private adalService: AdalService
        ) {
            if (this.adalService.userInfo.isAuthenticated) {
                this.router.navigate(['/home']);
            }
        }
    
        public logIn() {
            this.adalService.login();
        }
    }

    If the user is already logged in with valid Azure Active Directory credentials, he will immediately be redirected to the /home page.
    Otherwise, the user is presented with the Azure login page to login first, and afterwards redirected to the home page URL you provided in the AAD application registration.

  2. Create a logout component (logout.component.ts)
    import {Component} from '@angular/core';
    import {AdalService} from 'ng2-adal/core';
    
    @Component({
        selector: 'logout',
        template: '<div protected><h1>This is the logout page.</h1><button (click)="logOut()">Logout</button></div>'
    })
    export class LogoutComponent {
    
        constructor(
            private adalService: AdalService
        ) {
        }
    
        public logOut() {
            this.adalService.logOut();
        }
    }

    This will sign out the user from Azure Active Directory, invalidate the user’s authentication token and redirect to the post logout URL.

  3. Add routes to the login and logout components
    import { LoginComponent} from './login.component';
    import { LogoutComponent} from './logout.component';
    { path: 'logout', component: LogoutComponent },
    { path: 'login', component: LoginComponent },
    declarations: [AppComponent, LoginComponent, LogoutComponent, /* ... */ ]

Add Azure Active Directory to an existing ASP.NET MVC web application

There are 2 options to add Azure Active Directory to your existing ASP.NET MVC application.

The easiest one is in Visual Studio. Right-click on your web project, and you are presented with the possibility to configure Azure AD Authentication. This starts a wizard that does some checks and configures your application for you. Prerequisites are described on the Diagnosing errors with the Azure Active Directory Connection Wizard page.

The other, and slightly more difficult option, is to configure your application yourself. And that is what’s described below.

Prepare your application to use Azure Active Directory

  1. If your application doesn’t already use SSL, you need to enable it now. AAD without SSL, i.e. running over an unsecured connection, is not advisable and a real hassle to set up.
    Note the SSL url, as you will need it later to register the application in AAD.

Remove existing authentication (if any)

  1. If you have configured your application in web.config to use any form of authentication, remove it
    <system.web>
      <authentication mode="None" />
    </system.web>
  2. If you have any settings in web.config regarding AAD authentication, remove them also
    <add key="ida:ClientId" value="[some GUID]" />
    <add key="ida:AADInstance" value="https://login.microsoftonline.com/" />
    <add key="ida:Domain" value="[your domain]" />
    <add key="ida:TenantId" value="[some guid]" />
    <add key="ida:PostLogoutRedirectUri" value="https://localhost:44364/" />
  3. It might also be interesting to check the .csproj file for any left-over authentication elements. Disable them and only enable anonymous authentication
    <PropertyGroup>
      <IISExpressAnonymousAuthentication>enabled</IISExpressAnonymousAuthentication>
      <IISExpressWindowsAuthentication>disabled</IISExpressWindowsAuthentication>
    </PropertyGroup>
  4. Remove authentication NuGet packages

Register your application in Azure Active Directory

  1. Sign in to the Azure portal
  2. Choose Azure Active Directory from your services (search using More Services if it isn’t shown yet)
  3. Choose App registrations and Add
  4. Enter a Name, choose Web app / API for Application Type and enter the URL of your web application under Sign-on URL (without the trailing slash)
    The URL is the SSL URL we got earlier when we enabled SSL for our web application
  5. Click Create
  6. Still in your application registrations, choose your application, choose All settings and Properties
  7. Copy the Application ID
  8. Enter the Logout URL as the Sign-on URL you entered earlier, followed by /Account/EndSession
    This will link to the single sign out URL of our application
  9. Also from the Settings menu, add a Key with a duration of 1 or 2 years
    Note down the key, as you will not be able to retrieve it afterwards.

Configure your application to use your Azure AD tenant

  1. Open web.config and add appSettings for:
    <appSettings>
      <add key="ida:ClientId" value="[some GUID]" />
      <add key="ida:AppKey" value="[The key we created earlier]" />
      <add key="ida:Tenant" value="[Tenant name]" />
      <add key="ida:AADInstance" value="https://login.microsoftonline.com/{0}" />
      <add key="ida:RedirectUri" value="[Url of the application]" />
    </appSettings>
    
  2. Replace the AccountController with this code:
    public class AccountController : Controller
    {
        public void SignIn()
        {
            // Send an OpenID Connect sign-in request.
            if (!Request.IsAuthenticated)
            {
                HttpContext.GetOwinContext().Authentication.Challenge(new AuthenticationProperties { RedirectUri = "/" }, OpenIdConnectAuthenticationDefaults.AuthenticationType);
            }
        }
        public void SignOut()
        {
            // Remove all cache entries for this user and send an OpenID Connect sign-out request.
            string userObjectID = ClaimsPrincipal.Current.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
            AuthenticationContext authContext = new AuthenticationContext(Startup.Authority, new NaiveSessionCache(userObjectID));
            authContext.TokenCache.Clear();
    
            HttpContext.GetOwinContext().Authentication.SignOut(
                OpenIdConnectAuthenticationDefaults.AuthenticationType, CookieAuthenticationDefaults.AuthenticationType);
        }
    
        public void EndSession()
        {
            if (HttpContext.Request.IsAuthenticated)
            {
                // Remove all cache entries for this user and send an OpenID Connect sign-out request.
                string userObjectID = ClaimsPrincipal.Current.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
                AuthenticationContext authContext = new AuthenticationContext(Startup.Authority, new NaiveSessionCache(userObjectID));
                authContext.TokenCache.Clear();
            }
    
            // If AAD sends a single sign-out message to the app, end the user's session, but don't redirect to AAD for sign out.
            HttpContext.GetOwinContext().Authentication.SignOut(CookieAuthenticationDefaults.AuthenticationType);
        }
    }

    (Credits for this code go to https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-openidconnect/)

  3. Add a reference to Microsoft.AspNet.Identity, Microsoft.Owin.Security.OpenIdConnect, Microsoft.Owin.Security.Cookies, Microsoft.IdentityModel.Clients.ActiveDirectory
    PM> Install-Package Microsoft.AspNet.Identity.Owin
    PM> Install-Package Microsoft.Owin.Security.OpenIdConnect
    PM> Install-Package Microsoft.Owin.Security.Cookies
    PM> Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory
  4. Replace the Startup class in Startup.Auth.cs with this code:
    public partial class Startup
    {
        //
        // The Client ID is used by the application to uniquely identify itself to Azure AD.
        // The App Key is a credential used to authenticate the application to Azure AD.  Azure AD supports password and certificate credentials.
        // The Metadata Address is used by the application to retrieve the signing keys used by Azure AD.
        // The AAD Instance is the instance of Azure, for example public Azure or Azure China.
        // The Authority is the sign-in URL of the tenant.
        // The Post Logout Redirect Uri is the URL where the user will be redirected after they sign out.
        //
        private static string clientId = ConfigurationManager.AppSettings["ida:ClientId"];
        private static string appKey = ConfigurationManager.AppSettings["ida:AppKey"];
        private static string aadInstance = ConfigurationManager.AppSettings["ida:AADInstance"];
        private static string tenant = ConfigurationManager.AppSettings["ida:Tenant"];
        private static string redirectUri = ConfigurationManager.AppSettings["ida:RedirectUri"];
    
        public static readonly string Authority = String.Format(CultureInfo.InvariantCulture, aadInstance, tenant);
    
        // This is the resource ID of the AAD Graph API.  We'll need this to request a token to call the Graph API.
        string graphResourceId = ConfigurationManager.AppSettings["ida:GraphResourceId"];
    
        public void ConfigureAuth(IAppBuilder app)
        {
            app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
    
            app.UseCookieAuthentication(new CookieAuthenticationOptions());
    
            app.UseOpenIdConnectAuthentication(
                new OpenIdConnectAuthenticationOptions
                {
                    ClientId = clientId,
                    Authority = Authority,
                    PostLogoutRedirectUri = redirectUri,
                    RedirectUri = redirectUri,
    
                    Notifications = new OpenIdConnectAuthenticationNotifications()
                    {
                        //
                        // If there is a code in the OpenID Connect response, redeem it for an access token and refresh token, and store those away.
                        //
                        AuthorizationCodeReceived = OnAuthorizationCodeReceived,
                        AuthenticationFailed = OnAuthenticationFailed
                    }
    
                });
        }
    
        private Task OnAuthenticationFailed(AuthenticationFailedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> context)
        {
            context.HandleResponse();
            context.Response.Redirect("/Home/Error?message=" + context.Exception.Message);
            return Task.FromResult(0);
        }
    
        private async Task OnAuthorizationCodeReceived(AuthorizationCodeReceivedNotification context)
        {
            var code = context.Code;
    
            ClientCredential credential = new ClientCredential(clientId, appKey);
            string userObjectID = context.AuthenticationTicket.Identity.FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier").Value;
            AuthenticationContext authContext = new AuthenticationContext(Authority, new NaiveSessionCache(userObjectID));
    
            // If you create the redirectUri this way, it will contain a trailing slash.  
            // Make sure you've registered the same exact Uri in the Azure Portal (including the slash).
            Uri uri = new Uri(HttpContext.Current.Request.Url.GetLeftPart(UriPartial.Path));
    
            AuthenticationResult result = await authContext.AcquireTokenByAuthorizationCodeAsync(code, uri, credential, graphResourceId);
        }
    }

    (Credits for this code go to https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-openidconnect/)

  5. Add the NaiveSessionCache utility class from https://github.com/Azure-Samples/active-directory-dotnet-webapp-webapi-openidconnect/blob/master/TodoListWebApp/Utils/NaiveSessionCache.cs to your web application
  6. Replace the content of _LoginPartial.cshtml with this code:
    @if (Request.IsAuthenticated)
    {
        <text>
            <ul class="nav navbar-nav navbar-right">
                <li class="navbar-text">
                    Hello, @User.Identity.Name!
                </li>
                <li>
                    @Html.ActionLink("Sign out", "SignOut", "Account")
                </li>
            </ul>
        </text>
    }
    else
    {
        <ul class="nav navbar-nav navbar-right">
            <li>@Html.ActionLink("Sign in", "SignIn", "Account", routeValues: null, htmlAttributes: new { id = "loginLink" })</li>
        </ul>
    }
  7. Decorate the controllers that require authorization with the [Authorize] attribute
    [Authorize]
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }
    }

Now run your application. It should require you to sign in to your Azure AD account and ask your permission to read your user profile, so the application knows who you are.

How to create a zip archive and download it in ASP.NET

In a previous post, How to download multiple files in ASP.NET, I explained how to generate multiple documents and offer them as separate downloads in ASP.NET. One of the options I had when looking for a solution to offer multiple downloads was adding all the documents to 1 single zip archive container and offering that as a download to the user. This solution didn’t completely satisfy the end users, but it is also offered here for those who want to use it.

In this post I will explain how I take the same list of documents and offer them as a zip archive to download. Starting from the multiple download solution, this only required 1 extra step in the process, namely creating a zip archive and adding all the documents to it. The rest of the process is as described in the previous post.

The method takes the same argument as when creating separate download links, namely a list of byte arrays. Each byte array in turn contains the binary content of a document. I use SharpZipLib from ICSharpCode, which can be downloaded here: The Zip, GZip, BZip2 and Tar Implementation For .NET. This is what the method looks like:


Private Function ZipDocuments(ByVal reports As IList(Of Byte())) As Boolean

    ' Add documents to 1 ZIP file, and open in browser
    Using zipOutMemoryStream As New MemoryStream()
        Using zipOutStream As New ZipOutputStream(zipOutMemoryStream)

            'Add documents to Zip File.
            Dim cnt As Integer = 1
            For Each buffer As Byte() In reports
                Dim entry As New ZipEntry(String.Format("{0}_{1}.pdf", "GeneratedFile", cnt))

                zipOutStream.PutNextEntry(entry)
                zipOutStream.Write(buffer, 0, buffer.Length)
                zipOutStream.CloseEntry()
                cnt += 1
            Next

            zipOutStream.Finish()
            zipOutStream.Close()
            zipOutMemoryStream.Close()

            Dim responseBytes As Byte() = zipOutMemoryStream.ToArray()

            'Return Nothing on Empty Zip File
            Const ZIP_FILE_EMPTY As Integer = 22
            If responseBytes.Length <= ZIP_FILE_EMPTY Then
                Return Nothing
            End If

            RegisterDocumentDownload(Guid.NewGuid().ToString(), responseBytes, ContentTypes.ZIP)

        End Using
    End Using
End Function

I first create a (binary) MemoryStream (zipOutMemoryStream) to back the zip output stream (zipOutStream).
Then I loop over the list of documents (or files), create an entry in the zip file (entry, a ZipEntry), and write the binary content to the zip entry.
After adding the files to the zip and cleaning up, I can use the same RegisterDocumentDownload() method from the previous post, and the zip archive will be offered to the user and opened in the browser.

And that’s it…

How to download multiple files in ASP.NET

The project I’m currently assigned to already has an option to generate reports (PDF) that just streams the binary output of the report generator to the response output stream. Something like this:


Dim binReader As New System.IO.BinaryReader(report.ExportToStream())

With Response
    .ClearContent()
    .ClearHeaders()
    .ContentType = "application/pdf"
    .AddHeader("Content-Disposition", "inline; filename=AankondigingControlesEnGevolgen.pdf")
    .BinaryWrite(binReader.ReadBytes(CInt(binReader.BaseStream.Length)))
    .Flush()
    .Close()
End With

This piece of code streams the binary output of the report to the response object and, by setting the right ContentType and header, opens the document in the user’s browser. Works like a charm.

But now I was asked to create a form where the user can select multiple reports to download and open them in the browser. My first answer was: We can’t do that (easily). But then I started to look at the options we have when working in ASP.NET and generating output to the client browser.

The solution I ended up with was so easy that I felt kind of stupid for not thinking of it earlier. This is what I did:

  1. Generate the documents and store them (binary), together with a unique key, in a session variable
  2. Generate download links with that unique key as parameter
  3. Open the links with clientside javascript
  4. In the download page, retrieve the content from the session variable and stream it to the client browser

Let’s take a look at that in detail.

1. Generate the documents and store them (binary), together with a unique key, in a session variable

I created a custom class to hold the binary document content, together with extra information that can be helpful when generating the download:


Private Class ContentTypes
    Public Const PDF As String = "application/pdf"
    Public Const ZIP As String = "application/zip"
End Class

<Serializable()> _
Private Class Download
    Public Name As String
    Public Content() As Byte
    Public ContentType As String
End Class

Name: The name of the file that is generated and is used when the user downloads the file (save to disk)
Content: The binary content of the file
ContentType: Because I don’t want to be limited to 1 specific file type, I include the content type with the download

Currently I’m only using 2 types of documents, but as you can see, this can be easily extended.

2. Generate download links with that unique key as parameter

For each document that I created and stored in a session variable together with a unique key, I generate the client-side script to open a new window with the download link. Because I use the same page to download the document, I can create a URL starting with the querystring question mark:


Private Sub RegisterDocumentDownload(ByVal key As String, ByVal content() As Byte, ByVal contentType As String)
    Dim script As String = String.Format("window.open('?key={0}');", key)
    Dim download As New Download()
    download.Content = content
    download.ContentType = contentType
    download.Name = key

    Session.Add(key, download)
    ScriptManager.RegisterStartupScript(Me, Me.GetType(), "Download_" & key, script, True)
End Sub

3. Open the links with clientside javascript

The JavaScript that is generated will look something like this (when generating 3 downloads):


<script type="text/javascript">
//<![CDATA[
window.open('?key=ee00cb06-81f6-48d7-bed2-6cf0af90d5f8');
window.open('?key=a05d2567-5ab8-4c41-a000-bb0bd16498ca');
window.open('?key=ea7bb0aa-6686-442e-be6f-b14ac14aacf5');
//]]>
</script>

4. In the download page, retrieve the content from the session variable and stream it to the client browser

Because I use the same page to download the file as well, I added code to the Page_Load() event that checks for the “key” parameter:


If Not Request.QueryString("key") Is Nothing Then
    StreamDownload(Request.QueryString("key"))
    Exit Sub
End If

This calls the StreamDownload() method, which takes the download from the session, streams the content to the client browser and cleans up everything before ending processing:


Private Sub StreamDownload(ByVal key As String)

    Guard.ArgumentNotNull(Session(key), "download")

    Dim download As Download = DirectCast(Session(key), Download)
    Dim stream As New MemoryStream()
    Dim formatter As New BinaryFormatter()

    formatter.Serialize(stream, Session(key))
    With Response
        .Clear()
        .ContentType = download.ContentType

        Select Case download.ContentType
            Case ContentTypes.ZIP
                .AppendHeader("Content-Disposition", String.Format("filename={0}.zip", download.Name))
        End Select

        .BinaryWrite(download.Content)
        .Flush()
    End With

    ' cleanup temporary objects
    Session.Remove(key)
    Session.Remove(key & "_download")

    Response.End()

End Sub

As you can see, I also have the possibility to generate zip archives. This is to offer the functionality of downloading multiple documents in 1 zip archive container. I could easily immediately offer this zip download from within the page. But I prefer to use this generic solution, even if I’m only offering 1 file to download. This also gives me the possibility to offer other file formats as well. I just need to add a new content type, and alter the code where needed in the StreamDownload() method.

In a next post, I will show how I created 1 zip archive which contains 3 documents, and offer this as a download to the user.

Aspect Oriented Programming (AOP) with PostSharp

What is AOP (Aspect Oriented Programming)?

Aspect oriented programming breaks down programming logic into separate concerns. It separates and groups blocks of code that perform a specific operation and that can be applied to or reused by different pieces of code, be it methods, classes, properties, and so on.
Commonly used examples of functionality that is often implemented using aspects are logging, exception handling, caching, authorization, and so on.

Why use AOP?

You can write aspects as classes to perform specific functionality. These aspects can then be attached to code objects (classes, methods, properties, events,…) as attributes. This means that you only have to write the code once, and attach it anywhere you want with (mostly) one single line of code.
By separating this code from your business logic into aspects, changes made to these aspects have no impact on your business logic. In this way your code becomes much cleaner and more robust, and it is much easier to maintain, resulting in fewer defects. And with no need to write the same code over and over again, writing less, more robust code means that you can focus on the important parts (the business logic) of your code, be more productive as a programmer and save money on development time.

What can AOP do?

Aspect Oriented Programming can be applied in plenty of usage scenarios

  • Logging: Whether you log to logging files, a database, or any other device, it’s up to the logging aspect to determine what and where to log it, so there’s no need to do this inside your application logic over and over again.
  • Tracing: When you want to start tracing the performance of your application, it can become a tedious task. It becomes even more complicated when you want to be able to turn tracing on and off when debugging or testing your code. By placing your tracing code in an aspect, you can do this in 1 single location, instead of cluttering up your business code.
  • Exception handling: In production environments you don’t want your exceptions (yes, they will occur) to appear to your user and possibly reveal sensitive information. Aspects can handle these exceptions, take appropriate actions and show user-friendly messages.
  • Caching: You can write an aspect that captures a method’s output, store it in a cache, and return it from the cache the next time the method is called again.
  • Authorization: Go further than the built-in security functionality and write your own logic to grant or deny access to certain functionality or data.
  • Auditing: Keep an audit trail of who accesses or changes what data and when.
  • NotifyPropertyChanged: Remember implementing INotifyPropertyChanged into your classes over and over again? This can be solved with 1 aspect applied to your classes as 1 single attribute.
  • Even more examples:
    • Undo / Redo pattern
    • Thread dispatching & synchronization
    • Transaction handling
    • Persistence
    • And so on…

How does PostSharp work?

PostSharp weaves its aspects into your code at compile time, so they get executed at the right time.
From the PostSharp website (http://www.sharpcrafters.com/postsharp/documentation#under-the-hood):

Source

Think of the source code for your project as the parts of a car, and the build process as the assembly line in the factory. PostSharp aspects are written as source code, and applied to other source code artifacts in your application using .NET attributes.

Compiler

The compiler for your language takes all of your application’s source files and converts them into executable binaries. It is just one of the many phases of the build process.

PostSharp

PostSharp is a compiler post-processor: it takes the output from the compiler, and instruments your assemblies and executables to execute your aspects at the appropriate times.

Run-time

Once compiled, your application only needs one or two lightweight PostSharp assemblies to execute. No need to ship the factory with the car!

AOP With PostSharp

No better explanation than a real example. In the following example I will explain how to get started using PostSharp and create your first aspect, for caching. In a second example, I will create a simple tracing aspect to prove that the caching aspect indeed improves performance.

Getting started using PostSharp

The first step is to download PostSharp from the PostSharp website at http://www.sharpcrafters.com/postsharp/download. There’s a free Community and a paid Professional Edition available. A comparison of the features of each version can be found on this page: http://www.sharpcrafters.com/purchase/compare.

The sample application: Ordering pizzas

We start from the real beginning by creating a sample application. I’m creating a “Pizza Ordering System” as an MVC3 Web Application. To make development easier, I will use Entity Framework with SQL Server Compact Edition and MVC Scaffolding. This allows me to write a few model classes and let the scaffolding generate controllers and views for me. The Code First feature of Entity Framework creates the database for me based on the model.
This creates a good starting point for this example.
First of all we’ll add a reference to the PostSharp.dll (SharpCrafters announced that they will have a NuGet package available very soon; in the meantime we’ll have to add it the old-fashioned way).
And because I want to quickly set up a sample application, I install the EntityFramework.SqlServerCompact and MvcScaffolding packages from NuGet. These packages install their dependencies themselves, so I don’t need to take care of that.
I create 3 Model classes, Pizza, PizzaSize and Order for our application, and use the Scaffold command to create controllers and views for them.
Now, as you can see when you take a look at the controllers, the scaffolding created a DbContext that is used and directly addressed in each of the controllers. This isn’t very useful when we want to use caching; we need some sort of service or repository pattern for this. Let’s instruct scaffolding to use a repository (I could have done that from the beginning, but I also wanted to show some of the functionality and strength of the MvcScaffolding package).

Remember, when you instruct scaffolding to recreate controllers and the data context, it needs to recreate the database if you have changed something in your model classes. Follow the instructions in the context file to achieve this.

I also created 3 menu items to the Index action of each of these controllers, to make navigation easier.

Now, let’s create a really simple and straightforward caching class. I know you can do this with the Caching Application Block from the Enterprise Library, or some other framework, but I want to keep the sample straightforward, and since caching isn’t the subject of this blog post, I won’t go deeper into it.

public class Cache
{
    private static readonly IDictionary<string, object> _cache = new Dictionary<string, object>();
    private const int _timeout = 60 * 60 * 24;

    public static bool Contains(string key)
    {
        return _cache.ContainsKey(key);
    }

    public static object Get(string key)
    {
        if (_cache.ContainsKey(key))
        {
            return _cache[key];
        }
        return null;
    }

    public static void Add(string key, object item)
    {
        Add(key, item, _timeout);
    }

    public static void Add(string key, object item, int timeout)
    {
        if (_cache.ContainsKey(key))
        {
            _cache.Remove(key);
        }
        _cache.Add(key, item);
    }

    public static void Remove(string key)
    {
        _cache.Remove(key);
    }

    public static string GenerateKey(Arguments arguments)
    {
        var key = new StringBuilder();

        foreach (var argument in arguments)
        {
            key.AppendFormat("_{0}_{1}", argument.GetType(), argument);
        }

        return key.ToString();
    }
}

This creates an in-memory cache and supports adding, retrieving, removing and checking the presence of an object in the cache. It also has a GenerateKey method that I will use later to generate a unique key based on the arguments passed to the method whose result I want to cache.

The caching aspect

Now, time for some action, create the caching aspect!

Start by creating an “Aspects” folder (we want our project to stay clean of course) and create a new class called “CacheAttribute”. To have our aspect execute code before and after a method is called, it must derive from the OnMethodBoundaryAspect base class. Also, this class needs to be serializable, so apply the [Serializable] attribute.

To execute code before and after a method call, we must override the OnEntry and OnSuccess methods. In OnEntry we check whether the item already exists in the cache and, if it does, skip further execution of the method and set the return value to the cached value. In the OnSuccess method, we add the return value to the cache.

[Serializable]
public class CacheAttribute : OnMethodBoundaryAspect
{
    public override void OnEntry(MethodExecutionArgs args)
    {
        var key = args.Method + "_" + Cache.GenerateKey(args.Arguments);
        var value = Cache.Get(key);

        if (value == null)
        {
            args.MethodExecutionTag = key;
        }
        else
        {
            args.ReturnValue = value;
            args.FlowBehavior = FlowBehavior.Return;
        }
    }

    public override void OnSuccess(MethodExecutionArgs args)
    {
        var key = args.MethodExecutionTag.ToString();
        Cache.Add(key, args.ReturnValue);
    }
}

The next step is to apply the attribute to the methods that we want to cache the result from. We do this by applying the Cache attribute to the All(), AllIncluding() and Find(int id) methods of the PizzaRepository class that scaffolding created for each of our model classes.
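
Applied to the repository that scaffolding generated, that looks roughly like the sketch below. The exact shape of the scaffolded PizzaRepository (and the name of its DbContext, here assumed to be PizzaStoreContext) will differ; the only point is where the [Cache] attribute goes.

using System.Linq;

public class PizzaRepository : IPizzaRepository
{
    // PizzaStoreContext stands in for the DbContext that scaffolding generated.
    private readonly PizzaStoreContext context = new PizzaStoreContext();

    // The Cache aspect short-circuits the call when a cached result exists.
    [Cache]
    public IQueryable<Pizza> All()
    {
        return context.Pizzas;
    }

    [Cache]
    public Pizza Find(int id)
    {
        return context.Pizzas.Find(id);
    }

    // AllIncluding() is decorated in exactly the same way.
}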

When we launch the debugger after setting breakpoints in the OnEntry() and OnSuccess() methods of the CacheAttribute class, and in the All(), AllIncluding() and Find(int id) methods of the PizzaRepository class, we can see that the OnEntry() method of the CacheAttribute is executed first. The first time the PizzaRepository methods are executed, execution is passed to the original method and the result is stored in the cache once it completes. The next time, the method execution is skipped and the results are returned directly from the cache.

Nice, isn’t it? But does this really improve the performance of our application?

The performance aspect

To answer this question, we’ll create another aspect to trace the time of the execution of a method, the TimeTracingAttribute.

Again, start by creating an attribute called “TimeTracingAttribute” in the “Aspects” folder, make it Serializable and inherit from OnMethodBoundaryAspect.

Again, we use the OnEntry() and OnExit() methods, together with a Stopwatch this time. The Stopwatch will be a static instance on the TimeTracingAttribute. In the OnEntry() method we store the value of the ElapsedTicks property in the MethodExecutionTag property of the attribute’s args. In the OnExit() method we read it back to calculate the elapsed time (in ticks) and write that to the Trace.

[Serializable]
public class TimeTracingAttribute : OnMethodBoundaryAspect
{
    static Stopwatch _stopwatch = new Stopwatch();

    static TimeTracingAttribute()
    {
        _stopwatch.Start();
    }

    public override void OnEntry(MethodExecutionArgs args)
    {
        args.MethodExecutionTag = _stopwatch.ElapsedTicks;
    }

    public override void OnExit(MethodExecutionArgs args)
    {
        var executionTime = _stopwatch.ElapsedTicks - (long) args.MethodExecutionTag;
        Trace.WriteLine(string.Format("{0}: {1} ticks.", args.Method.Name, executionTime));
    }
}

Now, apply this TimeTracing attribute to the Index() and Details(int id) methods of the PizzaController and the Create() and Edit(int id) methods of the OrdersController.

When you start the debugger of Visual Studio, you will see the output of the TimeTracingAttribute written to the output window when you open the Pizza page or Edit an order multiple times. See the performance boost?

Now, this is nice when we have Visual Studio open in debugging mode, but it would be even nicer if we could see the results outside of Visual Studio. We don’t do that in our aspect; it has nothing to do with AOP, but with another gem that is available from NuGet: Glimpse.

Glimpse is a web debugger used to gain a better understanding of what’s happening inside of your webserver. From the Glimpse website:

What Firebug is for the client, Glimpse does for the server… in other words, a client side Glimpse into what’s going on in your server.

Get the Glimpse package from NuGet, rebuild your application and start it in the browser. One action we must take before we can see Glimpse at work is enabling it for our application. Do this by browsing to the /Glimpse/Config page of your application and clicking the big “Turn Glimpse On” button.

Now when you open your page again, you will see a small eye-con in the bottom-right corner of your browser, which will open the Firebug for your server. Clicking on it will open the Glimpse window with tracing information in the “Trace” tab.

Conclusion

Aspect Oriented Programming (AOP) with PostSharp, or another AOP tool, significantly improves the robustness of your application and keeps your code clean. It also improves the productivity of the development team and allows developers to focus on their important tasks.

Introducing NuGet

What is NuGet?

From the NuGet website:

NuGet is a Visual Studio extension that makes it easy to install and update open source libraries and tools in Visual Studio.

Installing NuGet

Method 1: NuGet comes with ASP.Net MVC3

The easiest way is to have ASP.Net MVC3 installed on Visual Studio 2010 (any version, even Visual Studio Express is supported), as NuGet comes with it. You can easily get ASP.Net MVC3 from the ASP.NET MVC website or with the Microsoft Web Platform Installer. But of course, you can also use NuGet if you’re not developing ASP.Net MVC3 applications.

Method 2: Using the Extension Manager

The second way you can obtain NuGet on your system is by using the Visual Studio Extension Manager. Search in the Online Gallery for the NuGet Package Manager and hit the “Download” button.

After a restart of Visual Studio the Library Package Manager is available in the Tools menu.

Method 3: From the NuGet website

It can hardly be easier: just go to the NuGet website and click the “Install NuGet” button. This will lead you to the NuGet Package Manager page on the Visual Studio Gallery MSDN site; click the “Download” button there to initiate the installer.

This will pop up the same installer as described in the previous method.

Managing Packages

There are 2 ways you can manage packages. The easiest way is by using the Add Library Package Reference GUI. The other way is by using PowerShell commands in the Package Manager Console. Either way, you can achieve the most common tasks. The PowerShell method is useful when you don’t have a solution open or when you need commands that are only available as PowerShell commands.

Finding a package using the Add Library Package Reference Dialog

Right-click References in your project, select Add Library Package Reference and look online in the NuGet official package source for the library you want to add to your project.

After you hit the Install button, the library will be added to your project.

ELMAH (Error Logging Modules and Handlers) is an application-wide error logging facility that is completely pluggable. It can be dynamically added to a running ASP.NET web application, or even all ASP.NET web applications on a machine, without any need for re-compilation or re-deployment.

As you can see, there is a new reference to the Elmah binary added to our project. And if you open the web.config file, you will see that the package installer added the required configuration settings for you.

Note: OK, I cheated in the samples above. In the 1.2 version (the latest at the moment of writing) the web.config transformation was removed from the package, so I installed the 1.1 version instead using the Package Manager Console.

NuGet is also able to determine if your library has dependencies on other libraries and will install or upgrade them if they are not already installed. All with one single click.

Removing a package

Removing a package is as easy as opening the Add Library Package Reference screen, selecting the Installed packages and hitting the Uninstall button. Et voilà, NuGet not only removes the libraries, but also cleans up the web.config for you.

Updating a package

After a while, you will notice that some packages have been updated in the repository. If you want the updated versions in your project, open up the Add Library Package Reference screen, select Updates and see which packages have updates available. Clicking the Update button of a package will update your project to the latest version.

Finding a package using the Package Manager Console

Anything you did before with the Add Library Package Reference GUI can also be done from the command line. Open up the Package Manager Console (a PowerShell prompt), start typing a command and hit the TAB key. Yes, the Package Manager Console supports IntelliSense!

To search for a package, use the Get-Package command (with the -Filter parameter, or you will get a list of all the packages in the repository):

ScreenShot007

To install a package, use the Install-Package command, followed by the package name. You can even use autocomplete in the package name!

ScreenShot008
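For reference, a typical search-and-install session in the console looks roughly like this (Elmah is just an example package, and the exact switches may differ a bit between NuGet versions):

Get-Package -Filter elmah
Install-Package Elmah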

Removing a package

How about trying the command Uninstall-Package followed by the package name. Would that uninstall the package?

ScreenShot009

Yes it does!

Updating a package

Then would the command Update-Package update a package just like that?

ScreenShot010

Man, this is easy…
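Put together, removing and updating from the console each boil down to a single command (again with Elmah as the example package):

Uninstall-Package Elmah
Update-Package Elmah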

Packages folder

Now, to better understand what happens when we install a package, we can take a look in the packages folder that NuGet has created in our solution’s folder:

ScreenShot011

You can see that a package folder contains several subfolders, of which the lib folder is the minimal requirement; that’s where your assembly DLL is. In this example of the SqlServerCompact EntityFramework package, we also have a Content folder with an App_Start subfolder, which contains a script that runs every time the application starts. And that’s also why the package has a dependency on the WebActivator package.

Any structure of folders and files can be placed in the Content folder; it will be copied with the same structure into your project’s folder. Use this when you want to include JavaScript, CSS, images, or any other file in the project.

This example also contains a tools folder where you can place PowerShell scripts that automatically run when the package is installed or removed.
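As an illustration only (the package name, version and file names are made up), an installed package folder might look something like this:

packages/
  SomePackage.1.0.0/
    SomePackage.1.0.0.nupkg
    lib/
      SomePackage.dll
    Content/
      App_Start/
        SomePackageStart.cs
    tools/
      install.ps1
      uninstall.ps1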

When we take a look at the Newtonsoft.Json package, we can see that the lib folder can also contain subfolders to support multiple .NET Framework versions and profiles. Although this is not a requirement, it can help to better target your package and optimize your code for different frameworks. The NuGet package installer automatically selects the correct version of the assembly in the package.

ScreenShot012

Note: The WebActivator package allows other packages to execute some startup code in web applications by creating an App_Start folder in your project and executing the code that’s inside it. The NuGet documentation details the requirements to use the WebActivator.

NuGet Package Explorer

Another way to look at packages is by using the NuGet Package Explorer application. This is a separate tool that can be downloaded from the NuGet CodePlex page at http://nuget.codeplex.com/releases/view/59864.

This tool gives you all the information that is in the package spec (metadata) file and the package content.

With the Package Explorer you can also edit the package metadata, if you’re not fond of editing the XML spec file. More on that in the next chapter about creating packages.

ScreenShot013

Creating and Publishing Packages

When you want to create and publish packages, you again have the choice between a GUI, the NuGet Package Explorer, and the command line.

Creating a NuGet package from the command line

First, we need to download the NuGet Command Line from http://nuget.codeplex.com/releases/view/58939, and make sure NuGet.exe is on our path.

Let’s create a really straightforward assembly:

ScreenShot014

Before we can create our package, we need to create a .nuspec package specification file; this is the file that contains all the metadata of our package:

ScreenShot015

This is basically a template which you’ll have to modify yourself. A better way is to add an AssemblyInfo.cs file with your assembly metadata to your project and include it when compiling the assembly.

ScreenShot017

ScreenShot018

As you can see, you can add or link to a license or license file, a URL for an icon, tags, dependencies on other packages and so on. Omit them if you don’t need them. Open it up with the Package Explorer to see the details.

When you build the package with the “nuget pack” command, you get a package file with the version number from the nuspec file in its name, which allows you to differentiate every version you make.
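As a rough sketch, a minimal nuspec file for such an assembly could look like the following (the id, version, author and description are placeholders):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyLibrary</id>
    <version>1.0.0</version>
    <authors>Ronald</authors>
    <description>A really straightforward assembly.</description>
  </metadata>
</package>

Running “nuget pack MyLibrary.nuspec” would then produce MyLibrary.1.0.0.nupkg.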

Creating a NuGet package with the NuGet Package Explorer

Everything we did with the NuGet Command Line can also be done with the NuGet Package Explorer. Open up the NuGet Package Explorer, select File – New, and edit your package metadata and contents.

ScreenShot019

The result is the same package, created with another tool. Choose whichever fits you best: the NuGet Command Line or the NuGet Package Explorer.

Exploring a NuGet package with the NuGet Package Explorer

A NuGet package is essentially a zip file, so you can extract it and explore its contents. When you extract a package, you see the nuspec file is also included. When you open it with the NuGet Package Explorer, you can see the metadata and contents of the package.

Take a look at the Elmah package:

ScreenShot021

ScreenShot023

File and source code transformations

Often we need to add code to our project, or execute some script, after we have referenced an assembly and before we can start using it; think of configuration entries in web.config, or extra references in our project.

This is where NuGet gives us source code transformations. The idea is to add a file to your package with just the transformations you need to make to the project’s source file, append “.transform” to its name, and place it in the Content folder, or in any subfolder where the file would sit in your project. NuGet then takes that file and merges it with any existing file.

For example the Elmah package needs to add some configuration sections and an HttpHandler to the web.config file. It does this by including a web.config.transform file with just these entries that it needs to add. NuGet will take this file and merge it with the application’s web.config file. Even nicer is that when you decide to remove the Elmah package, NuGet is able to clean up the web.config for you!

ScreenShot024
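To give an impression of what such a transform looks like, Elmah’s web.config.transform contains roughly the following (simplified; check the actual package for the exact entries):

<configuration>
  <system.web>
    <httpModules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
    </httpModules>
    <httpHandlers>
      <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
    </httpHandlers>
  </system.web>
</configuration>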

By adding PowerShell scripts to the tools subfolder of your package, you can do just about anything, and not only when your package is installed or uninstalled, or when the application starts up; the possibilities go far beyond that. A good example is the MvcScaffolding package; you should definitely add it and take a look at the package, there is really nice stuff in there.

Publishing your own NuGet packages

One way to publish your packages, besides pushing them to the official NuGet server at http://nuget.org, is to set up a shared (network) folder. You can do this in the Package Manager Settings dialog, under Package Manager – Package Sources. Now you have a new package source available when you open the Add Library Package Reference dialog.

Another way is to create a new, empty web application and add the NuGet.Server package to it. This will download all the required software to set up your own NuGet server. The packages you add to the Packages subfolder of this web application become available in the package feed when you start the application.

The NuGet.org server itself is an instance of Orchard, so you could also set it up that way; refer to the extensive documentation of Orchard.
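Whichever host you choose, getting a package out there then comes down to either copying the .nupkg into the shared folder, or pushing it with the command line tool (the file name, share and key below are placeholders; for nuget.org you first register an API key on the site):

copy MyLibrary.1.0.0.nupkg \\server\nuget-packages
nuget setApiKey <your-api-key>
nuget push MyLibrary.1.0.0.nupkg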


Posted by .Ronald on

Telerik Grid ClientTemplate with collection inside column

I have defined a ClientTemplate which needs to display an employee and his roles as a list of items inside a single column.

Take the following example:

Model person.cs:

public class Person
{
    public Guid Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string[] Roles { get; set; }
}

The grid looks like this:

<%= Html.Telerik().Grid<Person>()
        .Name("Employees")
        .DataBinding(dataBinding => dataBinding.Ajax()
            .Select("Employees", "Persons"))
        .Columns(columns =>
        {
            columns.Bound(e => e.Id);
        })
        .DetailView(dv => dv.ClientTemplate(
            "Name: <#= LastName #>, <#= FirstName #><br />" +
            "Roles: <ul>" +
            /* Person.Roles should come here: "<li><#= Roles[i] #></li>" */
            "</ul>"
        ))
%>

Is there a way to have the roles of the employee displayed as a list inside the same column?

Yes there is!

You can embed executable code in your client template like this:

ClientTemplate("Roles: <ul>" +
    "<# for (var i = 0; i < Roles.length; i++) {" +
    "#> <li><#= Roles[i] #></li> <#" +
    "} #>" +
    "</ul>")

Many thanks to Atanas Korchev of the Telerik team for answering my question: ClientTemplate with collection inside.

Posted by .Ronald on

Prevent caching of stylesheet and javascript files

First something about caching

The numerous caching options you have in ASP.NET (MVC) are mainly focused on data and page output caching. But caching also occurs at the web server, network and browser level, and those you can’t always control from within your code.

When your content leaves your application, it is processed by the web server, which, depending on the server and version, has numerous options to control how and when content is cached. When the content is then sent on to the browser, the network can also control caching, namely through proxy and web acceleration servers. Finally the content arrives in the browser, and the browser itself also has numerous options related to caching. Generally speaking, they all use the same parameters, or at least some of them, to determine when, what and how long content should be cached.

How does this caching work? Generally speaking, the following rules apply:

  1. If the response headers say not to cache, it isn’t cached
  2. If the transfer is secure or authenticated, like HTTPS, it isn’t cached either
  3. If the cache expiration time or any other age-controlling header says the cached copy is still ‘fresh’, it is served directly from the cache without checking back with the server
  4. If there’s a stale version in the cache, the server will be asked to validate it.  If the version is still good, it is served from the cache.
  5. Sometimes, when the server cannot be reached due to network failure or disconnectivity, the content is also served directly from the cache.

Then what parameters are used, and how are they used?

  • HTTP headers: these are sent with the response, but are not visible in the content
    • Expires: tells the cache how long the content stays fresh; after that time, the cache will always check back with the server. It uses an HTTP date in Greenwich Mean Time (GMT); any other or invalid format will be interpreted as a date in the past and makes the content uncacheable. For static data you can set a time in the very far future; for highly dynamic content, you can set a time much closer, or even in the past, to have the cache refresh the content more often or at every request.
    • Cache-Control: In response to some of the drawbacks of the Expires header, the Cache-Control header was introduced (see the sketch after this list for setting it from ASP.NET). It includes (some, not all):
      • max-age=[seconds]
      • public / private
      • no-cache / no-store
      • must-revalidate
    • Pragma: no-cache: the HTTP specifications aren’t clear about what it means, so don’t rely on it; use the ones above
  • HTML meta tags: Unlike HTTP headers, HTML meta tags are present in the visible content, more precisely in the <HEAD> section of your HTML page. A huge drawback of the use of HTML meta tags is that they can only be interpreted by browsers, and not all of them use them like you would expect. So prefer HTTP headers over HTML meta tags
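As a sketch of how these headers translate to ASP.NET MVC code (the controller and action names are made up, and this is only one of several ways to set them), you can set them on the response from within an action:

using System;
using System.Web;
using System.Web.Mvc;

public class CacheDemoController : Controller
{
    // Static-ish content: let browsers and proxies keep it for a year.
    public ActionResult LongLived()
    {
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetExpires(DateTime.UtcNow.AddYears(1));
        Response.Cache.SetMaxAge(TimeSpan.FromDays(365));
        return View();
    }

    // Highly dynamic content: make every cache check back on each request.
    public ActionResult Fresh()
    {
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetExpires(DateTime.UtcNow.AddYears(-1)); // a date in the past
        return View();
    }
}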

A great Caching Tutorial can be found here: http://www.mnot.net/cache_docs/, and another one here: Save Some Cash: Optimize Your Browser Cache

An easy solution

Now, all of the caching systems rely in some way on the full request URL to identify the content that is being cached.

So, the easiest solution would be to request a new unique URL every time the resource has changed, with a new version number.

How we do it in ASP.NET MVC

ASP.NET MVC (and ASP.NET WebForms too) doesn’t generate a new version number automatically. You need to tell it to do so in the AssemblyInfo.cs file. After a default project setup it contains a line like:

[assembly: AssemblyVersion("1.0.0.0")]

The version number is a four-part string with the following format: <major version>.<minor version>.<build number>.<revision>. You usually set the major and minor version manually, as they are used as the type library version number when the assembly is exported, and don’t (need to) care about the build and revision numbers. Well, now we do.

When you change this line to (or add it if it doesn’t exist):

[assembly: AssemblyVersion("1.0.*")]

This tells the compiler to generate a build and revision number for us. The generated build number is the number of days since January 1, 2000 (so August 9, 2010 gives 3873) and the revision number is the number of two-second intervals since midnight local time (so a build at 11:59:12 gives 21576).

Now that we have instructed our application to generate a new, unique build number with every build, and thus with every (possible) change of a resource, we can use this number as a unique parameter value in the URL of the resource.

First we need to pass this version number from controller to view. In the constructor of the (base) controller we put the version number in the ViewData dictionary. With ViewData you can easily pass data from the controller to the view using a key-value pattern.

protected BaseController()
{
    // Assembly comes from the System.Reflection namespace
    ViewData["version"] = Assembly.GetExecutingAssembly().GetName().Version;
}

And finally, in the view, all you need to do is append this version number to the URL of the files you want to prevent from being cached:

<script type="text/javascript" language="javascript" src="<%: Url.Content("~/Scripts/commonFunctions.js?" + ViewData["version"]) %>"></script>

This makes sure we get a new, unique URL for our resources with every build, so neither the browser nor a proxy will keep serving a stale cached copy.

Of course, as stated above, there are other ways of preventing files from being cached anywhere between the server and the browser, but the advantage of this method is that you don’t need to poke around in IIS settings (handy when you don’t have access to them) and you can define when and which version of the file you want to be cached. And you can of course use any other method to generate a unique URL, for example wrapped in a small helper like the sketch below.
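For example, the appending could be wrapped in a small extension method, so individual views don’t have to deal with ViewData (a minimal sketch; the VersionedContent name is made up):

using System.Reflection;
using System.Web.Mvc;

public static class UrlHelperExtensions
{
    // Appends the executing assembly's version to a content URL,
    // so the URL changes with every build.
    public static string VersionedContent(this UrlHelper url, string contentPath)
    {
        var version = Assembly.GetExecutingAssembly().GetName().Version;
        return url.Content(contentPath) + "?" + version;
    }
}

In the view this becomes <%: Url.VersionedContent("~/Scripts/commonFunctions.js") %>.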

One more remark: when building a multi-tier application, make sure you set the version number in the AssemblyInfo.cs of the project where you use it, meaning that if you put your base controller in a shared assembly, you need to specify the version number in the shared assembly project.

Posted by .Ronald on

404 Best Practices

A 404 error on the web is what a web server responds with when it is tasked with serving up a resource that it can’t find.

  • It should still look like your website
  • Apologize
  • Search
  • Give readers useful links
  • Way to Contact / Report Error
  • Automatic Reporting
  • Humor
  • Redirect?
  • File Size
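For an ASP.NET MVC site, a related implementation detail is serving a friendly page while still returning a real 404 status code. A minimal sketch (the ErrorsController and its view are hypothetical):

using System.Web.Mvc;

public class ErrorsController : Controller
{
    // Renders a friendly "not found" page in place,
    // while still answering with a true 404 status code.
    public ActionResult NotFound()
    {
        Response.StatusCode = 404;
        Response.TrySkipIisCustomErrors = true; // keep IIS from swapping in its own error page
        return View();
    }
}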

CSS-Tricks: 404 Best Practices