Thursday, October 13, 2016

Your First Progressive Web App

Introduction

Progressive Web Apps are experiences that combine the best of the web and the best of apps. They are useful to users from the very first visit in a browser tab, no install required. As the user progressively builds a relationship with the app over time, it becomes more and more powerful. It loads quickly, even on flaky networks, sends relevant push notifications, has an icon on the home screen, and loads as a top-level, full screen experience.

What is a Progressive Web App?

A Progressive Web App is:
  • Progressive - Works for every user, regardless of browser choice because it's built with progressive enhancement as a core tenet.
  • Responsive - Fits any form factor: desktop, mobile, tablet, or whatever is next.
  • Connectivity independent - Enhanced with service workers to work offline or on low-quality networks.
  • App-like - Feels like an app to the user with app-style interactions and navigation because it's built on the app shell model.
  • Fresh - Always up-to-date thanks to the service worker update process.
  • Safe - Served via HTTPS to prevent snooping and to ensure content hasn't been tampered with.
  • Discoverable - Is identifiable as an "application" thanks to W3C manifest and service worker registration scope, allowing search engines to find it.
  • Re-engageable - Makes re-engagement easy through features like push notifications.
  • Installable - Allows users to "keep" apps they find most useful on their home screen without the hassle of an app store.
  • Linkable - Easily share via URL, does not require complex installation.
This codelab will walk you through creating your own Progressive Web App, including the design considerations, as well as implementation details to ensure that your app meets the key principles of a Progressive Web App.

What are we going to be building?

What you'll learn

  • How to design and construct an app using the "app shell" method
  • How to make your app work offline
  • How to store data for use offline later

What you'll need

  • Chrome 52 or above
  • Web Server for Chrome, or your own web server of choice
  • The sample code
  • A text editor
  • Basic knowledge of HTML, CSS, JavaScript, and Chrome DevTools
This codelab is focused on Progressive Web Apps. Non-relevant concepts and code blocks are glossed over and are provided for you to simply copy and paste.

Wednesday, October 12, 2016

Features and Design of Angular 2.0

Now that you've got a bit of background regarding the motivations for creating Angular 2.0, let's look at a few key feature areas.

AtScript


AtScript is a language that is a superset of ES6 and it's being used to author Angular 2.0. It uses TypeScript's type syntax to represent optional types which can be used to generate runtime type assertions, rather than compile-time checks. It also extends the language with metadata annotations. Here's an example of what some AtScript code looks like:

import {Component} from 'angular';
import {Server} from './server';

@Component({selector: 'foo'})
export class MyComponent {
  constructor(server:Server) {
      this.server = server;
  }
}
Here we have some baseline ES6 code with a couple of AtScript additions. The import statements at the top of the example and the class syntax come straight from ES6. There's nothing special there. But, take a look at the constructor function. Notice that the server parameter specifies a type. In AtScript, this type is used to generate a runtime type assertion. A reference is also stored in a known location so that a framework, such as a dependency injection framework, can locate the type information and use it. Notice also the @Component syntax above the class declaration. This is a metadata annotation. Component is actually a normal class like any other. When you decorate something with an annotation the compiler generates code that instantiates the annotation and stores it in a known location so that it can be accessed by a framework such as Angular. With that in mind, here's what the above code transpiles to in terms of straight ES6:

import * as rtts from 'rtts';
import {Component} from 'angular';
import {Server} from './server';

export class MyComponent {
  constructor(server) {
      rtts.types(server, Server);
      this.server = server;
  }
}

MyComponent.parameters = [{is:Server}];
MyComponent.annotate = [
  new Component({selector: 'foo'})
];
RTTS stands for RunTime Type System. This is a small assertion library geared around runtime type checking. Here the compiler injects some code that will assert that the server variable is of type Server. This is an instance of a nominal type check. You can also write custom type assertions in order to use structural typing or apply ad hoc type rules. When you deploy to production, the compiler can leave out these assertions in order to improve performance.

One nice thing is that independent of the type assertions, the type annotation and the metadata annotation can be tracked. Both of these annotations were translated to very simple ES5 compatible data structures stored on the MyComponent function itself. This makes it easy for any framework or library to discover this metadata and use it. Over the years this has proved to be quite a handy tool on platforms such as .NET and Java. It also bears some resemblance to Ruby's metaprogramming capabilities. Indeed, when combined with a library, annotations can be used to do metaprogramming, which is exactly how Angular 2.0 makes building directives easier. More on that later.


Dependency Injection


A core feature of Angular 1.x was Dependency Injection (DI). Through DI you can more easily follow a "divide and conquer" approach to software development. Complex problems can be conceptualized in terms of their roles and responsibilities. These can then be represented as objects which collaborate to achieve the end goal. Large (or small) systems that are deconstructed in this way can be assembled at runtime through the use of a DI framework. Such systems are usually easier to test since the resulting design is more modular and allows for easier isolation of components. All this was possible in Angular 1.x of course. However, there were a few problems.

The first problem that plagued the 1.x DI implementation was related to minification. Since the DI relied on parsing parameter names from functions, essentially treating them as string tokens, when these names were changed during minification, they no longer matched the registered services, controllers and other components. The result was a broken app. An API was added to allow a more minification-friendly approach to DI, but it lacked the elegance of the original. Other problems with the 1.x implementation center around missing features common to the more advanced server-side DI frameworks available in the .NET and Java worlds. Two big examples of missing features that put constraints on developers are lifetime/scope control and child injectors.

Annotations


Through AtScript we've introduced a generalized mechanism for associating metadata with any function. Also, the AtScript format for metadata is resilient in the face of minification and easy to write by hand with ES5. This makes it a fantastic candidate for supplying a DI library with the information it needs to construct object instances. I'm sure it's no surprise that that's exactly how the new DI works.

When the DI needs to instantiate a class (or call a function) it examines it to see if it has any associated metadata. Recall this code from the AtScript transpiled output above:

MyComponent.parameters = [{is:Server}];
If the new DI finds the parameters value it will use it to determine the dependencies of the function it's trying to invoke. In this case, it can tell that there is exactly one parameter of type Server. So it will acquire an instance of Server and push that into the function before invoking it. You can also be explicit by providing a special Inject annotation for the DI to use. This will override the parameter data. It's also easy to supply if you're using a language that doesn't automatically generate the parameter metadata. Here's what that looks like in pure ES5 code:

MyComponent.annotate = [new Inject(Server)];
The runtime effect of this is the same as the parameter data. It should be noted that you can actually use anything as an injection token. So you could do this:

MyComponent.annotate = [new Inject('my-string-token')];
As long as you configure the DI with something that it can map to 'my-string-token' it will work just fine. That said, the recommended usage is to use classes/types as the injection tokens, as I've shown in all the previous examples.

Instance Scope


In Angular 1.x, all instances in the DI container were singletons, and to get different behavior you had to work around that with Services, Providers, Constants, etc. That's some confusing stuff. Singletons are still the default in Angular 2.0, but the new DI has a more general and more powerful feature: instance scope control. So, if you want the DI to always create a new instance of a class, every time you ask for one, you can just do this:

@TransientScope
export class MyClass { ... }
This becomes even more powerful when you create your own scope identifiers for use in combination with child injectors...

Child Injectors


Child injectors are a major new feature. A child injector inherits all of its parent's services, but it has the ability to override them at the child level. When you combine this with custom scope identifiers, you can easily call out certain types of objects in your system that should automatically be overridden in various scopes. It's pretty powerful. As an example of this, the new router has a "Child Routers" capability. Internally, each child router creates its own child injector. This allows each part of the route to inherit services from parent routes or to override those services during different navigation scenarios.


Templating and Databinding


If you've read this far, you must be very curious about Angular 2.0. Thanks for taking so much time. We've still got a way to go and we're about to get into the really interesting stuff: templating and binding. I'm going to discuss them in tandem here. While the databinding system is technically separate from the templating system, you experience them as a single unit when you write apps. So, I think addressing them side-by-side makes the most sense.

Let's start by understanding how a view gets on screen, then break things down a bit. Essentially, you start with an HTML fragment. This will live inside a <template> element. That HTML fragment is handed to the template compiler. The compiler traverses the fragment, identifying any directives, binding expressions, event handlers, etc. All of this data is extracted from the DOM itself into data structures which can be used to eventually instantiate the template. As part of this phase, some processing is done on the data, such as parsing the binding expressions. Every node that contains one of these special instructions is then tagged with a special class. The result of this process is cached so that none of this work needs to be repeated. We call this result a ProtoView.

Once you've got a ProtoView you can use it to create a View. When a ProtoView makes a View, all the directives that were previously identified are instantiated and attached to their DOM nodes. Watches are set up on binding expressions. Event handlers are configured. You get the idea. The data structures that were previously processed in the compile phase allow us to do this very quickly.

Once you've got a View, you can display it by adding it to a ViewPort. A ViewPort represents a region of the screen that you can display Views in. As a developer you won't see most of this; you'll just write templates and it will work. But I wanted to lay out the process at a high level briefly before I dig into the details.

Dynamic Loading


One of the huge missing features in Angular 1.x was dynamic loading of code. If you wanted to add new directives or controllers on the fly, that was very hard or impossible. It certainly wasn't supported. In 2.0 we've designed things from scratch with async in mind. So, when you go to compile a template, that's actually an async process.

Now I need to mention a detail of template compilation I left out in my simplified explanation above. When you compile a template, you not only provide the compiler with a template, but you also provide a Component definition. We'll get into the details of that in a bit. For the sake of this explanation, the Component definition contains metadata about what directives, filters, etc. were used in the template. This ensures that the necessary dependencies are loaded before the template gets processed by the compiler. Because we are basing our code on the ES6 module spec, simply referencing the dependencies in the Component definition will cause the module loader to load them, if they haven't already been loaded. So, by integrating with ES6 modules in this way, we get dynamic loading of whatever we want for free.


Directives


Before we dig into the syntax of templates, we need to look at directives, Angular's means of extending HTML itself. In Angular 1.x the Directive Definition Object (DDO) was used to create directives. This seems to be one of the great sources of suffering for many an Angular developer.

What if we could make directives simpler?

We've been talking about modules, classes and annotations. What if we could leverage those core constructs to build directives? Well, of course that's what we did.

In Angular 2.0 there are three types of directives.

Component Directive - Creates a custom component composed of a View and a Controller. You can use it as a custom HTML element. Also, the router can map routes to Components.
Decorator Directive - Decorates an existing HTML element with additional behavior. A classic example is ng-show.
Template Directive - Transforms HTML into a reusable template. The directive author can control when and how the template is instantiated and inserted into the DOM. Examples include ng-if and ng-repeat.
You may have heard that Controllers are dead in Angular 2.0. Well, that's not exactly true. In reality, Controllers are one part of what we are calling a Component. The Component has a View and a Controller. The View is your HTML template and the Controller has your JavaScript behavior. Rather than needing an explicit registration API for the controller or other non-standard APIs as in 1.x, in 2.0 you can just create a plain class with some annotations. Here's an example of the controller half of a tab container component (we'll look at the view a bit later):

@ComponentDirective({
    selector:'tab-container',
    directives:[NgRepeat]
})
export class TabContainer {
    constructor(panes:Query<Pane>) {
        this.panes = panes;
    }

    select(selectedPane:Pane) { ... }
}
There are several features to notice here.

First, the controller for the component is just a class. Its constructor will have its dependencies injected automatically. Because child injectors are used, it can get access to any service up the DOM hierarchy, but also services local to its own element. For example, here it's having a Query injected. This is a special collection that automatically stays synchronized with the child Pane elements and lets you know when things are added or removed. You could also have the Element itself injected. This allows you to handle the same logic as the link callback from Angular 1.x, but it's handled in a more consistent fashion through the class's constructor.

Now, take a look at the @ComponentDirective annotation. This identifies the class as a Component and provides metadata that the compiler needs to plug it in. For example selector:'tab-container' is a CSS selector that will be used to match HTML. Any element that matches this selector will be turned into a TabContainer. Also, directives:[NgRepeat] indicates the dependencies that the template for this component has. I haven't shown you that yet. We'll look at that in a minute when we talk about syntax.

An important detail to note is that the template will bind directly to this class. That means any property or method of the class can be accessed directly in the template. This is similar to the "controller as" syntax from Angular 1.2. There's no $scope sitting between this class and the template. The result is a simplification of Angular's internals, a simpler syntax for the developer and less work marshaling things back and forth from the $scope object.

Let's look at a Decorator Directive next. What about a simple NgShow?

@DecoratorDirective({
    selector:'[ng-show]',
    bind: { 'ngShow': 'ngShow' },
    observe: {'ngShow': 'ngShowChanged'}
})
export class NgShow {
    constructor(element:Element) {
        this.element = element;
    }

    ngShowChanged(newValue){
        if(newValue){
            this.element.style.display = 'block';
        }else{
            this.element.style.display = 'none';
        }
    }
}
Here we can see a few more aspects of directives. Again, we have a class with annotations. The constructor gets injected with the HTML Element that the decorator is attached to. The compiler knows this is a decorator because of the DecoratorDirective and knows to apply it to any element that matches the selector:'[ng-show]' CSS selector.

There are a couple of other curious properties on this annotation though.

bind: { 'ngShow': 'ngShow' } is used to map class properties to HTML attributes. Not all your class's properties are surfaced to the HTML as attributes. If you want your property to be bindable in HTML, you specify it in the bind metadata. The observe: {'ngShow': 'ngShowChanged'} tells the binding system that you want to be notified whenever the ngShow property changes and that you want to be called back using the ngShowChanged method. Notice that the ngShowChanged callback responds to changes by altering the display of the HTML element that it's attached to. (Note that this is a very naive implementation, only for demonstration purposes.)

Ok, so what does a Template Directive look like? Why don't we look at NgIf?

@TemplateDirective({
    selector: '[ng-if]',
    bind: {'ngIf': 'ngIf'},
    observe: {'ngIf': 'ngIfChanged'}
})
export class NgIf {
    constructor(viewFactory:BoundViewFactory, viewPort:ViewPort) {
        this.viewFactory = viewFactory;
        this.viewPort = viewPort;
        this.view = null;
    }

    ngIfChanged(value) {
        if (!value && this.view) {
            this.view.remove();
            this.view = null;
        }

        if (value) {
            this.view = this.viewFactory.createView();
            this.view.appendTo(this.viewPort);
        }
    }
}
Hopefully you can make sense of the TemplateDirective annotation. It registers this directive with the compiler and provides the necessary metadata to set up properties and observation, just as with the NgShow example. Being that this is a TemplateDirective, it has access to a couple of special services which can be injected into its constructor. The first is the ViewFactory. As I mentioned earlier, a Template Directive transforms the HTML it is attached to into a template. The template is automatically compiled and you now have access to the view factory in your template directive. Calling the createView API on the factory instantiates the template itself. You also have access to a ViewPort. This represents the location in the DOM where the template was extracted from. You can use it to add or remove instances of the template from the DOM.

Notice how the ngIfChanged callback responds to changes by instantiating the template and adding it to the view port, or removing it from the view port. If you were implementing something like NgRepeat instead, you could instantiate the template multiple times, provide a specific data item to each createView call, and then add multiple instances into the view port. That's the basics.

Now you've seen some canonical examples of the three types of directives. I hope that clarifies things a bit in terms of how you'll be able to extend the HTML compiler with new behavior.

However, there's an important thing I still haven't adequately explained: Controllers.

How do you create a controller for your application? Let's say you want to set up the router so it navigates to a controller and displays its view. How do you do that? The simple answer is that you do it with a Component Directive.

In Angular 1.x Directives and Controllers were two different things. There were different APIs and different capabilities. In Angular 2.0, since we've removed the DDO and made Directives class-based, we were able to unify Directives and Controllers into the Component model. So, now you have one way to accomplish both: when you are setting up your routes, you simply map the router to a ComponentDirective (which essentially consists of a view and a controller, just like before).

So, if you were creating a hypothetical customer edit controller, you might have something like this:

@ComponentDirective
export class CustomerEditController {
    constructor(server:Server) {
        this.server = server;
        this.customer = null;
    }

    activate(customerId) {
        return this.server.loadCustomer(customerId)
            .then(response => this.customer = response.customer);
    }
}
There's nothing new here really. We are just injecting our hypothetical server service and using it to load up the customer when we are activated by the router. What's interesting is that you don't need a selector or any of the other metadata. The reason is that this component is not being used as a custom element. It's being dynamically created by the router and rendered dynamically into the DOM. As a result, you get to leave off the unneeded details.

So, if you know how to make ComponentDirectives, you know how to build the equivalent of Angular 1.x controllers used by the router. This would have been very painful in Angular 1.x to unify, but since we have this nice class and metadata-driven system in Angular 2.0, directives are significantly simplified and it becomes very easy to create your "controllers" in this way.

Note: I'd like to point out that the directive code samples shown above are based on a combination of early prototype code and newer design document specs. They should be interpreted as an explanatory tool, not the exact syntax for directives, which is still in flux. The template compiler and binding languages are the most volatile parts of Angular 2.0 right now, with design changes happening quite frequently.

Template Syntax

So, you've got an understanding of the high level compilation process, that it can load code asynchronously, how to write directives and how they are plugged in and how controllers fit into the puzzle, but we still haven't looked at an actual template. Let's do that now by looking at the template for the hypothetical TabContainer I showed earlier. I'll include the directive code again here for convenience:

@ComponentDirective({
    selector:'tab-container',
    directives:[NgRepeat]
})
export class TabContainer {
    constructor(panes:Query<Pane>) {
        this.panes = panes;
    }

    select(selectedPane:Pane) { ... }
}
<template>
    <div class="border">
        <div class="tabs">
            <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)">
                <img [src]="pane.icon"><span>${pane.name}</span>
            </div>
        </div>
        <content></content>
    </div>
</template>
Please suspend any horror you have when looking at that syntax. Yes, that is valid HTML according to the spec. No, it's not our final binding syntax. But let's use this as an example so we have a starting point for a richer discussion.

The key to understanding the data binding syntax is in the left side of the attribute declaration. With this in mind, let's first look at the image tag.

<img [src]="pane.icon"><span>${pane.name}</span>
When you see an attribute name surrounded with [] that tells you that the right side, the value of the attribute, has a binding expression.

When you see an expression surrounded with ${} that tells you that there's an expression that should be interpolated into the content as a string. (This is the same syntax as ES6 uses for string interpolation.)

Both of these bindings are unidirectional from model/controller to view.

Now let's look at that scary div:

<div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)">
ng-repeat is a TemplateDirective. You can tell that we are binding it with an expression because it's surrounded by []. However, it also has a | and the word "pane". This indicates that the local variable name to be used inside the template is "pane".

Now look over at (^click). Parentheses are used to indicate that we are attaching the expression as an event handler. If there's a ^ as well inside the parens, that means we don't attach the handler directly to the DOM node; rather, we let it bubble up and handle it at the document level.

I'm holding off on expressing my own opinion on this and other things I've discussed in templating until the commentary section below. Let's ignore what you or I may think upon first sight of this and first talk about why a syntax like this was chosen.

Web Components change everything. This is another instance of the Web changing underneath the framework. Most databinding-based frameworks assume a fixed set of HTML elements and understand certain special behaviors of elements like inputs, etc. However, in a world with Web Components, no assumptions can be made. A developer, not targeting Angular, can create a custom element with any number of properties and events which do whatever he/she wants. Unfortunately, there's no way to inspect the Web Component and gather some metadata about it that could be used to drive the binding system. There's no way to know what events it actually raises, for example. Look at this:

<x-foo bar="..." baz="..."></x-foo>
Looking at bar and baz, can you determine which is the event and which is the property? No....and unfortunately Angular can't tell either because the Web Components spec doesn't include the notion of self-describing components. That's unfortunate because it means that a databinding system can't tell whether or not it needs to connect up a binding expression or whether it needs to add an event handler to invoke expressions. In order to solve this problem, we need a generalized databinding system with syntax that allows the developer to tell it what is an event and what is a property binding.

That's not the only difficulty though. Additionally, the information has to be provided in such a way that it will not break the Web Component itself. What I mean by this is that the Web Component cannot be allowed to see the expression. That could break the component. It can only be allowed to see the result of evaluating the expression. Actually this doesn't only apply to Web Components, but also to built-ins. Consider this:

<img src="{{some.expression}}">
This code will cause a bad http request to be made to try to find the "some.expression" image. That's not what we want at all though. We never want img to see that expression, only the value of it. AngularJS 1.x solved this with ng-src, a custom directive. Now, back to Web Components....it would be a disaster if you had to create a custom directive for every attribute of any Web Component, wouldn't it? I don't think we want that, so we need to solve this problem more generally in the binding system.

To accomplish this, you have two options. The first is to remove the attribute from the DOM during template compilation. That will prevent the Web Component from getting ahold of the expression text. However, doing this means that inspecting the DOM will show no trace of the binding expression that is operating on the attribute. This would make debugging more difficult. Another option is to actually encode the attribute name so that the Web Component doesn't "recognize" it. This would allow Angular to see the expression, but prevent it from being seen by the Web Component. It would also allow us to leave the attribute on the element after compilation, so that inspecting the DOM would allow you to see it. That's a better debugging story for sure.

All About Angular 2.0


Performance


When AngularJS was first created, almost five years ago, it was not originally intended for developers. It was a tool targeted more at designers who needed to quickly build persistent HTML forms. Over time it has changed to accommodate a variety of scenarios and developers have picked it up and used it to build more and more complex applications. The Angular 1.x team has worked hard over the years to make incremental changes to the design, allowing it to continue to be relevant as the needs of modern web applications have changed. However, there are hard limits on the improvements that can be made, due to assumptions that were made as part of the original design. A number of these limits relate to performance problems resulting from the current binding and templating infrastructure. In order to fix those problems, new strategies are needed.

The Changing Web


In the five years since Angular was first conceived, the web has changed significantly. For example, five years ago it was almost impossible to build a proper cross-browser site without help from something like jQuery. However, today's browsers are not only more consistent in their DOM implementations, but these implementations are faster and offer new features particularly pertinent to application frameworks.

And the web continues to change...

While massive changes have happened in the last couple of years, they pale in comparison to what's coming in the next 1-3 years. In a few months the ES6 spec will be finalized. It's not unreasonable to think that we'll see a browser in 2015 that implements the full spec. Today's browsers already support some of these features and are working on implementations of the rest right now. This means browser support for things like modules, classes, lambdas, generators, etc. These features fundamentally transform the JavaScript programming experience. But big changes aren't constrained merely to JavaScript. Web Components are on the horizon. The term Web Components usually refers to a collection of four related W3C specifications:

Custom Elements - Enables the extension of HTML through custom tags.
HTML Imports - Enables packaging of various resources (HTML, CSS, JS, etc.).
Template Element - Enables the inclusion of inert HTML in a document.
Shadow DOM - Enables encapsulation of DOM and CSS.
By combining these four capabilities web developers can create declarative components (Custom Elements) which are fully encapsulated (Shadow DOM). These components can describe their own views (Template Element) and can be easily packaged for distribution to other developers (HTML Imports). When these specifications become available in all major browsers, we are likely to see developer creativity explode as many endeavor to create reusable components to solve common problems or address deficiencies in the standard HTML toolkit (from Databinding With Web Components). It's already possible today in Chrome and the other browsers have some of these specs implemented and are working on others. The future sounds awesome right? There's just one problem: most of today's databinding frameworks aren't prepared for this. Most frameworks, Angular 1.x included, have a databinding system that works based on the assumption of a small number of known HTML elements with well-known events and behaviors. In order for Angular developers to take advantage of Web Components, a new implementation of databinding is needed.

Mobile


Speaking of five years ago...me oh my how the computing landscape has changed! Phones and tablets are everywhere! While Angular can be used to build mobile apps, it wasn't designed with them in mind. This includes everything from the fundamental performance issues I've already mentioned to missing capabilities of its router, the inability to cache pre-compiled views and even lackluster touch support. Some of these things can be bolted on to Angular 1.3 (like the router for example) but others require fundamental changes to fix.

Ease of Use


Let's be honest...AngularJS isn't exactly the easiest thing to learn. Yeah, you pick it up and you think to yourself "This is awesome. It's really easy and magical!!!" Then you start building your app and find yourself going "holy...what!!?? I don't understand!!!" I've heard this story over and over again. There's even a handy graphic to illustrate the point. A lot of this, once again, stems back to the original design and intent of the library. Originally, there were no custom directives, for example. They were all hardcoded. Then, an API to add them was made available. Originally there were no controllers, then.... You get the picture. This bolting on of features, many of which are today considered core ideas, has resulted in a less than elegant API. If Angular is to be truly easy to learn and use, then it has to have a clear understanding of its own core features from the outset. A framework where directives and controllers are part of the initial design, rather than bolted on after the fact, is going to be better on many counts.

Monday, October 10, 2016

Django Interview Questions/Answers.

1) What is Django?

Django is a free and open source web application framework, written in Python.

2) What does Django mean?

Django is named after Django Reinhardt, a gypsy jazz guitarist from the 1930s to early 1950s who is known as one of the best guitarists of all time.

3) Which architectural pattern does Django Follow?

Django follows the Model-View-Controller (MVC) architectural pattern.

4) Explain the architecture of Django?

Django is based on the MVC architecture. It contains the following layers:
Models: Describe the database schema and data structure (see the sketch below).
Views: The view layer is the user interface. It controls what a user sees; the view retrieves data from the appropriate models, performs any processing required on that data, and passes it to the template.
Templates: Determine how the user sees the data. A template describes how the data received from the views should be formatted for display on the page.
Controller: The controller is the heart of the system. It handles requests and responses, sets up database connections, and loads add-ons; in Django this role is largely played by the framework itself, including URL parsing and dispatching.
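The following is a minimal, purely illustrative sketch of the model and view layers; the Book model, the book_list view and the 'books/list.html' template are hypothetical names, and in a real project the model would live in models.py and the view in views.py:

# A minimal sketch of the model and view layers (hypothetical names).
from django.db import models
from django.shortcuts import render

class Book(models.Model):
    # Model: describes the database schema for a book.
    title = models.CharField(max_length=200)
    published = models.DateField()

def book_list(request):
    # View: fetches data from the model and hands it to a template.
    books = Book.objects.order_by('-published')
    return render(request, 'books/list.html', {'books': books})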

5) Which foundation manages Django web framework?

Django web framework is managed and maintained by an independent and non-profit organization named Django Software Foundation (DSF).

6) Is Django stable?

Yes, Django is quite stable. Many companies like Disqus, Instagram, Pinterest, and Mozilla have been using Django for many years.

7) What are the features available in Django web framework?

Features available in Django web framework are:
  • Admin Interface (CRUD)
  • Templating
  • Form handling
  • Internationalization
  • Session, user management, role-based permissions
  • Object-relational mapping (ORM)
  • Testing Framework
  • Fantastic Documentation

8) What are the advantages of using Django for web development?

  • It facilitates dividing your code into logical modules, making it flexible and easy to change.
  • It provides an auto-generated web admin interface to make website administration easy.
  • It provides pre-packaged APIs for common user tasks.
  • It provides a template system to define HTML templates for your web pages and avoid code duplication.
  • It enables you to define which URL maps to a given function (see the sketch below).
  • It enables you to separate business logic from the HTML.
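Here is a minimal urls.py sketch of URL-to-function mapping in the Django 1.x style; the views module and the view names are hypothetical:

# urls.py: map URLs to view functions (hypothetical views).
from django.conf.urls import url

from . import views

urlpatterns = [
    # Map the site root to the book_list view.
    url(r'^$', views.book_list, name='book-list'),
    # Map /books/<id>/ to the book_detail view.
    url(r'^books/(?P<pk>\d+)/$', views.book_detail, name='book-detail'),
]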

9) How to create a project in Django?

To start a project in Django, use the django-admin startproject command (django-admin.py startproject in older releases). This creates a project directory containing the following files:
Project/
__init__.py
manage.py
settings.py
urls.py

10) What are the inheritance styles in Django?

There are three possible inheritance styles in Django:
1. Abstract base classes: This style is used when you only want the parent class to hold common information that you don't want to type out for each child model (see the sketch below).
2. Multi-table inheritance: This style is used if you are sub-classing an existing model and need each model to have its own database table.
3. Proxy models: This style is used if you only want to modify the Python-level behavior of the model, without changing the model's fields.
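A minimal sketch of the abstract base class style, using hypothetical models:

# Abstract base class inheritance (hypothetical models).
from django.db import models

class CommonInfo(models.Model):
    # Fields shared by all child models; no database table is created for this class.
    name = models.CharField(max_length=100)
    created = models.DateTimeField(auto_now_add=True)

    class Meta:
        abstract = True  # marks this model as an abstract base class

class Student(CommonInfo):
    # Inherits name and created from CommonInfo and gets its own table.
    home_group = models.CharField(max_length=5)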

11) How can you set up the database in Django?

To set up a database in Django, edit mysite/settings.py; it is a normal Python module with module-level variables representing Django settings.
By default, Django uses an SQLite database. This is convenient because it doesn't require any additional installation. For any other database, you have to set the following keys in the DATABASES 'default' item to match your database connection settings.
ENGINE: you can change the database backend by using 'django.db.backends.sqlite3', 'django.db.backends.mysql', 'django.db.backends.postgresql_psycopg2', 'django.db.backends.oracle', and so on.
NAME: the name of your database. If you are using SQLite, the database will be a file on your computer, and NAME should be the full absolute path, including the file name, of that file.
Note: You also have to add settings such as USER, PASSWORD, and HOST to the database configuration if you are not using SQLite. A sketch of such a configuration is shown below.
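A minimal DATABASES sketch for settings.py; the database name and credentials are placeholders:

# settings.py (hypothetical credentials)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}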

12) What do Django templates contain?

A template is a simple text file. It can produce any text-based format, such as XML, CSV, or HTML. A template contains variables ({{ variable }}) that get replaced with values when the template is evaluated, and tags ({% tag %}) that control the logic of the template.
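A minimal sketch of how variables and tags are evaluated, run from an existing project's shell (python manage.py shell):

# Render a template string directly (requires a configured Django project).
from django.template import Template, Context

t = Template("Hello, {{ name }}! {% if vip %}Welcome back.{% endif %}")
print(t.render(Context({'name': 'World', 'vip': True})))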

13) Is Django a content management system (CMS)?

No, Django is not a CMS. Instead, it is a web framework and a programming tool that enables you to build websites.

14) What is the use of session framework in Django?

The session framework lets you store and retrieve arbitrary data on a per-site-visitor basis. It stores data on the server side and abstracts the sending and receiving of cookies. Sessions are implemented through a piece of middleware. A minimal usage sketch is shown below.
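A minimal sketch of a view that uses the session framework; the view name and session key are hypothetical:

# views.py: count visits for the current visitor using the session.
from django.http import HttpResponse

def visit_counter(request):
    # Store and retrieve per-visitor data; cookie handling is abstracted away.
    count = request.session.get('visits', 0) + 1
    request.session['visits'] = count
    return HttpResponse('You have visited this page %d times.' % count)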

15) How can you set up static files in Django?

There are three main steps required to set up static files in Django:
1. Set STATIC_ROOT in settings.py (see the sketch below).
2. Run manage.py collectstatic.
3. Set up a Static Files entry on the PythonAnywhere web tab (or configure whichever web server you use to serve the STATIC_ROOT directory).
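A minimal static-files sketch for settings.py; the directory layout is an assumption:

# settings.py
import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')  # target directory for collectstatic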

16) How to use file based sessions?

You have to set the SESSION_ENGINE setting to "django.contrib.sessions.backends.file" to use file-based sessions, as sketched below.
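For example, in settings.py (the session file path is an assumption; by default Django uses the system temporary directory):

# settings.py: switch sessions from the default database backend to files.
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
SESSION_FILE_PATH = '/tmp/django_sessions'  # optional; hypothetical path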

17) What are some typical uses of middleware in Django?

Some typical uses of middleware in Django, illustrated in the settings sketch below, are:
  • Session management
  • Authentication
  • Cross-site request forgery protection
  • Content gzipping, etc.
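A minimal MIDDLEWARE sketch for settings.py covering the uses above (Django 1.10+ style; older releases use MIDDLEWARE_CLASSES):

# settings.py
MIDDLEWARE = [
    'django.middleware.gzip.GZipMiddleware',                     # content gzipping
    'django.contrib.sessions.middleware.SessionMiddleware',      # session management
    'django.middleware.csrf.CsrfViewMiddleware',                 # CSRF protection
    'django.contrib.auth.middleware.AuthenticationMiddleware',   # authentication
]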

18) What do Django field class types specify?

The Django field class types specify (see the model sketch below):
  • The database column type.
  • The default HTML widget to use when rendering a form field.
  • The minimal validation requirements, used in the Django admin and in automatically generated forms.
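A minimal model sketch showing how field class types map to columns, widgets and validation; the model is hypothetical:

# models.py
from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=100)         # VARCHAR column, TextInput widget
    body = models.TextField()                        # TEXT column, Textarea widget
    published = models.DateField()                   # DATE column, date validation
    views = models.PositiveIntegerField(default=0)   # integer column, >= 0 validation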

Hosting Python WSGI applications using Docker.

As I mentioned in my previous blog post I see a lot of promise for Docker. The key thing that I personally see as being able to gain from Docker, as a provider of a hosting solution for Python WSGI applications, is that I can get back some control over the hosting experience that developers will have.

Right now things can quickly become a bit of a mess, because the experience that developers have of Apache/mod_wsgi is going to be dictated by how a Linux distribution or hosting provider has set up Apache, and how easy they have made customising it in order to add the ability to host Python WSGI applications and then tune the Apache server. The less than optimal experience that developers usually have means they do not get to appreciate how well Apache/mod_wsgi can work and simply desert it for other options.

In the case of Docker, I can provide a pre packaged image for hosting Python WSGI applications which uses my knowledge of how to set up Apache and mod_wsgi properly to give the best experience. I can therefore hope that Docker may help me to win back some of those who otherwise never really understood the strengths of Apache and mod_wsgi.

Current offerings for Python and Docker

Although Docker is still young, to be frank, the bulk of the information around about running Python WSGI applications with Docker is pretty woeful. The instructions provided focus more on how to use Docker itself rather than how to create a production capable hosting solution for Python WSGI applications within the container. Nearly all explanations I have found describe the use of builtin development servers for Python web frameworks such as Flask and Django. Using inbuilt development servers for production is generally a very bad idea.

In some cases they will suggest the use of gunicorn or CherryPy WSGI servers, but these themselves cannot handle hosting of static files. How exactly you are meant to host the static files they don't really provide details on, at most perhaps suggesting the use of S3 as a completely separate mechanism for hosting them.

There are some Docker images available for using uWSGI, but they are generally set up with the specific requirements of that user in mind, rather than trying to provide a good reusable image that can be applied across many use cases without you having to do some measure of re-configuration. Again, they aren't exactly transparent as far as handling static files goes, and leave it mostly up to you to work out how to solve that.

The final problem with the uWSGI Docker images is that they are effectively unsupported efforts and haven't been updated in some time. They therefore are not keeping up to date with any security fixes or general bug fixes in the packages they are using.

Using Apache/mod_wsgi and Docker

To date I have not seen anyone attempt to describe how to use Apache and mod_wsgi with Docker. It isn't something that I am going to do exactly either, in as much as rather than describe how you yourself could create an image for using Apache and mod_wsgi with Docker, I am simply going to provide a pre packaged image instead. What I will describe therefore is how to use that image and how best to use it in its pre packaged form.

This blog post is therefore the first introduction to this endeavour. I will show you how to use the Docker image with a couple of different WSGI applications and then in subsequent blog posts I will start peeling apart the layers and explain the different parts that go into it and what capabilities it has. Provided I don't get too carried away with doing more coding, which is obviously the fun bit, I will back things up by finally starting to upgrade the mod_wsgi documentation to cover it and all the other new features that are available in mod_wsgi these days.

Running a Hello World WSGI application

Let's start out, therefore, with the canonical WSGI hello world application.

def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World!'

    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]

    start_response(status, response_headers)

    return [output]
Create a new empty directory and place this in a file called 'wsgi.py'.

This Hello World program has no associated static files, nor does it require any additional Python modules to be installed. Even though no separate modules are required at this point, we will still create a 'requirements.txt' file in the same directory. This 'requirements.txt' file will be left empty for this example.

The next step is to create a 'Dockerfile' to build up our Docker image. As we are going to use the pre packaged Docker image I am providing and it embeds various magic, all that the 'Dockerfile' needs to contain is:

FROM grahamdumpleton/mod-wsgi-docker:python-2.7-onbuild
CMD [ "wsgi.py" ]
For the image being derived from, an 'ENTRYPOINT' is already defined which will run up Apache/mod_wsgi. The 'CMD' instruction therefore only needs to provide any options, which at this point consists only of the path to the WSGI script file, which we had called 'wsgi.py'.

We can now build the Docker image for the Hello World example:

docker build -t my-python-app .
and then run it:

docker run -it --rm -p 8000:80 --name my-running-app my-python-app
The Hello World WSGI application will now be accessible by pointing your browser at port 8000 on the Docker host.
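As a quick sanity check you can also fetch the page from Python; this assumes the Docker host resolves to localhost (with boot2docker or Docker Machine, substitute the VM's IP address instead):

# check.py: fetch the page served by the container (Python 2, matching the image above).
import urllib2

print(urllib2.urlopen('http://localhost:8000/').read())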

Running a Django based web site

We don't run Hello World applications as our web sites, so we also need to be able to run whole Python web sites constructed using web frameworks such as Django. It is with more complicated web applications that we start to also have static files that need to be hosted at the same time, so we need to deal with that somehow. The Python module search path can also require special setup so that the Python interpreter can actually find where the Python code for the web application is located.

So imagine that you have a Django web application constructed using the standard layout. From the top directory of this we would therefore have something like:

example/
example/example/
example/example/__init__.py
example/example/settings.py
example/example/urls.py
example/example/views.py
example/example/wsgi.py
example/htdocs/
example/htdocs/admin
example/htdocs/admin/...
example/manage.py
requirements.txt
The 'requirements.txt' which was used to create any local virtual environment used during development would already exist, and at the minimum would contain:

Django
Within the directory would then be the actual project directory which was created using the Django admin 'startproject' command.

As this example requires static files, we setup the Django settings file to define the location of a directory to keep the static files:

STATIC_ROOT = os.path.join(BASE_DIR, 'htdocs')
STATIC_URL = '/static/'
and then run the Django admin 'collectstatic' command. The 'collectstatic' command copies all the static file assets from any Django applications into the common 'htdocs' directory. This directory will then need to be mounted at the '/static' URL when we run Apache/mod_wsgi.

What we are going to do now is create a 'Dockerfile' in the same directory as the 'requirements.txt' file. This will be the root of our application when copied across to the Docker image.

Now, normally when Apache/mod_wsgi is run with the pre packaged image, the root directory of the application would be the current working directory for the application and would also be added to the Python module search path. For a Django site, what we really want is for the top level 'example' directory to be the current working directory and for it to be searched for Python modules. This is necessary so that the correct directory is searched for the Django settings file, which for this example has the module path 'example.settings'.

With the way Django lays out the project and creates the 'wsgi.py' file such that it is importable as 'example.wsgi', it can be preferable to use it as a module rather than as a WSGI script file. I'll get into the distinction another time, but importing it as a module does allow me to show off that it is possible to use a WSGI script file, a module or even a Paste style ini configuration file as the application entry point.
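For reference, the 'example/example/wsgi.py' file generated by 'startproject', which is what gets imported as the 'example.wsgi' module below, looks roughly like this:

# example/example/wsgi.py
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "example.settings")

application = get_wsgi_application()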

With all that said, we now actually create the 'Dockerfile' and in it we place:

FROM grahamdumpleton/mod-wsgi-docker:python-2.7-onbuild
CMD [ "--working-directory", "example", \
      "--url-alias", "/static", "example/htdocs", \
      "--application-type", "module", "example.wsgi" ]
The options to the 'CMD' instruction in this case serve the following purposes.

The '--working-directory' option says that the 'example' directory should actually be set to be the current working directory for the WSGI application when run. That directory will also be added automatically to the Python module search path so that the 'example' package which contains all the code can be found.

The '--url-alias' option says that the static files in the 'example/htdocs' directory should be mounted at the '/static' URL as was specified by the 'STATIC_URL' setting in the Django settings module.

The '--application-type' option says that rather than the WSGI application entry point being specified as a WSGI script file, it is defined by the listed module path. The default for this would have been 'script', with another possible value being 'paste' for a Paste style ini configuration file.

Finally, the 'example.wsgi' option is the Python module path for the 'wsgi.py' sub module in the project 'example' package.

As before we build the Docker image and then run it.

In a real Django site we would normally also have a database and possibly a key/value cache of some sort. Setting these up is beyond the scope of this post but would follow normal Docker practices. That or you might use a tool such as Fig to manage the linking and spin up of all the containers.

Setting up the Apache configuration file

In short there isn't one and you do not have to concern yourself with it.

This is a key point with being able to supply a pre packaged image for hosting using Apache/mod_wsgi. Since users often don't want to learn how to set up Apache properly, and that causes so much grief, I can completely remove the need for a developer to have to worry about it.

Instead I can provide a simplified set of command line options which implement the basic features that most sites would want to use when setting up Apache/mod_wsgi. The scripts underpinning the pre packaged Docker image can then dynamically generate the Apache configuration on the fly based on the specific options provided.

In doing this, I can apply my knowledge of how to set up Apache/mod_wsgi and ensure things are done correctly, securely and in a way that gives a good level of performance out of the box.

This doesn't mean you won't need to tune the settings to get Apache/mod_wsgi running well for your specific site, but the number of knobs you have to worry about is greatly reduced, as everything else is handled automatically.

But how does this all actually work?

So this is an initial introduction to just one of a number of new things I have been working on related to mod_wsgi. As I start to peel back some of the layers to explain how all this works I will start to introduce some of the other things I have been cooking up over the last year, including alluding to other things that hopefully I will get to down the track.

If you can't wait and want to play around with these docker images they can be found, along with some basic setup information, on the Docker hub.

If you need help in working out how to use them or have any other questions, then rather than try and post questions against this blog, go and ask your questions on the mod_wsgi mailing list. Please do not use StackOverflow or related sites as I don't answer questions there any more and no one there will know anything anyway since this is all so new.