Tuesday, September 22, 2015

Create a Responsive Django Website with REST APIs and AngularJS

                       Using AngularJS with Django




What is AngularJS and why is it important?

     AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you currently have to write. And it all happens within the browser, making it an ideal partner with any server technology.

Advantages of AngularJS
    

  • AngularJS provides the capability to create Single Page Applications in a very clean and maintainable way.
  • AngularJS provides data binding capability to HTML, giving the user a rich and responsive experience.
  • AngularJS code is unit testable.
  • AngularJS uses dependency injection and makes use of separation of concerns.
  • AngularJS provides reusable components.
  • With AngularJS, developers write less code and get more functionality.
  • In AngularJS, views are pure HTML pages, and controllers written in JavaScript do the business processing.
  • On top of everything, AngularJS applications can run on all major browsers and smartphones, including Android and iOS based phones/tablets.
Disadvantages of AngularJS

Though AngularJS comes with a lot of plus points, at the same time we should consider the following points:
  • Not secure − Being a JavaScript-only framework, applications written in AngularJS are not safe. Server-side authentication and authorization are a must to keep an application secure.
  • Not degradable − If your application's user disables JavaScript, they will just see the basic page and nothing more.
AngularJS Components

The AngularJS framework can be divided into the following three major parts:
  • ng-app − This directive defines and links an AngularJS application to HTML.
  • ng-model − This directive binds the values of AngularJS application data to HTML input controls.
  • ng-bind − This directive binds the AngularJS Application data to HTML tags.
Integration with Django 
       When I first tried AngularJS with Django I found it hard to learn, but once I got started the coding went great: I built a dynamic, responsive website and it works fine in all browsers and on mobile.
Create a project and create an app; if you face any issues, please go through the official Django documentation on Django projects.
Set up settings.py properly.
Here I am using django-tastypie for the REST framework. You need to install this package, add it to settings.py, and make sure you run collectstatic and migrate. If you face any issues, please go through the django-tastypie docs.
Example code:
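As a rough sketch of that setup (the app name 'jobs' and the exact INSTALLED_APPS entries shown here are assumptions; adjust them for your own project), settings.py ends up looking something like this after you pip install django-tastypie and then run migrate and collectstatic:

 # settings.py -- register tastypie and your app so their tables and static
 # files are picked up by "python manage.py migrate" and "collectstatic".
 INSTALLED_APPS = (
     'django.contrib.admin',
     'django.contrib.auth',
     'django.contrib.contenttypes',
     'django.contrib.sessions',
     'django.contrib.messages',
     'django.contrib.staticfiles',

     'tastypie',
     'jobs',   # hypothetical app name used throughout this example
 )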

Having read posts on the subject of using Django and Angular together, I felt most were reinventing the wheel, so to speak. Although the example code I have given is crude, it should highlight how I've been using them on projects.
Models
Let's start with a typical model.

/jobs/models.py

 
from django.db import models


class Job(models.Model):
    name = models.CharField(max_length=50)
    description = models.TextField(null=True, blank=True)
 
 
Alright, nothing special so far. All you have done is create a simple model to contain basic job details.

The REST API ( Tastypie )

AngularJS is built to consume web services, so you're going to need a way to expose the Job model you just created.
Django has a good set of choices for creating RESTful APIs. TastyPie is an awesome web service framework built for Django. It's incredibly powerful, yet easy to set up and use. However, personal preference aside, the same results could be achieved using Django REST framework, or even by constructing your own API responses directly in Django. The choice is entirely yours. For the purposes of this tutorial we'll be using TastyPie.
If you're not familiar with TastyPie, head over to the documentation. I won't go into detail regarding installation; I'll assume you've set up and added TastyPie to your installed applications and are ready to go.
First, you need to create a resource for your Jobs. TastyPie uses the concept of 'resources'; it describes them as intermediaries between the end user and objects, in this case the Job model.
Start by creating the appropriate resource for the Job Model:

 from tastypie.authentication import Authentication
 from tastypie.authorization import Authorization
 from tastypie.resources import ModelResource

 from .models import Job


 class JobResource(ModelResource):
     """
     API Facet
     """
     class Meta:
         queryset = Job.objects.all()
         resource_name = 'job'
         allowed_methods = ['post', 'get', 'patch', 'delete']
         authentication = Authentication()
         authorization = Authorization()
         always_return_data = True
 
 
From memory, TastyPie's documentation suggests naming the file api.py within your application. This is also my preference, but it's not mandatory. You can name the Python file whatever you like, but it's nice to keep consistency.
There are a few things going on in JobResource that are beyond the scope of this tutorial, but I would just like to draw attention to how JobResource inherits from ModelResource. You want to use TastyPie with Django's ORM (the Job model), and extending this means that many of the API fundamentals are handled for you.
TastyPie can handle non-ORM data too. By extending directly from Resource you still get all the API goodies TastyPie has to offer, but without being tied to the ORM. This is particularly useful when making calls to a non-ORM, NoSQL data store, as described in the documentation.
So far you have created the model (Job) and a way for the end user to interface with it. Next, you need a way to connect the resource to an actual URL that will eventually allow AngularJS to consume it. You do this in Django by hooking it up to the URLconf. Simply instantiate the resource in your Django URLconf and then hook up the URLs:

 from django.conf.urls import include, patterns

 from tastypie.api import Api

 from .apps.your_app.api import JobResource

 v1_api = Api(api_name='v1')
 v1_api.register(JobResource())

 urlpatterns = patterns('',

      (r'^api/', include(v1_api.urls)),
 )
 
The 'resource_name' attribute specified in JobResource is the endpoint of the URL. With that you now have a working API with the resource endpoint 'job'. Check it's all working by running your local server, then visiting http://127.0.0.1:8000/api/v1/job/?format=json in your browser (the 'v1' segment comes from the api_name you registered above).
You now have a working API for your Job model. Easy.
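If you prefer checking from a script instead of the browser, a quick sanity check with the requests library might look like the sketch below. It assumes the local dev server is running and the 'job' resource is registered under /api/v1/ as above:

 import requests

 # Ask the TastyPie endpoint for JSON; a list endpoint returns 'meta' and 'objects'.
 response = requests.get('http://127.0.0.1:8000/api/v1/job/', params={'format': 'json'})
 print(response.status_code)        # expect 200
 print(response.json()['objects'])  # the serialized Job records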
Forms
Before you begin diving into AngularJS, you are going to need to create a Job form using Django's forms framework. The Job form will later allow you to edit Jobs in the single page application. I know what you're thinking: "why in Django?"
One of Django's design philosophies is "Don't repeat yourself" (DRY), so it doesn't make sense to build forms in HTML for AngularJS and then again in Django, especially when Django does such a good job at this. You may also already have several forms you want to convert, so why repeat the process? Enter django-angular. This is one cool package you will be glad you came across (I know I was).
Quote: "Django-Angular is a collection of utilities, which aim to ease the integration of Django with AngularJS by providing reusable components."
Again, I'm not going to go into any details regarding the setup and installation here. I suggest you head over and check out Django-Angular right away! Suffice to say, one of its many tricks is to allow you to use Django forms, and thus their form validation, within AngularJS. Combine this with a package such as 'crispy forms' and you have a powerful all-in-one solution. This is why I love the Django framework and its community.

 from django import forms

 from crispy_forms.helper import FormHelper
 from djangular.forms import NgFormValidationMixin, NgModelFormMixin, AddPlaceholderFormMixin

 from .models import Job


 class JobForm(NgModelFormMixin, forms.ModelForm):
     """
     Job Form with a little crispy forms added!
     """
     def __init__(self, *args, **kwargs):
         super(JobForm, self).__init__(*args, **kwargs)
         setup_bootstrap_helpers(self)

     class Meta:
         model = Job
         fields = ('name', 'description',)


 def setup_bootstrap_helpers(object):
     object.helper = FormHelper()
     object.helper.form_class = 'form-horizontal'
     object.helper.label_class = 'col-lg-3'
     object.helper.field_class = 'col-lg-8'
On to AngularJS
For simplicity you're going to create 3 new templates using the following structure:
 templates
    jobs/index.html
    jobs/new.html
 base.html
 
 This assumes you have a Job app set up and installed. Your base template will look something like this:
/jobs/base.html

 <!DOCTYPE html>
 <html>
 <head>
     <meta charset="utf-8">
     <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.0.2/css/bootstrap.min.css" rel="stylesheet">
 
     <script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.7/angular.js"></script>
     <script src="/angular-ui-router.min.js"></script>
     <script type="text/javascript" src="http://cdn.jsdelivr.net/restangular/latest/restangular.js"></script>
 
 </head>
 <body>
    {% block content %}{% endblock content %}
    {% block extra_javascript %}{% endblock extra_javascript %}
 </body>
 </html>
 
Django-Angular does offer some nice template tags which will include the necessary JavaScript for you. I recommend using a content distribution network (CDN) to load the necessary files where possible; doing so gives obvious geographic and bandwidth advantages.
From here you need to create a single page template that will be served by your Django project. The index.html will serve as the main page for the single page application and can later be used to serve all your CRUD views for Jobs.
/jobs/index.html

{% extends "base.html" %}
 {% load i18n %}
 {% block content %}
 <div class="container content" ng-app="JobApp">
     <div ui-view >Loading...</div>
 </div>
 {% endblock content %}
 {% block extra_javascript %}
 <script src="{{ STATIC_URL }}/javascript/app.js"></script>
 {% endblock extra_javascript %}
 
/javascript/app.js

var app = angular.module('JobApp', [
     'ui.router',
     'restangular'
 ])
 
 app.config(function ($stateProvider, $urlRouterProvider, RestangularProvider) {
     // For any unmatched url, send to /route1
     $urlRouterProvider.otherwise("/");
     $stateProvider
         .state('index', {
 
             url: "/",
             templateUrl: "/static/html/partials/_job_list.html",
             controller: "JobList"
         })
 
        .state('new', {
 
             url: "/new",
             templateUrl: "/jobs/job-form",
             controller: "JobFormCtrl"
         })
 })
 
 app.controller("JobFormCtrl", ['$scope', 'Restangular', 'CbgenRestangular', '$q',
 function ($scope, Restangular, CbgenRestangular, $q) {
 
 
 }])// end controller
 
 
The template and JS above are very simple, inheriting from the base template. There are a few attributes you may not have seen before and will need to understand.
The first of these is ng-app='JobApp'. Without this tag, the AngularJS process does not start. This directive tells AngularJS which element is the root element of the application. Anything you add inside this element will be part of the template managed by AngularJS.
Next, look at the script you have included in the index.html. This app.js script defines the angular module. An Angular module is a collection of functions that are run when the application is 'booted'.

 var app = angular.module('JobApp', [
 
The line above creates the module called 'JobApp'. In the index.html you already referenced this module using the ng-app='JobApp' attribute. What you have basically done here is tell AngularJS that you want app.js to own everything inside that element.
In fact, you could set ng-app on any element in the DOM. For example, if you didn't want a part of the template controlled by Angular you could do this:

 <h2>I am not inside an AngularJS app</h2>
 <div ng-app="embeddedApp">
   <h3>Inside an AngularJS app</h3>
 </div>
app.config in app.js also shows the beginnings of your URL routing. AngularJS supplies URL routing by default via the $route service in Angular core, but it is fairly basic and has some limitations.
One of the modules you have included is AngularUI Router ('ui.router'). AngularUI Router is an alternative routing framework for AngularJS which is organised around states; states may optionally have URLs, as well as other behaviour, attached.
You have defined just two states in this tutorial, 'index' and 'new', but you could include lots of different states for your application, and hopefully you're having a lightbulb moment right now. You can even add a default behaviour for when no state is matched:

  $urlRouterProvider.otherwise("/");
     $stateProvider
         .state('index', {
 
             url: "/",
             templateUrl: "static/html/somepage.html",
             controller: "SomeController"
         })
 
 
If unfamiliar with this then I suggest reading up on AngularUI Router when you have completed this tutorial.
The last element within index.html you should understand is 'ui-view'. This is part of the AngularUI Router module too. The ui-view directive tells $state where to render the template configured in each state's templateUrl.
The final template you will create is /jobs/new.html. This will hold the basic form you created earlier using Django-Angular.

 {% load crispy_forms_tags %}
 {% crispy JobForm %}
 <button type="button" class="btn btn-default"  ng-click="submitJob()">Create</button>
 
Now you just need the view and URL to connect up the form.
/jobs/views.py

 from django.views.generic import TemplateView

 from .forms import JobForm


 class JobFormView(TemplateView):
     template_name = "jobs/new.html"

     def get_context_data(self, **kwargs):
         context = super(JobFormView, self).get_context_data(**kwargs)
         context.update(JobForm=JobForm())
         return context
 
/jobs/urls.py

 from django.conf.urls import patterns, url
 from django.contrib.auth.decorators import login_required

 from .views import JobFormView

 urlpatterns = patterns('',

                         url(r'^job-form/$',
                            login_required(JobFormView.as_view()),
                            name='job_form'),

 )
 
 
Now in your browser navigate to http://127.0.0.1:8000/job/#new and you should see the job form in your new single page application.
Our last step is to post the form data when submitJob is clicked. You are going to change the controller; the example below uses Restangular.
app.controller("JobFormCtrl", ['$scope', 'Restangular', 'CbgenRestangular', '$q',
 function ($scope, Restangular, CbgenRestangular, $q) {
 
    $scope.submitJob = function () {
       var post_update_data = create_resource($scope, CbgenRestangular);
       $q.when(post_update_data.then(
                         function (object) {
                             // success!
                         },
 
                         function (object){
                             // error!
                             console.log(object.data)
                         }
                            
                     ))
                 }
 
 }])// end controller
 
 app.factory('CbgenRestangular', function (Restangular) {
         return Restangular.withConfig(function (RestangularConfigurer) {
             RestangularConfigurer.setBaseUrl('/api/v1');
         });
     })
 
 populate_scope_values = function ($scope) {
     return {name: $scope.name, description: $scope.description };
 },
 
 create_resource = function ($scope, CbgenRestangular) {
 var post_data = populate_scope_values($scope)
     return CbgenRestangular.all('job').post(post_data)
 },
 
 

Thursday, September 17, 2015

Python Script To Monitor Site Uptime


                      Python Script To Monitor Site Uptime

I wrote the following script in an attempt to monitor my clients' sites' uptime; essentially, if a site goes down for whatever reason, I will be notified via email. This doesn't include sites hosted by ourselves, as they are monitored already; this is for sites where we only do consulting and they are hosted by others.

The reason I decided to make this was because I happened to be reviewing a client's site while it went down (I wasn't doing anything but viewing the source from the browser!). Anyway, I notified the client and their development team, and neither was aware that the site had gone down, so I potentially saved some losses as it was quickly put back online.

The script itself, although it looks simple enough, was admittedly a little tricky; it uses multithreading in order to keep both loops running simultaneously.

The first function, email_sender(), is what it sounds like: it sends emails. This is powered by Gmail, so you need to add the email address and password of the account you wish to send the notifications from. You will likely want to authorise the server you are running the script from: start by trying to send an email from it; if it fails, go to this link and authorise it, then send it again within 10 minutes and Google will whitelist it (you'll need to be signed in).

The next function, site_up(), runs through the sites you list and checks each one, looking for a 200 status response code. If it receives anything else, it passes the site to a temporary dictionary that the second function is watching and deletes it from the main dictionary. After 15 minutes of sitting in the temporary dictionary the site is checked again; if it is still returning anything other than a 200 response, an email is fired to the corresponding email address alerting you that there is an issue (it's set up like this so you can include colleagues with different email addresses). Every 15 minutes it checks whether the site is back up or not, each time sending an email. Once the site is back up, another email is fired saying it is once again live; the site is then deleted from the temporary dictionary and added back into the main pool.

The site_up() function will continue monitoring all the other sites even when a site goes down and enters the site_down() monitoring state.

Code:
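The original listing did not make it into this post, so here is a minimal sketch of the approach described above. The configuration values (GMAIL_USER, SITES, the 60-second sweep and so on) are assumptions for illustration, and the real script may be structured differently:

import smtplib
import threading
import time
from email.mime.text import MIMEText

import requests

# Hypothetical configuration -- replace with your own Gmail account,
# the sites to watch and the address to notify for each one.
GMAIL_USER = 'you@gmail.com'
GMAIL_PASSWORD = 'your-password'
SITES = {
    'http://example.com': 'you@yourcompany.com',
    'http://example.org': 'colleague@yourcompany.com',
}
CHECK_INTERVAL = 60        # seconds between sweeps of the main pool
RETRY_INTERVAL = 15 * 60   # 15 minutes between re-checks of a down site

down_sites = {}            # temporary dictionary of sites that failed a check
lock = threading.Lock()


def email_sender(recipient, subject, body):
    """Send a notification email through Gmail."""
    msg = MIMEText(body)
    msg['Subject'] = subject
    msg['From'] = GMAIL_USER
    msg['To'] = recipient
    server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
    server.login(GMAIL_USER, GMAIL_PASSWORD)
    server.sendmail(GMAIL_USER, [recipient], msg.as_string())
    server.quit()


def is_up(url):
    """Return True if the site answers with a 200 status code."""
    try:
        return requests.get(url, timeout=10).status_code == 200
    except requests.RequestException:
        return False


def site_up():
    """Sweep the main pool and move any failing site to the temporary dict."""
    while True:
        for url in list(SITES):
            if not is_up(url):
                with lock:
                    down_sites[url] = SITES.pop(url)
        time.sleep(CHECK_INTERVAL)


def site_down():
    """Re-check downed sites every 15 minutes, emailing until they recover."""
    while True:
        time.sleep(RETRY_INTERVAL)
        for url in list(down_sites):
            recipient = down_sites[url]
            if is_up(url):
                email_sender(recipient, 'Site back up', url + ' is once again live.')
                with lock:
                    SITES[url] = down_sites.pop(url)
            else:
                email_sender(recipient, 'Site still down', url + ' is not returning a 200 response.')


if __name__ == '__main__':
    checker = threading.Thread(target=site_up)
    watcher = threading.Thread(target=site_down)
    checker.daemon = watcher.daemon = True
    checker.start()
    watcher.start()
    while True:   # keep the main thread alive while the worker threads run
        time.sleep(1)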

Wednesday, September 16, 2015

Easy Web Scraping with Python


             Easy Web Scraping with Python



The Tools

There are two basic tasks that are used to scrape web sites:
  1. Load a web page to a string.
  2. Parse HTML from a web page to locate the interesting bits.
Python offers two excellent tools for the above tasks. I will use the awesome requests to load web pages, and BeautifulSoup to do the parsing.
We can put these two packages in a virtual environment:
$ mkdir pycon-scraper
$ virtualenv venv
$ source venv/bin/activate
(venv) $ pip install requests beautifulsoup4
If you are using Microsoft Windows, note that the virtual environment activation command above is different, you should use venv\Scripts\activate.

Basic Scraping Technique

The first thing to do when writing a scraping script is to manually inspect the page(s) to scrape to determine how the data can be located.
To begin with, we are going to look at the list of PyCon videos at http://pyvideo.org/category/50/pycon-us-2014. Inspecting the HTML source of this page we find that the structure of the video list is more or less as follows:
<div id="video-summary-content">
    <div class="video-summary">    <!-- first video -->
        <div class="thumbnail-data">...</div>
        <div class="video-summary-data">
            <div>
                <strong><a href="#link to video page#">#title#</a></strong>
            </div>
        </div>
    </div>
    <div class="video-summary">    <!-- second video -->
        ...
    </div>
    ...
</div>
So the first task is to load this page, and extract the links to the individual pages, since the links to the YouTube videos are in these pages.
Loading a web page using requests is extremely simple:
import requests
response = requests.get('http://pyvideo.org/category/50/pycon-us-2014')
That's it! After this function returns, the HTML of the page is available in response.text.
The next task is to extract the links to the individual video pages. With BeautifulSoup this can be done using CSS selector syntax, with which you may be familiar if you work on the client side.
To obtain the links we will use a selector that captures the <a> elements inside each <div> with class video-summary-data. Since there are several <a> elements for each video we will filter them to include only those that point to a URL that begins with /video, which is unique to the individual video pages. The CSS selector that implements the above criteria is div.video-summary-data a[href^=/video]. The following snippet of code uses this selector with BeautifulSoup to obtain the <a> elements that point to video pages:
import bs4
soup = bs4.BeautifulSoup(response.text)
links = soup.select('div.video-summary-data a[href^=/video]')
Since we are really interested in the link itself and not in the <a> element that contains it, we can improve the above with a list comprehension:
links = [a.attrs.get('href') for a in soup.select('div.video-summary-data a[href^=/video]')]
And now we have a list of all the links to the individual pages for each session!
The following script shows a cleaned up version of all the techniques we have learned so far:
import requests
import bs4

root_url = 'http://pyvideo.org'
index_url = root_url + '/category/50/pycon-us-2014'

def get_video_page_urls():
    response = requests.get(index_url)
    soup = bs4.BeautifulSoup(response.text)
    return [a.attrs.get('href') for a in soup.select('div.video-summary-data a[href^=/video]')]

print(get_video_page_urls())
If you run the above script you will get a long list of URLs as a result. Now we need to parse each of these to get more information about each PyCon session.

Scraping Linked Pages

The next step is to load each of the pages in our URL list. If you want to see how these pages look, here is an example: http://pyvideo.org/video/2668/writing-restful-web-services-with-flask. Yes, that's me, that is one of my sessions!
From these pages we can scrape the session title, which appears at the top. We can also obtain the names of the speakers and the YouTube link from the sidebar that appears on the right side below the embedded video. The code that gets these elements is shown below:
def get_video_data(video_page_url):
    video_data = {}
    response = requests.get(root_url + video_page_url)
    soup = bs4.BeautifulSoup(response.text)
    video_data['title'] = soup.select('div#videobox h3')[0].get_text()
    video_data['speakers'] = [a.get_text() for a in soup.select('div#sidebar a[href^=/speaker]')]
    video_data['youtube_url'] = soup.select('div#sidebar a[href^=http://www.youtube.com]')[0].get_text()
A few things to note about this function:
  • The URLs returned from the scraping of the index page are relative, so the root_url needs to be prepended.
  • The session title is obtained from the <h3> element inside the <div> with id videobox. Note that [0] is needed because the select() call returns a list, even if there is only one match.
  • The speaker names and YouTube links are obtained in a similar way to the links in the index page.
Now all that remains is to scrape the views count from the YouTube page for each video. This is actually very simple to write as a continuation of the above function. In fact, it is so simple that while we are at it, we can also scrape the likes and dislikes counts:
def get_video_data(video_page_url):
    # ...
    response = requests.get(video_data['youtube_url'])
    soup = bs4.BeautifulSoup(response.text)
    video_data['views'] = int(re.sub('[^0-9]', '',
                                     soup.select('.watch-view-count')[0].get_text().split()[0]))
    video_data['likes'] = int(re.sub('[^0-9]', '',
                                     soup.select('.likes-count')[0].get_text().split()[0]))
    video_data['dislikes'] = int(re.sub('[^0-9]', '', 
                                        soup.select('.dislikes-count')[0].get_text().split()[0]))
    return video_data
The soup.select() calls above capture the stats for the video using selectors for the specific id names used in the YouTube page. But the text of the elements need to be processed a bit before it can be converted to a number. Consider an example views count, which YouTube would show as "1,344 views". To remove the text after the number the contents are split at whitespace and only the first part is used. This first part is then filtered with a regular expression that removes any characters that are not digits, since the numbers can have commas in them. The resulting string is finally converted to an integer and stored.
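To make that concrete, here is the same transformation applied to a standalone string ('1,344 views' is just an illustrative value, not output captured from YouTube):

import re

raw = '1,344 views'                             # what YouTube might display
first_part = raw.split()[0]                     # '1,344' -- drop the trailing word
digits_only = re.sub('[^0-9]', '', first_part)  # '1344'  -- strip the comma
print(int(digits_only))                         # 1344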
To complete the scraping the following function invokes all the previously shown code:
def show_video_stats():
    video_page_urls = get_video_page_urls()
    for video_page_url in video_page_urls:
        print(get_video_data(video_page_url))

Parallel Processing

The script up to this point works great, but with over a hundred videos it can take a while to run. In reality we aren't doing that much work ourselves; what takes most of the time is downloading all those pages, and during that time the script is blocked. It would be much more efficient if the script could run several of these download operations simultaneously, right?
Back when I wrote the scraping article using Node.js the parallelism came for free with the asynchronous nature of JavaScript. With Python this can be done as well, but it needs to be specified explicitly. For this example I'm going to start a pool of eight worker processes that can work concurrently. This is surprisingly simple:
from multiprocessing import Pool

def show_video_stats(options):
    pool = Pool(8)
    video_page_urls = get_video_page_urls()
    results = pool.map(get_video_data, video_page_urls)
The multiprocessing.Pool class starts eight worker processes that wait to be given jobs to run. Why eight? It's twice the number of cores I have on my computer. While experimenting with different sizes for the pool I found this to be the sweet spot: fewer than eight makes the script run slower, and more than eight does not make it go any faster.
The pool.map() call is similar to the regular map() call in that it invokes the function given as the first argument once for each of the elements in the iterable given as the second argument. The big difference is that it sends all these to run by the processes owned by the pool, so in this example eight tasks will run concurrently.
The time savings are considerable. On my computer the first version of the script completes in 75 seconds, while the pool version does the same work in 16 seconds!

The Complete Scraping Script

The final version of my scraping script does a few more things after the data has been obtained.
I've added a --sort command line option to specify a sorting criterion, which can be views, likes or dislikes. The script will sort the list of results in descending order by the specified field. Another option, --max, takes the number of results to show, in case you just want to see a few entries from the top. Finally, I have added a --csv option which prints the data in CSV format instead of an aligned table, to make it easy to export the data to a spreadsheet.
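As a rough sketch of how those options might be wired up (the exact flags and defaults in the complete script linked below may differ), an argparse setup could look like this:

import argparse

def parse_args():
    parser = argparse.ArgumentParser(description='Show statistics for PyCon sessions.')
    parser.add_argument('--sort', choices=['views', 'likes', 'dislikes'], default='views',
                        help='field to sort the results by, in descending order')
    parser.add_argument('--max', type=int, default=None,
                        help='show at most this many results')
    parser.add_argument('--csv', action='store_true', default=False,
                        help='print the data in CSV format instead of an aligned table')
    return parser.parse_args()

The parsed values can then drive the output, for example sorting with results.sort(key=lambda video: video[options.sort], reverse=True) and slicing with results[:options.max].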
The complete script is available for download at this location: https://gist.github.com/renjithsraj/9fc25b13ec875d128973
Below is an example output with the 25 most viewed sessions at the time I'm writing this:
Conclusion
I hope you have found this article useful as an introduction to web scraping with Python. I have been pleasantly surprised by Python: the tools are robust and powerful, and the fact that the asynchronous optimizations can be left for the end is great compared to JavaScript, where there is no way to avoid working asynchronously from the start.