
Wednesday, June 12, 2013

Getting Started With Node.js Modules and npm (for developers)

In spite of node.js' wild popularity, finding good, informative docs and tutorials on how to develop and distribute apps for the platform can be a chore. Google searches on the matter tend to return either out-of-date or end-user-oriented results. So, as a service to the community and for my own edification, I'm going to show you how to get started developing node.js modules and packaging them with npm.

This tutorial assumes you have node.js version 0.8.1+ and npm version 1.2.18+ properly installed, and that you are working in a POSIX-ish environment. It also assumes the reader has at least a passing knowledge of JavaScript.


"Hello Google"

In order to better illustrate the steps, we are going to develop a super trivial app that sends a search query to Google, retrieves the results in JSON format and then prints them to the screen. So, let's get moving:

1. Let's create a directory and cd into it:

   $ mkdir hello-google && cd $_

2. Now the crucial step to get your node module started:

   $ npm init

This will create a file named package.json which should look something like this:

{
  "name": "hello-google",
  "version": "0.0.1",
  "description": "A trivial app to illustrate node.js dev",
  "main": "main.js",
  "scripts": {
    "test": ""
  },
  "dependencies": {},
  "repository": "",
  "author": "Ruben Orduz",
  "license": "MIT"
}

3. Install the request module (easier to use than the default http client):

   $ npm install request

4. Let's add request to the list of dependencies in package.json:

{
  "name": "hello-google",
  "version": "0.0.1",
  "description": "A trivial app to illustrate node.js dev",
  "main": "main.js",
  "scripts": {
    "test": ""
  },
  "dependencies": {
    "request": ""
  },
  "repository": "",
  "author": "Ruben Orduz",
  "license": "MIT"
}

5. Let's create main.js and write some code (you can use your text editor of choice in place of 'open'):

   $ touch main.js && open $_

6. Now copy and paste the following code in main.js:

7. Now, let's test that everything is working (assuming you are inside the hello-google directory):

   $ cd ..
   $ node hello-google/

In a few seconds, you should see something along the lines of:

   PRISM (surveillanc -> http://en.wikipedia.org/w

   Nine Companies Tied to http://www.usnews.com/new

   What Is PRISM? - G -> http://gizmodo.com/what-i

   NSA slides explain the http://www.washingtonpost

8. If you have never submitted any packages to npm before, you must register with the following command:

   $ npm adduser

It will prompt you with a few questions, and within seconds your user and machine will be authorized to publish to npm's public servers.

9. Once you are sure you want to publish your module for public consumption, run the following command:

   $ npm publish

10. To make sure your module can be installed and run:

   $ npm install hello-google

11. Let's test it through node:

   $ echo "r = require('hello-google');" > test.js && node test.js

Note that the name the module is published under is not the name of the directory; it's the name declared in package.json.

Saturday, July 7, 2012

How-To: Setting Up Multiple Custom Domains for a GAE Application

Google's platform-as-a-service (PaaS), Google App Engine, can be a great solution for web applications that require massive scaling (or the potential thereof), with scaling that is largely transparent to both developers and end-users. With its free usage tier, App Engine also serves as a great platform for proof-of-concepts and "live" testing grounds. It currently has a fairly robust list of services to accommodate most web application needs, and the team is on a fairly aggressive release schedule, fixing and adding features on a regular basis.

In spite of all the work they, App Engine, have put into making the development and deployment of web applications easy, they have somewhat forgotten one of the most important aspects of the whole process: custom domain setup. Not to say they have completely neglected the issue, but they have delegated all domain setup to Google Apps. This coupling means that you can't set up a custom domain for your app unless you have that domain somehow "registered" with Google Apps. It also makes the workflow to add a custom domain to your app rather cumbersome, requiring a Google Apps account, signing in, adding, verifying, etc. What if you wanted to assign more than one domain name to your app? The only documentation I could find was for adding a single custom domain to an app, and it doesn't really explain the process or the requirements. To solve this problem I tried pretty much every trick in the book (and documentation) to no avail. After trying some clever permutations of steps and work-arounds, I was able to set up two custom domains for the same app with only one Google Apps account. Be advised that this work-around won't solve the problem of custom SSL certificates for your domains; for more info on this topic, subscribe to their Google Groups group (google-appengine), which is usually brimming with Q&As regarding SSL for applications. Below are the steps necessary to add multiple custom domains to the same application.

What you need:
  • An active App Engine application
  • You must be an "owner" of the App Engine app.
  • A Google Apps account (even if it's for a different domain) and you must have a domain admin or super admin role.
  • Access to your domain's DNS settings.
Steps:
Interestingly enough, the simplest way to add custom domains to an App Engine app is to not deal with App Engine at all. All the steps below are accomplished from within Google Apps (and your domain registrar), not App Engine.

  1. Log into your Google Apps control panel
  2. Click on Domain Settings
  3. Click on Domain Names sub tab
  4. Click on Add a domain or domain alias
  5. (Assuming the Google Apps domain is not the one you want associated with your GAE app) select Add a domain alias of [domain.name]
  6. Enter the domain alias in the text input below it
  7. Click on Continue and verify domain ownership
  8. After you verify ownership of the domain and the domain alias has been successfully added, click on Settings on the top navigation bar of the Google Apps control panel.
  9. On the left-hand vertical bar, click on the GAE application you want to associate the domains to (usually they are labeled as: <appname (App Engine)>)
  10. In the Web address section, click on Add new URL
  11. A text box and a dropdown menu will show up, prefaced by "http://". Enter the subdomain (i.e. www or such). This implies you cannot set up naked domains (i.e. http://domain.name) for App Engine apps, at least not directly.
  12. Finally select the domain name from the dropdown menu and click on Add 
  13. Setup your domain's CNAME records as instructed.
You can repeat this process to add as many custom domain names to your GAE application as you wish. Now, as said above, you cannot add naked domains to your GAE apps; however, there's a trick you can use to accomplish the same effect. The specific steps change from registrar to registrar and also bear in mind that some DNS services take anywhere from a few minutes to a few hours for changes to be applied, but in essence you'd want to:
  1. Point your www.domain.name DNS to the CNAME that Google Apps provided you
  2. Set URL-Forwarding on the naked domain to forward requests to www.domain.name
Assuming everything went as expected, you should now be able to reach your App Engine app at both http://www.domain.name and http://domain.name.
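In BIND-style zone-file terms, the two steps above look roughly like this. This is a sketch with placeholder names: the exact CNAME target comes from Google Apps, and URL forwarding is a registrar feature rather than a standard DNS record.

```
; www points at the host Google Apps gives you (target is illustrative)
www    IN  CNAME  ghs.google.com.

; The naked domain cannot be a CNAME. Use your registrar's URL-forwarding
; feature to send domain.name -> www.domain.name instead.
```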

I'll close by saying that most of the steps and work-arounds above would not be necessary if App Engine had its own domain/DNS management service; here's hoping they'll add such a service to App Engine soon.

Monday, December 19, 2011

Setting up Queue-size-based auto scaling groups in AWS

One of AWS' coolest features is the ability to scale in and out according to custom criteria. It can be based on machine load, number of requests, and so forth. For the sake of this tutorial, we are going to focus on queue-size-based auto scaling: once we have a certain amount of messages in a queue, an alarm will go off and trigger the auto scaling policy. This feature can also be used in conjunction with some sort of load balancing mechanism. In the AWS ecosystem, we can accomplish this using some of their off-the-shelf services, namely SQS, CloudWatch, and AutoScaling. As of this writing, AutoScaling doesn't have console/UI access to auto scaling groups or any of their underlying requirements, so we are going to use boto, a great and easy-to-use Python-based AWS API library.

For all the steps that follow, please remember that, as a general rule, AWS services are region-specific. That is to say, these services are only visible and usable within the region in which they were created. Also important to bear in mind is that each of these services has a cost associated with it as per AWS pricing (see each of the product pages above for pricing details).

First, let's set up the queue we're going to post messages to, receive messages from, and base our auto scaling on. To do this, log into your AWS management console, head to the SQS tab and click on Create New Queue. A modal window will pop up like the one shown below:

[Screenshot: the Create New Queue dialog]

A bit on the parameters of the queue creation:

Default Visibility Timeout: The number of seconds (up to 12 hours) your messages will remain invisible once the message has been delivered.

Message Retention Period: The time (up to 14 days) after which your messages will be automatically deleted if they aren't deleted by the receiver(s).

Maximum Message Size:  The maximum queue message size (in KB, up to 64).

Delay Delivery: The time (in seconds) that the queue will hold a message before it is delivered for the first time.
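The same queue can also be created from code. The sketch below is mine, not from the post: queue_attributes maps the four console fields above to their SQS attribute names and units, and create_scaling_queue (a name I chose) applies them via boto; the boto call needs real AWS credentials to actually run.

```python
def queue_attributes(visibility_timeout_s=30, retention_days=4,
                     max_size_kb=64, delay_s=0):
    """Map the four console fields above to SQS attribute names/units."""
    return {
        'VisibilityTimeout': str(visibility_timeout_s),         # seconds, up to 12 h
        'MessageRetentionPeriod': str(retention_days * 86400),  # seconds, up to 14 d
        'MaximumMessageSize': str(max_size_kb * 1024),          # bytes, up to 64 KB
        'DelaySeconds': str(delay_s),
    }

def create_scaling_queue(name, **kwargs):
    # Deferred import: requires boto and AWS credentials to actually run.
    import boto.sqs
    conn = boto.sqs.connect_to_region('us-west-2')
    queue = conn.create_queue(name)
    for attr, value in queue_attributes(**kwargs).items():
        queue.set_attribute(attr, value)
    return queue
```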

Our next steps will include setting up the auto scaling group (and all the underlying services) and then setting up CloudWatch to handle the monitoring and issuing of alarms.

So, assuming you have Python and boto already installed, we're going to create a script to do the heavy lifting for us. The way the API and auto scaling work in boto is as follows: first you need a Launch Configuration (LC). A Launch Configuration, as its name states, is metadata about what you want to launch every time the alarm is triggered (i.e. which AMI, security groups, kernel, userdata and so forth). Then you need an Auto Scaling Group (ASG). ASGs are the imaginary "containers" for your auto scaling instances and contain information about Availability Zones (AZs), LCs and group size parameters. Then, in order to actually do the scaling, you'll need at least one Scaling Policy (SP). SPs describe the desired scaling behavior of a group when certain criteria are met or an alarm is set off. The last piece of the puzzle is a CloudWatch alarm, which I will address later.

So, back to our script. First, import the necessary modules:
from boto.ec2.autoscale import AutoScaleConnection, LaunchConfiguration, AutoScalingGroup
from boto.ec2.regioninfo import RegionInfo
from boto.ec2.autoscale.policy import AdjustmentType, MetricCollectionTypes, ScalingPolicy

As an aside: while in boto you can set your AWS credentials in a boto config file, I like having the credentials within the scripts themselves to make them more direct and explicit, but feel free to use the boto config if that's your preference.

First thing we need to do is to establish an auto scaling connection to our region of choice -- in this example, the Oregon region (aka us-west-2). To do so, we do as follows:
AWS_KEY = '[YOUR_AWS_KEY_ID_HERE]'
AWS_SECRET = '[YOUR_AWS_SECRET_KEY_HERE]'

reg = RegionInfo(name='us-west-2', endpoint='autoscaling.us-west-2.amazonaws.com')
conn = AutoScaleConnection(AWS_KEY, AWS_SECRET, region=reg, debug=0)

We then need to create the LC. In the code below I added many parameters for the sake of illustration, but not all of them are required by either AWS or boto; I believe the only required fields are name and image_id. Bear in mind, though, that if you choose to use the optional parameters, they need to be accurate, or else you'll get an error from the create launch configuration API request.
lc = LaunchConfiguration(name="LC-name", image_id="ami-12345678",
                         instance_type="m1.large", key_name="Your-Key-Pair-Name",
                         security_groups=['sg-12345678', 'sg-87654321'])
conn.create_launch_configuration(lc)

The next step is to set up the ASG. Choose your min and max size carefully, especially if your scenario will scale based on a queue that can be directly or indirectly DDoS attacked. While you wouldn't want your site to be unresponsive to your customers, you also wouldn't want would-be attackers to scale you up into a very hefty bill. So, as a good practice, set an upper bound on your scaling groups.
ag = AutoScalingGroup(group_name="your-sg-name",
                      availability_zones=['us-west-2a', 'us-west-2b'],
                      launch_config=conn.get_all_launch_configurations(names=['LC-name'])[0],
                      min_size=0, max_size=10)
conn.create_auto_scaling_group(ag)

We are almost done with the auto scaling setup; however, without a way to trigger auto scaling, all is for naught. To this end, AWS lets you set different scaling criteria in the form of Scaling Policies (SPs). Any self-respecting auto scaling scheme has some sort of symmetry, that is to say, for every scale up there's a scale down. If you don't have a scale down, chances are you won't be entirely happy with your monthly bill, and you'll be wasting resources/capacity. The way we set the SPs with boto is as follows:
sp_up = ScalingPolicy(name='AS-UPSCALE', adjustment_type='ChangeInCapacity',
                      as_name='your-sg-name', scaling_adjustment=1, cooldown=30)
conn.create_scaling_policy(sp_up)

sp_down = ScalingPolicy(name='AS-DOWNSCALE', adjustment_type='ChangeInCapacity',
                        as_name='your-sg-name', scaling_adjustment=-1, cooldown=30)
conn.create_scaling_policy(sp_down)

Before I continue on, I will say that the whole topic of SPs is, as of this writing, sparsely covered in AWS' documentation. I found some general information, but nothing at the level of detail desired by most people trying to understand SPs and their nuances.

Alright, if everything thus far has gone according to plan, we should be ready to move on to the next step. For this part, we will use the AWS management console. We could, of course, do it via the API, but I like to use the console whenever possible. So, log into the management console, click on the CloudWatch tab and make sure you are working in the right region.

On the left navigation bar, click on Alarms, then click on Create Alarm. A Create Alarm Wizard modal window will pop up. In the search field, next to the All Metrics dropdown, type "SQS". This will bring up the metrics associated with the queue we built at the beginning of this tutorial. For the sake of this exercise, click on NumberOfMessagesReceived (though you are welcome to try other options/metrics if you wish). After selecting the row, click Continue. Give it a name and description, and in the threshold section set it to ">= 10 for 5 minutes".

In the next step of the wizard, we are going to configure the actions to take once this criteria has been met. Set the "When Alarm State is" column to ALARM, set the "Take action" column to Auto Scaling Policy, and set the "Action details" to the scaling group we just created. A new dropdown menu will appear where you choose which policy to apply (see this screenshot -- sorry, but the image was too wide for this blog layout). This will be our up-scale policy.

To set up the down-scale step, on the last column of the configure actions step of the Create Alarm Wizard, click on "ADD ACTION". In this new row, select "OK" from the "When Alarm State is" dropdown menu; then, just as above, select "Auto Scaling Policy" from the "Take action" column dropdown menu; in the "Action details" dropdown select your AS group; and select your down-scaling policy from the policy dropdown. Click Continue. In the next step, check that your metrics, alarms and actions are correct. Finally, click on Create Alarm.
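For completeness, the same alarm can be created programmatically. This is my own sketch against boto's CloudWatch module, not part of the original post: the alarm name is a placeholder, the policy ARNs come from the ScalingPolicy objects created earlier, and the function needs AWS credentials to actually run.

```python
def alarm_spec(threshold=10, minutes=5, period=60):
    """Translate '>= 10 for 5 minutes' into CloudWatch alarm fields."""
    return dict(comparison='>=', threshold=threshold, period=period,
                evaluation_periods=minutes * 60 // period)

def create_queue_alarm(queue_name, scale_up_arn, scale_down_arn):
    # Deferred imports: needs boto and AWS credentials to actually run.
    from boto.ec2.cloudwatch import CloudWatchConnection
    from boto.ec2.cloudwatch.alarm import MetricAlarm

    alarm = MetricAlarm(name='queue-depth-alarm',        # placeholder name
                        namespace='AWS/SQS',
                        metric='NumberOfMessagesReceived',
                        statistic='Sum',
                        dimensions={'QueueName': queue_name},
                        alarm_actions=[scale_up_arn],    # ALARM -> up-scale policy
                        ok_actions=[scale_down_arn],     # back to OK -> down-scale
                        **alarm_spec())
    CloudWatchConnection().create_alarm(alarm)
```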

Now that we are completely done setting up the auto scaling, you might want to test it. The easiest way would be to send a couple hundred messages to the queue via the API/boto and watch it scale up, then delete the messages and watch it scale down -- but that is something I might address in a later post.
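A quick way to drive that test, sketched with boto (my code, not the post's; the boto calls need credentials and the queue from earlier to actually run):

```python
def batch_messages(n, prefix='load-test'):
    """Generate n distinct message bodies for the scale-up test."""
    return ['%s-%d' % (prefix, i) for i in range(n)]

def flood_queue(queue_name, n=200):
    # Deferred imports: require boto and AWS credentials to actually run.
    import boto.sqs
    from boto.sqs.message import Message
    conn = boto.sqs.connect_to_region('us-west-2')
    queue = conn.get_queue(queue_name)
    for body in batch_messages(n):
        queue.write(Message(body=body))
```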

Hope this tutorial was of help and easy to follow. For comments and suggestions, ping me via Twitter @WallOfFire





Wednesday, September 28, 2011

DIY Basic AWS EC2 Dashboard using Apache, Python, Flask and boto (PartII)

In part I of this tutorial we covered the basic stack setup as well as showing boto/Flask usage. In this part, I'll show how to handle posts, request instance details and rendering dynamic URLs.

So, arguably, you have your index page working as your EC2 instance dashboard already. Building on that, let's say you want to see the details of any of the instances on display. As per my explanation of the idiosyncrasies of AWS' API, in order to see such details you need two basic pieces of information: the region and the instance id. The way we are going to pass this information from the dashboard page to Flask is through URLs. Flask has pretty neat ad-hoc URL routing with replaceable variables that you can then use in your code. So, in this instance, in the dashboard page we are going to dynamically generate a link that contains both the region end-point and the specific instance id. If you look closely at the index.html template, the link looks as follows:
<a href="/details/{{region['Name']}}/{{instance['Id']}}">See Details</a>

So, now we need to tell Flask to "listen" to that URL pattern. To do so, add the following line to your [WEB_APP_NAME].py file (please bear in mind that this is for the sake of illustration only, so I'm putting code style aside):
@app.route("/details/<region_nm>/<instance_id>")

This tells Flask to match incoming requests to that URL and bind the incoming parameters to the variable names inside the angle brackets "<" ">". Right below that line, declare the method/function you are going to use, explicitly listing the parameters you expect:
def details(region_nm=None, instance_id=None):

That is it as far as Flask is concerned. Now that we presumably have all the data we need, we leverage boto to do the "hard work" for us. As seen in part I of this tutorial, whenever we need to issue calls to the AWS API, the first thing to do is start a connection to whatever region we are interested in. So, we go ahead and do so:
rconn = boto.ec2.connect_to_region(region_nm,aws_access_key_id=akeyid,aws_secret_access_key=seckey)

With that active regional connection, we then query the API for the details of a particular instance (which in this case we are going to use the instance_id passed in):
instance = rconn.get_all_instances([instance_id])[0].instances[0]

The API call above takes an array of instance ids (in our case, just one). It returns not the instances themselves, but an array of their parent reservations -- that's what the first [0] is for -- and each reservation holds an instances collection, which here contains only the one instance we requested. Once we have the instance object, it should be clear that you can extract any information you want or need (type, tags, state, etc.).
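Putting this section's pieces together, the details view could look like the sketch below. This is my own consolidation, hedged accordingly: the details.html template name, the instance_summary helper and the register_details_view factory are names I made up, and the Flask/boto calls need both libraries installed (plus credentials) to actually run.

```python
def instance_summary(inst):
    """Pure helper: pick the fields a details template might need."""
    return {'Id': inst.id,
            'Name': inst.tags.get('Name', ''),
            'State': inst.state,
            'Type': inst.instance_type}

def register_details_view(app, akeyid, seckey):
    # Deferred imports: needs Flask and boto installed to actually run.
    import boto.ec2
    from flask import render_template

    @app.route("/details/<region_nm>/<instance_id>")
    def details(region_nm=None, instance_id=None):
        rconn = boto.ec2.connect_to_region(region_nm,
                                           aws_access_key_id=akeyid,
                                           aws_secret_access_key=seckey)
        # Unwrap the reservation, as explained above.
        instance = rconn.get_all_instances([instance_id])[0].instances[0]
        return render_template('details.html',
                               instance=instance_summary(instance))
```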

Now what if we wanted to make changes to any of the instances, or wanted to, say, start/stop any of them? It's actually not unlike what we've been doing thus far: we tell Flask's route object what to look for, then use that machinery to get the data we want. For this step, as a matter of generally accepted good practice, we are going to send the information by submitting a form through a POST request. Our route should look something like this:
@app.route("/change", methods=['POST'])

We can now define our function to handle the request:
def change():

Within this method we can now leverage Flask's request object to get the form data, for instance:
instance_id = request.form['instance_id']
state_toggle = request.form['sstoggle']
toggle_change = request.form['type_change']

And finally, to make changes to an instance, all you need to do is modify the instance's attribute key-value pairs. Let's say we wanted to change the instance type; to do so, we simply change the value via boto's modify_attribute method as shown below:
instance.modify_attribute('instanceType', 'm1.large')

One thing to bear in mind is that, regrettably, the AWS API does not provide a "valid types" list for each instance. So, if you are dealing with a mix of 32- and 64-bit machines, it is possible to assign an instance a type it is not compatible with; be mindful of that. Since no list is provided by the API, you also need to hard-code the instance types. A reference to the official instance type list can be found here (pay attention to the API name field).

To change the instance state, however, you do not use the attributes collection directly. Instead, use the API calls provided by boto to start/stop/terminate:

instance.stop()
instance.start()
...
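Tying this post's POST-handling pieces together, a sketch of the change view. Again, this is my consolidation, not the post's code: the form field names follow the snippets above, while plan_changes, register_change_view, the get_instance hook and the redirect target are all my own assumptions.

```python
def plan_changes(form):
    """Pure helper: map the posted form fields to instance operations."""
    ops = []
    if form.get('type_change'):
        ops.append(('modify_attribute', ('instanceType', form['type_change'])))
    if form.get('sstoggle') in ('start', 'stop'):
        ops.append((form['sstoggle'], ()))
    return ops

def register_change_view(app, get_instance):
    # get_instance(instance_id) should return a boto Instance object;
    # deferred import so the sketch stays readable without Flask installed.
    from flask import request, redirect

    @app.route("/change", methods=['POST'])
    def change():
        instance = get_instance(request.form['instance_id'])
        for method, args in plan_changes(request.form):
            getattr(instance, method)(*args)
        return redirect('/')
```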


Hope this tutorial was of help. If you have questions or comments, feel free to ping me on Twitter @WallOfFire.

Tuesday, September 27, 2011

DIY Basic AWS EC2 Dashboard using Apache, Python, Flask and boto (Part I)

While Amazon Web Services offers a nice web-based UI to handle and manage EC2 instances, it may well be the case that you wish to give access to some of this functionality to more people in your organization, but you do not wish to provide them with full access to the AWS EC2 dashboard, or you wish to limit the type of API calls they can make (for instance, you might want to allow users to start/stop instances, but not launch/terminate them). Whatever your use case might be, you can create your own "in-house" EC2 dashboard with relative ease. Our software stack will consist of:

  • Apache (for basic authentication, SSL and WSGI, virtual hosts) with mod_ssl and mod_wsgi.

  • OpenSSL (for SSL and certificate generation).

  • Python.

  • Flask (py-based web services micro-framework).

  • boto (py-based AWS API library).

  • AWS KeyID/SecretKey credentials.

  • Admin/sudo privileges


Please note, if you do not need SSL/htpasswd, you can use Flask's bundled web server, which may be enough for casual in-house use; in this example, however, I will be using Apache. Also note: I'm not going to spend time on the installation of the packages or the SSL certificate generation above, as it should be fairly straightforward for anyone with minimal dev/sysadmin experience, and there are many well-written tutorials for setting up these tools floating around the 'net.

First we need to make sure your tools are working. Make sure Apache is working, make sure mod_ssl is working, etc. Open up a python prompt and try importing flask, boto and so forth. Once you are fairly confident your tools are good to go then let's get moving.

1. Create an Apache virtual host entry file specifying the SSL certificate location, port number, location of the wsgi file and other important parameters as shown below:

<VirtualHost *:443>
    ServerAdmin webmaster@localhost

    DocumentRoot /var/www/[WEB_APP_NAME]
    SSLEngine On
    SSLCertificateFile /path/to/certs/server.crt
    SSLCertificateKeyFile /path/to/certs/server.key

    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>

    WSGIDaemonProcess [WEB_APP_NAME] user=[APACHE_USER] group=[APACHE_USER_GROUP] threads=5
    WSGIScriptAlias / /var/www/[WEB_APP_NAME]/[WEB_APP_NAME].wsgi

    <Directory /var/www/[WEB_APP_NAME]>
        WSGIProcessGroup [WEB_APP_NAME]
        WSGIApplicationGroup %{GLOBAL}
        WSGIScriptReloading On
        Options Indexes FollowSymLinks MultiViews
        # important line for using htpasswd
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>

    # other settings here

</VirtualHost>

2. We then need to write the wsgi file telling mod_wsgi which app and instance it should start when a request comes along. It's a very simple file. Just make sure its name matches the one you provided in the vhost file above. Its contents should look something like this:
from [WEB_APP_NAME] import app as application

One thing to bear in mind is that this way of using wsgi requires that your web app be somewhere on your $PYTHONPATH. Save, close and restart Apache.

3. For this example, as said above, I'm working under the assumption that your authentication needs are very basic and can be fulfilled with Apache's htpasswd. Assuming you set up Apache correctly with the right mods, you can go to your application's root directory and tell htpasswd to create a password file (.htpasswd) with a username-password pair. You can do so by running the following command:
sudo htpasswd -c .htpasswd [USERNAME]

It will then prompt you for a password, which it will hash, and then create the .htpasswd file.

4. Next step is to setup your .htaccess file so that Apache knows when to ask for the credentials. Your .htaccess should have (at least) the following rules:
AuthUserFile /path/to/[WEB_APP_NAME]/.htpasswd
AuthGroupFile /dev/null
AuthName "EnterPassword"
AuthType Basic
require valid-user

You can test it by pointing your web browser to whatever URL you set up for this site. You should now be prompted with a username/password dialog.

5. Now that we have the basics ready, let us get to the meat and substance. First, I highly suggest you give a quick perusal to Flask's "Quickstart" tutorial, which can be found here, and try out the first few trivial examples to make sure you have everything set up correctly.

6. Make a new file named [WEB_APP_NAME].py and copy-and-paste the text below:
from flask import Flask, flash, abort, redirect, url_for, request, render_template
from boto.ec2.connection import EC2Connection
import boto.ec2
app = Flask(__name__)
akeyid = '[AWS_KEY_ID]'
seckey = '[AWS_SECRET_KEY]'
conn = EC2Connection(akeyid,seckey)

7. One of the concepts to bear in mind with respect to AWS API connections is regions. There is no global end-point for AWS API calls; calls made to a region's API only make sense for services within that region. For instance, you can't "see" your instances in the west coast region ("us-west-1") from any other region. So, whatever regions you wish to have access to, you need to specify those explicitly. By default, boto connects to the "us-east-1" region.

8. So, our index page will be the dashboard itself, that is to say, a place where users will be able to see all the instances from all regions. You can choose to limit the regions you wish to show/scour fairly easily, but for the sake of this example I'm going to simply gather all the instance information from all of AWS' regions. I will then create an object data structure with all the data, and eventually pass it to the template rendering engine that Flask comes with. So, we're going to create an app route for the index page, use boto to create an EC2 connection, retrieve all available regions, get all the instance reservations in each region, get all the instances within each of those reservations, then bundle the data in a data structure and pass it to the rendering engine.

a. Create the index route:
@app.route("/")

b.  declare your method:
def my_method_name():

c.  retrieve the list of all AWS available regions:
    allinfo = []
    regions = conn.get_all_regions()
    for region in regions:

d. connect to each of those regions and retrieve all the instance reservations (something to note: I'm not sure if it's boto's or AWS' boo-boo, but the get_all_instances() method does not retrieve all instances per se; instead it retrieves all the instance reservations, which are instance "containers"):
        rconn = EC2Connection(akeyid, seckey, region=region)
        rsvs = rconn.get_all_instances()  # read note above

e. loop over the reservations and gather all the instance information:
        for rsv in rsvs:
            insts = rsv.instances
            for inst in insts:
                # do stuff

f. now all together (including populating our instance info data structure and passing it to the rendering engine):
@app.route("/")
def my_method_name():
    allinfo = []
    regions = conn.get_all_regions()
    for region in regions:
        regioninfo = {}
        regioninfo['Name'] = region.name
        rconn = EC2Connection(akeyid, seckey, region=region)
        rsvs = rconn.get_all_instances()
        instances = []
        for rsv in rsvs:
            insts = rsv.instances
            for inst in insts:
                instances.append({'Id': inst.id,
                                  'Name': inst.tags.get('Name', ''),
                                  'State': inst.state,
                                  'Type': inst.get_attribute('instanceType')['instanceType']})
        regioninfo['instances'] = instances
        allinfo.append(regioninfo)

    return render_template('index.html', all_info=allinfo)

9. Now that we have the route and the code, we are going to use Flask's nifty template rendering engine (Jinja). To do so we need to create a file that matches the name in the render_template call above.

a. create a file with nano or your favorite text editor named index.html (or whatever name you chose). This file has to be (by Flask convention) inside a directory called 'templates', and this directory should be at the same level as your web app Py script.

b. copy-paste the following "boilerplate" html:
<!doctype html>
<html>
<head><title>EC2 Dashboard</title>
</head>
<body>
<div class="header">Welcome to EC2 Dashboard</div>
<div class="content">
<div class="region-text">Regions Available</div>
{% for region in all_info %}
{% if region['instances'] %}
<div class="region-info"><span style="font-weight:bold">Region: {{region['Name']}}</span>
<span>Instances Available</span></div>
<div class="region-content">
<table><tr><th>Instance Name</th><th>Instance State</th><th>Instance Type</th><th>Instance Id</th><th>Instance Actions</th></tr>
{% for instance in region['instances'] %}
<tr>
<td><span>{{instance['Name']}}</span></td>
<td><span>{{instance['State']}}</span></td>
<td><span>{{instance['Type']}}</span></td>
<td><span>{{instance['Id']}}</span></td>
<td>
<a href="/details/{{region['Name']}}/{{instance['Id']}}">See Details</a>
</td>
</tr>
{% endfor %}
</table>
</div>
{% else %}
<div class="region-info">
<span style="font-weight:bold">Region: {{region['Name']}}</span>
<span>No Instances Available in this region</span>
</div>
{% endif %}
{% endfor %}
</div>
</body>
</html>

It should be obvious that you can (and should!) use your own HTML and CSS styles. The above example was to illustrate the rendering engine usage and syntax, which is pretty simple and self-explanatory. Flask's and Jinja's documentation is very good, and when in doubt you should consult it as your primary source.

This concludes part I of this tutorial. In part II I will then show how to post information, how to use boto for modifying instance information and more on Flask's routing/url facilities.

Thursday, August 25, 2011

Installing Citrix XenApp 6 Fundamentals on Amazon EC2 (from scratch)

Let me begin by saying that it has been a rather painful experience to learn and deal with Citrix XenApp 6. The documentation Citrix provides on their site regarding XenApp on Amazon EC2 is either outdated or slightly inaccurate (and therein lies a bigger problem). The problem is fundamental, and one I'm afraid Citrix has created on purpose: even a small mistake during installation can spoil your install forever, leaving you with few options other than to start anew (literally! start from a clean OS image). The same is also true for upgrading/downgrading XenApp: it's just not possible to uninstall and upgrade; you must do a fresh install from scratch. Citrix's XenApp forums seem to be packed with troubleshooting threads full of "I have that problem too" replies, most of them with no official response/answer. They have two official blog entries specific to XenApp on EC2, but they are either no longer accurate or assume you have substantial knowledge of Citrix XenApp (and all its tricky parts). So, in this blog entry, I'm going to try to document my steps so that should someone else find themselves in the same boat, they can at least go over these steps and see if they are of any help.

For the steps that follow, I'm going to assume that, just like me, the reader has little or no experience installing or administering Citrix XenApp, and that the XenApp install is intended for external access to apps (sans VPN). I'm further going to assume the reader has an active EC2 account, is able to launch new instances, and is well aware that this will incur charges in accordance with AWS EC2 pricing.

1. Launch a new large instance of Microsoft Windows Server 2008 R2 with SQL Server Express and IIS (AMI Id: ami-42bd442b). SQL Server and IIS are requirements. Make sure you do so with a valid keypair so that you can later retrieve and decrypt the auto-generated Administrator password.

2. After waiting a few minutes, retrieve the instance's auto-generated admin password, fire up your favorite Remote Desktop client and start a session to the instance you just launched.

3. Download (to the instance) and install whatever ISO-mounting software you prefer; I use PowerISO.

4. In your instance, go to the start menu and type "EC2Config"; wait until "EC2ConfigService Settings" shows up, and select it to run.

4.a. Once the utility comes up, make sure to uncheck "Set Computer Name", "Initialize Drives", and "Set Password". Click Apply then OK.

5. Go to the start menu, find "Computer", right-click, and click on Properties in the contextual menu.

5.a. In the Computer name, domain and workgroup settings, click on Change settings, then click on the Change... button and give your computer a new name (preferably something you'll easily remember; you'll use this name for the installation and licensing steps as well).

5.b. Click OK a couple of times and you'll be prompted to reboot; click OK and then click Close. Reboot.

6. Go to Citrix -> Product And Solutions -> XenApp -> Try (here) -> choose "Turnkey solution for small businesses up to 75 users" and follow the registration procedure until you're prompted to download and given a license number. If you already have a license number for XenApp 6 Fundamentals, you can try downloading directly here.

7. Enable .NET 3.5 (and make sure you don't have .NET 4+ installed).

8. Mount the ISO as a logical drive (with PowerISO or whatever ISO tool you use).

9. In your file explorer, navigate to {DRIVE LETTER}:\W2k8 and click on setup.exe. (After agreeing to the license terms, a warning dialog will show up telling you that other users might be logged on; ignore that and click OK.) Please note that setup.exe must be "Run as Administrator", or else it will fail.

9.a. In the setup workflow set Application Server as installation type.

9.b. Since this will be a test/trial environment, select "Disable Shadowing" then click next.

10. Set your admin username and password. I recommend using the same Domain\UserName and password you have on that machine.

Hopefully the installation will now complete successfully. If it failed along the way, it can be rather difficult to debug, since the log file messages are rather devoid of semantic meaning. As a last-ditch effort, try uninstalling the very last module that failed to install/configure and then try setup.exe again (of course, don't forget to "Run as Administrator").

 

Re/Sources:

PowerCram

Citrix XenApp on EC2 blog entry