Showing posts with label boto.
Wednesday, May 23, 2012
boto cheat sheet
I've been using Python boto for nearly a year and have been greatly impressed with it from the get-go. However, I usually find myself forgetting key methods and parameters for the few AWS services I use the most, namely SQS, DynamoDB, S3 and EC2. So, in order to avoid going through the documentation every time, I made a cheat sheet for the most commonly used methods and functionality of those services. You can now download it here. If you see something amiss or inaccurate, please let me know.
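To give a flavor of the kinds of calls it covers, here is a minimal sketch of two everyday boto operations (the bucket and queue names are placeholders, and the resources are assumed to already exist):
import boto

# S3: upload a small object to an existing bucket
s3 = boto.connect_s3(AWS_KEY, AWS_SECRET)
bucket = s3.get_bucket('my-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello, S3!')

# SQS: write a message to an existing queue
sqs = boto.connect_sqs(AWS_KEY, AWS_SECRET)
queue = sqs.get_queue('my-queue')
queue.write(queue.new_message('Hello, SQS!'))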
Monday, December 19, 2011
Setting up Queue-size-based auto scaling groups in AWS
One of AWS' coolest features is the ability to scale in and out according to custom criteria. It can be based on machine load, number of requests, and so forth. For the sake of this tutorial, we are going to focus on queue-size-based auto scaling, that is, once we have certain amount of messages in a queue, an alarm will go off and will trigger the auto scaling policy to go into effect; however, this feature can also be used in conjunction with some sort of load balancing mechanism. In the AWS ecosystem, we can accomplish this using some of their off-the-shelf services, namely SQS, CloudWatch, and AutoScaling. As of this writing, AutoScaling doesn't have console/UI access to AutoScaling groups or any of its underlying requirements, so we are going to use boto which is a great and easy to use Python-based AWS API library.
For all the steps that follow, please remember that, as a general rule, AWS services are region-specific. That is to say, these services are only visible and usable within the region in which they were created. Also important to bear in mind is that each of these services has a cost associated with it as per AWS pricing (see each of the product pages above for pricing details).
First, let's set up the queue we're going to post messages to, receive messages from, and base our auto scaling on. To do this, log into your AWS management console, head to the SQS tab, and click on Create New Queue. A modal window will pop up with the queue's creation parameters.
A bit on the parameters of the queue creation:
Default Visibility Timeout: The number of seconds (up to 12 hours) a message will remain invisible to other consumers once it has been delivered.
Message Retention Period: The time (up to 14 days) after which messages will be automatically deleted if they aren't deleted by the receiver(s).
Maximum Message Size: The maximum queue message size (in KB, up to 64).
Delay Delivery: The time (in seconds) that the queue will hold a message before it is delivered for the first time.
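As an aside, the same queue can be created programmatically with boto instead of the console; a rough sketch (the queue name is a placeholder, and AWS_KEY/AWS_SECRET are the credentials defined further down in the script):
import boto.sqs

sqs = boto.sqs.connect_to_region('us-west-2',
                                 aws_access_key_id=AWS_KEY,
                                 aws_secret_access_key=AWS_SECRET)
# visibility_timeout maps to Default Visibility Timeout (seconds)
q = sqs.create_queue('my-scaling-queue', visibility_timeout=30)
# the remaining console fields are plain queue attributes
sqs.set_queue_attribute(q, 'MessageRetentionPeriod', 1209600)  # 14 days, in seconds
sqs.set_queue_attribute(q, 'MaximumMessageSize', 65536)        # 64 KB, in bytes
sqs.set_queue_attribute(q, 'DelaySeconds', 0)                  # Delay Delivery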
Our next steps will include setting up the auto scaling group (and all the underlying services) and then setting up CloudWatch to handle the monitoring and issuing of alarms.
So, assuming you have Python and boto already installed, we're going to create a script to do the heavy lifting for us. The way the API and auto scaling work in boto is as follows: first you need a Launch Configuration (LC). A Launch Configuration, as its name states, is metadata about what you want to launch every time the alarm is triggered (i.e. which AMI, security groups, kernel, userdata and so forth). Then you need an Auto Scaling Group (ASG). ASGs are the imaginary "containers" for your auto scaling instances and contain information about Availability Zones (AZ), LCs and group size parameters. Then, in order to actually do the scaling, you'll need at least one Scaling Policy (SP). SPs describe the desired scaling behavior of a group when certain criteria are met or an alarm is set off. The last piece of the puzzle is a CloudWatch alarm, which I will address later.
So, back to our script. First, import the necessary modules:
from boto.ec2.autoscale import AutoScaleConnection, LaunchConfiguration, AutoScalingGroup
from boto.ec2.regioninfo import RegionInfo
from boto.ec2.autoscale.policy import AdjustmentType, MetricCollectionTypes, ScalingPolicy
As an aside, while in boto you can set your AWS credentials in a boto config file, I like having the credentials within the scripts themselves to make it more direct and explicit, but feel free to use the boto config if that's your preference.
The first thing we need to do is establish an auto scaling connection to our region of choice -- in this example, the Oregon region (aka us-west-2). To do so, we do as follows:
AWS_KEY = '[YOUR_AWS_KEY_ID_HERE]'
AWS_SECRET = '[YOUR_AWS_SECRET_KEY_HERE]'
reg = RegionInfo(name='us-west-2', endpoint='autoscaling.us-west-2.amazonaws.com')
conn = AutoScaleConnection(AWS_KEY, AWS_SECRET, region=reg, debug=0)
We then need to create the LC. In the code below I added many parameters for the sake of illustration, but not all of them are required by either AWS or boto. I believe that the only required fields are name and image_id. Bear in mind, though, that if you choose to use these optional parameters, they need to be accurate, or else you'll get an error from the create launch configuration API request.
lc = LaunchConfiguration(name="LC-name",
                         image_id="ami-12345678",
                         instance_type="m1.large",
                         key_name="Your-Key-Pair-Name",
                         security_groups=['sg-12345678', 'sg-87654321'])
conn.create_launch_configuration(lc)
The next step is to set up the ASG. Choose your min and max size carefully, especially if your scenario will scale based on a queue that can be directly or indirectly DDoS-attacked. While you wouldn't want your site to be unresponsive to your customers, you also wouldn't want would-be attackers to scale you up into a very hefty bill. So, as a good practice, set an upper bound on your scaling groups.
ag = AutoScalingGroup(group_name="your-sg-name",
                      availability_zones=['us-west-2a', 'us-west-2b'],
                      launch_config=conn.get_all_launch_configurations(names=['LC-name'])[0],
                      min_size=0,
                      max_size=10)
conn.create_auto_scaling_group(ag)
We are almost done with the auto scaling setup; however, without a way to trigger auto scaling, all is for naught. To this end, AWS lets you set different scaling criteria in the form of Scaling Policies (SP). Any self-respecting AS scheme has some sort of symmetry, that is to say, for every scale up there's a scale down. If you don't have a scale down, chances are you won't be entirely happy with your monthly bill, since you'll be wasting resources/capacity. The way we set the SPs with boto is as follows:
sp_up = ScalingPolicy(name='AS-UPSCALE', adjustment_type='ChangeInCapacity',
                      as_name='your-sg-name', scaling_adjustment=1, cooldown=30)
conn.create_scaling_policy(sp_up)
sp_down = ScalingPolicy(name='AS-DOWNSCALE', adjustment_type='ChangeInCapacity',
                        as_name='your-sg-name', scaling_adjustment=-1, cooldown=30)
conn.create_scaling_policy(sp_down)
Before I continue, I will say that the whole topic of SPs is, as of this writing, sparsely covered in AWS' documentation. I found some general information, but nothing at the level of detail desired by most people trying to understand SPs and their nuances.
Alright, if everything thus far has gone according to plan, we should be ready to move on to the next step. For this part, we will use the AWS management console. We could, of course, do it via API but I like to use the console whenever possible. So, log into the management console and click on the CloudWatch tab and make sure you are working in the right region.
On the left navigation bar, click on Alarms, then click on Create Alarm. A Create Alarm Wizard modal window will pop up. In the search field, next to the All Metrics dropdown, type "SQS". This will bring up the metrics associated with the queue we built at the beginning of this tutorial. For the sake of this exercise, click on NumberOfMessagesReceived (though you are welcome to try other options/metrics if you wish). After selecting the row, click Continue. Give the alarm a name and description, and in the threshold section set it to ">= 10 for 5 minutes". In the next step of the wizard, we are going to configure the actions to take once this criterion has been met. Set the "When Alarm State is" column to ALARM, set the "Take action" column to Auto Scaling Policy, and finally set the "Action details" to the scaling group we just created. A new dropdown menu will appear where you choose which policy to apply (see this screenshot -- sorry, but the image was too wide for this blog layout). This will be our up-scale policy. To set up the down-scale step, on the last column of the configure actions step of the Create Alarm Wizard, click on "ADD ACTION". In this new row, select "OK" from the "When Alarm State is" dropdown menu; then, just as above, select "Auto Scaling Policy" from the "Take action" column dropdown menu, select your AS group in the "Action details" dropdown, and select your down-scaling policy from the policy dropdown. Click Continue. In the next step, check that your metrics, alarms and actions are correct. Finally, click on Create Alarm.
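If you would rather script this step too, boto can create the alarm as well. Below is a minimal sketch under the assumptions of this tutorial (the queue and policy names are the placeholders used above); the key detail is that the alarm's action is the ARN of a scaling policy:
import boto.ec2.cloudwatch
from boto.ec2.cloudwatch import MetricAlarm

cw = boto.ec2.cloudwatch.connect_to_region('us-west-2',
                                           aws_access_key_id=AWS_KEY,
                                           aws_secret_access_key=AWS_SECRET)
# fetch the up-scale policy we created earlier to get its ARN
up_policy = conn.get_all_policies(as_group='your-sg-name',
                                  policy_names=['AS-UPSCALE'])[0]
alarm = MetricAlarm(name='queue-depth-high',
                    namespace='AWS/SQS',
                    metric='NumberOfMessagesReceived',
                    statistic='Sum',
                    comparison='>=',
                    threshold=10,
                    period=300,             # 5 minutes
                    evaluation_periods=1,
                    dimensions={'QueueName': 'my-scaling-queue'})
alarm.add_alarm_action(up_policy.policy_arn)  # fire the policy when in ALARM
cw.put_metric_alarm(alarm)
A mirror-image alarm (or an OK action on this one) wires up the down-scale policy the same way.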
Now that we are completely done setting up the auto scaling, you might want to test it. The easiest way would be to send a couple hundred messages to the queue via API/boto and see how it scales up, then delete the messages and see how it scales down; that is something I might address in a later post.
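A minimal sketch of that first part, reusing the queue handle q from the queue-creation sketch above (or fetch it again with get_queue):
from boto.sqs.message import Message

# fire enough messages at the queue to trip the ">= 10 for 5 minutes" alarm
for i in range(200):
    q.write(Message(body='test message %d' % i))
# ...once the group has scaled up, drain the queue to let the OK action scale it back down
q.clear()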
Hope this tutorial was of help and easy to follow. For comments and suggestions, ping me via Twitter @WallOfFire
Wednesday, September 28, 2011
DIY Basic AWS EC2 Dashboard using Apache, Python, Flask and boto (PartII)
In part I of this tutorial we covered the basic stack setup as well as showing boto/Flask usage. In this part, I'll show how to handle posts, request instance details and rendering dynamic URLs.
So, arguably, you have your index page working as your EC2 instance dashboard already. Now, building on that, let's say you want to see the details of any of the instances on display. As per my explanation of the idiosyncrasies of AWS' API, in order to see such details you need two basic pieces of information: the region and the instance id. The way we are going to pass this information from the dashboard page to Flask is through URLs. Flask has pretty neat ad-hoc URL routing with replaceable variables that you can then use in your code. So, in this instance, in the dashboard page we are going to dynamically generate a link that contains both the region end-point and the specific instance id. If you look closely at the index.html template, the link looks as follows:
<a href="/details/{{region['Name']}}/{{instance['Id']}}">See Details</a>
So, now we need to tell Flask to "listen" to that URL pattern. To do so, add the following line to your [WEB_APP_NAME].py file (please bear in mind that this is for the sake of illustration only, so I'm putting code style aside):
@app.route("/details/<region_nm>/<instance_id>")
This tells Flask to match incoming requests to that URL and bind the incoming parameters to the variable names inside the angled brackets "<" ">". Right below that line declare the method/function you are going to use explicitly declaring the parameters you expect.
def details(region_nm=None, instance_id=None):
That is it insofar as Flask is concerned. Now that we, presumably, have all the data we need, we leverage boto to do the "hard work" for us. As seen in part I of this tutorial, whenever we need to issue calls to the AWS API, the first thing to do is open a connection to the region we are interested in. So, we go ahead and do so:
rconn = boto.ec2.connect_to_region(region_nm,aws_access_key_id=akeyid,aws_secret_access_key=seckey)
With that active regional connection, we then query the API for the details of a particular instance (which in this case we are going to use the instance_id passed in):
instance = rconn.get_all_instances([instance_id])[0].instances[0]
The API call above takes a list of instance ids (in our case we are only interested in one), and it returns not the instances themselves but a list of their parent reservations; that is what the first [0] is for. We then index into that reservation's instances collection, which holds a single instance object: the one we requested. Once we have the instance object, it should be clear that you can extract any information you want or need (type, tags, state, etc.).
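For instance, a few of the fields available on the instance object (these attribute names are boto's; the sample values are illustrative):
itype = instance.instance_type          # e.g. 'm1.large'
state = instance.state                  # e.g. 'running'
name = instance.tags.get('Name', '')    # tags behaves like a plain dict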
Now, what if we wanted to make changes to any of the instances, or wanted to, say, start/stop any of them? It's actually not unlike what we've been doing thus far: we tell the Flask route object what to look for, then use that machinery to get the data we want. For this step, as a matter of generally accepted good practice, we are going to send the information by submitting a form through a POST request. Our route should look something like this:
@app.route("/change", methods=['POST'])
We can now define our function to handle the request:
def change():
Within this method we can now leverage Flask's request object to get the form data, for instance:
instance_id = request.form['instance_id']
state_toggle = request.form['sstoggle']
toggle_change = request.form['type_change']
And finally, to make changes to an instance, all you need to do is modify the instance's attribute key-value pairs. Let's say we wanted to change the instance type; to do so, we simply change the value via boto's modify_attribute method as shown below:
instance.modify_attribute('instanceType', 'm1.large')
One thing to bear in mind is that, regrettably, the AWS API does not provide a "valid types" list for each instance. So, if you are dealing with a mix of 32- and 64-bit machines, it is possible to assign an instance a type it is not compatible with, so you must be mindful of that. The absence of such a list also means you need to hard-code the instance types. A reference to the official instance type list can be found here (pay attention to the API name field).
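One way to guard against that is a hard-coded whitelist check before calling the API. A minimal sketch, with an illustrative (not exhaustive) list:
# hypothetical whitelist -- trim to the types your AMIs actually support
VALID_TYPES = ['m1.small', 'm1.medium', 'm1.large', 'm1.xlarge']

new_type = request.form['type_change']
if new_type not in VALID_TYPES:
    abort(400)  # reject unknown/incompatible types before hitting the API
instance.modify_attribute('instanceType', new_type)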
To change the instance state, however, you do not use the attributes collection directly. Instead, use the API calls provided by boto to start/stop/terminate:
instance.stop()
instance.start()
...
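Putting the pieces together, a possible shape for the whole handler, as a sketch only (the form field names and the redirect target are assumptions based on the fragments above, not code from the original app):
@app.route("/change", methods=['POST'])
def change():
    region_nm = request.form['region_nm']        # assumed hidden form field
    instance_id = request.form['instance_id']
    rconn = boto.ec2.connect_to_region(region_nm,
                                       aws_access_key_id=akeyid,
                                       aws_secret_access_key=seckey)
    instance = rconn.get_all_instances([instance_id])[0].instances[0]
    if request.form.get('sstoggle') == 'stop':
        instance.stop()
    elif request.form.get('sstoggle') == 'start':
        instance.start()
    return redirect(url_for('my_method_name'))   # back to the dashboard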
Hope this tutorial was of help. If you got questions or comments, feel free to ping me on Twitter @WallOfFire.
Tuesday, September 27, 2011
DIY Basic AWS EC2 Dashboard using Apache, Python, Flask and boto (Part I)
While Amazon Web Services offers a nice web-based UI to handle and manage EC2 instances, it may well be the case that you wish to give access to some of this functionality to more people in your organization, but do not wish to provide them with full access to the AWS EC2 dashboard, or wish to limit the type of API calls they can make (for instance, you might want to allow users to start/stop instances but not to launch/terminate them). Whatever your use case might be, you can create your own "in-house" EC2 dashboard with relative ease. Our software stack will consist of:
- Apache (for basic authentication, SSL and WSGI, virtual hosts) with mod_ssl and mod_wsgi.
- OpenSSL (for SSL and certificate generation).
- Python.
- Flask (py-based web services micro-framework).
- boto (py-based AWS API library).
- AWS KeyID/SecretKey credentials.
- Admin/sudo privileges
Please note, if you do not need SSL/htpasswd you can use Flask's bundled web server, which is suitable for most in-house deployments; however, in this example I will be using Apache. Also to note: I'm not going to spend time on the installation of the packages or the SSL certificate generation above, as it should be fairly straightforward for anyone with minimal dev/sysadmin experience, and there are many well-written tutorials for setting up these tools floating around the 'net.
First we need to make sure your tools are working. Make sure Apache is working, make sure mod_ssl is working, etc. Open up a python prompt and try importing flask, boto and so forth. Once you are fairly confident your tools are good to go then let's get moving.
1. Create an Apache virtual host entry file specifying the SSL certificate location, port number, location of the wsgi file and other important parameters, as shown below:
<VirtualHost *:443>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/[WEB_APP_NAME]
SSLEngine On
SSLCertificateFile /path/to/certs/server.crt
SSLCertificateKeyFile /path/to/certs/server.key
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
WSGIDaemonProcess [WEB_APP_NAME] user=[APACHE_USER] group=[APACHE_USER_GROUP] threads=5
WSGIScriptAlias / /var/www/[WEB_APP_NAME]/[WEB_APP_NAME].wsgi
<Directory /var/www/[WEB_APP_NAME]>
WSGIProcessGroup [WEB_APP_NAME]
WSGIApplicationGroup %{GLOBAL}
WSGIScriptReloading On
Options Indexes FollowSymLinks MultiViews
# AllowOverride All is the important line for using htpasswd
AllowOverride All
Order allow,deny
allow from all
</Directory>
#other settings here
</VirtualHost>
2. We then need to write the wsgi file telling mod_wsgi which app and instance it should start when a request comes along. It's a very simple file. Just make sure its name matches the name you provided in the vhost file above. Its contents should look something like this:
from [WEB_APP_NAME] import app as application
One thing to bear in mind is that this way of using wsgi requires that your web app be somewhere on the $PYTHONPATH environment variable. Save, close and restart Apache.
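If your app's directory isn't already on the path, one common workaround is to prepend it inside the .wsgi file itself; a sketch:
import sys
# make the app's directory importable before the import below runs
sys.path.insert(0, '/var/www/[WEB_APP_NAME]')
from [WEB_APP_NAME] import app as application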
3. For this example, as said above, I'm working under the assumption that your authentication needs are very basic and can be fulfilled with Apache's htpasswd. Assuming you set up Apache correctly with the right mods, you can go to your application's root directory and tell htpasswd to create a password file (.htpasswd) with a username-password pair. You can do so by running the following command:
sudo htpasswd -c .htpasswd [USERNAME]
It will then prompt you for a password, which it will hash and store in the newly created .htpasswd file.
4. Next step is to setup your .htaccess file so that Apache knows when to ask for the credentials. Your .htaccess should have (at least) the following rules:
AuthUserFile /path/to/[WEB_APP_NAME]/.htpasswd
AuthGroupFile /dev/null
AuthName "EnterPassword"
AuthType Basic
require valid-user
You can test it by pointing your web browser to whatever URL you set up for this site. You should now be prompted with a username/password dialog.
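You can also test from the command line with curl (the host name here is a placeholder; -k skips certificate verification for self-signed certs):
curl -k -u [USERNAME] https://your-server-host/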
5. Now that we have the basics ready, let us get to the meat and substance. First, I highly suggest you give a quick perusal to Flask's "Quickstart" tutorial, which can be found here, and try out the first few trivial examples to make sure you have everything set up correctly.
6. Make a new file named [WEB_APP_NAME].py and copy-and-paste the text below:
from flask import Flask, flash, abort, redirect, url_for, request, render_template
from boto.ec2.connection import EC2Connection
import boto.ec2
app = Flask(__name__)
akeyid = '[AWS_KEY_ID]'
seckey = '[AWS_SECRET_KEY]'
conn = EC2Connection(akeyid, seckey) # with no region argument, this connects to us-east-1
7. One of the concepts to bear in mind with respect to AWS API connections is regions. There is no global end-point for your AWS API calls, and calls made to a region's end-point only make sense for services within that region. For instance, you can't "see" your instances in the west coast region ("us-west-1") from any other region. So, whatever regions you wish to have access to, you need to specify explicitly. By default, boto connects to the "us-east-1" region.
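To see what's available, boto can enumerate the regions for you; a quick sketch using the credentials defined above:
import boto.ec2

# list every EC2 region end-point the API knows about
for r in boto.ec2.regions(aws_access_key_id=akeyid, aws_secret_access_key=seckey):
    print r.name, r.endpoint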
8. So, our index page will be the dashboard itself, that is to say, a place where users will be able to see all the instances from all regions. You can choose to limit the regions you wish to show fairly easily, but for the sake of this example I'm going to simply gather the instance information from all of AWS' regions. I will then create an object data structure with all the data and eventually pass that data to the template rendering engine that Flask comes with. So, we're going to create an app route for the index page, use boto to create an EC2 connection, retrieve all available regions, get all the instance reservations in each region, get all the instances within each of those reservations, then bundle the data in a data structure and pass it to the rendering engine.
a. Create the index route:
@app.route("/")
b. declare your method:
def my_method_name():
c. retrieve the list of all AWS available regions:
allinfo = []
regions = conn.get_all_regions()
for region in regions:
d. connect to each of those regions and retrieve all the instance reservations (something to note: I'm not sure if it's boto's or AWS' boo-boo, but the get_all_instances() method does not retrieve all instances per se; instead it retrieves all the instance reservations, which are instance "containers"):
    rconn = EC2Connection(akeyid, seckey, region=region)
    rsvs = rconn.get_all_instances() #read note above
e. loop over the reservations and gather all the instance information:
for rsv in rsvs:
    insts = rsv.instances
    for inst in insts:
        #do stuff
f. now all together (including populating our instance info data structure and passing it to the rendering engine):
@app.route("/")
def my_method_name():
    allinfo = []
    regions = conn.get_all_regions()
    for region in regions:
        regioninfo = {}
        regioninfo['Name'] = region.name
        rconn = EC2Connection(akeyid, seckey, region=region)
        rsvs = rconn.get_all_instances()
        instances = []
        for rsv in rsvs:
            insts = rsv.instances
            for inst in insts:
                instances.append({'Id': inst.id,
                                  'Name': inst.tags.get('Name', ''),  # not every instance has a Name tag
                                  'State': inst.state,
                                  'Type': inst.get_attribute('instanceType')['instanceType']})
        regioninfo['instances'] = instances
        allinfo.append(regioninfo)
    return render_template('index.html', all_info=allinfo)
9. Now that we have the route and the code, we are going to use Flask's nifty template rendering engine (Jinja). To do so we need to create a file that matches the name in the render_template call above.
a. Create a file, with nano or your favorite text editor, named index.html (or whatever name you chose). This file has to be (by Flask convention) inside a directory called 'templates', and this directory should be at the same level as your web app's Python script.
b. copy-paste the following "boilerplate" html:
<!doctype html>
<html>
<head><title>EC2 Dashboard</title>
</head>
<body>
<div class="header">Welcome to EC2 Dashboard</div>
<div class="content">
<div class="region-text">Regions Available</div>
{% for region in all_info %}
{% if region['instances'] %}
<div class="region-info"><span style="font-weight:bold">Region: {{region['Name']}}</span>
<span>Instances Available</span></div>
<div class="region-content">
<table><tr><th>Instance Name</th><th>Instance State</th><th>Instance Type</th><th>Instance Id</th><th>Instance Actions</th></tr>
{% for instance in region['instances'] %}
<tr>
<td><span>{{instance['Name']}}</span></td>
<td><span>{{instance['State']}}</span></td>
<td><span>{{instance['Type']}}</span></td>
<td><span>{{instance['Id']}}</span></td>
<td>
<a href="/details/{{region['Name']}}/{{instance['Id']}}">See Details</a>
</td>
</tr>
{% endfor %}
</table>
</div>
{% else %}
<div class="region-info">
<span style="font-weight:bold">Region: {{region['Name']}}</span>
<span>No Instances Available in this region</span>
</div>
{% endif %}
{% endfor %}
</div>
</body>
</html>
It should be obvious that you can (and should!) use your own HTML and CSS styles. The above example is meant to illustrate the rendering engine's usage and syntax, which is pretty self-explanatory and simple. Flask's and Jinja's documentation is very good, and when in doubt you should consult it as your primary source.
This concludes part I of this tutorial. In part II, I will show how to post information, how to use boto to modify instance information, and more about Flask's routing/URL facilities.