While working with Apache, many of us want a secure way to control access to our data.
Simple but effective, HTTP basic auth is probably the quickest and easiest answer.
Setting it up requires only two things:
- An htpasswd file (containing valid usernames and passwords)
- An Apache configuration block that reads it
Creating the htpasswd file:
- htpasswd -cm </Path/to/htpasswd-file> <username>
- When adding more users, just drop the -c flag from the command above; -c (re)creates the file, so reusing it would wipe the existing entries.
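For example, a minimal session might look like this (the file path and usernames here are just placeholders):

# create the file and add the first user (-c creates the file, -m hashes with MD5)
htpasswd -cm /etc/apache2/htpasswd admin
# add further users without -c, so the existing file is kept
htpasswd -m /etc/apache2/htpasswd alice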
Configuring Apache:
- Add this to the default (vhost) configuration file:
<Location /path-to-protect>
    Order allow,deny
    Allow from all
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile </Path/to/htpasswd-file>
    Require valid-user
</Location>
- Now reload Apache and enjoy.
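On a Debian/Ubuntu-style layout (an assumption; adapt the paths to your distribution), the reload and a quick test could look like this:

# pick up the new configuration without dropping connections
sudo /etc/init.d/apache2 reload
# request the protected path with credentials to verify the setup
curl -u admin:secret http://localhost/path-to-protect/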
Tuesday, December 1, 2009
Tuesday, November 17, 2009
RM updates: Automation and upgrade of Mantis @ Openbravo
We are getting close to achieving milestone 2 of Continuous Integration. The team is working really hard on solving the existing challenges as well as proposing ways to automate current repetitive tasks.
Last sprint we completed one of the most challenging tasks: automated code migration from pi to main. We now have an obx generated from the main branch whenever all the tests pass. Plans to generate an obx on every commit are still heavily debated within the team.
We have also resurrected tecnicia14. This will help our developers as well as our QA team see the code changes in the live environments (live and liveqa).
Apart from the CI infrastructure, we have also upgraded the Issue Tracker to version 1.1.8, the latest stable Mantis release available. We are also in the process of ensuring we have a hard backup of all the important instances running on Amazon EC2.
For a complete list of the ongoing stories we are working on, please check the Sprint 28 page of our Scrum spreadsheet.
Monday, June 29, 2009
UML (User Mode Linux)
In this blog I will take you on a tour of a new dimension of virtualization: the world of UML.
Many of us have spent time using VMware, VirtualBox, QEMU, etc., or debugging how to get Xen/OpenVZ started.
But the truth about these two groups is that applications like VMware are heavy on system resources, and applications like Xen are a bit tricky (the kernel must be Xen-specific).
I will admit that I too was among the people who had spent a huge amount of time on different virtualization technologies, until here at Openbravo I was given the opportunity to set up a virtual environment on Amazon servers.
* For those who haven't worked with Amazon servers: they are like the domU of a Xen virtual environment.
The aim was to set up an Openbravo instance (which could be made available on demand) on top of an Amazon instance.
The main problem in working with an Amazon server is that you have no access to the host (dom0), and Amazon does not allow a custom kernel for the domU (a custom kernel would have let us build another Xen domU on top of it).
Taking these things into account, we were left with options like VMware, but given their load on the system we needed something light and usable.
In our hunt for the perfect virtual environment my manager told me about UML, and honestly it turned out to be the perfect tool (it matched all our requirements and expectations).
UML works very differently from all other virtualization techniques; all it needs to run smoothly is the uml-utilities package, a kernel (a plain userspace executable) and a block device (containing a minimal/full OS).
It is like chrooting into a directory and installing a full OS there.
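As a sketch (the image name and memory size here are placeholders), booting a UML instance is a single command, because the UML kernel is just an ordinary executable that takes its root filesystem as an argument:

# boot a UML guest from a root filesystem image with 256 MB of RAM
./linux ubd0=rootfs.img mem=256m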
It has many other features, like:
- Mounting host filesystem.
- Adding a COW (copy on write) file.
- etc.
The COW file is used if you want to use the same block device (filesystem) for more than one virtual system: each instance writes its differences to a separate file (just like a diff patch).
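For instance (file names assumed), two guests can share one root image, each keeping its own changes in a separate COW file:

# each instance writes its changes to its own COW file, leaving the shared image untouched
./linux ubd0=vm1.cow,shared_rootfs.img mem=256m
./linux ubd0=vm2.cow,shared_rootfs.img mem=256m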
The use of UML gave us one more piece of flexibility: we could use a Xen image (with a pre-installed Openbravo instance) directly with the UML kernel.
And it has an advantage over Xen: access to the host system is not required in UML.
References:
- UML Howto
- UML Wiki
- Download Page
Labels: UML, User Mode Linux, Virtual OS, Virtualization
Monday, June 15, 2009
Screenshots and DAlbum
In one of my recent tasks I explored a couple of tools for publishing screenshots (web based).
The aim was to capture screenshots while Hudson (our CI tool) was running a build and to publish them with a web-based gallery creator, for later reference by the developers or the QA team.
Well, the task was not tough but it was a bit tricky: the screenshot taken by Hudson's Xvnc plugin was not sufficient (it took a single screenshot at the end of the build, usually an empty screen), so we decided to find a command-line tool for taking screenshots. In that hunt we found ImageMagick, and discovered that the Xvnc plugin was using the same tool under the hood.
Running a for loop in the background (when required) then gave us a directory full of relevant screenshots.
The remaining job was to publish them. Plenty of tools are available in the open-source world for that, mostly Python or PHP based, but we wanted something that required little to no extra installation and could run on Apache, so we chose DAlbum (PHP based). Selecting the tool was not enough, though: it requires placing the images into its root directory and clicking or executing the Reindex.php script. Here too we made some changes, since the stock script increased disk usage by creating three copies of each screenshot (one for the thumbnail, a second for the full-screen view and a third for downloading).
We overcame this issue by writing our own script that did the same job, but with the size of all three images defined by us: the (bash) script converted each screenshot into three differently sized images (using ImageMagick) and then placed them into the respective directories required by DAlbum.
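A rough sketch of that conversion step (the sizes and directory names are assumptions, not DAlbum's actual layout):

#!/bin/bash
# resize one screenshot into the three variants the gallery needs
src=$1
convert "$src" -resize 150x150 thumbs/$(basename "$src")   # thumbnail
convert "$src" -resize 800x600 view/$(basename "$src")     # full-screen view
cp "$src" original/$(basename "$src")                      # download copy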
Here is the sample command we used to capture the screenshots:
import -window root -display $display $screenshotpath/screenshot$i.jpg
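Wrapped in the background for loop mentioned above, a sketch of the capture job could look like this ($display, the path and the interval are assumptions):

#!/bin/bash
# take a screenshot of the whole X display every 5 seconds during the build
display=:1
screenshotpath=/var/www/screenshots
for i in $(seq 1 100); do
    import -window root -display $display $screenshotpath/screenshot$i.jpg
    sleep 5
done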
DAlbum looks like this: [screenshot of a DAlbum gallery]
For more information on ImageMagick, click here.
For more information on DAlbum, click here.
Wednesday, June 3, 2009
Monitoring system with munin/monit
Munin with Muninnode or Muninlite
Recently, while deploying a monitoring system on Amazon servers, I came across a new tool, Muninlite; I would rather call it a script.
Hands-on experience with this script made me think about which one is better (muninnode or muninlite), and prompted me to write this blog so that it can be of help to others.
The goal was to monitor several Amazon servers from one centralized master and do resource planning based on the usage graphs.
So we decided to do the job using munin (as the front end) and muninnode on the client side to collect data from the different servers.
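On the master, each monitored server is just an entry in munin.conf; a minimal sketch (the hostname and address are placeholders):

# /etc/munin/munin.conf on the master
[ec2-node1.example.com]
    address 10.0.0.11
    use_node_name yes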
While exploring how to deploy the setup and make the best use of it, we came across muninlite: a bash script that works just like muninnode but, as the name suggests, is light on system resources and has a lower response time.
To go through the installation steps for munin and muninnode, click here.
And if you are looking for a better and lighter replacement for muninnode, click here.
The resultant graph produced by munin looks like this: [screenshot of a munin graph]
Monit and M/Monit
While accomplishing the job mentioned above we also deployed monit, which has proven itself in terms of alerts (i.e. sending mails etc. when specified conditions are met).
Besides monitoring the system as a whole (CPU load, memory usage, etc.), monit is also capable of monitoring individual services, e.g. apache, ssh, mysql, etc.
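A minimal monitrc sketch showing both modes (the paths and thresholds are assumptions):

# alert on overall resource usage
check system localhost
    if loadavg (1min) > 4 then alert
    if memory usage > 75% then alert

# watch the apache service and restart it if it stops answering
check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program = "/etc/init.d/apache2 stop"
    if failed host localhost port 80 then restart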
It also has an extension, M/Monit, which provides a nice dashboard to manage and monitor different monit instances; but the worst part is that M/Monit is not free.
To go through the installation steps of monit, click here.
And if you need a demo version of M/Monit, click here.
In a web browser monit looks like this: [screenshot of the monit web interface]
And M/Monit something like this: [screenshot of the M/Monit dashboard]
One thing I left out: since we were working on the domU of Amazon servers, the only tools that could work for us were those that read the desired data from the /proc files.
While finalizing munin and monit as our tools we also paused on cacti for some time, since it has more features and collects its data over SNMP (Simple Network Management Protocol), which gives it great flexibility to work in almost every Linux environment; but it failed in our case.
Even though it was not useful for us in this instance, here is a view of cacti: [screenshot of a cacti dashboard]
Labels: M/Monit, Monit, Monitoring Amazon servers, Munin, Muninlite