Friday, October 29, 2010

Fun With SFTP

Until now I never knew that using FTP could be so easy.
Many of you have probably started thinking about FTP servers by now, but to be clear, here I am talking about SFTP (SSH File Transfer Protocol).

But as Shakespeare said, "What's in a name? That which we call a rose by any other name would smell as sweet."
And so it is with SFTP: it provided the usability I was looking for, with minimal configuration and some extra benefits that we will talk about at the end.

And not only that, I was able to do this in two different ways.
  • One is, as my colleague Juan Pablo says, "It should be a JAIL for the user", so that he cannot move outside the directory.
  • The other one is like plain FTP, which allows you to move around but not to read or write unless you have permissions.

To understand it better, I think a use case will be really helpful, so I will put down the requirement that pushed me to learn about it.
We needed to grant a user permissions to one directory, and with one directory I literally mean that, as we wanted to block him from peeping into anything else.
All that with minimal access to system binaries, and it had to be secure, etc. etc.

And SFTP was the best fit for the requirement. You will see how in the next section, where I have shown the configuration for both cases, and believe me, it couldn't have been simpler.

Let's get into the jail first ;-)
  • Edit /etc/ssh/sshd_config to include this (the ChrootDirectory line is what keeps the user locked inside one directory):
Subsystem sftp internal-sftp
Match User sftpuser
    ChrootDirectory /var/www/sftpdir
    ForceCommand internal-sftp
  • Create the sftpuser and set its shell to /bin/false, so that the user is not able to get an SSH shell.
useradd -m -s /bin/false sftpuser
  • Give the correct ownership to sftpdir (sshd requires the chroot directory to be owned by root and not writable by group or others).
chown root:root /var/www/sftpdir
  • To increase security I also changed these in /etc/ssh/sshd_config:
PasswordAuthentication no
PubkeyAuthentication yes
I also added my public key to the /home/sftpuser/.ssh/authorized_keys file. This is optional for SFTP itself, but with PasswordAuthentication turned off as above, the key is the only way the user can log in.
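One more note before we wrap up this method (a sketch based on how sshd's chroot works; adapt the paths to your setup): since root has to own the chroot directory, the user cannot write directly into /var/www/sftpdir itself, so the usual trick is to give him a writable subdirectory inside it (the name "upload" below is just an example). Also remember that sshd has to be reloaded for the new Match block to take effect; on Ubuntu that is something like:
sudo chmod 755 /var/www/sftpdir
sudo mkdir /var/www/sftpdir/upload
sudo chown sftpuser:sftpuser /var/www/sftpdir/upload
sudo service ssh restart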

For this jail method we are done.

Try connecting:
sftp sftpuser@localhost
Connecting to localhost...
Enter passphrase for key '/home/user/.ssh/id_dsa':
sftp>

Now let's go back and see the second way (I know most of us will not read this, as the first one works like a charm):
  • Create a user with /usr/lib/openssh/sftp-server as the shell and /var/www/sftpdir as the home directory.
sudo useradd -s /usr/lib/openssh/sftp-server -d /var/www/sftpdir sftpuser
  • Add /usr/lib/openssh/sftp-server to the /etc/shells file (the plain >> redirect would need a root shell, hence tee):
echo "/usr/lib/openssh/sftp-server" | sudo tee -a /etc/shells
  • To increase security I changed /etc/ssh/sshd_config as below, and also added my key to the /var/www/sftpdir/.ssh/authorized_keys file.
PasswordAuthentication no
PubkeyAuthentication yes
  • Set the correct permissions on sftpdir.
chmod go-w /var/www/sftpdir
chmod 700 /var/www/sftpdir/.ssh
chmod 600 /var/www/sftpdir/.ssh/authorized_keys
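One extra detail worth double-checking (an assumption on my part, in case you created these files as root): the .ssh directory and authorized_keys must be readable by sftpuser, so hand the ownership over if needed:
sudo chown -R sftpuser /var/www/sftpdir/.ssh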

And done.

Try connecting:
sftp sftpuser@localhost
Connecting to localhost...
Enter passphrase for key '/home/user/.ssh/id_dsa':
sftp>



Now about the extra benefits:
  • Easy to configure.
  • Good security.
  • Works with public key authentication (PubkeyAuthentication).
  • No extra installation needed (it uses SSH).
  • Easy-to-use SFTP clients.
At least these things are enough to pull me towards it.

That's it for today. Happy SFTPing.

Tuesday, October 26, 2010

Sizing Openbravo: EC2 cost calculation and experimenting with around 270 concurrent users

To help Openbravo partners and users we have extended our Sizing Tool Guidelines to include some more facts and findings.

"As more the better" --Harpreet Singh ;-)
Same has been proved by the new Amazon Cluster Compute Instance (cc1.4xlarge).

Amazon recently announced the availability of its biggest instance, ideal for cluster infrastructure, as it promises high connectivity between cluster instances (as high as 10 Gigabit Ethernet).
But we tested this instance in a standalone setup, with the Oracle DB and Openbravo on the same instance.
The results were really exciting, as it was able to handle around 270 concurrent users.

Now, truly speaking, "That's what I call results."
And the same results have been added to our Sizing Tool Results.

"There is no such thing as a free lunch." --Milton Friedman
This instance also has some drawbacks:
- It costs a lot (almost $1.60 per hour).
- So far it is only available in the US East (N. Virginia) region.
- And it is only available with CentOS.

Now here comes another one, as cost is the biggest concern when we think about any new infrastructure. For example: running an instance (which can support 10 concurrent users) for 3 (THREE!!!) years on Amazon EC2 would cost only $1217.60. I think figures like these can help one compare on-site and in-cloud (EC2) deployments.
So we extended our Sizing Guidelines to help you choose your Amazon Instance.

In the last section of the Guidelines we have added:
- Steps you can follow to calculate your yearly cost with the Amazon cost calculator.
- A simple calculator we created to help you out, since the Amazon calculator is a bit complex.
- Pre-calculated costs for the most common scenarios.

Monday, October 18, 2010

Openbravo on Ubuntu Maverick (be the first to use it).

We were planning to blog and call for users to come forward for early testing of their trusted Openbravo with the upcoming Ubuntu 10.10. But as Ubuntu Maverick (10.10) has already been released, I think all of us now have a chance to be the first to use Openbravo with Ubuntu Maverick.

To be the first, all you need to do is get Ubuntu Maverick Meerkat up and running anywhere you like, be it your hardware system, Amazon EC2 or a virtual machine.
This link will help you if you are planning to install on a hardware system or a virtual machine, and these AMIs will help you boot one in EC2.

Once you are set, let's rock and roll. I mean, let's start the installation.

So all you have to do to install Openbravo is:
- Enable the Partner’s Repository:
* sudo add-apt-repository "deb http://archive.canonical.com/ubuntu maverick partner"

- Install the openbravo-erp package:
* sudo apt-get update
* sudo apt-get install openbravo-erp

You can also install it using Synaptic or the Ubuntu Software Center.


Can installing any comprehensive ERP be simpler than this?

You can do it even on a Friday. ;-)
Installing most ERPs on a Friday means forgetting about your Friday and Saturday night fun, but with Openbravo on Maverick Meerkat, you can start the process at 7 and be at the party by 9!

So once you are done with the party (sorry, I mean the installation) you are all set to use it and be a proud user of Openbravo ERP.
It is your love and support that has enabled us to live up to your expectations.

Users/developers who want to upgrade from 10.04 to 10.10 can also do that without fear of breaking the installation; the only concern is that 10.10 is not an LTS version :-(
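(If you are wondering about the upgrade command itself, the standard Ubuntu way is something along these lines; Update Manager offers the same upgrade graphically.)
sudo do-release-upgrade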

For more on installing Openbravo in Ubuntu please follow this wiki.

Friday, July 16, 2010

PostgreSQL: Performance Tuning

"Need is the mother of discovery" -Harpreet Singh
I wrote this line just a few minutes before writing this blog, as my need to optimize PostgreSQL's performance led me to search for and discover some cool facts and features of postgres and the tools related to it.

For any postgres user, thinking about 100 or more concurrent users is like a nightmare. I will admit that some time back I was also a bit scared when thinking about 100 concurrent users with postgres, but at the end of my search I am happy that I found a usable way to achieve that.
"Knowledge increases by sharing"
So I thought I would pass it on to everyone who is searching for it on the internet.
The need that triggered me to search for this was to recommend hardware as well as software configuration to support 100-200 concurrent users on Openbravo ERP with postgres/Oracle as the database.
As a postgres supporter I believed that postgres would be able to handle it. And yippee, I was right.
Coming back to the main point:
Postgres doesn't support too many (concurrent) users by default; it ships with a very conservative configuration, aimed at everyone's best guess as to what an "average" database on "average" hardware needs.
Postgres has some configuration options to fine tune it (see the rough sketch after this list), like:
- max_connections
- shared_buffers
- effective_cache_size
- etc etc.
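Just to give an idea of what such tuning looks like, here is a rough postgresql.conf sketch (the values are illustrative only, assuming a dedicated box with around 8 GB of RAM; your numbers will differ):
max_connections = 200
shared_buffers = 2GB            # often suggested as roughly 25% of RAM
effective_cache_size = 6GB      # rough estimate of memory available for disk caching
work_mem = 16MB                 # per sort/hash operation, so keep it modest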
But these are not enough for postgres to support 100+ (concurrent) users.
In a reply to my query on the postgres performance mailing list, I came to know about connection pooling.
The one and only con I saw is that it is external, I mean we have to configure an external tool to do the connection pooling.
There are tools like pgpool that make the job easy for us (pgpool is middleware that sits between PostgreSQL clients and the PostgreSQL server).
Connection pooling tools provide us features like:
- Connection Pooling: it reduces connection overhead and improves the system's overall throughput.
- Replication: Using the replication function enables creating a real-time backup on 2 or more physical disks.
- Load Balance: As the name suggests it distributes the queries on two or more replicated servers.
- Limiting Exceeding Connections: with this, extra connections are queued instead of returning an error immediately.
- Parallel Query: Using the parallel query function, data can be divided among the multiple (replicated) servers.
Configuring these properly can fine tune postgres' performance to handle 100-200 concurrent users.
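As an illustration, a minimal pgpool-II pooling setup might look roughly like this in pgpool.conf (parameter names are taken from pgpool-II and may vary between versions; the values are just examples):
listen_addresses = '*'
port = 9999                     # clients connect to pgpool here instead of postgres on 5432
backend_hostname0 = 'localhost' # the real PostgreSQL server
backend_port0 = 5432
num_init_children = 90          # number of concurrent client connections pgpool will serve
max_pool = 2                    # cached backend connections per pgpool child
The application then simply points its connection string at port 9999 instead of 5432, and pgpool takes care of reusing backend connections.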

Happy *postgresing*

To read more about performance tuning in PostgreSQL, read this.
For more on pgpool click here.

Thursday, July 1, 2010

Module Integration with CI (Hudson)

A while back, we (RM @ Openbravo) introduced a CI (Continuous Integration) tool (Hudson) for testing the code of our core ERP development branch, which allowed our developers to do:
- Daily Builds (Full/Incremental).
- Smoke Tests.
- DB Consistency Tests.
- etc.

But as most of our development was becoming modular (Openbravo became modular with Openbravo ERP Version 2.50), CI was not able to keep up the pace and provide similar help for module testing.

To set things in place, we enabled our developers to integrate and test their modules using the new CI, even after committing just a single changeset to their module repository.

For me, the goal, or I should say the purpose, of this whole effort was to enable a developer to have a nice and sound sleep after he pushes his commits to the module repository.
Sounds confusing?

Let me explain.
Earlier, developers used to develop a module and then run small, time-consuming manual tests to make sure that their code was bug free.
From a developer's perspective, he cannot sleep properly until his module is tested and deployed properly.

To enable CI for modules and save developers' time (spent on manual testing), we created a setup that helps developers directly configure a new job in Hudson to test their changesets. This new setup empowers them to do:
- Sanity Check
- Source compilation check and OBX creation
- Database consistency test
- Module's JUnit test
- Installation of the generated OBX
- Uninstallation of the installed module
- Selenium test (module smoke)
- Upgrade of the module from the previously published version in the CR (Central Repository) to the generated OBX
Even for a single new changeset in the module's repository.

And it still has endless possibilities; we can keep integrating new test cases into it.

We have also created a template job (pre-configured with all these test cases) to help developers configure and run tests for their modules easily.
Developers just have to copy the template job to a new job, change the variables to their module's values, and then run the job. We have also created a simplified wiki with step-by-step instructions.
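(If you prefer the command line over the web UI, copying the template job can also be done with the Hudson CLI, assuming your Hudson version ships the copy-job command; the server URL and job names below are just placeholders.)
java -jar hudson-cli.jar -s http://your-hudson-server/ copy-job module-template-job my-module-job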

* Currently, Openbravo developers working on any modules can take advantage of this tool (but the sky is the limit; maybe someday we can allow partners/the community to take advantage of it too).

Thursday, January 7, 2010

RM Updates: Amazon backup strategy, Mantis upgrade, automatic process for releasing 2.40, OB@OB

This is the latest news from Openbravo's Release Management Team:


Backup Strategy: EBS boot.

Amazon has a new feature, EBS boot, which lets us keep our root partition on an EBS volume and also allows us to have up to 1TB of data in the root partition. This helps us in two ways: a better backup strategy, and the ability to stop and re-start an instance, thus saving cost. My colleague gnuyoga has a blog post about the same.
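(As a side note, stopping and re-starting an EBS-booted instance boils down to something like this with the EC2 API tools; the instance id below is made up.)
ec2-stop-instances i-1a2b3c4d
ec2-start-instances i-1a2b3c4d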


Mantis Upgrade: Upgrade issues.openbravo.com to mantis-1.2.0

As you know, our existing issue tracker is based on Mantis 1.1.8. The release of Mantis 1.2.0 promises a lot of interesting productivity boosters, so we are migrating our current Mantis to the latest version. This involves quite a bit of challenge. In this sprint we are addressing customizations like SSO (Single Sign-On) and custom CSS. If you want to be a beta tester for our new Mantis, please drop us an email and we will give you a test account.


Continuous release of 2.40 branch

The mantra of the 2.40 branch is continuous release, as detailed in my colleague Juan Pablo's blog post. This task is now complete; for details see here.



OB@OB: Documentation and Linux tool

This task was about documenting the process of replicating the production environment to a testing environment, and creating a new tool that automates this process on Linux.