Sunday, July 31, 2011

T760 Specification Teclast [ My Tablet]

Processor: Rockchip RK2918 Cortex-A8 single-core 1.2GHz
Memory: 512MB DDR3
Storage Capacity: 8GB
Screen size: 7.0 inches
Standard Resolution: 1024 × 600
Wi-Fi: 802.11b/g (54Mbps)
Operating System: Android 2.3
Camera: 0.3-megapixel front camera + 2-megapixel rear camera
Interface:

Mini USB

3.5mm headphone jack

HDMI Interface

Micro SD (TF) card slot

Colors: White

Product Weight: 383.3g

Length: 200mm

Width: 125mm

Thickness: 13mm

Battery capacity: 3600 mAh

Battery Life: About 5 hours, depending on usage conditions

Standard List:
Earphones, USB cable, charger, product manual, warranty card, USB female adapter cable 

Rockchip RK2918 Mobile Internet Platform for Android System

RK2918 parameters
1.2GHz ARM Cortex-A8 core
NEON co-processor and 512KB L2 cache
2D/3D graphics processor with OpenGL ES 2.0 and OpenVG, up to 60M triangles/s
Supports H.264 1080p video encoding
Supports 1080p decoding of H.264, VP8, RV, WMV, AVS, H.263 and MPEG-4 video formats
Supports DDR3, DDR2 and Mobile DDR memory
Supports 24-bit hardware ECC for MLC NAND, supports eMMC boot
Two SD ports, for an SD card and a Wi-Fi module
Supports front and rear cameras up to 5 megapixels
Three USB ports, supporting USB device, host and 3G modules
Supports a standard Ethernet interface
Standard TFT / EPD controller
Supports 8-channel I2S and SPDIF audio output for high-definition movies
Includes a TS interface for mobile and terrestrial TV reception (ATSC-T/MH)
Supports VoIP network video calls
Supports Android 2.3 and later
 


[Image: RK2918 datasheet]

[Image: Bus architecture]

[Image: 1.1.6 Video Codec]

[Image: 1.1.9 Graphics Engine]


Apache Rewrite Rules

The Apache module mod_rewrite provides a powerful mechanism for hiding, redirecting, and reformatting request URLs. I just finished implementing a mod_rewrite scheme for timfanelli.com to accomplish 3 things:

1. Redirect old URLs with a 301 redirect code
2. Hide certain parts of the URL from my readers.
3. Optimize my Google pagerank.

Two simple lines handle the 301 redirect of old URLs:
RewriteEngine on
RewriteRule ^/index.cgi(.*) /old.html [R=301]

To hide the /cgi-bin/blog.cgi portion of my URL:
RewriteRule ^/blog/(.*)$ /cgi-bin/blog.cgi/$1 [PT]
RewriteRule ^/$ /blog/ [R=301]
RewriteRule ^/blog$ /blog/ [R=301]

To redirect yourdomain.com to www.yourdomain.com:
RewriteCond %{HTTP_HOST} ^yourdomain\.com$
RewriteRule ^(.*)$ http://www.yourdomain.com$1 [R=301,QSA]
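A quick way to sanity-check rules like these is to look at the response headers with curl (assuming curl is installed; the hostnames are the placeholder ones from above):
curl -I http://www.yourdomain.com/index.cgi
curl -I http://yourdomain.com/
The first request should come back with a 301 and a Location header pointing at /old.html, and the second should bounce over to the www host.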

Friday, July 29, 2011

Reverse Proxy with ModProxy


mod_proxy is a versatile Apache module with many uses, one of which is its reverse proxy feature. Let's say you have multiple web servers behind a router and want to give the outside world access to each server. Your router can only forward port 80 to one host, but with mod_proxy you can direct users to different servers depending on which subdomain or directory they request. This also works for external sites that may not be on your private network.
Hit the jump to see how.
Doing this is quite simple:
ProxyRequests Off

<Proxy *>
Order deny,allow
Allow from all
</Proxy>

ProxyPass /foo http://foo.example.com/bar
ProxyPassReverse /foo http://foo.example.com/bar
Example from: http://httpd.apache.org/docs/2.0/mod/mod_proxy.html
Basically, when someone requests /foo on the web server that the public port 80 points to, Apache proxies the request to foo.example.com/bar. The ProxyPassReverse directive makes sure responses and redirects from the backend are rewritten so they come back through the proxy to the user who requested them. As the documentation states, you should secure your servers before using mod_proxy, which is probably a wise choice, but then again you should always secure your servers.
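To route by subdomain rather than by directory, one common approach is a name-based virtual host per backend. Here is a minimal sketch in the same 2.0-era config style as above (the hostnames and backend addresses are made-up examples):
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName app1.example.com
    ProxyRequests Off
    ProxyPass / http://192.168.0.10/
    ProxyPassReverse / http://192.168.0.10/
</VirtualHost>
<VirtualHost *:80>
    ServerName app2.example.com
    ProxyRequests Off
    ProxyPass / http://192.168.0.11/
    ProxyPassReverse / http://192.168.0.11/
</VirtualHost>
With this, requests for app1.example.com and app2.example.com both arrive on the single forwarded port 80 but end up on different internal servers.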

Why The Antivirus?


Although viruses on Linux are not very common, it is not unusual to find antivirus utilities available for it. You may ask what the point is if your operating system is not as vulnerable to these types of threats, but perhaps you are looking at it the wrong way: what better platform is there to act as an anti-virus scanner than one that isn't as likely to get owned?
Take this example: a Linux file server vs. a Windows 2003 file server. Just by plugging the Windows server in, it may be at risk in a hostile environment (e.g. the Internet), while the Linux server may not carry as much risk (at least from a virus attack).
We all know the benefits of running Linux file servers, such as cost, stability and coolness, so we won't touch on those, but there are downsides too. One of the major downsides is that Linux servers have a reputation of being hard to manage. While managing them can be significantly different from managing a Windows server, this myth is often at the top of the list for decision makers.
Often system administrators (myself included) get lazy with their Samba configurations. This is a potential problem because a sneaky virus could attempt to write itself to any writable volume, which could cause a lot of grief for the poor Windows machines. Or perhaps, in tandem with the writable volume, an exploit for a piece of outdated software could allow the written file to be executed.
A friend of mine first introduced me to the concept of anti-virus scanners on a machine he had created specifically for the purpose of housing his virus collection. He had made a script that extracted information about each virus and cataloged it for easy reading and searching. All he had to do to add a virus to his collection was copy it to a folder. With this method he was able to quickly search and find any virus he had on file for specific traits or purposes for analysis. While some may call this overkill, for him it was a hobby. Would you keep your entire virus collection on a Windows machine?
As with any operating system, it is only as secure as you make it, so running an antivirus on your Linux machines may not be as silly as it first sounds, especially if they interact with the dirty Windows boxes on a regular basis. Then again, if you're purely a Linux shop, enjoy the cleanliness while it lasts.
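For the Samba scenario above, a nightly ClamAV sweep of the shared volumes is one low-effort safety net. A rough sketch, assuming ClamAV is installed and the shares live under /srv/samba (both assumptions; adjust the paths to your setup):
#!/bin/bash
# /etc/cron.daily/scan-shares -- update signatures, then recursively scan the
# Samba shares, logging and quarantining anything that looks infected.
freshclam --quiet
clamscan -r --infected --move=/var/quarantine --log=/var/log/clamscan.log /srv/samba
Dropped into /etc/cron.daily/, it runs unattended and only reports when it actually finds something.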

Linux Server Monitoring



There is a bunch of Linux server monitoring software available; the problem is sifting through it all. The first thing to do is identify what it is you want to monitor, then you can find the software that will work best for your needs. As far as system monitoring goes there is old school and new school. Any combination of tools can be used, and there are also a number of ways you can home-brew a monitoring solution.

Workstation Monitoring

Although not exactly server monitoring, workstation monitoring is a good starting point if you want to explore the available options.
Top
The most basic form of monitoring is top, which comes installed with most distributions, live CDs and servers. To use it, just open up a terminal and type the command. It refreshes automatically every few seconds by default, and there are some tweaks to make it more useful; my favorite is the < and > keys, which change the field the list is sorted by. It is very useful for seeing what is taking up precious CPU or memory resources, and the man page is the best place to learn more.
htop
A slightly newer top, with colors and textual bar graphs.
GKrellM
It's not quite as old as top, but still old school, and it's still available in many distributions' repositories. It's great to have sitting on your desktop and it has a whole range of plugins to make it do some pretty cool stuff.
Torsmo / Conky
Torsmo (http://torsmo.sourceforge.net/) and Conky (http://conky.sourceforge.net/) are pretty much the same thing. Conky started out as a fork of Torsmo which is why I’ve included them under the same category. Basically these are little apps that live on your desktop like GKrellM except in text mode. You can display pretty much any information you want, and run external applications including shell scripts.
phpSysInfo
A tool long forgotten about is phpSysInfo (http://phpsysinfo.sourceforge.net/). It requires a webserver such as Apache to run, but provides a broad range of information. This tool has really improved over the years, becoming more visually appealing and supporting more operating systems and languages. It gives only basic information out of the box (to be honest, top gives you more), but it can now be extended with plugins, meaning you can do whatever you want with it.
rrdtool
Perhaps for the most hardcore users that wish to add some zing to their monitoring, rrdtool (http://www.mrtg.org/rrdtool/) provides a great interface for creating graphs from data. Check out their gallery for some examples of what rrdtool can do. A lot of the other tools in this document also utilize rrdtool.
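To give a taste of the hardcore route, here is a minimal hand-rolled load-average graph (the step, data-source and archive values are arbitrary examples):
# one GAUGE data source, 5-minute step, one day of 5-minute averages
rrdtool create load.rrd --step 300 DS:load:GAUGE:600:0:U RRA:AVERAGE:0.5:1:288
# feed it from cron every 5 minutes
rrdtool update load.rrd N:$(cut -d' ' -f1 /proc/loadavg)
# render the last 24 hours to a PNG
rrdtool graph load.png --start -86400 DEF:load=load.rrd:load:AVERAGE LINE2:load#0000FF:"1 min load"
Most of the tools below do roughly this behind the scenes, just with nicer front ends.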
Cacti
If you're not as daring as the die-hard rrdtool users, Cacti (http://cacti.net) is a great piece of software that provides an interface and simplifies using rrdtool. You can create your own templates or use the pre-built ones to monitor a variety of things.

Web Server monitoring

I couldn't talk about Linux server monitoring without mentioning web server monitoring. If you've run a website before you know how exciting it can be to monitor your server. *Chuckles*. Seriously though, these tools can make it more fun and worthwhile. They all monitor, but some have very different uses from others. I'll start with the most basic.
Apache Logs
While not exactly server monitoring software, these deserve a mention. You can monitor a number of things, such as access logs:
tail -f /var/log/httpd/access_log
tail -f /var/log/httpd/ssl_access_log
or error logs:
tail -f /var/log/httpd/error_log
tail -f /var/log/httpd/ssl_error_log
If you want to get really snazzy with it you can use a tool like multitail.
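For example, to follow the access and error logs side by side in one terminal (assuming multitail is installed and the stock log paths above):
multitail /var/log/httpd/access_log /var/log/httpd/error_log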
apachetop
A great piece of software that takes the chore of opening log files in a pager and puts the information into a top-like interface. Handy for seeing real-time statistics.
mod_status
Possibly the most complex apache monitoring software available. I've honestly never used it before, but after reading up on it I may just have to give it a try.
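For reference, turning it on is only a handful of lines in the Apache config; a hedged sketch in the 2.0/2.2-era syntax used elsewhere in this post:
# make sure mod_status is loaded, then expose /server-status to localhost only
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
After a graceful restart, http://yourserver/server-status shows the worker scoreboard and per-request activity.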

Web Analytic Software

These don't monitor per se, but rather give you statistics. They can be useful for identifying issues related to traffic spikes.
Apart from Google Analytics, my favorite is analogstats; I have not yet had a chance to run Piwik, but it looks pretty good too. Evaluate them all and make a decision.

Database Monitoring

MySQLTop
A hard one to find in some distributions' repositories. I actually found mysqltop (http://jeremy.zawodny.com/mysql/mytop) by mistake one day. It's similar to top and apachetop, except it monitors MySQL databases.
InnoTop (http://sourceforge.net/projects/innotop)
I perceive innotop to be the same as mysqltop, except for InnoDB.
check_postgres
I couldn't complete the database section without mentioning Postgres. check_postgres (http://bucardo.org/wiki/Check_postgres) is a set of scripts to help monitor it. I've always found that configuring Postgres can be tricky, and since I haven't tried this one I'm wondering how hard it would be to set up.

Network Monitoring

What good is all this server monitoring if your network is non-functioning? A start would be to monitor your network; after all, even if your server is up, it's no good if your network doesn't work.
netstat (http://freshmeat.net/projects/netstat/)
Possibly the simplest form of network monitoring. Not very useful beyond watching a single workstation's connections and listening ports unless you were to hack something together, but worth a mention nonetheless.
etherape
I'm assuming this is pronounced ether-ape, as in the chimp variety, and not eth-rape, since there is an "e" in between. EtherApe is a powerful graphical network monitoring tool. Check out the screenshots to see what I'm talking about: http://sourceforge.net/projects/etherape/
iptraf
This one has been around about 10 or so years, but the website hasn’t been updated in 5. Some may think that it looks like Kismet, but I say Kismet looks like iptraf.
mrtg
If you have the Multi Router Traffic Grapher open to the public, it gives visitors an idea of how much traffic you get. Oh, what I'd give for a sneak peek at YouTube's graphs. Used by script kiddies everywhere to see if their DDoSes are working. MRTG makes nice rrd-style graphs and wraps an interface around them.
netmonitor (http://netmonitor.sourceforge.net/)
A top-like interface to view network bandwidth and usage. Updated slightly more recently than iptraf.
Use:
netmonitor --config
to generate a config file, then start up netmonitor and watch magic in the making.
jnettop
The project page is here: http://jnettop.kubs.info/wiki/ but the freshmeat page http://freshmeat.net/projects/jnettop/ has working screenshots. This is a top-like interface that, you guessed it, displays information like top.
ntop (http://www.ntop.org)
I know what you're thinking: another top interface, give it a rest already. Well, you are wrong. Although ntop shares the name, its user interface is far from the same. It even runs on Win32 since it uses the libpcap library.
Smokeping
Sounding like a Deep Purple song, we have Smokeping: measure and track your network latency in style. http://oss.oetiker.ch/smokeping/

Linux Enterprise Server Monitoring

Let's get to the part that everyone wants to hear about: Linux server monitoring for the enterprise. These are the top picks that are either open source, free, or carry only a small support cost. I'll be honest here, I don't have that much experience in this arena.
Monit
Monit (http://mmonit.com/monit/) was suggested to me by NOGREP while writing this article. I'm not sure how well it scales, but it certainly has all the makings of an enterprise solution. It can monitor processes, files and network services, either locally or on remote hosts. It also has its own web server for graphical server monitoring.
Nagios
The world-famous Nagios (http://nagios.org). There's a huge community here, and for good reason: it is possibly one of the most robust monitoring solutions out there. I've talked to a few IT managers that swear by it, and Nagios is also available in many distributions' repositories, making it a great choice. Monitoring, alerting, response, reporting, maintenance and planning are the larger areas that Nagios covers.
Zenoss
Perhaps known more for its enterprise services, Zenoss has a community edition; look for the community edition link on their home page (at the time of this writing the URL is http://community.zenoss.org). Zenoss provides availability monitoring, performance monitoring, event monitoring, alerting and more. A neat feature is the XML-RPC and REST APIs, making it integrable and extensible. The community edition is released under the GPL license.
OpenNMS
OpenNMS (http://www.opennms.org/), whether you call it network management software or a full solution, is perhaps geared more towards the network infrastructure side of the house, although as stated before that can be useful. It is perhaps the oldest one available and, like the others, can be highly customized.
PandoraFMS
The Pandora Flexible Monitoring System (http://pandorafms.org/) doesn't mean it's bendy. It's a really pretty monitoring system with some unique features I haven't seen in any of the others. The webcam overlay is nice, although I don't really know how practical it is. The graphs are pretty and not so rrd-like, and the maps look awesome. The network auto-generation is not unique, but it gives a cool visualization of a network fairly quickly.

Conclusion

This could be the most comprehensive list I've ever come up with. As you can see, there is a ton of open source Linux server monitoring software available. Give them a try, and use the comments to tell us what you think or whether any nuggets were missed.

Linux Server Management


Management... What more is there to say? Management has the ability to either make your life easier or make it a living hell, and server management can go the same way. With these utilities, Linux server management doesn't have to be quite as grim; in fact, having the right server management software can be very rewarding. In this post we'll cover some of the Linux server management software that is available. As always, feel free to contribute your favorite tools!

Linux Server Management Software

Webmin
I used to run Webmin when I wasn't as experienced with server management. What a long way it has come since then! Probably one of the more well-known pieces of server management software.
OpenNMS
OpenNMS is the world's first enterprise-grade network management application platform developed under the open source model.
Splunk
Strange name, neat software. Where's my t-shirt? All joking aside, Splunk is rather unique and powerful. Definitely worth a gander.

Configuration Management

Samba Administration

SWAT
We finish with SWAT, the Samba Web Administration Tool. I wouldn’t exactly call this enterprise ready but for those managing small to medium sized workgroups SWAT may just do the trick.
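SWAT is normally started from the superserver rather than standalone; a hedged sketch of the usual xinetd stanza (the path and port are the common defaults, verify them on your distro):
# /etc/xinetd.d/swat
service swat
{
    port        = 901
    socket_type = stream
    wait        = no
    user        = root
    server      = /usr/sbin/swat
    only_from   = 127.0.0.1
    disable     = no
}
Reload xinetd and SWAT answers on http://localhost:901/.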

htaccess allow from


htaccess allow from gives you the ability to allow (or deny) specific IPs or domain names access to a directory on your server. The syntax is quite simple: using vim or nano, open up the .htaccess file in the directory that you want to restrict access to and add the following:
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
This denies everyone except your localhost (or whatever IP address you specify). Using .htaccess you can also allow by host name, which is useful if you wish to allow or deny a friend access to a directory. (Note: it will also work if you have them in your hosts file.)
Order Deny,Allow
Deny from all
Allow from LinuxBlog
Allow from .thelinuxblog.com
Using htaccess to allow access from your whole LAN is also pretty easy. You use a CIDR address (ip/subnet) to do this; try something like the following (changing it to match your LAN):
Order Deny,Allow
Deny from all
Allow from 192.168.1.0/24
I run into htaccess allow problems a lot, and hope that this will clear things up for me. htaccess can be very handy if you do not want to keep turning your firewall on and off, but also do not want your directories wide open. Just remember: if you want to stop everyone except those you choose from accessing your Apache web directories, use htaccess allow from!
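One gotcha worth noting: those .htaccess directives only take effect if the main server config lets that directory override them. A hedged example (the path is hypothetical):
<Directory /var/www/html/private>
    AllowOverride Limit
</Directory>
Without at least AllowOverride Limit (or All), the Order/Deny/Allow lines in the .htaccess file are either ignored or trigger a server error, depending on the rest of your config.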

Using SSH as a Proxy


A helpful reader left a comment on this blog about using SSH as a Socks Proxy. Here is how to do it.
ssh -D 8080 user@remote-host
That's it; once you're logged in you are good to go. (The 8080 here is just an example local port for the SOCKS listener, and remote-host is whatever box you can SSH into.)
Now, the problem I ran into was making Firefox use this proxy. I found a great extension called SwitchProxy which can be installed from the extensions site. Once installed, you can easily switch between proxies. This is really useful to use while at a coffee shop.
Check out the screenshot of the toolbar that it installs:
[Image: Using SSH as a proxy on Linux]
It makes it really easy to turn the proxy on or off. One day if there is interest I might try to extend this to establish the SSH connection.
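If you'd rather test the tunnel without touching Firefox at all, curl can talk to the SOCKS listener directly (this assumes the dynamic forward was started on port 8080, as in the example command above):
curl --socks5-hostname localhost:8080 http://www.example.com/
If the page comes back, the proxy is up; with the -hostname variant, DNS lookups also go through the tunnel.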

Remote SSH Port Forwarding


SSH is an amazing tool, I often find myself finding new and interesting ways (at least to me) to use it. It is a great tool to have in your toolbox.
This may be hard to explain in words, but here goes.
Picture this: you have 3 hosts. Host A has outbound access only and is on the same network as Host B. Host B has port 22 open, accepts SSH, and is allowed to SSH to Host A. Host C is the computer you are sitting at, on a different network. You need to connect to Host A from Host C. The way to do this is with SSH port forwarding.
Let's say Host A is 192.168.1.2, Host B is 192.168.1.1, and Host C is 10.0.0.1 on the different network. Host C also has port 22 open.
So, in order to connect to Host A from Host C, you can do the following with local port forwarding:
ssh -L 2222:192.168.1.2:22 user@HostB
Since this is a local forward, in another terminal you use ssh -p 2222 remoteuser@localhost (on your local machine, Host C) to connect to Host A. This works, but you have to keep the SSH session to Host B open, which may or may not be a problem.
One thing I like to do instead is use an SSH remote port forward, which has the advantage of not needing to keep the Host C (local) -> Host B connection open. Here is how it goes:
SSH from your current workstation (local) to the host that has access to your target host (host A in this case)
user@hostC: ssh hostB
From that connection, ssh to host A (the final target)
user@hostB: ssh hostA
Now that you're on your target host, you can open screen (to resume later if you need to) and then SSH back to your current workstation (Host C), using the remote forward option (-R) with a port that is open on your current workstation (2222) pointing back at localhost (Host A) on port 22:
user@hostA: screen
user@hostA: ssh hostC -R 2222:localhost:22
Finally, from your workstation, in another terminal window, you’ll connect to your local port 2222, to connect to Host A.
user@hostC: ssh localhost -p2222
Once this is done, you can detach from your screen session on Host A and log out of Host A and Host B. You'll essentially be left with a connection from Host A to Host C carrying a port forward that lets Host C connect to Host A, even though you cannot SSH directly to Host A from Host C.
So, if you’ve followed it this far good job. Here is an attempt at drawing a graphic to represent what I typed. It should make the text a little easier to follow.
[Image: Remote SSH port forwarding diagram]
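As an aside, if all you need is an interactive shell on Host A (rather than a standing reverse tunnel), the same hop through Host B can be done in one command with ProxyCommand and netcat, assuming nc is installed on Host B:
ssh -o ProxyCommand="ssh user@HostB nc %h %p" user@HostA
Here %h and %p are expanded by ssh to the target host and port, so the connection to Host A is simply piped through the session to Host B.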

Thursday, July 28, 2011

Linux find command

For those a little scared of the terminal, using the Linux find command may seem a little daunting. To be honest, though, the find command really isn't that hard to get the hang of. By effectively learning and using the Linux find command you'll open up a whole new can of searching capabilities: you'll boost your productivity and be more likely to find what you're looking for. Alright, enough of the pep talk; let's get to the core of the powerful Linux find command.

Basic Searching

So, the first thing you need to know about the Linux find command, is how to find stuff. The simplest syntax for find is as follows:

find . -name “example_filename.txt”

What this does is find the file named (-name) example_filename.txt ("example_filename.txt") starting from the present working directory ("."). This may seem like a lot of typing just to find one file, but if you have a thousand directories on your desktop and you don't know where you put that blasted example_filename.txt, it will find it. It will also do it quickly.

If you don’t remember the case of the example_filename.txt (perhaps you named it ExAmPlE_FiLeNaMe.TXT) you can use the -iname switch. You use it just like the above example.

What if you're too lazy to cd into the directory you know the file is in? Or, better put: what if you want to search for files in another directory? Well, instead of the . you can use a path. So if you want to search in /usr/share/icons you'd do:

find /usr/share/icons -iname “*mail*”

Whoa, what are those stars? Those are wildcards, and they're covered in the next section.


Wild Cards

So, you've mastered how to find a file when you know the file name; let's move on to wildcards. With the Linux find command, wildcards are a way of searching without knowing everything about what you are trying to find.

The most commonly used wildcard for find is the asterisk. * will help you find what you need when you only know part of what you are looking for.

Let's say you have a picture named example_filename.png, only you don't remember what type of picture it is; it could be .jpg, .jpeg, .gif or even the .exe that your gross Aunt Fanny's infected Windows box sent you. You'd do something like this:


find . -iname “example_filename.*”


If you need to find a file where you only know the middle of the filename, you can also use the asterisk at the beginning or anywhere in between, as follows:


find . -iname “*foo*bar*”


This would match foobar.*, foodbar.*, tofoobar.*, tofootbarf.* and so on.

Pretty simple, huh? But wait, there's more! You can also use "?" as a wildcard. The question mark matches exactly one character, which is great when you know a name differs only by a single character. Take the following example:

You have a number of PDF files, sorted by a letter followed by a number: A1.pdf through whatever, B1.pdf through whatever, and C1.pdf through however many you want. Now, you only want the PDFs that have the number 2 in them, but for all letters. You'd do something like this:

find . -iname “?2.pdf”


This is just as useful to know as the * wildcard, though I find myself using the asterisk more often.


Date / Time


Handily, the Linux find command also knows how to find files by date. You can search for files by last-accessed time, last-changed time or last-modified time. There are a huge number of options available here, but the easiest is -mmin (-mmin n). A negative n means "modified less than n minutes ago" (a plain n means exactly n, and +n means more than n).


find . -mmin -60


You can also use -mmin with a pattern


find . -mmin -60 -iname “*.doc”


Size

Finding files by size is pretty straightforward. The most common searches are by kilobyte, megabyte and gigabyte; these are shown respectively below.


find . -size 512k

find . -size 256M

find . -size 2G
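Those match files of exactly that size; in practice you usually want "bigger than" or "smaller than", which is what a + or - prefix gives you. For example (the sizes are arbitrary):

find . -size +100M

find /var/log -size -10k

The first finds files larger than 100MB under the current directory, the second finds files smaller than 10KB under /var/log.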

Type

Sometimes it's nice to be able to search just for directories, or only for files. This can be useful for getting counts of directories or files when combined with wc. If you want to use the Linux find command to find only directories, you can use the following:

find . -type d -iname “*test*”

The same goes for files, except f is used. (The following is piped to wc to get a file count.)

find . -type f -iname "*test*" | wc -l

There are other types that can be used, but the ones most useful to the average user are f (file), d (directory) and l (symlink).

Permissions

If you need to, you can use the Linux find command to search for files by permission. The easiest method is -perm with the octal representation. For more information on the octal format try: Introduction to CHMOD – Octal Format.

To put it simply, you'd do something like this:

find . -perm -777
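A common real-world use of -perm is hunting for world-writable files. A quick example (the path is arbitrary):

find /var/www -type f -perm -002

The -002 form means "the other-write bit must be set", regardless of what the remaining bits are.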

Tuesday, July 26, 2011

Linux OS Image Backup & Restore Guide (Using PING Tool)

Using PING for backup and Restore Linux OS Images

PING (Ping is not Ghost) is a live Linux ISO, created to take image backups of HDD/partitions, either locally or over the network. It can be burnt on a CD and booted, or integrated into a PXE / RIS environment.


It's a good idea to take an image backup of all your servers so that, if something goes wrong, you can restore easily.


Look at the Quick HOWTO Guide for Backup and Restore Linux Images using PING Tool


Download PING from here

Hopefully this article will help system admins. Using PING you can take image backups of both Linux and Windows operating systems.

Upgrade iLO firmware from Linux shell


You can upgrade the iLO firmware from the Linux shell itself.
1. Download the right firmware version from the HP site and upload it to the server you want to upgrade.

2. Now log in to the server and simply run the file you downloaded from the HP site.
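Depending on how you transferred it, the file may have lost its execute bit on the way over; if so, mark it executable first (the path matches the example below):
chmod +x /var/tmp/CP014890.scexe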

mytestsrv1:/var/tmp # ./CP014890.scexe
FLASH_iLO2 v1.12 for Linux (Aug 31 2009)
Copyright 2009 Hewlett-Packard Development Company, L.P.
Firmware image: ilo2_206.bin
Current iLO 2 firmware version  2.01; Serial number ILOXXXXXXX
Component XML file: CP014890.xml
CP014890.xml reports firmware version 2.06
This operation will update the firmware on the
iLO 2 in this server with version 2.06.
Continue (y/N)?y
Current firmware is 2.01 (Aug 04 2010 11:16:29)
Firmware image is 0x300000 bytes
Committing to flash part...
******** DO NOT INTERRUPT! ********
Flashing completed!
Attempting to reset device.
Succeeded.
Waiting for iLO 2 to reboot...
iLO 2 reboot completed. 

That's all, you are done. The iLO firmware has been upgraded to the newer version.

Wednesday, July 20, 2011

12 Reasons Why Every Linux System Administrator Should be Lazy

A system administrator's job is not visible to other IT groups or end-users. Mostly they look at administrators and wonder why sysadmins don't seem to have any work.

If you see a sysadmin who is always running around, trying to put out fires and constantly dealing with production issues, you might think he is working very hard and really doing his job. But in reality he is not.
If you see a sysadmin (UNIX/Linux sysadmin, DBA, or network administrator) who doesn't seem to be doing much around the office, who always seems relaxed and doesn't appear to have any visible work, you can be assured that he is doing his job.
The following are the 12 reasons why a lazy sysadmin is the best sysadmin.
  1. Who is the boss? The main reason why the lazy sysadmin is the best sysadmin is his attitude. He looks at the machines a little differently than other IT departments do. There is a difference between developers and sysadmins: developers think they are here to serve the machines by developing code. There is nothing wrong with this approach, as developers have a lot of fun developing code. But sysadmins think the other way around: the machines are there to serve them. All they have to do is feed the machine and keep it happy, then let the machine do all the heavy-duty work while they relax and just be lazy. The first step in being a lazy sysadmin is a slight change in attitude, and letting the machine know that you are the boss.
  2. Write scripts for repeated jobs. Being lazy means being smart. A smart sysadmin is a master of scripting languages (bash, awk, sed, etc.). Any time he is forced to do some work, and there is a remote possibility that the work might be needed again, he writes a script to complete the job. That way, the next time he is asked to do the same job, he doesn't have to think; he just executes the script and gets back to being lazy.
  3. Backup everything. Being lazy means taking backups. A lazy sysadmin knows that he has to put a little work into creating a backup process and writing backup scripts for all critical systems and applications (there is a small sketch after this list). When disk space is not an issue, he schedules a backup job for every application, even the ones that are not critical. This way, when something goes wrong, he doesn't have to break a sweat; he just restores from the backup and gets back to whatever lazy stuff he was doing before.
  4. Create a DR plan. Sysadmins don't like to run around when things go wrong. When things are running smoothly, they take some time to create a DR plan, so that when things do go wrong they can follow the plan, quickly get things back to normal, and get back to being lazy again.
  5. Configure highly redundant systems. Lazy sysadmins don't like to get calls in the middle of the night because of some silly hardware failure. So they make sure all the components are highly redundant, both hardware and software: dual network cards, dual power, dual hard drives, dual everything. That way, when one component fails, the system keeps running, and the lazy sysadmin can work on fixing the broken component after he wakes up in the morning.
  6. Head room for unexpected growth. A lazy sysadmin never lets his system run at full capacity. He always leaves enough head room for unexpected growth, making sure the system has plenty of CPU, RAM and hard disk available. When the business unit decides to dump tons of data overnight, he doesn't have to think about how to handle that unexpected growth.
  7. Be proactive. Being lazy doesn't mean you just sit and do nothing all the time. Being lazy means being proactive. Lazy sysadmins hate being reactive; they are always anticipating issues and anticipating growth. When they have some free time on their hands, they work on proactive projects that help them avoid unexpected future issues and handle future growth.
  8. Loves keyboard shortcuts. The lazy sysadmin knows the keyboard shortcuts for all his favorite applications. If he spends significant time every day in an application, the first thing he'll do is master its keyboard shortcuts. He likes to spend less time in the application to get his work done, and to get back to being lazy.
  9. Command line master. Every lazy sysadmin is a command line master. This applies to Linux sysadmins, DBAs, network administrators, etc. If you see an administrator launching a GUI when the same task can be done from the command line, you know he is not a lazy sysadmin. There are two reasons why the lazy sysadmin loves the command line: for one, he can do things quickly there; for another, it makes him feel that he is the boss and not the system. When you use the command line you are in control and know exactly what you want to do. When you use a GUI, you are at the mercy of the GUI workflow, and you are not in control.
  10. Learns from mistakes. The lazy sysadmin never likes to make the same mistake twice. He hates working on unexpected issues, but when one happens, he works on fixing it, thinks about why it happened, and immediately puts the necessary things in place so that the same issue doesn't happen again. Working on the same problem twice is a sin according to the lazy sysadmin. He likes to work on a problem only once, do things to prevent the same mistake from happening in the future, and get back to being lazy.
  11. Learn new technology. There is nothing wrong with learning a new technology to get a better job or just to keep up with technology growth. But the lazy sysadmin doesn't learn new technology for that reason. Instead, he learns new technology because he likes to be in control of the systems at all times. He knows he is the boss, and not the machine. So when a new technology comes along, he takes the time to study it. Now he has new tools to keep the system busy while he continues to be lazy. He learns new technology for purely selfish, lazy reasons.
  12. Document everything. Not every lazy sysadmin does this; only the best lazy sysadmins do. You see, the lazy sysadmin never likes to be disturbed when he is on the beach enjoying his vacation. So what does he do? He documents everything, so that when he is not around, other junior sysadmins can do the routine jobs and keep things moving without disturbing his vacation. There is another reason for the lazy sysadmin to document everything: he forgets things. Since he is lazy, he tends to forget what he did a month ago. And since he never likes to think about or research the same topic twice, he documents everything, and when he needs to do the same thing again, he goes back to his documentation to see what he did earlier.
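As a tiny illustration of points 2 and 3, a backup "script" doesn't have to be fancy to count. A throwaway sketch (the host and paths are made up, and this is nowhere near a full backup strategy):
#!/bin/bash
# /etc/cron.daily/mirror-configs -- push /etc and /home to a backup host nightly
rsync -az --delete /etc /home backup@backuphost:/backups/$(hostname)/
Write it once, schedule it, and go back to being lazy.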
By now you are probably convinced that being a lazy sysadmin is not that easy; it is a lot of hard work. If you are not a sysadmin, you can now appreciate a lazy sysadmin when you see one. If you are a sysadmin, and always running around, now you know what you need to do to be lazy.

Friday, July 15, 2011

Share Internet with squid & Iptables


Here is a nice trick to share internet access with Squid and block websites using port redirection.
Open your sysctl.conf:
vim /etc/sysctl.conf and change the line
net.ipv4.ip_forward = 1 (by default it's 0)
then save & exit the file.
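To make the kernel pick up the change without a reboot, reload the sysctl settings (this just re-reads /etc/sysctl.conf):
sysctl -p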
1. Install Squid
yum -y install squid*
2. Edit the squid.conf file
To block a website, copy and paste the lines below, adjusting them for your source network (the 192.168.1.0/24 range here is just an example):
acl blocksite dstdomain .orkut.com
http_access deny blocksite
acl our_networks src 192.168.1.0/24
http_access allow our_networks
Open vim /etc/squid/squid.conf and search for http_port; you will see the default port 3128 there, so change it to 8888.
Save & exit the file.
3. service squid restart
4. Now share your internet connection using iptables by executing the following command (eth1 is the interface with the live/public IP):
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
5. Now redirect port 80 to 8888 (eth0 is the local LAN interface):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8888
Now your clients' Outlook will work directly without any changes, but they will not be able to surf the websites you blocked in Squid.
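One caveat: the iptables rules above live only in memory and disappear on reboot. On a RHEL/CentOS-style box (an assumption here, since yum is used above) they can be persisted with:
service iptables save
which writes the current rules to /etc/sysconfig/iptables so they are restored at boot.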