What is Ansible?
Ansible is a simple automation language that can perfectly describe an Information Technology application infrastructure. With Ansible, Information Technology admins can begin automating away the drudgery from their daily repetitive tasks.
Ansible is an open source automation platform. It is very simple to set up, yet powerful.
Ansible can help you with configuration management as well as task automation. It can also do Information Technology orchestration. For example, Ansible can upgrade your web servers one at a time and, while upgrading, add each new web server to the load balancer and enable it in your Nagios monitoring system. In short, you can handle complex tasks with a tool that is easy to use.
Some Useful Ansible Terms:
⦁ Control Machine: The machine where Ansible is installed. This machine is responsible for running the provisioning on the servers you are managing.
⦁ Inventory: An initialization file that contains information about the servers you are managing.
⦁ Playbooks: The entry point for Ansible provisioning, where the automation is defined through tasks in YAML format.
⦁ Tasks: A block that defines a single procedure to be executed e.g. Install a package.
⦁ Module: A module typically abstracts a system task, like dealing with packages or creating and changing files. Ansible has a multitude of built-in modules, but you can also create custom ones.
⦁ Role: A pre-defined way of organizing playbooks and other files in order to facilitate sharing and reusing portions of a provisioning.
⦁ Play: A provisioning executed from start to finish is called a play. In simple words, the execution of a playbook is called a play.
⦁ Facts: Global variables containing information about the system, like network interfaces or the operating system.
⦁ Handlers: Used to trigger service status changes, like restarting or stopping a service.
Ansible allows you to create groups of machines and describe how these machines should be configured or what actions should be taken on them. Ansible issues all commands from a central location to perform these tasks. Ansible can also be used to automate different networks.
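As a minimal sketch of how the terms above fit together, here is an illustrative playbook; the group name "webservers" and the httpd package are assumptions for the example, not taken from this article:

```yaml
---
# Illustrative play: installs Apache on every host in the (assumed)
# "webservers" inventory group, restarting it only when the task changes something.
- name: Ensure Apache is installed and running
  hosts: webservers
  become: yes
  tasks:
    - name: Install the httpd package
      yum:
        name: httpd
        state: present
      notify: restart httpd
  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted
```

Running `ansible-playbook site.yml` against an inventory that defines a [webservers] group would execute this play; the handler fires only when the install task reports a change.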
Ansible architecture is fairly straightforward. Refer to the diagram below to understand the Ansible architecture:
As the diagram shows, the Ansible automation engine interacts directly with the users who write playbooks, and it also interacts with cloud services and a Configuration Management Database (CMDB).
Network interface card (NIC) bonding (also referred to as NIC teaming) is the bonding together of two or more physical NICs so that they appear as one logical device. This allows for improvement in network performance by increasing the link speed beyond the limits of one single NIC and increasing the redundancy for higher availability. For example, you can use two 1-gigabit NICs bonded together to establish a 2-gigabit connection to a central file server.
When bonded together, two or more physical NICs can be assigned one IP address, and they will present the same MAC address. If one of the NICs fails, the IP address remains accessible because it is bound to the bonded logical interface rather than to a single physical NIC.
Here is the list of available bonding modes:
balance-rr or 0 : Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available.
active-backup or 1: Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
balance-xor or 2: Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method the interface matches up the incoming request’s MAC address with the MAC address of one of the slave NICs. Once the link is established, transmissions are sent out sequentially beginning with the first available interface.
broadcast or 3: Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces.
802.3ad or 4: Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant.
balance-tlb or 5: Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave.
balance-alb or 6: Sets an Adaptive Load Balancing (ALB) policy for fault tolerance and load balancing. Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation.
Steps to configure:
Step #1: Create a bond0 configuration file
Red Hat Linux stores network configuration in /etc/sysconfig/network-scripts/ directory. First, you need to create bond0 config file:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0

Append the following lines to it:

DEVICE=bond0
IPADDR=192.168.1.20
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Replace above IP address with your actual IP address. Save file and exit to shell prompt.
Step #2: Modify eth0 and eth1 config files:
Open both configuration files using the vi text editor and make sure the eth0 interface file reads as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0

Modify/append the directives as follows:

DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Open eth1 configuration file using vi text editor:
# vi /etc/sysconfig/network-scripts/ifcfg-eth1

Make sure the file reads as follows for the eth1 interface:

DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Save file and exit to shell prompt.
Step # 3: Load bond driver/module
Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel modules configuration file:
# vi /etc/modprobe.conf

Append the following two lines:

alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Step # 4: Test configuration
First, load the bonding module:
# modprobe bonding

Restart the networking service in order to bring up the bond0 interface:

# service network restart

Verify everything is working:

# less /proc/net/bonding/bond0

Output:

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:59

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Public vs Private, Amazon Web Services EC2 compared to OpenStack®
How to choose a cloud platform and when to use both
The public vs private cloud debate is a path well trodden. While technologies and offerings abound, there is still confusion among organizations as to which platform is suited for their agile needs. One of the key benefits to a cloud platform is the ability to spin up compute, networking and storage quickly when users request these resources and similarly decommission when no longer required. Among public cloud providers, Amazon has a market share ahead of Google, Microsoft and others. Among private cloud providers, OpenStack® presents a viable alternative to Microsoft or VMware.
This article compares Amazon Web Services EC2 and OpenStack® as follows:
- What technical features do the two platforms provide?
- How do the business characteristics of the two platforms compare?
- How do the costs compare?
- How to decide which platform to use and how to use both
- OpenStack® and Amazon Web Services (AWS) EC2 defined
From OpenStack.org “OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.”
From AWS: “Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.”
Technical comparison of OpenStack® and AWS EC2
The tables below name and briefly describe the feature in OpenStack® and AWS.
Why do you need it?
To run an application you need a server with CPU, memory and storage, with or without pre-installed operating systems and applications.
|Feature|Why you need it|OpenStack®|AWS EC2|
|---|---|---|---|
|Compute sizes (virtual machines/servers)|How much memory, CPU and temporary (ephemeral) storage is assigned to the instances/VMs.|Flavors: variety of sizes (micro, small, medium, large, etc.)|Variety of sizes (micro, small, medium, large, etc.)|
|Operating systems offered|What operating systems the cloud offers to end users.|Whatever operating systems the cloud administrators host on the OpenStack cloud (Red Hat certifies Microsoft Windows, RHEL and SUSE).|AMIs provided by the AWS Marketplace.|
|Templates/images|A base configuration of a virtual machine, from which other virtual machines can be created. Catalogs of images can be created from which users select a virtual machine.|Glance. OpenStack administrators upload images and create catalogs for users; users can upload their own images.|Amazon Machine Image (AMI). AWS provides an online marketplace of pre-defined images; users can upload their own images.|
How can understanding Linux enhance a career? This question is interesting because there are two drastically different answers. The first is the obvious answer that you can find through websites and studies everywhere, but the second is a little more subtle. And a lot more awesome.
You might be reading this post because you read articles like this one from The Linux Foundation regarding hiring demands for Linux experts. Or perhaps you read the 2013 report and realized there’s a trend for hiring Linux professionals. Basically, if you want a job in technology, being a Linux expert is like finding a golden ticket in your Wonka bar.
But what about non-Linux experts who are professionals in their own fields? Does the unemployed or underemployed Microsoft administrator have to start over and look for an entry level job in a field they don’t know, with zero experience and almost zero enthusiasm?
Let me start by telling you about my last job. This is part six of the blog series, so by now you probably realize that I’m a Linux guy, and couldn’t hide it if I tried. But my last full-time position? Managing director of the database department at a private university. This university was Microsoft-centric and all of our database systems were Microsoft SQL. We had proprietary Windows applications running on a large array of Windows servers. There wasn’t a single Linux operating system in the entire IT department. (Well, except for the Xubuntu VM on my laptop, but that doesn’t really count)
How on earth did I get that job when my resume screams Linux and Open Source? It’s simple: because working with Linux forces you to be a thinker.
My boss (an incredible man, and now a great friend) saw the Linux stuff on my resume and didn’t think, “This guy doesn’t know Microsoft stuff at all!” Rather he saw it and thought, “This guy knows Linux? He can do anything!”
Sure, that’s a generalization, but it’s pretty common. It’s also often the truth too. Being comfortable with Linux means that you’re flexible. There are tons of Microsoft-only server rooms, but in an office environment, there’s rarely a Linux-only server room. That means Linux users have to be comfortable working with multiple operating systems. It also means they tend to have incredible troubleshooting skills, and by their mere interest in Linux, it shows they can (and do) think outside the box.
So how has Linux helped my career? It helped me land a job at a university that doesn’t have a single Linux server in their entire infrastructure. Linux professionals don’t just fix computers, they solve problems. That’s what makes them so invaluable.
How can Linux change your career?
Yes, I’m about to get a little grandiose. But I’m passionate about changing people’s lives, and I’ve seen it happen, so at least consider this list of ways Linux can help your career.
- Quite simply, you can get a job. Obviously, there are many, many places looking for individuals who are skilled with Linux. The links above will attest to that. But that’s just the obvious answer.
- Learning Linux helps you look at your skillset in a different light. No longer do you see yourself as a list of certifications and abilities, but rather a forward-thinking problem solver. All of your skills are just arrows in your quiver, and your brain is what makes you so valuable. Remember, a Google search can teach you how to install an Apache server, but only a well-trained problem solver can know when it’s appropriate to do so.
- You can find a job you love. Once you realize how valuable and flexible you’ve become, you can focus more on finding a job you love. We all need to pay our mortgage, but if your job options are broader, the chances of finding your calling are much greater.
- You can offer employers or clients well-rounded advice. Remember from past blog posts, there are times Linux isn’t the right choice. The only people who will be able to tell the difference are those familiar with Linux and the alternatives. Your Linux expertise can be invaluable to someone who is implementing a SharePoint infrastructure. Should they be using Linux-based solutions instead? Be that person who can help them decide. Your rewards will be more than just monetary. I promise.
- Reread number 2. Truly, making the mental shift from a technician to a solutions provider is the key to success in IT. Be the answer that a Google search can’t provide. You don’t need all the answers; you need to know how to ask all the right questions.
I’m excited about the future of technology, and the role Linux professionals will play in it. It’s certainly not too late to jump into the mix and start learning Linux. As the hiring focus shifts more and more toward DevOps-type skills, a Linux skillset (and more importantly an open source mindset) will make you very employable. Even more important than that, however, is that it will likely leave you a fulfilled person. At the end of the day, that’s the key to a successful career.
OpenStack is a cloud computing platform that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard (Horizon) that gives administrators control while empowering users to provision resources through a web interface. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of interrelated services, each with its own project name and description.
Are You Ready for Red Hat Enterprise Linux Higher End Certification?
With all the changes coming with Red Hat Enterprise Linux (RHEL) Higher end certification program and the training around it, I’ve heard Red Hat admins ask, “But what about their certifications?” The reality is that not much has changed around Red Hat certifications at the basic levels. But if you’ve got your sights set higher for the RHCA and some of the speciality areas, things have changed a bit!
At the lower levels — Red Hat Certified System Administrator (RHCSA) and Red Hat Certified Engineer (RHCE) — nothing has really changed. You still need to take the same courses and exams (EX200 for the RHCSA and EX300 for the RHCE), and you’ll need both of these certifications to get your Red Hat Certified Architect (RHCA) certification.
Linux Scenario Questions
Ques 1. What is the difference between name-based virtual hosting and IP-based virtual hosting? Explain a scenario where name-based virtual hosting is useful.
Ans – Virtual hosts are used to host multiple domains on a single Apache instance. You can have one virtual host for each IP your server has, or the same IP but different ports, or the same IP and port but different host names. The latter are called “name-based vhosts”.

In IP-based virtual hosting, we can run more than one website on the same server machine, but each website has its own IP address. In name-based virtual hosting, we host multiple websites on the same IP address; for this to succeed, you have to put more than one DNS record for your IP address in the DNS database.

In a production shared web-hosting environment, getting a dedicated IP address for every domain hosted on the server is not feasible in terms of cost; most customers won’t be able to afford a dedicated IP address. This is where name-based virtual hosting finds its place.
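As an illustration, two name-based virtual hosts sharing one IP address might be configured like this (the domain names and document roots are hypothetical):

```apache
# Apache selects the vhost by matching the request's Host: header
# against ServerName; both share the same IP address and port.
<VirtualHost *:80>
    ServerName www.example1.com
    DocumentRoot /var/www/example1
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example2.com
    DocumentRoot /var/www/example2
</VirtualHost>
```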
Ques 2. What is network bonding in Linux, and what are the important configuration files involved? What is the advantage of network bonding?
Ans – Network bonding is a Linux kernel feature that allows you to aggregate multiple network interfaces into a single virtual link. This is a great way to achieve redundant links, fault tolerance or load balancing in production systems. If one of the physical NICs is down or unplugged, traffic automatically moves to the other NIC. Similarly, bonding will increase the interface throughput if it is configured in active-active mode.
There are seven modes, numbered 0 to 6, which decide how the bonding configuration behaves.
mode=0 (balance-rr) – Round-robin policy
It is the default mode. It transmits packets in sequential order from the first available slave through the last.
This mode provides load balancing and fault tolerance.
mode=1 (active-backup) – Active-backup policy
In this mode, only one slave in the bond is active. The other becomes active only when the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.
mode=2 (balance-xor) – XOR policy
Transmits the traffic based on [(source MAC address XOR’d with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
mode=3 (broadcast) – Broadcast policy
Transmits everything on all slave interfaces. This mode provides fault tolerance.
mode=4 (802.3ad) – IEEE 802.3ad dynamic link aggregation
Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
mode=5 (balance-tlb) – Adaptive transmit load balancing
Channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
mode=6 (balance-alb) – Adaptive load balancing
It includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. Receive load balancing is achieved by ARP negotiation.
Important configuration files involved:
/etc/sysconfig/network-scripts/ifcfg-bond0
/etc/modprobe.d/bonding.conf
/etc/sysconfig/network-scripts/ifcfg-eth[0-4]
/proc/net/bonding/bond0
Ques 3. Briefly explain the procedure for reinstalling GRUB in Linux.
Ans – 1) Download Ubuntu Installation / Live cd
2) Boot from Ubuntu Installation / Live cd – usb, burned cd etc.
3) During boot, select “Try Ubuntu”; don’t select install!
4) Mount your Linux root partition
sudo mount /dev/sda6 /mnt
( Assuming /dev/sda6 is the Linux root partition)
5) Install / reinstall grub
$ sudo grub-install --root-directory=/mnt/ /dev/sda
( where /dev/sda is your primary disk)
Installation finished. No error reported.
6) Reboot your system and remove the bootable CD; the boot menu should be ready when the system starts.
Note: There will be slight differences with other distros.
Ques 4. Explain the fields in /etc/passwd and /etc/shadow.
Ans – The /etc/shadow file stores the actual password in encrypted format along with additional properties related to the user’s password; it mainly holds the account aging parameters. All fields are separated by a colon (:). The file contains one entry per line for each user listed in the /etc/passwd file.
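For illustration, a typical shadow entry might look like this (the username, hash and day counts are made up for the example):

```
steve:$6$Etg2ExUZ$F9NTP7omafhKIlqaBMqng1:15888:0:99999:7:::
```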
Here is the explanation of each field.
User name: Your login name.
Password: Your encrypted password.
Last password change: Days since Jan 1, 1970 that the password was last changed.
Minimum: The minimum number of days required between password changes.
Maximum: The maximum number of days the password is valid.
Warn: The number of days before password expiry that the user is warned that the password must be changed.
Inactive: The number of days after the password expires that the account is disabled.
Expire: Days since Jan 1, 1970 that the account has been disabled; an absolute date specifying when the login may no longer be used.
The /etc/passwd file stores essential information required during login. It is a text file that contains a list of user-account parameters like the user ID, group ID, home directory, shell, etc.
Each entry in /etc/passwd holds seven colon-separated fields.
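For illustration, a typical entry might look like this (the username and IDs are made up for the example):

```
steve:x:1000:1000:Steve Smith:/home/steve:/bin/bash
```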
Username: User’s login name.
Password: An x character indicates that encrypted password is stored in /etc/shadow file.
User ID (UID): Each user must be assigned a user ID (UID). UID 0 (zero) is reserved for root.
Group ID (GID): The primary group ID
User Info: The comment field. It allows you to add extra information about the user.
Home directory: The absolute path to the directory the user will be in when they log in.
Command/shell: The absolute path of a command or shell (/bin/bash).
Ques 5. How do you boot your system into the following modes when you are in trouble?
Ans – a) Rescue mode
b) Single user mode
c) Emergency mode
Rescue mode provides the ability to boot a small Linux environment from an external bootable device like a CD-ROM or USB drive instead of the system’s hard drive. Rescue mode is provided to help you repair the file system or fix certain issues which prevent normal operation.
In order to get into rescue mode, change the BIOS settings of the machine to boot from the external media. Once the system starts booting from the bootable disk, add the keyword rescue as a kernel parameter, or give the parameter “linux rescue” at the graphical boot prompt.
In single-user mode, the system boots to runlevel 1, but with more functionality than you get by switching to runlevel 1 from another runlevel.
The local file systems can be mounted in this mode, but the network is not activated.
Use the following steps to boot into single-user mode:
1) At the GRUB splash screen during the booting process, press any key to enter the GRUB interactive menu.
2) Select the proper version of kernel that you wish to boot and type “a” to append the line.
3) Go to the end of the line and type “single” as a separate word.
4) Press Enter to exit edit mode and type “b” to boot into single-user mode.
In emergency mode, you boot into the most minimal environment possible. The root file system is mounted read-only and almost nothing is set up. The main advantage of emergency mode over single-user mode is that the init files are not loaded. If init is corrupted, you can still mount file systems to recover data that could be lost during a re-installation. To boot into emergency mode, use the same method as described for single-user mode, with one exception: replace the keyword single with the keyword “emergency”.
Ques 6. In the ps output, a few processes have the process state “D”. What does it mean? Briefly explain the different process states.
Ans : To have a dynamic view of a process in Linux, always use the top command. This command provides a real-time view of the Linux system in terms of processes. The eighth column in the output of this command represents the current state of processes. A process state gives a broader indication of whether the process is currently running, stopped, sleeping etc.
A process in Linux can have any of the following four states…
Running – A process is said to be in a running state when either it is actually running/ executing or waiting in the scheduler’s queue to get executed (which means that it is ready to run). That is the reason that this state is sometimes also known as ‘runnable’ and represented by (R).
Waiting or Sleeping – A process is said to be in this state if it is waiting for an event to occur or waiting for some resource-specific operation to complete. So, depending upon these scenarios, a waiting state can be subcategorised into an interruptible (S) or uninterruptible (D) state respectively.
Stopped – A process is said to be in the stopped state when it receives a signal to stop. This usually happens when the process is being debugged. This state is represented by (T).
Zombie – A process is said to be in the zombie state when it has finished execution but is waiting for its parent to retrieve its exit status. This state is represented by (Z).
Apart from these four states, a process is said to be dead after it crosses over the zombie state, i.e., when the parent retrieves its exit status. ‘Dead’ is not exactly a state, since a dead process ceases to exist.
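A quick way to see the distribution of process states on a running system is this sketch using standard ps options:

```shell
# The first character of each STAT value is the state letter
# (R running, S interruptible sleep, D uninterruptible sleep,
#  T stopped, Z zombie); count how many processes are in each.
ps -eo stat= | cut -c1 | sort | uniq -c | sort -rn
```

On most systems the bulk of processes will show as S (sleeping); a persistent pile-up of D states usually points at stuck disk or network I/O.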
Ques 7. What is drop cache in Linux and how do you clear it?
Ans – Cache in Linux memory is where the kernel stores information it may need later, as memory is incredibly faster than disk.

It is great that the Linux kernel takes care of this. The Linux operating system is very efficient in managing your computer’s memory, and will automatically free RAM and drop the cache if an application needs memory.
Kernels 2.6.16 and newer provide a mechanism to have the kernel drop the page cache and/or inode and dentry caches on command, which can help free up a lot of memory. Now we can throw away that script that allocated a ton of memory just to get rid of the cache.
To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
This is a non-destructive operation in normal scenarios and will only free things that are completely unused. Dirty objects will continue to be in use until written out to disk and are not freeable. It is therefore always preferable to run “sync” first to flush dirty data out to disk.
Ques 8. Password-based authentication is disabled in your infrastructure. So how do you log in to the servers?
Ans – To improve system security even further, most organizations have turned to key-based authentication instead of password-based authentication. Key-based authentication involves a public/private key pair: the public key is added to the server’s configuration while the private key is kept confidential on the client side. We can enforce key-based authentication by disabling standard password authentication.

Listed below is the procedure to set up key-based authentication.
1) Generating Key Pairs
a) Generate an RSA key pair by typing the following at a shell prompt:
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/steve/.ssh/id_rsa):
b) Press Enter to confirm the default location (that is, ~/.ssh/id_rsa) for the newly created key.
c) Enter a passphrase, and confirm it by entering it again when prompted to do so.
d) Copy the content of ~/.ssh/id_rsa.pub into ~/.ssh/authorized_keys on the machine to which you want to connect, appending it to the end if the file already exists.
e) Change the permissions of the ~/.ssh/authorized_keys file using the following command:
$ chmod 600 ~/.ssh/authorized_keys
2) Now on your client side, open a remote connection agent like PuTTY, load your private key, and try SSH to the server; you should be able to log in without a password now.
# ssh server1.myserver.com
The authenticity of host 'server1.myserver.com (192.168.44.2)' can't be established.
RSA key fingerprint is e3:c3:89:37:4b:94:37:d7:0c:d5:6f:9a:38:62:ce:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server1.myserver.com' (RSA) to the list of known hosts.
Last login: Tue July 13 12:40:34 2014 from server2.myserver.com
3) Public key authentication can prevent brute-force SSH attacks, but only if all password-based authentication methods are disabled. Once public key authentication has been confirmed to be working, disable regular password authentication by editing /etc/ssh/sshd_config and setting the PasswordAuthentication option to “no”.
Ques 9. Explain the steps involved in the TCP three-way handshake.
Ans – The TCP three-way handshake is the process for establishing a TCP connection. We can explain it with a simple scenario where a client computer contacts a server to send it some information.
a) The client sends a packet with the SYN bit set and a sequence number of N.
b) The server sends a packet with an ACK number of N+1, the SYN bit set and a sequence number of X.
c) The client sends a packet with an ACK number of X+1 and the connection is established.
d) The client sends the data.
The first three steps in the above process are called the three-way handshake.
Ques 10. As disk space utilization was very high on the server, the administrator removed a few files, but the disk utilization still shows as high. What could be the reason?
Ans – In Linux, even if we remove a file from a mounted file system, it may still be in use by some application, and for that application it remains available, because its file descriptor in the /proc filesystem is still held open. If there are such open descriptors to already-removed files, the space occupied by them is still counted as used. You can see this difference by checking with the “df” and “du” commands: du works from files, while df works at the filesystem level, reporting what the kernel says it has available.
You can find all unlinked but held open files with:
# lsof | grep '(deleted)'
This will list the filenames which are still open, along with the PID of the process holding them. Killing those PIDs stops the processes and recovers the disk space held by those files.
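The effect is easy to reproduce. This sketch (file path is arbitrary; it reads /proc directly, so it works even where lsof is not installed) creates a file, keeps it open with tail, deletes it, and shows the descriptor still marked deleted:

```shell
tmpfile=/tmp/demo_held_open
dd if=/dev/zero of="$tmpfile" bs=1M count=10 status=none
tail -f "$tmpfile" &          # keep a descriptor open on the file
tailpid=$!
sleep 1                       # give tail a moment to open the file
rm "$tmpfile"                 # unlink: df still counts the space
ls -l /proc/$tailpid/fd | grep deleted   # descriptor shows "(deleted)"
kill "$tailpid"               # space is reclaimed once the holder exits
```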
Ques 11. What is rDNS? Explain its benefits in the Domain Name System.
Ans – A typical DNS lookup determines which IP address is associated with a hostname; this is called a forward DNS lookup. A reverse DNS lookup does the opposite: it determines which hostname is associated with an IP address. Sometimes reverse DNS lookups are required for diagnostic purposes. Today, reverse DNS lookups are used mainly for security purposes, to trace a hacker or spammer. Many modern mailing systems use reverse mapping to provide simple authentication via dual lookup: hostname-to-address and address-to-hostname. rDNS (reverse DNS) is implemented using a specialized record type for reverse lookups called the PTR record. PTR records always resolve to names, never IP addresses.
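For illustration, the reverse mapping for the IP 192.168.44.2 seen in the SSH example above would live in the reverse zone 44.168.192.in-addr.arpa as a PTR record (the hostname is the same hypothetical one used earlier):

```
; PTR record: address-to-hostname mapping in the reverse zone
2.44.168.192.in-addr.arpa.   IN   PTR   server1.myserver.com.
```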
Ques 12. What is sosreport, and how do you generate it while working with your Red Hat support team in production?
Ans : sosreport is a command-line utility in Red Hat-based Linux distros (RHEL/CentOS) which collects system configuration and diagnostic information about your Linux box, like the running kernel version, loaded modules, and system and service configuration files. The command also runs external programs to collect further information and stores their output in the resulting archive. A sosreport is required when you open a case with Red Hat for technical support; Red Hat support engineers will require the sosreport of your server for troubleshooting purposes. To run sosreport, the sos package should be installed; it is part of the default installation on most systems. If for any reason the package is not installed, use the yum command below to install it manually:
# yum install sos
Generate the report
Open a terminal and type the sosreport command:
This command will normally complete within a few minutes. Depending on local configuration and the options specified, in some cases the command may take longer to finish. Once completed, sosreport will generate a compressed file under the /tmp directory. The file should be provided to your Red Hat support representative as an attachment when opening a support case.
Ques 13. What is swappiness in Linux Memory Management and how do we configure that ?
Ans – The swappiness parameter controls the tendency of the kernel to move processes out of physical memory and onto the swap disk. Because disks are much slower than RAM, this can lead to slower response times for system and applications if processes are too aggressively moved out of memory.
swappiness can have a value between 0 and 100:
swappiness=0 tells the kernel to avoid swapping processes out of physical memory for as long as possible.
swappiness=100 tells the kernel to aggressively swap processes out of physical memory and move them to the swap cache.
The default setting in Redhat/Ubuntu based Linux distros is swappiness=60. Reducing the default value of swappiness will probably improve overall performance for a typical Ubuntu desktop installation.
~$ cat /proc/sys/vm/swappiness
60
If we have enough RAM, we can turn that down to 10 or 15. The swap file will then only be used when the RAM usage is around 80 or 90 percent.
To change the system swappiness value, open /etc/sysctl.conf as root. Then, change or add this line to the file:
vm.swappiness = 10
Reboot for the change to take effect. Alternatively, you can change the value while the system is still running. We can also clear the swap by running "swapoff -a" and then "swapon -a" as root instead of rebooting to achieve the same effect.
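A minimal sketch of inspecting and tuning the value (the value 10 is just an example; the change commands require root, so they are shown commented out):

```shell
# Read the current swappiness value (readable by any user):
cat /proc/sys/vm/swappiness

# Change it at runtime (requires root):
# sysctl vm.swappiness=10

# Or write the value directly:
# echo 10 > /proc/sys/vm/swappiness
```

Remember that a runtime change is lost at reboot unless it is also added to /etc/sysctl.conf.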
Ques 14. What is git ?
Ans : Git is a very popular and efficient open source version control system. It tracks content such as files and directories. It stores file content in BLOBs (binary large objects). Folders are represented as trees; each tree contains other trees (subfolders) and BLOBs, along with a simple text file that records the mode, type, name and SHA-1 hash of each blob and subtree entry. During repository transfers, even if there are several files with the same content and different names, Git will transfer the BLOB once and then expand it into the different files.
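This content-addressed storage is easy to observe, assuming the git client is installed: two files with identical content hash to the same blob regardless of their names.

```shell
tmp=$(mktemp -d); cd "$tmp"
git init -q
echo "hello" > a.txt
cp a.txt b.txt              # same content, different file name
h1=$(git hash-object a.txt)
h2=$(git hash-object b.txt)
echo "$h1"
echo "$h2"                  # identical hash: one blob backs both files
```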
Ques 15. What is inode ? Briefly explain the structure ?
Ans : An inode is a data structure that keeps track of all the information about a file. When we keep our information in a file, the OS stores the information about that file in an inode. Information about files is sometimes called metadata, so we can say that an inode is the metadata of the data. In a file system, inodes consume roughly 1% of the total disk space, whether it is a whole storage unit or a partition on a storage unit. The inode space is used to "track" the files stored on the hard disk. Inode entries store metadata about each file, directory or object, but only point to these structures rather than storing the data. Each entry is 128 bytes in size. The metadata stored for each structure can include the following:
Access Control List (ACL)
Direct/indirect disk blocks
Number of blocks
File access, change and modification time
File deletion time
File generation number
Number of links
The inode structure of a directory consists of a name-to-inode mapping of the files and directories it contains. In a directory, you can find the inode number corresponding to each file using the command "ls -i".
# ls -li
786727 -rw-------  1 root root 4226530 May 29 13:17 sudo.log
786437 -rw-------. 1 root root   32640 Jun 23 20:11 tallylog
786440 -rw-rw-r--. 1 root utmp  276096 Jul 20 06:45 wtmp
786741 -rw-------  1 root root    9653 Jul 17 09:38 yum.log
Similarly, the number of inodes allocated, used and free in a filesystem can be listed using the "df -i" command.
# df -i /root
Filesystem                  Inodes IUsed  IFree IUse% Mounted on
/dev/mapper/RootVol-lvmroot 524288 80200 444088   16%
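The inode number can also be used in the other direction, to locate a file with find -inum. A small self-contained sketch (the sample file is created in a throwaway directory):

```shell
tmp=$(mktemp -d)
touch "$tmp/sample.txt"
inum=$(stat -c %i "$tmp/sample.txt")   # inode number of the file
find "$tmp" -xdev -inum "$inum"        # locate the file again by inode
```

-xdev keeps find on one filesystem, which matters because inode numbers are only unique within a single filesystem.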
Ques 1. What is the difference between umask and ulimit ?
umask stands for 'user file-creation mask'; it determines the mask that controls which permission bits are set on files and directories when they are created. ulimit, on the other hand, is a Linux built-in command which provides control over the resources available to the shell and to processes started by it.
You can limit a user to a specific range by editing /etc/security/limits.conf; system-wide settings can be updated in /etc/sysctl.conf.
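A minimal demonstration of both commands (file names are arbitrary):

```shell
tmp=$(mktemp -d); cd "$tmp"
umask 022                 # mask write bits for group and other
touch newfile             # 666 & ~022 = 644
mkdir newdir              # 777 & ~022 = 755
stat -c '%a %n' newfile newdir
ulimit -n                 # current open-file limit for this shell
```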
Ques 2. What are the run levels in linux and how to change them ?
A run level is a state of init and the whole system that defines which system services are operating, identified by a number. There are 7 different run levels (run level 0-6) in a Linux system, each for a different purpose. The descriptions are given below.
0: Halt System (To shutdown the system)
1: Single user mode
2: Basic multi user mode without NFS
3: Full multi user mode (text based)
4: Unused (user-definable)
5: Multi user mode with Graphical User Interface
6: Reboot System
To change the default run level, edit the file "/etc/inittab" and change the initdefault entry (id:5:initdefault:). If we want to change the run level on the fly, it can be done using the 'init' command.
For example, typing 'init 3' on the command line moves the system from the current run level to run level 3. The current level can be listed by typing the command 'who -r'.
Ques 3. What is the functionality of a Puppet Server ?
Puppet is an open-source, enterprise-grade configuration management tool for UNIX-like operating systems. It is IT automation software used to push configuration to its clients (puppet agents) using code. Puppet code can perform a variety of tasks, from installing new software and checking file permissions to updating user accounts, among many others.
Ques 4. What is SeLinux?
SELinux is an acronym for Security-Enhanced Linux. It is an access control implementation and security feature for the Linux kernel. It is designed to protect the server against misconfigurations and/or compromised daemons. It puts limits on server daemons or programs and instructs them as to which files they can access and what actions they can take, by defining a security policy.
Ques 5. What is crontab and explain the fields in a crontab ?
cron is a daemon that executes commands at specific dates and times in Linux. You can use it to schedule activities, either as one-time events or as recurring tasks. Crontab is the program used to install, deinstall or list the tables used to drive the cron daemon on a server. Each user can have their own crontab, and though these are files in /var/spool/cron/crontabs, they are not intended to be edited directly. Here are a few of the command-line options for crontab:
crontab -e   Edit your crontab file.
crontab -l   Show your crontab file.
crontab -r   Remove your crontab file.
Traditional cron format consists of five fields separated by white space: minute, hour, day of the month, month, and day of the week. Some cron implementations support an extended six-field format that adds a year field; that extended format is explained in detail below.

* * * * * *
| | | | | |
| | | | | +-- Year (range: 1900-3000)
| | | | +---- Day of the Week (range: 1-7, 1 standing for Monday)
| | | +------ Month of the Year (range: 1-12)
| | +-------- Day of the Month (range: 1-31)
| +---------- Hour (range: 0-23)
+------------ Minute (range: 0-59)
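For reference, a few sample entries in the common five-field crontab format (the script paths are hypothetical):

```
# m    h    dom  mon  dow   command
*/15   *    *    *    *     /usr/local/bin/poll.sh     # every 15 minutes
30     2    *    *    1     /usr/local/bin/backup.sh   # 02:30 every Monday
0      0    1    */6  *     /usr/local/bin/report.sh   # midnight on the 1st of Jan and Jul
```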
Ques 6. What are inodes in Linux ? How to find the inode associated with a file ?
An inode is a data structure on a filesystem on Linux and other Unix-like operating systems that stores all the information about a file except its name and its actual data. When a file is created, it is assigned both a name and an inode number, which is an integer that is unique within the filesystem. Both the file names and their corresponding inode numbers are stored as entries in the directory that appears to the user to contain the files. The concept of inodes is particularly important to the recovery of damaged filesystems: when a file's directory entry is lost but its inode survives, fsck places the recovered file in the lost+found directory within the partition in which it once existed.
The following will show the name of each object in the current directory together with its inode number:
# ls -i
The available number of inodes in a filesystem can be found using the command below:
# df -i
Another way to get the inode details of a file is by using the stat command.
Usage : # stat <filename>
-sh-4.1$ stat note.txt
  File: `note.txt'
  Size: 4           Blocks: 8          IO Block: 4096   regular file
Device: fd05h/64773d    Inode: 8655235    Links: 1
Access: (0644/-rw-r--r--)  Uid: (69548/nixuser)   Gid: (25000/ UNKNOWN)
Access: 2014-06-29 15:27:56.299214865 +0000
Modify: 2014-06-29 15:28:28.027093254 +0000
Change: 2014-06-29 15:28:28.027093254 +0000
Apart from the basic questions above, be prepared with answers for the questions below:
1. How to set linux file/directory permissions ?
2. How to set ownership for files/directories ?
3. How to create user/group and how to modify it ?
4. How to find kernel / OS version and its supported bit (32/64) version ?
5. How to set / find interface ip address ?
6. How to find linux mount points and disk usage ?
7. What command to find memory and swap usage ?
8. Have a look at the ps, top, grep, find, awk and dmesg commands.
Ques 1 : – How to increase disk read performance from single command in Linux ?
Ans : – In Linux-like operating systems, the read performance of a disk can be improved by increasing a parameter called "read-ahead" using the 'blockdev' command. By default the Linux OS reads 128 KB of data in advance so that it is already in the memory cache before a program needs it. This value can be increased to get better read performance.
# blockdev --setra 16384 /dev/sda
Ques 2 : – What is the use of the tmpfs file system ?
Ans : – Tmpfs is a file system which keeps all files in virtual memory. Everything in tmpfs is temporary in the sense that no files will be created on your hard drive. If you unmount a tmpfs instance, everything stored therein is lost.
tmpfs puts everything into the kernel internal caches and grows and shrinks to accommodate the files it contains and is able to swap unneeded pages out to swap space. It has maximum size limits which can be adjusted on the fly via ‘mount -o remount …’
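A tmpfs mount can also be declared in /etc/fstab; the fragment below is a sketch with a hypothetical mount point and size:

```
# /etc/fstab entry for a 64 MB tmpfs at an example mount point:
tmpfs   /mnt/scratch   tmpfs   size=64m,mode=1777   0 0
```

The size can later be changed without unmounting, e.g. 'mount -o remount,size=128m /mnt/scratch'.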
Ques 3 : – What is anacron and its usage ?
Ans : – Anacron is a service that runs after every system reboot, checking for any cron and at scheduled jobs that were to run while the system was down and hence, have not yet run. It scans the /etc/cron.hourly/0anacron file for three factors to determine whether to run these missed jobs. The three factors are the presence of the /var/spool/anacron/cron.daily file, the elapsed time of 24 hours since anacron last ran, and the presence of the AC power to the system. If all of the three factors are affirmative, anacron goes ahead and automatically executes the scripts located in the /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly directories, based on the settings and conditions defined in anacron’s main configuration file /etc/anacrontab
Ques 4 : – What is difference between Soft Link & Hard Link ?
Ans : – A soft link (symbolic link or symlink) makes it possible to associate one file with another. It is similar to a shortcut in MS Windows, where the actual file resides somewhere in the directory structure but you may have multiple shortcuts or pointers with different names pointing to it. Each soft link has a unique inode number. A soft link can cross file system boundaries and can be used to link directories.
A hard link associates two or more files with a single inode number. This allows the files to have identical permissions, ownership, time stamp, and file contents. Changes made to any of the files are reflected on the other linked files. All files actually contain identical data. A hard link cannot cross file system boundaries and cannot be used to link directories.
Ques 5 : – What is the difference between hardware RAID and Software RAID?
Ans : – The hardware-based RAID is independent from the host. A Hardware RAID device connects to the SCSI controller and presents the RAID arrays as a single SCSI drive. An external RAID system moves all RAID handling “intelligence” into a controller located in the external disk subsystem. The whole subsystem is connected to the host via a normal SCSI controller and appears to the host as a single disk.
Software RAID is implemented under OS Kernel level. The Linux kernel contains an MD driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server CPU performance and load.
Ques 6 : – Explain the command "rpm -qf <file>"?
Ans : – It queries the RPM database to find which package owns the specified file.
Ques 7. What is initrd image and what is its function in the linux booting process ?
Ans : The initial RAM disk (initrd) is an initial root file system that is mounted before the real root file system is available. The initrd is bound to the kernel and loaded as part of the kernel boot procedure. The kernel then mounts this initrd as part of the two-stage boot process to load the modules that make the real file systems available and to get at the real root file system. Thus the initrd image plays a vital role in the Linux booting process.
Ques 8. Explain the terms suid, sgid and sticky bit ?
Ans : In addition to the basic file permissions in Linux, there are a few special permissions available for executable files and directories.
SUID : If setuid bit is set, when the file is executed by a user, the process will have the same rights as the owner of the file being executed.
SGID : Same as above, but the process inherits the group privileges of the file on execution, not the user privileges. Similarly, when you create a file within an SGID directory, it will inherit the group ownership of the directory.
Sticky bit : The sticky bit was historically used on executables in Linux so that they would remain in memory longer after the initial execution, in the hope they would soon be needed again. Nowadays it is mainly used on directories, to indicate that a file or folder created inside a sticky-bit enabled directory can be deleted only by its owner. A very good example of the sticky bit is /tmp, where every user has write permission but only users who own a file can delete it.
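A quick sketch of setting each special bit with chmod (the file and directory names are arbitrary):

```shell
tmp=$(mktemp -d); cd "$tmp"
touch app;  chmod 4755 app    # setuid: shown as rwsr-xr-x
touch rep;  chmod 2755 rep    # setgid: shown as rwxr-sr-x
mkdir drop; chmod 1777 drop   # sticky bit: shown as rwxrwxrwt, like /tmp
ls -ld app rep drop
```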
Ques 9. List out few of the differences between Softlink and Hardlink ?
Ans : a) Hardlink cannot be created for directories. Hard link can only be created for a file.
b) Symbolic links or symlinks can link to a directory.
c) Removing the original file that your hard link points to does not remove the hardlink itself; the hardlink still provides the content of the underlying file.
d) If you remove the hard link or the symlink itself, the original file will stay intact.
e) Removing the original file does not remove the attached symbolic link or symlink, but without the original file, the symlink is useless
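The differences above can be verified in a quick shell session (file names are arbitrary):

```shell
tmp=$(mktemp -d); cd "$tmp"
echo "data" > original
ln original hard           # hard link: shares the inode and content
ln -s original soft        # symlink: its own inode, points by name
ls -li original hard soft
rm original
cat hard                   # still prints "data": content survives via the hard link
cat soft 2>/dev/null || echo "soft is now a dangling symlink"
```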
Ques 10. How do you send a mail attachment via the bash console ?
"mutt" is an open source tool for sending emails with attachments from the Linux bash command line. We can install "mutt" from the binary rpm or via a package manager.
For Ubuntu / Debian based distros:
# apt-get install mutt
For Red Hat / Fedora based distros:
# yum install mutt
# mutt -s "Subject of Mail" -a "path of attachment file" "email address of recipient" < "message text containing body of the message"
mutt -s "Backup Data" -a /home/backup.tar.gz firstname.lastname@example.org < /tmp/message.txt
Ques 1 : – What command would you use to alter the priority of a running process?
Ans : – The renice command.
Ques 2 : – When would the cron daemon execute a job that is submitted as */10 * 2-8 */6 1
Ans : – The schedule */10 * 2-8 */6 1 reads: every tenth minute of every hour, on the 2nd through 8th day of the month, in every 6th month (January and July), on Mondays. Note that in standard cron, when both the day-of-month and day-of-week fields are restricted, the job runs when either field matches, so this job would fire on the 2nd-8th of January and July as well as on every Monday of those two months.
Ques 3 : – What is the other command besides the ps command to view processes running on the system?
Ans : -The top command.
Ques 4 : – What is the command to list the PID of a specific process?
Ans : -The pidof command can be used to list the PID of a specific process.
Ques 5 : – What are background processes normally referred to as in Linux?
Ans : – The background processes are referred to as daemons
Ques 6 : – Which command is used to run a process immune to hangup signals?
Ans : – The nohup command with an ampersand sign at the end of the command line.
Ques 7 : – What is the default nice value?
Ans : – The default nice value is zero.
Ques 8 : – What are the four ls* commands to view pci, usb, cpu, and hal information?
Ans : – The lspci, lsusb, lscpu, and lshal commands.
Ques 9 : – A child process inherits the nice value of its parent process. True or False?
Ans : – True
Ques 10 : – Every process running on the system has a unique identification number called UID. True or False?
Ans : – False. It is called the PID.
Ques 11 : – Why would you use the renice command?
Ans : – The renice command can be used to change the niceness of a running process.
Ques 12 : – Which user does not have to be explicitly defined in either *.allow or *.deny file to be able to run the at and cron jobs?
Ans : – The root user.
Ques 13 : – What command would you use to list open files?
Ans : – The lsof command.
Ques 14 : – What does the run-parts command do?
Ans : – The run-parts command is used to run scripts listed in the specified directory.
Ques 15 : – When would the at command execute a job that is submitted as at 01:00 12/12/15
Ans : – The at command will run it at 1am on December 12, 2015
Ques 16 : – What would the nice command display without any options or arguments?
Ans : – The nice command displays the default nice value when executed without any options.
Ques 1: – What are different type of variables in Linux ?
Ans: – There are two types of variables :
System Defined Variables: These are the variables created and maintained by the operating system (Linux) itself. These variables are conventionally defined in CAPITAL LETTERS. We can see them using the command "set".
User Defined Variables: These variables are defined by users. A shell script allows us to set and use our own variables within the script. Setting variables allows you to temporarily store data and use it throughout the script, making the shell script more like a real computer program. An example is given below:
var4="still more testing"
The Linux shell automatically determines the data type used for the variable value.
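A minimal sketch of defining and using such a variable (the name and value are arbitrary):

```shell
# User-defined variable: no spaces around '=', referenced with '$'
var4="still more testing"
echo "$var4"
# System-defined variables are conventionally upper-case:
echo "HOME is $HOME"
```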
Ques 2: – What does chroot SFTP mean ?
Ans: – SFTP stands for SSH File Transfer Protocol or Secure File Transfer Protocol. SFTP provides file access, file transfer, and file management functionality over any reliable data stream. When we configure SFTP in a chroot environment, allowed users are limited to their home directory; that is, they are in a jail-like environment where they cannot even change out of their own directory.
Ques 3: – How to check syntax of named.conf is correct or not ?
Ans: – named-checkconf is the command, which checks the syntax of named.conf file.
# named-checkconf /etc/named.conf
If bind is running in chroot environment use below command
# named-checkconf -t /var/named/chroot /etc/named.conf
Ques 4: – What are the different types of DNS records or Resource records ?
Ans: – Below are the list of resource records or DNS records :
SOA – start of authority, for a given zone
NS – name server
A – name-to-address mapping
PTR – address-to-name mapping
CNAME – canonical name (for aliases)
MX – mail exchanger (host to receive mail for this name)
TXT – textual info
RP – contact person for this zone
WKS – well known services
HINFO – host information
Ques 5: – How to limit the data transfer rate, the number of clients, and connections per IP for local users in VSFTPD ?
Ans: – Edit the ftp server’s config file (/etc/vsftpd/vsftpd.conf) and set the below directives :
local_max_rate=1000000 # Maximum data transfer rate in bytes per second
max_clients=50 # Maximum number of clients that may be connected
max_per_ip=2 # Maximum connections per IP
Ques 6: – How to change the default directory for ftp / Anonymous user in vsftpd ?
Ans: – Edit the file '/etc/vsftpd/vsftpd.conf' and change the relevant directive:
After making the above change, either restart or reload the vsftpd service.
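As a sketch, the directives below are commonly used to relocate the FTP root; the directive names and example paths should be verified against your vsftpd version:

```
anon_root=/var/ftp/custom    # root directory for the anonymous user (example path)
local_root=/srv/ftp/local    # analogous directive for local users (example path)
```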
Ques 7: – What are the important daemons in postfix ?
Ans: – Below is a list of the important daemons in the Postfix mail server:
master :The master daemon is the brain of the Postfix mail system. It spawns all other daemons.
smtpd: The smtpd daemon (server) handles incoming connections.
smtp :The smtp client handles outgoing connections.
qmgr :The qmgr-Daemon is the heart of the Postfix mail system. It processes and controls all messages in the mail queues.
local : The local program is Postfix's own local delivery agent. It stores messages in mailboxes.
Ques 8: – What is the use of Domain Keys(DKIM) in mail servers ?
Ans: – DomainKeys is an e-mail authentication system designed to verify the DNS domain of an e-mail sender and the message integrity. The DomainKeys specification has adopted aspects of Identified Internet Mail to create an enhanced protocol called DomainKeys Identified Mail (DKIM).
Ques 9: – What is use of sshpass command in linux ?
Ans: – sshpass is a command which allows us to automatically supply a password to the command prompt so that automated scripts can run as desired. sshpass supplies the password to the ssh prompt using a dedicated tty, fooling ssh into believing that an interactive user is supplying the password.
Ques 10: – What is the use of blowfish options in scp command ?
Ans: – Using the blowfish option in the scp command, we can increase copy speed; by default scp uses the Triple-DES cipher to encrypt the data being copied, and blowfish is faster.
Example : scp -c blowfish /home/itstuff.txt root@
Ques 11: – What is Initrd ?
Ans: – Initrd stands for initial RAM disk. It contains a temporary root filesystem and the necessary modules which help in mounting the real root filesystem, initially in read-only mode.
Ques 12: – What is an Open mail relay ?
Ans: – An open mail relay is an SMTP server configured in such a way that it allows anyone on the Internet to send e-mail through it, not just mail destined to or originating from known users. This used to be the default configuration on many mail servers; indeed, it was the way the Internet was initially set up, but open mail relays have become unpopular because of their exploitation by spammers and worms.