Thursday, May 17, 2012

Printer Configuration


The Printer Configuration Tool allows users to configure a printer. This tool helps maintain the printer configuration file, print spool directories, and print filters.
Red Hat Enterprise Linux 3 uses the CUPS printing system. If a system was upgraded from a previous Red Hat Enterprise Linux version that used CUPS, the upgrade process preserved the configured queues.
Using the Printer Configuration Tool requires root privileges. To start the application, select Main Menu Button (on the Panel) => System Settings => Printing, or type the command redhat-config-printer. This command automatically determines whether to run the graphical or text-based version depending on whether the command is executed in the graphical desktop environment or from a text-based console.
To force the Printer Configuration Tool to run as a text-based application, execute the command redhat-config-printer-tui from a shell prompt.


Figure 1. Printer Configuration Tool
The following types of print queues can be configured:
  • Locally-connected — a printer attached directly to the computer through a parallel or USB port.
  • Networked CUPS (IPP) — a printer that can be accessed over a TCP/IP network via the Internet Printing Protocol, also known as IPP (for example, a printer attached to another Red Hat Enterprise Linux system running CUPS on the network).
  • Networked UNIX (LPD) — a printer attached to a different UNIX system that can be accessed over a TCP/IP network (for example, a printer attached to another Red Hat Enterprise Linux system running LPD on the network).
  • Networked Windows (SMB) — a printer attached to a different system which is sharing a printer over a SMB network (for example, a printer attached to a Microsoft Windows™ machine).
  • Networked Novell (NCP) — a printer attached to a different system which uses Novell's NetWare network technology.
  • Networked JetDirect — a printer connected directly to the network through HP JetDirect instead of to a computer.

    Clicking the Apply button saves any changes that you have made and restarts the printer daemon. The changes are not written to the configuration file until the printer daemon is restarted. Alternatively, you can choose Action => Apply.


    Adding a Local Printer

    To add a local printer, such as one attached through a parallel port or USB port on your computer, click the New button in the main Printer Configuration Tool window. The window shown in Figure 2 appears. Click Forward to proceed.

     

    Figure 2. Adding a Printer
    In the window shown in Figure 3, enter a unique name for the printer in the Name text field. The printer name cannot contain spaces and must begin with a letter. The printer name may contain letters, numbers, dashes (-), and underscores (_). Optionally, enter a short description for the printer, which can contain spaces.

    Figure 3. Selecting a Queue Name
    After clicking Forward, Figure 4 appears. Select Locally-connected from the Select a queue type menu, and select the device. The device is usually /dev/lp0 for a parallel printer or /dev/usb/lp0 for a USB printer. If no devices appear in the list, click Rescan devices to rescan the computer or click Custom device to specify it manually. Click Forward to continue.

    Figure 4. Adding a Local Printer

    Adding an IPP Printer

    An IPP printer is a printer attached to a different Linux system on the same network running CUPS or a printer configured on another operating system to use IPP. By default, the Printer Configuration Tool browses the network for any shared IPP printers. (This option can be changed by selecting Action => Sharing from the pulldown menu.) Any networked IPP printer found via CUPS browsing appears in the main window under the Browsed queues category.
    If you have a firewall configured on the print server, it must be able to send and receive connections on incoming UDP port 631. If you have a firewall configured on the client (the computer sending the print request), it must be allowed to send and accept connections on port 631.
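    With iptables, for example, the port could be opened with rules along these lines (a sketch only; adapt it to the firewall tool actually in use on your systems):
    iptables -A INPUT -p udp --dport 631 -j ACCEPT    # CUPS browsing packets
    iptables -A INPUT -p tcp --dport 631 -j ACCEPT    # IPP connections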
    If you disable the automatic browsing feature, you can still add a networked IPP printer by clicking the New button in the main Printer Configuration Tool window to display the window in Figure 2. Click Forward to proceed.
    In the window shown in Figure 3, enter a unique name for the printer in the Name text field. The printer name cannot contain spaces and must begin with a letter. The printer name may contain letters, numbers, dashes (-), and underscores (_). Optionally, enter a short description for the printer, which can contain spaces.
    After clicking Forward, Figure 5 appears. Select Networked CUPS (IPP) from the Select a queue type menu.
    Figure 5. Adding an IPP Printer
    Text fields for the following options appear:
  • Server — The hostname or IP address of the remote machine to which the printer is attached.
  • Path — The path to the print queue on the remote machine.
Click Forward to continue.
 

 

Adding a Remote UNIX (LPD) Printer

To add a remote UNIX printer, such as one attached to a different Linux system on the same network, click the New button in the main Printer Configuration Tool window. The window shown in Figure 2 will appear. Click Forward to proceed.
In the window shown in Figure 3, enter a unique name for the printer in the Name text field. The printer name cannot contain spaces and must begin with a letter. The printer name may contain letters, numbers, dashes (-), and underscores (_). Optionally, enter a short description for the printer, which can contain spaces.
Select Networked UNIX (LPD) from the Select a queue type menu, and click Forward.
Figure 6. Adding a Remote LPD Printer
Text fields for the following options appear:
  • Server — The hostname or IP address of the remote machine to which the printer is attached.
  • Queue — The remote printer queue. The default printer queue is usually lp.
Click Forward to continue.

Adding a Samba (SMB) Printer

To add a printer which is accessed using the SMB protocol (such as a printer attached to a Microsoft Windows system), click the New button in the main Printer Configuration Tool window. The window shown in Figure 2 will appear. Click Forward to proceed.
In the window shown in Figure 3, enter a unique name for the printer in the Name text field. The printer name cannot contain spaces and must begin with a letter. The printer name may contain letters, numbers, dashes (-), and underscores (_). Optionally, enter a short description for the printer, which can contain spaces.
Select Networked Windows (SMB) from the Select a queue type menu, and click Forward. If the printer is attached to a Microsoft Windows system, choose this queue type.
Figure 7. Adding an SMB Printer
As shown in Figure 7, SMB shares are automatically detected and listed. Click the arrow beside each share name to expand the list. From the expanded list, select a printer.
If the printer you are looking for does not appear in the list, click the Specify button on the right. Text fields for the following options appear:
  • Workgroup — The name of the Samba workgroup for the shared printer.
  • Server — The name of the server sharing the printer.
  • Share — The name of the shared printer on which you want to print. This name must be the same name defined as the Samba printer on the remote Windows machine.
  • User name — The name of the user you must log in as to access the printer. This user must exist on the Windows system, and the user must have permission to access the printer. The default user name is typically guest for Windows servers, or nobody for Samba servers.
  • Password — The password (if required) for the user specified in the User name field.
Click Forward to continue. The Printer Configuration Tool then attempts to connect to the shared printer. If the shared printer requires a username and password, a dialog window appears prompting you to provide a valid username and password for the shared printer. If an incorrect share name is specified, you can change it here as well. If a workgroup name is required to connect to the share, it can be specified in this dialog box. This dialog window is the same as the one shown when the Specify button is clicked.

Adding a Novell NetWare (NCP) Printer

To add a Novell NetWare (NCP) printer, click the New button in the main Printer Configuration Tool window. The window shown in Figure 2 will appear. Click Forward to proceed.
In the window shown in Figure 3, enter a unique name for the printer in the Name text field. The printer name cannot contain spaces and must begin with a letter. The printer name may contain letters, numbers, dashes (-), and underscores (_). Optionally, enter a short description for the printer, which can contain spaces.
Select Networked Novell (NCP) from the Select a queue type menu.
Figure 8. Adding an NCP Printer
Text fields for the following options appear:
  • Server — The hostname or IP address of the NCP system to which the printer is attached.
  • Queue — The remote queue for the printer on the NCP system.
  • User — The name of the user you must log in as to access the printer.
  • Password — The password for the user specified in the User field above.

    Adding a JetDirect Printer

    To add a JetDirect printer, click the New button in the main Printer Configuration Tool window. The window shown in Figure 2 will appear. Click Forward to proceed.
    In the window shown in Figure 3, enter a unique name for the printer in the Name text field. The printer name cannot contain spaces and must begin with a letter. The printer name may contain letters, numbers, dashes (-), and underscores (_). Optionally, enter a short description for the printer, which can contain spaces.
    Select Networked JetDirect from the Select a queue type menu, and click Forward.
    Figure 9. Adding a JetDirect Printer
    Text fields for the following options appear:
  • Printer — The hostname or IP address of the JetDirect printer.
  • Port — The port on the JetDirect printer that is listening for print jobs. The default port is 9100.


    Selecting the Printer Model and Finishing

    After selecting the queue type of the printer, the next step is to select the printer model.
    You will see a window similar to Figure 10. If the printer was not auto-detected, select the model from the list. The printers are divided by manufacturer. Select the name of the printer manufacturer from the pulldown menu. The printer models are updated each time a different manufacturer is selected. Select the printer model from the list.
    Figure 10. Selecting a Printer Model
    The recommended print driver is selected based on the printer model selected. The print driver processes the data that you want to print into a format the printer can understand. Since a local printer is attached directly to your computer, you need a print driver to process the data that is sent to the printer.
    If you are configuring a remote printer (IPP, LPD, SMB, or NCP), the remote print server usually has its own print driver. If you select an additional print driver on your local computer, the data is filtered multiple times and converted to a format that the printer cannot understand.
    To make sure the data is not filtered more than once, first try selecting Generic as the manufacturer and Raw Print Queue or Postscript Printer as the printer model. After applying the changes, print a test page to try out this new configuration. If the test fails, the remote print server might not have a print driver configured. Try selecting a print driver according to the manufacturer and model of the remote printer, applying the changes, and printing a test page.


    Confirming Printer Configuration

    The last step is to confirm your printer configuration. Click Apply to add the print queue if the settings are correct. Click Back to modify the printer configuration.
    Click the Apply button in the main window to save your changes and restart the printer daemon. After applying the changes, print a test page to ensure the configuration is correct.

    Printing a Test Page

    After you have configured your printer, you should print a test page to make sure the printer is functioning properly. To print a test page, select the printer that you want to try out from the printer list, then select the appropriate test page from the Test pulldown menu.
    If you change the print driver or modify the driver options, you should print a test page to test the different configuration.
    Figure 11. Test Page Options

Modifying Existing Printers

To delete an existing printer, select the printer and click the Delete button on the toolbar. The printer is removed from the printer list. Click Apply to save the changes and restart the printer daemon.
To set the default printer, select the printer from the printer list and click the Default button on the toolbar. The default printer icon appears in the Default column of the default printer in the list. An IPP browsed queue cannot be set as the default printer in the Printer Configuration Tool. To make an IPP printer the default, either add it as described in Section Adding an IPP Printer and make it the default, or use the GNOME Print Manager to set it as the default. To start the GNOME Print Manager, select Main Menu => System Tools => Print Manager. Right-click on the queue name, and select Set as Default. Setting the default printer in the GNOME Print Manager only changes the default printer for the user who configures it; it is not a system-wide setting.
After adding the printer(s), the settings can be edited by selecting the printer from the printer list and clicking the Edit button. The tabbed window shown in Figure 12 is displayed. The window contains the current values for the selected printer. Make any necessary changes, and click OK. Click Apply in the main Printer Configuration Tool window to save the changes and restart the printer daemon.
Figure 12. Editing a Printer

Queue Name

To rename a printer or change its short description, change the value in the Queue name tab. Click OK to return to the main window. The name of the printer should change in the printer list. Click Apply to save the change and restart the printer daemon.

Queue Type

The Queue type tab shows the queue type that was selected when adding the printer, along with its settings. You can change the queue type of the printer or just its settings. After making modifications, click OK to return to the main window. Click Apply to save the changes and restart the printer daemon.
Depending on which queue type is chosen, different options are displayed. Refer to the appropriate section on adding a printer for a description of the options.

Printer Driver

The Printer driver tab shows which print driver is currently being used. If it is changed, click OK to return to the main window. Click Apply to save the change and restart the printer daemon.

Driver Options

The Driver Options tab displays advanced printer options. Options vary for each print driver. Common options include:
  • Prerender Postscript should be selected if characters beyond the basic ASCII set are being sent to the printer but they are not printing correctly (such as Japanese characters). This option prerenders non-standard PostScript fonts so that they are printed correctly.
    If the printer does not support the fonts you are trying to print, try selecting this option. For example, select this option to print Japanese fonts to a non-Japanese printer.
    Extra time is required to perform this action. Do not choose it unless problems printing the correct fonts exist.
    Also select this option if the printer can not handle PostScript level 3. This option converts it to PostScript level 1.
  • GhostScript pre-filtering — allows you to select No pre-filtering, Convert to PS level 1, or Convert to PS level 2 in case the printer can not handle certain PostScript levels. This option is only available if the PostScript driver is used.
  • Page Size allows the paper size to be selected. The options include US Letter, US Legal, A3, and A4.
  • Effective Filter Locale defaults to C. If Japanese characters are being printed, select ja_JP. Otherwise, accept the default of C.
  • Media Source defaults to Printer default. Change this option to use paper from a different tray.
After modifying the driver options, click OK to return to the main window. Click Apply to save the changes and restart the printer daemon.

Saving the Configuration File

When the printer configuration is saved using the Printer Configuration Tool, the application creates its own configuration file that is used to create the files in the /etc/cups directory. You can use the command line options to save or restore the Printer Configuration Tool file. If the /etc/cups/ directory is saved and restored to the same locations, the printer configuration is not restored because each time the printer daemon is restarted, it creates a new /etc/printcap file from the Printer Configuration Tool configuration file. When creating a backup of the system's configuration files, use the following method to save the printer configuration files.
To save your printer configuration, type this command as root:
/usr/sbin/redhat-config-printer-tui --Xexport > settings.xml
Your configuration is saved to the file settings.xml.
If this file is saved, it can be used to restore the printer settings. This is useful if the printer configuration is deleted, if Red Hat Enterprise Linux is reinstalled, or if the same printer configuration is needed on multiple systems. The file should be saved on a different system before reinstalling. To restore the configuration, type this command as root:
/usr/sbin/redhat-config-printer-tui --Ximport < settings.xml
If you already have a configuration file (you have configured one or more printers on the system already) and you try to import another configuration file, the existing configuration file will be overwritten. If you want to keep your existing configuration and add the configuration in the saved file, you can merge the files with the following command (as root):
/usr/sbin/redhat-config-printer-tui --Ximport --merge < settings.xml
Your printer list will then consist of the printers you configured on the system as well as the printers you imported from the saved configuration file. If the imported configuration file has a print queue with the same name as an existing print queue on the system, the print queue from the imported file will override the existing printer.
After importing the configuration file (with or without the --merge option), you must restart the printer daemon. Issue the command:
/sbin/service cups restart


Command Line Configuration

If you do not have X installed and you do not want to use the text-based version, you can add a printer via the command line. This method is useful if you want to add a printer from a script or in the %post section of a kickstart installation.

Adding a Local Printer

To add a printer:
redhat-config-printer-tui --Xadd-local options
Options:
--device=node
(Required) The device node to use. For example, /dev/lp0.
--make=make
(Required) The IEEE 1284 MANUFACTURER string or the printer manufacturer's name as in the foomatic database if the manufacturer string is not available.
--model=model
(Required) The IEEE 1284 MODEL string or the printer model listed in the foomatic database if the model string is not available.
--name=name
(Optional) The name to be given to the new queue. If one is not given, a name based on the device node (such as "lp0") will be used.
--as-default
(Optional) Set this as the default queue.
After adding the printer, use the following command to start/restart the printer daemon:
service cups restart
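For example, the following would add a queue for a printer on the first parallel port (the queue name and the HP LaserJet make/model strings are purely illustrative; use the IEEE 1284 or foomatic strings that match your printer):
redhat-config-printer-tui --Xadd-local --device=/dev/lp0 --make=HP --model="LaserJet 4050" --name=lp0 --as-default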

Removing a Local Printer

A printer queue can also be removed via the command line.
As root, to remove a printer queue:
redhat-config-printer-tui --Xremove-local options
Options:
--device=node
(Required) The device node used such as /dev/lp0.
--make=make
(Required) The IEEE 1284 MANUFACTURER string, or (if none is available) the printer manufacturer's name as in the foomatic database.
--model=model
(Required) The IEEE 1284 MODEL string, or (if none is available) the printer model as listed in the foomatic database.
After removing the printer from the Printer Configuration Tool configuration, restart the printer daemon for the changes to take effect:
service cups restart
If all printers have been removed, and you do not want to run the printer daemon anymore, execute the following command:
service cups stop
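For example, to remove the illustrative queue added above (the required make and model strings must be given again):
redhat-config-printer-tui --Xremove-local --device=/dev/lp0 --make=HP --model="LaserJet 4050"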


Setting the Default Printer

To set the default printer, use the following command, and specify the queuename:
redhat-config-printer-tui --Xdefault --queue=queuename

Managing Print Jobs

When you send a print job to the printer daemon, such as printing a text file from Emacs or printing an image from The GIMP, the print job is added to the print spool queue. The print spool queue is a list of print jobs that have been sent to the printer and information about each print request, such as the status of the request, the username of the person who sent the request, the hostname of the system that sent the request, the job number, and more.
If you are running a graphical desktop environment, click the Printer Manager icon on the panel to start the GNOME Print Manager as shown in Figure 13.
Figure 13. GNOME Print Manager
It can also be started by selecting Main Menu Button (on the Panel) => System Tools => Print Manager.
To change the printer settings, right-click on the icon for the printer and select Properties. The Printer Configuration Tool is then started.
Double-click on a configured printer to view the print spool queue as shown in Figure 14.
Figure 14. List of Print Jobs
To cancel a specific print job listed in the GNOME Print Manager, select it from the list and select Edit => Cancel Documents from the pulldown menu.
If there are active print jobs in the print spool, a printer notification icon might appear in the Panel Notification Area of the desktop panel as shown in Figure 15. Because it probes for active print jobs every five seconds, the icon might not be displayed for short print jobs.
Figure 15. Printer Notification Icon
Clicking on the printer notification icon starts the GNOME Print Manager to display a list of current print jobs.
Also located on the Panel is a Print Manager icon. To print a file from Nautilus, browse to the location of the file and drag and drop it on to the Print Manager icon on the Panel. The window shown in Figure 16 is displayed. Click OK to start printing the file.
Figure 16. Print Verification Window
To view the list of print jobs in the print spool from a shell prompt, type the command lpq. The last few lines will look similar to the following:
Rank   Owner/ID            Class  Job Files       Size Time
active user@localhost+902    A    902 sample.txt  2050 01:20:46
Example 1. Example of lpq output
If you want to cancel a print job, find the job number of the request with the command lpq and then use the command lprm followed by the job number. For example, lprm 902 would cancel the print job in Example 1. You must have proper permissions to cancel a print job. You cannot cancel print jobs that were started by other users unless you are logged in as root on the machine to which the printer is attached.
You can also print a file directly from a shell prompt. For example, the command lpr sample.txt will print the text file sample.txt. The print filter determines what type of file it is and converts it into a format the printer can understand.
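lpr, lpq, and lprm also accept a -P option to address a specific queue instead of the default one (the queue name lp0 here is only an example):
lpr -P lp0 sample.txt    # print to the queue named lp0
lpq -P lp0               # list the jobs on that queue
lprm -P lp0 902          # cancel job 902 on that queue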


Sharing a Printer

The printer sharing options of the Printer Configuration Tool can only be used if you are using the CUPS printing system.
Allowing users on a different computer on the network to print to a printer configured for your system is called sharing the printer. By default, printers configured with the Printer Configuration Tool are not shared.
To share a configured printer, start the Printer Configuration Tool and select a printer from the list. Then select Action => Sharing from the pulldown menu.


On the Queue tab, select the option to make the queue available to other users.
Figure 17. Queue Options
After selecting to share the queue, by default, all hosts are allowed to print to the shared printer. Allowing all systems on the network to print to the queue can be dangerous, especially if the system is directly connected to the Internet. It is recommended that this option be changed by selecting the All hosts entry and clicking the Edit button to display the window shown in Figure 18.
If you have a firewall configured on the print server, it must be able to send and receive connections on the incoming UDP port, 631. If you have a firewall configured on the client (the computer sending the print request), it must be allowed to send and accept connections on port 631.
Figure 18. Allowed Hosts
The General tab configures settings for all printers, including those not viewable with the Printer Configuration Tool. There are two options:
  • Automatically find remote shared queues — Selected by default, this option enables IPP browsing, which means that when other machines on the network broadcast the queues that they have, the queues are automatically added to the list of printers available to the system; no additional configuration is required for a printer found from IPP browsing. This option does not automatically share the printers configured on the local system.
  • Enable LPD protocol — This option allows the printer to receive print jobs from clients configured to use the LPD protocol using the cups-lpd service, which is an xinetd service.

Figure 19. System-wide Sharing Options


Additional Resources

To learn more about printing on Red Hat Enterprise Linux, refer to the following resources.

 Installed Documentation

  • man lpr — The manual page for the lpr command that allows you to print files from the command line.
  • man lprm — The manual page for the command line utility to remove print jobs from the print queue.
  • man mpage — The manual page for the command line utility to print multiple pages on one sheet of paper.
  • man cupsd — The manual page for the CUPS printer daemon.
  • man cupsd.conf — The manual page for the CUPS printer daemon configuration file.
  • man classes.conf — The manual page for the class configuration file for CUPS.

Wednesday, May 16, 2012

Installation and Configuration of the RHEL Cluster Suite for an Open-Xchange Cluster


This document gives a rough description of how to set up and configure Red Hat Cluster software, including LVS as a load balancer, for Open-Xchange. It can be used as a starting point for designing your own clusters and is not meant as a step-by-step howto.

LVS

Abstract

LVS runs in an active-passive configuration on two nodes. A “reduced” three-tier topology is used: the “real servers” are provided as virtual services by another two-node RHEL cluster. The OX service benefits from load balancing because it runs in active-active mode as two virtual services on the RHEL cluster, and it is ensured that each virtual OX instance runs on one of the physical nodes of the cluster. The MySQL and mail services are each configured to run on a single node and will fail over in case of a node outage. LVS uses NAT as the routing method and the round-robin algorithm for scheduling. See the sketch for IPs and the general configuration.
  • LVS is configured on two nodes: node1111/192.168.109.105 and node2222/192.168.109.106
  • A single “external” IP is configured for all services: 192.168.109.117
  • The routing IP (used as gateway from the “internal” part of the setup) is configured as 192.168.109.116.
  • The LVS heartbeat network is using the private network 10.10.10.0/24.
  • Network devices are bundled together as bond devices to gain higher redundancy.
Firewall marks are used to get session persistence. 

Illustration1.png
Illustration 1: Setup of LVS and HA cluster

Installation

A RHEL 5.2 operating system has been installed on both systems. On top of the base installation, the LVS parts of the RHEL cluster suite have to be installed. Because the piranha web interface is used to ease the configuration tasks, the package piranha-gui has to be installed on the system 192.168.109.105 as well.
  • All packet filters are disabled to avoid impacts on the cluster: 


system-config-securitylevel disable
service iptables stop
chkconfig --del iptables
Check/edit the file /etc/sysconfig/system-config-securitylevel; the firewall must be set to disabled.
  • SELinux is also disabled or set to permissive mode:
system-config-securitylevel disable
Check/edit the file /etc/selinux/config; SELINUX must be set either to disabled or to permissive. If SELinux was set to disabled and should be enabled again: set it to permissive, boot, run touch /.autorelabel, reboot, and set it to enabled.
  • IP forwarding is enabled in /etc/sysctl.conf by setting net.ipv4.ip_forward=1
  • Network zeroconf is disabled by adding the line NOZEROCONF=yes to /etc/sysconfig/network
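The forwarding setting can be applied and checked without a reboot using sysctl:
sysctl -p                      # reload /etc/sysctl.conf
sysctl net.ipv4.ip_forward     # should report 1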

Configuration

Services
  • On both systems the sshd and pulse services are enabled:
chkconfig --level 35 sshd on
chkconfig --level 35 pulse on
  • On the piranha system (192.168.109.105/node1111) the piranha web service is enabled:
chkconfig --level 35 piranha-gui on
  • Also the apache web server is enabled and started:
chkconfig --add httpd
service httpd start
  • Access to the apache server is allowed to anybody via /etc/sysconfig/ha/web/secure/.htaccess. This can be changed and adapted to the needs/policies in place.

Network

A single network 192.168.109.0/24 is used for all real and virtual IPs, and the private network 10.10.10.0/24 for the LVS internal communication. The network devices bond0 and bond1, each bonding two physical devices, need to be configured for these networks during the base installation.
On node1111 the bond devices are configured as:

DEVICE=bond0 
BOOTPROTO=none 
BROADCAST=192.168.109.255 
IPADDR=192.168.109.105 
NETMASK=255.255.255.0 
NETWORK=192.168.109.0 
ONBOOT=yes 
TYPE=Ethernet
...
DEVICE=bond1 
BOOTPROTO=none 
BROADCAST=10.10.10.255 
IPADDR=10.10.10.3
NETMASK=255.255.255.0 
NETWORK=10.10.10.0 
ONBOOT=yes 
TYPE=Ethernet 
And on node2222 the bond devices are configured as:

DEVICE=bond0 
BOOTPROTO=none 
BROADCAST=192.168.109.255 
IPADDR=192.168.109.106 
NETMASK=255.255.255.0 
NETWORK=192.168.109.0 
ONBOOT=yes 
TYPE=Ethernet
...
DEVICE=bond1 
BOOTPROTO=none 
BROADCAST=10.10.10.255 
IPADDR=10.10.10.4 
NETMASK=255.255.255.0 
NETWORK=10.10.10.0 
ONBOOT=yes 
TYPE=Ethernet 

Firewall Marks

Firewall marks, together with a timing parameter, are used by LVS to relate packets from the same source to the same destination but on different ports. Marked packets are assumed to belong to the same session if they bear the same mark and arrive within a configurable time frame. The following marking rules are configured:
Mark sessions to port 80/http, 443/https and 44335/oxtender with “80”:
iptables -t mangle -A PREROUTING -p tcp -d 192.168.109.117/32 --dport 80 -j MARK --set-mark 80
iptables -t mangle -A PREROUTING -p tcp -d 192.168.109.117/32 --dport 443 -j MARK --set-mark 80
iptables -t mangle -A PREROUTING -p tcp -d 192.168.109.117/32 --dport 44335 -j MARK --set-mark 80
Mark sessions to port 143/imap and port 993/imaps with “143”:
iptables -t mangle -A PREROUTING -p tcp -d 192.168.109.117/32 --dport 143 -j MARK --set-mark 143
iptables -t mangle -A PREROUTING -p tcp -d 192.168.109.117/32 --dport 993 -j MARK --set-mark 143
The rules are saved and then enabled via:
service iptables save
chkconfig --add iptables
After the configuration, the file /etc/sysconfig/iptables should contain a paragraph with the following rules:
...
-A PREROUTING -d 192.168.109.117 -p tcp -m tcp --dport 80 -j MARK --set-mark 0x50 
-A PREROUTING -d 192.168.109.117 -p tcp -m tcp --dport 443 -j MARK --set-mark 0x50 
-A PREROUTING -d 192.168.109.117 -p tcp -m tcp --dport 143 -j MARK --set-mark 0x8f 
-A PREROUTING -d 192.168.109.117 -p tcp -m tcp --dport 993 -j MARK --set-mark 0x8f 
-A PREROUTING -d 192.168.109.117 -p udp -m udp --dport 44335 -j MARK --set-mark 0x50 
...
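The marking rules that are actually active in the running kernel can be listed at any time with:
iptables -t mangle -L PREROUTING -n -v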

Configure LVS

To ease the configuration of LVS the RHEL piranha-gui is used.
  • A password for the piranha-gui needs to be set:
piranha-passwd
  • The piranha-gui is started:
service piranha-gui start
The GUI can be accessed at the URL http://192.168.109.105:3636/. Consider limiting access to the piranha GUI.
The following screenshots show configuration examples:
Illustration2.png
Illustration 2: Setting of primary IP and address routing/NAT
Illustration3.png
Illustration 3: Setting of secondary IP and heartbeat configuration
Illustration4.png
Illustration 4: Defining a virtual server
Illustration5.png
Illustration 5: Specify details: IP, port, virtual network device, firewall mark/persistence and scheduling algorithm for a virtual server (example: imap/143)
Illustration6.png
Illustration 6: Monitoring: specify an answer to a telnet session (example: smtp)

After the configuration is finished, it must be synchronized to the second server. To accomplish this, the following files are copied via scp to node2222:
/etc/sysconfig/ha/lvs.cf
/etc/sysctl.conf
/etc/sysconfig/iptables
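A possible way to copy the files from node1111 (hostname and paths as used in this setup):
scp /etc/sysconfig/ha/lvs.cf node2222:/etc/sysconfig/ha/lvs.cf
scp /etc/sysctl.conf node2222:/etc/sysctl.conf
scp /etc/sysconfig/iptables node2222:/etc/sysconfig/iptables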
Firewall/packet filtering, IP forwarding, and the pulse service are then enabled by rebooting the system.
The resulting LVS configuration (/etc/sysconfig/ha/lvs.cf) on the systems, as an example:
serial_no = 213
primary = 192.168.109.105
primary_private = 10.10.10.3
service = lvs
backup_active = 1
backup = 192.168.109.106
backup_private = 10.10.10.4
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 192.168.109.116 bond0:1
nat_nmask = 255.255.255.0
debug_level = NONE
monitor_links = 0
virtual ox {
     active = 1
     address = 192.168.109.117 bond0:2
     vip_nmask = 255.255.255.0
     fwmark = 80
     port = 80
     persistent = 120
     expect = "OK"
     use_regex = 0
     send_program = "/bin/ox-mon.sh %h"
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server ox1 {
         address = 192.168.109.114
         active = 1
         weight = 1
     }
     server ox2 {
         address = 192.168.109.115
         active = 1
         weight = 1
     }
}
virtual imap {
     active = 1
     address = 192.168.109.117 bond0:3
     vip_nmask = 255.255.255.0
     fwmark = 143
     port = 143
     persistent = 120
     send = ". login xxx pw"
     expect = "OK"
     use_regex = 1
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server imap {
         address = 192.168.109.113
         active = 1
         weight = 1
     }
}
virtual smtp {
     active = 1
     address = 192.168.109.117 bond0:4
     vip_nmask = 255.255.255.0
     port = 25
     persistent = 120
     send = ""
     expect = "220"
     use_regex = 1
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server smtp {
         address = 192.168.109.113
         active = 1
         weight = 1
     }
}
virtual oxs {
     active = 0
     address = 192.168.109.117 bond0:5
     vip_nmask = 255.255.255.0
     fwmark = 80
     port = 443
     persistent = 120
     expect = "OK"
     use_regex = 0
     send_program = "/bin/ox-mon.sh %h"
     load_monitor = none
     scheduler = rr
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server ox1 {
         address = 192.168.109.114
         active = 1
         weight = 1
     }
     server ox2 {
         address = 192.168.109.115
         active = 1
         weight = 1
     }
}
virtual oxtender {
     active = 1
     address = 192.168.109.117 bond0:5
     vip_nmask = 255.255.255.0
     port = 44335
     persistent = 120
     expect = "OK"
     use_regex = 0
     send_program = "/bin/ox-mon.sh %h"
     load_monitor = none
     scheduler = rr
     protocol = udp
     timeout = 6
     reentry = 15
     quiesce_server = 0
     server ox1 {
         address = 192.168.109.114
         active = 1
         weight = 1
     }
     server ox2 {
         address = 192.168.109.115
         active = 1
         weight = 1
     }
}
  • Monitoring script for the Open-Xchange server, ox-mon.sh:
#!/bin/sh
# Health check called by LVS (send_program "/bin/ox-mon.sh %h"):
# $1 is the address of the real server to check.
/usr/bin/curl $1/servlet/TestServlet 2>&1 | grep TestServlet > /dev/null 2>&1
if test $? -gt 0
then
 echo "FALSE"
else
 echo "OK"
fi
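LVS invokes the script with the real server's address substituted for %h; the same check can be run manually against one of the OX servers from this example:
/bin/ox-mon.sh 192.168.109.114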

Remarks

In the current configuration IMAP and IMAPS are bound together as one “service” with iptables firewall marks. The check for this service is a simple telnet check against IMAP. The user name and password from the example must be set to the real ones (xxx is the user and pw the password: send = ". login xxx pw").
In the current configuration HTTP, HTTPS and OXtender are bound together as one “service” with iptables firewall marks. The check for this service is done via the script /bin/ox-mon.sh.
In the current configuration SMTP is monitored with a simple telnet check.

HA Cluster

Abstract

The RHEL HA Cluster runs on the two nodes node8888/192.168.109.102 and node9999/192.168.109.103. A “reduced” three-tier topology is used: in front of the cluster, an LVS load balancer routes the traffic to the services on the cluster. See the sketch for the setup, IPs, etc. Dell DRAC management hardware is used for fencing.
Illustration7.png Illustration 7: Ideal Cluster configuration: network is heavily reduced in this documentation

  • RHEL HA Cluster is running on node8888/192.168.109.102 and node9999/192.168.109.103
  • Virtual services are defined for Ox (active-active), IMAP+SMTP, and MySQL. The services are built from globally defined cluster resources (e.g. IPs, filesystems, etc.).
  • Shared Storage is configured with LVM and three logical volumes are defined as resources for Ox (GFS2), Mysql (EXT3) and IMAP/SMTP (EXT3)
  • Virtual IPs (seen as real-server IPs from LVS) are defined for OX1/OX2 (192.168.109.114 and 192.168.109.115), MySQL (192.168.109.112 and 192.168.109.118), and IMAP/SMTP (192.168.109.113)
  • Dell DRAC devices are configured as host fence devices (192.168.105.234/192.168.105.235)
  • The Conga web configuration interface is used to configure the cluster

Installation

A RHEL 5.3 operating system has to be installed on both systems, with the RHEL cluster suite on top of it. Because the Conga web interface is used to ease the configuration tasks, the packages for Conga's client-server components, ricci and luci, need to be available on both systems.
  • All packet filters are disabled to avoid impacts on the cluster:
system-config-securitylevel disable
service iptables stop
chkconfig --del iptables
Check/edit the file /etc/sysconfig/system-config-securitylevel; the firewall must be set to disabled.
  • SELinux is also disabled or set to permissive mode:
system-config-securitylevel disable
Check/edit the file /etc/selinux/config; SELINUX must be set either to disabled or to permissive. If SELinux was set to disabled and should be enabled again: set it to permissive, boot, run touch /.autorelabel, reboot, and set it to enabled.
  • IP forwarding is enabled in /etc/sysctl.conf by setting net.ipv4.ip_forward=1
  • Network zeroconf is disabled by adding the line NOZEROCONF=yes to /etc/sysconfig/network
  • On both systems ACPI (the acpid service) is disabled to allow immediate shutdown via the fence device:
chkconfig --del acpid
  • To configure both systems from Conga, the ricci service is started on both systems:
chkconfig --level 2345 ricci on
service ricci start
  • On the system node8888 the luci service is also enabled:
chkconfig --level 2345 luci on

Configuration

Network

In this example, a single network 192.168.109.0/24 is used for all real and virtual IPs, and the private network 10.10.10.0/24 for the LVS internal communication. The network devices bond0/1, each bonding two physical devices, have to be configured on both networks during the base installation.
On node8888 the network devices are configured as:
DEVICE=bond0 
BOOTPROTO=none 
BROADCAST=192.168.109.255 
IPADDR=192.168.109.102 
NETMASK=255.255.255.0 
NETWORK=192.168.109.0 
ONBOOT=yes 
TYPE=Ethernet 
GATEWAY=192.168.109.116 
USERCTL=no 
IPV6INIT=no 
PEERDNS=yes 
...
DEVICE=eth1 
BOOTPROTO=none 
BROADCAST=10.10.10.255 
HWADDR=00:22:19:B0:73:2A 
IPADDR=10.10.10.1 
NETMASK=255.255.255.0 
NETWORK=10.10.10.0 
ONBOOT=yes 
TYPE=Ethernet 
USERCTL=no 
IPV6INIT=no 
PEERDNS=yes 
And on node9999:
DEVICE=bond0 
BOOTPROTO=none 
BROADCAST=192.168.109.255 
IPADDR=192.168.109.103
NETMASK=255.255.255.0 
NETWORK=192.168.109.0 
ONBOOT=yes 
TYPE=Ethernet 
GATEWAY=192.168.109.116 
USERCTL=no 
IPV6INIT=no 
PEERDNS=yes 
...
DEVICE=eth1 
BOOTPROTO=none 
BROADCAST=10.10.10.255 
HWADDR=00:22:19:B0:73:2A 
IPADDR=10.10.10.2
NETMASK=255.255.255.0 
NETWORK=10.10.10.0 
ONBOOT=yes 
TYPE=Ethernet 
USERCTL=no 
IPV6INIT=no 
PEERDNS=yes 

Storage

For the shared storage, the cluster LVM daemon has to be installed and running (it is part of the RHEL cluster suite). The physical volume, the volume group, and the logical volumes have to be created on the command line using the standard LVM tools (pvcreate, vgcreate, etc.), and the mount points /u01, /u02, and /u03 have to be created; a sketch of the commands follows the listing below. This is the LVM configuration for this example:
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               VGoxcluster
  PV Size               1.91 TB / not usable 2.00 MB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              499782
  Free PE               80966
  Allocated PE          418816
  PV UUID               xxxxxxxxxxxxxxxxxxxx

  --- Volume group ---
  VG Name               VGoxcluster
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  16
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.91 TB
  PE Size               4.00 MB
  Total PE              499782
  Alloc PE / Size       418816 / 1.60 TB
  Free  PE / Size       80966 / 316.27 GB
  VG UUID               yyyyyyyyyyyyyyyyyyyy

  --- Logical volume ---
  LV Name                /dev/VGoxcluster/LVmysql_u01
  VG Name                VGoxcluster
  LV UUID                Z4XFjg-ITOV-y19M-Rf7m-fJbI-0YXA-e8ueF8
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                100.00 GB
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VGoxcluster/LVimap_u03
  VG Name                VGoxcluster
  LV UUID                o0SvbN-buT9-4WvC-0Baw-rX3d-QwJo-t1jDL3
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                512.00 GB
  Current LE             131072
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/VGoxcluster/LVox_u02
  VG Name                VGoxcluster
  LV UUID                udx7Ax-L8ev-M5Id-W42Y-VbYs-8ag7-T78seI
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 TB
  Current LE             262144
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
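A minimal sketch of the commands that would produce a layout like the one above (the device name, sizes, number of GFS2 journals, and the lock table name are assumptions taken from this example):
pvcreate /dev/sdc
vgcreate -c y VGoxcluster /dev/sdc                  # -c y marks the volume group as clustered
lvcreate -L 100G -n LVmysql_u01 VGoxcluster
lvcreate -L 1T   -n LVox_u02    VGoxcluster
lvcreate -L 512G -n LVimap_u03  VGoxcluster
mkfs.ext3 /dev/VGoxcluster/LVmysql_u01
mkfs.ext3 /dev/VGoxcluster/LVimap_u03
mkfs.gfs2 -p lock_dlm -t oxcluster2:ox_u02 -j 2 /dev/VGoxcluster/LVox_u02
mkdir -p /u01 /u02 /u03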

Initialize Conga

Configure luci admin password and restart the service:
  • luci_admin init, and specify the admin password
  • service luci restart
Open a browser and navigate to the luci address/port:

Populate the Cluster

Now the cluster, its nodes, storage, resources, and services can be defined in the luci web frontend.
The following services must be defined:

MySQL

  • IP 192.168.109.112
  • IP 192.168.109.118
  • Script /etc/init.d/mysql
  • Filesystem /dev/mapper/VGoxcluster-LVmysql_u01; EXT3; mount on /u01
  • Data directory: /var/lib/mysql is installed on /u01
  • The original data is (re)moved on the systems to the shared storage
  • A soft link points from the default data location to the shared storage (see the sketch after this list)
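A minimal sketch of the relocation for the MySQL data directory, assuming the paths from this example (the same pattern applies to the mail and Ox data directories):
service mysql stop
mv /var/lib/mysql /u01/mysql          # move the data onto the shared EXT3 volume
ln -s /u01/mysql /var/lib/mysql       # soft link from the default location back to it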

IMAP/S, Sieve, SMTP

  • IP 192.168.109.113
  • Script /etc/init.d/cyrus-imapd
  • Script /etc/init.d/postfix
  • Filesystem /dev/mapper/VGoxcluster-LVimap_u03; EXT3; mount on /u03
  • Data directories: /var/spool/postfix, /etc/postfix, /var/spool/imap, /var/lib/imap are installed on /u03
  • The original data is (re)moved on the systems to the shared storage
  • Soft links point from the default data location to the shared storage

Ox1, Ox2

The two Ox services must not end up running on the same node in case of a node failure. Therefore a failover domain must be defined with just a single node as domain member; that service should then only run on node “Ox1”.
  • IP 192.168.109.114, 192.168.109.115
  • Script /etc/init.d/open-xchange-admin
  • Script /etc/init.d/open-xchange-groupware
  • Filesystem /dev/mapper/VGoxcluster-LVox_u02; GFS2; mount on /u02
  • Data directory: /filestore is installed on /u02
  • The original data is (re)moved on the systems to the shared storage
  • Soft links point from the default data location to the shared storage
  • Failover domains must be specified for each of these services
In the following pictures the configuration of a cluster is shown.
Illustration8.png
Illustration 8: Creating a new cluster
Illustration9.png
Illustration 9: Defining node fence devices

Storage configuration: three volumes on the shared storage are available: an EXT3 volume for the MySQL database, another EXT3 volume for the mail services (smtp, imap) and a store for the Ox servers, the latter a GFS2 volume. The mount points for the storage must already exist.
Afterwards the storage can be configured from Conga:
  • Click on the storage tab and then on one of the two node IPs in the node list
  • Choose the volume group which contains the shared volumes
  • A graphical view of the volumes in this group is shown
Illustration10.png
Illustration 10: In the top (blue) column the available volumes of this volume group are visible as column slices; to modify a volume, click on its slice (moving the mouse over a slice reveals the name of the volume). When a volume is selected, the column view changes: the slice of the chosen volume is shown hatched.
Illustration11.png
Illustration 11: Defining a logical volume

Each ox service should only ever run on its own node. Two failover domains are configured, each with a single node as member, and each ox service is bound to one of the failover domains. If a node fails, the ox service running on that node will not be relocated to the other node.
Illustration12.png
Illustration 12: Configuring a failover domain

Defining services

Every resource is defined as a global resource; the services are therefore built from globally available resources. Each service is built up as an independent tree of resources. The priority setting of the RHEL cluster is used to group the resources within a service. The order of the resources follows the priority order (filesystems first, IPs second, followed by services, which are not used in this installation, and finally scripts).
Illustration13.png
Illustration 13: Example of a resource list
Illustration14.png
Illustration 14: Example of a Script Resource
Illustration15.png
Illustration 15: Example of a Filesystem Resource
Illustration16.png
Illustration 16: Defining a Service from Resources, 1)
Illustration17.png
Illustration 17: Defining a Service from Resources, 2)

Cluster Configuration File

The resulting configuration file /etc/cluster/cluster.conf:
<?xml version="1.0"?>
<cluster alias="oxcluster2" config_version="61" name="oxcluster2">
<fence_daemon clean_start="0" post_fail_delay="60" post_join_delay="600"/>
<clusternodes>
<clusternode name="10.10.10.1" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="ox1drac"/>
</method>
</fence>
<multicast addr="239.192.223.20" interface="eth1"/>
</clusternode>
<clusternode name="10.10.10.2" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="ox2drac"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman expected_votes="1" two_node="1">
<multicast addr="239.192.223.20"/>
</cman>
<fencedevices>
<fencedevice agent="fence_drac" ipaddr="192.168.105.234" login="root" name="ox1drac" passwd="pass"/>
<fencedevice agent="fence_drac" ipaddr="192.168.105.235" login="root" name="ox2drac" passwd="pass"/>
</fencedevices>
<rm>
<failoverdomains>
<failoverdomain name="ox1_failover" nofailback="0" ordered="0" restricted="1">
<failoverdomainnode name="10.10.10.1" priority="1"/>
</failoverdomain>
<failoverdomain name="ox2_failover" nofailback="0" ordered="0" restricted="1">
<failoverdomainnode name="10.10.10.2" priority="1"/>
</failoverdomain>
</failoverdomains>
<resources>
<ip address="192.168.109.112" monitor_link="1"/>
<ip address="192.168.109.113" monitor_link="1"/>
<ip address="192.168.109.114" monitor_link="1"/>
<ip address="192.168.109.115" monitor_link="1"/>
<ip address="192.168.109.118" monitor_link="1"/>
<fs device="/dev/mapper/VGoxcluster-LVimap_u03" force_fsck="1" force_unmount="0" fsid="8322" fstype="ext3" mountpoint="/u03" name="imap_filesystem" self_fence="0"/>
<fs device="/dev/mapper/VGoxcluster-LVmysql_u01" force_fsck="1" force_unmount="0" fsid="22506" fstype="ext3" mountpoint="/u01" name="mysql_filesystem" self_fence="0"/>
<script file="/etc/init.d/postfix" name="postfix_script"/>
<script file="/etc/init.d/cyrus-imapd" name="imap_script"/>
<script file="/etc/init.d/open-xchange-admin" name="ox_admin"/>
<script file="/etc/init.d/open-xchange-groupware" name="ox_groupware"/>
<script file="/etc/init.d/mysql" name="mysql_server"/>
<script file="/etc/init.d/mysql-monitor-agent" name="mysql_monitor"/>
<clusterfs device="/dev/mapper/VGoxcluster-LVox_u02" force_unmount="0" fsid="45655" fstype="gfs2" mountpoint="/u02" name="ox_filesystem" self_fence="0"/>
</resources>
<service autostart="1" exclusive="0" name="imap_smtp" recovery="restart">
<fs ref="imap_filesystem"/>
<ip ref="192.168.109.113"/>
<script ref="imap_script"/>
<script ref="postfix_script"/>
</service>
<service autostart="1" domain="ox1_failover" exclusive="0" name="ox1" recovery="restart">
<clusterfs fstype="gfs" ref="ox_filesystem"/>
<ip ref="192.168.109.114"/>
<script ref="ox_admin"/>
<script ref="ox_groupware"/>
</service>
<service autostart="1" domain="ox2_failover" exclusive="0" name="ox2" recovery="restart">
<clusterfs fstype="gfs" ref="ox_filesystem"/>
<ip ref="192.168.109.115"/>
<script ref="ox_admin"/>
<script ref="ox_groupware"/>
</service>
<service autostart="1" exclusive="0" name="mysql" recovery="restart">
<fs ref="mysql_filesystem"/>
<ip ref="192.168.109.112"/>
<ip ref="192.168.109.118"/>
<script ref="mysql_server"/>
</service>
</rm>
</cluster>

Red Hat 6 RHEL Installation


1. Select Install or upgrade an existing system option on Grub Menu




2. Choose a language


3. Choose a keyboard type


4. Choose the installation media



5. Skip DVD media test (or select media test, if you want to test installation media before installation)


6. The Red Hat 6 graphical installer starts; select Next


7. Accept the Pre-Release Installation


8. Select storage devices


9. Enter the computer name


10. Select time zone


11. Enter a password for root user


12. Select type of installation

Read each option's description carefully, and select encryption if needed, along with the option to review and modify the partition layout.

13. Review partition layout

Modify it if needed. The default setup with ext4 and LVM looks good for a desktop machine.

14. Accept write changes to disc


15. Writing changes (creating partitions) to disc


16. Configure boot loader options

Select the device to install the boot loader on and check/create the boot loader's operating system list.

17. Select software to install and enable repositories

In this case we select Software Development Workstation, enable the Red Hat Enterprise Linux 6.0 Beta repository, and select Customize now.

18. Customize package selection

Select PHP and Web Server for installation.
Select MySQL and PostgreSQL Databases.
Select a set of development tools, such as the Eclipse IDE.

19. Checking dependencies for installation


20. Starting installation process


21. Installing packages


22. Installation is complete

Click reboot computer and remove installation media.

Red Hat 6 RHEL Finishing Installation

23. Selecting RHEL 6 from grub


24. Booting Red Hat 6


25. Red Hat 6 Welcome screen


26. Create normal user


27. Setup date and time and keep up-to-date with NTP


28. Log in to the Red Hat 6 GNOME Desktop


29. Red Hat (RHEL) 6 Gnome Desktop, empty and default look
