iSCSI provides a cost-effective, unified method for accessing storage from various vendors. My examples are an old free-standing Compaq drive array and a rack-mounted Dell drive array, both of which attach to the head server via regular SCSI. Keep in mind that the local connection to the data doesn't matter; it could just as easily be an external SAS enclosure or SATA drives. The "SCSI" part of iSCSI is an over-the-network thing, not a local requirement.
Ideally Microsoft would provide this software as a purchased product or an add-on, but sadly it is only available via OEM channels, which usually means it's bundled in an iSCSI solution. Don't get me wrong, there are several very nice iSCSI solutions from all the big vendors, but if cost is important to you, then the cheaper you can get it, the better.
There are two methods that I'll cover: the first is a Windows solution using Nimbus's freely available MySAN iSCSI target software, and the other is a Linux solution using completely free software available from anywhere. For my purposes I'll be covering Ubuntu 8.04 LTS.
I will provide links to all the software that I mention in the links section below.
From the Nimbus website:
Nimbus Data Systems, Inc. develops Unified Storage systems and software that dramatically simplify storage management, lower operating costs, and improve IT availability. Nimbus Unified Storage is the premier storage infrastructure for server and desktop virtualization, rich content, cloud computing, storage consolidation, and high-performance computing. To date, over 15,000 companies in 28 countries have implemented Nimbus technology.
In 2006 Nimbus released free iSCSI target software for use in Microsoft Windows.
San Francisco, CA, August 14, 2006 – Nimbus Data Systems today announced MySAN™, the first and only free iSCSI target software for Microsoft Windows. With MySAN, anyone can create an IP SAN in seconds using their existing server and storage hardware. MySAN works by turning any Windows partition (such as a hard drive, internal RAID array, external storage system, or even Fiber Channel storage) into an iSCSI target. This storage can then be assigned to any computer on an Ethernet network using iSCSI, giving users a vendor-neutral IP SAN instantly.
The first thing you will need to do is perform a basic Windows installation on your computer. The server I am using is an EOL Dell PowerEdge 1750. The machine you perform this installation on must have at least two network cards. There is nothing fancy to worry about during the installation, but the MySAN software requires Windows Server 2003 SP1 and .NET 2.0.
I had no success getting this software to install on a Windows Server 2003 R2 machine with SP2 and .NET 2.0 installed. If you have access to software like AdminStudio, you could potentially modify the InstallShield installer to skip this check, as I'm pretty certain it doesn't matter.
The following list is a set of steps you can follow to successfully install the prerequisites for the MySAN software:
- Install Windows Server 2003 *
- Install Windows Installer 3.0 *
- Install .net 2.0 Redist
- Install Windows Server 2003 SP1 *
* Reboot required
Obtain the MySAN software from the vendor as well as the license key. The registration is free and provides access to a portal that provides links to both the software and key.
- Install Nimbus MySAN
The process in Linux is significantly less complicated. Any Linux distro can do this; only how you obtain the iSCSITarget software will differ based on your preference. For this I'm using Ubuntu 8.04 LTS Server Edition, so the steps will work perfectly well on any supported Ubuntu version.
The first thing you will need to do is configure a basic server install of Ubuntu; there are no special requirements for either hardware or software. The lone exception is that once the install is complete you will need to perform an update and then install the iSCSITarget software. Perform the following tasks after the initial installation is complete:
- sudo apt-get update
- sudo apt-get upgrade
- sudo apt-get install iscsitarget
Installation is complete at this point, and all that is left is configuring the target. You can use either a loopback file or actual media. The benefit of using a file is the ability to run a daily cron job that checks the utilization of the file and expands it as needed.
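That cron job could be sketched as a small shell function like the one below. This is only an illustration of the idea; the path, 80% threshold, and 10 GiB growth increment are my own example values, not something from the iscsitarget package:

```shell
# grow_lun FILE [THRESHOLD_PERCENT] [GROW_BYTES]
# Extends a sparse iSCSI backing file once its allocated blocks cross a
# percentage of its apparent size. Growing only moves the end-of-file
# marker (dd with count=0), so no data blocks are written.
grow_lun() {
    lunfile=$1
    threshold=${2:-80}            # grow once this % of the file is allocated
    grow_by=${3:-10737418240}     # add 10 GiB of apparent size by default
    apparent=$(stat -c %s "$lunfile") || return 1   # logical (apparent) size
    used=$(( $(stat -c %b "$lunfile") * 512 ))      # bytes actually allocated
    [ "$apparent" -gt 0 ] || return 0
    if [ $(( used * 100 / apparent )) -ge "$threshold" ]; then
        dd if=/dev/zero of="$lunfile" bs=1 count=0 \
            seek=$(( apparent + grow_by )) 2>/dev/null
    fi
}
```

A crontab entry calling this once a day against the backing file would keep the LUN ahead of actual usage.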
Configuring the Linux iSCSITarget
There is only one file that needs to be modified on the server, /etc/ietd.conf. This file contains the settings for the iSCSI target software, and there are a few things you define here. You will need to set the target name and the path for the disk you are sharing out.
You will want to decide whether to share out an entire block device or a file. To the client it doesn't matter; they see a drive with however much space you define. If you decide to share out an entire device like /dev/sdb, modify the /etc/ietd.conf file:
Lun 0 Path=/dev/sdb,Type=fileio
If you decide to share out a file, you will need to create the file first using dd, then export the path to the actual file:
dd if=/dev/zero of=templun3 count=0 obs=1 seek=200G
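Because count=0 tells dd to copy nothing and seek only moves the end-of-file marker, this creates a sparse file: 200G of apparent size with essentially no disk blocks allocated. You can see the difference with a smaller test file (the 1 GiB size here is just for a quick check):

```shell
# Create a 1 GiB sparse file the same way the 200G example does
dd if=/dev/zero of=testlun count=0 bs=1 seek=1G 2>/dev/null

ls -lh testlun   # shows the apparent size (about 1.0G)
du -h  testlun   # shows actual disk usage (nothing allocated yet)
```

Blocks are only allocated as the initiator actually writes data into the LUN.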
You can then edit your /etc/ietd.conf file:
Lun 3 Path=/path/to/file/templun3,Type=fileio
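For context, a complete minimal /etc/ietd.conf might look something like this. The iqn-style target name is my own example following the conventional iqn.yyyy-mm.reversed-domain:label format, not a value from the package, and the commented CHAP credentials are placeholders:

```
Target iqn.2008-06.com.example:storage.templun3
        Lun 3 Path=/path/to/file/templun3,Type=fileio
        # Optionally restrict access to clients presenting these
        # CHAP credentials (example values):
        # IncomingUser iscsiuser secretpassword
```

After editing the file, restart the target with sudo /etc/init.d/iscsitarget restart; on Ubuntu you may also need to set ISCSITARGET_ENABLE=true in /etc/default/iscsitarget before the daemon will start.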
MySAN Target Configuration
Once the software has been installed you will need to configure it. On the General tab you may need to click refresh to see your network card, otherwise select the network card you wish to use. Under the targets tab you will need to define which drives you want to make available, if no drives appear you may need to add them through Disk Management.
Create a partition using Disk Management, define a drive letter for your storage and format it. MySAN does not support mounting a drive into a folder. Once the drive has been formatted open the Nimbus MySAN application and click the Refresh button. Next you will need to define the target name for the disk, select the disk and click Add, a dialog appears asking for a target name. This will be the name that your clients will see when you configure the iSCSI initiator, I chose "iscsi.san" for my target name.
Once you have defined your network settings, drive settings and provided a name for the drive you wish to share out, the On button should light up on the General tab. Select On and click Ok, this will start the Nimbus MySAN service and make the disk available on the network.
iSCSI Initiator Configuration
I will cover the Windows iSCSI initiator; there may be differences between the various implementations, but the main steps are covered. You will need to provide the MySAN software with the name of the client's iSCSI initiator. In Windows this can be found in Control Panel, in the iSCSI Initiator applet.
* If you do not see the iSCSI Initiator you can download it from Microsoft for free and install it, no reboot is required.
Open the iSCSI Initiator on the client, the node name is displayed on the General tab. You may need to change the default name generated at install as it may not work with MySAN. I changed mine to iscsi.client, you will need to provide this name to the MySAN software on your server.
On the server under the Hosts tab of MySAN click Add and provide the name of your client. Click on the Targets tab, select the drive you wish to make available to this client and in the Host with Access dropdown select your client and click Ok.
Back on the client open the iSCSI Initiator, click the Discovery tab, enter the IP or DNS of your server and click Ok. On the Targets tab select the newly listed target and click Logon, the status will switch from inactive to Connected. You can optionally decide if you want the multi-pathing, and to automatically restore the connection on boot.
Links

Windows Server 2003 Trial (http://technet.microsoft.com/en-us/windowsserver/bb430831.aspx)
.NET Framework 2.0 Redist (http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=0856eacb-4362-4b0d-8edd-aab15c5e04f5)
Microsoft iSCSI Software Initiator (http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=12cb3c1a-15d6-4585-b385-befd1319f825)
MySAN Free Registration Site (http://www.nimbusdata.com/skyline/index.php)
Ubuntu 8.04 LTS (http://www.ubuntu.com/getubuntu/downloading?release=server-lts)
AdminStudio Professional (http://www.acresso.com/downloads/downloads_4886.htm)
iSCSI Defined (http://en.wikipedia.org/wiki/ISCSI)
iSCSI at Microsoft (http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/default.mspx)
Nimbus MySAN Press Release (http://www.nimbusdata.com/company/pr_2006_08_14.php)
Install Windows Server in a default configuration for both servers, install the latest service pack and all updates. The SharePoint server should also have the Application Server Role and SMTP.
Active Directory Configuration
Setting up SharePoint in a farm is slightly more complex than a stand-alone installation. For a simple farm like the one we’re setting up a handful of accounts need to be created in advance. These accounts provide the needed functionality for SharePoint as well as provide required security that most administrators want.
The following accounts should be created as regular domain users with complex passwords:
- Setup Account: This will be the SharePoint local admin and used during installation
- This account needs to have a login on the SQL instance
- Farm Account: This is the Database Access Account used to connect to SQL
- This account needs the dbcreator and securityadmin roles on the SQL instance hosting SharePoint
- Index Account: This is used by the indexing service on SharePoint
- Content Account: This account is used by the indexing service to search the content
SQL Server Setup
Install SQL Server on the machine that will become your database server. My preference is to create a named instance for each app that will be connecting to a database; otherwise use the default instance. Make sure you set the proper collation during SQL setup. Stop all services for your newly created SQL instance before the service pack install to avoid a reboot. Apply the most recent SQL Server service pack from the Microsoft Download site, then restart the services related to your SQL instance.
Log on to your SharePoint server with the Setup Account you created; you may need to add it to the local Administrators group first. You may also want to add the domain user accounts that will be your Farm Administrators to the local Administrators group if they are not Domain Admins.
Download the appropriate build of SharePoint for your preferred architecture. Run SharePoint.exe from your download location and choose the Advanced option. After setup is complete you may want to install any available updates using Microsoft Update.
Run the configuration wizard to finish the SharePoint configuration. The Database server will be the name of your SQL server, then a backslash, then the name of your SQL instance, if you created one. The Database name you can leave at the default or change to something more meaningful. The Database access account is the Farm Account you created earlier. This account must also have the dbcreator and securityadmin roles on the instance or the wizard will fail.
You can specify an alternate port number for the Central Administration website, I would recommend you do this otherwise you may forget the random one. For authentication you can leave the default, which is NTLM or you can choose Kerberos. If choosing Kerberos you will need to configure your SPN properly.
The advanced button on the last page of the wizard gives you the option of allowing SharePoint to create users in your domain. I’m not sure what your stance may be on this, but in production that may not be a good idea. Please consult with either your Domain Administrator or Security Administrator if you have questions.
Once everything is defined the installation should progress normally. If things are working properly the final configuration will take a while to complete. If there is a problem logs are stored in the web server extensions folder in Common Files on the drive where SharePoint was installed.
Central Administration Site Configuration
Some things will need to be configured after the setup and configuration wizard completes. You will need to add the user accounts of the SharePoint administrators to the Farm Administrators group; this can be done under the Operations tab. You will need to configure the Search service with the user accounts you defined for Indexing and Content, which can also be done under Operations. Finally, you will need to create your initial site; this is done under the Application Management tab. After you have the Administration site and the initial site created, you may want to define more friendly names for them using Alternate Access Mappings on the Operations tab.
In order for each site you create to be hosted on a separate content database, you will first need to limit the number of sites that can be created on the initial or portal site. This is done in Application Management, using the Content Databases tool. The value you want to change is the Maximum Number of Sites. This number needs to be larger than the Site Level Warning which can be set to zero.
Once you have defined these values, you simply add a new content database for each of your sub-sites. Each site is accessed through a special URL appended to your default URL. The default path is /sites/, and you can have as many of these as you want to help define what each site collection contains: departmental sites, research sites, organizational sites; the list can be as long as you need.
Roll Sophos to the computing labs
Re-IP the CX3-20
Migrate KUTC to SOE
The SharePoint server which hosts the School of Engineering's (SOE) intranet has exceeded the Microsoft recommended size for a single server hosting content. The recommendation is that if your data exceeds 5GB you should move to hosting that content on a SQL server.
Create a new server to host the front end for the SOE intranet and have the data hosted on an existing SQL server.
Several things will change as a result of addressing this problem. Currently each tab on SharePoint represents a departmental site and is logically separated from other sites. In order to host the data on SQL, these tabs will be separated into individual site collections. A site collection represents a database object on a SQL server.
The main intranet site will become in effect a “portal” to other sites hosted on the server. This change will provide greater flexibility for future growth. We have seen an increase in requests for SharePoint sites and this can now be supported. The intranet will be broken down into categories and each category will be represented by a tab on the main page.
- Research Projects
- Student Projects
- Educational Departments
- SOE Departments
These categories will become the top navigation bar on the main site and each tab will contain a list of site links below it. Each link is a separate site collection on the SQL server as well as a separate site on SharePoint. Access to these sites will be based on group membership, but everyone will see all site links on the main Engineering portal.
Additionally, this change will allow us to provision sites for specific security needs. For example, currently it is very difficult to allow three people the ability to change a document but only four people the ability to read it. With the new structure in place, a Document Workspace site can be created for a given project; the three people who need to change something become that site's Members, while the four people who need read access become that site's Visitors. There is no limit within the foreseeable future on the number of these sites, as we can expand the SQL server to store data on the SOE's Storage Area Network (SAN).
We rely on a lot of Microsoft tech to keep things clicking here at the School, and nothing is more important than our Active Directory Infrastructure. When I first started about the only thing you could get out of the Active Directory was a list of computers, users and groups.
Currently, we use the Active Directory as much as possible. One of the scripts that I posted a while back performs our inventory and stores pertinent information about each computer in its description property. One of the cool things that I've started doing, and I know I may be behind the curve here, is placing objects in the Active Directory.
Two things that are incredibly important to users are accessing their data and printing it out. So I have taken advantage of our location-based OU structure to make this much simpler. I have moved all the PrintQueue objects out of the print server and placed them in the office OU where each printer is located. In addition, I have started publishing all of our shares into the Active Directory, mostly at the building level since most drives are fairly common to everyone, though some belong to individual offices.
Then I modified my scripts to take advantage of this: at logon the script figures out what OU the computer is in, starts mapping any PrintQueue or SharedFolder objects there, and then progressively works up the tree until it reaches the root of the Active Directory. This has made things much simpler to manage, as I can literally just edit the object if a server changes and the script automatically maps to the new location or server!
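The walk-up-the-tree part is just distinguished-name string handling. Here is a minimal sketch of that one step in shell; the DN used in the usage comment is a made-up example, and a real logon script would query each container for PrintQueue/SharedFolder objects instead of printing it:

```shell
# walk_ous DN
# Given a computer's distinguished name, print its parent containers from
# the most specific OU up to the domain root -- the order in which the
# logon script described above searches for published objects.
walk_ous() {
    dn=${1#*,}                      # drop the leading CN=<computer> component
    while true; do
        echo "$dn"                  # a container to search for objects
        case $dn in
            OU=*,*) dn=${dn#*,} ;;  # step up one OU level
            *)      break ;;        # reached the domain root (DC=...)
        esac
    done
}

# Example (hypothetical DN):
# walk_ous "CN=PC01,OU=Office101,OU=Building1,DC=school,DC=edu"
```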
The scripts will be posted as soon as they have been commented.
In environments where more than four VLANs exist, you will have some difficulty getting all of them trunked into your VMs. The best approach seems to be trunking all the VLANs into a single interface in the VM and then building multiple virtual interfaces within the guest OS.
Only 4094 VLAN IDs are allowed, so to pass everything through, the VLAN ID on the VM's interface is set to the special value 4095, which delivers all tagged frames to the guest. Then you can create a virtual interface for each VLAN on your network.
The following information was taken from Red Hat support article 3681.
When connected to a properly configured network device, your Red Hat Enterprise Linux 3 system can communicate over a network using 802.1q Virtual Local Area Network (VLAN) tagged frames. The necessary kernel module, 8021q, is already available in the 2.4 kernel.
To use tagged traffic exclusively, create additional ifcfg-ethX.Y files, where X is the interface on which you will use the VLAN and Y is the VLAN ID. For example, on a system with one network card (eth0) that needs to talk to two different VLANs with VLAN IDs 10 and 11, you’ll need these files:
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth0.10
/etc/sysconfig/network-scripts/ifcfg-eth0.11
These files will configure your system to have two virtual Ethernet interfaces called eth0.10 and eth0.11 that use tagged frames for communication with VLANs 10 and 11. To create the configuration files for the virtual tagged interfaces, copy the contents of your original ifcfg-eth0 file to ifcfg-eth0.10 and ifcfg-eth0.11. Then comment out or remove everything in your ifcfg-eth0 file except for:

DEVICE=eth0
ONBOOT=yes
Next, edit the DEVICE= line in the ifcfg-eth0.10 and ifcfg-eth0.11 files so that they read eth0.10 and eth0.11 respectively. Add the line VLAN=yes to both files. Finish configuring these virtual adapters with the correct IP address and subnet mask for each VLAN, or with a BOOTPROTO=dhcp line if addresses are given out via DHCP. Don’t forget to include a default gateway in /etc/sysconfig/network. It’s important to remember that you can only have one default gateway.
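Since those edits are mechanical, they can be scripted. This is a sketch of my own, not part of the Red Hat article: it copies an existing ifcfg file, rewrites the DEVICE= line for the tagged interface, and appends VLAN=yes; on a live system you would point it at /etc/sysconfig/network-scripts:

```shell
# make_vlan_ifcfg DIR IFACE VLANID
# Generates DIR/ifcfg-IFACE.VLANID from DIR/ifcfg-IFACE with the DEVICE=
# line rewritten for the tagged interface and VLAN=yes appended.
make_vlan_ifcfg() {
    dir=$1; iface=$2; vid=$3
    src="$dir/ifcfg-$iface"
    dst="$dir/ifcfg-$iface.$vid"
    sed "s/^DEVICE=.*/DEVICE=$iface.$vid/" "$src" > "$dst"
    grep -q '^VLAN=yes' "$dst" || echo "VLAN=yes" >> "$dst"
}

# Example: generate configs for VLANs 10 and 11
# make_vlan_ifcfg /etc/sysconfig/network-scripts eth0 10
# make_vlan_ifcfg /etc/sysconfig/network-scripts eth0 11
```

You would still adjust BOOTPROTO or the IP settings per VLAN afterwards, exactly as described above.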
Issue the command:
# service network restart
to complete the process. The VLAN=yes entries cause the network startup scripts to automatically run the vconfig command to add the necessary VLAN entries in /proc/net/vlan for each VLAN tag.
Here are the completed files for a network set up to only transmit tagged frames and with both virtual adapters set to use DHCP:
/etc/sysconfig/network-scripts/ifcfg-eth0.10

DEVICE=eth0.10
BOOTPROTO=dhcp
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
TYPE=Ethernet
VLAN=yes

/etc/sysconfig/network-scripts/ifcfg-eth0.11

DEVICE=eth0.11
BOOTPROTO=dhcp
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
TYPE=Ethernet
VLAN=yes
If you accidentally created a virtual adapter with the wrong VLAN ID, you may need to use the vconfig command to remove it from the /proc filesystem. Just restarting the network service won’t do that for you. For example, if you accidentally created a virtual adapter called eth0.12, the following command will remove it from /proc/net/vlan:
# vconfig rem eth0.12
The School hosts its own Active Directory infrastructure, which depends on a DNS server that fully supports RFC 2136 (dynamic DNS updates). Our administrative model relies on the ability of our clients to receive their network address from a DHCP server that fully supports RFC 2131. In a production environment we require that the services we rely upon do not implement "draft" or "beta" code.
This is not the state we find ourselves in while using Central IT’s implementation of DHCP and DNS. Their appliance has two serious flaws:
- TTLs on all records do not decrement, resulting in stale forward and reverse lookups.
- The appliance relies on an expired DHCP failover draft to handle failover between servers, which results in the lease files never being updated.
In order to provide a reliable and stable network environment we have decided to provide the requisite services ourselves. We will continue to troubleshoot the various issues with Central IT, but must move forward before the issues get worse. We have also provided a path to migrate back once a stable environment can be provided to the School.
Several steps need to be accomplished before we can roll these services out.
- Build two servers to host DNS and DHCP and provide redundancy
DNS and DHCP services will be provided by two RedHat EL 5.1 virtual machines. We settled on Linux as the built-in services offered by Microsoft did not offer the flexibility we needed.
- Assign IPs to authorized MAC addresses
- Create scopes based on MAC addresses
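Assuming the two servers run ISC dhcpd (the stock DHCP server on Red Hat EL), the two tasks above map to host declarations inside a subnet scope. Everything here, including the names, addresses, and MAC, is placeholder illustration rather than our actual configuration:

```
subnet 10.0.10.0 netmask 255.255.255.0 {
    option routers 10.0.10.1;
    # Only hand out leases to MAC addresses registered below
    deny unknown-clients;

    host lab-pc-01 {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 10.0.10.50;
    }
}
```

One host block per authorized machine gives each MAC a fixed IP while unregistered clients get nothing.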
Our virtual servers have all the VLANs that comprise the School trunked into their switch. This trunk allows us to deliver an interface on each VLAN to the two virtual servers.