<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" [
<!ENTITY base CDATA "../..">
<!ENTITY date "$FreeBSD: www/en/projects/netperf/cluster.sgml,v 1.10 2005/11/28 19:59:42 rwatson Exp $">
<!ENTITY title "FreeBSD Network Performance Project (netperf)">
<!ENTITY email 'mux'>
<!ENTITY % navincludes SYSTEM "../../includes.navdevelopers.sgml"> %navincludes;
<!ENTITY % includes SYSTEM "../../includes.sgml"> %includes;

<!ENTITY % developers SYSTEM "../../developers.sgml"> %developers;
]>

<html>
&header;

<h2>Contents</h2>

<ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#donors">Donors</a></li>
<li><a href="#admins">Netperf Cluster Admins</a></li>
<li><a href="#resources">Netperf Cluster Resources</a></li>
<li><a href="#procedures">Netperf Cluster Procedures</a></li>
<li><a href="#notes">Current Configuration Notes and News</a></li>
<li><a href="#reservations">Reservations</a></li>
</ul>

<a name="introduction"></a>
|
|
<h2>Introduction</h2>
|
|
|
|
<p>The netperf cluster provides a multi-node, SMP-capable, network
|
|
functionality and performance test capability for the <a
|
|
href="../../">FreeBSD Project</a>, supporting a variety of on-going
|
|
sub-projects including the <a href="./index.html">netperf project</a>,
|
|
<a href="../dingo/index.html">Dingo project</a>, and on-going work on
|
|
high performance threading. The cluster is available on a check out
|
|
basis for developers, who must request accounts be created by
|
|
contacting one of the <a href="#admins">netperf cluster admins</a>.</p>
|
|
|
|
<a name="donors"></a>
|
|
<h2>Donors</h2>
|
|
|
|
<p>The netperf cluster was made possible through the generous donation
|
|
of a number of organizations, including:</p>
|
|
|
|
<ul>
<li><p><a href="http://www.sentex.ca/">Sentex Data Communications</a>,
  who not only host the complete cluster and provide the front-end build
  system and the management infrastructure (remote power, serial
  console, network switch, etc.), but also appear to be endlessly
  willing to help configure, reconfigure, and troubleshoot at almost
  any time of day or night.</p></li>

<li><p><a href="http://www.freebsdsystems.com/">FreeBSD Systems</a>,
  who, through a generous matching grant with the FreeBSD Foundation,
  provided the majority of the testing hardware used in the cluster,
  including three dual-Xeon test systems.</p></li>

<li><p>The <a href="http://www.FreeBSDFoundation.org/">FreeBSD
  Foundation</a>, which provided a matching grant for the purchase of
  testing hardware, and which also takes ownership of hardware, offers
  tax receipts to donors in its role as a non-profit, and participates
  in cluster planning.</p></li>

<li><p><a href="http://www.ironport.com">IronPort Systems</a>, who have
  generously donated additional test hardware for use in the netperf
  cluster.</p></li>
</ul>

<p>Donations to support the netperf cluster have an immediate and
  substantial impact on the success of a number of ongoing performance
  projects, providing a large number of developers with access to
  high-end hardware. If you or your company are interested in helping to
  support continued development of the netperf cluster as a resource for
  FreeBSD development, please contact the <a href="#admins">netperf
  cluster admins</a>.</p>

<a name="admins"></a>
|
|
<h2>Netperf Cluster Admins</h2>
|
|
|
|
<p>The FreeBSD netperf cluster is managed by a small team of
|
|
developer/administrators to support SMP development and performance
|
|
testing on high-end hardware. If you have any questions, including
|
|
questions about access to the cluster as a developer, or about possible
|
|
future donations of testing hardware, please feel free to contact the
|
|
following:</p>
|
|
|
|
<ul>
|
|
<li><p>&a.rwatson;</p></li>
|
|
<li><p>&a.bmilekic;</p></li>
|
|
</ul>
|
|
|
|
<a name="resources"></a>
|
|
<h2>Netperf Cluster Resources</h2>
|
|
|
|
<p>The Netperf cluster consists of several systems interconnected using a
|
|
management network, as well as individual back-to-back gigabit ethernet
|
|
links for a test network. The following systems are available as
|
|
testing resources on a check-out basis:</p>
|
|
|
|
<ul>
<li><p><b>zoo.FreeBSD.org</b> is the front-end build and management
  system. All netperf cluster users are provided with accounts on this
  box, typically using the same account name and SSH keys as are used to
  access the FreeBSD.org central cluster. Zoo's second network interface
  is a gigabit link to the internal management network. Zoo provides
  DHCP, tftp, and NFS services to boxes in the cluster, and has serial
  access to the remote power system and serial console access to the
  test boxes. Test kernels and software will typically be built and
  configured on zoo, then exported to the cluster via NFS. Zoo exports
  its /zoo file system to the cluster, and each cluster user has a
  directory, /zoo/username, in which to place any files to be exported.
  Each machine has a /zoo/hostname directory, which contains the root of
  an NFS root file system, as well as the tree from which tftp exports
  pxeboot loaders (see the sketch after this list). By substituting
  kernels and configuration files in these trees, most aspects of the
  test systems may be directly managed. This system was donated by
  Sentex Communications.</p></li>

<li><p><b>elephant</b> is a dual-PIII 800MHz system with an ATA disk
  subsystem. Currently, the ATA disk holds two partitions, one with a
  FreeBSD 4.x and one with a FreeBSD 5.x user space configuration.
  This system was donated by Robert Watson.</p></li>

<li><p><b>orangutan</b> is a dual-Xeon 2GHz system equipped with an
  Adaptec SCSI RAID array. Currently, the RAID array is configured to
  expose a single volume holding FreeBSD 6.x. This system was donated
  by IronPort Systems.</p></li>

<li><p><b>tiger-1</b>, <b>tiger-2</b>, and <b>tiger-3</b> are a set of
  interconnected, matching dual-Xeon 3GHz systems with ATA disk
  subsystems. Each has four if_em network interfaces, and these are
  interconnected so that various topologies can be created. Two ATA
  disks are connected, one with a FreeBSD 4.x and one with a FreeBSD
  5.x user space configuration. These systems were donated by
  FreeBSD Systems and the FreeBSD Foundation.</p></li>

<li><p><b>cheetah</b> is a dual-processor Opteron 270 system with two
  2GHz CPUs, each with two cores, on a Tyan K8S Pro (S2882) motherboard;
  the machine therefore identifies itself as a quad-processor machine in
  dmesg. The system has a SATA disk, 2GB of RAM (1GB for each
  processor), and five Ethernet ports: fxp0 is the management port, and
  em0, em1, bge0, and bge1 are gigE interfaces which will eventually
  connect cheetah to elephant and orangutan. This system was donated by
  George Neville-Neil.</p></li>

<li><p><b>hippo</b> is a quad-processor Pentium III 500MHz system
  with a 50GB RAID array, donated by Sentex Communications.</p></li>

<li><p><b>apc2</b>, <b>apc3</b>, and <b>apc4</b> are the remote power
  consoles for the test network. These systems were donated by
  Sentex Communications.</p></li>
</ul>
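
<p>For orientation, the layout of the /zoo export described above looks
  roughly like the following sketch. The specific user, host, and file
  names here are illustrative assumptions only; /etc/motd on zoo is the
  authoritative reference:</p>

<pre>
/zoo/username/                 # per-user scratch area, NFS-exported to the cluster
/zoo/username/testkernel/      # e.g., a test kernel tree you built on zoo
/zoo/tiger-1/                  # NFS root file system for the host tiger-1
/zoo/tiger-1/boot/pxeboot      # pxeboot loader, exported to tiger-1 via tftp
/zoo/tiger-1/boot/loader.conf  # per-host loader configuration
</pre>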
<p>The current serial port and network configuration of test systems, as
  well as password information, can be found in /etc/motd on zoo. We
  are currently interested in adding amd64 and em64t hardware to the
  cluster.</p>

<a name="procedures"></a>
|
|
<h2>Netperf Cluster Procedures</h2>
|
|
|
|
<p>As the netperf cluster is a centrally managed and shared resource,
|
|
understanding and consistent following of its procedures is important.
|
|
In particular, following of the procedures makes it easier for
|
|
developers to have reasonable expectations about the configuration of
|
|
systems in the cluster, as well as to avoid treading on each others
|
|
toes.</p>
|
|
|
|
<ul>
<li><p><b>Reserving a system</b> is currently done on an ad hoc basis.
  A wiki for system reservations will be introduced shortly. In
  general, it is recommended that reservations of systems be for
  periods of 8 hours or shorter, in order to facilitate shared use,
  although longer tests or experiments are certainly possible.</p></li>

<li><p><b>Power cycling a system</b> is currently performed by
  connecting to apc2, the remote power system, using a serial port on
  zoo. Because there is a single serial connection to the power
  system, developers are asked to minimize the amount of time spent
  connected to it, so that parallel use of the system can occur more
  readily. In particular, please do not leave a cu/tip session to the
  power system open for extended periods of time.</p></li>

<li><p><b>Booting a specific configuration</b> is currently performed
  by interrupting the boot of a system using the serial console, or by
  modifying the /zoo/hostname/boot/loader.conf file to point at a new
  kernel (see the example after this list). Please do not replace the
  kernel in the NFS boot tree for a host; instead, install kernels in
  your /zoo/username tree, and explicitly configure hosts to use that
  kernel. Please restore the defaults for a system, including removing
  any non-standard entries from loader.conf, when you are done with a
  test or experiment.</p></li>

<li><p><b>Reconfiguring on-disk installs</b> should be done only in
  coordination with the cluster admins. If you change configuration
  settings on hosts during an experiment, please restore them. If you
  install packages on the file systems of test systems, please remove
  them when done. A useful technique is to store the packages required
  for an experiment on NFS, and to add the packages only for the
  duration of a development session. While data can be left on systems
  between test runs, developers who are using data sets for common
  applications, such as MySQL, may wish to store that data in a
  personal directory, as other testing on the host could interfere with
  it. Developers are advised to check the configuration of systems
  before using them to make sure all is as expected.</p></li>

<li><p><b>Reconfiguring system BIOSes</b> should be avoided if at all
  possible. Please restore any settings as soon as a test or
  experiment is complete. <b>Please do not turn off BIOS serial console
  redirection, or modify any settings associated with console
  redirection!</b></p></li>

<li><p><b>Check software configurations before testing</b> --
  specifically, use uname to confirm that the right kernel is running,
  that it is not configured with WITNESS or INVARIANTS if those are not
  needed, and that /etc/malloc.conf is set to "aj" if no debugging is
  desired. Use 'ps' and 'pkg_info' to make sure that only the software
  you expect is running, and that the versions of the software you
  expect are installed (see the example after this list).</p></li>
</ul>
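
<p>As a concrete sketch of the kernel-booting procedure above, the
  commands below show the general shape of a session on zoo. The kernel
  configuration name, host name, and paths are illustrative assumptions,
  and the exact loader.conf syntax and mount layout may differ between
  FreeBSD versions; check /etc/motd on zoo before relying on any of
  them:</p>

<pre>
# Build a test kernel on zoo and install it into your own /zoo tree,
# leaving the host's stock kernel in its NFS boot tree untouched.
cd /usr/src
make buildkernel KERNCONF=TESTKERNEL
make installkernel KERNCONF=TESTKERNEL DESTDIR=/zoo/username/testkernel

# Point the target host at the test kernel; remove this entry from
# loader.conf again when the experiment is complete.
echo 'kernel="/zoo/username/testkernel/boot/kernel"' >> \
    /zoo/tiger-1/boot/loader.conf
</pre>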
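
<p>A minimal pre-test sanity check, per the last item above, might look
  like the following. The power-console device name on zoo is an
  assumption (again, see /etc/motd):</p>

<pre>
# Connect briefly to the remote power console from zoo to power cycle a
# test box, then disconnect promptly ("~." exits cu).
cu -l /dev/cuaa0

# On the test system itself, confirm the configuration before a run.
uname -a                # is the expected kernel and version running?
ls -l /etc/malloc.conf  # should link to "aj" for non-debugging runs
ps -ax                  # is only the expected software running?
pkg_info                # are only the expected packages installed?
# WITNESS/INVARIANTS: verify against the config the kernel was built with.
</pre>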
<a name="notes"></a>
|
|
<h2>Current Configuration Notes and News</h2>
|
|
|
|
<p>A few hopefully up-to-date configuration notes that may be relevant
|
|
to users of the netperf cluster:</p>
|
|
|
|
<ul>
<li><p><b>20050624</b> - <b>cheetah</b> is now online!</p></li>

<li><p><b>20050204</b> - <b>orangutan</b> is now configured to use
  PXEboot, thanks to help from Sentex.</p></li>

<li><p><b>20050203</b> - system upgrades to <b>tiger-1</b>,
  <b>tiger-2</b>, and <b>tiger-3</b> have been completed -- the latest
  versions of 4.x (ar0s1) and 6.x (ar0s2) are now installed.</p></li>

<li><p><b>20050203</b> - <b>zoo.FreeBSD.org</b> has been updated to
  the most recent version of 5-STABLE.</p></li>
</ul>

<a name="reservations"></a>
|
|
<h2>Reservations</h2>
|
|
|
|
<p>Currently, no automated reservation system is in place for the netperf
|
|
cluster, but we hope to introduce some system for reservation shortly.
|
|
In the meantime, the following standing reservations are in place for
|
|
the netperf cluster:</p>
|
|
|
|
<table class="tblbasic">
|
|
<tr><th>System</th><th>Developer(s)</th><th>Project</th></tr>
|
|
<tr><td>elephant</td><td>&a.gnn;</td><td>KAME IPv6/IPSEC locking</td></tr>
|
|
<tr><td>orangutan</td><td>&a.davidxu;</td><td>libthread development,
|
|
testing, and performance measurement</td></tr>
|
|
</table>
|
|
|
|
&footer;
</body>
</html>