Add an initial project document describing the Netperf cluster, its
facilities, procedures for use, and some maintenance notes.
This commit is contained in:
parent 6aa6011766
commit d8fa2355ef
Notes: svn2git, 2020-12-08 03:00:23 +00:00
svn path=/www/; revision=23696

1 changed file with 217 additions and 0 deletions:
en/projects/netperf/cluster.sgml (new file, 217 lines)

@@ -0,0 +1,217 @@
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" [
<!ENTITY base CDATA "../..">
<!ENTITY date "$FreeBSD: www/en/projects/netperf/index.sgml,v 1.10 2004/12/04 12:18:00 ceri Exp $">
<!ENTITY title "FreeBSD Network Performance Project (netperf)">
<!ENTITY email 'mux'>
<!ENTITY % includes SYSTEM "../../includes.sgml"> %includes;
<!ENTITY % developers SYSTEM "../../developers.sgml"> %developers;
]>
<html>
&header;

<h2>Contents</h2>

<ul>
<li><a href="#introduction">Introduction</a></li>
<li><a href="#donors">Donors</a></li>
<li><a href="#admins">Netperf Cluster Admins</a></li>
<li><a href="#resources">Netperf Cluster Resources</a></li>
<li><a href="#procedures">Netperf Cluster Procedures</a></li>
<li><a href="#notes">Current Configuration Notes</a></li>
</ul>

<a name="introduction"></a>
<h2>Introduction</h2>
<p>The netperf cluster provides a multi-node, SMP-capable network
functionality and performance testing capability for the <a
href="../../">FreeBSD Project</a>, supporting a variety of on-going
sub-projects including the <a href="./index.html">netperf project</a>,
the <a href="../dingo/index.html">Dingo project</a>, and on-going work
on high-performance threading. The cluster is available on a
check-out basis to developers, who must request that accounts be
created by contacting one of the <a href="#admins">netperf cluster
admins</a>.</p>

<a name="donors"></a>
<h2>Donors</h2>

<p>The netperf cluster was made possible through the generous donations
of a number of organizations, including:</p>
<ul>
<li><p><a href="http://www.sentex.ca/">Sentex Data Communications</a>,
who not only host the complete cluster and provide the front-end build
system and the management infrastructure (remote power, serial
console, network switch, etc.), but also appear to be endlessly
willing to help configure, reconfigure, and troubleshoot at almost
any time of day or night.</p></li>

<li><p><a href="http://www.freebsdsystems.com/">FreeBSD Systems</a>,
who, through a generous matching grant with the FreeBSD Foundation,
provide the majority of the testing hardware used in the cluster,
including three dual-Xeon test systems.</p></li>

<li><p>The <a href="http://www.FreeBSDFoundation.org/">FreeBSD
Foundation</a>, which provided a matching grant for the purchase of
testing hardware, as well as taking ownership of the hardware,
offering tax receipts to donors in its role as a non-profit, and
participating in cluster planning.</p></li>

<li><p><a href="http://www.ironport.com">IronPort Systems</a>, who have
generously donated additional test hardware for use in the netperf
cluster.</p></li>
</ul>

<p>Donations to support the netperf cluster have an immediate and
substantial impact on the success of a number of on-going performance
projects, providing a large number of developers with access to
high-end hardware. If you or your company are interested in helping
to support continued development of the netperf cluster as a resource
for FreeBSD development, please contact the <a href="#admins">netperf
cluster admins</a>.</p>
<a name="admins"></a>
<h2>Netperf Cluster Admins</h2>

<p>The FreeBSD netperf cluster is managed by a small team of
developer/administrators to support SMP development and performance
testing on high-end hardware. If you have any questions, including
questions about access to the cluster as a developer, or about
possible future donations of testing hardware, please feel free to
contact the following:</p>

<ul>
<li><p>&a.rwatson;</p></li>
<li><p>&a.bmilekic;</p></li>
</ul>

<a name="resources"></a>
<h2>Netperf Cluster Resources</h2>

<p>The netperf cluster consists of several systems interconnected using
a management network, as well as individual back-to-back gigabit
ethernet links for a test network. The following systems are
available as testing resources on a check-out basis:</p>

<ul>
<li><p><b>zoo.FreeBSD.org</b> is the front-end build and management
system. All netperf cluster users are provided with accounts on this
box, typically using the same account name and SSH keys used to
access the FreeBSD.org central cluster. Connected to zoo's second
network interface is a gigabit link to the internal management
network. Zoo provides DHCP, tftp, and NFS services to the boxes in
the cluster, and has serial access to the remote power system and
serial console access to the test boxes. Test kernels and software
will typically be built and configured on zoo, then exported to the
cluster via NFS. Zoo exports its /zoo file system to the cluster,
and each cluster user has a directory, /zoo/username, in which to
place any files to export. Each machine has a /zoo/hostname
directory, which contains the root of an NFS root file system, as
well as the tree from which tftp exports the pxeboot loader. By
substituting kernels and configuration files in these trees, most
aspects of the test systems may be directly managed.</p></li>

<li><p><b>elephant</b> is a dual-PIII 800MHz system with an ATA disk
subsystem. Currently, the ATA disk holds two partitions, one with a
FreeBSD 4.x and one with a FreeBSD 5.x user space
configuration.</p></li>

<li><p><b>orangutan</b> is a dual-Xeon 2GHz system equipped with an
Adaptec SCSI RAID array. Currently, the RAID array is configured to
expose a single volume holding FreeBSD 6.x.</p></li>

<li><p><b>tiger-1</b>, <b>tiger-2</b>, and <b>tiger-3</b> are a set of
interconnected, matching dual-Xeon 3GHz systems with ATA disk
subsystems. Each has four if_em network interfaces, and these are
interconnected so that various topologies can be created. Two ATA
disks are connected, one with a FreeBSD 4.x and one with a FreeBSD
5.x user space configuration.</p></li>

<li><p><b>apc2</b> is the remote power console for the test
network.</p></li>
</ul>
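<p>The /zoo layout described above can be sketched as follows. This is
a local mock under /tmp for illustration only; on the real cluster the
per-user and per-host trees already exist and are exported over NFS,
and the names "username" and "tiger-1" are placeholders drawn from the
text.</p>

```shell
# Illustrative mock of the /zoo export layout described above.
# On the real cluster these trees already exist and are NFS-exported;
# "username" is a placeholder for a developer's account name.
ZOO=/tmp/zoo-layout-demo
mkdir -p "$ZOO/username"        # per-developer directory for exported files
mkdir -p "$ZOO/tiger-1/boot"    # per-host NFS root and pxeboot tree

# Substituting files here (kernels, loader.conf) controls what the
# corresponding test box boots.
: > "$ZOO/tiger-1/boot/loader.conf"

find "$ZOO" | sort
```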
<p>The current serial port and network configuration of the test
systems, as well as password information, can be found in /etc/motd on
zoo. We are currently interested in adding amd64 and em64t hardware
to the cluster.</p>

<a name="procedures"></a>
<h2>Netperf Cluster Procedures</h2>

<p>As the netperf cluster is a centrally managed and shared resource,
understanding and consistently following its procedures is important.
In particular, following the procedures makes it easier for developers
to have reasonable expectations about the configuration of systems in
the cluster, as well as to avoid treading on each other's toes.</p>
<ul>
<li><p><b>Reserving a system</b> is currently done on an ad hoc basis.
A wiki for system reservation will shortly be introduced. In
general, it is recommended that reservations of systems be for
periods of 8 hours or shorter, in order to facilitate shared use,
although longer tests or experiments are certainly possible.</p></li>

<li><p><b>Power cycling a system</b> is currently performed by
connecting to apc2, the remote power system, using a serial port on
zoo. Because there is a single serial connection to the power
system, it is requested that developers minimize the amount of time
spent connected to it, so that parallel use of the system can occur
more readily. In particular, please do not leave a cu/tip session
to the power system open for extended periods of time.</p></li>

<li><p><b>Booting a specific configuration</b> is currently performed
by interrupting the boot of a system using the serial console, or by
modifying the /zoo/hostname/boot/loader.conf file to point at a new
kernel. Please do not replace the kernel in the NFS boot tree for a
host; instead, install kernels in your /zoo/username tree, and
explicitly configure hosts to use that kernel. Please restore the
defaults for a system, including removing any non-standard entries
from loader.conf, when you are done with a test or
experiment.</p></li>

<li><p><b>Reconfiguring on-disk installs</b> should be done only in
coordination with the cluster admins. If you change configuration
settings for hosts during an experiment, please restore them. If
you install packages on the file systems of test boxes, please
remove them when done. A useful technique is to store the packages
required for experiments on NFS, and to add the packages for the
duration of a development session. While data can be left on
systems between test runs, developers who are using data sets for
common applications, such as MySQL, may wish to store that data in a
personal directory, as other testing on the host could interfere
with it. Developers are advised to check the configuration of
systems before using them, to make sure all is as expected.</p></li>

<li><p><b>Reconfiguring system BIOSes</b> should be avoided if at all
possible. Please restore any settings as soon as a test or
experiment is complete. <b>Please do not turn off BIOS serial console
redirection, or modify any settings associated with console
redirection!</b></p></li>
</ul>
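<p>The boot-configuration step above can be sketched as a short shell
session: stage the test kernel under your own /zoo directory, then
point the host's loader.conf at it, leaving the stock kernel in the
NFS boot tree untouched. This is a sketch against a scratch directory
rather than the live /zoo tree; "username" and "tiger-1" are
placeholders, and the <tt>bootfile</tt> loader variable is an assumed
way of naming the kernel to boot.</p>

```shell
# Sketch only: operates on a scratch directory, not the live /zoo tree.
# "username" and "tiger-1" are placeholders; "bootfile" is assumed to
# be the loader(8) variable naming the kernel to boot.
ZOO=/tmp/zoo-boot-demo
mkdir -p "$ZOO/username" "$ZOO/tiger-1/boot"

# 1. Stage the test kernel in your personal export directory
#    (an empty stand-in file here; normally the output of a kernel build).
: > "$ZOO/username/kernel.test"

# 2. Point the host's loader at it, leaving the stock kernel untouched.
echo 'bootfile="/zoo/username/kernel.test"' >> "$ZOO/tiger-1/boot/loader.conf"

# 3. Check the entry; remove it again when the experiment is done.
grep '^bootfile=' "$ZOO/tiger-1/boot/loader.conf"
```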
<a name="notes"></a>
<h2>Current Configuration Notes</h2>

<p>A few hopefully up-to-date configuration notes that may be relevant
to users of the netperf cluster:</p>

<ul>
<li><p><b>20050130</b> - <b>orangutan</b> is not currently configured
to PXE-boot or export its BIOS serial console. Sentex will be
performing maintenance on this machine on the morning of 20050131 to
configure PXE and serial console redirection.</p></li>

<li><p><b>20050129</b> - <b>tiger-1</b>, <b>tiger-2</b>, and
<b>tiger-3</b> will be upgraded this weekend, and so may be
unavailable for several hours during the upgrades. Please coordinate
any use of these machines over the weekend with
&a.bmilekic;.</p></li>
</ul>

&footer;
</body>
</html>