Some updates to the Netperf cluster page, including quite a lot of new
hardware, new donors, and the 10gbps test network.
This commit is contained in:
Robert Watson 2007-07-27 21:16:13 +00:00
parent 273c8c7e89
commit 7454b09973
Notes: svn2git 2020-12-08 03:00:23 +00:00
svn path=/www/; revision=30531

@@ -1,6 +1,6 @@
<!DOCTYPE HTML PUBLIC "-//FreeBSD//DTD HTML 4.01 Transitional-Based Extension//EN" [
<!ENTITY base CDATA "../..">
<!ENTITY date "$FreeBSD: www/en/projects/netperf/cluster.sgml,v 1.22 2007/02/02 07:41:09 joel Exp $">
<!ENTITY date "$FreeBSD: www/en/projects/netperf/cluster.sgml,v 1.23 2007/02/12 20:50:30 joel Exp $">
<!ENTITY title "FreeBSD Netperf Cluster">
<!ENTITY email 'mux'>
<!ENTITY % navinclude.developers "INCLUDE">
@@ -32,7 +32,9 @@
and on-going work on
high performance threading. The cluster is available on a check out
basis for developers, who must request accounts be created by
contacting one of the <a href="#admins">netperf cluster admins</a>.</p>
contacting one of the <a href="#admins">netperf cluster admins</a>.
The cluster includes both 1gbps and 10gbps test segments, with
network hardware from a number of vendors.</p>
<a name="donors"></a>
<h2>Donors</h2>
@@ -43,10 +45,10 @@
<ul>
<li><p><a href="http://www.sentex.ca/">Sentex Data Communications</a>,
who not only host the complete cluster, provide front-end build
system, and the management infrastructure (remote power, serial
console, network switch, etc), but also appear to be endlessly
willing to help configure, reconfigure, and troubleshoot at almost
any time of day or night.</p></li>
system, several test systems, and the management infrastructure
(remote power, serial console, network switch, etc), but also appear
to be endlessly willing to help configure, reconfigure, and
troubleshoot at almost any time of day or night.</p></li>
<li><p><a href="http://www.freebsdsystems.com/">FreeBSD Systems</a>,
who through a generous matching grant with the FreeBSD Foundation,
@@ -60,8 +62,31 @@
participating in cluster planning.</p></li>
<li><p><a href="http://www.ironport.com">IronPort Systems</a>, who have
generously donated additional test hardware for use in the netperf
cluster.</p></li>
donated a test server.</p></li>
<li><p><a href="http://www.ixsystems.com/">iXsystems</a>, who have
donated several test servers.</p></li>
<li><p><a href="http://www.google.com/">Google, Inc.</a>, who have
donated two test servers.</p></li>
<li><p><a href="http://www.cisco.com/">Cisco, Inc.</a>, who have
donated a 10gbps switch.</p></li>
<li><p><a href="http://www.chelsio.com/">Chelsio Communications</a>,
who have donated two 10gbps network cards.</p></li>
<li><p><a href="http://www.myricom.com/">Myricom, Inc.</a>, who have
donated two 10gbps network cards.</p></li>
<li><p><a href="http://www.intel.com/">Intel Corporation</a>, who
have donated two 10gbps network cards.</p></li>
<li><p>&a.gnn;, who has donated a quad-core AMD test
system.</p></li>
<li><p>&a.rwatson;, who has donated a dual-CPU PIII system and a
Portmaster terminal server.</p></li>
</ul>
<p>Donations to support the netperf cluster have an immediate and
@@ -79,13 +104,8 @@
developer/administrators to support SMP development and performance
testing on high-end hardware. If you have any questions, including
questions about access to the cluster as a developer, or about possible
future donations of testing hardware, please feel free to contact the
following:</p>
<ul>
<li><p>&a.rwatson;</p></li>
<li><p>&a.bmilekic;</p></li>
</ul>
future donations of testing hardware, please feel free to contact them
via netperf-admin at FreeBSD.org.</p>
<a name="resources"></a>
<h2>Netperf Cluster Resources</h2>
@@ -97,40 +117,21 @@
<ul>
<li><p><b>zoo.FreeBSD.org</b> is the front-end build and management
system. All netperf cluster users are provided with accounts on this
box, typically using the same account name and SSH keys as used to
access the FreeBSD.org central cluster. Connected to zoo's second
network interface is a gigabit link to the internal management
network. Zoo provides DHCP, tftp, and NFS services to boxes in the
cluster, as well as having serial access to the remote power system
and serial console access to the test boxes. Test kernels and
software will typically be built and configured on zoo, then exported
to the cluster via NFS. Zoo exports its /zoo file system to the
cluster, and cluster users will have a directory, /zoo/username, in
which they can place any files to export. Each machine has a
/zoo/hostname directory, which consists of the root of an NFS root
file system, as well as the tree from which tftp exports pxeboot
loaders. By substituting kernels and configuration files in these
trees, most aspects of the test systems may be directly managed.
This system was donated by Sentex Communications.</p></li>
system. This system was donated by Sentex Communications.</p></li>
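The removed paragraph above describes how zoo manages the test boxes: test kernels are built on zoo, placed under each machine's /zoo/hostname NFS-root tree, and picked up over PXE/tftp at boot. As a hedged sketch of that workflow (the account and host names here are illustrative examples, not documented values):

```shell
# Sketch of staging a test kernel on zoo, following the /zoo layout
# described above. Account and host names are example placeholders.
DEV=rwatson                          # example cluster account name
BOX=tiger-1                          # example test system
USER_DIR=/zoo/$DEV                   # per-user directory exported over NFS
NFSROOT=/zoo/$BOX                    # NFS root file system tree for the box
# A kernel built on zoo would be copied into the box's NFS-root tree, e.g.:
#   cp "$USER_DIR/kernel.test" "$NFSROOT/boot/kernel/kernel"
echo "$NFSROOT/boot/kernel/kernel"   # path the test box would boot from
```

Substituting files under the /zoo/hostname tree, as the paragraph notes, is how most aspects of the test systems are managed remotely.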
<li><p><b>elephant</b> is a dual-PIII 800MHz system with ATA disk
subsystem. Currently, the ATA disk holds two partitions, one with
FreeBSD 4.x, and one with FreeBSD 5.x user space configuration
on. This system was donated by Robert Watson.</p></li>
subsystem.</p></li>
<li><p><b>orangutan</b> is a dual-Xeon 2GHz system equipped with an
Adaptec SCSI RAID array. Currently, the RAID array is configured to
expose a single volume holding FreeBSD 6.x. This system was donated
by IronPort Systems.</p></li>
Adaptec SCSI RAID array. This system was donated by IronPort
Systems.</p></li>
<li><p><b>tiger-1</b>, <b>tiger-2</b>, and <b>tiger-3</b> are a set of
interconnected, matching dual-Xeon 3GHz systems with ATA disk
subsystems. Each has four if_em network interfaces, and these are
interconnected so that various topologies can be created. Two ATA
disks are connected, one with a FreeBSD 4.x and one with a FreeBSD
5.x user space configuration on. These systems were donated by
FreeBSD Systems and the FreeBSD Foundation.</p></li>
interconnected so that various topologies can be created.
These systems were donated by FreeBSD Systems and the FreeBSD
Foundation.</p></li>
<li><p><b>cheetah</b> is a dual core Opteron 270 system with two
2GHz CPUs each with two cores using a Tyan K8S Pro (S2882)
@@ -155,6 +156,15 @@
<li><p><b>apc2</b>, <b>apc3</b>, and <b>apc4</b> are the remote power
consoles for the test network. These systems were donated by
Sentex Communications.</p></li>
<li><p><b>leopard1</b>, <b>leopard2</b>, and <b>leopard3</b> are
dual-core Intel systems hooked up to the 10gbps test cluster,
and use Chelsio and Myricom 10gbps cards. These systems were
donated by iXsystems.</p></li>
<li><p><b>hydra1</b> and <b>hydra2</b> are 8-core Intel systems
hooked up to the 10gbps test cluster. These systems were donated
by Google and the FreeBSD Foundation.</p></li>
</ul>
<p>The current serial port and network configuration of test systems, as
@@ -183,6 +193,10 @@
to users of the netperf cluster:</p>
<ul>
<li><p><b>20070727</b> - The 10gbps testbed is now being configured,
thanks to donations from iXsystems, Chelsio, Myricom, Intel, Google,
Cisco, and the FreeBSD Foundation.</p></li>
<li><p><b>20061211</b> - The <a
href="http://wiki.freebsd.org/NetperfClusterReservations">Netperf
Cluster Reservations page</a> is now online on the wiki. Also, a