High Performance Computing at NJIT

Table: HPC Machine Specifications
Table last modified: 5-Apr-2019 14:54
Columns, left to right: Kong.njit.edu (expansions Kong-5 through Kong-15, then the Kong cluster total), Phi.njit.edu, Gorgon.njit.edu, Stheno.njit.edu (Stheno-1 through Stheno-5, then the Stheno cluster total), and the NJIT HPC grand totals.
Expansion[1]: Kong-5 | Kong-6 | Kong-7 | Kong-8 | Kong-9 | Kong-10 | Kong-11 | Kong-12 | Kong-13 | Kong-14 | Kong-15 | Cluster Total | Stheno-1 | Stheno-2 | Stheno-3 | Stheno-4 | Stheno-5 | Cluster Total | Grand Totals
Tartan designation[2]: Tartan-9 | Tartan-11 | Tartan-12 | Tartan-15 | Tartan-16 | Tartan-17 | Tartan-18 | Tartan-19 | Tartan-20 | Tartan-21 | Tartan-22 | Tartan-4 | Tartan-3 | Tartan-5 | Tartan-6 | Tartan-10 | Tartan-13 | Tartan-14
Manufacturer: IBM | IBM | Supermicro | Sun | Dell | Microway | Microway | Microway | Microway | Microway | Microway | VMware[5] | Microway | Microway | IBM | IBM | IBM
Model: iDataPlex dx360 M4 | iDataPlex dx360 M4 | SS2016 | X4600 | PowerEdge R630 | NumberSmasher | NumberSmasher-4X | NumberSmasher Dual Xeon Twin Server | NumberSmasher Dual | NumberSmasher Xeon | VMware[5] | NumberSmasher-4X | iDataPlex dx360 M4 | iDataPlex dx360 M4 | iDataPlex dx360 M4
Nodes: 2 | 2 | 270 | 1 | 3 | 5 | 12 | 2 | 4 | 1 | 1 | 303 (Kong total) | 1 | 1 | 8 | 8 | 13 | 2 | 1 | 32 (Stheno total) | 337 (grand total)
• PROCESSORS •                         
CPUs per node: 2 | 2 | 2 | 8 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 4 | 2 | 2 | 2 | 2 | 2
Cores per CPU: 6 | 10 | 4 | 4 | 10 | 10 | 10 | 10 | 10 | 10 | 16 | 8 | 8 | 6 | 6 | 6 | 6 | 10
Cores per node: 12 | 20 | 8 | 32 | 20 | 20 | 20 | 20 | 20 | 20 | 32 | 16 | 32 | 12 | 12 | 12 | 12 | 20
Total CPU cores: 24 | 40 | 2160 | 32 | 60 | 100 | 240 | 40 | 80 | 20 | 32 | 2828 (Kong total) | 16 | 32 | 96 | 96 | 156 | 24 | 20 | 392 (Stheno total) | 3268 (grand total)
Processor model[4]: Intel Xeon E5-2630 | Intel Xeon E5-2660 v2 | Intel Xeon L5520 | AMD Opteron 8384 | Intel Xeon E5-2660 v3 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon E5-2630 v4 | Intel Xeon 6142 | Intel Xeon E5-2680 | AMD Opteron 6134 | Intel Xeon E5649 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2630 | Intel Xeon E5-2660 v2
Processor µarch: Sandy Bridge | Ivy Bridge | Nehalem | K10 Shanghai | Haswell | Broadwell | Broadwell | Broadwell | Broadwell | Broadwell | Skylake | Sandy Bridge | K10 Maranello | Westmere | Sandy Bridge | Sandy Bridge | Sandy Bridge | Ivy Bridge
Processor launch: 2012 Q1 | 2013 Q3 | 2009 Q1 | 2008 Q4 | 2014 Q3 | 2016 Q1 | 2016 Q1 | 2016 Q1 | 2016 Q1 | 2016 Q1 | 2017 Q3 | 2012 Q1 | 2010 Q1 | 2011 Q1 | 2012 Q1 | 2012 Q1 | 2012 Q1 | 2013 Q3
Processor speed, GHz: 2.3 | 2.2 | 2.27 | 2.69 | 2.6 | 2.2 | 2.2 | 2.2 | 2.2 | 2.2 | 2.6 | 2.7 | 2.3 | 2.53 | 2.3 | 2.3 | 2.3 | 2.2
• MEMORY •                         
RAM per node, GB: 128 | 128 | 46 | 128 | 128 | 256 | 256 | 256 | 256 | 256 | 192 | 64 | 64 | 96 | 128 | 128 | 128 | 128
RAM per CPU, GB: 64 | 64 | 23 | 16 | 64 | 128 | 128 | 128 | 128 | 128 | 96 | 32 | 16 | 48 | 64 | 64 | 64 | 64
RAM per core, GB: 10.67 | 6.4 | 5.75 | 4 | 6.4 | 12.8 | 12.8 | 12.8 | 12.8 | 12.8 | 6 | 4 | 2 | 8 | 10.67 | 10.67 | 10.67 | 6.4
Total RAM, GB: 256 | 256 | 12420 | 128 | 384 | 1280 | 3072 | 512 | 1024 | 256 | 192 | 19780 (Kong total) | 64 | 64 | 768 | 1024 | 1664 | 256 | 128 | 3840 (Stheno total) | 23748 (grand total)
• CO-PROCESSORS •                         
GPU Model: Nvidia K20X | Nvidia Tesla P100 16GB “Pascal” | Nvidia Tesla P100 16GB “Pascal” | Nvidia Tesla P100 16GB “Pascal” | NVIDIA GeForce Titan Xp "Pascal" | Nvidia K20 | Nvidia K20m
GPUs: 4 | 10 | 4 | 4 | 4 | 26 (Kong total) | 4 | 2 | 6 (Stheno total) | 32 (grand total)
Cores per GPU: 2688 | 3584 | 3584 | 3584 | 3840 | 2496 | 2668
Total GPU cores: 10752 | 35840 | 14336 | 14336 | 15360 | 90624 (Kong total) | 9984 | 5336 | 15320 (Stheno total) | 105944 (grand total)
RAM per GPU, GB: 6 | 16 | 16 | 16 | 12 | 5 | 6
Total GPU RAM, GB: 24 | 160 | 64 | 64 | 48 | 360 (Kong total) | 20 | 12 | 32 (Stheno total) | 392 (grand total)
• STORAGE •                         
Local disk per node, GB[6]: 500 | 500 | 1000 | 146 | 1024 | 1024 | 1024 | 1024 | 1024 | 1024 | 27370 | 117 | 117 | 500 | 500 | 500
Total local disk, GB: 1000 | 1000 | 270000 | 146 | 3072 | 5120 | 12288 | 2048 | 4096 | 1024 | 27370 | 327164 (Kong total) | 936 | 936 | 6500 | 1000 | 500 | 9872 (Stheno total) | 337036 (grand total)
Shared scratch[7]: 3374 | 3374 | /nscratch, 151GB | /scratch, 938GB | /gscratch, 361GB
NFS /home/, GB: 8261 | 8261 | 2728
Node interconnect: 10GbE | 10GbE | GigE | GigE | 10GbE | 10GbE | InfiniBand FDR | 10GbE | 10GbE | 10GbE | 10GbE | InfiniBand QDR | InfiniBand FDR | InfiniBand FDR | InfiniBand FDR
• SOFTWARE •                         
Scheduler: SunGridEngine 6.2
Cluster mgmt: Warewulf
Operating System: SL 5.5 | SL 5.5 | SL 5.5
Kernel Release 398504313711725   
• RATINGS •                         
Max GFLOPS [9]: 207 | 330 | 18387 | 322.8 | 585 | 825 | 1980 | 330 | 660 | 165 | 312 | 24103.8 (Kong total) | 162 | 276 | 910.8 | 828 | 1345.5 | 207 | 165 | 3456.3 (Stheno total) | 27998.1 (grand total)
CPU Mark, per CPU [11]: 19106 | 13659 | 4357 | 7051 | 6814 | 6814 | 8118 | 19106 | 19106 | 19106 | 13659
CPU Mark, per node: 38212 | 27318 | 8714 | 56408 | 22733 | 18807 | 18807 | 18807 | 18807 | 18807 | 28848 | 13628 | 27256 | 16236 | 38212 | 38212 | 38212 | 27318
CPU Mark, per node totaled: 76424 | 54636 | 2352780 | 56408 | 68199 | 94035 | 225684 | 37614 | 75228 | 18807 | 28848 | 3088663 (Kong total) | 13628 | 27256 | 129888 | 305696 | 496756 | 76424 | 27318 | 1036082 (Stheno total) | 4165629 (grand total)
Max GPU GFLOPS[10]: 3950 | 3520 | 3950
Total GPU GFLOPS: 15800 | 14080 | 7900
• POWER •                         
Watts per node104010403001975150016001620160010001600220010001000104010401040   
Total Watts: 2080 | 2080 | 81000 | 1975 | 4500 | 8000 | 19440 | 3200 | 4000 | 1600 | 2200 | 130075 (Kong total) | 8000 | 8000 | 13520 | 2080 | 8320 | 39920 (Stheno total) | 169995 (grand total)
MFLOPS per Watt                         
• ETC •                         
Access model: Reserved[12] | Public | Public | Public | Reserved[12] | Partly[13] | Reserved[14] | Reserved[13] | Reserved[16] | Reserved[17] | Reserved[19] | Public | Reserved[1] | Reserved[1]
Head node AFS client: Yes | Yes | Yes
Compute nodes AFS client: Yes | Yes | Yes
In-service date: Aug 2013 | Oct 2013 | Mar 2015 | Aug 2015 | Nov 2016 | Aug 2017 | Sep 2017 | Sep 2017 | May 2018 | Aug 2018 | Apr 2019 | Oct 2010 / Sep 2017[15] | Aug 2010 | Nov 2011 | Sep 2012 | Aug 2013 | May 2015 | Jun 2015
Node numbers: 147-150 | 151, 152 | 100-111, 200-401, 500-599 [18] | 153 | 402-404 | 412-416 | 417-428 | 429-430 | 431-434 | 435 | 436 | “Phi” | “Gorgon” | 0-7 | 8-15 | 16-27 | 30-31 | 32

Notes (last modified: 5-Apr-2019 14:54)
[1]  Access to Stheno and Gorgon is restricted to Department of Mathematics use.
[2]  See https://ist.njit.edu/tartan-high-performance-computing-initiative
[3]  A small number of Kong nodes are reserved by specific faculty.
[4]  All active systems are 64-bit.
[5]  Phi is a virtual machine running on VMware, provisioned as shown here; the specifications listed are those of the VM, not of the underlying hardware.
[6]  A small portion of each compute node's local disk is used for AFS cache and swap; the remainder is available to users as /scratch.
[7]  Shared scratch is writable by all nodes via NFS (/nscratch), or mounted locally on single-node systems (Phi, Gorgon).
[8]  Core counts do not include hyperthreading
[9]  Most GFLOPS figures are estimated as cores * clock * (FLOPs/cycle); 3.75 FLOPs/cycle is conservatively assumed instead of the typical 4.0. See the worked sketch after these notes.
[10]  Peak single-precision floating-point performance, per the manufacturer's specifications.
[11]  PassMark CPU Mark from http://cpubenchmark.net/ or https://www.cpubenchmark.net/multi_cpu.html; see the worked example after these notes.
[12]  Access to Kong-5 and Kong-9 is reserved for Dr. C.Dias and designees.
[13]  Access to Kong-10 is reserved for Data Sciences faculty and students; contact ARCS@NJIT.EDU for additional information.
[14]  Access to Kong-11 is reserved for Dr. G.Gor and designees.
[15]  Phi was upgraded; it originally had 1 CPU with 4 cores and 32 GB RAM.
[16]  Access to Kong-13 is reserved for Dr. E. Nowadnick and designees.
[17]  Access to Kong-14 is reserved for Dr. D. Datta and designees.
[18]  At the in-service date there were 314 nodes averaging 64 GB RAM each, but failed nodes are generally not repaired; the counts shown in “Nodes” and “RAM per node” above are as of the “Table last modified” date.
[19]  Access to Kong-15 is reserved for Dr. H. Jin and designees.
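
Worked sketch for note [9] (a minimal illustration in Python; the function name is mine, not something provided on this page): the estimate multiplies total cores by clock speed in GHz and by the assumed 3.75 FLOPs per cycle. Applied to the 270-node expansion (2160 cores at 2.27 GHz), it reproduces the 18387 GFLOPS shown in the Max GFLOPS row.

    # GFLOPS estimate as described in note [9]:
    #   GFLOPS ~= total_cores * clock_GHz * FLOPs_per_cycle, with 3.75 assumed instead of 4.0
    def estimated_gflops(total_cores, clock_ghz, flops_per_cycle=3.75):
        return total_cores * clock_ghz * flops_per_cycle

    # Example: 2160 cores at 2.27 GHz -> 18387.0 GFLOPS, matching the table
    print(estimated_gflops(2160, 2.27))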
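
Worked example for note [11] (an assumption about how the three CPU Mark rows relate; the relationship is not stated on this page): the per-node figure appears to be the multi-CPU mark for the node's socket count, and the "per node totaled" row multiplies the per-node figure by the node count and sums it per cluster. For instance, a dual-socket node listed at 19106 per CPU is shown as 38212 per node, and a two-node expansion of such nodes is totaled as 2 x 38212 = 76424.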